.Net Core – OpenFaas – MongoDB

After playing around with some OpenFaas functions I came across alexellis/mongodb-function. After reading through the README and having a play around with it, I thought I would have a go at porting it to C# using .Net Core. Not only did I do this as a personal challenge, but I also thought it could be useful to others who prefer to develop using C#/.Net Core.

This function creates a connection to a MongoDB instance and maintains it for the lifetime of the function. This means that the cold start (the time taken to create the connection) only occurs on the initial request, so all subsequent requests are relatively fast.

The csharp-kestrel-mongo project is the result. Once created, this OpenFaas function template provides you with a function handler through which you can interact with a MongoDB. The template lets you specify environment variables that define your MongoDB instance(s).
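To illustrate the idea behind the persistent connection, here is a minimal sketch only; the template wires this up for you, and the class name and wiring below are my own, using the environment variables described later in this post:

using System;
using MongoDB.Bson;
using MongoDB.Driver;

public static class MongoConnection
{
    // Created once per process and reused by every invocation,
    // so only the first request pays the connection cost.
    private static readonly Lazy<IMongoCollection<BsonDocument>> Collection =
        new Lazy<IMongoCollection<BsonDocument>>(() =>
        {
            var endpoint = Environment.GetEnvironmentVariable("mongo_endpoint");
            var databaseName = Environment.GetEnvironmentVariable("mongo_database_name");
            var collectionName = Environment.GetEnvironmentVariable("mongo_collection_name");

            var client = new MongoClient($"mongodb://{endpoint}");
            return client.GetDatabase(databaseName)
                         .GetCollection<BsonDocument>(collectionName);
        });

    public static IMongoCollection<BsonDocument> GetCollection() => Collection.Value;
}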

It follows a slightly different design architecture compared to the one described in the original mongodb-function.

Credit: Alex Ellis’ original template architecture (https://github.com/alexellis/mongodb-function), modified for this template.

If you are familiar with OpenFaas, what follows is a quick rundown of how to get up and running with this function using Play with Docker (a free online Docker playground).

If you are using Play with Docker, create a Docker instance and install OpenFaas following the Docker Swarm instructions in the OpenFaas docs; otherwise you can follow along with your own OpenFaas deployment. You will also need access to a MongoDB. If you are following along using Play with Docker, this can be done by running the following Docker commands in your instance:

docker volume create mongodata

docker run --name mongodb -d -p 27017:27017 -v mongodata:/data/db mongo mongod

Once that completes, the MongoDB function template can be pulled and a new function created:

faas-cli template pull https://github.com/Marcus-Smallman/csharp-kestrel-mongo.git

faas-cli new mongo-function --lang csharp-kestrel-mongo

Note: armhf is also supported, so you can deploy this function to your Raspberry Pi(s)! Simply change the specified language to csharp-kestrel-mongo-armhf.

This will pull the template and create a new function called ‘mongo-function’. A mongo-function.yml file will be created, which is required to deploy your function. This file allows environment variables to be set that your function can access. This function requires an environment variable called mongo_endpoint, which specifies where your MongoDB is accessible. There are also some optional extra environment variables that can be set.

An example of the modified mongo-function.yml file follows:

provider:
    name: faas
    gateway: http://127.0.0.1:8080

functions:
    mongo-function:
        lang: csharp-kestrel-mongo
        handler: ./mongo-function
        image: <docker-registry>/mongo-function
        environment:
            mongo_endpoint: <your-mongo-endpoint>:27017
            mongo_database_name: my_mongo_database
            mongo_collection_name: my_mongo_collection

Note: Don’t forget to modify your gateway address! I always forget to do this.

If you are following along with Play with Docker, replace <your-mongo-endpoint> with the IP of the Docker instance you created and replace the gateway value with your OpenFaas endpoint (the URL). As you will also be building and pushing this function to a Docker registry, make sure that <docker-registry> is replaced with the registry you will use for this function.

Now that we have everything set up we can get to the fun part, the actual function logic! This will all be done in the generated FunctionHandler.cs file:

public class FunctionHandler
{
    public Task<string> Handle(object input)
    {
        this.GetCollection()
            .InsertOne(input.ToBsonDocument());

        var response = new ResponseModel()
        {
            response = input,
            status = 201
        };

        return Task.FromResult(JsonConvert.SerializeObject(response));
    }
}

By default the template inserts the request body into MongoDB and returns a response object with the provided input and an HTTP status code of 201. This can of course be changed to do anything you want, from creating a new MongoDB collection to updating all documents that contain the value ‘OpenFaas’.
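As an example of the latter, a handler along these lines could work. This is a sketch only: GetCollection and ResponseModel come from the generated template, while the ‘platform’ and ‘processed’ field names are made up for illustration.

public class FunctionHandler
{
    public Task<string> Handle(object input)
    {
        // Mark every document whose 'platform' field equals "OpenFaas"
        // as processed. Both field names are illustrative.
        var filter = Builders<BsonDocument>.Filter.Eq("platform", "OpenFaas");
        var update = Builders<BsonDocument>.Update.Set("processed", true);

        var result = this.GetCollection().UpdateMany(filter, update);

        var response = new ResponseModel()
        {
            response = $"Updated {result.ModifiedCount} document(s)",
            status = 200
        };

        return Task.FromResult(JsonConvert.SerializeObject(response));
    }
}

Once the handler does what you want, the function can then be deployed: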

faas-cli build -f mongo-function.yml

faas-cli push -f mongo-function.yml

faas-cli deploy -f mongo-function.yml

To access and play around with the function we can head over to the OpenFaas gateway UI:

Data can then be sent to the function via the request body, as seen in the screenshot above. If successful, this will return your given input and a status code of 201.
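If you prefer the command line, the function can also be invoked through the gateway, which exposes each function at /function/<name> (substitute your own gateway address and request body):

curl http://127.0.0.1:8080/function/mongo-function -d '{"message": "hello"}'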

We can also prove that the data has been written by using a MongoDB client to connect to our MongoDB and check that the data has been stored:

Cool, huh?

For more technical and in-depth usage of the function template, please read the documentation in the project repository.

GitHub project: csharp-kestrel-mongo

– Marcus

Writing OpenFaas Serverless Functions in Go

Prelude

In this post I will take you through the process of writing a Serverless function in Go with OpenFaas.

Note: Knowledge of Linux and Go will be very helpful.

Getting Started

Before we get to writing Serverless functions in Go, you need to have an OpenFaas deployment. If you do, great, you can skip to the next section. If you don’t, you can follow Alex’s (the founder of OpenFaas) tutorials on how to get a deployment up and running. He has a lot of great tutorials and posts that have heavily influenced this one. You should also check out his post on creating Go Serverless functions, as he goes into much more depth.

Writing a Serverless function

Note: I will be developing the following function on a Raspberry Pi. This should not matter, though, as long as you can run the OpenFaas CLI and build and deploy Docker containers.

The function we are going to create will simply return the current time on the machine that executes it.

First create the function project.

faas-cli new --lang=go-armhf get-time
Note: As I am running Go on ARM architecture, the '-armhf' suffix was added to the '--lang' parameter. This is not required if you are developing and building Go on x86.

The command above should have created 3 separate items:

  • A template folder which holds all the templates required to build and run Serverless functions with your chosen language in OpenFaas.
  • A get-time.yml file that holds the configuration of your function: for example, the name of your function’s Docker image, where the function will be deployed to, and the location of your function handler (the actual code of your function).
  • A get-time folder that will hold your function code.

If you have those three, we can open up the handler.go file in the get-time folder. You should see the following:

package function

import (
	"fmt"
)

// Handle a serverless request
func Handle(req []byte) string {
	return fmt.Sprintf("Hello, Go. You said: %s", string(req))
}

First we want to import the time package so that we can get the current time.

import (
	"fmt"
	"time"
)

Now we can change the return message with the current time.

return fmt.Sprintf("The current time on this machine is %s", time.Now())
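
Putting both changes together, the complete handler.go looks like this:

package function

import (
	"fmt"
	"time"
)

// Handle a serverless request
func Handle(req []byte) string {
	return fmt.Sprintf("The current time on this machine is %s", time.Now())
}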

Before we can build, push and deploy our changes, we need to make sure that our get-time.yml file has the correct configuration. The following is an example of my get-time.yml file; the values I changed were the gateway address and the image name.

provider:
    name: faas
    gateway: http://192.168.1.125:31112

functions:
    get-time:
        lang: go-armhf
        handler: ./get-time
        image: marcussmallman/get-time

Next, we can build this function and deploy it to OpenFaas.

faas-cli build -f get-time.yml

faas-cli push -f get-time.yml

faas-cli deploy -f get-time.yml

If the 3 commands above were successful then we can head over to the OpenFaas UI and expect to see the get-time function there.

Now we can press the Invoke button, which will call the function we just created and hopefully return the current time.
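
Alternatively, the function can be invoked directly through the gateway, using the gateway address from the get-time.yml above:

curl http://192.168.1.125:31112/function/get-time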

And there we have it.

To End

As simple as this post was to follow, I think it is a great example of how powerful and practical Serverless functions can be. The fact that a function is what scales depending on load is just awesome and potentially very beneficial. That’s not to say it’s an answer to all problems, but I think it is definitely an exciting new technology that we should all be aware of.

– Marcus

DIY Raspberry Pi Kubernetes Cluster

Getting Started

Before proceeding in building a DIY Kubernetes cluster, some knowledge of Kubernetes and Linux will be helpful.

There are many blog posts and tutorials about creating a DIY Kubernetes cluster and I thought I’d give it a shot. I found the following sources very helpful in getting my very own cluster up and running:

If you want to use them instead, go ahead; otherwise I will take you through what I did and what worked for me.

What to buy?

There are many options when it comes to building a DIY cluster. You can go for the cheap Raspberry Pi Zeros and build a cluster out of those, or go for high-end mini PCs with much better specs. I ended up going for a reasonably pricey 5 node cluster consisting of the following parts:

If you do intend to purchase these parts, please research other websites first as these links are examples and cheaper options are sure to be available.

Note: Raspberry Pi's run on the ARM architecture. This is important depending on what you are going to use your cluster for, as a lot of the containers you may want to run are built for the x86 architecture, meaning they will not run on the Raspberry Pi's. This can be solved by purchasing a UDOO or something similar with an x86 architecture.

Building the Cluster

Once all the parts have been gathered there will be a few things that need to be done before building the cluster.

First you will need to install a Raspberry Pi OS (I went for Raspbian Jessie Lite) on each of the SD cards. To do this I used a tool called Etcher. Once all the SD cards have been set up, you will also need to create an empty file called ‘ssh’ in the boot partition of each card (this enables SSH) before you insert them into the Raspberry Pi’s.

Next, the cluster can be built so that we get everything up and running. I will talk you through the process of building mine. First, I connected the power supply and the network switch to the mains. I then connected my router to the power supply. Now that the router had power, I was able to configure it to act as a network bridge between my home network and the cluster network. This is handy if you want to use static IPs for your cluster nodes; otherwise you can connect them directly to your home network. Once the router was configured, I connected it to the network switch so that any devices connected via the switch had internet access. I then assembled the Raspberry Pi’s and case. This step probably took me the longest as the screws were so small and fiddly. I then inserted the SD cards into the Raspberry Pi’s and connected all of them to the power supply and network switch. Finally, I tidied everything up and this was the end result:

Running Kubernetes

Now that the cluster is built, it is time to get it running Kubernetes. I will be using Kubeadm to create my Kubernetes cluster. To do this you will need to complete the following steps on each of the cluster nodes:

SSH into the node and change the hostname. The name should reflect the node’s role. For example, 4 of the nodes in this cluster will be Kubernetes worker nodes, so their hostnames could be something like ‘worker1’, ‘worker2’, ‘worker3’ and ‘worker4’. The 5th node will be the Kubernetes master node, so its hostname could be ‘master’. To change the hostname on a Raspberry Pi use the following command:

sudo raspi-config

Reboot the node once the change has been made.

sudo reboot

Next, Docker will need to be installed and some configuration will need to be done. There is a handy command that does all of this for you, given the setup that I have:

curl -sL https://gist.githubusercontent.com/alexellis/fdbc90de7691a1b9edb545c17da2d975/raw/b04f1e9250c61a8ff554bfe3475b6dd050062484/prep.sh | sudo sh

Credit: Alex Ellis’ k8s-on-raspbian

If this does not work, or you are curious to know what it does, take a look at Alex Ellis’ k8s-on-raspbian repository credited above, where I got it from. Hopefully that answers any questions you may have.

Now that everything required to run Kubernetes is installed, we can set up our master and worker nodes.

To set up our master node, run the following command:

sudo kubeadm init

A problem I encountered when running that command was that it would hang and then error while trying to initialise the master node. To get my master node set up, I created the following config file, called kubeadm.yml:

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  extraArgs:
    'listen-peer-urls': 'http://127.0.0.1:2380'

And used that when initialising the master node.

sudo kubeadm init --config kubeadm.yml

This could take a while so be patient.

Once the master node has initialised, we can join the worker nodes to the cluster using the command that was output:

sudo kubeadm join --token 5aaee9.0afd0c68d95311e5 192.168.1.251:6443 --discovery-token-ca-cert-hash sha256:e759812509474de12fa90d082ebec2fdb0f44182c9630e69781eb5350631056c

Run the command on each node and that should join them to the cluster. Next, we need to check that they have connected; to do this we need to set up the Kubernetes configuration.

When initialising the master node, an admin.conf file is created which contains the config you will need to communicate with the cluster using kubectl. There are two ways of setting the Kubernetes configuration: either set the KUBECONFIG environment variable or have a .kube/config file in place. I will set up the .kube/config file. To do this, execute the following commands on the master node:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

This will copy admin.conf into your .kube/config file and make your user its owner.

Note: If you want to access the Kubernetes cluster from a different machine, you will need to copy this config onto that machine.

I copied this onto my Windows machine at the following location
C:\Users\<User>\.kube\config

Then set my KUBECONFIG environment variable to point to that location.

CMD: SET KUBECONFIG=C:\Users\<User>\.kube\config
PowerShell: [Environment]::SetEnvironmentVariable("KUBECONFIG", "C:\Users\<User>\.kube\config")

Now that the Kubernetes config is set, we can run the following command to see if all the nodes have joined the cluster.

kubectl get nodes

Awesome! Now we can finish setting up the networking. I took Hanselman’s approach and used Weave for networking within the Kubernetes cluster. There are many other network add-ons to choose from, but Weave worked straight away for me.

kubectl apply -f https://git.io/weave-kube-1.6

Double check that everything is up and running using:

kubectl get pods --namespace kube-system

And that’s it, I have a running Kubernetes cluster.

Kubernetes Dashboard

One final thing to make developing against your own Raspberry Pi Kubernetes cluster even better is to set up the Kubernetes dashboard. As the cluster is made up of Raspberry Pi’s, the ARM version of the dashboard will need to be deployed.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard-arm.yaml
Note: If this link does not work, or you want to use the development manifest, the manifests can be found in the kubernetes/dashboard repository.

Next I ran,

kubectl proxy

from my Windows machine to see if I could get to the dashboard. I hit a problem, which was down to the dashboard being scheduled on a worker node rather than the master. I believe this is a problem related to kubeadm. To solve it, I had to modify the kubernetes-dashboard-arm.yaml file and add the following to the deployment’s pod template spec:

nodeSelector:
  node-role.kubernetes.io/master: ""

Once I had redeployed the dashboard with that change I was then able to hit the dashboard at:

http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/#!/overview?namespace=_all

And there we have it.

I hope this was helpful and if there are any problems or questions related to this post I’ll be happy to answer and resolve them in the comments.

– Marcus