OpenFaaS: Serverless Functions Deepdive
OpenFaaS is a platform for defining, deploying and monitoring serverless functions written in different programming languages such as Node.js, Python, Go and Java. It has a well-defined command-line interface and a web-based UI. OpenFaaS functions are effectively fully customizable Docker containers. This deep-dive article shows how to use OpenFaaS and how to customize your serverless function with any additional tooling or software package that you need.
Hosting applications in the cloud requires servers and computing resources. Usually you estimate the number of servers that your application will need, provision those servers and deploy your app. Through monitoring you see the actual resource consumption, and can scale up or down. With Kubernetes, this task becomes as easy as executing kubectl scale deployment lighthouse --replicas 10. But you still need enough servers to actually provide the capacity for your apps.
Let’s take this idea to the next level. Imagine your application consists of stateless, loosely coupled microservices. Each microservice is deployed so that it can handle the request load automatically. Automated monitoring detects and projects resource consumption. If the load is high, microservices under heavy load get new instances. If the load is low, the number of microservice instances is reduced. Application provisioning, and therefore resource utilization, is dynamic: only those resources that are actually needed are provisioned.
This paradigm has different names: Serverless, Functions-as-a-Service or Lambda. The first successful commercial offering was AWS Lambda, later followed by Google Cloud Functions. They offer environments in which you write functions in Java, NodeJS, Python or Go and get an HTTP endpoint for invoking them. Today, there are many Functions-as-a-Service platforms: OpenFaaS, Kubeless, Knative and Apache OpenWhisk. These platforms allow self-hosting serverless functions.
OpenFaaS is easy to use, intentionally abstracting away the underlying complexity until you are ready to get into the details. This article started as an extension of my lighthouse SaaS, but it evolved into a deep dive explaining how OpenFaaS templates work. As a first-time user, you don’t need to understand the templates to operate OpenFaaS, but I was curious about their details. Be warned: the article is lengthy! But at the end you will know which template to apply for writing a custom microservice with HTTP status codes that you define, and you will understand how to customize the Docker image that is used for building your function.
OpenFaaS in a Nutshell
OpenFaaS is a platform that enables you to run functions written in various programming languages. It is deployed either via Docker Swarm or Kubernetes, and consists of the following components:
- Gateway: The gateway is a resource of type LoadBalancer. It provides an external IP and listens on port 8080 for incoming traffic. It gives you access to the dashboard and to the functions that you deploy.
- NATS: A full-featured open-source messaging system that offers distributed pub/sub. OpenFaaS uses NATS to provide asynchronous function invocation.
- Queue Worker: The worker that processes queued asynchronous requests, passing them on to their target function.
- Faas Idler: A tool that checks function metrics such as invocation counts and determines whether an idle function should be scaled down.
- Prometheus: Built-In scraping of function metrics
- Alert Manager: Built-in alerting for detecting and reacting to high request load
The primary method to interact with the OpenFaaS platform is the command-line interface faas-cli. It covers the complete life-cycle of creating, building, pushing and deploying functions. Each function is deployed as a Docker container. The OpenFaaS community provides official templates for creating functions in Java, Node.js, Python, Ruby, Go, C# and PHP.
With the help of the arkade tool, OpenFaaS can be installed with a single line: arkade install openfaas --load-balancer. More installation options[^1] can be found in the official documentation.
Understanding OpenFaaS Functions
An OpenFaaS function is in essence a Docker container that runs on the platform. The platform takes care of monitoring, scheduling and scaling these functions, and for this the container needs to fulfill the runtime requirements specified by the OpenFaaS project. You can create your own Docker container from scratch, but it’s better to use the community-provided templates, which fulfill all these requirements and let you start writing your function right away.
OpenFaaS templates come in two flavors. The classic watchdog forks a new process per request and uses stdin and stdout to pass messages. The newer of-watchdog forks a process that stays active, which enables persistent connection pools and caches. It also supports HTTP-based communication between the watchdog and the forked process, which can be used for fine-grained HTTP responses from your function. My suggestion is to use the of-watchdog.
With the command faas-cli template store list you will see a list of all currently available templates. It gives you output like the following:
NAME SOURCE DESCRIPTION
csharp openfaas Classic C# template
dockerfile openfaas Classic Dockerfile template
go openfaas Classic Golang template
java8 openfaas Java 8 template
java11 openfaas Java 11 template
java11-vert-x openfaas Java 11 Vert.x template
node12 openfaas HTTP-based Node 12 template
node openfaas Classic NodeJS 8 template
php7 openfaas Classic PHP 7 template
python openfaas Classic Python 2.7 template
...
The hard part about the templates is to find the one suitable for your project. Most of the templates just provide an HTTP wrapper around the function that you write: an endpoint to call, with fixed return codes of 200 or 400. If you want to deploy a microservice with several endpoints and custom HTTP status codes, use a template based on the of-watchdog.
For the remainder of this article, I will focus on those templates, and in particular on the NodeJS template node10-express-service.
Understanding the OpenFaaS Templates
So, what's included inside a template? Execute faas-cli template store pull node10-express-service and take a look.
> tree template/node10-express-service
template/node10-express-service
├── Dockerfile
├── function
│ ├── handler.js
│ └── package.json
├── index.js
├── package.json
└── template.yml
As you can see, the template consists of a Dockerfile, a function subfolder where you place your function code, the code that provides the HTTP wrapper for your function (here index.js and package.json), and a configuration file template.yml.
Dockerfile
The Dockerfile builds the language-specific execution environment for your function. The Dockerfile instructions are concerned with these tasks:
- Set up the watchdog process to have a clean way of forking subprocesses inside the Docker container
- Provide all the dependencies for the chosen programming language
- Copy the HTTP wrapper for the function code
- Copy the function code
The watchdog is set up by copying the fwatchdog binary from the official of-watchdog image.
FROM openfaas/of-watchdog:0.5.3 as watchdog
...
COPY --from=watchdog /fwatchdog /usr/bin/fwatchdog
RUN chmod +x /usr/bin/fwatchdog
For the programming-language-specific environment, a suitable Docker base image is used as well:
FROM node:10.12.0-alpine as ship
The HTTP wrapper code is contained in the files index.js and package.json. These are copied into the Docker container, and all dependencies are installed.
ENV NPM_CONFIG_LOGLEVEL warn
# Wrapper/boot-strapper
WORKDIR /home/app
COPY package.json ./
RUN npm i
COPY index.js ./
Similarly, the function code contained in function/handler.js and function/package.json is copied.
# COPY function node packages and install
WORKDIR /home/app/function
COPY function/*.json ./
RUN npm i || :
COPY function/ ./
Finally, environment variables are set and the fwatchdog process is started in the container.
ENV cgi_headers="true"
ENV fprocess="node index.js"
ENV mode="http"
ENV upstream_url="http://127.0.0.1:3000"
ENV exec_timeout="10s"
ENV write_timeout="15s"
ENV read_timeout="15s"
HEALTHCHECK --interval=3s CMD [ -e /tmp/.lock ] || exit 1
CMD ["fwatchdog"]
JavaScript HTTP-Wrapper
The HTTP wrapper code ultimately creates an Express web server instance that forwards all requests to the function definition provided in handler.js.
First of all, the main objects for the Express app are created.
const express = require('express')
const app = express()
const handler = require('./function/handler');
Then, the init function is defined and executed.
async function init() {
  await handler({"app": app});
  const port = process.env.http_port || 3000;
  app.disable('x-powered-by');
  app.listen(port, () => {
    console.log(`node10-express-service, listening on port: ${port}`)
  })
}

init();
As you can see, the Express server is started and configured by the handler() method. This method is defined in function/handler.js, and it configures the basic Express app further: it sets up middleware to parse the HTTP body as JSON or raw text, and it defines that all routes will be forwarded to an object called route. In your function code, this object will be extended with your custom logic, as we will see later.
class Routing {
  constructor(app) {
    this.app = app;
  }

  configure() {
    const bodyParser = require('body-parser')
    this.app.use(bodyParser.json());
    this.app.use(bodyParser.raw());
    this.app.use(bodyParser.text({ type : "text/*" }));
    this.app.disable('x-powered-by');
  }

  bind(route) {
    this.app.post('/*', route);
    this.app.get('/*', route);
    this.app.patch('/*', route);
    this.app.put('/*', route);
    this.app.delete('/*', route);
  }

  handle(req, res) {
    res.send(JSON.stringify(req.body));
  }
}

module.exports = async (config) => {
  const routing = new Routing(config.app);
  routing.configure();
  routing.bind(routing.handle);
}
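To see what the default wiring does without a full Express setup, here is a condensed sketch that exercises the binding and the default echo behavior against a stub app object (the stub is an illustrative assumption; the real template uses an actual Express instance):

```javascript
// Condensed version of the template's Routing class: all HTTP verbs
// are bound to one route, and the default handle() echoes the body.
class Routing {
  constructor(app) {
    this.app = app;
  }
  bind(route) {
    for (const verb of ['post', 'get', 'patch', 'put', 'delete']) {
      this.app[verb]('/*', route);
    }
  }
  handle(req, res) {
    res.send(JSON.stringify(req.body));
  }
}

// Stub app that merely records which verbs get bound
const bound = [];
const app = {};
for (const verb of ['post', 'get', 'patch', 'put', 'delete']) {
  app[verb] = (path, route) => bound.push(verb.toUpperCase() + ' ' + path);
}

const routing = new Routing(app);
routing.bind(routing.handle);
console.log(bound.length); // prints 5: every verb forwards to the same route

// The default handler simply echoes the parsed request body
let echoed;
routing.handle({ body: { hello: 'world' } }, { send: (d) => { echoed = d; } });
console.log(echoed); // prints {"hello":"world"}
```

This is why, out of the box, every request to a function built from this template answers with an echo of its own body until your handler replaces the default route.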
OpenFaaS Configuration File
We know how the Dockerfile works, and we have seen the HTTP wrapper. Now we need to understand how to use this template to expose our own service.
You create a new function skeleton with the command faas-cli new --lang node10-express-service hello-world. This creates the following files:
hello-world.yml
hello-world
├── handler.js
└── package.json
The configuration file hello-world.yml connects your function code to the chosen template.
version: 1.0
provider:
  name: openfaas
  gateway: https://functions.admantium.com
functions:
  hello-world:
    lang: node10-express-service
    handler: ./hello-world
    image: hello-world:latest
Now you put your application code inside the file handler.js.
module.exports = async (config) => {
  const app = config.app;
  app.get('/', (req, res) => res.send("Hello"));
}
Then you can build your function with faas-cli build -f hello-world.yml. This command triggers the following steps:
- The OpenFaaS template name is read from the configuration file, here lang: node10-express-service
- The Dockerfile of this template is used to build the Docker image
  - This Dockerfile is assumed to exist in the directory ./templates/node10-express-service
  - You can customize this Dockerfile with custom code, e.g. adding base image packages
- The image is tagged according to the value given in the configuration file
- When executed, the exported app object is passed to the template’s HTTP wrapper, which instantiates the Routing class that serves all requests
Conclusion
OpenFaaS is a powerful platform for implementing serverless functions. In this deep-dive article, I showed in detail how templates work. A template provides a customizable Docker image that is linked to your function code via a configuration file during the build step. The programming-language-specific parts of a template are artifacts in that language, so by examining them you can see exactly how the HTTP wrapper for your function code works. I hope this knowledge helps you to understand templates in depth, and that you can customize them for your specific application design.
Footnotes
[^1]: One note about installation: be sure not to have any other web server listening on ports 80, 443 or 8080 on the machine/nodes on which you install OpenFaaS. I had trouble with constantly crashing load balancers, and the culprit was an Nginx server that was still running on my master node.