Right now Kubeless has support for the following runtimes:
- Python: For the branch 2.7.X
- NodeJS: For the branches 6.X and 8.X
- Ruby: For the branch 2.4.X
Each runtime is encapsulated in a container image. The references to these images are injected in the Kubeless controller. You can find the source code of all runtimes in
Runtimes have a maximum timeout set by the environment variable FUNC_TIMEOUT. This environment variable can be set using the CLI option
--timeout. The default value is 180 seconds. If a function takes longer than that to execute, the process will be terminated.
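The exact enforcement mechanism differs per runtime; as an illustration only (not the actual Kubeless implementation), a hard timeout like this can be approximated on Unix with a SIGALRM-based guard:

```python
import signal
import time

class FunctionTimeout(Exception):
    """Raised when the function exceeds the allowed execution time."""

def _on_alarm(signum, frame):
    raise FunctionTimeout()

def run_with_timeout(func, timeout_seconds):
    """Run func; abort it if it runs longer than timeout_seconds (Unix only)."""
    signal.signal(signal.SIGALRM, _on_alarm)
    signal.alarm(timeout_seconds)   # schedule a SIGALRM after the deadline
    try:
        func()
        return True                 # finished in time
    except FunctionTimeout:
        return False                # exceeded the limit and was aborted
    finally:
        signal.alarm(0)             # cancel any pending alarm

fast_ok = run_with_timeout(lambda: None, 1)          # returns immediately
slow_ok = run_with_timeout(lambda: time.sleep(3), 1) # interrupted after 1s
print(fast_ok, slow_ok)  # True False
```

The sleeping function is interrupted once the alarm fires, mirroring how a function exceeding FUNC_TIMEOUT gets its process terminated.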
Configuring Default Runtime Container Images
The Kubeless controller defines a set of default container images per supported runtime variant.
These default container images can be configured via Kubernetes environment variables on the Kubeless controller's deployment container, or by modifying the
kubeless-config ConfigMap that is deployed along with the Kubeless controller.
If you want to obtain the list of available runtimes, execute:
```
$ kubeless get-server-config
INFO Current Server Config:
INFO Supported Runtimes are: python2.7, python3.4, python3.6, nodejs6, nodejs8, ruby2.4, dotnetcore2.0
```
If you want to deploy a custom runtime using an environment variable, these are some examples:
| Runtime Variant | Environment Variable Name |
|---|---|
| NodeJS 6.X HTTP Trigger | |
| NodeJS 8.X Event Trigger | |
| Python 2.7.x HTTP Trigger | |
| Ruby 2.4.x HTTP Trigger | |
HTTP Trigger
This variant is used when the function is meant to be triggered through HTTP. To do so we use a web framework that is in charge of receiving requests and redirecting them to the function. This kind of trigger is supported for all the runtimes.
Node.js HTTP Trigger
For the Node.js runtime we start an Express server and include the routes for serving the health check and exposing the monitoring metrics. Apart from that, we enable CORS requests and use Morgan to handle logging in the server. Monitoring is supported if the function is synchronous or if it uses promises.
When using the Node.js runtime, it is possible to configure a custom registry or scope in case a function needs to install modules from a different source. For doing so it is necessary to set up the environment variables NPM_REGISTRY and NPM_SCOPE when deploying the function:
```
$ kubeless function deploy myFunction --runtime nodejs6 \
    --env NPM_REGISTRY=http://my-registry.com \
    --env NPM_SCOPE=@myorg \
    --dependencies package.json \
    --handler test.foobar \
    --from-file test.js \
    --trigger-http
```
Python HTTP Trigger
For Python we use Bottle, and we also add routes for the health check and monitoring metrics.
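For reference, a function for this runtime is just a module-level callable. In this version of Kubeless the runtime passes the incoming request object as the handler's single argument (the file and function names below are illustrative):

```python
# handler.py -- a hypothetical function for the Python runtime.
# The runtime passes the incoming request object as the single
# argument, commonly named "context".

def foobar(context):
    # Whatever the handler returns becomes the HTTP response body.
    return "hello world"
```

It would then be deployed with something like `kubeless function deploy hello --runtime python2.7 --handler handler.foobar --from-file handler.py --trigger-http`.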
Ruby HTTP Trigger
For Ruby we use Sinatra as the web framework, and we add the routes required for the function and the health check. Monitoring is not yet supported for this runtime. PRs are welcome :-)
Event Trigger
This variant is used when the function is meant to be triggered through message events in a pre-deployed Kafka system. We include a Kafka/Zookeeper setup in the deployment manifest of the Kubeless release package, which is deployed together with the Kubeless controller. The runtimes are essentially Kafka consumers which listen for messages on a specific Kafka topic and execute the injected function.
Right now the runtimes that support this kind of event are Python, NodeJS and Ruby.
The pods are deployed using the function handler as the Kafka consumer group ID, which means the load will be balanced: if a function is scaled and its deployment has more than one replica, only one pod will process each published message.
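That behaviour follows from Kafka's consumer-group semantics: the partitions of a topic are divided among the consumers sharing a group ID, so each message is delivered to exactly one member. A toy simulation of such an assignment (not a Kafka client, just an illustration of the idea):

```python
def assign_partitions(partitions, consumers):
    """Toy round-robin assignment: within one consumer group, every
    partition (and therefore every message) belongs to exactly one consumer."""
    assignment = {c: [] for c in consumers}
    for i, partition in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(partition)
    return assignment

# Three replicas of one function, all joining the same group:
pods = ["pod-a", "pod-b", "pod-c"]
assignment = assign_partitions(list(range(6)), pods)
print(assignment)  # {'pod-a': [0, 3], 'pod-b': [1, 4], 'pod-c': [2, 5]}
```

Every partition lands on exactly one pod, so no message is processed twice even when the deployment is scaled out.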
Scheduled Trigger
This variant is meant for functions that should be triggered following a certain schedule. The execution frequency is specified using the Cron format. Every time a scheduled function is executed, a Job is started. This Job makes an HTTP GET request to the function's service and is considered successful as long as the function returns 200 OK.
For executing scheduled functions we use Kubernetes CronJobs, mostly with the default options, which means:
- If a Job fails, it won't be restarted, but it will be retried at the next scheduled event. The maximum time a Job can run is the function timeout (180 seconds by default).
- The concurrency policy is set to Allow, so concurrent Jobs may exist.
- The history limit is set to keep at most three successful Jobs (and one failed).
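The Job's success criterion can be pictured as a plain HTTP GET against the function's Service. The sketch below stands up a throwaway local server in place of the function (names and ports here are illustrative, not Kubeless internals):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class FakeFunction(BaseHTTPRequestHandler):
    """Throwaway stand-in for the function's Kubernetes Service."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"OK")
    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), FakeFunction)  # ephemeral port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The trigger Job succeeds as long as the GET returns 200 OK.
status = urlopen("http://127.0.0.1:%d/" % server.server_port).getcode()
job_succeeded = (status == 200)
server.shutdown()
print(job_succeeded)  # True
```

Any non-200 response would mark the Job as failed, leaving it to be retried at the next scheduled event as described above.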
If for some reason you want to modify one of the default values for a certain function, you can execute
kubectl edit cronjob trigger-<func_name> (where
func_name is the name of your function) and modify the required fields. Once saved, the CronJob will be updated.
Monitoring
Kubeless runtimes expose metrics at the
/metrics endpoint, and these metrics are collected by Prometheus. We also include a Prometheus setup in
manifests/monitoring to help you set it up more easily. The metrics collected are: number of calls, succeeded and failed executions, and the time spent per call.
Custom Runtime (Alpha)
NOTE: This feature is under heavy development and may change in the future
We provide a way to define a custom runtime in the form of a container image. This way you are able to use any language or any binary with Kubeless, as long as the image satisfies the following conditions:
- It runs a web server listening on port 8080
- It exposes the endpoint /healthz to perform the container liveness probe
- (Optional) It serves Prometheus metrics at the /metrics endpoint
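A minimal runtime that meets the mandatory conditions can be sketched with Python's standard library alone. This is a demo skeleton, not one of the official runtime images; it binds an ephemeral port here so it can be exercised locally, whereas a real image must listen on 8080:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

def injected_function():
    # Stand-in for whatever function or binary the image wraps.
    return "hello from a custom runtime"

class Runtime(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":          # liveness probe endpoint
            body = b"OK"
        else:                                # every other path runs the function
            body = injected_function().encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass  # silence per-request logging

# Kubeless expects port 8080; an ephemeral port is used for this demo.
server = HTTPServer(("127.0.0.1", 0), Runtime)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = "http://127.0.0.1:%d" % server.server_port

health = urlopen(base + "/healthz").read()
response = urlopen(base + "/").read()
server.shutdown()
print(health, response)
```

The /healthz route satisfies the liveness probe, while all other requests invoke the wrapped function, mirroring the contract listed above.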
To deploy the container image you just need to specify it using the Kubeless CLI:
```
$ kubeless function deploy --runtime-image bitnami/tomcat:9.0 webserver
$ kubeless function ls
NAME       NAMESPACE  HANDLER  RUNTIME  TYPE  TOPIC
webserver  default                      HTTP
```
Now you can call your function like any other:
```
$ kubeless function call webserver
...
<h2>If you're seeing this, you've successfully installed Tomcat. Congratulations!</h2>
```
Note that you can also use your own image and inject different functions. That means you have to manage how your runtime starts, finds the injected function, and executes it. Kubeless injects the function into the runtime container via a Kubernetes ConfigMap object mounted at the
/kubeless folder, so make sure your runtime looks for functions in that folder. Let's see an example:
First we need to create a base image. For this example we will use the Python web server that you can find in the runtimes folder. We will use the following Dockerfile:
```
FROM python:2.7-slim
RUN pip install bottle==0.12.13 cherrypy==8.9.1 wsgi-request-logger prometheus_client lxml
ADD kubeless.py /
EXPOSE 8080
CMD ["python", "/kubeless.py"]
```
Once you have built the image, you need to push it to a registry to make it available within your cluster. Finally, you can run the
deploy command specifying the custom runtime image and the function you want to inject:
```
$ kubeless function deploy \
    --runtime-image tuna/kubeless-python:0.0.6 \
    --from-file ./handler.py \
    --handler handler.hello \
    --runtime python2.7 \
    --trigger-http hello
$ kubeless function ls
NAME        NAMESPACE  HANDLER  RUNTIME    TYPE  TOPIC
get-python  default    foo.foo  python2.7  HTTP
$ kubeless function call get-python
Connecting to function...
Forwarding from 127.0.0.1:30000 -> 8080
Forwarding from [::1]:30000 -> 8080
Handling connection for 30000
hello world
```
Note that it is possible to specify
--dependencies as well when using custom images, and have them installed by an init container, but that is only possible for the supported runtimes. You can get the list of supported runtimes by executing
kubeless function deploy --help.
When using a runtime that is not supported, your function will be stored as
/kubeless/function without extension. For example, injecting a file
my-function.jar would result in the file being mounted as /kubeless/my-function.