Autoscaling function deployment in Kubeless

This document gives you an overview of how autoscaling works for functions in Kubeless, and walks you through configuring it for a custom metric.


Kubernetes provides the HorizontalPodAutoscaler (HPA) for pod autoscaling. In Kubeless, each function is deployed as a separate Kubernetes deployment, so we naturally leverage HPA to automatically scale functions based on defined workload metrics.

If you are using the Kubeless CLI, the command below gives you an idea of how to set up autoscaling for a deployed function:

$ kubeless autoscale --help
autoscale command allows user to list, create, delete autoscale rule
for function on Kubeless

  kubeless autoscale SUBCOMMAND [flags]
  kubeless autoscale [command]

Available Commands:
  create      automatically scale function based on monitored metrics
  delete      delete an autoscale from Kubeless
  list        list all autoscales in Kubeless

  -h, --help   help for autoscale

Use "kubeless autoscale [command] --help" for more information about a command.

Once you create an autoscaling rule for a specific function (with kubeless autoscale create), a corresponding HPA object is added to the system; it monitors your function and autoscales its pods based on the rule you defined in the command. The default metric is CPU, but you also have the option to autoscale on custom metrics. At the moment, Kubeless supports qps, which stands for the number of incoming requests to the function per second.

$ kubeless autoscale create --help
automatically scale function based on monitored metrics

  kubeless autoscale create <name> FLAG [flags]

  -h, --help               help for create
      --max int32          maximum number of replicas (default 1)
      --metric string      metric to use for calculating the autoscale. Supported
      metrics: cpu, qps (default "cpu")
      --min int32          minimum number of replicas (default 1)
  -n, --namespace string   Specify namespace for the autoscale
      --value string       value of the average of the metric across all replicas.
      If metric is cpu, value is a number represented as percentage. If metric
      is qps, value must be in format of Quantity
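For example, assuming a deployed function named hello (the name is hypothetical), rules for both supported metrics could look like this:

```shell
# CPU-based: keep average CPU utilization at 70% across 1-5 replicas.
kubeless autoscale create hello --metric cpu --min 1 --max 5 --value 70

# qps-based: the value is a Kubernetes Quantity, e.g. 2k = 2000 requests/s.
kubeless autoscale create hello --metric qps --min 1 --max 10 --value 2k

# Inspect and remove rules:
kubeless autoscale list
kubeless autoscale delete hello
```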

The sections below walk you through the setup needed to make a function autoscale based on the qps metric.

Autoscaling based on CPU usage

To autoscale based on CPU usage, your function must have been deployed with a CPU request.

To do this, use the --cpu parameter when deploying your function. See Meaning of CPU in the Kubernetes documentation for the format of the value that should be passed.
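A sketch of such a deployment, with hypothetical function, file, and handler names:

```shell
# 100m reserves a tenth of a CPU core per replica; the HPA computes
# CPU utilization relative to this request.
kubeless function deploy hello \
  --runtime python2.7 \
  --handler hello.handler \
  --from-file hello.py \
  --cpu 100m
```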

Autoscaling with custom metrics

It is possible to use custom metrics (like queries per second) to scale your functions. We are looking for help documenting the required steps for the different Kubernetes providers and newer versions of Kubernetes. If you want to contribute to this guide, PRs are more than welcome :).

Warning: This walkthrough was done on kubeadm-dind-cluster v1.7; it may not work on other versions or platforms.

Cluster configuration

Before getting started, ensure that the main components of your cluster are configured for autoscaling on custom metrics. As of Kubernetes 1.7, this requires enabling the aggregation layer on the API server and configuring the controller manager to use the metrics APIs via their REST clients.

Read more about aggregation and autoscaling in the Kubernetes documentation.

Start the cluster

chmod +x
./ up

Checking the state of the cluster:

docker ps
kubectl cluster-info


The manifests of the Kubernetes components in kubeadm-dind-cluster are located at /etc/kubernetes/manifests. You can jump into the master "container" and edit them directly; the kubelet recreates the components immediately.
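For example, on kubeadm-dind-cluster the master node is itself a Docker container (conventionally named kube-master; adjust to your setup):

```shell
# Open a shell inside the master node container and inspect the
# static pod manifests that the kubelet watches.
docker exec -it kube-master /bin/bash
ls /etc/kubernetes/manifests
```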

These configurations must be set:

  • Enable the aggregation layer with these flags in kube-apiserver:

--requestheader-client-ca-file=<path to aggregator CA cert>
--proxy-client-cert-file=<path to aggregator proxy cert>
--proxy-client-key-file=<path to aggregator proxy key>

  • Configure the controller manager to use the metrics APIs via their REST clients with these settings in kube-controller-manager:

--horizontal-pod-autoscaler-use-rest-clients=true
--master=<apiserver-address>:<port>   (port should be 8080)

The horizontal-pod-autoscaler-sync-period parameter sets the interval (in seconds) at which the HPA controller synchronizes the number of pods. By default it is 30s. Sometimes you may want to lower this value so the HPA controller reacts faster.

  • Autoscaling on custom metrics is supported in HPA since v1.7 via the autoscaling/v2alpha1 API. It must be enabled by setting the runtime config in kube-apiserver:

--runtime-config=autoscaling/v2alpha1=true

Once kube-apiserver is configured and running, kubectl will auto-discover all API groups. Check with the command below and you should see that autoscaling/v2alpha1 is enabled:

$ kubectl api-versions
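The full list can be long; filtering it is a convenient check:

```shell
# Both autoscaling/v1 and autoscaling/v2alpha1 should appear once the
# runtime config is in place.
kubectl api-versions | grep autoscaling
```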

Deploy Prometheus to monitor services

The Prometheus setup contains a Prometheus operator and a Prometheus instance

$ kubectl create -f $KUBELESS_REPO/manifests/autoscaling/prometheus-operator.yaml
clusterrole "prometheus-operator" created
serviceaccount "prometheus-operator" created
clusterrolebinding "prometheus-operator" created
deployment "prometheus-operator" created

$ kubectl create -f $KUBELESS_REPO/manifests/autoscaling/sample-prometheus-instance.yaml
clusterrole "prometheus" created
serviceaccount "prometheus" created
clusterrolebinding "prometheus" created
prometheus "sample-metrics-prom" created
service "sample-metrics-prom" created

$ kubectl get svc
NAME                  CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes         <none>        443/TCP          6d
prometheus-operated   None            <none>        9090/TCP         1h
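To verify that the instance is up and scraping targets, you can port-forward to it. The pod name below follows the Prometheus operator's naming convention and may differ in your cluster; check with kubectl get pods:

```shell
# Forward the Prometheus UI to localhost:9090 and inspect targets there.
kubectl port-forward prometheus-sample-metrics-prom-0 9090
```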

Deploy a custom API server

When the aggregator is enabled and configured properly, you can deploy and register a custom API server that provides the custom-metrics API group/version, and let the HPA controller query custom metrics from it.

The custom API server we are using here is basically a Prometheus adapter: it collects metrics from Prometheus and serves them to the HPA controller via REST queries (which is why we must configure the HPA controller to use REST clients via the --horizontal-pod-autoscaler-use-rest-clients flag).

$ kubectl create -f $KUBELESS_REPO/manifests/autoscaling/custom-metrics.yaml
namespace "custom-metrics" created
serviceaccount "custom-metrics-apiserver" created
clusterrolebinding "custom-metrics:system:auth-delegator" created
rolebinding "custom-metrics-auth-reader" created
clusterrole "custom-metrics-read" created
clusterrolebinding "custom-metrics-read" created
deployment "custom-metrics-apiserver" created
service "api" created
apiservice "" created
clusterrole "custom-metrics-server-resources" created
clusterrolebinding "hpa-controller-custom-metrics" created

At this point, the custom API server is deployed and registered with the API aggregator, so we can see it:

$ kubectl api-versions

$ kubectl get po -n custom-metrics
NAME                                        READY     STATUS    RESTARTS   AGE
custom-metrics-apiserver-2956926076-wcgmw   1/1       Running   0          1h

$ kubectl get --raw /apis/
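You can also query the custom metrics API directly. The group/version below is an assumption based on the usual Prometheus adapter registration (the apiservice name in the manifest output above is elided); check your manifest for the exact value:

```shell
# List the custom metrics the adapter exposes.
kubectl get --raw /apis/custom-metrics.metrics.k8s.io/v1alpha1/
```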

Deploy a sample app

Now we can deploy a sample app and a sample HPA rule that autoscales on the http_requests metric, collected and exposed via Prometheus.

$ cat $KUBELESS_REPO/manifests/autoscaling/sample-metrics-app.yaml
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2alpha1
metadata:
  name: sample-metrics-app-hpa
spec:
  scaleTargetRef:
    kind: Deployment
    name: sample-metrics-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      target:
        kind: Service
        name: sample-metrics-app
      metricName: http_requests
      targetValue: 100

$ kubectl create -f $KUBELESS_REPO/manifests/autoscaling/sample-metrics-app.yaml
deployment "sample-metrics-app" created
service "sample-metrics-app" created
servicemonitor "sample-metrics-app" created
horizontalpodautoscaler "sample-metrics-app-hpa" created

$ kubectl get hpa

Try increasing the load by hitting the sample app service; you will then see the HPA scale it up.
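A minimal load-generation sketch; replace <service-ip> with the ClusterIP of sample-metrics-app from kubectl get svc (any load tool works just as well):

```shell
# Hit the sample app in a tight loop to drive http_requests up.
while true; do curl -s http://<service-ip>/ > /dev/null; done
```

In another terminal, watch the HPA react with kubectl get hpa -w.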

Autoscaling on GKE

Let's say you are running Kubeless on GKE. At the moment you can only autoscale on the default metric (CPU). For custom metrics, the GKE team says support will arrive in GKE 1.9+, so stay tuned.

Further reading

Custom Metrics API

Support for custom metrics