
Horizontal Pod Autoscaling in Kubernetes using Prometheus

Nov 19, 2018

Hi All,

In this article, I'm going to talk about the Kubernetes Horizontal Pod Autoscaler (HPA) object, the Custom Metrics API, and how we scale our APIs at Hepsiburada.
Before digging into HPA, take a look at https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale

The HPA determines whether we need more pods and scales the number of Pods accordingly. Out of the box, you can scale on CPU and memory metrics using the Kubernetes Metrics Server.
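
For example, a minimal CPU-based HPA could look like the sketch below (assuming the Metrics Server is installed; the "my-api" name and the thresholds are just placeholders):

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: my-api
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: my-api
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      # scale out when average CPU usage across the Pods exceeds 80% of the requested CPU
      targetAverageUtilization: 80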

However, Kubernetes 1.6 added support for custom metrics in the Horizontal Pod Autoscaler. With the Custom Metrics API, you can plug in InfluxDB, Prometheus, or another third-party time series database.

There is a nice project with ready-to-go YAMLs on GitHub, https://github.com/stefanprodan/k8s-prom-hpa, which describes the autoscaling mechanism in detail.

Prometheus collects metrics from your applications/Pods and stores them. You can use annotations in your deployment YAMLs to tell Prometheus what to scrape.

The default path is "/metrics", but you can override it:

annotations:
	prometheus.io/scrape: 'true'
	prometheus.io/path: '/metrics-text'
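
In a Deployment, these annotations go on the Pod template so that Prometheus discovers every Pod. Here is a minimal sketch; the name, image, port, and the prometheus.io/port annotation are assumptions for illustration:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: my-api
      annotations:
        # tell Prometheus to scrape these Pods on the given path and port
        prometheus.io/scrape: 'true'
        prometheus.io/path: '/metrics-text'
        prometheus.io/port: '8080'
    spec:
      containers:
      - name: my-api
        image: my-api:1.0.0
        ports:
        - containerPort: 8080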

The Custom Metrics API is responsible for collecting data from Prometheus and passing it to the HPA.

After you connect the Custom Metrics API, you can test and verify that it's working properly:

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .
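
You can also query a single metric directly from the Custom Metrics API, for example the "application_httprequests_active" metric mentioned below (assuming the Pods run in the "default" namespace):

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/application_httprequests_active" | jq .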

The metrics exposed through the Custom Metrics API are the same metrics that exist in Prometheus.

For example, the "application_httprequests_active" metric is exposed by our API, and it can be used in an HPA like this:

 

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: podinfo
  minReplicas: 5
  maxReplicas: 40
  metrics:
  - type: Pods
    pods:
      metricName: application_httprequests_active
      targetAverageValue: 1000
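
Once the HPA is applied, you can watch it read the metric and adjust the replica count (the file name here is just an example):

# apply the HPA and watch it pick up the custom metric
kubectl apply -f podinfo-hpa.yaml
kubectl get hpa podinfo --watch
# shows the current metric value, desired replicas, and scaling events
kubectl describe hpa podinfo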


Below is one of our Grafana dashboards, which is connected to Prometheus and shows the autoscaling in Kubernetes. You can inspect the Pod memory there and see the newly created Pods. At 07:56 and 08:00 people started to use the Search API more, and after the scaling process the metrics returned to normal.

[Grafana dashboard: Pod memory and the newly created Pods during autoscaling]

boranseref@gmail.com