Kubernetes Autoscaling 1.26: A Game-Changer for KEDA Users?
- Introduction
- A little bit of History
- Kubernetes Autoscaling Capabilities Introduced in v2
- Kubernetes Autoscaling v2 vs KEDA
Introduction
Kubernetes autoscaling has undergone a significant change. Since the Kubernetes 1.26 release, all workloads should migrate their HorizontalPodAutoscaler objects from the older API versions to the new autoscaling/v2 API, which has been stable since Kubernetes 1.23.
The HorizontalPodAutoscaler is a crucial component for any workload deployed on a Kubernetes cluster, as scalability is one of the great benefits and key features of this kind of environment.
A little bit of History
Kubernetes introduced an autoscaling solution a long time ago, back in version 1.3, released in 2016. The solution is based on a control loop that runs at a specific interval, which you can configure with the --horizontal-pod-autoscaler-sync-period flag of the kube-controller-manager.
So, once per period, the controller fetches the metrics and evaluates them against the conditions defined in the HorizontalPodAutoscaler object. Initially, scaling was based only on the compute resources used by the pods: memory and CPU.
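As a reference point, here is a minimal sketch of that classic CPU-based behaviour, expressed with the current autoscaling/v2 API (the Deployment name and thresholds are hypothetical):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app              # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU usage exceeds 70%
```

Once applied with kubectl apply, the controller evaluates this target on every sync period described above and adjusts the replica count between the min and max bounds.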

This was an excellent feature, but as time passed and Kubernetes adoption grew, it proved too narrow to handle all the scenarios we need to cover. This is where other awesome projects we have discussed here, such as KEDA, come into the picture to provide a much more flexible set of features.
Kubernetes Autoscaling Capabilities Introduced in v2
With the release of v2 of the autoscaling API objects, a range of capabilities has been added to increase the flexibility and options now available. The most relevant ones are the following:
- Scaling on custom metrics: With the new release, you can configure…