We have started rolling out EKS 1.18, which brings EKS to Kubernetes v1.18.9.
In the process of upgrading EKS, the following components have also been upgraded:

- KubeProxy from 1.17.12 to 1.18.9
- CoreDNS from 1.6.6 to 1.7.0
- Cluster Autoscaler from 1.17.4 to 1.18.3
At the time of writing, upgrades of customer staging clusters have been rolled out. Production clusters will follow in the coming week(s) after some extra validation, so you can expect to be contacted by an engineer to determine an upgrade window.
## Important changes between K8s 1.17 and 1.18
Kubernetes 1.18 is a “fit and finish” release. Significant work has gone into improving beta and stable features to ensure users have a better experience. An equal effort has gone into adding new developments and exciting new features that promise to enhance the user experience even more.
- **Extending Ingress with and replacing a deprecated annotation with IngressClass**

  A new `pathType` field and a new `IngressClass` resource have been added to the `Ingress` specification. The `pathType` field allows specifying how paths should be matched. In addition to the default `ImplementationSpecific` type, there are new `Exact` and `Prefix` path types.

  The `IngressClass` resource is used to describe a type of Ingress within a Kubernetes cluster. Ingresses can specify the class they are associated with by using a new `ingressClassName` field. This new resource and field replace the deprecated `kubernetes.io/ingress.class` annotation.

  **Important:** We don't set up an `IngressClass` for our default `nginx` (and `nginx-internal`) Ingress controllers yet and thus still rely on the `kubernetes.io/ingress.class` annotation. We plan to implement this in a future update.

  For more detailed information, check the Improvements to the Ingress API in Kubernetes 1.18 page.
- **Server-side Apply introduces Beta 2**

  This new version will track and manage changes to fields of all new Kubernetes objects, allowing you to know what changed your resources and when. For more information, check the What is Server-side Apply? documentation.

  This feature is still in beta (v2 now) and not used by default. You can run it with `kubectl apply --server-side`.
- **Pod Topology Spread moves to Beta**

  You can use topology spread constraints to control how pods are spread across your cluster among failure domains such as regions, zones, nodes, and other user-defined topology domains. For more information, check the Topology Spread Constraints documentation.
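  A minimal sketch of a topology spread constraint; the pod name, labels, and image are hypothetical:

  ```yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: example-pod
    labels:
      app: example
  spec:
    topologySpreadConstraints:
      - maxSkew: 1                                # max difference in pod count between domains
        topologyKey: topology.kubernetes.io/zone  # spread across availability zones
        whenUnsatisfiable: DoNotSchedule          # or ScheduleAnyway for a soft constraint
        labelSelector:
          matchLabels:
            app: example
    containers:
      - name: app
        image: nginx:1.19
  ```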
- **Kubernetes Topology Manager moves to Beta**

  This feature allows the CPU and Device Manager to coordinate resource allocation decisions, optimizing for low latency with machine learning and analytics workloads. For more information, check the Control Topology Management Policies on a node documentation.

  We leave the Topology Manager's policy unconfigured (so `none`, the default). If you're interested in this feature, please let your lead engineer know.
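  For reference, enabling it amounts to setting the kubelet's `topologyManagerPolicy`. This is only a sketch of the relevant configuration fragment, not something we apply:

  ```yaml
  # KubeletConfiguration fragment (sketch). We currently leave this unset,
  # which is equivalent to the default "none" policy.
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  topologyManagerPolicy: best-effort   # options: none, best-effort, restricted, single-numa-node
  ```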
- **Configurable Horizontal Pod Autoscaling behavior**

  Starting from v1.18, the `v2beta2` API allows scaling behavior to be configured through the HPA `behavior` field. For more information, check the Support for configurable scaling behavior documentation.
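  As an illustration, a hypothetical HPA that slows down scale-down using the new `behavior` field:

  ```yaml
  apiVersion: autoscaling/v2beta2
  kind: HorizontalPodAutoscaler
  metadata:
    name: example-hpa            # hypothetical
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: example-app          # hypothetical
    minReplicas: 2
    maxReplicas: 10
    metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70
    behavior:
      scaleDown:
        stabilizationWindowSeconds: 300   # look at the last 5 minutes before scaling down
        policies:
          - type: Pods
            value: 1
            periodSeconds: 60             # remove at most 1 pod per minute
  ```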
For more detailed info on what’s new and changed, please check the Kubernetes 1.18 release announcement and full Kubernetes 1.18.x changelog.
## Actions to take
There are no specific actions to take for your workloads. A Skyscrapers engineer will get in contact in the coming week(s) to plan an upgrade window for production.
However, if not already the case, you should consider moving all Ingresses from the deprecated `extensions/v1beta1` apiVersion to `networking.k8s.io/v1beta1` in preparation of its removal in future Kubernetes versions.
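The change itself is usually a one-line `apiVersion` switch; a hypothetical example:

```yaml
apiVersion: networking.k8s.io/v1beta1   # was: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress                 # hypothetical
  annotations:
    kubernetes.io/ingress.class: nginx  # still used by our default controllers
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: example-service
              servicePort: 80
```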