We have updated our EKS control planes and nodes to the latest supported version: 1.16. This brings EKS to Kubernetes v1.16.8.
In the process of upgrading EKS the following components have also been upgraded:
- kube-proxy from 1.15.11 to 1.16.8
- Cluster Autoscaler from 1.15.5 to 1.16.5
- AWS VPC CNI from 1.5.7 to 1.6.1
- Kubernetes Dashboard from 1.10.1 to 2.0.1
At the time of writing, the staging and production clusters of most customers have been upgraded. The upgrade of clusters where we detected usage of incompatible apiVersions is on hold until you have updated those resources (see Actions to take).
Important changes between K8s 1.15 and 1.16
This release includes significant changes to the Kubernetes API.
GA of Custom resources and Admission webhooks
CRDs are in widespread use as a way to extend Kubernetes to persist and serve new resource types, and have been available in beta since the 1.7 release. The 1.16 release marks the graduation of CRDs to general availability (GA).
Admission webhooks are in widespread use as a Kubernetes extensibility mechanism and have been available in beta since the 1.9 release. The 1.16 release marks the graduation of admission webhooks to general availability (GA).
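For reference, a minimal CustomResourceDefinition using the GA apiVersion (apiextensions.k8s.io/v1) looks like this. The CronTab resource is purely illustrative; note that v1 makes a structural schema mandatory:

```yaml
apiVersion: apiextensions.k8s.io/v1   # GA apiVersion, replaces apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com          # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:                         # structural schemas are required in v1
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
```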
As part of the 1.16 upgrade, we have also included the following changes:
Kubernetes Dashboard updated from 1.10.1 to 2.0.1
The 2.0 release of the dashboard comes with a whole set of fixes and improvements. The frontend has been completely rewritten in Angular 8 and this release also includes metrics like Pod and Node CPU and memory usage.
Important: We now deploy the K8s dashboard in its own namespace, so you’ll have to update your proxy command, for example:
alias kube-dashboard='kubectl auth-proxy -n kubernetes-dashboard https://kubernetes-dashboard.svc'
For more details, please check our documentation.
Dynamic calculation of kube-reserved on K8s nodes
On each K8s node a portion of the computing resources is reserved for critical system processes. Previously we hard-coded the amount of these reservations, but thanks to an upstream change in the EKS AMI these values are now set dynamically based on the EC2 instance size, as a function of the maximum number of Pods per node.
This change should ensure a more realistic reservation and further improve overall stability.
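As an illustration, the memory part of this calculation can be sketched as follows. This is a simplified sketch assuming the formula used in the upstream EKS AMI bootstrap script (11 MiB per Pod plus a 255 MiB baseline); the exact values may differ between AMI versions:

```python
def kube_reserved_memory_mib(max_pods: int) -> int:
    """Memory (MiB) reserved for kubelet and system daemons,
    derived from the node's maximum Pod count."""
    return 11 * max_pods + 255

# Example: an instance type that supports up to 29 Pods
# (e.g. m5.large) reserves 11 * 29 + 255 = 574 MiB.
print(kube_reserved_memory_mib(29))  # 574
```

Because the Pod limit scales with instance size, larger instances automatically get a proportionally larger reservation.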
Replaced old kube-spot-termination-notice-handler with the aws-node-termination-handler
The aws-node-termination-handler is an AWS-maintained project which, in addition to draining Spot nodes when they receive a termination notice, can also react to other EC2 maintenance events.
Actions to take
Many APIs have graduated to stable and support for deprecated API versions has been removed. In short, you need to make sure that most Workloads (like Deployments, DaemonSets, StatefulSets, etc.) use apiVersion: apps/v1 and no longer apps/v1beta1, apps/v1beta2 or extensions/v1beta1 in your Kubernetes manifests and/or Helm charts. Also make sure to check the apiVersion when using NetworkPolicy (now networking.k8s.io/v1) and PodSecurityPolicy (now policy/v1beta1). As an extra, you can also move your Ingress resources from extensions/v1beta1 to networking.k8s.io/v1beta1 in preparation of its removal in Kubernetes 1.22.
Full details on which apiVersions are deprecated can be found here: https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/.
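For example, a Deployment manifest that previously used a deprecated apiVersion would now look like this (the name and image are illustrative; note that spec.selector is mandatory under apps/v1 and must match the Pod template labels):

```yaml
# Before: apiVersion: extensions/v1beta1  (removed in 1.16)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app          # illustrative name
spec:
  replicas: 2
  selector:                  # required field under apps/v1
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: nginx:1.17  # illustrative image
```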
If we have detected that you currently use such deprecated versions, you’ll find an issue in your GitHub repository and your Lead will follow up with you further.