For a long time now, we’ve been using the AWS Node Termination Handler to catch Spot instance interruption notices, allowing Kubernetes to respond appropriately by draining these nodes before they are terminated. You may have noticed this behavior via the “:construction: Instance interruption” notices in the Slack alerts.
More …
You can now create GP3 Persistent Volumes through the gp3 and gp3-encrypted Storage Classes. This is in addition to the previously available GP2 (gp2, gp2-encrypted) and EFS (efs) volumes.
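For illustration, a minimal PersistentVolumeClaim using the new gp3-encrypted class might look like the sketch below (the claim name and size are placeholders, not taken from this announcement):

```yaml
# Minimal sketch: request an encrypted GP3 volume via the new
# gp3-encrypted Storage Class. Name and size are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-gp3-example
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp3-encrypted
  resources:
    requests:
      storage: 20Gi
```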
More …
We’ve completely refactored how we manage our monitoring components, such as Prometheus, Grafana, and the many exporters and alerts. For you, the platform user, nothing will change, although some disruption to Grafana and Prometheus is expected during the rollout.
More …
It has come to our attention that many of our 24/7 escalation phone numbers could only receive calls from domestic numbers. We have generated new telephone numbers, so make sure to verify your new number in your repo’s README!
We’ve rolled out an nginx-ingress update (v0.51.0) to all clusters, with the following fixes:
More …
Last year we rolled out a major Grafana update, going from 7.5 to 8.1, but had to roll back because a small number of customers were impacted by a breaking change regarding SQL datasources.
More …
We’d like to inform you that we’re renaming (and re-tagging) many network-related resources, such as the VPC name, subnet names, route tables, etc. Normally this shouldn’t have any impact on you; however, if you rely on the VPC name for anything, you will need to update that workload. (*)
More …
We have upgraded our Concourse setups to the latest version, 7.7.0. This new version brings several small features and bug fixes. You can check the full changelog on the Concourse releases page.
More …
We have upgraded Istio on all clusters that use it. The version was upgraded from 1.12.0 to 1.13.2.
More …
All Vault setups have been updated from 1.9.0 to the latest patch version, 1.9.4.
More …
As part of our regular upgrade cycle, the following Kubernetes cluster components have been updated. We’ve already rolled these out to all non-production clusters. Production upgrades will happen on Monday 21/03 during business hours.
More …
As of today we offer KEDA as a default component for horizontally scaling your Pods.
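As a rough illustration of what using KEDA looks like, the sketch below scales a hypothetical Deployment named my-app on a Prometheus query; the Deployment name, Prometheus address, query and threshold are all assumptions, not values from our setup:

```yaml
# Illustrative sketch only: scale the (hypothetical) my-app Deployment
# between 2 and 10 replicas based on a Prometheus request-rate query.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scaler
spec:
  scaleTargetRef:
    name: my-app                      # Deployment to scale (placeholder)
  minReplicaCount: 2
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring.svc:9090   # assumed in-cluster Prometheus address
        metricName: http_requests_per_second
        query: sum(rate(http_requests_total{app="my-app"}[2m]))
        threshold: "100"
```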
More …
There are no actions to take, and all changes have been rolled out to all environments.
More …
We have exposed more parameters to the cluster-autoscaler, allowing for more fine-grained control. Initially, only the scale_down_utilization_threshold could be configured. Now this is extended with the following parameters:
More …
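For context, the scale_down_utilization_threshold parameter corresponds to the upstream --scale-down-utilization-threshold flag of the cluster-autoscaler: a node only becomes a scale-down candidate once the sum of requested CPU and memory falls below this fraction of its allocatable capacity. The excerpt below is purely illustrative, not our actual manifest, and the value shown is just an example:

```yaml
# Illustrative excerpt only, not our actual deployment: the upstream
# cluster-autoscaler flag behind scale_down_utilization_threshold.
containers:
  - name: cluster-autoscaler
    command:
      - ./cluster-autoscaler
      - --scale-down-utilization-threshold=0.5   # example value
```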
As part of our regular upgrade cycle, the following Azure-specific Kubernetes cluster components have been updated. We’ve already rolled these out to all non-production clusters. Production clusters will follow once we’ve validated that everything is stable. There are no actions for you to take.
More …
As part of our regular upgrade cycle, the following AWS-specific Kubernetes cluster components have been updated. We’ve already rolled these out to all non-production clusters. Production clusters will follow once we’ve validated that everything is stable. There are no actions for you to take.
More …
As part of our regular upgrade cycle, the following Kubernetes cluster components have been updated. We’ve already rolled these out to all non-production clusters. Production clusters will follow once we’ve validated that everything is stable. There are no actions for you to take.
More …
We’ve upgraded all Teleport clusters from version 8.0.7 to 8.2.0. This is a minor release, consisting mostly of bug fixes and performance improvements.
More …
We’re adding support for the GitHub actions-runner-controller as a managed add-on for our Kubernetes platforms. With this controller, customers using GitHub Actions will be able to easily deploy self-hosted runners on their clusters. This is useful for deploying workloads to a private-endpoint cluster, since the runner executes the deploy task from within the cluster itself.
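As a hedged sketch of what this will look like for customers, a minimal RunnerDeployment registering two runners against a repository could be written as follows (the repository name and labels are placeholders):

```yaml
# Minimal sketch: two self-hosted runners for a (placeholder) repository,
# managed by the actions-runner-controller.
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runners
spec:
  replicas: 2
  template:
    spec:
      repository: my-org/my-repo   # placeholder repository
      labels:
        - self-hosted
```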
More …
We manage multiple Kubernetes clusters and regularly set up new ones from scratch. There is also a set of extra components deployed on each cluster that we need to maintain and keep up to date.
More …