Upgrade Concourse to version 5.8.0
We rolled out Concourse version 5.8.0 to all our setups.
We now offer the option to deploy Vault on our reference solution out of the box.
As you may know, we define our Kubernetes clusters’ desired state in a YAML file stored in the customer’s private Git repository. That file is then fed into our CI, which is responsible for rolling out the cluster.
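To make that flow a bit more concrete, here is a minimal Python sketch of the idea: load a definition file from the repository and hand it to a rollout step. The file name and keys used below are purely hypothetical and don’t reflect our actual schema.

```python
# Minimal sketch: load a cluster-definition file and hand it to a rollout step.
# The file name and keys are hypothetical; they only illustrate the flow of
# "desired state in Git -> CI -> cluster rollout".
import yaml  # PyYAML


def load_desired_state(path: str) -> dict:
    """Read the cluster definition committed to the customer's Git repository."""
    with open(path) as f:
        return yaml.safe_load(f)


def rollout(definition: dict) -> None:
    """Placeholder for the CI job that reconciles the cluster to the desired state."""
    cluster = definition.get("cluster", {})
    name = cluster.get("name", "<unnamed>")
    version = cluster.get("kubernetes_version", "<unset>")
    print(f"Rolling out cluster {name} on Kubernetes {version} ...")


if __name__ == "__main__":
    rollout(load_desired_state("cluster.yaml"))
```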
We use Velero as our solution for backing up complete K8s cluster workloads (both K8s resources and Persistent Volumes).
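For illustration, scheduling such a backup can be scripted around the velero CLI. The sketch below assumes the velero binary is installed and configured against your cluster; the schedule name, namespaces and cron expression are just examples.

```python
# Sketch: create a daily Velero schedule that backs up a set of namespaces.
# Assumes the `velero` CLI is installed and configured against the cluster;
# the schedule name, namespaces and cron expression are examples only.
# Persistent Volume snapshots are taken by default when a snapshotter is configured.
import subprocess


def create_daily_backup_schedule(name: str, namespaces: list[str]) -> None:
    subprocess.run(
        [
            "velero", "schedule", "create", name,
            "--schedule", "0 2 * * *",                   # every night at 02:00
            "--include-namespaces", ",".join(namespaces),
        ],
        check=True,
    )


if __name__ == "__main__":
    create_daily_backup_schedule("nightly-workloads", ["default", "production"])
```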
During our migrations from KOPS to EKS clusters, some customer Pods had issues launching due to hitting the fs.inotify.max_user_instances and/or fs.inotify.max_user_watches limits. It turns out these sysctl values had been raised from their defaults in the KOPS base images, while the EKS AMIs still use the OS defaults.
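For reference, you can read the current values straight from /proc on any node. A small sketch follows; the “defaults” listed are the common OS defaults and are shown only for comparison, not the values we configure:

```python
# Sketch: read the current inotify sysctls on a node and flag values that are
# still at the (low) OS defaults. The defaults listed here are the common ones
# and are shown only for comparison.
from pathlib import Path

COMMON_DEFAULTS = {
    "fs.inotify.max_user_instances": 128,
    "fs.inotify.max_user_watches": 8192,
}


def read_sysctl(name: str) -> int:
    # sysctl keys map to files under /proc/sys, with dots replaced by slashes
    return int(Path("/proc/sys/" + name.replace(".", "/")).read_text())


for key, default in COMMON_DEFAULTS.items():
    current = read_sysctl(key)
    status = "still at the OS default" if current <= default else "raised"
    print(f"{key} = {current} ({status})")
```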
We now make it possible to run (part of) your Kubernetes and/or Concourse worker nodes in public subnets, if the situation requires it. However, our default is still to deploy these instances in private subnets.
The Concourse team is working hard on an implementation that accommodates feature environments in Concourse. However, this is still a work in progress, so at the request of our customers we researched a way to have feature environments with Concourse.
We offer Grafana Loki as our default logging solution; it relies on the Promtail DaemonSet to gather logs on each K8s node and ship them to Loki.
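Besides exploring logs in Grafana, Loki also exposes an HTTP query API you can script against. A minimal sketch, assuming a placeholder Loki address and an example LogQL selector:

```python
# Sketch: fetch the last hour of logs for a LogQL selector via Loki's HTTP API.
# The Loki address and the selector are placeholders; adjust to your setup.
import time

import requests

LOKI_URL = "http://loki.example.internal:3100"  # placeholder address


def query_logs(selector: str, minutes: int = 60, limit: int = 100) -> list[str]:
    end = time.time_ns()
    start = end - minutes * 60 * 1_000_000_000  # Loki expects nanosecond timestamps
    resp = requests.get(
        f"{LOKI_URL}/loki/api/v1/query_range",
        params={"query": selector, "start": start, "end": end, "limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    lines = []
    for stream in resp.json()["data"]["result"]:
        lines.extend(entry[1] for entry in stream["values"])
    return lines


if __name__ == "__main__":
    for line in query_logs('{namespace="kube-system"}'):
        print(line)
```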
For some customers with more complex dashboards, Grafana has recently become unstable at times due to hitting our configured memory limits.
In our quest to automate most components of our infrastructure, we’ve set up CI/CD pipelines that roll out Teleport servers and their nodes.
Some earlier changes in how we tag our AWS Auto Scaling Groups (ASGs), and in the tags the Kubernetes cluster-autoscaler uses to automatically detect these ASGs, caused the scaler to stop working properly. This could result in clusters not automatically removing unneeded nodes, or not adding extra ones when more capacity was needed.
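For reference, cluster-autoscaler’s auto-discovery looks for the tags k8s.io/cluster-autoscaler/enabled and k8s.io/cluster-autoscaler/&lt;cluster name&gt; on the ASGs. Here is a small boto3 sketch to check which ASGs currently carry them; the cluster name is a placeholder:

```python
# Sketch: list which Auto Scaling Groups carry the tags that cluster-autoscaler's
# auto-discovery looks for. The cluster name is a placeholder.
import boto3

CLUSTER_NAME = "my-cluster"  # placeholder
REQUIRED_TAGS = {
    "k8s.io/cluster-autoscaler/enabled",
    f"k8s.io/cluster-autoscaler/{CLUSTER_NAME}",
}

autoscaling = boto3.client("autoscaling")
for page in autoscaling.get_paginator("describe_auto_scaling_groups").paginate():
    for asg in page["AutoScalingGroups"]:
        tag_keys = {tag["Key"] for tag in asg["Tags"]}
        if REQUIRED_TAGS <= tag_keys:
            print(f"{asg['AutoScalingGroupName']}: discoverable by cluster-autoscaler")
        else:
            print(f"{asg['AutoScalingGroupName']}: missing tags {REQUIRED_TAGS - tag_keys}")
```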
It has come to our attention that in certain cases our Prometheus-based ElasticSearch monitoring wasn’t correctly detecting issues and sending alerts.
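As an illustration of what such a check could look like, the sketch below queries Prometheus’ HTTP API for a “red” cluster health metric. The Prometheus address and the metric/label names are assumptions for this example and depend on the exporter in use; this is not our exact alerting rule.

```python
# Sketch: ask Prometheus whether any ElasticSearch cluster reports "red" health.
# The Prometheus address and the metric/label names are assumptions for this
# example and depend on the exporter in use.
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"  # placeholder


def red_clusters() -> list[str]:
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": 'elasticsearch_cluster_health_status{color="red"} == 1'},
        timeout=10,
    )
    resp.raise_for_status()
    return [
        sample["metric"].get("cluster", "<unknown>")
        for sample in resp.json()["data"]["result"]
    ]


if __name__ == "__main__":
    for cluster in red_clusters():
        print(f"ALERT: ElasticSearch cluster {cluster} reports red health")
```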
During the coming days, we’ll roll out Concourse version 5.7.2 to all our setups.
We have updated the alert routing so that alerts from k8s-spot-termination-handler are sent to our shared Slack channel, to increase visibility. We rolled this change out to all our clusters over the last couple of days.
To complement today’s barrage of changelog updates, here are some miscellaneous additions that didn’t make it into another post 😁: …
We’ve made several improvements to our Kubernetes stacks, allowing us to deploy in different AWS Regions (e.g. us-east-1) and allowing more dynamic usage of the Availability Zones in those regions.
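In practice, “more dynamic usage of the Availability Zones” boils down to asking AWS which zones are available in the target region instead of hard-coding them. A small boto3 sketch (the region is just an example):

```python
# Sketch: discover which Availability Zones are actually available in a region
# at deploy time, instead of hard-coding them. The region is just an example.
import boto3


def available_zones(region: str) -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_availability_zones(
        Filters=[{"Name": "state", "Values": ["available"]}]
    )
    return sorted(az["ZoneName"] for az in resp["AvailabilityZones"])


if __name__ == "__main__":
    print(available_zones("us-east-1"))
```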
In the past weeks we’ve revisited the resource reservations (requests and limits) we made for running all the cluster Add-ons.
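To give an idea of what adjusting such reservations looks like, here is a sketch that patches one container’s requests and limits with the official Kubernetes Python client. The namespace, deployment and container names, as well as the values, are purely illustrative.

```python
# Sketch: patch the resource requests/limits of one container in a Deployment
# using the official Kubernetes Python client. The namespace, deployment and
# container names, as well as the values, are illustrative only.
from kubernetes import client, config


def set_resources(namespace: str, deployment: str, container: str) -> None:
    config.load_kube_config()  # use load_incluster_config() when running in-cluster
    patch = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {
                            "name": container,
                            "resources": {
                                "requests": {"cpu": "100m", "memory": "256Mi"},
                                "limits": {"memory": "512Mi"},
                            },
                        }
                    ]
                }
            }
        }
    }
    client.AppsV1Api().patch_namespaced_deployment(deployment, namespace, patch)


if __name__ == "__main__":
    set_resources("monitoring", "grafana", "grafana")
```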
In the past week we’ve rolled out a bunch of updates to our Kubernetes cluster-monitoring stack.
Previously we used Project Calico as the networking plugin (CNI) on all our Kubernetes clusters (KOPS & EKS). However, with our move to EKS as the base for our reference solution, we will default to AWS’ own VPC CNI.
Previously we shipped your logs with Fluentd to CloudWatch Logs and optionally sent them to an ElasticSearch/Kibana cluster (“EFK” stack) for analytics. However, this setup was expensive, had trouble scaling, and was overkill for most of our customers anyway. Because of that, we researched alternatives with the following requirements: …