Protect your Containers from Azurescape 😰

A security vulnerability was recently uncovered that allowed attackers to perform a cross-account container takeover in Microsoft’s public cloud… hence the name ‘Azurescape.’

This is actually extremely relevant to a webinar we hosted on best practices to secure your Kubernetes cluster. Check it out here.

The issue was found in Azure Container Instances (ACI): part of the infrastructure hosting these ACI workloads consists of multi-tenant Kubernetes clusters. Because these clusters are shared between customers, an attacker who escaped their own container could theoretically gain full control over the other users’ containers and cause damage.

Is this something you were aware of? Have you taken any steps to shore up your defenses?

Let’s get the discussion started below. I’m sure our resident Kubernetes expert @magnologan has some thoughts on this :wink:


Hi Alex,

Thanks for mentioning me here. Yes, Azurescape was a major vulnerability in Azure’s ACI. Unfortunately, there wasn’t much customers could do, since this is a managed service provided by Azure. Luckily, they’ve updated and patched their systems to run the latest Kubernetes and runC versions.

That said, here’s what users can do, and should be aware of, when running containers and clusters in any environment:

  • Always try to run the latest available version of K8s. Kubernetes is updated very frequently, and staying current helps protect you against known vulnerabilities. Keep in mind that the Kubernetes project only supports and provides fixes for the three most recent minor releases.

  • Make sure you download your images from a trusted source, such as a private container registry, and scan your images for vulnerabilities before running them.

  • Implement some kind of runtime protection, such as Falco, inside your environment. It goes a long way toward detecting and preventing compromises inside your containers.
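On the runtime-protection point: Falco rules are plain YAML, so getting started is fairly approachable. Here’s a minimal sketch of a rule that flags interactive shells inside containers (it mirrors Falco’s stock “shell in container” detection; `spawned_process` and `container` are macros from Falco’s default rule set, and the field names follow Falco’s rule syntax):

```yaml
# Sketch of a Falco rule: alert when an interactive shell starts
# inside a container (often an early sign of compromise).
- rule: Shell Spawned in Container
  desc: Detect a shell process started inside a container
  condition: >
    spawned_process and container and proc.name in (bash, sh, zsh)
  output: >
    Shell spawned in a container
    (user=%user.name container=%container.name command=%proc.cmdline)
  priority: WARNING
```

In practice you’d tune the condition to exclude legitimate admin activity, otherwise the alerts get noisy fast.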

I hope this helps! Cheers!


How can one keep their k8s cluster running on the latest version, though, @magnologan?

Is it trivial to update the cluster while keeping the applications running?


It all depends on how your clusters are deployed. Managed K8s offerings usually run a few versions behind, but they make it easier for you to update.
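To make the support-window point concrete, here’s a tiny sketch that checks whether a cluster’s minor version is still within the three most recent releases. The version numbers are illustrative, not a live lookup, and the helper names are my own:

```python
# Check whether a cluster's Kubernetes minor version is still inside the
# upstream support window (the three most recent minor releases).

def minor(version: str) -> int:
    """Extract the minor number from a 'major.minor' version string."""
    return int(version.split(".")[1])

def in_support_window(cluster_version: str, latest_version: str, window: int = 3) -> bool:
    """True if the cluster is within `window` minor releases of the latest."""
    return minor(latest_version) - minor(cluster_version) < window

# A cluster two minors behind is still inside the window...
print(in_support_window("1.22", "1.24"))  # True
# ...but three minors behind has fallen out of it.
print(in_support_window("1.21", "1.24"))  # False
```

Real clusters also have to respect the version skew policy (you generally upgrade one minor at a time), which is part of why falling far behind gets painful.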

I wouldn’t say it’s trivial, as there are many other dependencies and configurations that can affect the upgrade. You should always test it in a separate environment before applying it in production.

This is so challenging… so a best practice, if I’m using EKS for instance, is to create a new cluster, deploy my applications there, and, if everything works as expected, point my load balancer at the new cluster instead?