AKS has supported assigning Azure Managed Identities to pods for some time, first through Pod Identity and later through Workload Identity. Using these tools, it is possible to give a pod an Azure identity that it can use to interact with other Azure services - pulling secrets from Key Vault, reading a file from Blob Storage, or writing to a database.
Workload Identity is the latest incarnation of this capability and significantly simplifies things, removing the need to run additional management pods in the cluster or to have the identity injected into every node. However, it does have some issues of its own. These are particularly evident when operating at scale, where you want to share the same Managed Identity across multiple workloads in the same cluster, or across multiple clusters.
Workload Identity relies on creating a Federated Identity Credential (FIC) in Azure that defines the trust relationship between the AKS OIDC issuer and Entra ID. Each combination of Service Account and Namespace that uses the same Managed Identity requires a separate FIC, as do services running in different clusters.
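For context, this is what creating a FIC for classic Workload Identity looks like; note that the subject is hard-wired to a single namespace and service account (the placeholder values here are illustrative):
az identity federated-credential create \
  --name "<fic name>" \
  --identity-name "<managed identity name>" \
  --resource-group "<resource group name>" \
  --issuer "$(az aks show -g "<resource group name>" -n "<cluster name>" --query oidcIssuerProfile.issuerUrl -o tsv)" \
  --subject "system:serviceaccount:<namespace>:<service account name>" \
  --audiences "api://AzureADTokenExchange"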
As your scale grows, this can start to become a problem. Each managed identity can only support up to 20 FICs. Once you hit that limit, your only option is to create another Managed Identity. This leads to the proliferation of Managed Identities that have the same permissions and do the same job, but only exist to work around this problem.
In addition to the 20 FIC limit, there are some other issues with Workload Identity:
- Creating the FIC is an Azure resource operation that often needs to happen alongside Kubernetes resource creation, which makes automating app deployment harder
- There can be a cyclic dependency issue where the service account in Kubernetes needs to know the Identity details before the pod is created, but the FIC needs the service account and namespace details to create the OIDC binding
- Additional outbound rules are required to allow the AKS cluster to access the Entra ID (login.microsoftonline.com) endpoints
Introducing Identity Bindings
Identity Bindings are currently in preview. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. AKS previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use.
Identity Bindings introduce a cleaner, RBAC-driven approach to identity management in AKS. Instead of juggling multiple federated credentials and namespace scoping, you define bindings that link Kubernetes RBAC roles to Azure identities. Pods then request tokens via an AKS-hosted proxy, no external egress required.
Key benefits:
- Centralised Access Control: Authorisation flows through Kubernetes RBAC.
- Cross-Cluster Identity Reuse: Federated credentials can be shared across namespaces and clusters.
- Reduced Networking Requirements: No outbound calls for token acquisition; everything stays within the cluster.
- Simplified IaC: Eliminates the “chicken-and-egg” problem when deploying identities alongside workloads.
Identity Bindings act as the link between applications running in AKS and the Azure managed identities they need to use. Instead of every cluster or namespace requiring its own federated identity configuration, each application is authorised through an Identity Binding defined inside the cluster. The binding expresses which workloads (via service accounts and RBAC) are allowed to request tokens for a given identity.
When a pod needs a token, AKS validates the request against the binding, and if it matches, the request is routed through the cluster’s identity proxy to the single Federated Identity Credential (FIC) associated with the managed identity. The FIC then exchanges the pod’s OIDC token for an Azure access token. This pattern allows multiple clusters or namespaces to share one managed identity cleanly, while keeping all workload‑level authorisation decisions inside Kubernetes.
With Workload Identity, every workload using a managed identity required its own Federated Identity Credential (FIC) tied to a specific namespace and service account, and you had to repeat that for every cluster. Hitting the 20‑FIC limit often forced teams to create duplicate managed identities, and deployments had to be carefully ordered to avoid cyclic dependencies. You also needed outbound access to Entra ID for token requests.
Identity Bindings significantly simplify this. You create a single binding per cluster-identity pair, authorise workloads through RBAC, and let AKS handle token exchange internally with no external egress. There is no FIC sprawl, no identity duplication, and less complexity in your automation.
Using Identity Bindings
To get started with Identity Bindings, you need an AKS cluster and a Managed Identity. The Managed Identity should be granted permissions to access the Azure resources you require.
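If you don't already have a Managed Identity, you can create one with the standard CLI command (resource group and name are placeholders):
az identity create -g "<resource group name>" -n "<managed identity name>"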
The first thing you need to do is ensure the feature is registered.
az feature register --namespace Microsoft.ContainerService --name IdentityBindingPreview
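Registration can take a few minutes. As with other AKS previews, you can check the state, and once it shows as Registered, refresh the provider registration:
az feature show --namespace Microsoft.ContainerService --name IdentityBindingPreview --query properties.state
az provider register --namespace Microsoft.ContainerService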
Next, we need to create the identity binding. This is a one-to-one mapping between an AKS cluster and a Managed Identity, so it only needs to be created once for each cluster/identity pairing. You provide the name of the cluster you want the binding mapped to, the full resource ID of the Managed Identity, and the name you want to give the binding. Once this is in place, all further administration is done via Kubernetes.
az aks identity-binding create -g "<resource group name>" --cluster-name "<cluster name>" -n "<binding name>" --managed-identity-resource-id "<managed identity Azure Resource ID>"
Once this has been created, we need to configure access to it inside Kubernetes. To do this, we create a ClusterRole that references the Managed Identity's client ID. Note that this must be a ClusterRole; a namespaced Role will not work.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: identity-binding-user
rules:
  - verbs: ["use-managed-identity"]
    apiGroups: ["cid.wi.aks.azure.com"]
    resources: ["<MI_CLIENT_ID>"]
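The <MI_CLIENT_ID> placeholder is the client ID of your Managed Identity (not its resource ID), which you can retrieve with:
az identity show -g "<resource group name>" -n "<managed identity name>" --query clientId -o tsv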
Once this ClusterRole is created, we can assign it to any Namespace and Service Account combination we require, using a ClusterRoleBinding. Identity Bindings are accessible to all Pods that use that Service Account in that Namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: <cluster role binding name>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: identity-binding-user
subjects:
  - kind: ServiceAccount
    name: <service account name>
    namespace: <namespace of service account>
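If the Namespace and Service Account don't exist yet, they can be created with kubectl (the names are placeholders matching the binding above):
kubectl create namespace <namespace of service account>
kubectl create serviceaccount <service account name> -n <namespace of service account>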
Now all that is left to do is configure the Pod to use the Identity Binding; there are two steps to this.
First we need to apply the required labels and annotations to the pod to enable Identity Binding support:
metadata:
  labels:
    azure.workload.identity/use: "true"
  annotations:
    azure.workload.identity/use-identity-binding: "true"
Then, we need to ensure that the Pod is running using the Service Account we granted permission to use the Identity Binding.
spec:
  serviceAccountName: <service account name>
Below is an example deployment that uses Identity Bindings.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keyvault-demo
  namespace: identity-binding-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keyvault-demo
  template:
    metadata:
      labels:
        app: keyvault-demo
        azure.workload.identity/use: "true"
      annotations:
        azure.workload.identity/use-identity-binding: "true"
    spec:
      serviceAccountName: keyvault-demo-sa
      containers:
        - name: keyvault-demo
          ...
Once this Pod has been created, the Identity Binding should be attached, and you should be able to use it within your application using your SDK and language of choice. The demo app below shows an example of consuming an Identity Binding in Go.
Demo App
If you want to deploy a demo workload to test out your bindings, you can use the Pod definition below. This requires you to deploy a Key Vault and grant your managed identity the "Key Vault Secrets User" role on that Key Vault. You will also need to update the service account and namespace to match your environment.
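To grant that role, you can use a standard role assignment scoped to the Key Vault (placeholders for your values):
az role assignment create --assignee "<managed identity client ID>" --role "Key Vault Secrets User" --scope "<key vault resource ID>"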
apiVersion: v1
kind: Pod
metadata:
  name: demo
  namespace: demo
  labels:
    azure.workload.identity/use: "true"
  annotations:
    azure.workload.identity/use-identity-binding: "true"
spec:
  serviceAccountName: demo
  containers:
    - name: azure-sdk
      # source code: https://github.com/Azure/azure-workload-identity/blob/feature/custom-token-endpoint/examples/identitybinding-msal-go/main.go
      image: ghcr.io/bahe-msft/azure-workload-identity/identitybinding-msal-go:latest-linux-amd64
      env:
        - name: KEYVAULT_URL
          value: ${KEYVAULT_URL}
        - name: SECRET_NAME
          value: ${KEYVAULT_SECRET_NAME}
  restartPolicy: Never
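One way to apply it is to substitute the ${KEYVAULT_URL} and ${KEYVAULT_SECRET_NAME} placeholders with envsubst; the demo-pod.yaml filename here is just an assumption for wherever you saved the manifest above:
# demo-pod.yaml: assumed filename for the Pod manifest above
export KEYVAULT_URL="https://<key vault name>.vault.azure.net/"
export KEYVAULT_SECRET_NAME="<secret name>"
envsubst < demo-pod.yaml | kubectl apply -f -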
Once deployed, if you look at the logs, you should see that it is able to read the secret from Key Vault.
kubectl logs demo -n demo
I1107 20:03:42.865180 1 main.go:77] "successfully got secret" secret="Hello!"
Conclusion
Identity Bindings offer a much cleaner model for managing workload identities in AKS, especially once you start operating at scale. By moving authorisation decisions into Kubernetes and relying on a single Federated Identity Credential per managed identity, they avoid the FIC sprawl, cyclic dependencies, and networking requirements that made Workload Identity harder to operate in larger environments. The end result is a simpler, more predictable way to let pods acquire Azure tokens. If you’re already using Workload Identity today, Identity Bindings are a natural evolution that reduces operational friction while keeping the security properties you expect.