AZURE AKS
Containers in AKS cannot access Azure resources (Failed to resolve URL)
I have an API server (Python Flask) hosted on AKS. When the service starts, it:

1. Accesses Azure Key Vault to get a storage account connection string
2. Uses the connection string to perform CRUD jobs on the Azure storage account

> PS. The whole system consists of an `ingress (clusterIP & loadbalancer)`, a `service (clusterIP)`, and my `flask API`.

I deployed it to AKS and it worked fine (except that the CPU usage is usually > 100%). Two days later, I noticed that the server started restarting over and over again. The error message looks like this:

`azure.core.exceptions.ServiceRequestError: <urllib3.connection.HTTPSConnection object at 0x7fc1f5e0c550>: Failed to resolve 'MY_KEY_VAULT.vault.azure.net' ([Errno -3] Temporary failure in name resolution)`

At first, I thought it was caused by Key Vault, so I put the connection string directly in my code. The same thing happened again:

`Failed to resolve 'MY_STORAGE_ACCOUNT.blob.core.windows.net' ([Errno -3] Temporary failure in name resolution)`

After my first deployment, I did nothing to my AKS resources. Below is basic info about my AKS:

One possible root cause is that I set auto upgrade to `enable`. Please give me some suggestions for debugging, thanks!

[Update-1] I deployed the same container to another node pool, and things work fine.
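Since the `[Errno -3]` failures above were transient (the same image works fine on another node pool), one low-risk mitigation while debugging is to retry name-resolution errors with backoff instead of letting the pod crash-loop. Below is a minimal sketch; `fetch_connection_string` is a hypothetical stand-in for the real Key Vault call (e.g. `SecretClient.get_secret`), and here it just simulates one transient failure:

```python
import socket
import time


def retry_transient_dns(fn, attempts=5, base_delay=1.0):
    """Retry fn on socket.gaierror ('Temporary failure in name resolution'),
    so only a persistent DNS failure crashes the pod."""
    for attempt in range(attempts):
        try:
            return fn()
        except socket.gaierror:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff


# Hypothetical stand-in for the real Key Vault lookup from the traceback;
# it fails once with the exact error seen in the logs, then succeeds.
calls = {"n": 0}

def fetch_connection_string():
    calls["n"] += 1
    if calls["n"] < 2:
        raise socket.gaierror(-3, "Temporary failure in name resolution")
    return "DefaultEndpointsProtocol=https;AccountName=..."


conn_str = retry_transient_dns(fetch_connection_string, base_delay=0.01)
print(conn_str)
```

This only masks the symptom, of course; it would still be worth checking CoreDNS health on the affected node pool, since the failure is at the node's resolver level rather than in the application.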
Solution for remote development team access to private AKS managed cluster

Hi All,

I am exploring options to allow my remote development team to access a private AKS managed cluster in Azure with AAD and RBAC enabled. Our access options to AKS are via Bastion or VDI, and each poses a unique set of challenges. I will outline each, along with my overall proposed solution.

Bastion access via Key Vault and shared VM local credentials: the problem is that remote developers would require access to the Azure portal and would then Bastion into a local VM using shared credentials from Key Vault. This may work, but it is not practical because each developer requires a unique kubectl profile/config file when accessing AKS, which is overwritten when another user logs on. Also, remote access into Bastion occasionally times out, and the AKS auth flow via browser sometimes displays a blank page and is cumbersome to log on with.

VDI access poses similar challenges: no access to install development tools, and all session settings are reset when the user logs off.

My proposed solution is Bastion access via the native RDP client, along with an AAD-joined VM on the private cluster network. This solution requires no Azure portal access and provides direct RDP access into the AAD VM using AAD credentials and conditional access. The kubectl profile problem is also no longer an issue, as each logged-on user will have AAD credentials and their own user profile.

Changes required to implement: bump up the Bastion SKU from Basic to Standard to allow the native RDP client. However, the user's (remote) session needs to be initiated from an AAD-registered, hybrid-joined, or AAD-joined machine to establish a connection to Bastion via the native RDP client, which then allows RDP access with AAD credentials onto the AAD-joined server hosted in Azure.

Welcome all feedback and/or corrections based on my initial solution assessment.

Thanks,
Darren
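For what it's worth, the Standard-SKU native-client connection this proposal relies on can be scripted around the Azure CLI's `az network bastion rdp` command. A minimal sketch, where the Bastion name, resource group, and target VM resource ID are all placeholders for your own resources:

```python
import subprocess  # used only for the actual (commented-out) connection


def bastion_rdp_args(bastion_name, resource_group, target_vm_id):
    """Build the 'az network bastion rdp' invocation.

    Requires the Standard Bastion SKU and the Azure CLI 'bastion'
    extension; must be run from an AAD-registered client machine."""
    return [
        "az", "network", "bastion", "rdp",
        "--name", bastion_name,
        "--resource-group", resource_group,
        "--target-resource-id", target_vm_id,
    ]


# Placeholder resource names -- substitute your own.
args = bastion_rdp_args(
    "dev-bastion",
    "dev-rg",
    "/subscriptions/<sub-id>/resourceGroups/dev-rg/providers/"
    "Microsoft.Compute/virtualMachines/aad-jump-vm",
)
print(" ".join(args))

# To actually open the RDP session:
# subprocess.run(args, check=True)
```

Once the developer is on the AAD-joined jump VM, each user profile keeps its own `~/.kube/config`, which addresses the shared-credential overwrite problem described above.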