GA: DCasv6 and ECasv6 confidential VMs based on 4th Generation AMD EPYC™ processors
Today, Azure has expanded its confidential computing offerings with the general availability of the DCasv6 and ECasv6 confidential VM series in the Korea Central, South Africa North, Switzerland North, UAE North, UK South, and West Central US regions. These VMs are powered by 4th generation AMD EPYC™ processors and feature advanced Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP) technology. These confidential VMs offer:

- Hardware-rooted attestation
- Memory encryption in multi-tenant environments
- Enhanced data confidentiality
- Protection against cloud operators, administrators, and insider threats

You can get started today by creating confidential VMs in the Azure portal as explained here.

Highlights:

- 4th generation AMD EPYC processors with SEV-SNP
- 25% performance improvement over the previous generation
- Ability to rotate keys online
- AES-256 memory encryption enabled by default
- Up to 96 vCPUs and 672 GiB RAM for demanding workloads

Streamlined Security

Organizations in certain regulated industries and sovereign customers migrating to Microsoft Azure need strict security and compliance across all layers of the stack. With Azure confidential VMs, organizations can ensure the integrity of the boot sequence and the OS kernel while helping administrators safeguard sensitive data against advanced and persistent threats. The DCasv6 and ECasv6 families of confidential VMs support online key rotation, giving organizations the ability to dynamically adapt their defenses to rapidly evolving threats. Additionally, these new VMs include AES-256 memory encryption as a default feature. Customers also have the option to use Virtualization-Based Security (VBS) in Windows, currently in preview, to protect private keys from exfiltration via the guest OS or applications. With VBS enabled, keys are isolated within a secure process, allowing key operations to be carried out without exposing them outside this environment.

Faster Performance

In addition to the newly announced security upgrades, the new DCasv6 and ECasv6 families of confidential VMs have demonstrated up to 25% improvement in various benchmarks compared to our previous generation of AMD-powered confidential VMs. Organizations that need to run complex workflows, such as combining multiple private data sets to perform joint analysis, medical research, or confidential AI services, can use these new VMs to accelerate their sensitive workloads.

"While we began our journey with v5 confidential VMs, now we’re seeing noticeable performance improvements with the new v6 confidential VMs based on 4th Gen AMD EPYC “Genoa” processors. These latest confidential VMs are being rolled out across many Azure regions worldwide, including the UAE. So as v6 becomes available in more regions, we can deploy AMD-based confidential computing wherever we need, with the same consistency and higher performance." — Mohammed Retmi, Vice President - Sovereign Public Cloud at Core42, a G42 company

"KT is leveraging Azure confidential computing to secure sensitive and regulated data from its telco business in the cloud. With the new v6 CVM offerings in the Korea Central region, KT is extending its use to help Korean customers with enhanced security requirements, including regulated industries, benefit from the highest data protection as well as the fastest performance offered by the latest AMD SEV-SNP technology through its Secure Public Cloud built with Azure confidential computing."
— Woojin Jung, EVP, KT Corporation

Kubernetes support

Deploy resilient, globally available applications on confidential VMs with our managed Kubernetes experience, Azure Kubernetes Service (AKS). AKS now supports the new DCasv6 and ECasv6 families of confidential VMs, enabling organizations to easily deploy, scale, and manage confidential Kubernetes clusters on Azure, streamlining developer workflows and reducing manual tasks with integrated continuous integration and continuous delivery (CI/CD) pipelines. AKS brings integrated monitoring and logging to confidential VM node pools, with in-depth performance and health insights into both the clusters and their containerized applications.

Azure Linux 3.0 and Ubuntu 24.04 support are now in preview. AKS integration with this generation of confidential VMs also brings support for Azure Linux 3.0, which contains only the most essential packages for resource efficiency and ships a secure, hardened Linux kernel specifically tuned for Azure cloud deployments. Ubuntu 24.04 clusters are supported in addition to Azure Linux 3.0. Organizations wanting to ease the orchestration burden of deploying, scaling, and managing hundreds of confidential VM node pools can now choose either OS for their node pools; a short example of adding such a node pool follows at the end of this post.

General purpose & Memory-intensive workloads

Featuring general-purpose-optimized memory-to-vCPU ratios and support for up to 96 vCPUs and 384 GiB RAM, the DCasv6-series delivers enterprise-grade performance. The DCasv6-series enables organizations to run sensitive workloads with hardware-based security guarantees, making them ideal for applications processing regulated or confidential data. For memory-demanding workloads that exceed even the capabilities of the DCasv6-series, the new ECasv6-series offers high memory-to-vCPU ratios with increased scalability up to 96 vCPUs and 672 GiB of RAM, nearly doubling the memory capacity of the DCasv6-series.

You can get started today by creating confidential VMs in the Azure portal as explained here.

Additional Resources:
- Quickstart: Create confidential VM with Azure portal
- Quickstart: Create confidential VM with ARM template
- Azure confidential virtual machines FAQ
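As a concrete illustration of the AKS support described above, here is a minimal sketch of adding a confidential VM user node pool to an existing cluster with the Azure Python SDK. The resource names and the "Standard_DC4as_v6" size string are assumptions; check the SKU names actually available in your region first.

```python
# Hedged sketch: add a DCasv6 confidential VM node pool to an existing AKS cluster.
# Resource group, cluster name, and the v6 SKU string below are assumptions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient

client = ContainerServiceClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.agent_pools.begin_create_or_update(
    resource_group_name="my-rg",          # assumed resource group
    resource_name="my-aks-cluster",       # assumed cluster name
    agent_pool_name="cvmpool",
    parameters={
        "count": 3,
        "vm_size": "Standard_DC4as_v6",   # assumed DCasv6 SKU name
        "mode": "User",                   # user workloads, not system pods
        "os_type": "Linux",
        "os_sku": "AzureLinux",           # or "Ubuntu" for Ubuntu node images
    },
)
print(poller.result().provisioning_state)
```

The same parameters map onto `az aks nodepool add` if you prefer the CLI; the key choice is pairing a confidential VM size with the OS SKU (Azure Linux or Ubuntu) for the node images.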
Built a Real-Time Azure AI + AKS + DevOps Project – Looking for Feedback

Hi everyone, I recently completed a real-time project using Microsoft Azure services to build a cloud-native healthcare monitoring system. The key services used include:

- Azure AI (Cognitive Services, OpenAI)
- Azure Kubernetes Service (AKS)
- Azure DevOps and GitHub Actions
- Azure Monitor, Key Vault, API Management, and others

The project focuses on real-time health risk prediction using simulated sensor data. It's built with containerized microservices, infrastructure as code, and end-to-end automation. GitHub link (with source code and documentation): https://github.com/kavin3021/AI-Driven-Predictive-Healthcare-Ecosystem

I would really appreciate your feedback or suggestions to improve the solution. Thank you!
Azure Kubernetes Service (AKS) forbidden address ranges for vnet

Some months ago I installed an AKS cluster with kubenet networking without problems. During our last version upgrade something changed, because it now complains that the vnet where the cluster is placed uses a private address range that is disallowed per https://docs.microsoft.com/en-us/azure/aks/configure-kubenet:

"AKS clusters may not use 169.254.0.0/16, 172.30.0.0/16, 172.31.0.0/16, or 192.0.2.0/24 for the Kubernetes service address range, pod address range or cluster virtual network address range."

The problem is that we use these "forbidden" private address ranges for our network infrastructure in Azure (we have a hub & spoke architecture) and on-premises (we have an ExpressRoute connection), so it seems we would have to make a huge change across our entire network to be able to upgrade or reinstall the AKS cluster with full connectivity. I tried Azure support, but they say it is a design decision that cannot be changed. If anybody has any suggestion for dealing with this AKS upgrade/reinstall problem (one that does not require a complete change of our IP addressing policy), that would be very helpful.
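For anyone auditing an address plan against these restrictions before an upgrade, a minimal standard-library sketch for spotting collisions is below; the `my_ranges` values are placeholders for your own vnet, pod, and service CIDRs.

```python
# Check your CIDRs against the ranges the AKS kubenet docs list as disallowed.
import ipaddress

AKS_RESERVED = ["169.254.0.0/16", "172.30.0.0/16", "172.31.0.0/16", "192.0.2.0/24"]
my_ranges = ["172.30.0.0/16", "10.240.0.0/16"]  # placeholders: your own ranges

for mine in map(ipaddress.ip_network, my_ranges):
    for reserved in map(ipaddress.ip_network, AKS_RESERVED):
        if mine.overlaps(reserved):
            print(f"{mine} overlaps disallowed range {reserved} - AKS will reject it")
```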
Announcing Trusted Launch as default in Azure Portal

In the spirit of 'secure by default', today we are announcing Trusted Launch virtual machines as the default in the Azure portal. With Trusted Launch as the default, the security settings in the portal are pre-set for you and no special attention is required. Any new VM created in the Azure portal will have Trusted Launch capabilities turned on by default.
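A quick way to confirm the new default took effect on a portal-created VM is to read its security profile back with the Azure Python SDK; a hedged sketch, where the subscription, resource group, and VM names are assumptions:

```python
# Sketch: verify a portal-created VM actually has Trusted Launch enabled.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")
vm = client.virtual_machines.get("my-rg", "my-vm")  # assumed names

sp = vm.security_profile                 # None on older, non-Trusted Launch VMs
print(sp.security_type)                  # expect "TrustedLaunch"
print(sp.uefi_settings.secure_boot_enabled,  # secure boot state
      sp.uefi_settings.v_tpm_enabled)        # virtual TPM state
```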
Private AKS Deployment with Application Gateway: Leveraging Terraform and Azure DevOps

Introduction

This repository provides a comprehensive guide and toolkit for creating a private Azure Kubernetes Service (AKS) cluster using Terraform. It showcases a detailed process for deploying a private AKS cluster with robust integrations including Azure Container Registry, Azure Storage Account, Azure Key Vault, and more, using Terraform as the infrastructure as code (IaC) tool.

Repository

For complete details and Terraform scripts, visit my GitHub repository at https://github.com/yazidmissaoui/PrivateAKSCluster-Terraform. This project mirrors the architecture suggested by Microsoft, providing a practical implementation of their recommended private AKS cluster setup. For further reference on the Microsoft architecture, visit their guide at https://learn.microsoft.com/en-us/azure/architecture/example-scenario/aks-agic/aks-agic.

Description

This sample shows how to create a private AKS cluster (https://docs.microsoft.com/en-us/azure/aks/private-clusters) using:

- Terraform (https://www.terraform.io/intro/index.html) as the infrastructure as code (IaC) tool to build, change, and version the infrastructure on Azure in a safe, repeatable, and efficient way.
- Azure Pipelines (https://docs.microsoft.com/en-us/azure/devops/pipelines/get-started/what-is-azure-pipelines?view=azure-devops) to automate the deployment and undeployment of the entire infrastructure on multiple environments on the Azure platform.

In a private AKS cluster, the API server endpoint is not exposed via a public IP address. Hence, to manage the API server, you will need to use a virtual machine that has access to the AKS cluster's Azure Virtual Network (VNet). This sample deploys a jumpbox virtual machine in the hub virtual network, peered with the virtual network that hosts the private AKS cluster. There are several options for establishing network connectivity to the private cluster:

- Create a virtual machine in the same Azure Virtual Network (VNet) as the AKS cluster.
- Use a virtual machine in a separate network and set up virtual network peering. See the section below for more information on this option.
- Use an ExpressRoute or VPN connection.

Creating a virtual machine in the same virtual network as the AKS cluster, or in a peered virtual network, is the easiest option. ExpressRoute and VPNs add costs and require additional networking complexity. Virtual network peering requires you to plan your network CIDR ranges to ensure there are no overlapping ranges. For more information, see https://docs.microsoft.com/en-us/azure/aks/private-clusters. For more information on Azure Private Link, see https://docs.microsoft.com/en-us/azure/private-link/private-link-overview

In addition, the sample creates a private endpoint to access all the managed services deployed by the Terraform modules via a private IP address:

- Azure Container Registry
- Azure Storage Account
- Azure Key Vault

NOTE: If you want to deploy a private AKS cluster with a public DNS address (https://docs.microsoft.com/en-us/azure/aks/private-clusters#create-a-private-aks-cluster-with-a-public-dns-address) to simplify the DNS resolution of the API server to the private IP address of the private endpoint, you can use the project under my https://github.com/paolosalvatori/private-cluster-with-public-dns-zone account or on https://github.com/Azure/azure-quickstart-templates/tree/master/demos/private-aks-cluster-with-public-dns-zone.

Architecture

The following picture shows the high-level architecture created by the Terraform modules included in this sample. The following picture provides a more detailed view of the infrastructure on Azure.
The architecture is composed of the following elements:

- A hub virtual network with two subnets: AzureBastionSubnet, used by Azure Bastion, and AzureFirewallSubnet, used by Azure Firewall.
- A new virtual network with three subnets: SystemSubnet, used by the AKS system node pool; UserSubnet, used by the AKS user node pool; and VmSubnet, used by the jumpbox virtual machine and private endpoints.
- The private AKS cluster, which uses a user-defined managed identity to create additional resources like load balancers and managed disks in Azure. The cluster is composed of a system node pool hosting only critical system pods and services (the worker nodes carry a node taint which prevents application pods from being scheduled on this node pool) and a user node pool hosting user workloads and artifacts.
- An Azure Firewall used to control the egress traffic from the private AKS cluster. For more information on how to lock down your private AKS cluster and filter outbound traffic, see https://docs.microsoft.com/en-us/azure/aks/limit-egress-traffic and https://docs.microsoft.com/en-us/azure/firewall/protect-azure-kubernetes-service
- An AKS cluster with a private endpoint to the API server hosted by an AKS-managed Azure subscription. The cluster can communicate with the API server, exposed via a Private Link Service, using a private endpoint.
- An Azure Bastion resource that provides secure and seamless SSH connectivity to the jumpbox virtual machine directly in the Azure portal over SSL.
- An Azure Container Registry (ACR) to build, store, and manage container images and artifacts in a private registry for all types of container deployments. When the ACR SKU is Premium, a private endpoint is created to allow the private AKS cluster to access ACR via a private IP address. For more information, see https://docs.microsoft.com/en-us/azure/container-registry/container-registry-private-link.
- A jumpbox virtual machine used to manage the Azure Kubernetes Service cluster.
- A Private DNS Zone for the name resolution of each private endpoint.
- A Virtual Network Link between each Private DNS Zone and both the hub and spoke virtual networks.
- A Log Analytics workspace to collect the diagnostics logs and metrics of both the AKS cluster and the jumpbox virtual machine.
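Once the jumpbox is up, it can be useful to confirm that the private API server is actually reachable from inside the peered network before troubleshooting kubectl. A small standard-library probe, where the FQDN is a placeholder for your cluster's private DNS name:

```python
# Connectivity probe, run from the jumpbox VM inside the peered VNet:
# checks the private AKS API server FQDN resolves and port 443 answers.
import socket

API_FQDN = "<cluster>.privatelink.<region>.azmk8s.io"  # placeholder FQDN

addr = socket.gethostbyname(API_FQDN)       # raises socket.gaierror if DNS fails
with socket.create_connection((addr, 443), timeout=5):
    print(f"{API_FQDN} -> {addr}: port 443 reachable")
```

A DNS failure here usually points at the Private DNS Zone or its Virtual Network Link; a timeout on 443 points at routing or firewall rules instead.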
Announcing: Microsoft moves $25 Billion in credit card transactions to Azure confidential computing

Microsoft is proud to showcase that customers in the financial sector can rely on public Azure, with added confidentiality, to provide secure and compliant payment solutions that meet or exceed industry standards. Microsoft is committed to hosting 100% of our payment services on Azure, just as we would expect our customers to do. Microsoft's Commerce Financial Services (CFS) has completed a critical milestone by deploying a Level 1 Payment Card Industry Data Security Standard (PCI-DSS) compliant credit card processing and vaulting solution, moving $25 billion in annual credit card transactions to the public Azure cloud.
Containers in AKS cannot access Azure resources (Failed to resolve URL)

I have an API server (Python Flask) hosted on AKS. When the service starts, it:

1. Accesses Azure Key Vault to get a storage account connection string
2. Uses the connection string to perform CRUD jobs on the Azure storage account

> PS. The whole system consists of `ingress (clusterIP & loadbalancer)`, `service (clusterIP)`, and my `flask API`

Then I deployed it to AKS, which worked fine (except that the CPU usage is usually > 100%). Two days later, I noticed that the server started restarting over and over again. The error message looks like this:

`azure.core.exceptions.ServiceRequestError: <urllib3.connection.HTTPSConnection object at 0x7fc1f5e0c550>: Failed to resolve 'MY_KEY_VAULT.vault.azure.net' ([Errno -3] Temporary failure in name resolution)`

At first, I thought it was caused by Key Vault, so I put the connection string directly in my code. The same thing happened again:

`Failed to resolve 'MY_STORAGE_ACCOUNT.blob.core.windows.net' ([Errno -3] Temporary failure in name resolution)`

After my first deployment, I did nothing to my AKS resources. Below is basic info about my AKS. One possible root cause is that I set auto-upgrade to `enable`. Please give me some suggestions for debugging, thanks!

[Update 1] I deployed the same container to another node pool, and things work fine ...
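One way to narrow this down is to separate DNS failure from SDK or auth failure inside the pod. A hedged diagnostic sketch, where the vault hostname and secret name are placeholders taken from the error message:

```python
# Diagnose "Temporary failure in name resolution" before blaming Key Vault:
# if plain DNS lookup fails, the problem is cluster DNS, not the Azure SDK.
import socket
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

VAULT_HOST = "MY_KEY_VAULT.vault.azure.net"  # placeholder from the traceback

try:
    print("resolves to:", socket.gethostbyname(VAULT_HOST))
except socket.gaierror as exc:
    # Same Errno -3 as the traceback: name resolution is broken inside the pod.
    # Check CoreDNS pods in kube-system and the node's upstream DNS settings.
    raise SystemExit(f"DNS broken inside the pod: {exc}")

client = SecretClient(vault_url=f"https://{VAULT_HOST}",
                      credential=DefaultAzureCredential())
print(client.get_secret("storage-connection-string").value)  # assumed secret name
```

Given that the same container works on another node pool, comparing CoreDNS health and node DNS configuration between the two pools (especially around the auto-upgrade window) would be a reasonable next step.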
Aligning with Kata Confidential Containers to achieve zero trust operator deployments with AKS

Confidential containers on Azure Kubernetes Service (AKS), leveraging the Kata confidential containers open-source project, are coming soon to Azure. If you would like to be part of the preview, please express your interest here: https://aka.ms/cocoakspreview
Azure Container Registry - New comic

- You are a Cloud lover?
- But you prefer Azure?
- Learning with fun?

Maybe you'll like the latest Azure Container Registry comic provided by Jules&Léa. If you want to deep dive, do not hesitate to visit the official documentation on Microsoft: https://learn.microsoft.com/en-us/azure/container-registry/container-registry-intro/?WT.mc_id=AZ-MVP-5005062 ++
Solution for remote development team access to private AKS managed cluster

Hi All, I am exploring options to allow my remote development team access to a private AKS managed cluster in Azure with AAD and RBAC enabled. Our access options to AKS are via Bastion or VDI, and each poses a unique set of challenges. I will outline each, along with my overall proposed solution.

1. Bastion access via Key Vault and shared VM local credentials: the problem is that remote developers would require access to the Azure portal, then Bastion into a local VM using shared credentials from Key Vault. This may work but is not practical, because each developer requires a unique kubectl profile/config file when accessing AKS, which is overwritten when another user logs on. Also, remote access into Bastion occasionally times out, and the AKS auth flow via the browser sometimes displays a blank page and is cumbersome to log on with.

2. VDI access poses similar challenges: no access to install development tools, and all session settings are reset when the user logs off.

My proposed solution is Bastion access via the native RDP client, along with an AAD-joined VM on the private cluster network. This solution requires no Azure portal access and provides direct RDP access to the AAD-joined VM using AAD credentials and conditional access. The kubectl profile problem is also no longer an issue, as each user logs on with AAD credentials and gets their own user profile.

Changes required to implement: bump up the Bastion SKU from Basic to Standard to allow the native RDP client. However, the (remote) user session needs to be initiated from an AAD-registered, hybrid-joined, or AAD-joined machine to establish a connection to Bastion via the native RDP client, which then allows RDP access with AAD credentials onto the AAD-joined server hosted in Azure.

Welcome all feedback and/or corrections based on my initial solution assessment. Thanks, Darren