GA: DCasv6 and ECasv6 confidential VMs based on 4th Generation AMD EPYC™ processors
Today, Azure has expanded its confidential computing offerings with the general availability of the DCasv6 and ECasv6 confidential VM series in the UAE North and Korea Central regions. These VMs are powered by 4th generation AMD EPYC™ processors and feature advanced Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP) technology. These confidential VMs offer:

- Hardware-rooted attestation
- Memory encryption in multi-tenant environments
- Enhanced data confidentiality
- Protection against cloud operators, administrators, and insider threats

You can get started today by creating confidential VMs in the Azure portal as explained here.

Highlights:

- 4th generation AMD EPYC processors with SEV-SNP
- 25% performance improvement over the previous generation
- Ability to rotate keys online
- AES-256 memory encryption enabled by default
- Up to 96 vCPUs and 672 GiB RAM for demanding workloads

Streamlined Security

Organizations in certain regulated industries and sovereign customers migrating to Microsoft Azure need strict security and compliance across all layers of the stack. With Azure Confidential VMs, organizations can ensure the integrity of the boot sequence and the OS kernel while helping administrators safeguard sensitive data against advanced and persistent threats. The DCasv6 and ECasv6 family of confidential VMs supports online key rotation, giving organizations the ability to dynamically adapt their defenses to rapidly evolving threats. Additionally, these new VMs include AES-256 memory encryption as a default feature. Customers have the option to use Virtualization-Based Security (VBS) in Windows, currently in preview, to protect private keys from exfiltration via the guest OS or applications. With VBS enabled, keys are isolated within a secure process, allowing key operations to be carried out without exposing them outside this environment.
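The hardware-rooted attestation these VMs offer follows a simple pattern: the hardware signs a report over the VM's launch measurement, and a relying party verifies the signature and compares the measurement against a known-good value. The Python sketch below is conceptual only; it uses an HMAC as a stand-in for the AMD-rooted signing chain of a real SEV-SNP report, and every key and value in it is made up for illustration.

```python
import hashlib
import hmac

# Conceptual stand-in for the hardware's attestation key. Real SEV-SNP
# reports are signed by an AMD-rooted certificate chain, not an HMAC.
HW_KEY = b"simulated-hardware-root-key"

def make_report(launch_measurement):
    """Hardware side: produce (measurement, signature) over the launch state."""
    sig = hmac.new(HW_KEY, launch_measurement, hashlib.sha384).digest()
    return launch_measurement, sig

def verify_report(measurement, sig, expected_measurement):
    """Relying party: check the signature, then compare against a known-good value."""
    expected_sig = hmac.new(HW_KEY, measurement, hashlib.sha384).digest()
    return hmac.compare_digest(expected_sig, sig) and measurement == expected_measurement

# Known-good measurement of the boot image the relying party trusts:
expected = hashlib.sha384(b"known-good-boot-image").digest()
m, s = make_report(expected)
print(verify_report(m, s, expected))  # → True
```

A tampered measurement fails verification even if the rest of the report is intact, which is what lets a remote party refuse to release secrets to a VM whose boot state has been altered.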
Faster Performance

In addition to the newly announced security upgrades, the new DCasv6 and ECasv6 family of confidential VMs has demonstrated up to a 25% improvement in various benchmarks compared to our previous generation of confidential VMs powered by AMD. Organizations that need to run complex workflows, such as combining multiple private data sets for joint analysis, medical research, or Confidential AI services, can use these new VMs to accelerate their sensitive workloads.

"While we began our journey with v5 confidential VMs, now we're seeing noticeable performance improvements with the new v6 confidential VMs based on 4th Gen AMD EPYC "Genoa" processors. These latest confidential VMs are being rolled out across many Azure regions worldwide, including the UAE. So as v6 becomes available in more regions, we can deploy AMD based confidential computing wherever we need, with the same consistency and higher performance." — Mohammed Retmi, Vice President - Sovereign Public Cloud at Core42, a G42 company

"KT is leveraging Azure confidential computing to secure sensitive and regulated data from its telco business in the cloud. With new V6 CVM offerings in Korea Central Region, KT extends its use to help Korean customers with enhanced security requirements, including regulated industries, benefit from the highest data protection as well as the fastest performance by the latest AMD SEV-SNP technology through its Secure Public Cloud built with Azure confidential computing." — Woojin Jung, EVP, KT Corporation

Kubernetes support

Deploy resilient, globally available applications on confidential VMs with our managed Kubernetes experience, Azure Kubernetes Service (AKS).
AKS now supports the new DCasv6 and ECasv6 family of confidential VMs, enabling organizations to easily deploy, scale, and manage confidential Kubernetes clusters on Azure, streamlining developer workflows and reducing manual tasks with integrated continuous integration and continuous delivery (CI/CD) pipelines. AKS brings integrated monitoring and logging to confidential VM node pools, with in-depth performance and health insights for clusters and containerized applications.

Azure Linux 3.0 and Ubuntu 24.04 support are now in preview. AKS integration in this generation of confidential VMs also brings support for Azure Linux 3.0, which contains only the most essential packages for resource efficiency and ships a secure, hardened Linux kernel specifically tuned for Azure cloud deployments. Ubuntu 24.04 clusters are supported in addition to Azure Linux 3.0. Organizations wanting to ease the orchestration issues of deploying, scaling, and managing hundreds of confidential VM node pools can now choose either of these two operating systems for their node pools.

General purpose & Memory-intensive workloads

Featuring general-purpose-optimized memory-to-vCPU ratios and support for up to 96 vCPUs and 384 GiB RAM, the DCasv6-series delivers enterprise-grade performance. The DCasv6-series enables organizations to run sensitive workloads with hardware-based security guarantees, making it ideal for applications processing regulated or confidential data. For workloads whose memory demands exceed even the capabilities of the DCasv6 series, the new ECasv6-series offers high memory-to-vCPU ratios with increased scalability, up to 96 vCPUs and 672 GiB of RAM, nearly doubling the memory capacity of the DCasv6.

You can get started today by creating confidential VMs in the Azure portal as explained here.
Additional Resources:

- Quickstart: Create confidential VM with Azure portal
- Quickstart: Create confidential VM with ARM template
- Azure confidential virtual machines FAQ

Copa: An Image Vulnerability Patching Tool
Securing container images is paramount, especially with the widespread adoption of containerization technologies like Docker and Kubernetes. Microsoft has recognized the need for robust image security solutions and has introduced Copa, an open-source tool designed to keep container images secure and address vulnerabilities swiftly. Learn about Copa in this blog.

Built a Real-Time Azure AI + AKS + DevOps Project – Looking for Feedback
Hi everyone, I recently completed a real-time project using Microsoft Azure services to build a cloud-native healthcare monitoring system. The key services used include:

- Azure AI (Cognitive Services, OpenAI)
- Azure Kubernetes Service (AKS)
- Azure DevOps and GitHub Actions
- Azure Monitor, Key Vault, API Management, and others

The project focuses on real-time health risk prediction using simulated sensor data. It's built with containerized microservices, infrastructure as code, and end-to-end automation. GitHub link (with source code and documentation): https://github.com/kavin3021/AI-Driven-Predictive-Healthcare-Ecosystem

I would really appreciate your feedback or suggestions to improve the solution. Thank you!

Securing Kubernetes Applications with Ingress Controller and Content Security Policy
In this guide, we'll walk through an end-to-end example showing how to:

- Install the NGINX Ingress Controller as a DaemonSet
- Configure it to automatically inject a Content Security Policy (CSP) header
- Deploy a simple "Hello World" NGINX app (myapp)
- Tie everything together with an Ingress resource that routes traffic and verifies CSP

By the end, you'll have a pattern where:

- Every request to your application carries a strong CSP header
- Your application code remains unchanged (CSP is injected at the gateway layer)
- You can test both inside the cluster and externally to confirm CSP is enforced

Why CSP Matters at the Ingress Layer

Content Security Policy (CSP) is an HTTP header that helps mitigate cross-site scripting (XSS) and other injection attacks by specifying which sources of content are allowed. Injecting CSP at the Ingress level has three big advantages:

- Centralized Enforcement: instead of configuring CSP in each application pod, you define it once in the Ingress controller. All apps behind that controller automatically inherit the policy.
- Minimal Application Changes: your Docker images and web servers remain untouched; security lives at the edge (the Ingress).
- Consistency and Auditability: you can update the CSP policy in one place (the controller's ConfigMap) and immediately protect every Ingress without modifying application deployments.

What are the steps involved

a. Install NGINX Ingress Controller as a DaemonSet with CSP Enabled

We want the controller to run one Pod per node (high availability), and we also want it to inject a CSP header globally. Helm makes this easy:

1. Add the official ingress-nginx repository:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

2.
Install (or upgrade) the chart with the following values:

- controller.kind=DaemonSet -> run one controller Pod on each node
- controller.config.enable-snippets=true -> allow advanced NGINX snippets
- controller.config.server-snippet="add_header Content-Security-Policy …" -> define the CSP header

helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.kind=DaemonSet \
  --set controller.nodeSelector."kubernetes\.io/os"=linux \
  --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
  --set controller.service.externalTrafficPolicy=Local \
  --set defaultBackend.image.image=defaultbackend-amd64:1.5 \
  --set controller.config.enable-snippets=true \
  --set-string controller.config.server-snippet="add_header Content-Security-Policy \"default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self'; object-src 'none'; frame-ancestors 'self'; frame-src 'self'; connect-src 'self'; upgrade-insecure-requests; report-uri /csp-report\" always;"

What this does:

- Installs (or upgrades) the ingress-nginx chart into the ingress-nginx namespace.
- Runs the controller as a DaemonSet (one Pod per node) on Linux nodes.
- Sets enable-snippets: "true" in the controller's ConfigMap.
- Defines a server-snippet so every NGINX server block will contain:

add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self'; object-src 'none'; frame-ancestors 'self'; frame-src 'self'; connect-src 'self'; upgrade-insecure-requests; report-uri /csp-report" always;

- Exposes the ingress controller via a LoadBalancer (or NodePort) IP, depending on your cluster.

This adds the following CSP entry to the ingress controller's ConfigMap.
apiVersion: v1
data:
  enable-snippets: "true"
  server-snippet: add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self'; object-src 'none'; frame-ancestors 'self'; frame-src 'self'; connect-src 'self'; upgrade-insecure-requests; report-uri /csp-report" always;
kind: ConfigMap
...

Validation can be done using this command:

kubectl edit configmap ingress-nginx-controller -n ingress-nginx

Another option is to patch with a restart (avoids the manual edit):

kubectl patch configmap ingress-nginx-controller \
  -n ingress-nginx \
  --type=merge \
  -p '{"data":{"enable-snippets":"true","server-snippet":"\n add_header Content-Security-Policy \"default-src '\''self'\''; script-src '\''self'\''; style-src '\''self'\''; img-src '\''self'\''; object-src '\''none'\''; frame-ancestors '\''self'\''; frame-src '\''self'\''; connect-src '\''self'\''; upgrade-insecure-requests; report-uri /csp-report\" always;"}}' \
  && kubectl rollout restart daemonset/ingress-nginx-controller -n ingress-nginx

3. Roll out and verify:

kubectl rollout restart daemonset/ingress-nginx-controller -n ingress-nginx
kubectl rollout status daemonset/ingress-nginx-controller -n ingress-nginx

You should see one Pod per node in the "Running" state.

4. (Optional) Inspect the controller's ConfigMap:

kubectl get configmap ingress-nginx-controller -n ingress-nginx -o yaml

Under data: you'll find:

enable-snippets: "true"
server-snippet: |
  add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self'; object-src 'none'; frame-ancestors 'self'; frame-src 'self'; connect-src 'self'; upgrade-insecure-requests; report-uri /csp-report" always;

At this point, your NGINX Ingress Controller is running as a DaemonSet, and it will inject the specified CSP header on every response for any Ingress it serves.

b. Deploy a Simple 'myapp' NGINX Application

Next, we'll deploy a minimal NGINX app (labeled app=myapp) so we can confirm routing and CSP.

1.
Deployment (save as myapp-deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  namespace: default
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-container
        image: nginx:latest   # Replace with your custom image as needed
        ports:
        - containerPort: 80

2. Service (save as myapp-service.yaml):

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  namespace: default
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP

3. Apply both resources:

kubectl apply -f myapp-deployment.yaml -f myapp-service.yaml

4. Verify your Pods and Service:

kubectl get deployments,svc -l app=myapp -n default

You should see:

Deployment/myapp-deployment   1/1   1   1   30s
Service/myapp-service   ClusterIP   10.244.x.y   <none>   80/TCP   30s

5. Test the Service directly (inside the cluster):

kubectl run curl-test --image=radial/busyboxplus:curl \
  --restart=Never --rm -it -- sh -c "curl -I http://myapp-service.default.svc.cluster.local/"

Expected output:

HTTP/1.1 200 OK
Server: nginx/1.23.x
Date: ...
Content-Type: text/html
Content-Length: 612
Connection: keep-alive

At this point, your application is up and running, accessible at http://myapp-service.default.svc.cluster.local from within the cluster.

c. Create an Ingress to Route to myapp-service

Now that the Ingress Controller and app Service are in place, let's configure an Ingress resource:

1. Ingress definition (save as myapp-ingress.yaml):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  namespace: default
spec:
  ingressClassName: "nginx"
  rules:
  - host: myapp.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-service
            port:
              number: 80

2. Apply the Ingress:

kubectl apply -f myapp-ingress.yaml

3.
Verify that the Ingress is registered:

kubectl get ingress -n default

You should see:

NAME            CLASS   HOSTS         ADDRESS     PORTS   AGE
myapp-ingress   nginx   myapp.local   <pending>   80      10s

The ADDRESS may be <pending>; in cloud environments, it will eventually become a LoadBalancer IP.

d. Verify End-to-End (Ingress + CSP): From Inside the Cluster

1. Run a curl pod that sends a request to the Ingress Controller's internal DNS, setting 'Host: myapp.local':

kubectl run curl-test --image=radial/busyboxplus:curl \
  --restart=Never --rm -it -- sh -c \
  "curl -I http://ingress-nginx-controller.ingress-nginx.svc.cluster.local/ -H 'Host: myapp.local'"

2. Expected output:

HTTP/1.1 200 OK
Server: nginx/1.23.x
Date: Wed, 05 Jun 2025 12:05:00 GMT
Content-Type: text/html
Content-Length: 612
Connection: keep-alive
Content-Security-Policy: default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self'; object-src 'none'; frame-ancestors 'self'; frame-src 'self'; connect-src 'self'; upgrade-insecure-requests; report-uri /csp-report

Notice the Content-Security-Policy header. This confirms the controller has injected our CSP.

e. Verify End-to-End (Ingress + CSP): From Your Laptop or Browser (Optional)

Retrieve the external IP (if your Service is a LoadBalancer):

kubectl get svc ingress-nginx-controller -n ingress-nginx

Once you have an external IP (e.g. 52.251.20.187), add this to your /etc/hosts:

52.251.20.187 myapp.local

Then in your terminal or browser:

curl -I http://myapp.local/

Alternatively, you can port-forward and curl to validate that the app is running:

kubectl port-forward service/ingress-nginx-controller 8080:80 -n ingress-nginx

Add to /etc/hosts:

127.0.0.1 myapp.local

Then:

curl -I http://myapp.local:8080/

You should again see the 200 OK response along with the Content-Security-Policy header.

Recap & Best Practices

1. Run the Ingress Controller as a DaemonSet

- Ensures one Pod per node for true high availability.
- Achieved by --set controller.kind=DaemonSet in Helm.

2.
Enable Snippets & Inject CSP Globally

- --set controller.config.enable-snippets=true turns on snippet support.
- --set-string controller.config.server-snippet="add_header Content-Security-Policy …" inserts a literal block into the controller's ConfigMap.
- This causes every server block (all Ingresses) to include your CSP header without modifying individual Ingress manifests.

3. Keep Your Apps Unchanged

- The CSP lives in the Ingress Controller, not in your app pods.
- You can standardize security across all Ingresses by adjusting the single CSP snippet.

4. Deploy Your Application, Service, Ingress, and Test

- We used a minimal NGINX "myapp" Deployment + ClusterIP Service -> Ingress -> Controller -> CSP injection.
- Verified inside the cluster with curl -I … -H "Host: myapp.local" that the CSP appears.
- Optionally tested from outside via /etc/hosts or a LoadBalancer IP.

5. Next Steps

- Adjust the CSP policy to fit your application's needs; for example, if you load scripts from a CDN, add that domain under script-src.
- Add additional security headers (HSTS, X-Frame-Options, etc.) by appending more add_header lines in the server-snippet.
- If you have multiple Ingress Controllers, repeat the same pattern for each.
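The "add your CDN under script-src" adjustment above is just a string transformation on the policy. As a small illustrative sketch (not part of the controller; cdn.example.com and the function name are placeholders), this is what appending an extra allowed source to one directive looks like:

```python
def add_source(csp, directive, source):
    """Append an allowed source to one directive of a CSP string (sketch)."""
    out = []
    for part in csp.split(";"):
        part = part.strip()
        # Match "script-src ..." or a bare directive with no sources yet.
        if part.startswith(directive + " ") or part == directive:
            part = f"{part} {source}"
        if part:
            out.append(part)
    return "; ".join(out)

csp = "default-src 'self'; script-src 'self'; img-src 'self'"
print(add_source(csp, "script-src", "https://cdn.example.com"))
# → default-src 'self'; script-src 'self' https://cdn.example.com; img-src 'self'
```

The updated string would then replace the policy value inside the server-snippet, followed by a rollout restart of the controller so the new header takes effect.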
Full Commands (for Reference)

# 1) Install/Upgrade NGINX Ingress Controller (DaemonSet + CSP)
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.kind=DaemonSet \
  --set controller.nodeSelector."kubernetes\.io/os"=linux \
  --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
  --set controller.service.externalTrafficPolicy=Local \
  --set defaultBackend.image.image=defaultbackend-amd64:1.5 \
  --set controller.config.enable-snippets=true \
  --set-string controller.config.server-snippet="add_header Content-Security-Policy \"default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self'; object-src 'none'; frame-ancestors 'self'; frame-src 'self'; connect-src 'self'; upgrade-insecure-requests; report-uri /csp-report\" always;"

# Wait for DaemonSet
kubectl rollout status daemonset/ingress-nginx-controller -n ingress-nginx

# 2) myapp-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  namespace: default
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-container
        image: nginx:latest
        ports:
        - containerPort: 80
---
# 3) myapp-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  namespace: default
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP

# Apply Deployment + Service
kubectl apply -f myapp-deployment.yaml -f myapp-service.yaml

# Test the Service internally
kubectl run curl-test --image=radial/busyboxplus:curl --restart=Never --rm -it \
  -- sh -c "curl -I http://myapp-service.default.svc.cluster.local/"

# 4) myapp-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  namespace: default
spec:
  ingressClassName: "nginx"
  rules:
  - host: myapp.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-service
            port:
              number: 80

# Apply the Ingress
kubectl apply -f myapp-ingress.yaml

# Verify and test with CSP
kubectl run curl-test --image=radial/busyboxplus:curl --restart=Never --rm -it -- sh -c \
  "curl -I http://ingress-nginx-controller.ingress-nginx.svc.cluster.local/ -H 'Host: myapp.local'"

With these steps in place, all traffic to myapp.local is routed through your NGINX Ingress Controller, and the strong CSP header is automatically applied at the gateway. This pattern can be adapted to any Kubernetes-hosted web application by injecting additional security headers, tailoring the CSP to your needs, and keeping your apps running unmodified. Happy "ingressing"!

Enable an Industrial Dataspace on Azure
What is an Industrial Dataspace?

An industrial dataspace is an environment designed to enable the secure and efficient exchange of data between different organizations within an industrial ecosystem. Developed by the International Data Spaces Association, it focuses on key principles such as data sovereignty, interoperability, and collaboration. These principles are crucial in the context of Industry 4.0, where interconnected systems and data-driven decision-making optimize industrial processes and create resilient supply chains. A tutorial with step-by-step instructions on how to enable an industrial dataspace on Azure is available here.

Use Case: Providing a Carbon Footprint for Produced Products

One of the most popular use cases for industrial dataspaces is providing the Product Carbon Footprint (PCF), an increasingly important requirement in customers' buying decisions. The Greenhouse Gas Protocol is a common method for calculating the PCF, splitting the task into scope 1, scope 2, and scope 3 emissions. This example solution focuses on calculating scope 2 emissions from simulated production lines, using energy consumption data to determine the carbon footprint for each product.

Accessing the Reference Implementation

The Product Carbon Footprint reference implementation can be accessed here and deployed to Azure with a single click. During the installation workflow, all the required components are deployed to Azure. This reference implementation supports data modelling with the IEC standard Open Platform Communications Unified Architecture (OPC UA), aligned with the OPC Foundation Cloud Initiative. It also uses the IEC standard Asset Administration Shell (AAS) to provide product semantics, creating a Product Carbon Footprint AAS for simulated products and storing it in an AAS Repository.
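The scope 2 calculation described in the use case above reduces to multiplying measured energy consumption by a grid emission factor. A minimal sketch with illustrative numbers (the factor, function name, and inputs are assumptions for illustration, not values from the reference implementation; real factors vary by grid and year):

```python
# Sketch of a scope 2 product carbon footprint calculation:
#   emissions (kg CO2e) = energy consumed (kWh) x grid emission factor (kg CO2e/kWh)
# The factor below is illustrative only, not an official value.
GRID_FACTOR_KG_PER_KWH = 0.4

def scope2_pcf(energy_kwh_per_unit, units_produced):
    """Total scope 2 emissions (kg CO2e) for a production batch."""
    total_kwh = energy_kwh_per_unit * units_produced
    return total_kwh * GRID_FACTOR_KG_PER_KWH

# 2.5 kWh per unit, 100 units produced on the simulated line:
print(scope2_pcf(2.5, 100))  # → 100.0 (kg CO2e for the batch)
```

In the reference implementation, a per-product figure like this is what gets packaged into the Product Carbon Footprint AAS and shared with the customer over the dataspace.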
Finally, the implementation uses the IEC/ISO standard Eclipse Dataspace Components (EDC) to establish the trust relationship between the manufacturer and the customer, enabling the actual PCF data transfer via an OpenAPI-compatible REST interface.

Conclusion

Enabling an industrial dataspace on Azure can help manufacturers meet regulatory requirements, optimize industrial processes, and improve customer engagement. By leveraging modern cloud technologies and standards to provide a secure and efficient data exchange environment, it ultimately drives transparency and sustainability in the manufacturing industry.

End-to-end TLS with AKS, Azure Front Door, Azure Private Link Service, and NGINX Ingress Controller
This article shows how Azure Front Door Premium can be set up to use a Private Link Service to expose an AKS-hosted workload via an NGINX Ingress Controller configured to use a private IP address on the internal load balancer.

FastTrack for Azure (FTA) program retiring December 2024
ATTENTION: As of December 31st, 2024, the FastTrack for Azure (FTA) program will be retired. FTA will support any projects currently in motion to ensure successful completion by December 31st, 2024, but will no longer accept new nominations. For more information on available programs and resources, visit: Azure Migrate, Modernize, and Innovate | Microsoft Azure

Partners accelerating industrial transformation with Azure IoT Operations
In the digital age, the essence of innovation lies not only in groundbreaking technology but also in the power of collaboration. At Microsoft, we have always recognized that our success is intertwined with the success of our partners. Our platform products, including the newly released Azure IoT Operations, are designed to be the foundation upon which our partners can build transformative solutions. These collaborations are more than just business arrangements; they are the bedrock of a thriving ecosystem that drives innovation, addresses customer needs, and propels industry standards forward.

Partnerships enable us to extend our reach and impact far beyond what we could achieve alone. By combining our technological prowess with the domain expertise and creativity of our partners, we create a dynamic synergy that fosters groundbreaking advancements. This collaborative spirit is vital as we navigate the complexities of the Internet of Things (IoT) landscape, where diverse applications and specialized knowledge are paramount. Our partners bring unique perspectives and capabilities to the table, ensuring that Azure IoT Operations can cater to a broad spectrum of industries and use cases.

Deploy a Magento Open Source LAMP-stack e-commerce app on Azure with one click!
We're thrilled to announce the release of our one-click ARM template for deploying Magento on Azure! Magento, a popular open-source e-commerce platform, can now be effortlessly hosted on Azure, leveraging services like AKS, Virtual Network, Private Link, Azure CDN, Azure Premium File Storage, and Azure Database for MySQL - Flexible Server. Check out the blog and demo video by Mahmut Olcay, Azure Data MVP and Azure Database for MySQL Insider, showcasing the deployment process.