Sovereignty in Azure Belgium Central: A Three-Layer Technical Deep Dive
When Belgium Central went live in November 2025, it marked the launch of a new Azure region for Belgian organizations operating in the EU. For many scenarios, it enables customers to run workloads in-country and apply technical controls that can support sovereignty requirements. But "sovereignty" is one of those words that means different things to different people. So, let's break it down into something more tangible. In this post, we'll walk through sovereignty in Azure Belgium Central using three standardized technical layers. Think of them as concentric rings of protection around your data: Layer 1: Data Residency & Locality. Where your data physically lives and how it behaves during failure. Layer 2: Encryption at Rest & In Transit. How data is protected and who holds the keys. Layer 3: Confidential Computing. How data is protected while being processed in memory. Each layer builds on the previous one. Together, they form a comprehensive sovereignty posture. Let's find out what that looks like in practice. Layer 1: Data Residency & Locality This layer answers the most fundamental sovereignty question: where is my data, and does it stay there? In-Country Storage For regionally deployed Azure services, customer data at rest is stored in the selected Azure region. In Belgium Central, this means data at rest for supported services is stored in Belgium. Microsoft indicates the region’s datacenters are located in the Brussels area. When you deploy a resource with location = "belgiumcentral" in Terraform or location: 'belgiumcentral' in Bicep, you’re selecting that Azure region for the resource. This matters for organizations bound by Belgian or EU data residency requirements, and it matters for public sector customers who need assurance that sensitive data doesn't cross national borders without explicit action. Source: Microsoft Digital AmBEtion (microsoft.com/en-be) Three Availability Zones Belgium Central supports Availability Zones. 
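The region selection described above can also be expressed with the Azure CLI. The sketch below is illustrative; the resource group name is a placeholder, and the same `belgiumcentral` location value applies in Terraform or Bicep as noted.

```shell
# Create a resource group pinned to the Belgium Central region.
# "rg-sovereign-demo" is a placeholder name; substitute your own.
az group create \
  --name rg-sovereign-demo \
  --location belgiumcentral

# List resources in the group with their regions, to confirm where
# regionally deployed resources live.
az resource list \
  --resource-group rg-sovereign-demo \
  --query "[].{name:name, location:location}" \
  --output table
```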
Availability Zones are physically separate locations within an Azure region and are designed with independent power, cooling, and networking. This lets you deploy zone-redundant architectures (for example, spreading VMs, databases, and storage across zones) for high availability while keeping resources in the same Azure region. Availability Zones within a region are connected by high-bandwidth, low-latency networking designed to support zone-redundant services and architectures. Actual latency depends on workload placement and architecture and should be validated for your scenario. Source: The ABC of Azure Belgium Central (Microsoft Community Hub) Non-Paired Region: A Sovereignty Feature, Not a Limitation Azure Belgium Central is a non-paired region. For services that rely on region pairing for automatic geo-replication, behavior and options in a non-paired region can differ from those in paired regions. Customers can configure cross-region disaster recovery explicitly and choose a target region based on their requirements. From a sovereignty perspective, some customers may prefer this model because cross-region replication and secondary data locations are customer-selected when configured. Replication and failover capabilities are service-specific, and customers should confirm the data residency and replication behavior for the services they use. Depending on the service and redundancy option, some geo-redundant features (for example, Geo-Redundant Storage (GRS) for Azure Storage) may not be available in non-paired regions. Many designs use Zone-Redundant Storage (ZRS) for in-region redundancy across Availability Zones. For cross-region replication, options such as object replication may be used where supported, with the destination region selected by the customer.
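As a minimal sketch of the in-region ZRS pattern described above (account and group names are hypothetical), a zone-redundant storage account in Belgium Central could be created with the Azure CLI:

```shell
# ZRS keeps three synchronous copies of your data spread across the
# region's Availability Zones; no data leaves Belgium Central.
az storage account create \
  --name stsovereigndemo \
  --resource-group rg-sovereign-demo \
  --location belgiumcentral \
  --sku Standard_ZRS \
  --kind StorageV2
```

Cross-region replication, if required, would then be an explicit, customer-configured step (for example, blob object replication to a secondary region you choose), rather than an automatic pairing.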
Source: Azure region pairs and nonpaired regions (learn.microsoft.com) What This Means Architecturally When designing for Belgium Central, customers may consider: Intra-region redundancy via Availability Zones (for example, ZRS and zone-redundant deployments), where supported. Cross-region disaster recovery when explicitly configured, with a customer-chosen secondary region. Replication behavior that is service-dependent; customers should validate which services replicate within a region, across zones, or across regions, and what configuration is required. Layer 2: Encryption at Rest & In Transit Layer 1 keeps your data in Belgium. Layer 2 makes sure that even if someone gained physical access to the underlying infrastructure, they'd find nothing readable. Encryption at Rest: Platform-Managed by Default By default, all data stored at rest in Azure is encrypted to ensure security and compliance. Storage accounts, managed disks, databases: all use AES-256 encryption with Microsoft-managed keys out of the box. You don't have to configure anything to get this baseline protection. But for sovereignty scenarios, "Microsoft holds the keys" might not be enough. Data at rest is encrypted by default with platform-managed keys; double encryption is also possible, adding an extra layer of encryption with customer-managed keys (CMK). Source: Double encryption in Azure (learn.microsoft.com) Customer-Managed Keys (CMK): You Hold the Keys Azure services in Belgium Central support Customer-Managed Keys (CMK) through Azure Key Vault. This shifts key ownership from Microsoft to you. You generate, rotate, and revoke keys on your own schedule. Azure services reference your key in Key Vault for encrypt/decrypt operations, but the key itself is under your control. This applies to a broad range of services: VM disk encryption, storage account encryption, Azure SQL Transparent Data Encryption, and more. But not all key storage is created equal.
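As a concrete example of the CMK pattern described above, the sketch below wires a storage account's encryption to a key you own. All names are placeholders, and the prerequisite identity and key-permission assignments (giving the storage account's managed identity access to the key) are omitted for brevity:

```shell
# 1. A Key Vault to hold the customer-managed key. Purge protection
#    is required for CMK scenarios.
az keyvault create \
  --name kv-sovereign-demo \
  --resource-group rg-sovereign-demo \
  --location belgiumcentral \
  --enable-purge-protection true

# 2. Create the key that you generate, rotate, and can revoke.
az keyvault key create \
  --vault-name kv-sovereign-demo \
  --name storage-cmk \
  --kty RSA \
  --size 2048

# 3. Point the storage account's encryption at your key instead of
#    the platform-managed key.
az storage account update \
  --name stsovereigndemo \
  --resource-group rg-sovereign-demo \
  --encryption-key-source Microsoft.Keyvault \
  --encryption-key-vault https://kv-sovereign-demo.vault.azure.net \
  --encryption-key-name storage-cmk
```

Revoking or disabling the key in your vault makes the stored data unreadable to the service, which is the lever that shifts control to you.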
Azure offers three tiers of key management in Belgium Central, and the differences matter for sovereignty: Source: Azure encryption overview (learn.microsoft.com) Key Vault Standard: Software-Protected Keys The entry-level option: software-protected keys stored in a vault, protected by Microsoft's infrastructure, but not in dedicated HSM hardware. For many general-purpose workloads where regulatory demands don't mandate hardware key protection, Standard is cost-effective and fully functional for CMK scenarios. Key Vault Premium: HSM-Backed Keys (Multi-Tenant) Premium includes everything in Standard plus support for HSM-protected keys. When you create an HSM-backed key in a Premium vault, the key material lives inside Microsoft-managed Hardware Security Modules rather than in software. The HSM hardware is shared (multi-tenant, logically isolated per customer), but the key material is processed and stored within certified HSM devices. Microsoft documentation describes the compliance and validation posture of Key Vault and HSM-backed keys, including FIPS validation details that may vary by hardware generation, region, and service configuration. Customers should refer to the current product documentation and compliance listings for the specific SKU and region in scope. For many scenarios, Key Vault Premium provides HSM-backed key options in a multi-tenant service model and is priced differently than Key Vault Standard and Managed HSM. The right choice depends on regulatory requirements, operational model, and cost considerations. Managed HSM: Single-Tenant, Maximum Isolation For the highest level of key sovereignty, Azure Key Vault Managed HSM provides a single-tenant key management service backed by FIPS 140-3 Level 3 validated hardware.
Unlike Key Vault Premium (where HSM-backed keys share a multi-tenant HSM infrastructure), a Managed HSM pool gives you a dedicated, cryptographically isolated HSM environment with your own security domain. Key facts about Managed HSM that matter for sovereignty: Compliance / validation: Managed HSM uses dedicated hardware security modules. Refer to current Microsoft documentation for FIPS validation level and applicability for your region and SKU. Regional deployment: Managed HSM is deployed to an Azure region. Customers should validate data residency and any service-specific data handling behavior for their workload and compliance needs. Security domain: Customers download and control the security domain (a cryptographic backup of HSM credentials), protected using customer-controlled keys. See product documentation for the shared responsibility model and operational details. Access control: Managed HSM provides role-based access controls for key operations. Customers should review the authorization model and administrative boundaries described in the documentation. Managed HSM has a different pricing and operational model than Key Vault (for example, pool-based billing and additional operational steps). Choosing the Right Tier Managed HSM is typically considered when requirements call for dedicated HSM resources, security domain control, or administrative separation beyond a shared HSM service model. Key Vault Standard can be a fit for development/test or scenarios where software-protected keys meet your requirements. Key Vault and Managed HSM capabilities are available in Azure Belgium Central, but customers should verify current product, SKU, and service availability by region and validate service-specific data residency behavior for their workload.
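The security-domain step described above is what distinguishes the Managed HSM operational model. The sketch below shows provisioning and activation; names, the administrator object ID, and the wrapping-key files are placeholders you would replace with your own:

```shell
# Provision a single-tenant Managed HSM pool. The --administrators
# value is the Entra ID object ID of the initial HSM administrator.
az keyvault create \
  --hsm-name mhsm-sovereign-demo \
  --resource-group rg-sovereign-demo \
  --location belgiumcentral \
  --administrators "00000000-0000-0000-0000-000000000000" \
  --retention-days 28

# Activate the pool by downloading the security domain, wrapped with
# customer-controlled RSA public keys (here, a quorum of 2 out of 3
# keys is needed to restore it).
az keyvault security-domain download \
  --hsm-name mhsm-sovereign-demo \
  --sd-wrapping-keys cert1.pem cert2.pem cert3.pem \
  --sd-quorum 2 \
  --security-domain-file mhsm-sd.json
```

The downloaded security domain file, and the private keys that wrap it, stay with you; Microsoft cannot restore or move the pool without them, which is the sovereignty-relevant property.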
Source: Azure Key Vault Managed HSM overview (learn.microsoft.com), Managed HSM technical details (learn.microsoft.com), About keys (learn.microsoft.com) Encryption in Transit: MACsec + TLS On the wire, Azure provides two layers of transit encryption: IEEE 802.1AE MACsec. Microsoft documentation describes the use of MACsec on portions of the Azure backbone for in-network encryption on supported links. Availability and coverage can vary by scenario; customers should refer to current documentation for details. TLS. Azure services support TLS for client-to-service connections. Supported TLS versions and configuration requirements vary by service; customers should validate the specific service and endpoint configuration they use. Together, these mechanisms help protect data in transit at different layers, depending on the service and network path used. Layer 2 Summary
- Data at rest (default): AES-256 with platform-managed keys (automatic, no configuration needed)
- CMK, software keys: Key Vault Standard (FIPS 140-2 L1, multi-tenant, lowest cost)
- CMK, HSM-backed keys: Key Vault Premium (FIPS 140-3 L3 on new hardware, multi-tenant)
- CMK, dedicated HSM: Managed HSM (FIPS 140-3 L3, single-tenant, security domain)
- Data in transit, infrastructure: MACsec (IEEE 802.1AE); coverage varies by link and scenario, refer to current documentation
- Data in transit, client: TLS 1.2+; supported versions vary by service and configuration
Trusted Launch and protection of data at rest Trusted Launch is a security feature available for Azure Virtual Machines that helps protect against advanced threats such as rootkits and bootkits. It enables secure boot and virtual Trusted Platform Module (vTPM) on supported VM sizes, ensuring that only signed and verified operating system binaries are loaded during startup. This provides enhanced integrity for the boot process and helps organizations meet compliance requirements for workloads running in the cloud.
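Enabling the secure boot and vTPM features just described is a deployment-time choice. A minimal sketch with the Azure CLI follows; the VM name, image alias, and resource group are illustrative, and supported sizes should be confirmed for the region:

```shell
# A Gen2 VM with Trusted Launch: secure boot plus a virtual TPM.
az vm create \
  --name vm-trusted-demo \
  --resource-group rg-sovereign-demo \
  --location belgiumcentral \
  --image Ubuntu2204 \
  --security-type TrustedLaunch \
  --enable-secure-boot true \
  --enable-vtpm true
```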
By leveraging Trusted Launch, customers can monitor and attest to the health of their VMs at boot time, making it easier to detect and respond to potential tampering or compromise. The combination of secure boot and vTPM strengthens the security posture of Azure VMs, offering greater protection for sensitive workloads. Additionally, Trusted Launch strengthens data‑at‑rest protection by isolating encryption keys in a platform‑managed vTPM, binding key release to verified boot integrity, and preventing offline or unauthorized reuse of encrypted disks, even by privileged administrators. Source: Trusted Launch for Azure virtual machines Layer 3: Confidential Computing Layers 1 and 2 protect data where it lives and while it moves. Layer 3 closes the final gap: protecting data while it's being processed in memory. This is the domain of Azure Confidential Computing, and it's where things get genuinely interesting from a sovereignty perspective. Azure Confidential Computing is designed to help reduce certain operator-access risks by using hardware-backed isolation for data while it is being processed in memory. Confidential Virtual Machines Azure Confidential VMs use specialized hardware to create a Trusted Execution Environment (TEE) at the VM level. Two technology families are available: AMD SEV-SNP (DCasv6 / DCadsv6 / ECasv6 / ECadsv6 series) These VMs use AMD's Secure Encrypted Virtualization with Secure Nested Paging. The key properties: The VM's memory is encrypted with keys generated by the AMD processor. These keys are designed to remain within the CPU boundary. The platform is designed to help protect VM memory and state from access by the hypervisor and host management code. Supports Confidential OS disk encryption with either platform-managed keys (PMK) or customer-managed keys (CMK), binding encryption to the VM's virtual TPM on supported configurations. Each VM uses a virtual TPM (vTPM) for key sealing and integrity measurement. 
Intel TDX (DCesv6 / DCedsv6 series) These VMs use Intel Trust Domain Extensions, which provides full VM memory encryption and integrity protection: The entire VM runs inside a hardware-isolated Trust Domain (TD), designed to help protect data in memory from the hypervisor and host management code. Memory encryption and integrity are enforced by the Intel CPU using dedicated encryption keys per TD. Supports Confidential OS disk encryption (PMK/CMK) and vTPM integration on supported configurations. Additional performance characteristics and hardware details vary by VM size and generation; refer to the current VM size documentation for specifics. The AMD SEV-SNP VM families are currently available in Preview in Azure Belgium Central, with GA planned. The Intel SKU is not currently available in Azure Belgium Central. Source: About Azure confidential VMs (learn.microsoft.com), DC family VM sizes (learn.microsoft.com), Intel TDX confidential VMs GA announcement (techcommunity.microsoft.com) Azure Attestation: Trust, but Verify Confidential computing isn't just about encryption. It's about verifiable trust. Azure Attestation is a free service that validates the integrity of the hardware and firmware environment before your workload runs. Here's how platform attestation works for AMD SEV-SNP and Intel TDX Confidential VMs: When a confidential VM boots, the hardware generates an attestation report containing firmware and platform measurements (an SNP report for AMD, a TDX quote for Intel). Azure Attestation evaluates this report against expected values. Only if the platform passes attestation are decryption keys released from your Key Vault or Managed HSM. These keys unlock the vTPM state and the encrypted OS disk, and the VM starts. If the platform does not meet the attestation policy, key release can be blocked and the VM may not start, depending on configuration. 
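The boot-time key-release flow above starts with how the VM is created. A hedged Azure CLI sketch of an AMD SEV-SNP confidential VM with confidential OS disk encryption follows; the size, image alias, and names are illustrative, the image may need to be replaced with a CVM-capable offer in your subscription, and series availability in Belgium Central should be verified:

```shell
# An AMD SEV-SNP confidential VM. DiskWithVMGuestState encrypts the
# OS disk and VM guest state, bound to the vTPM, so disk decryption
# keys are only released after attestation succeeds.
az vm create \
  --name vm-cvm-demo \
  --resource-group rg-sovereign-demo \
  --location belgiumcentral \
  --size Standard_DC4as_v5 \
  --image Ubuntu2204 \
  --security-type ConfidentialVM \
  --os-disk-security-encryption-type DiskWithVMGuestState \
  --enable-secure-boot true \
  --enable-vtpm true
```

With a customer-managed key configured via a disk encryption set, the same flow ties disk unlock to a key in your Key Vault or Managed HSM, connecting Layer 3 back to Layer 2.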
In addition to platform attestation, customers can perform guest-initiated attestation from within the CVM to independently verify the VM's measured hardware and runtime state. This allows applications running inside a confidential VM to obtain an attestation token at runtime, which they can present to relying parties (like a key vault or external service) to prove they are executing in a genuine TEE. This can help reduce reliance on implicit trust by providing cryptographic evidence about the environment at boot and, where implemented, at runtime. Azure Attestation availability is region-dependent; customers should verify current availability in Belgium Central and select the appropriate provider configuration for their scenario. Source: Azure Attestation overview (learn.microsoft.com), Attestation types and scenarios (learn.microsoft.com) Confidential Computing on AKS For containerized workloads, Azure Kubernetes Service supports confidential computing through confidential node pools. You can add node pools backed by supported confidential VM sizes alongside regular node pools in the same cluster. In this model, the worker node runs as a confidential VM, so the node's memory is hardware-protected from the host and hypervisor. Containers scheduled onto that node can run without application refactoring, but the added protection is at the VM/node level. Exact region and SKU availability should be validated for the sizes you plan to deploy. AKS support for confidential VM sizes today includes AMD SEV-SNP with Intel TDX on the roadmap; customers should validate region and SKU availability for the exact AKS node pool sizes they intend to use. Azure Attestation can be integrated into confidential computing architectures on AKS to verify the trust state of nodes or workloads before secrets are released.
This is typically implemented at the workload or confidential container level and is not enforced automatically for all AKS pods. Source: Confidential VM node pools on AKS (learn.microsoft.com), Use CVM in AKS (learn.microsoft.com) The Full Data Protection Chain When you combine all three layers, the protection chain when using confidential VMs in Belgium Central looks like this:
1. Confidential VM boots: the hardware TEE encrypts VM memory (SEV-SNP or TDX, CPU-generated keys)
2. Azure Attestation validates the platform report (SNP report or TDX quote)
3. Key Vault (Premium) or Managed HSM conditionally releases disk decryption keys
4. vTPM state is unlocked, the OS disk is decrypted, and the VM starts
5. Data in memory: encrypted and isolated by the hardware TEE (Layer 3, Confidential Computing)
6. Data at rest: encrypted by CMK from Key Vault / Managed HSM (Layer 2, Encryption)
7. Data in transit: protected using TLS, and MACsec on selected Azure backbone links (Layer 2, Encryption)
8. Data stored and processed in Belgium Central where supported and as configured (Layer 1, Data Residency)
These controls are designed to reduce operator-access risk through hardware-backed isolation, attestation, and customer-controlled key options. The exact protection level depends on the selected service, SKU, region, and configuration. Bringing It All Together Here's the sovereignty stack for Azure Belgium Central in one view:
- Layer 1, Data Residency: protects where data lives. Key technologies: 3 Availability Zones, non-paired region, ZRS. Availability: GA; no cross-border replication by default.
- Layer 2, Encryption: protects data at rest and in transit. Key technologies: CMK, Key Vault (Standard/Premium), Managed HSM, MACsec, TLS. Availability: GA; all three Key Vault tiers available in-region.
- Layer 3, Confidential Computing: protects data in use (memory). Key technologies: SEV-SNP / TDX VMs, Attestation, AKS. Availability varies by SKU and region.
Confirm confidential VM options (AMD/Intel), attestation, and AKS confidential node support for Belgium Central for the exact sizes you plan to use. Each layer is independently valuable, but the combination can help customers implement stronger technical controls for data residency, encryption, and in-use protection, subject to the specific services, SKUs, regions, and configurations selected. A Few Honest Caveats Because I want to keep this honest and useful: Check regional availability for specific SKUs. Availability can vary by region and can change over time. Before finalizing an architecture, confirm that the exact services and SKUs you plan to use are available in Azure Belgium Central (for example, specific confidential VM sizes, Azure Attestation, Managed HSM, and AKS node pool sizes) using the Azure products-by-region information. Sovereignty is not just technical. The layers above cover technical sovereignty: where data is, who encrypts it, and who can access it in memory. Legal sovereignty (jurisdiction, government access requests, contractual commitments) is a separate conversation. Managed HSM has different pricing and operational characteristics. Managed HSM uses pool-based billing and may require additional operational steps compared to Key Vault. Key Vault Premium supports HSM-backed keys in a multi-tenant model, which may be sufficient for many CMK scenarios. Select the option that meets your compliance and operational requirements. Confidential VM capabilities and integrations vary by VM size, generation, and feature. Some scenarios and integrations (for example, certain backup/DR options, live migration behaviors, accelerated networking, or resize paths) may be limited for specific confidential VM offerings. Validate the current limitations and supported features for the exact confidential VM series and region you plan to use, and plan DR based on the services and mechanisms supported for your scenario.
These limitations are being actively worked on. Disclosure: Disaster recovery (DR) design and configuration remain a customer responsibility, including selecting a secondary region and implementing replication, failover, testing, and operational runbooks. Azure service availability and specific features can vary by region, SKU, and deployment model, and may change over time. Replication scope and behavior (in-zone, zone-redundant, regional, or cross-region) are service-specific and depend on the redundancy option selected; validate the data residency and replication details for each service in your architecture. References
- Microsoft Digital AmBEtion (microsoft.com/en-be)
- The ABC of Azure Belgium Central (Microsoft Community Hub)
- Azure region pairs and nonpaired regions (learn.microsoft.com)
- Azure encryption overview (learn.microsoft.com)
- Double encryption in Azure (learn.microsoft.com)
- Azure Key Vault Managed HSM overview (learn.microsoft.com)
- Managed HSM technical details (learn.microsoft.com)
- About keys (learn.microsoft.com)
- About Azure confidential VMs (learn.microsoft.com)
- DC family VM sizes (learn.microsoft.com)
- Confidential VM FAQ (learn.microsoft.com)
- Intel TDX confidential VMs GA announcement (techcommunity.microsoft.com)
- Confidential VM node pools on AKS (learn.microsoft.com)
- Use CVM in AKS (learn.microsoft.com)
- Azure Attestation overview (learn.microsoft.com)
- Attestation types and scenarios (learn.microsoft.com)
- Azure products by region (azure.microsoft.com)
- Trusted Launch for Azure virtual machines (learn.microsoft.com)

GA: DCasv6 and ECasv6 confidential VMs based on 4th Generation AMD EPYC™ processors
Today, Azure has expanded its confidential computing offerings with the general availability of the DCasv6 and ECasv6 confidential VMs. Regional availability
- Jan 30 2026: Canada Central, Canada East, Norway East, Norway West, Italy North, Germany North, France South, Australia East, West US, West US 3, Germany West Central
- Sep 16 2025: Korea Central, South Africa North, Switzerland North, UAE North, UK South, West Central US
These VMs are powered by 4th generation AMD EPYC™ processors and feature advanced Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP) technology. These confidential VMs offer:
- Hardware-rooted attestation
- Memory encryption in multi-tenant environments
- Enhanced data confidentiality
- Protection against cloud operators, administrators, and insider threats
You can get started today by creating confidential VMs in the Azure portal as explained here. Highlights:
- 4th generation AMD EPYC processors with SEV-SNP
- 25% performance improvement over previous generation
- Ability to rotate keys online
- AES-256 memory encryption enabled by default
- Up to 96 vCPUs and 672 GiB RAM for demanding workloads
Streamlined Security Organizations in certain regulated industries and sovereign customers migrating to Microsoft Azure need strict security and compliance across all layers of the stack. With Azure Confidential VMs, organizations can ensure the integrity of the boot sequence and the OS kernel while helping administrators safeguard sensitive data against advanced and persistent threats. The DCasv6 and ECasv6 family of confidential VMs support online key rotation to give organizations the ability to dynamically adapt their defenses to rapidly evolving threats. Additionally, these new VMs include AES-256 memory encryption as a default feature. Customers have the option to use Virtualization-Based Security (VBS) in Windows, currently in preview, to protect private keys from exfiltration via the guest OS or applications.
With VBS enabled, keys are isolated within a secure process, allowing key operations to be carried out without exposing them outside this environment. Faster Performance In addition to the newly announced security upgrades, the new DCasv6 and ECasv6 family of confidential VMs have demonstrated up to 25% improvement in various benchmarks compared to our previous generation of confidential VMs powered by AMD. Organizations that need to run complex workflows like combining multiple private data sets to perform joint analysis, medical research or Confidential AI services can use these new VMs to accelerate their sensitive workload faster than ever before. "While we began our journey with v5 confidential VMs, now we’re seeing noticeable performance improvements with the new v6 confidential VMs based on 4th Gen AMD EPYC “Genoa” processors. These latest confidential VMs are being rolled out across many Azure regions worldwide, including the UAE. So as v6 becomes available in more regions, we can deploy AMD based confidential computing wherever we need, with the same consistency and higher performance." — Mohammed Retmi, Vice President - Sovereign Public Cloud, at Core42, a G42 company. "KT is leveraging Azure confidential computing to secure sensitive and regulated data from its telco business in the cloud. With new V6 CVM offerings in Korea Central Region, KT extends its use to help Korean customers with enhanced security requirements, including regulated industries, benefit from the highest data protection as well as the fastest performance by the latest AMD SEV-SNP technology through its Secure Public Cloud built with Azure confidential computing." — Woojin Jung, EVP, KT Corporation Kubernetes support Deploy resilient, globally available applications on confidential VMs with our managed Kubernetes experience - Azure Kubernetes Service (AKS). 
AKS now supports the new DCasv6 and ECasv6 family of confidential VMs, enabling organizations to easily deploy, scale, and manage confidential Kubernetes clusters on Azure, streamlining developer workflows and reducing manual tasks with integrated continuous integration and continuous delivery (CI/CD) pipelines. AKS brings integrated monitoring and logging to confidential VM node pools, with in-depth performance and health insights into the clusters and containerized applications. Azure Linux 3.0 and Ubuntu 24.04 support are now in preview. AKS integration in this generation of confidential VMs also brings support for Azure Linux 3.0, which contains only the most essential packages for resource efficiency and ships a secure, hardened Linux kernel specifically tuned for Azure cloud deployments. Ubuntu 24.04 clusters are also supported in addition to Azure Linux 3.0. Organizations wanting to ease the orchestration issues associated with deploying, scaling, and managing hundreds of confidential VM node pools can now choose either of these two OS images for their node pools. General purpose & Memory-intensive workloads Featuring general purpose optimized memory-to-vCPU ratios and support for up to 96 vCPUs and 384 GiB RAM, the DCasv6-series delivers enterprise-grade performance. The DCasv6-series enables organizations to run sensitive workloads with hardware-based security guarantees, making them ideal for applications processing regulated or confidential data. For more memory-demanding workloads that exceed even the capabilities of the DCasv6 series, the new ECasv6-series offers high memory-to-vCPU ratios with increased scalability up to 96 vCPUs and 672 GiB of RAM, nearly doubling the memory capacity of DCasv6. You can get started today by creating confidential VMs in the Azure portal as explained here.
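Adding a confidential node pool to an existing cluster can be sketched as follows. Cluster, group, and pool names are placeholders, and the exact v6 size name and its AKS availability in your region should be confirmed against the products-by-region listing:

```shell
# Add a confidential VM node pool to an existing AKS cluster.
az aks nodepool add \
  --cluster-name aks-demo \
  --resource-group rg-demo \
  --name cvmpool \
  --node-count 3 \
  --node-vm-size Standard_DC4as_v6

# AKS labels nodes with their pool name, so sensitive workloads can
# be pinned to the confidential pool with a nodeSelector on this label.
kubectl get nodes -l agentpool=cvmpool
```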
Additional Resources:
- Quickstart: Create confidential VM with Azure portal
- Quickstart: Create confidential VM with ARM template
- Azure confidential virtual machines FAQ

🚀✨ Get ready for a power-packed November with the Microsoft Zero to Hero Community! ✨🚀
From modernizing legacy applications to building intelligent AI agents, this month is all about innovation, security, and smarter cloud solutions. Whether you’re exploring Azure Service Bus, learning AI on AKS, or discovering how Copilot Studio can extend your AI capabilities, we’ve got something for everyone. 🌍💪 Our incredible lineup of global speakers will help you modernize, automate, and innovate with real-world insights across Azure, AI, and app development. 🌟

💡 November Highlights:

📢 Matthew Hess
📖 Get on the Bus! - The Azure Service Bus
📅 November 8, 2025 – 06:00 PM CET
🔗 https://streamyard.com/watch/jTD8RpCcrvAD?wt.mc_id=MVP_350258

📢 Jonathan "J." Tower
📖 Old to Gold: How to Modernize Your Legacy ASP.NET Apps Gradually
📅 November 15, 2025 – 06:00 PM CET
🔗 https://streamyard.com/watch/9cwXWNSeCW8R?wt.mc_id=MVP_350258

📢 Dharanidharan Balasubramaniam
📖 Build and Extend AI Agents with Microsoft Copilot Studio
📅 November 17, 2025 – 09:00 AM CET / 07:00 PM AEDT
🔗 https://streamyard.com/watch/bfcqHQsYQjNz?wt.mc_id=MVP_350258

📢 Lee Markum
📖 Modern SQL Server Features That Make Life Better
📅 November 22, 2025 – 06:00 PM CET
🔗 https://streamyard.com/watch/D4kqAMh83PUq?wt.mc_id=MVP_350258

📢 Thiago Shimada Ramos
📖 Building Intelligent Applications: Quick Guide to AI on AKS
📅 November 25, 2025 – 09:00 AM CET / 07:00 PM AEDT
🔗 https://streamyard.com/watch/D8mhvsJFEqCS?wt.mc_id=MVP_350258

📢 Wim Matthyssen
📖 Azure Bastion: One does (still) not simply walk into my VNet! v4.00
📅 November 29, 2025 – 06:00 PM CET
🔗 https://streamyard.com/watch/t6VZxDndvSkA?wt.mc_id=MVP_350258

🌎 With sessions across multiple time zones, from Europe to Australia, there’s always an opportunity to learn, connect, and grow. ✨ Don’t miss out on this month’s journey to modernization, intelligence, and security in the Microsoft ecosystem.

Copa: An Image Vulnerability Patching Tool
Securing container images is paramount, especially with the widespread adoption of containerization technologies like Docker and Kubernetes. Microsoft has recognized the need for robust image security solutions and has introduced Copa, an open-source tool designed to keep container images secure and address vulnerabilities swiftly. Learn about Copa in this blog.

Built a Real-Time Azure AI + AKS + DevOps Project – Looking for Feedback
Hi everyone, I recently completed a real-time project using Microsoft Azure services to build a cloud-native healthcare monitoring system. The key services used include: Azure AI (Cognitive Services, OpenAI), Azure Kubernetes Service (AKS), Azure DevOps and GitHub Actions, Azure Monitor, Key Vault, API Management, and others. The project focuses on real-time health risk prediction using simulated sensor data. It's built with containerized microservices, infrastructure as code, and end-to-end automation. GitHub link (with source code and documentation): https://github.com/kavin3021/AI-Driven-Predictive-Healthcare-Ecosystem I would really appreciate your feedback or suggestions to improve the solution. Thank you!

Securing Kubernetes Applications with Ingress Controller and Content Security Policy
In this guide, we’ll walk through an end-to-end example showing how to: Install the NGINX Ingress Controller as a DaemonSet Configure it to automatically inject a Content Security Policy (CSP) header Deploy a simple “Hello World” NGINX app (myapp) Tie everything together with an Ingress resource that routes traffic and verifies CSP By the end, you’ll have a pattern where: Every request to your application carries a strong CSP header Your application code remains unchanged (CSP is injected at the gateway layer) You can test both inside the cluster and externally to confirm CSP is enforced Why CSP Matters at the Ingress Layer Content Security Policy (CSP) is an HTTP header that helps mitigate cross-site scripting (XSS) and other injection attacks by specifying which sources of content are allowed. Injecting CSP at the Ingress level has three big advantages: Centralized Enforcement: Instead of configuring CSP in each application pod, you define it once in the Ingress controller. All apps behind that controller automatically inherit the policy. Minimal Application Changes: Your Docker images and web servers remain untouched; security lives at the edge (the Ingress). Consistency and Auditability: You can update the CSP policy in one place (the controller’s ConfigMap) and immediately protect every Ingress without modifying application deployments. What are the steps involved? a. Install NGINX Ingress Controller as a DaemonSet with CSP Enabled We want the controller to run one Pod per node (High Availability), and we also want it to inject a CSP header globally. Helm makes this easy: 1. Add the official ingress-nginx repository helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx helm repo update 2.
Install (or upgrade) the chart with the following values:

- controller.kind=DaemonSet -> run one controller Pod on each node
- controller.config.enable-snippets=true -> allow advanced NGINX snippets
- controller.config.server-snippet="add_header Content-Security-Policy …" -> define the CSP header

helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.kind=DaemonSet \
  --set controller.nodeSelector."kubernetes\.io/os"=linux \
  --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
  --set controller.service.externalTrafficPolicy=Local \
  --set defaultBackend.image.image=defaultbackend-amd64:1.5 \
  --set controller.config.enable-snippets=true \
  --set-string controller.config.server-snippet="add_header Content-Security-Policy \"default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self'; object-src 'none'; frame-ancestors 'self'; frame-src 'self'; connect-src 'self'; upgrade-insecure-requests; report-uri /csp-report\" always;"

What this does:

- Installs (or upgrades) the ingress-nginx chart into the ingress-nginx namespace.
- Runs the controller as a DaemonSet (one Pod per node) on Linux nodes.
- Sets enable-snippets: "true" in the controller’s ConfigMap.
- Defines a server-snippet so every NGINX server block will contain:

add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self'; object-src 'none'; frame-ancestors 'self'; frame-src 'self'; connect-src 'self'; upgrade-insecure-requests; report-uri /csp-report" always;

- Exposes the ingress controller via a LoadBalancer (or NodePort) IP, depending on your cluster.

This adds the following CSP entry to the ingress controller's ConfigMap.
apiVersion: v1
data:
  enable-snippets: "true"
  server-snippet: add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self'; object-src 'none'; frame-ancestors 'self'; frame-src 'self'; connect-src 'self'; upgrade-insecure-requests; report-uri /csp-report" always;
kind: ConfigMap
...

Validation can be done using this command:

kubectl edit configmap ingress-nginx-controller -n ingress-nginx

Another option is to use a patch with a restart (avoids the manual edit):

kubectl patch configmap ingress-nginx-controller \
  -n ingress-nginx \
  --type=merge \
  -p '{"data":{"enable-snippets":"true","server-snippet":"\n add_header Content-Security-Policy \"default-src '\''self'\''; script-src '\''self'\''; style-src '\''self'\''; img-src '\''self'\''; object-src '\''none'\''; frame-ancestors '\''self'\''; frame-src '\''self'\''; connect-src '\''self'\''; upgrade-insecure-requests; report-uri /csp-report\" always;"}}' \
  && kubectl rollout restart daemonset/ingress-nginx-controller -n ingress-nginx

3. Roll out and verify

kubectl rollout restart daemonset/ingress-nginx-controller -n ingress-nginx
kubectl rollout status daemonset/ingress-nginx-controller -n ingress-nginx

You should see one Pod per node in “Running” state.

4. (Optional) Inspect the controller’s ConfigMap

kubectl get configmap ingress-nginx-controller -n ingress-nginx -o yaml

Under data: you’ll find:

enable-snippets: "true"
server-snippet: |
  add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self'; object-src 'none'; frame-ancestors 'self'; frame-src 'self'; connect-src 'self'; upgrade-insecure-requests; report-uri /csp-report" always;

At this point, your NGINX Ingress Controller is running as a DaemonSet, and it will inject the specified CSP header on every response for any Ingress it serves.

b. Deploy a Simple 'myapp' NGINX Application

Next, we’ll deploy a minimal NGINX app (labeled app=myapp) so we can confirm routing and CSP.

1.
Deployment (save as myapp-deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  namespace: default
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp-container
          image: nginx:latest  # Replace with your custom image as needed
          ports:
            - containerPort: 80

2. Service (save as myapp-service.yaml):

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  namespace: default
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP

3. Apply both resources:

kubectl apply -f myapp-deployment.yaml -f myapp-service.yaml

4. Verify your Pods and Service:

kubectl get deployments,svc -l app=myapp -n default

You should see:

deployment/myapp-deployment   1/1   1   1   30s
service/myapp-service   ClusterIP   10.244.x.y   <none>   80/TCP   30s

5. Test the Service directly (inside the cluster):

kubectl run curl-test --image=radial/busyboxplus:curl \
  --restart=Never --rm -it -- sh -c "curl -I http://myapp-service.default.svc.cluster.local/"

Expected output:

HTTP/1.1 200 OK
Server: nginx/1.23.x
Date: ...
Content-Type: text/html
Content-Length: 612
Connection: keep-alive

At this point, your application is up and running, accessible at http://myapp-service.default.svc.cluster.local from within the cluster.

c. Create an Ingress to Route to myapp-service

Now that the Ingress Controller and app Service are in place, let’s configure an Ingress resource:

1. Ingress definition (save as myapp-ingress.yaml):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  namespace: default
spec:
  ingressClassName: "nginx"
  rules:
    - host: myapp.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-service
                port:
                  number: 80

2. Apply the Ingress:

kubectl apply -f myapp-ingress.yaml

3.
Verify that the Ingress is registered:

kubectl get ingress myapp-ingress -n default

You should see:

NAME            CLASS   HOSTS         ADDRESS     PORTS   AGE
myapp-ingress   nginx   myapp.local   <pending>   80      10s

The ADDRESS may initially show <pending>; in cloud environments, it will eventually become a LoadBalancer IP.

d. Verify End-to-End (Ingress + CSP): From Inside the Cluster

1. Run a curl pod that sends a request to the Ingress Controller’s internal DNS, setting 'Host: myapp.local':

kubectl run curl-test --image=radial/busyboxplus:curl \
  --restart=Never --rm -it -- sh -c \
  "curl -I http://ingress-nginx-controller.ingress-nginx.svc.cluster.local/ -H 'Host: myapp.local'"

2. Expected output:

HTTP/1.1 200 OK
Server: nginx/1.23.x
Date: Wed, 05 Jun 2025 12:05:00 GMT
Content-Type: text/html
Content-Length: 612
Connection: keep-alive
Content-Security-Policy: default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self'; object-src 'none'; frame-ancestors 'self'; frame-src 'self'; connect-src 'self'; upgrade-insecure-requests; report-uri /csp-report

Notice the Content-Security-Policy header. This confirms the controller has injected our CSP.

e. Verify End-to-End (Ingress + CSP): From Your Laptop or Browser (Optional)

Retrieve the external IP (if your Service is a LoadBalancer):

kubectl get svc ingress-nginx-controller -n ingress-nginx

Once you have an external IP (e.g. 52.251.20.187), add this to your /etc/hosts:

52.251.20.187 myapp.local

Then in your terminal or browser:

curl -I http://myapp.local/

Alternatively, you can port-forward and curl to validate that the app is running:

kubectl port-forward service/ingress-nginx-controller 8080:80 -n ingress-nginx

Add to /etc/hosts:

127.0.0.1 myapp.local

Then:

curl -I http://myapp.local:8080/

You should again see the 200 OK response along with the Content-Security-Policy header.

Recap & Best Practices

1. Run the Ingress Controller as a DaemonSet

- Ensures one Pod per node for true high availability.
- Achieved by --set controller.kind=DaemonSet in Helm.

2.
Enable Snippets & Inject CSP Globally

- --set controller.config.enable-snippets=true turns on snippet support.
- --set-string controller.config.server-snippet="add_header Content-Security-Policy …" inserts a literal block into the controller’s ConfigMap.
- This causes every server block (all Ingresses) to include your CSP header without modifying individual Ingress manifests.

3. Keep Your Apps Unchanged

- The CSP lives in the Ingress Controller, not in your app pods.
- You can standardize security across all Ingresses by adjusting the single CSP snippet.

4. Deploy Your Application, Service, Ingress, and Test

- We used a minimal NGINX “myapp” Deployment + ClusterIP Service -> Ingress -> Controller -> CSP injection.
- Verified inside the cluster with curl -I … -H "Host: myapp.local" that the CSP header appears.
- Optionally tested from outside via /etc/hosts or a LoadBalancer IP.

5. Next Steps

- Adjust the CSP policy to fit your application’s needs; e.g., if you load scripts from a CDN, add that domain under script-src.
- Add additional security headers (HSTS, X-Frame-Options, etc.) by appending more add_header lines in server-snippet.
- If you have multiple Ingress Controllers, repeat the same pattern for each.
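As a sketch of that "additional security headers" step, the controller ConfigMap's server-snippet could be extended alongside the CSP line used in this guide. The HSTS, X-Frame-Options, and X-Content-Type-Options values below are common example defaults, not prescriptions; verify them against your own requirements before use:

```yaml
# Illustrative data fragment for the ingress-nginx-controller ConfigMap.
# The CSP line matches this guide; the other header values are examples only.
data:
  enable-snippets: "true"
  server-snippet: |
    add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self'; object-src 'none'; frame-ancestors 'self'; frame-src 'self'; connect-src 'self'; upgrade-insecure-requests; report-uri /csp-report" always;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
```

After updating the ConfigMap, roll the change out with kubectl rollout restart daemonset/ingress-nginx-controller -n ingress-nginx, as shown earlier.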
Full Commands (for Reference)

# 1) Install/Upgrade NGINX Ingress Controller (DaemonSet + CSP)
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.kind=DaemonSet \
  --set controller.nodeSelector."kubernetes\.io/os"=linux \
  --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
  --set controller.service.externalTrafficPolicy=Local \
  --set defaultBackend.image.image=defaultbackend-amd64:1.5 \
  --set controller.config.enable-snippets=true \
  --set-string controller.config.server-snippet="add_header Content-Security-Policy \"default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self'; object-src 'none'; frame-ancestors 'self'; frame-src 'self'; connect-src 'self'; upgrade-insecure-requests; report-uri /csp-report\" always;"

# Wait for DaemonSet
kubectl rollout status daemonset/ingress-nginx-controller -n ingress-nginx

# 2) myapp-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  namespace: default
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp-container
          image: nginx:latest
          ports:
            - containerPort: 80
---
# 3) myapp-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  namespace: default
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP

# Apply Deployment + Service
kubectl apply -f myapp-deployment.yaml -f myapp-service.yaml

# Test the Service internally
kubectl run curl-test --image=radial/busyboxplus:curl --restart=Never --rm -it \
  -- sh -c "curl -I http://myapp-service.default.svc.cluster.local/"

# 4) myapp-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  namespace: default
spec:
  ingressClassName: "nginx"
  rules:
    - host: myapp.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-service
                port:
                  number: 80

# Apply the Ingress
kubectl apply -f myapp-ingress.yaml

# Verify and test with CSP
kubectl run curl-test --image=radial/busyboxplus:curl --restart=Never --rm -it -- sh -c \
  "curl -I http://ingress-nginx-controller.ingress-nginx.svc.cluster.local/ -H 'Host: myapp.local'"

With these steps in place, all traffic to myapp.local is routed through your NGINX Ingress Controller, and the strong CSP header is automatically applied at the gateway. This pattern can be adapted to any Kubernetes-hosted web application by injecting additional security headers, tailoring the CSP to your needs, and keeping your apps running unmodified. Happy “ingressing”!

Enable an Industrial Dataspace on Azure
What is an Industrial Dataspace?

An industrial dataspace is an environment designed to enable the secure and efficient exchange of data between different organizations within an industrial ecosystem. Developed by the International Data Spaces Association, it focuses on key principles such as data sovereignty, interoperability, and collaboration. These principles are crucial in the context of Industry 4.0, where interconnected systems and data-driven decision-making optimize industrial processes and create resilient supply chains. A tutorial with step-by-step instructions on how to enable an industrial dataspace on Azure is available here.

Use Case: Providing a Carbon Footprint for Produced Products

One of the most popular use cases for industrial dataspaces is providing the Product Carbon Footprint (PCF), an increasingly important factor in customers' buying decisions. The Greenhouse Gas Protocol is a common method for calculating the PCF, splitting the task into scope 1, scope 2, and scope 3 emissions. This example solution focuses on calculating scope 2 emissions from simulated production lines, using energy consumption data to determine the carbon footprint for each product.

Accessing the Reference Implementation

The Product Carbon Footprint reference implementation can be accessed here and deployed to Azure with a single click. During the installation workflow, all the required components are deployed to Azure. This reference implementation supports data modelling with the IEC standard Open Platform Communications Unified Architecture (OPC UA), aligned with the OPC Foundation Cloud Initiative. It also uses the IEC standard Asset Administration Shell (AAS) to provide product semantics, creating a Product Carbon Footprint AAS for simulated products and storing it in an AAS Repository.
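The scope 2 calculation described above boils down to multiplying measured energy consumption by a grid emission factor and attributing the result to the products of a production run. Here is a minimal illustrative Python sketch; the emission factor and production figures are invented for the example, and the reference implementation's actual logic may differ:

```python
# Illustrative scope 2 PCF calculation: CO2e from purchased electricity
# during a production run, attributed evenly to the products made in it.
# The emission factor below is a made-up placeholder, not real grid data.
GRID_EMISSION_FACTOR_KG_PER_KWH = 0.25

def scope2_pcf_per_product(energy_kwh: float, products_made: int,
                           factor: float = GRID_EMISSION_FACTOR_KG_PER_KWH) -> float:
    """Return kg CO2e attributed to each product from purchased electricity."""
    if products_made <= 0:
        raise ValueError("products_made must be positive")
    total_kg_co2e = energy_kwh * factor
    return total_kg_co2e / products_made

# A production line that consumed 120 kWh to make 60 products:
print(scope2_pcf_per_product(120.0, 60))  # → 0.5 kg CO2e per product
```

In the reference implementation, a per-product value like this would be carried in the product's Carbon Footprint AAS rather than computed ad hoc.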
Finally, the implementation uses the IEC/ISO standard Eclipse Dataspace Components (EDC) to establish the trust relationship between the manufacturer and the customer, enabling the actual PCF data transfer via an OpenAPI-compatible REST interface.

Conclusion

Enabling an industrial dataspace on Azure can help manufacturers meet regulatory requirements, optimize industrial processes, and improve customer engagement. By leveraging modern cloud technologies and standards to provide a secure and efficient data exchange environment, it ultimately drives transparency and sustainability in the manufacturing industry.

End-to-end TLS with AKS, Azure Front Door, Azure Private Link Service, and NGINX Ingress Controller
This article shows how Azure Front Door Premium can be configured to use a Private Link Service to expose an AKS-hosted workload through the NGINX Ingress Controller, which is set up to use a private IP address on the internal load balancer.