Securing Microservices with Cilium and Istio
The adoption of Kubernetes and containerized applications is booming, leading to new challenges in visibility and security. As the landscape of cloud-native applications rapidly evolves, so does the number of sophisticated attacks targeting containerized workloads. Traditional tools often fall short in tracking the usage and traffic flows within these applications. The immutable nature of container images and the short lifespan of containers further necessitate addressing vulnerabilities early in the delivery pipeline.

Comprehensive Security Controls in Kubernetes

Microsoft Azure offers a range of security controls to ensure comprehensive protection across various layers of the Kubernetes environment. These controls include, but are not limited to:

- Cluster Security: Features such as private clusters, managed cluster identity, and API server authorized ranges enhance security at the cluster level.
- Node and Pod Security: Hardened bootstrapping, confidential nodes, and pod sandboxing secure the nodes and pods within a cluster.
- Network Security: Advanced Container Networking Services and Cilium network policies offer granular control over network traffic.
- Authentication and Authorization: Azure Policy in-cluster enforcement, Entra authentication, and Istio mTLS and authorization policies provide robust identity and access management.
- Image Scanning: Microsoft Defender for Cloud provides both image and runtime scanning to identify vulnerabilities and threats.

Let's highlight how you can secure microservices while scaling your applications running on Azure Kubernetes Service (AKS), using a service mesh for robust traffic management and network policies for security.

Microsegmentation with Network Policies

Microsegmentation is crucial for enhancing security within Kubernetes clusters, allowing for the isolation of workloads and controlled traffic between microservices.
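As an illustrative sketch of microsegmentation (the namespace and labels below are hypothetical, not from this post), a Cilium policy can restrict ingress to a workload so that only one named peer may reach it; selecting the endpoint with an ingress rule implicitly denies all other ingress traffic to it:

```yaml
# Hypothetical example: only pods labeled app=checkout in namespace
# "shop" may send traffic to pods labeled app=payments. All other
# ingress to app=payments is denied once this policy selects it.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: payments-allow-checkout-only
  namespace: shop
spec:
  endpointSelector:
    matchLabels:
      app: payments
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: checkout
```

Because Cilium enforces this by workload identity (labels) rather than IP addresses, the policy keeps working as pods are rescheduled and their IPs change.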
Azure CNI by Cilium leverages eBPF to provide high-performance networking, security, and observability features. It dynamically inserts eBPF bytecode into the Linux kernel, offering efficient and flexible control over network traffic. Cilium network policies enable network isolation within and across Kubernetes clusters. Cilium also provides an identity-based security model, offers Layer 7 (L7) traffic control, and integrates deep observability for L4 to L7 metrics in Kubernetes clusters. A significant advantage of using Azure CNI based on Cilium is its seamless integration with existing AKS environments, requiring minimal modifications to your infrastructure. Note that Cilium Clusterwide Network Policy (CCNP) is not supported at the time of writing this blog post.

FQDN Filtering with Advanced Container Networking Services (ACNS)

Traditional IP-based policies can be cumbersome to maintain. ACNS allows for DNS-based policies, providing a more granular and user-friendly approach to managing network traffic. This is supported only with Azure CNI powered by Cilium and includes a security-agent DNS proxy for FQDN resolution, even during upgrades. It's worth noting that with Cilium's L7 enforcement, you can control traffic based on HTTP methods, paths, and headers, making it ideal for APIs, microservices, and services that use protocols like HTTP, gRPC, or Kafka. At the time of writing this blog, that capability is not supported via ACNS. More on this in a future blog!

AKS Istio Add-On: Mutual TLS (mTLS) and Authorization Policy

Istio enhances the security of microservices through its built-in features, including mutual TLS (mTLS) and authorization policies. The Istiod control plane, acting as a certificate authority, issues X.509 certificates to the Envoy sidecar proxies via the Secret Discovery Service (SDS). Integration with Azure Key Vault allows for secure management of root and intermediate certificates.
The PeerAuthentication custom resource in Istio controls the traffic accepted by workloads. By default, it is set to PERMISSIVE to facilitate migration, but it can be set to STRICT to enforce mTLS across the mesh. Istio also supports granular authorization policies, allowing for control over IP blocks, namespaces, service accounts, request paths, methods, and headers. The Istio add-on also supports integration with Azure Key Vault (AKV) and the AKV Secrets Store CSI Driver add-on for plug-in CA certificates, where the root CA lives on a secure machine offline, and the intermediate certificates for the Istiod control plane are synced to the cluster by the CSI Driver add-on. Additionally, certificates for the Istio ingress gateway, for TLS termination or SNI passthrough, can also be stored in AKV.

Defense-In-Depth with Cilium, ACNS and Istio

By combining the capabilities of Cilium's eBPF technologies through ACNS with the AKS-managed Istio add-on, AKS provides a defense-in-depth strategy for securing Kubernetes clusters. Azure CNI's Cilium network policies and ACNS FQDN filtering enforce pod-to-pod and pod-to-egress policies at Layers 3 and 4, while Istio enforces STRICT mTLS and Layer 7 authorization policies. This multi-layered approach ensures comprehensive security coverage across all layers of the stack. Now, let's highlight the key steps in achieving this:

Step 1: Create an AKS cluster with Azure CNI (by Cilium), ACNS and the Istio add-on enabled.

```shell
az aks create \
  --resource-group $RESOURCE_GROUP \
  --name $CLUSTER_NAME \
  --location $LOCATION \
  --kubernetes-version 1.30.0 \
  --node-count 3 \
  --node-vm-size standard_d16_v3 \
  --enable-managed-identity \
  --network-plugin azure \
  --network-dataplane cilium \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16 \
  --enable-asm \
  --enable-acns \
  --generate-ssh-keys
```

Step 2: Create a Cilium FQDN policy that allows egress traffic to google.com while blocking traffic to httpbin.org.
Sample policy (fqdn-filtering-policy.yaml):

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: sleep-network-policy
  namespace: foo
spec:
  endpointSelector:
    matchLabels:
      app: sleep
  egress:
  - toFQDNs:
    - matchPattern: "*.google.com"
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": foo
        "k8s:app": helloworld
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        "k8s:k8s-app": kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
```

Apply the policy:

```shell
kubectl apply -f fqdn-filtering-policy.yaml
```

Step 3: Create an Istio deny-by-default AuthorizationPolicy. This denies all requests across the mesh unless specifically authorized with an ALLOW policy.

Sample policy (istio-deny-all-authz.yaml):

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-nothing
  namespace: aks-istio-system
spec: {}
```

Apply the policy:

```shell
kubectl apply -f istio-deny-all-authz.yaml
```

Step 4: Deploy an Istio L7 AuthorizationPolicy to explicitly allow HTTP GET requests to the "sample" pod in namespace foo.

Sample policy (istio-L7-allow-policy.yaml):

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-get-requests
  namespace: foo
spec:
  selector:
    matchLabels:
      app: sample
  action: ALLOW
  rules:
  - to:
    - operation:
        methods: ["GET"]
```

Apply the policy:

```shell
kubectl apply -f istio-L7-allow-policy.yaml
```

Step 5: Deploy an Istio STRICT mTLS PeerAuthentication resource to enforce that all workloads in the mesh accept only Istio mTLS traffic.

Sample PeerAuthentication (istio-peerauth.yaml):

```yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: strict-mtls
  namespace: aks-istio-system
spec:
  mtls:
    mode: STRICT
```

Apply the policy:

```shell
kubectl apply -f istio-peerauth.yaml
```

These examples demonstrate how you can manage traffic to specific FQDNs and enforce L7 authorization rules in your AKS cluster.

Conclusion

Traditional IP and perimeter security models are insufficient for the dynamic nature of cloud-native environments.
More sophisticated security mechanisms, such as identity-based policies and DNS-name-based filtering, are required. Azure CNI powered by Cilium, together with ACNS, provides robust FQDN filtering and Layer 3/4 network policy enforcement. The Istio add-on offers mTLS for identity-based encryption and Layer 7 authorization policies. A defense-in-depth model, incorporating both Azure CNI and service mesh mechanisms, is recommended for maximizing security posture. So, give these a try and let us know (Azure Kubernetes Service Roadmap (Public)) how we can evolve our roadmap to help you build the best with Azure.

Credit(s): Niranjan Shankar, Sr. Software Engineer, Microsoft

Azure Front Door: Resiliency Series – Part 2: Faster recovery (RTO)
In Part 1 of this blog series, we outlined our four-pillar strategy for resiliency in Azure Front Door: configuration resiliency, data plane resiliency, tenant isolation, and accelerated Recovery Time Objective (RTO). Together, these pillars help Azure Front Door remain continuously available and resilient at global scale. Part 1 focused on the first two pillars: configuration and data plane resiliency. Our goal is to make configuration propagation safer, so incompatible changes never escape pre-production environments. We discussed how incompatible configurations are blocked early, and how data plane resiliency ensures the system continues serving traffic from a last-known-good (LKG) configuration even if a bad change manages to propagate. We also introduced 'Food Taster', a dedicated sacrificial process running in each edge server's data plane that pretests every configuration change in isolation, before it ever reaches the live data plane.

In this post, we turn to the recovery pillar. We describe the key enhancements we have made to the Azure Front Door recovery path so the system can return to full operation in a predictable and bounded timeframe. For a global service like Azure Front Door, serving hundreds of thousands of tenants across 210+ edge sites worldwide, we set an explicit target: to be able to recover any edge site – or all edge sites – within approximately 10 minutes, even in worst-case scenarios. In typical data plane crash scenarios, we expect recovery in under a second.

Repair status

The first blog post in this series mentioned the two Azure Front Door incidents from October 2025 – learn more by watching our Azure Incident Retrospective session recordings for the October 9th incident and/or the October 29th incident. Before diving into our platform investments for improving our Recovery Time Objectives (RTO), we wanted to provide a quick update on the overall repair items from these incidents.
We are pleased to report that the work on configuration propagation and data plane resiliency is now complete and fully deployed across the platform (in the table below, "Completed" means broadly deployed in production). With this, we have reduced configuration propagation latency from ~45 minutes to ~20 minutes. We anticipate reducing this even further – to ~15 minutes by the end of April 2026 – while ensuring that platform stability remains our top priority.

| Learning category | Goal | Repairs | Status |
|---|---|---|---|
| Safe customer configuration deployment | Incompatible configuration never propagates beyond EUAP or canary regions | Control plane and data plane defect fixes; forced synchronous configuration processing; additional stages with extended bake time; early detection of crash state | Completed |
| Data plane resiliency | Configuration processing cannot impact data plane availability | Manage data plane lifecycle to prevent outages caused by configuration-processing defects | Completed |
| | | Isolated work process in every data plane server to process and load the configuration | Completed |
| 100% Azure Front Door resiliency posture for Microsoft internal services | Microsoft operates an isolated, independent Active/Active fleet with automatic failover for critical Azure services | Phase 1: Onboarded the batch of critical services impacted in the Oct 29 outage, running on a day-old configuration | Completed |
| | | Phase 2: Automation and hardening of operations, auto-failover, and self-management of Azure Front Door onboarding for additional services | March 2026 |
| Recovery improvements | Data plane crash recovery in under 10 minutes | Data plane boot-up time optimized via local cache (~1 hour) | Completed |
| | | Accelerate recovery time to under 10 minutes | April 2026 |
| Tenant isolation | No configuration or traffic regression can impact other tenants | Micro-cellular Azure Front Door with ingress layered shards | June 2026 |

Why recovery at edge scale is deceptively hard

To understand why recovery took as long as it did, it helps to first understand how the Azure Front Door data plane processes configuration. Azure Front Door operates in 210+ edge sites with multiple servers per site. The data plane of each edge server hosts multiple processes. A master process orchestrates the lifecycle of multiple worker processes that serve customer traffic. A separate configuration translator process runs alongside the data plane processes and is responsible for converting customer configuration bundles from the control plane into optimized binary FlatBuffer files. This translation step, covering hundreds of thousands of tenants, represents hours of cumulative computation. A per-server cache is kept locally at each edge server to enable fast recovery of the data plane, if needed. Once the configuration translator process produces these FlatBuffer files, each worker processes them independently and memory-maps them for zero-copy access. Configuration updates flow through a two-phase commit: new FlatBuffers are first loaded into a staging area and validated, then atomically swapped into production maps.
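The staged-then-swap pattern can be sketched in a few lines of Python. This is a simplified illustration only – the class, field names, and validation rule are hypothetical, not Azure Front Door's actual implementation:

```python
# Illustrative two-phase configuration commit: load into staging,
# validate, then swap the live map in a single reference assignment
# so readers see either the old or the new config, never a mixture.

def validate(config: dict) -> bool:
    # Stand-in validation rule: every tenant entry must carry a version.
    return all("version" in v for v in config.values())

class ConfigStore:
    def __init__(self):
        self.live: dict = {}     # what workers serve traffic from
        self.staging: dict = {}  # where new configs are prepared

    def stage(self, new_config: dict) -> None:
        self.staging = new_config          # phase 1: load into staging

    def commit(self) -> bool:
        if not validate(self.staging):
            return False                   # bad config never reaches live
        # Phase 2: atomic swap of the production map.
        self.live, self.staging = self.staging, {}
        return True

store = ConfigStore()
store.stage({"tenant-a": {"version": 2}})
assert store.commit() is True
assert store.live == {"tenant-a": {"version": 2}}

store.stage({"tenant-b": {}})              # invalid: missing version
assert store.commit() is False
assert "tenant-b" not in store.live       # live config untouched
```

The key property is that a failed validation leaves the live map exactly as it was, which is what allows a bad configuration push to be rejected without any impact on traffic being served.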
In-flight requests continue using the old configuration until the last request referencing it completes. The data plane recovery process is designed to be resilient to different failure modes. A failure or crash at the worker process level has a typical recovery time of less than one second. Since each server has multiple worker processes serving customer traffic, this type of crash has no impact on the data plane. In the case of a master process crash, the system automatically tries to recover using the local cache. When the local cache is reused, the system is able to recover quickly – in approximately 60 minutes – since most of the configurations in the cache were already loaded into the data plane before the crash. However, in certain cases, if the cache becomes unavailable or must be invalidated because of corruption, the recovery time increases significantly.

During the October 29th incident, a data plane crash triggered a complete recovery sequence that took approximately 4.5 hours. This was not because restarting a process is slow; it is because a defect in the recovery process invalidated the local cache, which meant that "restart" meant rebuilding everything from scratch. The configuration translator process then had to re-fetch and re-translate every one of the hundreds of thousands of customer configurations before workers could memory-map them and begin serving traffic. This experience crystallized three fundamental learnings related to our recovery path:

- Expensive rework: A subset of crashes discarded all previously translated FlatBuffer artifacts, forcing the configuration translator process to repeat hours of conversion work that had already been validated and stored.
- High restart costs: Every worker on every node had to wait for the configuration translator process to complete the full translation before it could memory-map any configuration and begin serving requests.
- Unbounded recovery time: Recovery time grew linearly with total tenant footprint rather than with active traffic, creating a 'scale penalty' as more tenants onboarded to the system.

Separately and together, the insight was clear: recovery must stop being proportional to the total configuration size.

Persisting 'validated configurations' across restarts

One of the key recovery improvements was strengthening how validated customer configurations are cached and reused across failures, rather than rebuilding configuration state from scratch during recovery. Azure Front Door already cached customer configurations on host-mounted storage prior to the October incidents. The platform enhancements after the outage focused on making the local configuration cache resilient to crashes, partial failures, and bad tenant inputs. Our goal was to ensure that recovery behavior is dominated by serving traffic safely, not by reconstructing configuration state. This led us to two explicit design goals.

Design goals

- No category of crash should invalidate the configuration cache: Configuration cache invalidation must never be the default response to failures. Whether the failure is a worker crash, master crash, data plane restart, or coordinated recovery action, previously validated customer configurations should remain usable – unless there is a proven reason to discard them.
- Bad tenant configuration must not poison the entire cache: A single faulty or incompatible tenant configuration should result in targeted eviction of that tenant's configuration only – not wholesale cache invalidation across all tenants.

Platform enhancements

Previously, customer configurations persisted to host-mounted storage, but certain failure paths treated the cache as unsafe and invalidated it entirely. In those cases, recovery implicitly meant reloading and reprocessing configuration for hundreds of thousands of tenants before traffic could resume, even though the vast majority of cached data was still valid.
We changed the recovery model to avoid invalidating customer configurations, with strict scoping around when and how cached entries are discarded:

- Cached configurations are no longer invalidated based on crash type. Failures are assumed to be orthogonal to configuration correctness unless explicitly proven otherwise.
- Cache eviction is granular and tenant-scoped. If a cached configuration fails validation or load checks, only that tenant's configuration is discarded and reloaded. All other tenant configurations remain available.

This ensures that recovery does not regress into a fleet-wide rebuild due to localized or unrelated faults.

Safety and correctness

Durability is paired with strong correctness controls to prevent unsafe configurations from being served:

- Per-tenant validation on load: Each cached tenant configuration is validated during the 'load and verification' phase before being promoted for traffic serving, so failures are contained to that tenant.
- Targeted re-translation: When validation fails, only the affected tenant's configuration is reloaded or reprocessed, so the cache for other tenants is left untouched.
- Operational escape hatch: Operators retain the ability to explicitly instruct a clean rebuild of the configuration cache (with proper authorization), preserving control without compromising the default fast-recovery path.

Resulting behavior

With these changes, recovery behavior now aligns with real-world traffic patterns: configuration defects impact tenants locally and predictably, rather than globally. The system now prefers isolated tenant impact, and continued service using last-known-good configuration, over aggressive invalidation – both of which are critical for predictable recovery at the scale of Azure Front Door.
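The tenant-scoped eviction behavior can be modeled in a short Python sketch. The function and validation rule below are hypothetical stand-ins for illustration, not Azure Front Door's code:

```python
# Illustrative tenant-scoped cache load: a failed per-tenant
# validation evicts only that tenant's entry; every other tenant's
# cached configuration is still promoted for traffic serving.

def load_from_cache(cache: dict, validate) -> tuple[dict, list]:
    served, evicted = {}, []
    for tenant, config in cache.items():
        if validate(config):
            served[tenant] = config   # promoted for traffic serving
        else:
            evicted.append(tenant)    # targeted, per-tenant eviction
    return served, evicted

cache = {
    "tenant-a": {"routes": ["/"]},
    "tenant-b": {},                   # corrupt entry: no routes
    "tenant-c": {"routes": ["/api"]},
}
served, evicted = load_from_cache(cache, lambda c: bool(c.get("routes")))
assert evicted == ["tenant-b"]                    # only the bad tenant
assert set(served) == {"tenant-a", "tenant-c"}    # others unaffected
```

Contrast this with the pre-incident behavior, where certain failure paths were equivalent to `cache.clear()`: one bad entry forced a full rebuild for every tenant.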
Making recovery scale with active traffic, not total tenants

Reusing the configuration cache solves the problem of rebuilding configuration in its entirety, but even with a warm cache, the original startup path had a second bottleneck: eagerly loading a large volume of tenant configurations into memory before serving any traffic. At our scale, memory-mapping and parsing hundreds of thousands of FlatBuffers, constructing internal lookup maps, and adding Transport Layer Security (TLS) certificates and configuration blocks for each tenant collectively added almost an hour to startup time. This was the case even when a majority of those tenants had no active traffic at that moment.

We addressed this by fundamentally changing when configuration is loaded into workers. Rather than eagerly loading most of the tenants at startup across all edge locations, Azure Front Door now uses a machine learning (ML)-optimized lazy loading model. In the new architecture, instead of loading a large number of tenant configurations, we load only the small subset of tenants known to be historically active in a given site; we call this the "warm tenants" list. The warm tenants list per edge site is created through a sophisticated traffic analysis pipeline that leverages ML.

However, loading the warm tenants is not enough on its own, because when a request arrives and the configuration is not in memory, we need to know two things. First, is this a request for a real Azure Front Door tenant? And if so, where can the configuration be found? To answer these questions, each worker maintains a hostmap that tracks the state of each tenant's configuration. This hostmap is constructed during startup as we process each tenant configuration: if the tenant is in the warm list, we process and load its configuration fully; if not, we simply add an entry to the hostmap mapping each of the tenant's domain names to the location of its configuration.
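The warm-list plus hostmap model can be sketched in Python. All names, paths, and the "parsed config" representation below are hypothetical, purely to illustrate the eager-versus-lazy split:

```python
# Illustrative warm-list + hostmap model: warm tenants are loaded
# eagerly at startup; every other tenant gets only a hostmap entry
# (hostname -> config path) and is loaded on its first request.

class Worker:
    def __init__(self, tenants: dict, warm: set):
        # tenants: name -> (list of hostnames, config file path)
        self.loaded: dict = {}   # tenant -> parsed configuration
        self.hostmap: dict = {}  # hostname -> (tenant, config path)
        for tenant, (hosts, path) in tenants.items():
            for host in hosts:
                self.hostmap[host] = (tenant, path)
            if tenant in warm:
                self.loaded[tenant] = f"parsed:{path}"  # eager load

    def serve(self, host: str):
        entry = self.hostmap.get(host)
        if entry is None:
            return None                        # not a known tenant
        tenant, path = entry
        if tenant not in self.loaded:          # cold tenant: lazy load
            self.loaded[tenant] = f"parsed:{path}"
        return self.loaded[tenant]

w = Worker(
    {"hot":  (["hot.example.com"],  "/cfg/hot"),
     "cold": (["cold.example.com"], "/cfg/cold")},
    warm={"hot"},
)
assert "hot" in w.loaded and "cold" not in w.loaded   # startup: warm only
assert w.serve("unknown.example.com") is None         # not a tenant
assert w.serve("cold.example.com") == "parsed:/cfg/cold"
assert "cold" in w.loaded                             # loaded on demand
```

Startup cost in this model is proportional to the warm list, not the total tenant count, while the hostmap still lets the worker distinguish an unknown hostname from a cold tenant whose configuration simply has not been loaded yet.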
When a request arrives for one of these tenants, the worker loads and validates that tenant's configuration on demand and immediately begins serving traffic. This allows a node to start serving its busiest tenants within a few minutes of startup, while additional tenants are loaded incrementally only when traffic actually arrives – allowing the system to progressively absorb cold tenants as demand increases.

The effect on recovery is transformative. Instead of recovery time scaling with the total number of tenants configured on a server, it scales with the number of tenants actively receiving traffic. In practice, even at our busiest edge sites, the active tenant set is a small fraction of the total. Just as importantly, this modified form of lazy loading provides a natural failure isolation boundary. Most edge sites will never load the faulty configuration of an inactive tenant. When a request for an inactive tenant with an incompatible configuration arrives, the impact is contained to a single worker. The configuration load architecture now prefers serving as many customers as quickly as possible, rather than waiting until everything is ready before serving anyone. These changes are slated to complete in April 2026 and will bring our RTO from the current ~1 hour to under 10 minutes for complete recovery from a worst-case scenario.

Continuous validation through Game Days

A critical element of our recovery confidence comes from GameDay fault-injection testing. We don't simply design recovery mechanisms and assume they work – we break the system deliberately and observe how it responds. Since late 2025, we have conducted recurring GameDay drills that simulate the exact failure scenarios we are defending against:

- Food Taster crash scenarios: Injecting deliberately faulty tenant configurations to verify that they are caught and isolated with zero impact on live traffic.
In our January 2026 GameDay, the Food Taster process crashed as expected, the system halted the update within approximately 5 seconds, and no customer traffic was affected.

- Master process crash scenarios: Triggering master process crashes across test environments to verify that workers continue serving traffic, that the Local Config Shield engages within 10 seconds, and that the coordinated recovery tool restores full operation within the expected timeframe.
- Multi-region failure drills: Simulating simultaneous failures across multiple regions to validate that global Config Shield mechanisms engage correctly, and that recovery procedures scale without requiring manual per-region intervention.
- Fallback test drills for critical Azure services running behind Azure Front Door: In our February 2026 GameDay, we simulated the complete unavailability of Azure Front Door and successfully validated failover for critical Azure services with no impact to traffic.

These drills have both surfaced corner cases and built operational confidence. They have transformed recovery from a theoretical plan into tested, repeatable muscle memory. As we noted in an internal communication to our team: "Game day testing is a deliberate shift from assuming resilience to actively proving it – turning reliability into an observed and repeatable outcome."

Closing

Part 1 of this series emphasized preventing unsafe configurations from reaching the data plane, and data plane resiliency in case an incompatible configuration reaches production. This post has shown that prevention alone is not enough: when failures do occur, recovery must be fast, predictable, and bounded. By ensuring that the FlatBuffer cache is never invalidated wholesale, by loading only active tenants, and by building safe coordinated recovery tooling, we have transformed failure handling from a fleet-wide crisis into a controlled operation. These recovery investments work in concert with the prevention mechanisms described in Part 1.
Together, they ensure that the path from incident detection to full service restoration is measured in minutes, with customer traffic protected at every step. In the next post of this series, we will cover the third pillar of our resiliency strategy, tenant isolation: how micro-cellular architecture and ingress-layered sharding can reduce the blast radius of any failure to a small subset of tenants, ensuring that one customer's configuration or traffic anomaly never becomes everyone's problem. We deeply value our customers' trust in Azure Front Door. We are committed to transparently sharing our progress on these resiliency investments, and to exceeding expectations for safety, reliability, and operational readiness.

What's New in the World of eBPF from Azure Container Networking!
Azure Container Networking Interface (CNI) continues to evolve, now bolstered by the innovative capabilities of Cilium. Azure CNI Powered by Cilium (ACPC) leverages Cilium's extended Berkeley Packet Filter (eBPF) technologies to enable features such as network policy enforcement, deep observability, and improved service routing. Here's a deeper look into the latest features that make management of Azure Kubernetes Service (AKS) clusters more efficient, scalable, and secure.

Improved Performance: Cilium Endpoint Slices

One of the standout features in the recent updates is the introduction of CiliumEndpointSlice. This feature significantly enhances the performance and scalability of the Cilium dataplane in AKS clusters. Previously, Cilium used a Custom Resource Definition (CRD) called CiliumEndpoint to manage pods. Each pod had a CiliumEndpoint associated with it, which contained information about the pod's status and properties. However, this approach placed significant stress on the control plane, especially in larger clusters. To alleviate this load, CiliumEndpointSlice batches CiliumEndpoints and their updates, reducing the number of updates propagated to the control plane. Our performance testing has shown remarkable improvements:

- Average API server responsiveness: up to 50% decrease in latency, meaning faster processing of queries.
- Pod startup latencies: up to 60% reduction, allowing for faster deployment and scaling.
- In-cluster network latency: up to 80% decrease, translating to better application performance.

Note that this feature is Generally Available in AKS clusters by default with Cilium 1.17 and above, and does not require additional configuration changes! Learn more about the improvements unlocked by CiliumEndpointSlice with Azure CNI by Cilium: High-Scale Kubernetes Networking with Azure CNI Powered by Cilium | Microsoft Community Hub.
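The intuition behind the batching win is simple arithmetic: grouping per-pod updates into slices shrinks the number of objects the control plane must track and watch. A tiny sketch (the slice size below is hypothetical, chosen only for illustration):

```python
# Illustrative: batching N per-pod endpoint objects into slices of
# size K reduces the control-plane object count from N to ceil(N/K).
import math

def slice_count(endpoints: int, batch: int) -> int:
    return math.ceil(endpoints / batch)

pods = 10_000
assert slice_count(pods, 1) == 10_000   # one CiliumEndpoint per pod
assert slice_count(pods, 100) == 100    # batched into endpoint slices
```

Fewer watched objects means fewer update events fanned out to API server watchers, which is where the latency reductions above come from.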
Deployment Flexibility: Dual Stack for Cilium Network Policies

Kubernetes clusters operating on an IPv4/IPv6 dual-stack network enable workloads to natively access both IPv4 and IPv6 endpoints without incurring additional complexity or performance drawbacks. Previously, we had enabled dual-stack networking on AKS clusters (starting with AKS 1.29) running Azure CNI powered by Cilium in preview mode. Now, we are happy to announce that the feature is Generally Available! By enabling both IPv4 and IPv6 addressing, you can manage your production AKS clusters in mixed environments, accommodating various network configurations seamlessly. More importantly, dual-stack support in Azure CNI's Cilium network policies extends security benefits to AKS clusters in those complex environments. For instance, you can enable a dual-stack AKS cluster using the eBPF dataplane as follows:

```shell
az aks create \
  --location <region> \
  --resource-group <resourceGroupName> \
  --name <clusterName> \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --network-dataplane cilium \
  --ip-families ipv4,ipv6 \
  --generate-ssh-keys
```

Learn more about Azure CNI's network policies: Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS) - Azure Kubernetes Service | Microsoft Learn

Ease of Use: Node Subnet Mode with Cilium

Azure CNI now supports Node Subnet IPAM mode with the Cilium dataplane. In Node Subnet mode, IP addresses are assigned to pods from the same subnet as the node itself, simplifying routing and policy management. This mode is particularly beneficial for smaller clusters where managing multiple subnets is cumbersome. AKS clusters using this mode also gain the benefits of improved network observability, Cilium network policies, FQDN filtering, and other capabilities unlocked by Advanced Container Networking Services (ACNS). Most notably, with this feature we now support all IPAM configuration options with the eBPF dataplane on AKS clusters.
You can create an AKS cluster with Node Subnet IPAM mode and the eBPF dataplane as follows:

```shell
az aks create \
  --name <clusterName> \
  --resource-group <resourceGroupName> \
  --location <location> \
  --network-plugin azure \
  --network-dataplane cilium \
  --generate-ssh-keys
```

Learn more about Node Subnet mode: Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS) - Azure Kubernetes Service | Microsoft Learn.

Defense-in-Depth: Cilium Layer 7 Policies

Azure CNI by Cilium extends its comprehensive Layer 4 network policy capabilities to Layer 7, offering granular control over application traffic. This feature enables users to define security policies based on application-level protocols and metadata, adding a powerful layer of security and compliance management. Layer 7 policies are implemented using Envoy, an open-source service proxy, which is part of the ACNS security agent operating in conjunction with the Cilium agent. Envoy handles traffic between services and provides the necessary visibility and control at the application layer. Policies can be enforced based on HTTP and gRPC methods, paths, headers, and other application-specific attributes. Additionally, Cilium network policies support Kafka-based workflows, enhancing security and traffic management. This feature is currently in public preview; learn more about the getting-started experience here: Introducing Layer 7 Network Policies with Advanced Container Networking Services for AKS Clusters! | Microsoft Community Hub.

Coming Soon: Transparent Encryption with WireGuard

By leveraging Cilium's WireGuard support, customers can achieve regulatory compliance by ensuring that all network traffic, whether HTTP-based or not, is encrypted. Users can enable inter-node transparent encryption in their Kubernetes environments using Cilium's open-source-based solution.
When WireGuard is enabled, the Cilium agent on each cluster node establishes a secure WireGuard tunnel with all other known nodes in the cluster to encrypt traffic between Cilium endpoints. This feature will soon be in public preview and will be enabled as part of ACNS. Stay tuned for more details.

Conclusion

These new features in Azure CNI Powered by Cilium underscore our commitment to enhancing default network performance and security in your AKS environments, all while collaborating with the open-source community. From the impressive performance boost with CiliumEndpointSlice to the adaptability of dual-stack support and the advanced security of Layer 7 policies and WireGuard-based encryption, these innovations ensure your AKS clusters are not just ready for today but are primed for the future. Also, don't forget to dive into the fascinating world of eBPF-based observability in multi-cloud environments! Check out our latest post: Retina: Bridging Kubernetes Observability and eBPF Across the Clouds. Why wait? Try these out now! Stay tuned to the AKS public roadmap for more exciting developments!

For additional information, visit the following resources:

- For more info about Azure CNI Powered by Cilium, visit Configure Azure CNI Powered by Cilium in AKS.
- For more info about ACNS, visit Advanced Container Networking Services (ACNS) for AKS | Microsoft Learn.

Achieve high-bandwidth, private, and seamless Microsoft Azure connectivity
Multicloud computing is revolutionizing enterprise IT, enabling businesses to harness the strengths of different public cloud providers for greater agility, performance, and cost efficiency. However, without the right connectivity strategy, organizations risk increased complexity, higher latency, and security vulnerabilities, ultimately undermining the benefits of a multicloud approach. Together, Microsoft and Equinix provide a complete platform for multicloud networking with all the ingredients you need to achieve a best-case mix of performance, security, cost-effectiveness, and flexibility.

Join us for a two-part webinar series where Equinix and Microsoft experts will guide you through best practices for seamless, secure, and high-performance multicloud networking. Learn how to optimize connectivity, reduce data egress costs, and unlock the full potential of Azure and Equinix's global interconnection ecosystem.

Webinar 1: Key Considerations for Optimized Microsoft Azure Connectivity

Summary

Achieve high-bandwidth, private, and seamless Microsoft Azure connectivity. As organizations modernize their IT infrastructure, hybrid cloud has emerged as the preferred solution for ensuring scalability, security, and performance. However, just as critical as deciding where to place specific workloads is determining how everything will be connected.

Join experts from Microsoft, Equinix, and Enterprise Strategy Group as they reveal five key cloud migration considerations for 2025 to help you:

- Prioritize essential network capabilities
- Manage top cost drivers
- Improve network scalability and simplicity
- Increase app performance

Our speakers:

Jim Frey – Principal Analyst, Enterprise Strategy Group
Jim has over 30 years of experience in networking and software product development, including senior leadership roles in partner marketing at Kentik Technologies. He has also held executive positions in industry research, marketing, and product management.
Jim holds a BSc in Engineering from the Colorado School of Mines and an MSc in Computer and Information Sciences from Rensselaer Polytechnic Institute.

Kevin Lopez – Director of Solution Sales, Microsoft Azure
Kevin is a seasoned leader with over 30 years in sales and business development. He joined Microsoft in 2007 and has held various roles, including Technical Architect and Azure Global Black Belt team member. Currently, he leads the Azure Network Security Global Black Belt team for the Americas and is involved in Microsoft Worldwide Learning and social change initiatives.

Brian Petit – Principal Solution Architect, Equinix
Brian is a hybrid multicloud network professional with decades of experience in data networking infrastructure design, deployment, and applications. As the Senior Principal Solutions Architect dedicated to Microsoft at Equinix, he is responsible for overall platform integration and alignment with Azure.

Learn how to optimize hybrid cloud connectivity with Microsoft Azure ExpressRoute in Equinix.

Reserve your spot:

- Date: Thursday, March 13, 2025
- Time (AMER): 10:00 AM Pacific Daylight Time
- Time (EMEA): 11:00 AM Central European Time
- Time (APAC): 11:00 AM Singapore Time
- Duration: 1 hour

Webinar 2: Explore Microsoft Azure ExpressRoute use cases and reference architectures

Summary

Explore Microsoft Azure ExpressRoute use cases and reference architectures. Migrating to Azure while managing a hybrid multicloud strategy can be complex, with challenges around network performance, security, and cost optimization. In this technical webinar, Microsoft and Equinix experts will showcase real-world Azure ExpressRoute customer use cases, demonstrating how businesses are overcoming these hurdles to migrate faster, more securely, and cost-effectively, all while maintaining seamless connectivity between on-premises, Microsoft Azure, and other clouds.
Learn how to:

- Leverage cloud-adjacent architectures
- Minimize egress costs
- Improve network performance
- Ensure data sovereignty and compliance

Our speakers:

Mays Algebary – Global Black Belt, Microsoft Azure
Mays is a cloud network security professional with a decade of experience safeguarding enterprise infrastructures. As a Global Black Belt at Microsoft, she is an expert in zero-trust architectures, secure cloud networking, and migration strategies. Mays works closely with organizations to strengthen their security posture, minimize risks, and ensure seamless, high-performance cloud connectivity.

Brian Petit – Sr. Principal Solution Architect, Equinix
Brian is a hybrid multicloud network professional with decades of experience in data networking infrastructure design, deployment, and applications. As the Senior Principal Solutions Architect dedicated to Microsoft at Equinix, he is responsible for overall platform integration and alignment with Azure.

RSVP today to learn how Equinix and Azure can help you migrate with confidence, ensuring seamless connectivity, performance, and security across your hybrid multicloud infrastructure.

Reserve your spot:

- Date: Thursday, March 20, 2025
- Time (AMER): 10:00 AM Pacific Daylight Time
- Time (EMEA): 11:00 AM Central European Time
- Time (APAC): 11:00 AM Singapore Time
- Duration: 30 minutes

Decoding On-Premises ADC Rules: Migration to Azure Application Gateway
Overview

As Azure Application Gateway evolves, many organizations are considering how their existing on-premises solutions, such as F5, NetScaler, and Radware, can transition to leverage Azure's native services. During this shift to cloud-native architecture, a frequent question arises: "Can Application Gateway support my current load balancing configurations?"

The short answer: it depends on your use case. With the right approach, the transition can be smooth, scalable, and secure. Azure Application Gateway, especially when used with Azure-native services like Web Application Firewall (WAF), Azure Front Door, and Azure Firewall, can support common use cases. This guide provides a functional comparison, outlines what's supported, and offers a blueprint for successful migration.

Key Capabilities of Application Gateway

Azure Application Gateway v2 brings a host of enhancements that align with the needs of modern, cloud-first organizations:

- Autoscaling and zone redundancy
- Native WAF + Azure DDoS Protection
- Native support for header rewrites, URL-based routing, and SSL termination
- Integration with Azure Monitor, Log Analytics, and Defender for Cloud
- Azure-native deployment: ARM/Bicep, CLI, GitOps, Terraform, CI/CD

These features make Application Gateway a strong option, especially for cloud-first and hybrid scenarios: customers benefit from simplified operations, improved agility, and enhanced security.

What Are ADC Rules?

On-premises ADCs (Application Delivery Controllers) often include advanced traffic management features, such as F5 iRules and Citrix policy expressions. These Layer 4–7 devices go beyond basic load balancing, enabling traffic manipulation at various stages of the connection lifecycle. ADCs are powerful, flexible, and often deeply embedded in enterprise traffic logic. If you rely on these features, migration is still possible: Azure Application Gateway supports many commonly used functionalities out of the box.
Common ADC scenarios:

- Redirects and rewrites
- IP filtering and geo-blocking
- Custom error handling
- Event-driven logic such as HTTP_REQUEST and CLIENT_ACCEPTED

Application Gateway Feature Patterns

ADC traffic management features are powerful and flexible, and often deeply embedded in enterprise traffic flows. Application Gateway provides native support for many common scenarios. In this guide, we'll show you how to translate typical advanced-rule patterns into Application Gateway configurations.

[Note]: When migrating WAF rules, enable detection mode first to identify false positives before enforcing blocks.

| Citrix Features | iRule Feature | App Gateway v2 Equivalent | Supported for App Gateway? |
|---|---|---|---|
| Responder Policies | Redirects (301/302) | Native redirect rules | ✅ |
| Rewrite Policies | Header rewrites | Rewrite Set rules | ✅ |
| GSLB + Responder Policies | Geo-based routing | Combining with Azure Front Door | ✅ |
| Content Switching Policies | URL-based routing | Path-based routing rules | ✅ |
| Responder/ACLs | IP filtering | WAF custom rules or NSGs | ✅ |
| GSLB + Policy Expressions | Geo-blocking | WAF rules | ✅ |
| Content Switching Policies | Path-based routing | URL path maps | ✅ |
| Content Switching / Rewrite Policies | Header-based routing | Limited, via parameter-based path selection | ➗ |
| Advanced Policy Expressions (regex supported) | Regex-based routing | Limited regex support via path parameters | ➗ |
| Priority Queues / Rate Control | Real-time traffic shaping | Limited, with Azure Front Door | ➗ |
| AppExpert with TCP expressions | TCP payload inspection | Not supported | ❌ |
| Not supported | Event-driven hooks (HTTP_REQUEST, etc.) | Not supported | ❌ |
| Not supported | Query pool | Not supported | ❌ |
| Not supported | Per-request scripting | Not supported | ❌ |
| Deep packet inspection + Policies (limited) | Payload-based routing | Not supported | ❌ |
| Not supported | Full scripting (TCL) | Not supported | ❌ |

Translating Advanced Rules

Migrating features such as iRules and Citrix policy expressions from ADCs is less about line-by-line translation and more about recognizing patterns.
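As a concrete illustration of this pattern matching, the first two table rows, redirects and header rewrites, translate roughly into the Azure CLI commands below. This is a hedged sketch: the gateway, listener, rule-set, and header values are illustrative assumptions, not configurations from this article.

```shell
# Hedged sketch — the App Gateway equivalent of a typical "redirect to HTTPS"
# responder policy / iRule. <resourceGroupName>, <appGatewayName>, and the
# listener and rule-set names below are placeholders.

# 1) Permanent (301) redirect from the HTTP listener to the HTTPS listener,
#    preserving the original path and query string.
az network application-gateway redirect-config create \
  --resource-group <resourceGroupName> \
  --gateway-name <appGatewayName> \
  --name httpToHttps \
  --type Permanent \
  --target-listener httpsListener \
  --include-path true \
  --include-query-string true

# 2) A rewrite set that adds a security response header — the equivalent of a
#    Citrix rewrite policy or a header-manipulation iRule.
az network application-gateway rewrite-rule set create \
  --resource-group <resourceGroupName> \
  --gateway-name <appGatewayName> \
  --name securityHeaders

az network application-gateway rewrite-rule create \
  --resource-group <resourceGroupName> \
  --gateway-name <appGatewayName> \
  --rule-set-name securityHeaders \
  --name addFrameOptions \
  --sequence 100 \
  --response-headers X-Frame-Options=DENY
```

The redirect configuration is then attached to a routing rule (for example via the `--redirect-config` parameter of `az network application-gateway rule create`), mirroring how an ADC rule would be bound to a virtual server.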
Think of it as translating a language: not word-for-word, but intent-for-intent.

How to get started:

- Tool-assisted translation: use Copilot or GPT-based tools to translate common ADC rule patterns.
- Inventory and analyze: break complex rules into modular App Gateway functions (redirects, rewrites).
- Document: record the original goal of each rule and its translated equivalent.

Where to Configure in Azure

You can implement routing and rewrite logic via:

- Azure portal UI
- Azure CLI / PowerShell (az network application-gateway)
- ARM templates / Bicep (for infrastructure-as-code deployments)
- REST API (for automation/CI-CD pipelines)

Example: Configure a header rewrite in the portal

1. Open your Application Gateway in the Azure portal.
2. Navigate to Rewrites in the sidebar.
3. Click + Add Rewrite Set, then apply it to your routing rule.
4. Define your rewrite conditions and actions.

[NOTE]: Not sure what rewrites are? Learn more about Rewrite HTTP Headers.

Resources

- Application Gateway v1 to v2: Migrate from App Gateway v1 to v2
- Best practices: Architecture Best Practices for Azure Application Gateway v2 - Microsoft Azure Well-Architected Framework | Microsoft Learn
- Rewrites: https://learn.microsoft.com/en-us/azure/application-gateway/rewrite-http-headers-url
- Header-based routing: https://learn.microsoft.com/en-us/azure/application-gateway/parameter-based-path-selection-portal
- Tuning WAF rules: Tune Azure Web Application Firewall for Azure Front Door | Microsoft Learn

Conclusion

While AI-powered assistants can help interpret and translate common ADC traffic management patterns, manual recreation and validation of rules are still necessary to ensure accuracy and alignment with your specific requirements. Nevertheless, migrating to Application Gateway v2 is not only feasible: it represents a strategic move toward a modern, cloud-native infrastructure.
With thoughtful planning and the right mindset, organizations can maintain traffic flexibility while gaining the agility, scalability, and operational efficiency of the Azure ecosystem. If you are unsure whether your current on-premises configuration can be supported in Azure Application Gateway, please consult the official Azure documentation or reach out to Microsoft support for guidance.