Decoding On-Premises ADC Rules: Migration to Azure Application Gateway
Overview

As Azure Application Gateway evolves, many organizations are considering how their existing on-premises solutions, such as F5, NetScaler, and Radware, can transition to leverage Azure's native services. During this shift to cloud-native architecture, a frequent question arises: "Can Application Gateway support my current load balancing configurations?"

The short answer: it depends on your use case. With the right approach, the transition can be smooth, scalable, and secure. Azure Application Gateway, especially when used with Azure-native services like Web Application Firewall (WAF), Azure Front Door, and Azure Firewall, can support common use cases. This guide provides a functional comparison, outlines what's supported, and offers a blueprint for a successful migration.

Key Capabilities of Application Gateway

Azure Application Gateway v2 brings a host of enhancements that align with the needs of modern, cloud-first organizations:

- Autoscaling and zone redundancy
- Native WAF and Azure DDoS Protection
- Native support for header rewrites, URL-based routing, and SSL termination
- Integration with Azure Monitor, Log Analytics, and Defender for Cloud
- Azure-native deployment: ARM/Bicep, CLI, GitOps, Terraform, CI/CD

These features make Application Gateway a strong option for cloud-first and hybrid scenarios, where customers benefit from simplified operations, improved agility, and enhanced security.

What are ADC Rules?

On-premises Application Delivery Controllers (ADCs) often include advanced traffic management features, such as iRules and Citrix policy expressions. These Layer 4-7 devices go beyond basic load balancing, enabling traffic manipulation at various stages of the connection lifecycle. ADCs are powerful, flexible, and often deeply embedded in enterprise traffic logic. If you rely on these features, migration is still possible: Azure Application Gateway supports many commonly used functionalities out of the box.
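One of the most common of these functionalities, a 301/302 redirect (the Application Gateway counterpart of an F5 or Citrix responder policy), can be configured natively. A hedged Azure CLI sketch follows; the gateway, resource group, and listener names are placeholders, not values from this guide:

```shell
# Illustrative only: gateway, resource group, and listener names are hypothetical.
# Creates a permanent (301) redirect to a target listener, the App Gateway
# equivalent of an ADC responder/redirect policy.
az network application-gateway redirect-config create \
  --gateway-name MyAppGateway \
  --resource-group MyResourceGroup \
  --name httpToHttpsRedirect \
  --type Permanent \
  --target-listener httpsListener \
  --include-path true \
  --include-query-string true
```

The redirect configuration then gets attached to a request routing rule, just as a responder policy would be bound to a virtual server on an ADC.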
Common ADC scenarios:

- Redirects and rewrites
- IP filtering and geo-blocking
- Custom error handling
- Event-driven logic such as HTTP_REQUEST and CLIENT_ACCEPTED

Application Gateway Feature Patterns

ADC traffic management features are powerful and flexible, and often deeply embedded in enterprise traffic flows. Application Gateway provides native support for many common scenarios. In this guide, we'll show you how to translate typical advanced rule patterns into Application Gateway configurations.

[Note]: When migrating WAF rules, enable detection mode first to identify false positives before enforcing blocks.

| Citrix Feature | iRule Feature | App Gateway v2 Equivalent | Supported in App Gateway? |
| --- | --- | --- | --- |
| Responder Policies | Redirects (301/302) | Native redirect rules | ✅ |
| Rewrite Policies | Header rewrites | Rewrite Set rules | ✅ |
| GSLB + Responder Policies | Geo-based routing | Combine with Azure Front Door | ✅ |
| Content Switching Policies | URL-based routing | Path-based routing rules | ✅ |
| Responder/ACLs | IP filtering | WAF custom rules or NSGs | ✅ |
| GSLB + Policy Expressions | Geo-blocking | WAF rules | ✅ |
| Content Switching Policies | Path-based routing | URL path maps | ✅ |
| Content Switching / Rewrite Policies | Header-based routing | Limited, via parameter-based path selection | ➗ |
| Advanced Policy Expressions (regex supported) | Regex-based routing | Limited regex support via path parameters | ➗ |
| Priority Queues / Rate Control | Real-time traffic shaping | Limited, via Azure Front Door | ➗ |
| AppExpert with TCP expressions | TCP payload inspection | Not supported | ❌ |
| Not supported | Event-driven hooks (HTTP_REQUEST, etc.) | Not supported | ❌ |
| Not supported | Query Pool | Not supported | ❌ |
| Not supported | Per-request scripting | Not supported | ❌ |
| Deep packet inspection + Policies (limited) | Payload-based routing | Not supported | ❌ |
| Not supported | Full scripting (TCL) | Not supported | ❌ |

Translating Advanced Rules

Migrating features such as iRules and Citrix policy expressions from ADCs is less about line-by-line translation and more about recognizing patterns.
Think of it as translating a language: not word-for-word, but intent-for-intent.

How to get started:

- Tool-assisted translation: use Copilot or GPT-based tools to translate common ADC rule patterns.
- Inventory and analyze: break complex rules into modular App Gateway functions (redirects, rewrites).
- Document: record the original goal of each rule and its translated equivalent.

Where to Configure in Azure

You can implement routing and rewrite logic via:

- Azure portal UI
- Azure CLI / PowerShell (az network application-gateway)
- ARM templates / Bicep (for infrastructure-as-code deployments)
- REST API (for automation and CI/CD pipelines)

Example: configure a header rewrite in the portal

1. Open your Application Gateway in the Azure portal.
2. Navigate to Rewrites in the sidebar.
3. Click + Add Rewrite Set, then apply it to your routing rule.
4. Define your rewrite conditions and actions.

[NOTE]: Not sure what rewrites are? Learn more about Rewrite HTTP Headers.

Resources

- Application Gateway v1 to v2: Migrate from App Gateway v1 to v2
- Best practices: Architecture Best Practices for Azure Application Gateway v2 - Microsoft Azure Well-Architected Framework | Microsoft Learn
- Rewrites: https://learn.microsoft.com/en-us/azure/application-gateway/rewrite-http-headers-url
- Header-based routing: https://learn.microsoft.com/en-us/azure/application-gateway/parameter-based-path-selection-portal
- Tuning WAF rules: Tune Azure Web Application Firewall for Azure Front Door | Microsoft Learn

Conclusion

While AI-powered assistants can help interpret and translate common ADC traffic management patterns, manual recreation and validation of rules are still necessary to ensure accuracy and alignment with your specific requirements. Nevertheless, migrating to Application Gateway v2 is not only feasible; it represents a strategic move toward a modern, cloud-native infrastructure.
With thoughtful planning and the right mindset, organizations can maintain traffic flexibility while gaining the agility, scalability, and operational efficiency of the Azure ecosystem. If you are unsure whether your current on-premises configuration can be supported in Azure Application Gateway, consult the official Azure documentation or reach out to Microsoft support for guidance.

What's New in the World of eBPF from Azure Container Networking!
Azure Container Networking Interface (CNI) continues to evolve, now bolstered by the innovative capabilities of Cilium. Azure CNI Powered by Cilium (ACPC) leverages Cilium's extended Berkeley Packet Filter (eBPF) technologies to enable features such as network policy enforcement, deep observability, and improved service routing. Here's a deeper look into the latest features that make managing Azure Kubernetes Service (AKS) clusters more efficient, scalable, and secure.

Improved Performance: Cilium Endpoint Slices

One of the standout features in the recent updates is the introduction of CiliumEndpointSlice. This feature significantly enhances the performance and scalability of the Cilium dataplane in AKS clusters. Previously, Cilium used a Custom Resource Definition (CRD) called CiliumEndpoint to manage pods: each pod had a CiliumEndpoint associated with it, containing information about the pod's status and properties. This approach placed significant stress on the control plane, especially in larger clusters. To alleviate this load, CiliumEndpointSlice batches CiliumEndpoints and their updates, reducing the number of updates propagated to the control plane. Our performance testing has shown remarkable improvements:

- Average API server responsiveness: up to 50% decrease in latency, meaning faster processing of queries.
- Pod startup latencies: up to 60% reduction, allowing for faster deployment and scaling.
- In-cluster network latency: up to 80% decrease, translating to better application performance.

Note that this feature is Generally Available in AKS clusters by default with Cilium 1.17 and above, and does not require additional configuration changes! Learn more about the improvements unlocked by CiliumEndpointSlices with Azure CNI Powered by Cilium: High-Scale Kubernetes Networking with Azure CNI Powered by Cilium | Microsoft Community Hub.
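If you want to confirm that endpoint batching is active on a cluster, the slices can be inspected with kubectl. This is a hedged sketch assuming kubectl access to an AKS cluster on Cilium 1.17+; the resource names follow the upstream Cilium CRDs:

```shell
# List the batched slices (cluster-scoped resources in upstream Cilium).
kubectl get ciliumendpointslices.cilium.io

# Compare against the per-pod CiliumEndpoints they aggregate.
kubectl get ciliumendpoints.cilium.io --all-namespaces
```

Seeing far fewer slices than endpoints is the expected sign that updates are being coalesced before they reach the control plane.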
Deployment Flexibility: Dual Stack for Cilium Network Policies

Kubernetes clusters operating on an IPv4/IPv6 dual-stack network enable workloads to natively access both IPv4 and IPv6 endpoints without additional complexity or performance drawbacks. We previously enabled dual-stack networking on AKS clusters (starting with AKS 1.29) running Azure CNI Powered by Cilium in preview mode. Now, we are happy to announce that the feature is Generally Available! By enabling both IPv4 and IPv6 addressing, you can manage your production AKS clusters in mixed environments, accommodating various network configurations seamlessly. More importantly, dual-stack support in Azure CNI's Cilium network policies extends security benefits to AKS clusters in those complex environments. For instance, you can enable a dual-stack AKS cluster with the eBPF dataplane as follows:

```shell
az aks create \
  --location <region> \
  --resource-group <resourceGroupName> \
  --name <clusterName> \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --network-dataplane cilium \
  --ip-families ipv4,ipv6 \
  --generate-ssh-keys
```

Learn more about Azure CNI's network policies: Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS) - Azure Kubernetes Service | Microsoft Learn

Ease of Use: Node Subnet Mode with Cilium

Azure CNI now supports Node Subnet IPAM mode with the Cilium dataplane. In Node Subnet mode, IP addresses are assigned to pods from the same subnet as the node itself, simplifying routing and policy management. This mode is particularly beneficial for smaller clusters where managing multiple subnets is cumbersome. AKS clusters using this mode also gain the benefits of improved network observability, Cilium network policies, FQDN filtering, and the other capabilities unlocked by Advanced Container Networking Services (ACNS). More notably, with this feature we now support all IPAM configuration options with the eBPF dataplane on AKS clusters.
You can create an AKS cluster with Node Subnet IPAM mode and the eBPF dataplane as follows:

```shell
az aks create \
  --name <clusterName> \
  --resource-group <resourceGroupName> \
  --location <location> \
  --network-plugin azure \
  --network-dataplane cilium \
  --generate-ssh-keys
```

Learn more about Node Subnet mode: Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS) - Azure Kubernetes Service | Microsoft Learn.

Defense-in-Depth: Cilium Layer 7 Policies

Azure CNI Powered by Cilium extends its comprehensive Layer 4 network policy capabilities to Layer 7, offering granular control over application traffic. This feature enables users to define security policies based on application-level protocols and metadata, adding a powerful layer of security and compliance management. Layer 7 policies are implemented using Envoy, an open-source service proxy, which is part of the ACNS security agent operating in conjunction with the Cilium agent. Envoy handles traffic between services and provides the necessary visibility and control at the application layer. Policies can be enforced based on HTTP and gRPC methods, paths, headers, and other application-specific attributes. Additionally, Cilium network policies support Kafka-based workflows, enhancing security and traffic management. This feature is currently in public preview; you can learn more about getting started here: Introducing Layer 7 Network Policies with Advanced Container Networking Services for AKS Clusters! | Microsoft Community Hub.

Coming Soon: Transparent Encryption with WireGuard

By leveraging Cilium's WireGuard support, customers can achieve regulatory compliance by ensuring that all network traffic, whether HTTP-based or not, is encrypted. Users can enable inter-node transparent encryption in their Kubernetes environments using Cilium's open-source-based solution.
When WireGuard is enabled, the Cilium agent on each cluster node establishes a secure WireGuard tunnel with all other known nodes in the cluster to encrypt traffic between Cilium endpoints. This feature will soon be in public preview and will be enabled as part of ACNS. Stay tuned for more details.

Conclusion

These new features in Azure CNI Powered by Cilium underscore our commitment to enhancing default network performance and security in your AKS environments, all while collaborating with the open-source community. From the impressive performance boost with CiliumEndpointSlice to the adaptability of dual-stack support and the advanced security of Layer 7 policies and WireGuard-based encryption, these innovations ensure your AKS clusters are not just ready for today but are primed for the future. Also, don't forget to dive into the fascinating world of eBPF-based observability in multi-cloud environments! Check out our latest post: Retina: Bridging Kubernetes Observability and eBPF Across the Clouds. Why wait? Try these out now! Stay tuned to the AKS public roadmap for more exciting developments! For additional information, visit the following resources:

- For more about Azure CNI Powered by Cilium, visit Configure Azure CNI Powered by Cilium in AKS.
- For more about ACNS, visit Advanced Container Networking Services (ACNS) for AKS | Microsoft Learn.

Securing Microservices with Cilium and Istio
The adoption of Kubernetes and containerized applications is booming, leading to new challenges in visibility and security. As the landscape of cloud-native applications rapidly evolves, so does the number of sophisticated attacks targeting containerized workloads. Traditional tools often fall short in tracking usage and traffic flows within these applications. The immutable nature of container images and the short lifespan of containers further necessitate addressing vulnerabilities early in the delivery pipeline.

Comprehensive Security Controls in Kubernetes

Microsoft Azure offers a range of security controls to ensure comprehensive protection across the various layers of a Kubernetes environment. These controls include, but are not limited to:

- Cluster security: features such as private clusters, managed cluster identity, and API server authorized ranges enhance security at the cluster level.
- Node and pod security: hardened bootstrapping, confidential nodes, and pod sandboxing secure the nodes and pods within a cluster.
- Network security: Advanced Container Networking Services and Cilium network policies offer granular control over network traffic.
- Authentication and authorization: Azure Policy in-cluster enforcement, Entra authentication, and Istio mTLS and authorization policies provide robust identity and access management.
- Image scanning: Microsoft Defender for Cloud provides both image and runtime scanning to identify vulnerabilities and threats.

Let's highlight how you can secure microservices while scaling your applications running on Azure Kubernetes Service (AKS), using a service mesh for robust traffic management and network policies for security.

Microsegmentation with Network Policies

Microsegmentation is crucial for enhancing security within Kubernetes clusters, allowing for the isolation of workloads and controlled traffic between microservices.
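As an illustration of what workload isolation looks like in practice, a minimal Cilium network policy might look like the following sketch. All labels, names, and the namespace here are hypothetical, chosen only to show the pattern:

```yaml
# Illustrative microsegmentation policy; labels and namespace are hypothetical.
# Only pods labeled app=frontend may reach app=backend on TCP/8080;
# all other ingress to the backend pods is dropped.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-allow-frontend-only
  namespace: demo
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
```

Because selection is label-based rather than IP-based, the policy keeps working as pods are rescheduled and their IPs change.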
Azure CNI Powered by Cilium leverages eBPF to provide high-performance networking, security, and observability features. It dynamically inserts eBPF bytecode into the Linux kernel, offering efficient and flexible control over network traffic. Cilium network policies enable network isolation within and across Kubernetes clusters. Cilium also provides an identity-based security model, offers Layer 7 (L7) traffic control, and integrates deep observability for L4-L7 metrics in Kubernetes clusters. A significant advantage of using Azure CNI based on Cilium is its seamless integration with existing AKS environments, requiring minimal modifications to your infrastructure. Note that Cilium Clusterwide Network Policy (CCNP) is not supported at the time of writing this blog post.

FQDN Filtering with Advanced Container Networking Services (ACNS)

Traditional IP-based policies can be cumbersome to maintain. ACNS allows for DNS-based policies, providing a more granular and user-friendly approach to managing network traffic. This is supported only with Azure CNI Powered by Cilium and includes a security-agent DNS proxy that keeps FQDN resolution working even during upgrades. It's worth noting that with Cilium's L7 enforcement, you can control traffic based on HTTP methods, paths, and headers, making it ideal for APIs, microservices, and services that use protocols like HTTP, gRPC, or Kafka. At the time of writing, this capability is not supported via ACNS. More on this in a future blog!

AKS Istio Add-On: Mutual TLS (mTLS) and Authorization Policy

Istio enhances the security of microservices through its built-in features, including mutual TLS (mTLS) and authorization policies. The Istiod control plane, acting as a certificate authority, issues X.509 certificates to the Envoy sidecar proxies via the Secret Discovery Service (SDS). Integration with Azure Key Vault allows for secure management of root and intermediate certificates.
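As a sketch of how the Istio add-on and its Key Vault-backed plug-in CA might be enabled: the parameter names below follow the AKS mesh CLI as we understand it and should be verified against current documentation, and every resource name and Key Vault object name is a placeholder:

```shell
# Hedged sketch: all names are hypothetical placeholders; verify flags against
# the current `az aks mesh enable` reference before use.
# Enables the AKS Istio add-on with plug-in CA certificates from Azure Key Vault.
az aks mesh enable \
  --resource-group MyResourceGroup \
  --name MyCluster \
  --key-vault-id /subscriptions/<sub-id>/resourceGroups/MyResourceGroup/providers/Microsoft.KeyVault/vaults/MyKeyVault \
  --ca-cert-object-name istio-intermediate-cert \
  --ca-key-object-name istio-intermediate-key \
  --root-cert-object-name istio-root-cert \
  --cert-chain-object-name istio-cert-chain
```

With this arrangement, only the intermediate material ever reaches the cluster; the root CA stays offline, which is the point of the plug-in CA model described above.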
The PeerAuthentication custom resource in Istio controls the traffic accepted by workloads. By default, it is set to PERMISSIVE to facilitate migration, but it can be set to STRICT to enforce mTLS across the mesh. Istio also supports granular authorization policies, allowing control over IP blocks, namespaces, service accounts, request paths, methods, and headers. The Istio add-on also supports integration with Azure Key Vault (AKV) and the AKV Secrets Store CSI Driver add-on for plug-in CA certificates, where the root CA lives offline on a secure machine and the intermediate certificates for the Istiod control plane are synced to the cluster by the CSI Driver add-on. Additionally, certificates for the Istio ingress gateway, for TLS termination or SNI passthrough, can also be stored in AKV.

Defense-in-Depth with Cilium, ACNS, and Istio

Combining the capabilities of Cilium's eBPF technologies through ACNS with the AKS-managed Istio add-on, AKS provides a defense-in-depth strategy for securing Kubernetes clusters. Azure CNI's Cilium network policies and ACNS FQDN filtering enforce pod-to-pod and pod-to-egress policies at Layers 3 and 4, while Istio enforces STRICT mTLS and Layer 7 authorization policies. This multi-layered approach ensures comprehensive security coverage across all layers of the stack. Now, let's highlight the key steps in achieving this:

Step 1: Create an AKS cluster with Azure CNI (Powered by Cilium), ACNS, and the Istio add-on enabled.

```shell
az aks create \
  --resource-group $RESOURCE_GROUP \
  --name $CLUSTER_NAME \
  --location $LOCATION \
  --kubernetes-version 1.30.0 \
  --node-count 3 \
  --node-vm-size standard_d16_v3 \
  --enable-managed-identity \
  --network-plugin azure \
  --network-dataplane cilium \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16 \
  --enable-asm \
  --enable-acns \
  --generate-ssh-keys
```

Step 2: Create a Cilium FQDN policy that allows egress traffic to google.com while blocking traffic to httpbin.org.
Sample policy (fqdn-filtering-policy.yaml):

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: sleep-network-policy
  namespace: foo
spec:
  endpointSelector:
    matchLabels:
      app: sleep
  egress:
    - toFQDNs:
        - matchPattern: "*.google.com"
    - toEndpoints:
        - matchLabels:
            "k8s:io.kubernetes.pod.namespace": foo
            "k8s:app": helloworld
    - toEndpoints:
        - matchLabels:
            "k8s:io.kubernetes.pod.namespace": kube-system
            "k8s:k8s-app": kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
```

Apply the policy:

```shell
kubectl apply -f fqdn-filtering-policy.yaml
```

Step 3: Create an Istio deny-by-default AuthorizationPolicy. This denies all requests across the mesh unless they are specifically authorized with an ALLOW policy.

Sample policy (istio-deny-all-authz.yaml):

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-nothing
  namespace: aks-istio-system
spec: {}
```

Apply the policy:

```shell
kubectl apply -f istio-deny-all-authz.yaml
```

Step 4: Deploy an Istio L7 AuthorizationPolicy to explicitly allow HTTP GET requests to the "sample" pod in namespace foo.

Sample policy (istio-L7-allow-policy.yaml):

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-get-requests
  namespace: foo
spec:
  selector:
    matchLabels:
      app: sample
  action: ALLOW
  rules:
    - to:
        - operation:
            methods: ["GET"]
```

Apply the policy:

```shell
kubectl apply -f istio-L7-allow-policy.yaml
```

Step 5: Deploy an Istio STRICT mTLS PeerAuthentication resource to enforce that all workloads in the mesh accept only Istio mTLS traffic.

Sample PeerAuthentication (istio-peerauth.yaml):

```yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: strict-mtls
  namespace: aks-istio-system
spec:
  mtls:
    mode: STRICT
```

Apply the policy:

```shell
kubectl apply -f istio-peerauth.yaml
```

These examples demonstrate how you can manage traffic to specific FQDNs and enforce L7 authorization rules in your AKS cluster.

Conclusion

Traditional IP and perimeter security models are insufficient for the dynamic nature of cloud-native environments.
More sophisticated security mechanisms, such as identity-based policies and DNS names, are required. Azure CNI Powered by Cilium and ACNS provides robust FQDN filtering and Layer 3/4 network policy enforcement. The Istio add-on offers mTLS for identity-based encryption and Layer 7 authorization policies. A defense-in-depth model incorporating both Azure CNI and service mesh mechanisms is recommended for maximizing your security posture. So, give these a try and let us know (Azure Kubernetes Service Roadmap (Public)) how we can evolve our roadmap to help you build the best with Azure.

Credit(s): Niranjan Shankar, Sr. Software Engineer, Microsoft

Achieve high-bandwidth, private, and seamless Microsoft Azure connectivity
Multicloud computing is revolutionizing enterprise IT, enabling businesses to harness the strengths of different public cloud providers for greater agility, performance, and cost efficiency. However, without the right connectivity strategy, organizations risk increased complexity, higher latency, and security vulnerabilities, ultimately undermining the benefits of a multicloud approach. Together, Microsoft and Equinix provide a complete platform for multicloud networking with all the ingredients you need to achieve a best-case mix of performance, security, cost-effectiveness, and flexibility. Join us for a two-part webinar series where Equinix and Microsoft experts will guide you through best practices for seamless, secure, and high-performance multicloud networking. Learn how to optimize connectivity, reduce data egress costs, and unlock the full potential of Azure and Equinix's global interconnection ecosystem.

Webinar 1: Key Considerations for Optimized Microsoft Azure Connectivity

Summary

Achieve high-bandwidth, private, and seamless Microsoft Azure connectivity. As organizations modernize their IT infrastructure, hybrid cloud has emerged as the preferred solution for ensuring scalability, security, and performance. However, just as critical as deciding where to place specific workloads is determining how everything will be connected. Join experts from Microsoft, Equinix, and Enterprise Strategy Group as they reveal five key cloud migration considerations for 2025 to help you:

- Prioritize essential network capabilities
- Manage top cost drivers
- Improve network scalability and simplicity
- Increase app performance

Our speakers:

Jim Frey - Principal Analyst, Enterprise Strategy Group
Jim has over 30 years of experience in networking and software product development, including senior leadership roles in partner marketing at Kentik Technologies. He has also held executive positions in industry research, marketing, and product management.
Jim holds a BSc in Engineering from the Colorado School of Mines and an MSc in Computer and Information Sciences from Rensselaer Polytechnic Institute.

Kevin Lopez - Director of Solution Sales, Microsoft Azure
Kevin is a seasoned leader with over 30 years in sales and business development. He joined Microsoft in 2007 and has held various roles, including Technical Architect and Azure Global Black Belt team member. Currently, he leads the Azure Network Security Global Black Belt team for the Americas and is involved in Microsoft Worldwide Learning and social change initiatives.

Brian Petit - Principal Solution Architect, Equinix
Brian is a hybrid multicloud network professional with decades of experience in data networking infrastructure design, deployment, and applications. As the Senior Principal Solutions Architect dedicated to Microsoft at Equinix, he is responsible for overall platform integration and alignment with Azure.

Learn how to optimize hybrid cloud connectivity with Microsoft Azure ExpressRoute in Equinix.

Reserve your spot:
Date: Thursday, March 13, 2025
Time (AMER): 10:00 AM Pacific Daylight Time
Time (EMEA): 11:00 AM Central European Time
Time (APAC): 11:00 AM Singapore Time
Duration: 1 hour

Webinar 2: Explore Microsoft Azure ExpressRoute Use Cases and Reference Architectures

Summary

Migrating to Azure while managing a hybrid multicloud strategy can be complex, with challenges around network performance, security, and cost optimization. In this technical webinar, Microsoft and Equinix experts will showcase real-world Azure ExpressRoute customer use cases, demonstrating how businesses are overcoming these hurdles to migrate faster, more securely, and cost-effectively, all while maintaining seamless connectivity between on-premises environments, Microsoft Azure, and other clouds.
Learn how to:

- Leverage cloud-adjacent architectures
- Minimize egress costs
- Improve network performance
- Ensure data sovereignty and compliance

Our speakers:

Mays Algebary - Global Black Belt, Microsoft Azure
Mays is a cloud network security professional with a decade of experience safeguarding enterprise infrastructures. As a Global Black Belt at Microsoft, she is an expert in zero-trust architectures, secure cloud networking, and migration strategies. Mays works closely with organizations to strengthen their security posture, minimize risks, and ensure seamless, high-performance cloud connectivity.

Brian Petit - Sr. Principal Solution Architect, Equinix
Brian is a hybrid multicloud network professional with decades of experience in data networking infrastructure design, deployment, and applications. As the Senior Principal Solutions Architect dedicated to Microsoft at Equinix, he is responsible for overall platform integration and alignment with Azure.

RSVP today to learn how Equinix and Azure can help you migrate with confidence, ensuring seamless connectivity, performance, and security across your hybrid multicloud infrastructure.

Reserve your spot:
Date: Thursday, March 20, 2025
Time (AMER): 10:00 AM Pacific Daylight Time
Time (EMEA): 11:00 AM Central European Time
Time (APAC): 11:00 AM Singapore Time
Duration: 30 minutes