azure private link
Azure ExpressRoute Direct: A Comprehensive Overview
What is ExpressRoute?

Azure ExpressRoute lets you extend your on-premises network into the Microsoft cloud over a private connection facilitated by a connectivity provider. With ExpressRoute, you can establish connections to Microsoft cloud services such as Microsoft Azure and Microsoft 365. ExpressRoute offers four connectivity models between your on-premises network and the Microsoft cloud: CloudExchange colocation, point-to-point Ethernet connection, any-to-any (IPVPN) connection, and ExpressRoute Direct. ExpressRoute Direct gives you the ability to connect directly into the Microsoft global network at peering locations strategically distributed around the world, providing dual 100-Gbps or 10-Gbps connectivity that supports active-active connectivity at scale.

Why ExpressRoute Direct Is Becoming the Preferred Choice for Customers

ExpressRoute Direct with ExpressRoute Local – Free Egress: ExpressRoute Direct includes ExpressRoute Local, which allows private connectivity to Azure services within the same metro or peering location. This setup is particularly cost-effective because egress (outbound) data transfer is free, regardless of whether you're on a metered or unlimited data plan. By avoiding Microsoft's global backbone, ExpressRoute Local offers high-speed, low-latency connections for regionally co-located workloads without incurring additional data transfer charges.

Dual Port Architecture: Both ExpressRoute Direct and the service provider model feature a dual-port architecture, with two physical fiber pairs connected to separate Microsoft router ports and configured in an active/active BGP setup that distributes traffic across both links simultaneously for redundancy and improved throughput. What sets Microsoft apart is making this level of resiliency standard, not optional. Forward-thinking customers in regions like Sydney take it further by deploying ExpressRoute Direct across multiple colocation facilities, for example placing one port pair in Equinix SY2 and another in NextDC S1, creating four connections across two geographically separate sites. This design protects against facility-level outages from power failures, natural disasters, or accidental infrastructure damage, ensuring business continuity for organizations where downtime is simply not an option.

When Geography Limits Your Options: Not every region offers facility diversity. New Zealand, for example, has only one ExpressRoute peering location, so businesses needing geographic redundancy must also connect to Sydney, incurring Auckland-to-Sydney link costs but gaining critical diversity to mitigate outages. While ExpressRoute's dual ports provide active/active redundancy, both terminate on the same Microsoft edge, so true disaster recovery requires using Sydney's edge as well. ExpressRoute Direct scales from basic dual-port setups to multi-facility deployments and offers another advantage: free data transfer within the same geopolitical region. Once traffic enters Microsoft's network, New Zealand customers can move data between Azure services across the trans-Tasman link without per-GB fees, with Microsoft absorbing those costs.

Premium SKU: Global Reach: Azure ExpressRoute Direct with the Premium SKU enables Global Reach, allowing private connectivity between your on-premises networks across different geographic regions through Microsoft's global backbone.
This means you can link ExpressRoute circuits in different countries or continents, facilitating secure and high-performance data exchange between global offices or data centers. The Premium SKU extends the capabilities of ExpressRoute Direct by supporting cross-region connectivity, increased route limits, and access to more Azure regions, making it ideal for multinational enterprises with distributed infrastructure.

MACsec: Defense in Depth and Enterprise Security: ExpressRoute Direct uniquely supports MACsec (IEEE 802.1AE) encryption at the data-link layer, allowing your router and Microsoft's router to establish encrypted communication even within the colocation facility. This optional feature provides additional security for compliance-sensitive workloads in banking or government environments.

High-Performance Data Transfer for the Enterprise: Azure ExpressRoute Direct enables ultra-fast and secure data transfer between on-premises infrastructure and Azure by offering dedicated bandwidth of 10 to 100 Gbps. This high-speed connectivity is ideal for large-scale data movement scenarios such as AI workloads, backup, and disaster recovery. It ensures consistent performance, low latency, and enhanced reliability, making it well suited for hybrid and multicloud environments that require frequent or time-sensitive data synchronization.

FastPath Support: Azure ExpressRoute Direct now supports FastPath for Private Endpoints and Private Link, enabling low-latency, high-throughput connections by bypassing the virtual network gateway. This feature is available only with ExpressRoute Direct circuits (10 Gbps or 100 Gbps) and is in limited general availability. While a gateway is still needed for route exchange, traffic flows directly once FastPath is enabled.

ExpressRoute Direct Setup Workflow

Before provisioning ExpressRoute Direct resources, proper planning is essential. Key considerations include understanding the two connectivity patterns available for ExpressRoute Direct from the customer edge to the Microsoft Enterprise Edge (MSEE).

Option 1: Colocation of Customer Equipment: This is a common pattern where the customer racks their network device (edge router) in the same third-party data center facility that houses Microsoft's networking gear (e.g., Equinix or NextDC). They install their router or firewall there and then order a short cross-connect from their cage to Microsoft's cage in that facility. The cross-connect is simply a fiber cable run through the facility's patch panel connecting the two parties. This direct colocation approach has the advantage of a single, highly efficient physical link (no intermediate hops) between the customer and Microsoft, completing the layer-1 connectivity in one step.

Option 2: Using a Carrier/Exchange Provider: If the customer prefers not to move hardware into a new facility (due to cost or complexity), they can leverage a provider that already has a presence in the relevant colocation facility. In this case, the customer connects from their data center to the provider's network, and the provider extends connectivity into the Microsoft peering location. For instance, the customer could contract with Megaport or a local telco to carry traffic from their on-premises location into Megaport's equipment, and Megaport in turn handles the cross-connection to Microsoft in the target facility. In the scenario discussed here, the customer had already set up connections to Megaport in their data center.
Using an exchange can simplify logistics, since the provider arranges the cross-connect and often provides an LOA on the customer's behalf. It may also be more cost-effective where the customer's location is far from any Microsoft peering site. Many enterprises find that placing equipment in a well-connected colocation facility works best for their needs. Banks and large organizations have successfully taken this approach, for example placing routers in Equinix Sydney or NextDC Sydney to establish a direct fiber link to Azure. However, not every organization wants the capital expense or complexity of managing physical equipment in a new location. For those situations, using a cloud exchange like Megaport offers a practical alternative that still delivers dedicated connectivity while letting someone else handle the infrastructure management.

Once the decision on the connectivity pattern is made, the next step is to provision ExpressRoute Direct ports and establish the physical link.

Step 1: Provision ExpressRoute Direct Ports

Through the Azure portal (or CLI), the customer creates an ExpressRoute Direct resource. The customer must select an appropriate peering location, which corresponds to the colocation facility housing Azure's routers. For example, the customer would select the specific facility (such as "Vocus Auckland" or "Equinix Sydney SY2") where they intend to connect. The customer also chooses the port bandwidth (either 10 Gbps or 100 Gbps) and the encapsulation type (Dot1Q or QinQ) during this setup. Azure then allocates two ports on two separate Microsoft devices in that location, essentially giving the customer a primary and secondary interface for redundancy and removing a single point of failure affecting their connectivity.

Critical considerations to keep in mind during this step:

Encapsulation: When configuring ExpressRoute Direct ports, the customer must choose an encapsulation method. Dot1Q (802.1Q) uses a single VLAN tag for the circuit, whereas Q-in-Q (802.1ad) uses stacked VLAN tags (an outer S-Tag and inner C-Tag). Q-in-Q allows multiple circuits on one physical port with overlapping customer VLAN IDs because Azure assigns a unique outer tag per circuit, making it ideal if the customer needs several ExpressRoute circuits on the same port. Dot1Q, by contrast, requires each VLAN ID to be unique across all circuits on the port and is often used if the equipment doesn't support Q-in-Q. (Most modern deployments prefer Q-in-Q for flexibility.)

Capacity Planning: This offering allows customers to overprovision and utilize up to 20 Gbps of capacity. Design for 10 Gbps with redundancy, not 20 Gbps of total capacity. During Microsoft's monthly maintenance windows, one port may go offline, and your network must handle this seamlessly.
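For reference, a minimal Azure CLI sketch of this step might look like the following. The resource names, region, and peering location are illustrative assumptions, and flag syntax can vary slightly between CLI versions, so treat the Microsoft Learn CLI guide linked at the end of this article as authoritative.

```bash
# Create the ExpressRoute Direct port pair (names and locations are placeholders).
az network express-route port create \
  --resource-group rg-connectivity \
  --name erdirect-syd \
  --location australiaeast \
  --peering-location "Equinix-Sydney-SY2" \
  --bandwidth 100 Gbps \
  --encapsulation QinQ

# Review the allocated port pair, including the two links used for redundancy.
az network express-route port show \
  --resource-group rg-connectivity \
  --name erdirect-syd
```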
Step 2: Generate a Letter of Authorization (LOA)

After the ExpressRoute Direct resource is created, Microsoft generates a Letter of Authorization. The LOA is a document (often a PDF) that authorizes the data center operator to connect a specific Microsoft port to the designated port. It includes details like the facility name, patch panel identifier, and port numbers on Microsoft's side. If co-locating your own gear, you will also obtain a corresponding LOA from the facility for your port (or simply indicate your port details on the cross-connect order form). If a provider like Megaport is involved, that provider will generate an LOA for their port as well. Two LOAs are typically needed, one for Microsoft's ports and one for the other party's ports, which are then submitted to the facility to execute the cross-connect.

Step 3: Complete the Cross-Connect with the Data Center Provider

Using the LOAs, the data center's technicians perform the cross-connection in the meet-me room (MMR). At this point, the physical fiber link is established between the Microsoft router and the customer (or provider) equipment. The link goes through a patch panel in the MMR rather than a direct cable between cages, for security and manageability. After patching, the circuit is in place but typically kept administratively down until ready.

Critical considerations to keep in mind during this step:

When port allocation conflicts occur, engage Microsoft Support rather than recreating resources. They coordinate with colocation providers to resolve conflicts or issue new LOAs.

Step 4: Change the Admin Status of Each Link

Once the cross-connect is physically completed, you can head into the Azure portal and flip the Admin State of each ExpressRoute Direct link to "Enabled." This action lights up the optical interface on Microsoft's side and starts the billing meter, so you'll want to make sure everything is working properly first. Azure gives you visibility into the health of your fiber connection through optical power metrics: you can check the receive light levels right in the portal, and a healthy connection should show power readings between -1 dBm and -9 dBm, which indicates a strong fiber signal. If you're seeing readings outside this range, or worse, no light at all, that's a red flag pointing to a potential issue such as a mis-patch or a faulty fiber connector. In one real case, a bad fiber connector was caught because the light levels were too low, and the facility had to come back and re-patch the connection. This optical power check is really your first line of defense; once you see good light levels within the acceptable range, you know your physical layer is solid and you're ready to move on to the next steps.

Critical considerations to keep in mind during this step:

Proactive Monitoring: Set up alerts for BGP session failures and optical power thresholds. Link failures might not immediately impact users but require quick restoration to maintain full redundancy.

At this stage, you've successfully navigated the physical infrastructure challenge: the ExpressRoute Direct port pair is provisioned, fiber cross-connects are in place, and those critical optical power levels are showing healthy readings. Essentially, a private physical highway connecting your network edge directly to Microsoft's backbone infrastructure has been built.

Step 5: Create ExpressRoute Circuits

ExpressRoute circuits represent the logical layer that transforms your physical ExpressRoute Direct ports into functional network connections. Through the Azure portal, organizations create circuit resources linked to their ExpressRoute Direct infrastructure, specifying bandwidth requirements and selecting the appropriate SKU (Local, Standard, or Premium) based on connectivity needs. A key advantage is the ability to provision multiple circuits on the same physical port pair, provided aggregate bandwidth stays within physical limits. For example, an organization with 10 Gbps ExpressRoute Direct might run a 1 Gbps non-production circuit alongside a 5 Gbps production circuit on the same infrastructure. Azure handles the technical complexity through automatic VLAN management.
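Purely as a hedged sketch of Steps 4 and 5 from the CLI (link, circuit, and resource group names are placeholders, and flag syntax may differ slightly by CLI version):

```bash
# Step 4: enable the admin state of each ExpressRoute Direct link (billing starts once enabled).
az network express-route port link update \
  --resource-group rg-connectivity \
  --port-name erdirect-syd \
  --name link1 \
  --admin-state Enabled

az network express-route port link update \
  --resource-group rg-connectivity \
  --port-name erdirect-syd \
  --name link2 \
  --admin-state Enabled

# Step 5: create a 5 Gbps Premium circuit on the Direct port.
PORT_ID=$(az network express-route port show \
  --resource-group rg-connectivity \
  --name erdirect-syd \
  --query id --output tsv)

az network express-route create \
  --resource-group rg-connectivity \
  --name ercircuit-prod \
  --location australiaeast \
  --express-route-port "$PORT_ID" \
  --bandwidth 5 Gbps \
  --sku-tier Premium \
  --sku-family MeteredData
```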
Step 6: Establish Peering

Once your ExpressRoute circuit is created and VLAN connectivity is established, the next crucial step involves setting up BGP (Border Gateway Protocol) sessions between your network and Microsoft's infrastructure. ExpressRoute supports two primary BGP peering types: Private Peering for accessing Azure virtual networks and Microsoft Peering for reaching Microsoft SaaS services like Office 365 and Azure PaaS offerings. For most enterprise scenarios connecting data centers to Azure workloads, Private Peering is the focal point. Azure provides specific BGP IP addresses for your circuit configuration, defining /30 subnets for both primary and secondary link peering, which you'll configure on your edge router to exchange routing information. The typical flow involves your organization advertising on-premises network prefixes while Azure advertises VNet prefixes through these BGP sessions, creating dynamic route discovery between your environments. Importantly, both primary and secondary links maintain active BGP sessions, ensuring that if one connection fails, the secondary BGP session seamlessly maintains connectivity and keeps your network resilient against single points of failure.

Step 7: Routing and Testing

Once BGP sessions are established, your ExpressRoute circuit becomes fully operational, seamlessly extending your on-premises network into Azure virtual networks. Connectivity testing with ping, traceroute, and application traffic confirms that your on-premises servers can now communicate directly with Azure VMs through the private ExpressRoute path, bypassing the public internet entirely. The traffic remains completely isolated to your circuit via VLAN tags, ensuring no intermingling with other tenants while delivering the low latency and predictable performance that only dedicated connectivity can provide. At the end of this stage, the customer's data center is linked to Azure at layer 3 via a private, resilient connection. They can access Azure resources as if they were on the same LAN extension, with low latency and high throughput. All that remains is to connect this circuit to the relevant Azure virtual networks (via an ExpressRoute gateway) and verify end-to-end application traffic.

Step-by-step instructions are available below:
Configure Azure ExpressRoute Direct using the Azure portal | Microsoft Learn
Azure ExpressRoute: Configure ExpressRoute Direct | Microsoft Learn
Azure ExpressRoute: Configure ExpressRoute Direct: CLI | Microsoft Learn
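To complement the references above, here is a hedged CLI sketch of Step 6 (private peering) and the final gateway connection mentioned in Step 7; the ASN, VLAN ID, peer subnets, and resource names are illustrative assumptions only:

```bash
# Step 6: configure Azure private peering on the circuit (values are placeholders).
az network express-route peering create \
  --resource-group rg-connectivity \
  --circuit-name ercircuit-prod \
  --peering-type AzurePrivatePeering \
  --peer-asn 65010 \
  --vlan-id 200 \
  --primary-peer-subnet 192.168.100.0/30 \
  --secondary-peer-subnet 192.168.100.4/30

# Step 7: connect the circuit to a VNet through an existing ExpressRoute gateway.
CIRCUIT_ID=$(az network express-route show \
  --resource-group rg-connectivity \
  --name ercircuit-prod \
  --query id --output tsv)

az network vpn-connection create \
  --resource-group rg-connectivity \
  --name conn-er-hub \
  --vnet-gateway1 ergw-hub \
  --express-route-circuit2 "$CIRCUIT_ID"
```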
WordPress App how to restrict access to specific pages on the site

Hello all, I have a WordPress app hosted on Azure and I am struggling with how to secure specific pages from public access. For example:
http://www.mysite.com/wp-admin
http://www.mysite.com/info.php
I'd like it so that only specific IP addresses or Microsoft user accounts can access some pages, such as admin pages, and for other pages I'd like no access at all, where any visit is simply blocked. I've viewed the documentation for Front Door and some networking restrictions, but that seems to cover only IP addresses, and I'm confused about how I can set those rules for specific pages within the app. I know WordPress offers plugins with this sort of functionality, but I'd like to take advantage of Azure's security features rather than plugins from WordPress. Any help is very appreciated. Thank you
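As a hedged illustration (not part of the original question): App Service access restrictions can limit who reaches the app by IP, but they apply to the whole app rather than individual paths such as /wp-admin; page-level rules generally require a reverse proxy such as Azure Front Door or Application Gateway with WAF in front of the app. A minimal sketch with placeholder names:

```bash
# Allow a specific office IP to the App Service; once at least one Allow rule exists,
# all other traffic is implicitly denied. Applies app-wide, not per page.
az webapp config access-restriction add \
  --resource-group rg-wordpress \
  --name my-wordpress-app \
  --rule-name allow-office-ip \
  --action Allow \
  --ip-address 203.0.113.10/32 \
  --priority 100
```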
A Guide to Azure Data Transfer Pricing

Understanding Azure networking charges is essential for businesses aiming to manage their budgets effectively. Given the complexity of Azure networking pricing, which involves various influencing factors, the goal here is to bring a clearer understanding of the associated data transfer costs by breaking down the pricing models into the following use cases:
VM to VM
VM to Private Endpoint
VM to Internal Standard Load Balancer (ILB)
VM to Internet
Hybrid connectivity
Please note this is a first version; a second version will follow with additional scenarios.

Disclaimer: Pricing may change over time; check the public Azure pricing calculator for up-to-date pricing information. Actual pricing may vary depending on agreements, purchase dates, and currency exchange rates. Sign in to the Azure pricing calculator to see pricing based on your current program/offer with Microsoft.

1. VM to VM

1.1. VM to VM, same VNet
Data transfer within the same virtual network (VNet) is free of charge. This means that traffic between VMs within the same VNet will not incur any additional costs (doc). Data transfer across Availability Zones (AZ) is also free (doc).

1.2. VM to VM, across VNet peering
Azure VNet peering enables seamless connectivity between two virtual networks, allowing resources in different VNets to communicate with each other as if they were within the same network. When data is transferred between peered VNets, charges apply for both ingress and egress data (doc). Two sub-cases apply:
VM to VM, across VNet peering, same region
VM to VM, across Global VNet peering
Azure regions are grouped into three Zones (distinct from Availability Zones within a specific Azure region), and Global VNet Peering pricing is based on that geographic structure: when data is transferred from a VNet in Zone 1 to a VNet in Zone 2, the outbound data transfer rate for Zone 1 and the inbound data transfer rate for Zone 2 apply (doc).

1.3. VM to VM, through a Network Virtual Appliance (NVA)
Data transfer through an NVA involves charges for both ingress and egress data, depending on the volume of data processed. When an NVA is in the path, such as for spoke-VNet-to-spoke-VNet connectivity via an NVA (firewall, for example) in the hub VNet, it incurs VM to VM pricing twice. Note that this reflects only data transfer charges and does not include NVA/Azure Firewall processing costs.

2. VM to Private Endpoint (PE)

Private Endpoint pricing includes charges for the provisioned resource and data transfer costs based on traffic direction. For instance, writing to a storage account through a Private Endpoint incurs outbound data charges, while reading incurs inbound data charges (doc).

2.1. VM to PE, same VNet
Since data transfer within a VNet is free, charges are only applied for data processing through the Private Endpoint. Cross-region traffic will incur additional costs if the storage account and the Private Endpoint are located in different regions.

2.2. VM to PE, across VNet peering
Accessing Private Endpoints from a peered network incurs only Private Link Premium charges, with no peering fees (doc). Two sub-cases apply:
VM to PE, across VNet peering, same region
VM to PE, across VNet peering, PE region != SA region
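To make the ingress-plus-egress model in section 1.2 concrete, here is a small back-of-envelope sketch; the per-GB rates below are placeholders, not actual Azure prices, so substitute the current rates from the pricing calculator:

```bash
# Rough estimate of VNet peering data transfer cost for a given volume.
# RATE_* values are hypothetical placeholders, not real Azure prices.
GB_TRANSFERRED=500
RATE_EGRESS_PER_GB=0.01    # charged outbound on the source VNet
RATE_INGRESS_PER_GB=0.01   # charged inbound on the destination VNet

COST=$(awk -v gb="$GB_TRANSFERRED" -v e="$RATE_EGRESS_PER_GB" -v i="$RATE_INGRESS_PER_GB" \
  'BEGIN { printf "%.2f", gb * (e + i) }')
echo "Estimated peering data transfer cost: \$${COST}"
```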
2.3. VM to PE, through NVA
When an NVA is in the path, such as for spoke-VNet-to-spoke-VNet connectivity via a firewall in the hub VNet, it incurs VM to VM charges between the VM and the NVA. However, as per the PE pricing model, there are no charges between the NVA and the PE. Note that this reflects only data transfer charges and does not include NVA/Azure Firewall processing costs.

3. VM to Internal Load Balancer (ILB)

Azure Standard Load Balancer pricing is based on the number of load balancing rules as well as the volume of data processed (doc).

3.1. VM to ILB, same VNet
Data transfer within the same virtual network (VNet) is free. However, the data processed by the ILB is charged based on its volume and on the number of load balancing rules implemented. Only the inbound traffic is processed by the ILB (and charged); the return traffic goes directly from the backend to the source VM (free of charge).

3.2. VM to ILB, across VNet peering
In addition to the Load Balancer costs, data transfer charges between VNets apply for both ingress and egress.

3.3. VM to ILB, through NVA
When an NVA is in the path, such as for spoke-VNet-to-spoke-VNet connectivity via a firewall in the hub VNet, it incurs VM to VM charges between the VM and the NVA, and VM to ILB charges between the NVA and the ILB/backend resource. Note that this reflects only data transfer charges and does not include NVA/Azure Firewall processing costs.

4. VM to internet

4.1. Data transfer and inter-region pricing model
Bandwidth refers to data moving in and out of Azure data centers, as well as data moving between Azure data centers; other transfers are explicitly covered by the Content Delivery Network, ExpressRoute pricing, or Peering (doc).

4.2. Routing Preference in Azure and the internet egress pricing model
When creating a public IP in Azure, Azure Routing Preference allows you to choose how your traffic routes between Azure and the internet. You can select either the Microsoft global network or the public internet for routing your traffic (doc). This choice can impact the performance and reliability of network traffic: with Routing Preference set to Microsoft network, ingress traffic enters the Microsoft network closest to the user and egress traffic exits the network closest to the user, minimizing travel on the public internet ("cold potato" routing). With Routing Preference set to Internet, by contrast, ingress traffic enters the Microsoft network closest to the hosted service region, transit ISP networks are used to route traffic, and travel on the Microsoft global network is minimized ("hot potato" routing). Bandwidth pricing for internet egress differs between the two options (doc).

4.3. VM to internet, direct
Data transferred out of Azure to the internet incurs charges, while data transferred into Azure is free of charge (doc). It is important to note that default outbound access for VMs in Azure will be retired on September 30, 2025; migration to an explicit outbound internet connectivity method is recommended (doc).

4.4. VM to internet, with a public IP
Here a Standard public IP is explicitly associated with the VM NIC, which incurs additional costs. As in the previous scenario, data transferred out of Azure to the internet incurs charges, while data transferred into Azure is free of charge (doc).

4.5. VM to internet, with NAT Gateway
In addition to the previous costs, data transfer through a NAT Gateway involves charges for both the data processed and the NAT Gateway resource itself (doc).
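As a hedged illustration of the Routing Preference choice described in section 4.2 (resource names and region are placeholders; the routing preference is expressed as an IP tag on a Standard public IP, and flag syntax may vary by CLI version):

```bash
# Create a Standard public IP that routes over the public internet ("hot potato" routing).
# Omit the RoutingPreference tag to keep the default Microsoft global network routing.
az network public-ip create \
  --resource-group rg-network \
  --name pip-internet-routed \
  --location australiaeast \
  --sku Standard \
  --ip-tags 'RoutingPreference=Internet'
```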
5. Hybrid connectivity

Hybrid connectivity involves connecting on-premises networks to Azure VNets. The pricing model includes charges for data transfer between the on-premises network and Azure, as well as any additional costs for using Network Virtual Appliances (NVAs) or Azure Firewall in the hub VNet.

5.1. Hub-and-spoke hybrid connectivity without firewall inspection in the hub
For an inbound flow, from the ExpressRoute gateway to a spoke VNet, VNet peering charges are applied once, on the spoke inbound; there are no charges on the hub outbound. For an outbound flow, from a spoke VNet to an ExpressRoute branch, VNet peering charges are applied once, on the spoke outbound only; there are no charges on the hub inbound (doc). This does not include ExpressRoute connectivity related costs.

5.2. Hub-and-spoke hybrid connectivity with firewall inspection in the hub
Since traffic transits and is inspected via a firewall in the hub VNet (Azure Firewall or a third-party firewall NVA), the previous concepts do not apply. "Standard" inter-VNet VM-to-VM charges apply between the firewall and the destination VM, inbound and outbound in both directions: once outbound from the source VNet (hub or spoke), once inbound on the destination VNet (spoke or hub). This reflects only data transfer charges within Azure and does not include NVA/Azure Firewall processing costs nor the costs related to ExpressRoute connectivity.

5.3. Hub-and-spoke hybrid connectivity via a third-party connectivity NVA (SD-WAN or IPsec)
Standard inter-VNet VM-to-VM charges apply between the NVA and the destination VM: inbound and outbound in both directions, both in the hub VNet and in the spoke VNet.

5.4. vWAN scenarios
VNet peering is charged only from the point of view of the spoke; see the examples and vWAN pricing components.

Next steps with cost management

To optimize cost management, Azure offers tools for monitoring and analyzing network charges. Azure Cost Management and Billing allows you to track and allocate costs across various services and resources, ensuring transparency and control over your expenses. By leveraging these tools, businesses can gain a deeper understanding of their network costs and make informed decisions to optimize their Azure spending.
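Purely as a hedged pointer for the cost review workflow described above (the command belongs to the Azure CLI consumption group; availability and output fields can vary by subscription and offer type):

```bash
# List raw usage/cost records for January 2025 to review networking-related charges.
# Filter or export the output further with --query (JMESPath) as needed.
az consumption usage list \
  --start-date 2025-01-01 \
  --end-date 2025-01-31 \
  --output table
```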
Mastering Azure at Scale: Why AVNM Is a Game-Changer for Network Management

In today's dynamic cloud-first landscape, managing distributed and large-scale network infrastructures has become increasingly complex. As enterprises expand their digital footprint across multiple Azure subscriptions and regions, the demand for centralized, scalable, and policy-driven network governance becomes critical. Azure Virtual Network Manager (AVNM) emerges as a strategic solution enabling unified control, automation, and security enforcement across diverse environments.

Key Challenges in Large-Scale Network Management
Managing multiple VNets increases operational complexity
Decentralized security approaches may leave vulnerabilities
Validating network changes becomes more critical as VNets grow
IP address consumption requires efficient management solutions
Complex network topologies demand simplified peering strategies

What is Azure Virtual Network Manager (AVNM)
AVNM is a centralized network management service in Azure that allows you to:
🌐 Group and manage virtual networks across multiple subscriptions and regions.
🔐 Apply security and connectivity configurations (like hub-and-spoke or mesh topologies) at scale.
⚙️ Automate network configuration using static or dynamic membership (via Azure Policy).
📊 Enforce organization-wide security rules that can override standard NSG rules.
🛠️ Deploy changes regionally with control over rollout sequence and frequency.
AVNM is especially useful for large enterprises managing complex, multi-region Azure environments, helping reduce operational overhead and improve consistency in network security and connectivity.

Use Cases of AVNM

1) Network Segmentation and Connectivity Features
AVNM allows network segmentation into development, production, testing, or team-based groups, and supports static and dynamic grouping. Connectivity configuration features include hub-and-spoke, mesh, and direct connectivity topologies, simplifying the creation and management of complex network topologies.
Network segmentation features: segment networks into Dev, Prod, Test, or team groups; group VNets and subnets at the subscription, management group, or tenant level; and use static or dynamic grouping by name or tags. The Basic Editor lets you edit ID/tag/resource group/subscription conditions with a GUI, while the Advanced Editor specifies flexible criteria with JSON. Membership changes are reflected automatically in the configurations applied to network groups.
Network group management: VNets can be picked manually for static membership, while Azure Policy drives dynamic membership.
Connectivity configuration features: create different topologies with a few clicks, including hub-and-spoke, mesh, and hub-and-spoke with direct connectivity between spokes. Typical use cases place gateways, firewalls, and other common infrastructure in the hub, shared by the spoke virtual networks, with fewer hops and lower latency across regions, subscriptions, and tenants. A CLI sketch of creating a network manager and a network group follows below.
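For orientation, a hedged Azure CLI sketch of standing up a network manager and a static network group (scope, names, and IDs are placeholders, and flag syntax may vary across CLI versions):

```bash
# Create the network manager scoped to a subscription, with connectivity and security admin features.
az network manager create \
  --resource-group rg-network \
  --name avnm-contoso \
  --location australiaeast \
  --scope-accesses Connectivity SecurityAdmin \
  --network-manager-scopes subscriptions="/subscriptions/<subscription-id>"

# Create a network group and add a VNet as a static member.
az network manager group create \
  --resource-group rg-network \
  --network-manager-name avnm-contoso \
  --name ng-production

az network manager group static-member create \
  --resource-group rg-network \
  --network-manager-name avnm-contoso \
  --network-group-name ng-production \
  --static-member-name member-vnet-prod-01 \
  --resource-id "/subscriptions/<subscription-id>/resourceGroups/rg-network/providers/Microsoft.Network/virtualNetworks/vnet-prod-01"
```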
2) Security Configuration Features
Admin rules enforce organizational-level security policies, are applied to all resources in the desired network groups, and can overwrite conflicting rules. User rules, managed by AVNM, allow micro-segmentation and conflict-free rules with modularity. Security admin rules work alongside NSGs and are evaluated prior to NSG rules, ensuring consistent, high-priority network security policies globally.
Admin rules and user rules: admin rules target network admins and central governance teams, applying security policies at a high level and automatically to new VMs. User rules are designed for product/service teams, enabling micro-segmentation and modular, conflict-free rules.
Security admin rules and NSGs: security admin rules are evaluated before NSG rules and can allow, deny, or always allow traffic. Allowed and denied traffic can be viewed in VNet flow logs.
Comparison of security admin rules and NSG rules: security admin rules, applied by network admins, enforce organizational security with higher priority. NSG rules, applied by individual teams, offer flexibility within that guard rail and operate with lower priority.

3) Virtual Network Verifier
AVNM Virtual Network Verifier prevents errors through compliance checking, simplifies debugging, enhances network management, allows role-based and delegated access, and offers detailed analysis and reporting. Its goal is to assist with virtual network management by providing diagnostics, security compliance, and guard-rail checks across complex topologies spanning ExpressRoute, VNet connections, Virtual WAN, and SD-WAN branches with remote users.

4) IPAM
AVNM IPAM creates pools for IP address planning, automatically assigns non-overlapping CIDRs, reserves IPs for specific demands, and prevents Azure address space from overlapping with on-premises or multi-cloud environments. It can also enforce that users create VNets with non-overlapping CIDRs. For each IP address pool, you can view allocation details, including how many IPs are allocated to subnets and CIDR blocks and how many IPs are consumed by Azure resources such as VMs or VM scale sets.

Historically, AVNM pricing was structured around Azure subscriptions, which often limited flexibility for customers managing diverse network scopes. With the latest update, pricing is now aligned to virtual networks, empowering customers with greater control and adaptability in how they adopt AVNM across their environments.
Pricing details: https://learn.microsoft.com/en-us/azure/virtual-network-manager/overview#pricing

Azure Virtual Network Manager (AVNM) is a game-changer for network management, offering centralized control, enhanced security, and simplified connectivity configurations. Whether you're managing a small network or a complex multi-region environment, AVNM provides the tools and features needed to streamline operations and ensure robust network security.
Enhance your cloud resources' security posture with network security perimeter

With network security perimeter, we are introducing the ability to define a logical network boundary for PaaS resources deployed outside customer virtual networks and to secure their public connectivity.
Cross-Tenant Connectivity between Databricks and Storage account using Private Link

High-Level Architecture: Cross-Tenant Architecture Overview
The setup involves two separate Azure tenants:
Tenant A: hosts the Storage Account.
Tenant B: hosts Azure Databricks.
Databricks in Tenant B accesses the Hierarchical Namespace (HNS) enabled storage account in Tenant A using a Private Endpoint. The storage account in Tenant A is HNS enabled (i.e., suitable for Azure Data Lake Storage Gen2). A Private DNS Zone must be configured to resolve the NIC of the private endpoint created for the storage account. A multi-tenant Service Principal (SPN) is required to allow secure, managed access for Databricks in Tenant B to the storage account in Tenant A and to provide role-based access control (RBAC) access to the storage account.

Steps to Configure the Cross-Tenant Private Endpoint
1. Create an HNS-enabled storage account in Tenant A and capture its Resource ID and subresource name (blob).
2. In Tenant B, create a Private Endpoint using the Resource ID and subresource name of the storage account from Tenant A.
3. After the Private Endpoint is created, an approval request is triggered in Tenant A, found under Storage Account > Networking > Private endpoint connections.
4. In Tenant A, approve the private endpoint request. Minimum roles required: Private Link Service Owner OR Network Contributor.
5. Update the DNS configuration: ensure the Private DNS Zone resolves the NIC of the private endpoint so that name resolution for the storage account succeeds from Tenant B.

Multi-Tenant SPN Configuration and Access
1. In Tenant B, create a multi-tenant App Registration (SPN).
2. In Tenant A, grant admin consent using the following URL:
https://login.microsoftonline.com/{organization}/adminconsent?client_id={client-id}
Replace {organization} with Tenant A's Directory (Tenant) ID and {client-id} with the Application (Client) ID of the SPN.
Minimum Entra ID role required to approve admin consent: Application Administrator.
3. After consent is granted, assign RBAC to the SPN in Tenant A: role Storage Blob Data Contributor, scoped to the target storage account.

Azure documentation references
Multi-Tenant SPNs: https://learn.microsoft.com/en-us/entra/identity-platform/howto-convert-app-to-be-multi-tenant
Tenant-wide Admin Consent: Grant tenant-wide admin consent to an application - Microsoft Entra ID | Microsoft Learn
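A hedged CLI sketch of the core steps above (subscription IDs, resource names, the connection name, and the SPN application ID are placeholders; the private endpoint connection still has to be approved in Tenant A):

```bash
# Tenant B: create the private endpoint against the Tenant A storage account (manual approval flow).
az network private-endpoint create \
  --resource-group rg-databricks \
  --name pe-adls-crosstenant \
  --vnet-name vnet-databricks \
  --subnet snet-private-endpoints \
  --private-connection-resource-id "/subscriptions/<tenantA-sub-id>/resourceGroups/rg-storage/providers/Microsoft.Storage/storageAccounts/stadlsgen2" \
  --group-id blob \
  --connection-name conn-adls-crosstenant \
  --manual-request true

# Tenant A: approve the pending connection on the storage account.
az network private-endpoint-connection approve \
  --resource-group rg-storage \
  --resource-name stadlsgen2 \
  --type Microsoft.Storage/storageAccounts \
  --name "<pending-connection-name>" \
  --description "Approved for Tenant B Databricks"

# Tenant A: grant the multi-tenant SPN data-plane access to the storage account.
az role assignment create \
  --assignee "<spn-application-id>" \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<tenantA-sub-id>/resourceGroups/rg-storage/providers/Microsoft.Storage/storageAccounts/stadlsgen2"
```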
Utilizing Azure Key vault with Private link in DevOps

Azure Key Vault is a cloud service that provides secure storage and access to secrets such as API keys, passwords, certificates, or cryptographic keys. To enhance security and disable public access, Azure Key Vault can be integrated with a private endpoint powered by Azure Private Link. This private endpoint uses a private IP address from your VNet and brings the service into your VNet, effectively eliminating exposure from the public internet, as traffic between your virtual network and the service traverses the Microsoft backbone network.
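A minimal hedged sketch of that setup (placeholder names; the Private Link group ID for Key Vault is "vault"):

```bash
# Put the Key Vault behind a private endpoint in the VNet.
az network private-endpoint create \
  --resource-group rg-security \
  --name pe-keyvault \
  --vnet-name vnet-hub \
  --subnet snet-private-endpoints \
  --private-connection-resource-id "/subscriptions/<sub-id>/resourceGroups/rg-security/providers/Microsoft.KeyVault/vaults/kv-devops-secrets" \
  --group-id vault \
  --connection-name conn-keyvault

# Then disable public network access so only private connectivity remains.
az keyvault update \
  --resource-group rg-security \
  --name kv-devops-secrets \
  --public-network-access Disabled
```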
Effortless Private Endpoint Management in Azure Landing Zones: A Streamlined and Compliant Approach

Azure Landing Zones empower workload owners to deploy infrastructure while adhering to corporate policies. However, managing Private Endpoints can be challenging without granting excessive access. This article introduces a custom role for Subscription Owners, allowing them to manage Private Endpoints efficiently and securely. By ensuring compliance with Azure Landing Zone policies and minimizing access permissions, this streamlined approach simplifies Azure deployments. Curious about how this can enhance your Azure experience? Dive into the full article!
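The full article covers the actual role definition; purely as a hedged illustration of what such a custom role could look like (the action list and names below are assumptions and may differ from the role described in the article):

```bash
# Sample custom role: manage private endpoints and join designated subnets.
# Action list and scope are illustrative only.
cat > pe-operator-role.json <<'EOF'
{
  "Name": "Private Endpoint Operator (sample)",
  "Description": "Manage private endpoints and join approved subnets.",
  "Actions": [
    "Microsoft.Network/privateEndpoints/read",
    "Microsoft.Network/privateEndpoints/write",
    "Microsoft.Network/privateEndpoints/delete",
    "Microsoft.Network/virtualNetworks/subnets/read",
    "Microsoft.Network/virtualNetworks/subnets/join/action"
  ],
  "NotActions": [],
  "AssignableScopes": [ "/subscriptions/<landing-zone-subscription-id>" ]
}
EOF

az role definition create --role-definition @pe-operator-role.json
```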
Issue with Azure VM Conditional Access for Office 365 and Dynamic Public IP Detection

Hi all, I have a VM in Azure where I need to allow an account with MFA to bypass the requirement on this specific server when using Office 365. I've tried to achieve this using Conditional Access by excluding locations, specifically the IP range of my Azure environment. Although I've disconnected any public IPs from this server, the Conditional Access policy still isn't working as intended. The issue seems to be that it continues to detect a public IP, which changes frequently, making it impossible to exclude. What am I doing wrong?
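Not from the original thread, but as a hedged illustration of one common remedy: without an explicit outbound method, the VM egresses through a dynamic Azure-assigned public IP, so the address seen by Conditional Access keeps changing. Attaching a NAT gateway with a static Standard public IP to the VM's subnet makes the outbound address deterministic and therefore excludable as a named location. Placeholder names throughout:

```bash
# Give the subnet a static, predictable outbound IP via NAT gateway.
az network public-ip create \
  --resource-group rg-vm \
  --name pip-natgw-outbound \
  --sku Standard \
  --allocation-method Static

az network nat gateway create \
  --resource-group rg-vm \
  --name natgw-vm-outbound \
  --public-ip-addresses pip-natgw-outbound

az network vnet subnet update \
  --resource-group rg-vm \
  --vnet-name vnet-vm \
  --name snet-vm \
  --nat-gateway natgw-vm-outbound
```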
Introducing Private Link based networking with Azure Database for PostgreSQL – Flexible Server

Today we are proud to announce the preview of Azure Private Link support for private networking with Azure Database for PostgreSQL – Flexible Server, in addition to the existing networking capabilities provided by VNet injection.