
Azure Architecture Blog

Example Reference Network Topologies for API Management in Internal (Private) Mode

Mauricio_Rojas
Sep 20, 2023

The following example network topologies follow virtual network (VNet) integration practices for Azure API Management and therefore revolve around API Management in internal mode (Azure API Management with an Azure virtual network | Microsoft Learn). API Management in internal mode requires the Premium SKU or the Developer SKU (not recommended for production). Please estimate API Management costs appropriately using the Azure pricing calculator.
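
As a rough illustration of what provisioning such an instance could look like, here is a minimal sketch using the Python azure-mgmt-apimanagement SDK; the resource group, service name, publisher details, region, and subnet ID below are placeholders, not values from this article.

```python
# Hypothetical sketch: provisioning API Management in internal VNet mode.
# All names, the resource group, and the subnet ID are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.apimanagement import ApiManagementClient
from azure.mgmt.apimanagement.models import (
    ApiManagementServiceResource,
    ApiManagementServiceSkuProperties,
    VirtualNetworkConfiguration,
)

subscription_id = "<subscription-id>"
client = ApiManagementClient(DefaultAzureCredential(), subscription_id)

# Internal mode requires Premium (or Developer for non-production) and a
# dedicated subnet inside the virtual network.
poller = client.api_management_service.begin_create_or_update(
    resource_group_name="rg-apim-demo",
    service_name="apim-internal-demo",
    parameters=ApiManagementServiceResource(
        location="eastus",
        publisher_name="Contoso",
        publisher_email="apis@contoso.com",
        sku=ApiManagementServiceSkuProperties(name="Premium", capacity=1),
        virtual_network_type="Internal",  # no public gateway endpoint
        virtual_network_configuration=VirtualNetworkConfiguration(
            subnet_resource_id=(
                "/subscriptions/<subscription-id>/resourceGroups/rg-apim-demo"
                "/providers/Microsoft.Network/virtualNetworks/vnet-apim"
                "/subnets/snet-apim"
            )
        ),
    ),
)
service = poller.result()  # provisioning can take a long time (30+ minutes)
print(service.private_ip_addresses)
```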

 

Considerations for the Azure API Management Planes

 

Azure API Management comprises three essential components: an API gateway (data plane), a management plane, and a developer portal (user plane). These components are hosted on Azure's infrastructure and are fully managed by default.

 

The data plane handles end-user traffic, which is generated when users access APIs or other backend services for tasks such as content retrieval, querying, and data uploading. It facilitates consistent configuration of routing, security, throttling, caching, and observability. Notably, Azure API Management supports multi-region deployment, allowing regional API gateways to be added across supported Azure regions. For more information on multi-region deployment, please refer to Deploy Azure API Management instance to multiple Azure regions - Azure API Management | Microsoft Learn.
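
To make this concrete, the sketch below (again with the azure-mgmt-apimanagement SDK and placeholder names) shows one way a secondary regional gateway might be appended to an existing Premium instance; in internal mode each additional region would also need its own VNet and subnet configuration.

```python
# Hypothetical sketch: adding a regional gateway to an existing Premium
# instance by extending additional_locations. Names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.apimanagement import ApiManagementClient
from azure.mgmt.apimanagement.models import (
    AdditionalLocation,
    ApiManagementServiceSkuProperties,
)

client = ApiManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Read the current service definition, append a secondary region, and apply.
service = client.api_management_service.get("rg-apim-demo", "apim-internal-demo")
service.additional_locations = (service.additional_locations or []) + [
    AdditionalLocation(
        location="westeurope",
        sku=ApiManagementServiceSkuProperties(name="Premium", capacity=1),
        # For internal mode, each additional location also needs its own
        # virtual_network_configuration pointing at a subnet in that region.
    )
]
client.api_management_service.begin_create_or_update(
    "rg-apim-demo", "apim-internal-demo", service
).result()
```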

 

The management plane, on the other hand, involves a different type of traffic. It pertains to user-defined inputs for provisioning and configuring the API Management instance's service settings. These inputs can be provided through the Azure portal, PowerShell, Azure Cloud Shell, Visual Studio, or a variety of other tools such as Ansible and Terraform. For more information on the management plane, please refer to Azure API Management - Overview and key concepts | Microsoft Learn.

 

The user plane is focused on the open-source developer portal. This portal serves as a platform for users to discover APIs, onboard themselves to utilize these APIs, and learn about their integration into applications. The user plane interacts with the management plane to access detailed API information, create accounts, and subscribe to APIs, among other functions.

 

Design Considerations for Small Size Deployments

 

Single VNet Deployment

 

The following example topology is intended for implementations where routing requirements are simple and can be handled automatically by Azure Virtual Network (VNet) routing. In this single VNet topology, PaaS services in different subnets have connectivity to each other, to the Application Gateway subnet, and to the API Management subnet where these resources are allocated. All of these components can also communicate with their respective private endpoints as needed.
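
A minimal sketch of such a single VNet, assuming one dedicated subnet each for Application Gateway, API Management, and the private endpoints (names and address ranges below are placeholders), could look like this with the azure-mgmt-network SDK:

```python
# Hypothetical sketch: the single VNet with dedicated subnets for Application
# Gateway, API Management, and private endpoints. Placeholder names/ranges.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import AddressSpace, Subnet, VirtualNetwork

network = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

network.virtual_networks.begin_create_or_update(
    "rg-apim-demo",
    "vnet-apim",
    VirtualNetwork(
        location="eastus",
        address_space=AddressSpace(address_prefixes=["10.0.0.0/16"]),
        subnets=[
            Subnet(name="snet-appgw", address_prefix="10.0.0.0/24"),
            Subnet(name="snet-apim", address_prefix="10.0.1.0/24"),
            Subnet(name="snet-private-endpoints", address_prefix="10.0.2.0/24"),
        ],
    ),
).result()
```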

DNS resolution relies on a single virtual network link attached to this virtual network, as referenced in the Private Endpoint DNS article.

Internal API calls can flow east-west privately and securely, as none of these endpoints are exposed publicly.

Additional security, if required, can be implemented with Network Security Groups (NSGs) without the need to route traffic through a firewall. Note: NSGs are mandatory for API Management in internal or external mode. API calls to the internet can be accomplished by using a NAT gateway attached to the subnet that requires public access. One advantage of using a NAT gateway, as opposed to a public IP, is that it helps prevent source network address translation (SNAT) port exhaustion. For more information, see the NAT gateway documentation.
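
As a sketch of the NAT gateway association described above (using the azure-mgmt-network SDK; the resource group, subnet, and public IP resource ID are hypothetical):

```python
# Hypothetical sketch: attaching a NAT gateway to the subnet that needs
# outbound internet access. Names and resource IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import NatGateway, NatGatewaySku, SubResource

network = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create the NAT gateway and associate an existing standard public IP with it.
nat_gw = network.nat_gateways.begin_create_or_update(
    "rg-apim-demo",
    "natgw-outbound",
    NatGateway(
        location="eastus",
        sku=NatGatewaySku(name="Standard"),
        public_ip_addresses=[SubResource(id="<public-ip-resource-id>")],
    ),
).result()

# Point the workload subnet at the NAT gateway so outbound flows use it
# (helping avoid SNAT port exhaustion) instead of a dedicated public IP.
subnet = network.subnets.get("rg-apim-demo", "vnet-apim", "snet-backend")
subnet.nat_gateway = SubResource(id=nat_gw.id)
network.subnets.begin_create_or_update(
    "rg-apim-demo", "vnet-apim", "snet-backend", subnet
).result()
```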

 

 

 

Optional Hub Deployment


As depicted in the diagram, if traffic inspection is required for further security, along with other capabilities such as intrusion detection and prevention (IDS/IDPS), we can simply add a hub VNet with an Azure Firewall or a Network Virtual Appliance (NVA). Furthermore, if hybrid connectivity to on-premises networks is needed, the optional hub VNet can contain a VNet gateway. The hub VNet is intended to be optional, depending on your organization's ability to manage the routing complexity of a hub-and-spoke model and the implementation of firewall features.


This deployment can be scaled as the organization grows by simply adding additional spokes in different subscriptions or resource groups.

 

DNS resolution can be achieved by adding the relevant virtual networks to the virtual network links of the private DNS zones.
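
For example, a rough sketch of adding those virtual network links with the azure-mgmt-privatedns SDK might look like the following; the zone name, link names, and VNet resource IDs are placeholders (the zone shown stands in for whichever private endpoint zone your workload uses):

```python
# Hypothetical sketch: linking hub and spoke VNets to a private DNS zone
# used by private endpoints. Zone name, link names, and IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.privatedns import PrivateDnsManagementClient
from azure.mgmt.privatedns.models import SubResource, VirtualNetworkLink

dns = PrivateDnsManagementClient(DefaultAzureCredential(), "<subscription-id>")

vnet_ids = {
    "link-hub": "<hub-vnet-resource-id>",
    "link-spoke-1": "<spoke1-vnet-resource-id>",
}

for link_name, vnet_id in vnet_ids.items():
    dns.virtual_network_links.begin_create_or_update(
        resource_group_name="rg-dns",
        private_zone_name="privatelink.azurewebsites.net",  # example zone
        virtual_network_link_name=link_name,
        parameters=VirtualNetworkLink(
            location="global",                      # links are always global
            virtual_network=SubResource(id=vnet_id),
            registration_enabled=False,             # resolution only
        ),
    ).result()
```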

If you require DNS integration with on-premises networks or need to integrate this solution with custom DNS servers, please follow the guidance for Private DNS integration at scale.

 

Design Considerations for Medium Size Deployments

 

If your organization is adopting the Enterprise Landing Zone model or comprises multiple departments across various subscriptions, each requiring a centralized shared virtual network, the hub-and-spoke model (https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/hybrid-networking/hub-spoke?tabs=cli) is recommended. For cost efficiency, consider implementing a shared, centralized API Management service that can initiate API calls to resources located in the spokes, as illustrated in the diagram. This approach is often more economical than having a separate API Management instance for each individual spoke.

 

 

Within this model, Azure Firewall or Network Virtual Appliances can be employed to scrutinize API calls traversing east-west between hub resources, such as Application Gateway, and the endpoints residing in the spokes. Please be aware of the routing complexity, as the use of User Defined Routes (UDRs) may be required.

 

Note: This setup requires detailed knowledge of Azure network routing. Please ensure familiarity with how Azure virtual network routing works and how User Defined Routes need to be implemented to accomplish the desired routing state.
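
To illustrate what such a UDR could look like, here is a minimal sketch with the azure-mgmt-network SDK that sends traffic destined for a spoke address space to the firewall's private IP; all prefixes, IP addresses, and names are placeholders.

```python
# Hypothetical sketch: a user-defined route steering spoke-bound traffic
# through the hub firewall/NVA. Prefixes, IPs, and names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import Route, RouteTable

network = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Route table for a hub subnet (e.g. the Application Gateway subnet).
route_table = network.route_tables.begin_create_or_update(
    "rg-hub",
    "rt-hub-to-spokes",
    RouteTable(
        location="eastus",
        routes=[
            Route(
                name="to-spoke1-via-firewall",
                address_prefix="10.1.0.0/16",        # spoke 1 address space
                next_hop_type="VirtualAppliance",    # Azure Firewall / NVA
                next_hop_ip_address="10.0.1.4",      # firewall private IP
            )
        ],
    ),
).result()

# Associate the route table with the subnet whose traffic must be inspected.
subnet = network.subnets.get("rg-hub", "vnet-hub", "snet-appgw")
subnet.route_table = RouteTable(id=route_table.id)
network.subnets.begin_create_or_update(
    "rg-hub", "vnet-hub", "snet-appgw", subnet
).result()
```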

 

To explore different setups for traffic inspection, please review the architecture scenario examples for Application Gateway and Firewall.

Outbound API calls destined for the internet (north-south traffic) can be accomplished by routing traffic to the Azure Firewall or NVA and utilizing its Network Address Translation (NAT) capabilities.

 

Additionally, a NAT gateway can still be used if Azure Firewall or an NVA is not desired.

 

Incoming HTTP/HTTPS traffic from on-premises networks can also target the Application Gateway private IP that fronts the API Management instance.

 

DNS

Further DNS infrastructure may be needed depending on your organization's requirements for name resolution. If your organization requires more than adding VNets to the virtual network links of the DNS zones, such as using custom DNS servers or Azure DNS Private Resolver to resolve names across your implementation, please consider the following DNS scenarios, which are covered in depth.

 

Design Considerations for Large Size Deployments

 

For organizations seeking to deploy large-scale solutions, a multi-region hub-and-spoke architecture, coupled with Azure global load balancers such as Azure Traffic Manager or Azure Front Door, offers an effective approach. This document outlines two distinct scenarios to aid in your deployment strategy.

 

Scenario 1: Multi Region Hub and Spoke with Application Gateway and API Management

 

In this model, each geographical region is equipped with its own dedicated Application Gateway and API Management instance. The architecture capitalizes on Global Load Balancers, strategically distributing incoming traffic based on the geographical location of the client. This ensures optimal performance and responsiveness.
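
As a hedged sketch of the global load-balancing piece, the following shows one way a Traffic Manager profile with performance-based routing might be defined using the azure-mgmt-trafficmanager SDK; the endpoints stand in for each region's public entry point (for example an Application Gateway frontend), and every name, target, and probe path is a placeholder.

```python
# Hypothetical sketch: a Traffic Manager profile that sends clients to the
# closest regional entry point. Names, targets, and paths are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.trafficmanager import TrafficManagerManagementClient
from azure.mgmt.trafficmanager.models import (
    DnsConfig,
    Endpoint,
    MonitorConfig,
    Profile,
)

tm = TrafficManagerManagementClient(DefaultAzureCredential(), "<subscription-id>")

profile = tm.profiles.create_or_update(
    resource_group_name="rg-global",
    profile_name="tm-apim-global",
    parameters=Profile(
        location="global",
        traffic_routing_method="Performance",  # route by client latency
        dns_config=DnsConfig(relative_name="contoso-apis", ttl=30),
        monitor_config=MonitorConfig(protocol="HTTPS", port=443, path="/"),
        endpoints=[
            Endpoint(
                name="eastus-appgw",
                type="Microsoft.Network/trafficManagerProfiles/externalEndpoints",
                target="appgw-eastus.contoso.com",      # regional entry point
                endpoint_location="eastus",
            ),
            Endpoint(
                name="westeurope-appgw",
                type="Microsoft.Network/trafficManagerProfiles/externalEndpoints",
                target="appgw-westeurope.contoso.com",
                endpoint_location="westeurope",
            ),
        ],
    ),
)
print(profile.dns_config.fqdn)  # e.g. contoso-apis.trafficmanager.net
```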

 

Furthermore, the deployment can incorporate Azure Firewall or Network Virtual Appliances as needed, mirroring the approach outlined in the Medium Size Hub and Spoke Model for each region.

 

For comprehensive insights into selecting the most appropriate load-balancing method, we encourage you to consult the global load balancing documentation. Specific details on load-balancing options can be found in the Azure Architecture Center: Load-balancing options - Azure Architecture Center | Microsoft Learn.

 

For in-depth guidance on setting up a Multi Region Hub and Spoke topology, we recommend reviewing the tutorial titled Use Azure Firewall to route a multi hub and spoke topology | Microsoft Learn.

 

 

 

It's important to note that this design is subject to certain limitations imposed by Application Gateway. If the constraints on aspects such as the number of rules, rewrite rules, and other features present challenges, we suggest exploring Scenario 2 outlined below. Detailed insights regarding Application Gateway's limits can be found in Azure subscription limits and quotas - Azure Resource Manager | Microsoft Learn.

 

Scenario 2: Multi Region Hub and Spoke with Application Gateway and API Management on Spokes.

 

This architectural configuration extends the groundwork laid in the Single Spoke Scenario. By employing a dedicated Application Gateway for each spoke, complemented by its own API Management, you can effectively circumvent limitations associated with listeners, rules, and other constraints typically associated with a single Application Gateway per region. This approach is particularly advantageous for organizations that maintain separate, segmented environments such as Development, Staging, and Production, each with its distinct spoke Virtual Networks as outlined in the Landing Zone documentation.

 

Furthermore, this setup capitalizes on the capabilities of a Multi-Region Hub and Spoke architecture, harmonizing seamlessly with Global Load Balancer Scenarios.

 

 

Conclusion

 

In summary, this article analyzed various networking design choices and critical factors to consider when implementing Azure API Management (APIM) in internal mode, tailored to the unique scale and requirements of your organization.

Your insights and experiences with these design approaches are invaluable, so we invite you to share your thoughts or anecdotes by leaving a comment below. Your contributions will further enrich our collective understanding of effective Azure APIM implementations.

Updated Sep 28, 2023
Version 3.0
  • andyc383838
    Copper Contributor

    Thanks for the write up, and very useful to see this in such concise detail. 

     

    One question I do have, and I must admit it's confusing me a bit (which probably isn't hard, to be honest): much of the recent guidance from Microsoft regarding App Gateway is to place it in the spokes and not in the hub, effectively deploying a dedicated App Gateway 'per app' in the spoke. Now, this guidance seems odd to me, as I've generally placed AppGWs in the hub, where they are supported by a net-sec team (that understands networking, net-sec, WAFs, etc.).

     

    Does this contradict that guidance or is there something I'm missing?

     

    Thanks

    Andy

  • andyc383838 Hello Andy, thanks for your valuable comment. To be fair and honest, there's been a lot of debate on that. My opinionated take is to put it on the hub, as it's easier to manage routing according to this article, which is one of my favorites (Application Gateway and Firewall). I would only create a separate App GW per spoke if you hit App GW-specific limits, like counts of listeners, routing rules, and so on. If you don't, I would just put your back-end pools on the spokes and manage the App GW in the hub.

  • Mohsenhs
    Copper Contributor

    Hi Mauricio_Rojas

     

    Thank you for writing this nice explanation.

     

    I have researched Microsoft's reference architecture, and based on the documentation, the Hub is intended for shared services such as VPN, Firewall, and Active Directory Domain Services. In contrast, Spokes are the appropriate place to host both production and non-production resources.

     

    I assume the main concern you highlighted in your explanation is the potential cost associated with this solution. However, according to Microsoft’s documentation, when both non-production and production environments are required—and considering that APIM and Service Bus are part of the integration services—these services are better suited for deployment in the Spokes. For resources deployed within the same region, the VNET peering cost is approximately $30 per TB of data (though this varies by region).

     

    I’ve also posted my argument as a question in the Community Hub for further discussion:

    Spoke Hub Model - Integration services - Microsoft Q&A

     

    Could you kindly confirm if my line of thinking is correct?

     

    Thanks,

    Mohsen

     

     

     

     

  • andyc383838
    Copper Contributor

    Hi Mohsenhs, I wouldn't get too hung up on whether a service is classed as an integration service or shared service. I see these as just useful 'tags' to group Azure services under a common heading. Ultimately, put them where works best for you. Logic Apps and APIM are unlikely to share similar network design requirements, yet they are both classed as 'integration services'.

     

    Ultimately anything is valid, just deploy in a way that works for you. There are actually several CAF reference architectures that show the AppGW in the hub (cloud adoption framework hub-and-spoke network topology) and as above, APIM in the hub makes an awful lot of sense for many organisations.

     

    Ultimately, my main concern wasn’t cost, although that is valid as there are other costs in addition to data transfer (each deployed instance has a cost). But my main thought was around management and admin overhead. If your net-sec team are responsible for all WAFs and all Ingress/egress then deploying in a hub does fit well with this centralised operating model.

     

    If the operating model is decentralised, and each app team is responsible for their own networking and security, then spoke deployments could work too.

     

    Other services like APIM are more complicated. First, it's expensive. Second, APIM can very easily be shared across an entire business (not always, of course) because it scales very well and because of the costs. Plus, the benefits it provides when developers make use of the Developer Portal mean it works well as a shared service. APIM likely has much wider use beyond a single project or application, in which case, would a spoke be the best place for it? While it's convenient for Microsoft to refer to the hub as being for 'shared services', the definition of what a shared service actually is should really be up to each organisation to determine.

     

    So bottom line, any deployment is likely fine. Just gather the requirements, document any constraints and limitations and justify the decision.

     

  • Mohsenhs
    Copper Contributor

    Hi andyc383838 

    Thanks for your response. I am not sure if "put them where works best for you" is based on the Microsoft official recommendation. Hub deployment is tied to a specific Azure subscription, and therefore managing different environments (e.g. Dev, Test, Prod) can be challenging and may not follow the best practices Microsoft recommends. Moreover, integration services have different components that need to communicate with each other regularly, so network connectivity, security, and cost need to be considered as well. With regard to shared services, in different reference architectures, ExpressRoute, firewall, DNS, and Active Directory Domain Services are considered shared services and not APIM (I have included the references in the question I posted in the community).

    With regard to App GW, I am quoting the below paragraph from MS documentation:

    "The Application Gateway shown in the diagram above can live in spoke with the application it's serving for better management and scale. However, corporate policy might dictate you place the Application Gateway in the hub for centralized management and segregation of duty ."  See this.

    To your point, Application Gateway (App GW) is considered a shared service and can be placed either in the Spoke or Hub, depending on your organization's governance model. However, APIM and Service Bus are different cases as they fall under integration services (if an organization follows a segregation of duties model). Given the typical requirements for multiple environments, the Spoke is generally better suited for these integration services (at least based on my current understanding of MSFT documentation and discussions, i.e. may change in the future).

     



  • andyc383838
    Copper Contributor

    Hi Mohsenhs, you seem to be getting a little insulting now. I was only trying to help by offering my view. As you did post asking for help. You don't seem to want to accept that Microsoft provide these services to be flexible so you can deploy them in a way that works for you. I am not saying "You MUST do it this way".

     

    So I will back out and leave you to it. 

     

    All the best.