The following example network topologies follow VNet integration practices for Azure API Management. Therefore, they revolve around API Management in internal mode (Azure API Management with an Azure virtual network | Microsoft Learn). API Management in internal mode requires the Premium SKU or the Developer SKU (not recommended for production). Please estimate API Management costs appropriately using the Azure Pricing calculator.
Considerations for the Azure API Management Planes
Azure API Management comprises three essential components: an API gateway (data plane), a management plane, and a developer portal (user plane). These components are hosted on Azure's infrastructure and are fully managed by default.
The data plane handles end-user traffic, which is generated when users access APIs or other backend services for tasks such as content retrieval, querying, and data uploading. It facilitates consistent configuration of routing, security, throttling, caching, and observability. Notably, Azure API Management supports multi-region deployment, allowing regional API gateways to be added in any supported Azure region. For more information on multi-region deployment, please refer to Deploy Azure API Management instance to multiple Azure regions - Azure API Management | Microsoft Learn.
The management plane, on the other hand, involves a different type of traffic. It pertains to user-defined inputs for provisioning and configuring the API Management instance's service settings. These inputs can be provided through the Azure portal, PowerShell, Azure Cloud Shell, Visual Studio, or a variety of other tools such as Ansible and Terraform. For more information on the management plane, please refer to Azure API Management - Overview and key concepts | Microsoft Learn.
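To make the distinction concrete, here is a minimal Azure CLI sketch of management-plane operations: provisioning an API Management instance and then registering an API on it. The resource names, resource group, region, and publisher details are placeholder assumptions, and the internal-mode virtual network injection discussed in this article is configured separately (for example, through the portal or an ARM/Bicep template), so treat this purely as an illustration of the kind of calls the management plane receives.

```
# Management-plane operation 1: provision a Premium-tier API Management instance
# (deployment can take 30+ minutes). All names and publisher details are placeholders.
az apim create \
  --name contoso-apim \
  --resource-group rg-apim-demo \
  --location eastus \
  --publisher-name "Contoso" \
  --publisher-email "apim-admins@contoso.com" \
  --sku-name Premium \
  --sku-capacity 1

# Management-plane operation 2: a typical configuration change, registering a backend API
# on the gateway. The backend URL is a placeholder.
az apim api create \
  --resource-group rg-apim-demo \
  --service-name contoso-apim \
  --api-id orders \
  --path orders \
  --display-name "Orders API" \
  --service-url https://orders.internal.contoso.com
```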
The user plane is focused on the open-source developer portal. This portal serves as a platform for users to discover APIs, onboard themselves to utilize these APIs, and learn about their integration into applications. The user plane interacts with the management plane to access detailed API information, create accounts, and subscribe to APIs, among other functions.
Design Considerations for Small-Size Deployments
Single VNet Deployment
The following example topology is intended for implementations where routing configuration requirements are simple and can be handled automatically by Azure Virtual Network (VNet) routing. In this single VNet topology, PaaS services in different subnets have connectivity to each other, to the Application Gateway subnet, and to the API Management subnet where these resources are allocated. All of these components can also communicate with their respective private endpoints as needed.
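As an illustration, the following Azure CLI sketch lays out such a single VNet with dedicated subnets for the Application Gateway, the API Management instance, and the private endpoints of the backend PaaS services. All names and address ranges are placeholder assumptions; adapt them to your own addressing plan.

```
# Single-VNet layout: one address space with dedicated subnets for Application Gateway,
# API Management, and the private endpoints of backend PaaS services.
az network vnet create \
  --name vnet-apim-single \
  --resource-group rg-apim-demo \
  --address-prefixes 10.10.0.0/16 \
  --subnet-name snet-appgw \
  --subnet-prefixes 10.10.1.0/24

az network vnet subnet create \
  --resource-group rg-apim-demo \
  --vnet-name vnet-apim-single \
  --name snet-apim \
  --address-prefixes 10.10.2.0/24

az network vnet subnet create \
  --resource-group rg-apim-demo \
  --vnet-name vnet-apim-single \
  --name snet-private-endpoints \
  --address-prefixes 10.10.3.0/24
```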
DNS resolution is provided by private DNS zones with a single virtual network link attached to this virtual network, as referenced in the Private Endpoint DNS article.
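A minimal sketch of that DNS setup, assuming Blob Storage is one of the services exposed through a private endpoint (the zone names depend on the services you actually use), looks like this:

```
# Private DNS zone for a PaaS private endpoint (Blob Storage used as an example),
# linked to the single VNet so its records resolve from inside the network.
az network private-dns zone create \
  --resource-group rg-apim-demo \
  --name privatelink.blob.core.windows.net

az network private-dns link vnet create \
  --resource-group rg-apim-demo \
  --zone-name privatelink.blob.core.windows.net \
  --name link-vnet-apim-single \
  --virtual-network vnet-apim-single \
  --registration-enabled false
```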
Internal API calls can flow east-west privately and securely, as none of these endpoints are exposed publicly.
Additional security, if required, can be implemented with Network Security Groups (NSGs) without the need to route through a firewall. Note: an NSG on the API Management subnet is mandatory in both internal and external mode. API calls to the internet can be accomplished by attaching a NAT gateway to the subnet that requires public access. One advantage of using a NAT gateway, as opposed to a public IP, is that it helps prevent Source NAT (SNAT) port exhaustion. For more information, see the NAT gateway documentation.
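The sketch below illustrates both points with the Azure CLI, using the placeholder names from the earlier example: an NSG attached to the API Management subnet with the documented inbound management-endpoint rule (port 3443 from the ApiManagement service tag), and a NAT gateway providing outbound internet access. The full NSG rule set you need depends on the API Management features you enable, so treat this as a starting point rather than a complete configuration.

```
# NSG for the API Management subnet (mandatory for VNet-injected instances).
az network nsg create \
  --resource-group rg-apim-demo \
  --name nsg-apim

# Required inbound rule for the API Management management endpoint (port 3443).
az network nsg rule create \
  --resource-group rg-apim-demo \
  --nsg-name nsg-apim \
  --name AllowApimManagement \
  --priority 100 \
  --direction Inbound \
  --protocol Tcp \
  --source-address-prefixes ApiManagement \
  --destination-address-prefixes VirtualNetwork \
  --destination-port-ranges 3443 \
  --access Allow

az network vnet subnet update \
  --resource-group rg-apim-demo \
  --vnet-name vnet-apim-single \
  --name snet-apim \
  --network-security-group nsg-apim

# NAT gateway for outbound internet calls from the subnet that requires public access,
# which helps avoid SNAT port exhaustion.
az network public-ip create \
  --resource-group rg-apim-demo \
  --name pip-natgw \
  --sku Standard \
  --allocation-method Static

az network nat gateway create \
  --resource-group rg-apim-demo \
  --name natgw-apim \
  --public-ip-addresses pip-natgw

az network vnet subnet update \
  --resource-group rg-apim-demo \
  --vnet-name vnet-apim-single \
  --name snet-apim \
  --nat-gateway natgw-apim
```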
Optional Hub Deployment
As depicted in the diagram, if traffic inspection is required for further security, along with other capabilities such as Intrusion Detection and Prevention (IDS/IDPS), we can simply add a hub VNet with an Azure Firewall or a Network Virtual Appliance (NVA). Furthermore, if hybrid connectivity to on-premises is needed, the optional hub VNet can contain a VNet gateway. The hub VNet is intentionally optional; adopt it based on your organization's ability to manage the routing complexity of a hub-and-spoke model and to implement firewall features.
This deployment can be scaled, should the organization grow, by simply adding additional spokes in different subscriptions or different resource groups.
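Scaling out in this way is mostly a matter of peering each new spoke with the hub. A hedged Azure CLI example, using placeholder VNet and resource group names, is shown below; when the hub and spokes live in different subscriptions, the same commands apply with the full resource IDs.

```
# Look up resource IDs so peering works across resource groups (or subscriptions).
SPOKE_ID=$(az network vnet show --resource-group rg-apim-demo --name vnet-apim-single --query id -o tsv)
HUB_ID=$(az network vnet show --resource-group rg-network-hub --name vnet-hub --query id -o tsv)

# Hub-to-spoke peering; repeat this pair of commands for each additional spoke.
az network vnet peering create \
  --resource-group rg-network-hub \
  --vnet-name vnet-hub \
  --name hub-to-spoke1 \
  --remote-vnet "$SPOKE_ID" \
  --allow-vnet-access \
  --allow-forwarded-traffic

# Spoke-to-hub peering (the reverse direction).
az network vnet peering create \
  --resource-group rg-apim-demo \
  --vnet-name vnet-apim-single \
  --name spoke1-to-hub \
  --remote-vnet "$HUB_ID" \
  --allow-vnet-access \
  --allow-forwarded-traffic
```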
DNS resolution can be achieved by adding the relevant virtual networks as virtual network links on the private DNS zones.
If you require DNS integration with on-premises environments or need to integrate this solution with custom DNS servers, please follow the guidance for Private DNS integration at scale.
Design Considerations for Medium-Size Deployments
If your organization is adopting the Enterprise Landing Zone model or comprises multiple departments across various subscriptions, each requiring a centralized shared virtual network, the hub-and-spoke model (https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/hybrid-networking/hub-spoke?tabs=cli) is recommended. For cost efficiency, consider implementing a shared, centralized API Management service that can initiate API calls to resources located in the spokes, as illustrated in the diagram. This approach is often more economical than having a separate API Management instance for each individual spoke.
Within this model, Azure Firewall or Network Virtual Appliances can be employed to inspect API calls traversing east-west between hub resources, such as the Application Gateway, and the endpoints residing in the spokes. Please be aware of the routing complexity, as the use of User Defined Routes (UDRs) may be required.
Note: This setup requires detailed knowledge of Azure network routing. Please ensure familiarity with how Azure virtual network routing works and with how User Defined Routes need to be implemented to accomplish the desired routing state.
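As a rough illustration of what such UDRs look like, the following Azure CLI sketch sends cross-spoke (east-west) and internet-bound (north-south) traffic from a spoke subnet to an Azure Firewall private IP in the hub. The firewall IP, address prefixes, and names are assumptions; also note that forcing a 0.0.0.0/0 route onto the API Management subnet has additional requirements covered in the API Management virtual network documentation.

```
# Route table for a spoke subnet whose traffic should be inspected by the hub firewall.
az network route-table create \
  --resource-group rg-apim-demo \
  --name rt-spoke-apim

# East-west: traffic destined for another spoke's address space goes via the firewall.
az network route-table route create \
  --resource-group rg-apim-demo \
  --route-table-name rt-spoke-apim \
  --name to-spoke2-via-firewall \
  --address-prefix 10.20.0.0/16 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.1.4

# North-south: default route to the firewall so it can inspect and SNAT outbound calls.
az network route-table route create \
  --resource-group rg-apim-demo \
  --route-table-name rt-spoke-apim \
  --name default-via-firewall \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.1.4

# Associate the route table with the subnet to be inspected.
az network vnet subnet update \
  --resource-group rg-apim-demo \
  --vnet-name vnet-apim-single \
  --name snet-apim \
  --route-table rt-spoke-apim
```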
To explore different setups for traffic inspection, please review the architecture scenario examples for Application Gateway and Firewall.
Outbound API calls destined for the internet (north-south traffic) can be accomplished by routing traffic to the Azure Firewall or NVA and utilizing its Network Address Translation (NAT) capabilities.
Additionally, a NAT gateway can still be used if Azure Firewall or an NVA is not desired.
Incoming HTTP/HTTPS traffic from on-premises can also target the private IP of the Application Gateway that fronts the API Management instance.
DNS
Further DNS infrastructure may be needed depending on your organization's requirements for name resolution. If your organization requires more than adding VNets to the virtual network links of the DNS zones, such as using custom DNS servers or Azure DNS Private Resolver to resolve names across your implementation, please consider the following DNS scenarios, which are covered in depth.
Design Considerations for Large-Size Deployments
For organizations seeking to deploy large-scale solutions, a Multi Region Hub and Spoke architecture, coupled with Azure Global Load Balancers, such as Azure Traffic Manager or Front Door, offers an effective approach. This document outlines two distinct scenarios to aid in your deployment strategy.
Scenario 1: Multi Region Hub and Spoke with Application Gateway and API Management
In this model, each geographical region is equipped with its own dedicated Application Gateway and API Management instance. The architecture capitalizes on Global Load Balancers, strategically distributing incoming traffic based on the geographical location of the client. This ensures optimal performance and responsiveness.
Furthermore, the deployment can incorporate Azure Firewall or Network Virtual Appliances as needed, mirroring the approach outlined in the Medium Size Hub and Spoke Model for each region.
For comprehensive insights into selecting the most appropriate load-balancing method, we encourage you to consult the global load-balancing documentation. Specific details on load-balancing options can be found in Load-balancing options - Azure Architecture Center | Microsoft Learn.
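For example, a performance-based Azure Traffic Manager profile can direct clients to the closest regional entry point, such as the public frontend of each region's Application Gateway. The following Azure CLI sketch uses placeholder DNS names and endpoints, with the API Management built-in health endpoint as the probe path; adjust these to your own environment and consult the documentation above when choosing between Traffic Manager and Front Door.

```
# Performance routing: clients are sent to the regional endpoint with the lowest latency.
az network traffic-manager profile create \
  --resource-group rg-apim-global \
  --name tm-apim-global \
  --routing-method Performance \
  --unique-dns-name contoso-apim-global \
  --ttl 30 \
  --protocol HTTPS \
  --port 443 \
  --path "/status-0123456789abcdef"

# One endpoint per region, each pointing at that region's public entry point
# (for example, the Application Gateway frontend that fronts the regional API Management).
az network traffic-manager endpoint create \
  --resource-group rg-apim-global \
  --profile-name tm-apim-global \
  --name eastus-entry \
  --type externalEndpoints \
  --target api-eastus.contoso.com \
  --endpoint-location eastus

az network traffic-manager endpoint create \
  --resource-group rg-apim-global \
  --profile-name tm-apim-global \
  --name westeurope-entry \
  --type externalEndpoints \
  --target api-westeurope.contoso.com \
  --endpoint-location westeurope
```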
For in-depth guidance on setting up a Multi Region Hub and Spoke topology, we recommend reviewing the tutorial titled Use Azure Firewall to route a multi hub and spoke topology | Microsoft Learn.
It's important to note that this design is subject to certain limitations imposed by Application Gateway. If the constraints on aspects such as the number of rules, rewrite rules, and other features present challenges, we suggest exploring Scenario 2 outlined below. Detailed insights regarding Application Gateway's limitations can be found in Azure subscription limits and quotas - Azure Resource Manager | Microsoft Learn.
Scenario 2: Multi Region Hub and Spoke with Application Gateway and API Management on Spokes
This architectural configuration extends the groundwork laid in the single-spoke scenario. By employing a dedicated Application Gateway for each spoke, complemented by its own API Management instance, you can effectively circumvent the limits on listeners, rules, and other constraints associated with a single Application Gateway per region. This approach is particularly advantageous for organizations that maintain separate, segmented environments such as Development, Staging, and Production, each with its own distinct spoke virtual networks as outlined in the Landing Zone documentation.
Furthermore, this setup capitalizes on the capabilities of a Multi-Region Hub and Spoke architecture, harmonizing seamlessly with Global Load Balancer Scenarios.
Conclusion
In summary, this article provides an in-depth analysis of various networking design choices and critical factors to consider when implementing Azure API Management (APIM) in internal mode, all tailored to the unique scale and requirements of your organization.
Your insights and experiences with these design approaches are invaluable, so we invite you to share your thoughts or anecdotes by leaving a comment below. Your contributions will further enrich our collective understanding of effective Azure APIM implementations.