Advanced scenarios with private endpoints to Azure SQL Managed Instance

In the previous installment of this mini-series, we covered basic scenarios involving private endpoints. If you aren't familiar with private endpoints and Private Link in general, it might be a good idea to quickly review them to get a feel for how they apply when Azure SQL Managed Instance is in the mix.

 

In this article, we’ll dive into more involved scenarios that build on those from last week:

5. Hub and spoke topology

6. Partner or ISV giving access to their customers

7. Two SQLs talking to each other: linked server, transactional replication

8. Failover group listener using private endpoints

 

Ready? Let’s go!

 

5. Hub and spoke topology

 

Hub and spoke is a common network topology that connects several “spoke” networks with a central “hub” network so they can all access the hub's resources, but not reach across to other spokes unless so configured. In our case, the “hub” resource is the network with the Azure SQL Managed Instance, while “spokes” are individual virtual networks with applications.

 

Normally, we would implement hub and spoke with Azure network peering so that each application has a network route to Azure SQL Managed Instance but can't see other applications. If, however, all we need is to ensure that the applications can connect to the Managed Instance and not to any other resources in that network, we can consider replicating the hub-and-spoke topology with private endpoints and not peering the networks themselves:

 

[Diagram: hub-and-spoke topology in which multiple virtual networks with applications and tools connect to the central virtual network with SQL Managed Instance via private endpoints.]

Doing so spares us the trouble of planning out IP address allocation across the hub and spoke address spaces. It also makes the entire setup more secure: private endpoints are unidirectional, while peering would present a potential intruder with a broader swath of network real estate to scan and exploit.
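To make this concrete, here's a minimal sketch of creating one such spoke-side private endpoint with the Azure SDK for Python. All resource names, IDs, and the region below are hypothetical placeholders; "managedInstance" is the sub-resource (group ID) used for SQL Managed Instance private endpoints:

```python
# pip install azure-identity azure-mgmt-network
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Hypothetical placeholders: substitute your own subscription, resource
# group, virtual network, and managed instance resource ID.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
SPOKE_RG = "rg-spoke-app1"
SPOKE_SUBNET_ID = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{SPOKE_RG}"
    "/providers/Microsoft.Network/virtualNetworks/vnet-spoke-app1"
    "/subnets/endpoints"
)
MI_ID = ("/subscriptions/.../resourceGroups/rg-hub"
         "/providers/Microsoft.Sql/managedInstances/mysqlmi01")

network = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Create a private endpoint to the hub's managed instance inside the
# spoke's own subnet; no peering between hub and spoke is required.
endpoint = network.private_endpoints.begin_create_or_update(
    SPOKE_RG,
    "pe-mysqlmi01",
    {
        "location": "westeurope",
        "subnet": {"id": SPOKE_SUBNET_ID},
        "private_link_service_connections": [{
            "name": "to-mysqlmi01",
            "private_link_service_id": MI_ID,
            # "managedInstance" is the sub-resource for SQL MI endpoints.
            "group_ids": ["managedInstance"],
        }],
    },
).result()
print(endpoint.name, endpoint.provisioning_state)
```

Repeating this once per spoke gives each application its own narrow path to the instance, with nothing else exposed.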

 

6. Partner or ISV giving access to their customers

 

In this scenario, we’re a partner or an independent software vendor (ISV) managing multiple Azure SQL Managed Instances on behalf of our customers. Some of our customers are Azure-native, while others aren’t. We want to provide our clientele with a way to connect to their Azure SQL Managed Instances without unduly exposing our IP address space or their connection endpoints to other customers.

 

Looking at the connectivity requirements alone, this scenario combines the first two scenarios (within-Azure and on-premises connectivity), but it also adds a wrinkle: whatever we implement must be manageable at scale by us, a Microsoft partner or an ISV.

 

In effect, we need to achieve two goals:

 

  1. Separation of security zones: each customer should have connectivity only to their Azure SQL Managed Instance(s) and not to any that belong to others.
  2. Separation of duties: a clearly defined ownership and management boundary should exist to allow for downstream configuration on the customers' end and operations and automation on ours.

As a partner or ISV, we can achieve both by dedicating a single virtual network to each customer. We will deploy this virtual network, partition it into one or more subnets, and then deploy private endpoints to their Azure SQL Managed Instance(s).
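As a rough sketch of what that per-customer deployment could look like with the Azure SDK for Python (the names, region, and address ranges are all made up, and a real deployment would split the address space into multiple subnets):

```python
# pip install azure-identity azure-mgmt-network
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
network = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

def deploy_customer_network(rg: str, customer: str, cidr: str, mi_id: str):
    """Deploy one dedicated virtual network per customer, with a subnet
    reserved for private endpoints to that customer's managed instance."""
    vnet = network.virtual_networks.begin_create_or_update(
        rg, f"vnet-{customer}",
        {
            "location": "westeurope",
            "address_space": {"address_prefixes": [cidr]},
            # Single subnet for brevity; real deployments would carve
            # out separate subnets per purpose.
            "subnets": [{"name": "endpoints", "address_prefix": cidr}],
        },
    ).result()

    network.private_endpoints.begin_create_or_update(
        rg, f"pe-{customer}-sqlmi",
        {
            "location": "westeurope",
            "subnet": {"id": vnet.subnets[0].id},
            "private_link_service_connections": [{
                "name": f"{customer}-to-sqlmi",
                "private_link_service_id": mi_id,
                "group_ids": ["managedInstance"],
            }],
        },
    ).result()

# One call per customer, driven from our inventory of instances.
deploy_customer_network(
    "rg-customers", "contoso", "10.10.1.0/24",
    "/subscriptions/.../providers/Microsoft.Sql/managedInstances/contoso-mi",
)
```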

 

[Diagram: zone separation and cross-tenant communication with private endpoints in a multi-tenant setup.]

If this customer already has an Azure presence, then as the last step we authorize our customer's network administrator to manage this freshly minted virtual network while we remain its owners. The network admin can then take care of downstream connectivity by, for example, configuring virtual network peering. (It may be a good idea to talk to the network administrator before we deploy the virtual network, so that they can help us select the right IP address range.)
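One way to hand over that delegated access is a role assignment scoped to the customer's virtual network only. A minimal sketch, assuming we already know the customer admin's Microsoft Entra object ID (resource names are hypothetical; the GUID is the built-in Network Contributor role definition):

```python
# pip install azure-identity azure-mgmt-authorization
import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
auth = AuthorizationManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Scope the assignment to the customer's vnet only, not to the whole
# resource group or subscription, so the management boundary stays tight.
vnet_scope = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/rg-customers"
    "/providers/Microsoft.Network/virtualNetworks/vnet-contoso"
)

# Built-in "Network Contributor" role definition.
role_definition_id = (
    f"/subscriptions/{SUBSCRIPTION_ID}/providers/Microsoft.Authorization"
    "/roleDefinitions/4d97b98b-1d4f-4787-a291-c67834d212e7"
)

auth.role_assignments.create(
    vnet_scope,
    str(uuid.uuid4()),  # role assignment names must be GUIDs
    {
        "role_definition_id": role_definition_id,
        "principal_id": "<customer-admin-object-id>",  # placeholder
    },
)
```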

 

For customers outside of Azure we have multiple options. For example, we can take it upon ourselves to further configure this virtual network so that the customer’s application can access it from elsewhere, as discussed in scenario 2. Or we can involve Microsoft Entra (formerly Azure Active Directory) for its identity governance and federation capabilities, authorizing customers’ administrators to manage the virtual network on their own. Our choice will, of course, largely depend on our service offering, customer capabilities, and contract details.

 

7. Two SQLs talking to each other: linked server, transactional replication

 

There are situations where you want to have two SQLs talking to each other. In certain cases, this can be implemented via private endpoints instead of connecting the two networks together. All such scenarios that work over SQL Server’s port 1433 are good candidates. Scenarios that require SQLs to talk on different port numbers, like 5022, cannot yet be implemented via private endpoints.

 

[Diagram: two Azure SQL Managed Instances communicating with each other via private endpoints deployed in their corresponding virtual networks.]

The layout is simple: two Azure SQL Managed Instances exist in separate virtual networks. We create a private endpoint to one in the other's virtual network and vice versa, keeping in mind that private endpoints must go in different subnets from the ones hosting the Azure SQL Managed Instances. Hey presto, the two Azure SQL Managed Instances can talk to each other, and scenarios like linked server and transactional replication are enabled.
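For illustration, once both private endpoints resolve locally, a linked server on the first instance can simply point at the second instance's host name. A hedged sketch that drives the standard sp_addlinkedserver and sp_addlinkedsrvlogin procedures over pyodbc (host names, logins, and passwords are placeholders):

```python
# pip install pyodbc  (also requires the Microsoft ODBC Driver 18)
import pyodbc

# Connect to the first managed instance. Inside its virtual network, this
# host name resolves to the private endpoint pointing at the instance.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=mysqlmi01.0123abcd.database.windows.net,1433;"
    "UID=sqladmin;PWD=<password>;Encrypt=yes",  # placeholder credentials
    autocommit=True,
)
cur = conn.cursor()

# Register the second instance as a linked server. From mysqlmi01's point
# of view, this name resolves to the private endpoint in its own vnet, so
# the traffic flows over port 1433 without any vnet peering.
cur.execute("""
    EXEC master.dbo.sp_addlinkedserver
        @server = N'MI02',
        @srvproduct = N'',
        @provider = N'MSOLEDBSQL',
        @datasrc = N'mysqlmi02.0123abcd.database.windows.net';
""")
cur.execute("""
    EXEC master.dbo.sp_addlinkedsrvlogin
        @rmtsrvname = N'MI02',
        @useself = N'FALSE',
        @rmtuser = N'sqladmin',
        @rmtpassword = N'<password>';
""")

# Query the remote instance through four-part naming.
for row in cur.execute("SELECT name FROM MI02.master.sys.databases;"):
    print(row.name)
```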

 

Matters become a little more involved when one of the SQLs is an on-premises SQL Server. There are still ways to establish this connectivity, but because private endpoints are an Azure-only feature, we’d need to connect the on-premises machine to Azure first. Refer to scenario 2 for some basic considerations on how this can be done. In all such scenarios, the usual gotchas hold: Azure SQL Managed Instance can only expose port 1433 (you can still configure it to talk to any custom port on your on-premises machine), and the on-premises SQL Server will be talking to Azure SQL Managed Instance in proxy mode.

 

8. Failover group listener using private endpoints

 

When our reliability and compliance requirements call for a geo-replicated database, auto-failover groups often present a suitable solution. One important feature of auto-failover groups is the failover group listener, which always directs incoming connection requests to the current primary or secondary to maintain database availability in the face of a regional failure.

 

While listeners are a useful feature, the default implementation calls for interconnectivity between the three networks—two for the managed instances engaged in an auto-failover group, and one hosting our application—which is typically achieved via network peering. Good network design suggests that we avoid network peering whenever a narrower connectivity method can be had instead. Fewer network-level integrations mean that our connectivity mesh is more secure, more manageable, and less likely to force IP address reallocation. (Of note, peering isn’t the only way, but alternatives also present us with similar problems).

 

With this, a reasonable question to ask is: can I establish direct app-to-service connectivity to my failover group listener with private endpoints?

 

The short answer is “yes”—but the “how” may surprise you!

 

At its simplest, a failover group listener is no more than a DNS alias. Take, for example, a read-write listener that looks like this:

 

myfog.0123abcd.database.windows.net → mysqlmi01.0123abcd.database.windows.net

 

myfog is the name of the failover group, and mysqlmi01 is the current primary replica, with both existing in the DNS zone 0123abcd.database.windows.net.

 

This listener’s domain name would be defined in a DNS record looking something like this:

 

zone 0123abcd.database.windows.net

Name     Type      TTL           Data
myfog    CNAME     30 seconds    mysqlmi01.0123abcd.database.windows.net

 

When a failover occurs, the listener is updated to alias the new primary replica (say, mysqlmi02). But the name of the listener (myfog) and its zone (0123abcd.database.windows.net) won't change, and so neither will the address that our applications use; it will always be myfog.0123abcd.database.windows.net. Moreover, this record is hosted in Azure DNS such that it is always globally resolvable, meaning that we can always tell which of the two instances it aliases at any given time.
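We can watch this aliasing happen from any machine with Internet access; for example, with a small dnspython sketch (the listener name is the made-up one from above):

```python
# pip install dnspython
import dns.resolver

LISTENER = "myfog.0123abcd.database.windows.net"  # made-up example name

# The listener record is a CNAME pointing at the current primary replica.
for record in dns.resolver.resolve(LISTENER, "CNAME"):
    print(f"{LISTENER} -> {record.target}")

# Before failover: myfog... -> mysqlmi01.0123abcd.database.windows.net.
# After failover:  myfog... -> mysqlmi02.0123abcd.database.windows.net.
# The 30-second TTL keeps stale answers short-lived.
```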

 

Let’s examine this in a diagram:

 

[Diagram: an application communicating with Azure DNS to resolve the current address of the primary replica in an auto-failover group. The app's virtual network is peered with the networks in which the Azure SQL Managed Instances reside.]

Our app resides in a virtual network that is peered with the two networks that host the auto-failover group instances (note the thick blue double-sided arrows). The latter two networks are peered as well, as required for failover groups to function. When our app tries to connect to the read-write listener, it first queries Azure DNS behind the scenes, which returns the IP address of the current primary, 10.1.0.1. Because our app's network is in a peering relationship with the address space containing this IP address, it can reach and connect to the current primary replica.

 

So how do we remove the two horizontal peering arrows? It's quite simple, actually: we just need to ensure that Azure DNS always returns an IP address inside our directly addressable IP address space (10.9.x.x) and that those locally reachable IP addresses connect us to the two managed instances. As we know, private endpoints do exactly that.

 

So, we should create two private endpoints to the two Azure SQL Managed Instances, placing them inside our app’s virtual network. In the process of doing so, we’ll also configure Azure DNS to return the IP addresses of those private endpoints when queried about the whereabouts of mysqlmi01 and mysqlmi02. Best of all, this behavior will be local to the app’s virtual network and thus won’t have any side effects on the rest of our network topology!
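One way to wire up that name resolution is to link the two private endpoints to a private DNS zone attached only to the app's virtual network. A sketch with the Azure SDK for Python, under the same assumptions as before (hypothetical names and IDs; the zone name follows the privatelink.<dns-zone>.database.windows.net convention used for SQL Managed Instance):

```python
# pip install azure-identity azure-mgmt-network azure-mgmt-privatedns
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.privatedns import PrivateDnsManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
APP_RG = "rg-app"
credential = DefaultAzureCredential()
network = NetworkManagementClient(credential, SUBSCRIPTION_ID)
privatedns = PrivateDnsManagementClient(credential, SUBSCRIPTION_ID)

# 1. A private DNS zone that shadows the instances' public DNS zone.
zone = privatedns.private_zones.begin_create_or_update(
    APP_RG, "privatelink.0123abcd.database.windows.net",
    {"location": "global"},
).result()

# 2. Link the zone to the app's virtual network only, so the override
#    stays local and doesn't affect the rest of the topology.
privatedns.virtual_network_links.begin_create_or_update(
    APP_RG, zone.name, "link-vnet-app",
    {
        "location": "global",
        "virtual_network": {"id": "<app-vnet-resource-id>"},  # placeholder
        "registration_enabled": False,
    },
).result()

# 3. Attach each private endpoint to the zone; Azure then maintains the
#    A records for mysqlmi01/mysqlmi02 pointing at the endpoints' IPs.
for pe_name in ("pe-mysqlmi01", "pe-mysqlmi02"):
    network.private_dns_zone_groups.begin_create_or_update(
        APP_RG, pe_name, "default",
        {"private_dns_zone_configs": [{
            "name": "sqlmi",
            "private_dns_zone_id": zone.id,
        }]},
    ).result()
```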

 

With those changes in place, our connectivity diagram looks like this:

 

[Diagram: the same functionality using private endpoints to the two Azure SQL Managed Instances. The auto-failover listener works as before, but the app's virtual network doesn't need to be peered with the other two.]

And there we have it: the failover group listener works without peering or gateways, just private endpoints. Note that this setup enables both the read-write and read-only listeners at once, since they use the same DNS aliasing mechanism.

 

Closing thoughts

 

That’s a wrap on the scenarios we wanted to share this time round! Are you thinking of one we didn’t give a proper treatment or even—gasp!—failed to mention at all? By all means, drop us a line below. We’re always happy to learn and share :)

 

Until next time, when we'll be talking about non-scenarios for private endpoints!

 
