Creating connections between compute and backend services on Azure is a common task that nearly every developer has to deal with while implementing a piece of code. There are many considerations to weigh while planning, deploying, and using these connections: which identity to use, what network configuration to implement, and where to store the connection configuration (for example, in Key Vault or in the App Configuration service), to name a few.
Following these practices is important to ensure that, as a developer, you not only implement best practices but also align with the Azure security baseline and the concept of Zero Trust.
This is where a service connection can help simplify some of these design and implementation choices for you.
A service connection is an abstraction of the link between two services, and Service Connector is an Azure extension resource provider designed to provide a simple way to create and manage connections between Azure services.
Fundamentally, a service connection:
- Configures network settings, authentication, and manages connection environment variables or properties for you.
- Validates connections and provides suggestions to fix faulty connections.
Source services and target services support multiple simultaneous service connections, which means that you can connect each resource to multiple resources. Service Connector manages connections in the properties of the source instance. Creating, getting, updating and deleting connections is done directly by opening the source service instance in the Azure portal, or by using the CLI commands of the source service. Connections can be made across subscriptions or tenants, meaning that source and target services can belong to different subscriptions or tenants.
Note: The identity best practice is to use a managed identity to connect two Azure services; however, a managed identity cannot span multiple Entra tenants. Therefore, when you create a service connection between resources in two separate tenants, prefer a service principal or a connection string fetched from a Key Vault when creating the service connection.
Let’s try creating a new service connection. We will use the portal experience to look at all the options, with an Azure web app as the source service.
- Under “Settings” for an Azure web app you will find an option called “Service Connector” as shown below.
- First, we will create a connection to a storage account. When you click on “Create”, a dialog box opens.
- We see a list of services and we will select “Storage – Blob” as the choice.
- When you have selected the service, a name is auto-populated using a convention. You can select one of the subscriptions you have access to, and then a storage account from that subscription. The only available client type for me was “.NET”, because my Azure App Service is .NET 6 based and the portal was nice enough to pick that up.
- The next screen is where we set up “Authentication”.
- As you can see, there are four options. A “System assigned managed identity” should be preferred: the identity is tied to the Azure web app, so when you delete the web app, the service connection is deleted along with it and the RBAC assignment on the target service is removed as well, allowing for cleaner authentication cleanup.
- You can select which RBAC role to grant. The drop-down lists roles based on the target service. Follow the principle of least privilege.
- An important point to note is the two environment variables that get created as part of the flow. You will use them in your code when connecting to the target storage account. You can edit them if you want to follow your own naming convention.
- You can also choose to store the configuration in the App Configuration service. If you select the check box, a drop-down appears with the list of available instances to choose from.
- Next will be “Networking”.
As you can see, the only option available is to configure the firewall rules on the target storage account. This is because I have not configured my web app with VNet integration. If you want to use a private endpoint on the target service (recommended for a production setup) or a service endpoint, you must configure VNet integration for the app first. I am glad the portal experience makes this intuitive; when you are working from the CLI, the commands will simply fail if VNet integration is not set up.
- When you hit “Review + Create”, here is what happens in the backend:
- If the web app does not have a system assigned managed identity, one is created and assigned the required RBAC role on the storage account.
- Network configurations are updated in the firewall settings of the storage account.
- The environment variables are created. Note that the variables and the service connector are a slot-level deployment for the web app, so if you have multiple slots you will need to create the service connection per slot. In that case, however, the steps above are not repeated; only the required environment variables are created in the slot.
- Service Connector creates connections between Azure services using an on-behalf-of token. The on-behalf-of (OBO) flow describes the scenario of a web API using an identity other than its own to call another web API. Referred to as delegation in OAuth, the intent is to pass a user's identity and permissions through the request chain. You can validate this by looking at the activity logs, where you will see all operations being executed under the identity of the person or Entra application (in the case of IaC). As a result, the person creating the service connection needs to have the correct permissions. Refer link for more details.
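Once the steps above complete, your application code consumes the injected environment variables. A minimal Python sketch, assuming the default variable name `AZURE_STORAGEBLOB_RESOURCEENDPOINT` that Service Connector generates for a Blob Storage connection with a managed identity (check the exact name in your connection's settings):

```python
import os

def get_blob_endpoint(var_name: str = "AZURE_STORAGEBLOB_RESOURCEENDPOINT") -> str:
    """Read the blob service endpoint injected by the service connection.

    The variable name is the default for a Blob Storage target with a
    managed identity; adjust it if you renamed it during connection creation.
    """
    endpoint = os.environ.get(var_name)
    if not endpoint:
        raise RuntimeError(
            f"{var_name} is not set - was the service connection "
            "created for this slot?"
        )
    return endpoint

# In the web app you would then hand the endpoint to the SDK, for example:
#   from azure.identity import DefaultAzureCredential
#   from azure.storage.blob import BlobServiceClient
#   client = BlobServiceClient(get_blob_endpoint(),
#                              credential=DefaultAzureCredential())
```

Because the RBAC role was granted to the web app's managed identity in the backend steps, no secret appears anywhere in this code.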
Next, we will look at another example where we connect to a SQL database. I will not repeat all the steps, but instead highlight the differences and an issue that exists at the time of writing this blog post (May 7, 2024).
When creating the service connection for a SQL database (and, as you will see, for many other targets), using any identity option other than a connection string means the portal asks you to run two commands in Cloud Shell, as shown in the image below.
When you click on “Create on Cloud Shell”, the portal launches the shell and tries to execute the commands. However, the second command fails with an error, as discussed in the linked GitHub issue.
The workaround mentioned is to run the same set of commands in the local CLI environment, which works wonderfully well. So, if you are stuck or get an error with the CLI commands, try the local CLI shell instead of Cloud Shell.
The sequence of operations in this case is:
- I selected a system assigned managed identity, so the flow checked for the presence of one.
- The operations are performed using an on-behalf-of token, and hence the user I was logged in with is added as an Entra administrator on the SQL server.
- After this, the managed identity is added as a user in the database.
- The firewall settings on the database server are updated.
- The environment variable is created on the web app which can then be used in the code.
- An interesting point I noted is that the operation using my identity caused a Defender alert to fire for a suspected login from an unusual location. This was because I work from India and the resources I deployed are in East US, so when the resource provider used my identity to make the changes, the location was flagged as unusual.
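After the sequence above completes, the app consumes the injected connection string. A sketch of pulling it apart to see what was configured; the variable name `AZURE_SQL_CONNECTIONSTRING` and the fallback value are assumptions for illustration (check the exact name in your Service Connector blade):

```python
import os

def parse_sql_connection_string(raw: str) -> dict:
    """Split an ADO.NET-style connection string into key/value pairs."""
    pairs = (part.split("=", 1) for part in raw.split(";") if part.strip())
    return {key.strip(): value.strip() for key, value in pairs}

# AZURE_SQL_CONNECTIONSTRING is an assumed default name for a .NET client;
# the fallback below is purely illustrative.
raw = os.environ.get(
    "AZURE_SQL_CONNECTIONSTRING",
    "Data Source=myserver.database.windows.net;Initial Catalog=mydb;"
    "Authentication=ActiveDirectoryManagedIdentity",
)
settings = parse_sql_connection_string(raw)
print(settings)
```

Note the `Authentication` keyword: with a system assigned managed identity there is no password in the string, which is exactly the point of the managed-identity option.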
Another positive of Service Connector is its built-in support for availability zones for HA, and for DR to a paired region, irrespective of whether your chosen compute follows a cross-region DR strategy or not.
Service Connector handles business continuity and disaster recovery (BCDR) for its storage and compute. The platform strives to have as minimal an impact as possible in case of issues in storage or compute, in any region. The data layer design prioritizes availability over latency in the event of a disaster, meaning that if a region goes down, Service Connector will attempt to serve the end-user request from its paired region.
During a failover, Service Connector handles the DNS remapping to the available region. From the customer's view, all data and actions work as usual after failover. Service Connector changes its DNS in about one hour; a manual failover would take more time. As Service Connector is a resource provider built on top of other Azure services, the actual time depends on the failover time of the underlying services.
Refer to the official documentation on link. As with other services, it is highly recommended that you use availability zone support in regions where the capability is available.
The portal also has a nice “Network view” where you can see a diagram of how the service connector is connecting your compute to the backend service.
The portal also has a nice option to “Validate” the connection. As a best practice, always validate a service connection before using it. If a working service connection suddenly starts failing, validate it again: validation checks the prerequisites on both the compute and the target service, so any breaking changes are caught and you can go ahead and fix them.
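The portal's “Validate” checks the full set of prerequisites on both sides; a cheap complement inside the app itself is to fail fast at startup if the settings a service connection should have injected are missing. A hypothetical sketch (the variable name in the list is an assumption, taken from the storage example earlier):

```python
import os

# Settings the service connections for this app are expected to inject;
# the name below is assumed - use the names from your Service Connector blade.
REQUIRED_SETTINGS = [
    "AZURE_STORAGEBLOB_RESOURCEENDPOINT",
]

def missing_connection_settings(required=REQUIRED_SETTINGS) -> list:
    """Return the names of required settings that are absent or empty."""
    return [name for name in required if not os.environ.get(name)]

# At startup, surface a clear message instead of failing on the first request.
missing = missing_connection_settings()
if missing:
    print(f"Service connection settings missing: {missing}")
```

This only verifies that the app settings exist; it does not replace the portal or CLI validation, which also checks identity, RBAC, and networking on the target service.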
Service connections work with other compute options as well, such as Function Apps, AKS, Azure Container Apps, and Azure Spring Apps.
I experimented with function apps as well, and the experience is the same. If you are a big fan of event-driven programming, do note that service connections and triggers are two very different concepts with no overlap. However, one catch: be careful when creating a trigger and a service connection on the same instance of the target service, especially if you are working with “create” operations.
I inadvertently created a function app trigger on a storage account's blob-added event and, as part of reading the file and writing it back to another container using the service connection, I picked the same storage account and container. This created a cycle of events where the same file ended up invoking my function app in an infinite loop.
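The clean fix is to write to a different storage account or container than the one the trigger watches. If they must share an account, a small guard in the function can break the cycle by ignoring events for blobs the function itself produced. A sketch with hypothetical container names:

```python
# Assumed name of the container the function writes its output to.
OUTPUT_CONTAINER = "processed"

def should_process(blob_url: str) -> bool:
    """Return False for blob events originating from the output container.

    Blob URLs look like:
    https://<account>.blob.core.windows.net/<container>/<blob-name>
    """
    path = blob_url.split("core.windows.net/", 1)[-1]
    container = path.split("/", 1)[0]
    return container != OUTPUT_CONTAINER

print(should_process("https://acct.blob.core.windows.net/uploads/a.txt"))    # True
print(should_process("https://acct.blob.core.windows.net/processed/a.txt"))  # False
```

With event-grid-based triggers you can often achieve the same thing declaratively with a subject filter on the subscription, which avoids invoking the function at all for the output container.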
To delve deeper, refer to: https://learn.microsoft.com/en-us/azure/service-connector/concept-service-connector-internals