Azure SQL for Developers - Performance and Automation Updates
Published Oct 12 2022 12:06 PM

Build, test, and deploy scalable apps quickly with Azure SQL updates for developers. Check out platform enhancements for modern, cloud-based apps: 


  • Create apps locally with Azure SQL Database emulator 
  • Initiate serverless operations with SQL trigger in Azure Functions 
  • Build simple and secure API connections to data with Data API Builder, coming soon 


Bob Ward, Principal Architect on the SQL engineering team at Microsoft, joins Jeremy Chapman to share recent updates to the developer experience.


Create a local database copy.


Check out more SQL updates for developers, including the Azure SQL Database emulator running on Docker Desktop.


Respond fast to customers with event-driven workflows.


Start using the SQL stored procedure sp_invoke_external_rest_endpoint. See how to trigger Azure Functions from Azure SQL Database.


Save time coding.


Optimize speed and bandwidth while executing operations on your app. Build a better API experience with Data API Builder, coming soon.


Watch our video.



00:00 — Introduction 

01:41 — Highlights of solutions for developers 

03:13 — Built-in query intelligence 

05:56 — Azure SQL Database emulator 

07:44 — SQL trigger in Azure Functions 

10:44 — Data API Builder 

12:32 — Wrap up


Link References: 

Get the published code for assigning work orders based on geolocation at


Unfamiliar with Microsoft Mechanics? 

Microsoft Mechanics is Microsoft’s official video series for IT. You can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. 


To keep getting this insider knowledge, join us on social: 

Video Transcript:

- Coming up, we take a fresh look at Azure SQL, the new platform enhancements for modern cloud-based apps, which give you more flexibility to build, test, and deploy scalable apps quickly. Including how you can create apps locally with the new Azure SQL Database emulator. The new SQL trigger in Azure Functions that you can use to initiate serverless operations, and building simpler and secure API connections to your data using the upcoming Data API Builder. And to go deeper on all these updates, we’re joined once again by Bob Ward, from the SQL Engineering team at Microsoft. Welcome back.


- Happy to be back!


- And in person this time. So the last time you were on the show, we took a look at the SQL engine, wherever SQL runs, and all of its updates. We also showed a lot of integration and updates with Azure, for things like cloud backup and restore. Today though, we wanted to go deeper on the updates for the developer experience. You know, for those of us who might be building new cloud-native apps or moving app frontends or backends into the cloud.


- Yeah, there’s a ton of new and recent updates to share. Especially if you haven’t looked at Azure SQL Database for a while, which make it a really strong platform for your apps. There are really a number of things that set SQL apart from other cloud platforms, as you build and modernize your apps. Did you know, for example, SQL doesn’t even require a VM or server?


- I bet a lot of people wouldn’t.


- Its database engine’s the same, even if you’re running it in the Azure PaaS service, or in containers. Also, SQL isn’t just about tables anymore. It works with graph, JSON, XML, and spatial data types. Then for deployment, Azure SQL is fully integrated with GitHub Actions. So you can use the same app frontend deployment tools and processes to deploy your data backend running SQL.
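As a quick sketch of the multi-model support mentioned above, JSON output and spatial values can be produced directly in T-SQL. The table and column names here are illustrative, not from the show’s sample:

```sql
-- Hypothetical table; FOR JSON returns rows as a JSON array
-- instead of a tabular result set.
SELECT Id, Summary, Location
FROM dbo.WorkOrders
FOR JSON PATH;

-- Spatial type: distance in meters between two geography points.
DECLARE @seattle geography = geography::Point(47.6062, -122.3321, 4326);
DECLARE @redmond geography = geography::Point(47.6740, -122.1215, 4326);
SELECT @seattle.STDistance(@redmond) AS DistanceInMeters;
```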


- And something else to point out, is it’s designed for flexibility and scalability.


- For sure. You know, we’re really trying to solve a pain point for you, so you don’t have to worry so much as your app usage grows. You can manage compute resources and costs easily. For example, if you have massive apps that are extremely resource intensive, Azure SQL Database Hyperscale supports database sizes up to 100 terabytes, and you can scale up the number of virtual cores instantly. And you can scale out to create multiple secondary replicas. With serverless options, if you have smaller apps with inconsistent traffic patterns, Azure SQL Database will automatically scale compute in near real-time, on demand, as needed. Another problem we solve is over-provisioning when you have solutions that use multiple databases. In the past, you might have to over-provision each database for peak requirements. Now, instead of provisioning for peak utilization per database, you can pool those resources with a shared utilization threshold using the elastic pools option, to save cost. So even as utilization per database varies, as long as the databases collectively stay within the threshold, your costs are predictable. And those are just some of the highlights.


- And even though a lot of what you just showed has been available for some time now, this is probably going to be news to a lot of people, especially those of us who might be using open source options, where the obvious appeal there is about maybe lower cost or efficiency or flexibility for app development. But also on the production side of apps as well.


- Well, open source databases certainly have their appeal. But when it comes to the cloud, costs are a different factor, because ultimately, you’re still using compute and storage resources. So you still need to pay for the VM or the container your database runs in, or the PaaS service running in the cloud. And regardless of how you deploy and run SQL, it’s the same engine. So you get things like built-in query intelligence, which makes your queries faster with no code changes. It automatically figures out the best way to return the data you request. We want it to be the best option for your apps, so to show you how far we’ve come, here’s an example. I have a pretty standard analytic query that I’ve just run here. And if you look in the bottom right-hand corner, you’ll see it executes in about 17 seconds. I’ll jump into properties here, and you can see that around 23 million rows are being queried. Right above my query, you can see it was run at dbcompat 130, which is from SQL Server 2016. All I need to do is change it to dbcompat 160 with SQL Server 2022. So I’ll go ahead and highlight this step and run it. Below that is the exact same query I ran before. So I’ll highlight these lines and run them. And this time it takes around two seconds. So just changing dbcompat took my query time down from 17 seconds to two seconds. We can even do better. Now I’ll make one more change to turn this into a clustered columnstore index. I’ll highlight these five lines and execute it. Creating the columnstore index is a one-time operation that takes a moment, so I’ve sped it up just a little bit. Below that is our same query. As I highlight it, you’ll see I’ve added one line to clear the procedure cache, but the rest is identical. Let me run it one more time. Look at this, we’re down to one second, compared to 17 seconds at dbcompat 130 without the columnstore index. That’s almost a 95 percent improvement. 
So you can see how the performance just evolves without requiring you to make any code changes to take advantage of these new improvements.
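The steps in the demo can be sketched in T-SQL like this. The database, table, and index names are placeholders, not the actual demo script:

```sql
-- Raise the compatibility level to 160 (SQL Server 2022) so the
-- query benefits from the latest intelligent query processing.
ALTER DATABASE [MyAnalyticsDB] SET COMPATIBILITY_LEVEL = 160;

-- One-time step: build a clustered columnstore index on the large
-- fact table for compression and batch-mode execution.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactOrders
ON dbo.FactOrders;

-- Clear the procedure cache before re-measuring, as in the demo.
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
```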


- And this is really an incredible example for all the developers that are watching right now. Why don’t we take a look at some of the recent updates by showing them.


- Let’s do that. I think it all starts with the local dev test experience. We’ve added things like simpler, more secure access to your live data, along with new ways to trigger workflow automation as operations are performed against the database. And more flexibility over the data types you use. Let’s take a look here. Here’s the app we are updating to enter and automatically process work orders. Behind the scenes, it’s running multiple processes. There’s an online form to create work orders. Those get entered as new rows into an Azure SQL Database. As each new row is entered, it triggers an operation to map the location, which is then written back into the same row of the database. Then a few additional operations use Azure Functions to find the closest technician with the availability to do the job, assign them, and write their information to our database. Now in this case, I’m working in a different region from various members of my dev team, who are working on different OS platforms. They need to connect to the data backend. And normally for a cloud database, this wouldn’t be a problem. Or even if I ran servers or VMs in our data center. That said, we need more than one instance per person, so it might get a little bit expensive. And that’s where the Azure SQL Database emulator comes in, which gives you the option of creating a local, consistent copy. Here I’m starting with Visual Studio Code. You’ll see that I have my create table operation with all my columns. In the connections here, you can see my database, which is a connection to an existing Azure SQL DB. If I don’t already have my database schema as code, I can quickly create a project from the existing database. And to save time, I’ve already created the project from the database. In Database Projects, here is my new project called “Work Orders Database.” It’s just a right click to pull up the publish command to build a local copy of the database with the emulator. 
You’ll see this option called “Publish to new Azure SQL Database emulator.” The emulator is a free containerized instance of Azure SQL Database that you can run locally using Docker Desktop, as you see on my Mac. It also works on Windows and most common Linux distros, and even hosted environments like GitHub Codespaces. Now I’ll leave the standard port 1433. I’ll go in and enter my admin password and confirm. Now I can select which version. I’ll choose the lite one, because it’s a smaller image and covers the majority of use cases. And I’ll just keep the rest of my default settings. Once this deployment completes, the emulator, with a local copy of the DB, is available for my code. Looks like it’s ready. So now I’ll go over to the SQL Server tab, and there’s our emulator on localhost, ready to go. And if you’re using Azure Data Studio, you can start from there. And the great news is, it doesn’t even require an Azure subscription. And of course you can connect to it as you write your code.


- Okay, so now you’ve got your workflow ready. You’ve kind of got all the pieces in terms of what you want your app to do, and the local components to build and test the updates for your app. So what’s next?


- Well, I’ve got a plan. I’ve got a database, but I need to write some code. Let’s start by digging into our data connections and workflow to update what our app can do. In the past, we’d use a combination of real-time operations and daily batch jobs. We want everything now to be event-driven, so that we can respond to customers faster. Let’s see here. There’s a brand new stored procedure available called “sp_invoke_external_rest_endpoint.” Once our form inserts a row into our database, it triggers a REST API, which in turn uses an Azure Function, with our payload carrying the address information needed to map the location. This stored procedure allows us to call out to an Azure Function endpoint each time the work order form is submitted. Another interesting thing about this: we’re using JSON as our content type. The SQL engine can both validate JSON documents and convert SQL data into JSON, like we’re doing here to post data to Azure Functions in JSON format. All right, and the great thing with Azure Functions that I love is that they’re serverless. That means you don’t pay for any idle compute. It’s all on demand, and you only pay for what you use.
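A minimal sketch of the pattern Bob describes, a trigger that calls sp_invoke_external_rest_endpoint with the new rows as a JSON payload. The trigger, table, columns, and function URL here are illustrative, not the published sample:

```sql
CREATE TRIGGER dbo.trgWorkOrderInserted
ON dbo.WorkOrders
AFTER INSERT
AS
BEGIN
    -- Convert the newly inserted rows into a JSON payload.
    DECLARE @payload nvarchar(max) =
        (SELECT Id, StreetAddress, City FROM inserted FOR JSON PATH);
    DECLARE @response nvarchar(max);

    -- Post the payload to a (hypothetical) Azure Function endpoint
    -- that maps the address to coordinates.
    EXEC sp_invoke_external_rest_endpoint
        @url = N'https://contoso-geocode.azurewebsites.net/api/MapLocation',
        @method = N'POST',
        @payload = @payload,
        @response = @response OUTPUT;
END;
```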


- That’s right. In our case, we’re going to use it to automate more of our workflows. Our new workflow invokes Azure Maps, a mapping service, to calculate the longitude and latitude of each location. I’ll dig into our orchestration code passing and receiving data from Azure Functions. We’re using Azure SQL bindings for Azure Functions here to write data to our database, with the latitude and longitude coordinates for each work order. Again, we’re going to use those coordinates to assign the closest available technician. Let’s test this out and see what’s written back to our table. So I’m back in Azure Data Studio, and I’m going to simulate what filling in the form looks like for three work orders. Let me run it. And if I scroll down now, you’ll see our three rows with everything populated except for the latitude and longitude columns. Remember, as those rows were created, the SQL trigger fired and called our new stored procedure to invoke the function asynchronously, which posts everything out to Azure Maps to calculate those coordinates and write them back into our database. So I’ll run a simple select query against our work orders object to see those changes. And as I scroll over to the right, you’ll see now our latitude and longitude columns have the values added.
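The test run above can be sketched like this, assuming an illustrative schema rather than the demo’s actual one:

```sql
-- Simulate three submissions of the work order form.
INSERT INTO dbo.WorkOrders (Summary, StreetAddress, City)
VALUES (N'Leaking faucet', N'123 Main St',  N'Seattle'),
       (N'Broken window',  N'456 Pine Ave', N'Redmond'),
       (N'Faulty wiring',  N'789 Oak Blvd', N'Bellevue');

-- After the trigger fires and the Azure Function writes back,
-- the coordinate columns are populated.
SELECT Id, Summary, City, Latitude, Longitude
FROM dbo.WorkOrders;
```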


- Okay, so now the “sp_invoke_external_rest_endpoint” stored procedure is calling your API, which is then going out to Azure Maps to calculate that geolocation. But what about the technician? ’Cause they’ve normally maybe got a mobile phone or a mobile app, and they’ve got to use that to know where to go for those work orders. So how would we build maybe a more modern API for that mobile app or that modern app?


- Well, normally you have to write a bunch of plumbing code to wire the requests up from that mobile app to the database to get and insert data. In fact, I’m going to pull up some code that will show you what you’d normally have to do. You can see there are more than a dozen lines of code here needed to connect and perform operations against our database. We’ve got our class for our work order with fields mapped to the table. Here’s our get function and connection string, and even our logic, with our select statements to go get the data. And then there’s the code to fetch back our coordinates from the mapping service. I could use all of this to build out a REST API, but I’m going to show you how this gets better with the new Data API Builder, which will be coming soon. And now you’re going to build an equivalent or even better API experience. In addition to get, you can use it for commands to update, create, or delete. So imagine how much coding it’s going to save you. Let me show you how to configure one. I’m back in VS Code. I’ll initialize the Data API Builder. Now I’ll add my work order API setting with the desired permissions. You can see everything is done here through a JSON config file. Let me start the runtime engine, which uses that config file. And it looks like it’s working. Switching over to a tool called Insomnia, a great REST client, you’ll see these entries under Data API Builder on the left. I’ll send a request to get a single work order, and it immediately returns our coordinate data. This time I’ll send a request to get multiple work orders. I also have the option to customize the result set. So I’ll remove this select parameter, and it returns everything I’m looking for. Then when I add that parameter back, you’ll see it’s returning just the ID, summary, and location. So this small change in the get string is effectively changing my query. 
Imagine how complicated this would be if I needed to manage all those query combinations in my code. And we just saw REST APIs. We also support GraphQL. Here, I’m just going to use a few lines of code to filter locations on Seattle, and I’ll go ahead and send it. And you’ll see everything returned is just for Seattle. And that’s just one example. I could apply similar logic to any other field. And this flexibility means you can optimize speed, bandwidth, and even things like battery usage while executing operations on your apps. And of course, it saves you hundreds of lines of code in the process, which makes it easier to roll out new features.
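To make the comparison concrete, a GET request that selects only a few fields maps to roughly this query against the database, returned as JSON. The column names are assumptions based on the demo, not the generated SQL itself:

```sql
-- Roughly what a Data API Builder GET with $select=id,summary,location
-- asks the database for on a single work order.
SELECT Id, Summary, Location
FROM dbo.WorkOrders
WHERE Id = 1
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER;
```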


- So this isn’t the SQL that we all grew up with. You know, it shows just how lightweight and flexible and powerful the SQL database and supporting tools have become. So for anyone who’s watching right now looking to get started, what do you recommend?


- It first starts with the emulator. You know, the emulator is in VS Code and Azure Data Studio today. So go try it out. Second, we’ve published the code for the app we demonstrated today, for assigning work orders based on geolocation. You can find it at So you can get started today with what we showed you. Third, we also have a learning path from Microsoft Learn, complete with samples for a serverless full stack app with Azure SQL, to help you onboard faster.


- Great stuff. Thanks so much for joining us today, Bob, and sharing all the updates for developers, and keep checking back to Microsoft Mechanics for all the latest tech. Subscribe to our channel if you haven’t already. And as always, thank you for watching.
