Hot Patching SQL Server Engine in Azure SQL Database
Published Sep 11 2019

Author: Hans Olav Norheim, hansnor@microsoft.com

Technical reviewers: Jim Cavalaris, Jimmy Cleary, Jim Radigan, Ashish Kishan Mansukhani, Alexey Yeltsov, Brian Gianforcaro, Jasraj Dange, Jon Jahren, Michael Nelson, Peter Carlin

 

In the world of cloud database services, few things are more important to customers than having uninterrupted access to their data.  In industries like online gaming and financial services that experience high transaction rates, even the smallest interruptions can potentially impact the end-user’s experience.  Azure SQL Database is evergreen, meaning that it always has the latest version of the SQL engine, but maintaining this evergreen state requires periodic updates to the service that can take the database offline for a second.  For this reason, our engineering team is continuously working on innovative technology improvements that reduce workload interruption.

 

Today’s post, in collaboration with the Visual C++ Compiler team, covers how we patch SQL Server Engine without impacting workload at all.

 

This is what hot patching looks like under the covers. Read on if you want all the details!

1. The Challenge

The SQL Engine we are running in Azure SQL Database is the very latest version of the same engine customers run on their own servers, except we manage and update it. To update SQL Server or the underlying infrastructure (i.e. Service Fabric or the operating system), we must stop the SQL Server process. If that process hosts the primary database replica, we move the replica to another machine (requiring a failover).

 

During failover, the database may be offline for a second and still meet our 99.995% SLA. However, failover of the primary replica impacts the workload because it aborts in-flight queries and transactions. We built features such as resumable index (re)build and accelerated database recovery to address these situations, but not all running operations are automatically resumable. It may be expensive to restart complex queries or transactions that were aborted due to an upgrade. So even though failovers are quick, we want to avoid them.

 

SQL Server and the overall Azure platform invest significant engineering effort into platform availability and reliability. We can, for example, update the Azure host without restarting Guest VMs. In SQL DB, we have multiple replicas of every database. During upgrade, we ensure that hot standbys are available to take over immediately.

 

We’ve worked closely with the broader Azure and Service Fabric teams to minimize the number of failovers. When we first decide to fail over a database for upgrade, we apply updates to all components in the stack at the same time: OS, Service Fabric and SQL Server. We have automatic scheduling that avoids deploying during an Azure region’s core business hours. Just before failover, we attempt to drain active transactions to avoid aborting them. We even utilize database workload patterns to perform failover at the best time for the workload.

 

Even with all that, we don’t get away from the fact that to update SQL Engine to a new version, we must restart the process and fail over the database’s primary replica at least once. Or do we?

 

2. Hot Patching and Results

Hot patching means modifying in-memory code in a running process without restarting the process. In our case, it gives us the ability to modify C++ code in the SQL Engine without restarting sqlservr.exe. Since we don’t restart, we don’t fail over the primary replica or interrupt the workload. We don’t even need to pause SQL Server activity while we patch. Hot patching goes unnoticed by the user workload (other than the patch payload, of course)!

 

Hot patching does not replace traditional, restarting upgrades – it complements them. Hot patching currently has limitations that make it unsuitable when there are a large number of changes, such as when a major new feature is introduced. But it is perfect for smaller, targeted changes. More than 80% of typical SQL bug fixes are hot patchable. Benefits of hot patching include:

 

  • Reduced workload disruption
    No restart means no database failover and no workload impact.
  • Faster bug fixes
    Previously, we weighed the urgency of a bug fix vs. impact on customer workloads from deploying it. Sometimes we would deem a bug fix not important enough for worldwide rollout because of the workload impact. With hot patching, we can now deploy bug fixes worldwide right away.
  • Features available sooner
    Even with the 500,000+ functional tests that we run several times per day and thorough testing of every new feature, sometimes we discover problems after a new feature has been made available to customers. In such cases, we may have to disable the feature or delay go-live until the next scheduled full upgrade. With hot patching, we can fix the problem and make the feature available sooner.

We did the first hot patch in production in 2018. Since then, we have hot patched millions of SQL Servers every month. Hot patching increases SQL Database ship velocity by 50% while at the same time improving availability.

 

3. Testing and Safety

Hot patching is riskier than normal updates because it introduces new kinds of pitfalls and race conditions. For example, both old and new code may execute concurrently during patching, and a patch that adds a field to a class would likely cause a crash. This makes tooling support, code review, careful reasoning and extensive testing a must.

 

Close collaboration with the Visual C++ team was crucial. Furthermore, we did extensive validation before we performed the first hot patch in production. We’ve enhanced the SQL test infrastructure to perform hot patching while running our 500,000 functional tests. Additionally, we run hot patch stress, where we patch and unpatch millions of times in a loop while hitting the server with a stress workload. When we finally roll a patch out to clusters, the rollout is monitored by the system and automatically rolled back if something unforeseen happens.
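
As a rough illustration (and emphatically not our real test harness), a patch/unpatch stress loop could look like the sketch below. To keep it self-contained, it simulates the redirect with an atomic function-pointer swap; the real mechanism rewrites function entry points, as described in section 5.

```cpp
// Hedged sketch of a patch/unpatch stress loop, not the actual SQL test
// infrastructure. The redirect is simulated with an atomic function-pointer
// swap so the example compiles and runs on its own.
#include <atomic>
#include <thread>
#include <vector>

static int FooOriginal(int x) { return x + 1; }
static int FooPatched(int x)  { return x + 2; }

static std::atomic<int (*)(int)> g_foo{&FooOriginal};

static void ApplyHotPatch()  { g_foo.store(&FooPatched); }   // stand-in for the real redirect
static void RevertHotPatch() { g_foo.store(&FooOriginal); }  // stand-in for detaching the patch

void RunPatchStress(int workers, int iterations)
{
    std::atomic<bool> stop{false};
    std::vector<std::thread> pool;

    // Stress workload: hammer the function that is being patched/unpatched.
    for (int i = 0; i < workers; ++i)
        pool.emplace_back([&] {
            int sink = 0;
            while (!stop.load(std::memory_order_relaxed))
                sink = g_foo.load(std::memory_order_relaxed)(sink);
        });

    // Patch and unpatch in a tight loop while the workload runs.
    for (int i = 0; i < iterations; ++i) {
        ApplyHotPatch();
        RevertHotPatch();
    }

    stop.store(true);
    for (auto& t : pool)
        t.join();
}
```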

 

4. How Hot Patching Works

Now, let’s dive into the technical details of how hot patching works. Replacing code in a running process is easy; memcpy() solves that. Doing it without crashing is not. A running program consists of program code (processor instructions, i.e. the contents of the compiled image: DLLs/EXEs) and in-memory state. The in-memory state consists of thread contexts and shared heap state. Thread context includes the stack and local variables, as well as the instruction pointer for the currently executing instruction and other CPU registers. The heap includes allocated memory for classes and data structures, such as the buffer pool, procedure cache and running query execution plans.

 

At the most basic level, if we simply copied the contents of a new program image on top of the old one, the program would crash. With profile-guided optimization, the new program image may be very different from the old one. Functions may not be in the same place. The size may not even be the same if the patch required inserting code. The (unchanged) instruction pointer of a thread executing in one function would now point into some other, random function where execution would continue. The copy would not be atomic, and return addresses on stacks would be invalid. At best it would crash, at worst it would corrupt data.

 

To simplify the problem, we only update the function(s) that changed between the old and new program code. The basic primitive offered by hot patching is to replace the body of one or more C/C++ functions in a running image with new implementation(s). This happens at runtime, while threads are executing inside the process, including in the function(s) being replaced.

Figure 1. Hot Patching at function/procedure level. To update “blah += 1” inside function Foo(), we load the patched image into memory and 1) redirect all callees of Foo() back to the old image and 2) redirect Foo() to the new image.

The hot patching process shown in Figure 1 above proceeds as follows.

  1. Modify C++ code
    To prepare a hot patch, we first make the necessary modifications to the C++ code of the function(s) we want to patch - say void Foo(). We then build a new version of the program image – sqlservr_patched.exe, for example. We now have the existing (unchanged) sqlservr.exe that is currently running, and a sqlservr_patched.exe that is sitting on disk. This is the left side of Figure 1. In this example, we are modifying the “blah += 1” statement inside void Foo().
  2. Load patched image into memory
    Next, we load sqlservr_patched.exe into the address space of the running process. An EXE or DLL is just a “data file”, where the “data” is CPU instructions. Loading it into a section of free memory does not disturb the running process. But loading it into memory doesn’t patch anything. We need to hook into the live code in the process.
  3. Redirect function calls
    We hook into live code by redirecting function calls that target the old function(s) in sqlservr.exe to the new replacement function(s) in sqlservr_patched.exe. Outgoing calls from the new function(s) are redirected back to the old image – we only want to patch the function itself, not its entire call tree. In Figure 1, blue marks active functions and grey marks inactive ones. It is important that we wire up the reverse redirects first, in step ①. There are threads executing Foo(), so the nanosecond we redirect it in step ②, threads start executing the new Foo() (see the sketch after this list).
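
The sketch below walks through steps 2 and 3 in code. It is illustrative only, not our actual tooling: the way the patched image is mapped, the exported function name and the RedirectCall() helper (sketched in section 5) are assumptions.

```cpp
// Illustrative sketch only, not the actual Azure SQL patching tooling.
// Assumptions: the patched image can be mapped like a module, it exports the
// replacement function, and RedirectCall() (see section 5) rewrites the old
// function's entry point. Error handling is minimal.
#include <windows.h>

void RedirectCall(void* oldFn, void* newFn);   // hypothetical helper, see section 5

bool ApplyHotPatch(const wchar_t* patchedImagePath,
                   void* oldFoo,               // Foo() in the running sqlservr.exe
                   const char* newFooName)     // exported name of the patched Foo()
{
    // Step 2: map the patched image into the running process. Nothing in the
    // live code changes yet; the new bytes just sit in their own pages.
    HMODULE patched = LoadLibraryExW(patchedImagePath, nullptr,
                                     DONT_RESOLVE_DLL_REFERENCES);
    if (patched == nullptr)
        return false;

    void* newFoo = reinterpret_cast<void*>(GetProcAddress(patched, newFooName));
    if (newFoo == nullptr)
        return false;

    // Step 3, part one: wire the patched Foo()'s outgoing calls back into the
    // old image first, so the new code is fully functional before any thread
    // can reach it. (In the real system the toolchain emits these fixups.)

    // Step 3, part two: only now redirect the old Foo() to the new one. The
    // instant this write lands, running threads start entering the patched code.
    RedirectCall(oldFoo, newFoo);
    return true;
}
```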

If things go wrong, we can detach the patch by undoing step ②, without workload impact. Even hot patches can have bugs!

 

5. How Hot Patching Works at the Instruction Level

Above, we explained how we “redirect” function calls. But the processor has no concept of function redirection – it just executes instructions. What we actually do is insert (overwrite) a JMP (jump) instruction at the beginning of the function. When a thread enters the patched function and the processor encounters this JMP, it jumps to the patched version of the function. Since this is a jump, not a function call, there is no extra stack frame. That means we don’t have to do anything special to return from the function – the RET instruction in the patched function is enough.

 

To reduce compiled program size, x64 instructions are variable length (1 to 15 bytes). The first byte or two is the op-code (operation code). To execute an instruction, the processor begins by decoding the op-code pointed to by @rip (the current instruction pointer register). Based on the op-code, the processor knows how long the instruction is. After the processor has executed the instruction, it increments @rip by the instruction length and repeats. The smallest far-JMP instruction we can use is 6 bytes: 2 bytes for the op-code and 4 bytes for the address offset.
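
For illustration, here is one way that 6-byte jump could be encoded. This is a sketch under our own assumptions (x64, an 8-byte target slot within +/- 2 GB of the jump, page protection handled elsewhere), not the production patcher.

```cpp
// Hedged sketch of the 6-byte jump described above (x64). 0xFF 0x25 encodes
// "jmp qword ptr [rip + disp32]": the CPU reads an 8-byte absolute target
// address from rip+disp32 and jumps there. Where the 8-byte slot lives, and
// making the code page writable, are left out; this only shows the encoding.
#include <cstdint>
#include <cstring>

void EncodeFarJmp(uint8_t* at, void** targetSlot, void* patchedFn)
{
    *targetSlot = patchedFn;     // 8-byte absolute address of the new function

    at[0] = 0xFF;                // 2 op-code bytes...
    at[1] = 0x25;
    // ...plus a 4-byte offset, relative to the end of this 6-byte instruction,
    // pointing at the slot that holds the target address. Assumes the slot is
    // within +/- 2 GB of the jump.
    const int32_t ripOffset = static_cast<int32_t>(
        reinterpret_cast<uint8_t*>(targetSlot) - (at + 6));
    std::memcpy(at + 2, &ripOffset, sizeof(ripOffset));
}
```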

Figure 2. Naïve function redirection at instruction level - the first 10 bytes of function Foo() shown. Redirection by overwriting the beginning of the function with a far jump (op-code 0xFF25) will crash when @rip is incremented by 5 and a partial address is interpreted as an op-code.

There may be threads executing in the function we are patching. We don’t freeze them because that would impact the workload. Figure 2 shows what would happen if we overwrote the beginning of the function with a 6-byte JMP while the processor was in the middle of executing the now-gone MOV instruction at offset 0. It would increment @rip by 5 and try to decode the byte at that address as an instruction. That isn’t an instruction, but instead part of the address offset we filled in. The program would crash.

Figure 3. Function redirection with function padding and a trampoline. We write a near-jump (0xEB) into the function and a far-jump (0xFF25) into the padding. This avoids partial instruction execution.

To solve this, we use a trick called a trampoline – see Figure 3. We compile (link) all SQL Server images with function padding – 6 bytes of empty space between functions. We use this space to write the 6-byte JMP from before. We then write a smaller, 2-byte JMP instruction at the beginning of the function that jumps back into the function padding (a 2-byte JMP can only reach offsets within +/-127 bytes). The compiler guarantees that every function starts with an instruction that is at least 2 bytes long, so the 2 bytes we overwrite never cover the start of a second instruction.
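
A hedged sketch of this install sequence, under our own simplifying assumptions (the padding sits immediately before the function entry, the entry is at least 2-byte aligned, and the target slot is within +/- 2 GB), might look like this:

```cpp
// Hedged sketch of the install sequence in Figure 3, reusing the EncodeFarJmp()
// sketch from above. The layout assumptions are ours, not the actual
// implementation.
#include <windows.h>
#include <cstdint>
#include <cstring>

void EncodeFarJmp(uint8_t* at, void** targetSlot, void* patchedFn);  // see above

void InstallTrampoline(uint8_t* func, void** targetSlot, void* patchedFn)
{
    uint8_t* padding = func - 6;                 // linker padding before the function

    DWORD oldProtect;
    VirtualProtect(padding, 8, PAGE_EXECUTE_READWRITE, &oldProtect);

    // Step 1: write the 6-byte far jump into the padding. No thread executes
    // the padding yet, so a plain write is safe here.
    EncodeFarJmp(padding, targetSlot, patchedFn);

    // Step 2: overwrite the first 2 bytes of the function with "EB F8",
    // a short jump of -8 bytes, i.e. back to the start of the padding.
    // A single aligned 16-bit store is atomic: a thread entering the function
    // sees either the old first instruction or the complete short jump.
    const uint16_t shortJmpBytes = 0xF8EB;       // bytes EB F8 in memory
    SHORT encoded;
    std::memcpy(&encoded, &shortJmpBytes, sizeof(encoded));
    InterlockedExchange16(reinterpret_cast<volatile SHORT*>(func), encoded);

    VirtualProtect(padding, 8, oldProtect, &oldProtect);
    FlushInstructionCache(GetCurrentProcess(), padding, 8);
}
```

Detaching a patch (undoing step ② from section 4) then amounts to atomically restoring the original first two bytes of the function.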

 

6. Functional Correctness of Patches

When compiling a program, the compiler guarantees that your program is correct as a whole. It does not guarantee that any individual function looks like its source code. Compiler optimizations may, for example, inline a function into other functions, remove an if-branch that is never taken, change the number of parameters, and apply many other tricks.

 

Figure 4. Compiler optimizations create trouble for hot patching. In this example, the compiler removed code needed by the hot patch.

This spells trouble for hot patching because those optimizations may change when we modify code. Figure 4 shows an example. Here, the compiler found that nobody ever calls ExecuteAction(ACTION2), so it removed the switch case for it. If we patch PatchedFunction() to start using ACTION2, it won’t work – we also need to patch ExecuteAction().
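
In code, the Figure 4 scenario looks roughly like the sketch below (the names follow the figure; the actual source isn’t shown here):

```cpp
// Rough reconstruction of the Figure 4 scenario. The names follow the figure;
// the actual source code is not shown in this post.
enum Action { ACTION1, ACTION2 };

void ExecuteAction(Action a)
{
    switch (a)
    {
    case ACTION1: /* ... */ break;
    case ACTION2: /* ... */ break;   // dead code before the patch: nobody
                                     // passes ACTION2, so the optimizer may
                                     // remove this case from the binary
    }
}

void PatchedFunction()
{
    ExecuteAction(ACTION1);
    ExecuteAction(ACTION2);          // added by the hot patch; only works if
                                     // ExecuteAction() is patched as well
}
```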

 

This is where collaboration with the Visual C++ compiler team was crucial. By building hot patching into the compiler, it can detect and correctly handle such cases and we can guarantee that the hot patch is functionally equivalent to the source code. Inlining and globals also require special consideration:

 

Inlining: If a function Parent() contains a call to Foo(), the compiler may inline Foo() into Parent() to avoid the overhead of a function call. Patching Foo() won’t fix the copy inside Parent(). The solution is to also hot patch Parent(). Parent() may itself have been inlined somewhere else – if so, we need to patch that function too, and so on recursively.
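
A made-up example of the hazard:

```cpp
// Made-up example of the inlining hazard. If the optimizer inlined Foo() into
// Parent(), the compiled Parent() contains its own copy of Foo()'s old body.
// Redirecting Foo() alone leaves that copy unpatched, so Parent() must be
// patched too (and any function Parent() was inlined into, recursively).
inline int Foo(int x)
{
    return x + 1;        // the hot patch changes this line
}

int Parent(int x)
{
    // After inlining, the generated code here effectively computes (x + 1) * 2
    // directly and never calls Foo() at runtime.
    return Foo(x) * 2;
}
```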

 

Globals: A global variable is nothing but a compiler-assigned memory location inside the compiled image. A reference to the global variable in code translates to instructions referencing this memory location. This presents a problem for hot patching: sqlservr_patched.exe::Foo() would reference a memory location inside sqlservr_patched.exe, while at runtime the live global actually resides in sqlservr.exe.

 

This is solved by the compiler transforming global references to proxy references. These references are patched up at runtime to point to the correct location. Figure 5 shows the compiler transformation.

Figure 5. Compiler transformation of globals to proxy references that can be patched to the right address at runtime.
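
Conceptually, the transformation looks like the sketch below; the names and the fixup step are illustrative, not the real compiler output.

```cpp
// Conceptual before/after of the Figure 5 transformation. The variable names
// and the fixup step are illustrative; the real compiler-generated proxies
// differ in detail.

// In the running sqlservr.exe, code references the global directly:
int g_rowCount = 0;
void Foo_Original() { g_rowCount++; }      // uses g_rowCount's address in sqlservr.exe

// In sqlservr_patched.exe, the same reference goes through a proxy pointer.
// When the patch is applied, the hot-patch runtime writes the address of the
// live global (the one in sqlservr.exe) into the proxy, so the patched code
// updates the real, running state rather than an unused copy of the global
// in the patched image.
int* g_rowCount_proxy = nullptr;           // fixed up at patch time
void Foo_Patched() { (*g_rowCount_proxy)++; }
```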

7. Closing Words and Next Steps

With the capability in place, we are now working to improve the tooling and remove limitations to make more changes hot patchable with quick turnaround. This benefits you as a customer on our platform by allowing us to fix bugs and issues quickly without impacting your workload. For now, hot patching is only available in Azure SQL Database, but some day it may also come to SQL Server. Let us know via SQLDBArchitects@microsoft.com if you would be interested.

 

We hope this gave you a view into some of the exciting work we are doing. While this barely scratches the surface of hot patching, please leave comments and questions below or contact us on the email above if you would like to see more in-depth coverage of cool technology we work on.
