Microsoft Mechanics Blog

Can cloud native architectures lower your long-term costs?

Zachary-Cavanell
Bronze Contributor
Jun 28, 2023

Architect workloads in Azure for long-term efficiency and growth. Consolidate VM count and transition applications to containers using the discovery and assessment tool in Azure Migrate. Use Azure Kubernetes Service (AKS) and DevOps practices for more efficient workload management. Increase capacity at any scale and work across different data types using SQL Serverless architecture.

 

Azure expert Matt McSpirit joins Jeremy Chapman to share cloud-native approaches for running workloads in Azure at reduced cost.

 

Consolidate VM count with containers.

Fewer nodes to manage and lower OS licensing costs. See how to set it up in Azure.

 

Reduce costs in Azure with AKS-specific options.

Determine efficient cluster architectures, use VMs that work with ephemeral OS disks, and pause an AKS cluster using the start and stop feature. Check it out.

 

Increase capacity at any scale and work across different data types.

Reduce operational costs using SQL Serverless architecture with Azure. Start here.

 

Watch our video here.

 

QUICK LINKS:

00:00 — Introduction

01:06 — Consolidate VM count

02:35 — Assessment and migration options

04:01 — AKS-specific options

05:34 — Management processes

06:37 — Efficiency for data backend

07:39 — Paths for cloud analytics and AI

08:16 — Query efficiency and compat levels

09:47 — Wrap up

 

Link References:

Quick ways to reduce Azure costs at https://aka.ms/CostReductionMechanics

How to change the compat level at https://aka.ms/SQL22Mechanics

More ideas for workload efficiency at https://aka.ms/Azure-DMWL

 

Unfamiliar with Microsoft Mechanics?

Microsoft Mechanics is Microsoft’s official video series for IT. Watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.

 

To keep getting this insider knowledge, join us on social:


Video Transcript:

- Up next, we continue our series on architecting your workloads for efficiency with a look at how you can make the right choices for your workloads to set yourself up for ongoing efficiency and growth in the long term. And to guide us on this topic, we’re joined once again by Azure expert Matt McSpirit, welcome back.

 

- Always good to be back, thank you.

 

- It’s great to have you back on. And, you know, the last time you were on the show, we actually demonstrated the top things that you can do to achieve immediate savings on your cloud spend as you run your workloads in Azure. Now, if you missed that episode, it’s worth checking that out at aka.ms/CostReductionMechanics. And today we’re looking at workload strategies for achieving more in Azure for the long term. So Matt, how should we think about this?

 

- Well, in many respects, it’s about finding the best ways to modernize your workloads in Azure to be as efficient as possible. Once you’re in Azure, you’ve taken that first modernization step. And if you follow the recommendations in our last show, the savings you make from identifying and stopping unnecessary cloud spend in the short term can be reinvested for even more savings in the long term by choosing more cloud-native approaches to running your workloads.

 

- This is a pretty big topic though. How would I know even where to start?

 

- Yeah, it is, as you say, a big topic, but there are some obvious places to start modernization efforts. An impactful place to begin is by consolidating your VM count. Now if you’ve lifted and shifted your VMs to the cloud already, that was a good move, and now there’s even more opportunity to save costs. But let me put this into context. If you’ve been in IT for a while, you might remember the shift from bare metal infrastructure to virtual machines as a way to drive the highest possible utilization of hardware, power, and other related costs. We call this server consolidation. Now with containers, you can consolidate even further. With VMs running on physical servers, each VM has its own instance of an operating system, which in itself carries a lot of compute overhead and licensing cost, and each needs to be patched regularly. With containers, you are reducing the number of VMs. It’s no longer one app per VM: a single VM can run a collection of apps and services, and the containers within the VM are sandboxed app packages that share the same VM host OS. This means a smaller number of VM instances running the same number of, or often more, app instances, lower OS licensing costs, and fewer nodes to manage. And with containers, you can instantly scale the number of running container app instances based on real-time user demand.
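To make the consolidation math concrete, here is a small back-of-the-envelope sketch. The app counts and the containers-per-node density are illustrative assumptions, not Azure sizing guidance:

```python
# Illustrative consolidation estimate: one app per VM (lift-and-shift)
# versus many containers sharing a single VM host OS.
# All figures below are hypothetical.

def vms_needed(app_instances, apps_per_vm):
    """Number of VMs required to host the given app instances (ceiling division)."""
    return -(-app_instances // apps_per_vm)

app_instances = 60                                  # app/service instances to run
one_app_per_vm = vms_needed(app_instances, 1)       # classic one-app-per-VM model
containerized = vms_needed(app_instances, 12)       # assume ~12 containers per node

print(f"VMs (one app per VM): {one_app_per_vm}")    # 60 OS instances to license and patch
print(f"VM nodes (containers): {containerized}")    # 5 host OS instances
print(f"OS instances removed:  {one_app_per_vm - containerized}")
```

Each OS instance removed is one fewer node to patch, monitor, and license, which is where the recurring savings come from.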

 

- And all these are things you can do to radically save costs and the time needed also to keep everything up and running. It also sets you up to respond to elastic demand as you run your workloads. That said, though, I know a lot of people are probably watching right now and thinking, getting further down this modernization path is easier said than done and itself requires a lot of investment that equates kind of to time and money.

 

- Well, the first thing I’d say is it’s a lot easier now than it was in the past. Our free tool, Azure Migrate, provides a guided experience for migrating your applications to containers. The discovery and assessment tool helps you identify the applications that are suitable for migration and provides recommendations for containerizing them. You can then use Azure Migrate to transform the contents of your virtual machines running ASP.NET web apps, or Java web apps running on Apache Tomcat, into container-based apps, which will then run in the Azure Kubernetes Service, or AKS. Alternatively, you can migrate your ASP.NET apps into the Azure App Service, which also runs on container infrastructure. The Azure App Service as a destination then simplifies the process further post-migration by managing the underlying infrastructure for you, allowing you to focus on your code and high-level parameters around running the service instead of the underlying infrastructure. In fact, Azure is the most cost-effective and performant environment to run ASP.NET web workloads. It can be significantly less expensive than traditional on-prem IIS-based applications.

 

- And it’s great to see the automated assessment and migration tools, you know, but as a rule, these work primarily for line of business or kind of first-party internally-built apps. So what if I’ve got some pre-packaged third-party apps? So how would that work?

 

- Well, there are definitely going to be some apps that lend themselves more to containers than others. As a general rule, I’d agree it’s good to start with new apps and the ones you have more control over. And it’s also a good idea to check if the third-party apps you have can be run in containers. You don’t have to do everything at once. You can just start with your app front end and work from there. Now, whether you are new to AKS or have existing services running, there are things you can do to further save costs with your container-based apps. For example, the new cluster preset configuration options in AKS help you determine the most efficient and suitable cluster architectures from the start. So you can choose from Dev/Test or cost-optimized configurations to suit your app’s needs. You can also choose ephemeral OS disks, which not only provide lower read/write latency and faster scaling, but also, since containers aren’t designed to persist local state to the managed OS disk, let you use VMs that work with ephemeral OS disks to save on VM-related storage costs. And of course, for your running clusters, the AKS start and stop feature allows you to pause an AKS cluster to save cost. While the cluster is stopped, it keeps your configurations in place, and you can resume where you left off without reconfiguring your clusters. And those are just a few AKS-specific options, in addition to the options we discussed on our last show with Spot VMs, Azure Reservations, and resource quotas that you can enforce at the namespace level to keep compute, storage, and object counts within defined quota limits.
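To get a rough sense of what pausing non-production clusters can be worth, the sketch below models stopping a dev/test cluster outside working hours. The node count, hourly rate, and schedule are assumptions for illustration; actual savings depend on your VM SKU and region pricing:

```python
# Hypothetical dev/test AKS cluster: with start/stop, you pay for
# agent-node compute only while the cluster is running.
nodes = 3
hourly_rate = 0.20           # assumed per-node compute cost in USD (illustrative)
hours_per_week = 7 * 24      # 168 hours

always_on = nodes * hourly_rate * hours_per_week

business_hours = 5 * 10      # run weekdays only, 10 hours per day
stopped_schedule = nodes * hourly_rate * business_hours

print(f"Always-on weekly compute:  ${always_on:.2f}")
print(f"Stop/start weekly compute: ${stopped_schedule:.2f}")
print(f"Weekly compute saved:      ${always_on - stopped_schedule:.2f}")
```

Because the stopped cluster keeps its configuration, resuming it is a start operation rather than a redeployment, so the saving comes with no reconfiguration cost.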

 

- That makes a lot of sense, but how should we think about management? You know, one of the things that Microsoft’s internal IT team pointed out to me when I asked them for their best practices on moving to container-based apps is really to avoid managing your containers kind of like mini VMs.

 

- Yeah, that’s a really important point and easily solved if you’re already using DevOps practices today. And if not, it’s a good idea to get on board with this modern engineering approach. With CI/CD tooling like Azure DevOps, you can update the contents of a container, package it up, and push it onto VM nodes to swap out previous container versions as part of an automated update. And unlike apps running in virtual machines, you don’t need to worry about software distribution or updating OS images. It’s just built into the change management tooling. Even with the upfront learning curve, the payoff is that you’re able to build and deploy your apps a lot faster and more efficiently.

 

- And these processes along with the related tooling are really more about saving time and also speeding up change management processes as you modernize your workload. So what else then would you recommend?

 

- Well, the most common place to start is with your app front end and middle tier. The other area to focus on for greater efficiency and workload growth is your data backend. And here there is a spectrum of options available in Azure with a ton of advantages that let you dynamically increase your capacity at any scale and work across your different data types. This can help you reduce your operational costs. There are also serverless options. For example, you can use SQL Serverless in places where you see infrequent spikes in activity. In cases where, say, invoicing runs once or twice per month, a serverless architecture lets you run a zero or minimal hot footprint. Then, when activity does spike, resources are provisioned and run only for those periods of time, reducing the costs associated with idle resources. And if you’re concerned about moving your data to the cloud, you can re-platform your existing SQL Server backend to SQL Server 2022, which gives you the best of both worlds because it works seamlessly with Azure. This lets you take advantage of everything from faster querying to managing failover for business continuity in the cloud while your data remains on premises.
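The serverless trade-off described above can be sketched numerically. The rates and durations below are hypothetical placeholders, not Azure SQL pricing; the point is the shape of the comparison for a spiky workload:

```python
# Compare an always-provisioned database with a serverless one that
# bills compute only while the monthly invoicing spike is active.
# All rates are illustrative assumptions.
hours_per_month = 730
provisioned_rate = 0.50       # assumed $/hour for a fixed-size provisioned tier

active_hours = 2 * 6          # two invoicing runs per month, ~6 hours each
serverless_rate = 0.75        # assumed $/hour while active (higher unit rate)
idle_floor = 0.0              # assume auto-pause: no compute billed while idle

provisioned_cost = hours_per_month * provisioned_rate
serverless_cost = (active_hours * serverless_rate
                   + (hours_per_month - active_hours) * idle_floor)

print(f"Provisioned: ${provisioned_cost:.2f}/month")
print(f"Serverless:  ${serverless_cost:.2f}/month")
```

Note the serverless unit rate is assumed to be higher; serverless only wins when the workload is idle most of the time, which is exactly the infrequent-spike pattern described above.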

 

- And also, the better the availability and accessibility of your organizational data, the more you’ll be able to take advantage of things like advanced cloud analytics and AI. So what’s the best path forward here?

 

- So first, if you decide to go down the PaaS path, you can of course migrate your existing IaaS databases in Azure to Azure PaaS databases, such as Azure SQL Database, by using the Azure Database Migration Service. This service helps you simplify, guide, and automate your database migration to Azure. And to move your existing on-premises SQL Server instances to SQL Server 2022, you can upgrade from version 2012 or newer. Now, because the SQL query engine is the same whether you’re using SQL Server in VMs, managed instances, or PaaS databases in Azure, you can often save on query-driven compute simply by updating to newer compat levels. The improved query efficiency not only saves time but in turn can save on compute costs.

 

- That’s right. We’ve got a few examples. If you missed it, we actually recently had Bob Ward on the show, so check out aka.ms/SQL22Mechanics to find that. He demonstrated a few ways where just changing the compat level could help achieve this. For example, using degree of parallelism feedback, or DOP feedback, he showed how a stored procedure running at compat level 160 with 32 threads could achieve the same or even better performance running with just 12 threads. That’s a measurable reduction in compute. Another example we saw took an analytical query over 23 million rows that executed in 17 seconds at compat level 130. Just by changing the compat level to 160, the query time went from 17 seconds all the way down to two seconds on the exact same infrastructure. On the back end, this all comes down to compute, and it makes things a lot more efficient.
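Using the figures quoted in the demo, the improvement works out as follows (simple arithmetic on the numbers above, not a new benchmark):

```python
# DOP feedback: same or better performance with 12 threads instead of 32.
thread_reduction = (32 - 12) / 32     # fraction of threads no longer needed

# Analytical query over 23M rows: 17 s at compat level 130, 2 s at 160.
speedup = 17 / 2                      # how many times faster
time_reduction = (17 - 2) / 17        # fraction of compute time saved

print(f"Thread reduction: {thread_reduction:.1%}")
print(f"Query speedup:    {speedup:.1f}x")
print(f"Time reduction:   {time_reduction:.1%}")
```

Since query-driven compute is billed by what you consume, reductions of this size flow fairly directly into cost, which is why a compat-level change can pay off even in the short term.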

 

- That’s right. And in those cases, we’re not even taking into account how management for managed instances and managed databases in Azure will also equate to time saved for maintaining the infrastructure and keeping it running. There are additional examples where cloud native approaches to running your workloads can lead to more savings in the long term and sometimes like with the serverless database or compat level examples even in the short term.

 

- Thanks, Matt, for all the additional tips on how to do more in Azure while still saving on cost. So where can people go to learn more?

 

- Well, to get more ideas on ways you can optimize for workload efficiency in Azure, check out aka.ms/Architecting4Efficiency.

 

- Awesome stuff. And keep checking back to Microsoft Mechanics for all the latest tech updates. Subscribe to our channel if you haven’t already. And as always, thank you for watching.
