FinOps Blog

What’s new in FinOps toolkit 0.6 – September 2024

flanakin
Microsoft
Oct 15, 2024

Whether you consider yourself a FinOps practitioner, someone who’s enthusiastic about driving cloud efficiency and maximizing the value you get from the cloud, or someone who was just asked to look at ways to reduce cost, the FinOps toolkit has something for you. This month, we’re excited to share a new library for FinOps best practices, new Power BI reports for governance and workload optimization, promoted tags in Power BI, more datasets in FinOps hubs, a consolidated tool for all FinOps workbooks, more Azure Optimization Engine improvements, an updated services mapping file that includes FOCUS 1.1 ServiceSubcategory, and other small improvements and bug fixes.

 

New to FinOps toolkit?

In case you haven’t heard, the FinOps toolkit is an open-source collection of tools and resources that help you learn, adopt, and implement FinOps in the Microsoft Cloud. The foundation of the toolkit is the Implementing FinOps guide that helps you get started with FinOps whether you’re using native tools in the Azure portal, looking for ways to automate and extend those tools, or if you’re looking to build your own FinOps tools and reports. To learn more about the toolkit, how to provide feedback, or how to contribute, see the FinOps toolkit site.

 

Introducing the FinOps best practice library

FinOps is an extremely broad space. You may be looking for more insight into your usage, how that usage is priced, how to identify anomalies based on unique pricing models, how to allocate and build a chargeback model for shared costs, how to forecast and budget for them, and so on. And this isn’t something you do once, either. It’s something you need to do for each and every service. And as new services and pricing models are introduced, this challenge continues to grow year after year. Learning about each of these areas for every service your organization uses requires a staggering effort. Building out a collection of lessons learned and proven practices and formalizing those into flexible tools and resources was one of the foundational goals of the FinOps toolkit. And with that, we are happy to introduce the FinOps best practices library.

 

As a starting point, we’ve pulled in some of the key queries from the Cost optimization workbook. Going forward, we will continue to build out the library to include more than just queries, but also cover tips and tricks for how to understand, optimize, and quantify the value of each of the services you use. Of course, as I mentioned, this is a very formidable task. With that, we are looking for feedback on what you would like to see next. And for those who’ve amassed their own collection of proven practices, we encourage you to share them with others via this central resource.

 

To learn more about the FinOps best practices library, see Unlocking Azure savings: Introducing the FinOps best practices library. And if you have any requests to add to the library or want to submit your own tips and tricks, create an issue or, better yet, submit a pull request! Learn more about the many ways to contribute and jump right in!

 

New Power BI reports for governance and workload optimization

In today's fast-paced environment, engineering, business, and finance teams must work together to accelerate product development and maximize business value through better financial control and predictability. But this can only happen when FinOps data is easily accessible by all stakeholders. And while engineers have many tools in the Azure portal, business and finance teams historically haven’t had access to details about what’s deployed in the cloud or the optimization opportunities that might exist. This is where FinOps toolkit Power BI reports come in. In September, we added new governance and workload optimization reports to the FinOps toolkit to offer even more clarity.

 

The Governance report summarizes your Microsoft Cloud governance posture and offers standard metrics aligned with the Cloud Adoption Framework to facilitate identifying issues, applying recommendations, and resolving compliance gaps. The report includes many views including a summary of subscriptions and resources, policy compliance, virtual machines, managed disks, SQL databases, and network security groups.

 

The Workload optimization report provides insights into resource utilization and efficiency opportunities based on historical usage patterns. Specifically, you can get a summary of Azure Advisor cost recommendations or review any managed disks that are not currently attached and may no longer be needed. We recommend reviewing unattached disks to determine whether they’re still needed and deleting any that aren’t to avoid unnecessary storage costs.

 

Both reports are just the beginning of what’s possible. They leverage Azure Resource Graph and will require the person or service principal used to refresh reports to have at least read access to the subscriptions you want to report on. We’ll continue to expand both reports to cover more scenarios and bring additional clarity based on your feedback. We encourage you to build on these reports and let us know what you’d like to see next in upcoming releases.

 

Reporting on tags in Power BI

One of the most important steps to understanding your cloud costs is knowing who’s responsible. While identifying costs based on subscriptions and resource groups provides a simple mechanism for tracking accountability, it often isn’t enough to provide a holistic view for leaders across the organization. This is why many organizations use tags to amend the cloud cost and usage with metadata that allows them to map costs back to responsible projects and teams, identify engineering owners, define the purpose, identify the environment, and more. Now, the latest version of the FinOps toolkit reports includes an option to extract specific tags to support building your own custom reports.

 

 

To update the list of promoted tags, go to Transform data > Storage > CostDetails, select Advanced Editor to view the underlying query, update the list of PromotedTags as desired, and select Done, then Close & Apply. The list of tags will be extracted into “tag_*” columns in the CostDetails table. Once data is refreshed, you can customize existing visuals to include your tags or build out new pages and reports to suit your needs.
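Conceptually, the promoted tags step takes the JSON string of tags on each cost row and expands the tags you list into dedicated “tag_*” columns. The following Python sketch illustrates that transformation (the sample rows and tag names are hypothetical; the actual logic in the report is written in Power Query M):

```python
import json

# Hypothetical sample rows: in cost exports, the Tags column is a JSON
# string of key/value pairs. Names and values here are illustrative.
rows = [
    {"ResourceName": "vm-prod-01", "Tags": '{"env": "prod", "owner": "team-a"}'},
    {"ResourceName": "vm-dev-02", "Tags": '{"env": "dev"}'},
]

# Tags to promote into their own columns, mirroring the PromotedTags
# list you edit in the Power Query Advanced Editor.
promoted_tags = ["env", "owner"]

for row in rows:
    tags = json.loads(row["Tags"]) if row["Tags"] else {}
    for tag in promoted_tags:
        # Missing tags become empty values, matching the "tag_*" column convention.
        row[f"tag_{tag}"] = tags.get(tag, "")

print(rows[0]["tag_env"])    # prod
print(rows[1]["tag_owner"])  # (empty string)
```

Once each tag lives in its own column, it can be used like any other dimension in visuals, slicers, and custom reports.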

 

Of course, there’s a lot to do when it comes to tagging, metadata, and the larger allocation space. Let us know what you’d like to see next. We’d like to add a more comprehensive allocation engine into FinOps hubs in the future, so understanding your needs will help inform that design. Please join us in FinOps toolkit discussions to share your perspective on this or any other capability.

 

Ingest all Cost Management datasets in FinOps hubs

In August, we added the ability to point Power BI reports to raw exports without FinOps hubs to support all exportable datasets from Cost Management. In September, we completed the other half of that by adding native support for all Cost Management datasets, data formats, and compression options in FinOps hubs. This provides a simpler, more performant option for ingesting and working with data at scale in storage.

 

Cost Management supports the following exportable datasets:

  • Cost and usage
  • Price sheet
  • Reservation details
  • Reservation recommendations
  • Reservation transactions

 

Note that the price and reservation exports are only available for Enterprise Agreement billing accounts and Microsoft Customer Agreement billing profiles today.

 

If you currently have CSV exports or are still using the Cost Management connector for reservation recommendations, we highly recommend updating to parquet exports with snappy compression, when available, and switching to reservation recommendations coming from exports rather than the connector. As a reminder, the Cost Management connector is no longer being maintained, so this provides a good path forward.

 

Performance improvements in FinOps hubs and Power BI

Given the breadth and depth of data needed to manage and optimize cost, usage, and carbon over time, performance and scalability are two critical aspects of any FinOps practice. This is one of the core design principles for FinOps hubs. And as we continue to lay the foundational elements to enable our vision of FinOps, we continue to look back at ways to optimize what we have so far. In September, we introduced a few changes to streamline performance and improve scalability.

 

When FinOps hubs was first released, we converted Cost Management CSV exports to parquet to improve data refresh speeds and scale to larger datasets for reporting on raw cost data in Power BI. Now that Cost Management has native support for Gzip CSV and snappy parquet exports, FinOps hubs has been updated to support ingesting these format and compression options. If you’re using CSV exports today, we highly recommend switching to snappy parquet exports, as this provides improved performance and works better with Power BI incremental refresh than the current parquet conversion in Azure Data Factory. Once you’ve updated to FinOps hubs 0.6, simply delete the old exports and create new ones with snappy parquet.

 

Looking beyond initial ingestion, we’ve also been evaluating ways to streamline data loading in Power BI and other tools, like Microsoft Fabric or databases. With the inclusion of additional datasets, we realized it was time to change how data is stored, which means keeping report and hub versions in sync matters more than before. For details about which versions of reports work with which versions of FinOps hubs, see the compatibility guide. When you identify the right target release, use the upgrade guide to help.

 

Beyond these changes in FinOps hubs, the CostDetails and Prices queries were also optimized to reduce load time. These changes will impact anyone using FinOps toolkit Power BI reports, whether using them against raw storage or FinOps hubs.

 

Stay tuned for more performance and scalability improvements. We’re eager to enable large scale data analytics on top of all datasets to unlock new scenarios and capabilities.

 

Get the latest FinOps workbooks in one convenient package

Every month we look for ways to improve the FinOps workbooks to make it easier for you to optimize and govern your cloud environment. In September, we streamlined the deployment experience to make it easier for you to get the latest workbooks into your environment with a single FinOps workbooks template.

 

When you deploy the FinOps workbooks template, you’ll see a new option to select which workbooks you want. It’s that simple. As we look to include additional workbooks, you can simply redeploy the template to get the latest and greatest improvements to existing workbooks as well as any new workbooks.

 

With that in mind, let us know what you’d like to see, whether you’re looking for more capabilities in the optimization or governance workbooks or are interested in coverage of a new FinOps capability or Microsoft Cloud service. Whatever you need, we’re here to help. We evaluate changes to our workbooks every month, so let us know what you’d like to see next!

 

What’s new in Azure Optimization Engine

Last month, I talked about how important security is to us at Microsoft. In September, we continued our secure-by-default push by improving storage account security in the Azure Optimization Engine (AOE). We also improved troubleshooting documentation and deprecated the legacy Log Analytics agent in the process.

 

AOE runbooks have all been updated to replace key-based authentication against Azure storage with Entra ID authentication. Deployment scripts were also updated to remove plain text Entra ID token responses for added security.

 

And in an effort to provide additional self-help guidance for troubleshooting common issues, AOE now includes a troubleshooting page with the most common deployment and runtime issues and their respective solutions. We hope this will save you time if you ever run into an issue. And if you find anything missing, let us know where you’re getting stuck and how we can help.

 

Finally, with the deprecation of the legacy Log Analytics agent in August 2024, we stopped maintaining the legacy agent-related AOE setup assets and now recommend everyone migrate to the Azure Monitor Agent and corresponding toolset. For additional details, refer to Migrate to Azure Monitor Agent from Log Analytics agent.

 

New mapping for FOCUS 1.1 ServiceSubcategory

As many of you already know, I’m a staunch believer and proponent of the FinOps Open Cost and Usage Specification (FOCUS). FOCUS has so much potential to streamline every corner of FinOps, from early education and enablement to advanced optimization and unit economics. And as both a FOCUS steering committee member and maintainer, I can say that, as proud as we were to ship FOCUS 1.0 in June, that didn’t slow us down. FOCUS members from all corners of the globe continue to dedicate their time to pushing FOCUS forward day after day. And with a goal of shipping 2 updates every year, we’re coming up on the FOCUS 1.1 release. Of specific interest to the FinOps toolkit is one of our open data files that facilitates mapping resources to services, service categories, and now – new as of FOCUS 1.1 – service subcategories.

 

For those who aren’t familiar, ServiceName in FOCUS refers to the service the resource type falls into. This is distinctly different from MeterCategory or even the current ServiceName column in actual and amortized cost datasets because those revolve around the usage and not the resource. Perhaps my favorite example is this: If you calculate the total cost from all rows where ResourceType is “Microsoft.Compute/virtualMachines” and compare that to the sum of cost where MeterCategory is “Virtual Machines”, some of you may be surprised to learn these return different totals. The reason is that each resource emits different types of usage, like bandwidth, which is categorized as a networking charge rather than a VM charge. FOCUS ServiceName improves on this by helping you quantify the total cost of all resources within a specific service. This is what the Services mapping file in the toolkit provides.
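To make the VM example concrete, here’s a minimal sketch with made-up cost rows showing why the two totals differ: the VM resource also emits bandwidth usage, which is metered under a networking category rather than Virtual Machines.

```python
# Illustrative cost rows (numbers are made up). A single VM resource
# emits both compute usage and bandwidth usage, and the bandwidth row
# carries a different MeterCategory than the resource type implies.
rows = [
    {"ResourceType": "Microsoft.Compute/virtualMachines",
     "MeterCategory": "Virtual Machines", "Cost": 100.0},
    {"ResourceType": "Microsoft.Compute/virtualMachines",
     "MeterCategory": "Bandwidth", "Cost": 5.0},
]

# Total cost of the VM resource (what FOCUS ServiceName helps you answer).
by_resource_type = sum(
    r["Cost"] for r in rows
    if r["ResourceType"] == "Microsoft.Compute/virtualMachines"
)

# Total cost of usage metered as Virtual Machines (misses the bandwidth row).
by_meter_category = sum(
    r["Cost"] for r in rows if r["MeterCategory"] == "Virtual Machines"
)

print(by_resource_type)   # 105.0
print(by_meter_category)  # 100.0
```

The gap between the two sums is exactly the usage that belongs to the resource but is categorized under a different meter.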

 

FOCUS also introduced a provider-agnostic categorization of services, which can be helpful when grouping and aggregating costs across providers. While each provider has their own columns to track the type of service (more accurately, the types of usage), the names of and values in those columns are currently inconsistent given there has never been a centralized standard to align to. With FOCUS, you’re able to group by or filter on ServiceCategory across providers for a single set of consistent groups for simpler reporting and quicker answers.

 

Coming soon, in FOCUS 1.1, you’ll also see a new ServiceSubcategory that breaks ServiceCategory down to the next level. As an example, Compute is broken down into Virtual Machines, Containers, Serverless Compute, and more. Databases are broken down into Relational Databases, NoSQL Databases, Caching, and more. The list goes on. As the FOCUS ServiceSubcategory column was finalized, we updated the Services mapping file in the toolkit to include this additional detail so you can now apply it to your own datasets, whether you’re using an existing FOCUS version, actual or amortized costs, or even if you’re interested in categorizing other resource datasets. There are many uses of a provider-agnostic categorization of services, and this dataset will help you achieve your goals, whatever they might be.
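Applying the mapping to your own data amounts to a lookup from resource type to the FOCUS service columns. The sketch below is patterned after the toolkit’s Services mapping file, but the dictionary entries here are illustrative examples, not copied from the actual file:

```python
# Hypothetical excerpt of a resource-type-to-service mapping, modeled on
# the toolkit's Services open data file. Values are illustrative.
service_map = {
    "microsoft.compute/virtualmachines": {
        "ServiceName": "Virtual Machines",
        "ServiceCategory": "Compute",
        "ServiceSubcategory": "Virtual Machines",
    },
    "microsoft.documentdb/databaseaccounts": {
        "ServiceName": "Azure Cosmos DB",
        "ServiceCategory": "Databases",
        "ServiceSubcategory": "NoSQL Databases",
    },
}

def categorize(resource_type: str) -> dict:
    """Look up FOCUS service columns for a resource type (case-insensitive),
    falling back to a catch-all category for unmapped types."""
    return service_map.get(resource_type.lower(), {"ServiceCategory": "Other"})

print(categorize("Microsoft.Compute/virtualMachines")["ServiceSubcategory"])
# Virtual Machines
```

Joining this mapping onto any dataset that carries a resource type column gives you consistent ServiceCategory and ServiceSubcategory groupings, regardless of which cost dataset the rows came from.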

 

And for those interested in leveraging this data from PowerShell, you can also use the Get-FinOpsService command from the FinOps toolkit PowerShell module, which now includes a -ServiceSubcategory filter option.

 

If this sounds interesting, please do check out the other open data files available in the toolkit and let us know what you’d like to see next.

 

Other new and noteworthy updates

Many small improvements and bug fixes go into each release, so covering everything in detail can be a lot to take in. But I do want to call out a few other small things that you may be interested in.

 

In FinOps hubs:

  • Renamed the following pipelines to be clearer about their intent:
    • config_BackfillData to config_StartBackfillProcess.
    • config_ExportData to config_StartExportProcess.
    • config_RunBackfill to config_RunBackfillJob.
    • config_RunExports to config_RunExportJobs.
  • Changed the storage ingestion path from “{scope}/{yyyyMM}/{dataset}” to “{dataset}/{yyyy}/{MM}/{dataset}”
  • Improved error handling in the config_RunBackfillJob and config_StartExportProcess pipelines, which was causing them to fail in some situations.
  • Removed the temporary Event Grid resource from the template, which was intended to streamline first-time setup but inadvertently caused unexpected costs when the deployment cleanup script failed.
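For anyone with automation that reads hub storage directly, the new ingestion path layout described above can be sketched as a small helper (the function name and dataset value are hypothetical; the format string follows the “{dataset}/{yyyy}/{MM}/{dataset}” layout from the release notes):

```python
from datetime import date

def hub_ingestion_path(dataset: str, month: date) -> str:
    """Build the FinOps hubs 0.6 storage ingestion path,
    '{dataset}/{yyyy}/{MM}/{dataset}', which replaces the older
    '{scope}/{yyyyMM}/{dataset}' layout."""
    return f"{dataset}/{month:%Y}/{month:%m}/{dataset}"

print(hub_ingestion_path("focuscost", date(2024, 9, 1)))
# focuscost/2024/09/focuscost
```

Leading with the dataset name means all months of one dataset sit under a single prefix, which simplifies listing and incremental processing compared to the old scope-first layout.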

 

In Power BI reports:

 

In open data:

  • 48 new resource types were added and 14 were updated.
  • 4 new service mappings were added.

 

What’s next

Here are a few of the things we’re looking at in the coming months:

  • FinOps hubs will enable large scale analytics in Azure Data Explorer and add support for private endpoints.
  • FinOps workbooks will continue to get recurring updates, expand to more FinOps capabilities, and add cost from FinOps hubs.
  • Azure Optimization Engine will continue to receive small updates as we plan out the next major release of the tool.
  • Each release, we’ll try to pick at least one of the highest voted issues (based on 👍 votes) to continue to evolve based on your feedback, so keep the feedback coming!

 

To learn more, check out the FinOps toolkit roadmap, and please let us know if there’s anything you’d like to see in a future release. Whether you’re using native products, automating and extending those products, or using custom solutions, we’re here to help make FinOps easier to adopt and implement.

 

Updated Oct 10, 2024
Version 1.0
  • chisholmd:

    I just wanted to say how excited I am about the FinOps Toolkit 0.6 release! I noticed your announcement and our team set up an assessment to explore the capabilities of the new iteration. We’ve just wrapped our first session and have two more to go! We really appreciate all the hard work your team has put into delivering these new capabilities to the community. Looking forward to seeing all the great improvements!