FinOps Framework
Provider-Managed Azure Subscriptions: Cost Control and Commitment Clarity
As a Microsoft Cloud Solution Architect supporting enterprise customers, I occasionally encounter a specific scenario where customers with an Enterprise Agreement (EA) or Microsoft Customer Agreement (MCA-E) allow a service provider (SP) to manage one or more of their Azure subscriptions via the SP's tenant. This setup has notable implications for cost and commitment management, which I'll explore in this article. Recommended prerequisite reading: Microsoft Cost Management: Billing & Trust Relationships Explained

Scenario Overview
A customer signs a contract with a service provider to outsource the management of certain resources. The customer retains full control over resource pricing and expects the usage of these resources to contribute towards their Microsoft Azure Consumption Commitment (MACC). To achieve this, the customer associates one or more Azure subscriptions with a Microsoft Entra ID tenant owned and managed by the SP. In our example, this is "Subscription B." The SP gains full RBAC access to the subscription and its resources, while the billing relationship remains tied to the customer's billing account (EA) or billing profile (MCA-E). Let's look at the implications from both the customer's and the service provider's perspective.

Customer's perspective
Cost & Pricing
All costs in Subscription B that result from resource usage are tied, and therefore billed, to the customer's billing account (EA) or billing profile (MCA-E). The prices applied to that usage are based on the negotiated customer price list associated with the billing account (EA) or billing profile (MCA-E). The Azure resource consumption of Subscription B, plus any eligible Marketplace offer consumption within the subscription, contributes to the customer's MACC. The customer has full cost visibility of Subscription B via Azure Cost Analysis at the billing account/billing profile level.
Commitments (Reservations / Savings Plans)
Shared commitments at the billing account/billing profile level are utilized by matching resources in Subscription B. Commitments scoped to Subscription B or lower can only be purchased by the customer if the customer has RBAC rights on the subscription and the global billing policy allows purchases by subscription owners / reservation purchasers.

Service provider's perspective
Cost & Pricing
The service provider is responsible for managing Subscription B's resources and the associated costs. Subscription B's actual and amortized cost view is limited for the service provider, as they only have access at the subscription level. The service provider has no direct access to the customer's prices (Price Sheet) or invoice information.
Commitments (Reservations / Savings Plans)
The service provider can purchase commitments scoped to Subscription B or lower (resource group) if the customer's global billing policy allows purchases by subscription owners / reservation purchasers. The associated costs of the commitment are attributed to the customer's billing account/profile. Shared or management-group-scoped commitments purchased by the service provider against their own billing account / billing profile do not apply to Subscription B.

Key takeaways
Decoupled Ownership: Customers can separate subscription management from billing ownership, enabling flexible operational models.
Cost Control: Customers retain full visibility and control over pricing, cost allocation, and commitment utilisation—even when subscriptions are managed by a service provider.
Governance and Policy Alignment: Successful implementation depends on clear billing policies and RBAC configurations that align with both customer and provider responsibilities.

News and updates from FinOps X 2024: How Microsoft is empowering organizations
Last year, I shared a broad set of updates that showcased how Microsoft is embracing FinOps practitioners through education, product improvements, and innovative solutions that help organizations achieve more with AI-powered experiences like Copilot and Microsoft Fabric. Whether you're an engineer working in the Azure portal or part of a business or finance team collaborating in Microsoft 365 or analyzing data in Power BI, Microsoft Cloud has the tools you need to accelerate business value for your cloud investments.

What's new in FinOps toolkit 0.4 – July 2024
In July, the FinOps toolkit 0.4 added support for FOCUS 1.0, updated tools and resources to align with the FinOps Framework 2024 updates, introduced a new tool for cloud optimization recommendations called Azure Optimization Engine, and more!

What's new in FinOps toolkit 0.10 – April 2025
In April, the FinOps toolkit 0.10 introduced support for Microsoft Fabric Real-Time Intelligence in FinOps hubs, support for Azure Gov and Azure China in both FinOps hubs and Power BI reports, FinOps Framework 2025 updates, and more!

Microsoft at FinOps X 2025: Embracing FinOps in the era of AI
It's that time of year again! June 2nd is quickly approaching, and Microsoft is set to make a bold impact at FinOps X 2025, continuing our mission to empower organizations with financial discipline, cost optimization, and AI-driven efficiency in the cloud. As artificial intelligence reshapes the industry, we're excited to showcase how Microsoft's FinOps tools and services can help you boost productivity, streamline automation, and maximize cloud investments with innovative AI-powered capabilities. Our presence at FinOps X goes beyond standard conference participation; it's an immersive experience designed to engage, inspire, and equip FinOps practitioners with actionable strategies. Here's what to expect:

Evening by the bay: A Microsoft experience
📍 Roy's Restaurant 📅 June 2 🕖 7:00 PM – 9:00 PM
Back by popular demand, we hope you can join us where networking meets relaxation at Microsoft's signature welcome reception, an unforgettable evening at Roy's Restaurant. Enjoy handcrafted cocktails, live music, and gourmet food while mingling with FinOps leaders, practitioners, and Microsoft experts.

Immersive exhibit: FinOps in the era of AI
📍 Microsoft stage room 📅 June 3-4 🕘 9:30 AM – 5:00 PM
On June 3rd and 4th, we encourage you to step into the Microsoft stage room, where we're debuting an interactive exhibit that explores the AI adoption journey. Participants will navigate through key phases, from foundations to design, management, and best practices, all rooted in the FinOps Framework. Gain insights into how Microsoft solutions can help drive cost efficiency and deliver value throughout your AI transformation.

Microsoft sessions: Insights and innovation
Microsoft will host 10 dedicated sessions throughout the conference, offering a mix of breakout sessions, expert Q&As, and, for the first time, a partner-focused discussion. Our sessions will explore a wide range of topics including:
Best practices for maximizing cloud investments
FinOps for AI and AI for FinOps
FinOps strategies for SaaS and sustainability
Governance, forecasting, benchmarking, and rate optimization
Deep dives into Microsoft tools: Microsoft Cost Management, Microsoft Fabric, Microsoft Copilot in Azure, GitHub Copilot, Azure AI Foundry, Azure Marketplace, pricing offers, Azure Carbon Optimization, and the FinOps toolkit
Each session is designed to provide proven frameworks, real-world insights, and practical guidance to help organizations integrate financial discipline into their cloud and AI projects.

Join us at FinOps X 2025
Ready to dive into the latest FinOps strategies and AI-driven efficiencies? Register today to secure your spot at Microsoft's sessions and experiences! Space is limited, so don't miss this opportunity to connect, learn, and innovate with peers and experts.
👉 Explore and register for Microsoft sessions (https://aka.ms/finops/x)
Stay tuned for more updates as we gear up for an exciting week of learning, networking, and innovation with the FinOps community. We hope you can join us in San Diego from June 2-5, where the future of FinOps meets the limitless possibilities of AI. Microsoft is all in, and we can't wait to see you there!

News and updates from FinOps X 2025: Transforming FinOps in the era of AI
AI is central to nearly every corner of business and technology. It's no surprise that FinOps X 2025 was loaded with news about how AI is changing everything. Learn about how FinOps tools and solutions are evolving in the era of AI!

Getting started with FinOps hubs: Multicloud cost reporting with Azure and Google Cloud
Microsoft's FinOps hubs offer a powerful and trusted foundation for managing, analyzing, and optimizing cloud costs. Built on Azure Data Explorer (ADX) and Azure Data Lake Storage (ADLS), FinOps hubs provide a scalable platform to unify billing data across providers leveraging FOCUS datasets. In this post, we'll take a hands-on technical walkthrough of how to connect Microsoft FinOps hubs to Google Cloud, enabling you to export and analyze Google Cloud billing data alongside Azure billing data directly within your FinOps hub instance. This walkthrough focuses on using only storage in the hub for accessing data and is designed to get you started and help you understand multicloud connections and reporting from FinOps hubs. For large datasets, Azure Data Explorer or Microsoft Fabric is recommended.

With the introduction of the FinOps Open Cost and Usage Specification (FOCUS), normalizing billing data and reporting it through a single-pane-of-glass experience has never been easier. As a long-time contributor to the FOCUS working group, I've spent the past few years helping define standards to make multicloud reporting simpler and more actionable. FOCUS enables side-by-side views of cost and usage data—such as compute hours across cloud providers—helping organizations make better decisions around workload placement and right-sizing. Before FOCUS, this kind of unified analysis was incredibly challenging due to the differing data models used by each provider.

To complete this technical walkthrough, you'll need access to both Azure and Google Cloud—and approximately 2 to 3 hours of your time.

Getting started: The basics you'll need
Before diving in, ensure you have the following prerequisites in place:
✅ Access to your Google Cloud billing account.
✅ A Google Cloud project with BigQuery and other APIs enabled and linked to your billing account.
✅ All required IAM roles and permissions for working with BigQuery, Cloud Functions, Storage, and billing data (detailed below).
✅ Detailed billing export and pricing export configured in Google Cloud.
✅ An existing deployment of FinOps hubs.

What you'll be building
In this walkthrough, you'll set up Google billing exports and an Azure Data Factory pipeline to fetch the exported data, convert it to Parquet, and ingest it into your FinOps hub storage, following the FOCUS 1.0 standard for normalization. Along the way, you will:
Configure a BigQuery view to convert detailed billing exports into the FOCUS 1.0 schema.
Create a metadata table in BigQuery to track export timestamps, enabling incremental data exports that reduce file export sizes and avoid duplicate data.
Set up a GCS bucket to export FOCUS-formatted data in CSV format.
Deploy a Google Cloud Function that performs incremental exports from BigQuery to GCS.
Create a Google Cloud Scheduler job to automate the export of your billing data to Google Cloud Storage.
Build a pipeline in Azure Data Factory to: fetch the CSV billing exports from Google Cloud Storage, convert them to Parquet, and ingest the transformed data into your FinOps hub ingestion container.

Let's get started. We're going to start in Google Cloud before we jump back into Azure to set up the Data Factory pipeline.

Enabling FOCUS exports in Google
Prerequisite: Enable detailed billing & pricing exports
⚠️ You cannot complete this guide without billing data enabled in BigQuery. If you have not enabled detailed billing exports and pricing exports, do this now and come back to the walkthrough in 24 hours.
Steps to enable detailed billing exports and pricing exports:
Navigate to Billing > Billing Export.
Enable Detailed Cost Export to BigQuery.
Select the billing project and a dataset - if you have not created a project for billing, do so now.
Enable Pricing Export to the same dataset.
🔊 "This enables daily cost and usage data to be streamed to BigQuery for granular analysis."
Detailed guidance and information on billing exports in Google can be found here: Google Billing Exports.

What you'll create:
Service category table
Metadata table
FOCUS view - you must have detailed billing export and pricing exports enabled to create this view
Cloud Function
Cloud Scheduler job

What you'll need
An active GCP billing account (this is your payment method).
A GCP project (new or existing) linked to your GCP billing account.
All required APIs enabled.
All required IAM roles enabled.

Enable required APIs:
BigQuery API
Cloud Billing API
Cloud Build API
Cloud Scheduler
Cloud Functions
Cloud Storage
Cloud Run Admin
Pub/Sub (optional, for event triggers)

Required IAM roles
Assign the following roles to your user account:
roles/billing.viewer or billing.admin
roles/bigquery.dataEditor
roles/storage.admin
roles/cloudfunctions.admin
roles/cloudscheduler.admin
roles/iam.serviceAccountTokenCreator
roles/cloudfunctions.invoker
roles/run.admin (if using Cloud Run)
roles/project.editor

Create a service account (e.g., svc-bq-focus)
Assign the following roles to your service account:
roles/bigquery.dataOwner
roles/storage.objectAdmin
roles/cloudfunctions.invoker
roles/cloudscheduler.admin
roles/serviceAccount.tokenCreator
roles/run.admin
Your default compute service account will also require access to run Cloud Build services. Ensure you apply the Cloud Build role to your default compute service account in your project; it may look like this: projectID-compute@developer.gserviceaccount.com → roles/cloudbuild.builds.builder

Create FOCUS data structure and view
This section will create two new tables in BigQuery: one for service category mappings and one for metadata related to export times. These are important to get in place before we create the FOCUS view used to extract billing data in FOCUS format.

Create a service category mapping table
I removed service category from the original Google FOCUS view to reduce the size of the SQL query; therefore, to ensure we map service category properly, I created a new service category table and joined it to the FOCUS view. In this step we will create a new table using open data to map GCP services to service categories. Doing this helps reduce the size of the SQL query and simplifies management of service category mapping. Leveraging open-source data, we can easily update service category mappings if they ever change or new categories are added, without impacting the FOCUS view query.
Process:
Download the latest service_category_mapping.csv from the FOCUS Converter repo
Go to BigQuery > Your Dataset > Create Table
Upload the CSV
Table name: service_category
Schema: Auto-detect

Create a metadata table
This table will be used to track the last time detailed billing data was added to your detailed billing export. We use this to enable incremental exports of billing data through the FOCUS view, ensuring we only export the latest set of data rather than everything every time. A programmatic alternative for creating this table is sketched below, followed by the console steps.
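If you prefer to create the metadata table from code rather than the console, here is a minimal sketch using the google-cloud-bigquery client. The project and dataset IDs are placeholders (assumptions) and the schema matches the two fields described above; otherwise, simply follow the console steps that come next.

```python
# Minimal sketch: create the metadata table programmatically.
# Assumptions: the project and dataset IDs below are placeholders; adjust them
# to your environment. The schema mirrors the two fields described above.
from google.cloud import bigquery

client = bigquery.Client(project="your-project-id")

table_id = "your-project-id.your_billing_dataset.metadata_focus_export"
schema = [
    bigquery.SchemaField("last_export_time", "TIMESTAMP"),
    bigquery.SchemaField("export_message", "STRING"),
]

# Creates the table the export function reads and updates later in this guide.
table = client.create_table(bigquery.Table(table_id, schema=schema))
print(f"Created {table.full_table_id}")
```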
Process: Go to BigQuery > Create Table Table name: metadata_focus_export Schema: Field Name : Format last_export_time: TIMESTAMP export_message: STRING Enter your field name and then choose field format, do not add : 🔊 “Ensures each export only includes new data since the last timestamp.” Create the FOCUS-aligned view Creating a view in BigQuery allows us to un-nest the detailed billing export tables into the format of FOCUS 1.0*. To use Power BI we must un-nest the tables so this step is required. It also ensures we map the right columns in Google Cloud detailed billing export to the right columns in FOCUS. ⚠️ The FOCUS SQL code provided in this walk-through has been altered from the original Google provided code. I believe this new code is better formatted for FOCUS 1.0 than the original however it does contain some nuances that suit my personal views. Please evaluate this carefully before using this in a production system and adjust the code accordingly to your needs. Steps: Navigate to BigQuery > New Query Paste and update the FOCUS view SQL query which is provided below Replace: yourexporttable with detailed export dataset ID and table name that will look like " yourpricingexporttable with pricing export and table name your_billing_dataset with your detailed export dataset ID and table name FOCUS SQL Query: WITH usage_export AS ( SELECT *, ( SELECT AS STRUCT type, id, full_name, amount, name FROM UNNEST(credits) LIMIT 1 ) AS cud, FROM "Your-Detailed-Billing-Export-ID" -- replace with your detailed usage export table path ), prices AS ( SELECT export_time, sku.id AS sku_id, sku.description AS sku_description, service.id AS service_id, service.description AS service_description, tier.* FROM "your_pricing_export_id", UNNEST(list_price.tiered_rates) AS tier ) SELECT "111111-222222-333333" AS BillingAccountId, "Your Company Name" AS BillingAccountName, COALESCE((SELECT SUM(x.amount) FROM UNNEST(usage_export.credits) x),0) + cost as BilledCost, usage_export.currency AS BillingCurrency, DATETIME(PARSE_DATE("%Y%m", invoice.month)) AS BillingPeriodStart, DATETIME(DATE_SUB(DATE_ADD(PARSE_DATE("%Y%m", invoice.month), INTERVAL 1 MONTH), INTERVAL 1 DAY)) AS BillingPeriodEnd, CASE WHEN usage_export.adjustment_info.type IS NOT NULL and usage_export.adjustment_info.type !='' THEN 'Adjustment' WHEN usage_export.cud.type = 'PROMOTION' AND usage_export.cost_type = 'regular' AND usage_export.cud.id IS NOT NULL THEN 'Credit' WHEN usage_export.sku.description LIKE '%Commitment - 3 years - dollar based VCF%' or usage_export.sku.description LIKE '%Prepay Commitment%' THEN 'Purchase' WHEN usage_export.cud.id IS NOT NULL AND usage_export.cud.id != '' THEN 'Credit' WHEN usage_export.cost_type = 'regular' THEN 'Usage' WHEN usage_export.cost_type = 'tax' THEN 'Tax' WHEN usage_export.cost_type = 'adjustment' THEN 'Adjustment' WHEN usage_export.cost_type = 'rounding_error' THEN 'Adjustment' ELSE usage_export.cost_type END AS ChargeCategory, IF(COALESCE( usage_export.adjustment_info.id, usage_export.adjustment_info.description, usage_export.adjustment_info.type, usage_export.adjustment_info.mode) IS NOT NULL, "correction", NULL) AS ChargeClass, CASE WHEN usage_export.adjustment_info.type IS NOT NULL AND usage_export.adjustment_info.type != '' THEN usage_export.adjustment_info.type ELSE usage_export.sku.description END AS ChargeDescription, CAST(usage_export.usage_start_time AS DATETIME) AS ChargePeriodStart, CAST(usage_export.usage_end_time AS DATETIME) AS ChargePeriodEnd, CASE usage_export.cud.type WHEN 
"COMMITTED_USAGE_DISCOUNT_DOLLAR_BASE" THEN "Spend" WHEN "COMMITTED_USAGE_DISCOUNT" THEN "Usage" END AS CommitmentDiscountCategory, usage_export.cud.id AS CommitmentDiscountId, COALESCE (usage_export.cud.full_name, usage_export.cud.name) AS CommitmentDiscountName, usage_export.cud.type AS CommitmentDiscountType, CAST(usage_export.usage.amount_in_pricing_units AS numeric) AS ConsumedQuantity, usage_export.usage.pricing_unit AS ConsumedUnit, -- review CAST( CASE WHEN usage_export.cost_type = "regular" THEN usage_export.price.effective_price * usage_export.price.pricing_unit_quantity ELSE 0 END AS NUMERIC ) AS ContractedCost, -- CAST(usage_export.price.effective_price AS numeric) AS ContractedUnitPrice, usage_export.seller_name AS InvoiceIssuerName, COALESCE((SELECT SUM(x.amount) FROM UNNEST(usage_export.credits) x),0) + cost as EffectiveCost, CAST(usage_export.cost_at_list AS numeric) AS ListCost, prices.account_currency_amount AS ListUnitPrice, IF( usage_export.cost_type = "regular", IF( LOWER(usage_export.sku.description) LIKE "commitment%" OR usage_export.cud IS NOT NULL, "committed", "standard"), null) AS PricingCategory, IF(usage_export.cost_type = 'regular', usage_export.price.pricing_unit_quantity, NULL) AS PricingQuantity, IF(usage_export.cost_type = 'regular', usage_export.price.unit, NULL) AS PricingUnit, 'Google'AS ProviderName, usage_export.transaction_type AS PublisherName, usage_export.location.region AS RegionId, usage_export.location.region AS RegionName, usage_export.service.id AS ResourceId, REGEXP_EXTRACT (usage_export.resource.global_name, r'[^/]+$') AS ResourceName, usage_export.sku.description AS ResourceType, COALESCE(servicemapping.string_field_1, 'Other') AS ServiceCategory, usage_export.service.description AS ServiceName, usage_export.sku.id AS SkuId, CONCAT("SKU ID:", usage_export.sku.id, ", Price Tier Start Amount: ", price.tier_start_amount) AS SkuPriceId, usage_export.project.id AS SubAccountId, usage_export.project.name AS SubAccountName, (SELECT CONCAT('{', STRING_AGG(FORMAT('%s:%s', kv.key, kv.value), ', '), '}') FROM ( SELECT key, value FROM UNNEST(usage_export.project.labels) UNION ALL SELECT key, value FROM UNNEST(usage_export.tags) UNION ALL SELECT key, value FROM UNNEST(usage_export.labels) ) AS kv) AS Tags, FORMAT_DATE('%B', PARSE_DATE('%Y%m', invoice.month)) AS Month, usage_export.project.name AS x_ResourceGroupName, CAST(usage_export.export_time AS TIMESTAMP) AS export_time, FROM usage_export LEFT JOIN "Your-Service-Category-Id".ServiceCategory AS servicemapping ON usage_export.service.description = servicemapping.string_field_0 LEFT JOIN prices ON usage_export.sku.id = prices.sku_id AND usage_export.price.tier_start_amount = prices.start_usage_amount AND DATE(usage_export.export_time) = DATE(prices.export_time); 🔊 "This creates a FOCUS-aligned view of your billing data using FOCUS 1.0 specification, this view does not 100% conform to FOCUS 1.0. Create GCS Bucket for CSV Exports Your GCS bucket is where you will place your incremental exports to be exported to Azure. Once your data is exported to Azure you may elect to delete the files in the bucket, the metaata table keeps a record of the last export time. 
Steps: Go to Cloud Storage > Create Bucket Name: focus-cost-export (or any name you would like) Region: Match your dataset region Storage Class: Nearline (cheaper to use Earline but standard will also work just fine) Enable Interoperability settings > create access/secret key The access key and secret are tied to your user account, if you want to be able to use this with multiple people, create an access key for your service account - recommended for Prod, for this guide purpose, an access key linked to your account is fine. Save the access key and secret to a secure location to use later as part of the ADF pipeline setup. 🔊 “This bucket will store daily CSVs. Consider enabling lifecycle cleanup policies.” Create a Cloud Function for incremental export A cloud function is used here to enable incremental exports of your billing data in FOCUS format on a regular schedule. At present there is no known supported on demand export service from Google, so we came up with this little workaround. The function is designed to evaluate your billing data for last export time that is equals to or greater than the last time the function ran. To do this we look at the export_time column and the metadata table for the last time we ran this. This ensure we only export the most recent billing data which aids in reducing data export costs to Azure. This process is done through the GCP GUI using an inline editor to create the cloud function in the cloud run service. Steps: Go to Cloud Run > Write Function Select > Use Inline editor to create a function Service Name: daily_focus_export Region, the same region as your dataset - in our demo case us-central1 Use settings: Runtime: Python 3.11 (you cannot use anything later than 3.11) Trigger: Optional Authentication: Require Authentication Billing: Request based Service Scaling: Auto-scaling set to 0 Ingress: All Containers: leave all settings as set Volumes: Leave all settings as set Networking: Leave all settings as set Security: Choose the service account you created earlier Save Create main.py file and requirements.txt files through the inline editor For requirements.txt copy and paste the below: functions-framework==3.* google-cloud-bigquery Google-cloud-storage For main.py your function entry point is: export_focus_data Paste the code below into your main.py inline editor window import logging import time import json from google.cloud import bigquery, storage logging.basicConfig(level=logging.INFO) def export_focus_data(request): bq_client = bigquery.Client() storage_client = storage.Client() project_id = "YOUR-ProjectID" dataset = "YOUR-Detailed-Export-Dataset" view = "YOUR-FOCUS-View-Name" metadata_table = "Your-Metadata-Table" job_name = "The name you want to call the job for the export" bucket_base = "gs://<your bucketname>/<your foldername>" bucket_name = "Your Bucket Name" metadata_file_path = "Your-Bucket-name/export_metadata.json" try: logging.info("🔁 Starting incremental export based on export_time...") # Step 1: Get last export_time from metadata metadata_query = f""" SELECT last_export_time FROM `{project_id}.{dataset}.{metadata_table}` WHERE job_name = '{job_name}' LIMIT 1 """ metadata_result = list(bq_client.query(metadata_query).result()) if not metadata_result: return "No metadata row found. 
Please seed export_metadata.", 400 last_export_time = metadata_result[0].last_export_time logging.info(f"📌 Last export_time from metadata: {last_export_time}") # Step 2: Check for new data check_data_query = f""" SELECT COUNT(*) AS row_count FROM `{project_id}.{dataset}.{view}` WHERE export_time >= TIMESTAMP('{last_export_time}') """ row_count = list(bq_client.query(check_data_query).result())[0].row_count if row_count == 0: logging.info("✅ No new data to export.") return "No new data to export.", 204 # Step 3: Get distinct export months folder_query = f""" SELECT DISTINCT FORMAT_DATETIME('%Y%m', BillingPeriodStart) AS export_month FROM `{project_id}.{dataset}.{view}` WHERE export_time >= TIMESTAMP('{last_export_time}') AND BillingPeriodStart IS NOT NULL """ export_months = [row.export_month for row in bq_client.query(folder_query).result()] logging.info(f"📁 Exporting rows from months: {export_months}") # Step 4: Export data for each month for export_month in export_months: export_path = f"{bucket_base}/{export_month}/export_{int(time.time())}_*.csv" export_query = f""" EXPORT DATA OPTIONS( uri='{export_path}', format='CSV', overwrite=true, header=true, field_delimiter=';' ) AS SELECT * FROM `{project_id}.{dataset}.{view}` WHERE export_time >= TIMESTAMP('{last_export_time}') AND FORMAT_DATETIME('%Y%m', BillingPeriodStart) = '{export_month}' """ bq_client.query(export_query).result() # Step 5: Get latest export_time from exported rows max_export_time_query = f""" SELECT MAX(export_time) AS new_export_time FROM `{project_id}.{dataset}.{view}` WHERE export_time >= TIMESTAMP('{last_export_time}') """ new_export_time = list(bq_client.query(max_export_time_query).result())[0].new_export_time # Step 6: Update BigQuery metadata table update_query = f""" MERGE `{project_id}.{dataset}.{metadata_table}` T USING ( SELECT '{job_name}' AS job_name, TIMESTAMP('{new_export_time}') AS new_export_time ) S ON T.job_name = S.job_name WHEN MATCHED THEN UPDATE SET last_export_time = S.new_export_time WHEN NOT MATCHED THEN INSERT (job_name, last_export_time) VALUES (S.job_name, S.new_export_time) """ bq_client.query(update_query).result() # Step 7: Write metadata JSON to GCS blob = storage_client.bucket(bucket_name).blob(metadata_file_path) blob.upload_from_string( json.dumps({"last_export_time": new_export_time.isoformat()}), content_type="application/json" ) logging.info(f"📄 Metadata file written to GCS: gs://{bucket_name}/{metadata_file_path}") return f"✅ Export complete. Metadata updated to {new_export_time}", 200 except Exception as e: logging.exception("❌ Incremental export failed:") return f"Function failed: {str(e)}", 500 Before saving, update variables like project_id, bucket_name, etc. 🔊 “This function exports new billing data based on the last timestamp in your metadata table.” Test the Cloud Function Deploy and click Test Function Ensure a seed export_metadata.json file is uploaded to your GCS bucket (if you have not uploaded a seed export file the function will not run. Example file is below { "last_export_time": "2024-12-31T23:59:59.000000+00:00" } Confirm new CSVs appear in your target folder Automate with Cloud Scheduler This step will set up an automated schedule to run your cloud function on a daily or hourly pattern, you may adjust the frequency to your desired schedule, this demo uses the same time each day at 11pm. 
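Before wiring up the schedule, make sure the seed export_metadata.json mentioned in the test step above is in place. If you prefer to upload it from code, here is a minimal sketch using the same google-cloud-storage client the function already imports; the bucket name and object path are placeholders and must match the bucket_name and metadata_file_path values configured in main.py. The Cloud Scheduler steps follow.

```python
# Minimal sketch: seed the export metadata file the Cloud Function expects.
# Assumptions: bucket name and object path are placeholders; they must match
# the bucket_name and metadata_file_path values configured in main.py.
import json
from google.cloud import storage

client = storage.Client(project="your-project-id")
blob = client.bucket("focus-cost-export").blob("metadata/export_metadata.json")

seed = {"last_export_time": "2024-12-31T23:59:59.000000+00:00"}
blob.upload_from_string(json.dumps(seed), content_type="application/json")
print("Seed metadata uploaded.")
```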
Steps:
Go to Cloud Scheduler > Create Job
Region: same region as your function - in this demo, us-central1
Name: daily-focus-export
Frequency: 0 */6 * * * (every 6 hours) or Frequency: 0 23 * * * (daily at 11 PM)
Time zone: your desired time zone
Target: HTTP
Auth: OIDC Token → Use service account
Click CREATE
🔊 "This step automates the entire pipeline to run daily and keep downstream platforms updated with the latest billing data."

Wrapping up Google setup
Wow—there was a lot to get through! But now that you've successfully enabled FOCUS-formatted exports in Google Cloud, you're ready for the next step: connecting Azure to your Google Cloud Storage bucket and ingesting the data into your FinOps hub instance. This is where everything comes together—enabling unified, multi-cloud reporting in your hub across both Azure and Google Cloud billing data. Let's dive into building the Data Factory pipeline and the associated datasets needed to fetch, transform, and load your Google billing exports.

Connecting Azure to Google billing data
With your Google Cloud billing data now exporting in FOCUS format, the next step is to bring it into your Azure environment for centralized FinOps reporting. Using Data Factory, we'll build a pipeline to fetch the CSVs from your Google Cloud Storage bucket, convert them to Parquet, and land them in your FinOps hub storage account.

Azure access and prerequisites
Access to the Azure portal
Access to the resource group your hub has been deployed to
Contributor access to your resource group
At a minimum, the Storage Account Contributor and Storage Blob Data Owner roles
An existing deployment of the Microsoft FinOps Hub Toolkit
Admin rights on Azure Data Factory (or at least the Contributor role on ADF)

Services and tools required
Make sure the following Azure resources are in place:
Azure Data Factory instance
A linked Azure Storage account (this is where the FinOps hub is expecting data)
(Optional) Azure Key Vault for secret management

Pipeline overview
We'll be creating the following components in Azure Data Factory:
GCS Linked Service – connects ADF to your Google Cloud Storage bucket
Azure Blob Linked Service – connects ADF to your hub's ingestion container
Source Dataset – reads CSV files from GCS
Sink Dataset – writes Parquet files to Azure Data Lake (or Blob) in the hub's expected format
Pipeline Logic – orchestrates the copy activity, format conversion, and metadata updates

Secrets and authentication
To connect securely from ADF to GCS, you will need:
The access key and secret from GCS Interoperability settings (created earlier)
Store them securely in Azure Key Vault or in ADF pipeline parameters (if short-lived)
💡 Best Practice Tip: Use Azure Key Vault to securely store GCS access credentials and reference them from ADF linked services. This improves security and manageability over hardcoding values in JSON. A sketch of seeding these secrets from code follows below.
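As an optional illustration of that tip, here is a minimal sketch that stores the GCS interoperability key and secret in Key Vault using the azure-keyvault-secrets SDK. The vault URL and secret names are placeholders, and the values shown are assumptions; how you reference the secrets from the linked service depends on your ADF configuration.

```python
# Minimal sketch: store GCS interoperability credentials in Azure Key Vault.
# Assumptions: the vault URL and secret names are placeholders; the values come
# from the Interoperability settings you created on the GCS bucket earlier.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

vault_url = "https://your-keyvault-name.vault.azure.net"
client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())

client.set_secret("gcs-access-key", "<your GCS interoperability access key>")
client.set_secret("gcs-secret-key", "<your GCS interoperability secret>")
print("GCS credentials stored in Key Vault.")
```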
Create Google Cloud Storage billing data dataset
Now that we're in Azure, it's time to connect to your Google Cloud Storage bucket and begin building your Azure Data Factory pipeline.

Steps
Launch Azure Data Factory
Log in to the Azure portal
Navigate to your deployed Azure Data Factory instance
Click "Author" from the left-hand menu to open the pipeline editor
In the ADF Author pane, click the "+" (plus) icon next to Datasets
Select "New dataset"
Choose Google Cloud Storage as the data source
Select CSV as the file format
Set the Dataset Name, for example: gcs_focus_export_dataset
Click "+ New" next to the Linked Service dropdown
Enter a name like: GCS-Billing-LinkedService
Under Authentication Type, choose: Access Key (for Interoperability access key) or Azure Key Vault (recommended for secure credential storage)
Fill in the following fields:
Access Key: your GCS interoperability key
Secret: your GCS interoperability secret
Bucket Name: focus-cost-export
(Optional) Point to a folder path like: focus-billing-data/
Click Test Connection to validate access
Click Create

Adjust dataset properties
Now that the dataset has been created, make the following modifications to ensure proper parsing of the CSVs:
Column delimiter: ; (semicolon)
Escape character: " (double quote)
You can find these under the "Connection" and "Schema" tabs of the dataset editor. If your connection fails, you may need to enable public access on your GCS bucket or check your firewall restrictions from Azure to the internet!

Create a Google Cloud Storage metadata dataset
To support incremental loading, we need a dedicated dataset to read the export_metadata.json file that your Cloud Function writes to Google Cloud Storage. This dataset will be used by the Lookup activity in your pipeline to get the latest export timestamp from Google.
Steps:
In the Author pane of ADF, click "+" > Dataset > New dataset
Select Google Cloud Storage as the source
Choose JSON as the format
Click Continue
Configure the dataset:
Name: gcs_export_metadata_dataset
Linked Service: use your existing GCS linked service
File path: e.g., focus-cost-export/metadata/export_metadata.json
Import schema: set to From connection/store, or manually define if needed
File pattern: set to single file (not folder)

Create the sink dataset – "ingestion_gcp"
Now that we've connected to Google Cloud Storage and defined the source dataset, it's time to create the sink dataset. This will land your transformed billing data in Parquet format into your Azure FinOps hub's ingestion container.
Steps:
In Azure Data Factory, go to the Author section
Under Datasets, click the "+" (plus) icon and select "New dataset"
Choose Azure Data Lake Storage Gen2 (or Blob Storage, depending on your hub setup)
Select Parquet as the file format
Click Continue
Configure dataset properties:
Name: ingestion_gcp
Linked Service: select your existing linked service that connects to your FinOps hub's storage account (this will have been created when you deployed the hub)
File path: point to the container and folder path where you want to store the ingested files (e.g., ingestion-gcp/cost/billing-provider/gcp/focus/)
Optional settings: Compression type: snappy
Once configured, click Publish All again to save your new sink dataset.
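If you want to sanity-check the CSV-to-Parquet conversion the Copy activity will perform, a small local sketch using pandas and pyarrow is below. The file names are placeholders; it simply reads a semicolon-delimited FOCUS export and writes snappy-compressed Parquet, mirroring the dataset settings above.

```python
# Minimal sketch: locally mirror the CSV -> Parquet conversion ADF will perform.
# Assumptions: the file names are placeholders; requires pandas and pyarrow.
import pandas as pd

# FOCUS exports from the Cloud Function use ';' as the field delimiter.
df = pd.read_csv("focus_export_sample.csv", sep=";")

# Write snappy-compressed Parquet, matching the sink dataset's compression setting.
df.to_parquet("focus_export_sample.snappy.parquet", engine="pyarrow",
              compression="snappy", index=False)

print(df.shape, "rows/columns converted")
```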
Create the sink dataset – "adls_last_import_metadata"
The adls_last_import_metadata dataset (sink dataset) is the location to which you copy the export-time JSON file from Google to Azure. The pipeline uses this location to check the last time an import ran, by reading the JSON file you copied from Google and comparing it to the new JSON file from Google.
Steps:
In Azure Data Factory, go to the Author section
Under Datasets, click the "+" (plus) icon and select "New dataset"
Choose Azure Data Lake Storage Gen2 (or Blob Storage, depending on your hub setup)
Select JSON as the file format
Click Continue
Configure dataset properties:
Name: adls_last_import_metadata
Linked Service: select your existing linked service that connects to your FinOps hub's storage account (this will have been created when you deployed the hub)
File path: point to the container and folder path where you want to store the ingested files (e.g., ingestion/cost/metadata)

Build the ADF pipeline for incremental import
With your datasets created, the final step is building the pipeline logic that orchestrates the data flow. The goal is to only import newly exported billing data from Google, avoiding duplicates by comparing timestamps from both clouds. In this pipeline, we'll:
Fetch Google's export timestamp from the JSON metadata file
Fetch the last successful import time from the hub's metadata file in Azure
Compare timestamps to determine whether there is new data to ingest
If new data exists: run the Copy activity to fetch GCS CSVs, convert to Parquet, and write to the ingestion_gcp container
Write an updated metadata file in the hub, with the latest import timestamp

Pipeline components:
Lookup - GCS Metadata – reads the last export time from Google's metadata JSON in GCS
Lookup - Hub Metadata – reads the last import time from Azure's metadata JSON in ADLS
If Condition – compares timestamps to decide whether to continue with the copy
Copy Data – transfers files from GCS (CSV) to ADLS (Parquet)
Set Variable – captures the latest import timestamp
Web/Copy Activity – writes the updated import timestamp JSON file to the ingestion_gcp container

Figure: what your pipeline will look like.

Step-by-step
Create two lookup activities: GCS Metadata and Hub Metadata.

Lookup1 – GCS Metadata
Add an activity called Lookup to your new pipeline
Select the Source Dataset: gcs_export_metadata_dataset (or whatever you named it earlier)
This lookup reads the export_metadata.json file created by your Cloud Function in GCS for the last export time available.
Figure: configuration view of the Lookup activity for the GCS metadata file.

Lookup2 – Hub Metadata
Add an activity called Lookup and name it Hub Metadata
Select the source dataset: adls_last_import_metadata
This lookup reads the last import time from the hub metadata file to compare it to the last export time from GCS.

Figure: configuration view of the Hub Metadata lookup activity.

Add conditional logic
In this step we will add the condition logic.
Add an activity called If Condition
Add the expression below to the condition: go to the Expression tab and paste the following into Dynamic Content:
@greater(activity('Lookup1').output.last_export_time, activity('Lookup2').output.last_import_time)

Figure: configuration view of the If Condition activity.

Next, add a Copy activity (If Condition = True)
Next to the True condition, select Edit and add a Copy Data activity. Configure the activity with the following details:
Source Dataset: gcs_focus_export_dataset (CSV from GCS)
Sink Dataset: ingestion_gcp (Parquet to ADLS)
Merge Files: enabled (reduces file count)
Filter Expression: @activity('Lookup1').output.firstRow.last_export_time
Ensure you add the filter expression for "filter by last modified". This is important: if you do not add the filter-by-last-modified expression, your pipeline will not function properly.

Finally, we create an activity to update the metadata file in the hub.
Add another copy activity to your If Condition and ensure it is linked to the previous copy activity; this ensures it runs after the import activity is completed.
Copy metadata activity settings:
Source: gcs_export_metadata_dataset
Sink: adls_last_import_metadata
Destination Path: ingestion_gcp/metadata/last_import.json
This step ensures the next run of your pipeline uses the updated import time to decide whether new data exists. Your pipeline now ingests only new billing data from Google and records each successful import to prevent duplicate processing.

Wrapping up – A unified FinOps reporting solution
Congratulations — you've just built a fully functional multi-cloud FinOps data pipeline using Microsoft's FinOps Hub Toolkit and Google Cloud billing data, normalized with the FOCUS 1.0 standard. By following this guide, you've:
✅ Enabled FOCUS billing exports in Google Cloud using BigQuery, GCS, and Cloud Functions
✅ Created normalized, FOCUS-aligned views to unify your GCP billing data
✅ Automated metadata tracking to support incremental exports
✅ Built an Azure Data Factory pipeline to fetch, transform, and ingest GCP data into your hub
✅ Established a reliable foundation for centralized, multi-cloud cost reporting
This solution brings side-by-side visibility into Azure and Google Cloud costs, enabling informed decision-making, workload optimization, and true multi-cloud FinOps maturity.

Next steps
🔁 Schedule the ADF pipeline to run daily or hourly using triggers
📈 Build Power BI dashboards or use templates from the Microsoft FinOps toolkit to visualise unified cloud spend
🧠 Extend to AWS by applying the same principles using the AWS FOCUS export and AWS S3 storage

Feedback?
Have questions or want to see more deep dives like this? Let me know — or connect with me if you're working on FinOps, FOCUS, or multi-cloud reporting. This blog is just the beginning of what's possible.

Managing Azure OpenAI costs with the FinOps toolkit and FOCUS: Turning tokens into unit economics
By Robb Dilallo

Introduction
As organizations rapidly adopt generative AI, Azure OpenAI usage is growing—and so are the complexities of managing its costs. Unlike traditional cloud services billed per compute hour or storage GB, Azure OpenAI charges based on token usage. For FinOps practitioners, this introduces a new frontier: understanding AI unit economics and managing costs where the consumed unit is a token. This article explains how to leverage the Microsoft FinOps toolkit and the FinOps Open Cost and Usage Specification (FOCUS) to gain visibility, allocate costs, and calculate unit economics for Azure OpenAI workloads.

Why Azure OpenAI cost management is different
AI services break many traditional cost management assumptions:
Billed by token usage (input + output tokens).
Model choices matter (e.g., GPT-3.5 vs. GPT-4 Turbo vs. GPT-4o).
Prompt engineering impacts cost (longer context = more tokens).
Bursty usage patterns complicate forecasting.
Without proper visibility and unit cost tracking, it's difficult to optimize spend or align costs to business value.

Step 1: Get visibility with the FinOps toolkit
The Microsoft FinOps toolkit provides pre-built modules and patterns for analyzing Azure cost data. Key tools include:
Microsoft Cost Management exports – export daily usage and cost data in a FOCUS-aligned format.
FinOps hubs – an infrastructure-as-code solution to ingest, transform, and serve cost data.
Power BI templates – pre-built reports conformed to FOCUS for easy analysis.
Pro tip: Start by connecting your Microsoft Cost Management exports to a FinOps hub. Then, use the toolkit's Power BI FOCUS templates to begin reporting.
Learn more about the FinOps toolkit

Step 2: Normalize data with FOCUS
The FinOps Open Cost and Usage Specification (FOCUS) standardizes billing data across providers—including Azure OpenAI.
FOCUS column – purpose – Azure Cost Management field:
ServiceName – cloud service (e.g., Azure OpenAI Service) – ServiceName
ConsumedQuantity – number of tokens consumed – Quantity
PricingUnit – unit type, should align to "tokens" – DistinctUnits
BilledCost – actual cost billed – CostInBillingCurrency
ChargeCategory – identifies consumption vs. reservation – ChargeType
ResourceId – links to specific deployments or apps – ResourceId
Tags – maps usage to teams, projects, or environments – Tags
UsageType / Usage Details – further SKU-level detail – Sku Meter Subcategory, Sku Meter Name
Why it matters: Azure's native billing schema can vary across services and time. FOCUS ensures consistency and enables cross-cloud comparisons.
Tip: If you use custom deployment IDs or user metadata, apply them as tags to improve allocation and unit economics.
Review the FOCUS specification

Step 3: Calculate unit economics
Unit cost per token = BilledCost ÷ ConsumedQuantity

Real-world example: Calculating unit cost in Power BI
A recent Power BI report breaks down Azure OpenAI usage by:
SKU Meter Category → e.g., Azure OpenAI
SKU Meter Subcategory → e.g., gpt 4o 0513 Input global Tokens
SKU Meter Name → detailed SKU info (input/output, model version, etc.)
GPT model – usage type – effective cost:
gpt 4o 0513 Input global Tokens – Input – $292.77
gpt 4o 0513 Output global Tokens – Output – $23.40
Unit Cost Formula: Unit Cost = EffectiveCost ÷ ConsumedQuantity
Power BI Measure Example: Unit Cost = SUM(EffectiveCost) / SUM(ConsumedQuantity)
Pro tip: Break out input and output token costs by model version to:
Track which workloads are driving spend.
Benchmark cost per token across GPT models.
Attribute costs back to teams or product features using Tags or ResourceId.
A short pandas sketch of the same calculation follows below.
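For teams who prefer to prototype outside Power BI, here is a hedged pandas sketch of the same unit-cost calculation against a FOCUS export. The file name, the "Azure OpenAI" ServiceName filter value, and the SKU meter column name are assumptions; adjust them to match your own dataset.

```python
# Minimal sketch: cost per token from a FOCUS-aligned export.
# Assumptions: the file name, the "Azure OpenAI" ServiceName value, and the
# SKU meter subcategory column name are placeholders; adjust to your export.
import pandas as pd

df = pd.read_parquet("focus_costs.parquet")

openai = df[df["ServiceName"] == "Azure OpenAI"]

sku_column = "SkuMeterSubcategory"  # rename to match your export's SKU meter column
summary = (
    openai.groupby(sku_column, as_index=False)
    .agg(EffectiveCost=("EffectiveCost", "sum"),
         ConsumedQuantity=("ConsumedQuantity", "sum"))
)

# Unit cost per token = EffectiveCost / ConsumedQuantity
summary["UnitCostPerToken"] = summary["EffectiveCost"] / summary["ConsumedQuantity"]
print(summary.sort_values("EffectiveCost", ascending=False).head())
```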
Power BI tip: Building a GPT cost breakdown matrix
To easily calculate token unit costs by GPT model and usage type, build a Matrix visual in Power BI using this hierarchy:
Rows: SKU Meter Category, SKU Meter Subcategory, SKU Meter Name
Values: EffectiveCost (sum), ConsumedQuantity (sum), Unit Cost (calculated measure)
Unit Cost = SUM('Costs'[EffectiveCost]) / SUM('Costs'[ConsumedQuantity])
Hierarchy example:
Azure OpenAI
├── GPT 4o Input global Tokens
├── GPT 4o Output global Tokens
├── GPT 4.5 Input global Tokens
└── etc.
Figure: Power BI Matrix visual showing Azure OpenAI token usage and costs by SKU Meter Category, Subcategory, and Name. This breakdown enables calculation of unit cost per token across GPT models and usage types, supporting FinOps allocation and unit economics analysis.

What you can see at the token level
Metric – description – data source:
Token Volume – total tokens consumed – Consumed Quantity
Effective Cost – actual billed cost – BilledCost / Cost
Unit Cost per Token – cost divided by token quantity – Effective Unit Price
SKU Category & Subcategory – model, version, and token type (input/output) – Sku Meter Category, Subcategory, Meter Name
Resource Group / Business Unit – logical or organizational grouping – Resource Group, Business Unit
Application – application or workload responsible for usage – Application (tag)
This visibility allows teams to:
Benchmark cost efficiency across GPT models.
Track token costs over time.
Allocate AI costs to business units or features.
Detect usage anomalies and optimize workload design.
Tip: Apply consistent tagging (Cost Center, Application, Environment) to Azure OpenAI resources to enhance allocation and unit economics reporting.

How the FinOps Foundation's AI working group informs this approach
The FinOps for AI overview, developed by the FinOps Foundation's AI working group, highlights unique challenges in managing AI-related cloud costs, including:
Complex cost drivers (tokens, models, compute hours, data transfer).
Cross-functional collaboration between Finance, Engineering, and ML Ops teams.
The importance of tracking AI unit economics to connect spend with value.
By combining the FinOps toolkit, FOCUS-conformed data, and Power BI reporting, practitioners can implement many of the AI working group's recommendations:
Establish token-level unit cost metrics.
Allocate costs to teams, models, and AI features.
Detect cost anomalies specific to AI usage patterns.
Improve forecasting accuracy despite AI workload variability.
Tip: Applying consistent tagging to AI workloads (model version, environment, business unit, and experiment ID) significantly improves cost allocation and reporting maturity.

Step 4: Allocate and Report Costs
With FOCUS + FinOps toolkit:
Allocate costs to teams, projects, or business units using Tags, ResourceId, or custom dimensions.
Showback/chargeback AI usage costs to stakeholders.
Detect anomalies using the toolkit's patterns or integrate with Azure Monitor.
Tagging tip: Add metadata to Azure OpenAI deployments for easier allocation and unit cost reporting.
Example:
tags:
  CostCenter: AI-Research
  Environment: Production
  Feature: Chatbot

Step 5: Iterate Using FinOps Best Practices
FinOps capability – relevance:
Reporting & analytics – visualize token costs and trends
Allocation – assign costs to teams or workloads
Unit economics – track cost per token or business output
Forecasting – predict future AI costs
Anomaly management – identify unexpected usage spikes
Start small (Crawl), expand as you mature (Walk → Run).
Learn about the FinOps Framework

Next steps
Ready to take control of your Azure OpenAI costs?
Deploy the Microsoft FinOps toolkit – start ingesting and analyzing your Azure billing data. Get started
Adopt FOCUS – normalize your cost data for clarity and cross-cloud consistency. Explore FOCUS
Calculate AI unit economics – track token consumption and unit costs using Power BI.
Customize Power BI reports – extend toolkit templates to include token-based unit economics.
Join the conversation – share insights or questions with the FinOps community on TechCommunity or in the FinOps Foundation Slack.
Advance your skills – consider the FinOps Certified FOCUS Analyst certification.

Further reading
Managing the cost of AI: Understanding AI workload cost considerations
Microsoft FinOps toolkit
Learn about FOCUS
Microsoft Cost Management + Billing
FinOps Foundation

Appendix: FOCUS column glossary
ConsumedQuantity: The number of tokens or units consumed for a given SKU. This is the key measure of usage.
ConsumedUnit: The type of unit being consumed, such as 'tokens', 'GB', or 'vCPU hours'. Often appears as 'Units' in Azure exports for OpenAI workloads.
PricingUnit: The unit of measure used for pricing. Should match 'ConsumedUnit', e.g., 'tokens'.
EffectiveCost: Final cost after amortization of reservations, discounts, and prepaid credits. Often derived from billing data.
BilledCost: The invoiced charge before applying commitment discounts or amortization.
PricingQuantity: The volume of usage after applying pricing rules such as tiered or block pricing. Used to calculate cost when multiplied by unit price.

Creating custom analyses and reports with FinOps hubs
The FinOps toolkit provides a set of standard Power BI reports that address key scenarios within the FinOps Framework, such as cost transparency and rate optimization. However, many organizations require more than just predefined reports. They often need the flexibility to build custom analyses, develop advanced reporting, and enrich FinOps data with internal data sources. This post shows you how to: Build your own cost analyses using Kusto Query Language (KQL). Connect and enrich your FinOps data with additional sources in Azure Data Explorer (ADX) or Microsoft Fabric Real-Time Intelligence (RTI). Common use cases Organizations typically extend their FinOps reporting and analysis capabilities to: Allocate and charge back cloud costs Design customized showback and chargeback models, refine cost allocation across business units, cost centers, and regions, and support internal billing and negotiation processes. Improve cost visibility and efficiency Identify cost-saving opportunities, detect idle or over-provisioned resources, and analyze the financial impact of architectural changes such as serverless adoption or reserved instance utilization. Integrate cloud costs with business metrics Combine cost data with key performance indicators (KPIs) such as cost per customer, cost per transaction, or revenue margins. Enable advanced forecasting and budgeting Perform trend analysis, forecast future costs, and estimate budgets based on historical usage patterns. Support multi-cloud governance Normalize and compare costs across multiple cloud providers for unified reporting and benchmarking. Implement proactive cost management Set up custom anomaly detection mechanisms to monitor unexpected spending patterns. Create executive-level reporting Build tailored KPIs and dashboards for C-level stakeholders, focusing on business-relevant insights. Enabling advanced analysis with Azure Data Explorer and Microsoft Fabric Since version 0.7 (released in November 2024), FinOps hubs, one of the tools in the FinOps toolkit, includes Azure Data Explorer (ADX) as its central data platform. In version 0.10 (released on May 4, 2025), FinOps hubs also added support for Microsoft Fabric Real-Time Intelligence (RTI), which is a version of ADX fully integrated into Microsoft Fabric. ADX and RTI are high-performance, fully managed analytics services optimized for large-scale data exploration. They enable organizations to query, join, and analyze cloud cost and usage data at scale using the powerful Kusto Query Language (KQL). By leveraging ADX and RTI, FinOps hubs not only support standard reporting needs but also empower users to build advanced custom analyses, enrich datasets with internal or external sources, and integrate seamlessly with a variety of visualization and analytics tools. Prerequisites To follow the examples in this article, ensure you have: A deployed FinOps hub instance with Azure Data Explorer or Microsoft Fabric, see FinOps hubs. An Entra ID account with read access to both the Hub and Ingestion databases (Viewer role). For details, see Security roles. Writing your first KQL query Kusto Query Language (KQL) is the primary query language for Azure Data Explorer and Fabric Real-Time Intelligence. While KQL is conceptually similar to SQL or T-SQL, KQL is optimized for fast, flexible analysis across large datasets. The simplest way to execute a KQL query is through the Azure Data Explorer or Microsoft Fabric web UI. 
For an overview, see Azure Data Explorer web UI query overview or Query data in a Microsoft Fabric KQL queryset. Steps to run a query Whether your data is in Azure Data Explorer or Microsoft Fabric, you can use either tool to connect and run queries. To use Azure Data Explorer: Open the Azure Data Explorer web UI. Add a connection to your Azure Data Explorer cluster, see add a cluster connection. In the query editor, select the Hub database. To use Fabric Real-Time Intelligence: Open the desired workspace from Microsoft Fabric. In the list of workspace resources, select Hub_queryset for your eventhouse. Once you've connected to the Hub database, run the following query: Costs | summarize Cost = sum(EffectiveCost) by BillingPeriodStart | render columnchart This query summarizes effective costs by billing period and renders the results as a column chart. FOCUS compatibility The Costs view in the Hub database is compatible with the FinOps Open Cost and Usage Specification (FOCUS), allowing you to align analyses with standardized FinOps use cases. While KQL is the primary language, Azure Data Explorer also supports a subset of T-SQL commands, enabling users familiar with SQL to write basic queries. For more information, see T-SQL – Azure Data Explorer. The following example shows the different metered costs for a particular SKU. The SQL-statement, taken from the FOCUS use case library, can be executed against the Costs view: SELECT ProviderName, ChargePeriodStart, ChargePeriodEnd, SkuId, SkuPriceId, PricingUnit, ListUnitPrice, SUM(PricingQuantity) AS TotalPricingQuantity, SUM(ListCost) AS TotalListCost, SUM(EffectiveCost) AS TotalEffectiveCost FROM Costs WHERE ChargePeriodStart >= DATEADD(day, -30, GETDATE()) AND ChargePeriodEnd < GETDATE() GROUP BY ProviderName, ChargePeriodStart, ChargePeriodEnd, SkuId, SkuPriceId, PricingUnit, ListUnitPrice ORDER BY ChargePeriodStart ASC While Azure Data Explorer supports a subset of T-SQL commands for basic querying, KQL is the recommended language for advanced analysis. It offers broader capabilities, is optimized for large-scale data exploration, and benefits from widespread adoption across Azure services such as Microsoft Defender, Azure Resource Graph, Azure Monitor and Microsoft Fabric — meaning many cloud and security teams are already familiar with it. Writing more advanced KQL queries Basic KQL queries can be extended to address complex business requirements. For example: Costs | where BillingPeriodStart >= datetime(2024-01-01) and BillingPeriodEnd < datetime(2024-04-01) | summarize TotalEffectiveCost = sum(EffectiveCost) by ProviderName, BillingCurrency, BillingPeriodStart, ServiceCategory | order by TotalEffectiveCost desc Beyond the Costs view, FinOps hubs also include other views such as Prices, which can be queried individually or joined with other datasets using the join operator. Example queries are available in FinOps toolkit best practice library. Learn more about KQL Expand your KQL skills with the following Microsoft Learn resources: Kusto Query Language overview KQL quick reference SQL to Kusto query translation Visualizing FinOps data with KQL Writing powerful queries is only the first step. Visualizing insights effectively is critical to drive action across the organization. Azure Data Explorer (ADX) and Microsoft Fabric Real-Time Intelligence offer a rich set of visualization options to address diverse FinOps needs — from portfolio managers to cloud engineers. 
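Before looking at the individual tools below, it is worth noting that the same KQL can also be run from any client SDK, which is handy for ad-hoc analysis in notebooks. Here is a hedged sketch using the azure-kusto-data Python package; the cluster URL and database name are placeholders, and it assumes you are signed in with the Azure CLI and have pandas installed.

```python
# Minimal sketch: run a KQL query against the Hub database from Python.
# Assumptions: the cluster URL and database name are placeholders; requires the
# azure-kusto-data package, pandas, and an Azure CLI login for authentication.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder
from azure.kusto.data.helpers import dataframe_from_result_table

cluster = "https://your-finops-hub-cluster.westeurope.kusto.windows.net"
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster)
client = KustoClient(kcsb)

query = """
Costs
| summarize TotalEffectiveCost = sum(EffectiveCost) by BillingPeriodStart, ServiceCategory
| order by TotalEffectiveCost desc
"""

# Execute against the Hub database and convert the first result table to a DataFrame.
response = client.execute("Hub", query)
df = dataframe_from_result_table(response.primary_results[0])
print(df.head())
```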
Microsoft Excel and Power Query Microsoft Excel, enhanced with Power Query, is ideal for portfolio managers and procurement teams who want to integrate live FinOps data into their existing spreadsheets. Excel and Power Query enable users to refresh FinOps data seamlessly, perform custom reporting within familiar Excel environments, and support processes like budget planning, contract renewals, and forecast validation. Learn more about connecting Microsoft Excel to Azure Data Explorer. Power BI For FinOps teams, finance managers, and executive stakeholders, Power BI is a natural choice especially when Power BI is already used across the organization. Power BI allows users to build FinOps KPIs, create management reports, and integrate cloud financials into existing corporate dashboards with ease, powered directly by Azure Data Explorer queries. Learn more about connecting Power BI to Azure Data Explorer. Azure Data Explorer dashboards and KQL render visualizations FinOps practitioners and cloud analysts often need lightweight, real-time visualizations to monitor specific cloud spend patterns or anomalies. Azure Data Explorer dashboards and the render operator allow users to quickly create ad-hoc dashboards, monitor commitment-based discounts like reservations and savings plans, or track the costs of specific services such as Azure OpenAI, Sentinel, or Azure Monitor — all without leaving Azure Data Explorer. Learn more about Azure Data Explorer dashboards. Learn more about KQL render operator Azure Workbooks Azure Workbooks are ideal for cloud engineers and operations teams who work primarily within the Azure Portal. Workbooks provide a flexible way to create new FinOps visualizations by combining cost data from Azure Data Explorer with operational telemetry and monitoring insights. Organizations can easily add their own custom Workbooks to complement the existing FinOps workbooks. This approach helps teams extend their cost governance capabilities while maintaining consistency with the core FinOps Framework. Learn more about connecting Azure Workbooks to Azure Data Explorer. Microsoft Fabric Microsoft Fabric extends the FinOps visualization landscape by bringing together real-time analytics and traditional reporting within a unified platform. FinOps teams can use Fabric to build Power BI reports for advanced financial insights and Real-Time Dashboards for live cost monitoring, whether data is in a Fabric eventhouse or a separate Azure Data Explorer cluster. Fabric also enables organizations to package multiple FinOps assets, such as reports and dashboards, into organizational apps. This allows teams to deliver comprehensive, role-specific FinOps experiences across the business, ensuring that portfolio managers, engineers, and executives can access the right insights in a single, integrated environment. Learn more about Microsoft Fabric Org Apps. Learn more about Microsoft Fabric Real-Time Dashboards. Automating FinOps insights with KQL Beyond visualization, organizations can automate FinOps workflows and insights by executing KQL queries programmatically. Azure Data Explorer and Fabric Real-Time Intelligence support integration with automation tools across low-code platforms, DevOps pipelines, and data orchestration services. Low-code automation: Power Automate and Azure Logic Apps Using built-in connectors for Azure Data Explorer, FinOps teams can create automated workflows that: Execute KQL queries on a schedule or in response to events. 
DevOps integration: Azure DevOps Pipelines

KQL queries can also be integrated into Azure DevOps Pipelines using dedicated tasks. This enables:
- Running cost validation checks as part of CI/CD pipelines.
- Automatically generating FinOps reports during deployments.
- Enforcing cost governance as part of the delivery process.

Learn more about Azure DevOps integration with Azure Data Explorer.

Data orchestration: Azure Data Factory and Microsoft Fabric pipelines

For more complex data processing and orchestration scenarios, Azure Data Factory and Microsoft Fabric pipelines allow you to execute KQL queries and commands natively. Typical use cases include:
- Periodically enriching FinOps datasets with external reference data.
- Automating ingestion of additional telemetry or metadata into FinOps hubs.
- Triggering FinOps analyses as part of broader data workflows.

Learn more about executing KQL commands in Azure Data Factory.
Learn more about Pipelines in Microsoft Fabric.

Integrating FinOps data across systems

Before moving into full-scale analysis and visualization, many organizations find value in enriching FinOps data by combining it with additional operational, telemetry, or security sources. Azure Data Explorer and Fabric Real-Time Intelligence provide powerful capabilities to integrate external datasets seamlessly, enabling FinOps teams to generate deeper insights, automate governance processes, and support data-driven decision-making across the business. This section outlines the main integration options and best practices for working with FinOps data in broader cloud and enterprise environments.

External tables

For FinOps analysts and data engineering teams, external tables offer a flexible way to reference additional datasets without the need for ingestion. External tables allow querying external data sources directly from Azure Data Explorer or Fabric Real-Time Intelligence, helping to avoid data duplication and minimizing ingestion costs. Typical use cases include:
- Joining FinOps cost data with organizational metadata, such as business unit mappings or account hierarchies.
- Referencing FinOps metadata, budget and forecast data, or other external business and technical datasets in real time.
- Combining FinOps data with datasets stored in data lakes or relational databases like SQL Server, MySQL, PostgreSQL, or Cosmos DB.

A practical example is the integration of blob inventory report data with storage cost data: by enabling blob inventory reports on a specific Azure Storage account, detailed metadata about stored blobs is automatically exported as CSV or Parquet files into a storage container. These files can then be referenced directly in Azure Data Explorer as an external table, allowing detailed cost analysis and storage optimization.

Learn how to enable Azure Storage blob inventory reports.
Learn more about external tables in Azure Data Explorer.
Create an external table using the Azure Data Explorer web UI wizard.
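As a hedged sketch of the first use case, assume a business unit mapping has been defined as an external table named SubscriptionOwners with SubscriptionId and BusinessUnit columns; both the table and its columns are hypothetical. It is joined to the Costs view on the FOCUS SubAccountId column here, but the right join key and its format may differ in your dataset, so adjust accordingly.

```kusto
// Month-to-date effective cost per business unit, enriched from an external mapping table.
// 'SubscriptionOwners' and its columns are hypothetical; adjust names and the join key to your data.
external_table('SubscriptionOwners')
| project SubAccountId = tostring(SubscriptionId), BusinessUnit
| join kind=inner (
    Costs
    | where ChargePeriodStart >= startofmonth(now())
    | summarize MonthToDateCost = sum(EffectiveCost) by SubAccountId
    ) on SubAccountId
| summarize MonthToDateCost = sum(MonthToDateCost) by BusinessUnit
| order by MonthToDateCost desc
```

Because the mapping is referenced in place, updates to the underlying files are picked up on the next query run without any ingestion step.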
Azure Monitor and Log Analytics integration

For cloud engineers, SRE teams, and FinOps practitioners, integrating Azure Monitor and Log Analytics unlocks new possibilities:
- Correlating cost data with operational telemetry such as CPU utilization, memory usage, request volumes, or storage IOPS.
- Attributing shared infrastructure costs based on actual resource usage patterns.
- Monitoring operational and financial efficiency side by side.

This bi-directional integration allows querying Azure Monitor data within Azure Data Explorer, and vice versa, enabling unified cost and performance analysis.

A practical example: when virtual machines and managed disks are configured to send performance metrics (such as CPU utilization, memory usage, throughput, latency, and IOPS) to a Log Analytics workspace (specifically into the Perf table), this telemetry can be joined directly with cost data from the Costs view. This enables not only identifying downsizing opportunities but also calculating the potential savings associated with specific optimization actions.

Correlate data in Azure Data Explorer and Azure Resource Graph with data in a Log Analytics workspace.
Query data in Azure Monitor with Azure Data Explorer.

Sentinel and security data integration

For security teams and cloud governance groups, integrating FinOps data with security telemetry can provide important financial insights. Since Microsoft Sentinel is built on top of a Log Analytics workspace, it is possible to directly correlate Sentinel cost data with Sentinel data ingestion metrics. This enables building showback models for shared security costs based on actual usage patterns and ingestion volumes — ensuring transparency and fair allocation of security-related cloud spend.

Best practices for FinOps data integration

Choosing the right integration method depends on your use case, data dynamics, and performance requirements. General recommendations:

Use external tables when:
- The data is relatively static or updated periodically (e.g., metadata mappings, budget forecasts, inventory reports).
- Real-time query performance is not critical.

Ingest data into Azure Data Explorer when:
- You need high-performance queries over frequently changing datasets.
- Large-scale enrichment with operational, security, or business data is required.
- Real-time cost monitoring, anomaly detection, or performance correlation is needed.

In practice, many organizations adopt a hybrid approach — using external tables for static reference data while ingesting dynamic datasets for optimized querying and analytics.

Additionally, Azure Data Explorer makes it easy to copy data from an external table into a native table. This provides a simple yet powerful pattern for implementing ETL (Extract, Transform, Load) processes within ADX, allowing you to materialize reference data or periodically snapshot external datasets for performance optimization.
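A minimal sketch of that pattern, reusing the hypothetical SubscriptionOwners external table from the earlier example (the target table name is likewise illustrative):

```kusto
.set-or-replace SubscriptionOwnersSnapshot <|
    // Materialize a snapshot of the external mapping table as a native table,
    // stamped with the time the snapshot was taken. Names are illustrative.
    external_table('SubscriptionOwners')
    | extend SnapshotTakenAt = now()
```

Scheduled through Azure Data Factory or a Fabric pipeline, as described earlier, this keeps a fast native copy of slowly changing reference data while the external source remains the system of record.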
How we put this into practice at Alescent

At Alescent, we have been helping organizations unlock the full potential of KQL for FinOps and advanced cloud cost optimization — from early adoption on Azure Data Explorer to fully integrated solutions on Microsoft Fabric. We have successfully implemented end-to-end FinOps capabilities, covering everything from cost transparency and allocation to forecasting, anomaly detection, and executive reporting. If you are interested in learning more today, feel free to connect with me on LinkedIn for a direct conversation.

Conclusion

With FinOps hubs leveraging Azure Data Explorer and Microsoft Fabric, organizations can move beyond basic cost reporting to build a fully integrated, data-driven FinOps practice. By combining flexible querying, advanced visualization, and seamless data integration, FinOps teams can deliver deeper insights, optimize cloud investments, and empower every stakeholder — from engineers to executives — to make smarter, financially informed decisions.

Next steps

Ready to take your FinOps practice to the next level?
- Explore FinOps hubs and learn about deployment options and capabilities.
- Deepen your KQL skills with the Kusto Query Language documentation.
- Start building your own custom cost analyses and dashboards — and unlock deeper insights into your cloud environment.