# A practitioner's guide to accelerating FinOps with GitHub Copilot and FinOps hubs
> ℹ️ **Quick implementation overview**
> - Setup time: ~30 minutes for basic configuration
> - Target audience: FinOps practitioners, finance teams, engineering managers
> - Prerequisites: Azure subscription with FinOps hubs deployed, VS Code, GitHub Copilot
> - Key enabler: FinOps Hub Copilot v0.11 release

## Key benefits

- 🎯 **Democratized analytics:** Non-technical team members can perform advanced cost analysis without KQL expertise.
- ⚡ **Faster insights:** Natural language eliminates query-writing overhead and accelerates time to insight.
- 📋 **FinOps Framework alignment:** All queries map directly to validated FinOps Framework capabilities.
- 🔒 **Enterprise ready:** Built on the proven FinOps hub data foundation with security and governance controls.

FinOps practitioners face a common challenge: bridging the gap between complex cost data and actionable business insights. While FinOps hubs provide a comprehensive, analytics-ready foundation aligned with the FinOps Framework, accessing and analyzing this data has traditionally required deep expertise in KQL and the underlying schema.

This guide demonstrates how to perform sophisticated cost analysis with natural language queries by connecting GitHub Copilot in VS Code to FinOps hubs 0.11 via the Azure MCP server. This approach democratizes advanced analytics across FinOps teams, supporting faster decision-making and broader organizational adoption of FinOps practices.

> ℹ️ **Understanding the technology stack**
> The Model Context Protocol (MCP) is an open standard that enables AI agents to securely connect to external data sources and tools. The Azure MCP server is Microsoft's implementation that provides this connectivity for Azure resources, while GitHub Copilot acts as the AI agent that translates your natural language questions into the appropriate technical queries.
## Understanding the foundation: FinOps hubs and natural language integration

FinOps hubs serve as the centralized data platform for cloud cost management, providing unified cost and usage data across clouds, accounts, and tenants. The integration with GitHub Copilot through the Azure MCP server introduces a natural language interface that maps practitioner questions directly to validated KQL queries, eliminating the technical barrier that often limits FinOps analysis to specialized team members.

> ℹ️ **About the FinOps toolkit ecosystem**
> The FinOps toolkit also includes Power BI reports, workbooks, alerts, and an optimization engine for advanced analytics and automation. See the FinOps toolkit overview for the full set of capabilities.

### Key capabilities and technical foundation

FinOps hubs provide several critical capabilities that enable practitioner success:

📊 **Data foundation**
- Centralized cost and usage data across multiple cloud providers, billing accounts, and organizational units
- Native alignment with the FinOps Framework domains and the FOCUS specification
- Analytics-ready data model optimized for performance at scale without complexity overhead

🔗 **Integration capabilities**
- Multiple access patterns: Power BI integration, Microsoft Fabric compatibility, and direct KQL access for advanced scenarios
- Natural language query interface through Azure MCP server integration with Copilot

⚙️ **Technical architecture**

The Azure MCP server acts as the translation layer, implementing the open Model Context Protocol to enable secure communication between AI agents (like GitHub Copilot) and Azure resources.
For FinOps scenarios, it specifically provides natural language access to the Azure Data Explorer databases containing FinOps hubs data, converting practitioner questions into validated KQL queries while maintaining enterprise authentication and security standards.

## Mapping FinOps Framework capabilities to natural language queries

The integration supports the complete spectrum of FinOps Framework capabilities through natural language interfaces. Each query type maps to specific Framework domains and validated analytical patterns.

> 💡 **Quick reference**
> Each prompt category leverages pre-validated queries from the FinOps hubs query catalog, ensuring consistent, accurate results across different practitioners and use cases.

### 🔍 Understand phase capabilities

| Capability | Natural language example | Business value |
|---|---|---|
| Cost allocation and accountability | "Show me cost allocation by team for Q1" | Instant breakdown supporting chargeback discussions |
| Anomaly detection and management | "Find any cost anomalies in the last 30 days" | Proactive identification of budget risks |
| Reporting and analytics | "What are our top resource types by spend?" | Data-driven optimization focus areas |

### ⚡ Optimize phase capabilities

| Capability | Natural language example | Business value |
|---|---|---|
| Rate optimization | "How much did we save with reservations last month?" | Quantification of commitment discount value |
| Workload optimization | "Show me underutilized resources" | Resource efficiency identification |
| Governance enforcement | "Show me resources without proper tags" | Policy compliance gaps |

### 📈 Operate phase capabilities

| Capability | Natural language example | Business value |
|---|---|---|
| Forecasting and planning | "Forecast next quarter's cloud costs" | Proactive budget planning support |
| Performance tracking | "Show month-over-month cost trends" | Operational efficiency measurement |
| Business value quantification | "Calculate our effective savings rate" | ROI demonstration for stakeholders |

## Practical implementation: Real-world scenarios and results

The following examples demonstrate how natural language queries translate to actionable FinOps insights. Each scenario includes the business context, Framework alignment, query approach, and interpretable results to illustrate the practical value of this integration.

> ℹ️ **Sample data notation**
> All cost figures, dates, and resource names in the following examples are illustrative and provided for demonstration purposes. Actual results will vary based on your organization's Azure usage, billing structure, and FinOps hub configuration.

### Effective cost allocation and accountability

**FinOps Framework alignment**
- Domain: Understand usage and cost
- Capabilities: Allocation, Reporting and analytics

**Business context:** Finance teams require accurate cost allocation data to support budget planning and accountability discussions across organizational units.

**Natural language query:** "What are the top resource groups by cost last month?"

**Query results and business impact:** The natural language prompt maps to a validated allocation query that aggregates effective cost by resource group, providing the foundational data for chargeback and showback processes.

| Resource group | Effective cost |
|---|---|
| haven | $36,972.85 |
| leap | $15,613.96 |
| ahbtest | $6,824.54 |
| vnet-hub-001 | $1,560.13 |
| ... | ... |
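Under the hood the validated query is KQL, but the allocation logic itself is a simple group-by-sum. A minimal Python sketch of that aggregation, using hypothetical FOCUS-style records (field names and figures are assumptions for illustration, not the hub's actual API):

```python
from collections import defaultdict

def top_resource_groups_by_cost(rows, n=5):
    """Sum EffectiveCost per resource group and return the top n, highest first."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["ResourceGroup"]] += row["EffectiveCost"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Illustrative FOCUS-style records (values invented for the sketch)
rows = [
    {"ResourceGroup": "haven", "EffectiveCost": 20000.00},
    {"ResourceGroup": "leap", "EffectiveCost": 15613.96},
    {"ResourceGroup": "haven", "EffectiveCost": 16972.85},
    {"ResourceGroup": "ahbtest", "EffectiveCost": 6824.54},
]
print(top_resource_groups_by_cost(rows, n=3))
```

The same shape applies whether the grouping key is resource group, team tag, or subscription, which is why one validated query pattern can serve several allocation prompts.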
> 🎯 **Key takeaway**
> Natural language queries eliminate the need for complex KQL knowledge while maintaining data accuracy. Finance teams can now perform sophisticated cost allocation analysis without technical barriers.

Learn more: Introduction to cost allocation

### Proactive cost anomaly detection and management

**FinOps Framework alignment**
- Domain: Understand usage and cost
- Capabilities: Anomaly management, Reporting and analytics

**Business context:** Proactive anomaly detection enables rapid response to unexpected cost changes, supporting budget adherence and operational efficiency.

**Natural language query:** "Are there any unusual cost spikes or anomalies in the last 12 months?"

**Query results and business impact:** The system applies time series analysis to identify significant cost deviations, automatically calculating percentage changes and flagging potential anomalies for investigation.

| Date | Daily cost | % change vs previous day |
|---|---|---|
| 2025-06-03 | $971.36 | -59.54% |
| 2025-06-01 | $2,370.16 | -4.38% |
| 2025-04-30 | $2,302.10 | -5.56% |
| 2025-04-02 | $2,458.45 | +5.79% |
| ... | ... | ... |

> ⚠️ **Analysis insight**
> The 59% cost reduction on June 3rd indicates a significant operational change, such as workload migration or resource decommissioning, and requires validation to confirm it is expected behavior.

> 🎯 **Key takeaway**
> Automated anomaly detection enables proactive cost management by identifying unusual spending patterns before they impact budgets, supporting rapid response to operational changes.

Learn more: Anomaly management

### Accurate financial forecasting and budget planning

**FinOps Framework alignment**
- Domain: Quantify business value
- Capabilities: Forecasting, Planning and estimating

**Business context:** Accurate financial forecasting supports budget planning processes and enables proactive capacity and cost management decisions.

**Natural language query:** "Forecast total cloud cost for the next 90 days based on the last 12 months."
**Query results and business impact:** The forecasting algorithm analyzes historical spending patterns and applies trend analysis to project future costs, providing both daily estimates and aggregate totals for planning purposes.

| Date | Forecasted cost |
|---|---|
| 2025-06-04 | $2,401.61 |
| 2025-07-01 | $2,401.61 |
| 2025-08-01 | $2,401.61 |
| 2025-09-01 | $2,401.61 |
| ... | ... |

Total forecasted 90-day spend: $216,145.24

> 🎯 **Key takeaway**
> Natural language forecasting queries provide accurate financial projections based on validated historical analysis, enabling confident budget planning without requiring data science expertise.

Learn more: Forecasting

### Reporting and analytics capabilities

**FinOps Framework alignment**
- Domain: Understand usage and cost
- Capabilities: Reporting and analytics

**Business context:** Executive reporting requires consistent, reliable cost trend analysis to support strategic decision-making and budget performance tracking.

**Natural language query:** "Show monthly billed and effective cost trends for the last 12 months."

**Query results and business impact:**

| Month | Billed cost | Effective cost |
|---|---|---|
| 2024-06 | $46,066.39 | $46,773.85 |
| 2024-07 | $72,951.41 | $74,004.08 |
| 2024-08 | $73,300.31 | $74,401.81 |
| 2024-09 | $71,886.30 | $72,951.26 |
| ... | ... | ... |

Learn more: Reporting and analytics

### Resource optimization analysis

**FinOps Framework alignment**
- Domain: Optimize usage and cost
- Capabilities: Workload optimization, Reporting and analytics

**Business context:** Prioritizing optimization efforts requires understanding which resource types drive the most cost, enabling focused improvement initiatives with maximum business impact.

**Natural language query:** "What are the top resource types by cost last month?"

**Query results and business impact:**

| Resource type | Effective cost |
|---|---|
| Fabric Capacity | $34,283.52 |
| Virtual machine scale set | $15,155.59 |
| SQL database | $2,582.99 |
| Virtual machine | $2,484.34 |
| ... | ... |
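Several of the scenarios above rest on simple arithmetic. For example, the day-over-day percentage change behind the earlier anomaly table can be sketched in a few lines of Python; the figures and the 25% flag threshold below are assumptions for illustration, not what the hub's KQL actually uses:

```python
def day_over_day_changes(daily_costs, threshold_pct=25.0):
    """Compute % change vs the previous day and flag large swings.

    daily_costs: list of (date_str, cost) sorted by date ascending.
    Returns a list of (date, cost, pct_change, flagged) tuples.
    """
    results = []
    for (_, prev_cost), (day, cost) in zip(daily_costs, daily_costs[1:]):
        pct = (cost - prev_cost) / prev_cost * 100
        results.append((day, cost, round(pct, 2), abs(pct) >= threshold_pct))
    return results

# Illustrative daily totals (invented for the sketch)
daily = [("2025-06-01", 2478.71), ("2025-06-02", 2401.54), ("2025-06-03", 971.36)]
for day, cost, pct, flagged in day_over_day_changes(daily):
    print(day, cost, f"{pct:+}%", "ANOMALY" if flagged else "")
```

In practice the hub applies proper time series decomposition rather than a fixed threshold, but the flagged output has the same shape as the anomaly table shown earlier.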
Learn more: Workload optimization

## Implementation methodology

This section provides a systematic approach to implementing natural language FinOps analysis using the technical foundation established above.

### Prerequisites and environment validation

Before proceeding with implementation, ensure you have:

- ✅ Azure subscription with appropriate FinOps hub deployment permissions
- ✅ Node.js runtime environment (required by the Azure MCP server)
- ✅ Visual Studio Code with the GitHub Copilot extension
- ✅ Azure CLI, Azure PowerShell, or Azure Developer CLI authentication configured

### Access validation methodology

1. **Verify FinOps hub deployment:** Confirm hub deployment status and data ingestion through the FinOps hubs setup guide.
2. **Validate database access:** Test connectivity to the hub database using the Azure Data Explorer web application or the Azure portal.
3. **Confirm schema availability:** Verify that the core functions (Costs, Prices) and databases (Hub, Ingestion) are accessible with current data.

**Expected database structure**

- Hub database: Public-facing functions including Costs, Prices, and version-specific functions (e.g., Costs_v1_0)
- Ingestion database: Raw data tables, configuration settings (HubSettings, HubScopes), and open data tables (PricingUnits)
- FOCUS-aligned data: All datasets conform to the FinOps Open Cost and Usage Specification (FOCUS)

Learn more: FinOps hubs template details

### Azure MCP server configuration

> ℹ️ **What is the Azure MCP server?**
> The Azure Model Context Protocol (MCP) server is a Microsoft-provided implementation that enables AI agents and clients to interact with Azure resources through natural language commands. It implements the open Model Context Protocol standard to provide secure, structured access to Azure services, including Azure Data Explorer (FinOps hub databases).
#### Key capabilities and service support

The Azure MCP server provides comprehensive Azure service integration, particularly relevant for FinOps analysis:

🔍 **FinOps-relevant services**
- Azure Data Explorer: Execute KQL queries against FinOps hub databases
- Azure Monitor: Query logs and metrics for cost analysis
- Resource groups: List and analyze organizational cost structures
- Subscription management: Access subscription-level cost data

🔧 **Additional Azure services**
- Azure Storage, Cosmos DB, Key Vault, Service Bus, and 10+ other services
- Full list available in the Azure MCP Server tools documentation

#### Installation methodology

The Azure MCP server is available as an NPM package and as a VS Code extension. For FinOps scenarios, we recommend the VS Code extension for seamless integration with GitHub Copilot.

**Option 1: VS Code extension (recommended)**

1. Install the Azure MCP server extension from the VS Code Marketplace.
2. The extension automatically configures the server in your VS Code settings.
3. Open GitHub Copilot and activate Agent Mode to access Azure tools.

**Option 2: Manual configuration**

Add the following to your MCP client configuration:

```json
{
  "servers": {
    "Azure MCP Server": {
      "command": "npx",
      "args": ["-y", "@azure/mcp@latest", "server", "start"]
    }
  }
}
```

#### Authentication requirements

The Azure MCP server uses Microsoft Entra ID through the Azure Identity library, following Azure authentication best practices. It supports:

- Azure CLI: `az login` (recommended for development)
- Azure PowerShell: `Connect-AzAccount`
- Azure Developer CLI: `azd auth login`
- Managed identity: for production deployments

The server uses DefaultAzureCredential and automatically discovers the best available authentication method for your environment.
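That "automatic discovery" is essentially a chain of credential sources tried in order until one succeeds. A toy sketch of the pattern (this is an illustration of the chaining idea, not the real azure-identity implementation):

```python
class CredentialUnavailableError(Exception):
    pass

def make_credential(name, available):
    """Return a callable that yields a token, or raises if the source is unavailable."""
    def get_token():
        if not available:
            raise CredentialUnavailableError(name)
        return f"token-from-{name}"
    return get_token

def chained_credential(sources):
    """Try each credential source in order; return the first token obtained."""
    for source in sources:
        try:
            return source()
        except CredentialUnavailableError:
            continue
    raise CredentialUnavailableError("no credential source available")

# Illustrative chain: no environment credentials, no managed identity,
# but the developer is logged in with the Azure CLI.
chain = [
    make_credential("environment", available=False),
    make_credential("managed-identity", available=False),
    make_credential("azure-cli", available=True),
]
print(chained_credential(chain))
```

This is why the same configuration works both on a developer laptop (falls through to Azure CLI) and in production (managed identity succeeds earlier in the chain).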
### Technical validation steps

1. **Authentication verification:** Confirm successful login with one of the supported Azure tools.
2. **Resource discovery:** Validate that the MCP server can access your Azure subscription and FinOps hub resources.
3. **Database connectivity:** Test query execution against the FinOps hub databases.

### Integration with the development environment

VS Code configuration requirements:

- GitHub Copilot extension with Agent Mode capability
- Azure MCP server installation and configuration
- FinOps hubs Copilot instructions and configuration files

The FinOps Hub Copilot v0.11 release provides pre-configured GitHub Copilot instructions specifically tuned for FinOps analysis. This release includes:

- AI agent instructions optimized for FinOps Framework capabilities
- GitHub Copilot configuration files for VS Code Agent Mode
- Validated query patterns mapped to common FinOps scenarios
- Azure MCP server integration guides for connecting to FinOps hub data

**Verification methodology:**

1. Open the Copilot Chat interface (Ctrl+Shift+I / Cmd+Shift+I).
2. Activate Agent Mode and select the tools icon to verify Azure MCP server availability.
3. Execute a connectivity test: "What Azure resources do I have access to?"

Expected response validation:

- Successful authentication confirmation
- Azure subscription and resource enumeration
- FinOps hub database connectivity status

### Progressive query validation

Foundational test queries:

| Complexity level | Validation query | Expected behavior |
|---|---|---|
| Basic | "Show me total cost for last month" | Single aggregate value with currency formatting |
| Intermediate | "What are my top 10 resource groups by cost?" | Tabular results with proper ranking |
| Advanced | "Find any costs over $1000 in the last week" | Filtered results with anomaly identification |

Query execution validation:

- KQL translation accuracy against the FinOps hub schema
- Result set formatting and data type handling
- Error handling and user feedback mechanisms

## Operational best practices for enterprise implementation

### Query optimization and performance considerations

**Data volume management:**

- Implement temporal filtering to prevent timeout scenarios (Azure Data Explorer's 64 MB default result limit)
- Use summarization functions for large datasets rather than detailed row-level analysis
- Apply resource-level filters when analyzing specific environments or subscriptions

**Schema consistency validation:**

- Reference the FinOps hub database guide for authoritative column definitions
- Verify data freshness through ingestion timestamp validation
- Validate currency normalization across multi-subscription environments

**Query pattern optimization:**

- Leverage the FinOps hub query catalog for validated analytical patterns
- Customize the costs-enriched-base query foundation for organization-specific requirements
- Implement proper time zone handling for global operational environments

### Security and access management

**Authentication patterns:**

- Use Azure CLI integrated authentication for development environments
- Implement service principal authentication for production automation scenarios
- Maintain the principle of least privilege for database access permissions

**Data governance considerations:**

- Ensure compliance with organizational data classification policies
- Implement appropriate logging for cost analysis queries and results
- Validate that natural language prompts don't inadvertently expose sensitive financial data

## Comprehensive query patterns by analytical domain

The following reference provides validated natural language prompts mapped to specific FinOps Framework capabilities and proven KQL implementations.
> Technical note: Each pattern references validated queries from the FinOps hub query catalog. Verify schema compatibility using the FinOps hub database guide before implementation.

### Cost visibility and allocation patterns

| Analytical requirement | FinOps Framework alignment | Validated natural language query |
|---|---|---|
| Executive cost trend reporting | Reporting and analytics | "Show monthly billed and effective cost trends for the last 12 months." |
| Resource group cost ranking | Allocation | "What are the top resource groups by cost last month?" |
| Quarterly financial reporting | Allocation / Reporting and analytics | "Show quarterly cost by resource group for the last 3 quarters." |
| Service-level cost analysis | Reporting and analytics | "Which Azure services drove the most cost last month?" |
| Organizational cost allocation | Allocation / Reporting and analytics | "Show cost allocation by team and product for last quarter." |

### Optimization and efficiency patterns

| Analytical requirement | FinOps Framework alignment | Validated natural language query |
|---|---|---|
| Resource optimization prioritization | Workload optimization | "What are the top resource types by cost last month?" |
| Commitment discount analysis | Rate optimization | "Show reservation recommendations and break-even analysis for our environment." |
| Underutilized resource identification | Workload optimization | "Find resources with low utilization that could be optimized or decommissioned." |
| Savings plan effectiveness | Rate optimization | "How much did we save with savings plans compared to pay-as-you-go pricing?" |
| Tag compliance monitoring | Data ingestion | "Show me resources without required cost center tags." |

### Anomaly detection and monitoring patterns

| Analytical requirement | FinOps Framework alignment | Validated natural language query |
|---|---|---|
| Cost spike identification | Anomaly management | "Find any unusual cost spikes or anomalies in the last 30 days." |
| Budget variance analysis | Budgeting | "Show actual vs. budgeted costs by resource group this quarter." |
| Trending analysis | Reporting and analytics | "Identify resources with consistently increasing costs over the last 6 months." |
| Threshold monitoring | Anomaly management | "Alert me to any single resources costing more than $5,000 monthly." |

### Governance and compliance patterns

| Analytical requirement | FinOps Framework alignment | Validated natural language query |
|---|---|---|
| Policy compliance validation | Policy and governance | "Show resources that don't comply with our tagging policies." |
| Approved service usage | Policy and governance | "List any non-approved services being used across our subscriptions." |
| Regional compliance monitoring | Policy and governance | "Verify all resources are deployed in approved regions only." |
| Cost center accountability | Invoicing and chargeback | "Generate chargeback reports by cost center for last quarter." |

> 🎯 **Key takeaway**
> These validated query patterns provide a comprehensive foundation for FinOps analysis across all Framework capabilities. Use them as templates and customize them for your organization's specific requirements.

## Troubleshooting and optimization guidance

### Common query performance issues

> ⚠️ **Performance considerations**
> Azure Data Explorer has a 64 MB result limit by default. Proper query optimization avoids timeouts and ensures reliable performance. If using Power BI, use DirectQuery to connect to your data.
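The aggregation-first advice is easy to see in miniature. This sketch (entirely synthetic data) shows how summarizing to monthly totals collapses the result set that would otherwise have to fit under the 64 MB limit:

```python
from collections import defaultdict
from datetime import date, timedelta

# Illustrative raw data: one row per resource per day for a year (50 resources)
days = [date(2025, 1, 1) + timedelta(days=i) for i in range(365)]
daily_rows = [(d, f"vm-{r}", 10.0) for d in days for r in range(50)]

# Aggregation-first: collapse to one row per month before returning results
monthly = defaultdict(float)
for d, _resource, cost in daily_rows:
    monthly[(d.year, d.month)] += cost

print(len(daily_rows), "daily rows vs", len(monthly), "monthly rows")
```

The same total spend is preserved, but the row count drops by three orders of magnitude, which is exactly what "Summarize costs by month" achieves over "show all costs by day".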
**Large dataset timeouts**

- Symptom: Queries failing with timeout errors on large datasets
- Solution: Add temporal filters
  - ✅ Recommended: "Show costs for last 30 days"
  - ❌ Avoid: "Show all costs"
- Framework alignment: Data ingestion

**Memory limit exceptions**

- Symptom: Exceeding the Azure Data Explorer 64 MB result limit
- Solution: Use aggregation functions
  - ✅ Recommended: "Summarize costs by month"
  - ❌ Avoid: Daily granular data for large time periods
- Best practice: Implement progressive drill-down from summary to detail

**Schema validation errors**

- Symptom: Queries returning empty results or unexpected columns
- Solution: Verify hub schema version compatibility using the database guide
- Validation: Test with known queries from the query catalog

### Query optimization best practices

**Temporal filtering**
- ✅ Recommended: "Show monthly costs for Q1 2025"
- ❌ Avoid: "Show all historical costs by day"

**Aggregation-first approach**
- ✅ Recommended: "Top 10 resource groups by cost"
- ❌ Avoid: "All resources with individual costs"

**Multi-subscription handling**
- ✅ Recommended: "Costs by subscription for production environment"
- ❌ Avoid: "All costs across all subscriptions without filtering"

## Conclusion

The integration of FinOps hubs with natural language querying through GitHub Copilot and the Azure MCP server represents a transformative advancement in cloud financial management accessibility. By eliminating the technical barriers traditionally associated with cost analysis, this approach enables broader organizational adoption of FinOps practices while maintaining analytical rigor and data accuracy.
### Key takeaways for implementation success

**Foundation building: start with the basics**

- Ensure a robust FinOps hub deployment with clean, consistent data ingestion
- Validate authentication and connectivity before advancing to complex scenarios
- Begin with basic queries and progressively increase complexity as team familiarity grows

**Business value focus: align with organizational needs**

- Align query patterns with organizational FinOps maturity and immediate business needs
- Prioritize use cases that demonstrate clear ROI and operational efficiency gains
- Establish feedback loops with finance and business stakeholders to refine analytical approaches

**Scale and governance planning: design for enterprise success**

- Implement appropriate access controls and data governance from the beginning
- Design query patterns that perform well at organizational scale
- Establish monitoring and alerting for cost anomalies and policy compliance

### Future considerations

As natural language interfaces continue to evolve, organizations should prepare for enhanced capabilities, including:

🔮 **Advanced analytics**
- Multi-modal analysis: Integration of cost data with performance metrics, compliance reports, and business KPIs
- Predictive analytics: Advanced forecasting and scenario modeling through conversational interfaces

🤖 **Automated intelligence**
- Automated optimization: Natural language-driven resource rightsizing and commitment recommendations
- Cross-platform intelligence: Unified analysis across cloud providers, SaaS platforms, and on-premises infrastructure

The democratization of FinOps analytics through natural language interfaces positions organizations to make faster, more informed decisions about cloud investments while fostering a culture of cost consciousness across all teams. Success with this integration requires both technical implementation excellence and organizational change management to maximize adoption and business impact.
Learn more about the FinOps toolkit and stay updated on new capabilities at the FinOps toolkit website.

# Getting started with FinOps hubs: Multicloud cost reporting with Azure and Google Cloud
Microsoft's FinOps hubs offer a powerful and trusted foundation for managing, analyzing, and optimizing cloud costs. Built on Azure Data Explorer (ADX) and Azure Data Lake Storage (ADLS), FinOps hubs provide a scalable platform to unify billing data across providers leveraging FOCUS datasets.

In this post, we'll take a hands-on technical walkthrough of connecting Microsoft FinOps hubs to Google Cloud, enabling you to export and analyze Google Cloud billing data alongside Azure billing data directly within your FinOps hub instance. This walkthrough uses only storage in the hub for accessing data and is designed to get you started with multicloud connection and reporting from FinOps hubs. For large datasets, Azure Data Explorer or Microsoft Fabric is recommended.

With the introduction of the FinOps Open Cost and Usage Specification (FOCUS), normalizing billing data and reporting it through a single-pane-of-glass experience has never been easier. As a long-time contributor to the FOCUS working group, I've spent the past few years helping define standards to make multicloud reporting simpler and more actionable. FOCUS enables side-by-side views of cost and usage data, such as compute hours across cloud providers, helping organizations make better decisions around workload placement and right-sizing. Before FOCUS, this kind of unified analysis was incredibly challenging due to the differing data models used by each provider.

To complete this technical walkthrough, you'll need access to both Azure and Google Cloud, and approximately 2 to 3 hours of your time.

## Getting started: The basics you'll need

Before diving in, ensure you have the following prerequisites in place:

- ✅ Access to your Google Cloud billing account
- ✅ A Google Cloud project with BigQuery and other required APIs enabled and linked to your billing account
- ✅ All required IAM roles and permissions for working with BigQuery, Cloud Functions, Storage, and billing data (detailed below)
- ✅ Detailed billing export and pricing export configured in Google Cloud
- ✅ An existing deployment of FinOps hubs

## What you'll be building

In this walkthrough, you'll set up Google billing exports and an Azure Data Factory pipeline that fetches the exported data, converts it to Parquet, and ingests it into your FinOps hub storage, following the FOCUS 1.0 standard for normalization. Along the way, you will:

- Configure a BigQuery view to convert detailed billing exports into the FOCUS 1.0 schema.
- Create a metadata table in BigQuery to track export timestamps, enabling incremental exports that reduce file sizes and avoid duplicate data.
- Set up a GCS bucket to hold FOCUS-formatted data exported in CSV format.
- Deploy a Google Cloud Function that performs incremental exports from BigQuery to GCS.
- Create a Google Cloud Scheduler job to automate the export of your billing data to Google Cloud Storage.
- Build a pipeline in Azure Data Factory to:
  - Fetch the CSV billing exports from Google Cloud Storage.
  - Convert them to Parquet.
  - Ingest the transformed data into your FinOps hub ingestion container.

Let's get started. We'll begin in Google Cloud before jumping back to Azure to set up the Data Factory pipeline.

## Enabling FOCUS exports in Google Cloud

### Prerequisite: Enable detailed billing and pricing exports

> ⚠️ You cannot complete this guide without billing data in BigQuery. If you have not enabled detailed billing exports and pricing exports, do this now and come back to the walkthrough in 24 hours.

To enable detailed billing exports and pricing exports:

1. Navigate to Billing > Billing export.
2. Enable Detailed cost export to BigQuery.
3. Select the billing project and a dataset. If you have not created a project for billing, do so now.
4. Enable Pricing export to the same dataset.
This enables daily cost and usage data to be streamed to BigQuery for granular analysis. Detailed guidance and information on billing exports in Google Cloud can be found here: Google Billing Exports.

### What you'll create

- Service category table
- Metadata table
- FOCUS view (you must have detailed billing and pricing exports enabled to create this view)
- Cloud Function
- Cloud Scheduler job

### What you'll need

- An active GCP billing account (this is your payment method)
- A GCP project (new or existing) linked to your GCP billing account
- All required APIs enabled
- All required IAM roles assigned

**Enable required APIs:**

- BigQuery API
- Cloud Billing API
- Cloud Build API
- Cloud Scheduler
- Cloud Functions
- Cloud Storage
- Cloud Run Admin
- Pub/Sub (optional, for event triggers)

**Required IAM roles**

Assign the following roles to your user account:

- roles/billing.viewer or roles/billing.admin
- roles/bigquery.dataEditor
- roles/storage.admin
- roles/cloudfunctions.admin
- roles/cloudscheduler.admin
- roles/iam.serviceAccountTokenCreator
- roles/cloudfunctions.invoker
- roles/run.admin (if using Cloud Run)
- roles/editor (project Editor)

Create a service account (e.g., svc-bq-focus) and assign it the following roles:

- roles/bigquery.dataOwner
- roles/storage.objectAdmin
- roles/cloudfunctions.invoker
- roles/cloudscheduler.admin
- roles/iam.serviceAccountTokenCreator
- roles/run.admin

Your default compute service account will also need to run Cloud Build jobs. Grant the Cloud Build role to the default compute service account in your project, which looks like this:

- projectID-compute@developer.gserviceaccount.com → roles/cloudbuild.builds.builder

## Create the FOCUS data structure and view

This section creates two new tables in BigQuery: one for service category mappings and one for metadata about export times. These need to exist before we create the FOCUS view that extracts billing data in FOCUS format.
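Conceptually, the service category table is just a service-to-category lookup that the FOCUS view joins against. A toy sketch of that lookup (the rows below are illustrative; the real mapping comes from the FOCUS Converter project's service_category_mapping.csv):

```python
# Toy sketch of the service -> ServiceCategory lookup performed by the FOCUS
# view's JOIN. Rows are invented examples, not the full FOCUS Converter mapping.
service_category = {
    "Compute Engine": "Compute",
    "Cloud Storage": "Storage",
    "BigQuery": "Analytics",
}

def categorize(service_description):
    # Unmapped services fall back to "Other", mirroring the behavior of a
    # LEFT JOIN with a default value for unmatched rows.
    return service_category.get(service_description, "Other")

print(categorize("BigQuery"))
print(categorize("Some Brand-New Service"))
```

Keeping this mapping in its own table means category changes are a data update, not a change to the view's SQL.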
### Create a service category mapping table

I removed service category from the original Google FOCUS view to reduce the size of the SQL query, so to map service categories properly we create a dedicated service category table and join it to the FOCUS view. In this step we create a new table using open data to map GCP services to FOCUS service categories. This reduces the size of the SQL query and simplifies management of service category mappings: leveraging open-source data, we can easily update the mappings if they change or new categories are added, without impacting the FOCUS view query.

Process:

1. Download the latest service_category_mapping.csv from the FOCUS Converter repo.
2. Go to BigQuery > Your dataset > Create table.
3. Upload the CSV.
4. Table name: service_category
5. Schema: Auto-detect

### Create a metadata table

This table tracks the last time detailed billing data was added to your detailed billing export. We use it to enable incremental exports of billing data through the FOCUS view, so each export only includes new data since the last timestamp rather than everything every time.

Process:

1. Go to BigQuery > Create table.
2. Table name: metadata_focus_export
3. Schema (enter the field name, then choose the field format; do not type the colon):
   - last_export_time: TIMESTAMP
   - export_message: STRING

### Create the FOCUS-aligned view

Creating a view in BigQuery allows us to un-nest the detailed billing export tables into the FOCUS 1.0 format. To use Power BI we must un-nest the tables, so this step is required. It also ensures we map the right columns in the Google Cloud detailed billing export to the right FOCUS columns.

> ⚠️ The FOCUS SQL code provided in this walkthrough has been altered from the original Google-provided code. I believe the new code is better aligned to FOCUS 1.0 than the original; however, it does contain some nuances that suit my personal views.
Please evaluate this carefully before using it in a production system and adjust the code to your needs. Steps: Navigate to BigQuery > New Query Paste the FOCUS view SQL query provided below, then update the placeholders: Replace "Your-Detailed-Billing-Export-ID" with the dataset ID and table name of your detailed usage export Replace "your_pricing_export_id" with the dataset ID and table name of your pricing export Replace "Your-Service-Category-Id" with the dataset ID containing your service_category mapping table FOCUS SQL Query:

WITH usage_export AS (
  SELECT
    *,
    (
      SELECT AS STRUCT type, id, full_name, amount, name
      FROM UNNEST(credits)
      LIMIT 1
    ) AS cud,
  FROM "Your-Detailed-Billing-Export-ID" -- replace with your detailed usage export table path
),
prices AS (
  SELECT
    export_time,
    sku.id AS sku_id,
    sku.description AS sku_description,
    service.id AS service_id,
    service.description AS service_description,
    tier.*
  FROM "your_pricing_export_id",
  UNNEST(list_price.tiered_rates) AS tier
)
SELECT
  "111111-222222-333333" AS BillingAccountId,
  "Your Company Name" AS BillingAccountName,
  COALESCE((SELECT SUM(x.amount) FROM UNNEST(usage_export.credits) x), 0) + cost AS BilledCost,
  usage_export.currency AS BillingCurrency,
  DATETIME(PARSE_DATE("%Y%m", invoice.month)) AS BillingPeriodStart,
  DATETIME(DATE_SUB(DATE_ADD(PARSE_DATE("%Y%m", invoice.month), INTERVAL 1 MONTH), INTERVAL 1 DAY)) AS BillingPeriodEnd,
  CASE
    WHEN usage_export.adjustment_info.type IS NOT NULL AND usage_export.adjustment_info.type != '' THEN 'Adjustment'
    WHEN usage_export.cud.type = 'PROMOTION' AND usage_export.cost_type = 'regular' AND usage_export.cud.id IS NOT NULL THEN 'Credit'
    WHEN usage_export.sku.description LIKE '%Commitment - 3 years - dollar based VCF%' OR usage_export.sku.description LIKE '%Prepay Commitment%' THEN 'Purchase'
    WHEN usage_export.cud.id IS NOT NULL AND usage_export.cud.id != '' THEN 'Credit'
    WHEN usage_export.cost_type = 'regular' THEN 'Usage'
    WHEN usage_export.cost_type = 'tax' THEN 'Tax'
    WHEN usage_export.cost_type = 'adjustment' THEN 'Adjustment'
    WHEN usage_export.cost_type = 'rounding_error' THEN 'Adjustment'
    ELSE usage_export.cost_type
  END AS ChargeCategory,
  IF(COALESCE(
      usage_export.adjustment_info.id,
      usage_export.adjustment_info.description,
      usage_export.adjustment_info.type,
      usage_export.adjustment_info.mode) IS NOT NULL,
    "correction", NULL) AS ChargeClass,
  CASE
    WHEN usage_export.adjustment_info.type IS NOT NULL AND usage_export.adjustment_info.type != '' THEN usage_export.adjustment_info.type
    ELSE usage_export.sku.description
  END AS ChargeDescription,
  CAST(usage_export.usage_start_time AS DATETIME) AS ChargePeriodStart,
  CAST(usage_export.usage_end_time AS DATETIME) AS ChargePeriodEnd,
  CASE usage_export.cud.type
    WHEN "COMMITTED_USAGE_DISCOUNT_DOLLAR_BASE" THEN "Spend"
    WHEN "COMMITTED_USAGE_DISCOUNT" THEN "Usage"
  END AS CommitmentDiscountCategory,
  usage_export.cud.id AS CommitmentDiscountId,
  COALESCE(usage_export.cud.full_name, usage_export.cud.name) AS CommitmentDiscountName,
  usage_export.cud.type AS CommitmentDiscountType,
  CAST(usage_export.usage.amount_in_pricing_units AS numeric) AS ConsumedQuantity,
  usage_export.usage.pricing_unit AS ConsumedUnit, -- review
  CAST(
    CASE
      WHEN usage_export.cost_type = "regular" THEN usage_export.price.effective_price * usage_export.price.pricing_unit_quantity
      ELSE 0
    END AS NUMERIC
  ) AS ContractedCost,
  -- CAST(usage_export.price.effective_price AS numeric) AS ContractedUnitPrice,
  usage_export.seller_name AS InvoiceIssuerName,
  COALESCE((SELECT SUM(x.amount) FROM UNNEST(usage_export.credits) x), 0) + cost AS EffectiveCost,
  CAST(usage_export.cost_at_list AS numeric) AS ListCost,
  prices.account_currency_amount AS ListUnitPrice,
  IF(
    usage_export.cost_type = "regular",
    IF(
      LOWER(usage_export.sku.description) LIKE "commitment%" OR usage_export.cud IS NOT NULL,
      "committed",
      "standard"),
    NULL) AS PricingCategory,
  IF(usage_export.cost_type = 'regular', usage_export.price.pricing_unit_quantity, NULL) AS PricingQuantity,
  IF(usage_export.cost_type = 'regular', usage_export.price.unit, NULL) AS PricingUnit,
  'Google' AS ProviderName,
  usage_export.transaction_type AS PublisherName,
  usage_export.location.region AS RegionId,
  usage_export.location.region AS RegionName,
  usage_export.service.id AS ResourceId,
  REGEXP_EXTRACT(usage_export.resource.global_name, r'[^/]+$') AS ResourceName,
  usage_export.sku.description AS ResourceType,
  COALESCE(servicemapping.string_field_1, 'Other') AS ServiceCategory,
  usage_export.service.description AS ServiceName,
  usage_export.sku.id AS SkuId,
  CONCAT("SKU ID:", usage_export.sku.id, ", Price Tier Start Amount: ", price.tier_start_amount) AS SkuPriceId,
  usage_export.project.id AS SubAccountId,
  usage_export.project.name AS SubAccountName,
  (SELECT CONCAT('{', STRING_AGG(FORMAT('%s:%s', kv.key, kv.value), ', '), '}')
   FROM (
     SELECT key, value FROM UNNEST(usage_export.project.labels)
     UNION ALL
     SELECT key, value FROM UNNEST(usage_export.tags)
     UNION ALL
     SELECT key, value FROM UNNEST(usage_export.labels)
   ) AS kv) AS Tags,
  FORMAT_DATE('%B', PARSE_DATE('%Y%m', invoice.month)) AS Month,
  usage_export.project.name AS x_ResourceGroupName,
  CAST(usage_export.export_time AS TIMESTAMP) AS export_time,
FROM usage_export
LEFT JOIN "Your-Service-Category-Id".ServiceCategory AS servicemapping
  ON usage_export.service.description = servicemapping.string_field_0
LEFT JOIN prices
  ON usage_export.sku.id = prices.sku_id
  AND usage_export.price.tier_start_amount = prices.start_usage_amount
  AND DATE(usage_export.export_time) = DATE(prices.export_time);

🔊 "This creates a FOCUS-aligned view of your billing data based on the FOCUS 1.0 specification; note that the view does not 100% conform to FOCUS 1.0." Create GCS Bucket for CSV Exports Your GCS bucket is where you will place your incremental exports to be transferred to Azure. Once your data is exported to Azure, you may elect to delete the files in the bucket; the metadata table keeps a record of the last export time.
Steps: Go to Cloud Storage > Create Bucket Name: focus-cost-export (or any name you would like) Region: Match your dataset region Storage Class: Nearline (cheaper to use Nearline, but Standard will also work just fine) Enable Interoperability settings > create access/secret key. The access key and secret are tied to your user account. If you want multiple people to be able to use this, create an access key for your service account (recommended for production); for the purposes of this guide, an access key linked to your account is fine. Save the access key and secret to a secure location to use later as part of the ADF pipeline setup. 🔊 “This bucket will store daily CSVs. Consider enabling lifecycle cleanup policies.” Create a Cloud Function for incremental export A cloud function is used here to enable incremental exports of your billing data in FOCUS format on a regular schedule. At present there is no known supported on-demand export service from Google, so we came up with this little workaround. The function evaluates your billing data for rows with an export time equal to or greater than the last time the function ran. To do this, we compare the export_time column against the last run time recorded in the metadata table. This ensures we only export the most recent billing data, which helps reduce data export costs to Azure. This process is done through the GCP console, using the inline editor to create the cloud function in the Cloud Run service.
Steps: Go to Cloud Run > Write Function Select > Use Inline editor to create a function Service Name: daily_focus_export Region: the same region as your dataset - in our demo case us-central1 Use settings: Runtime: Python 3.11 (you cannot use anything later than 3.11) Trigger: Optional Authentication: Require Authentication Billing: Request based Service Scaling: Auto-scaling set to 0 Ingress: All Containers: leave all settings as set Volumes: leave all settings as set Networking: leave all settings as set Security: choose the service account you created earlier Save Create main.py and requirements.txt files through the inline editor. For requirements.txt, copy and paste the below:

functions-framework==3.*
google-cloud-bigquery
google-cloud-storage

For main.py, your function entry point is: export_focus_data. Paste the code below into your main.py inline editor window:

import logging
import time
import json
from google.cloud import bigquery, storage

logging.basicConfig(level=logging.INFO)

def export_focus_data(request):
    bq_client = bigquery.Client()
    storage_client = storage.Client()

    project_id = "YOUR-ProjectID"
    dataset = "YOUR-Detailed-Export-Dataset"
    view = "YOUR-FOCUS-View-Name"
    metadata_table = "Your-Metadata-Table"
    job_name = "The name you want to call the job for the export"
    bucket_base = "gs://<your bucketname>/<your foldername>"
    bucket_name = "Your Bucket Name"
    metadata_file_path = "Your-Bucket-name/export_metadata.json"

    try:
        logging.info("🔁 Starting incremental export based on export_time...")

        # Step 1: Get last export_time from metadata
        metadata_query = f"""
            SELECT last_export_time
            FROM `{project_id}.{dataset}.{metadata_table}`
            WHERE job_name = '{job_name}'
            LIMIT 1
        """
        metadata_result = list(bq_client.query(metadata_query).result())
        if not metadata_result:
            return "No metadata row found. Please seed export_metadata.", 400

        last_export_time = metadata_result[0].last_export_time
        logging.info(f"📌 Last export_time from metadata: {last_export_time}")

        # Step 2: Check for new data
        check_data_query = f"""
            SELECT COUNT(*) AS row_count
            FROM `{project_id}.{dataset}.{view}`
            WHERE export_time >= TIMESTAMP('{last_export_time}')
        """
        row_count = list(bq_client.query(check_data_query).result())[0].row_count
        if row_count == 0:
            logging.info("✅ No new data to export.")
            return "No new data to export.", 204

        # Step 3: Get distinct export months
        folder_query = f"""
            SELECT DISTINCT FORMAT_DATETIME('%Y%m', BillingPeriodStart) AS export_month
            FROM `{project_id}.{dataset}.{view}`
            WHERE export_time >= TIMESTAMP('{last_export_time}')
              AND BillingPeriodStart IS NOT NULL
        """
        export_months = [row.export_month for row in bq_client.query(folder_query).result()]
        logging.info(f"📁 Exporting rows from months: {export_months}")

        # Step 4: Export data for each month
        for export_month in export_months:
            export_path = f"{bucket_base}/{export_month}/export_{int(time.time())}_*.csv"
            export_query = f"""
                EXPORT DATA OPTIONS(
                    uri='{export_path}',
                    format='CSV',
                    overwrite=true,
                    header=true,
                    field_delimiter=';'
                ) AS
                SELECT *
                FROM `{project_id}.{dataset}.{view}`
                WHERE export_time >= TIMESTAMP('{last_export_time}')
                  AND FORMAT_DATETIME('%Y%m', BillingPeriodStart) = '{export_month}'
            """
            bq_client.query(export_query).result()

        # Step 5: Get latest export_time from exported rows
        max_export_time_query = f"""
            SELECT MAX(export_time) AS new_export_time
            FROM `{project_id}.{dataset}.{view}`
            WHERE export_time >= TIMESTAMP('{last_export_time}')
        """
        new_export_time = list(bq_client.query(max_export_time_query).result())[0].new_export_time

        # Step 6: Update BigQuery metadata table
        update_query = f"""
            MERGE `{project_id}.{dataset}.{metadata_table}` T
            USING (
                SELECT '{job_name}' AS job_name,
                       TIMESTAMP('{new_export_time}') AS new_export_time
            ) S
            ON T.job_name = S.job_name
            WHEN MATCHED THEN
                UPDATE SET last_export_time = S.new_export_time
            WHEN NOT MATCHED THEN
                INSERT (job_name, last_export_time)
                VALUES (S.job_name, S.new_export_time)
        """
        bq_client.query(update_query).result()

        # Step 7: Write metadata JSON to GCS
        blob = storage_client.bucket(bucket_name).blob(metadata_file_path)
        blob.upload_from_string(
            json.dumps({"last_export_time": new_export_time.isoformat()}),
            content_type="application/json"
        )
        logging.info(f"📄 Metadata file written to GCS: gs://{bucket_name}/{metadata_file_path}")

        return f"✅ Export complete. Metadata updated to {new_export_time}", 200

    except Exception as e:
        logging.exception("❌ Incremental export failed:")
        return f"Function failed: {str(e)}", 500

Before saving, update the placeholder variables (project_id, dataset, view, metadata_table, job_name, bucket_base, bucket_name, metadata_file_path) with your own values. 🔊 “This function exports new billing data based on the last timestamp in your metadata table.” Test the Cloud Function Deploy and click Test Function Ensure a seed export_metadata.json file is uploaded to your GCS bucket (if you have not uploaded a seed export file, the function will not run). An example file is below:

{
  "last_export_time": "2024-12-31T23:59:59.000000+00:00"
}

Confirm new CSVs appear in your target folder Automate with Cloud Scheduler This step sets up an automated schedule to run your cloud function on a daily or hourly pattern. You may adjust the frequency to your desired schedule; this demo uses the same time each day at 11 PM. Steps: Go to Cloud Scheduler > Create Job Region: Same region as your Function - this demo uses us-central1 Name: daily-focus-export Frequency: 0 */6 * * * (every 6 hours) or Frequency: 0 23 * * * (daily at 11 PM) Time zone: your desired time zone Target: HTTP Auth: OIDC Token → Use service account Click CREATE 🔊 "This step automates the entire pipeline to run daily and keep downstream platforms updated with the latest billing data." Wrapping up Google setup Wow—there was a lot to get through!
But now that you’ve successfully enabled FOCUS-formatted exports in Google Cloud, you're ready for the next step: connecting Azure to your Google Cloud Storage bucket and ingesting the data into your FinOps hub instance. This is where everything comes together—enabling unified, multi-cloud reporting in your Hub across both Azure and Google Cloud billing data. Let’s dive into building the Data Factory pipeline and the associated datasets needed to fetch, transform, and load your Google billing exports. Connecting Azure to Google billing data With your Google Cloud billing data now exporting in FOCUS format, the next step is to bring it into your Azure environment for centralized FinOps reporting. Using Data Factory, we'll build a pipeline to fetch the CSVs from your Google Cloud Storage bucket, convert them to Parquet, and land them in your FinOps Hub storage account. Azure access and prerequisites Access to the Azure Portal Access to the resource group your hub has been deployed to Contributor access to your resource group At a minimum, the Storage Account Contributor and Storage Blob Data Owner roles An existing deployment of the Microsoft FinOps Hub Toolkit Admin rights on Azure Data Factory (or at least the Contributor role on ADF) Services and tools required Make sure the following Azure resources are in place: Azure Data Factory instance A linked Azure Storage Account (this is where FinOps Hub is expecting data) (Optional) Azure Key Vault for secret management Pipeline Overview We’ll be creating the following components in Azure Data Factory:
GCS Linked Service: Connects ADF to your Google Cloud Storage bucket
Azure Blob Linked Service: Connects ADF to your Hub’s ingestion container
Source Dataset: Reads CSV files from GCS
Sink Dataset: Writes Parquet files to Azure Data Lake (or Blob) in the Hub's expected format
Pipeline Logic: Orchestrates the copy activity, format conversion, and metadata updates
Secrets and authentication To connect securely from ADF to
GCS, you will need: The access key and secret from GCS Interoperability settings (created earlier) Store them securely in Azure Key Vault or in ADF pipeline parameters (if short-lived) 💡 Best Practice Tip: Use Azure Key Vault to securely store GCS access credentials and reference them from ADF linked services. This improves security and manageability over hardcoding values in JSON. Create Google Cloud storage billing data dataset Now that we're in Azure, it's time to connect to your Google Cloud Storage bucket and begin building your Azure Data Factory pipeline. Steps Launch Azure Data Factory Log in to the Azure Portal Navigate to your deployed Azure Data Factory instance Click “Author” from the left-hand menu to open the pipeline editor In the ADF Author pane, click the "+" (plus) icon next to Datasets Select “New dataset” Choose Google Cloud Storage as the data source Select CSV as the file format Set the Dataset Name, for example: gcs_focus_export_dataset Click “+ New” next to the Linked Service dropdown Enter a name like: GCS-Billing-LinkedService Under Authentication Type, choose: Access Key (for Interoperability access key) Or Azure Key Vault (recommended for secure credential storage) Fill in the following fields: Access Key: Your GCS interoperability key Secret: Your GCS interoperability secret Bucket Name: focus-cost-export (Optional) Point to a folder path like: focus-billing-data/ Click Test Connection to validate access Click Create Adjust Dataset Properties Now that the dataset has been created, make the following modifications to ensure proper parsing of the CSVs:
Setting: Value
Column delimiter: ; (semicolon)
Escape character: " (double quote)
You can find these under the “Connection” and “Schema” tabs of the dataset editor. If your connection fails, you may need to enable public access on your GCS bucket or check your firewall restrictions from Azure to the internet.
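For reference, the underlying JSON for a GCS linked service with the secret referenced from Key Vault typically resembles the sketch below. The linked service names, Key Vault reference, and secret name here are assumptions for illustration; compare against the JSON that ADF generates for you before reusing it.

```json
{
  "name": "GCS-Billing-LinkedService",
  "properties": {
    "type": "GoogleCloudStorage",
    "typeProperties": {
      "serviceUrl": "https://storage.googleapis.com",
      "accessKeyId": "YOUR-INTEROP-ACCESS-KEY",
      "secretAccessKey": {
        "type": "AzureKeyVaultSecret",
        "store": {
          "referenceName": "KeyVaultLinkedService",
          "type": "LinkedServiceReference"
        },
        "secretName": "gcs-interop-secret"
      }
    }
  }
}
```

Storing only the access key ID in the definition and resolving the secret at runtime keeps credentials out of source control when you export your factory as ARM templates.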
Create a Google Cloud Storage metadata dataset To support incremental loading, we need a dedicated dataset to read the export_metadata.json file that your Cloud Function writes to Google Cloud Storage. This dataset will be used by the Lookup activity in your pipeline to get the latest export timestamp from Google. Steps: In the Author pane of ADF, click "+" > Dataset > New dataset Select Google Cloud Storage as the source Choose JSON as the format Click Continue Configure the Dataset Setting Value Name gcs_export_metadata_dataset Linked Service Use your existing GCS linked service File path e.g., focus-cost-export/metadata/export_metadata.json Import schema Set to From connection/store or manually define if needed File pattern Set to single file (not folder) Create the sink dataset – "ingestion_gcp" Now that we’ve connected to Google Cloud Storage and defined the source dataset, it’s time to create the sink dataset. This will land your transformed billing data in Parquet format into your Azure FinOps Hub’s ingestion container. Steps: In Azure Data Factory, go to the Author section Under Datasets, click the "+" (plus) icon and select “New dataset” Choose Azure Data Lake Storage Gen2 (or Blob Storage, depending on your Hub setup) Select Parquet as the file format Click Continue Configure dataset properties Name: ingestion_gcp Linked Service: Select your existing linked service that connects to your FinOps Hub’s storage account. (this will have been created when you deployed the hub) File path: Point to the container and folder path where you want to store the ingested files (e.g., ingestion-gcp/cost/billing-provider/gcp/focus/) Optional Settings: Option Recommended Value Compression type snappy Once configured, click Publish All again to save your new sink dataset. Build the ADF pipeline for incremental import With your datasets created, the final step is building the pipeline logic that orchestrates the data flow. 
The goal is to only import newly exported billing data from Google, avoiding duplicates by comparing timestamps from both clouds. In this pipeline, we’ll: Fetch Google’s export timestamp from the JSON metadata file Fetch the last successful import time from the Hub’s metadata file in Azure Compare timestamps to determine whether there is new data to ingest If new data exists: Run the Copy Activity to fetch GCS CSVs, convert to Parquet, and write to the ingestion_gcp container Write an updated metadata file in the hub, with the latest import timestamp Pipeline components:
Lookup - GCS Metadata: Reads the last export time from Google's metadata JSON in GCS
Lookup - Hub Metadata: Reads the last import time from Azure’s metadata JSON in ADLS
If Condition: Compares timestamps to decide whether to continue with the copy
Copy Data: Transfers files from GCS (CSV) to ADLS (Parquet)
Set Variable: Captures the latest import timestamp
Web/Copy Activity: Writes the updated import timestamp JSON file to the ingestion_gcp container
What your pipeline will look like: Step-by-Step Create two lookup activities: GCS Metadata Hub Metadata Lookup1 – GCS Metadata Add a Lookup activity to your new pipeline Select the Source Dataset: gcs_export_metadata_dataset (or whatever you named it earlier) This lookup reads the export_metadata.json file created by your Cloud Function in GCS to get the last available export time.
Configuration view of lookup for GCS metadata file Lookup2 – Hub Metadata Add a Lookup activity and name it Hub Metadata Select the source dataset: adls_last_import_metadata This lookup reads the last import time from the hub metadata file to compare it to the last export time from GCS Configuration view of Metadata lookup activity Add conditional logic In this step we will add the conditional logic Add an activity called If Condition Add the expression below to the condition: Go to the Expression tab and paste the following into Dynamic Content: @greater(activity('Lookup1').output.last_export_time, activity('Lookup2').output.last_import_time) Configuration view of If Condition Activity Next, add a Copy activity (If Condition = True) Next to the True condition, select Edit and add a Copy Data activity Configure the activity with the following details:
Source Dataset: gcs_focus_export_dataset (CSV from GCS)
Sink Dataset: ingestion_gcp (Parquet to ADLS)
Merge Files: Enabled (reduce file count)
Filter Expression: @activity('Lookup1').output.firstRow.last_export_time
Ensure you add the filter expression for Filter by last modified. This is important: if you do not add the filter-by-last-modified expression, your pipeline will not function properly. Finally, we create an activity to update the metadata file in the hub Add another copy activity to your If Condition and ensure it is linked to the previous copy activity; this ensures it runs after the import activity is completed. Copy metadata activity settings:
Source: gcs_export_metadata_dataset
Sink: adls_last_import_dataset
Destination Path: ingestion_gcp/metadata/last_import.json
This step ensures the next run of your pipeline uses the updated import time to decide whether new data exists. Your pipeline now ingests only new billing data from Google and records each successful import to prevent duplicate processing.
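The comparison at the heart of the If Condition can be illustrated in plain Python: given the two metadata documents, the pipeline proceeds only when Google's export time is strictly newer than the Hub's last import time. This sketch is purely illustrative (it is not part of the pipeline); the field names match the metadata files created earlier in this guide.

```python
import json
from datetime import datetime

def should_import(gcs_metadata: str, hub_metadata: str) -> bool:
    """Return True when GCS holds data newer than the Hub's last import."""
    # Parse the two metadata JSON documents read by Lookup1 and Lookup2
    last_export = datetime.fromisoformat(json.loads(gcs_metadata)["last_export_time"])
    last_import = datetime.fromisoformat(json.loads(hub_metadata)["last_import_time"])
    # Mirrors @greater(last_export_time, last_import_time) in the If Condition
    return last_export > last_import

gcs = '{"last_export_time": "2025-01-15T23:00:00+00:00"}'
hub = '{"last_import_time": "2025-01-14T23:00:00+00:00"}'
print(should_import(gcs, hub))  # True -> run the copy activity
```

When the timestamps are equal or the Hub is newer, the function returns False and, in the pipeline, the False branch simply does nothing until the next trigger fires.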
Wrapping up – A unified FinOps reporting solution Congratulations — you’ve just built a fully functional multi-cloud FinOps data pipeline using Microsoft’s FinOps Hub Toolkit and Google Cloud billing data, normalized with the FOCUS 1.0 standard. By following this guide, you’ve: ✅ Enabled FOCUS billing exports in Google Cloud using BigQuery, GCS, and Cloud Functions ✅ Created normalized, FOCUS-aligned views to unify your GCP billing data ✅ Automated metadata tracking to support incremental exports ✅ Built an Azure Data Factory pipeline to fetch, transform, and ingest GCP data into your Hub ✅ Established a reliable foundation for centralized, multi-cloud cost reporting This solution brings side-by-side visibility into Azure and Google Cloud costs, enabling informed decision-making, workload optimization, and true multi-cloud FinOps maturity. Next steps 🔁 Schedule the ADF pipeline to run daily or hourly using triggers 📈 Build Power BI dashboards or use templates from the Microsoft FinOps Toolkit to visualize unified cloud spend 🧠 Extend to AWS by applying the same principles using the AWS FOCUS export and AWS S3 storage Feedback? Have questions or want to see more deep dives like this? Let me know — or connect with me if you’re working on FinOps, FOCUS, or multi-cloud reporting. This blog is just the beginning of what's possible. Managing Azure OpenAI costs with the FinOps toolkit and FOCUS: Turning tokens into unit economics
By Robb Dilallo Introduction As organizations rapidly adopt generative AI, Azure OpenAI usage is growing—and so are the complexities of managing its costs. Unlike traditional cloud services billed per compute hour or storage GB, Azure OpenAI charges based on token usage. For FinOps practitioners, this introduces a new frontier: understanding AI unit economics and managing costs where the consumed unit is a token. This article explains how to leverage the Microsoft FinOps toolkit and the FinOps Open Cost and Usage Specification (FOCUS) to gain visibility, allocate costs, and calculate unit economics for Azure OpenAI workloads. Why Azure OpenAI cost management is different AI services break many traditional cost management assumptions: Billed by token usage (input + output tokens). Model choices matter (e.g., GPT-3.5 vs. GPT-4 Turbo vs. GPT-4o). Prompt engineering impacts cost (longer context = more tokens). Bursty usage patterns complicate forecasting. Without proper visibility and unit cost tracking, it's difficult to optimize spend or align costs to business value. Step 1: Get visibility with the FinOps toolkit The Microsoft FinOps toolkit provides pre-built modules and patterns for analyzing Azure cost data. Key tools include: Microsoft Cost Management exports Export daily usage and cost data in a FOCUS-aligned format. FinOps hubs Infrastructure-as-Code solution to ingest, transform, and serve cost data. Power BI templates Pre-built reports conformed to FOCUS for easy analysis. Pro tip: Start by connecting your Microsoft Cost Management exports to a FinOps hub. Then, use the toolkit’s Power BI FOCUS templates to begin reporting. Learn more about the FinOps toolkit Step 2: Normalize data with FOCUS The FinOps Open Cost and Usage Specification (FOCUS) standardizes billing data across providers—including Azure OpenAI. 
FOCUS Column | Purpose | Azure Cost Management Field
ServiceName | Cloud service (e.g., Azure OpenAI Service) | ServiceName
ConsumedQuantity | Number of tokens consumed | Quantity
PricingUnit | Unit type, should align to "tokens" | DistinctUnits
BilledCost | Actual cost billed | CostInBillingCurrency
ChargeCategory | Identifies consumption vs. reservation | ChargeType
ResourceId | Links to specific deployments or apps | ResourceId
Tags | Maps usage to teams, projects, or environments | Tags
UsageType / Usage Details | Further SKU-level detail | Sku Meter Subcategory, Sku Meter Name
Why it matters: Azure’s native billing schema can vary across services and time. FOCUS ensures consistency and enables cross-cloud comparisons. Tip: If you use custom deployment IDs or user metadata, apply them as tags to improve allocation and unit economics. Review the FOCUS specification Step 3: Calculate unit economics Unit cost per token = BilledCost ÷ ConsumedQuantity Real-world example: Calculating unit cost in Power BI A recent Power BI report breaks down Azure OpenAI usage by: SKU Meter Category → e.g., Azure OpenAI SKU Meter Subcategory → e.g., gpt 4o 0513 Input global Tokens SKU Meter Name → detailed SKU info (input/output, model version, etc.)
GPT Model | Usage Type | Effective Cost
gpt 4o 0513 Input global Tokens | Input | $292.77
gpt 4o 0513 Output global Tokens | Output | $23.40
Unit Cost Formula: Unit Cost = EffectiveCost ÷ ConsumedQuantity Power BI Measure Example: Unit Cost = SUM(EffectiveCost) / SUM(ConsumedQuantity) Pro tip: Break out input and output token costs by model version to: Track which workloads are driving spend. Benchmark cost per token across GPT models. Attribute costs back to teams or product features using Tags or ResourceId.
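The same arithmetic can be sanity-checked outside Power BI. The sketch below uses the effective costs from the example above, but the token quantities are assumptions for illustration only; substitute your actual ConsumedQuantity values.

```python
# FOCUS-style rows: (SKU meter name, EffectiveCost, ConsumedQuantity)
# NOTE: token quantities below are assumed for illustration.
rows = [
    ("gpt 4o 0513 Input global Tokens", 292.77, 117_108_000),
    ("gpt 4o 0513 Output global Tokens", 23.40, 1_560_000),
]

# Unit cost per token = EffectiveCost ÷ ConsumedQuantity
unit_costs = {sku: cost / tokens for sku, cost, tokens in rows}

for sku, unit_cost in unit_costs.items():
    # Token unit costs are tiny, so report them per 1M tokens
    print(f"{sku}: ${unit_cost * 1_000_000:.2f} per 1M tokens")
```

Reporting per million tokens keeps the numbers readable and matches how model list prices are usually quoted, which makes it easy to compare your blended effective rate against the published rate card.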
Power BI tip: Building a GPT cost breakdown matrix To easily calculate token unit costs by GPT model and usage type, build a Matrix visual in Power BI using this hierarchy: Rows: SKU Meter Category SKU Meter Subcategory SKU Meter Name Values: EffectiveCost (sum) ConsumedQuantity (sum) Unit Cost (calculated measure) Unit Cost = SUM('Costs'[EffectiveCost]) / SUM('Costs'[ConsumedQuantity]) Hierarchy Example: Azure OpenAI ├── GPT 4o Input global Tokens ├── GPT 4o Output global Tokens ├── GPT 4.5 Input global Tokens └── etc. Power BI Matrix visual showing Azure OpenAI token usage and costs by SKU Meter Category, Subcategory, and Name. This breakdown enables calculation of unit cost per token across GPT models and usage types, supporting FinOps allocation and unit economics analysis. What you can see at the token level:
Metric | Description | Data Source
Token Volume | Total tokens consumed | Consumed Quantity
Effective Cost | Actual billed cost | BilledCost / Cost
Unit Cost per Token | Cost divided by token quantity | Effective Unit Price
SKU Category & Subcategory | Model, version, and token type (input/output) | Sku Meter Category, Subcategory, Meter Name
Resource Group / Business Unit | Logical or organizational grouping | Resource Group, Business Unit
Application | Application or workload responsible for usage | Application (tag)
This visibility allows teams to: Benchmark cost efficiency across GPT models. Track token costs over time. Allocate AI costs to business units or features. Detect usage anomalies and optimize workload design. Tip: Apply consistent tagging (Cost Center, Application, Environment) to Azure OpenAI resources to enhance allocation and unit economics reporting. How the FinOps Foundation’s AI working group informs this approach The FinOps for AI overview, developed by the FinOps Foundation’s AI working group, highlights unique challenges in managing AI-related cloud costs, including: Complex cost drivers (tokens, models, compute hours, data transfer).
Cross-functional collaboration between Finance, Engineering, and ML Ops teams. The importance of tracking AI unit economics to connect spend with value. By combining the FinOps toolkit, FOCUS-conformed data, and Power BI reporting, practitioners can implement many of the AI working group’s recommendations: Establish token-level unit cost metrics. Allocate costs to teams, models, and AI features. Detect cost anomalies specific to AI usage patterns. Improve forecasting accuracy despite AI workload variability. Tip: Applying consistent tagging to AI workloads (model version, environment, business unit, and experiment ID) significantly improves cost allocation and reporting maturity. Step 4: Allocate and report costs With FOCUS + FinOps toolkit: Allocate costs to teams, projects, or business units using Tags, ResourceId, or custom dimensions. Showback/Chargeback AI usage costs to stakeholders. Detect anomalies using the toolkit’s patterns or integrate with Azure Monitor. Tagging tip: Add metadata to Azure OpenAI deployments for easier allocation and unit cost reporting. Example:
tags:
  CostCenter: AI-Research
  Environment: Production
  Feature: Chatbot
Step 5: Iterate using FinOps best practices
FinOps capability | Relevance
Reporting & analytics | Visualize token costs and trends
Allocation | Assign costs to teams or workloads
Unit economics | Track cost per token or business output
Forecasting | Predict future AI costs
Anomaly management | Identify unexpected usage spikes
Start small (Crawl), expand as you mature (Walk → Run). Learn about the FinOps Framework Next steps Ready to take control of your Azure OpenAI costs? Deploy the Microsoft FinOps toolkit Start ingesting and analyzing your Azure billing data. Get started Adopt FOCUS Normalize your cost data for clarity and cross-cloud consistency. Explore FOCUS Calculate AI unit economics Track token consumption and unit costs using Power BI. Customize Power BI reports Extend toolkit templates to include token-based unit economics.
Join the conversation Share insights or questions with the FinOps community on TechCommunity or in the FinOps Foundation Slack. Advance your skills Consider the FinOps Certified FOCUS Analyst certification. Further reading Managing the cost of AI: Understanding AI workload cost considerations Microsoft FinOps toolkit Learn about FOCUS Microsoft Cost Management + Billing FinOps Foundation Appendix: FOCUS column glossary ConsumedQuantity: The number of tokens or units consumed for a given SKU. This is the key measure of usage. ConsumedUnit: The type of unit being consumed, such as 'tokens', 'GB', or 'vCPU hours'. Often appears as 'Units' in Azure exports for OpenAI workloads. PricingUnit: The unit of measure used for pricing. Should match 'ConsumedUnit', e.g., 'tokens'. EffectiveCost: Final cost after amortization of reservations, discounts, and prepaid credits. Often derived from billing data. BilledCost: The invoiced charge before applying commitment discounts or amortization. PricingQuantity: The volume of usage after applying pricing rules such as tiered or block pricing. Used to calculate cost when multiplied by unit price. Creating custom analyses and reports with FinOps hubs
The FinOps toolkit provides a set of standard Power BI reports that address key scenarios within the FinOps Framework, such as cost transparency and rate optimization. However, many organizations require more than just predefined reports. They often need the flexibility to build custom analyses, develop advanced reporting, and enrich FinOps data with internal data sources.

This post shows you how to:

- Build your own cost analyses using Kusto Query Language (KQL).
- Connect and enrich your FinOps data with additional sources in Azure Data Explorer (ADX) or Microsoft Fabric Real-Time Intelligence (RTI).

Common use cases

Organizations typically extend their FinOps reporting and analysis capabilities to:

- Allocate and charge back cloud costs – Design customized showback and chargeback models, refine cost allocation across business units, cost centers, and regions, and support internal billing and negotiation processes.
- Improve cost visibility and efficiency – Identify cost-saving opportunities, detect idle or over-provisioned resources, and analyze the financial impact of architectural changes such as serverless adoption or reserved instance utilization.
- Integrate cloud costs with business metrics – Combine cost data with key performance indicators (KPIs) such as cost per customer, cost per transaction, or revenue margins.
- Enable advanced forecasting and budgeting – Perform trend analysis, forecast future costs, and estimate budgets based on historical usage patterns.
- Support multi-cloud governance – Normalize and compare costs across multiple cloud providers for unified reporting and benchmarking.
- Implement proactive cost management – Set up custom anomaly detection mechanisms to monitor unexpected spending patterns.
- Create executive-level reporting – Build tailored KPIs and dashboards for C-level stakeholders, focusing on business-relevant insights.
Enabling advanced analysis with Azure Data Explorer and Microsoft Fabric

Since version 0.7 (released in November 2024), FinOps hubs, one of the tools in the FinOps toolkit, has included Azure Data Explorer (ADX) as its central data platform. Version 0.10 (released on May 4, 2025) added support for Microsoft Fabric Real-Time Intelligence (RTI), a version of ADX fully integrated into Microsoft Fabric.

ADX and RTI are high-performance, fully managed analytics services optimized for large-scale data exploration. They enable organizations to query, join, and analyze cloud cost and usage data at scale using the powerful Kusto Query Language (KQL). By leveraging ADX and RTI, FinOps hubs not only support standard reporting needs but also empower users to build advanced custom analyses, enrich datasets with internal or external sources, and integrate seamlessly with a variety of visualization and analytics tools.

Prerequisites

To follow the examples in this article, ensure you have:

- A deployed FinOps hub instance with Azure Data Explorer or Microsoft Fabric. See FinOps hubs.
- An Entra ID account with read access to both the Hub and Ingestion databases (Viewer role). For details, see Security roles.

Writing your first KQL query

Kusto Query Language (KQL) is the primary query language for Azure Data Explorer and Fabric Real-Time Intelligence. While KQL is conceptually similar to SQL or T-SQL, it is optimized for fast, flexible analysis across large datasets. The simplest way to execute a KQL query is through the Azure Data Explorer or Microsoft Fabric web UI. For an overview, see Azure Data Explorer web UI query overview or Query data in a Microsoft Fabric KQL queryset.

Steps to run a query

Whether your data is in Azure Data Explorer or Microsoft Fabric, you can use either tool to connect and run queries.

To use Azure Data Explorer:

1. Open the Azure Data Explorer web UI.
2. Add a connection to your Azure Data Explorer cluster. See add a cluster connection.
3. In the query editor, select the Hub database.

To use Fabric Real-Time Intelligence:

1. Open the desired workspace from Microsoft Fabric.
2. In the list of workspace resources, select Hub_queryset for your eventhouse.

Once you've connected to the Hub database, run the following query:

```kusto
Costs
| summarize Cost = sum(EffectiveCost) by BillingPeriodStart
| render columnchart
```

This query summarizes effective costs by billing period and renders the results as a column chart.

FOCUS compatibility

The Costs view in the Hub database is compatible with the FinOps Open Cost and Usage Specification (FOCUS), allowing you to align analyses with standardized FinOps use cases. While KQL is the primary language, Azure Data Explorer also supports a subset of T-SQL commands, enabling users familiar with SQL to write basic queries. For more information, see T-SQL – Azure Data Explorer.

The following example shows the different metered costs for a particular SKU. The SQL statement, taken from the FOCUS use case library, can be executed against the Costs view:

```sql
SELECT ProviderName, ChargePeriodStart, ChargePeriodEnd, SkuId, SkuPriceId,
       PricingUnit, ListUnitPrice,
       SUM(PricingQuantity) AS TotalPricingQuantity,
       SUM(ListCost) AS TotalListCost,
       SUM(EffectiveCost) AS TotalEffectiveCost
FROM Costs
WHERE ChargePeriodStart >= DATEADD(day, -30, GETDATE())
  AND ChargePeriodEnd < GETDATE()
GROUP BY ProviderName, ChargePeriodStart, ChargePeriodEnd, SkuId, SkuPriceId,
         PricingUnit, ListUnitPrice
ORDER BY ChargePeriodStart ASC
```

While Azure Data Explorer supports a subset of T-SQL commands for basic querying, KQL is the recommended language for advanced analysis.
It offers broader capabilities, is optimized for large-scale data exploration, and benefits from widespread adoption across Azure services such as Microsoft Defender, Azure Resource Graph, Azure Monitor, and Microsoft Fabric, meaning many cloud and security teams are already familiar with it.

Writing more advanced KQL queries

Basic KQL queries can be extended to address complex business requirements. For example:

```kusto
Costs
| where BillingPeriodStart >= datetime(2024-01-01) and BillingPeriodEnd < datetime(2024-04-01)
| summarize TotalEffectiveCost = sum(EffectiveCost) by ProviderName, BillingCurrency, BillingPeriodStart, ServiceCategory
| order by TotalEffectiveCost desc
```

Beyond the Costs view, FinOps hubs also include other views such as Prices, which can be queried individually or joined with other datasets using the join operator. Example queries are available in the FinOps toolkit best practice library.

Learn more about KQL

Expand your KQL skills with the following Microsoft Learn resources:

- Kusto Query Language overview
- KQL quick reference
- SQL to Kusto query translation

Visualizing FinOps data with KQL

Writing powerful queries is only the first step. Visualizing insights effectively is critical to drive action across the organization. Azure Data Explorer (ADX) and Microsoft Fabric Real-Time Intelligence offer a rich set of visualization options to address diverse FinOps needs, from portfolio managers to cloud engineers.

Microsoft Excel and Power Query

Microsoft Excel, enhanced with Power Query, is ideal for portfolio managers and procurement teams who want to integrate live FinOps data into their existing spreadsheets. Excel and Power Query enable users to refresh FinOps data seamlessly, perform custom reporting within familiar Excel environments, and support processes like budget planning, contract renewals, and forecast validation. Learn more about connecting Microsoft Excel to Azure Data Explorer.
Power BI

For FinOps teams, finance managers, and executive stakeholders, Power BI is a natural choice, especially when Power BI is already used across the organization. Power BI allows users to build FinOps KPIs, create management reports, and integrate cloud financials into existing corporate dashboards with ease, powered directly by Azure Data Explorer queries. Learn more about connecting Power BI to Azure Data Explorer.

Azure Data Explorer dashboards and KQL render visualizations

FinOps practitioners and cloud analysts often need lightweight, real-time visualizations to monitor specific cloud spend patterns or anomalies. Azure Data Explorer dashboards and the render operator allow users to quickly create ad-hoc dashboards, monitor commitment-based discounts like reservations and savings plans, or track the costs of specific services such as Azure OpenAI, Sentinel, or Azure Monitor, all without leaving Azure Data Explorer. Learn more about Azure Data Explorer dashboards and the KQL render operator.

Azure Workbooks

Azure Workbooks are ideal for cloud engineers and operations teams who work primarily within the Azure portal. Workbooks provide a flexible way to create new FinOps visualizations by combining cost data from Azure Data Explorer with operational telemetry and monitoring insights. Organizations can easily add their own custom workbooks to complement the existing FinOps workbooks. This approach helps teams extend their cost governance capabilities while maintaining consistency with the core FinOps Framework. Learn more about connecting Azure Workbooks to Azure Data Explorer.

Microsoft Fabric

Microsoft Fabric extends the FinOps visualization landscape by bringing together real-time analytics and traditional reporting within a unified platform.
FinOps teams can use Fabric to build Power BI reports for advanced financial insights and Real-Time Dashboards for live cost monitoring, whether data is in a Fabric eventhouse or a separate Azure Data Explorer cluster. Fabric also enables organizations to package multiple FinOps assets, such as reports and dashboards, into organizational apps. This allows teams to deliver comprehensive, role-specific FinOps experiences across the business, ensuring that portfolio managers, engineers, and executives can access the right insights in a single, integrated environment. Learn more about Microsoft Fabric org apps and Microsoft Fabric Real-Time Dashboards.

Automating FinOps insights with KQL

Beyond visualization, organizations can automate FinOps workflows and insights by executing KQL queries programmatically. Azure Data Explorer and Fabric Real-Time Intelligence support integration with automation tools across low-code platforms, DevOps pipelines, and data orchestration services.

Low-code automation: Power Automate and Azure Logic Apps

Using built-in connectors for Azure Data Explorer, FinOps teams can create automated workflows that:

- Execute KQL queries on a schedule or in response to events.
- Send notifications or alerts when cost anomalies are detected.
- Populate ticketing systems with cost optimization tasks.
- Trigger approval workflows for unusual spending patterns.

Learn more about Azure Data Explorer connectors.

DevOps integration: Azure DevOps Pipelines

KQL queries can also be integrated into Azure DevOps Pipelines using dedicated tasks. This enables:

- Running cost validation checks as part of CI/CD pipelines.
- Automatically generating FinOps reports during deployments.
- Enforcing cost governance as part of the delivery process.

Learn more about Azure DevOps integration with Azure Data Explorer.
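As a sketch of the kind of query such automations can run on a schedule, the following KQL uses the built-in `series_decompose_anomalies` function to flag days with unusual spend. The three-week window and the anomaly threshold of 1.5 are illustrative assumptions, not toolkit defaults:

```kusto
// Illustrative anomaly check over daily effective cost.
// Window size and threshold are assumptions to tune for your environment.
Costs
| where ChargePeriodStart >= ago(21d)
| make-series DailyCost = sum(EffectiveCost) default = 0.0
    on ChargePeriodStart from ago(21d) to now() step 1d
| extend (Anomalies, Scores, Baseline) = series_decompose_anomalies(DailyCost, 1.5)
| mv-expand ChargePeriodStart to typeof(datetime),
            DailyCost to typeof(real),
            Anomalies to typeof(int)
| where Anomalies != 0
| project ChargePeriodStart, DailyCost
```

A Logic App or Power Automate flow can run this query daily and post any returned rows to a ticketing system or chat channel.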
Data orchestration: Azure Data Factory and Microsoft Fabric pipelines

For more complex data processing and orchestration scenarios, Azure Data Factory and Microsoft Fabric pipelines allow you to execute KQL queries and commands natively. Typical use cases include:

- Periodically enriching FinOps datasets with external reference data.
- Automating ingestion of additional telemetry or metadata into a FinOps hub.
- Triggering FinOps analyses as part of broader data workflows.

Learn more about executing KQL commands in Azure Data Factory and about pipelines in Microsoft Fabric.

Integrating FinOps data across systems

Before moving into full-scale analysis and visualization, many organizations find value in enriching FinOps data by combining it with additional operational, telemetry, or security sources. Azure Data Explorer and Fabric Real-Time Intelligence provide powerful capabilities to integrate external datasets seamlessly, enabling FinOps teams to generate deeper insights, automate governance processes, and support data-driven decision-making across the business. This section outlines the main integration options and best practices for working with FinOps data in broader cloud and enterprise environments.

External tables

For FinOps analysts and data engineering teams, external tables offer a flexible way to reference additional datasets without the need for ingestion. External tables allow querying external data sources directly from Azure Data Explorer or Fabric Real-Time Intelligence, helping to avoid data duplication and minimizing ingestion costs. Typical use cases include:

- Joining FinOps cost data with organizational metadata, such as business unit mappings or account hierarchies.
- Referencing FinOps metadata, budget and forecast data, or other external business and technical datasets in real time.
- Combining FinOps data with datasets stored in data lakes or relational databases like SQL Server, MySQL, PostgreSQL, or Cosmos DB.
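As a hedged sketch of the first use case, the following commands define an external table over a hypothetical business-unit mapping file stored as CSV in blob storage and join it to cost data. The table name, columns, storage URI, and the use of `SubAccountId` as the subscription identifier are all illustrative assumptions to adapt to your environment:

```kusto
// Hypothetical external table over a CSV mapping file in blob storage.
// Replace the storage URI, schema, and authentication with your own.
.create external table BusinessUnitMap (SubscriptionId: string, BusinessUnit: string, CostCenter: string)
kind = storage
dataformat = csv
(
    h@'https://mystorageaccount.blob.core.windows.net/reference/business-units;managed_identity=system'
)
```

Once defined, the mapping can enrich cost data without any ingestion:

```kusto
// Attribute effective cost to business units via the external mapping.
Costs
| join kind=leftouter (external_table('BusinessUnitMap')) on $left.SubAccountId == $right.SubscriptionId
| summarize TotalEffectiveCost = sum(EffectiveCost) by BusinessUnit, CostCenter
```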
A practical example is the integration of blob inventory report data with storage cost data: by enabling blob inventory reports on a specific Azure Storage account, detailed metadata about stored blobs is automatically exported as CSV or Parquet files into a storage container. These files can then be referenced directly in Azure Data Explorer as an external table, allowing detailed cost analysis and storage optimization.

- Learn how to enable Azure Storage blob inventory reports.
- Learn more about external tables in Azure Data Explorer.
- Create an external table using the Azure Data Explorer web UI wizard.

Azure Monitor and Log Analytics integration

For cloud engineers, SRE teams, and FinOps practitioners, integrating Azure Monitor and Log Analytics unlocks new possibilities:

- Correlating cost data with operational telemetry such as CPU utilization, memory usage, request volumes, or storage IOPS.
- Attributing shared infrastructure costs based on actual resource usage patterns.
- Monitoring operational and financial efficiency side by side.

This bi-directional integration allows querying Azure Monitor data within Azure Data Explorer, and vice versa, enabling unified cost and performance analysis. A practical example: when virtual machines and managed disks are configured to send performance metrics (such as CPU utilization, memory usage, throughput, latency, and IOPS) to a Log Analytics workspace (specifically into the Perf table), this telemetry can be directly joined with cost data from the Costs view. This enables not only identifying downsizing opportunities but also calculating the potential savings associated with specific optimization actions.
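A minimal sketch of that Perf-to-Costs correlation is shown below. It queries the Log Analytics Perf table through the Azure Data Explorer proxy; the workspace URI is a placeholder, the `ServiceName` filter value is an assumption, and deriving the computer name from the last segment of ResourceId is a simplification that may need adjusting for your naming conventions:

```kusto
// Hedged sketch: flag low-utilization VMs with their monthly effective cost.
// <sub>, <rg>, and <workspace> are placeholders for your Log Analytics workspace.
let AvgCpu =
    cluster('https://ade.loganalytics.io/subscriptions/<sub>/resourcegroups/<rg>/providers/microsoft.operationalinsights/workspaces/<workspace>').database('<workspace>').Perf
    | where ObjectName == "Processor" and CounterName == "% Processor Time"
    | summarize AvgCpuPercent = avg(CounterValue) by Computer;
Costs
| where ServiceName == "Virtual Machines"
| extend Computer = tolower(tostring(split(ResourceId, "/")[-1]))
| summarize MonthlyCost = sum(EffectiveCost) by Computer
| join kind=inner (AvgCpu | extend Computer = tolower(Computer)) on Computer
| where AvgCpuPercent < 10
| project Computer, MonthlyCost, AvgCpuPercent
```

Rows returned here are candidates for downsizing, with MonthlyCost giving an upper bound on the potential savings.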
- Correlate data in Azure Data Explorer and Azure Resource Graph with data in a Log Analytics workspace.
- Query data in Azure Monitor with Azure Data Explorer.

Sentinel and security data integration

For security teams and cloud governance groups, integrating FinOps data with security telemetry can provide important financial insights. Since Microsoft Sentinel is built on top of a Log Analytics workspace, it is possible to directly correlate Sentinel cost data with Sentinel data ingestion metrics. This enables building showback models for shared security costs based on actual usage patterns and ingestion volumes, ensuring transparency and fair allocation of security-related cloud spend.

Best practices for FinOps data integration

Choosing the right integration method depends on your use case, data dynamics, and performance requirements. General recommendations:

Use external tables when:

- The data is relatively static or updated periodically (e.g., metadata mappings, budget forecasts, inventory reports).
- Real-time query performance is not critical.

Ingest data into Azure Data Explorer when:

- You need high-performance queries over frequently changing datasets.
- Large-scale enrichment with operational, security, or business data is required.
- Real-time cost monitoring, anomaly detection, or performance correlation is needed.

In practice, many organizations adopt a hybrid approach, using external tables for static reference data while ingesting dynamic datasets for optimized querying and analytics. Additionally, Azure Data Explorer makes it easy to copy data from an external table into a native table. This provides a simple yet powerful pattern for implementing ETL (Extract, Transform, Load) processes within ADX, allowing you to materialize reference data or periodically snapshot external datasets for performance optimization.
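A minimal sketch of that copy pattern uses the `.set-or-append` command, which creates the target table on first run and appends on subsequent runs. The external table name and columns here are hypothetical, standing in for any business-unit mapping you reference externally:

```kusto
// Materialize external reference data into a native table for faster joins.
// 'BusinessUnitMap' is a hypothetical external table of business-unit mappings.
.set-or-append BusinessUnitMapNative <|
    external_table('BusinessUnitMap')
    | extend SnapshotTime = now()  // stamp each snapshot for lineage
```

Scheduling this command from a Data Factory or Fabric pipeline turns it into a lightweight ETL step, trading a small amount of storage for consistently fast joins against cost data.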
How we put this into practice at Alescent

At Alescent, we have been helping organizations unlock the full potential of KQL for FinOps and advanced cloud cost optimization, from early adoption on Azure Data Explorer to fully integrated solutions on Microsoft Fabric. We have successfully implemented end-to-end FinOps capabilities, covering everything from cost transparency and allocation to forecasting, anomaly detection, and executive reporting. If you are interested in learning more today, feel free to connect with me on LinkedIn for a direct conversation.

Conclusion

With FinOps hubs leveraging Azure Data Explorer and Microsoft Fabric, organizations can move beyond basic cost reporting to build a fully integrated, data-driven FinOps practice. By combining flexible querying, advanced visualization, and seamless data integration, FinOps teams can deliver deeper insights, optimize cloud investments, and empower every stakeholder, from engineers to executives, to make smarter, financially informed decisions.

Next steps

Ready to take your FinOps practice to the next level?

- Explore FinOps hubs and learn about deployment options and capabilities.
- Deepen your KQL skills with the Kusto Query Language documentation.
- Start building your own custom cost analyses and dashboards, and unlock deeper insights into your cloud environment.

Microsoft at FinOps X 2025: Embracing FinOps in the era of AI
It's that time of year again! June 2nd is quickly approaching, and Microsoft is set to make a bold impact at FinOps X 2025, continuing our mission to empower organizations with financial discipline, cost optimization, and AI-driven efficiency in the cloud. As artificial intelligence reshapes the industry, we're excited to showcase how Microsoft's FinOps tools and services can help you boost productivity, streamline automation, and maximize cloud investments with innovative AI-powered capabilities.

Our presence at FinOps X goes beyond standard conference participation; it's an immersive experience designed to engage, inspire, and equip FinOps practitioners with actionable strategies. Here's what to expect:

Evening by the bay: A Microsoft experience

📍 Roy's Restaurant | 📅 June 2 | 🕖 7:00 PM – 9:00 PM

Back by popular demand, we hope you can join us where networking meets relaxation at Microsoft's signature welcome reception, an unforgettable evening at Roy's Restaurant. Enjoy handcrafted cocktails, live music, and gourmet food while mingling with FinOps leaders, practitioners, and Microsoft experts.

Immersive exhibit: FinOps in the era of AI

📍 Microsoft stage room | 📅 June 3–4 | 🕘 9:30 AM – 5:00 PM

On June 3rd and 4th, we encourage you to step into the Microsoft stage room, where we're debuting an interactive exhibit that explores the AI adoption journey. Participants will navigate through key phases, from foundations to design, management, and best practices, all rooted in the FinOps Framework. Gain insights into how Microsoft solutions can help drive cost efficiency and deliver value throughout your AI transformation.

Microsoft sessions: Insights and innovation

Microsoft will host 10 dedicated sessions throughout the conference, offering a mix of breakout sessions, expert Q&As, and, for the first time, a partner-focused discussion.
Our sessions will explore a wide range of topics, including:

- Best practices for maximizing cloud investments
- FinOps for AI and AI for FinOps
- FinOps strategies for SaaS and sustainability
- Governance, forecasting, benchmarking, and rate optimization
- Deep dives into Microsoft tools: Microsoft Cost Management, Microsoft Fabric, Microsoft Copilot in Azure, GitHub Copilot, Azure AI Foundry, Azure Marketplace, pricing offers, Azure Carbon Optimization, and the FinOps toolkit

Each session is designed to provide proven frameworks, real-world insights, and practical guidance to help organizations integrate financial discipline into their cloud and AI projects.

Join us at FinOps X 2025

Ready to dive into the latest FinOps strategies and AI-driven efficiencies? Register today to secure your spot at Microsoft's sessions and experiences! Space is limited, so don't miss this opportunity to connect, learn, and innovate with peers and experts.

👉 Explore and register for Microsoft sessions (https://aka.ms/finops/x)

Stay tuned for more updates as we gear up for an exciting week of learning, networking, and innovation with the FinOps community. We hope you can join us in San Diego from June 2–5, where the future of FinOps meets the limitless possibilities of AI. Microsoft is all in, and we can't wait to see you there!

What's new in FinOps toolkit 0.10 – April 2025
In April, the FinOps toolkit 0.10 introduced support for Microsoft Fabric Real-Time Intelligence in FinOps hubs, support for Azure Gov and Azure China in both FinOps hubs and Power BI reports, FinOps Framework 2025 updates, and more!