Microsoft Fabric
Decision Guide for Selecting an Analytical Data Store in Microsoft Fabric
Learn how to select an analytical data store in Microsoft Fabric based on your workload's data volumes, data type requirements, compute engine preferences, data ingestion patterns, data transformation needs, query patterns, and other factors.

Approaches to Integrating Azure Databricks with Microsoft Fabric: The Better Together Story!
Azure Databricks and Microsoft Fabric can be combined to create a unified and scalable analytics ecosystem. This document outlines eight distinct integration approaches, each accompanied by step-by-step implementation guidance and key design considerations. These methods are not prescriptive: your cloud architecture team can choose the integration strategy that best aligns with your organization's governance model, workload requirements, and platform preferences. Whether you prioritize centralized orchestration, direct data access, or seamless reporting, the flexibility of these options allows you to tailor the solution to your specific needs.

Azure Databricks & Fabric Disaster Recovery: The Better Together Story
Authors: Amudha Palani, Eric Kwashie, Peter Lo, and Rafia Aqil

Disaster recovery (DR) is a critical component of any cloud-native data analytics platform, ensuring business continuity even during rare regional outages caused by natural disasters, infrastructure failures, or other disruptions.

Identify Business Critical Workloads

Before designing any disaster recovery strategy, organizations must first identify which workloads are truly business-critical and require regional redundancy. Not all Databricks or Fabric processes need full DR protection; instead, customers should evaluate the operational impact of downtime, data freshness requirements, regulatory obligations, SLAs, and dependencies across upstream and downstream systems. By classifying workloads into tiers and aligning DR investments accordingly, customers ensure they protect what matters most without over-engineering the platform.

Azure Databricks

Azure Databricks requires a customer-driven approach to disaster recovery, where organizations are responsible for replicating workspaces, data, infrastructure components, and security configurations across regions.

Full System Failover (Active-Passive) Strategy

A comprehensive approach that replicates all dependent services to the secondary region. Implementation requirements include:

Infrastructure Components:
- Replicate Azure services (ADLS, Key Vault, SQL databases) using Terraform
- Deploy network infrastructure (subnets) in the secondary region
- Establish data synchronization mechanisms

Data Replication Strategy:
- Use Deep Clone for Delta tables rather than geo-redundant storage
- Implement periodic synchronization jobs using Delta's incremental replication
- Measure data transfer results using time travel syntax

Workspace Asset Synchronization:
- Co-deploy cluster configurations, notebooks, jobs, and permissions using CI/CD
- Utilize Terraform and SCIM for identity and access management
- Keep job concurrencies at zero in the secondary region to prevent execution

Fully Redundant (Active-Active) Strategy

The most sophisticated approach, where all transactions are processed in multiple regions simultaneously. While providing maximum resilience, this strategy:
- Requires complex data synchronization between regions
- Incurs the highest operational costs due to duplicate processing
- Is typically needed only for mission-critical workloads with zero tolerance for downtime
- Can be implemented as partial active-active, processing most workloads in the primary region with a subset in the secondary

Enabling Disaster Recovery

- Create a secondary workspace in a paired region.
- Use CI/CD to keep workspace assets synchronized continuously:

| Requirement | Approach | Tools |
| --- | --- | --- |
| Cluster Configurations | Co-deploy to both regions as code | Terraform |
| Code (Notebooks, Libraries, SQL) | Co-deploy with CI/CD pipelines | Git, Azure DevOps, GitHub Actions |
| Jobs | Co-deploy with CI/CD, set concurrency to zero in secondary | Databricks Asset Bundles, Terraform |
| Permissions (Users, Groups, ACLs) | Use IdP/SCIM and infrastructure as code | Terraform, SCIM |
| Secrets | Co-deploy using secret management | Terraform, Azure Key Vault |
| Table Metadata | Co-deploy with CI/CD workflows | Git, Terraform |
| Cloud Services (ADLS, Network) | Co-deploy infrastructure | Terraform |

- Update your orchestrator (ADF, Fabric pipelines, etc.) to include a simple region toggle to reroute job execution.
- Replicate all dependent services (Key Vault, storage accounts, SQL DB).
- Implement Delta Deep Clone synchronization jobs to keep datasets continuously aligned between regions.
- Introduce an application-level "Sync Tool" that redirects data ingestion and compute execution.
- Enable parallel processing in both regions for selected or all workloads.
- Use bi-directional synchronization for Delta data to maintain consistency across regions.
- For performance and cost control, run most workloads in the primary region and only a subset in the secondary to keep it warm.

Implement a Three-Pillar DR Design

Primary Workspace: Your production Databricks environment running normal operations.
Secondary Workspace: A standby Databricks workspace in a different (paired) Azure region that remains ready to take over if the primary fails.

This architecture ensures business continuity while optimizing costs by keeping the secondary workspace dormant until needed. The DR solution is built on three fundamental pillars that work together to provide comprehensive protection:

1. Infrastructure Provisioning (Terraform)

The infrastructure layer creates and manages all Azure resources required for disaster recovery using Infrastructure as Code (Terraform). What it creates:
- Secondary Resource Group: A dedicated resource group in your paired DR region (e.g., if primary is in East US, secondary might be in West US 2)
- Secondary Databricks Workspace: A standby Databricks workspace with the same SKU as your primary, ready to receive failover traffic
- DR Storage Account: An ADLS Gen2 storage account that serves as the backup destination for your critical data
- Monitoring Infrastructure: Azure Monitor Log Analytics workspace and alert action groups to track DR health
- Protection Locks: Management locks to prevent accidental deletion of critical DR resources

Key Design Principle: The Terraform configuration references your existing primary workspace without modifying it. It only creates new resources in the secondary region, ensuring your production environment remains untouched during setup.

2. Data Synchronization (Delta Notebooks)

The data synchronization layer ensures your critical data is continuously backed up to the secondary region. How it works: the solution uses a Databricks notebook that runs in your primary workspace on a scheduled basis. This notebook:
- Connects to Backup Storage: Uses Unity Catalog with Azure Managed Identity for secure, credential-free authentication to the secondary storage account
- Identifies Critical Tables: Reads from a configuration list you define (sales data, customer data, inventory, financial transactions, etc.)
- Performs Deep Clone: Uses Delta Lake's native CLONE functionality to create exact copies of your tables in the backup storage (a minimal sketch of such a sync step follows at the end of this section)
- Tracks Sync Status: Logs each synchronization operation, tracks row counts, and reports on data freshness

Authentication Flow: The synchronization process leverages Unity Catalog's managed identity capabilities:
1. An existing Access Connector for Unity Catalog is granted "Storage Blob Data Contributor" permissions on the backup storage.
2. Storage credentials are created in Databricks that reference this Access Connector.
3. The notebook uses these credentials transparently; no storage keys or secrets are required.

What Gets Synced: You define which tables are critical to your business operations. The notebook creates backup copies including:
- Full table data and schema
- Table partitioning structure
- Delta transaction logs for point-in-time recovery

3. Failover Automation (Python Scripts)

The failover automation layer orchestrates the switch from primary to secondary workspace when disaster strikes.
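The data synchronization pillar above boils down to repeatedly deep cloning a list of critical tables into the DR storage account. Below is a minimal sketch of such a scheduled sync cell; the table names and backup path are hypothetical placeholders, and a real solution would add the status logging described above.

```python
# Minimal sketch of a Delta Deep Clone sync step (hypothetical table names and path).
# Assumes it runs as a scheduled notebook in the primary workspace, with Unity Catalog
# storage credentials already granting access to the DR storage account.

critical_tables = ["sales.orders", "sales.customers", "finance.transactions"]  # placeholder list
backup_root = "abfss://dr-backup@drstorage.dfs.core.windows.net/delta"         # placeholder path

for table in critical_tables:
    target_path = f"{backup_root}/{table.replace('.', '/')}"
    # DEEP CLONE copies data files plus metadata; re-running it refreshes the clone.
    spark.sql(f"CREATE OR REPLACE TABLE delta.`{target_path}` DEEP CLONE {table}")
    synced_rows = spark.table(table).count()
    print(f"Synced {table} -> {target_path} ({synced_rows} rows)")
```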
Microsoft Fabric

Microsoft Fabric provides built-in disaster recovery capabilities designed to keep analytics and Power BI experiences available during regional outages. Fabric simplifies continuity for reporting workloads, while still requiring customer planning for deeper data and workload replication.

Power BI Business Continuity

Power BI, now integrated into Fabric, provides automatic disaster recovery as a default offering:
- No opt-in required: DR capabilities are automatically included.
- Azure storage geo-redundant replication: Ensures backup instances exist in other regions.
- Read-only access during disasters: Semantic models, reports, and dashboards remain accessible.
- Always supported: BCDR for Power BI remains active regardless of the OneLake DR setting.

Microsoft Fabric

Fabric's cross-region DR uses a shared responsibility model between Microsoft and customers:

Microsoft's responsibilities:
- Ensure baseline infrastructure and platform services availability
- Maintain Azure regional pairings for geo-redundancy
- Provide DR capabilities for Power BI by default

Customer responsibilities:
- Enable disaster recovery settings for capacities
- Set up secondary capacity and workspaces in paired regions
- Replicate data and configurations

Enabling Disaster Recovery

Organizations can enable BCDR through the Admin portal under Capacity settings:
1. Navigate to Admin portal → Capacity settings
2. Select the appropriate Fabric capacity
3. Access the Disaster Recovery configuration
4. Enable the disaster recovery toggle

Critical timing considerations:
- 30-day minimum activation period: Once enabled, the setting remains active for at least 30 days and cannot be reverted.
- 72-hour activation window: Initial enablement can take up to 72 hours to become fully effective.

Azure Databricks & Microsoft Fabric DR Considerations

Building a resilient analytics platform requires understanding how disaster recovery responsibilities differ between Azure Databricks and Microsoft Fabric. While both platforms operate within Azure's regional architecture, their DR models, failover behaviors, and customer responsibilities are fundamentally different.

Recovery Procedures

| Procedure | Databricks | Fabric |
| --- | --- | --- |
| Failover | Stop workloads, update routing, resume in secondary region. | Microsoft initiates failover; customers restore services in DR capacity. |
| Restore to Primary | Stop secondary workloads, replicate data/code back, test, resume production. | Recreate workspaces and items in new capacity; restore Lakehouse and Warehouse data. |
| Asset Syncing | Use CI/CD and Terraform to sync clusters, jobs, notebooks, permissions. | Use Git integration and pipelines to sync notebooks and pipelines; manually restore Lakehouses. |

Business Considerations

| Consideration | Databricks | Fabric |
| --- | --- | --- |
| Control | Customers manage DR strategy, failover timing, and asset replication. | Microsoft manages failover; customers restore services post-failover. |
| Regional Dependencies | Must ensure secondary region has sufficient capacity and services. | DR only available in Azure regions with Fabric support and paired regions. |
| Power BI Continuity | Not applicable. | Power BI offers built-in BCDR with read-only access to semantic models and reports. |
| Activation Timeline | Immediate upon configuration. | DR setting takes up to 72 hours to activate; 30-day wait before changes allowed. |
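On the Databricks side, the "simple region toggle" mentioned earlier does not need to be elaborate: jobs and orchestrators can read a single parameter that resolves to the workspace URL and storage paths of the active region. A minimal sketch, with hypothetical URLs and container names:

```python
# Hypothetical region-toggle helper. An orchestrator (ADF, Fabric pipelines, etc.)
# passes active_region as a job/notebook parameter to reroute execution during failover.

REGION_CONFIG = {
    "primary": {
        "workspace_url": "https://adb-1111111111111111.11.azuredatabricks.net",
        "storage_root": "abfss://lake@primarystorage.dfs.core.windows.net",
    },
    "secondary": {
        "workspace_url": "https://adb-2222222222222222.22.azuredatabricks.net",
        "storage_root": "abfss://lake@drstorage.dfs.core.windows.net",
    },
}

def resolve_region(active_region: str = "primary") -> dict:
    """Return workspace and storage settings for the currently active region."""
    if active_region not in REGION_CONFIG:
        raise ValueError(f"Unknown region '{active_region}'")
    return REGION_CONFIG[active_region]

# Example: a notebook builds its paths from the toggle instead of hard-coding a region.
settings = resolve_region("primary")
orders_path = f"{settings['storage_root']}/delta/sales/orders"
```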
Explore Microsoft Fabric Data Agent & Azure AI Foundry for agentic solutions

Contributors for this blog post: Jeet J & Ritaja S

Context & Objective

Over the past year, Gen AI apps have expanded significantly across enterprises. The agentic AI era is here, and the Microsoft ecosystem helps enable end-to-end acceleration of agentic AI apps in production. In this blog, we'll cover how both low-code business analysts and pro-code developers can use the Microsoft stack to build reusable agentic apps for their organizations.

Professionals in the Microsoft ecosystem are starting to build advanced agentic generative AI solutions using Microsoft AI Services and Azure AI Foundry, which supports both open-source and industry models. Combined with the advancements in Microsoft Fabric, these tools enable robust, industry-specific applications. This blog post explains how to develop multi-agent solutions for various industries using Azure AI Foundry, Copilot Studio, and Fabric.

Disclaimer: This blog post is for educational purposes only and walks through the process of using the relevant services without a ton of custom code; teams must follow engineering best practices, including development, automated deployment, testing, security, and responsible AI, before any production deployment.

What to expect

In-Focus: Our goal is to help the reader explore specific industry use cases and understand the concept of building multi-agent solutions. In our case, we will focus on an insurance and financial services use case, use Fabric Notebooks to create sample (fake) datasets, utilize a simple click-through workflow to build and configure three agents (both on Fabric and Azure AI Foundry), tie them together, and offer the solution via Teams or Microsoft Copilot using the new M365 Agents Toolkit.

Out-of-Focus: This blog post will not cover the fundamentals of Azure AI Services, Azure AI Foundry, or the various components of Microsoft Fabric. It also won't cover the different ways (low-code or pro-code) to build agents, orchestration frameworks (Semantic Kernel, LangChain, AutoGen, etc.) for orchestrating the agents, or hosting options (Azure App Service – Web App, Azure Kubernetes Service, Azure Container Apps, Azure Functions). Please review the pointers listed towards the end to gain a holistic understanding of building and deploying mission-critical generative AI solutions.

Logical Architecture of Multi-Agent Solution utilizing Microsoft Fabric Data Agent and Azure AI Foundry Agent.

Fabric & Azure AI Foundry – Pro-code Agentic path

Prerequisites

a. Access to an Azure tenant and subscription.
b. Work with your Azure tenant administrator to have the appropriate Azure roles and capacity to provision Azure resources (services) and deploy AI models in certain regions.
c. A paid F2 or higher Fabric capacity resource. Important to note: the Fabric compute capacity can be paused and resumed. Pause it if you wish to save costs after your learning.
d. Access the Fabric Admin Portal (Power BI) to enable these settings:
   - Fabric data agent tenant setting is enabled.
   - Copilot tenant switch is enabled.
   - Optional: Cross-geo processing for AI is enabled (depends on your region and data sovereignty requirements).
   - Optional: Cross-geo storing for AI is enabled (depends on your region).
e. At least one of these: Fabric Data Warehouse, Fabric Lakehouse, one or more Power BI semantic models, or a KQL database with data. This blog post will cover how to create sample datasets using Fabric Notebooks.
f. Power BI semantic models via XMLA endpoints tenant switch is enabled for Power BI semantic model data sources.
Walkthrough/Set-up

1. One-time setup for Fabric workspace and all agents
   a) Visit https://app.powerbi.com. Click the "New workspace" button to create a new workspace, give it a name, and ensure that it is tied/associated to the Fabric capacity you have provisioned.
   b) Click Workspace settings of the newly created workspace.
   c) Review the information in License info. If the workspace isn't associated with your newly created Fabric capacity, please do the proper association (link the Fabric capacity to the workspace) and wait for 5-10 minutes.

2. Create an Insurance Agent
   a) Create a new Lakehouse in your Fabric workspace. Change the name to InsuranceLakehouse.
   b) Create a new Fabric Notebook, assign a name, and associate the InsuranceLakehouse with it.
   c) Add the following PySpark (Python) code snippets in the Notebook:
      i) Faker library for Fabric Notebook
      ii) Insurance sample dataset in Fabric Notebook
      iii) Run both cells to generate the sample insurance dataset. (A hedged sketch of what these snippets might look like appears after step (e) below.)
   d) Create a new Fabric data agent, give it a name, and add the data source (InsuranceLakehouse).
      i) Ensure that the InsuranceLakehouse is added as the data source in the Insurance Fabric data agent.
      ii) Click the AI instructions button first to paste the sample instructions and finally the Publish button.
      iii) Paste the sample instructions in the field: "A churn is represented by the number 1. Calculate churn rate as: total number of churns (TC) / total count of churn entries (TT). When asked to calculate churn for each policy type, then TT should be the total count of churn entries of that policy type, e.g. Life, Legal."
      iv) Make sure to hit the Publish button.
      v) Capture two values from the URL and store them in a secure/private place. We will use them to configure the knowledge source in the Azure AI Foundry agent: https://app.powerbi.com/groups/<WORKSPACE-ID>/aiskills/<ARTIFACT-ID>?experience=power-bi
   e) Create an Azure AI agent on Azure AI Foundry and use the Fabric data agent as the knowledge source.
      i) Visit https://ai.azure.com
      ii) Create a new Azure AI project and deploy the gpt-4o-mini model (as an example model) in a region where the model is available.
      iii) Create a new Azure AI Foundry agent by clicking the "New agent" button. Give it a name (e.g. AgentforInsurance).
      iv) Paste the sample instructions in the Azure AI agent as follows: "Use the Fabric data agent tool to answer questions related to Insurance data. The Fabric data agent as a tool has access to insurance data tables: Claims (amount, date, status), Customer (age, address, occupation, etc.), Policy (premium amount, policy type: life insurance, auto insurance, etc.)"
      v) On the right-hand pane, click the "+Add" button next to Knowledge.
      vi) Choose an existing Fabric data agent connection or click "new Connection".
      vii) In the next dialog, plug in the values of the Workspace and Artifact ID you captured above, ensure that "is secret" is checked, give a name to the connection, and hit the Connect button.
      viii) Add the Code Interpreter as a tool in the Azure AI Foundry agent by clicking "+Add" next to Actions and selecting Code Interpreter.
      ix) Test the agent by clicking the "Try in playground" button.
      x) To test the agent, you can try out these sample questions:
         - What is the churn rate across my different insurance policy types?
         - What's the month over month claims change in % for each insurance type?
         - Show me a graph view of month over month claims change in % for each insurance type for the year 2025 only.
         - Based on month over month claims change for the year 2025, can you show the forecast for the next 3 months in a graph?
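Before moving on to exposing the agent (step f), here is a hedged sketch of what the snippets referenced in step 2(c) might look like: Faker generates fake records and the result is saved as a Delta table in the attached InsuranceLakehouse. The original walkthrough builds separate Claims, Customer, and Policy tables; this sketch collapses them into one illustrative table with made-up column names and values.

```python
# Cell 1 (run in its own cell): install the Faker library into the notebook session.
# %pip install faker

# Cell 2: generate a small fake insurance dataset and save it to the lakehouse.
import random
from faker import Faker
from pyspark.sql import Row

fake = Faker()
random.seed(42)

rows = [
    Row(
        customer_id=i,
        customer_name=fake.name(),
        age=random.randint(18, 85),
        policy_type=random.choice(["Life", "Auto", "Home", "Legal"]),
        premium_amount=round(random.uniform(200.0, 5000.0), 2),
        claim_amount=round(random.uniform(0.0, 20000.0), 2),
        claim_date=fake.date_between(start_date="-2y", end_date="today").isoformat(),
        churn=random.choice([0, 1]),  # 1 represents churn, matching the AI instructions above
    )
    for i in range(1000)
]

df = spark.createDataFrame(rows)
# Writes a Delta table into the default lakehouse attached to the notebook.
df.write.mode("overwrite").format("delta").saveAsTable("insurance_policies")
display(df.limit(5))
```

Once the table exists, the Fabric data agent in step 2(d) can be pointed at it just like the tables created by the original snippets.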
f) Exposing the Fabric data agent to end users: we will explore this in the Copilot Studio section.

Fabric & Copilot Studio – Low-code agentic path

Prerequisites

For Copilot Studio, you have three options to work with: a Copilot Studio trial, a Copilot Studio license, or Copilot Studio pay-as-you-go (connected to your Azure billing). Follow the steps here to set up: Copilot Studio licensing - Microsoft Copilot Studio | Microsoft Learn. Once you have Copilot Studio set up, navigate to https://copilotstudio.microsoft.com/ and start creating a new agent: click on "Create" and then "New Agent".

Walkthrough/Set-up

Follow the steps from the "Fabric & Azure AI Foundry" section to create the Fabric Lakehouse and the Fabric data agent. You could create the new agent by describing it step by step in natural language, but for this exercise we will use the "Skip to configure" button. Give the agent a name and add a helpful description (suggestion: "Agent that has access to Insurance data and helps with data-driven insights from Policy, Claims and Customer information"). Then add the agent instruction prompt: "Answer questions related to Insurance data, you have access to insurance data agent, use the agent to gather insights from insurance Lakehouse containing customer, policy and claim information." Finally click on "Create". You should have the following setup like below:

Next, we want to add another agent for this agent to use; in this case this will be our Fabric data agent. Click on "Agents". Next click on "Add" and add an agent. From the screen click on Microsoft Fabric. If you haven't set up a connection to Fabric from Copilot Studio, you will be prompted to create a new connection; follow the prompts to sign in with your user and add a connection. Once that is done click "Next" to continue. From the Fabric screen, select the appropriate data agent and click "Next". On the next screen, name the agent appropriately, use a friendly description ("Agent that answers questions from the insurance lakehouse knowledge, has access to claims, policies and customer information"), and finally click on "Add Agent". On the "Tools" section, click on refresh to make sure the tool description populates.

Finally, go back to Overview and start testing the agent from the side Test panel. Click on Activity map to see the sequence of events. Type in the following question: "What's the month over month claims change in % for auto insurance?" You can see the Fabric data agent is called by the Copilot agent in this scenario to answer your question.

Now let's prepare to surface this through Teams. You will need to publish the agent to a channel (in this case, we will use the Teams channel). First, navigate to Channels. Click on the Teams and M365 Copilot channel and click Add. After adding the channel, a pop-up will ask you if you are ready to publish the agent. To view the app in Teams you need to make sure that you have set up proper policies in Teams; follow this tutorial: Connect and configure an agent for Teams and Microsoft 365 Copilot - Microsoft Copilot Studio | Microsoft Learn. Now your app is available across Teams. Below is an example of how to use it from Teams; make sure you click on "Allow" for the Fabric data agent connection.

Fabric & AI Foundry & Copilot Studio – the end to end

We saw how Fabric data agents can be created and utilized in Copilot Studio in a multi-agent setup. In the future, pro-code and low-code agentic developers are expected to work together to create agentic apps, instead of in silos.
So, how do we solve the challenge of connecting all the components together in the same technology stack? Let's say a pro-code developer has created a custom agent in AI Foundry. Meanwhile, a low-code business user has put in business context to create another agent that requires access to the agent in AI Foundry. You'll be pleased to know that Copilot Studio and Azure AI Foundry are becoming more integrated to enable complex, custom scenarios; Copilot Studio will soon release the integration to help with this.

Summary

We demonstrated how one can build a Gen AI solution that allows seamless integration between Azure AI Foundry agents and Fabric data agents. We look forward to seeing what innovative solutions you can build by learning and working closely with your Microsoft contacts or your SI partner. This may include, but is not limited to:
- Utilizing a real industry domain to illustrate the concepts of building a simple multi-agent solution.
- Showcasing the value of combining the Fabric data agent and the Azure AI agent.
- Demonstrating how one can publish the conceptual solution to Teams or Copilot using the new M365 Agents Toolkit.

Note that this blog post focused only on the Fabric data agent and Azure AI Foundry Agent Service, but production-ready solutions will need to consider Azure Monitor (for monitoring and observability) and Microsoft Purview (for data governance).

Pointers to Other Learning Resources

Ways to build AI agents:
- Build agents, your way | Microsoft Developer

Components of Microsoft Fabric:
- What is Microsoft Fabric - Microsoft Fabric | Microsoft Learn

Info on Microsoft Fabric data agent:
- Create a Fabric data agent (preview) - Microsoft Fabric | Microsoft Learn
- Fabric Data Agent Tutorial: Fabric data agent scenario (preview) - Microsoft Fabric | Microsoft Learn
- New in Fabric Data Agent: Data source instructions for smarter, more accurate AI responses | Microsoft Fabric Blog | Microsoft Fabric

Info on Azure AI Services, Models and Azure AI Foundry:
- Azure AI Foundry documentation | Microsoft Learn
- What are Azure AI services? - Azure AI services | Microsoft Learn
- How to use Azure AI services in Azure AI Foundry portal - Azure AI Services | Microsoft Learn
- What is Azure AI Foundry Agent Service? - Azure AI Foundry | Microsoft Learn
- Explore Azure AI Foundry Models - Azure AI Foundry | Microsoft Learn
- What is Azure OpenAI in Azure AI Foundry Models? | Microsoft Learn
- Explore model leaderboards in Azure AI Foundry portal - Azure AI Foundry | Microsoft Learn & Benchmark models in the model leaderboard of Azure AI Foundry portal - Azure AI Foundry | Microsoft Learn
- How to use model router for Azure AI Foundry (preview) | Microsoft Learn
- Observability in Generative AI with Azure AI Foundry - Azure AI Foundry | Microsoft Learn
- Trustworthy AI for Azure AI Foundry - Azure AI Foundry | Microsoft Learn
- Cost Management for Models: Plan to manage costs for Azure AI Foundry Models | Microsoft Learn
- Provisioned Throughput Offering: Provisioned throughput for Azure AI Foundry Models | Microsoft Learn

Extend Azure AI Foundry Agent with Microsoft Fabric:
- Expand Azure AI Agent with New Knowledge Tools: Microsoft Fabric and Tripadvisor | Microsoft Community Hub
- How to use the data agents in Microsoft Fabric with Azure AI Foundry Agent Service - Azure AI Foundry | Microsoft Learn

General guidance on when to adopt, extend and build Copilot experiences:
- Adopt, extend and build Copilot experiences across the Microsoft Cloud | Microsoft Learn

M365 Agents Toolkit:
- Choose the right agent solution to support your use case | Microsoft Learn
- Microsoft 365 Agents Toolkit Overview - Microsoft 365 Developer | Microsoft Learn
- GitHub repo and video to offer an Azure AI agent inside Teams or Copilot via the M365 Agents Toolkit: OfficeDev/microsoft-365-agents-toolkit: Developer tools for building Teams apps & Deploying your Azure AI Foundry agent to Microsoft 365 Copilot, Microsoft Teams, and beyond

Happy Learning!

Contributors: Jeet J & Ritaja S
Special thanks to reviewers: Joanne W, Amir J & Noah A

Fabric Data Agents: Unlocking the Power of Agents as a Steppingstone for a Modern Data Platform
What Are Fabric Data Agents?

Fabric Data Agents are intelligent, AI-powered assistants embedded within Microsoft Fabric, a unified data platform that integrates data ingestion, processing, transformation, and analytics. These agents act as intermediaries between users and data, enabling seamless interaction through natural language queries in the form of Q&A applications. Whether it's retrieving insights, analyzing trends, or generating visualizations, Fabric Data Agents simplify complex data tasks, making advanced analytics accessible to everyone, from data scientists to business analysts to executive teams.

How Do They Work?

At the center of Fabric Data Agents is OneLake, a unified and governed data lake that joins data from various sources, including on-premises systems, cloud platforms, and third-party databases. OneLake ensures that all data is stored in a common, open format, simplifying data management and enabling agents to access a comprehensive view of the organization's data.

Through Fabric's data ingestion capabilities, such as Fabric Data Factory, OneLake Shortcuts, and Fabric Database Mirroring, Fabric Data Agents are designed to connect with over 200 data sources, ensuring seamless integration across an organization's data estate. This connectivity allows them to pull data from diverse systems and provide a unified analytics experience. Here's how Fabric Data Agents work:

- Natural Language Processing: Using advanced NLP techniques, Fabric Data Agents enable users to interact with data through conversational queries. For example, users can ask questions like, "What are the top-performing investment portfolios this quarter?" and receive precise answers, grounded in enterprise data.
- AI-powered Insights: The agents process queries, reason over data, and deliver actionable insights using Azure OpenAI models, all while maintaining data security and compliance.
- Customization: Fabric Data Agents are highly customizable. Users can provide custom instructions and examples to tailor their behavior to specific scenarios, including example SQL queries that can be used to influence the agent's behavior. They can also integrate with Azure AI Agent Service or Microsoft Copilot Studio, where organizations can tailor agents to specific use cases, such as risk assessment or fraud detection.
- Security and Compliance: Fabric Data Agents are built with enterprise-grade security features, including inheriting Identity Passthrough/On-Behalf-Of (OBO) authentication. This ensures that business users only access data they are authorized to view, maintaining strict compliance with regulations like GDPR and CCPA across geographies and user roles.
- Integration with Azure: Fabric Data Agents are deeply integrated with Azure services, such as Azure AI Agent Service and Azure OpenAI Service. Practically, organizations can publish Fabric Data Agents to custom Copilots using these services and use the APIs in various custom AI applications. This integration ensures scalability, high availability, performance, and an exceptional customer experience.

Why Should Financial Services Companies Use Fabric Data Agents?

The financial services industry faces unique challenges, including stringent regulatory requirements, the need for real-time decision-making, and empowering users to interact with an AI application in a Q&A fashion over enterprise data.
Fabric Data Agents address these challenges head-on through:
- Enhanced Efficiency: Automate repetitive tasks, freeing up valuable time for employees to focus on strategic initiatives.
- Improved Compliance: Use robust data governance features to ensure compliance with regulations like GDPR and CCPA.
- Data-Driven Decisions: Gain deeper insights into customer behavior, market trends, and operational performance.
- Scalability: Seamlessly scale analytics capabilities to meet the demands of a growing organization, without investing in building custom AI applications that require deep expertise.
- Integration with Azure: Fabric Data Agents are natively designed to integrate across Microsoft's ecosystem, providing a comprehensive end-to-end solution for a modern data platform.

How different are Fabric Data Agents from Copilot Studio Agents?

Fabric Data Agents and Copilot Studio agents serve distinct purposes within Microsoft's ecosystem:
- Fabric Data Agents are tailored for data science workflows. They integrate AI capabilities to interact with organizational data, providing analytics insights. They focus on data processing and analysis using the medallion architecture (bronze, silver, and gold layers) and support integration with the Lakehouse, Data Warehouse, KQL databases, and semantic models.
- Copilot Studio agents, on the other hand, are customizable AI-powered assistants designed for specific tasks. Built within Copilot Studio, they can connect to various enterprise data sources like OneLake, AI Search, SharePoint, OneDrive, and Dynamics 365. These agents are versatile, enabling businesses to automate workflows, analyze data, and provide contextual responses by using APIs and built-in connectors.

What are the technical requirements for using Fabric Data Agents?

- A paid F64 or higher Fabric capacity resource
- Fabric data agent tenant setting is enabled.
- Copilot tenant switch is enabled.
- Cross-geo processing for AI is enabled.
- Cross-geo storing for AI is enabled.
- At least one of these: Fabric Data Warehouse, Fabric Lakehouse, one or more Power BI semantic models, or a KQL database with data.
- Power BI semantic models via XMLA endpoints tenant switch is enabled for Power BI semantic model data sources.

Final Thoughts

In a data-driven world, Fabric Data Agents are poised to redefine how financial services organizations operate and innovate. By simplifying complex data processes, enabling actionable insights, and fostering collaboration across teams, these intelligent agents empower organizations to unlock the true potential of their data. Paired with the robust capabilities of Microsoft Fabric and Azure, financial institutions can confidently navigate industry challenges, drive growth, and deliver superior customer experiences. Adopting Fabric Data Agents is not just an upgrade; it's a transformative step towards building a resilient and future-ready business. The time to embrace the data revolution is now.

Learn how to create Fabric Data Agents
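As a pro-code illustration of the "Integration with Azure" capability described above, the sketch below shows roughly how a published Fabric data agent can be attached to an Azure AI Foundry agent from Python. It assumes the azure-ai-projects and azure-ai-agents packages, an existing Foundry project, and a pre-created Fabric connection; class names, module paths, and signatures vary between SDK versions, so treat this as a hedged outline rather than the definitive API.

```python
# Hedged sketch: attach a published Fabric data agent to an Azure AI Foundry agent.
# Assumptions: azure-ai-projects / azure-ai-agents installed, DefaultAzureCredential works,
# and a Microsoft Fabric connection already exists in the Foundry project.
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from azure.ai.agents.models import FabricTool  # name used in current SDK samples; may differ by version

project = AIProjectClient(
    endpoint="https://<your-foundry-project-endpoint>",  # placeholder
    credential=DefaultAzureCredential(),
)

fabric_connection_id = "<id-of-the-fabric-data-agent-connection>"  # placeholder
fabric_tool = FabricTool(connection_id=fabric_connection_id)

agent = project.agents.create_agent(
    model="gpt-4o-mini",  # any chat model deployed in the project
    name="fabric-data-agent-demo",
    instructions="Use the Fabric data agent tool to answer questions about enterprise data.",
    tools=fabric_tool.definitions,
)
print(f"Created agent {agent.id}")
```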
AI-Ready Infrastructure Design - A pattern for Enterprise Scale

The AI Hub Gateway Solution Accelerator is a comprehensive reference architecture designed to address the challenges of decentralized AI management head-on. This innovative solution provides a set of guidelines and best practices for implementing a central AI API gateway, empowering various line-of-business units to leverage Azure AI services efficiently and securely.

External Data Sharing With Microsoft Fabric
Demand for data for external analytics consumption is growing rapidly. There are many options to share data externally, and the field is very dynamic. One of the most frictionless and easy onboarding options we will explore is external data sharing with Microsoft Fabric, which allows users to share data from their tenant with users in another Microsoft Fabric tenant.

Tableau to Power BI Migration: Semantic Layer-First Approach for Cloud Architects
Authors: Lavanya Sreedhar, Peter Lo, Aryan Anmol, and Rafia Aqil

In this guide, we provide practical guidance for migrating from Tableau to Power BI, with a focus on technical best practices and architecture. By unifying business intelligence on the Microsoft Fabric platform, enterprises gain closer integration with Microsoft 365 (Teams, Copilot, Excel). For cloud solution architects and BI developers, a successful migration is not just about rebuilding dashboards in a new tool. It requires thoughtful architectural planning and a shift to a more model-centric approach to BI.

Why Semantic Layer-First Architecture Matters

The Traditional Migration Challenge

Most Tableau to Power BI migrations follow a dashboard-centric approach: teams attempt to replicate existing Tableau workbooks, calculated fields, and LOD (Level of Detail) expressions directly into Power BI reports. While this may seem efficient initially, it creates significant downstream challenges:
- Duplicated logic: Each report embeds its own calculations and business rules, leading to conflicting KPIs across the organization
- Maintenance overhead: Changes to business logic require updating dozens or hundreds of individual reports
- Governance gaps: Without centralized definitions, semantic drift occurs; different teams calculate "Revenue" or "Active Customer" differently
- Scalability issues: As data volumes grow, report-level transformations become performance bottlenecks

The Semantic Layer-First Alternative

Microsoft's recommended approach centers on semantic models (formerly called datasets): centralized, governed data models that separate business logic from visualization. In this architecture, the payoff is substantial: when data evolves or business rules change, you update the semantic model once, and all dependent reports automatically reflect the changes, with no manual redesign required.

Understanding Migration Complexity: Simple to Very Complex Dashboards

Not all Tableau dashboards are created equal. The migration strategy should align with dashboard complexity, and the semantic layer approach becomes increasingly valuable as complexity grows.

Follow a Step-by-Step Migration Strategy

Migrating from Tableau to Power BI is not a one-click effort; it requires a mix of automated and manual refactoring, plus a sound change management plan. Below are key strategies and best practices for a successful migration:

- Audit your Tableau estate: Start by taking inventory of all existing Tableau workbooks, data sources, and dashboards. Determine what needs to be migrated (focus on high-value, widely used reports first) and identify any redundant or obsolete content that can be retired rather than converted.
- Conduct a proof-of-concept (PoC): Before migrating everything, pick a representative complex dashboard (or a subset of your data) and perform a pilot migration. This will help you validate that Power BI can connect to your data (e.g. setting up the Power BI gateways for on-premises sources), test performance (Import vs DirectQuery modes), and experiment with replicating key visuals or calculations. Use the PoC to uncover any surprises early; for example, test that any Level of Detail expressions or table calculations in Tableau can be re-created in DAX. The lessons learned here should inform your overall project plan.
- Use a phased migration approach: Plan to run Tableau and Power BI in parallel for some period, rather than switching everything at once.
  Migrate in waves, for example by business unit or subject area, and incorporate user feedback as you go. This phased approach reduces risk and allows your team to improve the process with each iteration. It also gives end users time to adjust gradually.
- Migrate high-impact dashboards first: Prioritize the migration of key reports and dashboards that are critical to the business or have the most usage. Delivering these early wins will not only surface any technical challenges to solve but will also help demonstrate the value of Power BI's capabilities to stakeholders. Early success builds buy-in and momentum for the rest of the migration.
- Reimagine (don't just replicate) the experience: It's rarely possible, or desirable, to exactly re-create every Tableau visualization pixel-for-pixel in Power BI. Embrace the opportunity to focus on business questions and improve the user experience with Power BI's features. For example, rather than replicating a complex Tableau workaround, you might implement a cleaner solution in Power BI using native features (like bookmarks, drilldowns, or simpler navigation between pages). Engage business users and subject matter experts during this redesign to ensure the new reports meet their needs.
- Enable dataset reusability: One major benefit of the Power BI approach is the ability to create shared datasets and dataflows. As you migrate, look for opportunities to create central semantic models (datasets) that can serve multiple reports. For instance, if several Tableau workbooks are all using similar data about sales, you can create one central Sales dataset in Power BI. Report creators across the organization can then build different Power BI reports on that single dataset without duplicating data or logic. This reduces maintenance and promotes a "build once, reuse often" strategy.
- Provide training and support: Expect a learning curve for teams moving to Power BI, especially those who are very fluent in Tableau. Plan for user upskilling and training programs. Establish a support community or office hours where new users can ask questions and get help. If possible, identify Power BI champions or recruit a Power BI Center of Excellence (COE) team who can guide others. During the transition, ensure there are subject matter experts (SMEs) available to address questions and validate that the new reports are correct.
- Manage change and expectations: It's important to communicate why the organization is moving to Power BI (e.g. benefits like deeper integration, lower TCO, better governance) to get buy-in from end users. Some users may be resistant to change, especially if they've invested a lot of time in mastering Tableau. Prepare to handle varying responses; emphasize the personal benefits (like improved performance, new capabilities, or career growth with popular skills) to encourage adoption. Also, involve influential business users early and gather their feedback, so they feel ownership in the new solution.
- Establish governance from Day 1: Don't wait until after migration to think about governance. Use this chance to set up Power BI governance aligned to best practices. Decide on important aspects such as workspace naming conventions, who can create or publish content, how you'll monitor usage and costs, and how to manage data access and security (for example, designing a strategy for RLS, and deciding when to use per-user datasets vs. organizational semantic models).
  Good governance will ensure your shiny new Power BI environment doesn't sprawl into chaos over time.
- Allow time for adjustment and iteration: Finally, be patient and iterative. Depending on the scale of your organization and the number of Tableau assets, a full migration can take months or even a year or more. Plan realistic transition periods where both systems might coexist. Continuously refine your approach with each wave of migration. Power BI's frequent update cadence (monthly releases) means new features may emerge even during your project; stay updated, as new capabilities could simplify your migration (for example, the introduction of field parameters or Copilot might let you modernize certain Tableau features more easily).

Reimagine (don't just replicate) the experience (Step 5):

Phase 1: Assessment and Planning

1. Audit Your Tableau Estate
- Inventory all workbooks, data sources, and calculated fields
- Identify high-traffic dashboards (prioritize for early migration)
- Categorize by complexity (Simple/Medium/Complex/Very Complex)

2. Design Your Semantic Architecture
- Map Tableau data sources to Power BI data sources (DirectQuery, Import, or Direct Lake)
- Plan star schema for fact/dimension tables
- Identify shared calculations that should live in semantic models vs. report-specific logic

3. Choose Storage Modes

| Source Type | Recommended Mode | Rationale |
| --- | --- | --- |
| Databricks Delta Lake | Direct Lake | Real-time analytics, no refresh lag |
| Azure SQL Database | DirectQuery or Import | Based on data volume and refresh SLAs |
| On-Premises SQL Server | Import (via Gateway) | Network latency considerations |
| Excel/CSV files | Import | Small reference data |

Phase 2: Build the Semantic Layer

1. Create Star Schema Data Models

Tableau often relies on flat, denormalized datasets. Power BI performs best with star schemas:
- Fact tables: Transactional data (sales, orders, events) with foreign keys to dimensions
- Dimension tables: Descriptive attributes (customers, products, dates) with primary keys
- Relationships: One-to-many from dimension to fact, leveraging bidirectional filtering sparingly

2. Migrate Calculations to DAX Measures

Convert Tableau calculated fields to DAX measures in the semantic model:

    -- Example of DAX: define Total Revenue as a measure
    Total Revenue =
    SUMX (
        'Sales',
        'Sales'[Quantity] * 'Sales'[Unit Price]
    )

2.1 Use Copilot to Accelerate DAX Development

Leverage Copilot in Power BI Desktop to generate and validate DAX:
- Describe the calculation in natural language
- Copilot suggests DAX syntax
- Review, test, and refine

Phase 3: Understanding Migration Complexity: Simple to Very Complex Dashboards

Not all Tableau dashboards are created equal. The migration strategy should align with dashboard complexity, and the semantic layer approach becomes increasingly valuable as complexity grows.

1. Dashboard Conversion Best Practices
- Think in "pages" not "sheets": Power BI reports combine multiple visuals per page; group related visuals logically
- Use slicers for interactivity: Replace Tableau filters with Power BI slicers and the filter pane
- Leverage bookmarks for navigation: Create dynamic report experiences with show/hide containers

Simple Complexity Level

| Category | Tableau Feature | Power BI Equivalent | Microsoft Fabric Enhancements | Best Practice Notes |
| --- | --- | --- | --- | --- |
| Data Model | Single custom SQL | Power Query for data shaping and ETL. | OneLake Shortcuts for unified data access. | Use star schema for optimized performance; push logic into the semantic layer rather than visuals. |
| Calculations | Basic IF/ELSE, SUM | Data Analysis Expressions (DAX) for measures and calculated columns. | Copilot for Power BI to assist with DAX creation. Fabric IQ for natural language queries. | Centralize calculations in semantic models for consistency and governance. |

Medium Complexity Level

| Category | Tableau Feature | Power BI Equivalent | Fabric Enhancements | Best Practice Notes |
| --- | --- | --- | --- | --- |
| Data Model | Multiple custom SQL (up to 3) | Connect live to databases (Azure Databricks): DirectQuery in Power BI. Connect with cloud data sources: Power BI data sources. | OneLake Shortcuts for unified access without Databricks compute cost. Semantic models can combine multiple sources. | Optimize with star schema; prefer OneLake Shortcuts for performance; avoid heavy transformations in visuals. |
| Calculations | Nested IFs, CASE | Data Analysis Expressions (DAX) for measures and calculated columns. Copilot for Power BI to assist with DAX creation. | Fabric Data Agent for conversational BI. Fabric IQ for natural language queries. | Centralize logic in semantic models; use Copilot for automation and validation; keep calculations reusable. |
| Reporting | Tooltip formatting in bar and map visuals; Select All/Clear option for single-select dropdowns | Standard tooltips offer help tooltips, text, and background formatting. Dynamic tooltip pages can be created once and reused in multiple visuals (Create report tooltip pages in Power BI, Microsoft Learn). | The customization is much better than the out-of-the-box tooltips. | Use a Clear All Slicers button: disable single select, add a Clear All Slicers button, customize the button, and use it. |

Complex Complexity Level

| Category | Tableau Feature | Power BI Equivalent | Fabric Enhancements | Best Practice Notes |
| --- | --- | --- | --- | --- |
| Data Model | Multiple sources; create relationships using more than one column | Composite Models in Power BI (DirectQuery + Import) for combining multiple sources; also connect to various cloud services. Dataflows for pre-processing. Power BI allows a relationship between two tables based on only one active column. | OneLake Shortcuts for unified access without Azure Databricks compute cost; Microsoft Fabric Dataflows Gen2 offers multiple ways to ingest, transform, and load data efficiently. | Consolidate sources into semantic models; use Direct Lake for performance; plan and design the data model to comply with the star schema supported by Power BI. |
| Relationship | | DAX USERELATIONSHIP: DAX for activating relationships in Power BI for a specific calculation. | | |
| Calculations | LOD, window functions | Data Analysis Expressions (DAX) for measures and calculated columns. Copilot to assist with complex DAX. Change how visuals interact in a Power BI report. | Fabric IQ Ontology for semantic alignment. Q&A visual for natural language. Fabric Data Agent for conversational BI. | Centralize calculations in the semantic layer; use variables in DAX for readability and performance. |

Very Complex Complexity Level

| Category | Tableau Feature | Power BI Equivalent | Fabric Enhancements | Best Practice Notes |
| --- | --- | --- | --- | --- |
| Data Model | Multi-source, Excel, SQL | Composite Models in Power BI (DirectQuery + Import) for combining multiple sources; also connect to various cloud services. Dataflows for pre-processing. | OneLake Shortcuts for unified access; built-in connector support (Connector overview). Mirroring for real-time sync. | Combine multiple sources into well-structured semantic models for consistency and optimized performance. |
| Calculations | Predictive logic | Data Analysis Expressions (DAX) for measures and calculated columns. | Fabric AutoML, ML models, AI Insights, Python/R, notebook-based ML (Spark/scikit-learn), Fabric AI Functions, Fabric IQ Ontology, Fabric Data Agent for conversational BI. | Centralize logic in semantic models; leverage Copilot for automation and parameter-driven workflows. Prepare for Copilot and Q&A integration. |
2. Tableau Feature Equivalents

| Tableau Feature | Power BI Equivalent | Microsoft Learn Link |
| --- | --- | --- |
| Calculated Fields | DAX Measures | DAX Documentation |
| Parameters | Field Parameters / Bookmarks | Use report readers to change visuals |
| Actions | Drillthrough / Bookmarks | Drillthrough |
| Tableau Prep | Power Query / Dataflows | Differences between Dataflow Gen1 and Dataflow Gen2 |
| Tableau Server | Power BI Service | What is Power BI? Overview of Components and Benefits |

Phase 4: Governance and Deployment

Workspace Planning (Dev / Test / Prod Separation)

A proper workspace strategy is essential for governed deployments in Fabric and Power BI. Fabric supports separate Development, Test, and Production stages using Deployment Pipelines, enabling controlled promotion of semantic models, reports, dataflows, notebooks, lakehouses, and other items. You can assign each workspace to a pipeline stage (Dev → Test → Prod) to ensure safe lifecycle management.

Sensitivity Labeling (Microsoft Purview Information Protection)

Sensitivity labels allow governed classification and protection of data across Fabric items. Sensitivity labels can be applied directly to Fabric items (semantic models, reports, dataflows, etc.) through the item's header flyout or the item settings. Labels from Microsoft Purview Information Protection enforce data access rules and help organizations meet compliance requirements.

Endorsement & Certification (Promoted, Certified, Master Data)

Endorsement improves discoverability and trust in shared organizational content.
- Promoted: Item creators mark content as recommended for broader use.
- Certified: Administrators or authorized reviewers validate that content meets organizational quality standards.
- Master Data: Indicates authoritative single-source-of-truth items such as semantic models or lakehouses.
All Fabric items except dashboards can be promoted or certified; data-containing items can be designated as Master Data.

Monitoring & Capacity Planning

Determine the appropriate size for Fabric capacity when migrating from Tableau to Power BI. The Fabric SKU Estimator can generate a SKU recommendation (estimate) for your capacity requirements. Ensuring performance and cost efficiency requires ongoing monitoring of your Fabric capacity. Microsoft recommends evaluating workloads using Fabric Capacity Metrics and planning SKU sizes based on real usage. Fabric uses bursting and smoothing to handle spikes while enforcing capacity limits. Monitoring helps identify high compute usage, background refreshes, and interactive workloads to optimize performance.

Fabric Data Source Connections (OneLake + Manage Connections)

Microsoft Fabric is designed as an end-to-end analytics platform that integrates data from many different source systems into a unified environment powered by OneLake, Data Factory, Real-Time Analytics, Dataflows, Lakehouses, Warehouses, and Mirrored Databases.

The Strategic Advantage: Semantic Layer + Fabric IQ

The semantic layer-first approach sets the foundation for the next evolution in enterprise analytics.
Fabric IQ (announced at Ignite 2025) is Microsoft's semantic intelligence platform that auto-elevates semantic models into ontologies: structured knowledge graphs that power AI agents, Copilot experiences, and cross-domain data reasoning. What this means for your migration:
- Semantic models you build today become the foundation for AI-driven analytics tomorrow
- Data Agents can reason across multiple semantic models, answering questions that span domains
- Business users transition from "report consumers" to "data explorers" via natural language interfaces

Conclusion: Build for the Future, Not Just for Today

Migrating from Tableau to Power BI is more than a technology swap; it's an opportunity to re-architect your analytics strategy for the cloud-native, AI-powered era. The semantic layer-first approach requires upfront investment in data modeling, DAX expertise, and Fabric platform adoption. But the payoff is transformative:
- Consistency: Single source of truth for all business metrics
- Scalability: Semantic models that serve hundreds of reports and thousands of users
- Agility: Changes to business logic propagate instantly across the enterprise
- Future-readiness: Foundation for Fabric IQ, Data Agents, and AI-driven insights
Start your migration with the end in mind: the goal is not just to convert dashboards, but to build a modern, governed, AI-ready analytics platform that scales with your business.

Addressing Key Migration Concerns

(1) Why a semantic-layered model approach is better than recreating Tableau dashboards

A semantic-layered modeling approach is the optimal strategy for migration and is significantly more effective than attempting to recreate Tableau dashboards exactly as they exist. By contrast, Power BI and Fabric encourage a semantic model-first architecture, where all business rules, relationships, calculations, and transformations are centralized in a governed model that serves many dashboards. This approach not only provides consistency and reuse across the enterprise but also ensures that report authors build on a single certified version of the truth.

(2) How a semantic-layered model approach reduces the constant redesign caused by changing data needs

A semantic-layered modeling approach directly addresses the concern about constant changes and frequent redesigns of dashboards when data evolves. With a semantic layer, changes are absorbed in the model layer, so the logic is updated once and flows automatically into all dependent reports. Combined with Fabric features like OneLake shortcuts, Direct Lake mode, and centralized governance, the semantic layer drastically reduces breakage, minimizes rework, and ensures scalability as data continues to grow and shift.

Additional Resources
- Direct Lake in Microsoft Fabric
- Create Fabric Data Agents
- OneLake Shortcuts
- Write DAX queries with Copilot - DAX

Overload to Optimal: Tuning Microsoft Fabric Capacity
Co-Authored by: Daya Ram, Sr. Cloud Solutions Architect, and Rafia Aqil, Cloud Solutions Architect

Optimizing Microsoft Fabric capacity is both a performance and cost exercise. By diagnosing workloads, tuning cluster and Spark settings, and applying data best practices, teams can reduce run times, avoid throttling, and lower total cost of ownership, without compromising SLAs. Use Fabric's built-in observability (Monitoring Hub, Capacity Metrics, Spark UI) to identify hot spots and then apply cluster- and data-level remediations.

Capacity Planning

For capacity planning and sizing guidance, see Plan your capacity size. Selecting the wrong SKU can lead to two major issues:
- Over-provisioning: Paying for resources you don't need.
- Under-provisioning: Struggling with performance bottlenecks and failed jobs.
To simplify this process, Microsoft provides the Fabric SKU Estimator, a tool designed to help organizations accurately size their capacity based on real-world usage patterns. Run the SKU Estimator before onboarding new workloads or scaling existing ones. Combine its recommendations with monitoring tools like Fabric Capacity Metrics to validate performance and adjust as needed.

Options to Diagnose Capacity Issues

1) Monitoring Hub — Start with the Story of the Run

What to use it for:
- Browse Spark activity across applications (notebooks, Spark Job Definitions, and pipelines).
- Quickly surface long-running or anomalous runs; view read/write bytes, idle time, core allocation, and utilization.

How to use it:
1. From the Fabric portal, open Monitoring (Monitor Hub).
2. Select a Notebook or Spark Job Definition run and choose Historical Runs.
3. Inspect the Run Duration chart; click on a run to see read/write bytes, idle time, core allocation, overall utilization, and other Spark metrics.

What to look for: Use the application detail monitoring guide to review and monitor your application.

2) Capacity Metrics App — Measure the Whole Environment

What to use it for:
- Review capacity-wide utilization and system events (overloads, queueing).
- Compare utilization across time windows and identify sustained peaks.

How to use it:
1. Open the Microsoft Fabric Capacity Metrics app for your capacity.
2. Review the Compute page (ribbon charts, utilization trends) and the System events tab to see overload or throttling windows.
3. Use the Timepoint page to drill into a 30-second interval and see which operations consumed the most compute.

What to look for: Use the troubleshooting guide (Monitor and identify capacity usage) to pinpoint top CU-consuming items.

3) Spark UI — Diagnose at a Deeper Level

Why it matters: Spark UI exposes skew, shuffle, memory pressure, and long stages. Use it after Monitoring Hub/Capacity Metrics to pinpoint the problematic job.

Key tabs to inspect:
- Stages: uneven task durations (data skew), heavy shuffle read/write, large input/output volumes.
- Executors: storage memory, task time (GC), shuffle metrics. High GC or frequent spills indicate memory tuning is needed.
- Storage: which RDDs/cached tables occupy memory; any disk spill.
- Jobs: long-running jobs and gaps in the timeline (driver compilation, non-Spark code, driver overload).

What to look for: Data skew, memory pressure, and heavy shuffles can be addressed by adjusting Apache Spark settings, set via environment Spark properties or session config, e.g. spark.ms.autotune.enabled, spark.task.cpus, and spark.sql.shuffle.partitions (a minimal sketch follows below).
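To act on those Spark UI findings, the settings called out above can be applied per session from a notebook (or once, via the environment's Spark properties). A minimal sketch; the values are illustrative starting points rather than recommendations, and spark.task.cpus is a static property that belongs in the environment or pool configuration rather than in a running session:

```python
# Illustrative session-level Spark tuning in a Fabric notebook.

# Let Autotune adjust shuffle partitions, broadcast threshold, and file partition size per query.
spark.conf.set("spark.ms.autotune.enabled", "true")

# Or pin a value manually while investigating skew or shuffle-heavy stages (example value only).
spark.conf.set("spark.sql.shuffle.partitions", "400")

# Confirm what the session is actually using.
for key in ["spark.ms.autotune.enabled", "spark.sql.shuffle.partitions"]:
    print(key, "=", spark.conf.get(key))
```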
Remediation and Optimization Suggestions

A) Cluster & Workspace Settings

Runtime & Native Execution Engine (NEE): Use Fabric Runtime 1.3 (Spark 3.5, Delta 3.2) and enable the Native Execution Engine to boost performance; enable it at the environment level under Spark compute → Acceleration.

Starter Pools vs. Custom Pools:
- Starter Pool: prehydrated, medium-size pools; fast session starts, good for dev/quick runs.
- Custom Pools: size nodes, enable autoscale, dynamic executors. Create via workspace Spark settings (requires a capacity admin to enable workspace customization).

High Concurrency Session Sharing: Enable High Concurrency to share Spark sessions across notebooks (and pipelines) to reduce session startup latency and cost; use session tags in pipelines to group notebooks.

Autotune for Spark: Enable Autotune (spark.ms.autotune.enabled = true) to auto-adjust per query:
- spark.sql.shuffle.partitions
- spark.sql.autoBroadcastJoinThreshold
- spark.sql.files.maxPartitionBytes
Autotune is disabled by default and is in preview; enable it per environment or session.

B) Data-Level Best Practices

Microsoft Fabric offers several approaches to maintain optimal file sizes in Delta tables; review the documentation here: Table Compaction - Microsoft Fabric.

Intelligent Cache: Enabled by default (Runtime 1.1/1.2) for Spark pools; caches frequently read files at the node level for Delta/Parquet/CSV, improving subsequent read performance and TCO.

OPTIMIZE & Z-Order: Run OPTIMIZE regularly to rewrite files and improve file layout.

V-Order: V-Order (disabled by default in new workspaces) can accelerate reads for read-heavy workloads; enable it via spark.sql.parquet.vorder.default = true.

Vacuum: Run VACUUM to remove unreferenced files (stale data); the default retention is 7 days; align retention across OneLake to control storage costs and maintain time travel. (A maintenance sketch covering OPTIMIZE, V-Order, and VACUUM follows at the end of this post.)

Collaboration & Next Steps

Engage the data engineering team to define an optimization playbook:
- Start with reviewing capacity sizing guidance and cluster-level optimizations (runtime/NEE, pools, concurrency, Autotune), then target data improvements (Z-Order, compaction, caching, query refactors).
- Triage: Monitor Hub → Capacity Metrics → Spark UI to map workloads and identify high-impact jobs and workloads causing throttling.
- Schedule: Operationalize maintenance: OPTIMIZE (full or selective) during off-peak windows; enable Auto Compaction for micro-batch/streaming writes; add VACUUM to your cadence with agreed retention. Add regular code review sessions to ensure consistent performance patterns.
- Fix: Adjust pool sizing or concurrency; enable Autotune; tune shuffle partitions; refactor problematic queries; re-run compaction.
- Verify: Re-run the job and verify the change, i.e. reduced run time, lower shuffle, improved utilization.
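The table-maintenance practices above (OPTIMIZE, V-Order, VACUUM) are easy to operationalize as a small scheduled notebook. A minimal sketch; the table name, Z-Order column, and retention window are placeholders to adapt to your own workload and off-peak schedule:

```python
# Illustrative Delta maintenance cell for a Fabric lakehouse table.
table_name = "sales_orders"     # placeholder
zorder_column = "customer_id"   # placeholder: a column that is frequently filtered on

# Enable V-Order for writes in this session (disabled by default in new workspaces).
spark.conf.set("spark.sql.parquet.vorder.default", "true")

# Compact small files and co-locate data on the Z-Order column.
spark.sql(f"OPTIMIZE {table_name} ZORDER BY ({zorder_column})")

# Remove unreferenced files older than the retention window (7 days = 168 hours).
spark.sql(f"VACUUM {table_name} RETAIN 168 HOURS")
```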
Simplifying Migration to Fabric Real-Time Intelligence for Power BI Real Time Reports

Power BI with real-time streaming has been the preferred solution for users to visualize streaming data. Real-time streaming in Power BI is being retired. We recommend that users start planning the migration of their data processing pipelines to Fabric Real-Time Intelligence.