Purview Data Quality Dashboard / Report - Refresh
Hi All, currently my Purview Data Quality dashboard is showing all blanks. Until two months ago, the dashboard showed values for each data quality dimension and a graph in each quadrant. When I checked again after two months, everything in the report is blank. (Note: I have created two governance domains, each with five data products assigned to data assets, and implemented data quality rules on top of each data asset; at that time the scores were reflected in the Purview data quality dashboard.) Now all the scores suddenly show as 'blank'. Note: none of the data quality assessments were deleted during those two months; the data quality rules are still active and scores still appear at the data asset level, just not in the dashboard. Can you please help me sort this out? Is there a refresh policy associated with the Purview Data Quality dashboard?

Managing Multi-Tenant Azure/365: Workarounds for Cross-Tenant Limitations in Purview and Fabric
I am working in a Microsoft Azure/365 multi-tenant setting due to some constraints: I am using Purview in Tenant 1, and Fabric and M365 in Tenant 2. I'm facing issues with various solutions due to cross-tenant limitations, e.g. Data Quality connections, metadata ingestion, lineage, etc. To overcome this I am exploring various workarounds. Key question: are there proven workarounds or solutions to manage a data estate in this scenario? (We can't merge or migrate the tenants.)

Announcing Public Preview for Business Process Solutions
In today’s AI-powered enterprises, success hinges on access to reliable, unified business information. Whether you are deploying AI-augmented workflows or fully autonomous agentic solutions, one thing is clear: trusted, consistent data is the fuel that drives intelligent outcomes. Yet in many organizations, data remains fragmented across best-of-breed applications, creating blind spots in cross-functional processes and throwing roadblocks in the path of automation. Microsoft is dedicated to tackling these challenges, delivering a unified data foundation that accelerates AI adoption, simplifies automation, and reduces risk, empowering businesses to unlock the full potential of unified data analytics and agentic intelligence.

Our new solution offers cross-functional insights across previously siloed environments and includes:

- Prebuilt data models for enterprise business applications in Microsoft Fabric
- Source system data mappings and transformations
- Prebuilt dashboards and reports in Power BI
- Prebuilt AI Agents in Copilot Studio (coming soon)
- Integrated Security and Compliance

By unifying Microsoft’s Fabric and AI solutions we can rapidly accelerate transformation and derisk AI rollout through repeatable, reliable, prebuilt solutions.

Functional Scope

Our new solution currently supports a set of business applications and functional areas, enabling organizations to break down silos and drive actionable insights across their core processes. The platform covers key domains such as:

- Finance: Delivers a comprehensive view of financial performance, integrating data from general ledger, accounts receivable, and accounts payable systems. This enables finance teams to analyze trends, monitor compliance, and optimize cash flow management, all from within Power BI. The associated Copilot agent provides not only access to this data via natural language but will also enable financial postings.
- Sales: Provides a complete perspective on the customer's opportunity-to-cash journey, from initial opportunity through invoicing and payment, via Power BI reports and dashboards. The associated Copilot agent can help improve revenue forecasting by connecting structured ERP and CRM data with unstructured data from Microsoft 365, while also tracking sales pipeline health and identifying bottlenecks.
- Procurement: Supports strategic procurement and supplier management, consolidating purchase orders, goods receipts, and vendor invoicing data into a complete spend dashboard. This empowers procurement teams to optimize sourcing strategies, manage supplier risk, and control spend.
- Manufacturing (coming soon): Will extend coverage to manufacturing and production processes, enabling organizations to optimize resource allocation and monitor production efficiency.

Each item within Business Process Solutions is delivered as a complete, business-ready offering. These models are thoughtfully designed to ensure that organizations can move seamlessly from raw data to actionable execution. Key features include:

- Facts and Dimensions: Each model is structured to capture both transactional details (facts) and contextual information (dimensions), supporting granular analysis and robust reporting across business processes.
- Transformations: Built-in transformations automatically prepare data for reporting and analytics, making it compatible with Microsoft Fabric. For example, when a business user needs to compare sales results from Europe, Asia, and North America, the solution's transformations handle currency conversion behind the scenes. This ensures that results are consistent across regions, making analysis straightforward and reliable, without the need for manual intervention or complex configuration.
- Insight to Action: Customers will be able to leverage prebuilt Copilot Agents within Business Process Solutions to turn insight into action.
These agents are deeply integrated not only with Microsoft Fabric and Microsoft Teams, but also with connected source applications, enabling users to take direct, contextual actions across systems based on real-time insights. By connecting unstructured data sources such as emails, chats, and documents from Microsoft 365 apps, the agents can provide a holistic and contextualized view to support smarter decisions. With embedded triggers and intelligent agents, automated responses can be initiated based on new insights, streamlining decision-making and enabling proactive, data-driven operations. Ultimately, this empowers teams not just to understand what is happening at a holistic level, but also to take faster and smarter actions with greater confidence.

- Authorizations: Data models are tailored to respect organizational security and access policies, ensuring that sensitive information is protected and only accessible to authorized users. The same user-credential principles apply to the Copilot agents when interacting with or updating the source system in the user context.

Behind the scenes, the solution automatically provisions the required objects and infrastructure to build the data warehouse, removing the usual complexity of bringing data together. It guarantees consistency and reliability, so organizations can focus on extracting value from their data rather than managing technical details. This reliable data foundation serves as one of the key informants of agentic business processes.

Accelerated Insights with Prebuilt Analytics

Building on these robust data models, Business Process Solutions offers a suite of prebuilt Power BI reports tailored to common business processes. These reports provide immediate access to key metrics and trends, such as financial performance, sales effectiveness, and procurement efficiency. Designed for rapid deployment, they allow organizations to:

- Start analyzing data from day one, without lengthy setup or customization.
- Adapt existing reports to your organization's exact business needs.
- Demonstrate best practices for leveraging data models in analytics and decision-making.

This approach accelerates time-to-value and empowers users to explore new analytical scenarios and drive continuous improvement.

Extensibility and Customization

Every organization is unique, and our new solution is designed to support this, allowing you to adapt analytics and data models to fit your specific processes and requirements. You can customize scope items, bring in your own tables and views, integrate new data sources as your business evolves, and combine data across Microsoft Fabric for deeper insights. Similarly, the associated agents will be customizable from Copilot Studio to adapt to your specific enterprise app configuration. This flexibility ensures that, no matter how your organization operates, Business Process Solutions helps you unlock the full value of your data.

Data integration

Business Process Solutions uses the same connectivity options as Microsoft Fabric and Copilot Studio but goes further by embedding best practices that make integration simpler and more effective. We recognize that no single pattern can address the diverse needs of all business applications. We also understand that many businesses have already invested in data extraction tools, which is why our solution supports a wide range of options, from native connectivity to third-party options that bring specialized capabilities to the table. With Business Process Solutions we ensure data can be interacted with in a reliable and high-performing way, whether working with massive volumes or complex data structures.

Getting started

If your organization is ready to unlock the value of unified analytics, getting started is simple. Just send us a request using the form at: https://aka.ms/JoinBusAnalyticsPreview.
Our team will guide you through the next steps and help you begin your journey.

July 2025 Recap: Azure Database for PostgreSQL
Hello Azure Community,

July delivered a wave of exciting updates to Azure Database for PostgreSQL! From Fabric mirroring support for private networking to cascading read replicas, these new features are all about scaling smarter, performing faster, and building better. This blog covers what’s new, why it matters, and how to get started.

Catch Up on POSETTE 2025

In case you missed POSETTE: An Event for Postgres 2025 or couldn't watch all of the sessions live, here's a playlist with the 11 talks all about Azure Database for PostgreSQL. And, if you'd like to dive even deeper, the Ultimate Guide will help you navigate the full catalog of 42 recorded talks published on YouTube.

Feature Highlights

- Upsert and Script activity in ADF and Azure Synapse – Generally Available
- Power BI Entra authentication support – Generally Available
- New Regions: Malaysia West & Chile Central
- Latest Postgres minor versions: 17.5, 16.9, 15.13, 14.18 and 13.21
- Cascading Read Replica – Public Preview
- Private Endpoint and VNet support for Fabric Mirroring – Public Preview
- Agentic Web with NLWeb and PostgreSQL
- PostgreSQL for VS Code extension enhancements
- Improved Maintenance Workflow for Stopped Instances

Upsert and Script activity in ADF and Azure Synapse – Generally Available

We’re excited to announce the general availability of the Upsert method and Script activity in Azure Data Factory and Azure Synapse Analytics for Azure Database for PostgreSQL. These new capabilities bring greater flexibility and performance to your data pipelines:

- Upsert Method: Easily merge incoming data into existing PostgreSQL tables without writing complex logic, reducing overhead and improving efficiency.
- Script Activity: Run custom SQL scripts as part of your workflows, enabling advanced transformations, procedural logic, and fine-grained control over data operations.
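The Upsert method merges incoming rows into a target table by key. The ADF connector handles this behind the scenes, but the semantics are conceptually those of SQL's INSERT ... ON CONFLICT DO UPDATE, which PostgreSQL supports natively. A minimal sketch of that merge behavior, shown here against an in-memory SQLite database (which shares the ON CONFLICT syntax) so the snippet runs without a PostgreSQL server; the table and column names are illustrative:

```python
import sqlite3

# Illustrative only: in ADF the Upsert method is configured on the sink.
# This sketch shows the underlying merge semantics using the ON CONFLICT
# clause, which mirrors PostgreSQL's INSERT ... ON CONFLICT DO UPDATE.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Alice')")

# Incoming batch: one row updates an existing key (id=1), one is new (id=2).
incoming = [(1, "Alicia"), (2, "Bob")]
conn.executemany(
    "INSERT INTO customers (id, name) VALUES (?, ?) "
    "ON CONFLICT(id) DO UPDATE SET name = excluded.name",
    incoming,
)

rows = conn.execute("SELECT id, name FROM customers ORDER BY id").fetchall()
print(rows)  # [(1, 'Alicia'), (2, 'Bob')]
```

The same statement handles both the insert and the update path in one pass, which is exactly the complexity the Upsert method removes from your pipeline logic.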
Together, these features streamline ETL and ELT processes, making it easier to build scalable, declarative, and robust data integration solutions using PostgreSQL as either a source or sink. Visit our documentation guides for the Upsert method and Script activity to learn more.

Power BI Entra authentication support – Generally Available

You can now use Microsoft Entra ID authentication to connect to Azure Database for PostgreSQL from Power BI Desktop. This update simplifies access management, enhances security, and helps you support your organization’s broader Entra-based authentication strategy. To learn more, please refer to our documentation.

New Regions: Malaysia West & Chile Central

Azure Database for PostgreSQL has now launched in Malaysia West and Chile Central. This expanded regional presence brings lower latency, enhanced performance, and data residency support, making it easier to build fast, reliable, and compliant applications right where your users are. This continues our mission to bring Azure Database for PostgreSQL closer to where you build and run your apps. For the full list of regions visit: Azure Database for PostgreSQL Regions.

Latest Postgres minor versions: 17.5, 16.9, 15.13, 14.18 and 13.21

The latest PostgreSQL minor versions 17.5, 16.9, 15.13, 14.18 and 13.21 are now supported by Azure Database for PostgreSQL flexible server. These minor version upgrades are performed automatically as part of the monthly planned maintenance, ensuring that your databases are always running on the most secure and optimized versions without requiring manual intervention. This release fixes two security vulnerabilities and delivers over 40 bug fixes and improvements. To learn more, please refer to the PostgreSQL community announcement for details about the release.

Cascading Read Replica – Public Preview

Azure Database for PostgreSQL now supports cascading read replicas in public preview.
This feature allows you to scale read-intensive workloads more effectively by creating replicas not only from the primary database but also from existing read replicas, enabling two-level replication chains. With cascading read replicas, you can:

- Improve performance for read-heavy applications.
- Distribute read traffic more efficiently.
- Support complex deployment topologies.

Data replication is asynchronous, and each replica can serve as a source for additional replicas. This setup enhances scalability and flexibility for your PostgreSQL deployments. For more details read the cascading read replicas documentation.

Private Endpoint and VNet Support for Fabric Mirroring – Public Preview

Microsoft Fabric now supports mirroring for Azure Database for PostgreSQL flexible server instances deployed with virtual network (VNet) integration or private endpoints. This enhancement broadens the scope of Fabric’s real-time data replication capabilities, enabling secure and seamless analytics on transactional data, even within network-isolated environments. Previously, mirroring was only available for flexible server instances with public endpoint access. With this update, organizations can now replicate data from Azure Database for PostgreSQL hosted in secure, private networks without compromising on data security, compliance, or performance. This is particularly valuable for enterprise customers who rely on VNets and private endpoints for database connectivity from isolated networks. For more details visit the Fabric mirroring with private networking support blog.

Agentic Web with NLWeb and PostgreSQL

We’re excited to announce that NLWeb (Natural Language Web), Microsoft’s open project for natural language interfaces on websites, now supports PostgreSQL. With this enhancement, developers can leverage PostgreSQL and NLWeb to transform any website into an AI-powered application or Model Context Protocol (MCP) server.
This integration allows organizations to utilize a familiar, robust database as the foundation for conversational AI experiences, streamlining deployment and maximizing data security and scalability. For more details, read the Agentic web with NLWeb and PostgreSQL blog.

PostgreSQL for VS Code extension enhancements

The PostgreSQL for VS Code extension is rolling out new updates, introducing key connection, authentication, and usability improvements. Here’s what we improved:

- SSH connections: You can now set up SSH tunneling directly in the Advanced Connection options, making it easier to securely connect to private networks without leaving VS Code.
- Clearer authentication setup: A new “No Password” option eliminates guesswork when setting up connections that don’t require credentials.
- Entra ID fixes: Improved default username handling, token refresh, and clearer error feedback for failed connections.
- Array and character rendering: Unicode and PostgreSQL arrays now display more reliably and consistently.
- Azure Portal flow: Reuses existing connection profiles to avoid duplicates when launching from the portal.

Don’t forget to update to the latest version in the Marketplace to take advantage of these enhancements, and visit our GitHub to learn more about this month’s release.

Improved Maintenance Workflow for Stopped Instances

We’ve improved how scheduled maintenance is handled for stopped or disabled PostgreSQL servers. Maintenance is now applied only when the server is restarted, either manually or through the 7-day auto-restart, rather than forcing a restart during the scheduled maintenance window. This change reduces unnecessary disruptions and gives you more control over when updates are applied. You may notice a slightly longer restart time (5–8 minutes) if maintenance is pending. For more information, refer to Applying Maintenance on Stopped/Disabled Instances.
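Several of the Entra-related updates above (Power BI Entra authentication, the VS Code extension's Entra ID fixes) rely on the same underlying pattern: a short-lived Entra access token is presented as the PostgreSQL password over TLS. A minimal sketch of that pattern, assuming the azure-identity and psycopg2 packages and a flexible server with Entra authentication enabled; the helper names and host are illustrative, not part of any official API:

```python
def fetch_entra_token():
    """Fetch an Entra access token for Azure Database for PostgreSQL.

    Requires the azure-identity package. DefaultAzureCredential picks up
    Azure CLI, managed identity, or environment credentials automatically.
    """
    from azure.identity import DefaultAzureCredential

    credential = DefaultAzureCredential()
    # The ossrdbms-aad scope is the token audience for Azure Database
    # for PostgreSQL.
    token = credential.get_token(
        "https://ossrdbms-aad.database.windows.net/.default"
    )
    return token.token


def build_conn_kwargs(host, user, token, dbname="postgres"):
    """Assemble connection parameters: the Entra token is the password."""
    return {
        "host": host,
        "user": user,          # the Entra principal name, not a local role
        "password": token,     # short-lived token instead of a stored secret
        "dbname": dbname,
        "sslmode": "require",  # Entra auth requires an encrypted connection
    }


# Usage (requires network access and the azure-identity/psycopg2 packages):
#   import psycopg2
#   kwargs = build_conn_kwargs("myserver.postgres.database.azure.com",
#                              "user@contoso.com", fetch_entra_token())
#   conn = psycopg2.connect(**kwargs)
```

Because the token expires, long-running clients should refresh it and reconnect rather than caching the credential, which is exactly the token-refresh behavior the VS Code extension update improves.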
Azure Postgres Learning Bytes 🎓: Set Up HA Health Status Monitoring Alerts

This section walks through setting up HA health status monitoring alerts using the Azure portal. These alerts help you effectively monitor the HA health states of your server. To monitor the health of your High Availability (HA) setup:

1. Navigate to the Azure portal and select your Azure Database for PostgreSQL flexible server instance.
2. Create an alert rule: go to Monitoring > Alerts > Create Alert Rule.
   - Scope: Select your PostgreSQL flexible server.
   - Condition: Choose the signal from the dropdown (CPU percentage, storage percentage, etc.).
   - Logic: Define when the alert should trigger.
   - Action Group: Specify where the alert should be sent (email, webhook, etc.).
   - Add tags, then click “Review + Create”.
3. Verify the alert: check the Alerts tab in Azure Monitor to confirm the alert has been triggered.

For deeper insight into resource health:

1. Go to the Azure portal, search for Service Health, and select Resource Health.
2. Choose Azure Database for PostgreSQL Flexible Server from the dropdown.
3. Review the health status of your server.

For more information, check out the HA health status monitoring documentation guide.

Conclusion

That’s a wrap for our July 2025 feature updates! Thanks for being part of our journey to make Azure Database for PostgreSQL better with every release. We’re always working to improve, and your feedback helps us do that.

💬 Got ideas, questions, or suggestions? We’d love to hear from you: https://aka.ms/pgfeedback
📢 Want to stay on top of Azure Database for PostgreSQL updates? Follow us here for the latest announcements, feature releases, and best practices: Azure Database for PostgreSQL Blog

Stay tuned for more updates in our next blog!

Fabric Lakehouse tables are not showing in Purview
Hi Purview Community, after scanning Fabric in Purview, Delta Lake tables are not showing up as assets. I can see the lakehouse, but no details of the delta tables or the associated folder/schema structure appear in the Related tab. If you have encountered this issue before, I would appreciate your guidance on troubleshooting it. Thanks in advance!

Automating Power BI Viewer Role Assignment After Azure Purview Approval
Hello everyone! In my organization we use Azure Purview to manage access requests for our Power BI reports. Our current flow is:

1. A user requests access to a data product (Power BI report) from Purview.
2. I approve the request in the Purview portal.
3. Although the user now has metadata-level access in Purview, to actually view the report they must click “Open in Power BI (Fabric)”, and that only works if I manually add them as Viewer to the workspace or app.

This manual step is very tedious when there are dozens of requests per day. I’m looking for ideas to automate it so that, upon approval in Purview, the user is granted the Viewer role on the Power BI workspace/app for that report without any manual intervention. Has anyone implemented something similar or knows of an out-of-the-box approach? Perhaps a Purview extension (even in preview), third-party tool, or community solution that automates this provisioning? Thanks in advance for any pointers or examples!

Synapse Data Explorer (SDX) to Eventhouse Migration Capability (Preview)
Synapse Data Explorer (SDX), part of Azure Synapse Analytics, is an enterprise analytics service that enables you to explore, analyze, and visualize large volumes of data using the familiar Kusto Query Language (KQL). SDX has been in public preview since 2019.

The evolution of Synapse Data Explorer

The next generation of the SDX offering is evolving to become Eventhouse, part of Real-Time Intelligence in Microsoft Fabric. Eventhouse offers the same powerful features and capabilities as SDX, but with enhanced scalability, performance, and security. Eventhouse is built on the same technology as SDX and is compatible with all the applications, SDKs, integrations, and tools that work with SDX. For existing customers considering a move to Fabric, we are excited to offer a seamless migration capability. You can now migrate your Data Explorer pools from a Synapse workspace to Eventhouse effortlessly. To initiate the migration of your SDX cluster to Eventhouse, simply follow the instructions at http://aka.ms/sdx.migrate.

Purview Integration with MS Fabric (Scanner)
Hi Everyone, I'm facing an issue scanning Lakehouse Delta tables in Purview. When I scan the Fabric workspace in Purview, I can only see the Pipelines and Notebooks present in that workspace; it doesn't identify Lakehouse tables as assets. The prerequisites are done: the Purview MSI is granted Contributor role access to that workspace in Fabric, and the Fabric tenant-level settings are enabled with a specific security group (with the Purview MSI as a member). Please help me with how to get Lakehouse tables identified as assets and how to extract their metadata details in Purview, so I can proceed with data cataloging. Thanks in advance. Regards, BanuMurali

The #1 factor in ADX/KQL database performance
The most important factor determining the performance of a KQL query is making sure that only the minimum necessary part of the data is scanned. In almost all cases a filter on a datetime column is used to determine what part of the data is relevant for the query results. The filter can be expressed in many ways, on the actual table or on a joined table. All variations return the correct results, but the difference in performance can be 50X. The different variations are described, along with the reasons why some are performant and some are not.
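To make the point concrete, here are two of the variations side by side, written as KQL strings inside a small Python sketch. The table and column names are hypothetical; the structural difference is what matters:

```python
# Fast variant: the datetime filter is applied directly to the scanned
# table, so the engine can prune data shards (extents) by their time
# range before reading any rows.
FAST = """
Events
| where Timestamp between (ago(1d) .. now())
| join kind=inner (Devices) on DeviceId
| summarize count() by DeviceId
"""

# Slow variant: the same predicate applied only after the join means
# every extent of Events may be scanned before rows are discarded.
SLOW = """
Events
| join kind=inner (Devices) on DeviceId
| where Timestamp between (ago(1d) .. now())
| summarize count() by DeviceId
"""

def filters_before_join(kql: str) -> bool:
    """Crude structural check: does the datetime 'where' appear before
    the first 'join' operator in the query text?"""
    join_pos = kql.find("| join")
    where_pos = kql.find("| where Timestamp")
    return where_pos != -1 and (join_pos == -1 or where_pos < join_pos)

print(filters_before_join(FAST), filters_before_join(SLOW))  # True False
```

Kusto stores data in time-partitioned extents, so an early predicate on the datetime column lets the engine skip whole extents without reading them; that pruning is where order-of-magnitude differences like the 50X above come from, even though the query optimizer can sometimes push filters down on its own.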