Microsoft Fabric
Acting on Real-Time data using custom actions with Data Activator
Being able to make data-driven decisions and act on real-time data is important to organizations because it enables them to avert crises in systems that monitor product health, or to take other actions based on their requirements. For example, a shipping company may want to monitor its packages and act in real time when the temperature of a package becomes too high. One way of monitoring and acting on data is to use Data Activator, a no-code experience in Microsoft Fabric for automatically taking actions when a condition, such as a package temperature threshold, is detected in the data.

Integrating remote patient monitoring solutions with healthcare data solutions in Microsoft Fabric
Co-Authors: Kemal Kepenek, Mustafa Al-Durra PhD, Matt Dearing, Jason Foerch, Manoj Kumar

Introduction

Remote patient monitoring solutions rely on connected devices, wearable technology, and advanced software platforms to collect and transmit patient health data. They facilitate monitoring of vital signs, chronic conditions, and behavioral patterns. Healthcare data solutions in Microsoft Fabric offers a secure, scalable, and interoperable data platform as part of Microsoft Cloud for Healthcare. Such a unified data platform is crucial for integrating disparate data sources and generating actionable health insights.

This article provides a reference architecture and the steps to integrate remote patient monitoring solutions with healthcare data solutions in Fabric. The integration is aimed at satisfying low data resolution use cases. With low data resolution, we address infrequent (hourly, daily, or less) transfer of aggregated or point-in-time-snapshot device data into healthcare data solutions in Fabric, to be used in a batch fashion to generate analytical insights. Integration steps for high data resolution use cases, which necessitate high-frequency transfer of highly granular medical device data (for example, data from EKGs or ECGs) as input to either batch or (near) real-time analytics processing and consumption, are a candidate for a future article.

There are several methods, solutions, and partners available in the marketplace today that allow you to integrate a remote patient monitoring solution with healthcare data solutions in Fabric. In this article, we leveraged the solution from Life365 (a Microsoft partner). The integration approach discussed here is applicable to most remote patient monitoring solutions whose integration logic (code) can run inside a platform that can programmatically access Microsoft Fabric (for example, through REST API calls). In our approach, the integration platform chosen is the Function App service within Microsoft Azure.

In the subsequent sections of this article, we cover the integration approach in two phases:

Interoperability phase, which illustrates how the data from medical devices (used by the remote patient monitoring solution) can be converted into a format suitable for transferring into healthcare data solutions in Fabric.

Analytical processing and consumption phase, which provides the steps to turn the medical device data into insights that can be easily accessed through Fabric.

Integration Approach

Interoperability Phase

Step 1 of this phase performs the transfer of proprietary device data. As part of this step, datasets are collected from medical devices and transferred (typically, in the form of files) to an integration platform or service. In our reference architecture, the datasets are transferred to the Function App (inside an Azure Resource Group) that is responsible for the integration function. It is important for these datasets to contain information about (at least) three concepts or entities:

Medical device(s) from which the datasets are collected.

Patient(s) to whom the datasets belong.

Reading(s) obtained from the medical device(s) throughout the time that the patients utilize these devices. Medical device readings may be point-in-time data captures, metrics, measures, calculations, collections, or similar data points.
Information about the entities listed above is used in the later step of the interoperability phase (discussed below), when we convert this information into resources to be transferred to the second phase, which performs analytical processing and consumption.

In step 2, to maintain the mapping between proprietary device data and FHIR® resources, you can use transformation templates, or follow a programmatic approach, to convert datasets received from medical devices into the appropriate FHIR® resources. Using the entities mentioned in the previous step, the conversion takes place as follows:

Medical device information is converted to the Device resource in FHIR®*.

Patient information is converted to the Patient resource in FHIR®.

Device reading information is converted to the Observation resource in FHIR®.

* Currently, healthcare data solutions in Fabric supports the FHIR® Release 4 (R4) standard. Consequently, the FHIR® resources that are created as part of this step should follow the same standard.

Transformation and mapping activities are under the purview of each specific remote patient monitoring integration solution and are not reviewed in detail in this article. As an example, below are the high-level steps that one of the Microsoft partners (Life365) followed to integrate their remote patient monitoring solution with healthcare data solutions in Fabric:

The Life365 team developed a cloud-based transformation service that translates internal device data into standardized FHIR® (Fast Healthcare Interoperability Resources) Observations to enable compatibility with healthcare data solutions in Microsoft Fabric and other health data ecosystems. This service is implemented in Microsoft Azure and designed to ingest structured payloads from Life365-connected medical devices (including blood pressure monitors, weight scales, and pulse oximeters) and convert them into FHIR®-compliant formats in real time. When a reading is received:

The service identifies relevant clinical metrics (e.g., systolic/diastolic blood pressure, heart rate, weight, SpO₂).

These metrics are mapped to FHIR® Observation resources using industry-standard LOINC codes and units.

Each Observation is enriched with references to the associated patient and device, and formatted in NDJSON to meet the ingestion requirements of healthcare data solutions in Fabric.

The resulting FHIR®-compliant data is securely transmitted to the Fabric instance using token-based authentication.

This implementation provides a consistent, standards-aligned pathway for Life365 device data to integrate with downstream FHIR®-based platforms while abstracting the proprietary structure of the original device payloads.

For examples from the public domain, you can use the following open-source projects as references:

https://github.com/microsoft/fit-on-fhir

https://github.com/microsoft/healthkit-to-fhir

https://github.com/microsoft/FitbitOnFHIR

https://github.com/microsoft/FHIR-Converter

Please note that the above open-source repositories might not be up to date. While they may not provide a complete (end-to-end) solution to map medical device data to FHIR®, they may still be helpful as a starting point. If you decide to incorporate them into your remote patient monitoring integration solution, validate their functionality and make the necessary changes to meet your solution’s requirements.
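To make step 2 more concrete, the short sketch below illustrates the kind of mapping such a transformation service performs. It is not Life365's implementation or an API of healthcare data solutions; the incoming payload shape, resource identifiers, and file name are hypothetical placeholders. It simply converts one blood pressure reading into a FHIR® R4 Observation that references its Patient and Device resources and appends it to an NDJSON file, matching the requisites described in the next section.

```python
import json
from datetime import datetime, timezone

# Hypothetical proprietary payload from a remote patient monitoring device.
# Field names are illustrative only; real vendor payloads will differ.
device_reading = {
    "device_serial": "BP-0042",
    "systolic_mmHg": 128,
    "diastolic_mmHg": 82,
    "taken_at": "2025-05-15T15:35:04Z",
}

# Resource ids would normally come from a device/patient registry lookup.
patient_id = "d3281621-1584-4631-bc82-edcaf49fda96"
device_id = "5a934020-c2c4-4e92-a0c5-2116e29e757d"

observation = {
    "resourceType": "Observation",
    "id": "observation-bp-0001",
    "meta": {"lastUpdated": datetime.now(timezone.utc).isoformat()},
    "status": "final",
    # LOINC 85354-9 = blood pressure panel; 8480-6 / 8462-4 = systolic / diastolic.
    "code": {"coding": [{"system": "http://loinc.org", "code": "85354-9",
                         "display": "Blood pressure panel"}]},
    "subject": {"reference": f"Patient/{patient_id}"},
    "device": {"reference": f"Device/{device_id}"},
    "effectiveDateTime": device_reading["taken_at"],
    "component": [
        {"code": {"coding": [{"system": "http://loinc.org", "code": "8480-6"}]},
         "valueQuantity": {"value": device_reading["systolic_mmHg"], "unit": "mmHg",
                           "system": "http://unitsofmeasure.org", "code": "mm[Hg]"}},
        {"code": {"coding": [{"system": "http://loinc.org", "code": "8462-4"}]},
         "valueQuantity": {"value": device_reading["diastolic_mmHg"], "unit": "mmHg",
                           "system": "http://unitsofmeasure.org", "code": "mm[Hg]"}},
    ],
}

# NDJSON: one complete FHIR resource per line, one file per resource type.
with open("Observation.ndjson", "a", encoding="utf-8") as f:
    f.write(json.dumps(observation) + "\n")
```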
For the resulting FHIR® resources to be successfully consumed by the analytics processing later (within healthcare data solutions in Fabric), they need to satisfy the requisites listed below.

Each FHIR® resource, in its entirety, needs to be saved as a single row in an NDJSON-formatted file. We recommend creating one NDJSON file per FHIR® resource type. That means creating Device.ndjson, Patient.ndjson, and Observation.ndjson files for the three entities we reviewed above.

Each FHIR® resource needs to have a meta segment populated with a lastUpdated value. As an example:

"meta": {
  "lastUpdated": "2025-05-15T15:35:04.218Z",
  "profile": ["http://hl7.org/fhir/us/core/StructureDefinition/us-core-documentreference"]
}

Cross references between Observation and Patient, as well as between Observation and Device FHIR® resources, need to be represented correctly, either through formal FHIR® identifiers or logical identifiers. As an example, the subject and device attributes of the Observation FHIR® resource need to refer to the Patient and Device FHIR® resources, respectively, in this manner:

"subject": {"reference": "Patient/d3281621-1584-4631-bc82-edcaf49fda96"}
"device": {"reference": "Device/5a934020-c2c4-4e92-a0c5-2116e29e757d"}

For the Patient FHIR® resource, if MRN is used as the identifier, it is important to represent the MRN value according to the FHIR® standard. The Patient identifier is a critical attribute because it is used to establish cross-FHIR®-resource relationships throughout the analytics processing and consumption phase. We will review that phase later in this article. At a minimum, a Patient identifier that uses MRN coding as its identifier type needs to have its value, system, type.coding.system, and type.coding.code (with value "MR") attributes populated correctly. See the example below. You can also refer to a Patient FHIR® resource example from hl7.org.

"reference": null,
"type": "Patient",
"identifier": {
  "extension": null,
  "use": null,
  "value": "4e7e5bf8-2823-8ec1-fe37-eba9c9d69463",
  "system": "urn:oid:1.2.36.146.595.217.0.1",
  "type": {
    "extension": null,
    "id": null,
    "coding": [
      {
        "extension": null,
        "id": null,
        "system": "http://terminology.hl7.org/CodeSystem/v2-0203",
        "version": null,
        "code": "MR",
        "display": null,
        "userSelected": null
      }
    ],
    "text": null
  },
  ...

With step 3, to perform the transfer of FHIR® resource NDJSON files to healthcare data solutions in Fabric, first ensure that the integration platform (an Azure Function App, in our case) has permission to transfer (upload) files to healthcare data solutions in Fabric.

Find the managed identity or the service principal that the Azure Function App is running under:

Navigate to the Azure portal and find your Function App within your resource group.

In the Function App's navigation pane, under "Settings," select "Identity".

Identify the managed identity (if enabled): If system-assigned managed identity is enabled, you'll see information about the system-assigned managed identity, including its object ID and principal ID. If a user-assigned managed identity is linked, the details of that identity will be displayed. You can also add user-assigned identities here if needed.

Service principal (if applicable): If the Function App is configured to use a service principal, you'll need to look for the service principal within Microsoft Entra ID (formerly Azure Active Directory). You can find it by searching for "Enterprise Applications" within Microsoft Entra ID and looking for the application associated with the Function App.
Grant the Azure Function App's identity access to upload files:

Logged into Fabric with an administrator account, navigate to the Fabric workspace where your healthcare data solutions instance is deployed.

Click the "Manage Access" button on the top right.

Click "Add People or Groups".

Add the managed identity or the service principal associated with your Azure Function App with Contributor access by selecting "Contributor" from the dropdown list.

Using a coding environment, similar to the Python example provided below, you can manage the OneLake content programmatically. This includes the ability to transfer (upload) the NDJSON-formatted files, which were created earlier, to the destination OneLake folder.

from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeFileClient, DataLakeFileSystemClient

# Replace with your OneLake URI
onelake_uri = "https://your-account-name.dfs.core.windows.net"

# Replace with the destination path to your file
file_path = "/<full path to destination folder (see below)>/<entity name>.ndjson"

# Get the credential
credential = DefaultAzureCredential()

# Create a DataLakeFileClient
file_client = DataLakeFileClient(
    url=f"{onelake_uri}{file_path}",
    credential=credential
)

# Upload the file
with open("<entity name>.ndjson", "rb") as f:
    file_client.upload_data(f, overwrite=True)

print(f"File uploaded successfully: {file_path}")

The destination OneLake folder to use for the remote patient monitoring solution integration with healthcare data solutions in Fabric is determined as follows:

Navigate to the bronze lakehouse created with the healthcare data solutions instance inside the Fabric workspace. The lakehouse is named "healthcare1_msft_bronze". The "healthcare1" segment in the name of the lakehouse is the name of the healthcare data solutions instance deployed in the workspace. You might see a different name in your Fabric workspace; however, the rest of the lakehouse name ("_msft_bronze") remains unchanged.

The unified folder structure of healthcare data solutions is located inside the bronze lakehouse. Within that folder structure, create a subfolder named after the remote patient monitoring solution you are integrating with. This subfolder is referred to as a namespace in the healthcare data solutions documentation and is used to uniquely identify the source of incoming (to-be-uploaded) data. The NDJSON files, which were generated during the interoperability phase, will be transferred (uploaded) into that subfolder.

The full path of the destination OneLake folder to use in your file transfer (upload) code is: healthcare1_msft_bronze.Lakehouse\Files\Ingest\Clinical\FHIR-NDJSON\<Solution-Name-as-Namespace>

Analytics Processing and Consumption Phase

Step 1 of this phase connects the interoperability phase discussed earlier with the analytics processing and consumption phase. As part of this step, you can simply verify that the NDJSON files have been uploaded to the remote patient monitoring solution subfolder inside the unified folder structure in the bronze lakehouse of healthcare data solutions in Fabric. The path to that subfolder is provided earlier in this article. After the upload of the files has completed, you are ready to run the data pipeline that will perform data ingestion and transformation so that the device readings data may be used for analytics purposes.
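If you prefer to confirm the uploads programmatically rather than browsing the lakehouse, the following is a minimal sketch using the same SDK and placeholder endpoint as the upload example above. The workspace name and folder path are illustrative placeholders, and this check is optional rather than a required part of the documented solution.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import FileSystemClient

# Placeholders mirroring the upload example above; substitute your own values.
account_url = "https://your-account-name.dfs.core.windows.net"
workspace_name = "<your-workspace-name>"  # the file system maps to the Fabric workspace
ingest_folder = (
    "healthcare1_msft_bronze.Lakehouse/Files/Ingest/Clinical/"
    "FHIR-NDJSON/<Solution-Name-as-Namespace>"  # forward slashes for the DFS API
)

credential = DefaultAzureCredential()
fs_client = FileSystemClient(
    account_url=account_url,
    file_system_name=workspace_name,
    credential=credential,
)

# List the files under the ingest subfolder to confirm the NDJSON uploads
# landed before triggering the ingestion pipeline.
for path in fs_client.get_paths(path=ingest_folder):
    if not path.is_directory:
        print(f"{path.name} ({path.content_length} bytes)")
```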
In the Fabric workspace where the healthcare data solutions instance is deployed, find and open the data pipeline named "healthcare1_msft_omop_analytics". As is the case with the bronze lakehouse name, the "healthcare1" segment in the name of the data pipeline is the name of the healthcare data solutions instance deployed in the workspace, so you might see a different name in your Fabric workspace depending on your own instance. This data pipeline executes four activities, the first of which copies the transferred files into another subfolder within the unified folder structure so that they can be input to the ingestion step next. The subsequent pipeline activities then perform steps 2 through 4 of the analytics processing and consumption phase:

Step 2 ingests the content from the transferred (NDJSON) files into the ClinicalFHIR delta table of the bronze lakehouse.

Step 3 transforms the content from the ClinicalFHIR delta table of the bronze lakehouse into flattened FHIR® data model content inside the silver lakehouse.

Step 4 transforms the flattened FHIR® content of the silver lakehouse into OMOP data model content inside the gold lakehouse.

As part of step 5, you can develop your own gold lakehouse(s) by transforming content from the silver lakehouse into data model(s) best suited for your custom analytics use cases.

Device data, once transformed into a gold lakehouse, may be used for analytics or reporting in several ways, some of which are discussed briefly below.

In step 6, Power BI reports and dashboards can be built inside Fabric that offer a visual and interactive canvas to analyze the data in detail. (Overview of Power BI - Microsoft Fabric | Microsoft Learn)

As part of step 7, the Fabric data share feature can be used to grant teams within external organizations (that you collaborate with) access to the data. (External data sharing in Microsoft Fabric - Microsoft Fabric | Microsoft Learn)

Finally, step 8 enables you to utilize the discover and build cohorts capability of healthcare data solutions in Fabric. With this capability, you can submit natural language queries to explore the data and build patient cohorts that fit the criteria your use cases are aiming for. (Build patient cohorts with generative AI in discover and build cohorts (preview) - Microsoft Cloud for Healthcare | Microsoft Learn)

Conclusion

When integrated with healthcare data solutions in Fabric, remote patient monitoring solutions can deliver transformative potential in enhancing patient outcomes, optimizing care coordination, and streamlining healthcare system operations. If your organization would like to explore the next steps in such a journey, please contact your Microsoft account team.

DAX Demystified: 5 Key Lessons Every Beginner Needs to Learn Early
🔍 Upcoming Fabric Café Session – Saturday, June 21 at 7:00 AM PT
🎙️ Speaker: Markus Ehrenmueller-Jensen
📌 Topic: DAX Demystified: 5 Key Lessons Every Beginner Needs to Learn Early

If you've ever written a DAX measure and thought, "Why doesn't this work?" — you're not alone. In this session, Markus will walk you through five key lessons that make the difference between confusion and confidence when working with DAX. Learn the practical insights he wishes he knew when he started, like understanding calculated columns vs. measures, and how row and filter context really work.

📈 Whether you're just beginning with DAX or looking to solidify your fundamentals, this session is packed with real-world examples and "aha!" moments that will help DAX finally make sense.

👥 Ideal for: Power BI users, data analysts, and Excel pros getting started with DAX.

🔗 Register here: https://www.meetup.com/microsoft-fabric-cafe/events/308524621/?utm_medium=referral&utm_campaign=share-btn_savedevents_share_modal&utm_source=link&utm_version=v2

#MicrosoftFabric #PowerBI #DAX #FabricCafe #MicrosoftLearn #DataAnalytics

Orchestrate Data Ingestion using Apache Airflow in Microsoft Fabric
🚀 Upcoming #FabricCoffee session at the Microsoft Fabric Café 🚀

📅 Date: Friday, June 13th
🕕 Time: 6:00 PM PT | Saturday, June 14th at 11:00 AM AEST
🎙️ Speaker:
📌 Topic: Orchestrate Data Ingestion using Apache Airflow

Supercharge your data pipelines by combining the power of #Apache #Airflow with #MicrosoftFabric! In this dynamic session, discover how to seamlessly orchestrate data ingestion from multiple sources into Lakehouses and Warehouses with full automation and scalability.

🔹 Trigger Fabric Dataflows, Pipelines, and Notebooks with Airflow
🔹 Automate and monitor data ingestion in real time
🔹 Optimize dependencies and error handling for seamless workflows

Whether you're modernizing your ETL processes or implementing a Medallion Architecture, this session equips you with practical strategies to streamline and scale your data operations effortlessly.

Register here: https://www.meetup.com/microsoft-fabric-cafe/events/308348139/?utm_medium=referral&utm_campaign=share-btn_savedevents_share_modal&utm_source=link&utm_version=v2

👉 Don't miss this opportunity to level up your data engineering game with Apache Airflow + Microsoft Fabric!

#MicrosoftFabric #FabricCafe #MicrosoftLearn #ApacheAirflow #DataEngineering

What's new in SQL Server 2025
Add deep AI integration with built-in vector search and DiskANN optimizations, plus native support for large object JSON and new Change Event Streaming for live data updates. Join and analyze data faster with Lakehouse shortcuts in Microsoft Fabric that unify multiple databases — across different SQL Server versions, clouds, and on-prem — into a single, logical schema without moving data. Build intelligent apps, automate workflows, and unlock rich insights with Copilot and the unified Microsoft data platform, including seamless Microsoft Fabric integration, all while leveraging your existing SQL skills and infrastructure.

Bob Ward, lead SQL engineer, joins Jeremy Chapman to share how the latest SQL Server 2025 innovations simplify building complex, high-performance workloads with less effort.

Run natural language semantic search directly in SQL Server 2025. Vector search and DiskANN work efficiently on modest hardware — no GPU needed. Get started.

Run NoSQL in SQL. Store and manage large JSON documents directly in SQL Server 2025. Insert, update, and query JSON data with native tools. Check it out.

Avoid delays. Reduce database locking without code changes to keep your apps running smoothly. See the new Optimized Locking in SQL Server 2025.

QUICK LINKS:
00:00 — Updates to SQL Server 2025
00:58 — Search and AI
03:55 — Native JSON Support
06:41 — Real-Time Change Event Streaming
08:40 — Optimized Locking for Better Concurrency
10:33 — Join SQL Server data with Fabric
13:53 — Wrap up

Link References
Start using SQL Server 2025 at https://aka.ms/GetSQLServer2025

Video Transcript:

- Today we'll look at the AI integration, developer updates, and performance improvements that make SQL Server 2025 a major upgrade. We've got a lot to unpack here, so we're going to waste no time and get straight into this with lead SQL engineer, Bob Ward. Welcome back to the show.

- So great to be back.

- So SQL Server 2025, it's brand new. It's in public preview right now. So what's behind the release and what's new?

- There are three major areas of updates that we focus on in this release. First, we have deep AI integration. For example, we now have built-in vector search support for more accurate and efficient data retrieval with some under-the-hood optimizations using DiskANN. Second, if you're a developer, this is the most significant release of SQL in the last decade. You know, some of the highlights are native support for JSON files and new change event streaming capabilities for real-time updates. And the third area is improved analytics, where we're going to make it easy to mirror your SQL Servers into Microsoft Fabric without moving the data.
- And all of these are very significant updates. So why don't we start with what's new in search and AI?

- Great, let's get going. As I've mentioned, we've integrated AI directly into the database engine to give you smarter, intelligent searching. With vector search capabilities built in, you can do semantic search over your data to find matches based on similarity versus keywords. For example, here I have a database with a table called ProductDescription, and I want to search using SQL queries against the Description table for intelligent search. Typically, you'd use full text search for this. Now I've built this out, but what about these natural language phrases? Will they work? They don't. And even when I use like clauses, as you can see here, or contains, or even freetext, none of these methods returns what I'm looking for. Instead, this is where natural language with vector search in SQL Server 2025 shines. As a developer, I can get started even locally on my laptop, no GPU required. I'm using the popular framework, Ollama, to host a free open-source embeddings model from Hugging Face. This will convert our data into vectors, including query prompts, and I declare it using this CREATE EXTERNAL MODEL statement. Then I'm able to go in and build a table using the new built-in vector type to store what's called embeddings in a binary format. My table has keys pointing back to my description data, and then I can use a built-in T-SQL function to generate embeddings based on Ollama and store them. For vector search to work, I need to create a vector index, and it's also performance optimized using Disk approximate nearest neighbor, or DiskANN, which is a new way to offload what you'd normally want to run completely in memory to point to an index stored on disk. I have a stored procedure to convert the query prompts into embeddings so it can be used to find matching embeddings in the vector index. So now I have everything running locally on my laptop running SQL. Let's see how it works. I'll try this natural language prompt, like I showed earlier. And it worked. I get a rich set of results, with matching information based on my search to find products in the database. And I can even use Copilot from here to explore more about SQL data. I'll prompt it to look for my new table. And you can see the response here, finding our new table. And I can ask it to pull up a few embedding values with product names and descriptions. And as you saw, the result using our open-source embeddings returned a few languages back. And the good news is that if your data contains multiple languages, it's easy to use different embedding models. For example, here I've wired up Azure OpenAI's ADA 2 embeddings model optimized for multiple languages without even changing my code. And now I can even search using Mandarin Chinese and get back matching results.

- And DiskANN and vector-based search are both massive updates that really go hand in hand to enable better natural language querying on modest hardware. So what about all the developer updates?

- With these updates, things get so much more efficient for developers. With JSON file types, you can bring NoSQL into your SQL relational database. Let me show you how. I've created a database called Orders and a table called Orders. Notice here the new JSON data type, which can store up to a massive two gigabytes of JSON document in this native data type. Now let's look at a couple of examples.
First, I can easily insert JSON documents in their native format directly into the table, and I'll show you some of the JSON functions that you can use to process this new JSON type. JSON value will pull a particular value out of a JSON document and bring it back in result set format. And I can just dump out all the JSON values, so each document will appear as a separate row in its native JSON format. But instead of just doing that, I have aggregate functions. This takes all the rows of JSON types in the table and produces a single array with a single JSON document with all the new rows in the native JSON type. Key-value pairs are also popular in JSON, and I can use the new OBJECT AGGREGATE function to take the order ID key and the JSON document and produce a set of key-value pairs. And I can modify the JSON type directly from here too. Notice, for order_id 1, the quantity is also 1. I'll run this update to modify the value. And when it's finished, the quantity for order_id 1 has been updated to a value of 2 directly in the JSON. Now that's a good example of using the JSON type. So let me show you how this works with a JSON index. I've got a different database for contacts, along with the table for contacts using a JSON document as one of the properties of the contacts table. I can create a JSON index on top of that JSON document, like this. Now I've got some sample data that are JSON documents. And in a second, I'm going to push those into our database. And as I scroll, you'll see that this has nested tags as properties in the JSON document. Now I'll run the query so I can insert these rows with the names of each tag. Let's go look at the output. I'm using JSON value for the name, but I'm using JSON query because the tags are nested. Now I'll show you an example searching with the JSON index. I'm using the new JSON contains function to find tags called fitness that are deeply nested in the JSON document. And I can run that and find the right tags, and even in the execution plan, you can see here that it shows we're using the new JSON index to help find that information.

- That's a big deal. And like you said, there's a lot happening natively in JSON, and now you've got the benefits of SQL for joins, and security, and a lot more.

- You know, and for developers who use change data capture, things become a lot easier with change event streaming. Here, we're reducing I/O overhead and sending transaction log changes directly to your application. To get started with change event streaming for our orders database, I'll run the stored procedure to enable streaming for the database. You can see the table we're going to use to track changes is a typical type of orders table. Here I've created what's called an event stream group. This is where I've configured event streaming to tell it the location of our Azure event hub to stream our data, and I've added my credentials. Then I've configured the table orders to be part of the event streaming group. I've run these procedures to make sure that my configuration is correct. So let's do something interesting. I'm going to automate a workflow using agents to listen for changes as they come in and try to resolve any issues. First, I've created an Azure function app, and using my function app, I have an agent running in the Azure AI service called ContosoShippingAgent. It's built to take shipment information, analyze it, and decide whether something can be done to help. For example, resolving a shipping delay. I've started my Azure function.
This function is waiting for events to be sent to Azure Event Hub in order to process them. Now, in SQL, I'll insert a new order. Going back over to my Azure function, you'll see how the event is processed. In the code, first, we're dumping out the raw cloud event that I showed earlier. Notice the operation is an insert. It's going to dump out some of the different fields we've parsed out of the data, the column names, the metadata, and then the row itself. Notice that because the shipment is 75 days greater than our sales date, it will call our agent. The agent then comes back with a response. It looked at the tracking details and determined that it can change the shipping provider to expedite our delayed shipment, and it contacted the customer directly with the updated shipping info.

- And everybody likes faster shipping. So speaking of things that are getting faster, it's kind of a tradition on Mechanics that we cover the speed-ups for SQL Server. So what are the speed-ups and the performance optimizations for '25?

- Well, there's a lot, but my favorite one improves application concurrency. We've improved the internals of how locking works without application code changes. And I've got an example of this running. I have a lock escalation problem that I need to resolve. I'm going to go update about 2,500 rows in this table just to show what happens, then how we've solved for it. So running this query against that Dynamic Management View, or DMV, shows locks that have accumulated, about 2,500 locks here for key-value locks and 111 for page locks. So what happens if I run enough updates against the table that would cause a lock escalation? Here, I'll update 10,000 rows in the system. But you can see with the locks that this has been escalated to an object lock. It's not updating the entire table, but it's going to cause a problem. Because I've got a query over here that can update the maximum value in just one row and it's going to get blocked, but it shouldn't have to be. You can see here from the blocking query that's running that it's blocked on that original session, and I'm not actually updating a row that's affected by the first one. This is the problem with lock escalation. Now let's look at a new option called optimized locking in SQL Server 2025. Okay, let's go back to where I updated 10,000 rows and look at the lock. Notice how in this particular case I have a transaction lock. It's an intent exclusive lock for the table, but only a transaction lock for that update. If I use this query to update the max, you'll see that we are not blocked. And by looking at the locks, each item has specific transaction locks, so we're not blocking each other. And related to this, we've also solved another problem where two unrelated updates can get blocked. We call this lock after qualification.

- Okay, so it's pinpointing the exact lock type, so you'll get fewer locks in the end. So why don't we move on though from locks to joins?

- Sure. With Microsoft Fabric, it's amazing. You can pull in multiple databases, multiple data types into a unified data platform. Imagine you have two different SQL Servers in different clouds and on-prem, and you just want to join this data together in an easy way without migrating it. With Fabric, you can. I have a SQL Server 2022 instance with a database, and we've already mirrored the product tables from that database into Fabric. I'll show you the mirroring configuration process for a SQL Server 2025 instance with different, but related tables.
These are similar to the steps for mirroring any SQL Server. I've created a database connection for SQL Server 2025. Now I'll pick all the tables in our database and connect. I'll leave the name as is, AdventureWorks, and we're ready to mirror our database. You can see now that the replication process has started for all the tables. All the rows have been replicated for all the columns on all the tables in my database and they've been mirrored into Fabric. Now let's query the data using the SQL analytics endpoint. And you can see that the tables that we previously had in our database and SQL Server are now mirrored into OneLake. Let's run a query and I'll use Copilot to do that. Here's the Copilot code with explanations. Now I'll run it. And as it completes, there's our top customers by sales. Now what if we wanted to do a join across the other SQL Server? It's possible. But normally, there are a lot of manual pieces to do this. Fabric can make that easier using a lakehouse. So let's create a new lakehouse. I just need to give it a name, AdventureWorks, and confirm. Now notice there are no tables in this lakehouse yet, so let's add some. And for that, I'll use a shortcut. A shortcut uses items in OneLake, like the SQL Server databases we just mirrored. So I'll add the AdventureWorks database. And scrolling down, I'll pick all the tables I want. Now I'll create it. And we're not storing the data separately in the lakehouse. It's just a shortcut, like an active read link to the source data, which is our mirrored database, and therefore something that already exists in OneLake. And now you can see I've got these objects here. This icon means that these are shortcuts from another table. So now, let's get data from another warehouse. The SQL Server 2022 instance, which was ADW_products. Again, here, I'll pick the tables that I want and Create. That's it. So I can go and look at product to make sure I've got my product data. Now, let's try to query this as one database and use another analytics endpoint directly against the lakehouse itself. So basically it thinks all the tables are just part of the unified schema now. Let's open up Copilot and write a prompt to pull my top customers by products and sales. And it will be able to work directly against all of these connected databases because they are just in the same schema. And there you go. I have a list of all the data I need in one logical database.

- And this is really great. And I know now that everything's in OneLake, there's also a lot more that you can do with that data.

- With the lakehouse, the sky's the limit. You can use Power BI, or any of those services that are in the unified data platform, Microsoft Fabric.

- Okay, so now we've seen all the updates with SQL Server 2025. To everyone watching, what's the best thing they can do to get started?

- Well, the first thing is to start using it. SQL Server 2025 is ready for you to download and install on the platform of your choice. You'll find it at aka.ms/GetSQLServer2025.

- So thanks so much for sharing all the updates, Bob, and thank you for joining us today. Be sure to subscribe for more, and we'll see you again soon.

Microsoft Fabric Warehouses for the Database Administrator
📢 Upcoming Session – June 7 at 7 AM PT
🧠 Topic: Microsoft Fabric Warehouses for the Database Administrator
🎙️ Speaker: Andy Cutler

Are you a DBA trying to navigate your role in the world of Microsoft Fabric? Whether you've been asked to "look after this Fabric thing" or you're just curious about where DBAs fit in, this session is for you.

We'll explore Microsoft Fabric Warehouses—specifically from a DBA's point of view. Learn how to approach this cloud-based SQL service with the tools and mindset of a data professional, and understand what really matters when managing Fabric in a real-world setting.

Join us to uncover:
🔹 What DBAs need to know about Fabric Warehouses
🔹 How to think about administration in a SaaS analytics platform
🔹 How Fabric fits into the future of data warehousing and analytics

#MicrosoftFabric #DataWarehouse #DBA #FabricWarehouse #MicrosoftLearn #FabricCafe #CloudAnalytics #PowerBI #SQL

What's Included with Microsoft's Granted Offerings for Nonprofits?
Are you a nonprofit looking to boost your impact with cutting-edge technology? Microsoft is here to help! From free software licenses to guided technical documentation and support, this program offers a range of resources designed to empower your organization. In this blog, we'll dive into the incredible tools and grants available to nonprofits through Microsoft, showing you how to make the most of these generous offerings. Whether you're managing projects or just trying to simplify your day-to-day tasks, there's something here for everyone. Let's explore what's possible!

Orchestrate multimodal AI insights within your healthcare data estate (Public Preview)
In today's healthcare landscape, there is an increasing emphasis on leveraging artificial intelligence (AI) to extract meaningful insights from diverse datasets to improve patient care and drive clinical research. However, incorporating AI into your healthcare data estate often brings significant costs and challenges, especially when dealing with siloed and unstructured data. Healthcare organizations produce and consume data that is not only vast but also varied in format—ranging from structured EHR entries to unstructured clinical notes and imaging data. Traditional methods require manual effort to prepare and harmonize this data for AI, specify the AI output format, set up API calls, store the AI outputs, integrate the AI outputs, and analyze the AI outputs for each AI model or service you decide to use.

Orchestrate multimodal AI insights is designed to streamline and scale healthcare AI within your data estate by building on the data transformations in healthcare data solutions in Microsoft Fabric. This capability provides a framework to generate AI insights by connecting your multimodal healthcare data to an ecosystem of AI services and models and integrating structured AI-generated insights back into your data estate. When you combine these AI-generated insights with the existing healthcare data in your data estate, you can power advanced analytics scenarios for your organization and patient population.

Key features:

Metadata store lakehouse acts as a central repository for the metadata for AI orchestration, to effectively capture and manage enrichment definitions, view definitions, and contextual information for traceability purposes.

Execution notebooks define the enrichment view and enrichment definition based on the model configuration and input mappings. They also specify the model processor and transformer. The model processor calls the model API, and the transformer produces the standardized output while saving the output in the bronze lakehouse in the Ingest folder.

Transformation pipeline ingests AI-generated insights through the healthcare data solutions medallion lakehouse layers and persists the insights in an enrichment store within the silver layer.

Conceptual architecture:

The data transformations in healthcare data solutions in Microsoft Fabric allow you to ingest, store, and analyze multimodal data. With the orchestrate multimodal AI insights capability, this standardized data serves as the input for healthcare AI models. The model results are stored in a standardized format and provide new insights from your data. The flow of integrating AI-generated insights into the data estate starts with raw data in the bronze lakehouse, which is then transformed into delta tables in the silver lakehouse.

This capability simplifies AI integration across modalities for data-driven research and care, currently supporting:

Text Analytics for health in Azure AI Language to extract medical entities such as conditions and medications from unstructured clinical notes. This utilizes the data in the DocumentReference FHIR resource. (An illustrative sketch of this kind of call follows after this list.)

MedImageInsight healthcare AI model in Azure AI Foundry to generate medical image embeddings from imaging data. This model leverages the data in the ImagingStudy FHIR resource.

MedImageParse healthcare AI model in Azure AI Foundry to enable segmentation, detection, and recognition from imaging data across numerous object types and imaging modalities. This model uses the data in the ImagingStudy FHIR resource.
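The orchestration capability invokes these services for you through its execution notebooks and model processors, so no manual API calls are required. Purely as an illustration of the kind of enrichment the Text Analytics for health step produces from clinical note text, here is a minimal standalone sketch using the Azure AI Language SDK; the endpoint, key, and note text are placeholders, and this is not the capability's internal code.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

# Placeholder endpoint and key for an Azure AI Language resource.
client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# A short synthetic note standing in for DocumentReference content.
documents = [
    "Patient reports shortness of breath. History of type 2 diabetes. "
    "Prescribed metformin 500 mg twice daily."
]

poller = client.begin_analyze_healthcare_entities(documents)
for doc in poller.result():
    if doc.is_error:
        continue
    for entity in doc.entities:
        # e.g. "metformin" (MedicationName), "type 2 diabetes" (Diagnosis)
        print(entity.text, entity.category, entity.confidence_score)
    for relation in doc.entity_relations:
        print(relation.relation_type,
              [role.entity.text for role in relation.roles])
```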
By using orchestrate multimodal AI insights to leverage the data in healthcare data solutions for these models and integrate the results into the data estate, you can analyze your existing data alongside AI enrichments. This allows you to explore use cases such as creating image segmentations and combining them with your existing imaging metadata and clinical data to enable quick insights and disease-progression trends for clinical research at the patient level.

Get started today!

This capability is now available in public preview, and you can use the in-product sample data to test this feature with any of the three models listed above. For more information and to learn how to deploy the capability, please refer to the product documentation. We will dive deeper into more detailed aspects of the capability, such as the enrichment store and custom AI use cases, in upcoming blogs.

Medical device disclaimer: Microsoft products and services (1) are not designed, intended or made available as a medical device, and (2) are not designed or intended to be a substitute for professional medical advice, diagnosis, treatment, or judgment and should not be used to replace or as a substitute for professional medical advice, diagnosis, treatment, or judgment. Customers/partners are responsible for ensuring solutions comply with applicable laws and regulations.

FHIR® is the registered trademark of HL7 and is used with permission of HL7.

Elevating care management analytics with Copilot for Power BI
The healthcare data solutions care management analytics capability offers a comprehensive template that uses the medallion lakehouse architecture to unify diverse data sets and analyze them for meaningful insights. This enables enhanced care coordination, improved patient outcomes, and scalable, sustainable insights. As the healthcare industry faces rising costs and growing demand for personalized care, data and AI are becoming critical tools. Copilot for Power BI leads this shift, blending AI-driven insights with advanced visualization to revolutionize care delivery.

What is Copilot for Power BI?

Copilot is an AI-powered assistant embedded directly into Power BI, Microsoft's interactive data visualization platform. By leveraging natural language processing and machine learning, Copilot helps users interact with their data more intuitively, whether by asking questions in plain English, generating complex calculations, or uncovering patterns that might otherwise go unnoticed. Copilot for Power BI is embedded within healthcare data solutions, allowing care management—one of its core capabilities—to harness these AI-driven insights. In the context of care management analytics, this means turning a sea of clinical, claims, and operational data into actionable insights without needing to write a single line of code. This empowers teams across all technical levels to gain value from data.

Driving better outcomes through intelligent insights in care management analytics

The care management analytics solution, built on the healthcare data solutions platform, leverages Power BI with Copilot embedded directly within it. Here's how Copilot for Power BI is revolutionizing care management:

Enhancing decision-making with AI
Traditionally, deriving insights from healthcare data required technical expertise and hours of analysis. Copilot simplifies this by allowing care managers and clinicians to ask questions like "Analyze which medical conditions have the highest cost and prevalence in low-income regions." The AI interprets these queries and responds with visualizations, trends, and predictions—empowering faster, data-driven decisions.

Proactive care planning
By analyzing historical and real-time data, Copilot helps identify at-risk patients before complications arise. This enables care teams to intervene earlier, design more personalized care plans, and ultimately improve outcomes while reducing unnecessary hospitalizations.

Operational efficiency
From staffing models to resource allocation, Copilot provides visibility into operational metrics that can drive significant efficiency gains. Healthcare leaders can quickly identify bottlenecks, monitor key performance indicators (KPIs), and simulate "what-if" scenarios, enabling more informed, data-backed decisions on care delivery models.

Reducing costs without compromising quality
Cost containment is a constant challenge in healthcare. By highlighting areas of high spend and correlating them with clinical outcomes, Copilot empowers organizations to optimize care pathways and eliminate inefficiencies, ensuring patients receive the right care at the right time, without waste.

Democratizing data access
Perhaps one of the most transformative aspects of Copilot is how it democratizes access to analytics. Non-technical users, from care coordinators to nurse managers, can interact with dashboards, explore data, and generate insights independently. This cultural shift encourages a more data-literate workforce and fosters collaboration across teams.
Real-world impact

Consider a healthcare system leveraging Power BI and Copilot to manage chronic disease populations more effectively. By combining claims data, social determinants of health (SDoH) indicators, and patient-reported outcomes, care teams can gain a comprehensive view of patient needs, enabling more coordinated care and proactively identifying care gaps. With these insights, organizations can launch targeted outreach initiatives that reduce avoidable emergency department (ED) visits, improve medication adherence, and ultimately enhance outcomes.

The future is here

The integration of Copilot for Power BI marks a pivotal moment for healthcare analytics. It bridges the gap between data and action, bringing AI to the frontlines of care. As the industry continues to embrace value-based care models, tools like Copilot will be essential in achieving the triple aim: better care, lower costs, and improved patient experience. Copilot is more than a tool — it is a strategic partner in your care transformation journey.

Deployment of care management analytics

Showcasing how a Population Health Director uncovers actionable insights through Copilot.

Note: To fully leverage the capabilities of the solution, please follow the deployment steps provided and use the sample data included with healthcare data solutions.

For more information on care management analytics, please review our detailed documentation and get started with transforming your healthcare data landscape today:

Overview of care management analytics - Microsoft Cloud for Healthcare | Microsoft Learn

Deploy and analyze using Care management analytics - Training | Microsoft Learn

Medical device disclaimer: Microsoft products and services (1) are not designed, intended or made available as a medical device, and (2) are not designed or intended to be a substitute for professional medical advice, diagnosis, treatment, or judgment and should not be used to replace or as a substitute for professional medical advice, diagnosis, treatment, or judgment. Customers/partners are responsible for ensuring solutions comply with applicable laws and regulations.

Data security controls in OneLake
Unify and secure your data — no matter where it lives — without sacrificing control using OneLake security, part of Microsoft Fabric. With granular permissions down to the row, column, and table level, you can confidently manage access across engines like Power BI, Spark, and T-SQL, all from one place. Discover, label, and govern your data with clarity using the integrated OneLake catalog that surfaces the right items fast. Aaron Merrill, Microsoft Fabric Principal Program Manager, shows how you can stay in control, from security to discoverability — owning, sharing, and protecting data on your terms.

Protect sensitive information at scale. Set precise data access rules — down to individual rows. Check out OneLake security in Microsoft Fabric.

No data duplication needed. Hide sensitive columns while still allowing access to relevant data. See it here with OneLake security.

Built-in compliance insights. Streamline discovery, governance, and sharing. Get started with the OneLake catalog.

QUICK LINKS:
00:00 — OneLake & Microsoft Fabric core concepts
01:28 — Table level security
02:11 — Column level security
03:06 — Power BI report
03:28 — Row level security
04:23 — Data classification options
05:19 — OneLake catalog
06:22 — View and manage data
06:48 — Governance
07:36 — Microsoft Fabric integration
07:59 — Wrap up

Link References
Check out our blog at https://aka.ms/OneLakeSecurity
Sign up for a 60-day free trial at https://fabric.microsoft.com

Video Transcript:

- As you build AI and analytic workloads, unifying your data from wherever it lives and making it accessible doesn't have to come at the cost of security. In fact, today we dive deeper into Microsoft's approach to data unification, accessibility, and security with OneLake, part of Microsoft Fabric, where we'll focus on OneLake's security control set and how it complements data discovery via the new OneLake catalog.

- Now, in case you're new to OneLake and Microsoft Fabric, I'll start by explaining a few core concepts. OneLake is the logical multi-cloud data lake that is foundational to Microsoft Fabric, Microsoft's fully managed data analytics and AI platform. OneLake, with its support for open data formats, provides a single and unified place across your entire company for data to be discovered, accessed, and controlled across your data estate. Data can reside anywhere, and you can connect to it using shortcuts or via mirroring. And once in OneLake, you have a single place where data can be centrally classified and labeled as the basis for policy controls.
You can then configure granular, role-based permissions that can apply down to the folder level for unstructured data and by table for structured data.

- Then all the way down to the column and row levels within each table. This way, security is enforced across all connected data, meaning that whether you're accessing the data through Spark, Power BI, T-SQL, or any other engine, it's protected and you have the controls to allow or limit access to data on your terms. In fact, let me show you a few examples for enforcing OneLake security at all of these levels. I'll start with an example showing OneLake security at the table level. I want to grant our suppliers team access to a specific table in this lakehouse. I'll create a OneLake security role to do that. So I'll just give it a name, SuppliersReaders. Then I'll choose selected data and find the table that I want to share by expanding the table list, pick suppliers, and then confirm.

- Now, I just need to assign the right users. I'll just add Mona in this case, and create the role. Then if I move over to Mona's experience, I can run queries against the supplier data in the SQL endpoint. But if I try to query any other table, I'm blocked, as you can see here. Now, let me show you another option. This time, I'll lock access down to the column level. I want to grant our customer relations team access to the data they need, but I don't want to give them access to PII data. Using OneLake security controls, I can create a role that restricts access to sensitive columns. Like before, I'll name it. Then I need to select my data. This time, I'll choose three different tables for customer and order data. But notice this grayed-out legacy orders table here that we would like to apply column security to as well. I don't own the permissions for this table because it's a shortcut to other data. However, the owner of that data can grant permission to it using the steps I'll show next. From the role I just created, I'll expand on my tables. And for the customer's table, I'll enable column security. Once I confirm, I can select the columns we don't want them to see, remove them, and save it.

- Now, let's look at the results of this from another engine, Power BI, while building a report. I'll choose a semantic model for my Power BI report. With the column level security in place, notice the sensitive columns I removed before, contact name and address, are hidden from me. And when I expand the legacy orders table, which was a shortcut, it's also not showing PII columns. Now, some scenarios require that security controls are applied where records might be interspersed within the same table, so a row level filter is needed. For example, our US-based HR team should only see data for US-based employees. I've created another security role with the right data selected, HRUS.

- Now, I'll move to my tables and choose from the options for this employee's table and I'll select row security. Row level security in OneLake uses SQL statements to limit what people can see. I'll do that here with a simple select statement to limit country to USA. Now, from the HR team's perspective, they can start to query the data using another engine, Spark, to analyze employee retention. But only across US-based employees, as you can see from the country column. And as mentioned, this applies to all engines, no matter how you access it, including the Parquet files directly in OneLake. Next, let's move on to data classification options that can be used to inform policy controls.
Here, the good news is that the same labels you've defined in Microsoft Purview for your organization and use in Microsoft 365 for emails, messaging, files, sites, and meetings can be applied to data items in OneLake.

- Additionally, Microsoft Purview policy controls can be used to automatically label content in OneLake. And another benefit I can show you from the lineage view is label inheritance. Notice this lakehouse is labeled Non-Business, as is NorthwindTest, but look at the connected data items on the right of NorthwindTest. They are also Non-Business. If I move into the test lakehouse and apply a label either automatically or manually to my data, like I'm doing here, then I move back to the lineage view. My downstream data items, like this model and the SQL analytics endpoint below it, have automatically inherited the upstream label.

- So now that we've explored OneLake security controls, their implementation, and enforcement, let's look at how this works hand in hand with the OneLake catalog for data discovery and management. First, to know that you're in the right place, you can use branded domains to organize collections of data. I'll choose the sales domain. To get the data I want, I can see my items as the ones I own, endorsed items, and my favorites. I can filter by workspace. And on top, I can select the type of data item that I'm looking for. Then if I move over to tags, I can find ones associated with cost centers, dates, or other collection types.

- Now, let's take a look at a data item. This shows me more detail, like the owner and location. I can also see table schemas and more below. I can preview data within the tables directly from here. Then using the lineage tab, it shows me a list of connected and related items. Lastly, the monitor tab lets me track data refresh history. Now, let me show you how, as a data owner, you can view and manage these data items. From the settings of this lakehouse, I can change its properties and metadata, such as the endorsement, or update the sensitivity label. And as the data owner, I can also share it securely internally or even externally with approved recipients. I'll choose a colleague, dave@contoso.com, and share it.

- Next, the govern tab in the OneLake catalog gives you even more control as a data owner, as well as recommendations to make data more secure and compliant. You'll find it on the OneLake catalog main page. This gives me key insights at a glance, like the number and type of items I own. And when I click into view more, I see additional information like my data hierarchy. Below that, item inventory and data refresh status. Sensitivity label coverage gives me an idea of how compliant my data items are. And I can assess data completeness based on whether an item is properly tagged, described, and endorsed across the items I own. Back on the main view, I can see governance actions tailored specifically to my data, like increasing sensitivity label coverage, and more.

- The OneLake catalog is integrated across Microsoft Fabric experiences to help people quickly discover the items they need. And it's also integrated with your favorite Office apps, including Microsoft Excel, where you can use the get data control to select and access data in OneLake. And right in context, without leaving the app, you can define what you want and pull it directly into your Excel file for analysis. The OneLake catalog is the one place where you can discover the data that you want and manage the data that you own.
And combined with OneLake security controls, you can do all of this without increasing your data security risks.

- To find out more and get started, check out our blog at aka.ms/OneLakeSecurity. Also, be sure to sign up for a 60-day free trial at fabric.microsoft.com. And keep watching Mechanics for the latest updates across Microsoft. Subscribe to our channel, and thanks for watching.