Microsoft Fabric

The Future of AI: How Lovable.dev and Azure OpenAI Accelerate Apps that Change Lives
Discover how Charles Elwood, a Microsoft AI MVP and TEDx Speaker, leverages Lovable.dev and Azure OpenAI to create impactful AI solutions. From automating expense reports to restoring voices, translating gestures to speech, and visualizing public health data, Charles's innovations are transforming lives and democratizing technology. Follow his journey to learn more about AI for good.

Azure AI Search: Microsoft OneLake integration plus more features now generally available
From ingestion to retrieval, Azure AI Search releases enterprise-grade GA features: new connectors, enrichment skills, vector/semantic capabilities, and wizard improvements—enabling smarter agentic systems and scalable RAG experiences.

Explore Microsoft Fabric Data Agent & Azure AI Foundry for agentic solutions
Contributors for this blog post: Jeet J & Ritaja S

Context & Objective

Over the past year, Gen AI apps have expanded significantly across enterprises. The agentic AI era is here, and the Microsoft ecosystem helps enable end-to-end acceleration of agentic AI apps in production. In this blog, we'll cover how both low-code business analysts and pro-code developers can use the Microsoft stack to build reusable agentic apps for their organizations.

Professionals in the Microsoft ecosystem are starting to build advanced agentic generative AI solutions using Microsoft AI Services and Azure AI Foundry, which supports both open-source and industry models. Combined with the advancements in Microsoft Fabric, these tools enable robust, industry-specific applications. This blog post explains how to develop multi-agent solutions for various industries using Azure AI Foundry, Copilot Studio, and Fabric.

Disclaimer: This blog post is for educational purposes only and walks through the process of using the relevant services without a ton of custom code; teams must follow engineering best practices—including development, automated deployment, testing, security, and responsible AI—before any production deployment.

What to expect

In focus: Our goal is to help the reader explore specific industry use cases and understand the concept of building multi-agent solutions. We will focus on an insurance and financial services use case, use Fabric Notebooks to create sample (fake) datasets, use a simple click-through workflow to build and configure three agents (on both Fabric and Azure AI Foundry), tie them together, and offer the solution via Teams or Microsoft Copilot using the new M365 Agents Toolkit.

Out of focus: This blog post will not cover the fundamentals of Azure AI Services, Azure AI Foundry, or the various components of Microsoft Fabric.
It also won't cover the different ways (low-code or pro-code) to build agents, orchestration frameworks (Semantic Kernel, LangChain, AutoGen, etc.) for orchestrating the agents, or hosting options (Azure App Service – Web App, Azure Kubernetes Service, Azure Container Apps, Azure Functions). Please review the pointers listed towards the end to gain a holistic understanding of building and deploying mission-critical generative AI solutions.

Logical Architecture of Multi-Agent Solution utilizing Microsoft Fabric Data Agent and Azure AI Foundry Agent

Fabric & Azure AI Foundry – Pro-code Agentic path

Prerequisites

a. Access to an Azure tenant and subscription.
b. Work with your Azure tenant administrator to get the appropriate Azure roles and capacity to provision Azure resources (services) and deploy AI models in certain regions.
c. A paid F2 or higher Fabric capacity resource. Note that the Fabric compute capacity can be paused and resumed; pause it to save costs when you are done learning.
d. Access to the Fabric Admin Portal (Power BI) to enable these settings:
   - Fabric data agent tenant settings is enabled.
   - Copilot tenant switch is enabled.
   - Optional: Cross-geo processing for AI is enabled (depends on your region and data sovereignty requirements).
   - Optional: Cross-geo storing for AI is enabled (depends on your region).
e. At least one of these: a Fabric Data Warehouse, a Fabric Lakehouse, one or more Power BI semantic models, or a KQL database with data. This blog post will cover how to create sample datasets using Fabric Notebooks.
f. The "Power BI semantic models via XMLA endpoints" tenant switch is enabled for Power BI semantic model data sources.

Walkthrough/Set-up

1. One-time setup for the Fabric Workspace and all agents

a) Visit https://app.powerbi.com. Click the "New workspace" button to create a new workspace, give it a name, and ensure that it is associated with the Fabric capacity you have provisioned.
b) Click Workspace settings of the newly created workspace.
c) Review the information under License info. If the workspace isn't associated with your newly created Fabric capacity, make the proper association (link the Fabric capacity to the workspace) and wait 5-10 minutes.

2. Create an Insurance Agent

a) Create a new Lakehouse in your Fabric Workspace. Change the name to InsuranceLakehouse.
b) Create a new Fabric Notebook, assign a name, and associate the Insurance Lakehouse with it.
c) Add the following PySpark (Python) code snippets to the Notebook:
   i) Faker library for Fabric Notebook
   ii) Insurance Sample Dataset in Fabric Notebook
   iii) Run both cells to generate the sample insurance dataset.
d) Create a new Fabric data agent, give it a name, and add the data source (InsuranceLakehouse).
   i) Ensure that the Insurance Lakehouse is added as the data source in the Insurance Fabric data agent.
   ii) Click the AI instructions button first to paste the sample instructions, and finally the Publish button.
   iii) Paste the sample instructions in the field:

   A churn is represented by the number 1. Calculate churn rate as: total number of churns (TC) / total count of churn entries (TT). When asked to calculate churn for each policy type, TT should be the total count of churn entries for that policy type, e.g., Life or Legal.

   iv) Make sure to hit the Publish button.
   v) Capture two values from the URL and store them in a secure/private place. We will use them to configure the knowledge source in the Azure AI Foundry Agent: https://app.powerbi.com/groups/<WORKSPACE-ID>/aiskills/<ARTIFACT-ID>?experience=power-bi
e) Create an Azure AI Agent on Azure AI Foundry and use the Fabric data agent as the knowledge source.
   i) Visit https://ai.azure.com
   ii) Create a new Azure AI project and deploy the gpt-4o-mini model (as an example model) in a region where the model is available.
   iii) Create a new Azure AI Foundry Agent by clicking the "New agent" button. Give it a name (e.g.
AgentforInsurance).
   iv) Paste the sample instructions in the Azure AI Agent as follows:

   Use the Fabric data agent tool to answer questions related to insurance data. The Fabric data agent tool has access to these insurance data tables: Claims (amount, date, status), Customer (age, address, occupation, etc.), Policy (premium amount, policy type: life insurance, auto insurance, etc.)

   v) On the right-hand pane, click the "+ Add" button next to Knowledge.
   vi) Choose an existing Fabric data agent connection or click new Connection.
   vii) In the next dialog, plug in the values of the Workspace ID and Artifact ID you captured above, ensure that "is secret" is checked, give the connection a name, and hit the Connect button.
   viii) Add the Code Interpreter as a tool in the Azure AI Foundry agent by clicking "+ Add" next to Actions and selecting Code Interpreter.
   ix) Test the agent by clicking the "Try in playground" button.
   x) To test the agent, you can try out these sample questions:
      - What is the churn rate across my different insurance policy types?
      - What's the month-over-month claims change in % for each insurance type?
      - Show me a graph view of the month-over-month claims change in % for each insurance type for the year 2025 only.
      - Based on the month-over-month claims change for the year 2025, can you show the forecast for the next 3 months in a graph?
f) Exposing the Fabric data agent to end users: we will explore this in the Copilot Studio section.

Fabric & Copilot Studio – Low-code agentic path

Prerequisites

For Copilot Studio, you have three options: a Copilot Studio trial, a Copilot Studio license, or Copilot Studio pay-as-you-go (connected to your Azure billing).
Follow the steps here to set it up: Copilot Studio licensing - Microsoft Copilot Studio | Microsoft Learn. Once you have Copilot Studio set up, navigate to https://copilotstudio.microsoft.com/ and start creating a new agent: click on "Create" and then "New agent".

Walkthrough/Set-up

Follow the steps from the "Fabric & Azure AI Foundry" section to create the Fabric Lakehouse and the Fabric data agent. You could create the new agent by describing it step by step in natural language, but for this exercise we will use the "Skip to configure" button. Give the agent a name and add a helpful description (suggestion: "Agent that has access to Insurance data and helps with data-driven insights from Policy, Claims and Customer information"). Then add the agent instruction prompt: "Answer questions related to Insurance data. You have access to an insurance data agent; use the agent to gather insights from the insurance Lakehouse containing customer, policy and claim information." Finally, click on "Create". You should now have a setup like the one shown below.

Next, we want to add another agent for this agent to use – in this case, our Fabric Data Agent. Click on "Agents". Next, click on "Add" and add an agent. From the screen, click on Microsoft Fabric. If you haven't set up a connection to Fabric from Copilot Studio, you will be prompted to create a new connection; follow the prompts to sign in with your user and add a connection. Once that is done, click "Next" to continue. From the Fabric screen, select the appropriate data agent and click "Next". On the next screen, name the agent appropriately, use a friendly description ("Agent that answers questions from the insurance lakehouse knowledge, has access to claims, policies and customer information"), and finally click on "Add Agent". In the "Tools" section, click refresh to make sure the tool description populates. Finally, go back to the overview and start testing the agent from the side Test Panel. Click on Activity map to see the sequence of events.
Type in the following question: "What's the month-over-month claims change in % for auto insurance?" You can see that the Fabric data agent is called by the Copilot agent in this scenario to answer your question.

Now let's prepare to surface this through Teams. You will need to publish the agent to a channel (in this case, the Teams channel). First, navigate to Channels. Click on the Teams and M365 Copilot channel and click Add. After adding the channel, a pop-up will ask you if you are ready to publish the agent. To view the app in Teams, you need to make sure that you have set up the proper policies in Teams. Follow this tutorial: Connect and configure an agent for Teams and Microsoft 365 Copilot - Microsoft Copilot Studio | Microsoft Learn. Now your app is available across Teams. When using it from Teams, make sure you click "Allow" for the Fabric data agent connection.

Fabric & AI Foundry & Copilot Studio – the end-to-end

We saw how Fabric data agents can be created and utilized in Copilot Studio in a multi-agent setup. In the future, pro-code and low-code agentic developers are expected to work together to create agentic apps, instead of working in silos. So, how do we solve the challenge of connecting all the components together in the same technology stack? Let's say a pro-code developer has created a custom agent in AI Foundry. Meanwhile, a low-code business user has put in business context to create another agent that requires access to the agent in AI Foundry. You'll be pleased to know that Copilot Studio and Azure AI Foundry are becoming more integrated to enable complex, custom scenarios, and Copilot Studio will soon release an integration to help with this.

Summary: We demonstrated how one can build a Gen AI solution that allows seamless integration between Azure AI Foundry agents and Fabric data agents.
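As a side note for the pro-code reader, the churn-rate definition supplied to the Fabric data agent in its AI instructions can be sketched in plain Python. The sample rows below are hypothetical; in the actual solution, the data agent computes this over the InsuranceLakehouse tables.

```python
from collections import defaultdict

# Hypothetical sample rows shaped like the generated insurance dataset;
# churn == 1 marks a churned policy, matching the data agent's AI instructions.
policies = [
    {"policy_type": "Life", "churn": 1},
    {"policy_type": "Life", "churn": 0},
    {"policy_type": "Auto", "churn": 1},
    {"policy_type": "Auto", "churn": 1},
    {"policy_type": "Legal", "churn": 0},
    {"policy_type": "Legal", "churn": 0},
]

def churn_rate(rows):
    """Total number of churns (TC) / total count of churn entries (TT)."""
    return sum(r["churn"] for r in rows) / len(rows)

# Per-policy-type churn rate: restrict TT to entries of that policy type.
by_type = defaultdict(list)
for row in policies:
    by_type[row["policy_type"]].append(row)

overall = churn_rate(policies)
per_type = {ptype: churn_rate(rows) for ptype, rows in by_type.items()}

print(f"overall={overall:.2f}", per_type)
# overall=0.50 {'Life': 0.5, 'Auto': 1.0, 'Legal': 0.0}
```

This is exactly the math the agent is instructed to perform when asked for churn by policy type: the denominator shrinks to the entries of that type.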
We look forward to seeing what innovative solutions you can build by learning from this post and working closely with your Microsoft contacts or your SI partner. This post focused on, among other things:

- Utilizing a real industry domain to illustrate the concepts of building a simple multi-agent solution.
- Showcasing the value of combining the Fabric data agent and the Azure AI agent.
- Demonstrating how one can publish the conceptual solution to Teams or Copilot using the new M365 Agents Toolkit.

Note that this blog post focused only on the Fabric data agent and the Azure AI Foundry Agent Service; production-ready solutions will also need to consider Azure Monitor (for monitoring and observability) and Microsoft Purview (for data governance).

Pointers to Other Learning Resources

Ways to build AI agents:
- Build agents, your way | Microsoft Developer

Components of Microsoft Fabric:
- What is Microsoft Fabric - Microsoft Fabric | Microsoft Learn

Info on Microsoft Fabric data agent:
- Create a Fabric data agent (preview) - Microsoft Fabric | Microsoft Learn
- Fabric Data Agent Tutorial: Fabric data agent scenario (preview) - Microsoft Fabric | Microsoft Learn
- New in Fabric Data Agent: Data source instructions for smarter, more accurate AI responses | Microsoft Fabric Blog

Info on Azure AI Services, models, and Azure AI Foundry:
- Azure AI Foundry documentation | Microsoft Learn
- What are Azure AI services? - Azure AI services | Microsoft Learn
- How to use Azure AI services in Azure AI Foundry portal - Azure AI Services | Microsoft Learn
- What is Azure AI Foundry Agent Service? - Azure AI Foundry | Microsoft Learn
- Explore Azure AI Foundry Models - Azure AI Foundry | Microsoft Learn
- What is Azure OpenAI in Azure AI Foundry Models? | Microsoft Learn
- Explore model leaderboards in Azure AI Foundry portal - Azure AI Foundry | Microsoft Learn
- Benchmark models in the model leaderboard of Azure AI Foundry portal - Azure AI Foundry | Microsoft Learn
- How to use model router for Azure AI Foundry (preview) | Microsoft Learn
- Observability in Generative AI with Azure AI Foundry - Azure AI Foundry | Microsoft Learn
- Trustworthy AI for Azure AI Foundry - Azure AI Foundry | Microsoft Learn
- Cost management for models: Plan to manage costs for Azure AI Foundry Models | Microsoft Learn
- Provisioned throughput offering: Provisioned throughput for Azure AI Foundry Models | Microsoft Learn

Extend Azure AI Foundry Agent with Microsoft Fabric:
- Expand Azure AI Agent with New Knowledge Tools: Microsoft Fabric and Tripadvisor | Microsoft Community Hub
- How to use the data agents in Microsoft Fabric with Azure AI Foundry Agent Service - Azure AI Foundry | Microsoft Learn

General guidance on when to adopt, extend, and build Copilot experiences:
- Adopt, extend and build Copilot experiences across the Microsoft Cloud | Microsoft Learn

M365 Agents Toolkit:
- Choose the right agent solution to support your use case | Microsoft Learn
- Microsoft 365 Agents Toolkit Overview - Microsoft 365 Developer | Microsoft Learn
- GitHub and video on offering an Azure AI agent inside Teams or Copilot via the M365 Agents Toolkit: OfficeDev/microsoft-365-agents-toolkit: Developer tools for building Teams apps & Deploying your Azure AI Foundry agent to Microsoft 365 Copilot, Microsoft Teams, and beyond

Happy Learning!

Contributors: Jeet J & Ritaja S
Special thanks to reviewers: Joanne W, Amir J & Noah A

Approaches to Integrating Azure Databricks with Microsoft Fabric: The Better Together Story!
Azure Databricks and Microsoft Fabric can be combined to create a unified and scalable analytics ecosystem. This document outlines eight distinct integration approaches, each accompanied by step-by-step implementation guidance and key design considerations. These methods are not prescriptive—your cloud architecture team can choose the integration strategy that best aligns with your organization's governance model, workload requirements, and platform preferences. Whether you prioritize centralized orchestration, direct data access, or seamless reporting, the flexibility of these options allows you to tailor the solution to your specific needs.

Acting on Real-Time data using custom actions with Data Activator
Being able to make data-driven decisions and act on real-time data is important to organizations because it enables them to avert crises in systems that monitor product health, and to take other actions based on their requirements. For example, a shipping company may want to monitor its packages and act in real time when the temperature of a package becomes too hot. One way of monitoring and acting on data is to use Data Activator, a no-code experience in Microsoft Fabric for automatically taking action when a condition, such as high package temperature, is detected in the data.

External Data Sharing With Microsoft Fabric
Demand for data for external analytics consumption is growing rapidly. There are many options to share data externally, and the field is very dynamic. One of the most frictionless, easy-onboarding options for external data sharing, which we will explore here, is Microsoft Fabric. External data sharing allows users to share data from their tenant with users in another Microsoft Fabric tenant.

Announcing Mirroring for Azure Database for PostgreSQL in Microsoft Fabric for Public Preview
Back at the first European Microsoft Fabric Community Conference in September 2024, we announced our private preview program for Mirroring for Azure Database for PostgreSQL in Microsoft Fabric. Today, in conjunction with the 2025 edition of the Microsoft Fabric Community Conference in Las Vegas, we're thrilled to announce our public preview milestone, giving customers the ability to leverage friction-free, near-real-time replication from Azure Database for PostgreSQL flexible server to Fabric OneLake in Delta tables, providing a solid foundation for reporting, advanced analytics, AI, and data science on operational data with minimal effort and impact on transactional workloads.

Mirroring is set up from the Fabric Data Warehousing experience by providing the Azure Database for PostgreSQL flexible server and database connection details, then selecting what needs to be mirrored into Fabric: either all data or user-selected eligible tables. And, just like that, mirroring is ready to go. Mirroring Azure Database for PostgreSQL flexible server creates an initial snapshot in Fabric OneLake, after which data is kept in sync in near-real time with every transaction.

How mirroring to Fabric works in Azure Database for PostgreSQL flexible server

Fabric mirroring in Azure Database for PostgreSQL flexible server is based on principles such as logical replication and the Change Data Capture (CDC) design pattern. Once Fabric mirroring is established for a database in Azure Database for PostgreSQL flexible server, an initial snapshot is created by a background process for the selected tables to be mirrored. That snapshot is shipped to a Fabric OneLake landing zone in Parquet format. A process running in Fabric, known as the replicator, takes these initial snapshot files and creates tables in Delta format in the Mirrored database artifact. Subsequent changes applied to the selected tables are also captured in the source database and shipped to the OneLake landing zone in batches.
Those batches of changes are finally applied to the respective Delta tables in the Mirrored database artifact. For Fabric mirroring, the CDC pattern is implemented in a proprietary PostgreSQL extension called azure_cdc, which is installed and registered in source databases during the Fabric mirroring enablement workflow. This guided process has a new dedicated page in the Azure Portal; it sets up all required prerequisites and offers a simplified experience where you just need to select which databases you want to replicate to Fabric OneLake (the default is up to 3). You can read additional details regarding the server enablement process and other critical configuration and monitoring options on a dedicated page in the Azure Database for PostgreSQL flexible server product documentation.

Explore advanced analytics and data engineering for PostgreSQL in Microsoft Fabric

Once data is in OneLake, mirrored data in Delta format is ready for immediate consumption across all Fabric experiences and features, such as Power BI with the new Direct Lake mode, Data Warehouse, Data Engineering, Lakehouse, KQL Database, Notebooks, and Copilot, which work instantly. Direct Lake mode is a fast path to load data from the lake, with groundbreaking semantic model capability for analyzing very large data volumes in Power BI. As Direct Lake mode also supports reading Delta tables right from OneLake, the mirrored PostgreSQL database is Power BI ready, along with Copilot capabilities. Data across any mirrored database (Azure Database for PostgreSQL, Azure SQL DB, Azure Cosmos DB, or Snowflake) can be cross-joined as well, enabling querying across any database, warehouse, or Lakehouse (including shortcuts to AWS S3, ADLS Gen2, etc.).
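Conceptually, the snapshot-plus-change-batches flow described above can be sketched in a few lines of Python. This is a toy illustration of the CDC pattern only; the table name and change-record shape are invented for the example and do not reflect the azure_cdc extension's actual format:

```python
# Toy sketch of CDC-style mirroring: an initial snapshot seeds the mirror,
# then batched changes keep it in sync. All names and shapes are illustrative.

def apply_snapshot(mirror, table, rows):
    """Seed the mirrored table from the initial snapshot files."""
    mirror[table] = {row["id"]: row for row in rows}

def apply_change_batch(mirror, table, changes):
    """Apply a batch of captured changes (insert/update/delete) to the mirror."""
    for change in changes:
        op, row = change["op"], change["row"]
        if op in ("insert", "update"):
            mirror[table][row["id"]] = row
        elif op == "delete":
            mirror[table].pop(row["id"], None)

mirror = {}
apply_snapshot(mirror, "orders", [{"id": 1, "status": "new"}, {"id": 2, "status": "new"}])
apply_change_batch(mirror, "orders", [
    {"op": "update", "row": {"id": 1, "status": "shipped"}},
    {"op": "insert", "row": {"id": 3, "status": "new"}},
    {"op": "delete", "row": {"id": 2}},
])
print(mirror["orders"])
# {1: {'id': 1, 'status': 'shipped'}, 3: {'id': 3, 'status': 'new'}}
```

The real replicator applies these batches to Delta tables in OneLake rather than to an in-memory dictionary, but the snapshot-then-incremental-batches shape is the same.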
With the same approach, you can also have multiple PostgreSQL databases from multiple servers mirrored to OneLake, as in a typical SaaS provider scenario where each database belongs to a different tenant, and execute cross-database queries to aggregate and analyze critical business metrics. Data scientists and data engineers can work with the mirrored Azure Database for PostgreSQL data joined with other sources (see this example with Cosmos DB data) that are created as shortcuts in a Lakehouse. Read about the endless possibilities when loading operational databases into OneLake and Microsoft Fabric in the related section of our product documentation here.

Getting started with Mirroring for Azure Database for PostgreSQL in Fabric

To summarize, Mirroring Azure Database for PostgreSQL in Microsoft Fabric plays a crucial role in enabling analytics and driving insights from operational data by ensuring that the most recent data is available for analysis. This allows businesses to make decisions based on the most current situation, rather than relying on outdated information. Improved accuracy also reduces the risk of discrepancies between the source and the replicated data, leading to more accurate analytics and reliable insights. In addition, having the most recent data is essential for predictive analytics and AI models to make accurate predictions and decisions. To get started and learn more about Mirroring Azure Database for PostgreSQL flexible server in Microsoft Fabric (its prerequisites, setup, FAQs, current limitations, and tutorial), please click here to read all about it, and stay tuned for more updates and new features coming soon. For more updates on overall Mirroring capabilities in Fabric, please read this other blog post with the latest news.

Building Enterprise Voice-Enabled AI Agents with Azure Voice Live API
The sample application covered in this post demonstrates two approaches in an end-to-end solution that includes product search, order management, automated shipment creation, intelligent analytics, and comprehensive business intelligence through Microsoft Fabric integration.

Use Case Scenario: Retail Fashion Agent

Core Business Capabilities:

- Product Discovery and Ordering: Natural language product search across fashion categories (winter wear, active wear, etc.) and order placement. REST APIs hosted in Azure Function Apps provide this functionality, and a Swagger definition is configured in the application for tool actions.
- Automated Fulfillment: Integration with Azure Logic Apps for shipment creation in Azure SQL Database.
- Policy Support: Vector-powered Q&A for returns, payment issues, and customer policies. Azure AI Search and File Search capabilities are used for this requirement.
- Conversation Analytics: AI-powered analysis using GPT-4o for sentiment scoring and performance evaluation. The application captures the entire conversation between the customer and the agent and sends it to an agent running in Azure Logic Apps for call quality assessment, before storing the results in Azure Cosmos DB. When, during the voice call, the customer indicates that the conversation can be concluded, the agent autonomously sends the conversation history to the Azure Logic App to perform quality assessment.
Advanced Analytics Pipeline:

- Real-time Data Mirroring: Automatic synchronization from Azure Cosmos DB to Microsoft Fabric OneLake
- Business Intelligence: Custom Data Agents in Fabric for trend analysis and insights
- Executive Dashboards: Power BI reports for comprehensive performance monitoring

Technical Architecture Overview

The solution presents two approaches, each optimized for different enterprise scenarios.

🎯 Approach 1: Direct Model Integration with GPT-Realtime

Architecture Components

This approach provides direct integration with the Azure Voice Live API using the GPT-Realtime model for immediate speech-to-speech conversational experiences without intermediate text processing. The application connects to the Voice Live API over a WebSocket connection. The semantics of this API are similar to those used when connecting to the GPT-Realtime API directly. The Voice Live API provides additional configurability, like the choice of a custom voice from Azure Speech Services, and options for echo cancellation, noise reduction, and plugging in an Avatar integration.
Core Technical Stack:

- GPT-Realtime Model: Direct audio-to-audio processing
- Azure Speech Voice: High-quality TTS synthesis (en-IN-AartiIndicNeural)
- WebSocket Communication: Real-time bidirectional audio streaming
- Voice Activity Detection: Server-side VAD for natural conversation flow
- Client-Side Function Calling: Full control over tool execution logic

Key Session Configuration

The Direct Model Integration uses the session configuration below:

```python
session_config = {
    "input_audio_sampling_rate": 24000,
    "instructions": system_instructions,
    "turn_detection": {
        "type": "server_vad",
        "threshold": 0.5,
        "prefix_padding_ms": 300,
        "silence_duration_ms": 500,
    },
    "tools": tools_list,
    "tool_choice": "auto",
    "input_audio_noise_reduction": {"type": "azure_deep_noise_suppression"},
    "input_audio_echo_cancellation": {"type": "server_echo_cancellation"},
    "voice": {
        "name": "en-IN-AartiIndicNeural",
        "type": "azure-standard",
        "temperature": 0.8,
    },
    "input_audio_transcription": {"model": "whisper-1"},
}
```

Configuration Highlights:

- 24 kHz Audio Sampling: High-quality audio processing for natural speech
- Server VAD: Optimized threshold (0.5) with 300 ms padding for natural conversation flow
- Azure Deep Noise Suppression: Advanced noise reduction for clear audio
- Indic Voice Support: en-IN-AartiIndicNeural for a localized customer experience
- Whisper-1 Transcription: Accurate speech recognition for conversation logging

Connecting to the Azure Voice Live API

The voicelive_modelclient.py demonstrates advanced WebSocket handling for real-time audio streaming:

```python
def get_websocket_url(self, access_token: str) -> str:
    """Generate WebSocket URL for Voice Live API."""
    azure_ws_endpoint = endpoint.rstrip("/").replace("https://", "wss://")
    return (
        f"{azure_ws_endpoint}/voice-live/realtime?api-version={api_version}"
        f"&model={model_name}"
        f"&agent-access-token={access_token}"
    )

async def connect(self):
    if self.is_connected():
        # raise Exception("Already connected")
        self.log("Already connected")
    # Get access token
    access_token = self.get_azure_token()
    # Build WebSocket URL and headers
    ws_url = self.get_websocket_url(access_token)
    self.ws = await websockets.connect(
        ws_url,
        additional_headers={
            "Authorization": f"Bearer {self.get_azure_token()}",
            "x-ms-client-request-id": str(uuid.uuid4()),
        },
    )
    print("Connected to Azure Voice Live API....")
    asyncio.create_task(self.receive())
    await self.update_session()
```

Function Calling Implementation

The Direct Model Integration provides client-side function execution with complete control:

```python
tools_list = [
    {
        "type": "function",
        "name": "perform_search_based_qna",
        "description": "call this function to respond to the user query on Contoso retail policies, procedures and general QnA",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
    {
        "type": "function",
        "name": "create_delivery_order",
        "description": "call this function to create a delivery order based on order id and destination location",
        "parameters": {
            "type": "object",
            "properties": {
                "order_id": {"type": "string"},
                "destination": {"type": "string"},
            },
            "required": ["order_id", "destination"],
        },
    },
    {
        "type": "function",
        "name": "perform_call_log_analysis",
        "description": "call this function to analyze call log based on input call log conversation text",
        "parameters": {
            "type": "object",
            "properties": {
                "call_log": {"type": "string"},
            },
            "required": ["call_log"],
        },
    },
    {
        "type": "function",
        "name": "search_products_by_category",
        "description": "call this function to search for products by category",
        "parameters": {
            "type": "object",
            "properties": {
                "category": {"type": "string"},
            },
            "required": ["category"],
        },
    },
    {
        "type": "function",
        "name": "order_products",
        "description": "call this function to order products by product id and quantity",
        "parameters": {
            "type": "object",
            "properties": {
                "product_id": {"type": "string"},
                "quantity": {"type": "integer"},
            },
            "required": ["product_id", "quantity"],
        },
    },
]
```

🤖 Approach 2: Azure AI
Foundry Agent Integration

Architecture Components

This approach leverages existing Azure AI Foundry Service Agents, providing enterprise-grade voice capabilities as a clean wrapper over pre-configured agents. It does not entail any code changes to the agent itself to voice-enable it.

Core Technical Stack:

- Azure Fast Transcript: Advanced multi-language speech-to-text processing
- Azure AI Foundry Agent: Pre-configured agent with autonomous capabilities
- GPT-4o-mini Model: Agent-configured model for text processing
- Neural Voice Synthesis: Indic-language-optimized TTS
- Semantic VAD: Azure semantic voice activity detection

Session Configuration

The Agent Integration approach uses advanced semantic voice activity detection:

```python
session_config = {
    "input_audio_sampling_rate": 24000,
    "turn_detection": {
        "type": "azure_semantic_vad",
        "threshold": 0.3,
        "prefix_padding_ms": 200,
        "silence_duration_ms": 200,
        "remove_filler_words": False,
        "end_of_utterance_detection": {
            "model": "semantic_detection_v1",
            "threshold": 0.01,
            "timeout": 2,
        },
    },
    "input_audio_noise_reduction": {"type": "azure_deep_noise_suppression"},
    "input_audio_echo_cancellation": {"type": "server_echo_cancellation"},
    "voice": {
        "name": "en-IN-AartiIndicNeural",
        "type": "azure-standard",
        "temperature": 0.8,
    },
    "input_audio_transcription": {"model": "azure-speech", "language": "en-IN, hi-IN"},
}
```

Key Differentiators:

- Semantic VAD: Intelligent voice activity detection with utterance prediction
- Multi-language Support: Azure Speech with en-IN and hi-IN language support
- End-of-Utterance Detection: AI-powered conversation turn management
- Filler Word Handling: Configurable processing of conversational fillers

Agent Integration Code

The voicelive_client.py demonstrates seamless integration with Azure AI Foundry Agents. Notice that we need to provide the Azure AI Foundry project name and the ID of an agent in it. We do not need to pass the model's name here, since the agent is already configured with one.
```python
def get_websocket_url(self, access_token: str) -> str:
    """Generate WebSocket URL for Voice Live API."""
    azure_ws_endpoint = endpoint.rstrip("/").replace("https://", "wss://")
    return (
        f"{azure_ws_endpoint}/voice-live/realtime?api-version={api_version}"
        f"&agent-project-name={project_name}&agent-id={agent_id}"
        f"&agent-access-token={access_token}"
    )

async def connect(self):
    """Connects the client using a WS Connection to the Realtime API."""
    if self.is_connected():
        # raise Exception("Already connected")
        self.log("Already connected")
    # Get access token
    access_token = self.get_azure_token()
    # Build WebSocket URL and headers
    ws_url = self.get_websocket_url(access_token)
    self.ws = await websockets.connect(
        ws_url,
        additional_headers={
            "Authorization": f"Bearer {self.get_azure_token()}",
            "x-ms-client-request-id": str(uuid.uuid4()),
        },
    )
    print("Connected to Azure Voice Live API....")
    asyncio.create_task(self.receive())
    await self.update_session()
```

Advanced Analytics Pipeline

GPT-4o Powered Call Analysis

The solution implements conversation analytics using Azure Logic Apps with GPT-4o:

```json
{
  "functions": [
    {
      "name": "evaluate_call_log",
      "description": "Evaluate call log for Contoso Retail customer service call",
      "parameters": {
        "properties": {
          "call_reason": {
            "description": "Categorized call reason from 50+ predefined scenarios",
            "type": "string"
          },
          "customer_satisfaction": {
            "description": "Overall satisfaction assessment",
            "type": "string"
          },
          "customer_sentiment": {
            "description": "Emotional tone analysis",
            "type": "string"
          },
          "call_rating": {
            "description": "Numerical rating (1-5 scale)",
            "type": "number"
          },
          "call_rating_justification": {
            "description": "Detailed reasoning for rating",
            "type": "string"
          }
        }
      }
    }
  ]
}
```

Microsoft Fabric Integration

The analytics pipeline extends into Microsoft Fabric for enterprise business intelligence. Fabric Integration Features:

- Real-time Data Mirroring: Cosmos DB to OneLake synchronization
- Custom Data Agents: Business-specific analytics agents in Fabric
- Copilot Integration: Natural language business intelligence queries
- Power BI Dashboards: Interactive reports and executive summaries

Artefacts for reference

- The source code of the solution is available in the GitHub repo here.
- An article on this topic is published on LinkedIn here.
- A video recording of the demonstration of this app is available below:
  - Part 1 - walkthrough of the agent configuration in Azure AI Foundry - here
  - Part 2 - demonstration of the application that integrates with the Azure Voice Live API - here
  - Part 3 - demonstration of the Microsoft Fabric integration, Data Agents, Copilot in Fabric, and Power BI for insights and analysis - here

Conclusion

Azure Voice Live API enables enterprises to build sophisticated voice-enabled AI assistants using two distinct architectural approaches. The Direct Model Integration provides ultra-low latency for real-time applications, while the Azure AI Foundry Agent Integration offers enterprise-grade governance and autonomous operation. Both approaches deliver the same comprehensive business capabilities:

- Natural voice interactions with advanced VAD and noise suppression
- Complete retail workflow automation from inquiry to fulfillment
- AI-powered conversation analytics with sentiment scoring
- Enterprise business intelligence through Microsoft Fabric integration

The choice between approaches depends on your specific requirements: choose Direct Model Integration for custom function calling and minimal latency; choose Azure AI Foundry Agent Integration for enterprise governance and existing investments.

Capacity Template v2 with Microsoft Fabric
### 1. Capacity Scenario

One of the most common scenarios for Microsoft Graph Data Connect (MGDC) for SharePoint is Capacity. This scenario focuses on identifying which sites and files are using the most storage, along with understanding the distribution of these large sites and files by properties like type and age. The MGDC datasets for this scenario are SharePoint Sites and SharePoint Files. If you're not familiar with these datasets, you can find details in the schema definitions at https://aka.ms/SharePointDatasets.

To assist you in using these datasets, the team has developed a Capacity Template. Initially published as a template for Azure Synapse, we now have a new Microsoft Fabric template that is simpler and offers more features. This SharePoint Capacity v2 Template, based on Microsoft Fabric, is now publicly available.

### 2. Instructions

The template comes with a set of detailed instructions at https://aka.ms/fabriccapacitytemplatesteps. These instructions include:

- How to install the Microsoft Fabric and Microsoft Graph Data Connect prerequisites
- How to import the pipeline template from the Microsoft Fabric gallery and set it up
- How to import the Power BI template and configure the data source settings

See below for some additional details about the template.

### 3. Microsoft Fabric Pipeline

After you import the pipeline template, it will look like this:

*[pipeline screenshot]*

### 4. Pipeline in Microsoft Fabric

The Capacity template for Microsoft Fabric includes a few key improvements:

- The new template uses delta datasets to update the SharePoint Sites and SharePoint Files datasets. It keeps track of the last time the datasets were pulled by this pipeline, requesting just what changed since then.
- The new template uses views to do calculations and create new properties like size bands or date bands. In our previous template, this was done in Power Query, when importing into Power BI.
- The new template also uses a view to aggregate file data, grouping the data by file extension.
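As a rough illustration of the kind of calculation those views perform, the size-band logic could be sketched in a notebook as plain Python. The thresholds, labels, and column semantics here are illustrative assumptions, not the template's actual band definitions.

```python
GIB = 1024 ** 3  # bytes per gibibyte

def size_band(storage_used_bytes: int) -> str:
    """Bucket a site's storage usage into a coarse band for reporting.

    Illustrative only: the real template derives bands in a SQL view,
    and its exact band boundaries may differ.
    """
    if storage_used_bytes < 1 * GIB:
        return "Under 1 GB"
    if storage_used_bytes < 10 * GIB:
        return "1 GB to 10 GB"
    if storage_used_bytes < 100 * GIB:
        return "10 GB to 100 GB"
    return "Over 100 GB"
```

Grouping sites by a band like this (and similarly by a date band) is what makes the distribution charts in the Power BI report straightforward to build.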
You can find details on how to find and deploy the Microsoft Fabric template in the instructions (see item 2 above).

### 5. Microsoft Fabric Report

The typical result from this solution is a set of Power BI dashboards pulled from the Microsoft Fabric data source. Here are some examples:

*[sample dashboard screenshots]*

These dashboards serve as examples or starting points and can be modified as necessary for various visualizations of the data within these datasets. The instructions (see item 2 above) include details on how to find and deploy a few sample Power BI Capacity templates.

### 6. Conclusion

I hope this provides a good overview of the Capacity template for Microsoft Fabric. You can read more about the Microsoft Graph Data Connect for SharePoint at https://aka.ms/SharePointData. There you will find many details, including a list of datasets available, other common scenarios, and frequently asked questions.