Unleashing the Power of Model Context Protocol (MCP): A Game-Changer in AI Integration
Artificial Intelligence is evolving rapidly, and one of the most pressing challenges is enabling AI models to interact effectively with external tools, data sources, and APIs. The Model Context Protocol (MCP) solves this problem by acting as a bridge between AI models and external services, creating a standardized communication framework that enhances tool integration, accessibility, and AI reasoning capabilities.

What is Model Context Protocol (MCP)?

MCP is a protocol designed to enable AI models, such as Azure OpenAI models, to interact seamlessly with external tools and services. Think of MCP as a universal USB-C connector for AI: it allows language models to fetch information, interact with APIs, and execute tasks beyond their built-in knowledge.

Key Features of MCP

- Standardized Communication – MCP provides a structured way for AI models to interact with various tools.
- Tool Access & Expansion – AI assistants can utilize external tools for real-time insights.
- Secure & Scalable – Enables safe and scalable integration with enterprise applications.
- Multiple Transports – Supports STDIO, SSE (Server-Sent Events), and WebSocket communication methods.

MCP Architecture & How It Works

MCP follows a client-server architecture that allows AI models to interact with external tools efficiently.

Components of MCP

- MCP Host – The AI application (e.g., one built on Azure OpenAI GPT models) requesting data or actions.
- MCP Client – An intermediary service that forwards the AI model's requests to MCP servers.
- MCP Server – A lightweight application that exposes specific capabilities (APIs, databases, files, etc.).
- Data Sources – Backend systems, including local storage, cloud databases, and external APIs.

Data Flow in MCP

1. The AI model sends a request (e.g., "fetch user profile data").
2. The MCP client forwards the request to the appropriate MCP server.
3. The MCP server retrieves the required data from a database or API.
4. The response is sent back to the AI model via the MCP client.

Integrating MCP with Azure OpenAI Services

Microsoft has integrated MCP with Azure OpenAI Services, allowing GPT models to interact with external services and fetch live data. This means AI models are no longer limited to static knowledge; they can access real-time information.

Benefits of Azure OpenAI Services + MCP Integration

✔ Real-time Data Fetching – AI assistants can retrieve fresh information from APIs, databases, and internal systems.
✔ Contextual AI Responses – Enhances AI responses with accurate, up-to-date information.
✔ Enterprise-Ready – Secure and scalable for business applications, including finance, healthcare, and retail.

Hands-On Tools for MCP Implementation

To implement MCP effectively, Microsoft provides two powerful tools: Semantic Workbench and AI Gateway.

Microsoft Semantic Workbench is a development environment for prototyping AI-powered assistants and integrating MCP-based functionality. Features: build and test multi-agent AI assistants; configure settings and interactions between AI models and external tools; supports GitHub Codespaces for cloud-based development. Explore Semantic Workbench for workbench interface examples.

Microsoft AI Gateway is a plug-and-play interface that allows developers to experiment with MCP using Azure API Management. Features: Credential Manager (securely handle API credentials), Live Experimentation (test AI model interactions with external tools), and Pre-built Labs (hands-on learning for developers). Explore AI Gateway to get started.
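Before moving on to setup, here is a sketch of what the four-step data flow described above looks like on the wire, shown as Python dictionaries. The envelope follows the JSON-RPC 2.0 framing that MCP uses; the tool name and its fields are hypothetical examples, not part of any real server.

```python
# Sketch of an MCP tool invocation following the data flow above.
# The "fetch_user_profile" tool and its fields are hypothetical examples.

# Steps 1-2: the model's request, forwarded by the MCP client to the server
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "fetch_user_profile",
        "arguments": {"user_id": "u-123"},
    },
}

# Steps 3-4: the server's result, relayed back to the model by the client
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [
            {"type": "text", "text": '{"name": "Ada", "plan": "enterprise"}'}
        ]
    },
}
```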
Setting Up MCP with Azure OpenAI Services

Step 1: Create a Virtual Environment

First, create a virtual environment using Python:

```
python -m venv .venv
```

Activate the environment:

```
# Windows
.venv\Scripts\activate

# macOS/Linux
source .venv/bin/activate
```

Step 2: Install Required Libraries

Create a requirements.txt file and add the following dependencies:

```
langchain-mcp-adapters
langgraph
langchain-openai
```

Then install the required libraries:

```
pip install -r requirements.txt
```

Step 3: Set Up the OpenAI API Key

Ensure you have your OpenAI API key set up:

```
# Windows
setx OPENAI_API_KEY "<your_api_key>"

# macOS/Linux
export OPENAI_API_KEY=<your_api_key>
```

Building an MCP Server

This server performs basic mathematical operations like addition and multiplication. First, create a new Python file:

```
touch math_server.py
```

Then implement the server:

```python
from mcp.server.fastmcp import FastMCP

# Initialize the server
mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

if __name__ == "__main__":
    mcp.run(transport="stdio")
```

Your MCP server is now ready to run.

Building an MCP Client

This client connects to the MCP server and interacts with it. First, create a new file:

```
touch client.py
```

Then implement the client:

```python
import asyncio

from langchain_mcp_adapters.tools import load_mcp_tools
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Define server parameters
server_params = StdioServerParameters(
    command="python",
    args=["math_server.py"],
)

# Define the model
model = ChatOpenAI(model="gpt-4o")

async def run_agent():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await load_mcp_tools(session)
            agent = create_react_agent(model, tools)
            agent_response = await agent.ainvoke({"messages": "what's (4 + 6) x 14?"})
            return agent_response["messages"][-1].content

if __name__ == "__main__":
    result = asyncio.run(run_agent())
    print(result)
```

Your client is now set up and ready to interact with the MCP server.

Running the MCP Server and Client

Because the client uses the STDIO transport, it launches math_server.py as a subprocess automatically; you do not need to start the server in a separate terminal. Simply run:

```
python client.py
```

Expected output:

```
140
```

This means the AI agent correctly computed (4 + 6) x 14 using both the MCP server and GPT-4o.

Conclusion

Integrating MCP with Azure OpenAI Services enables AI applications to securely interact with external tools, extending functionality beyond text-based responses. With standardized communication and improved AI capabilities, developers can build smarter and more interactive AI-powered solutions. By following this guide, you can set up an MCP server and client, unlocking the full potential of AI with structured external interactions.

Next Steps:

- Explore more MCP tools and integrations.
- Extend your MCP setup to work with additional APIs.
- Deploy your solution in a cloud environment for broader accessibility.

For further details, visit the GitHub repository for MCP integration examples and best practices: MCP GitHub Repository, MCP Documentation, Semantic Workbench, AI Gateway, MCP Video Walkthrough, MCP Blog, MCP GitHub End-to-End Demo
Kickstart Your AI Development with the Model Context Protocol (MCP) Course

Model Context Protocol is an open standard that acts as a universal connector between AI models and the outside world. Think of MCP as "the USB-C of the AI world," allowing AI systems to plug into APIs, databases, files, and other tools seamlessly. By adopting MCP, developers can create smarter, more useful AI applications that access up-to-date information and perform actions the way a human developer would. To help developers learn this game-changing technology, Microsoft has created the "MCP for Beginners" course, a free, open-source curriculum that guides you from the basics of MCP to building real-world AI integrations. Below, we'll explore what MCP is, who this course is for, and how it empowers both beginner and intermediate developers to get started with MCP.

What is MCP and Why Should Developers Care?

Model Context Protocol (MCP) is an innovative framework designed to standardize interactions between AI models and client applications. In simpler terms, MCP is a communication bridge that lets your AI agent fetch live context from external sources (like APIs, documents, databases, or web services) and even take actions using tools. This means your AI apps are no longer limited to pre-trained knowledge; they can dynamically retrieve data or execute commands, enabling far more powerful and context-aware behavior.

Some key reasons MCP matters for developers:

- Seamless Integration of Tools & Data: MCP provides a unified way to connect AI to various data sources and tools, eliminating the need for ad-hoc, fragile integrations. Your AI agent can, for example, query a database or call a web API during a conversation, all through a standardized protocol.
- Stay Up-to-Date: Because AI models can use MCP to access external information, they overcome the training-data cutoff problem. They can fetch the latest facts, figures, or documents on demand, ensuring more accurate and timely responses.
- Industry Momentum: MCP is quickly gaining traction. Introduced by Anthropic in late 2024, it has since been adopted by major AI platforms (Replit, Sourcegraph, Hugging Face, and more) and spawned thousands of open-source connectors by early 2025. It's an emerging standard, and learning it now puts developers at the forefront of AI innovation.

In short, MCP is transformative for AI development, and being proficient in it will help you build smarter AI solutions that can interact with the real world. The MCP for Beginners course is designed to make mastering this protocol accessible, with a structured learning path and hands-on examples.

Introducing the MCP for Beginners Course

"Model Context Protocol for Beginners" is an open-source, self-paced curriculum created by Microsoft to teach the concepts and fundamentals of MCP. Whether you're completely new to MCP or have some experience, this course offers a comprehensive guide from the ground up.

Key Features and Highlights:

- Structured Learning Path: The curriculum is organized as a multi-part guide (9 modules in total) that gradually builds your knowledge. It starts with the basics of MCP (What is MCP? Why does standardization matter? What are the use cases?) and then moves through core concepts, security considerations, and getting started with coding, all the way to advanced topics and real-world case studies. This progression ensures you understand the "why" and "how" of MCP before tackling complex scenarios.
- Hands-On Coding Examples: This isn't just theory; practical coding examples are a cornerstone of the course.
You'll find live code samples and mini-projects in multiple languages (C#, Java, JavaScript/TypeScript, and Python) for each concept. For instance, you'll build a simple MCP-powered calculator application as a project, exploring how to implement MCP clients and servers in your preferred language. By coding along, you cement your understanding and see MCP in action.
- Real-World Use Cases: The curriculum illustrates how MCP applies to real scenarios. It discusses practical use cases of MCP in AI pipelines (e.g., an AI agent pulling in documentation or database info on the fly) and includes case studies of early adopters. These examples help you connect what you learn to actual applications and solutions you might develop in your job.
- Broad Language Support: A unique aspect of this course is its multi-language approach, in terms of both programming and human languages. The content provides code implementations in several popular programming languages (so you can learn MCP in the context of C#, Java, Python, JavaScript, or TypeScript, as you prefer). In addition, the learning materials themselves are available in multiple human languages (English, plus translations such as French, Spanish, German, Chinese, Japanese, Korean, and Polish) to support learners worldwide. This inclusivity ensures that more developers can comfortably engage with the material.
- Up-to-Date and Open-Source: Hosted on GitHub under the MIT License, the curriculum is completely free to use and open for contributions. It's maintained with the latest updates; for example, automated workflows keep translations in sync so all language versions stay current. As MCP evolves, the course content can evolve with it. You can even join the community to suggest improvements or add content, making this a living learning resource.
- Official Resources & Community Support: The course links to official MCP documentation and specs for deeper reference, and it encourages learners to join the Azure AI community Discord (https://aka.ms/ai/discord) to discuss and get help. You won't be learning alone; you can network with experts and peers, ask questions, and share progress. Microsoft's open-source approach means you're part of a community of practitioners from day one.

Course Outline (Modules at a Glance):

- Introduction to MCP: Overview of MCP, why standardization matters in AI, and the key benefits and use cases of using MCP. (Start here to understand the big picture.)
- Core Concepts: Deep dive into MCP's architecture: understanding the client-server model, how requests and responses work, and the message schema. Learn the fundamental components that make up the protocol.
- Security in MCP: Identify potential security threats when building MCP-based systems and learn best practices to secure your AI integrations. Important for anyone planning to deploy MCP in production environments.
- Getting Started (Hands-On): Set up your environment and create your first MCP server and client. This module walks through basic implementation steps and shows how to integrate MCP with existing applications, so you get a service up and running that an AI agent can communicate with.
- MCP Calculator Project: A guided project where you build a simple MCP-powered application (a calculator) in the language of your choice. This hands-on exercise reinforces the concepts by implementing a real tool; you'll see how an AI agent can use MCP to perform calculations via an external tool.
- Practical Implementation: Tips and techniques for using MCP SDKs across different languages.
Covers debugging, testing, and validation of MCP integrations, and how to design effective prompt workflows that leverage MCP's capabilities.
- Advanced Topics: Going beyond the basics: explore multi-modal AI workflows (using MCP to handle not just text but other data types), scalability and performance tuning for MCP servers, and how MCP fits into larger enterprise architectures. This is where intermediate users can really deepen their expertise.
- Community Contributions: Learn how to contribute to the MCP ecosystem and the curriculum itself. This section shows you how to collaborate via GitHub, follow the project's guidelines, and even extend the protocol with your own ideas. It underlines that MCP is a growing, community-driven standard.
- Insights from Early Adoption: Hear lessons learned from real-world MCP implementations. What challenges did early adopters face? What patterns and solutions worked best? Understanding these will prepare you to avoid pitfalls in your own projects.
- Best Practices and Case Studies: A roundup of dos and don'ts when using MCP. This includes performance optimization techniques, designing fault-tolerant systems, and testing strategies, plus detailed case studies that walk through actual MCP solution architectures with diagrams and integration tips, bringing everything you learned together in concrete examples.

Who Should Take This Course?

The MCP for Beginners course is geared towards developers: if you build or work on AI-driven applications, this course is for you. The content specifically welcomes:

- Beginners in AI Integration: You might be a developer who's comfortable with languages like Python, C#, or Java but new to AI/LLMs or to MCP itself. This course will take you from zero knowledge of MCP to a level where you can build and deploy your own MCP-enabled services. You do not need prior experience with MCP or machine learning pipelines; the introduction module will bring you up to speed on key concepts. (Basic programming skills and an understanding of client-server or API concepts are the only prerequisites.)
- Intermediate Developers & AI Practitioners: If you have some experience building bots or AI features and want to enhance them with real-time data access, you'll benefit greatly. The course's later modules on advanced topics, security, and best practices are especially valuable for those looking to integrate MCP into existing projects or optimize their approach. Even if you've dabbled in MCP or a similar concept before, this curriculum will fill gaps in knowledge and provide structured insights that are hard to get from scattered documentation.
- AI Enthusiasts & Architects: Perhaps you're an AI architect or tech lead exploring new frameworks for intelligent agents. This course serves as a comprehensive resource to evaluate MCP for your architecture. By walking through it, you'll understand how MCP can fit into enterprise systems, what benefits it brings, and how to implement it in a maintainable way. It's perfect for getting a broad yet detailed view of MCP's capabilities before adopting it within a team.

In essence, anyone interested in making AI applications more connected and powerful will find value here. From a solo hackathon coder to a professional solution architect, the material scales to your needs. The course starts with fundamentals in an easy-to-grasp manner and then deepens into complex topics, appealing to a wide range of skill levels.
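For a preview of what the hands-on material feels like, here is a minimal sketch in the spirit of the MCP Calculator Project module, using the Python FastMCP SDK; the actual course project is more complete and may differ in structure.

```python
# calculator_sketch.py - a minimal taste of the calculator project module,
# using the Python FastMCP SDK; the real course code may differ.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Calculator")

@mcp.tool()
def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

@mcp.tool()
def divide(a: float, b: float) -> float:
    """Divide a by b, guarding against division by zero."""
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

if __name__ == "__main__":
    mcp.run(transport="stdio")
```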
Prerequisites: The official prerequisites for the course are minimal: you should have basic knowledge of at least one programming language (C#, Java, or Python is recommended) and a general understanding of how client-server applications or APIs work. Familiarity with machine learning concepts is optional but helpful. In short, if you can write simple programs and understand making API calls, you have everything you need to start learning MCP.

Conclusion: Empower Your AI Projects with MCP

The Model Context Protocol for Beginners course is more than just a tutorial; it's a comprehensive journey that empowers you to build the next generation of AI applications. By demystifying MCP and equipping you with hands-on experience, this curriculum turns a seemingly complex concept into practical skills you can apply immediately. With MCP, you unlock capabilities like giving your AI agents real-time information access and the ability to use tools autonomously. That means as a developer, you can create solutions that are significantly more intelligent and useful. A chatbot that can search documents, a coding assistant that can consult APIs or run code, an AI service that seamlessly integrates with your database: all of these become achievable when you know MCP. And thanks to this beginner-friendly course, you'll be able to implement such features with confidence. Whether you are starting out in the AI development world or looking to sharpen your cutting-edge skills, the MCP for Beginners course has something for you. It condenses best practices, real-world lessons, and robust techniques into an accessible format. Learning MCP now will put you ahead of the curve, as this protocol rapidly becomes a cornerstone of AI integrations across the industry. So, are you ready to level up your AI development skills? Dive into the MCP for Beginners course (https://aka.ms/mcp-for-beginners) and start building AI agents that can truly interact with the world around them. With the knowledge and experience gained, you'll be prepared to create smarter, context-aware applications and be part of the community driving AI innovation forward.

Agents League: Meet the Winners
Agents League brought together developers from around the world to build AI agents using Microsoft's developer tools. With 100+ submissions across three tracks, choosing winners was genuinely difficult. Today, we're proud to announce the category champions.

🎨 Creative Apps Winner: CodeSonify (View project)

CodeSonify turns source code into music, and the mapping is genuinely thoughtful: functions become ascending melodies, loops create rhythmic patterns, conditionals trigger chord changes, and bugs produce dissonant sounds. It supports 7 programming languages and 5 musical styles, with each language mapped to its own key signature and code complexity directly driving the tempo. What makes CodeSonify stand out is the depth of execution. The CodeSonify team delivered three integrated experiences: a web app with real-time visualization and one-click MIDI export, an MCP server exposing 5 tools inside GitHub Copilot in VS Code Agent Mode, and a diff sonification engine that lets you hear a code review. A clean refactor sounds harmonious; a messy one sounds chaotic. The team even built the MIDI generator from scratch in pure TypeScript with zero external dependencies. Built entirely with GitHub Copilot assistance, this is one of those projects that makes you think about code differently.

🧠 Reasoning Agents Winner: CertPrep Multi-Agent System (View project)

The CertPrep team built a production-grade 8-agent system for personalized Microsoft certification exam preparation, supporting 9 exam families including AI-102, AZ-204, AZ-305, and more. Each agent has a distinct responsibility: profiling the learner, generating a week-by-week study schedule, curating learning paths, tracking readiness, running mock assessments, and issuing a GO / CONDITIONAL GO / NOT YET booking recommendation. The engineering behind the scenes here is impressive. A 3-tier LLM fallback chain ensures the system runs reliably even without Azure credentials, with the full pipeline completing in under 1 second in mock mode. A 17-rule guardrail pipeline validates every agent boundary. Study time allocation uses the largest remainder algorithm to guarantee no domain is silently zeroed out. 342 automated tests back it all up. This is what thoughtful multi-agent architecture looks like in practice.

💼 Enterprise Agents Winner: Whatever AI Assistant (WAIA) (View project)

WAIA is a production-ready multi-agent system for Microsoft 365 Copilot Chat and Microsoft Teams. A workflow agent routes queries to specialized HR, IT, or Fallback agents, transparently to the user, handling both RAG-pattern Q&A and action automation, including IT ticket submission via a SharePoint list. Technically, it's a showcase of what serious enterprise agent development looks like: a custom MCP server secured with OAuth identity passthrough, streaming responses via the OpenAI Responses API, Adaptive Cards for human-in-the-loop approval flows, a debug mode accessible directly from Teams or Copilot, and full OpenTelemetry integration visible in the Foundry portal. Franck, the project's author, also shipped end-to-end automated Bicep deployment so the solution can land in any Azure environment. It's polished, thoroughly documented, and built to be replicated.

Thank You

To every developer who submitted and shipped projects during Agents League: thank you 💜 Your creativity and innovation brought Agents League to life! 👉 Browse all submissions on GitHub

Building a Smart Building HVAC Digital Twin with AI Copilot Using Foundry Local
Introduction

Building operations teams face a constant challenge: optimizing HVAC systems for energy efficiency while maintaining occupant comfort and air quality. Traditional building management systems display raw sensor data (temperatures, pressures, CO₂ levels), but translating this into actionable insights requires deep HVAC expertise. What if operators could simply ask "Why is the third floor so warm?" and get an intelligent answer grounded in real building state?

This article demonstrates building a sample smart building digital twin with an AI-powered operations copilot, implemented using the DigitalTwin sample project, React, Three.js, and Microsoft Foundry Local. You'll learn how to architect physics-based simulators that model thermal dynamics, implement 3D visualizations of building systems, integrate natural language AI control, and design fault injection systems for testing and training. Whether you're building IoT platforms for commercial real estate, designing energy management systems, or implementing predictive maintenance for building automation, this sample provides proven patterns for intelligent facility operations.

Why Digital Twins Matter for Building Operations

Physical buildings generate enormous operational data but lack intelligent interpretation layers. A 50,000-square-foot office building might have 500+ sensors streaming metrics every minute: zone temperatures, humidity levels, equipment runtimes, energy consumption. Traditional BMS (Building Management Systems) visualize this data as charts and gauges, but operators must manually correlate patterns, diagnose issues, and predict failures.

Digital twins solve this through physics-based simulation coupled with AI interpretation. Instead of just displaying current temperature readings, a digital twin models thermal dynamics: heat transfer rates, HVAC response characteristics, occupancy impacts. When conditions deviate from expectations, the twin compares observed versus predicted states, identifying root causes. Layer AI on top, and operators get natural language explanations: "The conference room is 3 degrees too warm because the VAV damper is stuck at 40% open, reducing airflow by 60%."

This application focuses on HVAC, the largest building energy consumer at typically 40-50% of total usage. Optimizing HVAC by just 10% through better controls can save thousands of dollars monthly while improving occupant satisfaction. The digital twin enables "what-if" scenarios before making changes: "What happens to energy consumption and comfort if we raise the cooling setpoint by 2 degrees during peak demand response events?"

Architecture: Three-Tier Digital Twin System

The application implements a clean three-tier architecture separating visualization, simulation, and state management. The frontend uses React with Three.js for 3D visualization. Users see an interactive 3D model of the three-floor building with color-coded zones indicating temperature and CO₂ levels. Click any equipment (AHUs, VAVs, chillers) to see detailed telemetry. The control panel enables adjusting setpoints, running simulation steps, and activating demand response scenarios. Real-time charts display KPIs: energy consumption, comfort compliance, air quality levels.

The backend Node.js/Express server orchestrates simulation and state management. It maintains the digital twin state as JSON, the single source of truth for all equipment, zones, and telemetry. REST API endpoints handle control requests, simulation steps, and AI copilot queries.
WebSocket connections push real-time updates to the frontend for live monitoring. The HVAC simulator implements physics-based models: 1R1C thermal models for zones, affinity laws for fan power, chiller COP calculations, and CO₂ mass balance equations.

Foundry Local provides AI copilot capabilities. The backend uses foundry-local-sdk to query locally running models. Natural language queries ("How's the lobby temperature?") get answered with building state context. The copilot can explain anomalies, suggest optimizations, and even execute commands when explicitly requested.

Implementing Physics-Based HVAC Simulation

Accurate simulation requires modeling actual HVAC physics. The simulator implements several established building energy models:

```javascript
// backend/src/simulator/thermal-model.js
class ZoneThermalModel {
  // 1R1C (one resistance, one capacitance) thermal model
  static calculateTemperatureChange(zone, delta_t_seconds) {
    const C_thermal = zone.volume * 1.2 * 1000;           // Heat capacity (J/K)
    const R_thermal = zone.r_value * zone.envelope_area;  // Thermal resistance

    // Internal heat gains (occupancy, equipment, lighting)
    const Q_internal =
      zone.occupancy * 100 +   // 100W per person
      zone.equipment_load +
      zone.lighting_load;

    // Cooling/heating from HVAC
    const airflow_kg_s = zone.vav.airflow_cfm * 0.0004719;  // CFM to kg/s
    const c_p_air = 1006;                                    // Specific heat of air (J/kg·K)
    const Q_hvac = airflow_kg_s * c_p_air * (zone.vav.supply_temp - zone.temperature);

    // Envelope losses
    const Q_envelope = (zone.outdoor_temp - zone.temperature) / R_thermal;

    // Net energy balance
    const Q_net = Q_internal + Q_hvac + Q_envelope;

    // Temperature change: Q = C * dT/dt
    const dT = (Q_net / C_thermal) * delta_t_seconds;
    return zone.temperature + dT;
  }
}
```

This model captures essential thermal dynamics while remaining computationally fast enough for real-time simulation. It accounts for internal heat generation from occupants and equipment, HVAC cooling/heating contributions, and heat loss through the building envelope.
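To experiment with the energy balance outside the app, the same 1R1C step can be written as a few lines of standalone Python; all numeric inputs below are illustrative, not values taken from the project.

```python
# Standalone sketch of the 1R1C zone model above; all inputs illustrative.
def step_zone_temperature(T_zone, T_outdoor, T_supply, occupancy,
                          volume_m3=300.0, airflow_kg_s=0.15,
                          R_thermal=0.05, dt_s=60.0):
    C_thermal = volume_m3 * 1.2 * 1000        # heat capacity of zone air (J/K)
    c_p_air = 1006                            # specific heat of air (J/kg*K)
    Q_internal = occupancy * 100              # ~100 W of heat per person
    Q_hvac = airflow_kg_s * c_p_air * (T_supply - T_zone)  # supply-air cooling
    Q_envelope = (T_outdoor - T_zone) / R_thermal           # envelope gain/loss
    Q_net = Q_internal + Q_hvac + Q_envelope
    return T_zone + (Q_net / C_thermal) * dt_s  # from Q = C * dT/dt

# One simulated minute of cooling a warm, occupied zone (temperatures in °C):
print(step_zone_temperature(T_zone=25.0, T_outdoor=32.0, T_supply=13.0, occupancy=6))
```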
The CO₂ model uses mass balance equations:

```javascript
class AirQualityModel {
  static calculateCO2Change(zone, delta_t_seconds) {
    // CO₂ generation from occupants
    const G_co2 = zone.occupancy * 0.0052;  // L/s per person at rest

    // Outdoor air ventilation rate
    const V_oa = zone.vav.outdoor_air_cfm * 0.000471947;  // CFM to m³/s

    // CO₂ concentration difference (indoor - outdoor)
    const delta_CO2 = zone.co2_ppm - 400;  // Outdoor ~400ppm

    // Mass balance: dC/dt = (G - V*ΔC) / Volume
    const dCO2_dt = (G_co2 - V_oa * delta_CO2) / zone.volume;
    return zone.co2_ppm + (dCO2_dt * delta_t_seconds);
  }
}
```

These models execute every simulation step, updating the entire building state:

```javascript
async function simulateStep(twin, timestep_minutes) {
  const delta_t = timestep_minutes * 60;  // Convert to seconds

  // Update each zone
  for (const zone of twin.zones) {
    zone.temperature = ZoneThermalModel.calculateTemperatureChange(zone, delta_t);
    zone.co2_ppm = AirQualityModel.calculateCO2Change(zone, delta_t);
  }

  // Update equipment based on zone demands
  for (const vav of twin.vavs) {
    updateVAVOperation(vav, twin.zones);
  }
  for (const ahu of twin.ahus) {
    updateAHUOperation(ahu, twin.vavs);
  }
  updateChillerOperation(twin.chiller, twin.ahus);
  updateBoilerOperation(twin.boiler, twin.ahus);

  // Calculate system KPIs
  twin.kpis = calculateSystemKPIs(twin);

  // Detect alerts
  twin.alerts = detectAnomalies(twin);

  // Persist updated state
  await saveTwinState(twin);
  return twin;
}
```

3D Visualization with React and Three.js

The frontend renders an interactive 3D building view that updates in real-time as conditions change. Using React Three Fiber simplifies Three.js integration with React's component model:

```jsx
// frontend/src/components/BuildingView3D.jsx
import { Canvas } from '@react-three/fiber';
import { OrbitControls } from '@react-three/drei';

export function BuildingView3D({ twinState }) {
  return (
    <Canvas>
      <ambientLight />
      {/* Render building floors */}
      {twinState.zones.map(zone => (
        <ZoneMesh
          key={zone.id}
          zone={zone}
          onClick={() => selectZone(zone.id)}
        />
      ))}
      {/* Render equipment */}
      {twinState.ahus.map(ahu => (
        <EquipmentMesh key={ahu.id} equipment={ahu} />
      ))}
      <OrbitControls />
    </Canvas>
  );
}

function ZoneMesh({ zone, onClick }) {
  const color = getTemperatureColor(zone.temperature, zone.setpoint);
  return (
    <mesh position={zone.position} onClick={onClick}>
      <boxGeometry args={zone.dimensions} />
      <meshStandardMaterial color={color} />
    </mesh>
  );
}

function getTemperatureColor(current, setpoint) {
  const deviation = current - setpoint;
  if (Math.abs(deviation) < 1) return '#00ff00';  // Green: comfortable
  if (Math.abs(deviation) < 3) return '#ffff00';  // Yellow: acceptable
  return '#ff0000';                               // Red: uncomfortable
}
```

This visualization immediately shows building state at a glance: operators see "hot spots" in red, comfortable zones in green, and can click any area for detailed metrics.

Integrating AI Copilot for Natural Language Control

The AI copilot transforms building data into conversational insights. Instead of navigating multiple screens, operators simply ask questions:

```javascript
// backend/src/routes/copilot.js
import { FoundryLocalClient } from 'foundry-local-sdk';

const foundry = new FoundryLocalClient({
  endpoint: process.env.FOUNDRY_LOCAL_ENDPOINT
});

router.post('/api/copilot/chat', async (req, res) => {
  const { message } = req.body;

  // Load current building state
  const twin = await loadTwinState();

  // Build context for AI
  const context = buildBuildingContext(twin);

  const completion = await foundry.chat.completions.create({
    model: 'phi-4',
    messages: [
      {
        role: 'system',
        content: `You are an HVAC operations assistant for a 3-floor office building.

Current Building State:
${context}

Answer questions about equipment status, comfort conditions, and energy usage.
Provide specific, actionable information based on the current data.
Do not speculate beyond provided information.`
      },
      { role: 'user', content: message }
    ],
    temperature: 0.3,
    max_tokens: 300
  });

  res.json({
    response: completion.choices[0].message.content,
    model: 'phi-4',
    timestamp: new Date().toISOString()
  });
});

function buildBuildingContext(twin) {
  const alerts = twin.alerts.filter(a => a.active);
  const zoneStatus = twin.zones.map(z =>
    `${z.name}: ${z.temperature.toFixed(1)}°F (setpoint: ${z.setpoint}°F), CO₂: ${z.co2_ppm}ppm, ${z.occupancy} people`
  );

  return `
Zones:
${zoneStatus.join('\n')}

Active Alerts: ${alerts.length}
${alerts.map(a => `- ${a.severity}: ${a.message}`).join('\n')}

Energy: ${twin.kpis.total_power_kw.toFixed(1)} kW
Chiller: ${twin.chiller.percent_load}% load, ${twin.chiller.cop.toFixed(2)} COP
  `.trim();
}
```

Example interactions:

User: "Why is the third floor conference room warm?"
AI: "The Executive Suite on Floor 3 is currently 75.2°F, 3 degrees above the 72°F setpoint. This is due to high occupancy (8 people) and VAV-F3-EXEC operating at reduced airflow (285 CFM vs 450 CFM capacity). AHU-2 is supplying adequate cool air at 55°F. Recommend checking VAV damper position; it may be partially stuck."

User: "What's our current energy usage?"
AI: "Total building power consumption is 47.3 kW. Breakdown: Chiller (24.1 kW, 51%), AHU fans (12.8 kW, 27%), Pumps (7.2 kW, 15%), Other (3.2 kW, 7%). Chiller is operating at 62% load with a COP of 4.8, which is efficient for current conditions."

User: "Activate demand response level 2"
AI: "I cannot directly control building systems for safety reasons. To activate DR Level 2, use the Control Panel and click 'Demand Response' → 'Level 2'. This will raise cooling setpoints by 3°F and reduce auxiliary loads, targeting 15% energy reduction."

The AI provides grounded, specific answers citing actual equipment IDs and metrics. It refuses to directly execute control commands, instead guiding operators to explicit control interfaces, a critical safety pattern for building systems.
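During development you can exercise the chat route directly. A minimal smoke test is sketched below; the local host and port are assumptions for a dev setup, not values defined in the excerpt above.

```python
# Smoke test for the /api/copilot/chat route; host and port are assumptions.
import requests

resp = requests.post(
    "http://localhost:3000/api/copilot/chat",
    json={"message": "Why is the third floor conference room warm?"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["response"])
```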
Fault Injection for Testing and Training

Real building operations experience equipment failures: stuck dampers, sensor drift, communication losses. The digital twin includes comprehensive fault injection capabilities to train operators and test control logic:

```javascript
// backend/src/simulator/fault-injector.js
const FAULT_CATALOG = {
  chillerFailure: {
    description: 'Chiller compressor failure',
    apply: (twin) => {
      twin.chiller.status = 'FAULT';
      twin.chiller.cooling_output = 0;
      twin.alerts.push({
        id: 'chiller-fault',
        severity: 'CRITICAL',
        message: 'Chiller compressor failure - no cooling available',
        equipment: 'CHILLER-01'
      });
    }
  },
  stuckVAVDamper: {
    description: 'VAV damper stuck at current position',
    apply: (twin, vavId) => {
      const vav = twin.vavs.find(v => v.id === vavId);
      vav.damper_stuck = true;
      vav.damper_position_fixed = vav.damper_position;
      twin.alerts.push({
        id: `vav-stuck-${vavId}`,
        severity: 'HIGH',
        message: `VAV ${vavId} damper stuck at ${vav.damper_position}%`,
        equipment: vavId
      });
    }
  },
  sensorDrift: {
    description: 'Temperature sensor reading 5°F high',
    apply: (twin, zoneId) => {
      const zone = twin.zones.find(z => z.id === zoneId);
      zone.sensor_drift = 5.0;
      zone.temperature_measured = zone.temperature_actual + 5.0;
    }
  },
  communicationLoss: {
    description: 'Equipment communication timeout',
    apply: (twin, equipmentId) => {
      const equipment = findEquipmentById(twin, equipmentId);
      equipment.comm_status = 'OFFLINE';
      equipment.stale_data = true;
      twin.alerts.push({
        id: `comm-loss-${equipmentId}`,
        severity: 'MEDIUM',
        message: `Lost communication with ${equipmentId}`,
        equipment: equipmentId
      });
    }
  }
};

router.post('/api/twin/fault', async (req, res) => {
  const { faultType, targetEquipment } = req.body;
  const twin = await loadTwinState();

  const fault = FAULT_CATALOG[faultType];
  if (!fault) {
    return res.status(400).json({ error: 'Unknown fault type' });
  }

  fault.apply(twin, targetEquipment);
  await saveTwinState(twin);

  res.json({
    message: `Applied fault: ${fault.description}`,
    affectedEquipment: targetEquipment,
    timestamp: new Date().toISOString()
  });
});
```

Operators can inject faults to practice diagnosis and response. Training scenarios might include: "The chiller just failed during a heat wave, how do you maintain comfort?" or "Multiple VAV dampers are stuck, which zones need immediate attention?"

Key Takeaways and Production Deployment

Building a physics-based digital twin with AI capabilities requires balancing simulation accuracy with computational performance, providing intuitive visualization while maintaining technical depth, and enabling AI assistance without compromising safety. Key architectural lessons:

- Physics models enable prediction: Comparing predicted vs. observed behavior identifies anomalies that simple thresholds miss.
- 3D visualization improves spatial understanding: Operators immediately see which floors or zones need attention.
- AI copilots accelerate diagnosis: Natural language queries get answers in seconds vs. minutes of manual data examination.
- Fault injection validates readiness: Testing failure scenarios prepares operators for real incidents.
- JSON state enables integration: Simple file-based state makes connecting to real BMS systems straightforward.

For production deployment, connect the twin to actual building systems via BACnet, Modbus, or MQTT integrations. Replace simulated telemetry with real sensor streams. Calibrate model parameters against historical building performance. Implement continuous learning where the twin's predictions improve as it observes actual building behavior.
The complete implementation with simulation engine, 3D visualization, AI copilot, and fault injection system is available at github.com/leestott/DigitalTwin. Clone the repository and run the startup scripts to explore the digital twin; no building hardware required.

Resources and Further Reading

- Smart Building HVAC Digital Twin Repository - Complete source code and simulation engine
- Setup and Quick Start Guide - Installation instructions and usage examples
- Microsoft Foundry Local Documentation - AI integration reference
- HVAC Simulation Documentation - Physics model details and calibration
- Three.js Documentation - 3D visualization framework
- ASHRAE Standards - Building energy modeling standards

Learn how to build MCP servers with Python and Azure
We just concluded Python + MCP, a three-part livestream series where we:

- Built MCP servers in Python using FastMCP
- Deployed them into production on Azure (Container Apps and Functions)
- Added authentication, including Microsoft Entra as the OAuth provider

All of the materials from our series are available for you to keep learning from, and linked below:

- Video recordings of each stream
- PowerPoint slides
- Open-source code samples complete with Azure infrastructure and 1-command deployment

If you're an instructor, feel free to use the slides and code examples in your own classes. Spanish speaker? We've got you covered: check out the Spanish version of the series. 🙋🏽‍♂️ Have follow-up questions? Join our weekly office hours on the Foundry Discord: Tuesdays @ 11AM PT → Python + AI; Thursdays @ 8:30AM PT → All things MCP.

Building MCP servers with FastMCP

📺 Watch YouTube recording

In the intro session of our Python + MCP series, we dive into the hottest technology of 2025: MCP (Model Context Protocol). This open protocol makes it easy to extend AI agents and chatbots with custom functionality, making them more powerful and flexible. We demonstrate how to use the Python FastMCP SDK to build an MCP server running locally. Then we consume that server from chatbots like GitHub Copilot in VS Code, using its tools, resources, and prompts. Finally, we discover how easy it is to connect AI agent frameworks like LangChain and Microsoft agent-framework to the MCP server.

- Slides for this session
- Code repository with examples: python-mcp-demos

Deploying MCP servers to the cloud

📺 Watch YouTube recording

In our second session of the Python + MCP series, we deploy MCP servers to the cloud! We walk through the process of containerizing a FastMCP server with Docker and deploying to Azure Container Apps. Then we instrument the MCP server with OpenTelemetry and observe the tool calls using Azure Application Insights and Logfire. Finally, we explore private networking options for MCP servers, using virtual networks that restrict external access to internal MCP tools and agents.

- Slides for this session
- Code repository with examples: python-mcp-demos

Authentication for MCP servers

📺 Watch YouTube recording

In our third session of the Python + MCP series, we explore the best ways to build authentication layers on top of your MCP servers. We start off simple, with an API key to gate access, and demonstrate a key-restricted FastMCP server deployed to Azure Functions. Then we move on to OAuth-based authentication for MCP servers that provide user-specific data. We dive deep into MCP authentication, which is built on top of OAuth2 but with additional requirements like PRM and DCR/CIMD, which can make it difficult to implement fully. We demonstrate the full MCP auth flow in the open-source identity provider Keycloak, and show how to use an OAuth proxy pattern to implement MCP auth on top of Microsoft Entra.

- Slides for this session
- Code repository with Container Apps examples: python-mcp-demos
- Code repository with Functions examples: python-mcp-demos
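As a reference point for the first session, a compact FastMCP server exposing all three MCP primitives (tools, resources, and prompts) looks roughly like the sketch below; the names and return values are illustrative, not taken from the session code.

```python
# Sketch of a FastMCP server exposing a tool, a resource, and a prompt;
# all names and return values are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Demo")

@mcp.tool()
def get_weather(city: str) -> str:
    """Hypothetical tool: return a canned forecast for a city."""
    return f"Sunny in {city}"

@mcp.resource("greeting://{name}")
def greeting(name: str) -> str:
    """A parameterized resource the client can read."""
    return f"Hello, {name}!"

@mcp.prompt()
def review_code(code: str) -> str:
    """A reusable prompt template for clients."""
    return f"Please review this code:\n\n{code}"

if __name__ == "__main__":
    mcp.run()
```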
Level up your Python + AI skills with our complete series

We've just wrapped up our live series on Python + AI, a comprehensive nine-part journey diving deep into how to use generative AI models from Python. The series introduced multiple types of models, including LLMs, embedding models, and vision models. We dug into popular techniques like RAG, tool calling, and structured outputs. We assessed AI quality and safety using automated evaluations and red-teaming. Finally, we developed AI agents using popular Python agent frameworks and explored the new Model Context Protocol (MCP). To help you apply what you've learned, all of our code examples work with GitHub Models, a service that provides free models to every GitHub account holder for experimentation and education. Even if you missed the live series, you can still access all the material using the links below! If you're an instructor, feel free to use the slides and code examples in your own classes. If you're a Spanish speaker, check out the Spanish version of the series.

Python + AI: Large Language Models

📺 Watch recording

In this session, we explore Large Language Models (LLMs), the models that power ChatGPT and GitHub Copilot. We use Python to interact with LLMs using popular packages like the OpenAI SDK and LangChain. We experiment with prompt engineering and few-shot examples to improve outputs. We also demonstrate how to build a full-stack app powered by LLMs and explain the importance of concurrency and streaming for user-facing AI apps.

- Slides for this session
- Code repository with examples: python-openai-demos

Python + AI: Vector embeddings

📺 Watch recording

In our second session, we dive into a different type of model: the vector embedding model. A vector embedding is a way to encode text or images as an array of floating-point numbers. Vector embeddings enable similarity search across many types of content. In this session, we explore different vector embedding models, such as the OpenAI text-embedding-3 series, through both visualizations and Python code. We compare distance metrics, use quantization to reduce vector size, and experiment with multimodal embedding models.

- Slides for this session
- Code repository with examples: vector-embedding-demos

Python + AI: Retrieval Augmented Generation

📺 Watch recording

In our third session, we explore one of the most popular techniques used with LLMs: Retrieval Augmented Generation. RAG is an approach that provides context to the LLM, enabling it to deliver well-grounded answers for a particular domain. The RAG approach works with many types of data sources, including CSVs, webpages, documents, and databases. In this session, we walk through RAG flows in Python, starting with a simple flow and culminating in a full-stack RAG application based on Azure AI Search.

- Slides for this session
- Code repository with examples: python-openai-demos

Python + AI: Vision models

📺 Watch recording

Our fourth session is all about vision models! Vision models are LLMs that can accept both text and images, such as GPT-4o and GPT-4o mini. You can use these models for image captioning, data extraction, question answering, classification, and more! We use Python to send images to vision models, build a basic chat-with-images app, and create a multimodal search engine.

- Slides for this session
- Code repository with examples: openai-chat-vision-quickstart
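For a flavor of the vision session's code, a single multimodal request looks roughly like this sketch; the endpoint, model name, and image URL are assumptions, so check the session repository for current values.

```python
# Sketch of a vision-model call; endpoint, model, and image URL are examples.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://models.inference.ai.azure.com",  # GitHub Models (assumed)
    api_key=os.environ["GITHUB_TOKEN"],
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```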
Python + AI: Structured outputs

📺 Watch recording

In our fifth session, we discover how to get LLMs to output structured responses that adhere to a schema. In Python, all you need to do is define a Pydantic BaseModel to get validated output that perfectly meets your needs. We focus on the structured outputs mode available in OpenAI models, but you can use similar techniques with other model providers. Our examples demonstrate the many ways you can use structured responses, such as entity extraction, classification, and agentic workflows.

- Slides for this session
- Code repository with examples: python-openai-demos

Python + AI: Quality and safety

📺 Watch recording

This session covers a crucial topic: how to use AI safely and how to evaluate the quality of AI outputs. There are multiple mitigation layers when working with LLMs: the model itself, a safety system on top, the prompting and context, and the application user experience. We focus on Azure tools that make it easier to deploy safe AI systems into production. We demonstrate how to configure the Azure AI Content Safety system when working with Azure AI models and how to handle errors in Python code. Then we use the Azure AI Evaluation SDK to evaluate the safety and quality of output from your LLM.

- Slides for this session
- Code repository with examples: ai-quality-safety-demos

Python + AI: Tool calling

📺 Watch recording

In this session, we focus on the technologies needed to build AI agents, starting with the foundation: tool calling (also known as function calling). We define tool call specifications using both JSON schema and Python function definitions, then send these definitions to the LLM. We demonstrate how to properly handle tool call responses from LLMs, enable parallel tool calling, and iterate over multiple tool calls. Understanding tool calling is absolutely essential before diving into agents, so don't skip over this foundational session.

- Slides for this session
- Code repository with examples: python-openai-demos

Python + AI: Agents

📺 Watch recording

In the penultimate session, we build AI agents! We use Python AI agent frameworks such as the new agent-framework from Microsoft and the popular LangGraph framework. Our agents start simple and then increase in complexity, demonstrating different architectures such as multiple tools, supervisor patterns, graphs, and human-in-the-loop workflows.

- Slides for this session
- Code repository with examples: python-ai-agent-frameworks-demos

Python + AI: Model Context Protocol

📺 Watch recording

In the final session, we dive into the hottest technology of 2025: MCP (Model Context Protocol). This open protocol makes it easy to extend AI agents and chatbots with custom functionality, making them more powerful and flexible. We demonstrate how to use the Python FastMCP SDK to build an MCP server running locally and consume that server from chatbots like GitHub Copilot. Then we build our own MCP client to consume the server. Finally, we discover how easy it is to connect AI agent frameworks like LangGraph and Microsoft agent-framework to MCP servers. With great power comes great responsibility, so we briefly discuss the security risks that come with MCP, both as a user and as a developer.

- Slides for this session
- Code repository with examples: python-mcp-demos
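As a taste of the series' hands-on style, here is a sketch of the Pydantic pattern from the structured-outputs session described above, using the OpenAI Python SDK; the schema and prompt are illustrative.

```python
# Sketch of structured outputs with a Pydantic schema; example data only.
from openai import OpenAI
from pydantic import BaseModel

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

completion = client.beta.chat.completions.parse(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Alice and Bob meet for lunch on Friday."}],
    response_format=CalendarEvent,
)
print(completion.choices[0].message.parsed)
```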
Fueling the Agentic Web Revolution with NLWeb and PostgreSQL

We're excited to announce that NLWeb (Natural Language Web), Microsoft's open project for natural language interfaces on websites, now supports PostgreSQL. With this enhancement, developers can leverage PostgreSQL and NLWeb to transform any website into an AI-powered application or Model Context Protocol (MCP) server. This integration allows organizations to utilize a familiar, robust database as the foundation for conversational AI experiences, streamlining deployment and maximizing data security and scalability. Soon, autonomous agents, not just human users, will consume and interpret website content, transforming how information is accessed and utilized online.

At Microsoft //Build 2025, Microsoft introduced the era of the open agentic web: a new paradigm in which autonomous agents seamlessly interact across individual, organizational, team, and end-to-end business contexts. To realize this future, Microsoft announced the NLWeb project. NLWeb transforms any website into an AI-powered application with just a few lines of code by connecting it to an AI model and a knowledge base.

In this post, we'll cover:

- What NLWeb is and how it works with vector databases
- How pgvector enables vector similarity search in PostgreSQL for NLWeb
- How to get started using NLWeb with Postgres

Let's dive in and see how Postgres + NLWeb can redefine conversational web interfaces while keeping your data in a familiar, powerful database.

What is NLWeb? A Quick Overview of Conversational Web Interfaces

NLWeb is an open project developed by Microsoft to simplify adding conversational AI interfaces to websites. How NLWeb works under the hood:

- Processes existing website content in semi-structured formats like Schema.org, RSS, and other data that websites already publish
- Embeds and indexes all the content in a vector store (e.g., PostgreSQL with pgvector)
- Routes user queries through several processes which handle natural language understanding, reranking, and retrieval
- Answers queries with an LLM

The result is a high-quality natural language interface on top of web data, giving developers the ability to let users "talk to" web data. By default, every NLWeb instance is also a Model Context Protocol (MCP) server, allowing websites to make their content discoverable and accessible to agents and other participants in the MCP ecosystem if they choose. Importantly, NLWeb is platform-agnostic and supports many major operating systems, AI models, and vector stores, and the NLWeb project is modular by design, so developers can bring their own retrieval system and model APIs, and define their own extensions.

NLWeb with PostgreSQL

PostgreSQL is now embedded into the NLWeb reference stack as a native retriever, creating a scalable and flexible path for deploying NLWeb instances using open-source infrastructure.

Retrieval Powered by pgvector

NLWeb leverages pgvector, a PostgreSQL extension for efficient vector similarity search, to handle natural language retrieval at scale. By integrating pgvector into the NLWeb stack, teams can eliminate the need for external vector databases. Web data stored in PostgreSQL becomes immediately searchable and usable for NLWeb experiences, streamlining infrastructure and enhancing security. PostgreSQL's robust governance features and wide adoption align with NLWeb's mission to enable conversational AI for any website or content platform.
With pgvector retrieval built in, developers can confidently launch NLWeb instances on their own databases; no additional infrastructure required.

Implementation example

We are going to use NLWeb and Postgres to create a conversational AI app and MCP server that will let us chat with content from the Talking Postgres with Claire Giordano podcast!

Prerequisites

- An active Azure account.
- Enable and configure the pgvector extension.
- Create an Azure AI Foundry project.
- Deploy the models gpt-4.1, gpt-4.1-mini, and text-embedding-3-small.
- Install Visual Studio Code.
- Install the Python extension.
- Install Python 3.11.x.
- Install the Azure CLI (latest version).

Getting started

All the code and sample datasets are available in this GitHub repository.

Step 1: Set Up the NLWeb Server

1. Clone or download the code from the repo.

```
git clone https://github.com/microsoft/NLWeb
cd NLWeb
```

2. Open a terminal to create a virtual Python environment and activate it.

```
python -m venv myenv
source myenv/bin/activate  # Or on Windows: myenv\Scripts\activate
```

3. Go to the 'code/python' folder in NLWeb to install the dependencies.

```
cd code/python
pip install -r requirements.txt
```

4. Go to the project root folder in NLWeb and copy the .env.template file to a new .env file.

```
cd ../../
cp .env.template .env
```

5. In the .env file, update the API key you will use for your LLM endpoint of choice and update the Postgres connection string. For example:

```
AZURE_OPENAI_ENDPOINT="https://TODO.openai.azure.com/"
AZURE_OPENAI_API_KEY="<TODO>"

# If using Postgres connection string
POSTGRES_CONNECTION_STRING="postgresql://<HOST>:<PORT>/<DATABASE>?user=<USERNAME>&sslmode=require"
POSTGRES_PASSWORD="<PASSWORD>"
```

6. Update your config files (located in the config folder) to make sure your preferred providers match your .env file. There are three files that may need changes:

- config_llm.yaml: Update the first line to the LLM provider you set in the .env file. By default it is Azure OpenAI. You can also adjust the models you call here by updating the models noted. By default, we assume 4.1 and 4.1-mini.
- config_embedding.yaml: Update the first line to your preferred embedding provider. By default it is Azure OpenAI, using text-embedding-3-small.
- config_retrieval.yaml: Update the first line to postgres. You should update write_endpoint to postgres and set the postgres retrieval endpoint to 'true' in the following list of possible endpoints.

Step 2: Initialize the Postgres Server

Go to the 'code/python/misc' folder in NLWeb to run the Postgres initializer. NOTE: If you are using Azure Database for PostgreSQL flexible server, make sure you have the `vector` extension allow-listed and that the database has the vector extension enabled.

```
cd code/python/misc
python postgres_load.py
```

Step 3: Ingest Data from the Talking Postgres Podcast

Now we will load some data in our local vector database to test with. Go to the 'code/python' folder in NLWeb and run the command below. The format of the command is as follows (make sure you are still in the 'python' folder when you run this):

```
python -m data_loading.db_load <RSS URL> <site-name>
```

For the Talking Postgres with Claire Giordano podcast:

```
python -m data_loading.db_load https://feeds.transistor.fm/talkingpostgres Talking-Postgres
```

(Optional) You can check the documents table in your Postgres database to verify that all the data from the website was uploaded.
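One quick way to do that verification from Python is to count the rows that landed in the table. This sketch assumes the retriever's default table is named documents; the table and column layout may differ in your NLWeb version.

```python
# Sanity-check the ingested rows; the "documents" table name is an assumption
# based on the default NLWeb Postgres retriever and may differ in your setup.
import os
import psycopg

conn = psycopg.connect(
    os.environ["POSTGRES_CONNECTION_STRING"],
    password=os.environ.get("POSTGRES_PASSWORD"),
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT count(*) FROM documents")
    print("rows ingested:", cur.fetchone()[0])
```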
Test the NLWeb Server

Start your NLWeb server (again from the 'python' folder):

```
python app-file.py
```

Go to http://localhost:8000/ and start asking questions about the Talking Postgres with Claire Giordano podcast; you may try different modes.

Trying List Mode. Sample prompt: "I want to listen to something that talks about the advances in vector search such as DiskANN"

Trying Generate Mode. Sample prompt: "What did Shireesh Thota say about the future of Postgres?"

Running NLWeb with MCP

1. If you do not already have it, install MCP in your venv:

```
pip install mcp
```

2. Next, configure your Claude MCP server. If you don't have the config file already, you can create it at the following locations:

- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json

The default MCP JSON file needs to be modified as shown below.

macOS example configuration:

```json
{
  "mcpServers": {
    "ask_nlw": {
      "command": "/Users/yourname/NLWeb/myenv/bin/python",
      "args": [
        "/Users/yourname/NLWeb/code/chatbot_interface.py",
        "--server",
        "http://localhost:8000",
        "--endpoint",
        "/mcp"
      ],
      "cwd": "/Users/yourname/NLWeb/code"
    }
  }
}
```

Windows example configuration:

```json
{
  "mcpServers": {
    "ask_nlw": {
      "command": "C:\\Users\\yourusername\\NLWeb\\myenv\\Scripts\\python",
      "args": [
        "C:\\Users\\yourusername\\NLWeb\\code\\chatbot_interface.py",
        "--server",
        "http://localhost:8000",
        "--endpoint",
        "/mcp"
      ],
      "cwd": "C:\\Users\\yourusername\\NLWeb\\code"
    }
  }
}
```

Note: For Windows paths, you need to use double backslashes (\\) to escape the backslash character in JSON.

3. Go to the 'code/python' folder in NLWeb. Enter your virtual environment and start your NLWeb local server. Make sure it is configured to access the data you would like to ask about from Claude.

```
# On macOS
source ../myenv/bin/activate
python app-file.py

# On Windows
..\myenv\Scripts\activate
python app-file.py
```

4. Open Claude Desktop. It should ask you to trust the 'ask_nlw' external connection if it is configured correctly. After clicking yes and the welcome page appears, you should see 'ask_nlw' in the bottom-right '+' options. Select it to start a query.

5. To query NLWeb, just type 'ask_nlw' in your prompt to Claude. You'll notice that you also get the full JSON script for your results. Remember, you must have your local NLWeb server started to use this option.

Learn More

- Vector Store in Azure Postgres Flexible Server
- Generative AI in Azure Postgres Flexible Server
- NLWeb GitHub repo, which includes a reference server for handling natural language queries and pgvector integration

April 2025 Recap: Azure Database for PostgreSQL Flexible Server
April 2025 Recap: Azure Database for PostgreSQL Flexible Server

Hello Azure Community,

April has brought powerful capabilities to Azure Database for PostgreSQL flexible server: On-Demand backups are now Generally Available, a new Terraform version for our latest REST API has been released, the Public Preview of the MCP Server is now live, and there are a few other updates we are excited to share in this blog. Stay tuned as we dive into the details of these new features and how they can benefit you!

Feature Highlights

General Availability of On-Demand Backups
Public Preview of Model Context Protocol (MCP) Server
Additional Tuning Parameters in PG 17
Terraform resource released for latest REST API version
General Availability of pg_cron extension in PG 17

General Availability of On-Demand Backups

We are excited to announce the General Availability of On-Demand backups for Azure Database for PostgreSQL flexible server. This makes backup management easier, complementing the automated, scheduled storage volume snapshots that encompass the entire database instance and all associated transaction logs.

On-demand backups give you the flexibility to initiate backups at any time, supplementing the existing scheduled backups. This capability is useful for scenarios such as application upgrades, schema modifications, or major version upgrades. For instance, before making schema changes you can take a database backup; in the unlikely case that you run into issues, you can quickly restore (PITR) the database to a point before the schema changes were initiated. Similarly, during major version upgrades, on-demand backups provide a safety net, allowing you to revert to a previous state if anything goes wrong. Without an on-demand backup, the PITR could take much longer, as it would need to start from the last snapshot, which could be up to 24 hours old, and then replay the WAL. Azure Database for PostgreSQL flexible server already takes an on-demand backup behind the scenes for you during a major version upgrade and then deletes it when the upgrade is successful.

Key Benefits:

Immediate Backup Creation: Trigger full backups instantly.
Cost Control: Delete on-demand backups when no longer needed.
Improved Safety: Safeguard data before major changes or refreshes.
Easy Access: Use via Azure Portal, CLI, ARM templates, or REST APIs.

For more details and how to get started, check out this announcement blog post. Create your first on-demand backup using the Azure portal or Azure CLI.

Public Preview of Model Context Protocol (MCP) Server

Model Context Protocol (MCP) is a new and emerging open protocol designed to integrate AI models with the environments where your data and tools reside in a scalable, standardized, and secure manner. We are excited to introduce the Public Preview of the MCP Server for Azure Database for PostgreSQL flexible server, which enables your AI applications and models to talk to your data hosted in Azure Database for PostgreSQL flexible servers according to the MCP standard. The MCP Server exposes a suite of tools including listing databases, tables, and schema information, reading and writing data, creating and dropping tables, listing Azure Database for PostgreSQL configurations, retrieving server parameter values, and more. You can either build custom AI apps and agents with MCP clients to invoke these capabilities or use AI tools like Claude Desktop and GitHub Copilot in Visual Studio Code to interact with your Azure PostgreSQL data simply by asking questions in plain English.
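To get a feel for what this looks like from the client side, here is a minimal sketch that uses the MCP Python SDK to connect to an MCP server over stdio and print the tools it exposes. The server launch command below is a placeholder, not the real command for the Azure Postgres MCP Server; see the announcement blog post linked below for the actual setup steps.

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Placeholder launch command; substitute the actual server command
# from the MCP Server announcement blog post.
server = StdioServerParameters(command="python", args=["azure_postgres_mcp_server.py"])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(main())

If the connection succeeds, you should see the tool names described above (listing databases and tables, reading and writing data, and so on).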
For more details and demos on how to get started, check out this announcement blog post.

Additional Tuning Parameters in PG17

Azure Database for PostgreSQL flexible server (v17) now exposes an expanded set of configuration parameters, giving you greater control to optimize your database performance for unique workloads. You can now tune internal buffer settings like commit timestamp, multixact member and offset, notify, serializable, subtransaction, and transaction buffers, allowing you to better manage memory and concurrency in high-throughput environments. Additionally, you can configure parallel append, plan cache mode, and event triggers, which opens powerful optimization and automation opportunities for analytical workloads and custom logic execution. This gives you more control for memory-intensive and high-concurrency applications, increased control over execution plans, and the ability to run queries in parallel.

To get started, all newly modifiable parameters are available now through the Azure portal, Azure CLI, and ARM templates, just like any other server configuration setting. To learn more, visit our Server Parameter Documentation.

Terraform resource released for latest REST API version

A new version of the Terraform resource for Azure Database for PostgreSQL flexible server is now available. It brings several key improvements, including the ability to easily revive dropped databases with geo-redundancy and customer-managed keys (Geo + CMK - Revive Dropped), seamless switchover of read replicas to a new site (Read Replicas - Switchover), improved connectivity through virtual endpoints for read replicas, and on-demand backups for your servers. To get started with Terraform support, please follow this link: Deploy Azure Database for PostgreSQL flexible server with Terraform

General Availability of pg_cron extension in PG 17

We're excited to announce that the pg_cron extension is now supported in Azure Database for PostgreSQL flexible server major versions, including PostgreSQL 17. This extension enables simple, time-based job scheduling directly within your database, making maintenance and automation tasks easier than ever. You can get started today by enabling the extension through the Azure portal or CLI. To learn more, please refer to the Azure Database for PostgreSQL flexible server list of extensions.

Azure Postgres Learning Bytes 🎓

Setting up alerts for Azure Database for PostgreSQL flexible server using Terraform

Monitoring metrics and setting up alerts for your Azure Database for PostgreSQL flexible server instance is crucial for maintaining optimal performance and troubleshooting workload issues. By configuring alerts, you can track key metrics like CPU usage and storage, and receive notifications by creating an action group for your alert rules. This guide walks you through the process of setting up alerts using Terraform.

1. First, create an instance of Azure Database for PostgreSQL flexible server (if not already created).
2. Next, create a Terraform file and add the 'azurerm_monitor_action_group' and 'azurerm_monitor_metric_alert' resources as shown below.
resource "azurerm_monitor_action_group" "example" { name = "<action-group-name>" resource_group_name = "<rg-name>" short_name = "<short-name>" email_receiver { name = "sendalerts" email_address = "<youremail>" use_common_alert_schema = true } } resource "azurerm_monitor_metric_alert" "example" { name = "<alert-name>" resource_group_name = "<rg-name>" scopes = [data.azurerm_postgresql_flexible_server.demo.id] description = "Alert when CPU usage is high" severity = 3 frequency = "PT5M" window_size = "PT5M" enabled = true criteria { metric_namespace = "Microsoft.DBforPostgreSQL/flexibleServers" metric_name = "cpu_percent" aggregation = "Average" operator = "GreaterThan" threshold = 80 } action { action_group_id = azurerm_monitor_action_group.example.id } } 3. Run the terraform initialize, plan and apply commands to create an action group and attach a metric to the Azure Database for PostgreSQL flexible server instance. terraform init -upgrade terraform plan -out <file-name> terraform apply <file-name>.tfplan Note: This script assumes you have already created an Azure Database for PostgreSQL flexible server instance. To verify your alert, check the Azure portal under Monitoring -> Alerts -> Alert Rules tab. Conclusion That's a wrap for the April 2025 feature updates! Stay tuned for our Build announcements, as we have a lot of exciting updates and enhancements for Azure Database for PostgreSQL flexible server coming up this month. We’ve also published our Yearly Recap Blog, highlighting many improvements and announcements we’ve delivered over the past year. Take a look at our yearly recap blog here: What's new with Postgres at Microsoft, 2025 edition We are always dedicated to improving our service with new array of features, if you have any feedback or suggestions we would love to hear from you. 📢 Share your thoughts here: aka.ms/pgfeedback Thanks for being part of our growing Azure Postgres community.896Views3likes0CommentsAI‑Powered Troubleshooting for Microsoft Purview Data Lifecycle Management
AI‑Powered Troubleshooting for Microsoft Purview Data Lifecycle Management

Announcing the DLM Diagnostics MCP Server!

Microsoft Purview Data Lifecycle Management (DLM) policies are critical for meeting compliance and governance requirements across Microsoft 365 workloads. However, when something goes wrong – such as retention policies not applying, archive mailboxes not expanding, or inactive mailboxes not getting purged – diagnosing the issue can be challenging and time‑consuming.

To simplify and accelerate this process, we are excited to announce the open‑source release of the DLM Diagnostics Model Context Protocol (MCP) Server, an AI‑powered diagnostic server that allows AI assistants to safely investigate Microsoft Purview DLM issues using read‑only PowerShell diagnostics.

GitHub repository: https://github.com/microsoft/purview-dlm-mcp

The troubleshooting challenge

When you notice issues such as:

"Retention policy shows Success, but content isn't being deleted"
"Archiving is enabled, but items never move to the archive mailbox"

The investigation typically involves:

Connecting to Exchange Online and Security & Compliance PowerShell sessions
Running 5–15 diagnostic cmdlets in a specific order
Interpreting command output using multiple troubleshooting reference guides (TSGs)
Correlating policy distribution, holds, archive configuration, and workload behavior
Producing a root‑cause summary and recommended remediation steps

This workflow requires deep familiarity with DLM internals and is largely manual.

Introducing the DLM Diagnostics MCP Server

The DLM Diagnostics MCP Server automates this diagnostic workflow by allowing AI assistants – such as GitHub Copilot, Claude Desktop, and other MCP‑compatible clients – to investigate DLM issues step by step. An administrator simply describes the symptom in natural language. The AI assistant then:

Executes read‑only PowerShell diagnostics
Evaluates results against known troubleshooting patterns
Identifies likely root causes
Presents recommended remediation steps (never executed automatically)
Produces a complete audit trail of the investigation

All diagnostics are performed under a strict security model to ensure safety and auditability.

What is the Model Context Protocol (MCP)?

The Model Context Protocol (MCP) is an open standard that enables AI assistants to interact with external tools and data sources in a secure and structured way. You can think of MCP as a "USB port for AI":

Any MCP‑compatible client can connect to an MCP server
The server exposes well‑defined tools
The AI can use those tools safely and deterministically

The DLM Diagnostics MCP Server exposes Purview DLM diagnostics as MCP tools, enabling AI assistants to run PowerShell diagnostics, retrieve execution logs, and surface Microsoft Learn documentation.

More information: https://modelcontextprotocol.io

Diagnostic tools exposed by the server

The server exposes four MCP tools.

1. Run read‑only PowerShell diagnostics

This tool executes PowerShell commands against Exchange Online and Security & Compliance sessions using a strict allow list. Only read‑only cmdlets are permitted:

Allowed verbs: Get-*, Test-*, Export-*
Blocked verbs: Set-*, New-*, Remove-*, Enable-*, Invoke-*, and others

Every command is validated before execution.
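To illustrate the idea, here is a conceptual sketch of verb allow-listing in Python. The shipped server is a Node.js implementation (it runs via npx), so this is not its actual code, and a production validator also has to handle pipelines, aliases, and script blocks; the sketch only shows the core check.

# Conceptual sketch only; the real DLM Diagnostics MCP Server
# implements validation in its own Node.js codebase.
ALLOWED_VERBS = {"Get", "Test", "Export"}

def is_read_only(command: str) -> bool:
    """Return True only if the leading cmdlet uses an allowed verb."""
    cmdlet = command.strip().split()[0]   # e.g. "Get-Mailbox"
    verb = cmdlet.split("-", 1)[0]        # e.g. "Get"
    return verb in ALLOWED_VERBS

assert is_read_only("Get-Mailbox -Identity john.doe@contoso.com")
assert not is_read_only("Set-Mailbox -Identity john.doe@contoso.com")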
Example: Archive mailbox not working

Admin: "Archiving is not working for john.doe@contoso.com"

The AI follows the archive troubleshooting guide:

# Step 1 – Check archive mailbox status
Get-Mailbox -Identity john.doe@contoso.com |
    Format-List ArchiveStatus, ArchiveState

# Step 2 – Check archive mailbox size
Get-MailboxStatistics -Identity john.doe@contoso.com -Archive |
    Format-List TotalItemSize, ItemCount

# Step 3 – Check auto-expanding archive
Get-Mailbox -Identity john.doe@contoso.com |
    Format-List AutoExpandingArchiveEnabled

Finding: The archive mailbox is not enabled.

Recommended action (not executed automatically):

Enable-Mailbox <user mailbox> -Archive

All remediation steps are presented as text only for administrator review.

2. Retrieve the execution log

Every diagnostic session is fully logged, including:

Command executed
Timestamp
Duration
Status
Output

Admins can retrieve the complete investigation as a Markdown‑formatted audit trail, making it easy to attach to incident records or compliance documentation.

3. Microsoft Learn documentation lookup

If a question does not match a diagnostic scenario – such as "How do I create a retention policy?" – the server falls back to curated Microsoft Learn documentation. The documentation lookup covers 11 Purview areas, including:

Retention policies and labels
Archive and inactive mailboxes
eDiscovery
Audit
Communication compliance
Records management
Adaptive scopes

4. Create a GitHub issue (create_issue)

create_issue lets the assistant open a feature request in the project's GitHub repo and attach key session details (such as the commands run and any failures) to help maintainers reproduce and prioritize the request.

Example: File a feature request from a failed diagnostic

✅ Created GitHub issue #42
Title: Allowlist should allow Get-ComplianceTag cmdlet
Category: feature request
Labels: enhancement
URL: https://github.com/microsoft/purview-dlm-mcp/issues/42
Session context included: 3 commands executed, 1 failure

Security and safety model

Security is enforced at multiple layers:

Read‑only allow list: Only approved diagnostic cmdlets can run
No stored credentials: Authentication uses MSAL interactive sign‑in
Session isolation: Each server instance runs in its own PowerShell process
Full audit trail: Every command and result is logged
No automatic remediation: Fixes are never executed by the server

This design ensures diagnostics are safe to run even in sensitive compliance environments.

Supported diagnostic scenarios

The server currently includes 12 troubleshooting reference guides, covering common DLM issues such as:

Retention policy shows Success but content is not retained or deleted
Policy status shows Error or PolicySyncTimeout
Items do not move to archive mailbox
Auto‑expanding archive not triggering
Inactive mailbox creation failures
SubstrateHolds and Recoverable Items growth
Teams messages not deleting
Conflicts between MRM and Purview retention
Adaptive scope misconfiguration
Auto‑apply label failures
SharePoint site deletion blocked by retention
Unified Audit Configuration validation

Each guide maps symptoms to diagnostic checks and remediation guidance.

Getting started

Prerequisites

Node.js 18 or later
PowerShell 7
ExchangeOnlineManagement module (v3.4+)
Exchange Online administrator permissions

Required permissions

Least-privilege: Global Reader + Compliance Administrator (recommended; covers both EXO and S&C read access).
Single role group: Organization Management (covers both workloads, but broader than necessary).
Full admin: Global Administrator (works, but overly broad and not recommended).

Why both workloads? The server connects to two PowerShell sessions:

Exchange Online (Connect-ExchangeOnline): cmdlets like Get-Mailbox, Get-MailboxStatistics, Export-MailboxDiagnosticLogs, Get-OrganizationConfig
Security & Compliance (Connect-IPPSSession): cmdlets like Get-RetentionCompliancePolicy, Get-RetentionComplianceRule, Get-AdaptiveScope, Get-ComplianceTag

The authenticating user (DLM_UPN) needs read access to both sessions. Exchange cmdlets require EXO roles; compliance cmdlets require S&C roles. Without both, some diagnostics will fail with permission errors.

MCP client configuration

The server can be connected to an IDE such as Visual Studio Code (GitHub Copilot) or to Claude Desktop using MCP configuration. Include this configuration in your MCP config JSON file (for VS Code, use .vscode/mcp.json; for Claude Desktop, use claude_desktop_config.json):

{
  "mcpServers": {
    "dlm-diagnostics": {
      "command": "npx",
      "args": [
        "-y",
        "@microsoft/purview-dlm-mcp"
      ],
      "env": {
        "DLM_UPN": "admin@yourtenant.onmicrosoft.com",
        "DLM_ORGANIZATION": "yourtenant.onmicrosoft.com",
        "DLM_COMMAND_TIMEOUT_MS": "180000"
      }
    }
  }
}

Summary

The DLM Diagnostics MCP Server brings AI‑assisted, auditable, and safe troubleshooting to Microsoft Purview Data Lifecycle Management. By combining structured troubleshooting guides with read‑only PowerShell diagnostics and MCP, it significantly reduces the time and expertise required to diagnose complex DLM issues.

We invite you to try it out, provide feedback, and contribute to the project via GitHub.

GitHub repository: https://github.com/microsoft/purview-dlm-mcp

Rishabh Kumar, Victor Legat & Purview Data Lifecycle Management Team
Giving Your AI Agents Reliable Skills with the Agent Skills SDK

AI agents are becoming increasingly capable, but they often do not have the context they need to do real work reliably. Your agent can reason well, but it does not actually know how to do the specific things your team needs it to do. For example, it cannot follow your company's incident response playbook, it does not know your escalation policy, and it has no idea how to page the on-call engineer at 3 AM. There are many ways to close this gap, from RAG to custom tool implementations. Agent Skills is one approach that stands out because it is designed around portability and progressive disclosure, keeping context window usage minimal while giving agents access to deep expertise on demand.

What is Agent Skills?

Agent Skills is an open format for giving agents new capabilities and expertise. The format was originally developed by Anthropic and released as an open standard. It is now supported by a growing list of agent products including Claude Code, VS Code, GitHub, OpenAI Codex, Cursor, Gemini CLI, and many others.

As defined in the spec, a skill is a folder on disk containing a SKILL.md file with metadata and instructions, plus optional scripts, references, and assets:

incident-response/
  SKILL.md              # Required: instructions + metadata
  references/           # Optional: additional documentation
    severity-levels.md
    escalation-policy.md
  scripts/              # Optional: executable code
    page-oncall.sh
  assets/               # Optional: templates, diagrams, data files

The SKILL.md file has YAML frontmatter with a name and description (so agents know when the skill is relevant), followed by markdown instructions that tell the agent how to perform the task. The format is intentionally simple: self-documenting, extensible, and portable.

What makes this design practical is progressive disclosure. The spec is built around the idea that agents should not load everything at once. It works in three stages:

Discovery: At startup, agents load only the name and description of each available skill, just enough to know when it might be relevant.
Activation: When a task matches a skill's description, the agent reads the full SKILL.md instructions into context.
Execution: The agent follows the instructions, optionally loading referenced files or executing bundled scripts as needed.

This keeps agents fast while giving them access to deep context on demand. The format is well-designed and widely adopted, but if you want to use skills from your own agents, there is a gap between the spec and a working implementation.

The Agent Skills SDK

Conceptually, a skill is more than a folder. It is a unit of expertise: a name, a description, a body of instructions, and a set of supporting resources. The file layout is one way to represent that, but there is nothing about the concept that requires a filesystem.

The Agent Skills SDK is an open-source Python library built around that idea, treating skills as abstract units of expertise that can be stored anywhere and consumed by any agent framework. It does this by addressing two challenges that come up when you try to use the format from your own agents.

The first is where skills live. The spec defines skills as folders on disk, and the tools that support the format today all assume skills are local files. Files are inherently portable, and that is one of the format's strengths. But in the real world, not every team can or wants to serve skills from the filesystem. Maybe your team keeps them in an S3 bucket. Maybe they are in Azure Blob Storage behind your CDN.
Maybe they live in a database alongside the rest of your application data. At the moment, if your skills are not on the local filesystem, you are on your own. The SDK changes where skills are served from, not how they are authored. The content and format stay the same regardless of the storage backend, so skills remain portable across providers.

The second is how agents consume them. The spec defines the progressive disclosure pattern, but actually implementing it in your agent requires real work. You need to figure out how to validate skills against the spec, generate a catalog for the system prompt, expose the right tools for on-demand content retrieval, and handle the back-and-forth of the agent requesting metadata, then the body, then individual references or scripts. That is a lot of plumbing regardless of where the skills are stored, and the work multiplies if you want to support more than one agent framework.

The SDK solves both by separating where skills come from (providers) from how agents use them (integrations), so you can mix and match freely. Load skills from the filesystem today, move them to an HTTP server tomorrow, swap in a custom database provider next month, and your agent code does not change at all.

How the SDK works

The SDK is a set of Python packages organized around two ideas: storage-agnostic providers and progressive disclosure.

The provider abstraction means your skills can live anywhere. The SDK ships with providers for the local filesystem and static HTTP servers, but the SkillProvider interface is simple enough that you can write your own in a few methods. A Cosmos DB provider, a Git provider, a SharePoint provider, whatever makes sense for your team. The rest of the SDK does not care where the data comes from.

On top of that, the SDK implements the progressive disclosure pattern from the spec as a set of tools that any LLM agent can use. At startup, the SDK generates a skills catalog containing each skill's name and description. Your agent injects this catalog into its system prompt so it knows what is available. Then, during a conversation, the agent calls tools to retrieve content on demand, following the same discovery-activation-execution flow the spec describes.

Here is the flow in practice:

You register skills from any source (local files, an HTTP server, your own database).
The SDK generates a catalog and tool usage instructions, which you inject into the system prompt.
The agent calls tools to retrieve content on demand.

This matters because context windows are finite. An incident response skill might have a main body, three reference documents, two scripts, and a flowchart. The agent should not load all of that upfront. It should read the body first, then pull the escalation policy only when the conversation actually gets to escalation.

A quick example

Here is what it looks like in practice. Start by loading a skill from the filesystem:

from pathlib import Path

from agentskills_core import SkillRegistry
from agentskills_fs import LocalFileSystemSkillProvider

provider = LocalFileSystemSkillProvider(Path("my-skills"))
registry = SkillRegistry()
await registry.register("incident-response", provider)
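The registration call looks the same no matter where the skills come from. If your skills live somewhere the built-in providers do not cover, a custom provider is a small class implementing the SkillProvider interface from agentskills-core. Here is a rough, hypothetical sketch; the method names are illustrative assumptions, so check the agentskills-core API docs for the actual interface:

from agentskills_core import SkillProvider

class DatabaseSkillProvider(SkillProvider):
    """Hypothetical provider that serves skills from a database.

    The method names below are assumptions for illustration; the real
    SkillProvider interface is defined in agentskills-core.
    """

    def __init__(self, db):
        # db can be any async client exposing a fetch(key) coroutine.
        self.db = db

    async def get_skill_metadata(self, skill_id: str) -> dict:
        # Name and description feed the skills catalog at startup.
        row = await self.db.fetch(f"skills/{skill_id}/metadata")
        return {"name": row["name"], "description": row["description"]}

    async def get_skill_body(self, skill_id: str) -> str:
        # The markdown instructions from SKILL.md, loaded on activation.
        return await self.db.fetch(f"skills/{skill_id}/body")

    async def get_skill_reference(self, skill_id: str, path: str) -> str:
        # Individual reference documents, loaded only when requested.
        return await self.db.fetch(f"skills/{skill_id}/references/{path}")

Registering it would then be identical to the filesystem case: await registry.register("incident-response", DatabaseSkillProvider(db)).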
Now wire it into a LangChain agent:

from langchain.agents import create_agent

from agentskills_langchain import get_tools, get_tools_usage_instructions

tools = get_tools(registry)
skills_catalog = await registry.get_skills_catalog(format="xml")
tool_usage_instructions = get_tools_usage_instructions()

system_prompt = (
    "You are an SRE assistant. Use the available skill tools to look up "
    "incident response procedures, severity definitions, and escalation "
    "policies. Always cite which reference document you used.\n\n"
    f"{skills_catalog}\n\n"
    f"{tool_usage_instructions}"
)

agent = create_agent(
    llm,
    tools,
    system_prompt=system_prompt,
)

That is it. The agent now knows what skills are available and has tools to fetch their content. When a user asks "How do I handle a SEV1 incident?", the agent will call get_skill_body to read the instructions, then get_skill_reference to pull the severity levels document, all without you writing any of that retrieval logic.

The same pattern works with Microsoft Agent Framework:

from agentskills_agentframework import get_tools, get_tools_usage_instructions

tools = get_tools(registry)
skills_catalog = await registry.get_skills_catalog(format="xml")
tool_usage_instructions = get_tools_usage_instructions()

system_prompt = (
    "You are an SRE assistant. Use the available skill tools to look up "
    "incident response procedures, severity definitions, and escalation "
    "policies. Always cite which reference document you used.\n\n"
    f"{skills_catalog}\n\n"
    f"{tool_usage_instructions}"
)

agent = Agent(
    client=client,
    instructions=system_prompt,
    tools=tools,
)

What is in the SDK

The SDK is split into small, composable packages so you only install what you need:

agentskills-core handles registration, validation, the skills catalog, and the progressive disclosure API. It also defines the SkillProvider interface that all providers implement.
agentskills-fs and agentskills-http are the two built-in providers. The filesystem provider loads skills from local directories. The HTTP provider loads them from any static file host: S3, Azure Blob Storage, GitHub Pages, a CDN, or anything that serves files over HTTP.
agentskills-langchain and agentskills-agentframework generate framework-native tools and tool usage instructions from a skill registry.
agentskills-mcp-server spins up an MCP server that exposes skill access and usage as MCP tools and resources, so any MCP-compatible client can use them.

Because providers and integrations are separate packages, you can combine them however you want. Use the filesystem provider during development, switch to the HTTP provider in production, or write a custom provider that reads skills from your own database. The integration layer does not need to know or care.

Where to go from here

The full source, working examples, and detailed API docs are on GitHub: github.com/pratikxpanda/agentskills-sdk

The repo includes end-to-end examples for both LangChain and Microsoft Agent Framework, covering filesystem providers, HTTP providers, and MCP. There is also a sample incident-response skill you can use to try things out.

A proposal to contribute this SDK to the official agentskills repository has been submitted. If you find it useful, feel free to show your support on the GitHub issue.

To learn more about the Agent Skills format itself:

What are skills? covers the format and why it matters.
Specification is the complete format reference for SKILL.md files.
Integrate skills explains how to add skills support to your agent.
Example skills on GitHub are a good starting point for writing your own.

The SDK is MIT licensed and contributions are welcome. If you have questions or ideas, post a question here or open an issue on the repo.