From Cloud to Chip: Building Smarter AI at the Edge with Windows AI PCs
As AI engineers, we've spent years optimizing models for the cloud: scaling inference, wrangling latency, and chasing compute across clusters. But the frontier is shifting. With the rise of Windows AI PCs and powerful local accelerators, the edge is no longer a constraint; it's a canvas. Whether you're deploying vision models to industrial cameras, optimizing speech interfaces for offline assistants, or building privacy-preserving apps for healthcare, Edge AI is where real-world intelligence meets real-time performance.

Why Edge AI, Why Now?

Edge AI isn't just about running models locally; it's about rethinking the entire lifecycle:

- Latency: Decisions in milliseconds, not round-trips to the cloud.
- Privacy: Sensitive data stays on-device, enabling HIPAA/GDPR compliance.
- Resilience: Offline-first apps that don't break when the network does.
- Cost: Reduced cloud compute and bandwidth overhead.

With Windows AI PCs powered by Intel and Qualcomm NPUs, and tools like ONNX Runtime, DirectML, and Olive, developers can now optimize and deploy models with unprecedented efficiency.

What You'll Learn in Edge AI for Beginners

The Edge AI for Beginners curriculum is a hands-on, open-source guide designed for engineers ready to move from theory to deployment.

Multi-Language Support

This content is available in over 48 languages, so you can read and study in your native language.

What You'll Master

This course takes you from fundamental concepts to production-ready implementations, covering:

- Small Language Models (SLMs) optimized for edge deployment
- Hardware-aware optimization across diverse platforms
- Real-time inference with privacy-preserving capabilities
- Production deployment strategies for enterprise applications

Why Edge AI Matters

Edge AI represents a paradigm shift that addresses critical modern challenges:

- Privacy & Security: Process sensitive data locally without cloud exposure
- Real-time Performance: Eliminate network latency for time-critical applications
- Cost Efficiency: Reduce bandwidth and cloud computing expenses
- Resilient Operations: Maintain functionality during network outages
- Regulatory Compliance: Meet data sovereignty requirements

Edge AI

Edge AI refers to running AI algorithms and language models locally on hardware, close to where data is generated, without relying on cloud resources for inference. It reduces latency, enhances privacy, and enables real-time decision-making.

Core principles:

- On-device inference: AI models run on edge devices (phones, routers, microcontrollers, industrial PCs)
- Offline capability: Functions without persistent internet connectivity
- Low latency: Immediate responses suited for real-time systems
- Data sovereignty: Keeps sensitive data local, improving security and compliance

Small Language Models (SLMs)

SLMs like Phi-4, Mistral-7B, Qwen, and Gemma are compact counterparts to larger LLMs, trained or distilled for:

- Reduced memory footprint: Efficient use of limited edge device memory
- Lower compute demand: Optimized for CPU and edge GPU performance
- Faster startup times: Quick initialization for responsive applications

They unlock powerful NLP capabilities while meeting the constraints of:

- Embedded systems: IoT devices and industrial controllers
- Mobile devices: Smartphones and tablets with offline capabilities
- IoT devices: Sensors and smart devices with limited resources
- Edge servers: Local processing units with limited GPU resources
- Personal computers: Desktop and laptop deployment scenarios
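To make the on-device story concrete, here is a minimal sketch of local inference with ONNX Runtime, preferring the DirectML execution provider and falling back to CPU. The model file name, input shape, and quantization are placeholders, not part of the curriculum:

```python
import numpy as np
import onnxruntime as ort

# Hypothetical model path: e.g. a vision model quantized with Olive
session = ort.InferenceSession(
    "model.int8.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)

# Run a dummy image-shaped tensor through the model entirely on-device
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: dummy_input})

print("Executed locally via:", session.get_providers()[0])
```

On a Windows AI PC, the DirectML provider offloads the work to the local accelerator; the same script runs unchanged on CPU-only hardware.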
Course Modules & Navigation

Course duration: 10 hours of content.

| Module | Topic | Focus Area | Key Content | Level | Duration |
|---|---|---|---|---|---|
| 📖 00 | Introduction to EdgeAI | Foundation & Context | EdgeAI Overview • Industry Applications • SLM Introduction • Learning Objectives | Beginner | 1-2 hrs |
| 📚 01 | EdgeAI Fundamentals | Cloud vs Edge AI comparison | EdgeAI Fundamentals • Real World Case Studies • Implementation Guide • Edge Deployment | Beginner | 3-4 hrs |
| 🧠 02 | SLM Model Foundations | Model families & architecture | Phi Family • Qwen Family • Gemma Family • BitNET • μModel • Phi-Silica | Beginner | 4-5 hrs |
| 🚀 03 | SLM Deployment Practice | Local & cloud deployment | Advanced Learning • Local Environment • Cloud Deployment | Intermediate | 4-5 hrs |
| ⚙️ 04 | Model Optimization Toolkit | Cross-platform optimization | Introduction • Llama.cpp • Microsoft Olive • OpenVINO • Apple MLX • Workflow Synthesis | Intermediate | 5-6 hrs |
| 🔧 05 | SLMOps Production | Production operations | SLMOps Introduction • Model Distillation • Fine-tuning • Production Deployment | Advanced | 5-6 hrs |
| 🤖 06 | AI Agents & Function Calling | Agent frameworks & MCP | Agent Introduction • Function Calling • Model Context Protocol | Advanced | 4-5 hrs |
| 💻 07 | Platform Implementation | Cross-platform samples | AI Toolkit • Foundry Local • Windows Development | Advanced | 3-4 hrs |
| 🏭 08 | Foundry Local Toolkit | Production-ready samples | Sample applications (see details below) | Expert | 8-10 hrs |

Each module includes Jupyter notebooks, code samples, and deployment walkthroughs, perfect for engineers who learn by doing.

Developer Highlights

- 🔧 Olive: Microsoft's optimization toolchain for quantization, pruning, and acceleration.
- 🧩 ONNX Runtime: Cross-platform inference engine with support for CPU, GPU, and NPU.
- 🎮 DirectML: GPU-accelerated ML API for Windows, ideal for gaming and real-time apps.
- 🖥️ Windows AI PCs: Devices with built-in NPUs for low-power, high-performance inference.

Local AI: Beyond the Edge

Local AI isn't just about inference; it's about autonomy. Imagine agents that:

- Learn from local context
- Adapt to user behavior
- Respect privacy by design

With tools like Agent Framework, Azure AI Foundry, Windows Copilot Studio, and Foundry Local, developers can orchestrate local agents that blend LLMs, sensors, and user preferences, all without cloud dependency.

Try It Yourself

Ready to get started? Clone the Edge AI for Beginners GitHub repo, run the notebooks, and deploy your first model to a Windows AI PC or IoT device. Whether you're building smart kiosks, offline assistants, or industrial monitors, this curriculum gives you the scaffolding to go from prototype to production.
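A plausible first session might look like the following; the repository URL here is our assumption based on the course name, so verify it against the official curriculum page:

```bash
# Assumed repo URL: check the curriculum page for the canonical link
git clone https://github.com/microsoft/edgeai-for-beginners.git
cd edgeai-for-beginners

# Browse and run the module notebooks locally
pip install jupyter
jupyter notebook
```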
AI Upskilling Framework Level 3: Building

The Global AI Community is excited to bring you the latest updates on AI Upskilling Framework Level 3: Building, straight from Microsoft Ignite! This session dives deep into advanced concepts for building agentic workflows and showcases new announcements that will help developers accelerate their agentic AI journey.
Exploring the Future of AI Agents with Microsoft Foundry

Why Agentic AI Matters

AI agents are no longer a distant vision; they're here and transforming how businesses operate. According to industry analysts:

- Over 1 billion AI agents are expected to be in use by 2028.
- 80% of organisations plan to integrate agents within the next 2-3 years.
- By 2026, 40% of enterprise apps will include task-specific AI agents.

Why this surge? Agents address critical challenges such as inefficiencies in manual processes, human error, lack of visibility, and scalability issues. They enable autonomous decision-making, with projections suggesting that by 2028, half of day-to-day work decisions will be made autonomously.

From Chatbots to Intelligent Agents

As Mary Joe highlighted, early chatbots relied on rigid rules and regular expressions, often leading to frustrating user experiences. The introduction of large language models (LLMs) changed the game, making interactions more natural. But true autonomy, where systems act on our behalf, required more than conversational AI. Agentic AI combines:

- Reasoning and planning capabilities.
- Tools and APIs for real-world actions.
- Memory for learning and improving over time.

This evolution moves us beyond simple input-output interactions to intelligent systems that can execute workflows, validate data, and deliver outcomes.

Microsoft Foundry: Your Platform for Building Agents

Microsoft Foundry offers a Platform-as-a-Service (PaaS) approach for creating AI agents, striking a balance between control and ease of use. Key components include:

- Model Catalogue: Access models from OpenAI, Anthropic, Mistral, and more.
- Foundry Agent Service: Build and customise agents with integrated tools.
- Foundry IQ: Knowledge grounding for accurate responses.
- Control Plane: Ensures safety, trust, and observability in production.

Whether you need full control (Infrastructure-as-a-Service) or simplicity (Software-as-a-Service via Copilot Studio), Foundry provides flexibility for diverse scenarios.

What Makes an AI Solution Agentic?

Unlike traditional AI apps that perform narrow tasks (e.g., extracting text from receipts), agentic solutions:

- Analyse inputs using LLMs and system instructions.
- Integrate tools for actions like file search, code execution, or API calls.
- Retain memory for contextual learning.
- Operate autonomously across workflows.

Real-World Use Cases

Agentic AI unlocks new possibilities across industries:

- Expense Management: Automate claims and approvals.
- Employee Onboarding: Personalised learning paths and skills navigation.
- Customer Support: Intelligent assistants for FAQs and troubleshooting.
- Data Analytics: Interactive insights and reporting with Fabric agents.

Multi-agent systems can coordinate complex tasks, with specialised agents handling subtasks under a central orchestrator.

Getting Started with Microsoft Foundry

Creating your first agent is simple:

1. Sign in at https://ai.azure.com and create a Foundry project.
2. Select a model (e.g., GPT-4.1 mini) and configure deployment options.
3. Customise instructions to define your agent's persona and tasks.
4. Add tools like file search or code interpreter for extended functionality.
5. Test and iterate using the agent playground, then export code to Visual Studio Code for deployment.

For detailed guidance, explore https://learn.microsoft.com/training. Follow the skilling plan for this series: Plans | Microsoft Learn. Get started with AI Agents: https://aka.ms/ai-agents-fundamentals
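If you prefer starting in code rather than the portal, the five steps above map roughly onto the azure-ai-projects Python SDK. This is a sketch only: the endpoint placeholder is yours to fill in, and method names have shifted across the SDK's preview releases, so check the current reference docs:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

# Placeholder endpoint: copy the real one from your Foundry project's overview page
project = AIProjectClient(
    endpoint="https://<your-resource>.services.ai.azure.com/api/projects/<your-project>",
    credential=DefaultAzureCredential(),
)

# Steps 2-3: pick a deployed model and define the agent's persona and tasks
agent = project.agents.create_agent(
    model="gpt-4.1-mini",
    name="faq-agent",
    instructions="You answer questions about our product documentation.",
)

# Step 5 equivalent: send a test message through a thread and run the agent
thread = project.agents.threads.create()
project.agents.messages.create(
    thread_id=thread.id, role="user", content="What does the agent service do?"
)
run = project.agents.runs.create_and_process(thread_id=thread.id, agent_id=agent.id)
print(run.status)
```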
Join the Community

Stay connected and keep learning:

- Discord: Engage with developers building agents. https://aka.ms/foundry/discord
- GitHub Discussions: Share ideas and troubleshoot. https://aka.ms/foundrydevs
- Office Hours: Get direct support from product teams.

Final Thoughts

Agentic AI is reshaping the way we work, enabling systems to act, learn, and collaborate. With Microsoft Foundry, developers have the tools to build secure, scalable, and intelligent agents today, not tomorrow. Join the sessions at https://aka.ms/AzureSkilling-Ignite/25
AI Dev Days 2025: Your Gateway to the Future of AI Development

What's in Store?

Day 1 – 10 December: Video Link
Building AI Applications with Azure, GitHub, and Foundry. Explore cutting-edge topics like:

- Agentic DevOps
- Azure SRE Agent
- Microsoft Foundry
- MCP
- Models for AI innovation

Day 2 – 11 December Agenda: Video Link
Using AI to Boost Developer Productivity. Get hands-on with:

- Agent HQ
- VS Code & Visual Studio 2026
- GitHub Copilot Coding Agent
- App Modernisation Strategies

Why Join?

- Hands-on Labs: Apply the latest product features immediately.
- Highlights from Microsoft Ignite & GitHub Universe 2025: Stay ahead of the curve.
- Global Reach: Local-language workshops for LATAM and EMEA coming soon.

You'll recognise plenty of familiar faces in the lineup – don't miss the chance to connect and learn from the best!

👉 Register now and share widely across your networks – there's truly something for everyone! https://aka.ms/ai-dev-days
On‑Device AI with Windows AI Foundry and Foundry Local

From "waiting" to "instant", without sending data away

AI is everywhere, but speed, privacy, and reliability are critical. Users expect instant answers without compromise. On-device AI makes that possible: fast, private, and available even when the network isn't, empowering apps to deliver seamless experiences. Imagine an intelligent assistant that responds in seconds, without sending text to the cloud. This approach brings speed and data control to the places that need them most, while still letting you tap into cloud power when it makes sense.

Windows AI Foundry: A Local Home for Models

Windows AI Foundry is a developer toolkit that makes it simple to run AI models directly on Windows devices. It uses ONNX Runtime under the hood and can leverage CPU, GPU (via DirectML), or NPU acceleration, without requiring you to manage those details. The principle is straightforward: keep the model and the data on the same device. Inference becomes faster, and data stays local by default unless you explicitly choose to use the cloud.

Foundry Local

Foundry Local is the engine that powers this experience. Think of it as a local AI runtime: fast, private, and easy to integrate into an app.

Why Adopt On‑Device AI?

- Faster, more responsive apps: Local inference often reduces perceived latency and improves user experience.
- Privacy‑first by design: Keep sensitive data on the device; avoid cloud round trips unless the user opts in.
- Offline capability: An app can provide AI features even without a network connection.
- Cost control: Reduce cloud compute and data costs for common, high‑volume tasks.

This approach is especially useful in regulated industries, field‑work tools, and any app where users expect quick, on‑device responses.

Hybrid Pattern for Real Apps

On-device AI doesn't replace the cloud; it complements it. Here's how:

- Standalone on‑device: Quick, private actions like document summarization, local search, and offline assistants.
- Cloud‑enhanced (optional): Large-context models, up-to-date knowledge, or heavy multimodal workloads.

Design an app to keep data local by default and surface cloud options transparently, with user consent and clear disclosures. Windows AI Foundry supports hybrid workflows:

- Use Foundry Local for real-time inference.
- Sync with Azure AI services for model updates, telemetry, and advanced analytics.
- Implement fallback strategies for resource-intensive scenarios.

Application Workflow: Code Examples Using Foundry Local

Example 1. On-device only: try Foundry Local first, fall back to a local ONNX model.

```python
def get_answer_on_device(question, context):
    """Return an (answer, source) tuple using only local runtimes."""
    if foundry_runtime.check_foundry_available():
        # Use on-device Foundry Local models
        try:
            answer = foundry_runtime.run_inference(question, context)
            return answer, "Foundry Local (On-Device)"
        except Exception as e:
            logger.warning(f"Foundry failed: {e}, trying ONNX...")

    if onnx_model.is_loaded():
        # Fall back to the local BERT ONNX model
        try:
            answer = bert_model.get_answer(question, context)
            return answer, "BERT ONNX (On-Device)"
        except Exception as e:
            logger.warning(f"ONNX failed: {e}")

    return "Error: No local AI available", "Offline"
```
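For reference, the `foundry_runtime` wrapper used above could be built on the Foundry Local SDK, which manages the local service and exposes an OpenAI-compatible endpoint. A minimal sketch, assuming the foundry-local-sdk and openai packages and a locally available model alias (check the Foundry Local docs for current package and model names):

```python
import openai
from foundry_local import FoundryLocalManager

# Start the Foundry Local service and load a model by alias (alias is an assumption)
manager = FoundryLocalManager("phi-4-mini")

# Foundry Local speaks the OpenAI protocol on a localhost endpoint
client = openai.OpenAI(base_url=manager.endpoint, api_key=manager.api_key)

response = client.chat.completions.create(
    model=manager.get_model_info("phi-4-mini").id,
    messages=[{"role": "user", "content": "Summarize this paragraph: ..."}],
)
print(response.choices[0].message.content)
```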
Example 2. Hybrid approach: on-device first, cloud as a last resort.

```python
def get_answer(question, context):
    """
    Priority order:
      1. Foundry Local (best: advanced + private)
      2. ONNX Runtime (good: fast + private)
      3. Cloud API (fallback: requires internet, less private)
    """
    if foundry_runtime.check_foundry_available():
        # Use on-device Foundry Local models
        try:
            answer = foundry_runtime.run_inference(question, context)
            return answer, "Foundry Local (On-Device)"
        except Exception as e:
            logger.warning(f"Foundry failed: {e}, trying ONNX...")

    if onnx_model.is_loaded():
        # Fall back to the local BERT ONNX model
        try:
            answer = bert_model.get_answer(question, context)
            return answer, "BERT ONNX (On-Device)"
        except Exception as e:
            logger.warning(f"ONNX failed: {e}, trying cloud...")

    # Last resort: cloud API (requires internet)
    if network_available():
        try:
            import requests
            response = requests.post(
                '{BASE_URL_AI_CHAT_COMPLETION}',
                headers={'Authorization': f'Bearer {API_KEY}'},
                json={
                    'model': '{MODEL_NAME}',
                    'messages': [{
                        'role': 'user',
                        'content': f'Context: {context}\n\nQuestion: {question}'
                    }]
                },
                timeout=10
            )
            answer = response.json()['choices'][0]['message']['content']
            return answer, "Cloud API (Online)"
        except Exception:
            return "Error: No AI runtime available", "Failed"
    else:
        return "Error: No internet and no local AI available", "Offline"
```

Demo Project Output

Foundry Local answering context-based questions offline:

- The Foundry Local engine ran the Phi-4-mini model offline and retrieved context-based data.
- The Foundry Local engine ran the Phi-4-mini model offline and correctly reported that no answer was available.

Practical Use Cases

- Privacy-first reading assistant: Summarize documents locally without sending text to the cloud.
- Healthcare apps: Analyze medical data on-device for compliance.
- Financial tools: Risk scoring without exposing sensitive financial data.
- IoT & edge devices: Real-time anomaly detection without network dependency.

Conclusion

On-device AI isn't just a trend; it's a shift toward smarter, faster, and more secure applications. With Windows AI Foundry and Foundry Local, developers can deliver experiences that respect user data, reduce latency, and work even when connectivity fails. By combining local inference with optional cloud enhancements, you get the best of both worlds: instant performance and scalable intelligence. Whether you're creating document summarizers, offline assistants, or compliance-ready solutions, this approach ensures your apps stay responsive, reliable, and user-centric.

References

- Get started with Foundry Local - Foundry Local | Microsoft Learn
- What is Windows AI Foundry? | Microsoft Learn
- https://devblogs.microsoft.com/foundry/unlock-instant-on-device-ai-with-foundry-local/
Unlocking Your First AI Solution on Azure: Practical Paths for Developers of All Backgrounds

Over the past several months, I've spent hundreds of hours working directly with teams, from small startups to mid-market innovators, who share the same aspiration: "We want to use AI, but where do we start?" This question comes up everywhere. It crosses industries, geographies, skill levels, and team sizes. And as developers, we often feel the pressure to "solve AI" end-to-end: model selection, prompt engineering, security, deployment pipelines, integration. The list is long, and the learning curve can feel even longer.

But here's what we've learned through our work in the SMB space and what we recently shared at Microsoft Ignite (Session OD1210): the first mile of AI doesn't have to be complex. You don't need an army of engineers, and you don't need to start from scratch. You just need the right path. In our Ignite on-demand session with UnifyCloud, we demonstrated two fast, developer-friendly ways to get your first AI workload running on Azure, both grounded in real-world patterns we see every day.

Path 1: Build Quickly with Microsoft Foundry Templates

Microsoft Foundry gives developers pre-built, customizable templates that dramatically reduce setup time. In the session, I walked through how to deploy a fully functioning AI chatbot using:

- Azure AI Foundry
- GitHub (via the Azure Samples "Get Started with AI Chat" repo)
- Azure Cloud Shell for deployment
- And zero specialized infra prep

With five lines of code and a few clicks, you can spin up a secure internal chatbot tailored for your business. Want responses scoped to your internal content? Want control over the model, costs, or safety filters? Want to plug in your own data sources like SharePoint, Blob Storage, or uploaded docs? You can do all of that, easily and on your terms.

This "build fast" path is ideal for:

- Developers who want control and extensibility
- Teams validating AI use cases
- Scenarios where data governance matters
- Lightweight experimentation without heavy architecture upfront

And most importantly, you can scale it later.
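To try the build path yourself, the sample repo is set up for the Azure Developer CLI; a typical flow (assuming azd is installed and you have an Azure subscription) looks roughly like this:

```bash
# Scaffold a local copy of the sample
azd init --template Azure-Samples/get-started-with-ai-chat

# Sign in, then provision the Azure resources and deploy the chat app
azd auth login
azd up
```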
Path 2: Buy a Production-Ready Solution from a Trusted Partner

Not every team wants to build. Not every team has the time, the resources, or the desire to compose their own AI stack. That's why we showcased the "buy" path with UnifyCloud's AI Factory, a Marketplace-listed solution that lets customers deploy mature AI capabilities directly into their Azure environment, complete with optional support, management, and best practices. In the demo, UnifyCloud's founder Vivek Bhatnagar walked through:

- How to navigate Microsoft Marketplace
- How to evaluate solution listings
- How to review pricing plans and support tiers
- How to deploy a partner-built AI app with just a few clicks
- How customers can accelerate their time to value without implementation overhead

This path is perfect when you want:

- A production-ready AI solution
- A supported, maintained experience
- Minimal engineering lift
- Faster time to outcome

Why Azure? Why Now?

During the session, we also outlined three reasons developers are choosing Azure for their first AI workloads:

1. Secure, governed, safe by design. Azure mitigates risk with always-on guardrails and built-in commitments to security, privacy, and policy-based control.
2. Built for production with a complete AI platform. From models to agents to tools and data integrations, Azure provides an enterprise-grade environment developers can trust.
3. Developer-first innovation with agentic DevOps. Azure puts developers at the center, integrating AI across the software development lifecycle to help teams build faster and smarter.

The Session: Build or Buy, Two Paths, One Goal

Whether you build using Azure AI Foundry or buy through Marketplace, the goal is the same: help teams get to their first AI solution quickly, confidently, and securely. You don't need a massive budget. You don't need deep ML experience. You don't need a full-time AI team. What you need is a path that matches your skills, your constraints, and your timeline.

Watch the Full Ignite Session

You can watch the full session on-demand now, also on YouTube: OD1201, "Unlock Your First AI Solution on Azure". It includes:

- The full build and buy demos
- Partner perspectives
- Deployment walkthroughs
- And guidance you can take back to your teams today

If you want to explore the same build path we showed at Ignite:
➡️ Azure Samples – Get Started with AI Chat: https://github.com/Azure-Samples/get-started-with-ai-chat
Deploy it, customize it, attach your data sources, and extend it. It's a great starting point.

If you're curious about the Marketplace path:
➡️ Search for "UnifyCloud AI Factory" on Microsoft Marketplace. You'll see support offerings, solution details, and deployment instructions.

Closing Thought

The gap between wanting to adopt AI and actually running AI in production is shrinking fast. Azure makes it possible for teams, especially those without deep AI experience, to take meaningful steps today. No perfect architecture required. No million-dollar budget. No wait for a future-state roadmap. Just two practical paths: build quickly. Buy confidently. Start now.

If you have questions, ideas, or want to share what you're building, feel free to reach out here in the Developer Community. I'd love to hear what you're creating.

— Joshua Huang, Microsoft Azure
Azure Skilling at Microsoft Ignite 2025

The energy at Microsoft Ignite was unmistakable. Developers, architects, and technical decision-makers converged in San Francisco to explore the latest innovations in cloud technology, AI applications, and data platforms. Beyond the keynotes and product announcements was something even more valuable: an integrated skilling ecosystem designed to transform how you build with Azure. This year, Azure Skilling at Microsoft Ignite 2025 brought together distinct learning experiences, more than 150 hands-on labs, and multiple pathways to industry-recognized credentials, all designed to help you master the skills that matter most in today's AI-driven cloud landscape.

Just Launched at Ignite

Microsoft Ignite 2025 offered an exceptional array of learning opportunities, each designed to meet developers anywhere on the skilling journey. Whether you joined us in person or on-demand in the virtual experience, multiple touchpoints are available to deepen your Azure expertise. Ignite 2025 is in the books, but you can still engage with the latest Microsoft skilling opportunities, including:

The Azure Skills Challenge provides a gamified learning experience that lets you compete while completing task-based achievements across Azure's most critical technologies. These challenges aren't just about badges and bragging rights; they're carefully designed to help you advance technical skills and prepare for Microsoft role-based certifications. The competitive element adds urgency and motivation, turning learning into an engaging race against the clock and your peers.

For those seeking structured guidance, Plans on Learn offer curated sets of content designed to help you achieve specific learning outcomes. These carefully assembled learning journeys include built-in milestones, progress tracking, and optional email reminders to keep you on track. Each plan represents 12-15 hours of focused learning, taking you from concept to capability in areas like AI application development, data platform modernization, or infrastructure optimization.

The Microsoft Reactor Azure Skilling Series, running December 3-11, brings skilling to life through engaging video content, mixing regular programming with special Ignite-specific episodes. This series delivers technical readiness and programming guidance in a livestream format that's more digestible than traditional documentation. Whether you're catching episodes live with interactive Q&A or watching on-demand later, you'll get world-class instruction that makes complex topics approachable.

Beyond Ignite: Your Continuous Learning Journey

Here's the critical insight that separates Ignite attendees who transform their careers from those who simply collect swag: the real learning begins after the event ends. Microsoft Ignite is your launchpad, not your destination. Every module you start, every lab you complete, and every challenge you tackle connects to a comprehensive learning ecosystem on Microsoft Learn that's available 24/7, 365 days a year. Think of Ignite as your intensive immersion experience: the moment when you gain context, build momentum, and identify the skills that will have the biggest impact on your work. What you do in the weeks and months that follow determines whether that momentum compounds into career-defining expertise or dissipates into business as usual.

For those targeting career advancement through formal credentials, Microsoft Certifications, Applied Skills, and the AI Skills Navigator provide globally recognized validation of your expertise.
Applied Skills focus on scenario-based competencies, demonstrating that you can build and deploy solutions, not simply answer theoretical questions. Certifications cover role-based scenarios for developers, data engineers, AI engineers, and solution architects. The assessment experiences include performance-based testing in dedicated Azure tenants, where you complete real configuration and development tasks. And finally, the new AI Skills Navigator is an agentic learning space that brings together AI-powered skilling experiences and credentials from Microsoft, LinkedIn Learning, and GitHub in a single, unified experience.

Why This Matters: The Competitive Context

The cloud skills race is intensifying. While our competitors offer robust training and content, Microsoft's differentiation comes not from having more content (though our 1.4 million module completions last fiscal year and 35,000+ certifications awarded speak to scale) but from the integration of services to orchestrate workflows. Only Microsoft offers a truly unified ecosystem where GitHub Copilot accelerates your development, Azure AI services power your applications, and Azure platform services deploy and scale your solutions, all backed by integrated skilling content that teaches you to maximize this connected experience.

When you continue your learning journey after Ignite, you're not just accumulating technical knowledge. You're developing fluency in an integrated development environment that no competitor can replicate. You're learning to leverage AI-powered development tools, cloud-native architectures, and enterprise-grade security in ways that compound each other's value. This unified expertise is what transforms individual developers into force multipliers for their organizations.

Start Now, Build Momentum, Never Stop

Microsoft Ignite 2025 offered the chance to compress months of learning into days of intensive, hands-on experience, but you can still take part: the on-demand videos, the Global Ignite Skills Challenge, the GitHub repos for the /Ignite25 labs, the Reactor Azure Skilling Series, and the curated Plans on Learn all provide entry points regardless of your current skill level or preferred learning style.

But remember: the developers who extract the most value from Ignite are those who treat the event as the beginning, not the culmination, of their learning journey. They join hackathons, contribute to GitHub repositories, and engage with the Azure community on Discord and technical forums. The question isn't whether you'll learn something valuable from Microsoft Ignite 2025; that's guaranteed. The question is whether you'll convert that learning into sustained momentum that compounds over months and years into career-defining expertise. The ecosystem is here. The content is ready. Your skilling journey doesn't end when Ignite does; it accelerates.
From Concept to Code: Building Production-Ready Multi-Agent Systems with Microsoft Foundry

We have reached a critical inflection point in AI development. Within the Microsoft Foundry ecosystem, the core value proposition of "agents" is shifting decisively, moving from passive content generation to active task execution and process automation. These are no longer just conversational interfaces; they are intelligent entities capable of connecting models, data, and tools to actively execute complex business logic.

To support this evolution, Microsoft has introduced a powerful suite of capabilities: the Microsoft Agent Framework for sophisticated orchestration, the Agent V2 SDK, and the integrated Microsoft Foundry VS Code extension. These innovations provide the tooling necessary to bridge the gap between theoretical research and secure, scalable enterprise adoption. But how do you turn these separate components into a cohesive business solution? That is the challenge we address today. This post dives into the practical application of these tools, demonstrating how to connect the dots and transform complex multi-agent concepts into deployed reality.

The Scenario: Recruitment through an "Agentic Lens"

Let's ground this theoretical discussion with a real-world scenario that perfectly models a multi-agent environment: the recruitment process. By examining recruitment through an agentic lens, we can identify distinct entities with specific mandates:

- The Recruiter Agent: Tasked with setting boundary conditions (job requirements) and preparing data retrieval mechanisms (interview questions).
- The Applicant Agent: Its objective is to process incoming queries and synthesize the best possible output to meet the recruiter's acceptance criteria.

Phase 1: Design – Achieving Orchestration via Microsoft Foundry Workflows

To bridge the gap between our scenario and technical reality, we start with Foundry Workflows. Workflows is the visual integration environment within Foundry. It lets you build declarative pipelines that seamlessly combine deterministic business logic with the probabilistic nature of autonomous AI agents. By adopting this visual, low-code paradigm, you eliminate the need to write complex orchestration logic from scratch. Workflows empowers you to coordinate specialized agents intuitively, creating adaptive systems that solve complex business problems collaboratively.

Visually Orchestrating the Cycle

Microsoft Foundry provides an intuitive, web-based drag-and-drop interface. This canvas allows you to integrate specialized AI agents alongside standard procedural logic blocks, transforming abstract ideas into executable processes without writing extensive glue code. To translate our recruitment scenario into a functional workflow, we follow a structured approach (a code-first sketch of the two agents follows this list):

1. Agent prerequisites: We pre-configure our specialized agents within Foundry, creating a Recruiter Agent (prompted to generate evaluation criteria) and an Applicant Agent (prompted to synthesize responses).
2. Orchestrating the interaction: We drag these nodes onto the board and define the data flow. The process begins with the Recruiter generating questions, piping that output directly as input for the Applicant agent.
3. Adding business logic: A true workflow requires decision-making. We introduce control-flow logic, such as IF/ELSE conditional blocks, to evaluate the recruiter's questions against predefined criteria. This lets the workflow branch dynamically: if satisfied, the candidate answers the questions; if not, the questions are regenerated.
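For a sense of what those pre-configured agents look like outside the portal, here is a minimal code-first sketch using the Agent Framework Python pattern that appears later in this digest. The environment variables and instruction text are illustrative; only the agent names are taken from the exported YAML below:

```python
import os
from azure.identity import AzureCliCredential
from agent_framework.azure import AzureOpenAIChatClient

client = AzureOpenAIChatClient(
    endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    deployment_name=os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME", "gpt-4o-mini"),
    credential=AzureCliCredential(),
)

# Recruiter: sets boundary conditions and prepares interview questions
hiring_manager = client.create_agent(
    name="HiringManager",
    instructions="You are a recruiter. Given a job description, generate "
                 "concise interview questions and clear acceptance criteria.",
)

# Applicant: synthesizes answers that satisfy the recruiter's criteria
apply_agent = client.create_agent(
    name="ApplyAgent",
    instructions="You are a job applicant. Answer each interview question "
                 "and map your experience to the acceptance criteria.",
)
```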
Alternative: YAML Configuration

For developers who prefer a code-first approach or wish to rapidly replicate this logic across environments, Foundry allows you to export the underlying YAML.

```yaml
kind: workflow
trigger:
  kind: OnConversationStart
  id: trigger_wf
actions:
  - kind: SetVariable
    id: action-1763742724000
    variable: Local.LatestMessage
    value: =UserMessage(System.LastMessageText)
  - kind: InvokeAzureAgent
    id: action-1763736666888
    agent:
      name: HiringManager
    input:
      messages: =System.LastMessage
    output:
      autoSend: true
      messages: Local.LatestMessage
  - kind: Question
    variable: Local.Input
    id: action-1763737142539
    entity: StringPrebuiltEntity
    skipQuestionMode: SkipOnFirstExecutionIfVariableHasValue
    prompt: Boss, can you confirm this ?
  - kind: ConditionGroup
    conditions:
      - condition: =Local.Input="Yes"
        actions:
          - kind: InvokeAzureAgent
            id: action-1763744279421
            agent:
              name: ApplyAgent
            input:
              messages: =Local.LatestMessage
            output:
              autoSend: true
              messages: Local.LatestMessage
          - kind: EndConversation
            id: action-1763740066007
        id: if-action-1763736954795-0
    id: action-1763736954795
    elseActions:
      - kind: GotoAction
        actionId: action-1763736666888
        id: action-1763737425562
id: ""
name: HRDemo
description: ""
```

Simulating the End-to-End Process

Once constructed, Foundry provides a robust, built-in testing environment. You can trigger the workflow with sample input data to simulate the end-to-end cycle. This allows you to debug hand-offs and interactions in real time before writing a single line of application code.

Phase 2: Develop – From Cloud Canvas to Local Code with VS Code

Foundry Workflows excels at rapid prototyping. However, a visual UI is rarely sufficient for enterprise-grade production. The critical question becomes: how do we integrate these visual definitions into a rigorous software development lifecycle (SDLC)? While the cloud portal is ideal for design, enterprise application delivery happens in the local IDE. The Microsoft Foundry VS Code extension bridges this gap. It allows developers to:

- Sync: Pull workflow definitions from the cloud down to your local machine.
- Inspect: Review the underlying logic in your preferred environment.
- Scaffold: Rapidly generate the code structures needed to run the flow.

This accelerates the shift from "understanding" the flow to "implementing" it.

Phase 3: Deploy – Productionizing Intelligence with the Microsoft Agent Framework

Once the multi-agent orchestration has been validated locally, the final step is transforming it into a shipping application. This is where the Microsoft Agent Framework shines as a runtime engine. It natively ingests the declarative workflow definitions (YAML) exported from Foundry, allowing artifacts from the prototyping phase to be promoted directly to application deployment. By simply referencing the workflow configuration libraries, you can "hydrate" the entire multi-agent system with minimal boilerplate. For the code required to initialize and run the workflow within your application, check the source: https://github.com/microsoft/Agent-Framework-Samples/tree/main/09.Cases/MicrosoftFoundryWithAITKAndMAF

Summary: The Journey from Conversation to Action

Microsoft Foundry is more than just a toolbox; it is a comprehensive solution designed to bridge the chasm between theoretical AI research and secure, scalable enterprise applications.
In this post, we walked through the three critical stages of modern AI development:

- Design (low-code): Leveraging Foundry Workflows to visually orchestrate specialized agents (Recruiter vs. Applicant) mixed with deterministic business rules.
- Develop (local SDLC): Using the VS Code extension to break down the barriers between the cloud canvas and the local IDE, enabling seamless synchronization and debugging.
- Deploy (native runtime): Using the Microsoft Agent Framework to ingest declarative YAML, realizing the promise of "configuration as code" and eliminating tedious logic rewriting.

By following this path, developers can move beyond simple content generation and build adaptive, multi-agent systems that drive real business value.

Learning Resources

- What is Microsoft Foundry: https://learn.microsoft.com/azure/ai-foundry/what-is-azure-ai-foundry?view=foundry
- Work with declarative (low-code) agent workflows in Visual Studio Code (preview): https://learn.microsoft.com/azure/ai-foundry/agents/how-to/vs-code-agents-workflow-low-code?view=foundry
- Microsoft Agent Framework: https://github.com/microsoft/agent-framework
- Microsoft Foundry VS Code Extension: https://marketplace.visualstudio.com/items?itemName=TeamsDevApp.vscode-ai-foundry
Building Bulletproof Agents with the Durable Task Extension for Microsoft Agent Framework

(This is a translation of the product team's article "Bulletproof agents with the durable task extension for Microsoft Agent Framework," published on November 13, 2025.)

Today (November 13, 2025), we are thrilled to announce the public preview of the Durable Task Extension for Microsoft Agent Framework. By building Azure Durable Functions' proven durable execution (surviving crashes and restarts) and distributed execution (running across multiple instances) directly into Microsoft Agent Framework, this extension reinvents how you build production-ready, resilient, and scalable AI agents. You can deploy stateful, resilient AI agents to Azure that handle session management, failure recovery, and scaling automatically, letting developers focus entirely on agent logic.

Whether you are building a customer service agent that maintains context across multi-day conversations, a content pipeline with human-in-the-loop approval workflows, or a fully automated multi-agent system coordinating specialized AI models, the Durable Task Extension for Microsoft Agent Framework delivers production-grade reliability, scalability, and coordination with serverless simplicity.

Key capabilities of the Durable Task Extension:

- Serverless Hosting: Deploy agents on Azure Functions with automatic scaling from thousands of instances down to zero, retaining full control while keeping the benefits of a serverless architecture.
- Automatic Session Management: Agents maintain durable sessions with full conversation context that survive process crashes, restarts, and distributed execution across instances.
- Deterministic Multi-Agent Orchestrations: Coordinate specialized durable agents in code-driven, predictable, and repeatable execution patterns. (Translator's note 1: "deterministic" means the same input always produces the same result, so behavior is predictable. Translator's note 2: a "durable agent" is what this framework calls its agents, which, unlike ordinary agents, have durable properties.)
- Human-in-the-Loop with Serverless Cost Savings: While waiting for human input, no compute resources are consumed and no costs accrue.
- Built-in Observability with Durable Task Scheduler: Deep visibility into agent operations and orchestrations through the Durable Task Scheduler UI dashboard.

Try Creating and Running a Durable Agent

Official documentation: https://aka.ms/create-and-run-durable-agent

Code samples (Python/C#):

```python
endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
deployment_name = os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME", "gpt-4o-mini")

# Create an AI agent following the standard Microsoft Agent Framework pattern
agent = AzureOpenAIChatClient(
    endpoint=endpoint,
    deployment_name=deployment_name,
    credential=AzureCliCredential()
).create_agent(
    instructions="""You are a professional content writer who creates
    engaging, well-structured, readable documents on any topic.
    When given a topic:
    1. Research the topic using the web search tool
    2. Generate a document outline
    3. Write a compelling, properly formatted document
    4. Include relevant examples and citations""",
    name="DocumentPublisher",
    tools=[
        AIFunctionFactory.Create(search_web),
        AIFunctionFactory.Create(generate_outline)
    ]
)

# Configure the function app to host the agent with durable session management
app = AgentFunctionApp(agents=[agent])
app.run()
```
"gpt-4o-mini"; // 標準的な Microsoft Agent Framework パターンに従って AI エージェントを作成します AIAgent agent = new AzureOpenAIClient(new Uri(endpoint), new DefaultAzureCredential()) .GetChatClient(deploymentName) .CreateAIAgent( instructions: """ あなたは、どんなテーマに対しても読みやすく構造化された、 魅力的なドキュメントを作成するプロフェッショナルなコンテンツライターです。 テーマが与えられたら、次の手順で進めてください。 1.Web 検索ツールを使ってテーマをリサーチする 2.ドキュメントのアウトラインを生成する 3.適切な書式で説得力のあるドキュメントを書く 4.関連する例と出典(引用)を含める """, name: "DocumentPublisher", tools: [ AIFunctionFactory.Create(SearchWeb), AIFunctionFactory.Create(GenerateOutline) ]); // Durable なスレッド管理でエージェントをホストするように Functions アプリを構成します // これにより、HTTP エンドポイントが自動で作成され、状態の永続化が管理されます using IHost app = FunctionsApplication .CreateBuilder(args) .ConfigureFunctionsWebApplication() .ConfigureDurableAgents(options => options.AddAIAgent(agent) ) .Build(); app.Run(); なぜ Durable Task Extension が必要なのか AI エージェントが、単純なチャットボットから、複雑で長時間実行されるタスクを処理する高度なシステムへと進化するにつれて、新たな課題が浮上します。 会話が数日から数週間にわたるため、プロセスの再起動やクラッシュ、障害を超えて状態を保持する必要があります。 ツール呼び出しが通常のタイムアウトを超える時間を要する場合があり、自動チェックポイントと復旧が必要です。 大量のワークロードに対応するため、数千のエージェント会話を同時に処理できるよう、分散インスタンス間での弾力的なスケーリングが求められます。 複数の専門エージェントを、信頼性の高いビジネスプロセスのために、予測可能で再現可能な実行パターンで調整する必要があります。 エージェントは、処理を進める前に人間の承認を待つ必要がある場合があり、その間は理想的にはリソースを消費しない (課金されない) ことが望まれます。 Durable Extension は、Azure Durable Functions の機能を Microsoft Agent Framework に拡張することで、これらの課題に対応します。これにより、障害に耐え、弾力的にスケールし、耐久性と分散実行によって予測可能に動作する AI エージェントを構築できます。 4 つの柱 : 4D この拡張機能は、4 つの基本的な価値の柱、通称「4D」に基づいて構築されています。 Durability (耐久性) すべてのエージェントの状態変更(メッセージ、ツール呼び出し、意思決定)は、自動的に耐久性のあるチェックポイントとして保存されます。エージェントは、インフラ更新やクラッシュから復旧し、長時間の待機中にメモリからアンロードされてもコンテキストを失わずに再開できます。これは、長時間実行される処理や外部イベントを待機するエージェントに不可欠です。 Distributed (分散型の) エージェントの実行はすべてのインスタンスで利用可能であり、弾力的なスケーリングと自動フェイルオーバーを実現します。正常なノードは、障害が発生したインスタンスの作業をシームレスに引き継ぎ、継続的な運用を保証します。この分散実行モデルにより、数千のステートフルエージェントがスケールアップし、並列で動作できます。 Deterministic (決定性) エージェントのオーケストレーションは、通常のコードとして記述された命令型ロジックを使用して予測可能に実行されます。実行パスを定義することで、自動テスト、検証可能なガードレール、ステークホルダーが信頼できるビジネスクリティカルなワークフローを実現します。必要に応じて明示的な制御フローを提供し、エージェント主導のワークフローを補完します。 Debuggability (デバッグしやすさ) IDE、デバッガー、ブレークポイント、スタックトレース、単体テストなどの馴染みのある開発ツールやプログラミング言語を使用して開発・デバッグできます。エージェントとそのオーケストレーションはコードとして表現されるため、テスト、デバッグ、保守が容易です。 実際の機能の動作 サーバーレス ホスティング (Serverless hosting) エージェントを Azure Functions (近日中に他の Azure サービスにも拡張予定)にデプロイし、使用していないときはゼロまで、使用時は数千インスタンスまで自動スケーリングします。消費したコンピューティング リソースに対してのみ料金を支払います。このコードファーストのデプロイ手法により、サーバーレス アーキテクチャの利点を維持しながら、コンピュート環境 (compute environment) を完全に制御できます。 # Python endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") deployment_name = os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME", "gpt-4o-mini") # 標準的な Microsoft Agent Framework パターンに従って AI エージェントを作成します agent = AzureOpenAIChatClient( endpoint=endpoint, deployment_name=deployment_name, credential=AzureCliCredential() ).create_agent( instructions="""あなたは、どんなテーマに対しても読みやすく構造化された、 魅力的なドキュメントを作成するプロフェッショナルなコンテンツライターです。 テーマが与えられたら、次の手順で進めてください。 1. Web 検索ツールを使ってテーマをリサーチする 2. ドキュメントのアウトラインを生成する 3. 適切な書式で説得力のあるドキュメントを書く 4. 
```python
endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
deployment_name = os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME", "gpt-4o-mini")

# Create an AI agent following the standard Microsoft Agent Framework pattern
agent = AzureOpenAIChatClient(
    endpoint=endpoint,
    deployment_name=deployment_name,
    credential=AzureCliCredential()
).create_agent(
    instructions="""You are a professional content writer who creates
    engaging, well-structured, readable documents on any topic.
    When given a topic:
    1. Research the topic using the web search tool
    2. Generate a document outline
    3. Write a compelling, properly formatted document
    4. Include relevant examples and citations""",
    name="DocumentPublisher",
    tools=[
        AIFunctionFactory.Create(search_web),
        AIFunctionFactory.Create(generate_outline)
    ]
)

# Configure the function app to host the agent with durable session management
app = AgentFunctionApp(agents=[agent])
app.run()
```

Automatic Session Management

Agent sessions are automatically checkpointed to the durable storage configured for your function app, enabling durability and distributed execution across instances. Agent execution can resume from any instance after interruptions or process failures, guaranteeing continuous operation.

Internally, agents are implemented as durable entities: stateful objects that persist state across executions. With this architecture, each agent session acts as a reliable, long-lived entity that retains conversation history and context.

Example scenario: a customer service agent handling a complex support case over days or weeks. Even if the agent is redeployed or moved to another instance, the conversation history, context, and progress are preserved.

```bash
# First interaction - start a new thread for document creation
curl -X POST https://your-function-app.azurewebsites.net/api/agents/DocumentPublisher/threads \
  -H "Content-Type: application/json" \
  -d '{"message": "Create a document about the benefits of Azure Functions"}'

# The response includes a thread ID and an initial document outline/draft
# {"threadId": "doc789", "response": "I'll create a comprehensive document about the benefits of Azure Functions. Searching for the latest information... [Document draft] # Benefits of Azure Functions\n\n## Introduction\nAzure Functions is a serverless compute service that runs event-driven code without infrastructure management...\n\n## Cost Efficiency\n- Pay only for execution time\n- No charges for idle resources\n- Automatic scaling reduces overprovisioning...\n\n## Developer Productivity\n- Multi-language support (C#, Python, JavaScript, Java)\n- Integrated development tools and CI/CD...\n\n## Scalability\n- Automatic scaling based on demand\n- Handles millions of requests seamlessly...\n\nReferences: [Azure documentation, research on serverless computing]"}

# Second interaction - refine the document on the same thread
curl -X POST https://your-function-app.azurewebsites.net/api/agents/DocumentPublisher/threads/doc789 \
  -H "Content-Type: application/json" \
  -d '{"message": "Could you add a section about integration with other Azure services?"}'

# The agent retains the context of the Azure Functions document and adds the requested section
# {"threadId": "doc789", "response": "I've added a comprehensive integration section to the Azure Functions document:\n\n## Integration with Azure Services\n\n### Azure Storage\nTriggers and bindings for Blob Storage, Queue Storage, and Table Storage enable seamless event-driven architectures...\n\n### Azure Event Grid and Event Hubs\nProcess real-time event streams and implement scalable pub/sub patterns...\n\n### Azure Cosmos DB\nBuilt-in bindings for document database operations and automatic change-feed processing...\n\n### Azure Service Bus\nReliable message processing with enterprise messaging capabilities...\n\n### Azure AI Services\nEasily integrate OpenAI, Cognitive Services, and AI Search for intelligent applications...\n\nThe section was added after the scalability section. Would you like me to add use cases or deployment best practices as well?"}
```

(Translator's note: as of November 20, the endpoint URLs and request/response formats above have changed. This article keeps them as they appeared in the original post, but since the extension is still in preview they may change again; see the official documentation for the latest details: https://aka.ms/create-and-run-durable-agent)

Deterministic Multi-Agent Orchestrations

Coordinate multiple specialized durable agents using imperative code, where the developer defines the control flow. This differs from agent-driven workflows, in which the agent decides the next step. Deterministic orchestrations provide predictable, repeatable execution patterns with automatic checkpointing and recovery.

Example scenario: an email processing system first runs a spam detection agent and then conditionally routes to different specialized agents based on the classification. The orchestration automatically recovers from failures at any step, and completed agent calls are not re-executed.
```python
@app.orchestration_trigger(context_name="context")
def document_publishing_orchestration(context: DurableOrchestrationContext):
    """A deterministic orchestration coordinating multiple specialized agents."""
    doc_request = context.get_input()

    # Get the specialized agents from the orchestration context
    research_agent = context.get_agent("ResearchAgent")
    writer_agent = context.get_agent("DocumentPublisherAgent")

    # Step 1: Research the topic via web search
    research_result = yield research_agent.run(
        messages=f"Research the following topic and gather key information: {doc_request.topic}",
        response_schema=ResearchResult
    )

    # Step 2: Generate an outline based on the research findings
    outline = yield context.call_activity("generate_outline", {
        "topic": doc_request.topic,
        "research_data": research_result.findings
    })

    # Step 3: Write the document from the findings and outline
    document = yield writer_agent.run(
        messages=f"""Write a comprehensive document on the topic: {doc_request.topic}

        Research findings:
        {research_result.findings}

        Outline:
        {outline}

        Make the document well-structured, readable, and engaging, with
        proper formatting. Include citations where appropriate.""",
        response_schema=DocumentResponse
    )

    # Step 4: Save and publish the generated document
    return (yield context.call_activity("publish_document", {
        "title": doc_request.topic,
        "content": document.text,
        "citations": document.citations
    }))
```

Human-in-the-Loop

Orchestrations and agents can pause while waiting for human input, approval, or review without consuming compute resources. Thanks to durable execution, an orchestration can wait days or even weeks for a human response, even across application crashes and restarts. Combined with serverless hosting, all compute spins down during the wait, completely eliminating compute costs until a human provides input.

Example scenario: a content publishing agent generates a draft, sends it to a human reviewer, and waits days for approval, without running (or billing for) compute during the review period. When the human response arrives, the orchestration resumes automatically with the full conversation context and execution state intact.

```python
@app.orchestration_trigger(context_name="context")
def content_approval_workflow(context: DurableOrchestrationContext):
    """A human-in-the-loop workflow (zero cost while waiting)."""
    topic = context.get_input()

    # Step 1: Generate content with an agent
    content_agent = context.get_agent("ContentGenerationAgent")
    draft_content = yield content_agent.run(f"Write an article about {topic}")

    # Step 2: Request a human review
    yield context.call_activity("notify_reviewer", draft_content)

    # Step 3: Wait for approval (no compute consumed while waiting)
    approval_event = context.wait_for_external_event("ApprovalDecision")
    timeout_task = context.create_timer(context.current_utc_datetime + timedelta(hours=24))

    winner = yield context.task_any([approval_event, timeout_task])

    if winner == approval_event:
        timeout_task.cancel()
        approved = approval_event.result
        if approved:
            result = yield context.call_activity("publish_content", draft_content)
            return result
        else:
            return "Content was rejected"
    else:
        # On timeout: escalate the review
        result = yield context.call_activity("escalate_for_review", draft_content)
        return result
```

Built-in Agent Observability

Configure your function app with the Durable Task Scheduler as its durable backend (the mechanism that persists agent and orchestration state). The Durable Task Scheduler is the recommended backend for durable agents, offering the highest throughput, fully managed infrastructure, and built-in observability through a UI dashboard. The dashboard gives deep visibility into agent operations:

- Conversation history: View the complete conversation thread for each agent session, including every message, tool call, and the context at any point in time.
- Multi-agent visualization: See the execution flow across multiple specialized agents, with a visual representation of hand-offs, parallel execution, and conditional branching.
- Performance metrics: Monitor agent response times, token usage, and orchestration durations.
- Execution history: Access detailed execution logs with full replay capability for debugging.

Demo Video

Language Support

The Durable Task Extension supports the following languages:

- C# (.NET 8.0+) with Azure Functions
- Python (3.10+) with Azure Functions

Support for additional computes is coming soon.

Get Started Today

- Click here to create and run a durable agent
- Learn more: Overview documentation
- C# Samples
- Python Samples

Original article: Bulletproof agents with the durable task extension for Microsoft Agent Framework | Microsoft Community Hub
How to Integrate Playwright MCP for AI-Driven Test Automation

Test automation has come a long way: from scripted flows to self-healing and now AI-driven testing. With the introduction of the Model Context Protocol (MCP), Playwright can now interact with AI models and external tools to make smarter testing decisions. This guide walks you through integrating MCP with Playwright in VS Code, starting from the basics, so you can build smarter, adaptive tests today.

What Is Playwright MCP?

- Playwright: An open-source framework for web testing and automation. It supports multiple browsers (Chromium, Firefox, and WebKit) and offers robust features like auto-wait and screenshot capture, along with great tooling such as Codegen and Trace Viewer.
- MCP (Model Context Protocol): A protocol that enables external tools to communicate with AI models or services in a structured, secure way.

By combining Playwright with MCP, you unlock:

- AI-assisted test generation.
- Dynamic test data.
- Smarter debugging and adaptive workflows.

Why Integrate MCP with Playwright?

- AI-powered test generation: Reduce manual scripting.
- Dynamic context awareness: Tests adapt to real-time data.
- Improved debugging: AI can suggest fixes for failing tests.
- Smarter locator selection: AI helps pick stable, reliable selectors to reduce flaky tests.
- Natural language instructions: Write or trigger tests using plain English prompts.

Getting Started in VS Code

Prerequisites

- Node.js. Download: nodejs.org. Minimum version: v18.0.0 or higher (recommended: latest LTS). Check your version with node --version.
- Playwright. Install with: npm install @playwright/test

Step 1: Create Project Folder

```bash
mkdir playwrightMCP-demo
cd playwrightMCP-demo
```

Step 2: Initialize Project

```bash
npm init playwright@latest
```

Step 3: Install MCP Server for VS Code

Navigate to GitHub - microsoft/playwright-mcp: Playwright MCP server and click "Install server" for VS Code. Then search for 'MCP: Open user configuration' (type '>mcp' in the search box). You will see that a file mcp.json has been created in your user app data folder, containing the server details:

```json
{
  "servers": {
    "playwright": {
      "command": "npx",
      "args": [
        "@playwright/mcp@latest"
      ],
      "type": "stdio"
    }
  },
  "inputs": []
}
```

Alternatively, install an MCP server directly from the GitHub MCP server registry using the Extensions view in VS Code.

Verify installation: Open Copilot Chat → select Agent Mode → click Configure Tools → confirm microsoft/playwright-mcp appears in the list.

Step 4: Create a Simple Test Using MCP

Once your project and MCP setup are ready in VS Code, you can create a simple test that demonstrates MCP's capabilities. MCP can help in multiple scenarios; below is an example of AI-assisted test generation, using natural language prompts to generate Playwright tests automatically.

Test scenario: Validate that a user can switch the Playwright documentation language dropdown to Python, search for "Frames," and navigate to the Frames documentation page. Confirm that the page heading correctly displays "Frames."

Sample prompt to use in VS Code (Copilot Agent Mode): Create a Playwright automated test in JavaScript that verifies navigation to the 'Frames' documentation page following the steps below, and be specific about locators to avoid strict-mode violation errors:

1. Navigate to the Playwright documentation.
2. Select "Python" from the dropdown labelled "Node.js".
3. Type the keyword "Frames" into the search box.
4. Click the search result for the Frames documentation page.
5. Verify that the page header reads "Frames".
6. Log success or provide a failure message with details.

Copilot will generate the test automatically in your tests folder.
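The generated test varies from run to run, but it typically looks something like the sketch below. Treat the locators as illustrative assumptions: Copilot picks them from the live page, and you may need to adjust them if the docs site changes.

```javascript
const { test, expect } = require('@playwright/test');

test('navigates to the Frames documentation page', async ({ page }) => {
  // Steps 1-2: open the docs and switch the language dropdown to Python
  await page.goto('https://playwright.dev/docs/intro');
  await page.getByRole('button', { name: 'Node.js' }).hover();
  await page.getByRole('link', { name: 'Python', exact: true }).click();

  // Step 3: search for "Frames"
  await page.getByRole('button', { name: 'Search' }).click();
  await page.getByPlaceholder('Search docs').fill('Frames');

  // Step 4: open the Frames documentation result
  await page.getByRole('link', { name: /Frames/ }).first().click();

  // Steps 5-6: verify the page header and log the outcome
  await expect(page.getByRole('heading', { name: 'Frames', level: 1 })).toBeVisible();
  console.log('Success: Frames documentation page verified');
});
```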
Step 5: Run Test

```bash
npx playwright test
```

Conclusion

Integrating Playwright with MCP in VS Code helps you build smarter, adaptive tests without adding complexity. Start small, follow best practices, and scale as you grow.

Note: Installation steps may vary depending on your environment. Refer to the MCP Registry on GitHub for the latest instructions.