# Driving AI‑Powered Healthcare: A Data & AI Webinar and Workshop Series
Across these sessions, you’ll learn how healthcare organizations are using Microsoft Fabric, advanced analytics, and AI to unify fragmented data, modernize analytics, and enable intelligent, scalable solutions, from enterprise reporting to AI‑powered use cases. Whether you’re just getting started or looking to accelerate adoption, these sessions offer practical guidance, real‑world examples, and hands‑on learning to help you build a strong data foundation for AI in healthcare.

| Date | Topic | Details | Location | Registration |
|---|---|---|---|---|
| May 6 | Webinar: Microsoft Fabric Foundations – A Simple Path to Modern Analytics and AI | Discover how Microsoft Fabric consolidates fragmented analytics into a single integrated data platform, making it easier to deliver trusted insights and adopt AI without added complexity. | Virtual | Register |
| May 13 | Webinar: Reduce BI Sprawl, Cut Cost and Build an AI-Ready Analytics Foundation | Learn how Power BI enables enterprise BI consolidation, consistent metrics, and secure, scalable analytics that support both operational reporting and emerging AI use cases. | Virtual | Register |
| May 19–20 | In-Person Workshop: Driving AI‑Powered Healthcare: Advanced Analytics, AI, and Real‑World Impact | Attend this two‑day, in‑person event to learn how healthcare organizations use Microsoft Fabric to unify data, accelerate AI adoption, and deliver measurable clinical and operational value. Day 1 focuses on strategy, architecture, and real‑world healthcare use cases, while Day 2 offers hands‑on workshops to apply those concepts through guided labs and agent‑powered solutions. | Chicago | Register |
| May 27 | Webinar: Unified Data Foundation for AI & Analytics – Leveraging OneLake and Microsoft Fabric | This session shows how organizations can simplify fragmented data architectures by using Microsoft Fabric and OneLake as a single, governed foundation for analytics and AI. | Virtual | Register |
| May 27–28 | In-Person Workshop: Driving AI‑Powered Healthcare: Advanced Analytics, AI, and Real‑World Impact | Attend this two‑day, in‑person event to learn how healthcare organizations use Microsoft Fabric to unify data, accelerate AI adoption, and deliver measurable clinical and operational value. Day 1 focuses on strategy, architecture, and real‑world healthcare use cases, while Day 2 offers hands‑on workshops to apply those concepts through guided labs and agent‑powered solutions. | Silicon Valley | Register |
| June 2 | Webinar: Delivering Personalized Patient Experiences at Scale with Microsoft Fabric and Adobe | Learn how healthcare organizations can improve patient engagement by unifying trusted data in Microsoft Fabric and activating it through Adobe’s personalization platform. | Virtual | Register |
| June 3–4 | In-Person Workshop: Driving AI‑Powered Healthcare: Advanced Analytics, AI, and Real‑World Impact | Attend this two‑day, in‑person event to learn how healthcare organizations use Microsoft Fabric to unify data, accelerate AI adoption, and deliver measurable clinical and operational value. Day 1 focuses on strategy, architecture, and real‑world healthcare use cases, while Day 2 offers hands‑on workshops to apply those concepts through guided labs and agent‑powered solutions. | New York | Register |
| June 10 | Webinar: From Data to Decisions: How AI Data Agents in Microsoft Fabric Redefine Analytics | Join us to learn how Fabric Data Agents enable users to interact with enterprise data through AI‑powered, governed agents that understand both data and business context. | Virtual | Register |
| June 17 | Webinar: Building the Intelligent Factory: A Unified Data and AI Approach to Life Science Manufacturing | Discover how life science & MedTech manufacturers use Microsoft Fabric to integrate operational, quality, and enterprise data and apply AI‑powered analytics for smarter, faster manufacturing decisions. | Virtual | Register |
| June 23–24 | In-Person Workshop: Driving AI‑Powered Healthcare: Advanced Analytics, AI, and Real‑World Impact | Attend this two‑day, in‑person event to learn how healthcare organizations use Microsoft Fabric to unify data, accelerate AI adoption, and deliver measurable clinical and operational value. Day 1 focuses on strategy, architecture, and real‑world healthcare use cases, while Day 2 offers hands‑on workshops to apply those concepts through guided labs and agent‑powered solutions. | Dallas | Register |

# Getting Started with Foundry Local: A Student Guide to the Microsoft Foundry Local Lab
If you want to start building AI applications on your own machine, the Microsoft Foundry Local Lab is one of the most useful places to begin. It is a practical workshop that takes you from first-time setup through to agents, retrieval, evaluation, speech transcription, tool calling, and a browser-based interface. The material is hands-on, cross-language, and designed to show how modern AI apps can run locally rather than depending on a cloud service for every step.

This blog post is aimed at students, self-taught developers, and anyone learning how AI applications are put together in practice. Instead of treating large language models as a black box, the lab shows you how to install and manage local models, connect to them with code, structure tasks into workflows, and test whether the results are actually good enough. If you have been looking for a learning path that feels more like building real software and less like copying isolated snippets, this workshop is a strong starting point.

## What Is Foundry Local?

Foundry Local is a local runtime for downloading, managing, and serving AI models on your own hardware. It exposes an OpenAI-compatible interface, which means you can work with familiar SDK patterns while keeping execution on your device.

For learners, that matters for three reasons. First, it lowers the barrier to experimentation because you can run projects without setting up a cloud account for every test. Second, it helps you understand the moving parts behind AI applications, including model lifecycle, local inference, and application architecture. Third, it encourages privacy-aware development because the examples are designed to keep data on the machine wherever possible.

The Foundry Local Lab uses that local-first approach to teach the full journey from simple prompts to multi-agent systems.
It includes examples in Python, JavaScript, and C#, so you can follow the language that fits your course, your existing skills, or the platform you want to build on.

## Why This Lab Works Well for Learners

A lot of AI tutorials stop at the moment a model replies to a prompt. That is useful for a first demo, but it does not teach you how to build a proper application. The Foundry Local Lab goes further. It is organised as a sequence of parts, each one adding a new idea and giving you working code to explore. You do not just ask a model to respond. You learn how to manage the service, choose a language SDK, construct retrieval pipelines, build agents, evaluate outputs, and expose the result through a usable interface.

That sequence is especially helpful for students because the parts build on each other. Early labs focus on confidence and setup. Middle labs focus on architecture and patterns. Later labs move into more advanced ideas that are common in real projects, such as tool calling, evaluation, and custom model packaging. By the end, you have seen not just what a local AI app looks like, but how its different layers fit together.

## Before You Start

The workshop expects a reasonably modern machine and at least one programming language environment. The core prerequisites are straightforward: install Foundry Local, clone the repository, and choose whether you want to work in Python, JavaScript, or C#. You do not need to master all three. In fact, most learners will get more value by picking one language first, completing the full path in that language, and only then comparing how the same patterns look elsewhere.

If you are new to AI development, do not be put off by the number of parts. The early sections are accessible, and the later ones become much easier once you have completed the foundations. Think of the lab as a structured course rather than a single tutorial.
## What You Learn in Each Lab

Repository: https://github.com/microsoft-foundry/foundry-local-lab

### Part 1: Getting Started with Foundry Local

The first part introduces the basics of Foundry Local and gets you up and running. You learn how to install the CLI, inspect the model catalogue, download a model, and run it locally. This part also introduces practical details such as model aliases and dynamic service ports, which are small but important pieces of real development work.

For students, the value of this part is confidence. You prove that local inference works on your machine, you see how the service behaves, and you learn the operational basics before writing any application code. By the end of Part 1, you should understand what Foundry Local does, how to start it, and how local model serving fits into an application workflow.

### Part 2: Foundry Local SDK Deep Dive

Once the CLI makes sense, the workshop moves into the SDK. This part explains why application developers often use the SDK instead of relying only on terminal commands. You learn how to manage the service programmatically, browse available models, control model download and loading, and understand model metadata such as aliases and hardware-aware selection.

This is where learners start to move from using a tool to building with a platform. You begin to see the difference between running a model manually and integrating it into software. By the end of this section, you should understand the API surface you will use in your own projects and know how to bootstrap the SDK in Python, JavaScript, or C#.

### Part 3: SDKs and APIs

Part 3 turns the SDK concepts into a working chat application. You connect code to the local inference server and use the OpenAI-compatible API for streaming chat completions. The lab includes examples in all three supported languages, which makes it especially useful if you are comparing ecosystems or learning how the same idea is expressed through different syntax and libraries.
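To make the streaming idea concrete before moving on, here is a minimal, dependency-free sketch of how streamed chat-completion deltas are typically assembled into a full reply. The chunk contents are invented; the shape mirrors the OpenAI-compatible streaming format the lab works with, not the lab's own code.

```python
# Sketch: assembling streamed chat-completion deltas into a full reply.
# The chunk dictionaries mimic the OpenAI-compatible streaming shape;
# the content strings here are made up for illustration.

chunks = [
    {"choices": [{"delta": {"role": "assistant"}}]},
    {"choices": [{"delta": {"content": "Local "}}]},
    {"choices": [{"delta": {"content": "inference "}}]},
    {"choices": [{"delta": {"content": "works."}}]},
    {"choices": [{"delta": {}}]},  # the final chunk carries no content
]

def assemble(stream):
    """Concatenate the content deltas from a stream of chunks."""
    parts = []
    for chunk in stream:
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)

print(assemble(chunks))  # -> Local inference works.
```

In a real client the chunks arrive one at a time from the local server, which is why streaming UIs can show text as it is produced instead of waiting for the complete answer.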
The key learning outcome here is not just that you can get a response from a model. It is that you understand the boundary between your application and the local model service. You learn how messages are structured, how streaming works, and how to write the sort of integration code that becomes the foundation for every later lab.

### Part 4: Retrieval-Augmented Generation

This is where the workshop starts to feel like modern AI engineering rather than basic prompting. In the retrieval-augmented generation lab, you build a simple RAG pipeline that grounds answers in supplied data. You work with an in-memory knowledge base, apply retrieval logic, score matches, and compose prompts that include grounded context.

For learners, this part is important because it demonstrates a core truth of AI app development: a model on its own is often not enough. Useful applications usually need access to documents, notes, or structured information. By the end of Part 4, you understand why retrieval matters, how to pass retrieved context into a prompt, and how a pipeline can make answers more relevant and reliable.

### Part 5: Building AI Agents

Part 5 introduces the concept of an agent. Instead of a one-off prompt and response, you begin to define behaviour through system instructions, roles, and conversation state. The lab uses the ChatAgent pattern and the Microsoft Agent Framework to show how an agent can maintain a purpose, respond with a persona, and return structured output such as JSON.

This part helps learners understand the difference between a raw model call and a reusable application component. You learn how to design instructions that shape behaviour, how multi-turn interaction differs from single prompts, and why structured output matters when an AI component has to work inside a broader system.

### Part 6: Multi-Agent Workflows

Once a single agent makes sense, the workshop expands the idea into a multi-agent workflow.
The example pipeline uses roles such as researcher, writer, and editor, with outputs passed from one stage to the next. You explore sequential orchestration, shared configuration, and feedback loops between specialised components.

For students, this lab is a very clear introduction to decomposition. Instead of asking one model to do everything at once, you break a task into smaller responsibilities. That pattern is useful well beyond AI. By the end of Part 6, you should understand why teams build multi-agent systems, how hand-offs are structured, and what trade-offs appear when more components are added to a workflow.

### Part 7: Zava Creative Writer Capstone Application

The Zava Creative Writer is the capstone project that brings the earlier ideas together into a more production-style application. It uses multiple specialised agents, structured JSON hand-offs, product catalogue search, streaming output, and evaluation-style feedback loops. Rather than showing an isolated feature, this part shows how separate patterns combine into a complete system.

This is one of the most valuable parts of the workshop for learner developers because it narrows the gap between tutorial code and real application design. You can see how orchestration, agent roles, and practical interfaces fit together. By the end of Part 7, you should be able to recognise the architecture of a serious local AI app and understand how the earlier labs support it.

### Part 8: Evaluation-Led Development

Many beginner AI projects stop once the output looks good once or twice. This lab teaches a much stronger habit: evaluation-led development. You work with golden datasets, rule-based checks, and LLM-as-judge scoring to compare prompt or agent variants systematically. The goal is to move from anecdotal testing to repeatable assessment. This matters enormously for students because evaluation is one of the clearest differences between a classroom demo and dependable software.
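A rule-based check over a golden dataset can be sketched in a few lines of plain Python. The dataset, the stand-in agent, and the pass criterion below are all illustrative, not the lab's own evaluation files.

```python
# Sketch of a rule-based evaluation pass over a golden dataset.
# Everything here is invented for illustration: in the lab, the
# agent would be a real model call and the dataset would live on disk.

golden = [
    {"question": "capital of France?", "must_contain": "paris"},
    {"question": "2 + 2?", "must_contain": "4"},
]

def fake_agent(question):
    # Stand-in for a model call; replace with a real client in practice.
    return {"capital of France?": "Paris", "2 + 2?": "4"}[question]

def pass_rate(dataset, agent):
    """Fraction of golden cases whose answer contains the expected text."""
    passed = sum(
        1 for case in dataset
        if case["must_contain"] in agent(case["question"]).lower()
    )
    return passed / len(dataset)

print(pass_rate(golden, fake_agent))  # -> 1.0
```

The point of even a toy harness like this is repeatability: two prompt variants can be compared on the same cases, and a score replaces "it looked fine when I tried it".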
By the end of Part 8, you should understand how to define success criteria, compare outputs at scale, and use evidence rather than intuition when improving an AI component.

### Part 9: Voice Transcription with Whisper

Part 9 broadens the workshop beyond text generation by introducing speech-to-text with Whisper running locally. You use the Foundry Local SDK to download and load the model, then transcribe local audio files through the compatible API surface. The emphasis is on privacy-first processing, with audio kept on-device.

This section is a useful reminder that local AI development is not limited to chatbots. Learners see how a different modality fits into the same ecosystem and how local execution supports sensitive workloads. By the end of this lab, you should understand the transcription flow, the relevant client methods, and how speech features can be integrated into broader applications.

### Part 10: Using Custom or Hugging Face Models

After learning the standard path, the workshop shows how to work with custom or Hugging Face models. This includes compiling models into optimised ONNX format with ONNX Runtime GenAI, choosing hardware-specific options, applying quantisation strategies, creating configuration files, and adding compiled models to the Foundry Local cache.

For learner developers, this part opens the door to model engineering rather than simple model consumption. You begin to understand that model choice, optimisation, and packaging affect performance and usability. By the end of Part 10, you should have a clearer picture of how models move from an external source into a runnable local setup and why deployment format matters.

### Part 11: Tool Calling with Local Models

Tool calling is one of the most practical patterns in current AI development, and this lab covers it directly. You define tool schemas, allow the model to request function calls, handle the multi-turn interaction loop, execute the tools locally, and return results back to the model.
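The heart of that loop is dispatch: the model emits a function-call request, your code runs the matching local function, and the result goes back into the conversation. Here is a minimal, dependency-free sketch; the tool name, arguments, and data are invented for illustration and are not the lab's code.

```python
import json

# Sketch of the tool-calling dispatch step: a model-issued call,
# shaped like an OpenAI-style function call, is routed to a local
# Python function. The population tool and its data are made up.

def get_population(city: str) -> int:
    # Local stand-in for a real data lookup.
    return {"london": 8_800_000}.get(city.lower(), 0)

TOOLS = {"get_population": get_population}

def handle_tool_call(call):
    """Dispatch a model-issued tool call to a registered local function."""
    fn = TOOLS[call["name"]]
    args = json.loads(call["arguments"])  # arguments arrive as a JSON string
    return fn(**args)

call = {"name": "get_population", "arguments": '{"city": "London"}'}
print(handle_tool_call(call))  # -> 8800000
```

In the full pattern, this result is appended to the conversation as a tool message and the model is called again so it can fold the value into its final answer.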
The examples include practical scenarios such as weather and population tools. This lab teaches learners how to move beyond generation into action. A model is no longer limited to producing text. It can decide when external data or a function is needed and incorporate that result into a useful answer. By the end of Part 11, you should understand the tool-calling flow and how AI systems connect reasoning with deterministic software behaviour.

### Part 12: Building a Web UI for the Zava Creative Writer

Part 12 adds a browser-based front end to the capstone application. You learn how to serve a shared interface from Python, JavaScript, or C#, stream updates to the browser, consume NDJSON with the Fetch API and ReadableStream, and show live agent status as content is produced in real time.

This part is especially good for students who want to build portfolio projects. It turns backend orchestration into something visible and interactive. By the end of Part 12, you should understand how to connect a local AI backend to a web interface and how streaming changes the user experience compared with waiting for one final response.

### Part 13: Workshop Complete

The final part is a summary and extension point. It reviews what you have built across the previous sections and suggests ways to continue. Although it is not a new technical lab in the same way as the earlier parts, it plays an important role in learning. It helps you consolidate the architecture, the terminology, and the development patterns you have encountered. For learners, reflection matters. By the end of Part 13, you should be able to describe the full stack of a local AI application, from model management to user interface, and identify which area you want to deepen next.

## What Students Gain from the Full Workshop

Taken together, these labs do more than teach Foundry Local itself. They teach how AI applications are built. You learn operational basics such as model setup and service management.
You learn application integration through SDKs and APIs. You learn system design through RAG, agents, multi-agent orchestration, and web interfaces. You learn engineering discipline through evaluation. You also see how text, speech, custom models, and tool calling all fit into one local-first development workflow.

That breadth makes the workshop useful in several settings. A student can use it as a self-study path. A lecturer can use it as source material for practical sessions. A learner developer can use it to build portfolio pieces and to understand which AI patterns are worth learning next. Because the repository includes Python, JavaScript, and C#, it also works well for comparing how architectural ideas transfer across languages.

## How to Approach the Lab as a Beginner

If you are starting from scratch, the best route is simple. Complete Parts 1 to 3 in your preferred language first. That gives you the essential setup and integration skills. Then move into Parts 4 to 6 to understand how AI application patterns are composed. After that, use Parts 7 and 8 to learn how larger systems and evaluation fit together. Finally, explore Parts 9 to 12 based on your interests, whether that is speech, tooling, model customisation, or front-end work.

It is also worth keeping notes as you go. Record what each part adds to your understanding, what code files matter, and what assumptions each example makes. That habit will help you move from following the labs to adapting the patterns in your own projects.

## Final Thoughts

The Microsoft Foundry Local Lab is a strong introduction to local AI development because it treats learners like developers rather than spectators. You install, run, connect, orchestrate, evaluate, and present working systems. That makes it far more valuable than a short demo that only proves a model can answer a question. If you are a student or learner developer who wants to understand how AI applications are really built, this lab gives you a clear path.
Start with the basics, pick one language, and work through the parts in order. By the time you finish, you will not just have used Foundry Local. You will have a practical foundation for building local AI applications with far more confidence and much better judgement.

# LangChain Multi-Agent Systems with Microsoft Agent Framework and Hosted Agents
If you have been building AI agents with LangChain, you already know how powerful its tool and chain abstractions are. But when it comes to deploying those agents to production — with real infrastructure, managed identity, live web search, and container orchestration — you need something more. This post walks through how to combine LangChain with the Microsoft Agent Framework (`azure-ai-agents`) and deploy the result as a Microsoft Foundry Hosted Agent. We will build a multi-agent incident triage copilot that uses LangChain locally and seamlessly upgrades to cloud-hosted capabilities on Microsoft Foundry.

## Why combine LangChain with Microsoft Agent Framework?

As a LangChain developer, you get excellent abstractions for building agents: the `@tool` decorator, `RunnableLambda` chains, and composable pipelines. But production deployment raises questions that LangChain alone does not answer:

- Where do your agents run? Containers, serverless, or managed infrastructure?
- How do you add live web search or code execution? Bing Grounding and Code Interpreter are not LangChain built-ins.
- How do you handle authentication? Managed identity, API keys, or tokens?
- How do you observe agents in production? Distributed tracing across multiple agents?

The Microsoft Agent Framework fills these gaps. It provides `AgentsClient` for creating and managing agents on Microsoft Foundry, built-in tools like `BingGroundingTool` and `CodeInterpreterTool`, and a thread-based conversation model. Combined with Hosted Agents, you get a fully managed container runtime with health probes, auto-scaling, and the OpenAI Responses API protocol.

The key insight: LangChain handles local logic and chain composition; the Microsoft Agent Framework handles cloud-hosted orchestration and tooling.
## Architecture overview

The incident triage copilot uses a coordinator pattern with three specialist agents:

```
      User Query
          |
          v
   Coordinator Agent
          |
          +--> LangChain Triage Chain (routing decision)
          +--> LangChain Synthesis Chain (combine results)
          |
      +---+---+---+
      |       |       |
      v       v       v
  Research Diagnostics Remediation
   Agent     Agent      Agent
```

Each specialist agent has two execution modes:

| Mode | LangChain Role | Microsoft Agent Framework Role |
|---|---|---|
| Local | `@tool` functions provide heuristic analysis | Not used |
| Foundry | Chains handle routing and synthesis | `AgentsClient` with `BingGroundingTool`, `CodeInterpreterTool` |

This dual-mode design means you can develop and test locally with zero cloud dependencies, then deploy to Foundry for production capabilities.

## Step 1: Define your LangChain tools

Start with what you know. Define typed, documented tools using LangChain’s `@tool` decorator:

```python
from langchain_core.tools import tool

@tool
def classify_incident_severity(query: str) -> str:
    """Classify the severity and priority of an incident based on keywords.

    Args:
        query: The incident description text.

    Returns:
        Severity classification with priority level.
    """
    query_lower = query.lower()
    critical_keywords = [
        "production down",
        "all users",
        "outage",
        "breach",
    ]
    high_keywords = [
        "503",
        "500",
        "timeout",
        "latency",
        "slow",
    ]
    if any(kw in query_lower for kw in critical_keywords):
        return "severity=critical, priority=P1"
    if any(kw in query_lower for kw in high_keywords):
        return "severity=high, priority=P2"
    return "severity=low, priority=P4"
```

These tools work identically in local mode and serve as fallbacks when Foundry is unavailable.
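If you want to exercise the heuristic without installing LangChain at all, the decorated function reduces to plain Python. This is a sketch to show the logic, not the sample's exact file:

```python
# The severity heuristic with the LangChain @tool decorator stripped
# away — the keyword logic is plain Python underneath.

def classify_incident_severity(query: str) -> str:
    query_lower = query.lower()
    critical_keywords = ["production down", "all users", "outage", "breach"]
    high_keywords = ["503", "500", "timeout", "latency", "slow"]
    if any(kw in query_lower for kw in critical_keywords):
        return "severity=critical, priority=P1"
    if any(kw in query_lower for kw in high_keywords):
        return "severity=high, priority=P2"
    return "severity=low, priority=P4"

print(classify_incident_severity("Production down, all users affected"))
# -> severity=critical, priority=P1
print(classify_incident_severity("503 errors on /api/orders"))
# -> severity=high, priority=P2
```

With the decorator in place, the same function is callable through LangChain's tool interface and can be handed to agents, but the classification behaviour is identical.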
## Step 2: Build routing with LangChain chains

Use `RunnableLambda` to create a routing chain that classifies the incident and selects which specialists to invoke:

```python
from enum import Enum

from langchain_core.runnables import RunnableLambda

class AgentRole(str, Enum):
    RESEARCH = "research"
    DIAGNOSTICS = "diagnostics"
    REMEDIATION = "remediation"

DIAGNOSTICS_KEYWORDS = {
    "log", "error", "exception", "timeout", "500", "503",
    "crash", "oom", "root cause",
}

REMEDIATION_KEYWORDS = {
    "fix", "remediate", "runbook", "rollback", "hotfix",
    "patch", "resolve", "action plan",
}

def _route(inputs: dict) -> dict:
    query = inputs["query"].lower()
    specialists = [AgentRole.RESEARCH]  # always included
    if any(kw in query for kw in DIAGNOSTICS_KEYWORDS):
        specialists.append(AgentRole.DIAGNOSTICS)
    if any(kw in query for kw in REMEDIATION_KEYWORDS):
        specialists.append(AgentRole.REMEDIATION)
    return {**inputs, "specialists": specialists}

triage_routing_chain = RunnableLambda(_route)
```

This is pure LangChain — no cloud dependency. The chain analyses the query and returns which specialists should handle it.

## Step 3: Create specialist agents with dual-mode execution

Each specialist agent extends a base class. In local mode, it uses LangChain tools.
In Foundry mode, it delegates to the Microsoft Agent Framework:

```python
from abc import ABC, abstractmethod
from pathlib import Path

class BaseSpecialistAgent(ABC):
    role: AgentRole
    prompt_file: str

    def __init__(self):
        prompt_path = Path(__file__).parent.parent / "prompts" / self.prompt_file
        self.system_prompt = prompt_path.read_text(encoding="utf-8")

    async def run(self, query, shared_context, correlation_id, client=None):
        if client is not None:
            return await self._run_on_foundry(query, shared_context, correlation_id, client)
        return await self._run_locally(query, shared_context, correlation_id)

    async def _run_on_foundry(self, query, shared_context, correlation_id, client):
        """Use Microsoft Agent Framework for cloud-hosted execution."""
        from azure.ai.agents.models import BingGroundingTool

        agent = await client.agents.create_agent(
            model=shared_context.get("model_deployment", "gpt-4o"),
            name=f"{self.role.value}-{correlation_id}",
            instructions=self.system_prompt,
            tools=self._get_foundry_tools(shared_context),
        )
        thread = await client.agents.threads.create()
        await client.agents.messages.create(
            thread_id=thread.id,
            role="user",
            content=self._build_prompt(query, shared_context),
        )
        run = await client.agents.runs.create_and_process(
            thread_id=thread.id,
            agent_id=agent.id,
        )
        # Extract and return the agent's response...

    async def _run_locally(self, query, shared_context, correlation_id):
        """Use LangChain tools for local heuristic analysis."""
        # Each subclass implements this with its specific tools
        ...
```

The key pattern here: same interface, different backends. Your coordinator does not care whether a specialist ran locally or on Foundry.

## Step 4: Wire it up with FastAPI

Expose the multi-agent pipeline through a FastAPI endpoint.
The `/triage` endpoint accepts incident descriptions and returns structured reports:

```python
from fastapi import FastAPI

from agents.coordinator import Coordinator
from models import TriageRequest

app = FastAPI(title="Incident Triage Copilot")
coordinator = Coordinator()

@app.post("/triage")
async def triage(request: TriageRequest):
    return await coordinator.triage(
        request=request,
        client=app.state.foundry_client,
        max_turns=10,
    )
```

The application also implements the `/responses` endpoint, which follows the OpenAI Responses API protocol. This is what Microsoft Foundry Hosted Agents expects when routing traffic to your container.

## Step 5: Deploy as a Hosted Agent

This is where Microsoft Foundry Hosted Agents shines. Your multi-agent system becomes a managed, auto-scaling service with a single command:

```shell
# Install the azd AI agent extension
azd extension install azure.ai.agents

# Provision infrastructure and deploy
azd up
```

The Azure Developer CLI (`azd`) provisions everything:

- Azure Container Registry for your Docker image
- Container App with health probes and auto-scaling
- User-Assigned Managed Identity for secure authentication
- Microsoft Foundry Hub and Project with model deployments
- Application Insights for distributed tracing

Your `agent.yaml` defines what tools the hosted agent has access to:

```yaml
name: incident-triage-copilot-langchain
kind: hosted
model:
  deployment: gpt-4o
identity:
  type: managed
tools:
  - type: bing_grounding
    enabled: true
  - type: code_interpreter
    enabled: true
```

## What you gain over pure LangChain

| Capability | LangChain Only | LangChain + Microsoft Agent Framework |
|---|---|---|
| Local development | Yes | Yes (identical experience) |
| Live web search | Requires custom integration | Built-in `BingGroundingTool` |
| Code execution | Requires sandboxing | Built-in `CodeInterpreterTool` |
| Managed hosting | DIY containers | Foundry Hosted Agents |
| Authentication | DIY | Managed Identity (zero secrets) |
| Observability | DIY | OpenTelemetry + Application Insights |
| One-command deploy | No | `azd up` |

## Testing locally

The dual-mode architecture means
you can test the full pipeline without any cloud resources:

```shell
# Create virtual environment and install dependencies
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# Run locally (agents use LangChain tools)
python -m src
```

Then open http://localhost:8080 in your browser to use the built-in web UI, or call the API directly:

```shell
curl -X POST http://localhost:8080/triage \
  -H "Content-Type: application/json" \
  -d '{"message": "Getting 503 errors on /api/orders since 2pm"}'
```

The response includes a coordinator summary, specialist results with confidence scores, and the tools each agent used.

## Running the tests

The project includes a comprehensive test suite covering routing logic, tool behaviour, agent execution, and HTTP endpoints. Tests run entirely in local mode, so no cloud credentials are needed.

## Key takeaways for LangChain developers

- **Keep your LangChain abstractions.** The `@tool` decorator, `RunnableLambda` chains, and composable pipelines all work exactly as you expect.
- **Add cloud capabilities incrementally.** Start local, then enable Bing Grounding, Code Interpreter, and managed hosting when you are ready.
- **Use the dual-mode pattern.** Every agent should work locally with LangChain tools and on Foundry with the Microsoft Agent Framework. This makes development fast and deployment seamless.
- **Let azd handle infrastructure.** One command provisions everything: containers, identity, monitoring, and model deployments.
- **Security comes free.** Managed Identity means no API keys in your code. Non-root containers, RBAC, and disabled ACR admin are all configured by default.
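The dual-mode takeaway boils down to one small dispatch decision. Here is a minimal, runnable sketch of that shape; the class, the `ask` method on the hypothetical client, and the local fallback string are all illustrative, not the sample's actual code:

```python
import asyncio

# Minimal sketch of the dual-mode pattern: one run() interface, backed
# either by a cloud client (when provided) or by local logic. Names are
# invented for illustration.

class SpecialistAgent:
    async def run(self, query, client=None):
        if client is not None:
            return await self._run_on_foundry(query, client)
        return await self._run_locally(query)

    async def _run_on_foundry(self, query, client):
        # Hosted path: the client object is assumed to expose an ask() call.
        return await client.ask(query)

    async def _run_locally(self, query):
        # Local fallback path: stand-in for a LangChain tool invocation.
        return f"local-heuristic:{query}"

# With no client supplied, the agent transparently runs locally:
result = asyncio.run(SpecialistAgent().run("503 errors on /api/orders"))
print(result)  # -> local-heuristic:503 errors on /api/orders
```

Because callers only ever see `run()`, swapping the backend from local heuristics to hosted execution requires no changes to the coordinator.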
## Get started

Clone the sample repository and try it yourself:

```shell
git clone https://github.com/leestott/hosted-agents-langchain-samples
cd hosted-agents-langchain-samples
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
python -m src
```

Open http://localhost:8080 to interact with the copilot through the web UI. When you are ready for production, run `azd up` and your multi-agent system is live on Microsoft Foundry.

## Resources

- Microsoft Agent Framework for Python documentation
- Microsoft Foundry Hosted Agents
- Azure Developer CLI (azd)
- LangChain documentation
- Microsoft Foundry documentation

# Integrating Microsoft Foundry with OpenClaw: Step by Step Model Configuration
## Step 1: Deploying Models on Microsoft Foundry

Let us kick things off in the Azure portal. To get our OpenClaw agent thinking like a genius, we need to deploy our models in Microsoft Foundry. For this guide, we are going to focus on deploying `gpt-5.2-codex` on Microsoft Foundry with OpenClaw.

Navigate to your AI Hub, head over to the model catalog, choose the model you wish to use with OpenClaw and hit deploy. Once your deployment is successful, head to the endpoints section.

Important: Grab your Endpoint URL and your API Keys right now and save them in a secure note. We will need these exact values to connect OpenClaw in a few minutes.

## Step 2: Installing and Initializing OpenClaw

Next up, we need to get OpenClaw running on your machine. Open up your terminal and run the official installation script:

```shell
curl -fsSL https://openclaw.ai/install.sh | bash
```

The wizard will walk you through a few prompts. Here is exactly how to answer them to link up with our Azure setup:

- First Page (Model Selection): Choose "Skip for now".
- Second Page (Provider): Select `azure-openai-responses`.
- Model Selection: Select `gpt-5.2-codex`. For now, only the models listed (hosted on Microsoft Foundry) in the picture below are available to be used with OpenClaw.
- Follow the rest of the standard prompts to finish the initial setup.

## Step 3: Editing the OpenClaw Configuration File

Now for the fun part. We need to manually configure OpenClaw to talk to Microsoft Foundry. Open your configuration file located at `~/.openclaw/openclaw.json` in your favorite text editor.
Replace the contents of the models and agents sections with the following code block:

    {
      "models": {
        "providers": {
          "azure-openai-responses": {
            "baseUrl": "https://<YOUR_RESOURCE_NAME>.openai.azure.com/openai/v1",
            "apiKey": "<YOUR_AZURE_OPENAI_API_KEY>",
            "api": "openai-responses",
            "authHeader": false,
            "headers": { "api-key": "<YOUR_AZURE_OPENAI_API_KEY>" },
            "models": [
              {
                "id": "gpt-5.2-codex",
                "name": "GPT-5.2-Codex (Azure)",
                "reasoning": true,
                "input": ["text", "image"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 400000,
                "maxTokens": 16384,
                "compat": { "supportsStore": false }
              },
              {
                "id": "gpt-5.2",
                "name": "GPT-5.2 (Azure)",
                "reasoning": false,
                "input": ["text", "image"],
                "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
                "contextWindow": 272000,
                "maxTokens": 16384,
                "compat": { "supportsStore": false }
              }
            ]
          }
        }
      },
      "agents": {
        "defaults": {
          "model": { "primary": "azure-openai-responses/gpt-5.2-codex" },
          "models": { "azure-openai-responses/gpt-5.2-codex": {} },
          "workspace": "/home/<USERNAME>/.openclaw/workspace",
          "compaction": { "mode": "safeguard" },
          "maxConcurrent": 4,
          "subagents": { "maxConcurrent": 8 }
        }
      }
    }

You will notice a few placeholders in that JSON. Here is exactly what you need to swap out:

- <YOUR_RESOURCE_NAME>: the unique name of your Azure OpenAI resource. Found in your Azure Portal under the Azure OpenAI resource overview.
- <YOUR_AZURE_OPENAI_API_KEY>: the secret key required to authenticate your requests. Found in Microsoft Foundry under your project endpoints, or in the Azure Portal keys section.
- <USERNAME>: your local computer's user profile name. Open your terminal and type whoami to find it.

Step 4: Restart the Gateway

After saving the configuration file, you must restart the OpenClaw gateway for the new Foundry settings to take effect.
Run this simple command:

    openclaw gateway restart

Configuration Notes & Deep Dive

If you are curious about why we configured the JSON that way, here is a quick breakdown of the technical details.

Authentication Differences

Azure OpenAI uses the api-key HTTP header for authentication. This is entirely different from the standard OpenAI Authorization: Bearer header. Our configuration file addresses this in two ways:

- Setting "authHeader": false completely disables the default Bearer header.
- Adding "headers": { "api-key": "<key>" } forces OpenClaw to send the API key via Azure's native header format.

Important Note: Your API key must appear in both the apiKey field AND the headers.api-key field within the JSON for this to work correctly.

The Base URL

Azure OpenAI's v1-compatible endpoint follows this specific format:

    https://<your_resource_name>.openai.azure.com/openai/v1

The beautiful thing about this v1 endpoint is that it is largely compatible with the standard OpenAI API and does not require you to manually pass an api-version query parameter.

Model Compatibility Settings

- "compat": { "supportsStore": false } disables the store parameter, since Azure OpenAI does not currently support it.
- "reasoning": true enables the thinking mode for GPT-5.2-Codex. This supports low, medium, high, and xhigh levels.
- "reasoning": false is set for GPT-5.2 because it is a standard, non-reasoning model.

Model Specifications & Cost Tracking

If you want OpenClaw to accurately track your token usage costs, you can update the cost fields from 0 to the current Azure pricing.
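To see how the configured values come together at request time, here is a minimal Python sketch. The endpoint construction and header shape mirror the settings described above; the pricing numbers are taken from the cost table in the next section and are assumptions you should check against current Azure pricing before relying on them:

```python
# Sketch only: how the configured values map onto a raw request, plus the
# arithmetic behind cost tracking. Prices are per 1M tokens and may change.

def azure_base_url(resource_name: str) -> str:
    # The v1-compatible endpoint; no api-version query parameter needed.
    return f"https://{resource_name}.openai.azure.com/openai/v1"

def azure_headers(api_key: str) -> dict:
    # Azure authenticates with the api-key header, not Authorization: Bearer.
    return {"api-key": api_key, "Content-Type": "application/json"}

PRICES = {
    "gpt-5.2-codex": {"input": 1.75, "output": 14.00, "cacheRead": 0.175},
    "gpt-5.2": {"input": 2.00, "output": 8.00, "cacheRead": 0.50},
}

def request_cost(model: str, input_tokens: int, output_tokens: int,
                 cached_tokens: int = 0) -> float:
    """Dollar cost of one request; cached input is billed at the cheaper rate."""
    p = PRICES[model]
    uncached = input_tokens - cached_tokens
    return (uncached * p["input"]
            + cached_tokens * p["cacheRead"]
            + output_tokens * p["output"]) / 1_000_000
```

For example, a gpt-5.2-codex call with one million input tokens and no output would cost $1.75, or $0.175 if every input token were a cache hit.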
Here are the specs and costs for the models we just deployed:

Model Specifications

- gpt-5.2-codex: 400,000-token context window, 16,384 max output tokens, image input supported, reasoning supported
- gpt-5.2: 272,000-token context window, 16,384 max output tokens, image input supported, no reasoning

Current Cost (Adjust in JSON)

- gpt-5.2-codex: $1.75 input / $14.00 output / $0.175 cached input (per 1M tokens)
- gpt-5.2: $2.00 input / $8.00 output / $0.50 cached input (per 1M tokens)

Conclusion

And there you have it! You have successfully bridged the gap between the enterprise-grade infrastructure of Microsoft Foundry and the local autonomy of OpenClaw. By following these steps, you are not just running a chatbot; you are running a sophisticated agent capable of reasoning, coding, and executing tasks with the full power of GPT-5.2-Codex behind it.

The combination of Azure's reliability and OpenClaw's flexibility opens up a world of possibilities. Whether you are building an automated DevOps assistant, a research agent, or just exploring the bleeding edge of AI, you now have a robust foundation to build upon.

Now it is time to let your agent loose on some real tasks. Go forth, experiment with different system prompts, and see what you can build. If you run into any interesting edge cases or come up with a unique configuration, let me know in the comments below. Happy coding!

Level up your Python + AI skills with our complete series
We've just wrapped up our live series on Python + AI, a comprehensive nine-part journey diving deep into how to use generative AI models from Python. The series introduced multiple types of models, including LLMs, embedding models, and vision models. We dug into popular techniques like RAG, tool calling, and structured outputs. We assessed AI quality and safety using automated evaluations and red-teaming. Finally, we developed AI agents using popular Python agent frameworks and explored the new Model Context Protocol (MCP).

To help you apply what you've learned, all of our code examples work with GitHub Models, a service that provides free models to every GitHub account holder for experimentation and education. Even if you missed the live series, you can still access all the material using the links below! If you're an instructor, feel free to use the slides and code examples in your own classes. If you're a Spanish speaker, check out the Spanish version of the series.

Python + AI: Large Language Models

📺 Watch recording

In this session, we explore Large Language Models (LLMs), the models that power ChatGPT and GitHub Copilot. We use Python to interact with LLMs using popular packages like the OpenAI SDK and LangChain. We experiment with prompt engineering and few-shot examples to improve outputs. We also demonstrate how to build a full-stack app powered by LLMs and explain the importance of concurrency and streaming for user-facing AI apps.

- Slides for this session
- Code repository with examples: python-openai-demos

Python + AI: Vector embeddings

📺 Watch recording

In our second session, we dive into a different type of model: the vector embedding model. A vector embedding is a way to encode text or images as an array of floating-point numbers. Vector embeddings enable similarity search across many types of content. In this session, we explore different vector embedding models, such as the OpenAI text-embedding-3 series, through both visualizations and Python code.
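Similarity search rests on a distance metric between those floating-point arrays. Cosine similarity, one of the metrics discussed in the session, takes only a few lines of standard-library Python (a sketch for illustration, not the session's actual code):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Real embedding vectors have hundreds or thousands of dimensions, but the formula is identical; ranking documents by this score against a query embedding is the heart of vector search.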
We compare distance metrics, use quantization to reduce vector size, and experiment with multimodal embedding models.

- Slides for this session
- Code repository with examples: vector-embedding-demos

Python + AI: Retrieval Augmented Generation

📺 Watch recording

In our third session, we explore one of the most popular techniques used with LLMs: Retrieval Augmented Generation. RAG is an approach that provides context to the LLM, enabling it to deliver well-grounded answers for a particular domain. The RAG approach works with many types of data sources, including CSVs, webpages, documents, and databases. In this session, we walk through RAG flows in Python, starting with a simple flow and culminating in a full-stack RAG application based on Azure AI Search.

- Slides for this session
- Code repository with examples: python-openai-demos

Python + AI: Vision models

📺 Watch recording

Our fourth session is all about vision models! Vision models are LLMs that can accept both text and images, such as GPT-4o and GPT-4o mini. You can use these models for image captioning, data extraction, question answering, classification, and more! We use Python to send images to vision models, build a basic chat-with-images app, and create a multimodal search engine.

- Slides for this session
- Code repository with examples: openai-chat-vision-quickstart

Python + AI: Structured outputs

📺 Watch recording

In our fifth session, we discover how to get LLMs to output structured responses that adhere to a schema. In Python, all you need to do is define a Pydantic BaseModel to get validated output that perfectly meets your needs. We focus on the structured outputs mode available in OpenAI models, but you can use similar techniques with other model providers. Our examples demonstrate the many ways you can use structured responses, such as entity extraction, classification, and agentic workflows.
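The session uses Pydantic for this, but the core idea of schema-validated model output can be sketched with only the standard library (the field names here are illustrative, not from the session's code):

```python
import json

# Illustrative schema: required field name -> expected Python type.
SCHEMA = {"name": str, "year": int}

def parse_structured(model_output: str, schema: dict = SCHEMA) -> dict:
    """Parse a model's JSON reply and enforce a simple field/type schema."""
    data = json.loads(model_output)
    for field, expected_type in schema.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], expected_type):
            raise ValueError(f"wrong type for field: {field}")
    return data
```

Pydantic adds type coercion, nested models, and rich error reporting on top of this basic check, and the structured outputs mode makes the model itself guarantee schema-conforming JSON.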
- Slides for this session
- Code repository with examples: python-openai-demos

Python + AI: Quality and safety

📺 Watch recording

This session covers a crucial topic: how to use AI safely and how to evaluate the quality of AI outputs. There are multiple mitigation layers when working with LLMs: the model itself, a safety system on top, the prompting and context, and the application user experience. We focus on Azure tools that make it easier to deploy safe AI systems into production. We demonstrate how to configure the Azure AI Content Safety system when working with Azure AI models and how to handle errors in Python code. Then we use the Azure AI Evaluation SDK to evaluate the safety and quality of output from your LLM.

- Slides for this session
- Code repository with examples: ai-quality-safety-demos

Python + AI: Tool calling

📺 Watch recording

In our seventh session, we focus on the technologies needed to build AI agents, starting with the foundation: tool calling (also known as function calling). We define tool call specifications using both JSON schema and Python function definitions, then send these definitions to the LLM. We demonstrate how to properly handle tool call responses from LLMs, enable parallel tool calling, and iterate over multiple tool calls. Understanding tool calling is absolutely essential before diving into agents, so don't skip over this foundational session.

- Slides for this session
- Code repository with examples: python-openai-demos

Python + AI: Agents

📺 Watch recording

In the penultimate session, we build AI agents! We use Python AI agent frameworks such as the new agent-framework from Microsoft and the popular LangGraph framework. Our agents start simple and then increase in complexity, demonstrating different architectures such as multiple tools, supervisor patterns, graphs, and human-in-the-loop workflows.
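Underneath all of those architectures sits the same tool-dispatch loop introduced in the tool-calling session. Stripped of any framework, it looks roughly like this (the tool names and call format are illustrative, not a specific framework's API):

```python
# Illustrative tools; a real agent registers these with the LLM via
# JSON-schema tool specifications and loops until the model stops calling tools.
TOOLS = {
    "add": lambda a, b: a + b,
    "shout": lambda text: text.upper(),
}

def dispatch(tool_call: dict):
    """Route one {'name': ..., 'arguments': {...}} call to its Python function."""
    return TOOLS[tool_call["name"]](**tool_call["arguments"])

def run_tool_calls(tool_calls: list[dict]) -> list:
    # Parallel tool calling means the model may emit several calls at once;
    # each result is fed back to the model on the next iteration.
    return [dispatch(call) for call in tool_calls]
```

Frameworks like agent-framework and LangGraph wrap this loop with state management, routing between agents, and human-in-the-loop checkpoints.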
- Slides for this session
- Code repository with examples: python-ai-agent-frameworks-demos

Python + AI: Model Context Protocol

📺 Watch recording

In the final session, we dive into the hottest technology of 2025: MCP (Model Context Protocol). This open protocol makes it easy to extend AI agents and chatbots with custom functionality, making them more powerful and flexible. We demonstrate how to use the Python FastMCP SDK to build an MCP server running locally and consume that server from chatbots like GitHub Copilot. Then we build our own MCP client to consume the server. Finally, we discover how easy it is to connect AI agent frameworks like LangGraph and Microsoft agent-framework to MCP servers. With great power comes great responsibility, so we briefly discuss the security risks that come with MCP, both as a user and as a developer.

- Slides for this session
- Code repository with examples: python-mcp-demo

How We Built an AI Operations Agent Using MCP Servers and Dynamic Tool Routing
Modern operations teams are turning to AI Agents to resolve shipping delays faster and more accurately. In this article, we build a “two‑brain” AI Agent on Azure—one MCP server that reads policies from Blob Storage and another that updates order data in Azure SQL—to automate decisions like hazardous‑material handling and delivery prioritization. You’ll see how these coordinated capabilities transform a simple user query into a fully automated operational workflow.

From SOP Overload to Simple Answers: Building Q&A Agents With SharePoint Online + Agent Builder
Government teams run on Standard Operating Procedures, manuals, handbooks, review instructions, HR policies, and proposal workflows. They’re essential—and everywhere. But during every Government Prompt‑a‑thon we've run this year, one theme kept repeating: "Our policies are many and finding the right answer quickly is nearly impossible." Turning SOPs into simple Q&A agents with M365 Copilot's Agent Builder or SharePoint Agents is possibly one of the fastest wins for public‑sector teams.

ESPC25 – Dublin | Microsoft event guide
ESPC25 | Come curious. Leave inspired. European SharePoint Conference 2025 | Dublin, Ireland | December 1–4, 2025 | SharePointEurope.com

Your guide to ESPC25

Dublin, Ireland, is one of the most dynamic and innovative tech cities in Europe, making it the perfect place for in-person, in-depth learning at the European SharePoint Conference—now simply branded as ESPC25—December 1–4, 2025. As the celebrated Irish writer Edna O’Brien once said, “Hospitality is the cornerstone of the Irish heart.” This spirit of warmth and welcome is sure to be felt throughout your stay in Dublin during ESPC25.

You will have the opportunity to learn from experts, network with peers, and discover new technologies that can help you achieve your goals. The event and city have something for everyone: inspiring keynotes, in-depth sessions and workshops, community, art and culture, and many festive moments.

Explore powerful new AI capabilities for every role and function: diving into Microsoft 365, Microsoft 365 Copilot, Power Platform, and more, you’ll discover how Microsoft is transforming the way we work today—and get a firsthand look at the future of work itself. You’ll find content delivered by the world’s best Microsoft 365 and Copilot Studio experts, including many Microsoft leaders and employees from the product teams. In this pre-event guide, we list all the Microsoft-led sessions below so you can prepare for what awaits you, alongside community and MVP expert sessions.

There will be plenty of opportunities for you to learn, share, and engage: evening gatherings, the attendee party, and our Innovation Hub with Microsoft product demo stations, a Promptathon, Vibe Coding, a place to network with your peers, a dedicated community space, the Inspire Stage, and 1:1 meeting spaces are all within the Expo Hall alongside all the wonderful event sponsors.
The 101 on ESPC25

- What: ESPC25
- Where: The Convention Centre Dublin, Spencer Dock, North Wall Quay, Dublin 1 D01 T1W6, Ireland
- When: December 1–4, 2025 (keynotes, sessions, tutorials, workshops, and more)
- Presenters: 120+ sessions (35+ Microsoft-led), 115+ speakers (MVPs, RDs, Microsoft and community members); check out Microsoft-led keynotes, tutorials, and sessions
- Cost: From €695 + VAT (one day) to €1645 + VAT (four day); enjoy an additional €200 discount with our exclusive community discount code: ESPC25COMM
- Primary social: Join in and follow @ESPC_Community (Twitter) and ESPC (LinkedIn); use hashtag #ESPC25

About ESPC

ESPC25 offers you affordable, world-class learning and networking at your fingertips. Join in 120+ sessions covering: Microsoft 365 Copilot, Copilot Studio, AI and agents, SharePoint, Microsoft Viva, Microsoft Teams, Microsoft 365 adoption, governance, admin, intranets, and more.

Over the years, ESPC has visited some of the most incredible cities: from the charming streets of Copenhagen and the historic beauty of Berlin to the vibrant culture of Prague and the dynamic atmosphere of Amsterdam. Even during the COVID-19 pandemic, ESPC kept the excitement alive with engaging online events!

The European SharePoint Conference (ESPC), now known simply as ESPC, is one of the largest and most prominent events in Europe dedicated to Microsoft technologies, particularly Microsoft 365 and Azure. It began in 2011 in Berlin, Germany, as a platform for discussing SharePoint and related Microsoft technologies. Over the years, ESPC expanded its scope to include other tools in the Microsoft ecosystem, reflecting the industry's trend toward digital transformation and cloud adoption. ESPC has grown to become the largest independent European event for Microsoft technology users, serving as a crucial knowledge hub and networking space.
It continues to play a vital role in educating the tech community on best practices for using SharePoint, Microsoft 365, and Azure to enhance productivity, collaboration, and security in the modern workplace.

Review past ESPC events:

- 2025: Dublin, Ireland
- 2024: Stockholm, Sweden
- 2023: Amsterdam, Netherlands
- 2022: Copenhagen, Denmark
- 2021: Online (due to COVID-19)
- 2020: Online (due to COVID-19)
- 2019: Prague, Czech Republic
- 2018: Copenhagen, Denmark
- 2017: Dublin, Ireland
- 2016: Vienna, Austria
- 2015: Stockholm, Sweden

Review all Microsoft keynotes and sessions below. Start building your schedule today!

Microsoft keynotes (all times represented in local Dublin time)

- Opening keynote: The New Frontier of Work: Copilot, AI, and the Future of Microsoft 365 | Jeff Teper, President of Collaborative Apps and Platforms, Microsoft, with Miceile Barrett, Zach Rosenfield, Alex Weingard, and Michel Bouman | Tuesday, Dec. 2, 9:00–10:00 AM
- Day one PM keynote: From One to Many: The Future of AI is Collaborative | Jaime Teevan, Chief Scientist and Technical Fellow, Microsoft | Tuesday, Dec. 2, 2:00–3:00 PM
- Day three keynote: Beyond the Prompt: Building an AI-Resilient Career with Power Skills | Heather Cook, Principal Customer Experience PM, Microsoft; Karuana Gatimu, Director, Customer Advocacy, Microsoft 365, AI and Agents | Thursday, Dec. 4, 9:00–10:00 AM

Community keynotes

- Will AI Become Intelligent? | Rafal Lukawiecki, Data Scientist at Tecflix, Ireland | Wednesday, Dec. 3, 9:00–10:00 AM
- Hacker’s Perspective on New Risks: Revising the Cybersecurity Priorities for 2025 | Paula Januszkiewicz, Founder and CEO, CQURE Inc. and CQURE Academy | Tuesday, Dec. 2, 4:45–5:45 PM

Microsoft sessions (all times represented in local Dublin time)

Breakout sessions (60 min, in-person)

- Latest Innovations in Microsoft 365 Copilot Chat | Speakers: Connie Welsh, Bryan Wofford | Tuesday, December 2, 10:15 AM | Session Code T8
- What’s new in Copilot Studio | Speaker: Antonio Rodriques | Tuesday, December 2, 10:15 AM | Session Code T9
- SharePoint as the Intelligence Backbone of Copilot | Speaker: Zach Rosenfield | Tuesday, December 2, 11:45 AM | Session Code T18
- IT Excellence in the AI Era: Managing Copilot and Agents for Impact and Control | Speaker: Ben Summers | Tuesday, December 2, 3:15 PM | Session Code T26
- AI in Action: Microsoft's Approach to Internal Communications | Speakers: Adam Barzel, Anna Pope, Darina Sexton | Tuesday, December 2, 3:15 PM | Session Code T27
- Women in Tech & Allies Panel | Speakers: Heather Cook, Danielle Moon, and Karuana Gatimu | Tuesday, December 2, 3:15 PM | Session Code T28
- Microsoft Teams: What's New and What's Next | Speaker: Alex Weingart | Wednesday, December 3, 10:15 AM | Session Code W8
- MCP (Model Context Protocol) in Action in Microsoft 365 Copilot | Speaker: Paolo Pialorsi | Wednesday, December 3, 10:15 AM | Session Code W9
- Microsoft Teams as Your AI-Driven Work Hub | Speaker: Kartik Datwani | Wednesday, December 3, 10:15 AM | Session Code W18
- Building Intelligent Content Apps for the Copilot Era | Speakers: Steve Pucelik, Shreyas Saravanan | Wednesday, December 3, 2:00 PM | Session Code W27
- OneDrive Updates: Smarter Storage, Seamless Sharing | Speakers: Miceile Barrett, Lincoln DeMaris, and Jason Moore | Wednesday, December 3, 2:00 PM | Session Code W28
- AI Intranet: Creating and Managing High Quality Content on the Intranet of Tomorrow | Speaker: Katelyn Helms | Wednesday, December 3, 2:00 PM | Session Code W30
- AI-Enhanced Planning: Innovations in Planner & Project Manager Agents | Speaker: Howard Crow | Wednesday, December 3, 3:15 PM | Session Code W38
- AI Risk Management: From Oversharing to Strategic Oversight | Speakers: Gabriel Tiberiu Damaschin, Nishan DeSilva | Wednesday, December 3, 4:45 PM | Session Code W45
- Streamlining Core Operations with AI Automation | Speaker: Antonio Rodriques | Wednesday, December 3, 4:45 PM | Session Code W44
- Understanding Copilot Extensibility: A Practical Guide | Speakers: Vesa Juvonen and Paolo Pialorsi | Thursday, December 4, 10:15 AM | Session Code TH8
- Agents Among Us – Governance for the Agentic Era | Speaker: Sesha Mani | Thursday, December 4, 10:15 AM | Session Code TH9
- Creating AI Agents for People and Platforms | Speaker: Nandakishor Basavanthappa | Thursday, December 4, 11:45 AM | Session Code TH14
- Windows AI: Powering Progress Across the Globe | Speaker: Vikas Malekar | Thursday, December 4, 11:45 AM | Session Code TH15
- Accelerating Copilot Adoption Across Your Organization | Speakers: Karuana Gatimu, Bryan Wofford | Thursday, December 4, 11:45 AM | Session Code TH16
- True Stories of Copilot in Action | Speakers: Connie Welsh, Danielle Moon | Thursday, December 4, 2:00 PM | Session Code TH28
- Fireside Insights: Driving Value and Business Growth with AI | Speaker: TBA | Thursday, December 4, 3:15 PM | Session Code TH37

Lightning talks (20 minutes, in-person) – Innovation Hub

- Supercharge Your Projects with Microsoft 365 Community Tools | Speaker: Vesa Juvonen | Tuesday, December 2, 10:15 AM
- Building Copilot with Community-Led Engagement | Speaker: Anna Pope | Tuesday, December 2, 10:40 AM
- Managing Your Brand for Career Management | Speaker: Karuana Gatimu (MCAG) | Tuesday, December 2, 11:45 AM
- Empowering Community Builders: MGCI & Communitydays.org | Speaker: Heather Cook | Tuesday, December 2, 12:05 PM
- Your Day, Your Way: Personalized AI with OneDrive | Speaker: Miceile Barrett | Tuesday, December 2, 12:30 PM
- Security First: Powering AI Transformation | Speaker: Sesha Mani | Tuesday, December 2, 4:45 PM
- Unlocking SharePoint Advanced Management for Copilot Success | Speaker: Michael Holste | Tuesday, December 2, 5:10 PM
- Fast-Tracking Value with the Copilot Success Kit & Scenario Library | Speaker: Bryan Wofford | Wednesday, December 3, 10:15 AM
- How to Find Learning Opportunities Everywhere | Speaker: Adam Harmetz | Wednesday, December 3, 10:40 AM
- Copilot Readiness & Resiliency with Microsoft 365 Backup & Archive | Speakers: Kaustubh Chaudhary, Michael Holste | Wednesday, December 3, 12:05 PM
- Unlock SharePoint Embedded Integration with Power Platform Connectors | Speaker: Steve Pucelik | Thursday, December 4, 10:15 AM
- Microsoft Teams: Day-to-Day AI | Speaker: Alex Weingart | Thursday, December 4, 10:40 AM
- Driving Business Value with Copilot Analytics and ROI Measurement | Speaker: Manas Kumar Biswas | Thursday, December 4, 11:45 AM
- Understanding Copilot APIs | Speaker: Paolo Pialorsi | Thursday, December 4, 12:05 PM
- Migration in Motion: Simplify, Secure, and Scale with Microsoft 365 | Speaker: Manas Vishal Lodha | Thursday, December 4, 12:30 PM

Inspire Track (60 minutes, in-person) – Inspire Stage

- Good Thing We Like Our Copilots CANCELLED: How Your Job Will Evolve in 2026 into AGENTBOSS! | Speaker: Dona Sarkar | Tuesday, December 2, 10:15 AM | Session Code: Inspire 2
- How To Thrive in Your Tech Career: Managing Burnout and Boredom | Speaker: Adam Harmetz | Wednesday, December 3, 11:45 AM | Session Code: Inspire 6
- Managing Your Brand for Career Management | Speaker: Karuana Gatimu | Thursday, December 4, 11:45 AM | Session Code: Inspire 10
- Cultivating Trust and Leadership Excellence through Mentorship and Leadership | Speaker: Heather Cook | Thursday, December 4, 2:00 PM | Session Code: Inspire 11

Innovation Hub and Lightning Talk Theater

The Microsoft Innovation Hub at ESPC is a vibrant, multi-zone environment crafted to ignite new ideas, showcase cutting-edge prototypes, and shape the future of Microsoft 365 and Microsoft 365 Copilot. Here, creativity and technology intersect, providing a dedicated space for developers, architects, consultants, and strategists to explore the forefront of innovation.
The Innovation Hub is designed to foster collaboration, highlight emerging technologies, and collect real-time insights from the wider Microsoft 365 community. You can meet with Microsoft product team members and Microsoft MVPs! You’ll find the Microsoft Innovation Hub in the ESPC Expo Tuesday through Thursday.

Each day on the Innovation Stage in Zone 1 you will find Lightning Talks for live demonstrations and peer learning in a compact format. And don't miss the SharePoint 25th Birthday Celebration and the Innovation Awards, Tuesday evening at 5:45 PM.

Within Zone 2, you will find Future Tech Demos daily during breaks and lunches. The Microsoft team and MVPs will be on hand to walk you through dynamic demos: Agents/Copilot Studio, Developer, Microsoft 365 Copilot, and SharePoint and OneDrive.

Get connected in the Zone 3 Feedback Loop Lounge and leave your thoughts on our survey for a chance to win daily prizes. And finally, meet in the Zone 4 Ideation Lounge each afternoon for guided workshops, including Promptathon and Vibe Coding activities, in an environment specifically designed to encourage creativity and collaborative thinking.

This event guide will continue to update as we finalize more of our activations. Stay tuned!

Shout out to event leads and community members Sarah McNamara, Kevin Monahan, Ella Murphy, Bridgette Robertson, John, and Sarah G, and the #ESPC25 team for putting together ESPC25, corralling all speakers and content, and supporting and promoting the knowledge and expertise of the world-class Microsoft 365 tech community around the world.

Cheers, Heather

Last, a glimpse of the ESPC event experience:

Building an Agentic, AI-Powered Helpdesk with Agents Framework, Azure, and Microsoft 365
The article describes how to build an agentic, AI-powered helpdesk using Azure, Microsoft 365, and the Microsoft Agent Framework. The goal is to automate ticket handling, enrich requests with AI, and integrate seamlessly with M365 tools like Teams, Planner, and Power Automate.