AI Didn’t Break Your Production — Your Architecture Did
Most AI systems don’t fail in the lab. They fail the moment production touches them. I’m Hazem Ali — Microsoft AI MVP, Principal AI & ML Engineer / Architect, and Founder & CEO of Skytells. With a strong foundation in AI and deep learning from low-level fundamentals to production-scale, backed by rigorous cybersecurity and software engineering expertise, I design and deliver enterprise AI systems end-to-end. I often speak about what happens after the pilot goes live: real users arrive, data drifts, security constraints tighten, and incidents force your architecture to prove it can survive. My focus is building production AI with a security-first mindset: identity boundaries, enforceable governance, incident-ready operations, and reliability at scale. My mission is simple: Architect and engineer secure AI systems that operate safely, predictably, and at scale in production. And here’s the hard truth: AI initiatives rarely fail because the model is weak. They fail because the surrounding architecture was never engineered for production reality. - Hazem Ali You see this clearly when teams bolt AI onto an existing platform. In Azure-based environments, the foundation can be solid—identity, networking, governance, logging, policy enforcement, and scale primitives. But that doesn’t make the AI layer production-grade by default. It becomes production-grade only when the AI runtime is engineered like a first-class subsystem with explicit boundaries, control points, and designed failure behavior. A quick moment from the field I still remember one rollout that looked perfect on paper. Latency was fine. Error rate was low. Dashboards were green. Everyone was relaxed. Then a single workflow started creating the wrong tickets, not failing or crashing. It was confidently doing the wrong thing at scale. It took hours before anyone noticed, because nothing was broken in the traditional sense. When we finally traced it, the model was not the root cause. The system had no real gates, no replayable trail, and tool execution was too permissive. The architecture made it easy for a small mistake to become a widespread mess. That is the gap I’m talking about in this article. Production Failure Taxonomy This is the part most teams skip because it is not exciting, and it is not easy to measure in a demo. When AI fails in production, the postmortem rarely says the model was bad. It almost always points to missing boundaries, over-privileged execution, or decisions nobody can trace. So if your AI can take actions, you are no longer shipping a chat feature. You are operating a runtime that can change state across real systems, that means reliability is not just uptime. It is the ability to limit blast radius, reproduce decisions, and stop or degrade safely when uncertainty or risk spikes. You can usually tell early whether an AI initiative will survive production. Not because the model is weak, but because the failure mode is already baked into the architecture. Here are the ones I see most often. 1. Healthy systems that are confidently wrong Uptime looks perfect. Latency is fine. And the output is wrong. This is dangerous because nothing alerts until real damage shows up. 2. The agent ends up with more authority than the user The user asks a question. The agent has tools and credentials. Now it can do things the user never should have been able to do in that moment. 3. Each action is allowed, but the chain is not Read data, create ticket, send message. All approved individually. 
Put together, it becomes a capability nobody reviewed. 4. Retrieval becomes the attack path Most teams worry about prompt injection. Fair. But a poisoned or stale retrieval layer can be worse, because it feeds the model the wrong truth. 5. Tool calls turn mistakes into incidents The moment AI can change state—config, permissions, emails, payments, or data—a mistake is no longer a bad answer. It is an incident. 6. Retries duplicate side effects Timeouts happen. Retries happen. If your tool calls are not safe to repeat, you will create duplicate tickets, refunds, emails, or deletes. Next, let’s talk about what changes when you inject probabilistic behavior into a deterministic platform. In the Field: Building and Sharing Real-World AI In December 2025, I had the chance to speak and engage with builders across multiple AI and technology events, sharing what I consider the most valuable part of the journey: the engineering details that show up when AI meets production reality. This photo captures one of those moments: real conversations with engineers, architects, and decision-makers about what it truly takes to ship production-grade AI. During my session, Designing Scalable and Secure Architecture at the Enterprise Scale I walked through the ideas in this article live on stage then went deeper into the engineering reality behind them: from zero-trust boundaries and runtime policy enforcement to observability, traceability, and safe failure design, The goal wasn’t to talk about “AI capability,” but to show how to build AI systems that operate safely and predictably at scale in production. Deterministic platforms, probabilistic behavior Most production platforms are built for deterministic behavior: defined contracts, predictable services, stable outputs. AI changes the physics. You introduce probabilistic behavior into deterministic pipelines and your failure modes multiply. An AI system can be confidently wrong while still looking “healthy” through basic uptime dashboards. That’s why reliability in production AI is rarely about “better prompts” or “higher model accuracy.” It’s about engineering the right control points: identity boundaries, governance enforcement, behavioral observability, and safe degradation. In other words: the model is only one component. The system is the product. Production AI Control Plane Here’s the thing. Once you inject probabilistic behavior into a deterministic platform, you need more than prompts and endpoints. You need a control plane. Not a fancy framework. Just a clear place in the runtime where decisions get bounded, actions get authorized, and behavior becomes explainable when something goes wrong. This is the simplest shape I have seen work in real enterprise systems. The control plane components Orchestrator Owns the workflow. Decides what happens next, and when the system should stop. Retrieval Brings in context, but only from sources you trust and can explain later. Prompt assembly Builds the final input to the model, including constraints, policy signals, and tool schemas. Model call Generates the plan or the response. It should never be trusted to execute directly. Policy Enforcement Point The gate before any high impact step. It answers: is this allowed, under these conditions, with these constraints. Tool Gateway The firewall for actions. Scopes every operation, validates inputs, rate-limits, and blocks unsafe calls. Audit log and trace store A replayable chain for every request. If you cannot replay it, you cannot debug it. 
Risk engine Detects prompt injection signals, anomalous sessions, uncertainty spikes, and switches the runtime into safer modes. Approval flow For the few actions that should never be automatic. It is the line between assistance and damage. If you take one idea from this section, let it be this. The model is not where you enforce safety. Safety lives in the control plane. Next, let’s talk about the most common mistake teams make right after they build the happy-path pipeline. Treating AI like a feature. The common architectural trap: treating AI like a feature Many teams ship AI like a feature: prompt → model → response. That structure demos well. In production, it collapses the moment AI output influences anything stateful tickets, approvals, customer messaging, remediation actions, or security decisions. At that point, you’re not “adding AI.” You’re operating a semi-autonomous runtime. The engineering questions become non-negotiable: Can we explain why the system responded this way? Can we bound what it’s allowed to do? Can we contain impact when it’s wrong? Can we recover without human panic? If those answers aren’t designed into the architecture, production becomes a roulette wheel. Governance is not a document It’s a runtime enforcement capability Most governance programs fail because they’re implemented as late-stage checklists. In production, governance must live inside the execution path as an enforceable mechanism, A Policy Enforcement Point (PEP) that evaluates every high-impact step before it happens. At the moment of execution, your runtime must answer a strict chain of authorization questions: 1. What tools is this agent attempting to call? Every tool invocation is a privilege boundary. Your runtime must identify the tool, the operation, and the intended side effect (read vs write, safe vs state-changing). 2. Does the tool have the right permissions to run for this agent? Even before user context, the tool itself must be runnable by the agent’s workload identity (service principal / managed identity / workload credentials). If the agent identity can’t execute the tool, the call is denied period. 3. If the tool can run, is the agent permitted to use it for this user? This is the missing piece in most systems: delegation. The agent might be able to run the tool in general, but not on behalf of this user, in this tenant, in this environment, for this task category. This is where you enforce: user role / entitlement tenant boundaries environment (prod vs staging) session risk level (normal vs suspicious) 4. If yes, which tasks/operations are permitted? Tools are too broad. Permissions must be operation-scoped. Not “Jira tool allowed.” But “Jira: create ticket only, no delete, no project-admin actions.” Not “Database tool allowed.” But “DB: read-only, specific schema, specific columns, row-level filters.” This is ABAC/RBAC + capability-based execution. 5. What data scope is allowed? Even a permitted tool operation must be constrained by data classification and scope: public vs internal vs confidential vs PII row/column filters time-bounded access purpose limitation (“only for incident triage”) If the system can’t express data scope at runtime, it can’t claim governance. 6. What operations require human approval? Some actions are inherently high risk: payments/refunds changing production configs emailing customers deleting data executing scripts The policy should return “REQUIRE_APPROVAL” with clear obligations (what must be reviewed, what evidence is required, who can approve). 7. 
What actions are forbidden under certain risk conditions? Risk-aware policy is the difference between governance and theater. Examples: If prompt injection signals are high → disable tool execution If session is anomalous → downgrade to read-only mode If data is PII + user not entitled → deny and redact If environment is prod + request is destructive → block regardless of model confidence The key engineering takeaway Governance works only when it’s enforceable, runtime-evaluated, and capability-scoped: Agent identity answers: “Can it run at all?” Delegation answers: “Can it run for this user?” Capabilities answer: “Which operations exactly?” Data scope answers: “How much and what kind of data?” Risk gates + approvals answer: “When must it stop or escalate?” If policy can’t be enforced at runtime, it isn’t governance. It’s optimism. Safe Execution Patterns Policy answers whether something is allowed. Safe execution answers what happens when things get messy. Because they will: models time out, retries happen, inputs are adversarial. People ask for the wrong thing. Agents misunderstand. And when tools can change state, small mistakes turn into real incidents. These patterns are what keep the system stable when the world is not. Two-phase execution Do not execute directly from a model output. First phase: propose a plan and a dry-run summary of what will change. Second phase: execute only after policy gates pass, and approval is collected if required. Idempotency for every write If a tool call can create, refund, email, delete, or deploy, it must be safe to retry. Every write gets an idempotency key, and the gateway rejects duplicates. This one change prevents a huge class of production pain. Default to read-only when risk rises When injection signals spike, when the session looks anomalous, when retrieval looks suspicious, the system should not keep acting. It should downgrade. Retrieve, explain, and ask. No tool execution. Scope permissions to operations, not tools Tools are too broad. Do not allow Jira. Allow create ticket in these projects, with these fields. Do not allow database access. Allow read-only on this schema, with row and column filters. Rate limits and blast radius caps Agents should have a hard ceiling. Max tool calls per request. Max writes per session. Max affected entities. If the cap is hit, stop and escalate. A kill switch that actually works You need a way to disable tool execution across the fleet in one move. When an incident happens, you do not want to redeploy code. You want to stop the bleeding. If you build these in early, you stop relying on luck. You make failure boring, contained, and recoverable. Think for scale in the era of AI for AI I want to zoom out for a second, because this is the shift most teams still design around. We are not just adding AI to a product. We are entering a phase where parts of the system can maintain and improve themselves. Not in a magical way. In a practical, engineering way. A self-improving system is one that can watch what is happening in production, spot a class of problems, propose changes, test them, and ship them safely, while leaving a clear trail behind it. It can improve code paths, adjust prompts, refine retrieval rules, update tests, and tighten policies. Over time, the system becomes less dependent on hero debugging at 2 a.m. What makes this real is the loop, not the model. Signals come in from logs, traces, incidents, drift metrics, and quality checks. The system turns those signals into a scoped plan.
Then it passes through gates: policy and permissions, safe scope, testing, and controlled rollout. If something looks wrong, it stops, downgrades to read-only, or asks for approval. This is why scale changes. In the old world, scale meant more users and more traffic. In the AI for AI world, scale also means more autonomy. One request can trigger many tool calls. One workflow can spawn sub-agents. One bad signal can cause retries and cascades. So the question is not only can your system handle load. The question is can your system handle multiplication without losing control. If you want self-improving behavior, you need three things to be true: The system is allowed to change only what it can prove is safe to change. Every change is testable and reversible. Every action is traceable, so you can replay why it happened. When those conditions exist, self-improvement becomes an advantage. When they do not, self-improvement becomes automated risk. And this leads straight into governance, because in this era governance is not a document. It is the gate that decides what the system is allowed to improve, and under which conditions. Observability: uptime isn’t enough — you need traceability and causality Traditional observability answers: Is the service up. Is it fast. Is it erroring. That is table stakes. Production AI needs a deeper truth: why did it do that. Because the system can look perfectly healthy while still making the wrong decision. Latency is fine. Error rate is fine. Dashboards are green. And the output is still harmful. To debug that kind of failure, you need causality you can replay and audit: Input → context retrieval → prompt assembly → model response → tool invocation → final outcome Without this chain, incident response becomes guesswork. People argue about prompts, blame the model, and ship small patches that do not address the real cause. Then the same issue comes back under a different prompt, a different document, or a slightly different user context. The practical goal is simple. Every high-impact action should have a story you can reconstruct later. What did the system see. What did it pull. What did it decide. What did it touch. And which policy allowed it. When you have that, you stop chasing symptoms. You can fix the actual failure point, and you can detect drift before users do. RAG Governance and Data Provenance Most teams treat retrieval as a quality feature. In production, retrieval is a security boundary. Because the moment a document enters the context window, it becomes part of the system’s brain for that request. If retrieval pulls the wrong thing, the model can behave perfectly and still lead you to a bad outcome. I learned this the hard way, I have seen systems where the model was not the problem at all. The problem was a single stale runbook that looked official, ranked high, and quietly took over the decision. Everything downstream was clean. The agent followed instructions, called the right tools, and still caused damage because the truth it was given was wrong. I keep repeating one line in reviews, and I mean it every time: Retrieval is where truth enters the system. If you do not control that, you are not governing anything. - Hazem Ali So what makes retrieval safe enough for enterprise use? Provenance on every chunk Every retrieved snippet needs a label you can defend later: source, owner, timestamp, and classification. If you cannot answer where it came from, you cannot trust it for actions. Staleness budgets Old truth is a real risk. 
A runbook from last quarter can be more dangerous than no runbook at all. If content is older than a threshold, the system should say it is old, and either confirm or downgrade to read-only. No silent reliance. Allowlisted sources per task Not all sources are valid for all jobs. Incident response might allow internal runbooks. Customer messaging might require approved templates only. Make this explicit. Retrieval should not behave like a free-for-all search engine. Scope and redaction before the model sees it Row and column limits, PII filtering, secret stripping, tenant boundaries. Do it before prompt assembly, not after the model has already seen the data. Citation requirement for high-impact steps If the system is about to take a high-impact action, it should be able to point to the sources that justified it. If it cannot, it should stop and ask. That one rule prevents a lot of confident nonsense. Monitor retrieval like a production dependency Track which sources are being used, which ones cause incidents, and where drift is coming from. Retrieval quality is not static. Content changes. Permissions change. Rankings shift. Behavior follows. When you treat retrieval as governance, the system stops absorbing random truth. It consumes controlled truth, with ownership, freshness, and scope. That is what production needs. Security: API keys aren’t a strategy when agents can act The highest-impact AI incidents are usually not model hacks. They are architectural failures: over-privileged identities, blurred trust boundaries, unbounded tool access, and unsafe retrieval paths. Once an agent can call tools that mutate state, treat it like a privileged service, not a chatbot. Least privilege by default Explicit authorization boundaries Auditable actions Containment-first design Clear separation between user intent and system authority This is how you prevent a prompt injection from turning into a system-level breach. If you want the deeper blueprint and the concrete patterns for securing agents in practice, I wrote a full breakdown here: Zero-Trust Agent Architecture: How to Actually Secure Your Agents What “production-ready AI” actually means Production-ready AI is not defined by a benchmark score. It’s defined by survivability under uncertainty. A production-grade AI system can: Explain itself with traceability. Enforce policy at runtime. Contain blast radius when wrong. Degrade safely under uncertainty. Recover with clear operational playbooks. If your system can’t answer “how does it fail?” you don’t have production AI yet.. You have a prototype with unmanaged risk. How Azure helps you engineer production-grade AI Azure doesn’t “solve” production-ready AI by itself, it gives you the primitives to engineer it correctly. The difference between a prototype and a survivable system is whether you translate those primitives into runtime control points: identity, policy enforcement, telemetry, and containment. 1. Identity-first execution (kill credential sprawl, shrink blast radius) A production AI runtime should not run on shared API keys or long-lived secrets. In Azure environments, the most important mindset shift is: every agent/workflow must have an identity and that identity must be scoped. Guidance Give each agent/orchestrator a dedicated identity (least privilege by default). Separate identities by environment (prod vs staging) and by capability (read vs write). 
Treat tool invocation as a privileged service call, never “just a function.” Why this matters If an agent is compromised (or tricked via prompt injection), identity boundaries decide whether it can read one table or take down a whole environment. 2. Policy as enforcement (move governance into the execution path) The core idea of this article, that governance is runtime enforcement, maps perfectly to Azure’s broader governance philosophy: policies must be enforceable, not advisory. Guidance Create an explicit Policy Enforcement Point (PEP) in your agent runtime. Make the PEP decision mandatory before executing any tool call or data access. Use “allow + obligations” patterns: allow only with constraints (redaction, read-only mode, rate limits, approval gates, extra logging). Why this matters Governance fails when it’s a document. It works when it’s compiled into runtime decisions. 3. Observability that explains behavior Azure’s telemetry stack is valuable because it’s designed for distributed systems: correlation, tracing, and unified logs. Production AI needs the same plus decision traceability. Guidance Emit a trace for every request across: retrieval → prompt assembly → model call → tool calls → outcome. Log policy decisions (allow/deny/require approval) with policy version + obligations applied. Capture “why” signals: risk score, classifier outputs, injection signals, uncertainty indicators. Why this matters When incidents happen, you don’t just debug latency — you debug behavior. Without causality, you can’t root-cause drift or containment failures. 4. Zero-trust boundaries for tools and data Azure environments tend to be strong at network segmentation and access control. That foundation is exactly what AI systems need because AI introduces adversarial inputs by default. Guidance Put a Tool Gateway in front of tools (Jira, email, payments, infra) and enforce scopes there. Restrict data access by classification (PII/secret zones) and enforce row/column constraints. Degrade safely: if risk is high, drop to read-only, disable tools, or require approval. Why this matters Prompt injection doesn’t become catastrophic when your system has hard boundaries and graceful failure modes. 5. Practical “production-ready” checklist (Azure-aligned, engineering-first) If you want a concrete way to apply this: Identity: every runtime has a scoped identity; no shared secrets PEP: every tool/data action is gated by policy, with obligations Traceability: full chain captured and correlated end-to-end Containment: safe degradation + approval gates for high-risk actions Auditability: policy versions and decision logs are immutable and replayable Environment separation: prod ≠ staging identities, tools, and permissions Outcome This is how you turn “we integrated AI” into “we operate AI safely at scale.” Operating Production AI A lot of teams build the architecture and still struggle, because production is not a diagram. It is a living system. So here is the operating model I look for when I want to trust an AI runtime in production. The few SLOs that actually matter Trace completeness For high-impact requests, can we reconstruct the full chain every time, without missing steps. Policy coverage What percentage of tool calls and sensitive reads pass through the policy gate, with a recorded decision. Action correctness Not model accuracy. Real-world correctness. Did the system take the right action, on the right target, with the right scope.
Time to contain When something goes wrong, how fast can we stop tool execution, downgrade to read-only, or isolate a capability. Drift detection time How quickly do we notice behavioral drift before users do. The runbooks you must have If you operate agents, you need simple playbooks for predictable bad days: Injection spike → safe mode, block tool execution, force approvals Retrieval poisoning suspicion → restrict sources, raise freshness requirements, require citations Retry storm → enforce idempotency, rate limits, and circuit breakers Tool gateway instability → fail closed for writes, degrade safely for reads Model outage → fall back to deterministic paths, templates, or human escalation Clear ownership Someone has to own the runtime, not just the prompts. Platform owns the gates, tool gateway, audit, and tracing Product owns workflows and user-facing behavior Security owns policy rules, high-risk approvals, and incident procedures When these pieces are real, production becomes manageable. When they are not, you rely on luck and hero debugging. The 60-second production readiness checklist If you want a fast sanity check, here it is. Every agent has an identity, scoped per environment No shared API keys for privileged actions Every tool call goes through a policy gate with a logged decision Permissions are scoped to operations, not whole tools Writes are idempotent, retries cannot duplicate side effects Tool gateway validates inputs, scopes data, and rate-limits actions There is a safe mode that disables tools under risk There is a kill switch that stops tool execution across the fleet Retrieval is allowlisted, provenance-tagged, and freshness-aware High-impact actions require citations or they stop and ask Audit logs are immutable enough to trust later Traces are replayable end-to-end for any incident If most of these are missing, you do not have production AI yet. You have a prototype with unmanaged risk. A quick note In Azure-based enterprises, you already have strong primitives that mirror the mindset production AI requires: identity-first access control (Microsoft Entra ID), secure workload authentication patterns (managed identities), and deep telemetry foundations (Azure Monitor / Application Insights). The key is translating that discipline into the AI runtime so governance, identity, and observability aren’t external add-ons, but part of how AI executes and acts. Closing Models will keep evolving. Tooling will keep improving. But enterprise AI success still comes down to systems engineering. If you’re building production AI today, what has been the hardest part in your environment: governance, observability, security boundaries, or operational reliability? If you’re dealing with deep technical challenges around production AI, agent security, RAG governance, or operational reliability, feel free to connect with me on LinkedIn. I’m open to technical discussions and architecture reviews. Thanks for reading. — Hazem Ali
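As a companion to the checklist above, here is a minimal, hypothetical Python sketch (not from the original post) of the two runtime mechanics the article keeps returning to: a policy gate evaluated before any tool call, and idempotency keys that make writes safe to retry. Every name in it is illustrative, not a real library or the author's code.

# Illustrative policy gate + idempotent tool gateway. All names are hypothetical.
import hashlib
from dataclasses import dataclass, field

@dataclass
class PolicyDecision:
    allow: bool
    obligations: list = field(default_factory=list)   # e.g. ["require_approval", "audit"]
    reason: str = ""

def evaluate_policy(agent_id: str, user_role: str, tool: str, operation: str,
                    environment: str, risk_score: float) -> PolicyDecision:
    # 1. Risk gate: high injection/anomaly risk disables state-changing calls entirely.
    if risk_score > 0.8 and operation != "read":
        return PolicyDecision(False, reason="risk too high for writes; downgrade to read-only")
    # 2. Capability scope: permissions are per operation, not per tool.
    allowed = {
        ("jira", "create_ticket"): {"staff", "admin"},
        ("database", "read"): {"staff", "admin", "student"},
    }
    roles = allowed.get((tool, operation))
    if roles is None or user_role not in roles:
        return PolicyDecision(False, reason=f"{tool}.{operation} not permitted for role {user_role}")
    # 3. High-impact operations in production require human approval.
    if environment == "prod" and operation in {"create_ticket"}:
        return PolicyDecision(True, obligations=["require_approval", "audit"], reason="high impact in prod")
    return PolicyDecision(True, obligations=["audit"], reason="within scope")

class ToolGateway:
    """Every write carries an idempotency key so retries cannot duplicate side effects."""
    def __init__(self):
        self._seen: set[str] = set()

    def execute_write(self, tool: str, operation: str, payload: dict, idempotency_key: str) -> dict:
        if idempotency_key in self._seen:
            return {"status": "duplicate_ignored", "key": idempotency_key}
        self._seen.add(idempotency_key)
        # ... call the real tool here ...
        return {"status": "executed", "tool": tool, "operation": operation}

# Usage: gate first, then execute through the gateway with a deterministic key.
decision = evaluate_policy("agent-tickets", "staff", "jira", "create_ticket", "prod", risk_score=0.2)
if decision.allow and "require_approval" not in decision.obligations:
    key = hashlib.sha256(b"req-123:jira:create_ticket").hexdigest()
    print(ToolGateway().execute_write("jira", "create_ticket", {"title": "Example"}, key))
else:
    print("Stopped or escalated:", decision.reason, decision.obligations)

The point is not the specific rules but the shape: the decision object carries obligations, the gateway refuses duplicate writes, and high-risk paths stop or escalate instead of executing.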
Unleashing the Power of Model Context Protocol (MCP): A Game-Changer in AI Integration
Artificial Intelligence is evolving rapidly, and one of the most pressing challenges is enabling AI models to interact effectively with external tools, data sources, and APIs. The Model Context Protocol (MCP) solves this problem by acting as a bridge between AI models and external services, creating a standardized communication framework that enhances tool integration, accessibility, and AI reasoning capabilities. What is Model Context Protocol (MCP)? MCP is a protocol designed to enable AI models, such as Azure OpenAI models, to interact seamlessly with external tools and services. Think of MCP as a universal USB-C connector for AI, allowing language models to fetch information, interact with APIs, and execute tasks beyond their built-in knowledge. Key Features of MCP Standardized Communication – MCP provides a structured way for AI models to interact with various tools. Tool Access & Expansion – AI assistants can now utilize external tools for real-time insights. Secure & Scalable – Enables safe and scalable integration with enterprise applications. Multi-Modal Integration – Supports STDIO, SSE (Server-Sent Events), and WebSocket communication methods. MCP Architecture & How It Works MCP follows a client-server architecture that allows AI models to interact with external tools efficiently. Here’s how it works: Components of MCP MCP Host – The AI model (e.g., Azure OpenAI GPT) requesting data or actions. MCP Client – An intermediary service that forwards the AI model's requests to MCP servers. MCP Server – Lightweight applications that expose specific capabilities (APIs, databases, files, etc.). Data Sources – Various backend systems, including local storage, cloud databases, and external APIs. Data Flow in MCP The AI model sends a request (e.g., "fetch user profile data"). The MCP client forwards the request to the appropriate MCP server. The MCP server retrieves the required data from a database or API. The response is sent back to the AI model via the MCP client. Integrating MCP with Azure OpenAI Services Microsoft has integrated MCP with Azure OpenAI Services, allowing GPT models to interact with external services and fetch live data. This means AI models are no longer limited to static knowledge but can access real-time information. Benefits of Azure OpenAI Services + MCP Integration ✔ Real-time Data Fetching – AI assistants can retrieve fresh information from APIs, databases, and internal systems. ✔ Contextual AI Responses – Enhances AI responses by providing accurate, up-to-date information. ✔ Enterprise-Ready – Secure and scalable for business applications, including finance, healthcare, and retail. Hands-On Tools for MCP Implementation To implement MCP effectively, Microsoft provides two powerful tools: Semantic Workbench and AI Gateway. Microsoft Semantic Workbench A development environment for prototyping AI-powered assistants and integrating MCP-based functionalities. Features: Build and test multi-agent AI assistants. Configure settings and interactions between AI models and external tools. Supports GitHub Codespaces for cloud-based development. Explore Semantic Workbench Workbench interface examples Microsoft AI Gateway A plug-and-play interface that allows developers to experiment with MCP using Azure API Management. Features: Credential Manager – Securely handle API credentials. Live Experimentation – Test AI model interactions with external tools. Pre-built Labs – Hands-on learning for developers.
Explore AI Gateway

Setting Up MCP with Azure OpenAI Services

Step 1 — Create a Virtual Environment
First, create a virtual environment using Python:

python -m venv .venv

Activate the environment:

# Windows
.venv\Scripts\activate
# macOS/Linux
source .venv/bin/activate

Step 2 — Install Required Libraries
Create a requirements.txt file and add the following dependencies:

langchain-mcp-adapters
langgraph
langchain-openai

Then, install the required libraries:

pip install -r requirements.txt

Step 3 — Set Up OpenAI API Key
Ensure you have your OpenAI API key set up:

# Windows
setx OPENAI_API_KEY "<your_api_key>"
# macOS/Linux
export OPENAI_API_KEY=<your_api_key>

Building an MCP Server
This server performs basic mathematical operations like addition and multiplication.

Create the Server File
First, create a new Python file:

touch math_server.py

Then, implement the server:

from mcp.server.fastmcp import FastMCP

# Initialize the server
mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

if __name__ == "__main__":
    mcp.run(transport="stdio")

Your MCP server is now ready to run.

Building an MCP Client
This client connects to the MCP server and interacts with it.

Create the Client File
First, create a new file:

touch client.py

Then, implement the client:

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from langchain_mcp_adapters.tools import load_mcp_tools
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

# Define server parameters
server_params = StdioServerParameters(
    command="python",
    args=["math_server.py"],
)

# Define the model
model = ChatOpenAI(model="gpt-4o")

async def run_agent():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Load the MCP server's tools and hand them to a ReAct-style agent
            tools = await load_mcp_tools(session)
            agent = create_react_agent(model, tools)
            agent_response = await agent.ainvoke({"messages": "what's (4 + 6) x 14?"})
            # The last message in the conversation holds the agent's final answer
            return agent_response["messages"][-1].content

if __name__ == "__main__":
    result = asyncio.run(run_agent())
    print(result)

Your client is now set up and ready to interact with the MCP server.

Running the MCP Server and Client
Step 1: Start the MCP Server
Open a terminal and run:

python math_server.py

This starts the MCP server. (Because the client uses stdio transport, it also launches math_server.py as a subprocess itself, so this step mainly verifies that the server script runs without errors.)

Step 2: Run the MCP Client
In another terminal, run:

python client.py

Expected Output

140

This means the AI agent correctly computed (4 + 6) x 14 using both the MCP server and GPT-4o.

Conclusion
Integrating MCP with Azure OpenAI Services enables AI applications to securely interact with external tools, enhancing functionality beyond text-based responses. With standardized communication and improved AI capabilities, developers can build smarter and more interactive AI-powered solutions. By following this guide, you can set up an MCP server and client, unlocking the full potential of AI with structured external interactions. Next Steps: Explore more MCP tools and integrations. Extend your MCP setup to work with additional APIs. Deploy your solution in a cloud environment for broader accessibility. For further details, visit the GitHub repository for MCP integration examples and best practices. MCP GitHub Repository MCP Documentation Semantic Workbench AI Gateway MCP Video Walkthrough MCP Blog MCP Github End to End Demo
Edge AI for Beginners: Getting Started with Foundry Local
In Module 08 of the EdgeAI for Beginners course, Microsoft introduces Foundry Local a toolkit that helps you deploy and test Small Language Models (SLMs) completely offline. In this blog, I’ll share how I installed Foundry Local, ran the Phi-3.5-mini model on my windows laptop, and what I learned through the process. What Is Foundry Local? Foundry Local allows developers to run AI models locally on their own hardware. It supports text generation, summarization, and code completion — all without sending data to the cloud. Unlike cloud-based systems, everything happens on your computer, so your data never leaves your device. Prerequisites Before starting, make sure you have: Windows 10 or 11 Python 3.10 or newer Git Internet connection (for the first-time model download) Foundry Local installed Step 1 — Verify Installation After installing Foundry Local, open Command Prompt and type: foundry --version If you see a version number, Foundry Local is installed correctly. Step 2 — Start the Service Start the Foundry Local service using: foundry service start You should see a confirmation message that the service is running. Step 3 — List Available Models To view the models supported by your system, run: foundry model list You’ll get a list of locally available SLMs. Here’s what I saw on my machine: Note: Model availability depends on your device’s hardware. For most laptops, phi-3.5-mini works smoothly on CPU. Step 4 — Run the Phi-3.5 Model Now let’s start chatting with the model: foundry model run phi-3.5-mini-instruct-generic-cpu:1 Once it loads, you’ll enter an interactive chat mode. Try a simple prompt: Hello! What can you do? The model replies instantly — right from your laptop, no cloud needed. To exit, type: /exit How It Works Foundry Local loads the model weights from your device and performs inference locally.This means text generation happens using your CPU (or GPU, if available). The result: complete privacy, no internet dependency, and instant responses. Benefits for Students For students beginning their journey in AI, Foundry Local offers several key advantages: No need for high-end GPUs or expensive cloud subscriptions. Easy setup for experimenting with multiple models. Perfect for class assignments, AI workshops, and offline learning sessions. Promotes a deeper understanding of model behavior by allowing step-by-step local interaction. These factors make Foundry Local a practical choice for learning environments, especially in universities and research institutions where accessibility and affordability are important. Why Use Foundry Local Running models locally offers several practical benefits compared to using AI Foundry in the cloud. With Foundry Local, you do not need an internet connection, and all computations happen on your personal machine. This makes it faster for small models and more private since your data never leaves your device. In contrast, AI Foundry runs entirely on the cloud, requiring internet access and charging based on usage. For students and developers, Foundry Local is ideal for quick experiments, offline testing, and understanding how models behave in real-time. On the other hand, AI Foundry is better suited for large-scale or production-level scenarios where models need to be deployed at scale. In summary, Foundry Local provides a flexible and affordable environment for hands-on learning, especially when working with smaller models such as Phi-3, Qwen2.5, or TinyLlama. 
It allows you to experiment freely, learn efficiently, and better understand the fundamentals of Edge AI development. Optional: Restart Later Next time you open your laptop, you don’t have to reinstall anything. Just run these two commands again:

foundry service start
foundry model run phi-3.5-mini-instruct-generic-cpu:1

What I Learned Following the EdgeAI for Beginners Study Guide helped me understand: How edge AI applications work How small models like Phi 3.5 can run on a local machine How to test prompts and build chat apps with zero cloud usage Conclusion Running the Phi-3.5-mini model locally with Foundry Local gave me hands-on insight into edge AI. It’s an easy, private, and cost-free way to explore generative AI development. If you’re new to Edge AI, start with the EdgeAI for Beginners course and follow its Study Guide to get comfortable with local inference and small language models. Resources: EdgeAI for Beginners GitHub Repo Foundry Local Official Site Phi Model Link
Redeeming Azure for Student from your GitHub Student Pack when you do not have an Academic Email
GitHub Student Developer Pack Learn to ship software like a pro. There's no substitute for hands-on experience. But for most students, real world tools can be cost-prohibitive. That's why we created the GitHub Student Developer Pack with some of our partners and friends. Sign up for Student Developer Pack
AI Career Navigator — Empowering Job Seekers with Azure OpenAI
AI Career Navigator is more than just a project — it’s a mission to make career growth accessible, intelligent, and human. Powered by Azure OpenAI, it transforms uncertainty into direction and effort into achievement. Author: Aryan Jaiswal — Gold Microsoft Learn Student Ambassador Reviewer: Julia Muiruri (Microsoft)
Microsoft’s A-Grade Azure AI Stack: From Dissertation Prototype to Smart Campus Pilot
This post isn't just about the Student Support Agent (SSA) I built, which earned me a Distinction. It's about how Microsoft's tools made it possible to go from a rough concept to a robust pilot, proving their developer stack is one of the most convenient and powerful options for building intelligent, ethical, and scalable educational systems. The Vision: Cutting Through Campus Complexity University life is full of fragmented systems. Students constantly juggle multiple logins, websites, and interfaces just to check a timetable, book a room, or find a policy. My goal was simple: reduce that cognitive load by creating a unified assistant that could manage all these tasks through a single, intelligent conversation. The Stack That Made It Possible The core of the system relied on a few key, interconnected technologies:

Technology | Core Function | Impact
Azure AI Search | Hybrid Data Retrieval | Anchored responses in official documents.
Azure OpenAI | Natural Language Generation | Created human-like, accurate answers.
Semantic Kernel (SK) | Multi-Agent Orchestration | Managed complex workflows and memory.
Azure Speech SDK | Multimodal Interface | Enabled accessible voice input and output.

The foundation was built using Streamlit and FastAPI for rapid prototyping. Building a system that's context-aware, accessible, and extensible is a huge challenge, but it's exactly where the Microsoft AI stack shined. From Simple Chatbot to Multi-Agent Powerhouse Early campus chatbots are often single-agent models, great for basic FAQs, but they quickly fail when tasks span multiple services. I used Semantic Kernel (SK), Microsoft's powerful, open-source framework, to build a modular, hub-and-spoke multi-agent system. A central orchestrator routes a request (like "book a study room") to a specialist agent (the Booking Agent), which knows exactly how to handle that task. This modularity was a game-changer: I could add new features (like an Events Agent) without breaking the core system, ensuring the architecture stayed clean and ready for expansion. Agentic Retrieval-Augmented Generation (Agentic RAG): Trust and Transparency To ensure the assistant was trustworthy, I used Agentic RAG to ground responses in real campus (Imperial College London) documentation. This included everything from admission fee payments to campus shuttle times. Azure AI Search indexed all handbooks and policies, allowing the assistant to pull relevant chunks of data and then cite the sources directly in its response. Result: The system avoids common hallucinations by refusing to answer when confidence is low. Students can verify every piece of advice, dramatically improving trust and transparency. Results: A Foundation for Scalable Support A pilot study with 15 students was highly successful: 100% positive feedback on the ease of use and perceived benefit. 93% satisfaction with the voice features. High trust was established due to transparent citations. The SSA proved it could save students time by centralising tasks like booking rooms, checking policies and offering study tips! Final Thoughts Microsoft’s AI ecosystem didn’t just support my dissertation; it shaped it. The tools were reliable, well-documented, and flexible enough to handle real-world complexity. More importantly, they allowed me to focus on student experience, ethics, and pedagogy, rather than wrestling with infrastructure.
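For readers who want a concrete starting point, here is a minimal, hypothetical sketch of the grounded-retrieval step described above: query Azure AI Search, then pass the retrieved chunks to the model with an instruction to answer only from them and cite them. The index name, field names, and deployment name are illustrative assumptions, not the dissertation's actual code.

# Grounded retrieval with citations (index/field/deployment names are assumptions).
import os
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

search = SearchClient(
    endpoint=os.environ["SEARCH_ENDPOINT"],
    index_name="campus-handbooks",          # hypothetical index
    credential=AzureKeyCredential(os.environ["SEARCH_KEY"]),
)
llm = AzureOpenAI(
    azure_endpoint=os.environ["AOAI_ENDPOINT"],
    api_key=os.environ["AOAI_KEY"],
    api_version="2024-06-01",
)

def answer_with_citations(question: str) -> str:
    # Retrieve a handful of relevant chunks and keep their source names for citation.
    hits = search.search(search_text=question, top=3)
    context = "\n\n".join(f"[{h['source']}] {h['content']}" for h in hits)  # assumed fields
    messages = [
        {"role": "system", "content": "Answer only from the provided sources and cite them like "
                                      "[source]. If the sources do not cover the question, say so."},
        {"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {question}"},
    ]
    reply = llm.chat.completions.create(model="gpt-4o", messages=messages)  # deployment name assumed
    return reply.choices[0].message.content

print(answer_with_citations("How do I book a study room in the main library?"))

The refusal instruction in the system prompt is what gives the "refuse when confidence is low" behaviour a concrete anchor: the model is told to decline whenever the retrieved sources do not cover the question.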
If you’re a student, educator, or developer looking to build intelligent systems that are transparent, inclusive, and scalable, Microsoft’s AI stack is a great place to start! 🙋🏽‍♀️ About Me I’m Tyana Tshiota, a postgraduate student in Applied Computational Science and Engineering at Imperial College London. Microsoft’s AI stack and the extensive documentation on Microsoft Learn played a key role in helping me achieve a Distinction in my dissertation. Moving forward, I’m excited to deepen my expertise by pursuing Azure certifications. I’d like to extend my sincere gratitude to my supervisor, Lee_Stott, for his invaluable mentorship and support throughout this project. If you haven’t already, check out his insightful posts on the Educator Developer Blog, or try building your own agent with the AI Agents for Beginners curriculum developed by Lee and his team! You can reach out via my LinkedIn if you’re interested in smart campus systems, AI in education, collaborative development, or would like to discuss opportunities.
How to get started and grow in the tech industry
#JornadaTech is an online mentoring marathon from Microsoft Reactor São Paulo about careers in technology, where you can learn more about the areas of Cloud, Security, Programming, and Data. In this article, you will find some tips and resources to get started and grow in a technology career.
Streamlining Campus Life: A Multi-Agent System for Campus Event Management
Introduction Managing campus events has long been a complex, manual process fraught with challenges. Traditional event management systems offer limited automation, placing a considerable workload on staff for tasks ranging from resource allocation to participant communication. This procedural friction presents a clear opportunity to build a more intelligent solution, leveraging the emerging paradigm of AI agents. To solve these challenges, I developed and evaluated a multi-agent system designed to automate the campus event workflow and improve productivity. In this blog, I’ll share the journey of building this system, detailing its architecture and how I leveraged the Semantic Kernel and Azure Services to create a team of specialized agents. Background My name is Junjie Wan, and I’m an MSc student in Applied Computational Science and Engineering at Imperial College London. This research project, in collaboration with Microsoft, explores the development of a multi-agent solution for managing a university campus. The system's focus is on automating the event management workflow using Microsoft Azure AI Agent services. I would like to thank my supervisor, Lee Stott, for his guidance and mentorship during this project. Methodology: Building the Agentic System The Model Context Protocol (MCP) and Backend Integration For agents to perform their duties effectively, they need access to a powerful set of tools. The system's backend is a high-performance API built with FastAPI, with Azure Cosmos DB serving as the scalable data store. To make these API functions usable by the agents, they are wrapped as tools using Semantic Kernel’s kernel_function decorator. These tools contain the necessary functions to utilize both the internal API and various Azure Services. The setup for making these tools accessible is straightforward: we first instantiate a central Kernel object, add the defined tools as plugins, and then convert this populated Kernel into a runnable MCP server. This approach creates an extensible system where new tools can be added as services without requiring changes to the agents themselves. Frontend Implementation with Streamlit To build a frontend powered by the AI features and based on Python, I chose to use Streamlit for rapid prototyping. The frontend implements role-based access control, with different interfaces for admin, staff, and students. The system includes a dashboard, form-based pages, and a conversational chat interface as the primary entry point for the multi-agent system. To enhance user experience, it supports multi-modal input through voice integration, which uses OpenAI Whisper for accurate speech-to-text transcription and the OpenAI TTS model in Azure AI Foundry for voice playback. Individual Agent Design The system distributes responsibilities across a team of specialized agents, each targeting a specific operational aspect of event management. Each agent is initialized as a ChatCompletionAgent with OpenAI’s Model Router and MCP plugins. Here are some of the agents implemented to improve the event management process. To address the operational challenge of manually reconciling room availability and event requirements, the system utilizes a Planning Agent and a Schedule Agent. The Planning Agent serves as the central coordinator, gathering event specifications from the user. It can even leverage the Azure Maps Weather service to provide organizers with weather forecasts that may influence venue selection.
It then delegates to the Schedule Agent, which is responsible for generating conflict-free timetable entries by querying our FastAPI backend for real-time availability data stored in the database. This workflow directly replaces the error-prone manual process and prevents scheduling conflicts. For financial planning, the Budget Agent functions as the system's dedicated financial analyst, designed to solve the problem of inaccurate cost estimation. When tasked with a budget, it first retrieves the event context from Cosmos DB. To ground its responses in verifiable data, the agent utilizes a Retrieval-Augmented Generation (RAG) pipeline built on Azure AI Search. This allows it to search internal documents, such as catering menus, for pricing information. If items are not found internally, the agent uses the Grounding with Bing Search tool to gather real-time market data, ensuring estimations are both accurate and current. To automate the manual, time-consuming process of participant communication, the Communication Agent handles all interactions. It drafts personalized emails populated with event details retrieved from the database. The agent is equipped with a tool that directly interfaces with Azure Communication Service to send emails programmatically. This automates the communication workflow, from sending initial invitations with Microsoft Forms links for registration to distributing post-event feedback surveys, saving significant administrative effort. Multi-Agent Collaboration For collaboration between agents, I chose the AgentGroupChat pattern within Semantic Kernel. While orchestration patterns like sequential or handoff are suitable for linear tasks and dynamic delegation between agents, the multi-domain nature of event management required a more flexible approach. The group chat pattern allows for both structured sequential handoffs and dynamic contributions from other agents as needed. Group Chat Design The orchestration logic is governed by two dynamic, LLM-driven functions: Selection Function: This acts as a dynamic router, analyzing the conversation's context to determine the most appropriate agent to speak in the next round. It performs intent recognition for initial queries and routes subsequent turns based on the ongoing workflow. Termination Function: This function prevents infinite loops and ensures the system remains responsive. It evaluates each agent's turn to decide whether the conversation should conclude or if a clear handoff warrants its continuation, maintaining coherent system behavior. Evaluation Framework and Performance To evaluate whether the system could reliably execute domain-specific workflows, I used the LLM-as-a-Judge framework through the Azure AI Evaluation SDK, which provides a scalable and consistent assessment of agent performance. Group Chat Performance Radar Chart The evaluation focused on three main categories of metrics to get a holistic view of the system: Functional Correctness: I used metrics such as IntentResolution, TaskAdherence, and ToolCallAccuracy to assess whether the agents correctly understood user requests, followed instructions, and called the appropriate tools with the correct parameters. Response Quality: Metrics like Fluency, Coherence, Relevance, and Response Completeness were used to evaluate the linguistic quality of the agents' responses. Operational Metrics: To assess the practical viability of the system, I also measured key operational metrics, including latency and token usage for each task. 
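To make the LLM-as-a-Judge idea concrete, here is a minimal, hypothetical sketch of how a single metric such as intent resolution could be scored with a plain Azure OpenAI chat call. It is a simplified stand-in for the Azure AI Evaluation SDK used in the project, not the project's actual evaluation code, and the deployment name and rubric are assumptions.

# Simplified LLM-as-a-Judge scorer (stand-in for the Azure AI Evaluation SDK; names are assumptions).
import json
import os
from openai import AzureOpenAI

judge = AzureOpenAI(
    azure_endpoint=os.environ["AOAI_ENDPOINT"],
    api_key=os.environ["AOAI_KEY"],
    api_version="2024-06-01",
)

RUBRIC = (
    "You are grading an AI event-management assistant. "
    "Score IntentResolution from 1 (ignored the request) to 5 (fully understood and addressed it). "
    'Reply as JSON: {"score": <1-5>, "reason": "<one sentence>"}.'
)

def judge_intent_resolution(query: str, response: str) -> dict:
    result = judge.chat.completions.create(
        model="gpt-4o",  # assumed judge deployment
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"User query:\n{query}\n\nAssistant response:\n{response}"},
        ],
    )
    return json.loads(result.choices[0].message.content)

print(judge_intent_resolution(
    "Book the main hall for the robotics society on Friday evening",
    "I checked availability and reserved the main hall for Friday 18:00-21:00 under the robotics society.",
))

A pass threshold (3.0 in this project) can then be applied to the returned score, which mirrors how the SDK's evaluators report results.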
The results confirmed the system's strong performance, consistently exceeding the pass threshold of 3.0. This demonstrates that the agentic architecture can successfully decompose and execute event management tasks with high precision. In contrast, linguistic metrics were lower, highlighting a potential trade-off: the multi-agent system prioritizes functionality over conversational flow. The operational metrics also provided valuable insights into system behavior: (Charts: Response Time by Tag; Token vs Tool Call) Latency: The data showed that simpler queries, such as reading information, were consistently fast. However, complex, multi-step tasks exhibited significantly longer and more variable response times. This pattern reflected the expected accumulation of latency across multiple agent handoffs and tool calls within the agentic workflow. Token: Analysis revealed a strong positive correlation between the number of tool calls and total token consumption, indicating that workflow complexity directly impacted computational cost. The baseline token usage for simple queries is high largely due to the context of tool definitions injected by the MCP server. Agents relying on RAG pipelines, like the Budget Agent, notably consumed more tokens due to the inclusion of retrieved context chunks in their prompts. Limitations and Future Work Despite the good performance, the system has several limitations: The system relies on carefully engineered prompts, making it less flexible when facing unexpected queries. Multi-turn coordination between agents and the use of MCP servers results in high token consumption, raising concerns about efficiency and scalability in production deployments. The system was tested with synthetic data and a relatively small set of test queries, which may not reflect the complexity of real-world scenarios. Future work will focus on: Enhancing error handling and recovery mechanisms within the group chat orchestration Improving conversational quality while reducing token consumption Deploying the agent system on a server for broader access and collecting user feedback Testing the system with real-world data and conducting formal user studies Conclusion This project demonstrates that a multi-agent system, built on the integrated power of Microsoft Azure services, can offer an efficient solution for campus event management. By dividing the labor among specialized agents and enabling them with a powerful toolkit, we can automate complex workflows and reduce administrative burden. This work serves as a proof-of-concept that shows how agentic approaches can deliver more intelligent and streamlined solutions that improve the quality of events and the student experience. Thank you for reading! If you have any questions or would like to discuss this work further, please feel free to contact me via email or on LinkedIn.
Model Mondays S2E13: Open Source Models (Hugging Face)
1. Weekly Highlights 1. Weekly Highlights Here are the key updates we covered in the Season 2 finale: O1 Mini Reinforcement Fine-Tuning (GA): Fine-tune models with as few as ~100 samples using built-in Python code graders. Azure Live Interpreter API (Preview): Real-time speech-to-speech translation supporting 76 input languages and 143 locales with near human-level latency. Agent Factory – Part 5: Connecting agents using open standards like MCP (Model Context Protocol) and A2A (Agent-to-Agent protocol). Ask Ralph by Ralph Lauren: A retail example of agentic AI for conversational styling assistance, built on Azure OpenAI and Foundry’s agentic toolset. VS Code August Release: Brings auto-model selection, stronger safety guards for sensitive edits, and improved agent workflows through new agents.md support. 2. Spotlight – Open Source Models in Azure AI Foundry Guest: Jeff Boudier, VP of Product at Hugging Face Jeff showcased the deep integration between the Hugging Face community and Azure AI Foundry, where developers can access over 10 000 open-source models across multiple modalities—LLMs, speech recognition, computer vision, and even specialized domains like protein modeling and robotics. Demo Highlights Discover models through Azure AI Foundry’s task-based catalog filters. Deploy directly from Hugging Face Hub to Azure with one-click deployment. Explore Use Cases such as multilingual speech recognition and vision-language-action models for robotics. Jeff also highlighted notable models, including: SmoLM3 – a 3 B-parameter model with hybrid reasoning capabilities Qwen 3 Coder – a mixture-of-experts model optimized for coding tasks Parakeet ASR – multilingual speech recognition Microsoft Research protein-modeling collection MAGMA – a vision-language-action model for robotics Integration extends beyond deployment to programmatic access through the Azure CLI and Python SDKs, plus local development via new VS Code extensions. 3. Customer Story – DraftWise (BUILD 2025 Segment) The finale featured a customer spotlight on DraftWise, where CEO James Ding shared how the company accelerates contract drafting with Azure AI Foundry. Problem Legal contract drafting is time-consuming and error-prone. Solution DraftWise uses Azure AI Foundry to fine-tune Hugging Face language models on legal data, generating contract drafts and redline suggestions. Impact Faster drafting cycles and higher consistency Easy model management and deployment with Foundry’s secure workflows Transparent evaluation for legal compliance 4. Community Story – Hugging Face & Microsoft The episode also celebrated the ongoing collaboration between Hugging Face and Microsoft and the impact of open-source AI on the global developer ecosystem. Community Benefits Access to State-of-the-Art Models without licensing barriers Transparent Performance through public leaderboards and benchmarks Rapid Innovation as improvements and bug fixes spread quickly Education & Empowerment via tutorials, docs, and active forums Responsible AI Practices encouraged through community oversight 5. Key Takeaways Open Source AI Is Here to Stay Azure AI Foundry and Hugging Face make deploying, fine-tuning, and benchmarking open models easier than ever. Community Drives Innovation: Collaboration accelerates progress, improves transparency, and makes AI accessible to everyone. Responsible AI and Transparency: Open-source models come with clear documentation, licensing, and community-driven best practices. 
Easy Deployment & Customization: Azure AI Foundry lets you deploy, automate, and customize open models from a single, unified platform. Learn, Build, Share: The open-model ecosystem is a great place for students, developers, and researchers to learn, build, and share their work. Sharda's Tips: How I Wrote This Blog For this final recap, I focused on capturing the energy of the open source AI movement and the practical impact of Hugging Face and Azure AI Foundry collaboration. I watched the livestream, took notes on the demos and interviews, and linked directly to official resources for models, docs, and community sites. Here’s my Copilot prompt for this episode: "Generate a technical blog post for Model Mondays S2E13 based on the transcript and episode details. Focus on open source models, Hugging Face, Azure AI Foundry, and community workflows. Include practical links and actionable insights for developers and students! Learn & Connect Explore Open Models in Azure AI Foundry Hugging Face Leaderboard Responsible AI in Azure Machine Learning Llama-3 by Meta Hugging Face Community Azure AI Documentation About Model Mondays Model Mondays is your weekly Azure AI learning series: 5-Minute Highlights: Latest AI news and product updates 15-Minute Spotlight: Demos and deep dives with product teams 30-Minute AMA Fridays: Ask anything in Discord or the forum Start building: Watch Past Replays Register For AMA Recap Past AMAs Join The Community Don’t build alone! The Azure AI Developer Community is here for real-time chats, events, and support: Join the Discord Explore the Forum About Me I'm Sharda, a Gold Microsoft Learn Student Ambassador focused on cloud and AI. Find me on GitHub, Dev.to, Tech Community, and LinkedIn. In this blog series, I share takeaways from each week’s Model Mondays livestream.267Views0likes0Comments