Agentic Code Fixing with GitHub Copilot SDK and Foundry Local
Introduction AI-powered coding assistants have transformed how developers write and review code. But most of these tools require sending your source code to cloud services, a non-starter for teams working with proprietary codebases, air-gapped environments, or strict compliance requirements. What if you could have an intelligent coding agent that finds bugs, fixes them, runs your tests, and produces PR-ready summaries, all without a single byte leaving your machine? The Local Repo Patch Agent demonstrates exactly this. By combining the GitHub Copilot SDK for agent orchestration with Foundry Local for on-device inference, this project creates a fully autonomous coding workflow that operates entirely on your hardware. The agent scans your repository, identifies bugs and code smells, applies fixes, verifies them through your test suite, and generates a comprehensive summary of all changes, completely offline and secure. This article explores the architecture behind this integration, walks through the key implementation patterns, and shows you how to run the agent yourself. Whether you're building internal developer tools, exploring agentic workflows, or simply curious about what's possible when you combine GitHub's SDK with local AI, this project provides a production-ready foundation to build upon. Why Local AI Matters for Code Analysis Cloud-based AI coding tools have proven their value—GitHub Copilot has fundamentally changed how millions of developers work. But certain scenarios demand local-first approaches where code never leaves the organisation's network. Consider these real-world constraints that teams face daily: Regulatory compliance: Financial services, healthcare, and government projects often prohibit sending source code to external services, even for analysis Intellectual property protection: Proprietary algorithms and trade secrets can't risk exposure through cloud API calls Air-gapped environments: Secure facilities and classified projects have no internet connectivity whatsoever Latency requirements: Real-time code analysis in IDEs benefits from zero network roundtrip Cost control: High-volume code analysis without per-token API charges The Local Repo Patch Agent addresses all these scenarios. By running the AI model on-device through Foundry Local and using the GitHub Copilot SDK for orchestration, you get the intelligence of agentic coding workflows with complete data sovereignty. The architecture proves that "local-first" doesn't mean "capability-limited." The Technology Stack Two core technologies make this architecture possible, working together through a clever integration called BYOK (Bring Your Own Key). Understanding how they complement each other reveals the elegance of the design. GitHub Copilot SDK The GitHub Copilot SDK provides the agent runtime, the scaffolding that handles planning, tool invocation, streaming responses, and the orchestration loop that makes agentic behaviour possible. Rather than managing raw LLM API calls, developers define tools (functions the agent can call) and system prompts, and the SDK handles everything else. 
Key capabilities the SDK brings to this project:

- Session management: Maintains conversation context across multiple agent interactions
- Tool orchestration: Automatically invokes defined tools when the model requests them
- Streaming support: Real-time response streaming for responsive user interfaces
- Provider abstraction: Works with any OpenAI-compatible API through the BYOK configuration

Foundry Local

Foundry Local brings Azure AI Foundry's model catalog to your local machine. It automatically selects the best available hardware acceleration (GPU, NPU, or CPU) and exposes models through an OpenAI-compatible API on localhost. Models run entirely on-device with no telemetry or data transmission. For this project, Foundry Local provides:

- On-device inference: All AI processing happens locally, ensuring complete data privacy
- Dynamic port allocation: The SDK auto-detects the Foundry Local endpoint, eliminating configuration hassle
- Model flexibility: Swap between models like qwen2.5-coder-1.5b, phi-3-mini, or larger variants based on your hardware
- OpenAI API compatibility: Standard API format means the GitHub Copilot SDK works without modification

The BYOK Integration

The entire connection between the GitHub Copilot SDK and Foundry Local happens through a single configuration object. This BYOK (Bring Your Own Key) pattern tells the SDK to route all inference requests to your local model instead of cloud services:

```typescript
const session = await client.createSession({
  model: modelId,
  provider: {
    type: "openai",          // Foundry Local speaks OpenAI's API format
    baseUrl: proxyBaseUrl,   // Streaming proxy → Foundry Local
    apiKey: manager.apiKey,
    wireApi: "completions",  // Chat Completions API
  },
  streaming: true,
  tools: [ /* your defined tools */ ],
});
```

This configuration is the key insight: with one config object, you've redirected an entire agent framework to run on local hardware. No code changes to the SDK, no special adapters—just standard OpenAI-compatible API communication.

Architecture Overview

The Local Repo Patch Agent implements a layered architecture where each component has a clear responsibility. Understanding this flow helps when extending or debugging the system.

```
Terminal / Web UI  (npm run demo | npm run ui)
        │
src/agent.ts: GitHub Copilot SDK (CopilotClient, BYOK → Foundry)
        │      + agent tools: list_files, read_file, write_file, run_command
        │  JSON-RPC
GitHub Copilot CLI (server mode): agent orchestration layer
        │  POST /v1/chat/completions (BYOK)
Foundry Local (on-device inference)
        Model: qwen2.5-coder-1.5b via ONNX Runtime
        Endpoint: auto-detected (dynamic port)
```

The data flow works as follows: your terminal or web browser sends a request to the agent application. The agent uses the GitHub Copilot SDK to manage the conversation, which communicates with the Copilot CLI running in server mode.
The CLI, configured with BYOK, sends inference requests to Foundry Local running on localhost. Responses flow back up the same path, with tool invocations happening in the agent.ts layer. The Four-Phase Workflow The agent operates through a structured four-phase loop, each phase building on the previous one's output. This decomposition transforms what would be an overwhelming single prompt into manageable, verifiable steps. Phase 1: PLAN The planning phase scans the repository and produces a numbered fix plan. The agent reads every source and test file, identifies potential issues, and outputs specific tasks to address: // Phase 1 system prompt excerpt const planPrompt = ` You are a code analysis agent. Scan the repository and identify: 1. Bugs that cause test failures 2. Code smells and duplication 3. Style inconsistencies Output a numbered list of fixes, ordered by priority. Each item should specify: file path, line numbers, issue type, and proposed fix. `; The tools available during this phase are list_files and read_file —the agent explores the codebase without modifying anything. This read-only constraint prevents accidental changes before the plan is established. Phase 2: EDIT With a plan in hand, the edit phase applies each fix by rewriting affected files. The agent receives the plan from Phase 1 and systematically addresses each item: // Phase 2 adds the write_file tool const editTools = [ { name: "write_file", description: "Write content to a file, creating or overwriting it", parameters: { type: "object", properties: { path: { type: "string", description: "File path relative to repo root" }, content: { type: "string", description: "Complete file contents" } }, required: ["path", "content"] } } ]; The write_file tool is sandboxed to the demo-repo directory, path traversal attempts are blocked, preventing the agent from modifying files outside the designated workspace. Phase 3: VERIFY After making changes, the verification phase runs the project's test suite to confirm fixes work correctly. If tests fail, the agent attempts to diagnose and repair the issue: // Phase 3 adds run_command with an allowlist const allowedCommands = ["npm test", "npm run lint", "npm run build"]; const runCommandTool = { name: "run_command", description: "Execute a shell command (npm test, npm run lint, npm run build only)", execute: async (command: string) => { if (!allowedCommands.includes(command)) { throw new Error(`Command not allowed: ${command}`); } // Execute and return stdout/stderr } }; The command allowlist is a critical security measure. The agent can only run explicitly permitted commands—no arbitrary shell execution, no data exfiltration, no system modification. Phase 4: SUMMARY The final phase produces a PR-style Markdown report documenting all changes. 
This summary includes what was changed, why each change was necessary, test results, and recommended follow-up actions:

```markdown
## Summary of Changes

### Bug Fix: calculateInterest() in account.js
- **Issue**: Division instead of multiplication caused incorrect interest calculations
- **Fix**: Changed `principal / annualRate` to `principal * (annualRate / 100)`
- **Tests**: 3 previously failing tests now pass

### Refactor: Duplicate formatCurrency() removed
- **Issue**: Identical function existed in account.js and transaction.js
- **Fix**: Both files now import from utils.js
- **Impact**: Reduced code duplication, single source of truth

### Test Results
- **Before**: 6/9 passing
- **After**: 9/9 passing
```

This structured output makes code review straightforward: reviewers can quickly understand what changed and why without digging through diffs.

The Demo Repository: Intentional Bugs

The project includes a demo-repo directory containing a small banking utility library with intentional problems for the agent to find and fix. This provides a controlled environment to demonstrate the agent's capabilities.

Bug 1: Calculation Error in calculateInterest()

The account.js file contains a calculation bug that causes test failures:

```javascript
// BUG: should be principal * (annualRate / 100)
function calculateInterest(principal, annualRate) {
  return principal / annualRate; // Division instead of multiplication!
}
```

This bug causes 3 of 9 tests to fail. The agent identifies it during the PLAN phase by correlating test failures with the implementation, then fixes it during EDIT.

Bug 2: Code Duplication

The formatCurrency() function is copy-pasted in both account.js and transaction.js, even though a canonical version exists in utils.js. This duplication creates maintenance burden and potential inconsistency:

```javascript
// In account.js (duplicated)
function formatCurrency(amount) {
  return '$' + amount.toFixed(2);
}

// In transaction.js (also duplicated)
function formatCurrency(amount) {
  return '$' + amount.toFixed(2);
}

// In utils.js (canonical, but unused)
export function formatCurrency(amount) {
  return '$' + amount.toFixed(2);
}
```

The agent identifies this duplication during planning and refactors both files to import from utils.js, eliminating redundancy.

Handling Foundry Local Streaming Quirks

One technical challenge the project solves is Foundry Local's behaviour with streaming requests. As of version 0.5, Foundry Local can hang on stream: true requests. The project includes a streaming proxy that works around this limitation transparently.

The Streaming Proxy

The streaming-proxy.ts file implements a lightweight HTTP proxy that converts streaming requests to non-streaming, then re-encodes the single response as SSE (Server-Sent Events) chunks—the format the OpenAI SDK expects:

```typescript
// streaming-proxy.ts simplified logic
async function handleRequest(req: Request): Promise<Response> {
  const body = await req.json();

  // If it's a streaming chat completion, convert to non-streaming
  if (body.stream === true && req.url.includes('/chat/completions')) {
    body.stream = false;
    const response = await fetch(foundryEndpoint, {
      method: 'POST',
      body: JSON.stringify(body),
      headers: { 'Content-Type': 'application/json' }
    });
    const data = await response.json();

    // Re-encode as SSE stream for the SDK
    return createSSEResponse(data);
  }

  // Non-streaming and non-chat requests pass through unchanged
  return fetch(foundryEndpoint, req);
}
```

This proxy runs on port 8765 by default and sits between the GitHub Copilot SDK and Foundry Local.
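The createSSEResponse helper referenced above is not shown in the excerpt. A minimal sketch of what it could look like, assuming the standard Chat Completions streaming chunk shape (this is an illustration, not the project's actual implementation):

```typescript
// Hypothetical sketch: wrap one non-streaming chat completion in SSE chunks.
function createSSEResponse(completion: any): Response {
  const text = completion?.choices?.[0]?.message?.content ?? '';

  const chunk = {
    id: completion.id,
    object: 'chat.completion.chunk',
    created: completion.created,
    model: completion.model,
    choices: [{ index: 0, delta: { role: 'assistant', content: text }, finish_reason: null }],
  };
  const done = {
    ...chunk,
    choices: [{ index: 0, delta: {}, finish_reason: 'stop' }],
  };

  const body =
    `data: ${JSON.stringify(chunk)}\n\n` +
    `data: ${JSON.stringify(done)}\n\n` +
    `data: [DONE]\n\n`;

  return new Response(body, {
    headers: { 'Content-Type': 'text/event-stream', 'Cache-Control': 'no-cache' },
  });
}
```

A fuller implementation might split the text into several delta chunks to better approximate token-by-token streaming, but the single-chunk form is enough for an SDK that simply consumes SSE.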
The SDK thinks it's talking to a streaming-capable endpoint, while the actual inference happens non-streaming. The conversion is transparent, no changes needed to SDK configuration. Text-Based Tool Call Detection Small on-device models like qwen2.5-coder-1.5b sometimes output tool calls as JSON text rather than using OpenAI-style function calling. The SDK won't fire tool.execution_start events for these text-based calls, so the agent includes a regex-based detector: // Pattern to detect tool calls in model output const toolCallPattern = /\{[\s\S]*"name":\s*"(list_files|read_file|write_file|run_command)"[\s\S]*\}/; function detectToolCall(text: string): ToolCall | null { const match = text.match(toolCallPattern); if (match) { try { return JSON.parse(match[0]); } catch { return null; } } return null; } This fallback ensures tool calls are captured regardless of whether the model uses native function calling or text output, keeping the dashboard's tool call counter and CLI log accurate. Security Considerations Running an AI agent that can read and write files and execute commands requires careful security design. The Local Repo Patch Agent implements multiple layers of protection: 100% local execution: No code, prompts, or responses leave your machine—complete data sovereignty Command allowlist: The agent can only run npm test , npm run lint , and npm run build —no arbitrary shell commands Path sandboxing: File tools are locked to the demo-repo/ directory; path traversal attempts like ../../../etc/passwd are rejected File size limits: The read_file tool rejects files over 256 KB, preventing memory exhaustion attacks Recursion limits: Directory listing caps at 20 levels deep, preventing infinite traversal These constraints demonstrate responsible AI agent design. The agent has enough capability to do useful work but not enough to cause harm. When extending this project for your own use cases, maintain similar principles, grant minimum necessary permissions, validate all inputs, and fail closed on unexpected conditions. Running the Agent Getting the Local Repo Patch Agent running on your machine takes about five minutes. The project includes setup scripts that handle prerequisites automatically. Prerequisites Before running the setup, ensure you have: Node.js 18 or higher: Download from nodejs.org (LTS version recommended) Foundry Local: Install via winget install Microsoft.FoundryLocal (Windows) or brew install foundrylocal (macOS) GitHub Copilot CLI: Follow the GitHub Copilot CLI install guide Verify your installations: node --version # Should print v18.x.x or higher foundry --version copilot --version One-Command Setup The easiest path uses the provided setup scripts that install dependencies, start Foundry Local, and download the AI model: # Clone the repository git clone https://github.com/leestott/copilotsdk_foundrylocal.git cd copilotsdk_foundrylocal # Windows (PowerShell) .\setup.ps1 # macOS / Linux chmod +x setup.sh ./setup.sh When setup completes, you'll see: ━━━ Setup complete! ━━━ You're ready to go. Run one of these commands: npm run demo CLI agent (terminal output) npm run ui Web dashboard (http://localhost:3000) Manual Setup If you prefer step-by-step control: # Install npm packages npm install cd demo-repo && npm install --ignore-scripts && cd .. 
# Start Foundry Local and download the model foundry service start foundry model run qwen2.5-coder-1.5b # Copy environment configuration cp .env.example .env # Run the agent npm run demo The first model download takes a few minutes depending on your connection. After that, the model runs from cache with no internet required. Using the Web Dashboard For a visual experience with real-time streaming, launch the web UI: npm run ui Open http://localhost:3000 in your browser. The dashboard provides: Phase progress sidebar: Visual indication of which phase is running, completed, or errored Live streaming output: Model responses appear in real-time via WebSocket Tool call log: Every tool invocation logged with phase context Phase timing table: Performance metrics showing how long each phase took Environment info: Current model, endpoint, and repository path at a glance Configuration Options The agent supports several environment variables for customisation. Edit the .env file or set them directly: Variable Default Description FOUNDRY_LOCAL_ENDPOINT auto-detected Override the Foundry Local API endpoint FOUNDRY_LOCAL_API_KEY auto-detected Override the API key FOUNDRY_MODEL qwen2.5-coder-1.5b Which model to use from the Foundry Local catalog FOUNDRY_TIMEOUT_MS 180000 (3 min) How long each agent phase can run before timing out FOUNDRY_NO_PROXY — Set to 1 to disable the streaming proxy PORT 3000 Port for the web dashboard Using Different Models To try a different model from the Foundry Local catalog: # Use phi-3-mini instead FOUNDRY_MODEL=phi-3-mini npm run demo # Use a larger model for higher quality (requires more RAM/VRAM) FOUNDRY_MODEL=qwen2.5-7b npm run demo Adjusting for Slower Hardware If you're running on CPU-only or limited hardware, increase the timeout to give the model more time per phase: # 5 minutes per phase instead of 3 FOUNDRY_TIMEOUT_MS=300000 npm run demo Troubleshooting Common Issues When things don't work as expected, these solutions address the most common problems: Problem Solution foundry: command not found Install Foundry Local—see Prerequisites section copilot: command not found Install GitHub Copilot CLI—see Prerequisites section Agent times out on every phase Increase FOUNDRY_TIMEOUT_MS (e.g., 300000 for 5 min). CPU-only machines are slower. Port 3000 already in use Set PORT=3001 npm run ui Model download is slow First download can take 5-10 min. Subsequent runs use the cache. Cannot find module errors Run npm install again, then cd demo-repo && npm install --ignore-scripts Tests still fail after agent runs The agent edits files in demo-repo/. Reset with git checkout demo-repo/ and run again. PowerShell blocks setup.ps1 Run Set-ExecutionPolicy -Scope Process Bypass first, then .\setup.ps1 Diagnostic Test Scripts The src/tests/ folder contains standalone scripts for debugging SDK and Foundry Local integration issues. These are invaluable when things go wrong: # Debug-level SDK event logging npx tsx src/tests/test-debug.ts # Test non-streaming inference (bypasses streaming proxy) npx tsx src/tests/test-nostream.ts # Raw fetch to Foundry Local (bypasses SDK entirely) npx tsx src/tests/test-stream-direct.ts # Start the traffic-inspection proxy npx tsx src/tests/test-proxy.ts These scripts isolate different layers of the stack, helping identify whether issues lie in Foundry Local, the streaming proxy, the SDK, or your application code. 
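If you want to drop one layer lower than those scripts, a raw request against the local OpenAI-compatible endpoint is a quick sanity check. Treat the following as a sketch only: the base URL, port, auth header shape, and model name are placeholders, so use whatever Foundry Local actually reports on your machine (for example via FOUNDRY_LOCAL_ENDPOINT and FOUNDRY_LOCAL_API_KEY):

```typescript
// Hypothetical sketch: call the local OpenAI-compatible endpoint directly,
// bypassing the SDK and the streaming proxy entirely.
const baseUrl = process.env.FOUNDRY_LOCAL_ENDPOINT ?? 'http://localhost:55588/v1'; // placeholder port
const apiKey = process.env.FOUNDRY_LOCAL_API_KEY ?? 'local';                        // placeholder key

const res = await fetch(`${baseUrl}/chat/completions`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${apiKey}`, // auth header shape may differ in your setup
  },
  body: JSON.stringify({
    model: 'qwen2.5-coder-1.5b',
    messages: [{ role: 'user', content: 'Reply with the single word: pong' }],
    stream: false, // keep the check non-streaming so the proxy is not involved
  }),
});

const data = await res.json();
console.log(res.status, data?.choices?.[0]?.message?.content);
```

If this succeeds but the agent does not, the problem is above Foundry Local; if it fails, start debugging at the model service itself.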
Key Takeaways

- BYOK enables local-first AI: A single configuration object redirects the entire GitHub Copilot SDK to use on-device inference through Foundry Local
- Phased workflows improve reliability: Breaking complex tasks into PLAN → EDIT → VERIFY → SUMMARY phases makes agent behaviour predictable and debuggable
- Security requires intentional design: Allowlists, sandboxing, and size limits constrain agent capabilities to safe operations
- Local models have quirks: The streaming proxy and text-based tool detection demonstrate how to work around on-device model limitations
- Real-time feedback matters: The web dashboard with WebSocket streaming makes agent progress visible and builds trust in the system
- The architecture is extensible: Add new tools, change models, or modify phases to adapt the agent to your specific needs

Conclusion and Next Steps

The Local Repo Patch Agent proves that sophisticated agentic coding workflows don't require cloud infrastructure. By combining the GitHub Copilot SDK's orchestration capabilities with Foundry Local's on-device inference, you get intelligent code analysis that respects data sovereignty completely. The patterns demonstrated here (BYOK integration, phased execution, security sandboxing, and streaming workarounds) transfer directly to production systems.

Consider extending this foundation with:

- Custom tool sets: Add database queries, API calls to internal services, or integration with your CI/CD pipeline
- Multiple repository support: Scan and fix issues across an entire codebase or monorepo
- Different model sizes: Use smaller models for quick scans, larger ones for complex refactoring
- Human-in-the-loop approval: Add review steps before applying fixes to production code
- Integration with Git workflows: Automatically create branches and PRs from agent-generated fixes

Clone the repository, run through the demo, and start building your own local-first AI coding tools. The future of developer AI isn't just cloud—it's intelligent systems that run wherever your code lives.

Resources

- Local Repo Patch Agent Repository – Full source code with setup scripts and documentation
- Foundry Local – Official site for on-device AI inference
- Foundry Local GitHub Repository – Installation instructions and CLI reference
- Foundry Local Get Started Guide – Official Microsoft Learn documentation
- Foundry Local SDK Reference – Python and JavaScript SDK documentation
- GitHub Copilot SDK – Official SDK repository
- GitHub Copilot SDK BYOK Documentation – Bring Your Own Key integration guide
- GitHub Copilot SDK Getting Started – SDK setup and first agent tutorial
- Microsoft Sample: Copilot SDK + Foundry Local – Official integration sample from Microsoft

Building with Azure OpenAI Sora: A Complete Guide to AI Video Generation
In this comprehensive guide, we'll explore how to integrate both Sora 1 and Sora 2 models from Azure OpenAI Service into a production web application. We'll cover API integration, request body parameters, cost analysis, limitations, and the key differences between using Azure AI Foundry endpoints versus OpenAI's native API. Table of Contents Introduction to Sora Models Azure AI Foundry vs. OpenAI API Structure API Integration: Request Body Parameters Video Generation Modes Cost Analysis per Generation Technical Limitations & Constraints Resolution & Duration Support Implementation Best Practices Introduction to Sora Models Sora is OpenAI's groundbreaking text-to-video model that generates realistic videos from natural language descriptions. Azure AI Foundry provides access to two versions: Sora 1: The original model focused primarily on text-to-video generation with extensive resolution options (480p to 1080p) and flexible duration (1-20 seconds) Sora 2: The enhanced version with native audio generation, multiple generation modes (text-to-video, image-to-video, video-to-video remix), but more constrained resolution options (720p only in public preview) Azure AI Foundry vs. OpenAI API Structure Key Architectural Differences Sora 1 uses Azure's traditional deployment-based API structure: Endpoint Pattern: https://{resource-name}.openai.azure.com/openai/deployments/{deployment-name}/... Parameters: Uses Azure-specific naming like n_seconds, n_variants, separate width/height fields Job Management: Uses /jobs/{id} for status polling Content Download: Uses /video/generations/{generation_id}/content/video Sora 2 adapts OpenAI's v1 API format while still being hosted on Azure: Endpoint Pattern: https://{resource-name}.openai.azure.com/openai/deployments/{deployment-name}/videos Parameters: Uses OpenAI-style naming like seconds (string), size (combined dimension string like "1280x720") Job Management: Uses /videos/{video_id} for status polling Content Download: Uses /videos/{video_id}/content Why This Matters? 
This architectural difference requires conditional request formatting in your code: const isSora2 = deployment.toLowerCase().includes('sora-2'); if (isSora2) { requestBody = { model: deployment, prompt, size: `${width}x${height}`, // Combined format seconds: duration.toString(), // String type }; } else { requestBody = { model: deployment, prompt, height, // Separate dimensions width, n_seconds: duration.toString(), // Azure naming n_variants: variants, }; } API Integration: Request Body Parameters Sora 1 API Parameters Standard Text-to-Video Request: { "model": "sora-1", "prompt": "Wide shot of a child flying a red kite in a grassy park, golden hour sunlight, camera slowly pans upward.", "height": "720", "width": "1280", "n_seconds": "12", "n_variants": "2" } Parameter Details: model (String, Required): Your Azure deployment name prompt (String, Required): Natural language description of the video (max 32000 chars) height (String, Required): Video height in pixels width (String, Required): Video width in pixels n_seconds (String, Required): Duration (1-20 seconds) n_variants (String, Optional): Number of variations to generate (1-4, constrained by resolution) Sora 2 API Parameters Text-to-Video Request: { "model": "sora-2", "prompt": "A serene mountain landscape with cascading waterfalls, cinematic drone shot", "size": "1280x720", "seconds": "12" } Image-to-Video Request (uses FormData): const formData = new FormData(); formData.append('model', 'sora-2'); formData.append('prompt', 'Animate this image with gentle wind movement'); formData.append('size', '1280x720'); formData.append('seconds', '8'); formData.append('input_reference', imageFile); // JPEG/PNG/WebP Video-to-Video Remix Request: Endpoint: POST .../videos/{video_id}/remix Body: Only { "prompt": "your new description" } The original video's structure, motion, and framing are reused while applying the new prompt Parameter Details: model (String, Optional): Your deployment name prompt (String, Required): Video description size (String, Optional): Either "720x1280" or "1280x720" (defaults to "720x1280") seconds (String, Optional): "4", "8", or "12" (defaults to "4") input_reference (File, Optional): Reference image for image-to-video mode remix_video_id (String, URL parameter): ID of video to remix Video Generation Modes 1. Text-to-Video (Both Models) The foundational mode where you provide a text prompt describing the desired video. Implementation: const response = await fetch(endpoint, { method: 'POST', headers: { 'Content-Type': 'application/json', 'api-key': apiKey, }, body: JSON.stringify({ model: deployment, prompt: "A train journey through mountains with dramatic lighting", size: "1280x720", seconds: "12", }), }); Best Practices: Include shot type (wide, close-up, aerial) Describe subject, action, and environment Specify lighting conditions (golden hour, dramatic, soft) Add camera movement if desired (pans, tilts, tracking shots) 2. Image-to-Video (Sora 2 Only) Generate a video anchored to or starting from a reference image. Key Requirements: Supported formats: JPEG, PNG, WebP Image dimensions must exactly match the selected video resolution Our implementation automatically resizes uploaded images to match Implementation Detail: // Resize image to match video dimensions const targetWidth = parseInt(width); const targetHeight = parseInt(height); const resizedImage = await resizeImage(inputReference, targetWidth, targetHeight); // Send as multipart/form-data formData.append('input_reference', resizedImage); 3. 
Video-to-Video Remix (Sora 2 Only) Create variations of existing videos while preserving their structure and motion. Use Cases: Change weather conditions in the same scene Modify time of day while keeping camera movement Swap subjects while maintaining composition Adjust artistic style or color grading Endpoint Structure: POST {base_url}/videos/{original_video_id}/remix?api-version=2024-08-01-preview Implementation: let requestEndpoint = endpoint; if (isSora2 && remixVideoId) { const [baseUrl, queryParams] = endpoint.split('?'); const root = baseUrl.replace(/\/videos$/, ''); requestEndpoint = `${root}/videos/${remixVideoId}/remix${queryParams ? '?' + queryParams : ''}`; } Cost Analysis per Generation Sora 1 Pricing Model Base Rate: ~$0.05 per second per variant at 720p Resolution Scaling: Cost scales linearly with pixel count Formula: const basePrice = 0.05; const basePixels = 1280 * 720; // Reference resolution const currentPixels = width * height; const resolutionMultiplier = currentPixels / basePixels; const totalCost = basePrice * duration * variants * resolutionMultiplier; Examples: 720p (1280×720), 12 seconds, 1 variant: $0.60 1080p (1920×1080), 12 seconds, 1 variant: $1.35 720p, 12 seconds, 2 variants: $1.20 Sora 2 Pricing Model Flat Rate: $0.10 per second per variant (no resolution scaling in public preview) Formula: const totalCost = 0.10 * duration * variants; Examples: 720p (1280×720), 4 seconds: $0.40 720p (1280×720), 12 seconds: $1.20 720p (720×1280), 8 seconds: $0.80 Note: Since Sora 2 currently only supports 720p in public preview, resolution doesn't affect cost, only duration matters. Cost Comparison Scenario Sora 1 (720p) Sora 2 (720p) Winner 4s video $0.20 $0.40 Sora 1 12s video $0.60 $1.20 Sora 1 12s + audio N/A (no audio) $1.20 Sora 2 (unique) Image-to-video N/A $0.40-$1.20 Sora 2 (unique) Recommendation: Use Sora 1 for cost-effective silent videos at various resolutions. Use Sora 2 when you need audio, image/video inputs, or remix capabilities. Technical Limitations & Constraints Sora 1 Limitations Resolution Options: 9 supported resolutions from 480×480 to 1920×1080 Includes square, portrait, and landscape formats Full list: 480×480, 480×854, 854×480, 720×720, 720×1280, 1280×720, 1080×1080, 1080×1920, 1920×1080 Duration: Flexible: 1 to 20 seconds Any integer value within range Variants: Depends on resolution: 1080p: Variants disabled (n_variants must be 1) 720p: Max 2 variants Other resolutions: Max 4 variants Concurrent Jobs: Maximum 2 jobs running simultaneously Job Expiration: Videos expire 24 hours after generation Audio: No audio generation (silent videos only) Sora 2 Limitations Resolution Options (Public Preview): Only 2 options: 720×1280 (portrait) or 1280×720 (landscape) No square formats No 1080p support in current preview Duration: Fixed options only: 4, 8, or 12 seconds No custom durations Defaults to 4 seconds if not specified Variants: Not prominently supported in current API documentation Focus is on single high-quality generations with audio Concurrent Jobs: Maximum 2 jobs (same as Sora 1) Job Expiration: 24 hours (same as Sora 1) Audio: Native audio generation included (dialogue, sound effects, ambience) Shared Constraints Concurrent Processing: Both models enforce a limit of 2 concurrent video jobs per Azure resource. You must wait for one job to complete before starting a third. Job Lifecycle: queued → preprocessing → processing/running → completed Download Window: Videos are available for 24 hours after completion. 
After expiration, you must regenerate the video. Generation Time: Typical: 1-5 minutes depending on resolution, duration, and API load Can occasionally take longer during high demand Resolution & Duration Support Matrix Sora 1 Support Matrix Resolution Aspect Ratio Max Variants Duration Range Use Case 480×480 Square 4 1-20s Social thumbnails 480×854 Portrait 4 1-20s Mobile stories 854×480 Landscape 4 1-20s Quick previews 720×720 Square 4 1-20s Instagram posts 720×1280 Portrait 2 1-20s TikTok/Reels 1280×720 Landscape 2 1-20s YouTube shorts 1080×1080 Square 1 1-20s Premium social 1080×1920 Portrait 1 1-20s Premium vertical 1920×1080 Landscape 1 1-20s Full HD content Sora 2 Support Matrix Resolution Aspect Ratio Duration Options Audio Generation Modes 720×1280 Portrait 4s, 8s, 12s ✅ Yes Text, Image, Video Remix 1280×720 Landscape 4s, 8s, 12s ✅ Yes Text, Image, Video Remix Note: Sora 2's limited resolution options in public preview are expected to expand in future releases. Implementation Best Practices 1. Job Status Polling Strategy Implement adaptive backoff to avoid overwhelming the API: const maxAttempts = 180; // 15 minutes max let attempts = 0; const baseDelayMs = 3000; // Start with 3 seconds while (attempts < maxAttempts) { const response = await fetch(statusUrl, { headers: { 'api-key': apiKey }, }); if (response.status === 404) { // Job not ready yet, wait longer const delayMs = Math.min(15000, baseDelayMs + attempts * 1000); await new Promise(r => setTimeout(r, delayMs)); attempts++; continue; } const job = await response.json(); // Check completion (different status values for Sora 1 vs 2) const isCompleted = isSora2 ? job.status === 'completed' : job.status === 'succeeded'; if (isCompleted) break; // Adaptive backoff const delayMs = Math.min(15000, baseDelayMs + attempts * 1000); await new Promise(r => setTimeout(r, delayMs)); attempts++; } 2. Handling Different Response Structures Sora 1 Video Download: const generations = Array.isArray(job.generations) ? job.generations : []; const genId = generations[0]?.id; const videoUrl = `${root}/${genId}/content/video`; Sora 2 Video Download: const videoUrl = `${root}/videos/${jobId}/content`; 3. Error Handling try { const response = await fetch(endpoint, fetchOptions); if (!response.ok) { const error = await response.text(); throw new Error(`Video generation failed: ${error}`); } // ... handle successful response } catch (error) { console.error('[VideoGen] Error:', error); // Implement retry logic or user notification } 4. Image Preprocessing for Image-to-Video Always resize images to match the target video resolution: async function resizeImage(file: File, targetWidth: number, targetHeight: number): Promise<File> { return new Promise((resolve, reject) => { const img = new Image(); const canvas = document.createElement('canvas'); const ctx = canvas.getContext('2d'); img.onload = () => { canvas.width = targetWidth; canvas.height = targetHeight; ctx.drawImage(img, 0, 0, targetWidth, targetHeight); canvas.toBlob((blob) => { if (blob) { const resizedFile = new File([blob], file.name, { type: file.type }); resolve(resizedFile); } else { reject(new Error('Failed to create resized image blob')); } }, file.type); }; img.onerror = () => reject(new Error('Failed to load image')); img.src = URL.createObjectURL(file); }); } 5. 
Cost Tracking

Implement cost estimation before generation and tracking after:

```typescript
// Pre-generation estimate
const estimatedCost = calculateCost(width, height, duration, variants, soraVersion);

// Save generation record
await saveGenerationRecord({
  prompt,
  soraModel: soraVersion,
  duration: parseInt(duration),
  resolution: `${width}x${height}`,
  variants: parseInt(variants),
  generationMode: mode,
  estimatedCost,
  status: 'queued',
  jobId: job.id,
});

// Update after completion
await updateGenerationStatus(jobId, 'completed', { videoId: finalVideoId });
```

6. Progressive User Feedback

Provide detailed status updates during the generation process:

```typescript
const statusMessages: Record<string, string> = {
  'preprocessing': 'Preprocessing your request...',
  'running': 'Generating video...',
  'processing': 'Processing video...',
  'queued': 'Job queued...',
  'in_progress': 'Generating video...',
};

onProgress?.(statusMessages[job.status] || `Status: ${job.status}`);
```

Conclusion

Building with Azure OpenAI's Sora models requires understanding the nuanced differences between Sora 1 and Sora 2, both in API structure and capabilities. Key takeaways:

- Choose the right model: Sora 1 for resolution flexibility and cost-effectiveness; Sora 2 for audio, image inputs, and remix capabilities
- Handle API differences: Implement conditional logic for parameter formatting and status polling based on model version
- Respect limitations: Plan around concurrent job limits, resolution constraints, and 24-hour expiration windows
- Optimize costs: Calculate estimates upfront and track actual usage for better budget management
- Provide great UX: Implement adaptive polling, progressive status updates, and clear error messages

The future of AI video generation is exciting, and Azure AI Foundry provides production-ready access to these powerful models. As Sora 2 matures and limitations are lifted (especially resolution options), we'll see even more creative applications emerge.

Resources:

- Azure AI Foundry Sora Documentation
- OpenAI Sora API Reference
- Azure OpenAI Service Pricing

This blog post is based on real-world implementation experience building LemonGrab, my AI video generation platform that integrates both Sora 1 and Sora 2 through Azure AI Foundry. The code examples are extracted from production usage.

The Hidden Memory Architecture of LLMs
Your LLM is not running out of intelligence. It is often hitting context and runtime memory limits. I’m Hazem Ali — Microsoft AI MVP, Distinguished AI and ML Engineer / Architect, and Founder and CEO of Skytells. I’ve built and led engineering work that turns deep learning research into production systems that survive real-world constraints. I speak at major conferences and technical communities, and I regularly deliver deep technical sessions on enterprise AI and agent architectures. If there’s one thing you’ll notice about me, it’s that I’m drawn to the deepest layers of engineering, the parts most teams only discover when systems are under real pressure. My specialization spans the full AI stack, from deep learning and system design to enterprise architecture and security. One of the most distinctive parts of that work lives in the layer most people don’t see in demos: inference runtimes, memory and KV-cache behavior, serving architecture, observability, and zero-trust governance. So this article is written from that lens: translating “unexpected LLM behavior” into engineering controls you can measure, verify, and enforce. I’ll share lessons learned and practical guidance based on my experience. Where latency is percentiles, not averages. Where concurrency is real. Where cost has a curve. Where one bad assumption turns into an incident. That is why I keep repeating a simple point across my writing. When AI fails in production, it usually isn’t because the model is weak. It is because the architecture around it was never built for real conditions. I wrote about that directly in AI Didn’t Break Your Production, Your Architecture Did. If you have not read it yet, it will give you the framing. This article goes one layer deeper, So, think of this as an engineering deep-dive grounded in published systems work. Because the subsystem that quietly decides whether your GenAI stays stable under pressure is memory. Not memory as a buzzword. Memory as the actual engineering stack you are shipping: prefill and decode behavior, KV cache growth, attention budgets, paging and fragmentation, prefix reuse, retrieval tiers, cache invalidation, and the trust boundaries that decide what is allowed into context and what is not. That stack decides time to first token, tokens per second, throughput, tail latency, and cost per request. It also decides something people rarely connect to architecture: whether the agent keeps following constraints after a long session, or slowly drifts because the constraints fell out of the effective context. If you have watched a solid agent become unreliable after a long conversation, you have seen this. If you have watched a GPU sit at low utilization while tokens stream slowly, you have seen this. If you increased context length and your bill jumped while quality did not, you have seen this. So here is the goal of this piece. Turn the hidden memory mechanics of LLMs into something you can design, measure, and defend. Not just vaguely understand. Let’s break it down. A quick grounding: What evolved, and what did not! The modern LLM wave rides on the Transformer architecture introduced in Attention Is All You Need. What changed since then is not the core idea of attention. What changed is the engineering around it: kernels got smarter about memory movement inference got separated into phases and pipelines KV cache went from a tensor to an allocator problem serving systems started looking like OS schedulers So yes, the model evolved. 
But the deeper truth is this: LLM performance is now strongly shaped by memory behavior, not just FLOPs. That is not a vibe. It is why whole research lines exist around IO-aware attention and KV cache management. A Story from CognitionX 2025 This happened live at CognitionX Dubai Conference 2025 Most CognitionX events are community-focused on engineering-first learning, turning modern AI and cloud capabilities, including Microsoft technologies, into practical systems people can build, measure, and operate, bringing together Microsoft MVPs and practitioners to share proven patterns and hands-on best practices. I wanted to land a point in a way engineers can’t unsee.. GenAI performance is often constrained by the serving system (memory, bandwidth, scheduling, batching, and initialization paths) before it is constrained by model quality. So I ran a live demo on an NVIDIA A100 80GB instance. Before anything, we intentionally warmed the runtime. The very first request on a fresh process or fresh GPU context can include one-time overhead that is not representative of steady-state inference things like model weight loading, CUDA context creation, kernel/module initialization, allocator warm-up, and framework-level graph/runtime setup. I didn’t want the audience to confuse “first-request overhead” with actual steady-state behavior. Then I started with a clean run: a short input, fast output, stable behavior. This is what most demos show: a model that looks powerful and responsive when prompt length is small, concurrency is low, and runtime state is minimal. > After that, I changed one variable on purpose. I kept adding constraints and context exactly the way real users do: more requirements, more follow-ups, more iterations back to back. Same model, same serving stack, same GPU. The only thing that changed was the amount of context being processed and retained by the runtime across tokens, which increases memory pressure and reduces scheduling flexibility. You could see the system react in measurable ways. As context grew and request patterns became less predictable, end-to-end latency increased and sustained throughput dropped, and the available memory headroom tightened. Nothing “mystical” happened to the model. We simply pushed the serving system into a regime where it was more constrained by memory footprint, memory bandwidth, batching efficiency, and scheduler behavior than by raw compute. Then I connected it directly to LLM inference mechanics. Text generation follows the same pattern, except the dominant runtime state has a name: the KV cache. Findings During prefill, the model processes the full prompt to initialize attention state and populate the KV cache. During decode, that state is reused and extended one token at a time. KV cache memory grows linearly with sequence length per request, and it also scales with the number of concurrent sequences and with model configuration details such as number of layers, number of attention heads, head dimension, and dtype (FP16/BF16/FP8, etc.). As prompt length and concurrency increase, the serving bottleneck often shifts from pure compute to system-level constraints: HBM bandwidth and access patterns, KV residency and paging behavior, allocator efficiency and fragmentation, and batching and scheduling dynamics. That is the mental model behind the rest of this article. The mental model that fixes most confusion LLM inference is the runtime forward pass where the model turns input tokens into a probability distribution for the next token. 
It runs in two phases: prefill (process the whole prompt once and build KV cache) then decode (generate tokens one-by-one while reusing KV cache). Performance and stability are dominated by context limits + KV cache memory/bandwidth, not just compute. The key is that inference is not one big compute job. It is one prompt pass, then many per-token passes. Prefill builds reusable state. Decode reuses and extends it, token by token, while repeatedly reading KV cache. Once you see it this way, production behavior becomes predictable, especially why long context and high concurrency change throughput and tail latency. LLM inference has two phases Prefill You process the full prompt tokens in parallel, and you create the KV cache. Decode You generate tokens autoregressively, one token at a time, reusing the KV cache. Now the first real punchline: Prefill is compute heavy. Decode is memory hungry. Decode reuses prior keys and values, which means you are constantly reading KV cache from GPU memory. That is why decode often becomes memory-bandwidth bound and tends to underutilize GPU compute. So when people ask why the GPU looks bored while tokens are slowly streaming, the answer is usually: Because decode is waiting on memory. Each generated token forces the model to pull past keys and values from KV cache, layer by layer, from GPU memory. So even if your GPU has plenty of compute left, throughput can stall on memory bandwidth and memory access patterns. KV cache is not an optimization. It is the runtime state In a Transformer decoder, each layer produces keys and values per token. If you had to recompute those for every new token, latency would explode. So we cache K and V. That cache grows with sequence length. That is the KV cache, Now here is the engineering detail that matters more than most people admit: The KV cache is one of the largest pieces of mutable state in LLM inference. And it is dynamic. It grows per request, per turn, per decoding strategy. This is exactly the problem statement that the vLLM PagedAttention paper attacks (arXiv) High-throughput serving needs batching, but KV cache memory becomes huge and changes shape dynamically, and naive management wastes memory through fragmentation and duplication. Why this starts behaving like distributed memory Well, A single GPU can only hold so much. At scale, you do all the usual tricks: batching continuous batching kv reuse prefix caching paging speculative decoding sharding multi GPU scheduling And once you do that, your system starts looking like a memory manager. Not metaphorically. Literally. The constraint isn’t just weights, it’s live KV cache, which grows with tokens and concurrency. So serving becomes memory admission control, can you accept this request without blowing the KV budget and collapsing batch size? PagedAttention explicitly takes the OS route: Paging KV into fixed-size blocks to avoid fragmentation and keep packing/batching stable under churn. (arXiv) That is not blog language. That is the core design. So if you want a rare angle that most people cannot talk about, here it is: GenAI serving is OS design wearing a Transformer costume. It means the hardest production problems stop being attention math and become OS problems: admission control, paging/fragmentation, scheduling (prefill vs decode), and isolation for shared caches. Paging: the KV cache allocator is the hidden bottleneck Paging shows up when you stop pretending every request has a clean, contiguous memory layout. Real traffic creates fragmentation. 
Variable length sequences create uneven allocations. And once you batch requests, wasted KV memory becomes lost throughput. Let’s get concrete. The classical failure mode: fragmentation If you allocate KV cache as big contiguous tensors per request, two things happen: you over allocate to plan for worst case length you fragment memory as requests come and go PagedAttention addresses this by storing KV cache in non contiguous blocks allocated on demand, eliminating external fragmentation by making blocks uniform, and reducing internal fragmentation by using smaller blocks. The vLLM paper also claims near zero waste in KV cache memory with this approach, and reports 2 to 4 times throughput improvements compared to prior systems in its evaluation. If you are building your own serving stack and you do not understand your KV allocator, you are basically shipping an OS with malloc bugs and hoping Kubernetes fixes it. It will not. Attention Budgets: The real meaning of context limits Context window is often marketed like a feature. In production it behaves like a budget that you spend. > Spend it on the wrong tokens and quality drops. > Spend too much of it and performance collapses under concurrency. Most people talk about context window like it is a product feature. Engineers should talk about it like this: Context is an attention budget with quadratic pressure. The FlashAttention paper opens with the key fact: Transformers get slow and memory hungry on long sequences because self-attention has quadratic time and memory complexity in sequence length. That pressure shows up in two places: Attention compute and intermediate memory Naive attention wants to touch (and often materialize) an N×N attention structure. As N grows, the cost curve explodes. KV cache is linear in size, but decode bandwidth scales with length KV cache grows with tokens (O(n)), but during decode every new token repeatedly reads more past KV. Longer contexts mean more memory traffic per token and higher tail-latency risk under load. FlashAttention exists because naive attention spends too much time moving data between HBM and SRAM, so it uses tiling to reduce HBM reads/writes and avoids materializing the full attention matrix. So when you choose longer contexts, you are not choosing more text. You are choosing: more KV cache to store more memory bandwidth pressure during decode more IO pressure inside attention kernels more tail latency risk under concurrency This is why context length is not a free upgrade. It is an architectural trade. Prefill decode disaggregation: when memory becomes a network problem Prefill–decode disaggregation is when you run the prefill phase on one GPU/node, then ship the resulting KV cache (or a reference to it) to a different GPU/node that runs the decode phase. So instead of one engine doing prefill → decode end-to-end, you split inference into two stages with a KV transfer boundary in the middle. The reason people do it: prefill is typically compute/throughput-oriented, while decode is latency + memory-bandwidth-oriented, so separating them lets you size and schedule hardware differently, but it turns KV into distributed state you must move, track, and retire safely. Once you treat prefill and decode as different phases, the next question is obvious: > Should they run on the same device? In many systems the answer becomes no, because the resource profiles differ. 
But the moment you split them, KV cache becomes a transferable object and decode is now gated by network tail latency as much as GPU speed. Some systems split them so prefill happens on one engine and decode on another. This is literally called prefill decode disaggregation, and technical reports describe it as splitting inference into a prefill stage and a decode stage across different GPUs or nodes, including cross-engine KV cache transfer. Now you have a new engineering reality: The KV cache becomes a distributed object. That means you inherit distributed systems issues: serialization / layout choices transfer overhead and tail latency correctness: ordering, cancellation, retries, duplication, versioning admission control under congestion / backpressure isolation between tenants If you are reading this as a CTO or SRE, this is the part you should care about. Because this is where systems die in production. Consistency: what it even means for KV cache Consistency is not a buzzword here, It is the difference between safe reuse and silent corruption. When you reuse KV state, you are reusing computation under assumptions. If those assumptions are wrong, you may get fast answers that are simply not equivalent to running the model from scratch. Let’s define terms carefully, In classic distributed systems, consistency is about agreement on state. In LLM serving, KV cache consistency usually means these constraints: Causal alignment The KV cache you reuse must correspond exactly to the same prefix tokens (same token IDs, same order, same positions) the model already processed. Parameter + configuration alignment KV computed under one model snapshot/config must not be reused under another: different weights, tokenizer, RoPE/positioning behavior, quantization/dtype, or other model-level settings can invalidate equivalence. Conditioning alignment If the prompt includes more than text (multimodal inputs, system/tool metadata), the cache key must include all conditioning inputs, Otherwise “same text prefix” can still be a different request. (This is a real-world footgun in practice.) This is why prefix caching is implemented as caching KV blocks for processed prefixes and reusing them only when a new request shares the same prefix, so it can skip computation of the shared part. And the vLLM docs make an explicit claim: prefix caching is widely used, is “almost a free lunch,” and does not change model outputs when the prefix matches. The moment you relax the prefix equality rule, you are not caching. You are approximating. That is a different system. So here is the consistency rule that matters: Only reuse KV state when you can prove token identity, not intent similarity. Performance without proof is just corruption with low latency. — Hazem Ali So my recommendation, treat KV reuse as a correctness feature first, not a speed feature. Cache only when you can prove token identity, and label anything else as approximation with explicit guardrails. Multi-tenancy: The memory security problem nobody wants to own Most senior engineers avoid this layer because it’s as unforgiving as memory itself, and I get why even principals miss it. This is deep-systems territory, where correctness is invisible until it breaks. However, let me break it down and make it easy for you to reason about. Memory is not only a performance layer, It is also a security surface. Yes, you read that right. Memory is not only a performance layer. It is also a security surface. 
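The consistency rules above and the tenancy concerns below meet in one place: the cache key. As a purely illustrative sketch (not any specific serving framework's API), a safe KV/prefix cache key would include the tenant, the model snapshot and configuration, any conditioning inputs, and the exact prefix token IDs:

```typescript
import { createHash } from 'node:crypto';

// Hypothetical sketch: a prefix-cache key that only matches when reuse is provably safe.
interface PrefixCacheKeyInput {
  tenantId: string;        // isolation boundary: never share KV blocks across tenants
  modelSnapshot: string;   // weights / version identifier
  modelConfigHash: string; // tokenizer, dtype/quantization, positional settings, etc.
  conditioningHash: string;// system/tool metadata, multimodal inputs, anything beyond text
  prefixTokenIds: number[];// exact token IDs, in order (not "similar intent")
}

function prefixCacheKey(k: PrefixCacheKeyInput): string {
  const h = createHash('sha256');
  h.update(k.tenantId);
  h.update('|' + k.modelSnapshot);
  h.update('|' + k.modelConfigHash);
  h.update('|' + k.conditioningHash);
  h.update('|' + k.prefixTokenIds.join(','));
  return h.digest('hex');
}
```

If any of these fields differ and you still reuse cached KV, you are in the "approximating" regime described above, and it should be treated as a separate, explicitly governed feature rather than a cache hit.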
I remember my session at AICO Dubai 2025, where the whole point was Zero-Trust Architecture. What most teams miss is that the exact same Zero-Trust logic applies one layer deeper, at the memory level as well. Once you batch users, cache prefixes, and reuse state, you are operating a multi-tenant platform whether you admit it or not. That means isolation and scope become first-class design constraints. If you ignore this, performance optimizations become exposure risks. Now we get to the part most GenAI articles avoid. If your serving layer does any form of cross-request reuse, batching, or shared caches, then you have a trust boundary issue. The boundary isn’t just the model. It is the serving stack: the scheduler, the cache namespace, the debug surface, and the logs. User → serving → tenant-scoped cache → tools/data. Performance wants sharing; security demands scoping. In my Zero-Trust agent article, I framed the mindset clearly: do not trust the user, the model, the tools, the internet, or the documents you ground on, and any meaningful action must have identity, explicit permissions, policy checks outside the prompt, and observability. That same mindset applies here. Because KV cache can become a leakage channel if you get sloppy: cross-tenant prefix caching without strict scoping and cache key namespaces shared batch scheduling that can leak metadata through timing and resource signals debug endpoints that expose tokenization details or cache keys logs that accidentally store prompts, prefixes, or identifiers I am not claiming a specific CVE here, I am stating the architectural risk class. And the mitigation is the same pattern I already published: Once an agent can call tools that mutate state, treat it like a privileged service, not a chatbot. - Hazem Ali I would extend that line to serving, Once your inference stack shares memory state across users, treat it like a multi-tenant platform, not a demo endpoint. Speculative decoding: latency tricks that still depend on memory Speculative decoding is a clean example of a pattern you’ll keep seeing. A lot of speedups aren’t about changing the model at all. They’re about changing how you schedule work and how you validate tokens. Speculative decoding flow. A draft model proposes N tokens; the target model verifies them in parallel; accepted tokens are committed and extend KV; rejected tokens fall back to standard decode. But even when you make decode faster, you still pay the memory bill: KV reads, KV writes, and state that keeps growing. Speculative decoding is one of the most practical ways to speed up decode without touching the target model. The idea is simple: a smaller draft model proposes N tokens, then the larger target model verifies them in parallel. If most of them get accepted, you effectively get multiple tokens per expensive target-model step, while still matching the target distribution. It helps, but it doesn’t make memory go away: verification still has to attend over the current prefix and work against KV state acceptance rate is everything: poor alignment means more rejections and less real gain batching and scheduler details matter a lot in production (ragged acceptance, bookkeeping, and alignment rules can change the outcome) Figure 12B, Speedup vs acceptance rate (and the memory floor). Higher acceptance drives real gains, but KV reads/writes and state growth remain a bandwidth floor that doesn’t disappear. So speculative decoding isn’t magic. 😅 It’s a scheduling + memory strategy dressed as an algorithm. 
If you turn it on, benchmark it under your actual workload. Even practical inference guides call out that results depend heavily on draft/target alignment and acceptance rate: you measure it, you don't assume it.

Azure: why it matters here

Azure matters here for one reason: it gives you production control points that map directly to the failure modes we've been talking about (memory pressure, batching behavior, cache scope, isolation, and ops). Not because you can buy a bigger GPU. Because in production, survivability comes from control points.

1. Foundry Agent Service as a governed agent surface

The point isn't agents as a feature. The point is that orchestration changes memory patterns and operational risk. According to the product documentation, Foundry Agent Service is positioned as a platform to design, deploy, and scale agents, with built-in integration to knowledge sources (e.g., Bing, SharePoint, Fabric, Azure AI Search) and a large action surface via Logic Apps connectors. Why that matters in this article: once you add tools + retrieval + multi-step execution, you amplify token volume and state.

2. Tools + grounding primitives you can actually audit

Grounding is not free. It expands context, increases prefill cost, and changes what you carry into decode. According to the latest documentation, Foundry's tools model explicitly separates knowledge tools and public web grounding. That separation is operationally important: it gives you clearer "what entered the context" boundaries, so when quality drifts, you can debug whether it's retrieval/grounding vs serving/memory.

3. AKS + MIG: when KV cache becomes a deployment decision

GPU utilization isn't just "do we have GPUs?" It's tenancy, isolation, and throughput under hard memory budgets. According to the AKS docs, Azure AKS supports Multi-Instance GPU (MIG), where supported NVIDIA GPUs can be partitioned into multiple smaller GPU instances, each with its own compute slices and memory. That turns KV cache headroom from a runtime detail into a deployment constraint.

This is exactly where the KV cache framing becomes useful:

Smaller MIG slices mean tighter KV cache budgets
Batching must respect per-slice memory headroom
Paging and prefix caching become more important
You are effectively right-sizing memory domains

4. Managed GPU nodes: reducing the ops entropy around inference

A lot of production pain lives around the model: drivers, plugins, telemetry, node lifecycle. As documented, AKS now supports fully managed GPU nodes (preview) that install the NVIDIA driver, device plugin, and DCGM metrics exporter by default, reducing the moving parts in the layer that serves your KV-heavy workloads.

Architectural Design: AI as Distributed Memory on Azure

Now we get to the interesting part: turning the ideas into a blueprint you can actually implement. The goal is simple: keep the control plane and data plane clean, and treat memory as a first-class layer. If you do that, scaling becomes a deliberate engineering exercise instead of a firefight.

The moment you treat inference as a multi-tenant memory system, not a model endpoint, you stop chasing incidents and start designing control. — Hazem Ali

Control plane: The Governance Unit

Use Foundry Hubs/Projects as the governance boundary: a place to group agents, model deployments, tools, and access control so RBAC, policies, and monitoring attach to a single unit of ownership. Then enforce identity + least privilege for any tool calls outside the prompt, aligned with your zero-trust framing.
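Before we get to the data plane, here is a rough back-of-envelope sketch of why the AKS + MIG point above matters: KV headroom translates directly into a concurrency budget. The model shape is illustrative (a 7B-class model with fp16 KV and no grouped-query attention), not a statement about any specific deployment.

```python
def kv_bytes_per_token(num_layers: int, num_kv_heads: int, head_dim: int, dtype_bytes: int = 2) -> int:
    # K and V per layer: 2 tensors of (num_kv_heads * head_dim) elements each
    return 2 * num_layers * num_kv_heads * head_dim * dtype_bytes

# Illustrative 7B-class config (fp16 KV, no GQA): roughly 0.5 MB of KV per token
PER_TOKEN = kv_bytes_per_token(num_layers=32, num_kv_heads=32, head_dim=128)

def tokens_that_fit(kv_headroom_gb: float) -> int:
    return int(kv_headroom_gb * 1024**3 // PER_TOKEN)

for headroom_gb in (8, 20, 40):   # e.g., what's left after weights on different slices or nodes
    total = tokens_that_fit(headroom_gb)
    print(f"{headroom_gb} GB KV headroom ≈ {total:,} tokens "
          f"≈ {total // 4096} concurrent 4k-token sequences")
```

Shrink the slice and the KV budget shrinks with it, which is why batching policy, paging, and prefix caching stop being tuning details and become part of the deployment decision.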
Data plane: Where tokens turn into latency

Pick one of two concrete paths:

Option A: Managed models + managed orchestration
Use Foundry Models / model catalog with Foundry Agent Service orchestration when you want faster time-to-prod and more managed control points.

Option B: Self-hosted inference on AKS
Run inference on AKS with your serving stack (e.g., vLLM + PagedAttention), and add MIG slicing where it matches your tenancy model, because the KV budget becomes an actual scheduling constraint.

Memory layer decisions

Long prompts + repeated prefixes: enable prefix caching, and scope it properly per tenant / per model config.
OOM or low batch size: treat KV cache as an allocator problem and adopt paging strategies (PagedAttention-style thinking).
Tail latency spikes: consider separating prefill and decode where it fits, but accept that KV becomes a distributed object with transfer + consistency overhead.
Decode feels slow / GPU looks bored: consider speculative decoding, but benchmark it honestly under your workload and acceptance rate.

Runtime Observability: Inside the Serving Memory Stack

Before we get into metrics, a quick warning: this is where GenAI stops being a model you call and becomes a system you operate. The truth won't show up in prompt tweaks or averages. It shows up one layer deeper, in queues, schedulers, allocators, and the KV state that decides whether your runtime stays stable under pressure.

Remember what I told you above? Latency is percentiles, not averages. So if you can't see memory behavior, you can't tune it, and you'll keep blaming the model for what the serving layer is doing.

Most teams instrument the model and forget the runtime. That's backwards. This whole article is about the fact that performance is often constrained by the serving system (memory, bandwidth, scheduling, batching) before it's constrained by model quality, and the dominant runtime state is the KV cache.

So if you want to run an AI like an engineer, you track:

TTFT (time to first token)
Mostly prefill + queueing/scheduling. This is where "the system feels slow" starts.

TPOT / ITL (time per output token / inter-token latency)
Mostly decode behavior. This is where memory bandwidth and KV reads show up hardest.

KV cache footprint + headroom
During decode, KV grows with sequence length and with concurrency. Track how much VRAM is living state vs available runway.

KV fragmentation / allocator efficiency
Because your max batch size is often limited by allocator reality, not theoretical VRAM.

Batch size + effective throughput (system tokens/sec)
If throughput dips as contexts get longer, you're usually watching memory pressure and batching efficiency collapse, not model randomness.

Prefix cache hit rate
This is where prompt engineering becomes performance engineering. When done correctly, prefix caching skips recomputing shared prefixes.

Tail latency under concurrency (p95/p99)
Because production is where "mostly fine" still means "incident."

These are the levers that make GenAI stable; everything else is vibes.

Determinism Under Load: When the Serving Runtime Changes the Output

In well-controlled setups, an LLM can be highly repeatable. But under certain serving conditions, especially high concurrency and dynamic/continuous batching, you may observe something that feels counter-intuitive:

Same model. Same request. Same parameters. Different output.

First, let me clarify something: I'm not saying that LLMs are unreliable by design. I'm saying something more precise, and more useful.
Reproducibility is a systems property. Why? Because in real serving, the model is only one part of the computation. What actually runs is a serving runtime, batching and scheduling decisions, kernel selection, numeric precision paths, and memory pressure. Under load, those factors can change the effective execution path. And if the runtime isn’t deterministic enough for the guarantees you assume, then “same request” does not always mean “same execution.” This matters because AI is no longer a toy. It’s deployed across enterprise workflows, healthcare, finance, and safety-critical environments. Places where small deviations aren’t “interesting,” they’re risk. In precision-critical fields like healthcare, tiny shifts can matter, not because every use case requires bit-identical outputs, but because safety depends on traceability, validation, and clear operating boundaries. When systematic decisions touch people’s lives, you don’t want “it usually behaves.” You want measurable guarantees, clear operating boundaries, and engineering controls. — Hazem Ali 1. First rule: “Same request” must mean same token stream + same model configuration Before blaming determinism, verify the request is identical at the level that matters: Same tokenizer behavior and token IDs (same text ≠ same tokens across versions/config) Same system prompt/template/tool traces (anything that enters the final serialized prompt) Same weights snapshot + inference configuration (dtype/quantization/positioning settings that affect numerics) If you can’t prove token + config equivalence, don’t blame hardware yet, you may be debugging input drift. Once equivalence is proven, runtime nondeterminism becomes the prime suspect. Prove byte-level equivalence before blaming runtime: same_text_prompt ≠ same_token_ids same_model_name ≠ same_weights_snapshot + quantization/dtype + RoPE/position config same_api_call ≠ same_final_serialized_context (system + tools + history) Common failure modes in the wild: Tokenizer/version changes → different token IDs Quantization/dtype paths → different numerics (often from the earliest layers) RoPE/position config mismatches → representation drift across the sequence Verify (practically): Hash the final serialized prompt bytes Hash the token ID sequence Log/hash the model revision + tokenizer revision + dtype/quantization + RoPE/position settings + decode config across runs 2. Temperature=0 reduces randomness, but it does not guarantee bit-identical execution Greedy decoding { temperature = 0 } is deterministic only if the logits are identical at every step. What greedy actually removes is one source of variability, sampling. It does not guarantee identical results by itself, because the logits are produced by a GPU runtime that may not be strictly deterministic under all serving conditions. Deterministic only if the logits match exactly next_id = logits.argmax() # Deterministic only if logits are bit-identical. # In practice, kernel selection, parallel reductions, atomic operations, # and precision paths can introduce tiny rounding differences # that may flip a borderline argmax. Reality? greedy fixes the decision rule “pick the max”. The serving runtime still controls the forward-pass execution path that produces the logits. If you need strict repeatability, you must align the runtime: deterministic algorithm settings where available, consistent library/toolkit behavior, and stable kernel/math-mode choices across runs. But GPU stacks do not automatically guarantee bit-identical logits across runs. 
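Whichever failure you suspect, step zero is the verification described in rule 1: prove the request envelope matches before blaming the runtime. A minimal sketch of that fingerprinting, with illustrative helper and field names:

```python
import hashlib
import json

def fingerprint_request(serialized_prompt: bytes,
                        token_ids: list[int],
                        runtime_config: dict) -> dict:
    """Illustrative fingerprints that define 'same request' at the level that matters."""
    def sha(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()
    return {
        # final bytes after templates, system prompt, tool traces, and history
        "prompt_sha256": sha(serialized_prompt),
        # tokenizer output, not raw text
        "token_ids_sha256": sha(json.dumps(token_ids).encode()),
        # weights revision, tokenizer revision, dtype/quantization, RoPE, decode params
        "config_sha256": sha(json.dumps(runtime_config, sort_keys=True).encode()),
    }
```

If all three hashes match across two runs and the outputs still differ, you are now legitimately in runtime-nondeterminism territory, which is where the next sections go.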
**PyTorch** documents that reproducibility can require avoiding nondeterministic algorithms, and it provides deterministic-algorithm enforcement (`torch.use_deterministic_algorithms`) that forces deterministic algorithms where available and errors when only nondeterministic implementations exist.

So the accurate statement is: temperature=0 makes the decoding rule deterministic, but it doesn't make the runtime deterministic.

3. Why tiny runtime differences can become big output differences

Sometimes a tiny runtime delta stays tiny. Sometimes it cascades. The difference is autoregressive decoding plus sequence length (prompt + generated tokens within the context window).

During decode, the model generates one token at a time, and each chosen token is appended back into the context for the next step. So if two runs differ at a single step, because two candidates were near-tied and a tiny numeric delta flipped the choice, then the prefixes diverge. From that moment on, the model is conditioning on a different history, so future token distributions can drift. This is not "model mood." It's a direct consequence of the autoregressive feedback loop.

Where the context window matters is simple and fully mechanical:

A longer sequence means more decode steps.
More steps mean more opportunities for near-ties where a tiny delta can flip a decision.
Once a token flips, the rest of the generation can follow a different trajectory because the prefix is now different.

So yes: small runtime differences can become big output differences, especially in long generations and long contexts.

For example, this snippet demonstrates two facts:

A near-tie plus a tiny delta can flip argmax.
One flipped choice can cause trajectory divergence in an autoregressive loop.

```python
import numpy as np

# 1) Near-tie: a tiny perturbation can flip argmax
z = np.array([0.5012, 0.5008, 0.1, -0.2])    # top-2 logits are nearly tied
a = int(np.argmax(z))
b = int(np.argsort(z)[-2])
print("Top:", a, "Second:", b, "Margin:", z[a] - z[b])

eps = 3e-4                                    # tiny perturbation scale
delta = np.zeros_like(z)
delta[a] -= eps                               # worst-case-style delta: push the top down,
delta[b] += eps                               # the runner-up up (illustrative)
print("Argmax before:", int(np.argmax(z)), "after tiny delta:", int(np.argmax(z + delta)))

# 2) Autoregressive divergence (toy transition model)
rng = np.random.default_rng(0)
V, T = 8, 30
W = rng.normal(size=(V, V))                   # logits for next token given current token

def next_token(prev: int, flip: bool = False) -> int:
    order = np.argsort(W[prev])
    # flip=True simulates the near-tie case above: one decision lands on the
    # runner-up instead of the top candidate. Part 1 shows how a tiny delta can
    # cause exactly that; here we inject the flip once and watch it propagate.
    return int(order[-2]) if flip else int(order[-1])

yA, yB, inject_step = [0], [0], 3
for t in range(1, T):
    yA.append(next_token(yA[-1]))
    yB.append(next_token(yB[-1], flip=(t == inject_step)))   # single flipped decision

first_div = next((i for i, (x, y) in enumerate(zip(yA, yB)) if x != y), None)
print("First divergence step:", first_div)
print("Run A:", yA)
print("Run B:", yB)
```

This toy example isn't claiming GPU deltas always happen or always flip tokens. It only demonstrates the mechanism: near-ties exist, a tiny delta can flip an argmax (part 1), and autoregressive decoding amplifies a single flipped decision into a different continuation (part 2, where the flip is injected deliberately).

To visualize what's happening exactly, look at this diagram. On the left, it shows the decode loop as a stateful sequence generator: at step t the model produces logits z_t, we pick the next token y_t (greedy or sampling), then that token is appended to the prefix and becomes part of the next step's conditioning.
That feedback loop is the key: one token is not "just one token", it becomes future context.

On the right, the diagram highlights the failure mode that surprises people in serving: when two candidates are near-tied, a tiny numeric delta (from runtime execution-path differences under load) can flip the choice once. After that flip, the two runs are no longer evaluating the same prefix, so the distributions naturally drift. With a longer context window and longer generations, you simply have more steps where near-ties can occur and more opportunity for a single flip to branch the trajectory.

That's the point to internalize. The runtime doesn't need to "break" the model to change the output. It only needs to nudge one early decision in a near-tie; autoregressive conditioning does the rest.

4. Under concurrency, serving can change the execution path (and that can change results)

Once you go online, the request is not executed alone. It enters a scheduler. Under load, the serving layer is allowed to reshape work to hit latency/throughput goals:

Continuous/dynamic batching: requests arrive at different times, get grouped differently, and may be processed with different batch composition or ordering.
Chunked or staged execution: some systems split or chunk prefill work to keep the pipeline moving and to avoid blocking decode.
Runtime features that change what's computed and when: prefix caching, speculative decoding, verification passes, paging, and other optimizations can change the shape of the forward-pass workload for "the same" logical request.

None of that automatically means outputs must differ. The point is narrower and more important:

If batch shape, scheduling, or kernel/math paths can change under pressure, then the effective execution path can change. And repeatability becomes a property of that path, not of your request text.

This is exactly why vLLM documents that it does not guarantee reproducibility by default for performance reasons, and points to Batch Invariance when you need outputs to be independent of batch size or request order in online serving.

5. Nondeterminism isn't folklore. The stack literally tells you it exists

If you've ever looked at two runs that should match and thought, "This doesn't make sense" 👈 let me be very clear: that reaction is rational. Your engineering brain is detecting a missing assumption.

The missing assumption is that inference behaves like a pure function call. In real serving, determinism is not a property of the model alone. It's a property of the full compute path.

Framework level: what the software stack is willing to guarantee

At the framework layer, reproducibility is explicitly treated as conditional. PyTorch documents that fully reproducible results are not guaranteed across releases or platforms, and it provides deterministic controls that can force deterministic algorithms where available. The important detail is that when you demand determinism, PyTorch may refuse to run an operation if only nondeterministic implementations exist. That's not a bug. That's the framework being honest about the contract you asked for.

This matters because it draws a clean boundary: you can make the decision rule deterministic, but you still need the underlying compute path to be deterministic for bit-identical outputs.

Now let's dive deeper into the most interesting part here: the GPU level. And yes, I do understand how complex it is, but let me break it down in detail.

GPU level: where tiny numeric deltas can come from

Now let's go a bit deeper.
A lot of GPU deep learning kernels rely on heavy parallelism, and many of the primitives inside them are reductions and accumulations across thousands of threads. Floating-point arithmetic is not strictly order independent, so if the accumulation order changes, you can get tiny rounding differences even with identical inputs. cuDNN treats this as a real engineering topic. Its documentation explicitly discusses determinism and notes that bitwise reproducibility is not guaranteed across different GPU architectures. Most of the time, these deltas are invisible. But decode is autoregressive. If the model hits a near-tie between candidates, a tiny delta can flip one token selection once. After that, the prefixes diverge, and every subsequent step is conditioned on a different history. So the runs naturally drift. That’s mechanics, not “model mood.” Why you notice it more under concurrency Under light traffic, your serving path often looks stable. Under real traffic, it adapts. Batch shape, request interleaving, and scheduling decisions can change across runs. Some stacks explicitly acknowledge this tradeoff. vLLM, for example, documents that it does not guarantee reproducible results by default for performance reasons, and it points to batch-invariance mechanisms when you need outputs that are insensitive to batching and scheduling variation in online serving. The correct interpretation So the right interpretation is not that the model became unreliable. It’s this: You assumed repeatability was a property of the request. In serving, repeatability is a property of the execution path. And under pressure, the execution path is allowed to change. 6. What engineering determinism looks like when you take it seriously Most teams say they want determinism. What they often mean is: “I want it stable enough that nobody notices.” That’s not a guarantee. That’s a hope. If reproducibility matters, treat it like a contract. A real contract has three parts. 1. Name the guarantee you actually need Different guarantees are different problems: Repeatable run-to-run on the same host Repeatable under concurrency (batch/order effects) Repeatable across replicas and rollouts Bitwise repeatable vs “functionally equivalent within tolerance” If you don’t name the target, you can’t validate it. 2. Lock the execution envelope, not just the prompt The envelope is everything that can change the compute path: Final serialized context (system, tools, history, templates) Token IDs Model snapshot / revision Tokenizer revision Precision and quantization path Positioning / RoPE configuration Serving features that reshape work (batching policy, caching, paging, speculative verification) This is exactly why PyTorch calls out that reproducibility is conditional across platforms/releases, and why deterministic enforcement can fail fast when a deterministic implementation doesn’t exist. It’s also why vLLM documents reproducibility as something you must explicitly configure for, and highlights batch invariance for reducing batch/scheduling sensitivity. 3. Make determinism observable, so it stops being a debate This is where teams usually lose time: they only notice drift after users see it. Treat it like any other system property: instrument it. 
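Here is a minimal sketch of what that instrumentation can look like: record an output fingerprint next to the execution context for replayed canary requests, so divergence becomes a queryable event instead of an anecdote. Field names are illustrative, and trace_store is whatever append-only sink you already use.

```python
import hashlib
import json
import time

def record_determinism_probe(trace_store, request_id: str, output_text: str, ctx: dict) -> None:
    """Illustrative probe: store an output hash plus the execution context you will
    later want to correlate against (batch shape, cache hits, active features)."""
    event = {
        "request_id": request_id,
        "ts": time.time(),
        "output_sha256": hashlib.sha256(output_text.encode("utf-8")).hexdigest(),
        "batch_size": ctx.get("batch_size"),
        "kv_headroom_bytes": ctx.get("kv_headroom_bytes"),
        "prefix_cache_hit": ctx.get("prefix_cache_hit"),
        "speculative_decoding": ctx.get("speculative_decoding"),
        "engine_version": ctx.get("engine_version"),
    }
    trace_store.append(json.dumps(event))
```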
Correlate divergence with what you already measure: Batch shape and scheduling conditions TTFT and TPOT KV headroom and memory pressure signals p95 and p99 under concurrency Which serving features were active (paging, prefix cache hits, speculative verification) Then something important happens: what “doesn’t make sense” becomes a measurable incident class you can reproduce, explain, and control. And this connects directly to Runtime Observability: Inside the Serving Memory Stack. If you already track TTFT/TPOT, KV headroom, batch shape, and p95/p99, You already have the signals needed to explain and control this class of behavior. Tying memory to trust boundaries Yes, I know this is a rare part, but this is where most teams split into two camps. One camp optimizes performance and treats security as someone else’s job. The other camp locks everything down and wonders why cost explodes. In reality, memory reuse is both a performance strategy and a security decision. Most people treat performance and security as separate conversations. That is a mistake. Memory reuse, batching, prefix caching, and distributed KV transfer create shared surfaces. Shared surfaces create trust boundary demands. So the real engineering posture is: Performance asks you to reuse and share Security asks you to isolate and scope Production asks you to do both, with observability That is why I keep repeating the same line across different domains: Production ready AI is defined by survivability under uncertainty, and memory is where that uncertainty becomes measurable. Closing: What you should take away If you remember one thing, make it this: LLM inference can behave like a stateful memory system first, and a model endpoint second. The serving layer (KV cache growth, memory bandwidth during decode, allocator/paging behavior, and batching/scheduling) is what decides whether your system is stable under real traffic, or only impressive in demos. The hidden thing behind the rarest and most confusing production incidents is not “the model got smarter or dumber.” It’s when you think you’re calling a pure function, but you’re actually running a system that may not be strictly deterministic (GPU execution order, atomics, kernel selection) and/or a system that reuses/moves state (KV, prefix cache, paging, continuous batching). In those conditions, same prompt + same params is not always enough to guarantee bit-identical execution. This is why the references matter, they don’t claim magic. they give you mechanisms. PyTorch explicitly documents that some ops are nondeterministic unless you force deterministic algorithms (and may error if no deterministic implementation exists). CUDA thread scheduling/atomics can execute in different orders across runs, and modern serving stacks (e.g., PagedAttention) explicitly treat KV like virtual memory to deal with fragmentation and utilization limits under batching. What this means, depending on your role Senior Engineer Your win is to stop debugging by folklore. When behavior is “weird!” ask first: did the effective input change (grounding/tool traces), did the runtime state change (KV length/concurrency), or did the execution path change (batching/kernels)? Then prove it with telemetry. Principal Engineer Your job is to make it predictable. Design the serving invariants: cache scoping rules, allocator strategy (paging vs contiguous), admission control, and a determinism stance (what you guarantee, what you don’t, and how you detect drift). 
PyTorch literally gives you switches for deterministic enforcement; use them deliberately, knowing the tradeoffs.

SRE
Treat inference like an OS workload: queues, memory headroom, allocator efficiency, and p95/p99 under concurrency. If you can't see TTFT/TPOT + KV headroom + batching behavior, you're not observing the system you're operating.

CTO / Platform Owner
The win isn't buying bigger GPUs. It's building control points: governance boundaries, isolation/scoping for shared state, determinism expectations, and operational discipline that makes rare failures survivable.

My recommendation

> Be explicit about what you optimize and what you guarantee.
> If you need strict reproducibility, enforce deterministic modes where possible and accept the performance tradeoffs.
> If you need scale, treat KV as a first-class resource: paging/fragmentation and scheduling will bound throughput long before "model quality" does.
> And for both: measure under concurrency, because that's where systems stop sounding like opinions and start behaving like physics.

Acknowledgments

While this article dives into the hidden memory mechanics that shape LLM behavior under load, I'm grateful it was peer-reviewed and challenged before publishing.

A special thank you to Hammad Atta for peer-reviewing this piece and challenging it from a security-and-systems angle.
A special thank you to Luis Beltran for peer-reviewing this piece and challenging it from an AI engineering and deployment angle.
A special thank you to André Melancia for peer-reviewing this piece and challenging it from an operational rigor angle.

If this article resonated, it's probably because I genuinely enjoy the hard parts, the layers most teams avoid because they're messy, subtle, and unforgiving. If you're dealing with real AI serving complexity in production, feel free to connect with me on LinkedIn. I'm always open to serious technical conversations and knowledge sharing with engineers building scalable production-grade systems.

Thanks for reading. I hope this article helps you spot the hidden variables in serving and turn them into repeatable, testable controls. And I'd love to hear what you're seeing in your own deployments.

— Hazem Ali
Microsoft AI MVP, Distinguished AI and ML Engineer / Architect
Advanced Function Calling and Multi-Agent Systems with Small Language Models in Foundry Local

In our previous exploration of function calling with Small Language Models, we demonstrated how to enable local SLMs to interact with external tools using a text-parsing approach with regex patterns. While that method worked, it required manual extraction of function calls from the model's output; functional but fragile.

Today, I'm excited to show you something far more powerful: Foundry Local now supports native OpenAI-compatible function calling with select models. This update transforms how we build agentic AI systems locally, making it remarkably straightforward to create sophisticated multi-agent architectures that rival cloud-based solutions. What once required careful prompt engineering and brittle parsing now works seamlessly through standardized API calls.

We'll build a complete multi-agent quiz application that demonstrates both the elegance of modern function calling and the power of coordinated agent systems. The full source code is available in this GitHub repository, but rather than walking through every line of code, we'll focus on how the pieces work together and what you'll see when you run it.

What's New: Native Function Calling in Foundry Local

As we explored in our guide to running Phi-4 locally with Foundry Local, we ran powerful language models on our local machine. The latest version now supports native function calling for models specifically trained with this capability.

The key difference is architectural. In our weather assistant example, we manually parsed JSON strings from the model's text output using regex patterns while, frankly speaking, meticulously testing and tweaking the system prompt for the umpteenth time 🙄. Now, when you provide tool definitions to supported models, they return structured tool_calls objects that you can directly execute.

Currently, this native function calling capability is available for the Qwen 2.5 family of models in Foundry Local. For this tutorial, we're using the 7B variant, which strikes a great balance between capability and resource requirements.

Quick Setup

Getting started requires just a few steps. First, ensure you have Foundry Local installed. On Windows, use winget install Microsoft.FoundryLocal, and on macOS, use brew install microsoft/foundrylocal/foundrylocal. You'll need version 0.8.117 or later.

Install the Python dependencies in the requirements file, then start your model. The first run will download approximately 4GB:

foundry model run qwen2.5-7b-instruct-cuda-gpu

If you don't have a compatible GPU, use the CPU version instead, or you can specify any other Qwen 2.5 variant that suits your hardware. I have set a DEFAULT_MODEL_ALIAS variable in the utils/foundry_client.py file that you can modify to use different models.

Keep this terminal window open. The model needs to stay running while you develop and test your application.

Understanding the Architecture

Before we dive into running the application, let's understand what we're building. Our quiz system follows a multi-agent architecture where specialized agents handle distinct responsibilities, coordinated by a central orchestrator.

The flow works like this: when you ask the system to generate a quiz about photosynthesis, the orchestrator agent receives your message, understands your intent, and decides which tool to invoke.
It doesn't try to generate the quiz itself, instead, it calls a tool that creates a specialist QuizGeneratorAgent focused solely on producing well-structured quiz questions. Then there's another agent, reviewAgent, that reviews the quiz with you. The project structure reflects this architecture: quiz_app/ ├── agents/ # Base agent + specialist agents ├── tools/ # Tool functions the orchestrator can call ├── utils/ # Foundry client connection ├── data/ ├── quizzes/ # Generated quiz JSON files │── responses/ # User response JSON files └── main.py # Application entry point The orchestrator coordinates three main tools: generate_new_quiz, launch_quiz_interface, and review_quiz_interface. Each tool either creates a specialist agent or launches an interactive interface (Gradio), handling the complexity so the orchestrator can focus on routing and coordination. How Native Function Calling Works When you initialize the orchestrator agent in main.py, you provide two things: tool schemas that describe your functions to the model, and a mapping of function names to actual Python functions. The schemas follow the OpenAI function calling specification, describing each tool's purpose, parameters, and when it should be used. Here's what happens when you send a message to the orchestrator: The agent calls the model with your message and the tool schemas. If the model determines a tool is needed, it returns a structured tool_calls attribute containing the function name and arguments as a proper object—not as text to be parsed. Your code executes the tool, creates a message with "role": "tool" containing the result, and sends everything back to the model. The model can then either call another tool or provide its final response. The critical insight is that the model itself controls this flow through a while loop in the base agent. Each iteration represents the model examining the current state, deciding whether it needs more information, and either proceeding with another tool call or providing its final answer. You're not manually orchestrating when tools get called; the model makes those decisions based on the conversation context. Seeing It In Action Let's walk through a complete session to see how these pieces work together. When you run python main.py, you'll see the application connect to Foundry Local and display a welcome banner: Now type a request like "Generate a 5 question quiz about photosynthesis." Watch what happens in your console: The orchestrator recognized your intent, selected the generate_new_quiz tool, and extracted the topic and number of questions from your natural language request. Behind the scenes, this tool instantiated a QuizGeneratorAgent with a focused system prompt designed specifically for creating quiz JSON. The agent used a low temperature setting to ensure consistent formatting and generated questions that were saved to the data/quizzes folder. This demonstrates the first layer of the multi-agent architecture: the orchestrator doesn't generate quizzes itself. It recognizes that this task requires specialized knowledge about quiz structure and delegates to an agent built specifically for that purpose. Now request to take the quiz by typing "Take the quiz." The orchestrator calls a different tool and Gradio server is launched. Click the link to open in a browser window displaying your quiz questions. 
This tool demonstrates how function calling can trigger complex interactions: it reads the quiz JSON, dynamically builds a user interface with radio buttons for each question, and handles the submission flow. After you answer the questions and click submit, the interface saves your responses to the data/responses folder and closes the Gradio server. The orchestrator reports completion:

The system now has two JSON files: one containing the quiz questions with correct answers, and another containing your responses. This separation of concerns is important: the quiz generation phase doesn't need to know about response collection, and the response collection doesn't need to know how quizzes are created. Each component has a single, well-defined responsibility.

Now request a review. The orchestrator calls the third tool:

A new chat interface opens, and here's where the multi-agent architecture really shines. The ReviewAgent is instantiated with full context about both the quiz questions and your answers. Its system prompt includes a formatted view of each question, the correct answer, your answer, and whether you got it right. This means when the interface opens, you immediately see personalized feedback:

The Multi-Agent Pattern

Multi-agent architectures solve complex problems by coordinating specialized agents rather than building monolithic systems. This pattern is particularly powerful for local SLMs. A coordinator agent routes tasks to specialists, each optimized for narrow domains with focused system prompts and specific temperature settings. You can use a 1.7B model for structured data generation, a 7B model for conversations, and a 4B model for reasoning, all orchestrated by a lightweight coordinator. This is more efficient than requiring one massive model for everything.

Foundry Local's native function calling makes this straightforward. The coordinator reliably invokes tools that instantiate specialists, with structured responses flowing back through proper tool messages. The model manages the coordination loop: deciding when it needs another specialist, when it has enough information, and when to provide a final answer.

In our quiz application, the orchestrator routes user requests but never tries to be an expert in quiz generation, interface design, or tutoring. The QuizGeneratorAgent focuses solely on creating well-structured quiz JSON using constrained prompts and low temperature. The ReviewAgent handles open-ended educational dialogue with embedded quiz context and higher temperature for natural conversation. The tools abstract away file management, interface launching, and agent instantiation; the orchestrator just knows "this tool launches quizzes" without needing implementation details.

This pattern scales effortlessly. If you wanted to add a new capability like study guides or flashcards, you could easily create a new tool or specialist. The orchestrator gains these capabilities automatically from the tool schemas you define, without modifying its core logic. This same pattern powers production systems with dozens of specialists handling retrieval, reasoning, execution, and monitoring, each excelling in its domain while the coordinator ensures seamless collaboration.
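Stripped down to the bare loop, the pattern that makes this delegation work looks roughly like the sketch below. It is illustrative rather than a copy of the repository's agent classes: the endpoint URL and port are placeholders (Foundry Local assigns a dynamic port, which the project resolves in utils/foundry_client.py), and the tool body is a stub standing in for the real QuizGeneratorAgent.

```python
import json
from openai import OpenAI

# Illustrative endpoint and model id; Foundry Local exposes an OpenAI-compatible API locally.
client = OpenAI(base_url="http://localhost:5273/v1", api_key="not-needed-locally")

tools = [{
    "type": "function",
    "function": {
        "name": "generate_new_quiz",
        "description": "Create a quiz on a topic and save it as JSON.",
        "parameters": {
            "type": "object",
            "properties": {
                "topic": {"type": "string"},
                "num_questions": {"type": "integer"},
            },
            "required": ["topic", "num_questions"],
        },
    },
}]

def generate_new_quiz(topic: str, num_questions: int) -> str:
    # Stub: the real tool instantiates a specialist QuizGeneratorAgent.
    return f"Saved a {num_questions}-question quiz about {topic}."

TOOL_MAP = {"generate_new_quiz": generate_new_quiz}
messages = [{"role": "user", "content": "Generate a 5 question quiz about photosynthesis."}]

while True:
    reply = client.chat.completions.create(
        model="qwen2.5-7b-instruct", messages=messages, tools=tools
    )
    msg = reply.choices[0].message
    if not msg.tool_calls:                        # no more tools needed: final answer
        print(msg.content)
        break
    messages.append(msg)                          # keep the assistant's tool-call turn
    for tc in msg.tool_calls:                     # execute each structured tool call
        args = json.loads(tc.function.arguments)
        result = TOOL_MAP[tc.function.name](**args)
        messages.append({"role": "tool", "tool_call_id": tc.id, "content": result})
```

The while loop is the whole trick: the model keeps deciding whether it needs another tool or has enough to answer, and every tool result flows back as a structured "tool" message rather than text to be re-parsed.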
Why This Matters

The transition from text-parsing to native function calling enables a fundamentally different approach to building AI applications. With text parsing, you're constantly fighting against the unpredictability of natural language output. A model might decide to explain why it's calling a function before outputting the JSON, or it might format the JSON slightly differently than your regex expects, or it might wrap it in markdown code fences. Native function calling eliminates this entire class of problems. The model is trained to output tool calls as structured data, separate from its conversational responses.

The multi-agent aspect builds on this foundation. Because function calling is reliable, you can confidently delegate to specialist agents knowing they'll integrate smoothly with the orchestrator. You can chain tool calls: the orchestrator might generate a quiz, then immediately launch the interface to take it, based on a single user request like "Create and give me a quiz about machine learning." The model handles this orchestration intelligently because the tool results flow back as structured data it can reason about.

Running everything locally through Foundry Local adds another dimension of value, and I am genuinely excited about this (hopefully the Phi models get this functionality soon). You can experiment freely, iterate quickly, and deploy solutions that run entirely on your infrastructure. For educational applications like our quiz system, this means students can interact with the AI tutor as much as they need without cost concerns.

Getting Started With Your Own Multi-Agent System

The complete code for this quiz application is available in the GitHub repository, and I encourage you to clone it and experiment. Try modifying the tool schemas to see how the orchestrator's behavior changes. Add a new specialist agent for a different task. Adjust the system prompts to see how agent personalities and capabilities shift.

Think about the problems you're trying to solve. Could they benefit from having different specialists handling different aspects? A customer service system might have agents for order lookup, refund processing, and product recommendations. A research assistant might have agents for web search, document summarization, and citation formatting. A coding assistant might have agents for code generation, testing, and documentation.

Start small, perhaps with two or three specialist agents for a specific domain. Watch how the orchestrator learns to route between them based on the tool descriptions you provide. You'll quickly see opportunities to add more specialists, refine the existing ones, and build increasingly sophisticated systems that leverage the unique strengths of each agent while presenting a unified, intelligent interface to your users.

In the next entry, we will be deploying our quiz app, which will mark the end of our journey in Foundry and SLMs these past few weeks. I hope you are as excited as I am! Thanks for reading.

AI Didn't Break Your Production — Your Architecture Did
Most AI systems don’t fail in the lab. They fail the moment production touches them. I’m Hazem Ali — Microsoft AI MVP, Principal AI & ML Engineer / Architect, and Founder & CEO of Skytells. With a strong foundation in AI and deep learning from low-level fundamentals to production-scale, backed by rigorous cybersecurity and software engineering expertise, I design and deliver enterprise AI systems end-to-end. I often speak about what happens after the pilot goes live: real users arrive, data drifts, security constraints tighten, and incidents force your architecture to prove it can survive. My focus is building production AI with a security-first mindset: identity boundaries, enforceable governance, incident-ready operations, and reliability at scale. My mission is simple: Architect and engineer secure AI systems that operate safely, predictably, and at scale in production. And here’s the hard truth: AI initiatives rarely fail because the model is weak. They fail because the surrounding architecture was never engineered for production reality. - Hazem Ali You see this clearly when teams bolt AI onto an existing platform. In Azure-based environments, the foundation can be solid—identity, networking, governance, logging, policy enforcement, and scale primitives. But that doesn’t make the AI layer production-grade by default. It becomes production-grade only when the AI runtime is engineered like a first-class subsystem with explicit boundaries, control points, and designed failure behavior. A quick moment from the field I still remember one rollout that looked perfect on paper. Latency was fine. Error rate was low. Dashboards were green. Everyone was relaxed. Then a single workflow started creating the wrong tickets, not failing or crashing. It was confidently doing the wrong thing at scale. It took hours before anyone noticed, because nothing was broken in the traditional sense. When we finally traced it, the model was not the root cause. The system had no real gates, no replayable trail, and tool execution was too permissive. The architecture made it easy for a small mistake to become a widespread mess. That is the gap I’m talking about in this article. Production Failure Taxonomy This is the part most teams skip because it is not exciting, and it is not easy to measure in a demo. When AI fails in production, the postmortem rarely says the model was bad. It almost always points to missing boundaries, over-privileged execution, or decisions nobody can trace. So if your AI can take actions, you are no longer shipping a chat feature. You are operating a runtime that can change state across real systems, that means reliability is not just uptime. It is the ability to limit blast radius, reproduce decisions, and stop or degrade safely when uncertainty or risk spikes. You can usually tell early whether an AI initiative will survive production. Not because the model is weak, but because the failure mode is already baked into the architecture. Here are the ones I see most often. 1. Healthy systems that are confidently wrong Uptime looks perfect. Latency is fine. And the output is wrong. This is dangerous because nothing alerts until real damage shows up. 2. The agent ends up with more authority than the user The user asks a question. The agent has tools and credentials. Now it can do things the user never should have been able to do in that moment. 3. Each action is allowed, but the chain is not Read data, create ticket, send message. All approved individually. 
Put together, it becomes a capability nobody reviewed. 4. Retrieval becomes the attack path Most teams worry about prompt injection. Fair. But a poisoned or stale retrieval layer can be worse, because it feeds the model the wrong truth. 5. Tool calls turn mistakes into incidents The moment AI can change state—config, permissions, emails, payments, or data—a mistake is no longer a bad answer. It is an incident. 6. Retries duplicate side effects Timeouts happen. Retries happen. If your tool calls are not safe to repeat, you will create duplicate tickets, refunds, emails, or deletes. Next, let’s talk about what changes when you inject probabilistic behavior into a deterministic platform. In the Field: Building and Sharing Real-World AI In December 2025, I had the chance to speak and engage with builders across multiple AI and technology events, sharing what I consider the most valuable part of the journey: the engineering details that show up when AI meets production reality. This photo captures one of those moments: real conversations with engineers, architects, and decision-makers about what it truly takes to ship production-grade AI. During my session, Designing Scalable and Secure Architecture at the Enterprise Scale I walked through the ideas in this article live on stage then went deeper into the engineering reality behind them: from zero-trust boundaries and runtime policy enforcement to observability, traceability, and safe failure design, The goal wasn’t to talk about “AI capability,” but to show how to build AI systems that operate safely and predictably at scale in production. Deterministic platforms, probabilistic behavior Most production platforms are built for deterministic behavior: defined contracts, predictable services, stable outputs. AI changes the physics. You introduce probabilistic behavior into deterministic pipelines and your failure modes multiply. An AI system can be confidently wrong while still looking “healthy” through basic uptime dashboards. That’s why reliability in production AI is rarely about “better prompts” or “higher model accuracy.” It’s about engineering the right control points: identity boundaries, governance enforcement, behavioral observability, and safe degradation. In other words: the model is only one component. The system is the product. Production AI Control Plane Here’s the thing. Once you inject probabilistic behavior into a deterministic platform, you need more than prompts and endpoints. You need a control plane. Not a fancy framework. Just a clear place in the runtime where decisions get bounded, actions get authorized, and behavior becomes explainable when something goes wrong. This is the simplest shape I have seen work in real enterprise systems. The control plane components Orchestrator Owns the workflow. Decides what happens next, and when the system should stop. Retrieval Brings in context, but only from sources you trust and can explain later. Prompt assembly Builds the final input to the model, including constraints, policy signals, and tool schemas. Model call Generates the plan or the response. It should never be trusted to execute directly. Policy Enforcement Point The gate before any high impact step. It answers: is this allowed, under these conditions, with these constraints. Tool Gateway The firewall for actions. Scopes every operation, validates inputs, rate-limits, and blocks unsafe calls. Audit log and trace store A replayable chain for every request. If you cannot replay it, you cannot debug it. 
Risk engine Detects prompt injection signals, anomalous sessions, uncertainty spikes, and switches the runtime into safer modes. Approval flow For the few actions that should never be automatic. It is the line between assistance and damage. If you take one idea from this section, let it be this. The model is not where you enforce safety. Safety lives in the control plane. Next, let’s talk about the most common mistake teams make right after they build the happy-path pipeline. Treating AI like a feature. The common architectural trap: treating AI like a feature Many teams ship AI like a feature: prompt → model → response. That structure demos well. In production, it collapses the moment AI output influences anything stateful tickets, approvals, customer messaging, remediation actions, or security decisions. At that point, you’re not “adding AI.” You’re operating a semi-autonomous runtime. The engineering questions become non-negotiable: Can we explain why the system responded this way? Can we bound what it’s allowed to do? Can we contain impact when it’s wrong? Can we recover without human panic? If those answers aren’t designed into the architecture, production becomes a roulette wheel. Governance is not a document It’s a runtime enforcement capability Most governance programs fail because they’re implemented as late-stage checklists. In production, governance must live inside the execution path as an enforceable mechanism, A Policy Enforcement Point (PEP) that evaluates every high-impact step before it happens. At the moment of execution, your runtime must answer a strict chain of authorization questions: 1. What tools is this agent attempting to call? Every tool invocation is a privilege boundary. Your runtime must identify the tool, the operation, and the intended side effect (read vs write, safe vs state-changing). 2. Does the tool have the right permissions to run for this agent? Even before user context, the tool itself must be runnable by the agent’s workload identity (service principal / managed identity / workload credentials). If the agent identity can’t execute the tool, the call is denied period. 3. If the tool can run, is the agent permitted to use it for this user? This is the missing piece in most systems: delegation. The agent might be able to run the tool in general, but not on behalf of this user, in this tenant, in this environment, for this task category. This is where you enforce: user role / entitlement tenant boundaries environment (prod vs staging) session risk level (normal vs suspicious) 4. If yes, which tasks/operations are permitted? Tools are too broad. Permissions must be operation-scoped. Not “Jira tool allowed.” But “Jira: create ticket only, no delete, no project-admin actions.” Not “Database tool allowed.” But “DB: read-only, specific schema, specific columns, row-level filters.” This is ABAC/RBAC + capability-based execution. 5. What data scope is allowed? Even a permitted tool operation must be constrained by data classification and scope: public vs internal vs confidential vs PII row/column filters time-bounded access purpose limitation (“only for incident triage”) If the system can’t express data scope at runtime, it can’t claim governance. 6. What operations require human approval? Some actions are inherently high risk: payments/refunds changing production configs emailing customers deleting data executing scripts The policy should return “REQUIRE_APPROVAL” with clear obligations (what must be reviewed, what evidence is required, who can approve). 7. 
What actions are forbidden under certain risk conditions? Risk-aware policy is the difference between governance and theater. Examples: If prompt injection signals are high → disable tool execution If session is anomalous → downgrade to read-only mode If data is PII + user not entitled → deny and redact If environment is prod + request is destructive → block regardless of model confidence The key engineering takeaway Governance works only when it’s enforceable, runtime-evaluated, and capability-scoped: Agent identity answers: “Can it run at all?” Delegation answers: “Can it run for this user?” Capabilities answer: “Which operations exactly?” Data scope answers: “How much and what kind of data?” Risk gates + approvals answer: “When must it stop or escalate?” If policy can’t be enforced at runtime, it isn’t governance. It’s optimism. Safe Execution Patterns Policy answers whether something is allowed. Safe execution answers what happens when things get messy. Because they will, Models time out, Retries happen, Inputs are adversarial. People ask for the wrong thing. Agents misunderstand. And when tools can change state, small mistakes turn into real incidents. These patterns are what keep the system stable when the world is not. 👈 Two-phase execution Do not execute directly from a model output. First phase: propose a plan and a dry-run summary of what will change. Second phase: execute only after policy gates pass, and approval is collected if required. Idempotency for every write If a tool call can create, refund, email, delete, or deploy, it must be safe to retry. Every write gets an idempotency key, and the gateway rejects duplicates. This one change prevents a huge class of production pain. Default to read-only when risk rises When injection signals spike, when the session looks anomalous, when retrieval looks suspicious, the system should not keep acting. It should downgrade. Retrieve, explain, and ask. No tool execution. Scope permissions to operations, not tools Tools are too broad. Do not allow Jira. Allow create ticket in these projects, with these fields. Do not allow database access. Allow read-only on this schema, with row and column filters. Rate limits and blast radius caps Agents should have a hard ceiling. Max tool calls per request. Max writes per session. Max affected entities. If the cap is hit, stop and escalate. A kill switch that actually works You need a way to disable tool execution across the fleet in one move. When an incident happens, you do not want to redeploy code. You want to stop the bleeding. If you build these in early, you stop relying on luck. You make failure boring, contained, and recoverable. Think for scale, in the Era of AI for AI I want to zoom out for a second, because this is the shift most teams still design around. We are not just adding AI to a product. We are entering a phase where parts of the system can maintain and improve themselves. Not in a magical way. In a practical, engineering way. A self-improving system is one that can watch what is happening in production, spot a class of problems, propose changes, test them, and ship them safely, while leaving a clear trail behind it. It can improve code paths, adjust prompts, refine retrieval rules, update tests, and tighten policies. Over time, the system becomes less dependent on hero debugging at 2 a.m. What makes this real is the loop, not the model. Signals come in from logs, traces, incidents, drift metrics, and quality checks. The system turns those signals into a scoped plan. 
Then it passes through gates: policy and permissions, safe scope, testing, and controlled rollout. If something looks wrong, it stops, downgrades to read-only, or asks for approval. This is why scale changes. In the old world, scale meant more users and more traffic. In the AI for AI world, scale also means more autonomy. One request can trigger many tool calls. One workflow can spawn sub-agents. One bad signal can cause retries and cascades. So the question is not only can your system handle load. The question is can your system handle multiplication without losing control. If you want self-improving behavior, you need three things to be true: The system is allowed to change only what it can prove is safe to change. Every change is testable and reversible. Every action is traceable, so you can replay why it happened. When those conditions exist, self-improvement becomes an advantage. When they do not, self-improvement becomes automated risk. And this leads straight into governance, because in this era governance is not a document. It is the gate that decides what the system is allowed to improve, and under which conditions. Observability: uptime isn’t enough — you need traceability and causality Traditional observability answers: Is the service up. Is it fast. Is it erroring. That is table stakes. Production AI needs a deeper truth: why did it do that. Because the system can look perfectly healthy while still making the wrong decision. Latency is fine. Error rate is fine. Dashboards are green. And the output is still harmful. To debug that kind of failure, you need causality you can replay and audit: Input → context retrieval → prompt assembly → model response → tool invocation → final outcome Without this chain, incident response becomes guesswork. People argue about prompts, blame the model, and ship small patches that do not address the real cause. Then the same issue comes back under a different prompt, a different document, or a slightly different user context. The practical goal is simple. Every high-impact action should have a story you can reconstruct later. What did the system see. What did it pull. What did it decide. What did it touch. And which policy allowed it. When you have that, you stop chasing symptoms. You can fix the actual failure point, and you can detect drift before users do. RAG Governance and Data Provenance Most teams treat retrieval as a quality feature. In production, retrieval is a security boundary. Because the moment a document enters the context window, it becomes part of the system’s brain for that request. If retrieval pulls the wrong thing, the model can behave perfectly and still lead you to a bad outcome. I learned this the hard way, I have seen systems where the model was not the problem at all. The problem was a single stale runbook that looked official, ranked high, and quietly took over the decision. Everything downstream was clean. The agent followed instructions, called the right tools, and still caused damage because the truth it was given was wrong. I keep repeating one line in reviews, and I mean it every time: Retrieval is where truth enters the system. If you do not control that, you are not governing anything. - Hazem Ali So what makes retrieval safe enough for enterprise use? Provenance on every chunk Every retrieved snippet needs a label you can defend later: source, owner, timestamp, and classification. If you cannot answer where it came from, you cannot trust it for actions. Staleness budgets Old truth is a real risk. 
A runbook from last quarter can be more dangerous than no runbook at all. If content is older than a threshold, the system should say it is old, and either confirm or downgrade to read-only. No silent reliance. Allowlisted sources per task Not all sources are valid for all jobs. Incident response might allow internal runbooks. Customer messaging might require approved templates only. Make this explicit. Retrieval should not behave like a free-for-all search engine. Scope and redaction before the model sees it Row and column limits, PII filtering, secret stripping, tenant boundaries. Do it before prompt assembly, not after the model has already seen the data. Citation requirement for high-impact steps If the system is about to take a high-impact action, it should be able to point to the sources that justified it. If it cannot, it should stop and ask. That one rule prevents a lot of confident nonsense. Monitor retrieval like a production dependency Track which sources are being used, which ones cause incidents, and where drift is coming from. Retrieval quality is not static. Content changes. Permissions change. Rankings shift. Behavior follows. When you treat retrieval as governance, the system stops absorbing random truth. It consumes controlled truth, with ownership, freshness, and scope. That is what production needs. Security: API keys aren’t a strategy when agents can act The highest-impact AI incidents are usually not model hacks. They are architectural failures: over-privileged identities, blurred trust boundaries, unbounded tool access, and unsafe retrieval paths. Once an agent can call tools that mutate state, treat it like a privileged service, not a chatbot. Least privilege by default Explicit authorization boundaries Auditable actions Containment-first design Clear separation between user intent and system authority This is how you prevent a prompt injection from turning into a system-level breach. If you want the deeper blueprint and the concrete patterns for securing agents in practice, I wrote a full breakdown here: Zero-Trust Agent Architecture: How to Actually Secure Your Agents What “production-ready AI” actually means Production-ready AI is not defined by a benchmark score. It’s defined by survivability under uncertainty. A production-grade AI system can: Explain itself with traceability. Enforce policy at runtime. Contain blast radius when wrong. Degrade safely under uncertainty. Recover with clear operational playbooks. If your system can’t answer “how does it fail?” you don’t have production AI yet.. You have a prototype with unmanaged risk. How Azure helps you engineer production-grade AI Azure doesn’t “solve” production-ready AI by itself, it gives you the primitives to engineer it correctly. The difference between a prototype and a survivable system is whether you translate those primitives into runtime control points: identity, policy enforcement, telemetry, and containment. 1. Identity-first execution (kill credential sprawl, shrink blast radius) A production AI runtime should not run on shared API keys or long-lived secrets. In Azure environments, the most important mindset shift is: every agent/workflow must have an identity and that identity must be scoped. Guidance Give each agent/orchestrator a dedicated identity (least privilege by default). Separate identities by environment (prod vs staging) and by capability (read vs write). 
Treat tool invocation as a privileged service call, never “just a function.”

Why this matters
If an agent is compromised (or tricked via prompt injection), identity boundaries decide whether it can read one table or take down a whole environment.

2. Policy as enforcement (move governance into the execution path)
The core idea of this article, that governance is runtime enforcement, maps perfectly to Azure’s broader governance philosophy: policies must be enforceable, not advisory.

Guidance
Create an explicit Policy Enforcement Point (PEP) in your agent runtime.
Make the PEP decision mandatory before executing any tool call or data access.
Use “allow + obligations” patterns: allow only with constraints (redaction, read-only mode, rate limits, approval gates, extra logging).

Why this matters
Governance fails when it’s a document. It works when it’s compiled into runtime decisions.

3. Observability that explains behavior
Azure’s telemetry stack is valuable because it’s designed for distributed systems: correlation, tracing, and unified logs. Production AI needs the same, plus decision traceability.

Guidance
Emit a trace for every request across: retrieval → prompt assembly → model call → tool calls → outcome.
Log policy decisions (allow/deny/require approval) with policy version + obligations applied.
Capture “why” signals: risk score, classifier outputs, injection signals, uncertainty indicators.

Why this matters
When incidents happen, you don’t just debug latency — you debug behavior. Without causality, you can’t root-cause drift or containment failures.

4. Zero-trust boundaries for tools and data
Azure environments tend to be strong at network segmentation and access control. That foundation is exactly what AI systems need, because AI introduces adversarial inputs by default.

Guidance
Put a Tool Gateway in front of tools (Jira, email, payments, infra) and enforce scopes there.
Restrict data access by classification (PII/secret zones) and enforce row/column constraints.
Degrade safely: if risk is high, drop to read-only, disable tools, or require approval.

Why this matters
Prompt injection doesn’t become catastrophic when your system has hard boundaries and graceful failure modes.

5. Practical “production-ready” checklist (Azure-aligned, engineering-first)
If you want a concrete way to apply this:
Identity: every runtime has a scoped identity; no shared secrets
PEP: every tool/data action is gated by policy, with obligations
Traceability: full chain captured and correlated end-to-end
Containment: safe degradation + approval gates for high-risk actions
Auditability: policy versions and decision logs are immutable and replayable
Environment separation: prod ≠ staging identities, tools, and permissions

Outcome
This is how you turn “we integrated AI” into “we operate AI safely at scale.”

Operating Production AI
A lot of teams build the architecture and still struggle, because production is not a diagram. It is a living system. So here is the operating model I look for when I want to trust an AI runtime in production.

The few SLOs that actually matter
Trace completeness: for high-impact requests, can we reconstruct the full chain every time, without missing steps.
Policy coverage: what percentage of tool calls and sensitive reads pass through the policy gate, with a recorded decision.
Action correctness: not model accuracy. Real-world correctness. Did the system take the right action, on the right target, with the right scope.
Time to contain: when something goes wrong, how fast can we stop tool execution, downgrade to read-only, or isolate a capability.
Drift detection time: how quickly do we notice behavioral drift before users do.

The runbooks you must have
If you operate agents, you need simple playbooks for predictable bad days:
Injection spike → safe mode, block tool execution, force approvals
Retrieval poisoning suspicion → restrict sources, raise freshness requirements, require citations
Retry storm → enforce idempotency, rate limits, and circuit breakers
Tool gateway instability → fail closed for writes, degrade safely for reads
Model outage → fall back to deterministic paths, templates, or human escalation

Clear ownership
Someone has to own the runtime, not just the prompts.
Platform owns the gates, tool gateway, audit, and tracing
Product owns workflows and user-facing behavior
Security owns policy rules, high-risk approvals, and incident procedures
When these pieces are real, production becomes manageable. When they are not, you rely on luck and hero debugging.

The 60-second production readiness checklist
If you want a fast sanity check, here it is.
Every agent has an identity, scoped per environment
No shared API keys for privileged actions
Every tool call goes through a policy gate with a logged decision
Permissions are scoped to operations, not whole tools
Writes are idempotent, retries cannot duplicate side effects
Tool gateway validates inputs, scopes data, and rate-limits actions
There is a safe mode that disables tools under risk
There is a kill switch that stops tool execution across the fleet
Retrieval is allowlisted, provenance-tagged, and freshness-aware
High-impact actions require citations or they stop and ask
Audit logs are immutable enough to trust later
Traces are replayable end-to-end for any incident
If most of these are missing, you do not have production AI yet. You have a prototype with unmanaged risk.

A quick note
In Azure-based enterprises, you already have strong primitives that mirror the mindset production AI requires: identity-first access control (Microsoft Entra ID), secure workload authentication patterns (managed identities), and deep telemetry foundations (Azure Monitor / Application Insights). The key is translating that discipline into the AI runtime so governance, identity, and observability aren’t external add-ons, but part of how AI executes and acts.

Closing
Models will keep evolving. Tooling will keep improving. But enterprise AI success still comes down to systems engineering. If you’re building production AI today, what has been the hardest part in your environment: governance, observability, security boundaries, or operational reliability? If you’re dealing with deep technical challenges around production AI, agent security, RAG governance, or operational reliability, feel free to connect with me on LinkedIn. I’m open to technical discussions and architecture reviews. Thanks for reading. — Hazem Ali

PrivyDoc: Building a Zero-Data-Leak AI with Foundry Local & Microsoft's Agent Framework
Tired of choosing between powerful AI insights and sacrificing your data's privacy? PrivyDoc offers a groundbreaking solution. In this article, Microsoft MVP in AI Shivam Goyal introduces his innovative project that brings robust AI document analysis directly to your local machine, ensuring zero data ever leaves your device. Discover how PrivyDoc leverages two cutting-edge Microsoft technologies:
Foundry Local: the secret sauce for 100% on-device AI processing, allowing advanced models to run securely without cloud dependency.
Microsoft Agent Framework: the intelligent orchestrator that builds a sophisticated multi-agent pipeline, handling everything from text extraction and entity recognition to summarization and sentiment analysis.
Learn about PrivyDoc's intuitive web UI, its multi-format support, and crucial features that make it perfect for sensitive industries like legal, healthcare, and finance. Say goodbye to privacy concerns and hello to AI-powered document intelligence without compromise.

Level up your Python + AI skills with our complete series
We've just wrapped up our live series on Python + AI, a comprehensive nine-part journey diving deep into how to use generative AI models from Python. The series introduced multiple types of models, including LLMs, embedding models, and vision models. We dug into popular techniques like RAG, tool calling, and structured outputs. We assessed AI quality and safety using automated evaluations and red-teaming. Finally, we developed AI agents using popular Python agents frameworks and explored the new Model Context Protocol (MCP). To help you apply what you've learned, all of our code examples work with GitHub Models, a service that provides free models to every GitHub account holder for experimentation and education. Even if you missed the live series, you can still access all the material using the links below! If you're an instructor, feel free to use the slides and code examples in your own classes. If you're a Spanish speaker, check out the Spanish version of the series. Python + AI: Large Language Models 📺 Watch recording In this session, we explore Large Language Models (LLMs), the models that power ChatGPT and GitHub Copilot. We use Python to interact with LLMs using popular packages like the OpenAI SDK and LangChain. We experiment with prompt engineering and few-shot examples to improve outputs. We also demonstrate how to build a full-stack app powered by LLMs and explain the importance of concurrency and streaming for user-facing AI apps. Slides for this session Code repository with examples: python-openai-demos Python + AI: Vector embeddings 📺 Watch recording In our second session, we dive into a different type of model: the vector embedding model. A vector embedding is a way to encode text or images as an array of floating-point numbers. Vector embeddings enable similarity search across many types of content. In this session, we explore different vector embedding models, such as the OpenAI text-embedding-3 series, through both visualizations and Python code. We compare distance metrics, use quantization to reduce vector size, and experiment with multimodal embedding models. Slides for this session Code repository with examples: vector-embedding-demos Python + AI: Retrieval Augmented Generation 📺 Watch recording In our third session, we explore one of the most popular techniques used with LLMs: Retrieval Augmented Generation. RAG is an approach that provides context to the LLM, enabling it to deliver well-grounded answers for a particular domain. The RAG approach works with many types of data sources, including CSVs, webpages, documents, and databases. In this session, we walk through RAG flows in Python, starting with a simple flow and culminating in a full-stack RAG application based on Azure AI Search. Slides for this session Code repository with examples: python-openai-demos Python + AI: Vision models 📺 Watch recording Our fourth session is all about vision models! Vision models are LLMs that can accept both text and images, such as GPT-4o and GPT-4o mini. You can use these models for image captioning, data extraction, question answering, classification, and more! We use Python to send images to vision models, build a basic chat-with-images app, and create a multimodal search engine. Slides for this session Code repository with examples: openai-chat-vision-quickstart Python + AI: Structured outputs 📺 Watch recording In our fifth session, we discover how to get LLMs to output structured responses that adhere to a schema. 
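As a small taste of what that looks like in practice, here is a minimal sketch. It assumes the openai Python package (1.40 or later), Pydantic, and an OpenAI-compatible endpoint and model that support structured outputs; the CalendarEvent schema and the model name are invented for this example.

```python
# Illustrative sketch: entity extraction with structured outputs.
# Assumes the `openai` SDK reads the API key / base URL from the environment.
from openai import OpenAI
from pydantic import BaseModel

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

client = OpenAI()

completion = client.beta.chat.completions.parse(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "Extract the event information."},
        {"role": "user", "content": "Alice and Bob are going to a science fair on Friday."},
    ],
    response_format=CalendarEvent,  # the SDK converts the Pydantic model to a JSON schema
)

event = completion.choices[0].message.parsed  # a validated CalendarEvent instance
print(event.name, event.participants)
```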
In Python, all you need to do is define a Pydantic BaseModel to get validated output that perfectly meets your needs. We focus on the structured outputs mode available in OpenAI models, but you can use similar techniques with other model providers. Our examples demonstrate the many ways you can use structured responses, such as entity extraction, classification, and agentic workflows. Slides for this session Code repository with examples: python-openai-demos Python + AI: Quality and safety 📺 Watch recording This session covers a crucial topic: how to use AI safely and how to evaluate the quality of AI outputs. There are multiple mitigation layers when working with LLMs: the model itself, a safety system on top, the prompting and context, and the application user experience. We focus on Azure tools that make it easier to deploy safe AI systems into production. We demonstrate how to configure the Azure AI Content Safety system when working with Azure AI models and how to handle errors in Python code. Then we use the Azure AI Evaluation SDK to evaluate the safety and quality of output from your LLM. Slides for this session Code repository with examples: ai-quality-safety-demos Python + AI: Tool calling 📺 Watch recording In the final part of the series, we focus on the technologies needed to build AI agents, starting with the foundation: tool calling (also known as function calling). We define tool call specifications using both JSON schema and Python function definitions, then send these definitions to the LLM. We demonstrate how to properly handle tool call responses from LLMs, enable parallel tool calling, and iterate over multiple tool calls. Understanding tool calling is absolutely essential before diving into agents, so don't skip over this foundational session. Slides for this session Code repository with examples: python-openai-demos Python + AI: Agents 📺 Watch recording In the penultimate session, we build AI agents! We use Python AI agent frameworks such as the new agent-framework from Microsoft and the popular LangGraph framework. Our agents start simple and then increase in complexity, demonstrating different architectures such as multiple tools, supervisor patterns, graphs, and human-in-the-loop workflows. Slides for this session Code repository with examples: python-ai-agent-frameworks-demos Python + AI: Model Context Protocol 📺 Watch recording In the final session, we dive into the hottest technology of 2025: MCP (Model Context Protocol). This open protocol makes it easy to extend AI agents and chatbots with custom functionality, making them more powerful and flexible. We demonstrate how to use the Python FastMCP SDK to build an MCP server running locally and consume that server from chatbots like GitHub Copilot. Then we build our own MCP client to consume the server. Finally, we discover how easy it is to connect AI agent frameworks like LangGraph and Microsoft agent-framework to MCP servers. With great power comes great responsibility, so we briefly discuss the security risks that come with MCP, both as a user and as a developer. Slides for this session Code repository with examples: python-mcp-demo3.5KViews1like0CommentsMicrosoft’s A-Grade Azure AI Stack: From Dissertation Prototype to Smart Campus Pilot
This post isn't just about the Student Support Agent (SSA) I built, which earned me a Distinction. It's about how Microsoft's tools made it possible to go from a rough concept to a robust pilot, proving their developer stack is one of the most convenient and powerful options for building intelligent, ethical, and scalable educational systems.

The Vision: Cutting Through Campus Complexity
University life is full of fragmented systems. Students constantly juggle multiple logins, websites, and interfaces just to check a timetable, book a room, or find a policy. My goal was simple: reduce that cognitive load by creating a unified assistant that could manage all these tasks through a single, intelligent conversation.

The Stack That Made It Possible
The core of the system relied on a few key, interconnected technologies:

Technology | Core Function | Impact
Azure AI Search | Hybrid Data Retrieval | Anchored responses in official documents.
Azure OpenAI | Natural Language Generation | Created human-like, accurate answers.
Semantic Kernel (SK) | Multi-Agent Orchestration | Managed complex workflows and memory.
Azure Speech SDK | Multimodal Interface | Enabled accessible voice input and output.

The foundation was built using Streamlit and FastAPI for rapid prototyping. Building a system that's context-aware, accessible, and extensible is a huge challenge, but it's exactly where the Microsoft AI stack shined.

From Simple Chatbot to Multi-Agent Powerhouse
Early campus chatbots are often single-agent models, great for basic FAQs, but they quickly fail when tasks span multiple services. I used Semantic Kernel (SK), Microsoft's powerful, open-source framework, to build a modular, hub-and-spoke multi-agent system. A central orchestrator routes a request (like "book a study room") to a specialist agent (the Booking Agent), which knows exactly how to handle that task. This modularity was a game-changer: I could add new features (like an Events Agent) without breaking the core system, ensuring the architecture stayed clean and ready for expansion.

Agentic Retrieval-Augmented Generation (Agentic RAG): Trust and Transparency
To ensure the assistant was trustworthy, I used Agentic RAG to ground responses in real campus (Imperial College London) documentation, covering everything from admission fee payments to campus shuttle times. Azure AI Search indexed all handbooks and policies, allowing the assistant to pull relevant chunks of data and then cite the sources directly in its response.

Result: The system avoids common hallucinations by refusing to answer when confidence is low. Students can verify every piece of advice, dramatically improving trust and transparency. A rough sketch of this retrieve-then-cite pattern is shown below.
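To make the idea concrete, here is a rough, hypothetical sketch of that retrieve-then-cite flow in Python. It is not the SSA's actual code: the index name, field names, deployment name, and prompt are assumptions, and it only illustrates the pattern using the azure-search-documents and openai packages.

```python
# Illustrative "retrieve, answer, cite" sketch (not the actual SSA implementation).
# Assumed: an Azure AI Search index "campus-docs" with "content" and "source" fields,
# plus SEARCH_ENDPOINT / SEARCH_KEY and the standard Azure OpenAI environment variables.
import os
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

search = SearchClient(
    endpoint=os.environ["SEARCH_ENDPOINT"],
    index_name="campus-docs",
    credential=AzureKeyCredential(os.environ["SEARCH_KEY"]),
)
llm = AzureOpenAI(api_version="2024-06-01")  # endpoint and key come from env vars

def answer_with_citations(question: str) -> str:
    hits = list(search.search(search_text=question, top=3))  # retrieve relevant chunks
    if not hits:
        return "I couldn't find this in the official documentation, so I'd rather not guess."
    context = "\n\n".join(f"[{h['source']}] {h['content']}" for h in hits)
    response = llm.chat.completions.create(
        model="gpt-4o-mini",  # deployment name is an assumption
        messages=[
            {"role": "system", "content": "Answer only from the provided campus documents "
                                          "and cite the [source] for every claim. If unsure, say so."},
            {"role": "user", "content": f"Documents:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer_with_citations("How do I book a study room in the library?"))
```

Refusing to answer when nothing relevant is retrieved is what keeps an assistant like this honest when the documentation has no answer.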
Results: A Foundation for Scalable Support
A pilot study with 15 students was highly successful:
100% positive feedback on the ease of use and perceived benefit.
93% satisfaction with the voice features.
High trust was established due to transparent citations.
The SSA proved it could save students time by centralising tasks like booking rooms, checking policies, and offering study tips!

Final Thoughts
Microsoft's AI ecosystem didn't just support my dissertation; it shaped it. The tools were reliable, well-documented, and flexible enough to handle real-world complexity. More importantly, they allowed me to focus on student experience, ethics, and pedagogy, rather than wrestling with infrastructure.

If you're a student, educator, or developer looking to build intelligent systems that are transparent, inclusive, and scalable, Microsoft's AI stack is a great place to start! 🙋🏽‍♀️

About Me
I'm Tyana Tshiota, a postgraduate student in Applied Computational Science and Engineering at Imperial College London. Leveraging Microsoft's AI stack and the extensive documentation on Microsoft Learn played a key role in achieving a Distinction in my dissertation. Moving forward, I'm excited to deepen my expertise by pursuing Azure certifications. I'd like to extend my sincere gratitude to my supervisor, Lee_Stott, for his invaluable mentorship and support throughout this project. If you haven't already, check out his insightful posts on the Educator Developer Blog, or try building your own agent with the AI Agents for Beginners curriculum developed by Lee and his team! You can reach out via my LinkedIn if you're interested in smart campus systems, AI in education, collaborative development, or would like to discuss opportunities.

Edge AI for Student Developers: Learn to Run AI Locally
AI isn't just for the cloud anymore. With the rise of Small Language Models (SLMs) and powerful local inference tools, developers can now run intelligent applications directly on laptops, phones, and edge devices—no internet required. If you're a student developer curious about building AI that works offline, privately, and fast, Microsoft's Edge AI for Beginners course is your perfect starting point.

What Is Edge AI?
Edge AI refers to running AI models directly on local hardware—like your laptop, mobile device, or embedded system—without relying on cloud servers. This approach offers:
⚡ Real-time performance
🔒 Enhanced privacy (no data leaves your device)
🌐 Offline functionality
💸 Reduced cloud costs
Whether you're building a chatbot that works without Wi-Fi or optimizing AI for low-power devices, Edge AI is the future of intelligent, responsive apps.

About the Course
Edge AI for Beginners is a free, open-source curriculum designed to help you:
Understand the fundamentals of Edge AI and local inference
Explore Small Language Models like Phi-2, Mistral-7B, and Gemma
Deploy models using tools like Llama.cpp, Olive, MLX, and OpenVINO
Build cross-platform apps that run AI locally on Windows, macOS, Linux, and mobile
The course is hosted on GitHub and includes hands-on labs, quizzes, and real-world examples. You can fork it, remix it, and contribute to the community.

What You'll Learn

Module | Focus
01. Introduction | What is Edge AI and why it matters
02. SLMs | Overview of small language models
03. Deployment | Running models locally with various tools
04. Optimization | Speeding up inference and reducing memory
05. Applications | Building real-world Edge AI apps

Each module is beginner-friendly and includes practical exercises to help you build and deploy your own local AI solutions.

Who Should Join?
Student developers curious about AI beyond the cloud
Hackathon participants looking to build offline-capable apps
Makers and builders interested in privacy-first AI
Anyone who wants to explore the future of on-device intelligence
No prior AI experience required, just a willingness to learn and experiment.

Why It Matters
Edge AI is a game-changer for developers. It enables smarter, faster, and more private applications that work anywhere. By learning how to deploy AI locally, you'll gain skills that are increasingly in demand across industries—from healthcare to robotics to consumer tech. Plus, the course is:
💯 Free and open-source
🧠 Backed by Microsoft's best practices
🧪 Hands-on and project-based
🌐 Continuously updated

Ready to Start?
Head to aka.ms/edgeai-for-beginners and dive into the modules. Whether you're coding in your dorm room or presenting at your next hackathon, this course will help you build smarter AI apps that run right where you need them: on the edge.

Model Mondays S2E13: Open Source Models (Hugging Face)
1. Weekly Highlights 1. Weekly Highlights Here are the key updates we covered in the Season 2 finale: O1 Mini Reinforcement Fine-Tuning (GA): Fine-tune models with as few as ~100 samples using built-in Python code graders. Azure Live Interpreter API (Preview): Real-time speech-to-speech translation supporting 76 input languages and 143 locales with near human-level latency. Agent Factory – Part 5: Connecting agents using open standards like MCP (Model Context Protocol) and A2A (Agent-to-Agent protocol). Ask Ralph by Ralph Lauren: A retail example of agentic AI for conversational styling assistance, built on Azure OpenAI and Foundry’s agentic toolset. VS Code August Release: Brings auto-model selection, stronger safety guards for sensitive edits, and improved agent workflows through new agents.md support. 2. Spotlight – Open Source Models in Azure AI Foundry Guest: Jeff Boudier, VP of Product at Hugging Face Jeff showcased the deep integration between the Hugging Face community and Azure AI Foundry, where developers can access over 10 000 open-source models across multiple modalities—LLMs, speech recognition, computer vision, and even specialized domains like protein modeling and robotics. Demo Highlights Discover models through Azure AI Foundry’s task-based catalog filters. Deploy directly from Hugging Face Hub to Azure with one-click deployment. Explore Use Cases such as multilingual speech recognition and vision-language-action models for robotics. Jeff also highlighted notable models, including: SmoLM3 – a 3 B-parameter model with hybrid reasoning capabilities Qwen 3 Coder – a mixture-of-experts model optimized for coding tasks Parakeet ASR – multilingual speech recognition Microsoft Research protein-modeling collection MAGMA – a vision-language-action model for robotics Integration extends beyond deployment to programmatic access through the Azure CLI and Python SDKs, plus local development via new VS Code extensions. 3. Customer Story – DraftWise (BUILD 2025 Segment) The finale featured a customer spotlight on DraftWise, where CEO James Ding shared how the company accelerates contract drafting with Azure AI Foundry. Problem Legal contract drafting is time-consuming and error-prone. Solution DraftWise uses Azure AI Foundry to fine-tune Hugging Face language models on legal data, generating contract drafts and redline suggestions. Impact Faster drafting cycles and higher consistency Easy model management and deployment with Foundry’s secure workflows Transparent evaluation for legal compliance 4. Community Story – Hugging Face & Microsoft The episode also celebrated the ongoing collaboration between Hugging Face and Microsoft and the impact of open-source AI on the global developer ecosystem. Community Benefits Access to State-of-the-Art Models without licensing barriers Transparent Performance through public leaderboards and benchmarks Rapid Innovation as improvements and bug fixes spread quickly Education & Empowerment via tutorials, docs, and active forums Responsible AI Practices encouraged through community oversight 5. Key Takeaways Open Source AI Is Here to Stay Azure AI Foundry and Hugging Face make deploying, fine-tuning, and benchmarking open models easier than ever. Community Drives Innovation: Collaboration accelerates progress, improves transparency, and makes AI accessible to everyone. Responsible AI and Transparency: Open-source models come with clear documentation, licensing, and community-driven best practices. 
Easy Deployment & Customization: Azure AI Foundry lets you deploy, automate, and customize open models from a single, unified platform. Learn, Build, Share: The open-model ecosystem is a great place for students, developers, and researchers to learn, build, and share their work. Sharda's Tips: How I Wrote This Blog For this final recap, I focused on capturing the energy of the open source AI movement and the practical impact of Hugging Face and Azure AI Foundry collaboration. I watched the livestream, took notes on the demos and interviews, and linked directly to official resources for models, docs, and community sites. Here’s my Copilot prompt for this episode: "Generate a technical blog post for Model Mondays S2E13 based on the transcript and episode details. Focus on open source models, Hugging Face, Azure AI Foundry, and community workflows. Include practical links and actionable insights for developers and students! Learn & Connect Explore Open Models in Azure AI Foundry Hugging Face Leaderboard Responsible AI in Azure Machine Learning Llama-3 by Meta Hugging Face Community Azure AI Documentation About Model Mondays Model Mondays is your weekly Azure AI learning series: 5-Minute Highlights: Latest AI news and product updates 15-Minute Spotlight: Demos and deep dives with product teams 30-Minute AMA Fridays: Ask anything in Discord or the forum Start building: Watch Past Replays Register For AMA Recap Past AMAs Join The Community Don’t build alone! The Azure AI Developer Community is here for real-time chats, events, and support: Join the Discord Explore the Forum About Me I'm Sharda, a Gold Microsoft Learn Student Ambassador focused on cloud and AI. Find me on GitHub, Dev.to, Tech Community, and LinkedIn. In this blog series, I share takeaways from each week’s Model Mondays livestream.293Views0likes0Comments