Vector Drift in Azure AI Search: Three Hidden Reasons Your RAG Accuracy Degrades After Deployment
What Is Vector Drift?

Vector drift occurs when embeddings stored in a vector index no longer accurately represent the semantic intent of incoming queries. Because vector similarity search depends on relative semantic positioning, even small changes in models, data distribution, or preprocessing logic can significantly affect retrieval quality over time.

Unlike schema drift or data corruption, vector drift is subtle:

- The system continues to function
- Queries return results
- But relevance steadily declines

Cause 1: Embedding Model Version Mismatch

What Happens

Documents are indexed using one embedding model, while query embeddings are generated using another. This typically happens due to:

- Model upgrades
- Shared Azure OpenAI resources across teams
- Inconsistent configuration between environments

Why This Matters

Embeddings generated by different models:

- Exist in different vector spaces
- Are not mathematically comparable
- Produce misleading similarity scores

As a result, documents that were previously relevant may no longer rank correctly.

Recommended Practice

A single vector index should be bound to one embedding model and one dimension size for its entire lifecycle. If the embedding model changes, the index must be fully re-embedded and rebuilt.

Cause 2: Incremental Content Updates Without Re-Embedding

What Happens

New documents are continuously added to the index, while existing embeddings remain unchanged. Over time, new content introduces:

- Updated terminology
- Policy changes
- New product or domain concepts

Because semantic meaning is relative, the vector space shifts—but older vectors do not.
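The recommended practice for Cause 1, binding an index to one embedding model and one dimension size, can be enforced with a fail-fast guard at query time. A minimal sketch, assuming the index's metadata records its embedding model and dimension under hypothetical field names:

```python
def assert_embedding_compatibility(index_meta: dict, query_model: str, query_vector: list) -> None:
    """Fail fast when the query-side embedding config diverges from the index contract."""
    if index_meta["embedding_model"] != query_model:
        raise ValueError(
            f"Model mismatch: index built with {index_meta['embedding_model']!r}, "
            f"query embedded with {query_model!r}; the vectors are not comparable."
        )
    if index_meta["dimensions"] != len(query_vector):
        raise ValueError(
            f"Dimension mismatch: index expects {index_meta['dimensions']}, "
            f"query vector has {len(query_vector)}."
        )

# An index bound to one model and dimension size for its lifecycle:
index_meta = {"embedding_model": "text-embedding-3-small", "dimensions": 4}
assert_embedding_compatibility(index_meta, "text-embedding-3-small", [0.1, 0.2, 0.3, 0.4])  # passes
```

Running this check before every vector query turns a silent relevance degradation into an immediate, diagnosable error.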
Observable Impact

- Recently indexed documents dominate retrieval results
- Older but still valid content becomes harder to retrieve
- Recall degrades without obvious system errors

Practical Guidance

Treat embeddings as living assets, not static artifacts:

- Schedule periodic re-embedding for stable corpora
- Re-embed high-impact or frequently accessed documents
- Trigger re-embedding when domain vocabulary changes meaningfully

Declining similarity scores or reduced citation coverage are often early signals of drift.

Cause 3: Inconsistent Chunking Strategies

What Happens

Chunk size, overlap, or parsing logic is adjusted over time, but previously indexed content is not updated. The index ends up containing chunks created using different strategies.

Why This Causes Drift

Different chunking strategies produce:

- Different semantic density
- Different contextual boundaries
- Different retrieval behavior

This inconsistency reduces ranking stability and makes retrieval outcomes unpredictable.

Governance Recommendation

Chunking strategy should be treated as part of the index contract:

- Use one chunking strategy per index
- Store chunk metadata (for example, chunk_version)
- Rebuild the index when chunking logic changes

Design Principles

- Versioned embedding deployments
- Scheduled or event-driven re-embedding pipelines
- Standardized chunking strategy
- Retrieval quality observability
- Prompt and response evaluation

Key Takeaways

- Vector drift is an architectural concern, not a service defect
- It emerges from model changes, evolving data, and preprocessing inconsistencies
- Long-lived RAG systems require embedding lifecycle management
- Azure AI Search provides the controls needed to mitigate drift effectively

Conclusion

Vector drift is an expected characteristic of production RAG systems. Teams that proactively manage embedding models, chunking strategies, and retrieval observability can maintain reliable relevance as their data and usage evolve.
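The chunk_version metadata recommended under Cause 3 makes stale chunks queryable. A sketch of that bookkeeping, using hypothetical field names, that flags chunks indexed under an older chunking strategy for re-chunking and re-embedding:

```python
CURRENT_CHUNK_VERSION = "v3"  # bumped whenever chunk size, overlap, or parsing logic changes

def find_stale_chunks(chunks: list, current_version: str = CURRENT_CHUNK_VERSION) -> list:
    """Return the ids of chunks indexed under an older chunking strategy."""
    return [c["id"] for c in chunks if c.get("chunk_version") != current_version]

index_sample = [
    {"id": "doc1-0", "chunk_version": "v3"},
    {"id": "doc1-1", "chunk_version": "v2"},   # indexed before the strategy changed
    {"id": "doc2-0"},                          # legacy chunk with no version metadata
]
stale = find_stale_chunks(index_sample)
# stale -> ["doc1-1", "doc2-0"]: re-chunk and re-embed these, or rebuild the index
```

The same version field can drive either an incremental re-embedding pipeline or the full rebuild the governance recommendation calls for.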
Recognizing and addressing vector drift is essential to building and operating robust AI solutions on Azure.

Further Reading

The following Microsoft resources provide additional guidance on vector search, embeddings, and production-grade RAG architectures on Azure.

- Azure AI Search – Vector Search Overview: https://learn.microsoft.com/azure/search/vector-search-overview
- Azure OpenAI – Embeddings Concepts: https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/embeddings?view=foundry-classic&tabs=csharp
- Retrieval-Augmented Generation (RAG) Pattern on Azure: https://learn.microsoft.com/en-us/azure/search/retrieval-augmented-generation-overview?tabs=videos
- Azure Monitor – Observability Overview: https://learn.microsoft.com/azure/azure-monitor/overview

Simplifying Image Classification with Azure AutoML for Images: A Practical Guide
1. The Challenge of Traditional Image Classification

Anyone who has worked with computer vision knows the drill: you need to classify images, so you dive into TensorFlow or PyTorch, spend days architecting a convolutional neural network, experiment with dozens of hyperparameters, and hope your model generalizes well. It’s time-consuming, requires deep expertise, and often feels like searching for a needle in a haystack.

What if there was a better way?

2. Enter Azure AutoML for Images

Azure AutoML for Images is a game-changer in the computer vision space. It’s a feature within Azure Machine Learning that automatically builds high-quality vision models from your image data with minimal code. Think of it as having an experienced ML engineer working alongside you, handling all the heavy lifting while you focus on your business problem.

What Makes AutoML for Images Special?

1. Automatic Model Selection

Instead of manually choosing between ResNet, EfficientNet, or dozens of other architectures, AutoML for Images (Azure ML) evaluates multiple state-of-the-art deep learning models and selects the best one for your specific dataset. It’s like having access to an entire model zoo with an intelligent curator.

2. Intelligent Hyperparameter Tuning

The system doesn’t just pick a model — it optimizes it. Learning rates, batch sizes, augmentation strategies, and more are automatically tuned to squeeze out the best possible performance. What would take weeks of manual experimentation happens in hours.

3. Built-in Best Practices

Data preprocessing, augmentation techniques, and training strategies that would require extensive domain knowledge are pre-configured and applied automatically. You get enterprise-grade ML without needing to be an ML expert.
Key Capabilities

The repository demonstrates several powerful features:

- Multi-class and Multi-label Classification: Whether you need to classify an image into a single category or tag it with multiple labels, AutoML manages both scenarios seamlessly.
- Format Flexibility: Works with standard image formats including JPEG and PNG, making it easy to integrate with existing datasets.
- Full Transparency: Unlike black-box solutions, you maintain complete visibility and control over the training process. You can monitor metrics, understand model decisions, and fine-tune as needed.
- Production-Ready Deployment: Once trained, models can be easily deployed to Azure endpoints, ready to serve predictions at scale.

Real-World Applications

The practical applications are vast:

- E-commerce: Automatically categorize product images for better search and recommendations.
- Healthcare: Classify medical images for diagnostic support.
- Manufacturing: Detect defects in production line images.
- Agriculture: Identify crop diseases or estimate yield from aerial imagery.
- Content Moderation: Automatically flag inappropriate visual content.

3. A Practical Example: Metal Defect Detection

The repository includes a complete end-to-end example of detecting defects in metal surfaces — a critical quality control task in manufacturing. The notebooks demonstrate how to:

- Download and organize image data from sources like Kaggle
- Create training and validation splits with proper directory structure
- Upload data to Azure ML as versioned datasets
- Configure GPU compute that scales based on demand
- Train multiple models with automated hyperparameter tuning
- Evaluate results with comprehensive metrics and visualizations
- Deploy the best model as a production-ready REST API
- Export to ONNX for edge deployment scenarios

The metal defect use case is particularly instructive because it mirrors real industrial applications where quality control is critical but expertise is scarce.
The notebooks show how a small team can build production-grade computer vision systems without a dedicated ML research team.

Getting Started: What You Need

The prerequisites are straightforward:

- An Azure subscription (free tier available for experimentation)
- An Azure Machine Learning workspace
- Python 3.7 or later

That’s it. No local GPU clusters to configure, no complex deep learning frameworks to master.

Repository Structure

The repository is thoughtfully organized into three progressive notebooks:

1. Downloading images.ipynb
- Shows how to acquire and prepare image datasets
- Demonstrates proper directory structure for classification tasks
- Includes data exploration and visualization techniques
- image-classification-azure-automl-for-images/1. Downloading images.ipynb at main · retkowsky/image-classification-azure-automl-for-images

2. Azure ML AutoML for Images.ipynb
- The core workflow: connect to Azure ML, upload data, configure training
- Covers both simple model training and advanced hyperparameter tuning
- Shows how to evaluate models and select the best performing one
- Demonstrates deployment to managed online endpoints
- image-classification-azure-automl-for-images/2. Azure ML AutoML for Images.ipynb at main · retkowsky/image-classification-azure-automl-for-images

3. Edge with ONNX local model.ipynb
- Exports trained models to ONNX format
- Shows how to run inference locally without cloud connectivity
- Perfect for edge computing and IoT scenarios
- image-classification-azure-automl-for-images/3. Edge with ONNX local model.ipynb at main · retkowsky/image-classification-azure-automl-for-images

Each Python notebook is self-contained with clear explanations, making it easy to understand each step of the process. You can run them sequentially to build a complete solution, or jump to specific sections relevant to your use case.

The Developer Experience

What sets this approach apart is the developer experience. The repository provides Python notebooks that guide you through the entire workflow.
You’re not just reading documentation — you’re working with practical, runnable examples that demonstrate real scenarios. Let’s walk through the code to see how straightforward this actually is.

Use-case description

This image classification model is designed to identify and classify defects on metal surfaces in a manufacturing context. We want to classify defective images into Crazing, Inclusion, Patches, Pitted, Rolled & Scratches.

All code and images are available here: retkowsky/image-classification-azure-automl-for-images: Azure AutoML for images — Image classification

Step 1: Connect to Azure ML Workspace

First, establish a connection to your Azure ML workspace using Azure credentials:

```python
print("Connection to the Azure ML workspace…")

credential = DefaultAzureCredential()
ml_client = MLClient(
    credential,
    os.getenv("subscription_id"),
    os.getenv("resource_group"),
    os.getenv("workspace"),
)
print("✅ Done")
```

That’s it.

Step 2: Upload Your Dataset

Upload your image dataset to Azure ML. The code handles this elegantly:

```python
my_images = Data(
    path=TRAIN_DIR,
    type=AssetTypes.URI_FOLDER,
    description="Metal defects images for images classification",
    name="metaldefectimagesds",
)
uri_folder_data_asset = ml_client.data.create_or_update(my_images)

print("🖼️ Information:")
print(uri_folder_data_asset)
print("\n🖼️ Path to folder in Blob Storage:")
print(uri_folder_data_asset.path)
```

Your local images are now versioned data assets in Azure, ready for training.

Step 3: Create GPU Compute Cluster

AutoML needs compute power.
Here’s how you create a GPU cluster that auto-scales:

```python
compute_name = "gpucluster"

try:
    _ = ml_client.compute.get(compute_name)
    print("✅ Found existing Azure ML compute target.")
except ResourceNotFoundError:
    print(f"🛠️ Creating a new Azure ML compute cluster '{compute_name}'…")
    compute_config = AmlCompute(
        name=compute_name,
        type="amlcompute",
        size="Standard_NC16as_T4_v3",  # GPU VM
        idle_time_before_scale_down=1200,
        min_instances=0,  # Scale to zero when idle
        max_instances=4,
    )
    ml_client.begin_create_or_update(compute_config).result()
print("✅ Done")
```

The cluster scales from 0 to 4 instances based on workload, so you only pay for what you use.

Step 4: Configure AutoML Training

Now comes the magic. Here’s the entire configuration for an AutoML image classification job using a specific model (here a resnet34). It is also possible to access all the available models from the image classification AutoML library:

https://learn.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models?view=azureml-api-2&tabs=python#supported-model-architectures

```python
image_classification_job = automl.image_classification(
    compute=compute_name,
    experiment_name=exp_name,
    training_data=my_training_data_input,
    validation_data=my_validation_data_input,
    target_column_name="label",
)

# Set training parameters
image_classification_job.set_limits(timeout_minutes=60)
image_classification_job.set_training_parameters(model_name="resnet34")
```

That’s approximately 10 lines of code to configure what would traditionally require hundreds of lines and deep expertise.

Step 5: Hyperparameter Tuning (Optional)

Want to explore multiple models and configurations?
```python
image_classification_job = automl.image_classification(
    compute=compute_name,                                   # Compute cluster
    experiment_name=exp_name,                               # Azure ML job
    training_data=my_training_data_input,                   # Training
    validation_data=my_validation_data_input,               # Validation
    target_column_name="label",                             # Target
    primary_metric=ClassificationPrimaryMetrics.ACCURACY,   # Metric
    tags={
        "usecase": "metal defect",
        "type": "computer vision",
        "product": "azure ML",
        "ai": "image classification",
        "hyper": "YES",
    },
)

image_classification_job.set_limits(
    timeout_minutes=60,        # Timeout in min
    max_trials=5,              # Max model number
    max_concurrent_trials=2,   # Concurrent training
)

image_classification_job.extend_search_space([
    SearchSpace(
        model_name=Choice(["vitb16r224", "vits16r224"]),
        learning_rate=Uniform(0.001, 0.01),  # LR
        number_of_epochs=Choice([15, 30]),   # Epochs
    ),
    SearchSpace(
        model_name=Choice(["resnet50"]),
        learning_rate=Uniform(0.001, 0.01),  # LR
        layers_to_freeze=Choice([0, 2]),     # Layers to freeze
    ),
])

image_classification_job.set_sweep(
    sampling_algorithm="Random",  # Random sampling to select combinations of hyperparameters
    early_termination=BanditPolicy(
        evaluation_interval=2,  # The model is evaluated every 2 iterations
        slack_factor=0.2,       # A run performing 20% worse than the best run so far may be terminated
        delay_evaluation=6,     # Wait until 6 iterations have completed before evaluating runs
    ),
)
```

AutoML will now automatically try different model architectures, learning rates, and augmentation strategies to find the best configuration.

Step 6: Launch Training

Submit the job and monitor progress:

```python
# Submit the job
returned_job = ml_client.jobs.create_or_update(image_classification_job)
print(f"✅ Created job: {returned_job}")

# Stream the logs in real-time
ml_client.jobs.stream(returned_job.name)
```

While training runs, you can monitor metrics, view logs, and track progress through the Azure ML Studio UI or programmatically.
Step 7: Results

Step 8: Deploy to Production

Once training completes, deploy the best model as a REST endpoint:

```python
# Create endpoint configuration
online_endpoint_name = "metal-defects-classification"

endpoint = ManagedOnlineEndpoint(
    name=online_endpoint_name,
    description="Metal defects image classification",
    auth_mode="key",
    tags={"usecase": "metal defect", "type": "computer vision"},
)

# Deploy the endpoint
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```

Your model is now a production API endpoint, ready to classify images at scale.

Beyond the Cloud: Edge Deployment with ONNX

One of the most powerful aspects of this approach is flexibility in deployment. The repository includes a third notebook demonstrating how to export your trained model to ONNX (Open Neural Network Exchange) format for edge deployment. This means you can:

- Deploy models on IoT devices for real-time inference without cloud connectivity
- Reduce latency by processing images locally on edge hardware
- Lower costs by eliminating constant cloud API calls
- Ensure privacy by keeping sensitive images on-premises

The ONNX export process is straightforward and integrates seamlessly with the AutoML workflow. Your cloud-trained model can run anywhere ONNX Runtime is supported — from Raspberry Pi devices to industrial controllers.

```python
import onnxruntime

# Load the ONNX model
session = onnxruntime.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name  # name of the model's input tensor

# Run inference locally
results = session.run(None, {input_name: image_data})
```

This cloud-to-edge workflow is particularly valuable for manufacturing, retail, and remote monitoring scenarios where edge processing is essential.

Interactive webapp for image classification

Interpreting model predictions

The deployed endpoint returns a base64-encoded image string if both model_explainability and visualizations are set to True.

Why This Matters
In the AI era, the competitive advantage isn’t about who can build the most complex models — it’s about who can deploy effective solutions fastest. Azure AutoML for Images democratizes computer vision by making sophisticated ML accessible to a broader audience. Small teams can now accomplish what previously required dedicated ML specialists. Prototypes that took months can be built in days. And the quality? Often on par with or better than manually crafted solutions, thanks to AutoML’s systematic approach and access to cutting-edge techniques.

What the Code Reveals

Looking at the actual implementation reveals several important insights:

- Minimal Boilerplate: The entire training pipeline — from data upload to model deployment — requires less than 50 lines of meaningful code. Compare this to traditional PyTorch or TensorFlow implementations that often exceed several hundred lines.
- Built-in Best Practices: Notice how the code automatically manages concerns like data versioning, experiment tracking, and compute auto-scaling. These aren’t afterthoughts — they’re integral to the platform.
- Production-Ready from Day One: The deployed endpoint isn’t a prototype. It includes authentication, scaling, monitoring, and all the infrastructure needed for production workloads. You’re building production systems, not demos.
- Flexibility Without Complexity: The simple API hides complexity without sacrificing control. Need to specify a particular model architecture? One parameter. Want hyperparameter tuning? Add a few lines. The abstraction level is perfectly calibrated.
- Observable and Debuggable: The `.stream()` method and comprehensive logging mean you’re never in the dark about what’s happening. You can monitor training progress, inspect metrics, and debug issues — all critical for real projects.

The Cost of Complexity

Traditional ML projects fail not because of technology limitations but because of complexity.
The learning curve is steep, the iteration cycles are long, and the resource requirements are high. By abstracting away this complexity, AutoML for Images changes the economics of computer vision projects. You can now:

- Validate ideas quickly: Test whether image classification solves your problem before committing significant resources
- Iterate faster: Experiment with different approaches in hours rather than weeks
- Scale expertise: Enable more team members to work with computer vision, not just ML specialists

Conclusion

Image classification is a fundamental building block for countless AI applications. Azure AutoML for Images makes it accessible, practical, and production-ready. Whether you’re a seasoned data scientist looking to accelerate your workflow or a developer taking your first steps into computer vision, this approach offers a compelling path forward.

The future of ML isn’t about writing more complex code — it’s about writing smarter code that leverages powerful platforms to deliver business value faster. This repository shows you exactly how to do that.

Practical Tips from the Code

After reviewing the notebooks, here are some key takeaways for your own projects:

- Start with a Single Model: The basic configuration with `model_name="resnet34"` is perfect for initial experiments. Only move to hyperparameter sweeps once you’ve validated your data and use case.
- Use Tags Strategically: The code demonstrates adding tags to jobs and endpoints (e.g., `"usecase": "metal defect"`). This becomes invaluable when managing multiple experiments and models in production.
- Leverage Auto-Scaling: The compute configuration with `min_instances=0` means you’re not paying for idle resources. The cluster scales up when needed and scales down to zero when idle.
- Monitor Training Live: The `ml_client.jobs.stream()` method is your best friend during development. You see exactly what’s happening and can catch issues early.
- Version Your Data: Creating named data assets (`name="metaldefectimagesds"`) means your experiments are reproducible. You can always trace back which data version produced which model.
- Think Cloud-to-Edge: Even if you’re deploying to the cloud initially, the ONNX export capability gives you flexibility for future edge scenarios without retraining.

Resources

- Azure ML: https://azure.microsoft.com/en-us/products/machine-learning
- Demo notebooks: https://github.com/retkowsky/image-classification-azure-automl-for-images
- AutoML for Images documentation: https://learn.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models
- Available models: Set up AutoML for computer vision — Azure Machine Learning | Microsoft Learn
- Connect with the author: https://www.linkedin.com/in/serger/

Tracking Every Token: Granular Cost and Usage Metrics for Microsoft Foundry Agents
As organizations scale their use of AI agents, one question keeps surfacing: how much is each agent actually costing us? Not at the subscription level. Not at the resource group level. Per agent, per model, per request.

This post walks through a solution that answers that question by combining three Azure services: Microsoft AI Foundry, Azure API Management (APIM), and Application Insights. Together they form an observable, metered AI gateway with granular token-level telemetry, including custom date ranges longer than a month for deeper analysis.

The Problem: AI Costs Can Be a Black Box

Foundry’s built-in monitoring and cost views are ultimately powered by telemetry stored in Application Insights, and the out-of-the-box dashboards don’t always provide the exact per-request/per-caller token breakdown or the custom aggregations/joins teams may want for bespoke dashboards (for example, breaking down tokens by APIM subscription, product, tenant, user, route, or agent step). The approach here uses APIM to stamp consistent caller/context metadata (headers/claims), Foundry to generate the agent/model run telemetry, and App Insights as the queryable store, letting you correlate gateway, agent run, and tool/model calls and then build custom KQL-driven dashboards.

With data captured in App Insights and custom KQL queries, questions such as these can be answered:

- Which agent consumed the most tokens last week?
- What's the average cost per request for a specific agent?
- How do prompt tokens vs. completion tokens break down per model?
- Is one agent disproportionately expensive compared to others?

Why This Solution Was Built

This solution was built to close the observability gap between "we deployed agents" and "we understand what those agents cost." The goals were straightforward:

- Per-agent, per-model cost attribution - Know exactly which agent is consuming what, down to the token.
- Real-time telemetry, not batch reports - Metrics flow into Application Insights within minutes, queryable via KQL.
- Zero agent modification - The agents themselves don't need to know about telemetry. The tracking happens at the gateway layer.
- Extensibility - Any agent hosted in Microsoft Foundry and exposed through APIM can be added with a single function call.

How It Works

The architecture is intentionally simple: three services, one data flow. The notebook serves as a testing and prototyping environment, but the same `call_agent()` and `track_llm_usage()` code can be lifted directly into any production Python application that calls Foundry agents.

Azure API Management acts as the AI Gateway. Every request to a Foundry-hosted agent flows through APIM, which handles routing, rate limiting, authentication, and tracing. APIM adds its own trace headers (`Ocp-Apim-Trace-Location`) so you can correlate gateway-level diagnostics with your application telemetry. After the API request completes successfully, the necessary data can be extracted from the response headers.

The notebook is designed for testing and rapid iteration: call an agent, inspect the response, verify that telemetry lands in App Insights. It uses `httpx` to call agents through APIM, authenticating with `DefaultAzureCredential` and an APIM subscription key. After each response, it extracts the `usage` object (`input_tokens`, `output_tokens`, `total_tokens`) and calculates an estimated cost based on built-in per-model pricing.

Application Insights receives this telemetry via OpenTelemetry. The solution sends data to two tables:

- customMetrics - Cumulative counters for prompt tokens, completion tokens, total tokens, and cost in USD. These power dashboards and alerts.
- traces - Structured log entries with `custom_dimensions` containing agent name, model, operation ID, token counts, and cost per request. These power ad-hoc KQL queries. More generally, the traces table stores your application’s trace/log messages (plus custom properties/measurements) as queryable records in Azure Monitor Logs.
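The per-model cost estimation mentioned above can be as simple as a lookup table applied to the `usage` object. A sketch with illustrative numbers (the model names and per-1K-token figures are placeholders, not actual Azure OpenAI rates):

```python
# Illustrative per-1K-token prices in USD; real rates vary by model, region, and over time.
PRICING = {
    "gpt-4o-mini": {"input": 0.00015, "output": 0.0006},
    "gpt-4.1":     {"input": 0.002,   "output": 0.008},
}

def estimate_cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost from the usage object returned by the agent call."""
    price = PRICING.get(model)
    if price is None:
        return 0.0  # unknown model: record zero rather than guessing a rate
    return (input_tokens / 1000) * price["input"] + (output_tokens / 1000) * price["output"]

usage = {"input_tokens": 1200, "output_tokens": 300, "total_tokens": 1500}
cost = estimate_cost_usd("gpt-4o-mini", usage["input_tokens"], usage["output_tokens"])
# cost ≈ 0.00036: 1.2K input tokens plus 0.3K output tokens at the illustrative rates
```

Keeping the table in one place makes it easy to update when prices change and to emit the estimate as `cost_usd` alongside the raw token counts.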
Demonstrating Granular Cost and Usage Metrics

This is where the solution shines. Once telemetry is flowing, you can answer detailed questions with simple KQL queries.

Per-Request Detail

Query the `traces` table to see every individual agent call with full token and cost breakdown:

```kql
traces
| where message == "llm.usage"
| extend cd = parse_json(replace_string(
      tostring(customDimensions["custom_dimensions"]), "'", "\""))
| extend agent_name = tostring(cd["agent_name"]),
         model = tostring(cd["model"]),
         prompt_tokens = toint(cd["prompt_tokens"]),
         completion_tokens = toint(cd["completion_tokens"]),
         total_tokens = toint(cd["total_tokens"]),
         cost_usd = todouble(cd["cost_usd"])
| project timestamp, agent_name, model, prompt_tokens, completion_tokens, total_tokens, cost_usd
| order by timestamp desc
```

This gives you a line-item audit trail: every request, every agent, every token.

Aggregated Metrics Per Agent

Summarize across all requests to see averages and totals grouped by agent and model:

```kql
traces
| where message == "llm.usage"
| extend cd = parse_json(replace_string(
      tostring(customDimensions["custom_dimensions"]), "'", "\""))
| extend agent_name = tostring(cd["agent_name"]),
         model = tostring(cd["model"]),
         prompt_tokens = toint(cd["prompt_tokens"]),
         completion_tokens = toint(cd["completion_tokens"]),
         total_tokens = toint(cd["total_tokens"]),
         cost_usd = todouble(cd["cost_usd"])
| summarize calls = count(),
            avg_prompt = avg(prompt_tokens),
            avg_completion = avg(completion_tokens),
            avg_total = avg(total_tokens),
            avg_cost = avg(cost_usd),
            total_cost = sum(cost_usd)
  by agent_name, model
| order by total_cost desc
```

Now you can see at a glance:

- Which agent is the most expensive across all calls
- Average token consumption per request, useful for prompt optimization
- Prompt-to-completion ratio: a high ratio may indicate verbose system prompts that could be trimmed
- Cost trends by model: is GPT-4.1 worth the premium over GPT-4o-mini for a particular agent?
The same can be done in code with your custom solution:

```python
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

KQL = r"""
traces
| where message == "llm.usage"
| extend cd_raw = tostring(customDimensions["custom_dimensions"])
| extend cd = parse_json(replace_string(cd_raw, "'", "\""))
| extend agent_name = tostring(cd["agent_name"]),
         model = tostring(cd["model"]),
         operation_id = tostring(cd["operation_id"]),
         prompt_tokens = toint(cd["prompt_tokens"]),
         completion_tokens = toint(cd["completion_tokens"]),
         total_tokens = toint(cd["total_tokens"]),
         cost_usd = todouble(cd["cost_usd"])
| project timestamp, agent_name, model, operation_id, prompt_tokens,
          completion_tokens, total_tokens, cost_usd
| order by timestamp desc
"""

def query_logs():
    credential = DefaultAzureCredential()
    client = LogsQueryClient(credential)
    resp = client.query_resource(
        resource_id=APP_INSIGHTS_RESOURCE_ID,  # defined in config cell
        query=KQL,
        timespan=None,  # No time filter — returns all available data (up to 90-day retention)
    )
    if resp.status != "Success":
        raise RuntimeError(f"Query failed: {resp.status} - {getattr(resp, 'error', None)}")
    table = resp.tables[0]
    rows = [dict(zip(table.columns, r)) for r in table.rows]
    return rows

if __name__ == "__main__":
    rows = query_logs()
    if not rows:
        print("No telemetry found. Wait 2-5 min after running the agent cell and try again.")
    else:
        print(f"Found {len(rows)} records\n")
        print(f"{'Timestamp':<28} {'Agent':<16} {'Model':<12} {'Op ID':<12} "
              f"{'Prompt':>8} {'Completion':>11} {'Total':>8} {'Cost ($)':>10}")
        print("-" * 110)
        for r in rows[:20]:
            ts = str(r.get("timestamp", ""))[:19]
            print(f"{ts:<28} {r.get('agent_name',''):<16} {r.get('model',''):<12} "
                  f"{r.get('operation_id',''):<12} {r.get('prompt_tokens',0):>8} "
                  f"{r.get('completion_tokens',0):>11} {r.get('total_tokens',0):>8} "
                  f"{r.get('cost_usd',0):>10.6f}")
```

What You Can Build on Top

- Azure Workbooks - Build interactive dashboards showing cost trends over time, agent comparison charts, and token distribution heatmaps.
- Alerts - Trigger notifications when a single agent exceeds a cost threshold or when token consumption spikes unexpectedly.
- Azure Dashboard pinning - Pin KQL query results directly to a shared Azure Dashboard for team visibility.
- Power BI integration - Export telemetry data for executive-level cost reporting across all AI agents.

Extensibility: Add Any Agent in One Line

The solution is designed to scale with your agent portfolio. Any agent hosted in Microsoft Foundry and exposed through APIM can be integrated without modifying the telemetry pipeline. Adding a new agent is a single function call:

```python
response = call_agent("YourNewAgent", "Your prompt here")
```

Token tracking, cost estimation, and telemetry export happen automatically. No additional configuration, no new infrastructure.

From Notebook to Production

The notebook is a testing harness: a fast way to validate agent connectivity, inspect raw responses, and confirm that telemetry arrives in App Insights. But the code isn't limited to notebooks. The core functions `call_agent()`, `track_llm_usage()`, and the OpenTelemetry configuration are plain Python.
They can be dropped directly into any production application that calls Foundry agents through APIM:

- FastAPI / Flask web service - Wrap `call_agent()` in an endpoint and get per-request cost tracking out of the box.
- Azure Functions - Call agents from a serverless function with the same telemetry pipeline.
- Background workers or batch pipelines - Process multiple agent calls and aggregate cost data across runs.
- CLI tools or scheduled jobs - Run agent evaluations on a schedule with automatic cost logging.

The pattern stays the same regardless of where the code runs:

```python
# 1. Configure OpenTelemetry + App Insights (once at startup)
configure_azure_monitor(connection_string=APP_INSIGHTS_CONN)

# 2. Call any agent through APIM
response = call_agent("FinanceAgent", "Summarize Q4 earnings")

# 3. Token usage and cost are tracked automatically
#    → customMetrics and traces tables in App Insights
```

Start with the notebook to prove the pattern works. Then move the same code into your production codebase; the telemetry travels with it.

Key Takeaways

- AI cost observability matters. As agent counts grow, per-agent cost attribution becomes essential for budgeting and optimization.
- APIM as an AI Gateway gives you routing, rate limiting, and tracing in one place without touching agent code.
- OpenTelemetry + Application Insights provides a battle-tested telemetry pipeline that scales from a single notebook to production workloads.
- KQL makes the data actionable. Per-request audits, per-agent summaries, and cost trending are all a query away.
- The solution is additive, not invasive. Agents don't need modification. The telemetry layer wraps around them.

This approach gives developers the ability to view metrics per user, API key, agent, request / tool call, or business dimensions (cost center, app, environment).
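One implementation detail worth noting: the KQL queries earlier must repair single-quoted dimensions with replace_string() before parse_json() will accept them, which suggests the dimensions were logged as a Python dict's string form. A sketch of an alternative emit path (the logger name and field names are assumptions, not the repository's actual code): serialize with json.dumps so the stored payload is already valid JSON.

```python
import json
import logging

logger = logging.getLogger("llm.telemetry")

def track_llm_usage(agent_name: str, model: str, prompt_tokens: int,
                    completion_tokens: int, cost_usd: float,
                    operation_id: str = "") -> dict:
    """Log one llm.usage record with JSON-serialized custom dimensions."""
    dims = {
        "agent_name": agent_name,
        "model": model,
        "operation_id": operation_id,
        "prompt_tokens": prompt_tokens,
        "completion_tokens": completion_tokens,
        "total_tokens": prompt_tokens + completion_tokens,
        "cost_usd": cost_usd,
    }
    # json.dumps emits double-quoted JSON, so KQL's parse_json() works directly
    # and the replace_string("'", "\"") repair step becomes unnecessary.
    logger.info("llm.usage", extra={"custom_dimensions": json.dumps(dims)})
    return dims

record = track_llm_usage("FinanceAgent", "gpt-4o-mini", 1200, 300, 0.00036, "op-123")
```

The trade-off is purely at emit time; the queries get simpler and less fragile, since a dimension value containing an apostrophe would break the quote-replacement approach.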
If you're running AI agents in Microsoft Foundry and want to understand what they cost at a granular level, this pattern gives you the visibility to make informed decisions about model selection, prompt design, and budget allocation.

The full solution is available on GitHub: https://github.com/ccoellomsft/foundry-agents-apim-appinsights

Microsoft Foundry: Unlock Adaptive, Personalized Agents with User-Scoped Persistent Memory
From Knowledgeable to Personalized: Why Memory Matters

Most AI agents today are knowledgeable — they ground responses in enterprise data sources and rely on short‑term, session‑based memory to maintain conversational coherence. This works well within a single interaction. But once the session ends, the context disappears. The agent starts fresh, unable to recall prior interactions, user preferences, or previously established context.

In reality, enterprise users don't interact with agents exclusively in one‑off sessions. Conversations can span days or weeks, evolving across multiple interactions rather than isolated sessions. Without a way to persist and safely reuse relevant context across interactions, AI agents remain efficient in the short term by being stateful within a session, but lose continuity over time due to their statelessness across sessions.

Bridging this gap between short-term efficiency and long‑term adaptation exposes a deeper challenge. Persisting memory across sessions is not just a technical decision; in enterprise environments, it introduces legitimate concerns around privacy, data isolation, governance, and compliance — especially when multiple users interact with the same agent. What seems like an obvious next step quickly becomes a complex architectural problem, requiring organizations to balance the ability for agents to learn and adapt over time with the need to preserve trust, enforce isolation boundaries, and meet enterprise compliance requirements.

In this post, I'll walk through a practical design pattern for user‑scoped persistent memory, including a reference architecture and a deployable sample implementation that demonstrates how to apply this pattern in a real enterprise setting while preserving isolation, governance, and compliance.

The Challenge of Persistent Memory in Enterprise AI Agents

Extending memory beyond a single session seems like a natural way to make AI agents more adaptive.
Retaining relevant context over time — such as preferences, prior decisions, or recurring patterns — would allow an agent to progressively tailor its behavior to each user, moving from simple responsiveness toward genuine adaptation.

In enterprise environments, however, persistence introduces a different class of risk. Storing and reusing user context across interactions raises questions of privacy, data isolation, governance, and compliance — particularly when multiple users interact with shared systems. Without clear ownership and isolation boundaries, naïvely persisted memory can lead to cross‑user data leakage, policy violations, or unclear retention guarantees.

As a result, many systems default to ephemeral, session‑only memory. This approach prioritizes safety and simplicity — but does so at the cost of long‑term personalization and continuity. The challenge, then, is not whether agents should remember, but how memory can be introduced without violating enterprise trust boundaries.

Persistent Memory: Trade‑offs Between Abstraction and Control

As AI agents evolve toward more adaptive behavior, several approaches to agent memory are emerging across the ecosystem. Each reflects a different set of trade-offs between abstraction, flexibility, and control — making it useful to briefly acknowledge these patterns before introducing the design presented here.

Microsoft Foundry Agent Service includes a built‑in memory capability (currently in Preview) that enables agents to retain context beyond a single interaction. This approach integrates tightly with the Foundry runtime and abstracts much of the underlying memory management, making it well suited for scenarios that align closely with the managed agent lifecycle.

Another notable approach combines Mem0 with Azure AI Search, where memory entries are stored and retrieved through vector search. In this model, memory is treated as an embedding‑centric store that emphasizes semantic recall and relevance.
Mem0 is intentionally opinionated, defining how memory is structured, summarized, and retrieved to optimize for ease of use and rapid iteration.

Both approaches represent meaningful progress. At the same time, some enterprises require an approach where user memory is explicitly owned, scoped, and governed within their existing data architecture — rather than implicitly managed by an agent framework or memory library. These requirements often stem from stricter expectations around data isolation, compliance, and long‑term control.

User-Scoped Persistent Memory with Azure Cosmos DB

The solution presented in this post provides a practical reference implementation for organizations that require explicit control over how user memory is stored, scoped, and governed. Rather than embedding long‑term memory implicitly within the agent runtime, this design models memory as a first‑class system component built on Azure Cosmos DB.

At a high level, the architecture introduces user‑scoped persistent memory: a durable memory layer in which each user's context is isolated and managed independently. Persistent memory is stored in Azure Cosmos DB containers partitioned by user identity and consists of curated, long‑lived signals — such as preferences, recurring intent, or summarized outcomes from prior interactions — rather than raw conversational transcripts. This keeps memory intentional, auditable, and easy to evolve over time.

Short‑term, in‑session conversation state remains managed by Microsoft Foundry on the server side through its built‑in conversation and thread model. By separating ephemeral session context from durable user memory, the system preserves conversational coherence while avoiding uncontrolled accumulation of long‑term state within the agent runtime.
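The isolation guarantee at the heart of this design can be illustrated with a minimal, framework-free sketch. The in-memory class below stands in for a Cosmos DB container partitioned by user id; the class and method names are illustrative, not part of the actual implementation:

```python
# Minimal sketch of user-scoped memory isolation. The in-memory store is a
# stand-in for a Cosmos DB container partitioned by user id; all names here
# are hypothetical, for illustration only.
from collections import defaultdict

class UserMemoryStore:
    def __init__(self):
        self._partitions = defaultdict(list)  # partition key = user id

    def remember(self, user_id: str, fact: str) -> None:
        """Write a curated memory signal into the caller's partition."""
        self._partitions[user_id].append(fact)

    def recall(self, user_id: str) -> list:
        # Reads are always scoped to the caller's partition, so one user's
        # memories can never surface in another user's session.
        return list(self._partitions[user_id])

store = UserMemoryStore()
store.remember("alice", "prefers concise answers")
store.remember("bob", "time zone: UTC+2")
print(store.recall("alice"))  # → ['prefers concise answers']
```

In the real architecture, the partition key comes from the Entra ID `oid` claim, so the isolation boundary is enforced by identity rather than by application convention.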
This design enables continuity and personalization across sessions while deliberately avoiding the risks associated with shared or global memory models, including cross‑user data leakage, unclear ownership, and unintended reuse of context. Azure Cosmos DB provides enterprises with direct control over memory isolation, data residency, retention policies, and operational characteristics such as consistency, availability, and scale.

In this architecture, knowledge grounding and memory serve complementary roles. Knowledge grounding ensures correctness by anchoring responses in trusted enterprise data sources. User‑scoped persistent memory ensures relevance by tailoring interactions to the individual user over time. Together, they enable trustworthy, adaptive AI agents that improve with use — without compromising enterprise boundaries.

Architecture Components and Responsibilities

Identity and User Scoping

- Microsoft Entra ID (App Registrations) — provides the frontend a client ID and tenant ID so the Microsoft Authentication Library (MSAL) can authenticate users via browser redirect. The oid (Object ID) claim from the ID token is used as the user identifier throughout the system.

Agent Runtime and Orchestration

- Microsoft Foundry — serves as the unified AI platform for hosting models, managing agents, and maintaining conversation state. Foundry manages in‑session and thread‑level memory on the server side, preserving conversational continuity while keeping ephemeral context separate from long‑term user memory.
- Backend Agent Service — implements the AI agent using Microsoft Foundry's agent and conversation APIs. The agent is responsible for reasoning, tool‑calling decisions, and response generation, delegating memory and search operations to external MCP servers.

Memory and Knowledge Services

- MCP‑Memory — MCP server that hosts tools for extracting structured memory signals from conversations, generating embeddings, and persisting user‑scoped memories.
Memories are written to and retrieved from Azure Cosmos DB, enforcing strict per‑user isolation.
- MCP‑Search — MCP server exposing tools for querying enterprise knowledge sources via Azure AI Search. This separation ensures that knowledge grounding and memory retrieval remain distinct concerns.
- Azure Cosmos DB for NoSQL — provides the durable, serverless document store for user‑scoped persistent memory. Memory containers are partitioned by user ID, enabling isolation, auditable access, configurable retention policies, and predictable scalability. Vector search is used to support semantic recall over stored memory entries.
- Azure AI Search — supplies hybrid retrieval (keyword and vector) with semantic reranking over the enterprise knowledge index. An integrated vectorizer backed by an embedding model is used for query‑time vectorization.

Models

- text‑embedding‑3‑large — used for generating vector embeddings for both user‑scoped memories and enterprise knowledge search.
- gpt‑5‑mini — used for lightweight analysis tasks, such as extracting structured memory facts from conversational context.
- gpt‑5.1 — powers the AI agent, handling multi‑turn conversations, tool invocation, and response synthesis.

Application and Hosting Infrastructure

- Frontend Web Application — a React‑based web UI that handles user authentication and presents a conversational chat interface.
- Azure Container Apps Environment — provides a shared execution environment for all services, including networking, scaling, and observability.
- Azure Container Apps — hosts the frontend, backend agent service, and MCP servers as independently scalable containers.
- Azure Container Registry — stores container images for all application components.

Try It Yourself

Demonstration of user‑scoped persistent memory across sessions.

To make these concepts concrete, I've published a working reference implementation that demonstrates the architecture and patterns described above.
The complete solution is available in the Agent-Memory GitHub repository. The repository README includes prerequisites, environment setup notes, and configuration details.

Start by cloning the repository and moving into the project directory:

```
git clone https://github.com/mardianto-msft/azure-agent-memory.git
cd azure-agent-memory
```

Next, sign in to Azure using the Azure CLI:

```
az login
```

Then authenticate the Azure Developer CLI:

```
azd auth login
```

Once authenticated, deploy the solution:

```
azd up
```

After deployment is complete, sign in using the provided demo users and interact with the agent across multiple sessions. Each user's preferences and prior context are retained independently, the interaction continues seamlessly after signing out and returning later, and user context remains fully isolated with no cross‑identity leakage.

The solution also includes a knowledge index initialized with selected Microsoft Outlook Help documentation, which the agent uses for knowledge grounding. This index can be easily replaced or extended with your own publicly accessible URLs to adapt the solution to different domains.

Looking Ahead: Personalized Memory as a Foundation for Adaptive Agents

As enterprise AI agents evolve, many teams are looking beyond larger models and improved retrieval toward human‑centered personalization at scale — building agents that adapt to individual users while operating within clearly defined trust boundaries. User‑scoped persistent memory enables this shift. By treating memory as a first‑class, user‑owned component, agents can maintain continuity across sessions while preserving isolation, governance, and compliance. Personalization becomes an intentional design choice, aligning with Microsoft's human‑centered approach to AI, where users retain control over how systems adapt to them.

This solution demonstrates how knowledge grounding and personalized memory serve complementary roles.
Knowledge grounding ensures correctness by anchoring responses in trusted enterprise data. Personalized memory ensures relevance by tailoring interactions to the individual user. Together, they enable context‑aware, adaptive, and personalized agents — without compromising enterprise trust.

Finally, this solution is intentionally presented as a reference design pattern, not a prescriptive architecture. It offers a practical starting point for enterprises designing adaptive, personalized agents, illustrating how user‑scoped memory can be modeled, governed, and integrated as a foundational capability for scalable enterprise AI.

Building Production-Ready, Secure, Observable, AI Agents with Real-Time Voice with Microsoft Foundry
We're excited to announce the general availability of Foundry Agent Service, Observability in Foundry Control Plane, and the Microsoft Foundry portal — plus Voice Live integration with Agent Service in public preview — giving teams a production-ready platform to build, deploy, and operate intelligent AI agents with enterprise-grade security and observability.

Build Sensitivity Label‑Aware, Secure RAG with Azure AI Search and Purview
Learn why many Azure AI Search developers unknowingly miss sensitivity‑labeled data—and how integrating Microsoft Purview sensitivity labels enables more secure, complete retrieval for RAG, copilot-style apps, and enterprise search. This post explains what sensitivity labels are, why they matter for Azure AI Search, what happens when the integration is not enabled, and how label-aware indexing and query‑time enforcement ensure label-protected documents are indexed, governed, and retrieved correctly in Azure AI Search when retrieved from SharePoint, OneLake, ADLS Gen2, and Azure Blob Storage. Includes supported sources, end‑to‑end flow, and links to documentation, demo apps, and video resources.

Announcing extended support for Fine Tuning gpt-4o and gpt-4o-mini
At Build 2025, we announced post-retirement, extended deployment and inference support for fine tuned models. Today, we're excited to announce we're extending fine-tuning training for current customers of our most popular Azure OpenAI models: gpt-4o (2024-08-06) and gpt-4o-mini (2024-07-18). Hundreds of customers have pushed trillions of tokens through fine-tuned versions of these models, and we're happy to provide even more runway for your AI agents and applications.

Already using these models in Foundry? We have you covered as the only provider of fine tuning gpt-4o and gpt-4o-mini come April. Keep fine tuning!

Not yet using Microsoft Foundry? Get started today by migrating your training data to Microsoft Foundry and fine tune using Global or Standard Training for gpt-4o and gpt-4o-mini using your existing OpenAI code. You'll have the runway to continuously fine tune or update your models. You have until March 31st, 2026, to become a fine-tuning customer of these models.

| Model | Version | Training retirement date | Deployment retirement date |
| --- | --- | --- | --- |
| gpt-4o | 2024-08-06 | No earlier than 2026-09-31 ¹ | 2027-03-31 |
| gpt-4o-mini | 2024-07-18 | No earlier than 2026-09-31 ¹ | 2027-03-31 |
| gpt-4.1 | 2025-04-14 | At base model retirement | One year after training retirement |
| gpt-4.1-mini | 2025-04-14 | At base model retirement | One year after training retirement |
| gpt-4.1-nano | 2025-04-14 | At base model retirement | One year after training retirement |
| o4-mini | 2025-04-16 | At base model retirement | One year after training retirement |

¹ For existing customers only. Otherwise, training retirement occurs at base model retirement.

Building Knowledge-Grounded Conversational AI Agents with Azure Speech Photo Avatars
From Chat to Presence: The Next Step in Conversational AI

Chat agents are now embedded across nearly every industry, from customer support on websites to direct integrations inside business applications designed to boost efficiency and productivity. As these agents become more capable and more visible, user expectations are also rising: conversations should feel natural, trustworthy, and engaging. While text‑only chat agents work well for many scenarios, voice‑enabled agents take a meaningful step forward by introducing a clearer persona and a stronger sense of presence, making interactions feel more human and intuitive (see healow Genie success story).

In domains such as Retail, Healthcare, Education, and Corporate Training, adding a visual dimension through AI avatars further elevates the experience. Pairing voice with a lifelike visual representation improves inclusiveness, reduces interaction friction, and helps users better contextualize conversations—especially in scenarios that rely on trust, guidance, or repeated engagement.

To support these experiences, Microsoft offers two AI avatar options through Azure Speech: Video Avatars, which are generally available and provide full‑ or partial‑body immersive representations, and Photo Avatars, currently in public preview, which deliver a headshot‑style visual well suited for web‑based agents and digital twin scenarios. Both options support custom avatars, enabling organizations to reflect their brand identity rather than relying solely on generic representations (see W2M custom video avatar).

Choosing between Video Avatars and Photo Avatars is less about preference and more about intent. Video Avatars offer higher visual fidelity and immersion but require more extensive onboarding, such as high-quality recorded video of an avatar talent. Photo Avatars, by contrast, can be created from a single image, enabling a lighter‑weight onboarding process while still delivering a human‑centered experience.
The right choice depends on the desired interaction style, visual presence, and target deployment scenario.

What this solution demonstrates

In this post, I walk through how to integrate Azure Speech Photo Avatars — powered by Microsoft Research's VASA-1 model — into a knowledge‑grounded conversational AI agent built on Azure AI Search. The goal is to show how voice, visuals, and retrieval‑augmented generation (RAG) can come together to create a more natural and engaging agent experience.

The solution exposes a web‑based interface where users can speak naturally to the AI agent using their voice. The agent responds in real time using synthesized speech, while live transcriptions of the conversation are displayed in the UI to improve clarity and accessibility. To help compare different interaction patterns, the sample application supports three modes:

1) Photo Avatar mode, which adds a lifelike visual presence.
2) Video Avatar mode, which provides a more immersive, full‑motion experience.
3) Voice‑only mode, which focuses purely on speech‑to‑speech interaction.

Key architectural components

An end‑to‑end architecture for the solution is shown in the diagram below. The solution is composed of the following core services and building blocks:

- Microsoft Foundry — provides the platform for deploying, managing, and accessing the foundation models used by the application.
- Azure OpenAI — provides the Realtime API for speech‑to‑speech interaction in the voice‑only mode and the Chat Completions API used by backend services for reasoning and conversational responses.
- gpt‑4.1 — LLM used for reasoning tasks such as deciding when to invoke tool calls and summarizing responses.
- gpt-realtime-mini — LLM used for speech-to-speech interaction in the Voice-only mode.
- text‑embedding‑3‑large — LLM used for generating vector embeddings used in retrieval‑augmented generation.
- Azure Speech — delivers the real‑time speech‑to‑text (STT), text‑to‑speech (TTS), and AI avatars capabilities for both Photo Avatar and Video Avatar experiences.
- Azure Document Intelligence — extracts structured text, layout, and key information from source documents used to build the knowledge base.
- Azure AI Search — provides vector‑based retrieval to ground the language model with relevant, context‑aware content.
- Azure Container Apps — hosts the web UI frontend, backend services, and MCP server within a managed container runtime.
- Azure Container Apps Environment — defines a secure and isolated boundary for networking, scaling, and observability of the containerized workloads.
- Azure Container Registry — stores and manages Docker images used by the container applications.

How you can try it yourself

The complete sample implementation is available in the LiveChat AI Voice Assistant repository, which includes instructions for deploying the solution into your Azure environment. The repository uses Infrastructure as Code (IaC) deployment via Azure Developer CLI (azd) to orchestrate Azure resource provisioning and application deployment.

Prerequisites: An Azure subscription with appropriate services and models' quota is required to deploy the solution.

Getting the solution up and running in just three simple steps:

1. Clone the repository and navigate to the project

```
git clone https://github.com/mardianto-msft/azure-speech-ai-avatars.git
cd azure-speech-ai-avatars
```

2. Authenticate with Azure

```
azd auth login
```

3. Initialize and deploy the solution

```
azd up
```

Once deployed, you can access the sample application by opening the frontend service URL in a web browser. To demonstrate knowledge grounding, the sample includes source documents derived from Microsoft's 2025 Annual Report and Shareholder Letter. These grounding documents can optionally be replaced with your own data, allowing the same architecture to be reused for domain‑specific or enterprise scenarios.
When using the provided sample documents, you can ask questions such as: "How much was Microsoft's net income in 2025?", "What are Microsoft's priorities according to the shareholder letter?", "Who is Microsoft's CEO?"

Bringing Conversational AI Agents to Life

This implementation of Azure Speech Photo Avatars serves as a practical starting point for building more engaging, knowledge‑grounded conversational AI agents. By combining voice interaction, visual presence, and retrieval‑augmented generation, Photo Avatars offer a lightweight yet powerful way to make AI agents feel more approachable, trustworthy, and human‑centered — especially in web‑based and enterprise scenarios.

From here, the solution can be extended over time with capabilities such as long‑term memory, richer personalization, or more advanced multi‑agent orchestration. Whether used as a reference architecture or as the foundation for a production system, this approach demonstrates how Azure Speech Photo Avatars can help bridge the gap between conversational intelligence and meaningful user experience. By emphasizing accessibility, trust, and human‑centered design, it reflects Microsoft's broader mission to empower every person and every organization on the planet to achieve more.

OptiMind: A small language model with optimization expertise
Turning a real-world decision problem into a solver-ready optimization model can take days—sometimes weeks—even for experienced teams. The hardest part is often not solving the problem; it's translating business intent into precise mathematical objectives, constraints, and variables. OptiMind aims to remove that bottleneck. This optimization‑aware language model translates natural‑language problem descriptions into solver‑ready mathematical formulations, helping organizations move from ideas to decisions faster. Now available in public preview as an experimental model through Microsoft Foundry, OptiMind targets one of the more expertise‑intensive steps in modern optimization workflows.

Addressing the Optimization Bottleneck

Mathematical optimization underpins many enterprise‑critical decisions—from designing supply chains and scheduling workforces to structuring financial portfolios and deploying networks. While today's solvers can handle enormous and complex problem instances, formulating those problems remains a major obstacle. Defining objectives, constraints, and decision variables is an expertise‑driven process that often takes days or weeks, even when the underlying business problem is well understood. OptiMind addresses this gap by automating and accelerating formulation. Developed by Microsoft Research, OptiMind transforms what was once a slow, error‑prone modeling task into a streamlined, repeatable step—freeing teams to focus on decision quality rather than syntax.

What makes OptiMind different?

OptiMind is designed not just as a language model, but as a specialized system built for real-world optimization tasks. Unlike general-purpose large language models adapted for optimization through prompting, OptiMind is purpose-built for mixed integer linear programming (MILP), and its design reflects this singular focus.
At inference time, OptiMind follows a multi‑stage process:

1. Problem classification (e.g., scheduling, routing, network design)
2. Hint retrieval tailored to the identified problem class
3. Solution generation in solver‑compatible formats such as GurobiPy
4. Optional self‑correction, where multiple candidate formulations are generated and validated

This design can improve reliability without relying on agentic orchestration or multiple large models. In internal evaluations on cleaned public benchmarks — including IndustryOR, Mamo‑Complex, and OptMATH — OptiMind demonstrated higher formulation accuracy than similarly sized open models and competitive performance relative to significantly larger systems. OptiMind improved accuracy by approximately 10 percent over the base model. Compared with open-source models under 32 billion parameters, OptiMind also matched or exceeded performance benchmarks. For more information on the model, please read the official research blog or the technical paper for OptiMind.

Practical use cases: Unlocking efficiency across domains

OptiMind is especially valuable where modeling effort — not solver capability — is the primary bottleneck. Typical use cases include:

- Supply Chain Network Design: Faster formulation of multi‑period network models and logistics flows
- Manufacturing and Workforce Scheduling: Easier capacity planning under complex operational constraints
- Logistics and Routing Optimization: Rapid modeling that captures real‑world constraints and variability
- Financial Portfolio Optimization: More efficient exploration of portfolios under regulatory and market constraints

By reducing the time and expertise required to move from problem description to validated model, OptiMind helps teams reach actionable decisions faster and with greater confidence.

Getting started

OptiMind is available today as an experimental model, and Microsoft Research welcomes feedback from practitioners and enterprise teams.
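Before trying the model, it can help to picture the four-stage inference flow as plain code. The sketch below uses stub functions purely to show the shape of the pipeline; none of these names or bodies come from OptiMind itself:

```python
# Hypothetical sketch of the multi-stage flow described above. Every
# function body here is a stand-in, not OptiMind's actual implementation.

def classify_problem(description: str) -> str:
    # Stage 1: identify the problem class (e.g., scheduling vs. routing).
    return "scheduling" if "shift" in description else "routing"

def retrieve_hints(problem_class: str) -> list:
    # Stage 2: fetch formulation hints for the identified class.
    hints = {"scheduling": ["use binary assignment variables"],
             "routing": ["add subtour-elimination constraints"]}
    return hints.get(problem_class, [])

def generate_formulation(description: str, hints: list) -> str:
    # Stage 3: emit a solver-compatible model (stubbed as a comment block).
    return f"# GurobiPy model for: {description}\n# hints: {hints}"

def self_correct(candidates: list) -> str:
    # Stage 4: validate candidates and keep the first that passes a check.
    return next(c for c in candidates if c.startswith("#"))

desc = "assign nurses to shifts"
cls = classify_problem(desc)
model = self_correct([generate_formulation(desc, retrieve_hints(cls))])
print(cls)  # → scheduling
```

The value of the staged design is that each step narrows the search space for the next one, which is why a small model can stay competitive with much larger general-purpose systems.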
Next steps:

- Explore the research details: Read more about the model on Foundry Labs and the technical paper on arXiv
- Try the model: Access OptiMind through Microsoft Foundry
- Test sample code: Available in the OptiMind GitHub repository

Take the next step in optimization innovation with OptiMind — empowering faster, more accurate, and cost-effective problem solving for the future of decision intelligence.

From Zero to Hero: AgentOps - End-to-End Lifecycle Management for Production AI Agents
The shift from proof-of-concept AI agents to production-ready systems isn't just about better models—it's about building robust infrastructure that can develop, deploy, and maintain intelligent agents at enterprise scale. As organizations move beyond simple chatbots to agentic systems that plan, reason, and act autonomously, the need for comprehensive Agent LLMOps becomes critical. This guide walks through the complete lifecycle for building production AI agents, from development through deployment to monitoring, with special focus on leveraging Azure AI Foundry's hosted agents infrastructure.

The Evolution: From Single-Turn Prompts to Agentic Workflows

Traditional AI applications operated on a simple request-response pattern. Modern AI agents, however, are fundamentally different. They maintain state across multiple interactions, orchestrate complex multi-step workflows, and dynamically adapt their approach based on intermediate results. According to recent analysis, agentic workflows represent systems where language models and tools are orchestrated through a combination of predefined logic and dynamic decision-making. Unlike monolithic systems where a single model attempts everything, production agents break down complex tasks into specialized components that collaborate effectively.

The difference is profound. A simple customer service chatbot might answer questions from a knowledge base. An agentic customer service system, however, can search multiple data sources, escalate to specialized sub-agents for technical issues, draft response emails, schedule follow-up tasks, and learn from each interaction to improve future responses.

Stage 1: Development with Any Agentic Framework

Why LangGraph for Agent Development?

LangGraph has emerged as a leading framework for building stateful, multi-agent applications.
Unlike traditional chain-based approaches, LangGraph uses a graph-based architecture where each node represents a unit of work and edges define the workflow paths between them. The key advantages include:

- Explicit State Management: LangGraph maintains persistent state across nodes, making it straightforward to track conversation history, intermediate results, and decision points. This is critical for debugging complex agent behaviors.
- Visual Workflow Design: The graph structure provides an intuitive way to visualize and understand agent logic. When an agent misbehaves, you can trace execution through the graph to identify where things went wrong.
- Flexible Control Flows: LangGraph supports diverse orchestration patterns—single agent, multi-agent, hierarchical, sequential—all within one framework. You can start simple and evolve as requirements grow.
- Built-in Memory: Agents automatically store conversation histories and maintain context over time, enabling rich personalized interactions across sessions.

Core LangGraph Components

- Nodes: Individual units of logic or action. A node might call an AI model, query a database, invoke an external API, or perform data transformation. Each node is a Python function that receives the current state and returns updates.
- Edges: Define the workflow paths between nodes. These can be conditional (routing based on the node's output) or unconditional (always proceeding to the next step).
- State: The data structure passed between nodes and updated through reducers. Proper state design is crucial — it should contain all information needed for decision-making while remaining manageable in size.
- Checkpoints: LangGraph's checkpointing mechanism saves state at each node, enabling features like human-in-the-loop approval, retry logic, and debugging.
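The node/edge/state model above can be illustrated without any framework at all. This is a pure-Python sketch of the idea, not LangGraph's actual API: nodes are functions that take state and return updates, and edges decide which node runs next.

```python
# Framework-free sketch of the graph pattern: nodes are functions that
# receive state and return updates; edges pick the next node from state.
def increment(state):
    return {"count": state["count"] + 1}

def check(state):
    # Conditional edge: loop back until count reaches 3, then stop.
    return {"next": "increment" if state["count"] < 3 else "END"}

nodes = {"increment": increment, "check": check}
order = {"increment": "check"}  # unconditional edge

state, current = {"count": 0, "next": "increment"}, "increment"
while current != "END":
    state.update(nodes[current](state))
    current = order.get(current) or state["next"]
print(state["count"])  # → 3
```

LangGraph adds persistence, checkpointing, and visualization on top of this same loop, which is why the mental model transfers directly to the full example that follows.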
Implementing the Agentic Workflow Pattern

A robust production agent typically follows a cyclical pattern of planning, execution, reflection, and adaptation:

- Planning Phase: The agent analyzes the user's request and creates a structured plan, breaking complex problems into manageable steps.
- Execution Phase: The agent carries out planned actions using appropriate tools—search engines, calculators, code interpreters, database queries, or API calls.
- Reflection Phase: After each action, the agent evaluates results against expected outcomes. This critical thinking step determines whether to proceed, retry with a different approach, or seek additional information.
- Decision Phase: Based on reflection, the agent decides the next course of action—continue to the next step, loop back to refine the approach, or conclude with a final response.

This pattern handles real-world complexity far better than simple linear workflows. When an agent encounters unexpected results, the reflection phase enables adaptive responses rather than brittle failure.

Example: Building a Research Agent with LangGraph

```python
from langgraph.graph import StateGraph, END
from langchain_openai import ChatOpenAI
from typing import TypedDict, List

class AgentState(TypedDict):
    query: str
    plan: List[str]
    current_step: int
    research_results: List[dict]
    final_answer: str

def planning_node(state: AgentState):
    # Agent creates a research plan
    llm = ChatOpenAI(model="gpt-4")
    plan = llm.invoke(f"Create a research plan for: {state['query']}")
    return {"plan": plan, "current_step": 0}

def research_node(state: AgentState):
    # Execute current research step
    step = state['plan'][state['current_step']]
    # Perform web search, database query, etc.
    results = perform_research(step)
    return {"research_results": state['research_results'] + [results]}

def reflection_node(state: AgentState):
    # Evaluate if we have enough information
    if len(state['research_results']) >= len(state['plan']):
        return {"next": "synthesize"}
    return {"next": "research", "current_step": state['current_step'] + 1}

def synthesize_node(state: AgentState):
    # Generate final answer from all research
    llm = ChatOpenAI(model="gpt-4")
    answer = llm.invoke(f"Synthesize research: {state['research_results']}")
    return {"final_answer": answer}

# Build the graph
workflow = StateGraph(AgentState)
workflow.add_node("planning", planning_node)
workflow.add_node("research", research_node)
workflow.add_node("reflection", reflection_node)
workflow.add_node("synthesize", synthesize_node)

workflow.set_entry_point("planning")  # execution starts at the planning node
workflow.add_edge("planning", "research")
workflow.add_edge("research", "reflection")
workflow.add_conditional_edges(
    "reflection",
    lambda s: s["next"],
    {"research": "research", "synthesize": "synthesize"}
)
workflow.add_edge("synthesize", END)

agent = workflow.compile()
```

This pattern scales from simple workflows to complex multi-agent systems with dozens of specialized nodes.

Stage 2: CI/CD Pipeline for AI Agents

Traditional software CI/CD focuses on code quality, security, and deployment automation. Agent CI/CD must additionally handle model versioning, evaluation against behavioral benchmarks, and non-deterministic behavior.

Build Phase: Packaging Agent Dependencies

Unlike traditional applications, AI agents have unique packaging requirements:

- Model artifacts: Fine-tuned models, embeddings, or model configurations
- Vector databases: Pre-computed embeddings for knowledge retrieval
- Tool configurations: API credentials, endpoint URLs, rate limits
- Prompt templates: Versioned prompt engineering assets
- Evaluation datasets: Test cases for agent behavior validation

Best practice is to containerize everything.
Docker provides reproducibility across environments and simplifies dependency management:

```dockerfile
FROM python:3.11-slim

WORKDIR /app
COPY . user_agent/
WORKDIR /app/user_agent

RUN if [ -f requirements.txt ]; then \
        pip install -r requirements.txt; \
    else \
        echo "No requirements.txt found"; \
    fi

EXPOSE 8088
CMD ["python", "main.py"]
```

### Register Phase: Version Control Beyond Git

Code versioning is necessary but insufficient for AI agents. You need comprehensive artifact versioning:

- **Container Registry**: Azure Container Registry stores Docker images with semantic versioning. Each agent version becomes an immutable artifact that can be deployed or rolled back at any time.
- **Prompt Registry**: Version control your prompts separately from code. Prompt changes can dramatically impact agent behavior, so treating them as first-class artifacts enables A/B testing and rapid iteration.
- **Configuration Management**: Store agent configurations (model selection, temperature, token limits, tool permissions) in version-controlled files. This ensures reproducibility and enables easy rollback.

### Evaluate Phase: Testing Non-Deterministic Behavior

The biggest challenge in agent CI/CD is evaluation. Unlike traditional software, where unit tests verify exact outputs, agents produce variable responses that must be evaluated holistically.

**Behavioral Testing**: Define test cases that specify desired agent behaviors rather than exact outputs. For example, "When asked about product pricing, the agent should query the pricing API, handle rate limits gracefully, and present information in a structured format."

**Evaluation Metrics**: Track multiple dimensions:

- **Task completion rate**: Did the agent accomplish the goal?
- **Tool usage accuracy**: Did it call the right tools with correct parameters?
- **Response quality**: Measured via LLM-as-judge or human evaluation
- **Latency**: Time to first token and total response time
- **Cost**: Token usage and API call expenses

**Adversarial Testing**: Intentionally test edge cases: ambiguous requests, tool failures, rate limiting, conflicting information. Production agents will encounter these scenarios.

Recent research on CI/CD for AI agents emphasizes comprehensive instrumentation from day one. Track every input, output, API call, token usage, and decision point. After accumulating production data, patterns emerge showing which metrics actually predict failures versus noise.

### Deploy Phase: Safe Production Rollouts

Never deploy agents directly to production. Implement progressive delivery:

- **Staging Environment**: Deploy to a staging environment that mirrors production. Run automated tests and manual QA against real data (appropriately anonymized).
- **Canary Deployment**: Route a small percentage of traffic (5-10%) to the new version. Monitor error rates, latency, user satisfaction, and cost metrics. Automatically roll back if any metric degrades beyond thresholds.
- **Blue-Green Deployment**: Maintain two production environments. Deploy to the inactive environment, verify it's healthy, then switch traffic. This enables instant rollback by switching back.
- **Feature Flags**: Deploy new agent capabilities behind feature flags. Gradually enable them for specific user segments, gather feedback, and iterate before full rollout.

Now that we know how to create an agent with LangGraph, the next step is understanding how to deploy that agent in Azure AI Foundry.

## Stage 3: Azure AI Foundry Hosted Agents

Hosted agents are containerized agentic AI applications that run on Microsoft's Foundry Agent Service. They represent a paradigm shift from traditional prompt-based agents to fully code-driven, production-ready AI systems.
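As an aside on the Evaluate phase above: behavioral testing with a pluggable judge can be sketched in a few lines. In the sketch below, `fake_agent` and `keyword_judge` are stand-ins for a real agent call and an LLM-as-judge grader; neither is an Azure or LangGraph API, and in practice the judge would itself be an LLM call that grades the response against the stated expectation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class BehavioralCase:
    prompt: str
    expectation: str   # a behavior to check, not an exact output

def evaluate(cases: List[BehavioralCase],
             call_agent: Callable[[str], str],
             judge: Callable[[str, str], bool]) -> float:
    """Run each case through the agent, score with the judge, return pass rate."""
    passed = 0
    for case in cases:
        response = call_agent(case.prompt)
        if judge(response, case.expectation):
            passed += 1
    return passed / len(cases)

# Stub agent and keyword judge for illustration only.
fake_agent = lambda prompt: "Pricing: $10/month, fetched from the pricing API."
keyword_judge = lambda response, expectation: expectation.lower() in response.lower()

cases = [
    BehavioralCase("What does the product cost?", "pricing"),
    BehavioralCase("Where do prices come from?", "pricing API"),
]
print(f"Task completion rate: {evaluate(cases, fake_agent, keyword_judge):.0%}")
```

Because the judge is injected, the same harness can track task completion with a cheap keyword check in CI and a full LLM-as-judge grader in nightly runs.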
### When to Use Hosted Agents

- ✅ **Complex agentic workflows**: Multi-step reasoning, branching logic, conditional execution
- ✅ **Custom tool integration**: External APIs, databases, internal systems
- ✅ **Framework-specific features**: LangGraph graphs, multi-agent orchestration
- ✅ **Production scale**: Enterprise deployments requiring autoscaling
- ✅ **Security and identity**: Authentication, security, and compliance controls
- ✅ **CI/CD integration**: Automated testing and deployment pipelines

### Why Hosted Agents Matter

Hosted agents bridge the gap between experimental AI prototypes and production systems:

**For Developers:**
- Full control over agent logic via code
- Use familiar frameworks and tools
- Local testing before deployment
- Version control for agent code

**For Enterprises:**
- No infrastructure management overhead
- Built-in security and compliance
- Scalable pay-as-you-go pricing
- Integration with existing Azure ecosystem

**For AI Systems:**
- Complex reasoning patterns beyond prompts
- Stateful conversations with persistence
- Custom tool integration and orchestration
- Multi-agent collaboration

Before you get started with Foundry, deploy a Foundry project from the starter template using the Azure Developer CLI:

```bash
# Initialize a new agent project
azd init -t https://github.com/Azure-Samples/azd-ai-starter-basic

# The template automatically provisions:
# - Foundry resource and project
# - Azure Container Registry
# - Application Insights for monitoring
# - Managed identities and RBAC

# Deploy
azd up
```

The extension significantly reduces the operational burden. What previously required extensive Azure knowledge and infrastructure-as-code expertise now works with a few CLI commands.

### Local Development to Production Workflow

A streamlined workflow bridges development and production:

**Develop Locally**: Build and test your LangGraph agent on your machine.
Use the Foundry SDK to ensure compatibility with production APIs.

**Validate Locally**: Run the agent locally against the Foundry Responses API to verify it works with production authentication and conversation management.

**Containerize**: Package your agent in a Docker container with all dependencies.

**Deploy to Staging**: Use `azd deploy` to push to a staging Foundry project. Run automated tests.

**Deploy to Production**: Once validated, deploy to production. Foundry handles versioning, so you can maintain multiple agent versions and route traffic accordingly.

**Monitor and Iterate**: Use Application Insights to monitor agent performance, identify issues, and plan improvements.

The AI Toolkit for Visual Studio Code offers a convenient place to test your hosted agent. You can also test it using REST.

Once you can run and test the agent in the local playground, the next step is registering, evaluating, and deploying the agent in Microsoft AI Foundry.

### CI/CD with GitHub Actions

This repository includes a GitHub Actions workflow (`.github/workflows/mslearnagent-AutoDeployTrigger.yml`) that automatically builds and deploys the agent to Azure when changes are pushed to the main branch.

#### 1. Set Up a Service Principal

```bash
# Create service principal
az ad sp create-for-rbac \
  --name "github-actions-agent-deploy" \
  --role contributor \
  --scopes /subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP

# Output will include:
# - appId (AZURE_CLIENT_ID)
# - tenant (AZURE_TENANT_ID)
```

#### 2. Configure Federated Credentials

```bash
# For GitHub Actions OIDC
az ad app federated-credential create \
  --id $APP_ID \
  --parameters '{
    "name": "github-actions-deploy",
    "issuer": "https://token.actions.githubusercontent.com",
    "subject": "repo:YOUR_ORG/YOUR_REPO:ref:refs/heads/main",
    "audiences": ["api://AzureADTokenExchange"]
  }'
```

#### 3. Set Required Permissions

Critical: the service principal needs the Azure AI User role on the AI Services resource:

```bash
# Get AI Services resource ID
AI_SERVICES_ID=$(az cognitiveservices account show \
  --name $AI_SERVICES_NAME \
  --resource-group $RESOURCE_GROUP \
  --query id -o tsv)

# Assign Azure AI User role
az role assignment create \
  --assignee $SERVICE_PRINCIPAL_ID \
  --role "Azure AI User" \
  --scope $AI_SERVICES_ID
```

#### 4. Configure GitHub Secrets

Go to the GitHub repository → Settings → Secrets and variables → Actions, and add the following secrets:

```text
AZURE_CLIENT_ID=<from-service-principal>
AZURE_TENANT_ID=<from-service-principal>
AZURE_SUBSCRIPTION_ID=<your-subscription-id>
AZURE_AI_PROJECT_ENDPOINT=<your-project-endpoint>
ACR_NAME=<your-acr-name>
IMAGE_NAME=calculator-agent
AGENT_NAME=CalculatorAgent
```

#### 5. Create the GitHub Actions Workflow

Create `.github/workflows/deploy-agent.yml`:

```yaml
name: Deploy Agent to Azure AI Foundry

on:
  push:
    branches:
      - main
    paths:
      - 'main.py'
      - 'custom_state_converter.py'
      - 'requirements.txt'
      - 'Dockerfile'
  workflow_dispatch:
    inputs:
      version_tag:
        description: 'Version tag (leave empty for auto-increment)'
        required: false
        type: string

permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Generate version tag
        id: version
        run: |
          if [ -n "${{ github.event.inputs.version_tag }}" ]; then
            echo "VERSION=${{ github.event.inputs.version_tag }}" >> $GITHUB_OUTPUT
          else
            # Auto-increment version
            VERSION="v$(date +%Y%m%d-%H%M%S)"
            echo "VERSION=$VERSION" >> $GITHUB_OUTPUT
          fi

      - name: Azure Login (OIDC)
        uses: azure/login@v1
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'

      - name: Install Azure AI SDK
        run: |
          pip install azure-ai-projects azure-identity

      - name: Build and push Docker image
        run: |
          az acr build \
            --registry ${{ secrets.ACR_NAME }} \
            --image ${{ secrets.IMAGE_NAME }}:${{ steps.version.outputs.VERSION }} \
            --image ${{ secrets.IMAGE_NAME }}:latest \
            --file Dockerfile \
            .

      - name: Register agent version
        env:
          AZURE_AI_PROJECT_ENDPOINT: ${{ secrets.AZURE_AI_PROJECT_ENDPOINT }}
          ACR_NAME: ${{ secrets.ACR_NAME }}
          IMAGE_NAME: ${{ secrets.IMAGE_NAME }}
          AGENT_NAME: ${{ secrets.AGENT_NAME }}
          VERSION: ${{ steps.version.outputs.VERSION }}
        run: |
          python - <<EOF
          import os
          from azure.ai.projects import AIProjectClient
          from azure.identity import DefaultAzureCredential
          from azure.ai.projects.models import ImageBasedHostedAgentDefinition

          project_endpoint = os.environ["AZURE_AI_PROJECT_ENDPOINT"]
          credential = DefaultAzureCredential()
          project_client = AIProjectClient.from_connection_string(
              credential=credential,
              conn_str=project_endpoint,
          )

          agent_name = os.environ["AGENT_NAME"]
          version = os.environ["VERSION"]
          image_uri = f"{os.environ['ACR_NAME']}.azurecr.io/{os.environ['IMAGE_NAME']}:{version}"

          agent_definition = ImageBasedHostedAgentDefinition(
              image=image_uri,
              cpu=1.0,
              memory="2Gi",
          )

          agent = project_client.agents.create_or_update(
              agent_id=agent_name,
              body=agent_definition
          )
          print(f"Agent version registered: {agent.version}")
          EOF

      - name: Start agent
        run: |
          echo "Agent deployed successfully with version ${{ steps.version.outputs.VERSION }}"

      - name: Deployment summary
        run: |
          echo "### Deployment Summary" >> $GITHUB_STEP_SUMMARY
          echo "- **Agent Name**: ${{ secrets.AGENT_NAME }}" >> $GITHUB_STEP_SUMMARY
          echo "- **Version**: ${{ steps.version.outputs.VERSION }}" >> $GITHUB_STEP_SUMMARY
          echo "- **Image**: ${{ secrets.ACR_NAME }}.azurecr.io/${{ secrets.IMAGE_NAME }}:${{ steps.version.outputs.VERSION }}" >> $GITHUB_STEP_SUMMARY
          echo "- **Status**: Deployed" >> $GITHUB_STEP_SUMMARY
```

#### 6. Add Container Status Verification

To ensure deployments are truly successful, add a script that verifies container startup before marking the pipeline as complete. Create `wait_for_container.py`:

```python
"""
Wait for agent container to be ready.

This script polls the agent container status until it's running successfully
or times out. Designed for use in CI/CD pipelines to verify deployment.
"""
import os
import sys
import time
import requests
from typing import Optional, Dict, Any
from azure.identity import DefaultAzureCredential


class ContainerStatusWaiter:
    """Polls agent container status until ready or timeout."""

    def __init__(
        self,
        project_endpoint: str,
        agent_name: str,
        agent_version: str,
        timeout_seconds: int = 600,
        poll_interval: int = 10,
    ):
        """
        Initialize the container status waiter.

        Args:
            project_endpoint: Azure AI Foundry project endpoint
            agent_name: Name of the agent
            agent_version: Version of the agent
            timeout_seconds: Maximum time to wait (default: 10 minutes)
            poll_interval: Seconds between status checks (default: 10s)
        """
        self.project_endpoint = project_endpoint.rstrip("/")
        self.agent_name = agent_name
        self.agent_version = agent_version
        self.timeout_seconds = timeout_seconds
        self.poll_interval = poll_interval
        self.api_version = "2025-11-15-preview"

        # Get Azure AD token
        credential = DefaultAzureCredential()
        token = credential.get_token("https://ml.azure.com/.default")
        self.headers = {
            "Authorization": f"Bearer {token.token}",
            "Content-Type": "application/json",
        }

    def _get_container_url(self) -> str:
        """Build the container status URL."""
        return (
            f"{self.project_endpoint}/agents/{self.agent_name}"
            f"/versions/{self.agent_version}/containers/default"
        )

    def get_container_status(self) -> Optional[Dict[str, Any]]:
        """Get current container status."""
        url = f"{self._get_container_url()}?api-version={self.api_version}"
        try:
            response = requests.get(url, headers=self.headers, timeout=30)
            if response.status_code == 200:
                return response.json()
            elif response.status_code == 404:
                return None
            else:
                print(f"⚠️ Warning: GET container returned {response.status_code}")
                return None
        except Exception as e:
            print(f"⚠️ Warning: Error getting container status: {e}")
            return None

    def wait_for_container_running(self) -> bool:
        """
        Wait for container to reach running state.

        Returns:
            True if container is running, False if timeout or error
        """
        print(f"\n🔍 Checking container status for {self.agent_name} v{self.agent_version}")
        print(f"⏱️ Timeout: {self.timeout_seconds}s | Poll interval: {self.poll_interval}s")
        print("-" * 70)

        start_time = time.time()
        iteration = 0

        while time.time() - start_time < self.timeout_seconds:
            iteration += 1
            elapsed = int(time.time() - start_time)

            container = self.get_container_status()
            if not container:
                print(f"[{iteration}] ({elapsed}s) ⏳ Container not found yet, waiting...")
                time.sleep(self.poll_interval)
                continue

            # Extract status information
            status = (
                container.get("status")
                or container.get("state")
                or container.get("provisioningState")
                or "Unknown"
            )

            # Check for replicas information
            replicas = container.get("replicas", {})
            ready_replicas = replicas.get("ready", 0)
            desired_replicas = replicas.get("desired", 0)

            print(f"[{iteration}] ({elapsed}s) 📊 Status: {status}")
            if replicas:
                print(f"    🔢 Replicas: {ready_replicas}/{desired_replicas} ready")

            # Check if container is running and ready
            if status.lower() in ["running", "succeeded", "ready"]:
                if desired_replicas == 0 or ready_replicas >= desired_replicas:
                    print("\n" + "=" * 70)
                    print("✅ Container is running and ready!")
                    print("=" * 70)
                    return True
            elif status.lower() in ["failed", "error", "cancelled"]:
                print("\n" + "=" * 70)
                print(f"❌ Container failed to start: {status}")
                print("=" * 70)
                return False

            time.sleep(self.poll_interval)

        # Timeout reached
        print("\n" + "=" * 70)
        print(f"⏱️ Timeout reached after {self.timeout_seconds}s")
        print("=" * 70)
        return False


def main():
    """Main entry point for CLI usage."""
    project_endpoint = os.getenv("AZURE_AI_PROJECT_ENDPOINT")
    agent_name = os.getenv("AGENT_NAME")
    agent_version = os.getenv("AGENT_VERSION")
    timeout = int(os.getenv("TIMEOUT_SECONDS", "600"))
    poll_interval = int(os.getenv("POLL_INTERVAL_SECONDS", "10"))

    if not all([project_endpoint, agent_name, agent_version]):
        print("❌ Error: Missing required environment variables")
        sys.exit(1)

    waiter = ContainerStatusWaiter(
        project_endpoint=project_endpoint,
        agent_name=agent_name,
        agent_version=agent_version,
        timeout_seconds=timeout,
        poll_interval=poll_interval,
    )
    success = waiter.wait_for_container_running()
    sys.exit(0 if success else 1)


if __name__ == "__main__":
    main()
```

Key features:

- **REST API polling**: Uses the Azure AI Foundry REST API to check container status
- **Timeout handling**: Configurable timeout (default 10 minutes)
- **Progress tracking**: Shows iteration count and elapsed time
- **Replica checking**: Verifies all desired replicas are ready
- **Clear output**: Emoji-enhanced status messages for easy reading
- **Exit codes**: Returns 0 for success, 1 for failure (CI/CD friendly)

Update the workflow to include verification by adding these steps after starting the agent:

```yaml
      - name: Start the new agent version
        id: start_agent
        env:
          FOUNDRY_ACCOUNT: ${{ steps.foundry_details.outputs.FOUNDRY_ACCOUNT }}
          PROJECT_NAME: ${{ steps.foundry_details.outputs.PROJECT_NAME }}
          AGENT_NAME: ${{ secrets.AGENT_NAME }}
        run: |
          LATEST_VERSION=$(az cognitiveservices agent show \
            --account-name "$FOUNDRY_ACCOUNT" \
            --project-name "$PROJECT_NAME" \
            --name "$AGENT_NAME" \
            --query "versions.latest.version" -o tsv)
          echo "AGENT_VERSION=$LATEST_VERSION" >> $GITHUB_OUTPUT

          az cognitiveservices agent start \
            --account-name "$FOUNDRY_ACCOUNT" \
            --project-name "$PROJECT_NAME" \
            --name "$AGENT_NAME" \
            --agent-version $LATEST_VERSION

      - name: Wait for container to be ready
        env:
          AZURE_AI_PROJECT_ENDPOINT: ${{ secrets.AZURE_AI_PROJECT_ENDPOINT }}
          AGENT_NAME: ${{ secrets.AGENT_NAME }}
          AGENT_VERSION: ${{ steps.start_agent.outputs.AGENT_VERSION }}
          TIMEOUT_SECONDS: 600
          POLL_INTERVAL_SECONDS: 15
        run: |
          echo "⏳ Waiting for container to be ready..."
          python wait_for_container.py

      - name: Deployment Summary
        if: success()
        run: |
          echo "## Deployment Complete! 🚀" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "- **Agent**: ${{ secrets.AGENT_NAME }}" >> $GITHUB_STEP_SUMMARY
          echo "- **Version**: ${{ steps.version.outputs.VERSION }}" >> $GITHUB_STEP_SUMMARY
          echo "- **Status**: ✅ Container running and ready" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "### Deployment Timeline" >> $GITHUB_STEP_SUMMARY
          echo "1. ✅ Image built and pushed to ACR" >> $GITHUB_STEP_SUMMARY
          echo "2. ✅ Agent version registered" >> $GITHUB_STEP_SUMMARY
          echo "3. ✅ Container started" >> $GITHUB_STEP_SUMMARY
          echo "4. ✅ Container verified as running" >> $GITHUB_STEP_SUMMARY

      - name: Deployment Failed Summary
        if: failure()
        run: |
          echo "## Deployment Failed ❌" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "Please check the logs above for error details." >> $GITHUB_STEP_SUMMARY
```

Benefits of container status verification:

- **Deployment confidence**: Know for certain that the container started successfully
- **Early failure detection**: Catch startup errors before users are affected
- **CI/CD gate**: The pipeline only succeeds when the container is actually ready
- **Debugging aid**: Clear logs show container startup progress
- **Timeout protection**: Prevents infinite waits with a configurable timeout

REST API endpoint used:

```text
GET {endpoint}/agents/{agent_name}/versions/{agent_version}/containers/default?api-version=2025-11-15-preview
```

The response includes:

- `status` or `state`: Container state (Running, Failed, etc.)
- `replicas.ready`: Number of ready replicas
- `replicas.desired`: Target number of replicas
- `error`: Error details if the container failed

Container states:

- **Running/Ready**: Container is operational
- **InProgress**: Container is starting up
- **Failed/Error**: Container failed to start
- **Stopped**: Container was stopped

#### 7. Trigger a Deployment

```bash
# Automatic trigger - push to main
git add .
git commit -m "Update agent implementation"
git push origin main

# Manual trigger - via GitHub UI
# Go to Actions → Deploy Agent to Azure AI Foundry → Run workflow
```

The workflow now triggers as soon as you check in the implementation code. You can interact with the agent in the Foundry UI, evaluation is part of the workflow, and you can visualize the evaluation results in AI Foundry.

### Best Practices for Production Agent LLMOps

#### 1. Start with Simple Workflows, Add Complexity Gradually

Don't build a complex multi-agent system on day one. Start with a single agent that does one task well. Once that's stable in production, add additional capabilities:

1. Single agent with basic tool calling
2. Add memory/state for multi-turn conversations
3. Introduce specialized sub-agents for complex tasks
4. Implement multi-agent collaboration

This incremental approach reduces risk and enables learning from real usage before investing in advanced features.

#### 2. Instrument Everything from Day One

The worst time to add observability is after you have a production incident. Comprehensive instrumentation should be part of your initial development:

- Log every LLM call with inputs, outputs, and token usage
- Track all tool invocations
- Record decision points in agent reasoning
- Capture timing metrics for every operation
- Log errors with full context

After accumulating production data, you'll identify which metrics matter most. But you can't retroactively add logging for incidents that already occurred.

#### 3. Build Evaluation into the Development Process

Don't wait until deployment to evaluate agent quality. Integrate evaluation throughout development:

- Maintain a growing set of test conversations
- Run evaluations on every code change
- Track metrics over time to identify regressions
- Include diverse scenarios: happy path, edge cases, adversarial inputs

Use LLM-as-judge for scalable automated evaluation, supplemented with periodic human review of sample outputs.
#### 4. Embrace Non-Determinism, But Set Boundaries

Agents are inherently non-deterministic, but that doesn't mean anything goes:

- Set acceptable ranges for variability in testing
- Use temperature and sampling controls to manage randomness
- Implement retry logic with exponential backoff
- Add fallback behaviors for when primary approaches fail
- Use assertions to verify critical invariants (e.g., "the agent must never perform destructive actions without confirmation")

#### 5. Prioritize Security and Governance from Day One

Security shouldn't be an afterthought:

- Use managed identities and RBAC for all resource access
- Implement least-privilege principles: agents get only necessary permissions
- Add content filtering for inputs and outputs
- Monitor for prompt injection and jailbreak attempts
- Maintain audit logs for compliance
- Regularly review and update security policies

#### 6. Design for Failure

Your agents will fail. Design systems that degrade gracefully:

- Implement retry logic for transient failures
- Provide clear error messages to users
- Include fallback behaviors (e.g., escalate to human support)
- Never leave users stuck; always provide a path forward
- Log failures with full context for post-incident analysis

#### 7. Balance Automation with Human Oversight

Fully autonomous agents are powerful but risky. Consider human-in-the-loop workflows for high-stakes decisions:

- Draft responses that require approval before sending
- Request confirmation before executing destructive actions
- Escalate ambiguous situations to human operators
- Provide clear audit trails of agent actions

#### 8. Manage Costs Proactively

LLM API costs can escalate quickly at scale:

- Monitor token usage per conversation
- Set per-conversation token limits
- Use caching for repeated queries
- Choose appropriate models (not always the largest)
- Consider local models for suitable use cases
- Alert on cost anomalies that indicate runaway loops

#### 9. Plan for Continuous Learning

Agents should improve over time:

- Collect feedback on agent responses (thumbs up/down)
- Analyze conversations that required escalation
- Identify common failure patterns
- Fine-tune models on production interaction data (with appropriate consent)
- Iterate on prompts based on real usage
- Share learnings across the team

#### 10. Document Everything

Comprehensive documentation is critical as teams scale:

- Agent architecture and design decisions
- Tool configurations and API contracts
- Deployment procedures and runbooks
- Incident response procedures
- Version migration guides
- Evaluation methodologies

### Conclusion

You now have a complete, production-ready AI agent deployed to Azure AI Foundry with:

- LangGraph-based agent orchestration
- Tool-calling capabilities
- Multi-turn conversation support
- Containerized deployment
- CI/CD automation
- An evaluation framework
- Multiple client implementations

### Key Takeaways

- LangGraph provides flexible agent orchestration with state management
- The Azure AI Agent Server SDK simplifies deployment to Azure AI Foundry
- A custom state converter is critical for production deployments with tool calls
- CI/CD automation enables rapid iteration and deployment
- An evaluation framework ensures agent quality and performance

### Resources

- Azure AI Foundry Documentation
- LangGraph Documentation
- Azure AI Agent Server SDK
- OpenAI Responses API

Thanks,
Manoranjan Rajguru
https://www.linkedin.com/in/manoranjan-rajguru/