
Apps on Azure Blog

Agentic Applications on Azure Container Apps with Microsoft Foundry

Cary_Chai
Nov 18, 2025

Learn how to deploy agents on Azure Container Apps using Microsoft Agent Framework and integrate them with Microsoft Foundry for rich observability.

Agents have exploded in popularity over the last year, reshaping not only the kinds of applications developers build but also the underlying architectures required to run them. As agentic applications grow more complex by invoking tools, collaborating with other services, and orchestrating multi-step workflows, architectures are naturally shifting toward microservice patterns. Azure Container Apps is purpose-built for this world: a fully managed, serverless platform designed to run independent, composable services with autoscaling, pay-per-second pricing, and seamless app-to-app communication.

By combining Azure Container Apps with the Microsoft Agent Framework (MAF) and Microsoft Foundry, developers can run containerized agents on ACA while using Foundry to visualize and monitor how those agents behave. Azure Container Apps handles scalable, high-performance execution of agent logic, and Microsoft Foundry lights up rich observability for reasoning, planning, tool calls, and errors through its integrated monitoring experience. Together, they form a powerful foundation for building and operating modern, production-grade agentic applications.

In this blog, we’ll walk through how to build an agent running on Azure Container Apps using Microsoft Agent Framework and OpenTelemetry, and then connect its telemetry to Microsoft Foundry so you can see your ACA-hosted agent directly in the Foundry monitoring experience.

Prerequisites

To follow along, you'll need:

  • An Azure subscription
  • The Azure Developer CLI (azd) and Git installed
  • A terminal with curl (the commands below assume WSL or another bash-like shell)

The Sample Agent

The complete sample code is available in the foundry-3p-agents-samples repo and can be deployed end to end with a single command. It's a basic currency-exchange agent.
This sample deploys:

  • An Azure Container Apps environment
  • An agent built with Microsoft Agent Framework (MAF)
  • OpenTelemetry instrumentation using the Azure Monitor exporter
  • Application Insights to collect agent telemetry
  • Microsoft Foundry resource
  • Environment wiring to integrate the agent with Microsoft Foundry

Deployment Steps

  1. Clone the repository:
    git clone https://github.com/cachai2/foundry-3p-agents-samples.git
    cd foundry-3p-agents-samples/azure
  2. Authenticate with Azure:
    azd auth login
  3. Set the following azd environment variable:
    azd env set AZURE_AI_MODEL_DEPLOYMENT_NAME gpt-4.1-mini
  4. Deploy to Azure:
    azd up

This provisions your Azure Container Apps environment, the Microsoft Foundry resource, Application Insights, the logging pipeline, and the required environment variables. While deployment runs, let’s break down how the code becomes compatible with Microsoft Foundry.

1. Setting up the agent in Azure Container Apps

To integrate with Microsoft Foundry, the agent needs two essential capabilities:

  1. Microsoft Agent Framework (MAF): Handles agent logic, tools, and schema-driven execution, and emits standardized gen_ai.* spans.
  2. OpenTelemetry: Sends the required agent/model/tool spans to Application Insights, which Microsoft Foundry consumes for visualization and monitoring.

Although this sample uses MAF, the same pattern works with any agent framework. MAF and LangChain currently provide the richest telemetry support out-of-the-box.
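
If you're using a different framework (or no framework at all), you can still emit the spans yourself with the OpenTelemetry API. The sketch below is illustrative rather than code from the sample; it uses the gen_ai.* attribute names Foundry looks for, the same ones the KQL queries later in this post filter on.

from opentelemetry import trace

tracer = trace.get_tracer("custom-currency-agent")

def run_agent(prompt: str) -> str:
    # Top-level agent span; Foundry groups and labels traces by these gen_ai.* attributes.
    with tracer.start_as_current_span("invoke_agent") as span:
        span.set_attribute("gen_ai.operation.name", "invoke_agent")
        span.set_attribute("gen_ai.agent.name", "currency-exchange-agent")
        span.set_attribute("gen_ai.request.model", "gpt-4.1-mini")

        # Each tool call gets its own child span.
        with tracer.start_as_current_span("execute_tool get_exchange_rate") as tool_span:
            tool_span.set_attribute("gen_ai.operation.name", "execute_tool")
            tool_span.set_attribute("gen_ai.tool.name", "get_exchange_rate")
            rate = 0.92  # call your real exchange-rate tool here

        return f"100 USD is roughly {100 * rate:.2f} EUR"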

1.1 Configure Microsoft Agent Framework (MAF)

The agent includes:

  • A tool (get_exchange_rate)
  • An agent created by ChatAgent
  • A runtime manager (AgentRuntime)
  • A FastAPI app exposing /invoke
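
Stripped down to its shape, the agent looks roughly like the sketch below. This is a simplified illustration rather than the repo's exact code: the AzureOpenAIChatClient here is an assumption, the AgentRuntime wiring is omitted, and the exact ChatAgent/response surface may differ slightly, so check the repo for the full version.

from agent_framework import ChatAgent
from agent_framework.azure import AzureOpenAIChatClient  # assumed client; the sample may use a different one
from fastapi import FastAPI
from pydantic import BaseModel

def get_exchange_rate(from_currency: str, to_currency: str) -> str:
    """Tool the agent can call; the repo's version performs a real lookup."""
    return f"1 {from_currency} = 0.92 {to_currency} (placeholder)"

agent = ChatAgent(
    chat_client=AzureOpenAIChatClient(),  # reads endpoint/deployment settings from the environment
    instructions="You answer currency conversion questions using the exchange-rate tool.",
    tools=[get_exchange_rate],
)

app = FastAPI()

class InvokeRequest(BaseModel):
    prompt: str

@app.post("/invoke")
async def invoke(request: InvokeRequest) -> dict:
    # run() executes the model call plus any tool calls; MAF emits gen_ai.* spans for each step.
    result = await agent.run(request.prompt)
    return {"response": result.text}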

Telemetry is enabled using two components already present in the repo:

  • configure_azure_monitor: Configures OpenTelemetry + Azure Monitor exporter + auto-instrumentation.
  • setup_observability(): Enables MAF’s additional spans (gen_ai.*, tool spans, agent lifecycle spans).

From the repo (_configure_observability()):

from azure.monitor.opentelemetry import configure_azure_monitor
from agent_framework.observability import setup_observability
from opentelemetry.sdk.resources import Resource

def _configure_observability() -> None:
    # Wire up OpenTelemetry with the Azure Monitor exporter and auto-instrumentation.
    configure_azure_monitor(
        resource=Resource.create({"service.name": SERVICE_NAME}),
        connection_string=APPLICATION_INSIGHTS_CONNECTION_STRING,
    )

    # Layer MAF's gen_ai.*, tool, and agent lifecycle spans on top of that setup.
    setup_observability(enable_sensitive_data=False)

This gives you:

  • gen_ai.model.* spans (model usage + token counts)
  • tool call spans
  • agent lifecycle & execution spans
  • HTTP + FastAPI instrumentation
  • Standardized telemetry required by Microsoft Foundry

No manual TracerProvider wiring or OTLP exporter setup is needed.
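
For contrast, here is roughly what the manual wiring would look like with the OpenTelemetry SDK and the Azure Monitor exporter. This block is for illustration only and is not part of the sample; it covers only a subset of what configure_azure_monitor does for you.

import os
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter

# Create a tracer provider, attach the Azure Monitor exporter, and register it globally.
provider = TracerProvider(resource=Resource.create({"service.name": "aca-currency-exchange-agent"}))
provider.add_span_processor(
    BatchSpanProcessor(
        AzureMonitorTraceExporter(
            connection_string=os.environ["APPLICATION_INSIGHTS_CONNECTION_STRING"]
        )
    )
)
trace.set_tracer_provider(provider)
# ...and you would still need to add the FastAPI/requests instrumentors, metrics, and logs.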

1.2 OpenTelemetry Setup (Azure Monitor Exporter)

In this sample, OpenTelemetry is fully configured by Azure Monitor’s helper:

import os
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry.sdk.resources import Resource
from agent_framework.observability import setup_observability

SERVICE_NAME = os.getenv("ACA_SERVICE_NAME", "aca-currency-exchange-agent")

configure_azure_monitor(
    resource=Resource.create({"service.name": SERVICE_NAME}),
    connection_string=os.getenv("APPLICATION_INSIGHTS_CONNECTION_STRING"),
)

# Enable Microsoft Agent Framework gen_ai/tool spans on top of OTEL
setup_observability(enable_sensitive_data=False)

This automatically:

  • Installs and configures the OTEL tracer provider
  • Enables batching + exporting of spans
  • Adds HTTP/FastAPI/Requests auto-instrumentation
  • Sends telemetry to Application Insights
  • Adds MAF’s agent + tool spans

All required environment variables (such as APPLICATION_INSIGHTS_CONNECTION_STRING) are injected automatically by azd up.
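
If you run the container outside of azd (for example, locally), those variables are not set for you. A small guard like the following, which is not part of the sample, makes that failure obvious instead of silently dropping telemetry:

import os

connection_string = os.getenv("APPLICATION_INSIGHTS_CONNECTION_STRING")
if not connection_string:
    # Without the connection string, configure_azure_monitor has nowhere to export spans,
    # so fail fast (or log a warning and skip observability) rather than run blind.
    raise RuntimeError(
        "APPLICATION_INSIGHTS_CONNECTION_STRING is not set; run `azd up` or export the "
        "value from your Application Insights resource before starting the agent."
    )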

2. Deploy the Model and Test Your Agent

Once azd up completes, you're ready to deploy a model to the Microsoft Foundry instance and test it.

  1. Find the Azure AI Services resource deployed by azd up and navigate to it.
  2. From there, open it in Microsoft Foundry, navigate to the Model Catalog, and add the gpt-4.1-mini model.
  3. Find your deployed Azure Container App, navigate to it, and copy the application URL.
  4. Set the container app URL as an environment variable in your terminal (the commands below assume WSL or another bash-like shell):
    export APP_URL="Your container app URL"
  5. Now, go back to your terminal and run the following curl command to invoke the agent (a Python alternative follows these steps):
    curl -X POST "$APP_URL/invoke" \
      -H "Content-Type: application/json" \
      -d '{
        "prompt": "How do I convert 100 USD to EUR?"
      }'
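
If you prefer Python over curl, the equivalent request looks like this (using the requests library; the payload shape matches the curl command above):

import os
import requests

app_url = os.environ["APP_URL"]  # the same URL you exported above
resp = requests.post(
    f"{app_url}/invoke",
    json={"prompt": "How do I convert 100 USD to EUR?"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())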
    

3. Verifying Telemetry to Application Insights

Once your Container App starts, you can validate telemetry:

  1. Open the Application Insights resource created by azd up
  2. Go to Logs
  3. Run these queries (make sure you're in KQL mode, not simple mode):
    Check MAF gen_ai spans:
    dependencies
    | where timestamp > ago(3h)
    | extend 
        genOp    = tostring(customDimensions["gen_ai.operation.name"]),
        genSys   = tostring(customDimensions["gen_ai.system"]),
        reqModel = tostring(customDimensions["gen_ai.request.model"]),
        resModel = tostring(customDimensions["gen_ai.response.model"])
    | summarize count() by genOp, genSys, reqModel, resModel
    | order by count_ desc
    

    Check agent + tools:

    dependencies
    | where timestamp > ago(1h)
    | extend 
        genOp = tostring(customDimensions["gen_ai.operation.name"]),
        agent = tostring(customDimensions["gen_ai.agent.name"]),
        tool  = tostring(customDimensions["gen_ai.tool.name"])
    | where genOp in ("agent.run", "invoke_agent", "execute_tool")
    | project 
        timestamp,
        genOp,
        agent,
        tool,
        name,
        target,
        customDimensions
    | order by timestamp desc
    

If telemetry is flowing, you’re ready to plug your agent into Microsoft Foundry.

4. Connect Application Insights to Microsoft Foundry

Microsoft Foundry uses your Application Insights resource to power:

  • Agent monitoring
  • Tool call traces
  • Reasoning graphs
  • Multi-agent orchestration views
  • Error analysis

To connect:

  1. Navigate to Monitoring in the left navigation pane of the Microsoft Foundry portal.
  2. Select the Application analytics tab.
  3. Select the Application Insights resource created by azd up.
  4. Connect the resource to your Microsoft Foundry project.

Note: If you are unable to add your Application Insights connection this way, use this path instead: navigate to the Overview of your Foundry project -> Open in management center -> Connected resources -> New connection -> Application Insights.

Foundry will automatically start ingesting:

  • gen_ai.* spans
  • tool spans
  • agent lifecycle spans
  • workflow traces

No additional configuration is required.

5. Viewing Dashboards & Traces in Microsoft Foundry

Once your Application Insights connection is added, you can view your agent’s telemetry directly in Microsoft Foundry’s Monitoring experience.

5.1 Monitoring

The Monitoring tab shows high-level operational metrics for your application, including:

  • Total inference calls
  • Average call duration
  • Overall success/error rate
  • Token usage (when available)
  • Traffic trends over time

This view is useful for spotting latency spikes, increased load, or changes in usage patterns, and these visualizations are powered by the telemetry emitted by your agents in Azure Container Apps.

5.2 Traces Timeline

The Tracing tab shows the full distributed trace of each agent request, including all spans emitted by Microsoft Foundry and your Azure Container App with Microsoft Agent Framework. You can see:

  • Top-level operations such as invoke_agent, chat, and process_thread_run
  • Tool calls like execute_tool_get_exchange_rate
  • Internal MAF steps (create_thread, create_message, run tool)
  • Azure credential calls (GET /msi/token)
  • Input/output tokens and duration for each span

This view gives you an end-to-end breakdown of how your agent executed, which tools it invoked, and how long each step took — essential for debugging and performance tuning.
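
If you want additional detail on those steps, you can annotate the active span from your own code. The snippet below is an optional addition, not part of the sample; it tags MAF's execute_tool span with the currencies requested so they appear on that step in the trace view:

from opentelemetry import trace

def get_exchange_rate(from_currency: str, to_currency: str) -> str:
    # Attach custom attributes to whatever span is currently active (here, the
    # execute_tool span MAF opens around this call) so they show up in Foundry.
    span = trace.get_current_span()
    span.set_attribute("currency.from", from_currency)
    span.set_attribute("currency.to", to_currency)
    return f"1 {from_currency} = 0.92 {to_currency} (placeholder)"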

Conclusion

By combining Azure Container Apps, the Microsoft Agent Framework, and OpenTelemetry, you can build agents that are not only scalable and production-ready, but also fully observable and orchestratable inside Microsoft Foundry. Container Apps provides the execution engine and autoscaling foundation, MAF supplies structured agent logic and telemetry, and Microsoft Foundry ties everything together with powerful planning, monitoring, and workflow visualization.

This architecture gives you the best of both worlds: the flexibility of running your own containerized agents with the dependencies you choose, and the intelligence of Microsoft Foundry to coordinate multi-step reasoning, tool calls, and cross-agent workflows.

As the agent ecosystem continues to evolve, Azure Container Apps and Microsoft Foundry provide a strong, extensible foundation for building the next generation of intelligent, microservice-driven applications.

Updated Nov 18, 2025
Version 2.0