Apps on Azure Blog

Even Simpler: Safely Execute AI-Generated Code with Azure Container Apps Dynamic Sessions

Jan-Kalis
Microsoft
Mar 05, 2026

AI agents are writing code. The question is: where does that code run? If it runs in your process, a single hallucinated import os; os.remove('/') can ruin your day. Azure Container Apps dynamic sessions solve this with on-demand sandboxed environments — Hyper-V isolated, fully managed, and ready in milliseconds.

Thanks to your feedback, Dynamic Sessions are now easier to use with AI via MCP. Agents can quickly start a session interpreter and safely run code, all through a built-in MCP endpoint. Additionally, new starter samples show how to invoke dynamic sessions from the Microsoft Agent Framework, both with the code interpreter and with a custom container for even more versatility.

What Are Dynamic Sessions?

A session pool maintains a reservoir of pre-warmed, isolated sandboxes. When your app needs one, it's allocated instantly via REST API. When idle, it's destroyed automatically after the configured cooldown period.

What you get:

  • Strong isolation - each session runs in its own Hyper-V sandbox for enterprise-grade security
  • Millisecond startup - a pre-warmed pool eliminates cold starts
  • Fully managed - no infrastructure to maintain; automatic lifecycle, cleanup, and scaling
  • Simple access - a single HTTP endpoint, with each session identified by a unique ID
  • Scalable - hundreds to thousands of concurrent sessions
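Because a session is addressed purely by the identifier in the request URL, working with the pool reduces to building the right URL. The small helper below sketches this; the path and API version are copied from the samples later in this post, while the endpoint host shown in the usage line is a placeholder:

```python
import uuid


def execute_url(pool_endpoint: str, session_id: str) -> str:
    """Build the code-execute URL for a session pool.

    The path and api-version match the REST calls shown later in this
    post; the session is addressed only by its identifier, so reusing
    the same identifier within the cooldown window reaches the same
    sandbox (and its files), while a fresh identifier allocates a
    fresh sandbox.
    """
    return (
        f"{pool_endpoint}/code/execute"
        f"?api-version=2024-02-02-preview&identifier={session_id}"
    )


# Placeholder endpoint for illustration only:
print(execute_url("https://example.region.azurecontainerapps.io", f"task-{uuid.uuid4()}"))
```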

Two Session Types

1. Code Interpreter — Run Untrusted Code Safely

Code interpreter sessions accept inline code, run it in a Hyper-V sandbox, and return the output. Sessions support network egress and persistent file systems within the session lifetime. Three runtimes are available:

  • Python — Ships with popular libraries pre-installed (NumPy, pandas, matplotlib, etc.). Ideal for AI-generated data analysis, math computation, and chart generation.
  • Node.js — Comes with common npm packages. Great for server-side JavaScript execution, data transformation, and scripting.
  • Shell — A full Linux shell environment where agents can run arbitrary commands, install packages, start processes, manage files, and chain multi-step workflows. Unlike Python/Node.js interpreters, shell sessions expose a complete OS — ideal for agent-driven DevOps, build/test environments, CLI tool execution, and multi-process pipelines.

2. Custom Containers — Bring Your Own Runtime

Custom container sessions let you run your own container image in the same isolated, on-demand model. Define your image, and Container Apps handles the pooling, scaling, and lifecycle. Typical use cases are hosting proprietary runtimes, custom code interpreters, and specialized tool chains. This sample (Azure Samples) dives deeper into custom containers with Microsoft Agent Framework orchestration.

MCP Support for Dynamic Sessions

Dynamic sessions also support Model Context Protocol (MCP) on both shell and Python session types. This turns a session pool into a remote MCP server that AI agents can connect to — enabling tool execution, file system access, and shell commands in a secure, ephemeral environment.

With an MCP-enabled shell session, an Azure Foundry agent can spin up a Flask app, run system commands, or install packages — all in an isolated container that vanishes when done. The MCP server is enabled with a single property on the session pool (isMCPServerEnabled: true), and the resulting endpoint + API key can be plugged directly into Azure Foundry as a connected tool. For a step-by-step walkthrough, see How to add an MCP tool to your Azure Foundry agent using dynamic sessions.
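As a rough sketch of that single property, an MCP-enabled session pool might be declared as follows. Only isMCPServerEnabled comes from the paragraph above; the surrounding resource shape is an assumption, so check the linked walkthrough for the exact schema:

```bicep
// Sketch only: property names other than isMCPServerEnabled are assumptions --
// verify against the walkthrough linked above before deploying.
resource pool 'Microsoft.App/sessionPools@2024-02-02-preview' = {
  name: 'shell-mcp-pool'
  location: resourceGroup().location
  properties: {
    poolManagementType: 'Dynamic'
    containerType: 'PythonLTS'   // or a shell session type
    isMCPServerEnabled: true     // exposes the pool as a remote MCP server
  }
}
```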

Deep Dive: Building an AI Travel Agent with Code Interpreter Sessions

Let’s walk through a sample implementation — a travel planning agent that uses dynamic sessions for both static code execution (weather research) and LLM-generated code execution (charting). Full source: github.com/jkalis-MS/AIAgent-ACA-DynamicSession

Architecture

Travel Agent Architecture

  • Microsoft Agent Framework - Agent runtime with middleware, telemetry, and DevUI
  • Azure OpenAI (GPT-4o) - LLM for conversation and code generation
  • ACA Session Pools - Sandboxed Python code interpreter
  • Azure Container Apps - Hosts the agent in a container
  • Application Insights - Observability for agent spans

The agent ships with two variants, switchable in the Agent Framework DevUI: tools running in an ACA dynamic session (sandboxed) and tools running locally (no isolation). The side-by-side contrast makes the security value immediately visible.

Scenario A: Static Code in a Sandbox — Weather Research

The agent sends pre-written Python code to the session pool to fetch live weather data. The code runs with network egress enabled, calls the Open-Meteo API, and returns formatted results — all without touching the host process.

import requests
from azure.identity import DefaultAzureCredential

# pool_endpoint: the session pool's management endpoint (from the Azure portal or azd output)
credential = DefaultAzureCredential()
token = credential.get_token("https://dynamicsessions.io/.default")

response = requests.post(
    f"{pool_endpoint}/code/execute?api-version=2024-02-02-preview&identifier=weather-session-1",
    headers={"Authorization": f"Bearer {token.token}"},
    json={"properties": {
        "codeInputType": "inline",
        "executionType": "synchronous",
        "code": weather_code,  # Python that calls Open-Meteo API
    }},
)
result = response.json()["properties"]["stdout"]
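The one-liner above assumes a happy path. A small helper makes the unwrap explicit; only the properties.stdout path is taken from the sample, and treating anything else as an error is a defensive choice rather than part of the documented API contract:

```python
def unwrap_stdout(payload: dict) -> str:
    """Extract stdout from a code-execute response.

    Only the properties.stdout path is shown in this post; raising on
    any other shape is a defensive assumption, not an API guarantee.
    """
    props = payload.get("properties")
    if not isinstance(props, dict) or "stdout" not in props:
        raise ValueError(f"unexpected execution response: {payload!r}")
    return props["stdout"]


# Example with the response shape used above:
print(unwrap_stdout({"properties": {"stdout": "Miami: 28C, sunny\n"}}))
```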

Scenario B: LLM-Generated Code in a Sandbox — Dynamic Charting

This is where it gets interesting. The user asks “plot a chart comparing Miami and Tokyo weather.” The agent:

  1. Fetches weather data
  2. Asks Azure OpenAI to generate matplotlib code using a tightly-scoped system prompt
  3. Safety-checks the generated code for forbidden imports (subprocess, os.system, etc.)
  4. Wraps the code with data injection and sends it to the sandbox
  5. Downloads the resulting PNG from the sandbox’s /mnt/data/ directory

from openai import AzureOpenAI

# 1. LLM generates chart code
client = AzureOpenAI(azure_endpoint=endpoint, api_key=key, api_version="2024-12-01-preview")
generated_code = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "system", "content": CODE_GEN_PROMPT},
              {"role": "user", "content": f"Weather data: {weather_json}"}],
    temperature=0.2,
).choices[0].message.content

# 2. Execute in sandbox
requests.post(
    f"{pool_endpoint}/code/execute?api-version=2024-02-02-preview&identifier=chart-session-1",
    headers={"Authorization": f"Bearer {token.token}"},
    json={"properties": {
        "codeInputType": "inline", "executionType": "synchronous",
        "code": f"import json, matplotlib\nmatplotlib.use('Agg')\nimport matplotlib.pyplot as plt\nweather_data = json.loads('{weather_json}')\n{generated_code}",
    }},
)

# 3. Download the chart
img = requests.get(
    f"{pool_endpoint}/files/content/chart.png?api-version=2024-02-02-preview&identifier=chart-session-1",
    headers={"Authorization": f"Bearer {token.token}"},
).content
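Step 3's safety check can be sketched with Python's ast module. The forbidden names below are illustrative (the post mentions subprocess and os.system); treat this as a cheap pre-filter, not a security boundary, since the Hyper-V sandbox remains the real line of defense:

```python
import ast

# Illustrative blocklist -- tune to your threat model.
FORBIDDEN = {"subprocess", "os", "sys", "shutil", "socket", "ctypes"}


def is_code_safe(source: str) -> bool:
    """Reject LLM-generated code that imports obviously dangerous modules."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False  # refuse code that doesn't even parse
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            if any(alias.name.split(".")[0] in FORBIDDEN for alias in node.names):
                return False
        elif isinstance(node, ast.ImportFrom):
            if node.module and node.module.split(".")[0] in FORBIDDEN:
                return False
    return True


print(is_code_safe("import matplotlib.pyplot as plt"))  # True
print(is_code_safe("import subprocess"))                # False
```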

 

The result: a dark-themed dual-subplot chart comparing the maximum and minimum temperature forecasts, rendered by the Chart Weather tool in a dynamic session:

Authentication

The agent uses DefaultAzureCredential locally and ManagedIdentityCredential when deployed. Tokens are cached and refreshed automatically:

from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://dynamicsessions.io/.default")
auth_header = f"Bearer {token.token}"
# Uses ManagedIdentityCredential automatically when deployed to Container Apps

Observability

The agent uses Application Insights for end-to-end tracing. The Microsoft Agent Framework exposes OpenTelemetry spans for invoke_agent, chat, and execute_tool, wired to Azure Monitor with custom exporters:

from azure.monitor.opentelemetry import configure_azure_monitor
from agent_framework.observability import create_resource, enable_instrumentation

# Configure Azure Monitor first
configure_azure_monitor(
    connection_string="InstrumentationKey=...",
    resource=create_resource(),  # Uses OTEL_SERVICE_NAME, etc.
    enable_live_metrics=True,
)

# Then activate the Agent Framework's telemetry code paths; this call is optional
# if the ENABLE_INSTRUMENTATION and/or ENABLE_SENSITIVE_DATA environment variables are set
enable_instrumentation(enable_sensitive_data=False)

 

This gives you traces for every agent invocation, tool execution (including sandbox timing), and LLM call — visible in the Application Insights transaction search and end-to-end transaction view in the new Agents blade in Application Insights. You can also open a detailed dashboard by clicking Explore in Grafana.

 

Session pools emit their own metrics and logs for monitoring sandbox utilization and performance. Combined with the agent-level Application Insights traces, you can get full visibility from the user prompt → agent → LLM → sandbox execution → response — across both your application and the infrastructure running untrusted code.

Deploy with One Command

The project includes full Bicep infrastructure-as-code. A single azd up provisions Azure OpenAI, Container Apps, Session Pool (with egress enabled), Container Registry, Application Insights, and all role assignments.

azd auth login
azd up

 

Next Steps

  1. Dynamic sessions documentation - Microsoft Learn
  2. MCP + Shell sessions tutorial - How to add an MCP tool to your Foundry agent
  3. Custom container sessions sample - github.com/Azure-Samples/dynamic-sessions-custom-container
  4. AI Agent + Dynamic Sessions - github.com/jkalis-MS/AIAgent-ACA-DynamicSession