Microsoft Developer Community Blog

Hosted Containers and AI Agent Solutions

Lee_Stott
Mar 24, 2026

If you have built a proof-of-concept AI agent on your laptop and wondered how to turn it into something other people can actually use, you are not alone. The gap between a working prototype and a production-ready service is where most agent projects stall. Hosted containers close that gap faster than any other approach available today.

This post walks through why containers and managed hosting platforms like Azure Container Apps are an ideal fit for multi-agent AI systems, what practical benefits they unlock, and how you can get started with minimal friction.

The problem with "it works on my machine"

Most AI agent projects begin the same way: a Python script, an API key, and a local terminal. That workflow is perfect for experimentation, but it creates a handful of problems the moment you try to share your work.

First, your colleagues need the same Python version, the same dependencies, and the same environment variables. Second, long-running agent pipelines tie up your machine and compete with everything else you are doing. Third, there is no reliable URL anyone can visit to use the system, which means every demo involves a screen share or a recorded video.

Containers solve all three problems in one step. A single Dockerfile captures the runtime, the dependencies, and the startup command. Once the image builds, it runs identically on any machine, any cloud, or any colleague's laptop.

Why containers suit AI agents particularly well

AI agents have characteristics that make them a better fit for containers than many traditional web applications.

Long, unpredictable execution times

A typical web request completes in milliseconds. An agent pipeline that retrieves context from a database, imports a codebase, runs four verification agents in sequence, and generates a report can take two to five minutes. Managed container platforms handle long-running requests gracefully, with configurable timeouts and automatic keep-alive, whereas many serverless platforms impose strict execution limits that agent workloads quickly exceed.

Heavy, specialised dependencies

Agent applications often depend on large packages: machine learning libraries, language model SDKs, database drivers, and Git tooling. A container image bundles all of these once at build time. There is no cold-start dependency resolution and no version conflict with other projects on the same server.

Stateless by design

Most agent pipelines are stateless. They receive a request, execute a sequence of steps, and return a result. This maps perfectly to the container model, where each instance handles requests independently and the platform can scale the number of instances up or down based on demand.

Reproducible environments

When an agent misbehaves in production, you need to reproduce the issue locally. With containers, the production environment and the local environment are the same image. There is no "works on my machine" ambiguity.

A real example: multi-agent code verification

To make this concrete, consider a system called Opustest, an open-source project that uses the Microsoft Agent Framework with Azure OpenAI to analyse Python codebases automatically.

The system runs AI agents in a pipeline:

  1. A Code Example Retrieval Agent queries Azure Cosmos DB for curated examples of good and bad Python code, providing the quality standards for the review.
  2. A Codebase Import Agent reads all Python files from a Git repository cloned on the server.
  3. Four Verification Agents each score a different dimension of code quality (coding standards, functional correctness, known error handling, and unknown error handling) on a scale of 0 to 5.
  4. A Report Generation Agent compiles all scores and errors into an HTML report with fix prompts that can be exported and fed directly into a coding assistant.
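The four-stage flow above can be sketched as a plain sequential orchestrator. Everything below is illustrative: the stage functions are hypothetical stand-ins for the real agents, not the Opustest API.

```python
# Illustrative sketch of a sequential multi-agent pipeline.
# The stage functions are hypothetical stand-ins, not the Opustest API.

def retrieve_examples(repo_url):
    # Stage 1: would query Cosmos DB for curated code examples.
    return {"standards": ["use type hints", "handle known errors"]}

def import_codebase(repo_url):
    # Stage 2: would clone the repository and read its Python files.
    return {"files": {"app.py": "print('hello')"}}

def verify(dimension, codebase, standards):
    # Stage 3: one of four verification agents; returns a 0-5 score.
    return {"dimension": dimension, "score": 4}

def generate_report(scores):
    # Stage 4: would render an HTML report with fix prompts.
    return {"stages": 4, "scores": scores}

def run_pipeline(repo_url):
    standards = retrieve_examples(repo_url)
    codebase = import_codebase(repo_url)
    dimensions = ["coding standards", "functional correctness",
                  "known error handling", "unknown error handling"]
    scores = [verify(d, codebase, standards) for d in dimensions]
    return generate_report(scores)

report = run_pipeline("https://github.com/example/repo")
```

Because each run is a pure request-in, report-out sequence, the whole pipeline stays stateless, which is exactly the property the container platform exploits for scaling.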

The entire pipeline is orchestrated by a FastAPI backend that streams progress updates to the browser via Server-Sent Events. Users paste a Git URL, watch each stage light up in real time, and receive a detailed report at the end.
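Server-Sent Events is a plain-text protocol, so the shape of that progress stream can be sketched without any framework: each message is an `event:`/`data:` pair terminated by a blank line. The event names and payload fields here are made up for illustration.

```python
import json

def sse_frame(event, payload):
    # Format one Server-Sent Events message: a named event plus JSON data,
    # terminated by a blank line as the SSE wire format requires.
    return f"event: {event}\ndata: {json.dumps(payload)}\n\n"

def progress_stream(stages):
    # A generator like this can back FastAPI's StreamingResponse with
    # media_type="text/event-stream" to push stage updates to the browser.
    for i, name in enumerate(stages, start=1):
        yield sse_frame("stage", {"number": i, "name": name, "status": "done"})
    yield sse_frame("complete", {"stages": len(stages)})

frames = list(progress_stream(["retrieval", "import", "verification", "report"]))
```

On the browser side, a standard `EventSource` subscribes to the endpoint and receives each frame as it is emitted, which is what lets the stages light up in real time.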

The app in action

The workflow, stage by stage:

  • Landing page: the default Git URL mode, ready for a repository link.
  • Local Path mode: toggling to analyse a codebase from a local directory.
  • Repository URL entered: a GitHub repository ready for verification.
  • Stage 1: the Code Example Retrieval Agent fetching standards from Cosmos DB.
  • Stage 3: the four Verification Agents scoring the codebase.
  • Stage 4: the Report Generation Agent compiling the final report.
  • Verification complete: all stages finished with a success banner.
  • Report detail: scores and the errors table with fix prompts.

The Dockerfile

The container definition for this system is remarkably simple:

 
FROM python:3.12-slim

RUN apt-get update && apt-get install -y --no-install-recommends git \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY backend/ backend/
COPY frontend/ frontend/

RUN adduser --disabled-password --gecos "" appuser
USER appuser

EXPOSE 8000

CMD ["uvicorn", "backend.app:app", "--host", "0.0.0.0", "--port", "8000"]
 

Under twenty lines. That is all it takes to package a six-agent AI system with a web frontend, a FastAPI backend, Git support, and all Python dependencies into a portable, production-ready image.

Notice the security detail: the container runs as a non-root user. This is a best practice that many tutorials skip, but it matters when you are deploying to a shared platform.

From image to production in one command

With the Azure Developer CLI (azd), deploying this container to Azure Container Apps takes a single command:

 
azd up
 

Behind the scenes, azd reads an azure.yaml file that declares the project structure, provisions the infrastructure defined in Bicep templates (a Container Apps environment, an Azure Container Registry, and a Cosmos DB account), builds the Docker image, pushes it to the registry, deploys it to the container app, and even seeds the database with sample data via a post-provision hook.
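A minimal azure.yaml for a setup like this might look as follows. The project name and paths are placeholders; the exact schema is defined by the azd documentation.

```yaml
# Hypothetical azure.yaml sketch for azd (name and paths are placeholders).
name: my-agent-app
services:
  web:
    project: .
    language: python
    host: containerapp
    docker:
      path: Dockerfile
```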

The result is a publicly accessible URL serving the full agent system, with automatic HTTPS, built-in scaling, and zero infrastructure to manage manually.

Microsoft Hosted Agents vs Azure Container Apps: choosing the right home

Microsoft offers two distinct approaches for running AI agent workloads in the cloud. Understanding the difference is important when deciding how to host your solution.

Microsoft Foundry Hosted Agent Service

Microsoft Foundry provides a fully managed agent hosting service. You define your agent's behaviour declaratively, upload it to the platform, and Foundry handles execution, scaling, and lifecycle management. This is an excellent choice when your agents fit within the platform's conventions: single-purpose agents that respond to prompts, use built-in tool integrations, and do not require custom server-side logic or a bespoke frontend.

Key characteristics of hosted agents in Foundry:

  • Fully managed execution. You do not provision or maintain any infrastructure. The platform runs your agent and handles scaling automatically.
  • Declarative configuration. Agents are defined through configuration and prompt templates rather than custom application code.
  • Built-in tool ecosystem. Foundry provides pre-built connections to Azure services, knowledge stores, and evaluation tooling.
  • Opinionated runtime. The platform controls the execution environment, request handling, and networking.

Azure Container Apps

Azure Container Apps is a managed container hosting platform. You package your entire application (agents, backend, frontend, and all dependencies) into a Docker image and deploy it. The platform handles scaling, HTTPS, and infrastructure, but you retain full control over what runs inside the container.

Key characteristics of Container Apps:

  • Full application control. You own the runtime, the web framework, the agent orchestration logic, and the frontend.
  • Custom networking. You can serve a web UI, expose REST APIs, stream Server-Sent Events, or run WebSocket connections.
  • Arbitrary dependencies. Your container can include any system package, any Python library, and any tooling (like Git for cloning repositories).
  • Portable. The same Docker image runs locally, in CI, and in production without modification.

Why Opustest uses Container Apps

Opustest requires capabilities that go beyond what a managed agent hosting platform provides:

| Requirement | Hosted Agents (Foundry) | Container Apps |
| --- | --- | --- |
| Custom web UI with real-time progress | Not supported natively | Full control via FastAPI and SSE |
| Multi-agent orchestration pipeline | Platform-managed, limited customisation | Custom orchestrator with arbitrary logic |
| Git repository cloning on the server | Not available | Install Git in the container image |
| Server-Sent Events streaming | Not supported | Full HTTP control |
| Custom HTML report generation | Limited to platform outputs | Generate and serve any content |
| Export button for Copilot prompts | Not available | Custom frontend with JavaScript |
| RAG retrieval from Cosmos DB | Possible via built-in connectors | Direct SDK access with full query control |

The core reason is straightforward: Opustest is not just a set of agents. It is a complete web application that happens to use agents as its processing engine. It needs a custom frontend, real-time streaming, server-side Git operations, and full control over how the agent pipeline executes. Container Apps provides all of this while still offering managed infrastructure, automatic scaling, and zero server maintenance.

When to choose which

Choose Microsoft Hosted Agents when your use case is primarily conversational or prompt-driven, when you want the fastest path to a working agent with minimal code, and when the built-in tool ecosystem covers your integration needs.

Choose Azure Container Apps when you need a custom frontend, custom orchestration logic, real-time streaming, server-side processing beyond prompt-response patterns, or when your agent system is part of a larger application with its own web server and API surface.

Both approaches use the same underlying AI models via Azure OpenAI. The difference is in how much control you need over the surrounding application.

Five practical benefits of hosted containers for agents

1. Consistent deployments across environments

Whether you are running the container locally with docker run, in a CI pipeline, or on Azure Container Apps, the behaviour is identical. Configuration differences are handled through environment variables, not code changes. This eliminates an entire category of "it works locally but breaks in production" bugs.
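In practice this means the application reads its configuration from the environment with sensible defaults. A minimal sketch — the variable names are illustrative, not the Opustest settings:

```python
import os

def load_settings(env=os.environ):
    # Read configuration from environment variables so the same image
    # behaves identically locally, in CI, and in production.
    # Variable names here are illustrative.
    return {
        "cosmos_endpoint": env.get("COSMOS_ENDPOINT", "https://localhost:8081"),
        "openai_deployment": env.get("AZURE_OPENAI_DEPLOYMENT", "gpt-4o"),
        "port": int(env.get("PORT", "8000")),
    }

# Locally, `docker run --env-file .env` supplies these values; on Azure
# Container Apps they come from the app's environment configuration.
settings = load_settings({"PORT": "8080"})
```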

2. Scaling without re-architecture

Azure Container Apps can scale from zero instances (paying nothing when idle) to multiple instances under load. Because agent pipelines are stateless, each request is routed to whichever instance is available. You do not need to redesign your application to handle concurrency; the platform does it for you.

3. Isolation between services

If your agent system grows to include multiple services (perhaps a separate service for document processing or a background worker for batch analysis), each service gets its own container. They can be deployed, scaled, and updated independently. A bug in one service does not bring down the others.

4. Built-in observability

Managed container platforms provide logging, metrics, and health checks out of the box. When an agent pipeline fails after three minutes of execution, you can inspect the container logs to see exactly which stage failed and why, without adding custom logging infrastructure.
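Container platforms capture whatever the process writes to stdout, so stage-level logging needs nothing beyond the standard library. A sketch — the logger name and stage labels are illustrative:

```python
import logging
import sys

# Log to stdout so the container platform captures each pipeline stage
# without any extra logging infrastructure.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
logger = logging.getLogger("agent.pipeline")

def run_stage(name, fn):
    # Wrap each stage so a failure is logged with the stage name attached,
    # making "which stage failed after three minutes" visible in the logs.
    logger.info("stage started: %s", name)
    try:
        result = fn()
        logger.info("stage finished: %s", name)
        return result
    except Exception:
        logger.exception("stage failed: %s", name)
        raise

result = run_stage("report generation", lambda: {"ok": True})
```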

5. Infrastructure as code

The entire deployment can be defined in code. Bicep templates, Terraform configurations, or Pulumi programmes describe every resource. This means deployments are repeatable, reviewable, and version-controlled alongside your application code. No clicking through portals, no undocumented manual steps.

Common concerns addressed

"Containers add complexity"

For a single-file script, this is a fair point. But the moment your agent system has more than one dependency, a Dockerfile is simpler to maintain than a set of installation instructions. It is also self-documenting: anyone reading the Dockerfile knows exactly what the system needs to run.

"Serverless is simpler"

Serverless functions are excellent for short, event-driven tasks. But agent pipelines that run for minutes, require persistent connections (like SSE streaming), and depend on large packages are a poor fit for most serverless platforms. Containers give you the operational simplicity of managed hosting without the execution constraints.

"I do not want to learn Docker"

A basic Dockerfile for a Python application is fewer than ten lines. The core concepts are straightforward: start from a base image, install dependencies, copy your code, and specify the startup command. The learning investment is small relative to the deployment problems it solves.
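For reference, a minimal sketch really is under ten lines — this one assumes a FastAPI app in app.py with a requirements.txt alongside it:

```dockerfile
# Minimal illustrative Dockerfile; assumes app.py and requirements.txt.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
```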

"What about cost?"

Azure Container Apps supports scale-to-zero, meaning you pay nothing when the application is idle. For development and demonstration purposes, this makes hosted containers extremely cost-effective. You only pay for the compute time your agents actually use.

Getting started: a practical checklist

If you are ready to containerise your own agent solution, here is a step-by-step approach.

Step 1: Write a Dockerfile. Start from an official Python base image. Install system-level dependencies (like Git, if your agents clone repositories), then your Python packages, then your application code. Run as a non-root user.

Step 2: Test locally. Build and run the image on your machine:

 
docker build -t my-agent-app .
docker run -p 8000:8000 --env-file .env my-agent-app
 

If it works locally, it will work in the cloud.

Step 3: Define your infrastructure. Use Bicep, Terraform, or the Azure Developer CLI to declare the resources you need: a container app, a container registry, and any backing services (databases, key vaults, AI endpoints).

Step 4: Deploy. Push your image to the registry and deploy to the container platform. With azd, this is a single command. With CI/CD, it is a pipeline that runs on every push to your main branch.

Step 5: Iterate. Change your agent code, rebuild the image, and redeploy. The cycle is fast because Docker layer caching means only changed layers are rebuilt.

The broader picture

The AI agent ecosystem is maturing rapidly. Frameworks like Microsoft Agent Framework, LangChain, Semantic Kernel, and AutoGen make it straightforward to build sophisticated multi-agent systems. But building is only half the challenge. The other half is running these systems reliably, securely, and at scale.

Hosted containers offer the best balance of flexibility and operational simplicity for agent workloads. They do not impose the execution limits of serverless platforms. They do not require the operational overhead of managing virtual machines. They give you a portable, reproducible unit of deployment that works the same everywhere.

If you have an agent prototype sitting on your laptop, the path to making it available to your team, your organisation, or the world is shorter than you think. Write a Dockerfile, define your infrastructure, run azd up, and share the URL.

Your agents deserve a proper home. Hosted containers are that home.

Updated Mar 09, 2026
Version 1.0