Educator Developer Blog

Integrating Microsoft Foundry with OpenClaw: Step by Step Model Configuration

By suzarilshah
Feb 23, 2026

Welcome back to the blog! If you have been keeping up with the AI agent space, you already know how fast things are moving. Honestly, who doesn't know about OpenClaw these days? Having evolved from ClawdBot to Moltbot to, now, OpenClaw, it has quickly become the go-to open-source framework for developers who want a personal AI assistant that can actually execute tasks, run shell commands, and control browsers locally.

However, OpenClaw is only as smart as the underlying models powering it, and this is exactly where Microsoft Foundry comes into play. Having spent a lot of time exploring these tools within the Foundry MVP community, I can confidently say that it provides the perfect backend architecture. With a massive catalog of over 11,000 models, from Anthropic's Claude Opus 4.6 to the brand-new GPT-5.2 series and DeepSeek V3.1, Microsoft Foundry delivers the enterprise-grade reliability and reasoning power that complex autonomous workflows demand.

In this post, I am going to show you how to bridge the gap. We will walk through the technical steps of configuring your preferred models in Microsoft Foundry and setting them up to run flawlessly on your local OpenClaw instance. Grab your favorite coffee, fire up your terminal, and let us start building!

Step 1: Deploying Models on Microsoft Foundry

Let us kick things off in the Azure portal. To get our OpenClaw agent thinking like a genius, we need to deploy our models in Microsoft Foundry. For this guide, we will focus on deploying gpt-5.2-codex, the model we will later wire into OpenClaw.

Navigate to your AI Hub, head over to the model catalog, choose the model you wish to use with OpenClaw, and hit Deploy. Once your deployment is successful, head to the Endpoints section.

 

Important: Grab your Endpoint URL and your API Keys right now and save them in a secure note. We will need these exact values to connect OpenClaw in a few minutes.

Step 2: Installing and Initializing OpenClaw

Next up, we need to get OpenClaw running on your machine. Open up your terminal and run the official installation script:

curl -fsSL https://openclaw.ai/install.sh | bash

The wizard will walk you through a few prompts. Here is exactly how to answer them to link up with our Azure setup:

  • First Page (Model Selection): Choose "Skip for now".
  • Second Page (Provider): Select azure-openai-responses.
  • Model Selection: Select gpt-5.2-codex. At the time of writing, only the Microsoft Foundry-hosted models listed in the wizard are available for use with OpenClaw.
  • Follow the rest of the standard prompts to finish the initial setup.

Step 3: Editing the OpenClaw Configuration File

Now for the fun part. We need to manually configure OpenClaw to talk to Microsoft Foundry. Open your configuration file located at ~/.openclaw/openclaw.json in your favorite text editor.

Replace the contents of the models and agents sections with the following code block:

{
  "models": {
    "providers": {
      "azure-openai-responses": {
        "baseUrl": "https://<YOUR_RESOURCE_NAME>.openai.azure.com/openai/v1",
        "apiKey": "<YOUR_AZURE_OPENAI_API_KEY>",
        "api": "openai-responses",
        "authHeader": false,
        "headers": {
          "api-key": "<YOUR_AZURE_OPENAI_API_KEY>"
        },
        "models": [
          {
            "id": "gpt-5.2-codex",
            "name": "GPT-5.2-Codex (Azure)",
            "reasoning": true,
            "input": ["text", "image"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 400000,
            "maxTokens": 16384,
            "compat": { "supportsStore": false }
          },
          {
            "id": "gpt-5.2",
            "name": "GPT-5.2 (Azure)",
            "reasoning": false,
            "input": ["text", "image"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 272000,
            "maxTokens": 16384,
            "compat": { "supportsStore": false }
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "azure-openai-responses/gpt-5.2-codex"
      },
      "models": {
        "azure-openai-responses/gpt-5.2-codex": {}
      },
      "workspace": "/home/<USERNAME>/.openclaw/workspace",
      "compaction": {
        "mode": "safeguard"
      },
      "maxConcurrent": 4,
      "subagents": {
        "maxConcurrent": 8
      }
    }
  }
}

 

You will notice a few placeholders in that JSON. Here is exactly what you need to swap out:

  • <YOUR_RESOURCE_NAME>: The unique name of your Azure OpenAI resource. Find it in the Azure Portal under the Azure OpenAI resource overview.
  • <YOUR_AZURE_OPENAI_API_KEY>: The secret key that authenticates your requests. Find it in Microsoft Foundry under your project endpoints, or in the Azure Portal keys section.
  • <USERNAME>: Your local computer's user profile name. Run whoami in your terminal to find it.
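If you prefer scripting the substitution, here is a minimal sketch. The fill_placeholders helper and the literal values ("my-foundry-resource", "dummy-key", "alice") are my own illustrations, not part of OpenClaw; in practice you would read the template from ~/.openclaw/openclaw.json and pull the key from a secret store rather than hardcoding it:

```python
import json

def fill_placeholders(template: str, resource_name: str, api_key: str, username: str) -> dict:
    """Swap the three placeholders, then parse to confirm the result is valid JSON."""
    filled = (template
              .replace("<YOUR_RESOURCE_NAME>", resource_name)
              .replace("<YOUR_AZURE_OPENAI_API_KEY>", api_key)
              .replace("<USERNAME>", username))
    return json.loads(filled)  # raises ValueError if the JSON is malformed

# Tiny template fragment for illustration; run this against your full openclaw.json.
template = ('{"baseUrl": "https://<YOUR_RESOURCE_NAME>.openai.azure.com/openai/v1", '
            '"workspace": "/home/<USERNAME>/.openclaw/workspace"}')
config = fill_placeholders(template, "my-foundry-resource", "dummy-key", "alice")
print(config["baseUrl"])
# → https://my-foundry-resource.openai.azure.com/openai/v1
```

Parsing the filled-in text with json.loads doubles as a syntax check, so a stray comma or quote fails loudly here instead of silently breaking the gateway later.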

 

Step 4: Restart the Gateway

After saving the configuration file, you must restart the OpenClaw gateway for the new Foundry settings to take effect. Run this simple command:

openclaw gateway restart
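Before restarting, it can save a round trip to sanity-check the file you just edited. This sketch is my own helper, not part of the OpenClaw CLI: it verifies that agents.defaults.model.primary points at a model actually defined under models.providers, using the config shape from this post. In practice, load ~/.openclaw/openclaw.json instead of the inline sample:

```python
import json

def check_primary_model(config: dict) -> str:
    """Verify the primary model reference matches a model defined under its provider."""
    primary = config["agents"]["defaults"]["model"]["primary"]
    provider_name, model_id = primary.split("/", 1)
    provider = config["models"]["providers"][provider_name]  # KeyError if provider missing
    defined_ids = {m["id"] for m in provider["models"]}
    if model_id not in defined_ids:
        raise ValueError(f"primary model {model_id!r} not defined for provider {provider_name!r}")
    return primary

# Inline sample mirroring the structure of the openclaw.json above.
config = json.loads("""
{
  "models": {"providers": {"azure-openai-responses": {"models": [{"id": "gpt-5.2-codex"}]}}},
  "agents": {"defaults": {"model": {"primary": "azure-openai-responses/gpt-5.2-codex"}}}
}
""")
print(check_primary_model(config))
# → azure-openai-responses/gpt-5.2-codex
```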

Configuration Notes & Deep Dive

If you are curious about why we configured the JSON that way, here is a quick breakdown of the technical details.

Authentication Differences

Azure OpenAI uses the api-key HTTP header for authentication. This is entirely different from the standard OpenAI Authorization: Bearer header. Our configuration file addresses this in two ways:

  • Setting "authHeader": false completely disables the default Bearer header.
  • Adding "headers": { "api-key": "<key>" } forces OpenClaw to send the API key via Azure's native header format.

Important Note: Your API key must appear in both the apiKey field AND the headers.api-key field within the JSON for this to work correctly.

The Base URL

Azure OpenAI's v1-compatible endpoint follows this specific format: https://<your_resource_name>.openai.azure.com/openai/v1

The beautiful thing about this v1 endpoint is that it is largely compatible with the standard OpenAI API and does not require you to manually pass an api-version query parameter.
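To see the URL format and the auth header together, here is a minimal sketch. The helper name and resource name are my own illustrations; it only assembles the request parts and sends nothing over the network:

```python
def azure_v1_request_parts(resource_name: str, api_key: str) -> tuple[str, dict]:
    """Build the v1-compatible base URL and Azure-native auth headers."""
    base_url = f"https://{resource_name}.openai.azure.com/openai/v1"
    # The api-key header stands in for the standard "Authorization: Bearer <token>".
    headers = {"api-key": api_key, "Content-Type": "application/json"}
    return base_url, headers

base_url, headers = azure_v1_request_parts("my-foundry-resource", "dummy-key")
print(base_url)
# → https://my-foundry-resource.openai.azure.com/openai/v1
```

Note that no api-version query parameter appears anywhere, which is exactly the convenience of the v1 endpoint described above.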

Model Compatibility Settings

  • "compat": { "supportsStore": false } disables the store parameter since Azure OpenAI does not currently support it.
  • "reasoning": true enables the thinking mode for GPT-5.2-Codex. This supports low, medium, high, and xhigh levels.
  • "reasoning": false is set for GPT-5.2 because it is a standard, non-reasoning model.

Model Specifications & Cost Tracking

If you want OpenClaw to accurately track your token usage costs, you can update the cost fields from 0 to the current Azure pricing. Here are the specs and costs for the models we just deployed:

Model Specifications

  • gpt-5.2-codex: 400,000-token context window, 16,384 max output tokens, image input supported, reasoning supported.
  • gpt-5.2: 272,000-token context window, 16,384 max output tokens, image input supported, no reasoning.

 

Current Cost (Adjust in JSON)

  • gpt-5.2-codex: $1.75 input, $14.00 output, $0.175 cached input (per 1M tokens).
  • gpt-5.2: $2.00 input, $8.00 output, $0.50 cached input (per 1M tokens).
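As a quick worked example, a helper like this (my own sketch, using the prices above) turns token counts into dollars so you can cross-check the totals OpenClaw reports:

```python
# Prices in USD per 1M tokens, matching the cost table above.
PRICES = {
    "gpt-5.2-codex": {"input": 1.75, "output": 14.00, "cacheRead": 0.175},
    "gpt-5.2":       {"input": 2.00, "output": 8.00,  "cacheRead": 0.50},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Dollar cost of one request: uncached input + output + cached input."""
    p = PRICES[model]
    cost = ((input_tokens - cached_tokens) * p["input"]
            + output_tokens * p["output"]
            + cached_tokens * p["cacheRead"]) / 1_000_000
    return round(cost, 6)

# 100k input tokens (20k of them cached) plus 5k output on gpt-5.2-codex:
print(estimate_cost("gpt-5.2-codex", 100_000, 5_000, cached_tokens=20_000))
# → 0.2135
```

That breaks down as 80,000 uncached input tokens at $1.75/1M, 5,000 output tokens at $14.00/1M, and 20,000 cached tokens at $0.175/1M.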

 

Conclusion

And there you have it! You have successfully bridged the gap between the enterprise-grade infrastructure of Microsoft Foundry and the local autonomy of OpenClaw. By following these steps, you are not just running a chatbot; you are running a sophisticated agent capable of reasoning, coding, and executing tasks with the full power of GPT-5.2-Codex behind it.

The combination of Azure's reliability and OpenClaw's flexibility opens up a world of possibilities. Whether you are building an automated DevOps assistant, a research agent, or just exploring the bleeding edge of AI, you now have a robust foundation to build upon.

Now it is time to let your agent loose on some real tasks. Go forth, experiment with different system prompts, and see what you can build. If you run into any interesting edge cases or come up with a unique configuration, let me know in the comments below. Happy coding!

Updated Feb 18, 2026
Version 1.0