Microsoft Foundry Blog

Announcing GPT‑5.2‑Codex in Microsoft Foundry: Enterprise‑Grade AI for Secure Software Engineering

Naomi Moneypenny
Jan 14, 2026

Enterprise developers know the grind: wrestling with legacy code, navigating complex dependency challenges, and waiting on security reviews that stall releases. OpenAI’s GPT‑5.2‑Codex flips that equation and helps engineers ship faster without cutting corners. It’s not just autocomplete; it’s a reasoning engine for real-world software engineering.

Generally available starting today through Azure OpenAI in Microsoft Foundry Models, GPT‑5.2‑Codex is built for the realities of enterprise codebases: large repos, evolving requirements, and security constraints that can’t be overlooked. As OpenAI’s most advanced agentic coding model, it brings sustained reasoning and security-aware assistance directly into the workflows enterprise developers already rely on, backed by Microsoft’s secure and reliable infrastructure.

GPT-5.2-Codex at a Glance

GPT‑5.2‑Codex is designed for how software gets built in enterprise teams. You start with imperfect inputs (legacy code, partial docs, screenshots, diagrams) and work through multi‑step changes, reviews, and fixes. The model helps keep context, intent, and standards intact across that entire lifecycle, so teams can move faster without sacrificing quality or security.

What it enables

  • Work across code and artifacts: Reason over source code alongside screenshots, architecture diagrams, and UI mocks — so implementation stays aligned with design intent.
  • Stay productive in long‑running tasks: Maintain context across migrations, refactors, and investigations, even as requirements evolve.
  • Build and review with security in mind: Get practical support for secure coding patterns, remediation, reviews, and vulnerability analysis — where correctness matters as much as speed.

Feature Specs (quick reference)

  • Context window: 400K tokens (approximately 100K lines of code)
  • Supported languages: 50+ including Python, JavaScript/TypeScript, C#, Java, Go, Rust
  • Multimodal inputs: Code, images (UI mocks, diagrams), and natural language
  • API compatibility: Drop-in replacement for existing Codex API calls

Use cases where it really pops

  • Legacy modernization with guardrails: Safely migrate and refactor “untouchable” systems by preserving behavior, improving structure, and minimizing regression risk.
  • Large‑scale refactors that don’t lose intent: Execute cross‑module updates and consistency improvements without the typical “one step forward, two steps back” churn.
  • AI‑assisted code review that raises the floor: Catch risky patterns, propose safer alternatives, and improve consistency, especially across large teams and long‑lived codebases.
  • Defensive security workflows at scale: Accelerate vulnerability triage, dependency/path analysis, and remediation when speed matters, but precision matters more.
  • Lower cognitive load in long, multi‑step builds: Keep momentum across multi‑hour sessions: planning, implementing, validating, and iterating with context intact.

Pricing

| Model | Input Price / 1M Tokens | Cached Input Price / 1M Tokens | Output Price / 1M Tokens |
|---|---|---|---|
| GPT-5.2-Codex | $1.75 | $0.175 | $14.00 |
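As a quick sanity check on the pricing above, here is a minimal cost estimator. The token counts in the example are illustrative, not a published benchmark.

```python
# Per-1M-token prices for GPT-5.2-Codex (USD), from the pricing table.
PRICE_INPUT = 1.75
PRICE_CACHED_INPUT = 0.175
PRICE_OUTPUT = 14.00


def estimate_cost(input_tokens: int, cached_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of one request.

    `cached_tokens` is the portion of `input_tokens` served from the
    prompt cache at the discounted rate.
    """
    fresh = input_tokens - cached_tokens
    cost = (
        fresh * PRICE_INPUT
        + cached_tokens * PRICE_CACHED_INPUT
        + output_tokens * PRICE_OUTPUT
    ) / 1_000_000
    return round(cost, 6)


# e.g. a 300K-token prompt (100K of it cached) producing 50K output tokens:
print(estimate_cost(300_000, 100_000, 50_000))  # → 1.0675
```

Note that output tokens dominate the bill at these rates, so long cached prompts with concise responses are the cheapest pattern.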

Security-Aware by Design, Not as an Afterthought

For many organizations, AI adoption hinges on one non-negotiable question: can this be trusted in security-sensitive workflows?

GPT-5.2-Codex meaningfully advances the Codex lineage in this area. As models grow more capable, we’ve seen that general reasoning improvements naturally translate into stronger performance in specialized domains — including defensive cybersecurity.

With GPT‑5.2‑Codex, this shows up in practical ways:

  • Improved ability to analyze unfamiliar code paths and dependencies
  • Stronger assistance with secure coding patterns and remediation
  • More dependable support during code reviews, vulnerability investigations, and incident response

At the same time, Microsoft continues to deploy these capabilities thoughtfully, balancing access, safeguards, and platform-level controls so enterprises can adopt AI responsibly as capabilities evolve.


Why Run GPT-5.2-Codex on Microsoft Foundry?

Powerful models matter — but where and how they run matters just as much for enterprises.

Organizations choose Microsoft Foundry because it combines frontier AI with Azure’s enterprise-grade fundamentals:

  • Integrated security, compliance, and governance
    Deploy GPT-5.2-Codex within existing Azure security boundaries, identity systems, and compliance frameworks, without reinventing controls.
  • Enterprise-ready orchestration and tooling
    Build, evaluate, monitor, and scale AI-powered developer experiences using the same platform teams already rely on for production workloads.
  • A unified path from experimentation to scale
    Foundry makes it easier to move from proof of concept to real deployment without changing platforms, vendors, or operating assumptions.
  • Trust at the platform level
    For teams working in regulated or security-critical environments, Foundry and Azure provide assurances that go beyond the model itself.

Together with GitHub Copilot, Microsoft Foundry provides a unified developer experience — from in‑IDE assistance to production‑grade AI workflows — backed by Azure’s security, compliance, and global scale. This is where GPT-5.2-Codex becomes not just impressive but adoptable.


Get Started Today

Explore GPT‑5.2‑Codex in Microsoft Foundry today. Start where you already work: try GPT‑5.2‑Codex in GitHub Copilot for everyday coding, then scale the same model to larger workflows using Azure OpenAI in Microsoft Foundry. Let’s build what’s next with speed and security.

Updated Jan 14, 2026
Version 3.0