OpenAI’s GPT‑5.3 Chat marks the next step in the GPT‑5 series, designed to deliver more dependable, context‑aware chat experiences for enterprise workloads. The model emphasizes steadier instruction handling and clearer responses, supporting high‑volume, real‑world conversations with greater consistency.
GPT‑5.3 Chat is coming soon to Microsoft Foundry, where teams will be able to deploy production‑ready chat and agent experiences that are standardized, governed, and built to scale across the enterprise.
What’s new in GPT‑5.3 Chat
GPT‑5.3 Chat centers on predictable behavior, relevance, and response quality, helping teams build chat experiences that operate reliably across end‑to‑end workflows while aligning with enterprise safety and compliance expectations.
Fewer dead ends, more resolved conversations
- Reduces unnecessary refusals by responding more proportionately when safe context is available
- Supports compliant reformulation to keep interactions moving forward
- Enables end‑to‑end resolution in support, IT, and policy‑driven workflows
Grounded answers you can operationalize
- Combines built‑in web search with model reasoning to surface relevant, actionable information
- Prioritizes relevance and context over long lists of loosely related results
- Delivers usable answers while maintaining enterprise controls and traceability
Consistent outputs at scale
- Improved tone, explanation quality, and instruction following
- Easier to template, govern, and monitor across apps
- Less downstream cleanup as usage scales
Built for production in Microsoft Foundry
Production‑grade infrastructure
- Observability, failover, quota management, and performance monitoring
- Designed for real workloads—not experiments
- Consistent behavior across regions and use cases without re‑architecting
Smarter scaling with quota tiers
- Automatic quota increases with sustained usage
- Fewer rate‑limit interruptions as demand grows
- Flexible tiers from Free through Tier 6
Security and compliance by default
- Identity, access controls, policy enforcement, and data boundaries built in
- Meets regulated‑industry requirements out of the box
- Teams can move fast without compromising trust
GPT‑5.3 Chat in Microsoft Foundry is priced at $1.75 per million input tokens, $0.175 per million cached input tokens, and $14.00 per million output tokens.
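To see how these rates translate into a monthly bill, here is a minimal cost estimator. The rates come from the pricing above; the traffic volumes in the example are purely hypothetical placeholders, so substitute your own workload's token counts.

```python
# Estimate GPT-5.3 Chat cost in Microsoft Foundry.
# Rates (USD per 1M tokens) are from the published pricing above.
INPUT_RATE = 1.75          # uncached input tokens
CACHED_INPUT_RATE = 0.175  # cached input tokens
OUTPUT_RATE = 14.00        # output tokens

def estimate_cost(input_tokens: int, cached_input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a given volume of traffic."""
    return (
        input_tokens * INPUT_RATE
        + cached_input_tokens * CACHED_INPUT_RATE
        + output_tokens * OUTPUT_RATE
    ) / 1_000_000

# Hypothetical month: 10M input tokens (8M of them cache hits) and 2M output tokens.
cost = estimate_cost(2_000_000, 8_000_000, 2_000_000)
print(f"${cost:.2f}")  # prints "$32.90"
```

Note how heavily cached input is discounted: the same 8M tokens would cost $14.00 at the uncached rate but only $1.40 from the cache, so prompt designs that keep a stable shared prefix can cut input costs by roughly 10x.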
Ready to build with GPT‑5.3 Chat in Foundry?
Start turning reliable conversations into real applications. Explore GPT‑5.3 Chat in Microsoft Foundry and be ready to build production‑ready chat and agent experiences as soon as it's available.