Microsoft Foundry Blog

Introducing DeepSeek-V3.2 and DeepSeek-V3.2-Speciale in Microsoft Foundry

RashaudSavage
Microsoft
Dec 15, 2025

Next-generation open reasoning models, now available with Azure-grade security, observability, and enterprise integration.

Today, we’re excited to announce that DeepSeek-V3.2 and DeepSeek-V3.2-Speciale are now available in Microsoft Foundry, directly from Azure. These models bring state-of-the-art reasoning, a hyper-efficient architecture, and breakthrough reinforcement learning innovations into the Foundry ecosystem, giving developers unprecedented access to frontier-class intelligence that is fully enterprise-ready.

With this launch, customers can deploy DeepSeek’s latest reasoning engines with the reliability, scalability, and compliance of Azure, and integrate them seamlessly with Foundry’s evaluation tools, routing systems, agent framework, and observability stack.
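To show what that integration looks like in practice, here is a minimal sketch of a chat completion call against a Foundry model deployment using the azure-ai-inference Python SDK. The endpoint, API key, and the "DeepSeek-V3.2" model name are placeholders you would swap for your own deployment's values; they are not confirmed identifiers from this announcement.

```python
# Minimal sketch: calling a Foundry model deployment with the azure-ai-inference SDK
# (pip install azure-ai-inference). Endpoint, key, and model name are placeholders.
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<your-foundry-resource>.services.ai.azure.com/models",  # placeholder
    credential=AzureKeyCredential("<your-api-key>"),                          # placeholder
)

response = client.complete(
    model="DeepSeek-V3.2",  # assumed deployment name; check the Model Catalog for the exact id
    messages=[
        SystemMessage(content="You are a careful reasoning assistant."),
        UserMessage(content="Outline a step-by-step plan to migrate a legacy ETL job to Spark."),
    ],
    max_tokens=1024,
)
print(response.choices[0].message.content)
```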

Why DeepSeek-V3.2 Matters

DeepSeek-V3.2 represents a major evolution of the DeepSeek family—a model engineered for thinking, not just text generation.
It introduces breakthrough innovations across architecture, training, and agentic reasoning, all optimized for long-horizon problem-solving.

Three Core Pillars Behind DeepSeek-V3.2

1. Hyper-Efficient Architecture
DeepSeek Sparse Attention (DSA) intelligently filters out noise in long contexts, dynamically selecting only relevant tokens.
This leads to:

  • Up to 3× faster reasoning paths
  • Far lower memory consumption
  • Identical output quality to dense attention at 128k context
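To make the sparse-attention idea concrete, here is a simplified top-k selection sketch: each query attends only to its highest-scoring keys instead of the full context. This is a generic illustration of sparse token selection, not the actual DSA implementation, and it still materializes dense scores, so it demonstrates the selection step rather than the speed or memory gain.

```python
# Simplified top-k sparse attention: each query keeps only its top_k highest-scoring
# keys and masks out the rest. Illustrative only -- not the actual DSA kernel.
import torch
import torch.nn.functional as F

def topk_sparse_attention(q, k, v, top_k=128):
    # q, k, v: (batch, heads, seq_len, head_dim)
    scale = q.shape[-1] ** -0.5
    scores = torch.matmul(q, k.transpose(-2, -1)) * scale        # (b, h, q_len, k_len)
    top_k = min(top_k, scores.shape[-1])
    threshold = scores.topk(top_k, dim=-1).values[..., -1:]      # k-th best score per query
    scores = scores.masked_fill(scores < threshold, float("-inf"))
    return torch.matmul(F.softmax(scores, dim=-1), v)

q = k = v = torch.randn(1, 4, 1024, 64)
print(topk_sparse_attention(q, k, v).shape)  # torch.Size([1, 4, 1024, 64])
```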

2. Scalable Intelligence
DeepSeek flips the traditional training budget:

Over 10% of total compute is spent on reinforcement learning compared to ~1% in traditional LLM pipelines.

This mirrors “System 2” training strategies, enabling the model to learn how to think, not just predict.

3. Generalizable Agents
DeepSeek-V3.2 integrates agentic reinforcement learning (A-RL) at scale, producing models capable of:

  • Multi-step reasoning
  • Reliable tool-use
  • Maintaining long chains of thought

DeepSeek’s GRPO (Group Relative Policy Optimization) eliminates critic networks, reducing memory by 50% while stabilizing RL training.
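As a rough illustration of the group-relative idea behind GRPO (simplified from DeepSeek's published formulation): each prompt gets a group of sampled completions, and every completion's advantage is its reward normalized by the group's own mean and standard deviation, which is what removes the need for a learned critic to supply a baseline.

```python
# Group-relative advantages as used in GRPO (simplified sketch): normalize each
# completion's reward against its own group's statistics, so no critic network
# is needed to estimate a baseline.
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # rewards: (num_prompts, group_size), one scalar reward per sampled completion
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 prompts, 4 sampled completions each.
rewards = torch.tensor([[0.2, 0.9, 0.4, 0.7],
                        [1.0, 1.0, 0.0, 0.5]])
print(group_relative_advantages(rewards))
```

In the full method these advantages weight a clipped policy-gradient objective with a KL penalty; the sketch above covers only the baseline-free advantage step.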

Thinking Inside Tool-Use: A Breakthrough for Agents

One of the biggest innovations in V3.2 is its Thinking Retention Mechanism, optimized specifically for tool-calling scenarios.

Traditional tool-using models discard their internal reasoning trace with each iteration, forcing repeated re-reasoning.
DeepSeek solves this by:

  • Preserving reasoning as long as the user message hasn’t changed
  • Retaining tool-call history throughout the session
  • Reducing redundant token generation
  • Dramatically improving agent reliability and cost-efficiency

For customers building autonomous agents on Foundry Agent Service, this is a significant advancement.
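To illustrate the retention rule described above, here is a hypothetical client-side sketch; the message roles and fields are invented for illustration, not a documented Foundry or DeepSeek schema. Reasoning and tool-call turns accumulate in the history while the user message is unchanged, and the reasoning is dropped once a new user turn begins.

```python
# Hypothetical sketch of the retention rule: while the user message is unchanged,
# reasoning and tool-call turns stay in the history instead of being regenerated;
# a new user message resets the reasoning. Roles/fields here are illustrative only.
def append_tool_iteration(history, reasoning, tool_call, tool_output):
    # Same user turn: keep the reasoning trace alongside the tool call and its result.
    history.append({"role": "assistant", "reasoning": reasoning, "tool_call": tool_call})
    history.append({"role": "tool", "content": tool_output})
    return history

def start_new_user_turn(history, message):
    # New user message: earlier reasoning no longer applies, so drop it.
    history = [m for m in history if "reasoning" not in m]
    history.append({"role": "user", "content": message})
    return history

history = [{"role": "user", "content": "Summarize last quarter's incident reports."}]
history = append_tool_iteration(history, "Need the raw incident list first.",
                                {"name": "search_incidents", "args": {"quarter": "Q3"}},
                                "42 incidents found.")
history = start_new_user_turn(history, "Now draft an email to the on-call team.")
```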

DeepSeek-V3.2-Speciale: Maximum Reasoning, Zero Constraints

Alongside the base model, Microsoft Foundry is also introducing DeepSeek-V3.2-Speciale, a variant designed explicitly for maximum reasoning accuracy.

Speciale differs from the standard model in key ways:

What Makes Speciale Special

  • Prioritizes raw cognitive depth over structured outputs
  • Produces the strongest chain-of-thought trajectories in the DeepSeek family
  • Achieves frontier-level benchmark performance, including best-in-class results on Olympiad-style problem sets
  • Does not support native function/tool calling—reserving all capacity for reasoning

For research labs, hedge funds, scientific workflows, or evaluation teams using Foundry:
Speciale is your “gold medalist” for pure reasoning.

Why this Launch Matters

DeepSeek-V3.2 and Speciale represent a shift in how reasoning models are built:

  • More efficient
  • More transparent
  • More aligned with agent workflows
  • More accessible through open model releases

And now—available with Azure’s trust, governance, and ecosystem—these models become production-ready for enterprises at global scale.

Microsoft Foundry is committed to giving customers choice, flexibility, and frontier intelligence, and this launch is a major milestone toward that mission.

Pricing

Model | Deployment Type | Azure Resource Regions | Price per 1K tokens | Availability
DeepSeek-V3.2 | Global Standard | All resource regions | Input: $0.00058 / Output: $0.00168 | Public Preview, Dec 15, 2025
DeepSeek-V3.2-Speciale | Global Standard | All resource regions | Input: $0.00058 / Output: $0.00168 | Public Preview, Dec 15, 2025
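Using the preview rates above, a back-of-the-envelope cost estimate for a sample workload looks like this (both models list the same per-1K-token prices):

```python
# Back-of-the-envelope cost estimate using the preview prices above
# ($0.00058 per 1K input tokens, $0.00168 per 1K output tokens).
INPUT_PRICE_PER_1K = 0.00058
OUTPUT_PRICE_PER_1K = 0.00168

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K \
         + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# Example: 2M input tokens and 500K output tokens in a day.
print(f"${estimate_cost(2_000_000, 500_000):.2f}")  # 1.16 + 0.84 = $2.00
```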

 

Try DeepSeek-V3.2 today in the AI Model Catalog | Microsoft Foundry Models

 

Updated Dec 15, 2025
Version 2.0