Microsoft Foundry Blog

Announcing extended support for Fine Tuning gpt-4o and gpt-4o-mini

davevoutila
Feb 26, 2026

Fine-tune gpt-4o and gpt-4o-mini only on Microsoft Foundry

At Build 2025, we announced extended post-retirement deployment and inference support for fine-tuned models.

Today, we’re excited to announce that we’re extending fine-tuning training for current customers of our most popular Azure OpenAI models: gpt-4o (2024-08-06) and gpt-4o-mini (2024-07-18). Hundreds of customers have pushed trillions of tokens through fine-tuned versions of these models, and we’re happy to provide even more runway for your AI agents and applications.

Already using these models in Foundry? We have you covered: come April, we’ll be the only provider offering fine-tuning for gpt-4o and gpt-4o-mini. Keep fine-tuning!

Not yet using Microsoft Foundry? Get started today by migrating your training data to Microsoft Foundry and fine-tuning gpt-4o and gpt-4o-mini with Global or Standard Training, using your existing OpenAI code. You’ll have the runway to continuously fine-tune or update your models. You have until March 31, 2026, to become a fine-tuning customer of these models.
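If you're preparing a migration, the training data format is the same chat-style JSONL your existing OpenAI fine-tuning code already produces. Below is a minimal sketch of writing such a file; the function and file names are illustrative, and the job-creation calls (which need your own Foundry endpoint and credentials) are shown only as comments.

```python
import json

# Minimal sketch: build chat-format JSONL training data for fine-tuning,
# assuming the message schema the OpenAI fine-tuning API accepts.
# All names below are illustrative, not part of any official sample.

def to_training_line(system: str, user: str, assistant: str) -> str:
    """Serialize one supervised training example as a JSONL line."""
    record = {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
            {"role": "assistant", "content": assistant},
        ]
    }
    return json.dumps(record)

# Write a tiny training file (real jobs need many more examples).
examples = [
    ("You are a concise support agent.",
     "How do I reset my password?",
     "Open Settings > Security and choose 'Reset password'."),
]
with open("training_data.jsonl", "w") as f:
    for system, user, assistant in examples:
        f.write(to_training_line(system, user, assistant) + "\n")

line = to_training_line(*examples[0])

# From here, your existing OpenAI fine-tuning code carries over against
# your Foundry resource (endpoint and model version are placeholders):
#   client.files.create(file=open("training_data.jsonl", "rb"), purpose="fine-tune")
#   client.fine_tuning.jobs.create(training_file=..., model="gpt-4o-mini-2024-07-18")
```

Each JSONL line is one complete conversation; the assistant turn is the target the model learns to reproduce.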

 

| Model | Version | Training retirement date | Deployment retirement date |
| --- | --- | --- | --- |
| gpt-4o | 2024-08-06 | No earlier than 2026-09-31¹ | 2027-03-31 |
| gpt-4o-mini | 2024-07-18 | No earlier than 2026-09-31¹ | 2027-03-31 |
| gpt-4.1 | 2025-04-14 | At base model retirement | One year after training retirement |
| gpt-4.1-mini | 2025-04-14 | At base model retirement | One year after training retirement |
| gpt-4.1-nano | 2025-04-14 | At base model retirement | One year after training retirement |
| o4-mini | 2025-04-16 | At base model retirement | One year after training retirement |

¹ For existing customers only. Otherwise, training retirement occurs at base model retirement.

Updated Feb 25, 2026
Version 1.0