We value your feedback and recognize the demand for fine-tuning to be accessible in more regions. Today, we are excited to announce that serverless fine-tuning for Mistral, Phi, and NTT models is now available across all US regions where base model inference is also available. This expansion provides greater flexibility and accessibility, ensuring that more users can benefit from the enhanced capabilities of serverless fine-tuning.
Region Availability
Cross-region fine-tuning is now enabled in the following regions:
- EastUS
- EastUS2
- SouthCentralUS
- NorthCentralUS
- WestUS
- WestUS3
Model Availability
- Mistral-Nemo
- Mistral-Large-2411
- Ministral-3B
- Phi-3.5-mini-instruct
- Phi-3.5-MoE-instruct
- Phi-4-mini-instruct
- Tsuzumi-7b
Looking Ahead: More Models and Regions
As we continue to expand our model offerings, additional models and regions will be supported soon. Our team is working to bring the latest advancements in serverless fine-tuning to users in more locations, and we will share updates as these enhancements roll out. We appreciate your ongoing support and look forward to sharing more details in the near future.
Get started today!
Whether you're new to fine-tuning or an experienced developer, getting started is now more accessible than ever. Fine-tuning is available through both Azure AI Foundry and Azure ML Studio, which offer a user-friendly graphical interface (GUI), as well as through SDKs and the CLI for advanced users.
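For SDK users, the typical workflow is to connect to your Azure AI project with the azure-ai-ml package and submit a fine-tuning job against one of the supported models. The sketch below is illustrative only: the create_finetuning_job helper, its parameters, the model ID, and the data asset names are assumptions based on public fine-tuning samples, so treat the linked notebook and documentation as the source of truth.

```python
# Illustrative sketch only -- the fine-tuning helper and its parameters are
# assumptions based on public azure-ai-ml samples; see the official notebook
# and documentation for the authoritative API.
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.finetuning import FineTuningTaskType, create_finetuning_job  # assumed module

# Connect to the Azure AI project / workspace in a supported region.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<project-or-workspace-name>",
)

# Define a chat-completion fine-tuning job for one of the supported models.
# The model ID, data assets, and hyperparameters below are placeholders.
job = create_finetuning_job(
    task=FineTuningTaskType.CHAT_COMPLETION,
    model="azureml://registries/azureml/models/Phi-3.5-mini-instruct/labels/latest",
    training_data="azureml:my-train-data:1",        # registered data asset (placeholder)
    validation_data="azureml:my-validation-data:1",  # registered data asset (placeholder)
    hyperparameters={"n_epochs": "1"},
    display_name="phi35-mini-serverless-finetune",
    output_model_name_prefix="phi35-mini-custom",
)

# Submit the job; progress can also be tracked in the Azure AI Foundry portal.
created_job = ml_client.jobs.create_or_update(job)
print(created_job.name, created_job.status)
```

Once the job completes, the resulting custom model can be deployed for inference from the portal or the SDK, just like the base models.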
Learn more!
- Try it out with Azure AI Foundry
- Explore documentation for the model catalog in Azure AI Foundry
- Begin using the fine-tuning SDK in the notebook
- Learn more about Azure AI Content Safety
- Get started with fine-tuning on Azure AI Foundry
- Learn more about region availability