model catalog
Expanded Models Available in Microsoft Foundry Agent Service
Announcement summary: Foundry Agent Service now supports an expanded ecosystem of frontier and specialist models. Access models from Anthropic, DeepSeek AI, Meta, Microsoft, xAI, and more. Avoid model lock-in and choose the best model for each scenario. Build complex, multimodal, multi-agent workflows at enterprise scale. From document intelligence to operational automation, Microsoft Foundry makes AI agents ready for mission-critical workloads.

Foundry Agent Service at Ignite 2025: Simple to Build. Powerful to Deploy. Trusted to Operate.
The upgraded Foundry Agent Service delivers a unified, simplified platform with managed hosting, built-in memory, tool catalogs, and seamless integration with Microsoft Agent Framework. Developers can now deploy agents faster and more securely, leveraging one-click publishing to Microsoft 365 and advanced governance features for streamlined enterprise AI operations.

RosettaFold3 Model at Ignite 2025: Extending Frontier of Biomolecular Modeling in Microsoft Foundry
Today at Microsoft Ignite 2025, we are excited to launch RosettaFold3 (RF3) on Microsoft Foundry, making a new generation of multi-molecular structure prediction models available to researchers, biotech innovators, and scientific teams worldwide. RF3 was developed by the Baker lab and DiMaio lab at the Institute for Protein Design (IPD) at the University of Washington, in collaboration with Microsoft’s AI for Good Lab and other research partners. RF3 is now available in Foundry Models, offering scalable access to a new generation of biomolecular modeling capabilities. Try RF3 now in Foundry Models.

A new multi-molecular modeling system, now accessible in Foundry Models

RF3 represents a leap forward in biomolecular structure prediction. Unlike previous-generation models that focused narrowly on proteins, RF3 can jointly model:

- Proteins (enzymes, antibodies, peptides)
- Nucleic acids (DNA, RNA)
- Small molecules/ligands
- Multi-chain complexes

This unified modeling approach allows researchers to explore entire interaction systems—protein–ligand docking, protein–RNA assembly, protein–DNA binding, and more—in a single end-to-end workflow.

Key advances in RF3

RF3 incorporates several advancements in protein and complex prediction, making it the state-of-the-art open-source model.

Joint atom-level modeling across molecular types: RF3 can simultaneously model all atom types across proteins, nucleic acids, and ligands—enabled by innovations in multimodal transformers and generative diffusion models.

Unprecedented control, atom-level conditioning: Users can provide the 3D structure of a ligand or compound, and RF3 will fold a protein around it. This atom-level conditioning unlocks:

- Targeted drug-design workflows
- Protein pocket and surface engineering
- Complex interaction modeling

(Figure: example showing how RF3 allows conditioning on user inputs, offering greater control over the model’s predictions.)

Broad templating support for structure-guided design: RF3 allows users to guide structure prediction using:

- Distance constraints
- Geometric templates
- Experimental data (e.g., cryo-EM)

This flexibility is limited in other models and makes RF3 ideal for hybrid computation–wet-lab workflows.

Extensible foundation for scientific and industrial research: RF3 can be adapted to diverse application areas—including enzyme engineering, materials science, agriculture, sustainability, and synthetic biology.

Use cases

RF3’s multi-molecular modeling capabilities have broad applicability beyond fundamental biology. The model enables breakthroughs across medicine, materials science, sustainability, and defense—where structure-guided design directly translates into measurable innovation.

| Sector | Illustrative Use Cases |
| --- | --- |
| Medicine | Gene therapy research: RF3 enables the design of custom proteins that bind specific DNA sequences for targeted genome repair. |
| Materials Science | Inspired by natural protein fibers such as wool and silk, IPD researchers are designing synthetic fibers with tunable mechanical properties and texture—enabling sustainable textiles and advanced materials. |
| Sustainability | RF3 supports enzyme design for plastic degradation and waste recycling, contributing to circular bioeconomy initiatives. |
| Disease & Vaccine Development | RF3-powered workflows will contribute to structure-guided vaccine design, building on IPD’s prior success with the SKYCovione COVID-19 nanoparticle vaccine developed with SK Bioscience and GSK. |
| Crop Science & Food Security | Support for gene-editing technology (via protein–DNA binding prediction) in agricultural research; design of small proteins (antimicrobial or antifungal peptides) to fight crop and tree diseases such as citrus greening. |
| Defense & Biosecurity | Enables detection and rapid countermeasure design against toxins or novel pathogens; models of this class are being studied for biosafety applications (Horvitz et al., Science, 2025). |
| Aerospace & Extreme Environments | Supports design of lightweight, self-healing, and radiation-resistant biomaterials capable of functioning under non-terrestrial conditions (e.g., high temperature, pressure, or radiation exposure). |

RF3 has the potential to lower the cost of exploratory modeling, raise success rates in structure-guided discovery, and expand biomolecular AI into domains that were previously limited by sparse experimental structures or difficult multi-molecular interactions. Because the model and training framework are open and extensible, partners can also adapt RF3 for their own research, making it a foundation for the next generation of biomolecular AI on Microsoft Foundry.

Get started today

RosettaFold3 (RF3) brings advanced multi-molecular modeling capabilities into Foundry Models, enabling researchers and biotech teams to run structure-guided workflows with greater flexibility and speed. Within Microsoft Foundry, you can integrate RF3 into your existing scientific processes—combining your data, templates, and downstream analysis tools in one connected environment.

Start exploring the next frontier of biomolecular modeling with RosettaFold3 in Foundry Models. You can also discover other early-stage AI innovations in Foundry Labs. If you’re attending Microsoft Ignite 2025, or watching on demand, be sure to check out our session:

Session: AI Frontier in Foundry Labs: Experiment Today, Lead Tomorrow

About the session: “Curious about the next wave of AI breakthroughs? Get a sneak peek into the future of AI with Azure AI Foundry Labs—your front door to experimental models, multi-agent orchestration prototypes, Agent Factory blueprints, and edge innovations. If you’re a researcher eager to test, validate, and influence what’s next in enterprise AI, this session is your launchpad. See how Labs lets you experiment fast, collaborate with innovators, and turn new ideas into real impact.”
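For teams that prefer to script against a deployed RF3 endpoint, the sketch below shows the general shape of a REST call. Everything here is an assumption for illustration: the route, payload schema, and field names are hypothetical, so check the RF3 model card in Foundry Models for the actual API contract.

```python
import os
import requests

# Hypothetical endpoint and payload shape, for illustration only.
endpoint = os.environ["AZURE_RF3_ENDPOINT"]  # your Foundry deployment URL
api_key = os.environ["AZURE_RF3_API_KEY"]

payload = {
    # A protein chain plus a ligand, mirroring RF3's multi-molecular inputs.
    "chains": [
        {"type": "protein", "sequence": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"},
        {"type": "ligand", "smiles": "CC(=O)OC1=CC=CC=C1C(=O)O"},  # aspirin
    ]
}

response = requests.post(
    f"{endpoint}/predict",  # hypothetical route
    headers={"Authorization": f"Bearer {api_key}"},
    json=payload,
    timeout=300,
)
response.raise_for_status()
structure = response.json()  # e.g., predicted coordinates and confidence scores
```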
GPT‑5.1 in Foundry: A Workhorse for Reasoning, Coding, and Chat

The pace of AI innovation is accelerating, and developers—across startups and global enterprises—are at the heart of this transformation. Today marks a significant moment for enterprise AI innovation: Azure AI Foundry is unveiling OpenAI’s GPT-5.1 series, the next generation of reasoning, analytics, and conversational intelligence. The following models will be rolling out in Foundry today:

- GPT-5.1: adaptive, more efficient reasoning
- GPT-5.1-chat: chat with new chain-of-thought for end users
- GPT-5.1-codex: optimized for long-running conversations with enhanced tools and agentic workflows
- GPT-5.1-codex-mini: a compact variant for resource-constrained environments

What’s new with the GPT-5.1 series

The GPT-5.1 series is built to respond faster to users in a variety of situations through adaptive reasoning, improving latency and cost efficiency across the series by varying thinking time more significantly. This is combined with other improvements: better tooling, enhanced stepwise reasoning visibility, multimodal intelligence, and enterprise-grade compliance.

GPT-5.1: Adaptive and Efficient Reasoning

GPT-5.1 is the mainline model, engineered to deliver adaptive, stepwise reasoning that adjusts its approach based on the complexity of each task. Core capabilities include:

- Adaptive reasoning for nuanced, context-aware thinking time
- Multimodal intelligence: supporting text, image, and audio inputs/outputs
- Enterprise-grade performance, security, and compliance

This model’s flexibility empowers developers to tackle a wide spectrum of tasks—from simple queries to deep, multi-step workflows for enterprise-grade solutions. With its ability to intelligently balance speed, cost, and intelligence, GPT-5.1 sets a new standard for both performance and efficiency in AI-powered development.

GPT-5.1-chat: Elevating Interactive Experiences with Smart, Safe Conversations

GPT-5.1-chat powers fast, context-aware chat experiences with adaptive reasoning and robust safety guardrails. With chain-of-thought added to chat for the first time, it takes the interactive experience to the next level. It’s tuned for safety and instruction-following, making it ideal for customer support, IT helpdesk, HR, and sales enablement. Multimodal chat (text, image, and audio) improves long-turn consistency for real problem solving, delivering brand-aligned, safe conversations and supporting next-best-action recommendations.

GPT-5.1-codex and GPT-5.1-codex-mini: Frontier Models for Agentic Coding

GPT-5.1-codex builds on the foundation set by GPT-5-codex, advancing developer tooling with:

- Enhanced reasoning frameworks for stepwise, context-aware code analysis and generation
- Enhanced tool handling for certain development scenarios
- Multimodal intelligence for richer developer experiences when coding

With Foundry’s enterprise-grade security and governance, GPT-5.1-codex is ideal for automated code generation and review, accelerating development cycles with intelligent code suggestions, refactoring, and bug detection. GPT-5.1-codex-mini is a compact, efficient variant optimized for resource-constrained environments. It maintains near state-of-the-art performance, multimodal intelligence, and the same safety stack and tool access as GPT-5.1-codex, making it best for cost-effective, scalable solutions in education, startups, and other cost-conscious settings. Together, these Codex models empower teams to innovate faster and with greater confidence.
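Calling a GPT-5.1 deployment from code follows the familiar Azure OpenAI pattern. The minimal sketch below assumes your own resource endpoint, key, API version, and a deployment named "gpt-5.1"; substitute the values from your Foundry deployment details.

```python
from openai import AzureOpenAI  # pip install openai

# Endpoint, API version, and deployment name are assumptions; take the real
# values from your Foundry resource and GPT-5.1 deployment.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2024-10-21",
)

response = client.chat.completions.create(
    model="gpt-5.1",  # your deployment name
    messages=[
        {"role": "system", "content": "You are a concise data-analysis assistant."},
        {"role": "user", "content": "Summarize the key risks in this 10-K excerpt: ..."},
    ],
)
print(response.choices[0].message.content)
```

The adaptive reasoning described above is handled by the model itself, so a simple chat-completions call like this is enough to benefit from it.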
Selecting Your AI Engine: Match Model Strengths to Your Business Goals

One of the advantages of the GPT-5.1 series is unified access to deep reasoning, adaptive chat, and advanced coding—all in one place. Here’s how to match model strengths to your needs:

- Opt for GPT-5.1 for general AI application use—tasks like analytics, research, legal/financial review, or consolidating large documents and codebases. It’s the model of choice for reliability and high-impact outputs.
- Go with GPT-5.1-chat for interactive assistants and product UX, especially when adaptive reasoning is required for complex cases. Reasoning hints and adaptive reasoning help with customers’ perception of latency.
- Leverage GPT-5.1-codex for deep, stepwise reasoning in complex code generation, refactoring, or multi-step analysis—ideal for demanding agentic workflows and enterprise automation.
- Utilize GPT-5.1-codex-mini for efficient, cost-effective coding intelligence in broad-scale deployment, education, or resource-constrained environments—delivering near-mainline performance in a compact model.

Deployment and Pricing

| Model | Deployment | Available Regions | Input ($/1M tokens) | Cached Input ($/1M tokens) | Output ($/1M tokens) |
| --- | --- | --- | --- | --- | --- |
| GPT-5.1 | Standard Global | Global | $1.25 | $0.125 | $10.00 |
| GPT-5.1 | Standard Data Zone | Data Zone (US & EU) | $1.38 | $0.14 | $11.00 |
| GPT-5.1-chat | Standard Global | Global | $1.25 | $0.125 | $10.00 |
| GPT-5.1-codex | Standard Global | Global | $1.25 | $0.125 | $10.00 |
| GPT-5.1-codex-mini | Standard Global | Global | $0.25 | $0.025 | $2.00 |

Start Building Today

The GPT-5.1 series is now available in Foundry Models. Whether you’re building for a large enterprise or a small or medium-sized business, or launching the next digital-native app, these models and the Foundry platform are designed to help you innovate faster, safer, and at scale.

MiniMax-M2: The Open-Source Innovator in Coding and Agentic Workflows Now in Azure AI Foundry
We’re thrilled to announce that MiniMax-M2, the latest breakthrough from MiniMax, is now available in Azure AI Foundry through Hugging Face. Built for developers, this model advances what’s possible in coding, multi-turn reasoning, and agentic workflows—while delivering enhanced efficiency and scalability.

What makes MiniMax-M2 different?

MiniMax-M2 isn’t just another large language model—it’s a 230B-parameter Mixture of Experts (MoE) architecture that activates 10B parameters per task, delivering better performance at a lower cost. This design enables:

- Enhanced efficiency: Achieve top-tier results at as little as 8% of the cost of comparable models.
- Increased context handling: With an industry-leading 204K-token context window and 131K-token output capacity, MiniMax-M2 can process entire codebases, multi-file projects, and long-form documentation without losing coherence.
- Commercial readiness: Released under Apache 2.0, MiniMax-M2 is open source and ready to deploy into your workflow.

The model ranked #5 overall on the Artificial Analysis Intelligence Index, making MiniMax-M2 one of the highest-ranked open-source models globally, outperforming many proprietary systems in reasoning, coding, and language understanding. For organizations seeking high-throughput, low-latency deployments, MiniMax-M2 runs seamlessly on an 8xH100 setup using vLLM, making it both powerful and practical.

The graphic above compares MiniMax-M2’s performance across multiple industry-standard benchmarks against leading models like DeepSeek-V3.2, GLM-4.6, and Gemini 2.5 Pro. While proprietary models such as GPT-5 (thinking) and Claude Sonnet 4.5 remain strong in certain areas, MiniMax-M2 delivers competitive results as an open-source solution, offering enterprise-grade performance for organizations seeking high-quality AI without compromising scalability or flexibility.

Why it matters for developers

MiniMax-M2 is built for modern development workflows. Whether you’re generating production-ready code, automating agentic tasks, or managing large-scale projects, this model delivers accuracy, speed, and flexibility while keeping infrastructure costs in check.

- Mixture of Experts architecture: 230B total parameters, 10B active per task, for cost-effective scalability.
- Ultra-large context window: 204K tokens for comprehensive project understanding.
- Advanced coding intelligence: Optimized for code generation, debugging, multi-file editing, and test-driven development.
- Agentic workflow support: Handles complex tool integrations and multi-step problem-solving with ease.
- Open-source freedom: Apache 2.0 license for commercial use.

MiniMax-M2 can support finance and legal workflows by automating document-heavy tasks. In finance, it could help generate audit reports, investment summaries, and portfolio analyses by processing large datasets and regulatory guidelines in a single pass, which can improve accuracy and reduce manual effort. In legal, it could assist with case law research by summarizing extensive statutes and precedents, extracting relevant insights, and providing context-specific recommendations. With its large context window and reasoning capabilities, MiniMax-M2 can enable faster, more efficient handling of complex information, allowing professionals to focus on higher-value activities.

Get started today

MiniMax-M2 is now live in Azure AI Foundry: explore its capabilities and try it today.
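For self-managed deployments, the sketch below shows the shape of a client call against a vLLM server hosting MiniMax-M2. The serve flags and base URL are assumptions (vLLM defaults to port 8000); consult the vLLM and MiniMax documentation for the exact launch configuration on your hardware.

```python
# Assumes MiniMax-M2 is already being served with vLLM's OpenAI-compatible
# server on an 8xH100 node, launched along the lines of:
#   vllm serve MiniMaxAI/MiniMax-M2 --tensor-parallel-size 8
# (flags are an assumption; check the vLLM and MiniMax docs)
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2",
    messages=[
        {"role": "user", "content": "Review this function and suggest a fix:\n"
                                    "def mean(xs): return sum(xs) / len(xs)  # fails on []"},
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```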
Azure OpenAI Model Upgrades: Prompt Safety Pitfalls with GPT-4o and Beyond

Upgrading to New Azure OpenAI Models? Beware: Your Old Prompts Might Break.

I recently worked on upgrading our Azure OpenAI integration from gpt-35-turbo to gpt-4o-mini, expecting it to be a straightforward configuration change. Just update the Azure Foundry resource endpoint, change the model name, deploy the code—and voilà, everything should work as before. Right? Not quite.

The Unexpected Roadblock

As soon as I deployed the updated code, I started seeing 400 status errors from the OpenAI endpoint. The message was cryptic: "The response was filtered due to the prompt triggering Azure OpenAI's content management policy." At first, I assumed it was a bug in my SDK call or a malformed payload. But after digging deeper, I realized this wasn’t a technical failure—it was a content safety filter kicking in before the prompt even reached the model.

The Prompt That Broke It

Here’s the original system prompt that worked perfectly with gpt-35-turbo:

```
YOU ARE A QNA EXTRACTOR IN TEXT FORMAT. YOU WILL GET A SET OF SURVEYJS QNA JSONS. YOU WILL CONVERT THAT INTO A TEXT DOCUMENT. FOR THE QUESTIONS WHERE NO ANSWER WAS GIVEN, MARK THOSE AS NO ANSWER. HERE IS THE QNA: BE CREATIVE AND PROFESSIONAL. I WANT TO GENERATE A DOCUMENT TO BE PUBLISHED.
{{$style}}
+++++
{{$input}}
+++++
```

This prompt had been reliable for months. But with gpt-4o-mini, it triggered Azure’s new input safety layer, introduced in mid-2024.

What Changed with GPT-4o-mini?

Unlike gpt-35-turbo, the gpt-4o family:

- Applies stricter content filtering—not just on the output, but also on the input prompt.
- Treats system messages and user messages as role-based chat messages, passing them through moderation before the model sees them.
- Flags prompts that look like prompt-injection attempts, such as aggressive instructions like “YOU ARE…”, “BE CREATIVE”, “GENERATE”, or “PROFESSIONAL”.
- Flags unusual formatting (like `+++++`), artificial delimiters, or token markers, since they can look like encoded content.

In short, the model didn’t even get a chance to process my prompt—it was blocked at the gate.

Fixing It: Softening the Prompt

The solution wasn’t to rewrite the entire logic, but to soften the system prompt and remove formatting that could be misinterpreted. Here’s what helped:

- Replacing “YOU ARE…” with a gentler instruction like “Please help convert the following Q&A data…”
- Removing creative directives like “BE CREATIVE” or “PROFESSIONAL” unless clearly contextualized.
- Avoiding raw JSON markers and template syntax (`{{ }}`, `+++++`) in the prompt.

Once I made these changes, the model responded smoothly—and the upgrade was finally complete.

Evolving the Prompt, Not Abandoning It

Interestingly, for some prompts I didn’t have to completely eliminate the “YOU ARE…” structure. Instead, I refined it to be more natural and less directive. Here’s a comparison:

❌ Old Prompt (Blocked):

```
YOU ARE A SOURCING AND PROCUREMENT MANAGER. YOU WILL GET BUYER'S REQUIREMENTS IN QNA FORMAT. HERE IS THE QNA:
{{$input}}
+++++
YOU WILL GENERATE TOP 10 {{$category}} RELATED QUESTIONS THAT CAN BE ASKED OF A SUPPLIER IN JSON FORMAT. THE JSON MUST HAVE QUESTION NUMBER AS THE KEY AND QUESTION TEXT AS THE QUESTION. DON'T ADD ANY DESCRIPTION TEXT OR FORMATTING IN THE OUTPUT. BE CREATIVE AND PROFESSIONAL. I WANT TO GENERATE AN RFX.
```

✅ New Prompt (Accepted):

```
You are an AI assistant that helps clarify sourcing requirements. You will receive buyer's requirements in QnA format.
Here is the QnA: {$input}
Your task is to generate the top 10 {$category} related questions that can be asked of a supplier, in JSON format.
- The JSON must use the question number as the key and the question text as the value.
- Do not include any description text or formatting in the output.
- Focus on creating clear, professional, and relevant questions that will help prepare an RFX.
```
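When the input filter does fire, the SDK surfaces it as a 400-class error, so it is worth catching explicitly rather than treating it as a generic failure. A minimal sketch, assuming your own endpoint, key, API version, and a gpt-4o-mini deployment:

```python
from openai import AzureOpenAI, BadRequestError

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2024-10-21",  # assumption: use your resource's API version
)

# Softened instruction in place of the all-caps "YOU ARE..." prompt.
softened_system_prompt = (
    "You are an AI assistant that converts SurveyJS Q&A JSON into a polished "
    "text document. Mark unanswered questions as 'No answer'."
)

# Sample SurveyJS-style payload for illustration.
qna_json_text = '{"q1": {"question": "Team size?", "answer": null}}'

try:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # your deployment name
        messages=[
            {"role": "system", "content": softened_system_prompt},
            {"role": "user", "content": qna_json_text},
        ],
    )
    print(response.choices[0].message.content)
except BadRequestError as err:
    # The input safety layer rejects the request before the model runs; the
    # error body typically says the prompt triggered content filtering.
    print(f"Prompt was filtered: {err}")
```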
Key Takeaways

- Model upgrades aren’t just about configuration changes—they can introduce new moderation layers that affect prompt design.
- Prompt safety filtering is now a first-class citizen in Azure OpenAI, especially for newer models.
- System prompts need to be rewritten with moderation in mind, not just clarity or creativity.

This experience reminded me that even small upgrades can surface big learning moments. If you're planning to move to gpt-4o-mini or any newer Azure OpenAI model, take a moment to review your prompts—they might need a little more finesse than before.

NVIDIA NIM for NVIDIA Nemotron, Cosmos, & Microsoft Trellis: Now Available in Azure AI Foundry
We’re excited to announce 7 powerful new NVIDIA NIM™ additions to Azure AI Foundry Models, now on Managed Compute. The latest wave of models—NVIDIA Nemotron Nano 9B v2, Llama 3.1 Nemotron Nano VL 8B, Llama 3.3 Nemotron Super 49B v1.5 (coming soon), Cosmos Reason1-7B, Cosmos Predict 2.5 (coming soon), Cosmos Transfer 2.5 (coming soon), and Microsoft Trellis—marks a significant leap forward in intelligent application development. Collectively, these models redefine what’s possible in advanced instruction-following, vision-language understanding, and efficient language modeling, empowering developers to build multimodal, visually rich, and context-aware solutions. By combining robust reasoning, flexible input handling, and enterprise-grade deployment options, these additions accelerate innovation across industries—from robotics and autonomous vehicles to immersive retail and digital twins—enabling smarter, safer, and more adaptive experiences at scale.

Meet the Models

| Model Name | Availability | Size | Primary Use Cases |
| --- | --- | --- | --- |
| NVIDIA Nemotron Nano 9B v2 | Available now | 9B parameters | Multilingual and code-based reasoning; AI and productivity agents; scientific reasoning and advanced math; software engineering and tool calling |
| Llama 3.3 Nemotron Super 49B v1.5 | Coming soon | 49B | AI and productivity agents; scientific reasoning and advanced math; software engineering and tool calling |
| Llama 3.1 Nemotron Nano VL 8B | Available now | 8B | Multimodal vision-language tasks, document intelligence and understanding; mobile and edge AI agents |
| Cosmos Reason1-7B | Available now | 7B | Robotics: planning and executing tasks with physical constraints; autonomous vehicles: understanding environments and making decisions; video analytics agents: extracting insights and performing root-cause analysis from video data |
| Cosmos Predict 2.5 | Coming soon | 2B | Generalist model: world state generation and prediction |
| Cosmos Transfer 2.5 | Coming soon | 2B | Structural conditioning: physical AI |
| Microsoft TRELLIS (Microsoft Research) | Available now | — | Digital twins: accurate 3D assets from simple prompts; immersive retail: photorealistic product models for AR and virtual try-ons; game and simulation development: production-ready 3D content |

Meet the NVIDIA Nemotron Family

NVIDIA Nemotron Nano 9B v2: Compact power for high-performance reasoning and agentic tasks

NVIDIA Nemotron Nano 9B v2 is a high-efficiency large language model built with a hybrid Mamba-Transformer architecture, designed to excel in both reasoning and non-reasoning tasks.

- Efficient architecture for high-performance reasoning: Combines Mamba-2 and Transformer components to deliver strong reasoning capabilities with higher throughput.
- Extensive multilingual and code capabilities: Trained on diverse language and programming data, it performs exceptionally well across tasks involving natural language (English, German, French, Italian, Spanish, and Japanese), code generation, and complex problem solving.
- Reasoning budget control: Supports runtime “thinking” budget control. During inference, the user can specify how many tokens the model is allowed to “think” for, helping balance speed, cost, and accuracy. For example, a user can tell the model to think for 1K tokens or 3K tokens for different use cases, with far better cost predictability. (A minimal sketch follows the figure below.)

Fig 1. Provided by NVIDIA.
Nemotron Nano 9B v2 is built from the ground up with training data spanning 15 languages and 43 programming languages, giving it broad multilingual and coding fluency. Its capabilities were sharpened through advanced post-training techniques such as GRPO and DPO, enabling it to reason deeply, follow instructions precisely, and adapt dynamically to different tasks.

-> Explore the model card on Azure AI Foundry

Llama 3.3 Nemotron Super 49B v1.5: High-throughput reasoning at scale

Llama 3.3 Nemotron Super 49B v1.5 (coming soon) is a significantly upgraded version of Llama-3.3-Nemotron-Super-49B-v1. It is a derivative of Meta's Llama-3.3-70B-Instruct (the reference model), optimized for advanced reasoning, instruction following, and tool use across a wide range of tasks.

- Excels in applications such as chatbots, AI agents, and retrieval-augmented generation (RAG) systems
- Balances accuracy and compute efficiency for enterprise-scale workloads
- Designed to run efficiently on a single NVIDIA H100 GPU, making it practical for real-world applications

Llama-3.3-Nemotron-Super-49B-v1.5 was trained through a multi-phase process combining human expertise, synthetic data, and advanced reinforcement learning techniques to refine its reasoning and instruction-following abilities. Its impressive performance across benchmarks like MATH500 (97.4%) and AIME 2024 (87.5%) highlights its strength in tackling complex tasks with precision and depth.

Llama 3.1 Nemotron Nano VL 8B: Multimodal intelligence for edge deployments

Llama 3.1 Nemotron Nano VL 8B is a compact vision-language model that excels in tasks such as report generation, Q&A, visual understanding, and document intelligence, delivering low latency and high efficiency that reduce TCO. The model was trained on a diverse mix of human-annotated and synthetic data, enabling robust performance across multimodal tasks such as document understanding and visual question answering. It achieved strong results on evaluation benchmarks including DocVQA (91.2%), ChartQA (86.3%), AI2D (84.8%), and OCRBenchV2 English (60.1%).

-> Explore the model card on Azure AI Foundry

What Sets Nemotron Apart

NVIDIA Nemotron is a family of open models, datasets, recipes, and tools.

1. Open-source AI technologies: Open models, data, and recipes offer transparency, allowing developers to create trustworthy custom AI for their specific needs, from creating new agents to refining existing applications.
   - Open weights: The NVIDIA Open Model License offers enterprises data control and flexible deployment.
   - Open data: Models are trained with transparent, permissively licensed NVIDIA data, available on Hugging Face, ensuring confidence in use. It also allows developers to train their own high-accuracy custom models with these open datasets.
   - Open recipe: NVIDIA shares development techniques, such as NAS, hybrid architecture, and Minitron, as well as NeMo tools enabling customization or creation of custom models.
2. Highest accuracy and efficiency: Engineered for efficiency, Nemotron delivers industry-leading accuracy in the least amount of time for reasoning, vision, and agentic tasks.
3. Run anywhere on cloud: Packaged as NVIDIA NIM for secure and reliable deployment of high-performance AI model inferencing across Azure platforms.

Meet the Cosmos Family

NVIDIA Cosmos™ is a world foundation model (WFM) development platform to advance physical AI.
At its core are Cosmos WFMs, openly available pretrained multimodal models that developers can use out of the box for generating world states as videos and physical AI reasoning, or post-train to develop specialized physical AI models.

Cosmos Reason1-7B: Physical AI

Cosmos Reason1-7B combines chain-of-thought reasoning, flexible input handling for images and video, a compact 7B-parameter architecture, and advanced physical-world understanding, making it ideal for real-time robotics, video analytics, and AI agents that require contextual, step-by-step decision-making in complex environments. This model transforms how AI and robotics interact with the real world, giving your systems the power to not just see and describe, but truly understand, reason, and make decisions in complex environments like factories, cities, and autonomous vehicles. With its ability to analyze video, plan robot actions, and verify safety protocols, Cosmos Reason1-7B helps developers build smarter, safer, and more adaptive solutions for real-world challenges. Cosmos Reason1-7B is physical AI for four embodiments.

Fig. 2: Physical AI embodiments.

Model Strengths

- Physical-world reasoning: Leverages prior knowledge, physics laws, and common sense to understand complex scenarios.
- Chain-of-thought (CoT) reasoning: Delivers contextual, step-by-step analysis for robust decision-making.
- Flexible input: Handles images, video (up to 30 seconds, 1080p), and text with a 16K context window.
- Compact and deployable: At 7B parameters, it runs efficiently from edge devices to the cloud.
- Production-ready: Available via Hugging Face, GitHub, and NVIDIA NIM; integrates with industry-standard APIs.

Enterprise Use Cases

Cosmos Reason1-7B is more than a model: it is a catalyst for building intelligent, adaptive solutions that help enterprises shape a safer, more efficient, and truly connected physical world.

Fig. 3: Use cases.

- Reimagine safety and efficiency by empowering AI agents to analyze millions of live streams and recorded videos, instantly verifying protocols and detecting risks in factories, cities, and industrial sites.
- Accelerate robotics innovation with advanced reasoning and planning, enabling robots to understand their environment, make methodical decisions, and perform complex tasks—from autonomous vehicles navigating busy streets to household robots assisting with daily chores.
- Transform data curation and annotation by automating the selection, labeling, and critiquing of massive, diverse datasets, fueling the next generation of AI with high-quality training data.
- Unlock smarter video analytics with chain-of-thought reasoning, allowing systems to summarize events, verify actions, and deliver actionable insights for security, compliance, and operational excellence.

-> Explore the model card on Azure AI Foundry

Also coming soon to Azure AI Foundry are two models of the Cosmos WFM, designed for world generation and data augmentation.

Cosmos Predict 2.5 (2B)

Cosmos Predict 2.5 is a next-generation world foundation model that generates realistic, controllable video worlds from text, images, or videos—all through a unified architecture. Trained on 200M+ high-quality clips and enhanced with reinforcement learning, it delivers stronger physics and prompt alignment while cutting compute cost and post-training time for faster physical AI workflows.
Cosmos Transfer 2.5 (2B)

While Predict 2.5 generates worlds, Transfer 2.5 transforms structured simulation inputs—like segmentation, depth, or LiDAR maps—into photorealistic synthetic data for physical AI training and development.

What Sets Cosmos Apart

- Built for physical AI: Purpose-built for robotics, autonomous systems, and embodied agents that understand physics, motion, and spatial environments.
- Multimodal world modeling: Combines images, video, depth, segmentation, LiDAR, and trajectories to create physics-aware, controllable world simulations.
- Scalable synthetic data generation: Generates diverse, photorealistic data at scale using structured simulation inputs for faster Sim2Real training and adaptation.

Microsoft Trellis by Microsoft Research: Enterprise-ready 3D Generation

Microsoft Trellis is a cutting-edge 3D asset generation model developed by Microsoft Research, designed to create high-quality, versatile 3D assets, complete with shapes and textures, from text or image prompts. Seamlessly integrated within the NVIDIA NIM microservice, Trellis accelerates asset generation and empowers creators with flexible, production-ready outputs. Quickly generate high-fidelity 3D models from simple text or image prompts—perfect for industries like manufacturing, energy, and smart infrastructure looking to accelerate digital twin creation, predictive maintenance, and immersive training environments. From virtual try-ons in retail to production-ready assets in media, TRELLIS empowers teams to create stunning 3D content at scale, cutting down production time and unlocking new levels of interactivity and personalization.

-> Explore the model card on Azure AI Foundry

Pricing

The pricing breakdown consists of the Azure compute charges plus a flat fee per GPU for the NVIDIA AI Enterprise license that is required to use the NIM software.

- Pay-as-you-go NIM surcharge: $1 per GPU hour
- Azure compute charges also apply, based on deployment configuration

Why use Managed Compute?

Managed Compute is a deployment option within Azure AI Foundry Models that lets you run large language models (LLMs), SLMs, Hugging Face models, and custom models fully hosted on Azure infrastructure. It is a powerful deployment option for models not available via standard (pay-go) endpoints, and it gives you:

- Custom model support: Deploy open-source or third-party models
- Infrastructure flexibility: Choose your own GPU SKUs (NVIDIA A10, A100, H100)
- Detailed control: Configure inference servers, protocols, and advanced settings
- Full integration: Works with the Azure ML SDK, CLI, Prompt Flow, and REST APIs
- Enterprise readiness: Supports VNet, private endpoints, quotas, and scaling policies

NVIDIA NIM Microservices on Azure

These models are available as NVIDIA NIM™ microservices on Azure AI Foundry. NVIDIA NIM, part of NVIDIA AI Enterprise, is a set of easy-to-use microservices designed for secure, reliable deployment of high-performance AI model inferencing. NIM microservices are pre-built, containerized AI endpoints that simplify deployment and scale across environments, allowing developers to run models securely and efficiently in the cloud. If you're ready to build smarter, more capable AI agents, start exploring Azure AI Foundry.

Build Trustworthy AI Solutions

Azure AI Foundry delivers managed compute designed for enterprise-grade security, privacy, and governance.
Every deployment of NIM microservices through Azure AI Foundry is backed by Microsoft’s Responsible AI principles and Secure Future Initiative, ensuring fairness, reliability, and transparency so organizations can confidently build and scale agentic AI workflows.

How to Get Started in Azure AI Foundry

Begin by accessing the Azure AI Foundry portal, then follow the steps below. Once your NIM is deployed, you can call it like any OpenAI-compatible endpoint (see the sketch after this list).

1. Navigate to ai.azure.com.
2. Select an existing (Hub) project from the top-left project picker. If you do not have a Hub project, create one using the “+ Create new” link.
3. Select Model Catalog from the left sidebar. In the “Collections” filter, select NVIDIA to see all the NIM microservices that are available on Azure AI Foundry.
4. Select the NIM you want to use and click Deploy.
5. Choose the deployment name and virtual machine (VM) type that you would like to use. VM SKUs that are supported for the selected NIM (also specified in the model card) are preselected. Note that this step requires sufficient quota in your Azure subscription for the selected VM type. If needed, follow the instructions to request a service quota increase.
6. Optionally, use the NVIDIA NeMo Agent Toolkit, designed to orchestrate, monitor, and optimize collaborative AI agents.

Note about the license: Users are responsible for compliance with the terms of the NVIDIA AI Product Agreement.

Learn More

- How to Deploy NVIDIA NIM (docs)
- Accelerating agentic workflows with Azure AI Foundry, NVIDIA NIM, and the NVIDIA NeMo Agent Toolkit
- Register for Microsoft Ignite 2025
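A minimal sketch of calling a deployed NIM from code follows. The scoring URL and key come from your endpoint's Consume tab; the chat-completions route mirrors the OpenAI-compatible API that NIM containers expose, but the exact path for your deployment is an assumption, so copy the real URL from the portal.

```python
import requests

# Assumptions: endpoint URL, key, and route come from your deployment's
# Consume tab in Azure AI Foundry / Azure ML.
endpoint = "https://<your-endpoint>.<region>.inference.ml.azure.com/v1/chat/completions"
api_key = "<your-endpoint-key>"

payload = {
    "model": "nvidia/nvidia-nemotron-nano-9b-v2",  # deployed NIM model id
    "messages": [{"role": "user", "content": "Draft a tool-calling plan for an inventory audit."}],
    "max_tokens": 512,
}
resp = requests.post(
    endpoint,
    headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
    json=payload,
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```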
Deepening our Partnership with Mistral AI on Azure AI Foundry

We’re excited to mark a new chapter in our collaboration with Mistral AI, a leading European AI innovator, with the launch of Mistral Document AI in Azure AI Foundry Models. This marks the first in a series of Mistral models coming to Azure as a serverless API, giving customers seamless access to Mistral’s cutting-edge capabilities, fully hosted, managed, and integrated into the Foundry ecosystem. This launch also deepens our support for sovereign cloud customers, especially in Europe. At Microsoft, we believe Sovereign AI is essential for enabling organizations and regulated industries to harness the full potential of AI while maintaining control over their security, data, and governance. As Satya Nadella has said, “We want every country, every organization, to build AI in a way that respects their sovereignty—of data, of applications, and of infrastructure.” By combining Mistral’s state-of-the-art models with Azure’s enterprise-grade reliability and scale, we’re enabling customers to confidently deploy AI that meets strict regulatory and data sovereignty requirements.

Mistral Document AI

By the Mistral AI team: “Enterprises today are overwhelmed with documents—contracts, forms, research papers, invoices—holding critical information that’s often trapped in scanned images and PDFs. With nearly 90% of enterprise data stored in unstructured formats, traditional OCR simply can’t keep up. Mistral Document AI is built with a multimodal approach that combines vision and language understanding; it interprets documents with contextual intelligence and delivers structured outputs that reflect the original layout—tables remain tables, headings remain headings, and images are preserved alongside the text.”

Key Capabilities

- Document parsing: Mistral Document AI interprets complex layouts and extracts rich structures such as tables, charts, and LaTeX-formatted equations with markdown-style clarity.
- Multilingual and multimodal: The model supports dozens of languages and understands both text and visual elements, making it well suited for global, diverse datasets.
- Structured output and doc-as-prompt: Mistral Document AI delivers results in structured formats like JSON, enabling easy downstream integration with databases or AI agents. This supports use cases like retrieval-augmented generation (RAG), where document content becomes a prompt for subsequent queries.

Use Cases

- Document digitization: Process archives of scanned PDFs or handwritten forms into structured digital records.
- Knowledge extraction: Transform research papers, technical manuals, or customer guides into machine-readable formats.
- RAG pipelines and intelligent agents: Integrate structured output into pipelines that feed AI systems for Q&A, summarization, and more.

Mistral Document AI on Azure AI Foundry

You can now access Mistral Document AI’s capabilities through Azure AI Foundry as a serverless Azure model, sold directly by Microsoft.

- One-click deployment (serverless): With a few clicks, you can deploy the model as a serverless REST API, without needing to provision any GPU machines or container hosts. This makes it easy to get started.
- Enterprise-grade security and privacy: Because the model runs within your Azure environment, you get network isolation and data security out of the box. All inferencing happens in Azure’s cloud under your account, so your documents aren’t sent to a third-party server.
Azure AI Foundry ensures your data stays private (no data leaves the Azure region you choose) and offers compliance with enterprise security standards. This is critical for sensitive use cases like banking or healthcare documents.

- Integrated Responsible AI capabilities: With Mistral Document AI running in Azure AI Foundry, you can apply Azure’s built-in Responsible AI tools—such as content filtering, safety system monitoring, and evaluation frameworks—to ensure your deployments align with your organization’s ethical and compliance standards.
- Observability and monitoring: Foundry’s monitoring features give you full visibility into model usage, performance, and cost. You can track API calls, latency, and error rates, enabling proactive troubleshooting and optimization.
- Agent services enablement: You can connect Mistral Document AI to Azure AI Agent Service, enabling intelligent agents to process, reason over, and act on extracted document data—unlocking new automation and decision-making scenarios.
- Azure ecosystem integration: Once deployed, the Mistral Document AI endpoint plugs easily into your existing Azure workflows. And because it’s part of Foundry, you can manage it alongside other models in a unified way. This interoperability accelerates the development of intelligent applications.

Getting Started: Deploying and Using Mistral Document AI on Azure

Setting up Mistral Document AI on Azure AI Foundry is straightforward. Here’s a quick guide to get you up and running (a sketch of the API call follows the resources below):

1. Create an Azure AI Foundry workspace: Ensure you have an Azure subscription (pay-as-you-go, not a free trial) and create an AI Foundry hub and project in the Azure portal.
2. Deploy the Mistral Document AI model: In the Azure AI Foundry model catalog, search for “mistral-document-ai-2505” and click the Deploy button. When prompted to select a pricing plan, choose deploy.
3. Call the Mistral Document AI API: Once deployed, using the model is as easy as calling a REST API, from any programming language or even a command-line tool like cURL.
4. Integrate and iterate: With the OCR results in hand, you can integrate Mistral Document AI into your workflows.

Conclusion

Mistral Document AI joins Azure AI Foundry as one of several tools available to help organizations unlock insights from unstructured documents. This launch reflects our continued commitment to bringing the latest, most capable models into Foundry, giving developers and enterprises more choice than ever. Whether you’re digitizing records, building knowledge bases, or enhancing your AI workflows, Azure AI Foundry offers powerful and accessible solutions.

Pricing

| Model Name | Deployment | Pricing (per 1K pages) |
| --- | --- | --- |
| mistral-document-ai-2505 | Global | $3.00 |
| mistral-document-ai-2505 | Data Zone | $3.30 |
| Mistral OCR | Global | $1.00 |

Resources

- Explore Mistral Document AI on MS Learn
- GitHub code samples
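The sketch below shows the general shape of an OCR request. The payload mirrors Mistral's public Document AI API, but the exact Azure serverless route is an assumption, so copy the real URL from your deployment's details page in Azure AI Foundry.

```python
import requests

# Assumptions: endpoint route and response shape follow Mistral's public OCR
# API; substitute the URL and key from your Foundry deployment.
endpoint = "https://<your-resource>.services.ai.azure.com/providers/mistral/azure/ocr"
api_key = "<your-api-key>"

payload = {
    "model": "mistral-document-ai-2505",
    "document": {
        "type": "document_url",
        "document_url": "https://example.com/sample-invoice.pdf",  # hypothetical document
    },
}
resp = requests.post(
    endpoint,
    headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
    json=payload,
    timeout=120,
)
resp.raise_for_status()
# Structured, markdown-style output per page, ready for RAG pipelines.
for page in resp.json().get("pages", []):
    print(page.get("markdown", ""))
```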
Announcing the Grok 4 Fast Models from xAI: Now Available in Azure AI Foundry

These models, grok-4-fast-reasoning and grok-4-fast-non-reasoning, empower developers with distinct approaches to suit their application needs. Each model brings advanced capabilities such as structured outputs, long-context processing, and seamless integration with enterprise-grade security and governance. This release marks a significant step toward scalable, agentic AI systems that orchestrate tools, APIs, and domain data with low latency. Leveraging the Grok 4 Fast models within Azure AI Foundry Models accelerates the development of intelligent applications that combine speed, flexibility, and compliance. The unified model experience, paired with Azure’s enterprise controls, positions the Grok 4 Fast models as foundational technologies for next-generation AI-powered workflows.

Why use the Grok 4 Fast Models on Azure

Modern AI applications are increasingly agentic—capable of orchestrating tools, APIs, and domain data at low latency. The Grok 4 Fast models were designed for these patterns: fast, intelligent, and agent-ready, enabling parallel tool use, JSON-structured outputs, and image input for multimodal understanding. Azure AI Foundry enhances these models with enterprise controls (RBAC, private networking, customer-managed keys), observability and evaluations, and first-party hosting through Foundry Models—helping teams move confidently from prototype to production. Beyond that, using the Grok 4 Fast models on Azure offers the following:

- Global scalability and reliability: Azure’s worldwide infrastructure supports resilient, high-availability deployments across multiple regions.
- Integrated security and compliance: Enterprise-grade identity management, network isolation, encryption at rest and in transit, and compliance certifications help safeguard sensitive data and meet regulatory requirements.
- Unified management experience: Centralized monitoring, governance, and cost controls through the Azure portal and Azure Resource Manager simplify operations and oversight.
- Native integration across Azure services: Easily connect to data sources, analytics, and other services like Azure Synapse, Cosmos DB, and Logic Apps for end-to-end solutions.
- Enterprise support and SLAs: Azure delivers 24/7 support, service-level agreements, and best-in-class reliability for mission-critical workloads.

By deploying Grok 4 Fast models on Azure, organizations can build robust, secure, and scalable AI applications with confidence and agility.

Key capabilities

The Grok 4 Fast models introduce a suite of advanced features designed to enhance agentic workflows and multimodal integration. With flexible model choices and powerful context handling, the Grok 4 Fast models are engineered for efficiency, scalability, and seamless deployment. Choose the reasoning level by selecting which Grok 4 Fast model to use:

- grok-4-fast-reasoning: Optimized for fast reasoning in agentic workflows.
- grok-4-fast-non-reasoning: Uses the same underlying weights but is constrained by a non-reasoning system prompt, offering a streamlined approach for specific tasks.
- Multimodal: Provides image understanding when deployed with the Grok image tokenizer.
- Tool use and structured outputs: Enables parallel function calling and supports JSON schemas for predictable integration.
- Long context: Supports approximately 131K tokens for deep, comprehensive understanding.
- Efficient H100 performance: Designed to run efficiently on H100 GPUs for agentic search and real-time orchestration.
Collectively, these features make the Grok 4 Fast models a robust and versatile solution for developers and enterprises looking to push the boundaries of AI-powered workflows.

What you can do with the Grok 4 Fast models

Building on the advanced capabilities of the Grok 4 Fast models, developers can unlock innovative solutions across a wide variety of applications. The following use cases highlight how these models streamline complex workflows, maximize efficiency, and accelerate intelligent automation with robust, scalable AI.

- Real-time agentic task orchestration: Automate and coordinate multi-step processes across systems with fast, flexible reasoning for dynamic business operations.
- Multimodal document analysis: Extract insights and process information from both text and images for comprehensive, context-aware understanding.
- Enterprise search and knowledge retrieval: Leverage long-context support for enhanced semantic search, surfacing relevant information from massive data repositories.
- Parallel tool integration: Invoke multiple APIs and functions simultaneously, enabling sophisticated workflows with structured, predictable outputs.
- Scalable conversational AI: Deploy high-capacity virtual agents capable of handling extended dialogues and nuanced queries with low latency.
- Customizable decision support: Empower users with AI-driven recommendations and scenario analysis tailored to organizational needs and governance requirements.

With the Grok 4 Fast models, developers are equipped to build and iterate on next-generation AI solutions that combine speed, scalability, and multimodal capabilities, supporting complex workflows and innovative applications across a range of use cases.

Pricing for Grok 4 Fast Models on Azure AI Foundry

| Model | Deployment | Price ($/1M tokens) |
| --- | --- | --- |
| grok-4-fast-reasoning | Global Standard (PayGo) | Input: $0.43; Output: $1.73 |
| grok-4-fast-non-reasoning | Global Standard (PayGo) | Input: $0.43; Output: $1.73 |

Get started in minutes

With the Grok 4 Fast models, developers gain access to cutting-edge AI with a massive context window, efficient GPU performance, and enterprise-grade governance. Start building the future of AI today: visit the Model Catalog in Azure AI Foundry and deploy grok-4-fast-reasoning and grok-4-fast-non-reasoning to accelerate your innovation.
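Once deployed, a Grok 4 Fast model can be called through the azure-ai-inference SDK like any other Foundry Models deployment. A minimal sketch, assuming your own project endpoint, key, and the deployment names used above:

```python
# pip install azure-ai-inference
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# Endpoint URL and deployment name are assumptions; take them from your
# Foundry project's deployment details.
client = ChatCompletionsClient(
    endpoint="https://<your-resource>.services.ai.azure.com/models",
    credential=AzureKeyCredential("<your-api-key>"),
)

response = client.complete(
    model="grok-4-fast-reasoning",  # or "grok-4-fast-non-reasoning"
    messages=[
        SystemMessage(content="You orchestrate internal tools and answer tersely."),
        UserMessage(content="Summarize open incidents from the last 24 hours."),
    ],
)
print(response.choices[0].message.content)
```

Switching between the two models is just a change to the `model` value, which makes it easy to compare reasoning and non-reasoning behavior for a given workload.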
The Future of AI: Horses for Courses - Task-Specific Models and Content Understanding

Task-specific models are designed to excel at specific use cases, offering highly specialized solutions that can be more efficient and cost-effective than general-purpose models. These models are optimized for particular tasks, resulting in faster performance and lower latency, and they often do not require prompt engineering or fine-tuning.