## Foundry Agent Service at Ignite 2025: Simple to Build. Powerful to Deploy. Trusted to Operate.
The upgraded Foundry Agent Service delivers a unified, simplified platform with managed hosting, built-in memory, tool catalogs, and seamless integration with Microsoft Agent Framework. Developers can now deploy agents faster and more securely, leveraging one-click publishing to Microsoft 365 and advanced governance features for streamlined enterprise AI operations.

## RosettaFold3 Model at Ignite 2025: Extending the Frontier of Biomolecular Modeling in Microsoft Foundry
Today at Microsoft Ignite 2025, we are excited to launch RosettaFold3 (RF3) on Microsoft Foundry, making a new generation of multi-molecular structure prediction models available to researchers, biotech innovators, and scientific teams worldwide. RF3 was developed by the Baker lab and DiMaio lab at the Institute for Protein Design (IPD) at the University of Washington, in collaboration with Microsoft's AI for Good lab and other research partners. RF3 is now available in Foundry Models, offering scalable access to a new generation of biomolecular modeling capabilities.

Try RF3 now in Foundry Models.

### A new multi-molecular modeling system, now accessible in Foundry Models

RF3 represents a leap forward in biomolecular structure prediction. Unlike previous-generation models focused narrowly on proteins, RF3 can jointly model:

- Proteins (enzymes, antibodies, peptides)
- Nucleic acids (DNA, RNA)
- Small molecules/ligands
- Multi-chain complexes

This unified modeling approach allows researchers to explore entire interaction systems (protein–ligand docking, protein–RNA assembly, protein–DNA binding, and more) in a single end-to-end workflow.

### Key advances in RF3

RF3 incorporates several advances in protein and complex prediction, making it the state-of-the-art open-source model in its class.

**Joint atom-level modeling across molecular types.** RF3 can simultaneously model all atom types across proteins, nucleic acids, and ligands, enabled by innovations in multimodal transformers and generative diffusion models.

**Unprecedented control: atom-level conditioning.** Users can provide the 3D structure of a ligand or compound, and RF3 will fold a protein around it. This atom-level conditioning unlocks:

- Targeted drug-design workflows
- Protein pocket and surface engineering
- Complex interaction modeling

*Example showing how RF3 allows conditioning on user inputs, offering greater control over the model's predictions.*
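To make the conditioning idea concrete, a request to a multi-molecular model of this kind might pair a protein chain with a ligand and an optional pocket constraint. The sketch below is purely illustrative: the field names, the example sequence, and the residue identifiers are assumptions for exposition, not RF3's actual request schema (consult the Foundry Models documentation for the real format).

```json
{
  "protein": {
    "chain_id": "A",
    "sequence": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
  },
  "ligand": {
    "chain_id": "L",
    "smiles": "CC(=O)Oc1ccccc1C(=O)O"
  },
  "constraints": {
    "binding_pocket_residues": ["A:45", "A:48", "A:112"]
  }
}
```

Conceptually, the ligand's atoms act as a fixed condition and the model folds the protein chain around them, which is what enables the drug-design and pocket-engineering workflows listed above.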
**Broad templating support for structure-guided design.** RF3 allows users to guide structure prediction using:

- Distance constraints
- Geometric templates
- Experimental data (e.g., cryo-EM)

This flexibility is limited in other models and makes RF3 ideal for hybrid computation–wet-lab workflows.

**Extensible foundation for scientific and industrial research.** RF3 can be adapted to diverse application areas, including enzyme engineering, materials science, agriculture, sustainability, and synthetic biology.

### Use cases

RF3's multimolecular modeling capabilities have broad applicability beyond fundamental biology. The model enables breakthroughs across medicine, materials science, sustainability, and defense, where structure-guided design directly translates into measurable innovation.

| Sector | Illustrative use cases |
| --- | --- |
| Medicine | Gene therapy research: RF3 enables the design of custom proteins that bind specific DNA sequences for targeted genome repair. |
| Materials science | Inspired by natural protein fibers such as wool and silk, IPD researchers are designing synthetic fibers with tunable mechanical properties and texture, enabling sustainable textiles and advanced materials. |
| Sustainability | RF3 supports enzyme design for plastic degradation and waste recycling, contributing to circular-bioeconomy initiatives. |
| Disease & vaccine development | RF3-powered workflows will contribute to structure-guided vaccine design, building on IPD's prior success with the SKYCovione COVID-19 nanoparticle vaccine developed with SK Bioscience and GSK. |
| Crop science & food security | Support for gene-editing research (via RF3's protein–DNA binding prediction capabilities), and design of small antimicrobial and antifungal peptides to fight crop and tree diseases such as citrus greening. |
| Defense & biosecurity | Enables detection and rapid countermeasure design against toxins or novel pathogens; models of this class are being studied for biosafety applications (Horvitz et al., Science, 2025). |
| Aerospace & extreme environments | Supports design of lightweight, self-healing, and radiation-resistant biomaterials capable of functioning under non-terrestrial conditions (e.g., high temperature, pressure, or radiation exposure). |

RF3 has the potential to lower the cost of exploratory modeling, raise success rates in structure-guided discovery, and expand biomolecular AI into domains that were previously limited by sparse experimental structures or difficult multimolecular interactions. Because the model and training framework are open and extensible, partners can also adapt RF3 for their own research, making it a foundation for the next generation of biomolecular AI on Microsoft Foundry.

### Get started today

RosettaFold3 (RF3) brings advanced multimolecular modeling capabilities into Foundry Models, enabling researchers and biotech teams to run structure-guided workflows with greater flexibility and speed. Within Microsoft Foundry, you can integrate RF3 into your existing scientific processes, combining your data, templates, and downstream analysis tools in one connected environment.

Start exploring the next frontier of biomolecular modeling with RosettaFold3 in Foundry Models. You can also discover other early-stage AI innovations in Foundry Labs. If you're attending Microsoft Ignite 2025, or watching on demand, be sure to check out our session:

**Session: AI Frontier in Foundry Labs: Experiment Today, Lead Tomorrow**

About the session: "Curious about the next wave of AI breakthroughs? Get a sneak peek into the future of AI with Azure AI Foundry Labs—your front door to experimental models, multi-agent orchestration prototypes, Agent Factory blueprints, and edge innovations.
If you're a researcher eager to test, validate, and influence what's next in enterprise AI, this session is your launchpad. See how Labs lets you experiment fast, collaborate with innovators, and turn new ideas into real impact."

## The Future of AI: "Wigit" for computational design and prototyping
Discover how AI is revolutionizing software prototyping. Learn how Wigit, an internal AI-powered tool created with Azure AI Foundry, enables anyone, from designers to product managers, to create live, interactive prototypes in minutes. This blog explores how AI democratizes tool creation, accelerates innovation, and transforms static workflows into dynamic, collaborative environments.

## Building AI Apps with the Foundry Local C# SDK
### What Is Foundry Local?

Foundry Local is a lightweight runtime designed to run AI models directly on user devices. It supports a wide range of hardware (CPU, GPU, NPU) and provides a consistent developer experience across platforms. The SDKs are available in multiple languages, including Python, JavaScript, Rust, and now C#.

### Why a C# SDK?

The C# SDK brings Foundry Local into the heart of the .NET ecosystem. It allows developers to:

- Download and manage models locally.
- Run inference using OpenAI-compatible APIs.
- Integrate seamlessly with existing .NET applications.

This means you can build intelligent apps that run offline, reduce latency, and maintain data privacy, all without sacrificing developer productivity.

### Bootstrap Process: How the SDK Gets You Started

One of the most developer-friendly aspects of the C# SDK is its automatic bootstrap process. Here's what happens under the hood when you initialise the SDK:

1. Service discovery and startup. The SDK automatically locates the Foundry Local installation on the device and starts the inference service if it's not already running.
2. Model download and caching. If the specified model isn't already cached locally, the SDK downloads the most performant model variant (e.g. GPU, CPU, NPU) for the end user's hardware from the Foundry model catalog. This ensures you're always working with the latest optimised version.
3. Model loading into the inference service. Once downloaded (or retrieved from cache), the model is loaded into the Foundry Local inference engine, ready to serve requests.

This streamlined process means developers can go from zero to inference with just a few lines of code: no manual setup or configuration required.

### Leverage Your Existing AI Stack

One of the most exciting aspects of the Foundry Local C# SDK is its compatibility with popular AI tools:

- OpenAI SDK: Foundry Local provides an OpenAI-compliant chat completions (and embeddings) API, meaning that if you're already using the `OpenAI` chat completions API, you can reuse your existing code with minimal changes.
- Semantic Kernel: Foundry Local also integrates well with Semantic Kernel, Microsoft's open-source framework for building AI agents. You can use Foundry Local models as plugins or endpoints within Semantic Kernel workflows, enabling advanced capabilities like memory, planning, and tool calling.

### Quick Start Example

Follow these three steps:

1. Create a new project

Create a new C# project and navigate to it:

```shell
dotnet new console -n hello-foundry-local
cd hello-foundry-local
```

2. Install NuGet packages

Install the following NuGet packages into your project:

```shell
dotnet add package Microsoft.AI.Foundry.Local --version 0.1.0
dotnet add package OpenAI --version 2.2.0-beta.4
```

3. Use the OpenAI SDK with Foundry Local

The following example demonstrates how to use the OpenAI SDK with Foundry Local. The code initializes the Foundry Local service, loads a model, and generates a response using the OpenAI SDK.
Copy and paste the following code into a C# file named Program.cs:

```csharp
using Microsoft.AI.Foundry.Local;
using OpenAI;
using OpenAI.Chat;
using System.ClientModel;

var alias = "phi-3.5-mini";

// Start the Foundry Local service and load the model for this alias.
var manager = await FoundryLocalManager.StartModelAsync(aliasOrModelId: alias);
var model = await manager.GetModelInfoAsync(aliasOrModelId: alias);

// Point the OpenAI client at the local Foundry endpoint.
ApiKeyCredential key = new ApiKeyCredential(manager.ApiKey);
OpenAIClient client = new OpenAIClient(key, new OpenAIClientOptions { Endpoint = manager.Endpoint });
var chatClient = client.GetChatClient(model?.ModelId);

// Stream the response to the console as tokens arrive.
var completionUpdates = chatClient.CompleteChatStreaming("Why is the sky blue?");

Console.Write("[ASSISTANT]: ");
foreach (var completionUpdate in completionUpdates)
{
    if (completionUpdate.ContentUpdate.Count > 0)
    {
        Console.Write(completionUpdate.ContentUpdate[0].Text);
    }
}
```

Run the code using the following command:

```shell
dotnet run
```

### Final thoughts

The Foundry Local C# SDK empowers developers to build intelligent, privacy-preserving applications that run anywhere. Whether you're working on desktop, mobile, or embedded systems, this SDK offers a robust and flexible way to bring AI closer to your users.

Ready to get started? Dive into the official documentation:

- Getting started guide
- C# Reference documentation

You can also make contributions to the C# SDK by creating a PR on GitHub: Foundry Local on GitHub.

## Azure AI Foundry Models: Futureproof Your GenAI Applications
### Years of Rapid Growth and Innovation

The Azure AI Foundry Models journey started with the launch of Models as a Service (MaaS) in partnership with Meta Llama at Ignite 2023. Since then, we've rapidly expanded our catalog and capabilities:

- 2023: General availability of the model catalog and launch of MaaS
- 2024: 1,800+ models available, including Cohere, Mistral, Meta, G42, AI21, Nixtla and more, with 250+ OSS models deployed on managed compute
- 2025 (Build): 10,000+ models, new models sold directly by Microsoft, more managed-compute models and expanded partnerships, and the introduction of advanced tooling like Model Leaderboard, Model Router, MCP Server, and Image Playground

### GenAI Trends Reshaping the Model Landscape

To stay ahead of the curve, Azure AI Foundry Models is designed to support the most important trends in GenAI:

- Emergence of reasoning-centric models
- Proliferation of agentic AI and multi-agent systems
- Expansion of open-source ecosystems
- Multimodal intelligence becoming mainstream
- Rise of small, efficient models (SLMs)

These trends are shaping a future where enterprises need not just access to models, but smart tools to pick, combine, and deploy the best ones for each task.

### A Platform Built for Flexibility and Scale

Azure AI Foundry is more than a catalog: it's your end-to-end platform for building with AI. You can:

- Explore 10,000+ models, including foundation, industry, multimodal, and reasoning models, along with agents
- Deploy using flexible options like PayGo, Managed Compute, or Provisioned Throughput (PTU)
- Monitor and optimize performance with integrated observability and compliance tooling

Whether you're prototyping or scaling globally, Foundry gives you the flexibility you need.

### Two Core Model Categories

#### 1. Models Sold Directly by Microsoft

These models are hosted and billed directly by Microsoft under Microsoft Product Terms.
They offer:

- Enterprise-grade SLAs and reliability
- Deep Azure service integration
- Responsible AI standards
- Flexible use of reserved quota via Azure AI Foundry Provisioned Throughput (PTU) across direct models, including OpenAI, Meta, Mistral, Grok, DeepSeek, and Black Forest Labs

Reduce AI workload costs for predictable consumption patterns with Azure AI Foundry Provisioned Throughput reservations. Learn more here.

Coming to the family of direct models from Azure:

- Grok 3 / Grok 3 Mini (from xAI)
- Flux Pro 1.1 Ultra (from Black Forest Labs)
- Llama 4 Scout & Maverick (from Meta)
- Codestral 2501, OCR (from Mistral)

#### 2. Models from Partners & Community

These models come from the broader ecosystem, including open-source and monetized partners. They are deployed as Managed Compute or Standard PayGo, and include models from Cohere, Paige, and Saifr. New industry models are also joining this ecosystem of partner and community models:

- NVIDIA NIMs: ProteinMPNN, RFDiffusion, OpenFold2, MSA
- Paige AI: Virchow 2G, Virchow 2G-mini
- Microsoft Research: EvoDiff, BioEmu-1

### Expanded capabilities that make model choice simpler and faster

Azure AI Foundry Models isn't just about more models. We're introducing tools to help developers intelligently navigate model complexity:

#### 1. Model Leaderboard

Easily compare model performance across real-world tasks with:

- Transparent benchmark scores
- Task-specific rankings (summarization, RAG, classification, etc.)
- Live updates as new models are evaluated

Whether you want the highest accuracy, fastest throughput, or best price-performance ratio, the leaderboard guides your selection.

#### 2. Model Router

Don't pick just one: let Azure do the heavy lifting.

- Automatically route queries to the best available model
- Optimize based on speed, cost, or quality
- Supports dynamic fallback and load balancing

This capability is a game-changer for agents, copilots, and apps that need adaptive intelligence.
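To build intuition for what a router does, the sketch below approximates the selection-and-fallback idea client-side. It is illustrative only: the deployment names, quality scores, and costs are hypothetical assumptions, not real Azure rankings or pricing, and Model Router's actual algorithm is not public. The sketch routes each request to the cheapest healthy model that meets a quality threshold, falling back to the best healthy model when nothing qualifies.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical candidate deployments: name, relative quality, cost per 1K tokens,
// and a health flag. All values are illustrative assumptions.
var candidates = new List<(string Name, double Quality, double Cost, bool Healthy)>
{
    ("gpt-4.1",      0.95, 0.0100, true),
    ("gpt-4.1-mini", 0.85, 0.0020, true),
    ("phi-4",        0.75, 0.0005, true),
};

// Pick the cheapest healthy model meeting the quality bar;
// if none qualifies, fall back to the highest-quality healthy model.
string Route(double minQuality) =>
    candidates.Where(c => c.Healthy && c.Quality >= minQuality)
              .OrderBy(c => c.Cost)
              .Select(c => c.Name)
              .DefaultIfEmpty(candidates.Where(c => c.Healthy)
                                        .OrderByDescending(c => c.Quality)
                                        .First().Name)
              .First();

Console.WriteLine(Route(0.80)); // cheapest model with quality >= 0.80
Console.WriteLine(Route(0.99)); // nothing qualifies, so fall back to the best healthy model
```

The value of the managed service is that this bookkeeping (plus live health checks and load balancing) happens on the platform side, so application code just sends the request.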
#### 3. Image/Video Playground

A new visual interface for:

- Testing image generation models side-by-side
- Tuning prompts and decoding settings
- Evaluating output quality interactively

This is particularly useful for multimodal experimentation across marketing, design, and research use cases.

#### 4. MCP Server

Enables model-aware orchestration, especially for agentic workloads:

- Tool-use integration
- Multi-model planning and reasoning
- Unified coordination across model APIs

### A Futureproof Foundation

With Azure AI Foundry Models, you're not just selecting from a list of models: you're stepping into a full-stack, flexible, and future-ready AI environment.

- Choose the best model for your needs
- Deploy on your terms: serverless, managed, or reserved
- Rely on enterprise-grade performance, security, and governance
- Stay ahead with integrated innovation from Microsoft and the broader ecosystem

The AI future isn't one-size-fits-all, and neither is Azure AI Foundry.

Explore today: Azure AI Foundry