
Apps on Azure Blog

Microsoft.Extensions.AI: Integrating AI into your .NET applications

japhlet-nwamu
May 01, 2025

Artificial Intelligence (AI) is transforming the way we build applications. With the introduction of Microsoft.Extensions.AI, integrating AI services into .NET applications has never been easier. In this blog, we'll explore what Microsoft.Extensions.AI is, why .NET developers should try it out, and how to get started using it to build a simple text generation application.

Why Microsoft.Extensions.AI?

Microsoft.Extensions.AI provides unified abstractions and middleware for integrating AI services into .NET applications. This means you can work with AI capabilities like chat features, embedding generation, and tool calling without worrying about specific platform implementations. Whether you're using Azure AI, OpenAI, or other AI services, Microsoft.Extensions.AI ensures seamless integration and collaboration across the .NET ecosystem.

 

With this extension, .NET developers can easily connect their applications to AI services like:

  • Azure OpenAI (GPT models) 
  • OpenAI API 
  • Any other library that supports Microsoft.Extensions.AI (MEAI), including all the models in Azure AI Foundry!

Instead of juggling raw HTTP requests and complex authentication for each provider, Microsoft.Extensions.AI gives you a unified API surface—so you interact with any AI model through one consistent, maintainable interface.
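
To make the "one consistent interface" point concrete, here is a minimal sketch of code that depends only on the IChatClient abstraction; the SummarizeAsync helper is hypothetical, but it shows how the concrete provider behind the client can be swapped without touching this method:

using Microsoft.Extensions.AI;

// This helper only knows about IChatClient, the Microsoft.Extensions.AI abstraction.
// Any provider that implements it (Azure OpenAI, OpenAI, a local model, ...) can be
// passed in without changing this code.
static async Task<string> SummarizeAsync(IChatClient chatClient, string text)
{
    var response = await chatClient.GetResponseAsync($"Summarize this in one sentence: {text}");
    return response.ToString(); // the response renders as the generated text
}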

Getting Started with Microsoft.Extensions.AI

Let's walk through how to add text generation capabilities to your .NET application step by step, whether you decide to use GitHub Models or Azure AI Foundry.

Using GitHub Models

GitHub Models is a free service that lets you try out and interact with different AI models right within your development environment. It's easy to use with Codespaces, making it a great tool for experimenting with various models and understanding their capabilities before you decide to implement them. GitHub Models are particularly suitable for quick trials and allow easy model switching, giving you the flexibility to test different AI functionalities without any hassle.

⚙️ Step 1: Creating a Personal Access Token for GitHub Models Access

To use GitHub Models, you first need to create a personal access token. Personal access tokens work like passwords, so keep them safe. Here's how to create one:

  • Select your GitHub profile picture and click ⚙️ Settings
  • In the left sidebar, click <> Developer settings
[Screenshot: Sidebar in GitHub showing developer settings options]
  • In the left sidebar, under 🔑 Personal access tokens, click Tokens (classic)
[Screenshot: Section in GitHub for generating personal access tokens]
  • Select Generate new token, then click Generate new token (classic)
[Screenshot: Button in GitHub for generating a new personal access token]
  • Under the "Note" field, give your token a descriptive name (e.g., Testing-MEAI-In-NET)
[Screenshot: Section to add a descriptive name for your token]
  • To give your token an expiration, select Expiration, then click Custom to enter a date (we recommend 7 days as a security best practice)
  • Click Generate token and copy the new token to your clipboard; you won't be able to view it again once you leave the page.

🛠️ Step 2: Creating a Codespace

Let's create a GitHub Codespace to use for the rest of this section.

  1. Head over to the repository's main page by right-clicking here and selecting Open in new window
  2. Fork the repo into your GitHub account by clicking the Fork button in the top right corner of the page
  3. Click the Code dropdown button and select the Codespaces tab
  4. Click Create codespace on main
  5. You may be prompted to install required extensions (such as the C# Dev Kit)

🚀 Step 3: Test your application

Run the sample app to verify that everything is working correctly:

  • Open the terminal. You can open a terminal window by pressing Ctrl+` (backtick).
  • Switch to the right directory by running the following command:
cd MEAI-for-Dotnet-Developers/MEAI-GitHub-Models
  • Alternatively, you can right-click the folder and select Open in Integrated Terminal
[Screenshot: Option in GitHub Codespaces for opening the integrated terminal]
  • To run the application, enter the following command into the terminal:
dotnet run
  • It may take a couple of seconds, but eventually your application should output a message.
  • You can try out new prompts by replacing the default prompt on line 16 of Program.cs with your own (a sketch of what the sample roughly does follows below).
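
For reference, here is a minimal sketch of what the GitHub Models sample's Program.cs roughly does, assuming the Microsoft.Extensions.AI and Microsoft.Extensions.AI.AzureAIInference preview packages and your personal access token exposed through a GITHUB_TOKEN environment variable (the exact adapter method name has shifted between preview releases, e.g. AsChatClient vs. AsIChatClient):

using Azure;
using Azure.AI.Inference;
using Microsoft.Extensions.AI;

// Read the GitHub personal access token created in Step 1.
var githubToken = Environment.GetEnvironmentVariable("GITHUB_TOKEN")
    ?? throw new InvalidOperationException("Set the GITHUB_TOKEN environment variable.");

// GitHub Models exposes an Azure AI Inference-compatible endpoint.
IChatClient client = new ChatCompletionsClient(
        new Uri("https://models.inference.ai.azure.com"),
        new AzureKeyCredential(githubToken))
    .AsChatClient("gpt-4o-mini"); // model id; swap it to try other models

// The prompt below is the part you can replace with your own.
var response = await client.GetResponseAsync("What is Microsoft.Extensions.AI?");
Console.WriteLine(response);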

Using Azure AI Foundry

Azure AI Foundry offers a robust and growing catalog of frontier and open-source models that can be applied over your data from Microsoft, OpenAI, Hugging Face, Meta, Mistral, and other partners. It requires a subscription and provides more extensive capabilities. It offers a unified platform for enterprise AI operations, model builders, and application development. Azure AI Foundry combines production-grade infrastructure with user-friendly interfaces, ensuring organizations can build and operate AI applications with confidence.

⚙️ Step 1: Setting Up Azure AI Foundry

We will use Azure AI services for this solution. To use Azure AI Foundry models, follow the steps in this section.

  • Go to the Azure portal
  • Create an Azure OpenAI resource
  • Open the resource and click Go to Azure AI Foundry portal
[Screenshot: Azure portal interface for managing Azure OpenAI resources]
  • You will be directed to the Azure AI Foundry portal and should see a page like the one below.
[Screenshot: Azure AI Foundry portal interface for creating and managing AI resources]
  • Click Deployments to open a new page that lets you create a deployment using any of the models available in Azure AI Foundry
  • Click Deploy model. For this solution, we will use gpt-4o-mini, so select that model from the dropdown and click Confirm to continue.
[Screenshot: Page in Azure AI Foundry for creating and managing model deployments]
  • Give your deployment a name and click Deploy
[Screenshot: Button in Azure AI Foundry for deploying a selected model]
  • Well done! For the next step, you will need your endpoint and API key, so be sure to take note of those.

🚀 Step 2: Create a .NET Console application

You can work with either VS Code or Visual Studio.

  • If you're using VS Code, open the terminal and run the command below to create a .NET console application. Give your application a name, for example "TextGenerationApp".
dotnet new console -o [application name]
  • If you're using Visual Studio, create a .NET console project with your desired name. I'll name my project "TextGenerationApp". Ensure your terminal is switched to this directory.
  • Alternatively, you can clone the repository - https://github.com/japhletnwamu/MEAI-for-Dotnet-Developers - in either VS Code or Visual Studio.

🛠️ Step 3: Install the NuGet Package

  • You can install Microsoft.Extensions.AI using the .NET CLI:
dotnet add package Microsoft.Extensions.AI
  • Or, if you're using Visual Studio on Windows and prefer it, you can use the Package Manager Console:
Install-Package Microsoft.Extensions.AI
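
Note that the Azure OpenAI code in the next steps also relies on two more packages: Azure.AI.OpenAI (which provides AzureOpenAIClient) and the Microsoft.Extensions.AI.OpenAI adapter (which supplies the IChatClient extension methods and was still in preview at the time of writing, hence the --prerelease flag). If the compiler complains about missing types, add them as well:

dotnet add package Azure.AI.OpenAI
dotnet add package Microsoft.Extensions.AI.OpenAI --prerelease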

🔗 Step 4: Configure AI Services in Your App

Now let's configure the AI service inside Program.cs. This setup connects your .NET app to Azure OpenAI, allowing you to use GPT models with minimal effort. Don't forget to replace the placeholders with your endpoint, API key, and deployment name.

using System.ClientModel;
using Azure.AI.OpenAI;
using Microsoft.Extensions.AI; 

var endpoint = new Uri("YOUR_ENDPOINT"); // e.g. "https://<your-resource-name>.openai.azure.com/"
var apiKey = new ApiKeyCredential("YOUR_API_KEY");
var deploymentName = "YOUR_DEPLOYMENT_NAME"; // e.g. "gpt-4o-mini"
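
As a quick aside, if you'd rather not hard-code secrets in Program.cs, one option is to read them from environment variables at startup. A minimal sketch, assuming you've defined AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY, and AZURE_OPENAI_DEPLOYMENT yourself (the variable names are just examples):

// Same values as above, but pulled from environment variables instead of literals.
var endpoint = new Uri(Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!);
var apiKey = new ApiKeyCredential(Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY")!);
var deploymentName = Environment.GetEnvironmentVariable("AZURE_OPENAI_DEPLOYMENT")!;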

💬 Step 5: Initialize the AzureOpenAIClient

Next, we need to set up the communication channel between our .NET application and the Azure OpenAI service.

IChatClient client = new AzureOpenAIClient(
    endpoint,
    apiKey)
.AsChatClient(deploymentName);
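
Heads-up: the adapter extensions in Microsoft.Extensions.AI.OpenAI have been renamed across preview releases, so if .AsChatClient isn't available in the version you installed, the roughly equivalent pattern in more recent previews is to resolve the chat client for your deployment first and then adapt it to IChatClient:

// Alternative shape used by newer previews of Microsoft.Extensions.AI.OpenAI.
IChatClient client = new AzureOpenAIClient(endpoint, apiKey)
    .GetChatClient(deploymentName) // the underlying OpenAI ChatClient for your deployment
    .AsIChatClient();              // adapt it to Microsoft.Extensions.AI.IChatClient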

💬 Step 6: Sending a Request and Waiting for Response

Next, we want to send a request to our Azure OpenAI service. The app waits for the answer and stores it in a variable called "response".

var response = await client.GetResponseAsync("Write a short story about a robot who loves to learn");

Finally, we print the answer provided by the AI to the screen using the code below:

Console.WriteLine(response);
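
If you'd rather watch the reply appear piece by piece, IChatClient also offers a streaming variant. A minimal sketch, assuming the same client and package versions as above:

// Stream the reply instead of waiting for the full response.
await foreach (var update in client.GetStreamingResponseAsync(
    "Write a short story about a robot who loves to learn"))
{
    Console.Write(update); // each update carries a chunk of the generated text
}
Console.WriteLine();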

🚀 Step 7: Test your application

To test your application, run the following command in the terminal (or use Start Without Debugging if you're working in Visual Studio):

dotnet run

 

Congratulations! 🎉 You’ve successfully integrated AI capabilities into your .NET application using Microsoft.Extensions.AI. 🚀

🎒 Additional Resources

If you want to know more, the Generative AI for Beginners .NET repo will help with next steps like Functions, Agents, MCP, and more: Generative AI for Beginners .NET

We have a lot of other content to help your learning journey.

Updated May 05, 2025
Version 5.0

1 Comment

  • JamesFieldist (Copper Contributor)

    Hi, this code doesn't work with the latest betas of the Microsoft.Extensions.AI library because .AsChatClient doesn't exist (and was previously deprecated) and I can't find any code that works around this with Azure. Ollama works fine, but not Azure.