AI-900: Microsoft Azure AI Fundamentals Study Guide
This comprehensive study guide provides a thorough overview of the topics covered in the Microsoft Azure AI Fundamentals (AI-900) exam, including Artificial Intelligence workloads, fundamental principles of machine learning, and computer vision and natural language processing workloads. Learn about the exam's intended audience, how to earn the certification, and the skills measured as of April 2022. Discover the important considerations for responsible AI, the capabilities of Azure Machine Learning Studio, and more. Get ready to demonstrate your knowledge of AI and ML concepts and related Microsoft Azure services with this helpful study guide.

Introducing GenAI Gateway Capabilities in Azure API Management
We are thrilled to announce GenAI Gateway capabilities in Azure API Management, a set of features designed specifically for GenAI use cases. Azure OpenAI Service offers a diverse set of tools, providing access to advanced models such as GPT-3.5 Turbo, GPT-4, and GPT-4 Vision, enabling developers to build intelligent applications that can understand, interpret, and generate human-like text and images.

One of the main resources you have in Azure OpenAI is tokens. Azure OpenAI assigns quota for your model deployments expressed in tokens per minute (TPM), which is then distributed across your model consumers, such as different applications, developer teams, or departments within the company. Starting with a single application integration, Azure makes it easy to connect your app to Azure OpenAI: your intelligent application connects to Azure OpenAI directly using an API key, with a TPM limit configured on the model deployment level. However, as your application portfolio grows, you end up with multiple apps calling one or even several Azure OpenAI endpoints deployed as Pay-as-you-go or Provisioned Throughput Units (PTUs) instances. That comes with certain challenges:
- How can we track token usage across multiple applications?
- How can we cross-charge multiple applications or teams that use Azure OpenAI models?
- How can we make sure that a single app does not consume the whole TPM quota, leaving other apps with no option to use Azure OpenAI models?
- How can we make sure that the API key is securely distributed across multiple applications?
- How can we distribute load across multiple Azure OpenAI endpoints?
- How can we make sure that PTUs are used first before falling back to Pay-as-you-go instances?
To tackle these operational and scalability challenges, Azure API Management has built a set of GenAI Gateway capabilities:
- Azure OpenAI Token Limit Policy
- Azure OpenAI Emit Token Metric Policy
- Load Balancer and Circuit Breaker
- Import Azure OpenAI as an API
- Azure OpenAI Semantic Caching Policy (in public preview)

Azure OpenAI Token Limit Policy
The Azure OpenAI Token Limit policy allows you to manage and enforce limits per API consumer based on the usage of Azure OpenAI tokens. With this policy you can set limits expressed in tokens per minute (TPM). The policy provides the flexibility to assign token-based limits on any counter key, such as subscription key, IP address, or any other arbitrary key defined through a policy expression. The policy also enables pre-calculation of prompt tokens on the Azure API Management side, minimizing unnecessary requests to the Azure OpenAI backend if the prompt already exceeds the limit. Learn more about this policy here.
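To make the token-limit behavior concrete, here is a minimal Python sketch of a fixed-window tokens-per-minute counter keyed by an arbitrary consumer key (for example a subscription key or an IP address). This is not the API Management policy itself, which is configured declaratively on the gateway; it is only an illustration of the counting logic, and the class and variable names are invented for this example.

import time
from collections import defaultdict

class TokenLimiter:
    """Fixed-window tokens-per-minute limiter keyed by an arbitrary counter key."""

    def __init__(self, tpm_limit: int):
        self.tpm_limit = tpm_limit
        # counter key -> [minute the window started, tokens consumed in that minute]
        self.windows = defaultdict(lambda: [0, 0])

    def try_consume(self, counter_key: str, tokens: int) -> bool:
        """Return True if the request fits in this minute's budget for the key."""
        current_minute = int(time.time() // 60)
        window = self.windows[counter_key]
        if window[0] != current_minute:          # a new minute has started: reset
            window[0], window[1] = current_minute, 0
        if window[1] + tokens > self.tpm_limit:  # request would exceed the TPM budget
            return False
        window[1] += tokens
        return True

limiter = TokenLimiter(tpm_limit=10_000)
print(limiter.try_consume("subscription-key-A", 2_500))  # True
print(limiter.try_consume("subscription-key-A", 9_000))  # False: over budget this minute

In the actual gateway the same idea is applied per counter key, and prompt tokens can be estimated before the request is forwarded, which is what allows API Management to reject over-limit calls without hitting the backend.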
Azure OpenAI Emit Token Metric Policy
The Azure OpenAI Emit Token Metric policy enables you to send token usage metrics to Azure Application Insights, providing an overview of the utilization of Azure OpenAI models across multiple applications or API consumers. The policy captures prompt, completion, and total token usage metrics and sends them to the Application Insights namespace of your choice. Moreover, you can configure or select from pre-defined dimensions to split token usage metrics, enabling granular analysis by subscription ID, IP address, or any custom dimension of your choice. Learn more about this policy here.

Load Balancer and Circuit Breaker
The Load Balancer and Circuit Breaker features allow you to spread the load across multiple Azure OpenAI endpoints. With support for round-robin, weighted (new), and priority-based (new) load balancing, you can now define your own load distribution strategy according to your specific requirements. Define priorities within the load balancer configuration to ensure optimal utilization of specific Azure OpenAI endpoints, particularly those purchased as PTUs. In the event of any disruption, a circuit breaker mechanism kicks in, seamlessly transitioning to lower-priority instances based on predefined rules. Our updated circuit breaker now features dynamic trip duration, leveraging values from the Retry-After header provided by the backend. This ensures precise and timely recovery of the backends, maximizing the utilization of your priority backends. Learn more about the load balancer and circuit breaker here.

Import Azure OpenAI as an API
The new Import Azure OpenAI as an API experience in Azure API Management provides an easy, single-click way to import your existing Azure OpenAI endpoints as APIs. We streamline the onboarding process by automatically importing the OpenAPI schema for Azure OpenAI and setting up authentication to the Azure OpenAI endpoint using managed identity, removing the need for manual configuration. Additionally, within the same user-friendly experience, you can pre-configure Azure OpenAI policies, such as token limit and emit token metric, enabling swift and convenient setup. Learn more about Import Azure OpenAI as an API here.

Azure OpenAI Semantic Caching Policy
The Azure OpenAI Semantic Caching policy empowers you to optimize token usage by leveraging semantic caching, which stores completions for prompts with similar meaning. Our semantic caching mechanism leverages Azure Redis Enterprise or any other external cache that is compatible with RediSearch and onboarded to Azure API Management. By leveraging the Azure OpenAI Embeddings model, this policy identifies semantically similar prompts and stores their respective completions in the cache. This approach lets completions be reused, resulting in reduced token consumption and improved response performance. Learn more about the semantic caching policy here.
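To illustrate the semantic caching idea (a conceptual sketch only, not the API Management policy or its actual cache layout), the example below stores completions keyed by prompt embeddings and returns a cached completion when a new prompt's embedding is sufficiently similar. The similarity threshold, the embedding dimension, and the names used here are assumptions made for the example.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class SemanticCache:
    """Toy semantic cache: store (embedding, completion) pairs and reuse near matches."""

    def __init__(self, threshold: float = 0.95):
        self.threshold = threshold
        self.entries: list[tuple[np.ndarray, str]] = []

    def lookup(self, prompt_embedding: np.ndarray):
        for cached_embedding, completion in self.entries:
            if cosine_similarity(prompt_embedding, cached_embedding) >= self.threshold:
                return completion  # a semantically similar prompt was answered before
        return None

    def store(self, prompt_embedding: np.ndarray, completion: str) -> None:
        self.entries.append((prompt_embedding, completion))

# Usage: embed the prompt (for example with an Azure OpenAI embeddings deployment),
# check the cache before calling the chat model, and store the completion on a miss.
cache = SemanticCache(threshold=0.95)
embedding = np.random.rand(1536)   # placeholder for a real prompt embedding
cached = cache.lookup(embedding)
if cached is None:
    completion = "..."             # call the chat model here on a cache miss
    cache.store(embedding, completion)

With the gateway policy, this lookup happens in API Management against a Redis-compatible vector index, so application code does not need to change.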
Get Started with GenAI Gateway Capabilities in Azure API Management
We're excited to introduce these GenAI Gateway capabilities in Azure API Management, designed to empower developers to efficiently manage and scale their applications leveraging Azure OpenAI services. Get started today and bring your intelligent application development to the next level with Azure API Management.

Unleashing the Power of Model Context Protocol (MCP): A Game-Changer in AI Integration
Artificial Intelligence is evolving rapidly, and one of the most pressing challenges is enabling AI models to interact effectively with external tools, data sources, and APIs. The Model Context Protocol (MCP) solves this problem by acting as a bridge between AI models and external services, creating a standardized communication framework that enhances tool integration, accessibility, and AI reasoning capabilities.

What is Model Context Protocol (MCP)?
MCP is a protocol designed to enable AI models, such as Azure OpenAI models, to interact seamlessly with external tools and services. Think of MCP as a universal USB-C connector for AI, allowing language models to fetch information, interact with APIs, and execute tasks beyond their built-in knowledge.

Key Features of MCP
- Standardized Communication – MCP provides a structured way for AI models to interact with various tools.
- Tool Access & Expansion – AI assistants can now utilize external tools for real-time insights.
- Secure & Scalable – Enables safe and scalable integration with enterprise applications.
- Multi-Modal Integration – Supports STDIO, SSE (Server-Sent Events), and WebSocket communication methods.

MCP Architecture & How It Works
MCP follows a client-server architecture that allows AI models to interact with external tools efficiently. Here's how it works:
Components of MCP
- MCP Host – The AI model (e.g., an Azure OpenAI GPT deployment) requesting data or actions.
- MCP Client – An intermediary service that forwards the AI model's requests to MCP servers.
- MCP Server – Lightweight applications that expose specific capabilities (APIs, databases, files, etc.).
- Data Sources – Various backend systems, including local storage, cloud databases, and external APIs.
Data Flow in MCP
1. The AI model sends a request (e.g., "fetch user profile data").
2. The MCP client forwards the request to the appropriate MCP server.
3. The MCP server retrieves the required data from a database or API.
4. The response is sent back to the AI model via the MCP client.

Integrating MCP with Azure OpenAI Services
Microsoft has integrated MCP with Azure OpenAI Services, allowing GPT models to interact with external services and fetch live data. This means AI models are no longer limited to static knowledge but can access real-time information.
Benefits of Azure OpenAI Services + MCP Integration
✔ Real-time Data Fetching – AI assistants can retrieve fresh information from APIs, databases, and internal systems.
✔ Contextual AI Responses – Enhances AI responses by providing accurate, up-to-date information.
✔ Enterprise-Ready – Secure and scalable for business applications, including finance, healthcare, and retail.

Hands-On Tools for MCP Implementation
To implement MCP effectively, Microsoft provides two powerful tools: Semantic Workbench and AI Gateway.
Microsoft Semantic Workbench
A development environment for prototyping AI-powered assistants and integrating MCP-based functionalities. Features:
- Build and test multi-agent AI assistants.
- Configure settings and interactions between AI models and external tools.
- Supports GitHub Codespaces for cloud-based development.
Explore Semantic Workbench
Workbench interface examples
Microsoft AI Gateway
A plug-and-play interface that allows developers to experiment with MCP using Azure API Management. Features:
- Credential Manager – Securely handle API credentials.
- Live Experimentation – Test AI model interactions with external tools.
- Pre-built Labs – Hands-on learning for developers.
Explore AI Gateway
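Before moving to the hands-on setup, it may help to see roughly what travels between the MCP client and server in the data flow described above. The following is an illustrative JSON-RPC tool-call exchange written as Python dictionaries; treat the exact field names as an approximation and verify them against the current MCP specification.

# Illustrative shape of an MCP tool invocation over JSON-RPC.
tool_call_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "add",                      # a tool exposed by the MCP server
        "arguments": {"a": 4, "b": 6},      # arguments matching the tool's input schema
    },
}

tool_call_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "10"}],  # tool output returned to the model
    },
}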
Setting Up MCP with Azure OpenAI Services
Step 1: Create a Virtual Environment
First, create a virtual environment using Python:

python -m venv .venv

Activate the environment:

# Windows
.venv\Scripts\activate
# macOS/Linux
source .venv/bin/activate

Step 2: Install Required Libraries
Create a requirements.txt file and add the following dependencies:

langchain-mcp-adapters
langgraph
langchain-openai

Then, install the required libraries:

pip install -r requirements.txt

Step 3: Set Up the OpenAI API Key
Ensure you have your OpenAI API key set up:

# Windows
setx OPENAI_API_KEY "<your_api_key>"
# macOS/Linux
export OPENAI_API_KEY=<your_api_key>

Building an MCP Server
This server performs basic mathematical operations like addition and multiplication.
Create the Server File
First, create a new Python file:

touch math_server.py

Then, implement the server:

from mcp.server.fastmcp import FastMCP

# Initialize the server
mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

if __name__ == "__main__":
    mcp.run(transport="stdio")

Your MCP server is now ready to run.

Building an MCP Client
This client connects to the MCP server and interacts with it.
Create the Client File
First, create a new file:

touch client.py

Then, implement the client:

import asyncio

from langchain_mcp_adapters.tools import load_mcp_tools
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Define server parameters: the client launches math_server.py over stdio
server_params = StdioServerParameters(
    command="python",
    args=["math_server.py"],
)

# Define the model
model = ChatOpenAI(model="gpt-4o")

async def run_agent():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Expose the MCP server's tools to the LangGraph agent
            tools = await load_mcp_tools(session)
            agent = create_react_agent(model, tools)
            agent_response = await agent.ainvoke({"messages": "what's (4 + 6) x 14?"})
            # Return the agent's final message
            return agent_response["messages"][-1].content

if __name__ == "__main__":
    result = asyncio.run(run_agent())
    print(result)

Your client is now set up and ready to interact with the MCP server.

Running the MCP Server and Client
Step 1: Start the MCP Server
Open a terminal and run:

python math_server.py

This starts the MCP server, making it available for client connections. (Note that with the stdio transport, the client in this example also launches its own server process, so this step mainly confirms the server starts without errors.)
Step 2: Run the MCP Client
In another terminal, run:

python client.py

Expected output:

140

This means the AI agent correctly computed (4 + 6) x 14 using both the MCP server and GPT-4o.

Conclusion
Integrating MCP with Azure OpenAI Services enables AI applications to securely interact with external tools, enhancing functionality beyond text-based responses. With standardized communication and improved AI capabilities, developers can build smarter and more interactive AI-powered solutions. By following this guide, you can set up an MCP server and client, unlocking the full potential of AI with structured external interactions.
Next Steps:
- Explore more MCP tools and integrations.
- Extend your MCP setup to work with additional APIs.
- Deploy your solution in a cloud environment for broader accessibility.
For further details, visit the GitHub repository for MCP integration examples and best practices.
- MCP GitHub Repository
- MCP Documentation
- Semantic Workbench
- AI Gateway
- MCP Video Walkthrough
- MCP Blog
- MCP GitHub End-to-End Demo

Azure Cognitive Search AMA: Vector search, Azure OpenAI Service, generative apps, plugins & more
In this session we'll answer questions about the emerging Retrieval-Augmented Generation (RAG) pattern and how you can use Azure OpenAI Service and Azure Cognitive Search to implement it today in your applications to power ChatGPT-like experiences, generative scenarios, and more. Bring your questions about vector search in Azure Cognitive Search, which is coming to public preview soon, as well as about implementation details, data preparation, integration with large language models, and anything else related to Azure Cognitive Search. An AMA is a live text-based online event similar to an "Ask Me Anything" on Reddit. This AMA gives you the opportunity to connect with Microsoft product experts who will be on hand to answer your questions and listen to feedback. Feel free to post your questions in the comments below ahead of time if that fits your schedule or time zone better, though questions will not be answered until the live hour.

In Preview: Bringing generative AI to Azure AI Health Bot
In the current era of large language models (LLMs), there is a growing demand for AI in healthcare. Healthcare organizations are actively exploring ways to leverage these advanced technologies to develop their own GPT-powered copilot experiences for doctors or virtual health assistants for patients. It's important to understand that healthcare organizations will only use these tools when they adhere to the highest security and compliance standards required for healthcare purposes. The escalating demand is driven by the recognition that AI systems can significantly enhance healthcare experiences by assisting with many different tasks, such as administrative or clinical work. The goal is to build intelligent and engaging chat experiences that utilize generative AI, providing accurate, relevant, and consistent information based on credible healthcare information or validated customer sources.

Azure OpenAI Service is now generally available
Early this week Satya Nadella, Microsoft CEO, and Eric Boyd, Corporate Vice President for AI, announced that Azure OpenAI Service is generally available, and that it will soon include ChatGPT, the fine-tuned version of GPT-3.5 built on Azure AI infrastructure that has gone viral in recent weeks. But let's take a step back. What is Azure OpenAI? And how can you get started?
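As a quick answer to the getting-started question, the following is a minimal sketch (not from the original post) of calling an Azure OpenAI chat deployment with the openai Python SDK. The endpoint, deployment name, and API version are placeholders you would replace with your own resource's values.

import os

from openai import AzureOpenAI  # openai >= 1.x

# Placeholder values: use your own resource endpoint, key, deployment, and API version.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # the name of your deployment, not the model family
    messages=[{"role": "user", "content": "Say hello from Azure OpenAI."}],
)
print(response.choices[0].message.content)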
Scaling PostgreSQL at OpenAI: Lessons in Reliability, Efficiency, and Innovation
At POSETTE: An Event for Postgres 2025, Bohan Zhang of OpenAI delivered a compelling talk on how OpenAI has scaled Azure Database for PostgreSQL - Flexible Server to meet the demands of one of the world's most advanced AI platforms running at planetary scale. The Postgres team at Microsoft has partnered deeply with OpenAI for years to enhance the service to meet their performance, scale, and availability requirements, and it is great to see how OpenAI is now deploying and depending on Flexible Server as a core component of ChatGPT. Hearing firsthand about their challenges and breakthroughs is a reminder of what's possible when innovation meets real-world needs. This blog post captures the key insights from Bohan's POSETTE talk, paired with how Azure's cloud platform supports innovation at scale.

PostgreSQL at the Heart of OpenAI
As Bohan shared during his talk, PostgreSQL is the backbone of OpenAI's most critical systems. Because PostgreSQL plays a critical role in powering services like ChatGPT, OpenAI has prioritized making it more resilient and scalable to avoid any disruptions. That's why OpenAI has invested deeply in optimizing PostgreSQL for reliability and scale.

Why Azure Database for PostgreSQL?
OpenAI has long operated PostgreSQL on Azure, initially using a single primary instance without sharding. This architecture worked well until write scalability limits emerged. Azure's managed PostgreSQL service provides the flexibility to scale read replicas, optimize performance, and maintain high availability, delivering low-latency reads globally without the burden of managing infrastructure. This is why we designed Azure Database for PostgreSQL to support precisely these kinds of high-scale, mission-critical workloads, and OpenAI's use case is a powerful validation of that vision.

Tackling Write Bottlenecks
PostgreSQL's MVCC (Multi-Version Concurrency Control) design presents challenges for write-heavy workloads, such as index bloat, autovacuum tuning complexity, and version churn. OpenAI addressed this by:
- Reducing unnecessary writes at the application level
- Using lazy writes and controlled backfills to smooth spikes
- Migrating extremely write-heavy workloads with natural sharding keys to other systems
These strategies allowed OpenAI to preserve PostgreSQL's strengths while mitigating its limitations.

Optimizing Read-Heavy Workloads
With writes offloaded, OpenAI focused on scaling read-heavy workloads. Key optimizations included:
- Offloading read queries to replicas
- Avoiding long-running queries and expensive multi-way joins
- Using PgBouncer for connection pooling, reducing latency from 50 ms to under 5 ms
- Categorizing requests by priority and assigning dedicated read replicas to high-priority traffic
As Bohan noted, "After all the optimization we did, we are super happy with Postgres right now for our read-heavy workloads."

Schema Governance and Resilience
OpenAI also implemented strict schema governance to avoid full table rewrites and production disruptions. Only lightweight schema changes are allowed, and long-running queries are monitored to prevent them from blocking migrations. To ensure resilience, OpenAI categorized requests by priority and implemented multi-level rate limiting at the application, connection, and query-digest levels. This helped prevent resource exhaustion and service degradation.
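To illustrate the read-offloading pattern described above (a generic sketch, not OpenAI's actual implementation), the example below routes read-only queries to a replica and writes to the primary using psycopg. The connection strings, table, and column names are placeholders.

import psycopg

# Placeholder connection strings for an Azure Database for PostgreSQL flexible server.
PRIMARY_DSN = "host=primary.example.postgres.database.azure.com dbname=app user=app password=..."
REPLICA_DSN = "host=replica.example.postgres.database.azure.com dbname=app user=app password=..."

def run_query(sql: str, params=(), *, readonly: bool = False):
    """Route read-only statements to a read replica and everything else to the primary."""
    dsn = REPLICA_DSN if readonly else PRIMARY_DSN
    with psycopg.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(sql, params)
            if readonly:
                return cur.fetchall()
            conn.commit()
            return None

# Reads go to the replica; writes go to the primary.
rows = run_query("SELECT id, status FROM jobs WHERE status = %s", ("pending",), readonly=True)
run_query("UPDATE jobs SET status = %s WHERE id = %s", ("done", 42))

In practice, the same split is usually combined with a pooler such as PgBouncer in front of each endpoint rather than opening a new connection per query.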
Takeaway
OpenAI's journey is a masterclass in how to operate PostgreSQL at hyper-scale. By offloading writes, scaling read replicas, and enforcing strict schema governance, OpenAI demonstrated that PostgreSQL on Azure meets the demands of cutting-edge AI systems. It also reinforces the value of Azure's managed database services in enabling teams to focus on innovation rather than infrastructure. We're proud of the work we've done to co-innovate with OpenAI and excited to see how other organizations can apply these lessons to their own PostgreSQL deployments. Check out the on-demand talk "Scaling Postgres to the next level at OpenAI" and many more PostgreSQL community sessions from POSETTE.

Expose REST APIs as MCP servers with Azure API Management and API Center (now in preview)
As AI-powered agents and large language models (LLMs) become central to modern application experiences, developers and enterprises need seamless, secure ways to connect these models to real-world data and capabilities. Today, we're excited to introduce two powerful preview capabilities in the Azure API Management Platform:
- Expose REST APIs in Azure API Management as remote Model Context Protocol (MCP) servers
- Discover and manage MCP servers using API Center as a centralized enterprise registry
Together, these updates help customers securely operationalize APIs for AI workloads and improve how APIs are managed and shared across organizations.

Unlocking the value of AI through secure API integration
While LLMs are incredibly capable, they are stateless and isolated unless connected to external tools and systems. Model Context Protocol (MCP) is an open standard designed to bridge this gap by allowing agents to invoke tools, such as APIs, via a standardized, JSON-RPC-based interface. With this release, Azure empowers you to operationalize your APIs for AI integration: securely, observably, and at scale.

1. Expose REST APIs as MCP servers with Azure API Management
An MCP server exposes selected API operations to AI clients over JSON-RPC via HTTP or Server-Sent Events (SSE). These operations, referred to as "tools," can be invoked by AI agents through natural language prompts. With this new capability, you can expose your existing REST APIs in Azure API Management as MCP servers, without rebuilding or rehosting them.
Addressing common challenges
Before this capability, customers faced several challenges when implementing MCP support:
- Duplicating development efforts: Building MCP servers from scratch often led to unnecessary work when existing REST APIs already provided much of the needed functionality.
- Security concerns: server trust (malicious servers could impersonate trusted ones) and credential management (self-hosted MCP implementations often had to manage sensitive credentials like OAuth tokens).
- Registry and discovery: Without a centralized registry, discovering and managing MCP tools was manual and fragmented, making it hard to scale securely across teams.
API Management now addresses these concerns by serving as a managed, policy-enforced hosting surface for MCP tools, offering centralized control, observability, and security.
Benefits of using Azure API Management with MCP
By exposing MCP servers through Azure API Management, customers gain:
- Centralized governance for API access, authentication, and usage policies
- Secure connectivity using OAuth 2.0 and subscription keys
- Granular control over which API operations are exposed to AI agents as tools
- Built-in observability through APIM's monitoring and diagnostics features
How it works
1. MCP servers: In your API Management instance, navigate to MCP servers.
2. Choose an API: Select + Create a new MCP server and choose the REST API you wish to expose.
3. Configure the MCP server: Select the API operations you want to expose as tools. These can be all or a subset of your API's methods.
4. Test and integrate: Use tools like MCP Inspector or Visual Studio Code (in agent mode) to connect, test, and invoke the tools from your AI host.
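Once an API is exposed this way, an MCP client can connect to the server's SSE endpoint. The sketch below uses the Python MCP SDK's SSE client to list the exposed tools; the gateway URL and the subscription-key header are placeholders, and the exact client API should be confirmed against the current SDK documentation.

import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

# Placeholder values: your APIM-hosted MCP server endpoint and subscription key.
MCP_SSE_URL = "https://<your-apim-instance>.azure-api.net/<your-mcp-server>/sse"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-subscription-key>"}

async def list_tools():
    # Open the SSE transport to the gateway, then run the MCP handshake.
    async with sse_client(MCP_SSE_URL, headers=HEADERS) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

if __name__ == "__main__":
    asyncio.run(list_tools())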
Getting started and availability
This feature is now in public preview and is being gradually rolled out to early access customers. To use the MCP server capability in Azure API Management:
Prerequisites
- Your APIM instance must be on a SKUv1 tier: Premium, Standard, or Basic.
- Your service must be enrolled in the AI Gateway early update group (activation may take up to 2 hours).
- Use the Azure portal with a feature flag: append ?Microsoft_Azure_ApiManagement=mcp to your portal URL to access the MCP server configuration experience.
Note: Support for SKUv2 and broader availability will follow in upcoming updates. Full setup instructions and test guidance can be found via aka.ms/apimdocs/exportmcp.

2. Centralized MCP registry and discovery with Azure API Center
As enterprises adopt MCP servers at scale, the need for a centralized, governed registry becomes critical. Azure API Center now provides this capability, serving as a single, enterprise-grade system of record for managing MCP endpoints. With API Center, teams can:
- Maintain a comprehensive inventory of MCP servers.
- Track version history, ownership, and metadata.
- Enforce governance policies across environments.
- Simplify compliance and reduce operational overhead.
API Center also addresses enterprise-grade security by allowing administrators to define who can discover, access, and consume specific MCP servers, ensuring only authorized users can interact with sensitive tools. To support developer adoption, API Center includes:
- Semantic search and a modern discovery UI.
- Easy filtering based on capabilities, metadata, and usage context.
- Tight integration with Copilot Studio and GitHub Copilot, enabling developers to use MCP tools directly within their coding workflows.
These capabilities reduce duplication, streamline workflows, and help teams securely scale MCP usage across the organization.
Getting started
This feature is now in preview and accessible to customers:
- https://aka.ms/apicenter/docs/mcp
- AI Gateway Lab | MCP Registry

3. What's next
These new previews are just the beginning. We're already working on:
- Azure API Management (APIM): passthrough MCP server support. We're enabling APIM to act as a transparent proxy between your APIs and AI agents, with no custom server logic needed. This will simplify onboarding and reduce operational overhead.
- Azure API Center (APIC): deeper integration with Copilot Studio and VS Code. Today, developers must perform manual steps to surface API Center data in Copilot workflows. We're working to make this experience more visual and seamless, allowing developers to discover and consume MCP servers directly from familiar tools like VS Code and Copilot Studio.
For questions or feedback, reach out to your Microsoft account team or visit:
- Azure API Management documentation
- Azure API Center documentation
The Azure API Management & API Center Teams