api management
Build. Secure. Launch Your Private MCP Registry with Azure API Center.
We are thrilled to embrace a new era in the world of MCP registries. As organizations increasingly build and consume MCP servers, the need for a secure, governed, robust, and easily discoverable tools catalog has become critical. Today, we are excited to show you how to meet that need with MCP Center, a live example demonstrating how Azure API Center (APIC) can serve as a private, enterprise-ready MCP registry. The registry puts your MCP servers just one click away for developers, with no setup fuss and a direct path to coding brilliance.

Why a private registry? 🤔

Public OSS registries have been instrumental in driving growth and innovation across the MCP ecosystem. But as adoption scales, so does the need for tighter security, governance, and control. This is where private MCP registries step in, and where Azure API Center comes in: it offers a powerful, centralized approach to MCP discovery and governance across diverse teams and services within an organization. Let's delve into the key benefits of leveraging a private MCP registry with Azure API Center.

Security and Trust: The Foundation of AI Adoption

Review and Verification: Public registries, by their open nature, accept submissions from a wide range of developers. This can introduce risks from tools with limited security practices or even malicious intent. A private registry empowers your organization to thoroughly review and verify every MCP server before it becomes accessible to internal developers or AI agents (like Copilot Studio and AI Foundry). This eliminates the risk of introducing random, potentially vulnerable first- or third-party tools into your ecosystem.

Reduced Attack Surface: By controlling which MCP servers are accessible, organizations significantly shrink their potential attack surface. When your AI agents interact solely with known and secure internal tools, the likelihood of external attackers exploiting vulnerabilities in unvetted solutions is drastically reduced.

Enterprise-Grade Authentication and Authorization: Private registries enable the enforcement of your existing robust enterprise authentication and authorization mechanisms (e.g., OAuth 2.0) across all MCP servers. Public registries, in contrast, may have varying or less stringent authentication requirements.

Enforced AI Gateway Control (Azure API Management): Beyond vetting, a private registry enables organizations to route all MCP server traffic through an AI gateway such as Azure API Management. This ensures that every interaction, whether internal or external, adheres to strict security policies, including centralized authentication, authorization, rate limiting, and threat protection, creating a secure front for your AI services.

Governance and Control: Navigating the AI Landscape with Confidence

Centralized Oversight and "Single Source of Truth": A private registry provides a centralized "single source of truth" for all AI-related tools and data connections within your organization. This empowers comprehensive oversight of AI initiatives, clearly identifying ownership and accountability for each MCP server.

Preventing "Shadow AI": Without a formal registry, individual teams might independently develop or integrate AI tools, leading to "shadow AI": unmanaged and unmonitored AI deployments that can pose significant risks. A private registry encourages a standardized approach, bringing all AI tools under central governance and visibility.
Tailored Tool Development: Organizations can develop and host MCP servers specifically tailored to their unique needs and requirements. This means optimized efficiency and utility, providing specialized tools you won't typically find in broader public registries.

Simplified Integration and Accelerated Development: A well-managed private registry simplifies the discovery and integration of internal tools for your AI developers. This significantly accelerates the development and deployment of AI-powered applications, fostering innovation.

Good news! Azure API Center can be created for free in any Azure subscription. You can find a detailed guide to help you get started: Inventory and Discover MCP Servers in Your API Center - Azure API Center.

Get involved 💡

Your remote MCP server can be discoverable on API Center's MCP Discovery page today! Bring your MCP server and reach Azure customers! These Microsoft partners are shaping the future of the MCP ecosystem by making their remote MCP servers discoverable via API Center's MCP Discovery page.

Early Partners:
Atlassian – Connect to Jira and Confluence for issue tracking and documentation
Box – Use Box to securely store, manage, and share your photos, videos, and documents in the cloud
Neon – Manage and query Neon Postgres databases with natural language
Pipedream – Add 1000s of APIs with built-in authentication and 10,000+ tools to your AI assistant or agent (coming soon)
Stripe – Payment processing and financial infrastructure tools

If partners would like their remote MCP servers to be featured in our Discover panel, reach out to us here: GitHub/mcp-center and comment under the following GitHub issue: MCP Server Onboarding Request.

Ready to Get Started? 🚀

Modernize your AI strategy and empower your teams with enhanced discovery, security, and governance of agentic tools. Now's the time to explore creating your own private enterprise MCP registry. Check out MCP Center, a public showcase demonstrating how you can build your own enterprise MCP registry (MCP Center - Build Your Own Enterprise MCP Registry), or go ahead and create your Azure API Center today.
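If you prefer the command line, one quick way to stand up an API Center instance is the Azure CLI. This is a minimal sketch only, assuming the apic-extension CLI extension is available; the resource group, instance name, and region are placeholders, and the quickstart guide linked above remains the authoritative walkthrough.

```bash
# Sketch: create a resource group and a free API Center instance.
# Names and location are placeholders; check the linked guide for current syntax.
az extension add --name apic-extension            # API Center CLI extension
az group create --name mcp-registry-rg --location eastus
az apic create --resource-group mcp-registry-rg --name my-mcp-registry --location eastus
```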
Not able to setup azure private endpoint url as webservice/backend for Azure API Management service

Hi all,

I have a private endpoint connected to a Private Link service. The Private Link service is created by an Azure Standard Load Balancer, which in turn is created by a Kubernetes LoadBalancer service using the annotations below:

annotations:
  service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  service.beta.kubernetes.io/azure-pls-create: "true"
  service.beta.kubernetes.io/azure-pls-name: myPLS
  service.beta.kubernetes.io/azure-pls-ip-configuration-subnet: YOUR SUBNET
  service.beta.kubernetes.io/azure-pls-ip-configuration-ip-address-count: "1"
  service.beta.kubernetes.io/azure-pls-ip-configuration-ip-address: SUBNET_IP
  service.beta.kubernetes.io/azure-pls-proxy-protocol: "false"
  service.beta.kubernetes.io/azure-pls-visibility: "*" # does not apply here because we will use Front Door later
  service.beta.kubernetes.io/azure-pls-auto-approval: "YOUR SUBSCRIPTION ID"

I am getting the expected response (i.e., the response from the Kubernetes service) when calling the private endpoint IP, which confirms that the Private Link and private endpoint integration is working fine.

We now want to integrate the above private endpoint with Azure API Management, so we tried adding the private endpoint URL as the web service URL for the API, but API Management returns a 500 error:

{ "statusCode": 500, "message": "Internal server error", "activityId": "76261291-7121-4814-b0e4-66b52284d76c" }

I also checked the API Management Troubleshoot & analysis page for the exact error, and it shows:

BackendConnectionFailure: An attempt was made to access a socket in a way forbidden by its access permissions <private_endpoint_url>:80

Please help me understand what I am doing wrong in this implementation. Our requirement is to have a private Kubernetes load balancer and integrate it with Azure API Management, so that users can access the API only through API Management, and only API Management can reach the load balancer service.

Thanks in advance
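For readers following along, here is a minimal sketch of the Kubernetes Service manifest those annotations would sit on. The service name, selector, and ports are illustrative and are not taken from the original post; the annotation values come from the question above.

```yaml
# Sketch: internal LoadBalancer Service that also creates the Private Link service.
# Name, selector, and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-api
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-pls-create: "true"
    service.beta.kubernetes.io/azure-pls-name: myPLS
    service.beta.kubernetes.io/azure-pls-ip-configuration-subnet: "YOUR SUBNET"
    service.beta.kubernetes.io/azure-pls-ip-configuration-ip-address-count: "1"
    service.beta.kubernetes.io/azure-pls-ip-configuration-ip-address: "SUBNET_IP"
    service.beta.kubernetes.io/azure-pls-proxy-protocol: "false"
    service.beta.kubernetes.io/azure-pls-visibility: "*"
    service.beta.kubernetes.io/azure-pls-auto-approval: "YOUR SUBSCRIPTION ID"
spec:
  type: LoadBalancer
  selector:
    app: my-api        # placeholder selector
  ports:
    - port: 80         # port that API Management would call
      targetPort: 8080 # placeholder container port
```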
How to update the proxyAddresses of a Cloud-only Entra ID user

I currently have a client with an Entra ID user (not migrated from on-premises) that is cloud-based but has proxyAddresses values assigned. I want to update the proxyAddresses through Graph Explorer and have used this link as a guide: https://learn.microsoft.com/en-us/answers/questions/2280046/entra-connect-sync-blocking-user-creation-due-to-h.

The guide suggests you can use the beta model and this URL format:

https://graph.microsoft.com/beta/users/%USERGUID%

It states you can use that URL for both 'GET' and 'PATCH' queries, the PATCH query being the one that changes the settings. You have to supply a body for the proxyAddresses property in the PATCH query, which represents all of the addresses you want the user to utilise as proxy addresses.

The GET query works... The PATCH query does not... Screenshot provided:

Regarding the error message, I have applied ALL possible permissions in the 'Modify Permissions' tab, and it is still erroring. I cannot use Exchange Online PowerShell, as the user does not have a mailbox!

Aside from potentially using a license for Exchange Online or provisioning a mailbox for the user, and making the necessary changes, would the only other option be to delete/recreate the user?
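For context, the PATCH request described in this question would look roughly like the sketch below in Graph Explorer. The addresses are illustrative only, and because a PATCH on this property replaces the whole collection, the body has to list every address the user should keep. This shows the request shape being attempted, not a confirmed fix for the error.

```http
PATCH https://graph.microsoft.com/beta/users/{user-object-id}
Content-Type: application/json

{
  "proxyAddresses": [
    "SMTP:primary.address@contoso.com",
    "smtp:secondary.alias@contoso.com"
  ]
}
```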
Announcing the Public Preview of the Applications feature in Azure API management

API Management now supports built-in OAuth 2.0 application-based access to product APIs using the client credentials flow. This feature allows API managers to register Microsoft Entra ID applications, streamlining secure API access for developers through OAuth 2.0 authorization. API publishers and developers can now more effectively manage client identity, access, and authorization flows.

With this feature:
API managers can identify which products require OAuth authorization by setting a product property to enable application-based access.
API managers can create and manage client applications and assign them access to specific products.
Developers can see their registered applications in the API Management developer portal and use OAuth tokens to securely call APIs and products.
OAuth tokens presented in API requests are validated by the API Management gateway to authorize access to the product's APIs.

This feature simplifies identity and access management in API programs, enabling a more secure and scalable approach to API consumption.

Enable OAuth authorization

API managers can now identify specific products that are protected by Microsoft Entra identity by enabling "Application based access". This ensures that only valid client applications that hold a secure OAuth token from Microsoft Entra identity can access the APIs associated with this product. An application is created in Microsoft Entra corresponding to the product, with an appropriate app role.

Register client applications and assign products

API managers can register client applications, identify specific developers as owners of these applications, and assign products to these applications. This creates a new application in Microsoft Entra and assigns API permissions to access the product.

Securely access the API using client applications

Developers can log in to the API Management developer portal and see the applications assigned to them. They can retrieve the application credentials, call Microsoft Entra to get an OAuth token, and use this token to call the APIM gateway and securely access the product/API.

Preview limitations

The public preview of Applications is a limited-access feature. To participate in the preview and enable Applications in your APIM service instance, you must complete a request form. The Azure API Management team will review your request and respond via email within five business days.

Learn more

Securely access product APIs with Microsoft Entra applications
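As a rough illustration of the developer flow described above, the calls might look like the sketch below. The tenant ID, client ID, secret, scope, and gateway URL are all placeholders, and the exact scope value depends on how the product's Entra application is exposed.

```bash
# Sketch only: acquire a token with the client credentials flow, then call the product API.
# All IDs, secrets, and URLs below are placeholders.

# 1. Request an access token from Microsoft Entra ID.
curl -X POST "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=<application-client-id>" \
  -d "client_secret=<client-secret>" \
  -d "scope=api://<product-application-id>/.default"

# 2. Call the product API through the API Management gateway with the returned token.
curl "https://<your-apim-instance>.azure-api.net/<api-path>" \
  -H "Authorization: Bearer <access-token>"
```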
Enforce or Audit Policy Inheritance in API Management

We’re excited to announce a new Azure Policy definition that lets you enforce or audit policy inheritance in Azure API Management. With this capability, platform and governance teams can ensure that API Management policies are always inherited across all policy scopes — operations, APIs, products, and workspaces — strengthening consistency, compliance, and security across your API estate.

Why this matters

In Azure API Management, the <base /> policy element plays a critical role: it ensures that a runtime policy inherits policies defined at a higher scope, such as product, workspace, or all APIs (global). Without <base />, developers can inadvertently (or intentionally) bypass important platform rules, for example:
Security controls like authentication or IP restrictions
Operational requirements such as logging, tracing, or rate-limiting
Business policies such as quota enforcement

The result can be inconsistent behavior, compliance drift, and gaps in governance.

How the new policy helps

With the new Azure Policy definition, you can automatically ensure that <base /> is located at the start of each API Management policy section — <inbound>, <outbound>, <backend>, and <on-error> — across policies configured on operations, APIs, products, and workspaces. You can set the effect parameter to:
Audit: Identify operation, API, product, or workspace policies where <base /> is missing.
Deny: Prevent deployment of policies that do not include <base />.

Get started

To enable this new Azure Policy definition:
Navigate to Azure Policy in the Azure portal.
Select “Definitions” from the menu and choose “API Management policies should inherit parent scope policies using <base />”.
In the policy definition view, select “Assign”.
Configure the policy assignment scope, parameter (audit or deny), and other details.

View built-in Azure Policy definitions for API Management.
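To make the requirement concrete, here is a minimal example of an API-scope policy document that would satisfy the audit, with <base /> first in every section. The rate-limit policy is only an illustrative API-level addition and is not part of the Azure Policy definition itself.

```xml
<policies>
    <inbound>
        <!-- Inherit product/workspace/global policies before any API-specific logic -->
        <base />
        <rate-limit calls="100" renewal-period="60" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>
```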
Update To API Management Workspaces Breaking Changes: Built-in Gateway & Tiers Support

What’s changing?

If your API Management service uses preview workspaces on the built-in gateway and meets the tier-based limits below, those workspaces will continue to function as-is and will automatically transition to general availability once built-in gateway support is fully announced.

API Management tier | Limit of workspaces on built-in gateway
Premium and Premium v2 | Up to 30 workspaces
Standard and Standard v2 | Up to 5 workspaces
Basic and Basic v2 | Up to 1 workspace
Developer | Up to 1 workspace

Why this change?

We introduced the requirement for workspace gateways to improve reliability and scalability in large, federated API environments. While we continue to recommend workspace gateways, especially for scenarios that require greater scalability, isolation, and long-term flexibility, we understand that many customers have established workflows using the preview workspaces model or need workspaces support in non-Premium tiers.

What’s not changing?

Other aspects of the workspace-related breaking changes remain in effect. For example, service-level managed identities are not available within workspaces. In addition to workspaces support on the built-in gateway described in the section above, Premium and Premium v2 services will continue to support deploying workspaces with workspace gateways.

Resources
Workspaces in Azure API Management
Original breaking changes announcements: Reduced tier availability; Requirement for workspace gateways
Building Secure, Multi-User AI Workflows with the Responses API

With the recent GA (General Availability) of the Responses API, developers and enterprises now have access to a production-ready service purpose-built for stateful, multi-turn, tool-using AI agents. This milestone means you can confidently integrate the Responses API into real-world applications, knowing it’s fully supported, scalable, and designed for enterprise-grade use cases. Unlike traditional stateless APIs like Chat Completions, the Responses API maintains conversation history, supports tool orchestration, and enables multi-modal interactions. It’s ideal for building intelligent agents that need to remember context, call external tools, and interact with users over time.

The Challenge: Securing AI Responses in Multi-User Environments

As AI becomes more deeply embedded in enterprise apps, a new challenge emerges: response leakage. In multi-user environments, any user with a response ID could potentially access content they didn’t create—posing serious risks to privacy, data ownership, and compliance. By default, the Responses API allows retrieval of any response if you have the response ID. While this is convenient for prototyping, it’s not secure for production. There’s no built-in mechanism to verify who is making the request or whether they’re authorized to access that response.

In this lab, I set out to solve that problem using Azure API Management (APIM). The goal? To ensure that only the user who created a response can retrieve or add to it, even if someone else has the response ID. This is especially important in scenarios where AI-generated content may include sensitive or proprietary information.

The Problem: Response IDs Aren’t Enough

The default behavior of the Responses API is simple: if you have a response ID, you can fetch the response. That’s convenient, but it’s also risky. There’s no built-in check to verify who is making the request. The Responses API is designed to be stateful, combining capabilities from chat completions and assistants into a unified experience. It’s powerful—but without additional safeguards, it can expose sensitive content to unintended users. This lab introduces a way to wrap the Responses API with APIM policies that enforce user-level access control. It’s a lightweight but powerful approach to securing AI-generated content.

The Solution: APIM as a Gatekeeper

Here’s how it works:
A user sends a request to retrieve or update a response.
APIM intercepts the request and extracts the user ID, either from the authentication token or, for testing purposes, from a custom header.
APIM compares the user ID with the one associated with the response.
If they match, the request proceeds. If not, it’s blocked.

This ensures that only the original creator of a response can access or modify it.

What’s in the Lab

The lab in the AI Gateway repo includes:
A sample API that mimics AI-generated responses.
APIM policies that enforce user-level access.
A test harness that lets you simulate requests with different user IDs.
Header-based user ID injection for easier testing (ideal for labs and demos).

This setup gives you a repeatable pattern for securing AI responses in production environments.

Sample APIM Policy Snippet

Here’s a simplified version of the APIM inbound policy that enforces user-level access. This policy checks the x-user-id header against the stored owner ID of the response. If they don’t match, the request is blocked with a 403 error.
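The snippet itself did not survive on this page, so here is a minimal sketch of what such an inbound policy could look like. How the owner ID is stored and looked up (a cache entry keyed by a {responseId} template parameter here) is an assumption for illustration; the lab's actual implementation may differ.

```xml
<inbound>
    <base />
    <!-- Illustrative lookup: fetch the owner recorded for this response ID from the APIM cache
         (assumes the operation URL template exposes a {responseId} parameter) -->
    <cache-lookup-value key='@("owner-" + context.Request.MatchedParameters["responseId"])' variable-name="ownerId" />
    <choose>
        <!-- Block the call when the caller's x-user-id does not match the stored owner -->
        <when condition='@(context.Request.Headers.GetValueOrDefault("x-user-id", "") != (string)context.Variables.GetValueOrDefault("ownerId", ""))'>
            <return-response>
                <set-status code="403" reason="Forbidden" />
                <set-body>{"error": "Only the user who created this response can access it."}</set-body>
            </return-response>
        </when>
    </choose>
</inbound>
```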
In a production scenario, you would want to use something stronger than a plain user ID header; for that, I might suggest taking the user ID from the authentication token (a sketch of that approach appears at the end of this post).

Why This Matters

As AI becomes more embedded in our apps, we need to think beyond just securing the model—we need to secure the responses too. This lab shows how APIM can be used to:
Enforce ownership of AI-generated content.
Prevent unauthorized access to sensitive responses.
Build trust into your AI workflows.

Final Thoughts

This lab is a great starting point for anyone building AI APIs in a multi-user environment. It’s simple, effective, and leverages tools you already know—like APIM. If you’re interested in extending this to token validation, role-based access, or integrating with Entra ID, let’s talk. I’d love to hear how you’re securing your AI stack.
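Following up on the note above about preferring the identity in the token over a client-supplied header, here is a hedged sketch of what that could look like in the inbound section. The tenant ID, audience, and choice of claim are placeholders that depend on your Entra app registration; this is not the lab's implementation.

```xml
<inbound>
    <base />
    <!-- Validate the bearer token before trusting anything in the request -->
    <validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized">
        <openid-config url="https://login.microsoftonline.com/{tenant-id}/v2.0/.well-known/openid-configuration" />
        <audiences>
            <audience>api://{your-api-app-id}</audience>
        </audiences>
    </validate-jwt>
    <!-- Derive the caller identity from the token's subject claim instead of an x-user-id header -->
    <set-variable name="userId" value='@(context.Request.Headers.GetValueOrDefault("Authorization", "").Replace("Bearer ", "").AsJwt()?.Subject ?? "")' />
</inbound>
```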
Announcing the availability of TLS 1.3 in Azure API Management in Preview

TLS 1.3 is the latest version of the internet’s most widely deployed security protocol, which encrypts data to provide a secure communication channel between two endpoints. TLS 1.3 support in Azure API Management is planned to roll out during the first week of February 2024. The rollout will happen in stages, which means some regions will get it first as we roll out globally.
Fetch Email of Login User In System Context

Dear Team,

We are working on retrieving the email address of the Entra ID joined user from Entra-joined Windows devices, specifically while running in a system context. The whoami /upn command successfully returns the joined user’s email address in a user context, but it does not work in a system context, particularly when using an elevated terminal via the PsExec utility. We also tested the dsregcmd /status command; however, in a system context, the User Identity tab in the SSO State section only appears when there is an error in AzureAdPrt. Under normal, healthy operating conditions, this command does not provide the user identity or the full domain username.

We would greatly appreciate guidance on how to retrieve the Entra ID joined user’s email address in a system context, especially from those with prior experience in this area.

Thank you for your support.