integration
Centralizing Enterprise API Access for Agent-Based Architectures
Problem Statement

When building AI agents or automation solutions, calling enterprise APIs directly often means configuring individual HTTP actions within each agent for every API. While this works for simple scenarios, it quickly becomes repetitive and difficult to manage as complexity grows. The challenge becomes more pronounced when a single business domain exposes multiple APIs, or when the same APIs are consumed by multiple agents. This leads to duplicated configurations, higher maintenance effort, inconsistent behavior, and increased governance and security risks.

A more scalable approach is to centralize and reuse API access. By grouping APIs by business domain using an API management layer, shaping those APIs through a Model Context Protocol (MCP) server, and exposing the MCP server as a standardized tool or connector, agents can consume business capabilities in a consistent, reusable, and governable manner. This pattern not only reduces duplication and configuration overhead but also enables stronger versioning, security controls, observability, and domain-driven ownership—making agent-based systems easier to scale and operate in enterprise environments.

Designing Agent-Ready APIs with Azure API Management, an MCP Server, and Copilot Studio

As enterprises increasingly adopt AI-powered assistants and Copilots, API design must evolve to meet the needs of intelligent agents. Traditional APIs—often designed for user interfaces or backend integrations—can expose excessive data, lack intent-level abstraction, and increase security risk when consumed directly by AI systems. This document outlines a practical, enterprise-ready approach to organize APIs in Azure API Management (APIM), introduce a Model Context Protocol (MCP) server to shape and control context, and integrate the solution with Microsoft Copilot Studio. The goal is to make APIs truly agent-ready: secure, scalable, reusable, and easy to govern.

Architecture at a glance

- Back-end services expose domain APIs.
- Azure API Management (APIM) groups and governs those APIs (products, policies, authentication, throttling, versions).
- An MCP server calls APIM, orchestrates/filters responses, and returns concise, model-friendly outputs.
- Copilot Studio connects to the MCP server and invokes a small set of predictable operations to satisfy user intents.

Why Traditional API Designs Fall Short for AI Agents

Enterprise APIs have historically been built around CRUD operations and service-to-service integration patterns. While this works well for deterministic applications, AI agents work best with intent-driven operations and context-aware responses. When agents consume traditional APIs directly, common issues include overly verbose payloads, multiple calls to satisfy a single user intent, and insufficient guardrails for read vs. write operations. The result can be unpredictable agent behavior that is difficult to test, validate, and govern.

Structuring APIs Effectively in Azure API Management

Azure API Management (APIM) is the control plane between enterprise systems and AI agents. A well-structured APIM instance improves security, discoverability, and governance through products, policies, subscriptions, and analytics.

Key design principles for agent consumption

- Organize APIs by business capability (for example, Customer, Orders, Billing) rather than technical layers.
- Expose agent-facing APIs via dedicated APIM products to enable controlled access, throttling, versioning, and independent lifecycle management.
- Prefer read-only operations where possible; scope write operations narrowly and protect them with explicit checks, approvals, and least-privilege identities. Action-oriented APIs must be carefully scoped and gated.

The Role of the MCP Server in Agent-Based Architectures

APIM provides governance and security, but agents also need an intent-level interface and model-friendly responses. A Model Context Protocol (MCP) server fills this gap by acting as a mediator between Copilot Studio and APIM-exposed APIs. Instead of exposing many back-end endpoints directly to the agent, the MCP server can orchestrate multiple API calls, filter irrelevant fields, enforce business rules, enrich results with additional context, and emit concise, predictable JSON outputs. This makes agent behavior more reliable and easier to validate.

By introducing this abstraction layer, Copilot interactions become simpler, safer, and more deterministic. The agent interacts with a small number of well-defined MCP operations that encapsulate enterprise logic without exposing internal complexity.

Designing an Effective MCP Server

An MCP server should have a focused responsibility: shaping context for AI models. It should not replace core back-end services; it should adapt enterprise capabilities for agent consumption. Note that the protocol itself does not orchestrate enterprise workflows or apply business logic; it standardizes how agents discover and invoke external tools and APIs through a structured protocol interface, while orchestration, intent resolution, and policy-driven execution are handled by the agent runtime or host framework.

What MCP should do

- Call APIM-managed APIs and orchestrate multi-step retrieval when needed.
- Apply security checks and business rules consistently.
- Filter and minimize payloads (return only fields needed for the intent).
- Normalize and reshape responses into stable, predictable JSON schemas.
- Handle errors and edge cases with safe, descriptive messages.

What MCP should not do

Avoid implementing complex transactional workflows, long-running processes, or UI-specific formatting in MCP; these should remain in backend systems. Keeping MCP lightweight ensures it stays scalable, testable, and easy to maintain.

Step-by-step guide

1) Create an MCP server in Azure API Management (APIM)

- Open the Azure portal (portal.azure.com) and go to your API Management instance.
- In the left navigation, expand APIs.
- Create (or select) an API group for the business domain you want to expose (for example, Orders or Customers), and add the relevant APIs/operations to that API group.
- Create or select an APIM product dedicated to agent usage, and ensure the product requires a subscription (subscription key).
- Create an MCP server in APIM and map it to the API (or API group) you want to expose as MCP operations.
- In the MCP server settings, ensure Subscription key required is enabled.
- From the product’s Subscriptions page, copy the subscription key you will use in Copilot Studio.

Screenshot placeholders: APIM API group, product configuration, MCP server mapping, subscription settings, subscription key location.

Note: Using an API Management subscription key to access MCP operations is one supported way to authenticate and consume enterprise APIs. However, this approach is best suited for initial setups, demos, or scenarios where key-based access is explicitly required. For production-grade enterprise solutions, Microsoft recommends using managed identity–based access control. Managed identities for Azure resources eliminate the need to manage secrets such as subscription keys or client secrets, integrate natively with Microsoft Entra ID, and support fine-grained role-based access control (RBAC). This approach improves security posture while significantly reducing operational and governance overhead for agent and service-to-service integrations. Wherever possible, agents and MCP servers should authenticate using managed identities to ensure secure, scalable, and compliant access to enterprise APIs.

2) Create a Copilot Studio agent and connect to the APIM MCP server using a subscription key

Copilot Studio natively supports Model Context Protocol (MCP) servers as tools. When an agent is connected to an MCP server, the tool metadata—including operation names, inputs, and outputs—is automatically discovered and kept in sync, reducing manual configuration and maintenance overhead.

- Sign in to Copilot Studio.
- Create a new agent and add clear instructions describing when to use the MCP tool and how to present results (for example, concise summaries plus key fields).
- Open Tools > Add tool > Model Context Protocol, then choose Create.
- Enter the MCP server details:
  - Server endpoint URL: copy this from your MCP server in APIM.
  - Authentication: select API Key.
  - Header name: use the subscription key header required by your APIM configuration.
- Select Create new connection, paste the APIM subscription key, and save.
- Test the tool in the agent by prompting for a domain-specific task (for example, “Get order status for 12345”). Validate that responses are concise and that errors are handled safely.

Screenshot placeholders: MCP tool creation screen, endpoint + auth configuration, connection creation, test prompt and response.

Operational best practices and guardrails

- Least privilege by default: create separate APIM products and identities for agent scenarios; avoid broad access to internal APIs.
- Prefer intent-level operations: expose fewer, higher-level MCP operations instead of many low-level endpoints.
- Protect write operations: require explicit parameters, validation, and (when appropriate) approval flows; keep “read” and “write” tools separate.
- Stable schemas: return predictable JSON shapes and limit optional fields to reduce prompt brittleness.
- Observability: log MCP requests/responses (with sensitive fields redacted), monitor APIM analytics, and set alerts for failures and throttling.
- Versioning: version MCP operations and APIM APIs; deprecate safely.
- Security hygiene: treat subscription keys as secrets, rotate regularly, and avoid exposing them in prompts or logs.

Summary

As organizations scale agent-based and Copilot-driven solutions, directly exposing enterprise APIs to AI agents quickly becomes complex and risky.
Centralizing API access through Azure API Management, shaping agent-ready context via a Model Context Protocol (MCP) server, and consuming those capabilities through Copilot Studio establishes a clean and governable architecture. This pattern reduces duplication, enforces consistent security controls, and enables intent-driven API consumption without exposing unnecessary backend complexity. By combining domain-aligned API products, lightweight MCP operations, and least-privilege identity-based access, enterprises can confidently scale AI agents while maintaining strong governance, observability, and operational control.

References

- Azure API Management (APIM) – Overview
- Azure API Management – Key Concepts
- Azure MCP Server Documentation (Model Context Protocol)
- Extend your agent with Model Context Protocol
- Managed identities for Azure resources – Overview

Enabling Agentic Data Governance with Hybrid Cloud Flexibility in Azure
The “Why”

Do you manage data in a complex multi-cloud environment? Are you struggling with data silos, evolving regulations, and the pressure to maintain control and compliance across on-prem and multiple clouds? Do you ever wish an intelligent assistant could help shoulder the load of data governance? If so, I can relate. Let me tell you a story that might sound familiar.

Meet Mark (pictured above). He is a data governance officer at Contoso (a fictional but very representative enterprise). Mark’s day job is ensuring data governance and compliance across his company’s vast hybrid cloud estate – think around ~2 million data assets sprawled across 12+ datacenters on-premises and in different public clouds. Regulatory requirements are constantly shifting. Customer data is increasingly sensitive. Each department and region has its own way of doing things. Mark is fighting an uphill battle with data silos and disconnected cloud operations. He bounces between a patchwork of tools – spreadsheets, cloud consoles, governance portals – trying to answer basic questions: Where is our data? Who’s using it? Are we in compliance? Armed with an old desk calculator and a pile of paper-based reports (a perfect 1990s backdrop), he is trying to keep up with data that has exploded in volume and complexity.

What if Mark had a single pane of glass? A pane of glass that reflects and acts: it reflects your governance state and enforces compliance – a self-hydrating pane of glass accompanied by a conversational AI.

And he’s not alone. We’re all living in a data overload era. Every day, organizations generate and ingest more information than ever before. Transistors and mainframes gave way to the internet boom of the ’90s, then an explosion of mobile devices in the 2000s, social media in the 2010s, and now widespread cloud computing – all funneling data into our systems at an exponential rate. On top of that, a new wave of AI and conversational interfaces has arrived here in the mid-2020s, making data more accessible but also increasing expectations for real-time insight. It’s no wonder modern IT leaders feel overwhelmed. But these challenges are also opportunities. The way I see it, the incredible growth of data and cloud capabilities means we have a chance to reimagine data governance.

The fact that I’m writing about this right now is no coincidence. My customers are looking to resolve problems in this space. In my conversations with them, I hear the same needs: we want better governance, more visibility, streamlined oversight… and, cherry on top, we want it in an “agentic” fashion. In other words, they want to delegate the grunt work to the platform toolset augmented by AI, so they can focus on higher-value tasks.

The “What”

That vision – agentic data governance with hybrid cloud flexibility – became the driver for this work. This is a modular solution: you have building-block-style components (cloud services, governance tools, AI agents) that you can snap together into the intended solution. Think of it as a jumpstart kit for continuous data governance across multiple clouds, with autonomous (“agentic”) assistance baked in that you can leverage and build upon. It’s not the final, productized solution – more a vision of what’s possible.
Contoso’s Requirements

These are the high-level requirements from Contoso:

- Data governance across clouds under one roof
- A single pane of glass dashboard consolidating reporting on the 5 governance domains:
  - Visibility on data residency and lineage
  - PII (Personally Identifiable Information) must run on Confidential Compute (CC)
  - Security software (Defender) compliance
  - Resource tagging compliance (foundational for a good governance posture)
  - OS updates compliance
- Ability to enforce compliance in an agentic manner with a human in the loop
- Agentic enforcement of compliance pertaining to residency and confidential compute

Solution – The breakdown

The solution is comprised of 8 modules addressing these requirements. The solution modules are:

- Foundational (Landing zones, Data Sources, Operational setup, Policies, etc.)
- Dashboard Hydration + Agentic Reporting – Residency Compliance
- Dashboard Hydration + Agentic Reporting – Confidential Compute for PII Compliance
- Dashboard Hydration + Agentic Reporting – MS Defender Compliance
- Dashboard Hydration + Agentic Reporting – Resource Tag Compliance
- Dashboard Hydration + Agentic Reporting – OS Updates/Patch Compliance
- Enforce Compliance via Copilot Agent – Residency Compliance
- Enforce Compliance via Copilot Agent – CC PII Compliance

Solution – The architecture view

These are the main technical components that make up the solution architecture:

- Data sources of all shapes and sizes on the left, governed by the native Azure or the Arc plane
- Additional Azure services across the bottom layer for the foundational governance posture
- Microsoft Purview, in the top middle, as the unified data governance platform
- Microsoft Fabric, in the bottom middle, as the end-to-end ingestion and analytics platform
- Microsoft Power Platform, on the right, as the low code/no code business flow and the Copilot agent experience

Solution – The end user view

So how does Mark see this solution as a data governance officer? He doesn’t see all the intricacies of the solution integration and the logic execution. He sees two things:

- A Power BI dashboard running on Microsoft Fabric, with:
  - A compliance dashboard with an overall score in each of the five compliance domains, alongside scores for each of the data products across these domains
  - Additional reporting views for more granular reporting
  - A Fabric-based pipeline that hydrates the underlying semantic models from various sources to keep the reports fresh and current
- A Copilot agent (in Teams) for both:
  - Reporting on all compliance domains
  - Enforcing in-scope compliance across selected domains

The agent takes care of it – it queries Fabric’s semantic model, calls Azure Function endpoints, updates Purview glossary terms, applies Azure tags, and sends Teams notifications.

The “How” – Residency Compliance

Let’s pick a few modules to walk through how they work together to give Mark a cohesive agentic governance experience.

It’s Monday morning, and Mark logs into the Contoso governance portal with a cup of coffee in hand. Instead of a dozen browser tabs, he has two main tools open: the Data Governance Dashboard and the Contoso Governance Copilot agent. To address some inquiries that came to him as an assigned action, he interacted with the agent. During this interaction, not only did he validate whether any residency information was missing in the unified data governance platform (Purview), but he was also able to address a mismatch between Purview and the Azure resource, based on the design principles.
Here is the snippet of the chat:

Now, under the hood, several components worked on behalf of the agent to perform this governance check and apply the necessary course of action. Even before Mark’s conversation with the agent, an ongoing hydration process keeps the Fabric Power BI dashboard up to date.

Dashboard Hydration + Agentic Reporting – Residency Compliance

- A Fabric notebook runs the residency scorecard code block through a pipeline.
- It reads two Lakehouse tables containing the latest residency information from Purview and the approved region list.
- The notebook then gets a Microsoft Entra bearer token.
- Once acquired, the notebook calls an Azure Function endpoint.
- This endpoint searches for the Azure resources associated with the data products in Purview, using an Azure resource tag.
- The notebook then compares the declared Purview residency with the approved region list and the associated resource’s region.
- The notebook calculates the final 0 / 25 / 50 / 75 / 100 residency compliance score and a reason. For example: a data product without an associated Azure resource gets a 0, while a data product whose residency in Purview is a Contoso-approved region and also matches the associated Azure resource’s region gets a 100.
- It then writes the results to the relevant residency compliance Lakehouse tables.
- The dedicated compliance table then feeds the semantic model for reporting.
- The compliance Power BI dashboard is hydrated.

Enforce Compliance via Copilot Agent – Residency Compliance

With the dashboard data regularly updated, the agent follows this logic, using the updated reporting data and the actions at its disposal, during the earlier conversation with Mark:

- Mark initiates the conversation with the agent.
- The agent calls a Power Automate flow.
- This flow retrieves Purview’s residency information stored in the Fabric semantic model.
- Steps 5, 6, 7 and 8: When Mark asks to investigate a data product further, the agent carries the conversation using a topic, which leverages a flow, which uses a Power Automate custom connector to access an Azure Function endpoint. This endpoint retrieves the latest glossary (residency) information about the data product in question from Purview and provides a preview back to the user.
- Steps 10, 11, 12 and 13: If the update criteria are met, there is no conflict, and Mark gives his blessing, the topic calls another flow to access the Function’s Purview Update endpoint and make the glossary (residency) update in Purview for that data product.

The “How” – Confidential Compute for PII Compliance

Dashboard Hydration + Agentic Reporting – Confidential Compute for PII Compliance

The following snippet shows how Mark addresses the compliance risk with a critical data product (application), S/4 HANA, and performs the necessary compliance actions, such as tagging the associated resources and notifying the data product owners via a Teams channel.

The following diagram shows the under-the-hood hydration flow for confidential compute compliance:

Enforce Compliance via Copilot Agent – CC PII Compliance

Finally, the diagram below shows how Mark’s conversation flows through the main solution components:

Outcome

Stepping back, what did we accomplish for Mark and Contoso? We turned an onslaught of governance challenges into an opportunity to modernize how data is managed.
This gave Mark:

- Centralized visibility into data assets across the landscape through Purview and a unified dashboard
- Proactive compliance enabled with automated checks – controlled with Purview exports and Fabric pipeline schedules – and compliance enforcement using an agent
- Hybrid cloud consistency, by using Azure Arc and a foundational data plane management setup
- Reduced operational overhead with agentic reporting and compliance

Though the solution is comprised of a wide variety of components and services, it is built from standard building blocks and is relatively simple to implement. In total, the solution combined around a dozen Azure services and over 40 distinct components (from Purview catalogs to data pipelines, to custom functions and flows). You can choose to implement some or all of the compliance domains. Or, better yet, build upon it, create new domains, and pave new paths.

Wrap-up

I believe many enterprises could take a similar journey. If you’re facing these issues, consider this an invitation to think differently about data governance. Start with the pieces you already have – your own building blocks of cloud services and data – and imagine what you could build. Chances are that a lot of the heavy lifting can be orchestrated with today’s technology. And with the rise of AI copilots, the dream of agentic data governance – where your policies are continuously enforced by smart agents – is no longer science fiction. It’s here, right now, waiting for you to take it for a spin.

Next steps

- Watch the video narrative on the SAP on Azure YouTube channel
- Build it with the GitHub repository: https://github.com/moazmirza/data-sov-and-hyb-cloud
- Comments/questions: here, or on LinkedIn /moazmirza

Solution Selfies

- Azure Policy Compliance – Foundational Governance Posture
- Purview Data Product Catalog and Data Lineage
- Purview Governance Metadata → Fabric Lakehouse
- Fabric Semantic Model
- Additional Fabric Power BI Dashboard
- Copilot Studio Topic Flow
- Azure Function Endpoints

Introducing native Service Bus message publishing from Azure API Management (Preview)
We’re excited to announce a preview capability in Azure API Management (APIM) — you can now send messages directly to Azure Service Bus from your APIs using a built-in policy. This enhancement, currently in public preview, simplifies how you connect your API layer with event-driven and asynchronous systems, helping you build more scalable, resilient, and loosely coupled architectures across your enterprise.

Why this matters

Modern applications increasingly rely on asynchronous communication and event-driven designs. With this new integration:

- Any API hosted in API Management can publish to Service Bus — no SDKs, custom code, or middleware required.
- Partners, clients, and IoT devices can send data through standard HTTP calls, even if they don’t support AMQP natively.
- You stay in full control with authentication, throttling, and logging managed centrally in API Management.
- Your systems scale more smoothly by decoupling front-end requests from backend processing.

How it works

The new send-service-bus-message policy allows API Management to forward payloads from API calls directly into Service Bus queues or topics.

High-level flow

1. A client sends a standard HTTP request to your API endpoint in API Management.
2. The policy executes and sends the payload as a message to Service Bus.
3. Downstream consumers such as Logic Apps, Azure Functions, or microservices process those messages asynchronously.

All configuration happens in API Management — no code changes or new infrastructure are required.

Getting started

You can try it out in minutes:

1. Set up a Service Bus namespace and create a queue or topic.
2. Enable a managed identity (system-assigned or user-assigned) on your API Management instance.
3. Grant the identity the “Azure Service Bus Data Sender” role in Azure RBAC, scoped to your queue or topic.
4. Add the policy to your API operation:

<send-service-bus-message queue-name="orders">
  <payload>@(context.Request.Body.As<string>())</payload>
</send-service-bus-message>

Once saved, each API call publishes its payload to the Service Bus queue or topic. 📖 Learn more.

Common use cases

This capability makes it easy to integrate your APIs into event-driven workflows:

- Order processing – queue incoming orders for fulfillment or billing.
- Event notifications – trigger internal workflows across multiple applications.
- Telemetry ingestion – forward IoT or mobile app data to Service Bus for analytics.
- Partner integrations – offer REST-based endpoints for external systems while maintaining policy-based control.

Each of these scenarios benefits from simplified integration, centralized governance, and improved reliability.

Secure and governed by design

The integration uses managed identities for secure communication between API Management and Service Bus — no secrets required. You can further apply enterprise-grade controls:

- Enforce rate limits, quotas, and authorization through APIM policies.
- Gain API-level logging and tracing for each message sent.
- Use Service Bus metrics to monitor downstream processing.

Together, these tools help you maintain a consistent security posture across your APIs and messaging layer.

Build modern, event-driven architectures

With this feature, API Management can serve as a bridge to your event-driven backbone. Start small by queuing a single API’s workload, or extend to enterprise-wide event distribution using topics and subscriptions. You’ll reduce architectural complexity while enabling more flexible, scalable, and decoupled application patterns.
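To see the flow from the client’s perspective, a caller might publish an order with a plain HTTP request like the sketch below. The host name, path, and subscription key are placeholders, and the Ocp-Apim-Subscription-Key header is only needed if your API or product requires a subscription; the request body is what the policy above forwards to the "orders" queue as the message payload.

curl -X POST "https://contoso-apim.azure-api.net/orders/submit" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: <your-subscription-key>" \
  -d '{ "orderId": "12345", "customer": "Contoso", "amount": 99.90 }'

The API call returns immediately while the queued message is processed asynchronously by downstream consumers such as Logic Apps or Azure Functions.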
Learn more: Get the full walkthrough and examples in the documentation 👉 here.

Migrate Data Ingestion from Data Collector to Log Ingestion
The HTTP Data Collector API in Log Analytics workspaces is being deprecated and will be fully out of support in September 2026. Data Collector actions in Logic Apps that use already-created API connections (which use the workspace ID and key) will still work against old custom log tables. However, newly created tables will not be able to ingest data: the connector will still succeed in the Logic App, but no data will be populated in newly created custom log tables. If a new API connection is created for the Data Collector action (using the workspace ID and key), the action will fail with 403 – Forbidden.

Users should start using the Log Ingestion API to send data to custom tables, and this document will guide you through using the Log Ingestion API in Logic Apps.

Note: The Azure portal has been updated so it no longer shows the workspace keys on the Log Analytics workspace page. Az CLI will still return the keys, but as stated, actions using them in the Data Collector action will fail with 403.

Creating the DCE and DCR

To use the Log Ingestion API, a Data Collection Endpoint (DCE) and a Data Collection Rule (DCR) must be created first. DCE creation is simple: from the Azure portal, search for DCE, then create a new one.

A DCR can be created either from the DCR page in the Azure portal or when creating the custom log table in the Log Analytics workspace.

DCR popup

You need to upload a sample data file so the custom log table has a schema; it needs to be a JSON array. If the sample log doesn’t have a TimeGenerated field, you can easily add it with a transformation: in the Transformation box, add a KQL statement that extends the source with a TimeGenerated column (for example, source | extend TimeGenerated = now()), then click Run. Once you complete the DCR creation, you need to get the full DCE endpoint.

Getting the full DCE Log Ingestion URL

To get the full endpoint URL:

1. Get the DCE Log Ingestion URL from the DCE overview page.
2. On the DCR page, get the immutable ID for the DCR, then click on the JSON view of the DCR resource.
3. From the JSON view, get the stream name from the streamDeclarations field.

The full Log Ingestion URL is:

DCE_URL/dataCollectionRules/{immutable_id}/streams/{streamName}?api-version=2023-01-01

It will look similar to:

https://mshbou****.westeurope-1.ingest.monitor.azure.com/dataCollectionRules/dcr-7*****4e988bef2995cd52ae/streams/Custom-mshboulLogAPI_CL?api-version=2023-01-01

Granting the Logic App managed identity the required IAM role

To call the ingestion endpoint using the Logic App’s managed identity, grant the managed identity the “Monitoring Metrics Publisher” role on the DCR resource. To do this, open the DCR, choose Access Control (IAM) from the blade, and grant the Logic App managed identity the “Monitoring Metrics Publisher” role.

Calling the Log Ingestion endpoint from Logic Apps

To call the ingestion endpoint from Logic Apps, use the HTTP action as below. The URI is the full DCE endpoint created earlier. Add the Content-Type header and the JSON body that contains the log data you want to send. For the authentication, use the Logic App’s managed identity, as below. (A consolidated sketch of the request appears at the end of this article.)

Once executed, it should succeed with status code 204.

For more details on the Log Ingestion API and the migration, please see our documentation:

- Migrate from the HTTP Data Collector API to the Log Ingestion API - Azure Monitor | Microsoft Learn
- Logs Ingestion API in Azure Monitor - Azure Monitor | Microsoft Learn

Thanks.
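As a recap of the walkthrough above, the HTTP action essentially issues a request shaped like the following sketch. The URI segments and the column names in the JSON array body are placeholders: they must match your DCE, DCR immutable ID, stream name, and custom table schema, and the action authenticates with the Logic App’s managed identity as described above.

POST https://[DCE_URL]/dataCollectionRules/{immutable_id}/streams/{streamName}?api-version=2023-01-01
Content-Type: application/json

[
  {
    "TimeGenerated": "2025-01-01T10:00:00Z",
    "Application": "ContosoApp",
    "Message": "Sample log entry"
  }
]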
Announcing General Availability: Azure Logic Apps Standard Custom Code with .NET 8

We’re excited to announce the General Availability (GA) of Custom Code support in Azure Logic Apps Standard with .NET 8. This release marks a significant step forward in enabling developers to build more powerful, flexible, and maintainable integration workflows using familiar .NET tools and practices. With this capability, developers can now embed custom .NET 8 code directly within their Logic Apps Standard workflows. This unlocks advanced logic scenarios, promotes code reuse, and allows seamless integration with existing .NET libraries and services—making it easier than ever to build enterprise-grade solutions on Azure.

What’s New in GA

This GA release introduces several key enhancements that improve the development experience and expand the capabilities of custom code in Logic Apps.

Bring Your Own Packages

Developers can now include and manage their own NuGet packages within custom code projects without having to resolve conflicts with the dependencies used by the language worker host. The update includes the ability to load the assembly dependencies of the custom code project into a separate assembly context, allowing you to bring any .NET 8 compatible dependent assembly versions that your project needs. There are only three exceptions to this rule:

- Microsoft.Extensions.Logging.Abstractions
- Microsoft.Extensions.DependencyInjection.Abstractions
- Microsoft.Azure.Functions.Extensions.Workflows.Abstractions

Dependency Injection Native Support

Custom code now supports native Dependency Injection (DI), enabling better separation of concerns and more testable, maintainable code. This aligns with modern .NET development patterns and simplifies service management within your custom logic. To enable Dependency Injection, developers will need to provide a StartupConfiguration class, defining the list of dependencies:

using Microsoft.Azure.Functions.Extensions.Workflows;
using Microsoft.Extensions.DependencyInjection;

public class StartupConfiguration : IConfigureStartup
{
    /// <summary>
    /// Configures services for the Azure Functions application.
    /// </summary>
    /// <param name="services">The service collection to configure.</param>
    public void Configure(IServiceCollection services)
    {
        // Register the routing service with dependency injection
        services.AddSingleton<IRoutingService, OrderRoutingService>();
        services.AddSingleton<IDiscountService, DiscountService>();
    }
}

You will also need to inject those registered services through your custom code class constructor:

public class MySampleFunction
{
    private readonly ILogger<MySampleFunction> logger;
    private readonly IRoutingService routingService;
    private readonly IDiscountService discountService;

    public MySampleFunction(ILoggerFactory loggerFactory, IRoutingService routingService, IDiscountService discountService)
    {
        this.logger = loggerFactory.CreateLogger<MySampleFunction>();
        this.routingService = routingService;
        this.discountService = discountService;
    }

    // your function logic here
}

Improved Authoring Experience

The development experience has been significantly enhanced with improved tooling and templates. Whether you’re using Visual Studio or Visual Studio Code, you’ll benefit from streamlined scaffolding, local debugging, and deployment workflows that make building and managing custom code faster and more intuitive. The following user experience improvements were added:

- Local functions metadata is kept between VS Code sessions, so you don’t receive validation errors when editing workflows that depend on the local functions.
- Projects are also built when the designer starts, so you don’t have to manually update references.
- New context menu gestures allow you to create new local functions or build your functions project directly from the explorer area.
- Unified debugging experience, making it easier for you to debug. There is now a single task for debugging custom code and Logic Apps, which makes starting a new debug session as easy as pressing F5.

Learn More

To get started with custom code in Azure Logic Apps Standard, visit the official Microsoft Learn documentation: Create and run custom code in Azure Logic Apps Standard. You can also find example code for dependency injection at wsilveiranz/CustomCode-Dependency-Injection.

Advancing to Agentic AI with Azure NetApp Files VS Code Extension v1.2.0
The Azure NetApp Files VS Code Extension v1.2.0 introduces a major leap toward agentic, AI-informed cloud operations with the debut of the autonomous Volume Scanner. Moving beyond traditional assistive AI, this release enables intelligent infrastructure analysis that can detect configuration risks, recommend remediations, and execute approved changes under user governance. Complemented by an expanded natural language interface, developers can now manage, optimize, and troubleshoot Azure NetApp Files resources through conversational commands - from performance monitoring to cross-region replication, backup orchestration, and ARM template generation. Version 1.2.0 establishes the foundation for a multi-agent system built to reduce operational toil and accelerate a shift toward self-managing enterprise storage in the cloud.

Logic Apps Aviators Newsletter - April 2026
In this issue:

- Ace Aviator of the Month
- News from our product group
- News from our community

Ace Aviator of the Month

April 2026’s Ace Aviator: Marcelo Gomes

What’s your role and title? What are your responsibilities?

I’m an Integration Team Leader (Azure Integrations) at COFCO International, working within the Enterprise Integration Platform. My core responsibility is to design, architect, and operate integration solutions that connect multiple enterprise systems in a scalable, secure, and resilient way. I sit at the intersection of business, architecture, and engineering, ensuring that business requirements are correctly translated into technical workflows and integration patterns.

From a practical standpoint, my responsibilities include:

- Defining integration architecture standards and patterns across the organization
- Designing end-to-end integration solutions using Azure Integration Services
- Owning and evolving the API landscape (via Azure API Management)
- Leading, mentoring, and supporting the integration team
- Driving PoCs, experiments, and technical explorations to validate new approaches
- Acting as a bridge between systems, teams, and business domains, ensuring alignment and clarity

In short, my role is to make sure integrations are not just working — but are well-designed, maintainable, and aligned with business goals.

Can you give us some insights into your day-to-day activities and what a typical day looks like?

My day-to-day work is a balance between technical leadership, architecture, and execution. A typical day usually involves:

- Working closely with Business Analysts and Product Owners to understand integration requirements, constraints, and expected outcomes
- Translating those requirements into integration flows, APIs, and orchestration logic
- Defining or validating the architecture of integrations, including patterns, error handling, resiliency, and observability
- Guiding developers during implementation, reviewing approaches, and helping them make architectural or design decisions
- Managing and governing APIs through Azure API Management, ensuring consistency, security, and reusability
- Unblocking team members by resolving technical issues, dependencies, or architectural doubts
- Performing estimations, supporting planning, and aligning delivery expectations

I’m also hands-on. I actively build integrations myself — not just to help deliver, but to stay close to the platform, understand real challenges, and continuously improve our standards and practices. I strongly believe technical leadership requires staying connected to the actual implementation.

What motivates and inspires you to be an active member of the Aviators / Microsoft community?

What motivates me is knowledge sharing. A big part of what I know today comes from content shared by others — blog posts, samples, talks, community discussions, and real-world experiences. Most of my learning followed a simple loop: someone shared → I tried it → I broke it → I fixed it → I learned.

For me, learning only really completes its cycle when we share back. Explaining what worked (and what didn’t) helps others avoid the same mistakes and accelerates collective growth. Communities like Aviators and the Microsoft ecosystem create a space where learning is practical, honest, and experience-driven — and that’s exactly the type of environment I want to contribute to.

Looking back, what advice would you give to people getting into STEM or technology?

My main advice is: start by doing.
Don’t wait until you feel ready or confident — you won’t. When you start doing, you will fail. And that failure is not a problem; it’s part of the learning process. Each failure builds experience, confidence, and technical maturity.

Another important point: ask questions. There is no such thing as a stupid question. Asking questions opens perspectives, challenges assumptions, and often triggers better solutions. Sometimes, a simple question from a fresh point of view can completely change how a problem is solved. Progress in technology comes from curiosity, iteration, and collaboration — not perfection.

What has helped you grow professionally?

Curiosity has been the biggest driver of my professional growth. I like to understand how things work under the hood, not just how to use them. When I’m curious about something, I try it myself, test different approaches, and build my own experience around it. That hands-on curiosity helps me:

- Develop stronger technical intuition
- Understand trade-offs instead of just following patterns blindly
- Make better architectural decisions
- Communicate more clearly with both technical and non-technical stakeholders

Having personal experience with successes and failures gives me clarity about what I’m really looking for in a solution — and that has been key to my growth.

If you had a magic wand to create a new feature in Logic Apps, what would it be and why?

I’d add real-time debugging with execution control. Specifically, the ability to:

- Pause a running Logic App execution
- Inspect intermediate states, variables, and payloads in real time
- Step through actions one by one, similar to a traditional debugger

This would dramatically improve troubleshooting, learning, and optimization, especially in complex orchestrations. Today, we rely heavily on post-execution inspection, which works — but real-time visibility would be a huge leap forward in productivity and understanding. For integration engineers, that kind of feature would be a true game-changer.

News from our product group

How to revoke connection OAuth programmatically in Logic Apps

The post shows how to revoke an API connection’s OAuth tokens programmatically in Logic Apps, without using the portal. It covers two approaches: invoking the Revoke Connection Keys REST API directly from a Logic App using the “Invoke an HTTP request” action, and using an Azure AD app registration to acquire a bearer token that authorizes the revoke call from Logic Apps or tools like Postman. Step-by-step guidance includes building the request URL, obtaining tokens with client credentials, parsing the token response, and setting the Authorization header. It also documents required permissions and a least-privilege custom RBAC role.

Introducing Skills in Azure API Center

This article introduces Skills in Azure API Center — registered, reusable capabilities that AI agents can discover and use alongside APIs, models, agents, and MCP servers. A skill describes what it does, its source repository, ownership, and which tools it is allowed to access, providing explicit governance. Teams can register skills manually in the Azure portal or automatically sync them from a Git repository, supporting GitOps workflows at scale. The portal offers discovery, filtering, and lifecycle visibility. Benefits include a single inventory for AI assets, better reuse, and controlled access via Allowed tools. Skills are available in preview with documentation links.
Reliable blob processing using Azure Logic Apps: Recommended architecture

The post explains limitations of the in-app Azure Blob trigger in Logic Apps, which relies on polling and best-effort storage logs that can miss events under load. For mission-critical scenarios, it recommends a queue-based pattern: have the source system emit a message to Azure Storage Queues after each blob upload, then trigger the Logic App from the queue and fetch the blob by metadata. Benefits include guaranteed triggering, decoupling, retries, and observability. As an alternative, it outlines using Event Grid with single-tenant Logic App endpoints, plus caveats for private endpoints and subscription validation requirements.

Implementing / Migrating the BizTalk Server Aggregator Pattern to Azure Logic Apps Standard

This article shows how to implement or migrate the classic BizTalk Server Aggregator pattern to Azure Logic Apps Standard using a production-ready template available in the Azure portal. It maps BizTalk orchestration concepts (correlation sets, pipelines, MessageBox) to cloud-native equivalents: a stateful workflow, Azure Service Bus as the messaging backbone, CorrelationId-based grouping, and FlatFileDecoding for reusing existing BizTalk XSD schemas with zero refactoring. Step-by-step guidance covers triggering with the Service Bus connector, grouping messages by CorrelationId, decoding flat files, composing aggregated results, and delivering them via HTTP. A side-by-side comparison highlights architectural differences and migration considerations, aligned with BizTalk Server end-of-life timelines.

News from our community

Resilience for Azure IPaaS services
Post by Stéphane Eyskens

Stéphane Eyskens examines resilience patterns for Azure iPaaS workloads and how to design multi-region architectures spanning stateless and stateful services. The article maps strategies across Service Bus, Event Hubs, Event Grid, Durable Functions, Logic Apps, and API Management, highlighting failover models, idempotency, partitioning, and retry considerations. It discusses trade-offs between active-active and active-passive, the role of a governed API front door, and the importance of consistent telemetry for recovery and diagnostics. The piece offers pragmatic guidance for integration teams building high-availability, fault-tolerant solutions on Azure.

From APIs to Agents: Rethinking Integration in the Agentic Era
Post by Al Ghoniem, MBA

This article frames AI agents as a new layer in enterprise integration rather than a replacement for existing platforms. It contrasts deterministic orchestration with agent-mediated behavior, then proposes an Azure-aligned architecture: Azure AI Agent Service as runtime, API Management as the governed tool gateway, Service Bus/Event Grid for events, Logic Apps for deterministic workflows, API Center as registry, and Entra for identity and control. It also outlines patterns—tool-mediated access, hybrid orchestration, event+agent systems, and policy-enforced interaction—plus anti-patterns to avoid.

DevUP Talks 01 - 2026 Q1 trends with Kent Weare
Video by Mattias Lögdberg

Mattias Lögdberg hosts Kent Weare for a concise discussion on early-2026 trends affecting integration and cloud development. The conversation explores how AI is reshaping solution design, where new opportunities are emerging, and how teams can adapt practices for reliability, scalability, and speed.
It emphasizes practical implications for developers and architects working with Azure services and modern integration stacks. The episode serves as a quick way to track directional changes and focus on skills that matter as agentic automation and platform capabilities evolve.

Azure Logic Apps as MCP Servers: A Step-by-Step Guide
Post by Stephen W Thomas

Stephen W Thomas shows how to expose Azure Logic Apps (Standard) as MCP servers so AI agents can safely reuse existing enterprise workflows. The guide explains why this matters—reusing logic, tapping 1,400+ connectors, and applying key-based auth—and walks through creating an HTTP workflow, defining JSON schemas, connecting to SQL Server, and generating API keys from the MCP Servers blade. It closes with testing in VS Code, demonstrating how agents invoke Logic Apps tools to query live data with governance intact, without rewriting integration code.

BizTalk to Azure Migration Roadmap: Integration Journey
Post by Sandro Pereira

This roadmap-style article distills lessons from BizTalk-to-Azure migrations into a structured journey. It outlines motivations for moving, capability mapping from BizTalk to Azure Integration Services, and phased strategies that reduce risk while modernizing. Readers get guidance on assessing dependencies, choosing target Azure services, designing hybrid or cloud-native architectures, and sequencing workloads. The post emphasizes that migration is not a lift-and-shift but a program of work aligned to business priorities, platform governance, and operational readiness.

BizTalk Adapters to Azure Logic Apps Connectors
Post by Michael Stephenson

Michael Stephenson discusses how organizations migrating from BizTalk must rethink integration patterns when moving to Azure Logic Apps connectors. The post considers what maps well, where gaps and edge cases appear, and how real-world implementations often require re-architecting around AIS capabilities rather than a one-to-one adapter swap. It highlights community perspectives and practical considerations for planning, governance, and operationalizing new designs beyond pure connector parity.

Pro-Code Enterprise AI-Agents using MCP for Low-Code Integration
Video by Sebastian Meyer

This short video demonstrates bridging pro-code and low-code by using the Model Context Protocol (MCP) to let autonomous AI agents interact with enterprise systems via Logic Apps. It walks through the high-level setup—agent, MCP server, and Logic Apps workflows—and shows how to connect to platforms like ServiceNow and SAP. The focus is on practical tool choice and architecture so teams can extend existing integration assets to agent-driven use cases without rebuilding from scratch.

Friday Fact: The Hidden Retry Behavior That Makes Logic Apps Feel Stuck
Post by João Ferreira

This Friday Fact explains why a Logic App can appear “stuck” when calling unstable APIs: hidden retry policies, exponential backoff, and looped actions can accumulate retries and slow runs dramatically. It lists default behaviors many miss, common causes like throttling, and mitigation steps such as setting explicit retry policies, using Configure run after for failure paths, and introducing circuit breakers for flaky backends. The takeaway: the workflow may not be broken—just retrying too aggressively—so design explicit limits and recovery paths.
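As a quick companion to the Friday Fact above, an explicit retry policy can be declared directly on an action in the workflow definition. The fragment below is a minimal sketch (the action name, URI, and values are illustrative, not taken from the post):

"Call_backend_API": {
  "type": "Http",
  "inputs": {
    "method": "GET",
    "uri": "https://example.com/api/orders",
    "retryPolicy": {
      "type": "fixed",
      "count": 3,
      "interval": "PT30S"
    }
  },
  "runAfter": {}
}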
Your Logic App Is NOT Your Business Process (Here’s Why)
Video by Al Ghoniem, MBA

This short explainer argues that mapping Logic Apps directly to a business process produces brittle workflows. Real systems require retries, enrichment, and exception paths, so the design quickly diverges from a clean process diagram. The video proposes separating technical orchestration from business visibility using Business Process Tracking. That split yields clearer stakeholder views and more maintainable solutions, while keeping deterministic execution inside Logic Apps. It’s a practical reminder to design for operational reality rather than mirroring a whiteboard flow.

BizTalk Server Migration to Azure Integration Services Architecture Guidance
Post by Sandro Pereira

A brief overview of Microsoft’s architecture guidance for migrating BizTalk Server to Azure Integration Services. The post explains the intent of the guidance, links to sections on reasons to migrate, AIS capabilities, BizTalk vs. AIS comparisons, and service selection. It highlights planning topics such as migration approaches, best practices, and a roadmap, helping teams frame decisions for hybrid or cloud-native architectures as they modernize BizTalk workloads.

Logic Apps & Power Automate Action Name to Code Translator
Post by Sandro Pereira

This post introduces a lightweight utility that converts Logic Apps and Power Automate action names into their code identifiers—useful when writing expressions or searching in Code View. It explains the difference between designer-friendly labels and underlying names (spaces become underscores and certain symbols are disallowed), why this causes friction, and how the tool streamlines the translation. It includes screenshots, usage notes, and the download link to the open-source project, making it a practical time-saver for developers moving between designer and code-based workflows.

Logic Apps Consumption CI/CD from Zero to Hero Whitepaper
Post by Sandro Pereira

This whitepaper provides an end-to-end path to automate CI/CD for Logic Apps Consumption using Azure DevOps. It covers solution structure, parameterization, and environment promotion, then shows how to build reliable pipelines for packaging, deploying, and validating Logic Apps. The guidance targets teams standardizing delivery with repeatable patterns and governance. With templates and practical advice, it helps reduce manual steps, improve quality, and accelerate releases for Logic Apps workloads.

Logic App Best practices, Tips and Tricks: #2 Actions Naming Convention
Post by Sandro Pereira

This best-practices post focuses on action naming in Logic Apps. It explains why consistent, descriptive names improve readability, collaboration, and long-term maintainability, then outlines rules and constraints on allowed characters. It shows common pitfalls—default names, uneditable trigger/branch labels—and practical tips for renaming while avoiding broken references. The guidance helps teams treat names as living documentation so workflows remain understandable without drilling into each action’s configuration.

How to Expose and Protect Logic App Using Azure API Management (Whitepaper)
Post by Sandro Pereira

This whitepaper explains how to front Logic Apps with Azure API Management for governance and security. It covers publishing Logic Apps as APIs, restricting access, enforcing IP filtering, preventing direct calls to Logic Apps, and documenting operations.
It also discusses combining multiple Logic Apps under a single API, naming conventions, and how to remove exposed operations safely. The paper provides step-by-step guidance and a download link to help teams standardize exposure and protection patterns.

Logic apps – Check the empty result in SQL connector
Post by Anitha Eswaran

This post shows a practical pattern for handling empty SQL results in Logic Apps. Using the SQL connector’s output, it adds a Parse JSON step to normalize the result and then evaluates length() to short-circuit execution when no rows are returned. Screenshots illustrate building the schema, wiring the content, and introducing a conditional branch that terminates the run when the array is empty. The approach avoids unnecessary downstream actions and reduces failures, providing a reusable, lightweight guard for query-driven workflows.

Azure Logic Apps Is Basically Power Automate on Steroids (And You Shouldn’t Be Afraid of It)
Post by Kim Brian

Kim Brian explains why Logic Apps feels familiar to Power Automate builders while removing ceilings that appear at scale. The article contrasts common limits in cloud flows with Standard/Consumption capabilities, highlights the designer vs. code-view model, and calls out built-in Azure management features such as versioning, monitoring, and CI/CD. It positions Logic Apps as the “bigger sibling” for enterprise-grade integrations and data throughput, offering more control without abandoning the visual authoring experience.

Logic Apps CSV Alphabetic Sorting Explained
Post by Sandro Pereira

Sandro Pereira describes why CSV headers and columns can appear in alphabetical order after deploying Logic Apps via ARM templates. He explains how JSON serialization and array ordering influence CSV generation, what triggers the sorting behavior, and practical workarounds to preserve intended column order. The article helps teams avoid subtle defects in data exports by aligning workflow design and deployment practices with how Logic Apps materializes CSV content at runtime.

Azure Logic Apps Translation vs Transformation – Actions, Examples, and Schema Mapping Explained
Post by Maheshkumar Tiwari

Maheshkumar Tiwari clarifies the difference between translation (format change) and transformation (business logic) in Logic Apps, then maps each to concrete Azure capabilities. Using a purchase-order scenario, he shows how to decode/encode flat files and EDI, convert XML↔JSON, and apply Liquid/XSLT, Select, Compose, and Filter Array for schema mapping and enrichment. A quick reference table ties common tasks to the right action, helping architects separate concerns so format changes don’t break business rules and workflow design remains maintainable.

How to revoke connection OAuth programmatically in Logic Apps
There are multiple ways to revoke the OAuth tokens of an API connection other than clicking the Revoke button in the portal.

Using the “Invoke an HTTP request” action

Get the connection name, then create a Logic App (Consumption), use a trigger of your liking, and add the “Invoke an HTTP request” action. Create a connection on the same tenant that hosts the API connection, then add the below URL to the action and test it:

https://management.azure.com/subscriptions/[SUBSCRIPTION_ID]/resourceGroups/[RESOURCE_GROUP]/providers/Microsoft.Web/connections/[NAME_OF_CONNECTION]/revokeConnectionKeys?api-version=2018-07-01-preview

Your test should be successful.

Using an App Registration to fetch the token (which is also how you can do this with Postman or similar)

The App Registration should include this permission:

For your Logic App, or Postman, get the bearer token by calling this URL:

https://login.microsoftonline.com/[TENANT_ID]/oauth2/v2.0/token

With this in the body:

Client_Id=[CLIENT_ID_OF_THE_APP_REG]&Client_Secret=[CLIENT_SECRET_FROM_APP_REG]&grant_type=client_credentials&scope=https://management.azure.com/.default

For the header, use:

Content-Type = application/x-www-form-urlencoded

If you’ll use a Logic App for this, add a Parse JSON action, use the body of the Get Bearer Token HTTP action as its input, then use the below as the schema:

{
  "properties": {
    "access_token": {
      "type": "string"
    },
    "expires_in": {
      "type": "integer"
    },
    "ext_expires_in": {
      "type": "integer"
    },
    "token_type": {
      "type": "string"
    }
  },
  "type": "object"
}

Finally, add another HTTP action (or call this in Postman or similar) to call the revoke API. In the header, add an “Authorization” key with a value of “Bearer” followed by a space and then the bearer token from the output of the Parse JSON action.

https://management.azure.com/subscriptions/[SUBSCRIPTION_ID]/resourceGroups/[RESOURCE_GROUP]/providers/Microsoft.Web/connections/[NAME_OF_CONNECTION]/revokeConnectionKeys?api-version=2018-07-01-preview

If you want to use curl:

Request the token

Use the OAuth 2.0 client credentials flow to get the token:

curl -X POST \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "client_id=[CLIENT_ID_OF_APP_REG]" \
  -d "scope=https://management.azure.com/.default" \
  -d "client_secret=[CLIENT_SECRET_FROM_APP_REG]" \
  -d "grant_type=client_credentials" \
  https://login.microsoftonline.com/[TENANT_ID]/oauth2/v2.0/token

The access_token in the response is your bearer token.

Call the revoke API

curl -X POST "https://management.azure.com/subscriptions/[SUBSCRIPTION_ID]/resourceGroups/[RESOURCE_GROUP]/providers/Microsoft.Web/connections/[NAME_OF_CONNECTION]/revokeConnectionKeys?api-version=2018-07-01-preview" \
  -H "Authorization: Bearer <ACCESS_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"key":"value"}'

If you face the below error, you will need to grant the Contributor role to the App Registration on the resource group that contains the API connection. (If you want least privilege, skip to the next section.)

{
  "error": {
    "code": "AuthorizationFailed",
    "message": "The client '[App_reg_client_id]' with object id '[App_reg_object_id]' does not have authorization to perform action 'Microsoft.Web/connections/revokeConnectionKeys/action' over scope '/subscriptions/[subscription_id]/resourceGroups/[resource_group_id]/providers/Microsoft.Web/connections/[connection_name]' or the scope is invalid. If access was recently granted, please refresh your credentials."
For a least-privilege alternative, create a custom RBAC role with the actions below and assign it to the App Registration's object ID in the same way:

{
  "actions": [
    "Microsoft.Web/connections/read",
    "Microsoft.Web/connections/write",
    "Microsoft.Web/connections/delete",
    "Microsoft.Web/connections/revokeConnectionKeys/action"
  ]
}
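One way to create and assign that custom role without the portal is sketched below using the Az PowerShell module; the role name, description, and resource-group scope are illustrative assumptions, and a newly created role definition can take a few minutes before it becomes assignable.

# Minimal sketch, assuming the Az PowerShell module and rights to create role
# definitions and assignments at the target scope. Names below are placeholders.
$scope = "/subscriptions/[SUBSCRIPTION_ID]/resourceGroups/[RESOURCE_GROUP]"

# Clone a built-in role as a template, then replace its actions and scopes.
$role = Get-AzRoleDefinition -Name "Reader"
$role.Id = $null
$role.Name = "API Connection Key Revoker"   # illustrative name
$role.Description = "Read, write, delete and revoke keys on API connections."
$role.Actions.Clear()
$role.Actions.Add("Microsoft.Web/connections/read")
$role.Actions.Add("Microsoft.Web/connections/write")
$role.Actions.Add("Microsoft.Web/connections/delete")
$role.Actions.Add("Microsoft.Web/connections/revokeConnectionKeys/action")
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add($scope)
New-AzRoleDefinition -Role $role

# Assign the custom role to the App Registration's service principal.
$sp = Get-AzADServicePrincipal -ApplicationId "[CLIENT_ID_OF_THE_APP_REG]"
New-AzRoleAssignment -ObjectId $sp.Id -RoleDefinitionName $role.Name -Scope $scope

Cloning the built-in Reader role is just a convenient way to get a valid role-definition object to edit; building the definition in a JSON file and passing it to New-AzRoleDefinition -InputFile works equally well.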
Introducing Skills in Azure API Center

The problem

Modern applications depend on more than APIs. A single AI workflow might call an LLM, invoke an MCP tool, integrate a third-party service, and reference a business capability spanning dozens of endpoints. Without a central inventory, these assets become impossible to discover, easy to duplicate, and painful to govern.

Azure API Center, part of the Azure API Management platform, already catalogs models, agents, and MCP servers alongside traditional APIs. Skills extend that foundation to cover reusable AI capabilities.

What is a Skill?

As AI agents become more capable, organizations need a way to define and govern what those agents can actually do. Skills are the answer.

A Skill in Azure API Center is a reusable, registered capability that AI agents can discover and consume to extend their functionality. Each skill is backed by source code, typically hosted in a Git repository, and describes what it does, what APIs or MCP servers it can access, and who owns it. Think of skills as the building blocks of AI agent behavior, promoted into a governed inventory alongside your APIs, MCP servers, models, and agents.

Example: A "Code Review Skill" performs automated code reviews using static analysis. It is registered in API Center with a Source URL pointing to its GitHub repo, allowed to access your code analysis API, and discoverable by any AI agent in your organization.

How Skills work in API Center

Skills can be added to your inventory in two ways: registered manually through the Azure portal, or synchronized automatically from a connected Git repository. Both approaches end up in the same governed catalog, discoverable through the API Center portal.

Option 1: Register a Skill manually

Use the Azure portal to register a skill directly. Navigate to Inventory > Assets in your API center, select + Register an asset > Skill, and fill in the registration form.

Figure 2: Register a skill form in the Azure portal.

The form captures everything needed to make a skill discoverable and governable:

Title: Display name for the skill (for example, Code Review Skill).
Identification: Auto-generated URL slug based on the title. Editable.
Summary: One-line description of what the skill does.
Description: Full detail on capabilities, use cases, and expected behavior.
Lifecycle stage: Current state: Design, Preview, Production, or Deprecated.
Source URL: Git repository URL for the skill source code.
Allowed tools: The APIs or MCP servers from your inventory this skill is permitted to access. Enforces governance at the capability level.
License: Licensing terms: MIT, Apache 2.0, Proprietary, etc.
Contact information: Owner or support contact for the skill.

Governance note: The Allowed tools field is key for AI governance. It explicitly defines which APIs and MCP servers a skill can invoke, preventing uncontrolled access and making security review straightforward.

Option 2: Sync Skills from a Git repository

For teams managing skills in source control, API Center can integrate directly with a Git repository and synchronize skill information automatically. This is the recommended approach for teams practicing GitOps or managing many skills at scale.

Figure 3: Integrating a Git repository to sync skills automatically into API Center.
When you configure a Git integration, API Center:

Creates an Environment representing the repository as a source of skills
Scans for files matching the configured pattern (default: **/skill.md)
Syncs matching skills into your inventory and keeps them current as the repo changes

For private repositories, a Personal Access Token (PAT) stored in Azure Key Vault is used for authentication. API Center's managed identity retrieves the PAT securely, so no credentials are stored in the service itself (a Key Vault provisioning sketch appears after the links at the end of this article).

Tip: Use the Automatically configure managed identity and assign permissions option when setting up the integration if you haven't pre-configured a managed identity. API Center handles the Key Vault permissions automatically.

Discovering Skills in your catalog

Once registered, whether manually or via Git, skills appear in the Inventory > Assets page alongside all other asset types. Linked skills (synced from Git) are visually identified with a link icon, so teams can see at a glance which skills are source-controlled.

From the API Center portal, developers and other stakeholders can browse the full skill catalog, filter by lifecycle stage or type, and view detailed information about each skill, including its source URL, allowed tools, and contact information.

Figure 4: Skills catalog in the API Center portal, showing registered skills and their details.

Developer experience: The API Center portal gives teams a self-service way to discover approved skills without needing to ask around or search GitHub. The catalog becomes the authoritative source of what's available and what's allowed.

Why this matters for AI development teams

Skills close a critical gap in AI governance. As organizations deploy AI agents, they need to know, and control, what those agents can do. Without a governed skill registry, capability discovery is ad hoc, reuse is low, and security review is difficult.

By bringing skills into Azure API Center alongside APIs, MCP servers, models, and agents, teams get:

A single inventory for all the assets AI agents depend on
Explicit governance over which resources each skill can access via Allowed tools
Automated, source-controlled skill registration via Git integration
Discoverability for developers and AI systems through the API Center portal
Consistent lifecycle management, from Design through Production to Deprecated

API Center, as part of the Azure API Management platform and the broader AI Gateway vision, is evolving into the system of record for AI-ready development. Skills are the latest step in that direction.

Available now

Skills are available today in Azure API Center (preview). To register your first skill:

Sign in to the Azure portal and navigate to your API Center instance
In the sidebar, select Inventory > Assets
Select + Register an asset > Skill
Fill in the registration form and select Create

→ Register and discover skills in Azure API Center (docs)
→ Set up your API Center portal
→ Explore the Azure API Management platform
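If you want to pre-provision the Key Vault side of a private-repository integration yourself instead of relying on the automatic option, here is a minimal sketch with the Az PowerShell module; the vault name, secret name, and identity object ID are illustrative assumptions, and it presumes the vault uses Azure RBAC for data-plane authorization.

# Minimal sketch, assuming the Az PowerShell module and an RBAC-enabled Key Vault.
# Vault, secret, and principal values are placeholders, not prescribed names.
$vaultName  = "kv-apicenter-demo"
$secretName = "github-pat"
$patValue   = Read-Host -AsSecureString -Prompt "Repository PAT"

# Store the repository PAT as a Key Vault secret.
Set-AzKeyVaultSecret -VaultName $vaultName -Name $secretName -SecretValue $patValue

# Let API Center's managed identity read secrets from the vault.
$apiCenterIdentityObjectId = "<object-id-of-the-API-Center-managed-identity>"
$vault = Get-AzKeyVault -VaultName $vaultName
New-AzRoleAssignment -ObjectId $apiCenterIdentityObjectId `
    -RoleDefinitionName "Key Vault Secrets User" `
    -Scope $vault.ResourceId

Point the integration at the secret you created and let API Center's managed identity resolve it at sync time; the exact secret reference the integration expects is part of the setup experience in the portal.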
Granting Azure Resources Access to SharePoint Online Sites Using Managed Identity

When integrating Azure resources like Logic Apps, Function Apps, or Azure VMs with SharePoint Online, you often need secure and granular access control. Rather than handling credentials manually, Managed Identity is the recommended approach to securely authenticate to Microsoft Graph and access SharePoint resources.

High-level steps:

Step 1: Enable Managed Identity (or App Registration)
Step 2: Grant Sites.Selected Permission in Microsoft Entra ID
Step 3: Assign SharePoint Site-Level Permission

Step 1: Enable Managed Identity (or App Registration)

For your Azure resource (e.g., Logic App):

Navigate to the Azure portal.
Go to the resource (e.g., Logic App).
Under Identity, enable System-assigned Managed Identity.
Note the Object ID and Client ID (you'll need the Client ID later).

Alternatively, use an App Registration if you prefer a multi-tenant or reusable identity: How to register an app in Microsoft Entra ID - Microsoft identity platform | Microsoft Learn

Step 2: Grant Sites.Selected Permission in Microsoft Entra ID

Open Microsoft Entra ID > App registrations.
Select your app registration.
Under API permissions, click Add a permission > Microsoft Graph.
Select Application permissions and add Sites.Selected.
Click Grant admin consent.

These portal steps apply to an App Registration; a system-assigned managed identity does not appear under App registrations, so its Sites.Selected app role has to be granted with a script instead (see the sketch at the end of this article).

Note: Sites.Selected ensures least-privilege access: you must explicitly allow site-level access later.

Step 3: Assign SharePoint Site-Level Permission

SharePoint Online requires site-level consent for apps with Sites.Selected. Use the script below to assign access.

Note: You must be a SharePoint Administrator and have the Sites.FullControl.All permission when running this.

PowerShell Script:

# Replace with your values
$application = @{
    id          = "{ApplicationID}"   # Client ID of the Managed Identity
    displayName = "{DisplayName}"     # Display name (optional but recommended)
}
$appRole   = "write"                    # Can be "read" or "write"
$spoTenant = "contoso.sharepoint.com"   # SharePoint host name
$spoSite   = "{SiteName}"               # SharePoint site name

# Site ID format for the Graph API
$spoSiteId = $spoTenant + ":/sites/" + $spoSite + ":"

# Load the Microsoft Graph module
Import-Module Microsoft.Graph.Sites

# Connect with appropriate permissions
Connect-MgGraph -Scopes "Sites.FullControl.All"

# Grant site-level permission
New-MgSitePermission -SiteId $spoSiteId -Roles $appRole -GrantedToIdentities @{ Application = $application }

That's it. Your Logic App or Azure resource can now call Microsoft Graph APIs to interact with that specific SharePoint site (e.g., list files, upload documents). You maintain centralized control and least-privilege access, complying with enterprise security standards.

By following this approach, you ensure secure, auditable, and scalable access from Azure services to SharePoint Online: no secrets, no user credentials, just managed identity done right.
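For the managed-identity case called out in Step 2, the following is a minimal sketch of granting the Sites.Selected application role to a Logic App's system-assigned identity with the Microsoft Graph PowerShell SDK; the Logic App display name is a placeholder, and the signed-in admin needs rights such as AppRoleAssignment.ReadWrite.All to create the assignment.

# Minimal sketch, assuming the Microsoft.Graph PowerShell SDK and an admin
# account allowed to create app role assignments. The display name is a placeholder.
Connect-MgGraph -Scopes "AppRoleAssignment.ReadWrite.All","Application.Read.All"

# Service principal of the Microsoft Graph API (well-known app ID).
$graphSp = Get-MgServicePrincipal -Filter "appId eq '00000003-0000-0000-c000-000000000000'"

# The Sites.Selected application role exposed by Microsoft Graph.
$sitesSelected = $graphSp.AppRoles |
    Where-Object { $_.Value -eq "Sites.Selected" -and $_.AllowedMemberTypes -contains "Application" }

# Service principal of the Logic App's system-assigned managed identity.
$miSp = Get-MgServicePrincipal -Filter "displayName eq 'my-logic-app'"

# Grant Sites.Selected to the managed identity.
New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $miSp.Id `
    -PrincipalId $miSp.Id -ResourceId $graphSp.Id -AppRoleId $sitesSelected.Id

Once that assignment exists, Step 3 is unchanged: pass the managed identity's Client ID as the id in the $application hashtable, exactly as in the script above.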