Migrate Data Ingestion from Data Collector to Log Ingestion
The HTTP Data Collector API in Log Analytics workspaces is being deprecated and will be fully out of support in September 2026. Data Collector actions in Logic Apps that use existing API connections (which rely on the workspace ID and key) will still work against old custom log tables. Newly created tables, however, will not be able to ingest data: the connector action still reports success in the Logic App, but no data is populated in the new custom logs. If a new API connection is created for the Data Collector action (using the workspace ID and key), it will fail with 403 Forbidden. Users should move to the Log Ingestion API to send data to custom tables, and this document shows how to use the Log Ingestion API in Logic Apps.

Note: The Azure portal has been updated so it no longer shows the workspace keys on the Log Analytics workspace page. The Az CLI can still retrieve the keys, but as stated above, Data Collector actions that use them will fail with 403.

Creating the DCE and DCR

To use the Log Ingestion API, a Data Collection Endpoint (DCE) and a Data Collection Rule (DCR) must be created first. DCE creation is simple: from the Azure portal, search for DCE and create a new one. A DCR can be created either from the DCR page in the Azure portal or while creating the custom log table in the Log Analytics workspace.

When creating the DCR, you need to upload a sample data file so the custom log table has a schema; the sample must be a JSON array. If the sample log doesn't have a TimeGenerated field, you can easily add one using the mapping function: add the transformation code in the Transformation box, then click Run. Once you complete the DCR creation, you need to assemble the full DCE ingestion endpoint.

Getting the full DCE Log Ingestion URL

To get the full endpoint URL, follow these steps:

1. Get the DCE Log Ingestion URL from the DCE overview page.
2. On the DCR page, open the JSON view of the DCR resource and copy the DCR's immutable ID.
3. From the JSON view, get the stream name from the streamDeclarations field.

The full Log Ingestion URL is:

DCE_URL/dataCollectionRules/{immutable_id}/streams/{streamName}?api-version=2023-01-01

It would be similar to:

https://mshbou****.westeurope-1.ingest.monitor.azure.com/dataCollectionRules/dcr-7*****4e988bef2995cd52ae/streams/Custom-mshboulLogAPI_CL?api-version=2023-01-01

Granting the Logic App managed identity the needed IAM role

To call the ingestion endpoint using the Logic App's managed identity (MI), grant that identity the "Monitoring Metrics Publisher" role over the DCR resource. To do this, open the DCR, choose Access Control (IAM) from the blade, and then grant the Logic App MI the "Monitoring Metrics Publisher" role.

Calling the Log Ingestion endpoint from Logic Apps

To call the ingestion endpoint from a Logic App, use the HTTP action with the full DCE endpoint URL built above as the URI. Add the Content-Type header, the JSON body containing the log data you want to send, and configure the action's authentication to use the Logic App's managed identity. Once executed, the call should succeed with status code 204.

For more details on the Log Ingestion API and the migration, please see our documentation:
Migrate from the HTTP Data Collector API to the Log Ingestion API - Azure Monitor | Microsoft Learn
Logs Ingestion API in Azure Monitor - Azure Monitor | Microsoft Learn

Thanks.

Announcing General Availability: Azure Logic Apps Standard Custom Code with .NET 8
We’re excited to announce the General Availability (GA) of Custom Code support in Azure Logic Apps Standard with .NET 8. This release marks a significant step forward in enabling developers to build more powerful, flexible, and maintainable integration workflows using familiar .NET tools and practices.

With this capability, developers can now embed custom .NET 8 code directly within their Logic Apps Standard workflows. This unlocks advanced logic scenarios, promotes code reuse, and allows seamless integration with existing .NET libraries and services—making it easier than ever to build enterprise-grade solutions on Azure.

What’s New in GA

This GA release introduces several key enhancements that improve the development experience and expand the capabilities of custom code in Logic Apps:

Bring Your Own Packages

Developers can now include and manage their own NuGet packages within custom code projects without having to resolve conflicts with the dependencies used by the language worker host. The update loads the custom code project's assembly dependencies into a separate assembly load context, allowing you to bring any .NET 8-compatible versions of the dependent assemblies your project needs. There are only three exceptions to this rule:

- Microsoft.Extensions.Logging.Abstractions
- Microsoft.Extensions.DependencyInjection.Abstractions
- Microsoft.Azure.Functions.Extensions.Workflows.Abstractions

Dependency Injection Native Support

Custom code now supports native Dependency Injection (DI), enabling better separation of concerns and more testable, maintainable code. This aligns with modern .NET development patterns and simplifies service management within your custom logic.
To enable Dependency Injection, developers will need to provide a StartupConfiguration class, defining the list of dependencies:

```csharp
using Microsoft.Azure.Functions.Extensions.Workflows;
using Microsoft.Extensions.DependencyInjection;

public class StartupConfiguration : IConfigureStartup
{
    /// <summary>
    /// Configures services for the Azure Functions application.
    /// </summary>
    /// <param name="services">The service collection to configure.</param>
    public void Configure(IServiceCollection services)
    {
        // Register the routing service with dependency injection
        services.AddSingleton<IRoutingService, OrderRoutingService>();
        services.AddSingleton<IDiscountService, DiscountService>();
    }
}
```

You will also need to initialize those registered services in your custom code class constructor:

```csharp
public class MySampleFunction
{
    private readonly ILogger<MySampleFunction> logger;
    private readonly IRoutingService routingService;
    private readonly IDiscountService discountService;

    public MySampleFunction(ILoggerFactory loggerFactory, IRoutingService routingService, IDiscountService discountService)
    {
        this.logger = loggerFactory.CreateLogger<MySampleFunction>();
        this.routingService = routingService;
        this.discountService = discountService;
    }

    // your function logic here
}
```

Improved Authoring Experience

The development experience has been significantly enhanced with improved tooling and templates. Whether you're using Visual Studio or Visual Studio Code, you’ll benefit from streamlined scaffolding, local debugging, and deployment workflows that make building and managing custom code faster and more intuitive. The following user experience improvements were added:

- Local functions metadata is kept between VS Code sessions, so you don't receive validation errors when editing workflows that depend on the local functions.
- Projects are also built when the designer starts, so you don't have to manually update references.
- New context menu gestures, allowing you to create new local functions or build your functions project directly from the explorer area.
- Unified debugging experience, making it easier for you to debug. There is now a single task for debugging custom code and Logic Apps, which makes starting a new debug session as easy as pressing F5.

Learn More

To get started with custom code in Azure Logic Apps Standard, visit the official Microsoft Learn documentation: Create and run custom code in Azure Logic Apps Standard

You can also find example code for Dependency Injection at wsilveiranz/CustomCode-Dependency-Injection.

Advancing to Agentic AI with Azure NetApp Files VS Code Extension v1.2.0
The Azure NetApp Files VS Code Extension v1.2.0 introduces a major leap toward agentic, AI‑informed cloud operations with the debut of the autonomous Volume Scanner. Moving beyond traditional assistive AI, this release enables intelligent infrastructure analysis that can detect configuration risks, recommend remediations, and execute approved changes under user governance. Complemented by an expanded natural language interface, developers can now manage, optimize, and troubleshoot Azure NetApp Files resources through conversational commands - from performance monitoring to cross‑region replication, backup orchestration, and ARM template generation. Version 1.2.0 establishes the foundation for a multi‑agent system built to reduce operational toil and accelerate a shift toward self-managing enterprise storage in the cloud.

CrowdStrike API Data Connector (via Codeless Connector Framework) (Preview)
API scopes created and added to the connector; however, the only streams observed are from Alerts and Hosts. Detections is not logging. Anyone experiencing this issue? GitHub has a post about it that appears to be escalated as a feature request. CrowdStrikeDetections is not being ingested. Anyone have this setup and working?

Issue connecting Azure Sentinel GitHub app to Sentinel Instance when IP allow list is enabled
Hi everyone, I’m running into an issue connecting the Azure Sentinel GitHub app to my Sentinel workspace in order to create our CI/CD pipelines for our detection rules, and I’m hoping someone can point me in the right direction.

Symptoms:
- When configuring the GitHub connection in Sentinel, the repository dropdown does not populate.
- There are no explicit errors, but the connection clearly isn’t completing.
- If I disable my organization’s IP allow list, everything works as expected and the repos appear immediately.

I’ve seen that some GitHub Apps automatically add the IP ranges they require to an organization’s allow list. However, from what I can tell, the Azure Sentinel GitHub app does not seem to have this capability, and requires manual allow listing instead.

What I’ve tried / researched:
- Reviewed Microsoft documentation for Sentinel ↔ GitHub integrations
- Looked through Azure IP range and Service Tag documentation
- I’ve seen recommendations to allow list the IP ranges published at api.github.com/meta, as many GitHub apps rely on these ranges
- I’ve already tried allow listing multiple ranges from the GitHub meta endpoint, but the issue persists

My questions:
- Does anyone know which IP ranges are used by the Azure Sentinel GitHub app specifically?
- Is there an official or recommended approach for using this integration in environments with strict IP allow lists?
- Has anyone successfully configured this integration without fully disabling IP restrictions?

Any insight, references, or firsthand experience would be greatly appreciated. Thanks in advance!

Logic Apps Aviators Newsletter - April 2026
In this issue:
- Ace Aviator of the Month
- News from our product group
- News from our community

Ace Aviator of the Month

April 2026's Ace Aviator: Marcelo Gomes

What’s your role and title? What are your responsibilities?

I’m an Integration Team Leader (Azure Integrations) at COFCO International, working within the Enterprise Integration Platform. My core responsibility is to design, architect, and operate integration solutions that connect multiple enterprise systems in a scalable, secure, and resilient way. I sit at the intersection of business, architecture, and engineering, ensuring that business requirements are correctly translated into technical workflows and integration patterns.

From a practical standpoint, my responsibilities include:
- Defining integration architecture standards and patterns across the organization
- Designing end‑to‑end integration solutions using Azure Integration Services
- Owning and evolving the API landscape (via Azure API Management)
- Leading, mentoring, and supporting the integration team
- Driving PoCs, experiments, and technical explorations to validate new approaches
- Acting as a bridge between systems, teams, and business domains, ensuring alignment and clarity

In short, my role is to make sure integrations are not just working — but are well‑designed, maintainable, and aligned with business goals.

Can you give us some insights into your day‑to‑day activities and what a typical day looks like?

My day‑to‑day work is a balance between technical leadership, architecture, and execution.
A typical day usually involves:
- Working closely with Business Analysts and Product Owners to understand integration requirements, constraints, and expected outcomes
- Translating those requirements into integration flows, APIs, and orchestration logic
- Defining or validating the architecture of integrations, including patterns, error handling, resiliency, and observability
- Guiding developers during implementation, reviewing approaches, and helping them make architectural or design decisions
- Managing and governing APIs through Azure API Management, ensuring consistency, security, and reusability
- Unblocking team members by resolving technical issues, dependencies, or architectural doubts
- Performing estimations, supporting planning, and aligning delivery expectations

I’m also hands‑on. I actively build integrations myself — not just to help deliver, but to stay close to the platform, understand real challenges, and continuously improve our standards and practices. I strongly believe technical leadership requires staying connected to the actual implementation.

What motivates and inspires you to be an active member of the Aviators / Microsoft community?

What motivates me is knowledge sharing. A big part of what I know today comes from content shared by others — blog posts, samples, talks, community discussions, and real‑world experiences. Most of my learning followed a simple loop: someone shared → I tried it → I broke it → I fixed it → I learned.

For me, learning only really completes its cycle when we share back. Explaining what worked (and what didn’t) helps others avoid the same mistakes and accelerates collective growth. Communities like Aviators and the Microsoft ecosystem create a space where learning is practical, honest, and experience‑driven — and that’s exactly the type of environment I want to contribute to.

Looking back, what advice would you give to people getting into STEM or technology?

My main advice is: start by doing.
Don’t wait until you feel ready or confident — you won’t. When you start doing, you will fail. And that failure is not a problem; it’s part of the learning process. Each failure builds experience, confidence, and technical maturity.

Another important point: ask questions. There is no such thing as a stupid question. Asking questions opens perspectives, challenges assumptions, and often triggers better solutions. Sometimes, a simple question from a fresh point of view can completely change how a problem is solved. Progress in technology comes from curiosity, iteration, and collaboration — not perfection.

What has helped you grow professionally?

Curiosity has been the biggest driver of my professional growth. I like to understand how things work under the hood, not just how to use them. When I’m curious about something, I try it myself, test different approaches, and build my own experience around it.

That hands‑on curiosity helps me:
- Develop stronger technical intuition
- Understand trade‑offs instead of just following patterns blindly
- Make better architectural decisions
- Communicate more clearly with both technical and non‑technical stakeholders

Having personal experience with successes and failures gives me clarity about what I’m really looking for in a solution — and that has been key to my growth.

If you had a magic wand to create a new feature in Logic Apps, what would it be and why?

I’d add real‑time debugging with execution control. Specifically, the ability to:
- Pause a running Logic App execution
- Inspect intermediate states, variables, and payloads in real time
- Step through actions one by one, similar to a traditional debugger

This would dramatically improve troubleshooting, learning, and optimization, especially in complex orchestrations. Today, we rely heavily on post‑execution inspection, which works — but real‑time visibility would be a huge leap forward in productivity and understanding.
For integration engineers, that kind of feature would be a true game‑changer.

News from our product group

How to revoke connection OAuth programmatically in Logic Apps
The post shows how to revoke an API connection’s OAuth tokens programmatically in Logic Apps, without using the portal. It covers two approaches: invoking the Revoke Connection Keys REST API directly from a Logic App using the 'Invoke an HTTP request' action, and using an Azure AD app registration to acquire a bearer token that authorizes the revoke call from Logic Apps or tools like Postman. Step-by-step guidance includes building the request URL, obtaining tokens with client credentials, parsing the token response, and setting the Authorization header. It also documents required permissions and a least-privilege custom RBAC role.

Introducing Skills in Azure API Center
This article introduces Skills in Azure API Center—registered, reusable capabilities that AI agents can discover and use alongside APIs, models, agents, and MCP servers. A skill describes what it does, its source repository, ownership, and which tools it is allowed to access, providing explicit governance. Teams can register skills manually in the Azure portal or automatically sync them from a Git repository, supporting GitOps workflows at scale. The portal offers discovery, filtering, and lifecycle visibility. Benefits include a single inventory for AI assets, better reuse, and controlled access via Allowed tools. Skills are available in preview with documentation links.

Reliable blob processing using Azure Logic Apps: Recommended architecture
The post explains limitations of the in‑app Azure Blob trigger in Logic Apps, which relies on polling and best‑effort storage logs that can miss events under load. For mission‑critical scenarios, it recommends a queue‑based pattern: have the source system emit a message to Azure Storage Queues after each blob upload, then trigger the Logic App from the queue and fetch the blob by metadata.
Benefits include guaranteed triggering, decoupling, retries, and observability. As an alternative, it outlines using Event Grid with single‑tenant Logic App endpoints, plus caveats for private endpoints and subscription validation requirements.

Implementing / Migrating the BizTalk Server Aggregator Pattern to Azure Logic Apps Standard
This article shows how to implement or migrate the classic BizTalk Server Aggregator pattern to Azure Logic Apps Standard using a production-ready template available in the Azure portal. It maps BizTalk orchestration concepts (correlation sets, pipelines, MessageBox) to cloud-native equivalents: a stateful workflow, Azure Service Bus as the messaging backbone, CorrelationId-based grouping, and FlatFileDecoding for reusing existing BizTalk XSD schemas with zero refactoring. Step-by-step guidance covers triggering with the Service Bus connector, grouping messages by CorrelationId, decoding flat files, composing aggregated results, and delivering them via HTTP. A side‑by‑side comparison highlights architectural differences and migration considerations, aligned with BizTalk Server end‑of‑life timelines.

News from our community

Resilience for Azure IPaaS services
Post by Stéphane Eyskens
Stéphane Eyskens examines resilience patterns for Azure iPaaS workloads and how to design multi‑region architectures spanning stateless and stateful services. The article maps strategies across Service Bus, Event Hubs, Event Grid, Durable Functions, Logic Apps, and API Management, highlighting failover models, idempotency, partitioning, and retry considerations. It discusses trade‑offs between active‑active and active‑passive, the role of a governed API front door, and the importance of consistent telemetry for recovery and diagnostics. The piece offers pragmatic guidance for integration teams building high‑availability, fault‑tolerant solutions on Azure.
From APIs to Agents: Rethinking Integration in the Agentic Era
Post by Al Ghoniem, MBA
This article frames AI agents as a new layer in enterprise integration rather than a replacement for existing platforms. It contrasts deterministic orchestration with agent‑mediated behavior, then proposes an Azure‑aligned architecture: Azure AI Agent Service as runtime, API Management as the governed tool gateway, Service Bus/Event Grid for events, Logic Apps for deterministic workflows, API Center as registry, and Entra for identity and control. It also outlines patterns—tool‑mediated access, hybrid orchestration, event+agent systems, and policy‑enforced interaction—plus anti‑patterns to avoid.

DevUP Talks 01 - 2026 Q1 trends with Kent Weare
Video by Mattias Lögdberg
Mattias Lögdberg hosts Kent Weare for a concise discussion on early‑2026 trends affecting integration and cloud development. The conversation explores how AI is reshaping solution design, where new opportunities are emerging, and how teams can adapt practices for reliability, scalability, and speed. It emphasizes practical implications for developers and architects working with Azure services and modern integration stacks. The episode serves as a quick way to track directional changes and focus on skills that matter as agentic automation and platform capabilities evolve.

Azure Logic Apps as MCP Servers: A Step-by-Step Guide
Post by Stephen W Thomas
Stephen W Thomas shows how to expose Azure Logic Apps (Standard) as MCP servers so AI agents can safely reuse existing enterprise workflows. The guide explains why this matters—reusing logic, tapping 1,400+ connectors, and applying key-based auth—and walks through creating an HTTP workflow, defining JSON schemas, connecting to SQL Server, and generating API keys from the MCP Servers blade. It closes with testing in VS Code, demonstrating how agents invoke Logic Apps tools to query live data with governance intact, without rewriting integration code.
BizTalk to Azure Migration Roadmap: Integration Journey
Post by Sandro Pereira
This roadmap-style article distills lessons from BizTalk-to-Azure migrations into a structured journey. It outlines motivations for moving, capability mapping from BizTalk to Azure Integration Services, and phased strategies that reduce risk while modernizing. Readers get guidance on assessing dependencies, choosing target Azure services, designing hybrid or cloud‑native architectures, and sequencing workloads. The post emphasises that migration is not a lift‑and‑shift but a program of work aligned to business priorities, platform governance, and operational readiness.

BizTalk Adapters to Azure Logic Apps Connectors
Post by Michael Stephenson
Michael Stephenson discusses how organizations migrating from BizTalk must rethink integration patterns when moving to Azure Logic Apps connectors. The post considers what maps well, where gaps and edge cases appear, and how real-world implementations often require re‑architecting around AIS capabilities rather than a one‑to‑one adapter swap. It highlights community perspectives and practical considerations for planning, governance, and operationalizing new designs beyond pure connector parity.

Pro-Code Enterprise AI-Agents using MCP for Low-Code Integration
Video by Sebastian Meyer
This short video demonstrates bridging pro‑code and low‑code by using the Model Context Protocol (MCP) to let autonomous AI agents interact with enterprise systems via Logic Apps. It walks through the high‑level setup—agent, MCP server, and Logic Apps workflows—and shows how to connect to platforms like ServiceNow and SAP. The focus is on practical tool choice and architecture so teams can extend existing integration assets to agent‑driven use cases without rebuilding from scratch.
Friday Fact: The Hidden Retry Behavior That Makes Logic Apps Feel Stuck
Post by João Ferreira
This Friday Fact explains why a Logic App can appear “stuck” when calling unstable APIs: hidden retry policies, exponential backoff, and looped actions can accumulate retries and slow runs dramatically. It lists default behaviors many miss, common causes like throttling, and mitigation steps such as setting explicit retry policies, using Configure run after for failure paths, and introducing circuit breakers for flaky backends. The takeaway: the workflow may not be broken—just retrying too aggressively—so design explicit limits and recovery paths.

Your Logic App Is NOT Your Business Process (Here’s Why)
Video by Al Ghoniem, MBA
This short explainer argues that mapping Logic Apps directly to a business process produces brittle workflows. Real systems require retries, enrichment, and exception paths, so the design quickly diverges from a clean process diagram. The video proposes separating technical orchestration from business visibility using Business Process Tracking. That split yields clearer stakeholder views and more maintainable solutions, while keeping deterministic execution inside Logic Apps. It’s a practical reminder to design for operational reality rather than mirroring a whiteboard flow.

BizTalk Server Migration to Azure Integration Services Architecture Guidance
Post by Sandro Pereira
A brief overview of Microsoft’s architecture guidance for migrating BizTalk Server to Azure Integration Services. The post explains the intent of the guidance, links to sections on reasons to migrate, AIS capabilities, BizTalk vs. AIS comparisons, and service selection. It highlights planning topics such as migration approaches, best practices, and a roadmap, helping teams frame decisions for hybrid or cloud‑native architectures as they modernize BizTalk workloads.
Logic Apps & Power Automate Action Name to Code Translator
Post by Sandro Pereira
This post introduces a lightweight utility that converts Logic Apps and Power Automate action names into their code identifiers—useful when writing expressions or searching in Code View. It explains the difference between designer-friendly labels and underlying names (spaces become underscores and certain symbols are disallowed), why this causes friction, and how the tool streamlines the translation. It includes screenshots, usage notes, and the download link to the open-source project, making it a practical time-saver for developers moving between designer and code-based workflows.

Logic Apps Consumption CI/CD from Zero to Hero Whitepaper
Post by Sandro Pereira
This whitepaper provides an end‑to‑end path to automate CI/CD for Logic Apps Consumption using Azure DevOps. It covers solution structure, parameterization, and environment promotion, then shows how to build reliable pipelines for packaging, deploying, and validating Logic Apps. The guidance targets teams standardizing delivery with repeatable patterns and governance. With templates and practical advice, it helps reduce manual steps, improve quality, and accelerate releases for Logic Apps workloads.

Logic App Best practices, Tips and Tricks: #2 Actions Naming Convention
Post by Sandro Pereira
This best‑practices post focuses on action naming in Logic Apps. It explains why consistent, descriptive names improve readability, collaboration, and long‑term maintainability, then outlines rules and constraints on allowed characters. It shows common pitfalls—default names, uneditable trigger/branch labels—and practical tips for renaming while avoiding broken references. The guidance helps teams treat names as living documentation so workflows remain understandable without drilling into each action’s configuration.
How to Expose and Protect Logic App Using Azure API Management (Whitepaper)
Post by Sandro Pereira
This whitepaper explains how to front Logic Apps with Azure API Management for governance and security. It covers publishing Logic Apps as APIs, restricting access, enforcing IP filtering, preventing direct calls to Logic Apps, and documenting operations. It also discusses combining multiple Logic Apps under a single API, naming conventions, and how to remove exposed operations safely. The paper provides step‑by‑step guidance and a download link to help teams standardize exposure and protection patterns.

Logic apps – Check the empty result in SQL connector
Post by Anitha Eswaran
This post shows a practical pattern for handling empty SQL results in Logic Apps. Using the SQL connector’s output, it adds a Parse JSON step to normalize the result and then evaluates length() to short‑circuit execution when no rows are returned. Screenshots illustrate building the schema, wiring the content, and introducing a conditional branch that terminates the run when the array is empty. The approach avoids unnecessary downstream actions and reduces failures, providing a reusable, lightweight guard for query‑driven workflows.

Azure Logic Apps Is Basically Power Automate on Steroids (And You Shouldn’t Be Afraid of It)
Post by Kim Brian
Kim Brian explains why Logic Apps feels familiar to Power Automate builders while removing ceilings that appear at scale. The article contrasts common limits in cloud flows with Standard/Consumption capabilities, highlights the designer vs. code‑view model, and calls out built‑in Azure management features such as versioning, monitoring, and CI/CD. It positions Logic Apps as the “bigger sibling” for enterprise‑grade integrations and data throughput, offering more control without abandoning the visual authoring experience.
Logic Apps CSV Alphabetic Sorting Explained
Post by Sandro Pereira
Sandro Pereira describes why CSV headers and columns can appear in alphabetical order after deploying Logic Apps via ARM templates. He explains how JSON serialization and array ordering influence CSV generation, what triggers the sorting behavior, and practical workarounds to preserve intended column order. The article helps teams avoid subtle defects in data exports by aligning workflow design and deployment practices with how Logic Apps materializes CSV content at runtime.

Azure Logic Apps Translation vs Transformation – Actions, Examples, and Schema Mapping Explained
Post by Maheshkumar Tiwari
Maheshkumar Tiwari clarifies the difference between translation (format change) and transformation (business logic) in Logic Apps, then maps each to concrete Azure capabilities. Using a purchase‑order scenario, he shows how to decode/encode flat files and EDI, convert XML↔JSON, and apply Liquid/XSLT, Select, Compose, and Filter Array for schema mapping and enrichment. A quick reference table ties common tasks to the right action, helping architects separate concerns so format changes don’t break business rules and workflow design remains maintainable.

How to revoke connection OAuth programmatically in Logic Apps
There are multiple ways to revoke the OAuth tokens of an API Connection other than clicking the Revoke button in the portal.

Using the "Invoke an HTTP request" action:

Get the connection name. Create a Logic App (Consumption), use a trigger of your liking, then add the “Invoke an HTTP request” action. Create a connection on the same tenant that has the API connection, then add the below URL to the action and test it:

https://management.azure.com/subscriptions/[SUBSCRIPTION_ID]/resourceGroups/[RESOURCE_GROUP]/providers/Microsoft.Web/connections/[NAME_OF_CONNECTION]/revokeConnectionKeys?api-version=2018-07-01-preview

Your test should be successful.

Using an App Registration to fetch the token (which is also how you can do this with Postman or similar tools):

The App Registration should include the required permission. For your Logic App, or Postman, get the Bearer token by calling this URL:

https://login.microsoftonline.com/[TENANT_ID]/oauth2/v2.0/token

With this in the body:

Client_Id=[CLIENT_ID_OF_THE_APP_REG]&Client_Secret=[CLIENT_SECRET_FROM_APP_REG]&grant_type=client_credentials&scope=https://management.azure.com/.default

For the header, use:

Content-Type = application/x-www-form-urlencoded

If you’ll use a Logic App for this, add a Parse JSON action, use the body of the Get Bearer Token HTTP action as its input, then use the below schema:

```json
{
  "properties": {
    "access_token": { "type": "string" },
    "expires_in": { "type": "integer" },
    "ext_expires_in": { "type": "integer" },
    "token_type": { "type": "string" }
  },
  "type": "object"
}
```

Finally, add another HTTP action (or call this in Postman or similar) to call the Revoke API. In the header, add an “Authorization” key with the value “Bearer” followed by a space and then the bearer token from the output of the Parse JSON action.
https://management.azure.com/subscriptions/[SUBSCRIPTION_ID]/resourceGroups/[RESOURCE_GROUP]/providers/Microsoft.Web/connections/[NAME_OF_CONNECTION]/revokeConnectionKeys?api-version=2018-07-01-preview

If you want to use curl:

Request the token using the OAuth 2.0 client credentials flow:

curl -X POST \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "client_id=[CLIENT_ID_OF_APP_REG]" \
  -d "scope=https://management.azure.com/.default" \
  -d "client_secret=[CLIENT_SECRET_FROM_APP_REG]" \
  -d "grant_type=client_credentials" \
  https://login.microsoftonline.com/[TENANT_ID]/oauth2/v2.0/token

The access_token in the response is your bearer token. Then call the revoke API:

curl -X POST "https://management.azure.com/subscriptions/[SUBSCRIPTION_ID]/resourceGroups/[RESOURCE_GROUP]/providers/Microsoft.Web/connections/[NAME_OF_CONNECTION]/revokeConnectionKeys?api-version=2018-07-01-preview" \
  -H "Authorization: Bearer <ACCESS_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"key":"value"}'

If you face the below error, you will need to grant the Contributor role to the App Registration on the resource group that contains the API Connection (if you want least privilege instead, skip to below):

{
  "error": {
    "code": "AuthorizationFailed",
    "message": "The client '[App_reg_client_id]' with object id '[App_reg_object_id]' does not have authorization to perform action 'Microsoft.Web/connections/revokeConnectionKeys/action' over scope '/subscriptions/[subscription_id]/resourceGroups/[resource_group_id]/providers/Microsoft.Web/connections/[connection_name]' or the scope is invalid. If access was recently granted, please refresh your credentials."
  }
}

For a least-privilege solution, create a custom RBAC role with the below actions and assign it to the App Registration, on the same scope as above:

{
  "actions": [
    "Microsoft.Web/connections/read",
    "Microsoft.Web/connections/write",
    "Microsoft.Web/connections/delete",
    "Microsoft.Web/connections/revokeConnectionKeys/action"
  ]
}

Introducing Skills in Azure API Center
The problem

Modern applications depend on more than APIs. A single AI workflow might call an LLM, invoke an MCP tool, integrate a third-party service, and reference a business capability spanning dozens of endpoints. Without a central inventory, these assets become impossible to discover, easy to duplicate, and painful to govern.

Azure API Center — part of the Azure API Management platform — already catalogs models, agents, and MCP servers alongside traditional APIs. Skills extend that foundation to cover reusable AI capabilities.

What is a Skill?

As AI agents become more capable, organizations need a way to define and govern what those agents can actually do. Skills are the answer.

A Skill in Azure API Center is a reusable, registered capability that AI agents can discover and consume to extend their functionality. Each skill is backed by source code — typically hosted in a Git repository — and describes what it does, what APIs or MCP servers it can access, and who owns it. Think of skills as the building blocks of AI agent behavior, promoted into a governed inventory alongside your APIs, MCP servers, models, and agents.

Example: A "Code Review Skill" performs automated code reviews using static analysis. It is registered in API Center with a Source URL pointing to its GitHub repo, allowed to access your code analysis API, and discoverable by any AI agent in your organization.

How Skills work in API Center

Skills can be added to your inventory in two ways: registered manually through the Azure portal, or synchronized automatically from a connected Git repository. Both approaches end up in the same governed catalog, discoverable through the API Center portal.

Option 1: Register a Skill manually

Use the Azure portal to register a skill directly. Navigate to Inventory > Assets in your API center, select + Register an asset > Skill, and fill in the registration form.

Figure 2: Register a skill form in the Azure portal.
The form captures everything needed to make a skill discoverable and governable:

- Title: Display name for the skill (e.g. Code Review Skill).
- Identification: Auto-generated URL slug based on the title. Editable.
- Summary: One-line description of what the skill does.
- Description: Full detail on capabilities, use cases, and expected behavior.
- Lifecycle stage: Current state: Design, Preview, Production, or Deprecated.
- Source URL: Git repository URL for the skill source code.
- Allowed tools: The APIs or MCP servers from your inventory this skill is permitted to access. Enforces governance at the capability level.
- License: Licensing terms: MIT, Apache 2.0, Proprietary, etc.
- Contact information: Owner or support contact for the skill.

Governance note: The Allowed tools field is key for AI governance. It explicitly defines which APIs and MCP servers a skill can invoke — preventing uncontrolled access and making security review straightforward.

Option 2: Sync Skills from a Git repository

For teams managing skills in source control, API Center can integrate directly with a Git repository and synchronize skill information automatically. This is the recommended approach for teams practicing GitOps or managing many skills at scale.

Figure 3: Integrating a Git repository to sync skills automatically into API Center.

When you configure a Git integration, API Center:

1. Creates an Environment representing the repository as a source of skills
2. Scans for files matching the configured pattern (default: **/skill.md)
3. Syncs matching skills into your inventory and keeps them current as the repo changes

For private repositories, a Personal Access Token (PAT) stored in Azure Key Vault is used for authentication. API Center's managed identity retrieves the PAT securely — no credentials are stored in the service itself.

Tip: Use the Automatically configure managed identity and assign permissions option when setting up the integration if you haven't pre-configured a managed identity.
API Center handles the Key Vault permissions automatically.

Discovering Skills in your catalog

Once registered — manually or via Git — skills appear in the Inventory > Assets page alongside all other asset types. Linked skills (synced from Git) are visually identified with a link icon, so teams can see at a glance which skills are source-controlled.

From the API Center portal, developers and other stakeholders can browse the full skill catalog, filter by lifecycle stage or type, and view detailed information about each skill — including its source URL, allowed tools, and contact information.

Figure 4: Skills catalog in the API Center portal, showing registered skills and their details.

Developer experience: The API Center portal gives teams a self-service way to discover approved skills without needing to ask around or search GitHub. The catalog becomes the authoritative source of what's available and what's allowed.

Why this matters for AI development teams

Skills close a critical gap in AI governance. As organizations deploy AI agents, they need to know — and control — what those agents can do. Without a governed skill registry, capability discovery is ad hoc, reuse is low, and security review is difficult.

By bringing skills into Azure API Center alongside APIs, MCP servers, models, and agents, teams get:

- A single inventory for all the assets AI agents depend on
- Explicit governance over which resources each skill can access via Allowed tools
- Automated, source-controlled skill registration via Git integration
- Discoverability for developers and AI systems through the API Center portal
- Consistent lifecycle management — Design through Production to Deprecated

API Center, as part of the Azure API Management platform and the broader AI Gateway vision, is evolving into the system of record for AI-ready development. Skills are the latest step in that direction.

Available now

Skills are available today in Azure API Center (preview).
To register your first skill:

1. Sign in to the Azure portal and navigate to your API Center instance
2. In the sidebar, select Inventory > Assets
3. Select + Register an asset > Skill
4. Fill in the registration form and select Create

→ Register and discover skills in Azure API Center (docs)
→ Set up your API Center portal
→ Explore the Azure API Management platform

Granting Azure Resources Access to SharePoint Online Sites Using Managed Identity
When integrating Azure resources like Logic Apps, Function Apps, or Azure VMs with SharePoint Online, you often need secure and granular access control. Rather than handling credentials manually, Managed Identity is the recommended approach to securely authenticate to Microsoft Graph and access SharePoint resources.

High-level steps:

Step 1: Enable Managed Identity (or App Registration)
Step 2: Grant Sites.Selected Permission in Microsoft Entra ID
Step 3: Assign SharePoint Site-Level Permission

Step 1: Enable Managed Identity (or App Registration)

For your Azure resource (e.g., Logic App):

1. Navigate to the Azure portal.
2. Go to the resource (e.g., Logic App).
3. Under Identity, enable System-assigned Managed Identity.
4. Note the Object ID and Client ID (you'll need the Client ID later).

Alternatively, use an App Registration if you prefer a multi-tenant or reusable identity. See: How to register an app in Microsoft Entra ID - Microsoft identity platform | Microsoft Learn

Step 2: Grant Sites.Selected Permission in Microsoft Entra ID

1. Open Microsoft Entra ID > App registrations.
2. Select your app registration.
3. Under API permissions, click Add a permission > Microsoft Graph.
4. Select Application permissions and add: Sites.Selected
5. Click Grant admin consent.

Note: a system-assigned managed identity does not appear under App registrations; for managed identities, assign the Sites.Selected app role programmatically (e.g., via Microsoft Graph or PowerShell) instead of through the portal's API permissions blade.

Note: Sites.Selected ensures least-privilege access — you must explicitly allow site-level access later.

Step 3: Assign SharePoint Site-Level Permission

SharePoint Online requires site-level consent for apps with Sites.Selected. Use the script below to assign access.

Note: You must be a SharePoint Administrator and have the Sites.FullControl.All permission when running this.
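The site-level grant in Step 3 is ultimately one Microsoft Graph call, POST /sites/{site-id}/permissions, which the New-MgSitePermission cmdlet wraps. A Python sketch of the request it sends (the endpoint and body shape follow the documented Graph API; the helper name and placeholder IDs are illustrative, and a Graph token with Sites.FullControl.All is assumed):

```python
# Sketch: the Graph request behind the site-level grant in Step 3.
# Helper name and placeholder IDs are illustrative, not a real tenant.
import json
import urllib.request

def build_site_permission_request(site_id, app_client_id, app_display_name,
                                  role, graph_token):
    """Build (but do not send) the Graph call granting an app a role on one site."""
    body = json.dumps({
        "roles": [role],  # "read" or "write"
        "grantedToIdentities": [
            {"application": {"id": app_client_id,
                             "displayName": app_display_name}}
        ],
    }).encode()
    return urllib.request.Request(
        f"https://graph.microsoft.com/v1.0/sites/{site_id}/permissions",
        data=body,
        headers={"Authorization": f"Bearer {graph_token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# Same host:/sites/name: site-id format the PowerShell script builds:
req = build_site_permission_request(
    "contoso.sharepoint.com:/sites/MySite:",
    "00000000-0000-0000-0000-000000000000",  # app's client ID (placeholder)
    "my-logic-app", "write", "<GRAPH_TOKEN>")
```

Either path, PowerShell or a direct REST call, results in the same site-scoped permission object.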
PowerShell script:

# Replace with your values
$application = @{
    id          = "{ApplicationID}"    # Client ID of the Managed Identity
    displayName = "{DisplayName}"      # Display name (optional but recommended)
}
$appRole   = "write"                   # Can be "read" or "write"
$spoTenant = "contoso.sharepoint.com"  # SharePoint site host
$spoSite   = "{Sitename}"              # SharePoint site name

# Site ID format for the Graph API
$spoSiteId = $spoTenant + ":/sites/" + $spoSite + ":"

# Load the Microsoft Graph module
Import-Module Microsoft.Graph.Sites

# Connect with appropriate permissions
Connect-MgGraph -Scopes Sites.FullControl.All

# Grant site-level permission
New-MgSitePermission -SiteId $spoSiteId -Roles $appRole -GrantedToIdentities @{ Application = $application }

That's it. Your Logic App or Azure resource can now call Microsoft Graph APIs to interact with that specific SharePoint site (e.g., list files, upload documents). You maintain centralized control and least-privilege access, complying with enterprise security standards.

By following this approach, you ensure secure, auditable, and scalable access from Azure services to SharePoint Online — no secrets, no user credentials, just managed identity done right.

The Sentinel migration mental model question: what's actually retiring vs what isn't?
Something I keep seeing come up in conversations with other Sentinel operators lately, and I think it's worth surfacing here as a proper discussion. There's a consistent gap in how the migration to the Defender portal is being understood, and I think it's causing some teams to either over-scope their effort or under-prepare.

The gap is this: the Microsoft comms have consistently told us *what* is happening (the Azure portal experience retires March 31, 2027), but the question that actually drives migration planning (what is architecturally changing versus what is just moving to a different screen) doesn't have a clean answer anywhere in the community right now.

The framing I've been working with, which I'd genuinely like other practitioners to poke holes in:

What's retiring: The Azure portal UI experience for Sentinel operations. Incident management, analytics rule configuration, hunting, automation management: all of that moves to the Defender portal.

What isn't changing: The Log Analytics workspace, all ingested data, your KQL rules, connectors, retention config, billing. None of that moves. The Defender XDR data lake is a separate Microsoft-managed layer, not a replacement for your workspace.

Where it gets genuinely complex: MSSP/multi-tenant setups, teams with meaningful SOAR investments, and anyone who's built tooling against the SecurityInsights API for incident management (which now needs to shift to Microsoft Graph for unified incidents).

The deadline extension from July 2026 to March 2027 tells its own story. Microsoft acknowledged that scale operators needed more time and capabilities. If you're in that camp, that extra runway is for proper planning, not deferral.

A few questions I'd genuinely love to hear about from people who've started the migration or are actively scoping it:

For those who've done the onboarding already: what was the thing that caught you most off guard that isn't well-documented?
For anyone running Sentinel across multiple tenants: how are you approaching the GDAP gap while Microsoft completes that capability? Are you using B2B authentication as the interim path, or Azure Lighthouse for cross-workspace querying?

I've been writing up a more detailed breakdown of this, covering the RBAC transition, automation review, and the MSSP-specific path, and the community discussion here is genuinely useful for making sure the practitioner perspective covers the right edge cases. Happy to share more context on anything above if useful.
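One concrete note on the API shift mentioned above: tooling built against the SecurityInsights incident API would move to Microsoft Graph's unified security/incidents endpoint. A minimal sketch of what such a call looks like (the endpoint is real; the helper and the status filter are illustrative, and a token with SecurityIncident.Read.All is assumed):

```python
# Sketch: listing unified incidents via Microsoft Graph, the endpoint that
# incident tooling shifts to in the unified portal. Helper and filter are
# illustrative; a suitably scoped Graph token is assumed.
import urllib.parse
import urllib.request

GRAPH_INCIDENTS = "https://graph.microsoft.com/v1.0/security/incidents"

def build_incidents_request(graph_token, status_filter=None):
    """Build (but do not send) a GET for unified incidents, optionally filtered by status."""
    url = GRAPH_INCIDENTS
    if status_filter:
        url += "?" + urllib.parse.urlencode(
            {"$filter": f"status eq '{status_filter}'"})
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {graph_token}"})

req = build_incidents_request("<GRAPH_TOKEN>", status_filter="active")
```

Reviewing which scripts and playbooks construct SecurityInsights incident URLs is a reasonable first pass when scoping the automation side of the migration.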