Azure Event Grid
How to Fix Azure Event Grid Entra Authentication issue for ACS and Dynamics 365 integrated Webhooks
Introduction:
Azure Event Grid is a powerful event routing service that enables event-driven architectures in Azure. When delivering events to webhook endpoints, security becomes paramount. Microsoft provides a secure webhook delivery mechanism using Microsoft Entra ID (formerly Azure Active Directory) authentication through the AzureEventGridSecureWebhookSubscriber role.

Problem Statement:
When integrating Azure Communication Services with Dynamics 365 Contact Center using Microsoft Entra ID-authenticated Event Grid webhooks, the Event Grid subscription deployment fails with the error "HTTP POST request failed with unknown error code", with an empty HTTP status and code.

Important Note: Before moving forward, please verify that you have the Owner role assigned on the app registration used to create the event subscription. Refer to the Microsoft guidelines below to validate the required prerequisites before proceeding:
Set up incoming calls, call recording, and SMS services | Microsoft Learn

Why This Happens:
This happens because the AzureEventGridSecureWebhookSubscriber role is NOT properly assigned to the Microsoft.EventGrid service principal and to the Entra ID user or application that is trying to create the Event Grid subscription.

What is the AzureEventGridSecureWebhookSubscriber Role:
The AzureEventGridSecureWebhookSubscriber is a Microsoft Entra application role that:
- Enables your application to verify the identity of event senders
- Allows specific users/applications to create event subscriptions
- Authorizes Event Grid to deliver events to your webhook

How It Works:
1. Role Creation: You create this app role in your destination webhook application's Microsoft Entra app registration.
2. Role Assignment: You assign this role to:
   - The Microsoft Event Grid service principal (so it can deliver events)
   - The Entra ID user or the event subscription creator application (so they can create Event Grid subscriptions)
3. Token Validation: When Event Grid delivers events, it includes a Microsoft Entra token with this role claim.
4. Authorization Check: Your webhook validates the token and checks for the role.

Key Participants:
- Webhook Application (Your App)
  - Purpose: Receives and processes events
  - App Registration: Created in Microsoft Entra ID
  - Contains: The AzureEventGridSecureWebhookSubscriber app role
  - Validates: Incoming tokens from Event Grid
- Microsoft Event Grid Service Principal
  - Purpose: Delivers events to webhooks
  - App ID: Different per Azure cloud (Public, Government, etc.); Public Azure: 4962773b-9cdb-44cf-a8bf-237846a00ab7
  - Needs: The AzureEventGridSecureWebhookSubscriber role assigned
- Event Subscription Creator (Entra User or Application)
  - Purpose: Creates event subscriptions
  - Could be: You, your deployment pipeline, an admin tool, or another application
  - Needs: The AzureEventGridSecureWebhookSubscriber role assigned

Although the full PowerShell script is documented in the Event Grid documentation below, it may be complex to interpret and troubleshoot.
Azure PowerShell - Secure WebHook delivery with Microsoft Entra Application in Azure Event Grid - Azure Event Grid | Microsoft Learn

To improve accessibility, the following section provides a simplified, step-by-step, tested solution along with verification steps suitable for all users, including non-technical ones.

Steps:

STEP 1: Verify/Create the Microsoft.EventGrid Service Principal
- Azure Portal → Microsoft Entra ID → Enterprise applications
- Change the filter to Application type: Microsoft Applications
- Search for: Microsoft.EventGrid
- Ideally, your tenant should include this application ID, which is common across all Azure subscriptions: 4962773b-9cdb-44cf-a8bf-237846a00ab7. If this application ID is not present, please contact your Azure Cloud Administrator.

STEP 2: Create the App Role "AzureEventGridSecureWebhookSubscriber"
Using the Azure Portal:
- Navigate to your webhook app registration: Azure Portal → Microsoft Entra ID → App registrations
- Click All applications
- Find your app by searching OR use the Object ID you have
- Click on your app
- Create the app role:
  - In the left menu, click App roles
  - Click + Create app role
  - Fill in the form:
    - Display name: AzureEventGridSecureWebhookSubscriber
    - Allowed member types: Both (Users/Groups + Applications)
    - Value: AzureEventGridSecureWebhookSubscriber
    - Description: Azure Event Grid Role
    - Do you want to enable this app role?: Yes
  - Click Apply

STEP 3: Assign YOUR USER to the Role
Using the Azure Portal:
- Switch to the Enterprise application view: Azure Portal → Microsoft Entra ID → Enterprise applications
- Search for your webhook app (by name) and click on it
- Assign yourself:
  - In the left menu, click Users and groups
  - Click + Add user/group
  - Under Users, click None Selected
  - Search for your user account (use your email), select yourself, and click Select
  - Under Select a role, click None Selected
  - Select AzureEventGridSecureWebhookSubscriber and click Select
  - Click Assign

STEP 4: Assign the Microsoft.EventGrid Service Principal to the Role
This step MUST be done via PowerShell or Azure CLI (the Portal doesn't support this directly, as we have seen), so PowerShell is recommended. You may need to execute this step with the help of your Entra admin.

```powershell
# Connect to Microsoft Graph
Connect-MgGraph -Scopes "AppRoleAssignment.ReadWrite.All"

# Replace this with your webhook app's Application (client) ID
$webhookAppId = "YOUR-WEBHOOK-APP-ID-HERE"   # starting with c5

# Get your webhook app's service principal
$webhookSP = Get-MgServicePrincipal -Filter "appId eq '$webhookAppId'"
Write-Host "Found webhook app: $($webhookSP.DisplayName)"

# Get the Event Grid service principal
$eventGridSP = Get-MgServicePrincipal -Filter "appId eq '4962773b-9cdb-44cf-a8bf-237846a00ab7'"
Write-Host "Found Event Grid service principal"

# Get the app role
$appRole = $webhookSP.AppRoles | Where-Object {$_.Value -eq "AzureEventGridSecureWebhookSubscriber"}
Write-Host "Found app role: $($appRole.DisplayName)"

# Create the assignment
New-MgServicePrincipalAppRoleAssignment `
    -ServicePrincipalId $eventGridSP.Id `
    -PrincipalId $eventGridSP.Id `
    -ResourceId $webhookSP.Id `
    -AppRoleId $appRole.Id

Write-Host "Successfully assigned Event Grid to your webhook app!"
```
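Before moving on to the portal checks below, you can also confirm the assignment programmatically. The following is a minimal sketch (not part of the original walkthrough) that calls the Microsoft Graph REST API with the Python requests library; it assumes the signed-in identity can obtain a Graph token with permission to read service principals, and the two object IDs are placeholders taken from the script above.

```python
# Sketch: verify the Event Grid service principal holds an app role on the webhook app.
# Assumes the azure-identity and requests packages; object IDs below are placeholders.
import requests
from azure.identity import DefaultAzureCredential

EVENTGRID_SP_OBJECT_ID = "<object-id-of-Microsoft.EventGrid-service-principal>"   # placeholder
WEBHOOK_SP_OBJECT_ID = "<object-id-of-your-webhook-app-service-principal>"        # placeholder

token = DefaultAzureCredential().get_token("https://graph.microsoft.com/.default").token
resp = requests.get(
    f"https://graph.microsoft.com/v1.0/servicePrincipals/{EVENTGRID_SP_OBJECT_ID}/appRoleAssignments",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

# Look for an assignment whose resource is the webhook app's service principal.
assignments = resp.json().get("value", [])
has_role = any(a.get("resourceId") == WEBHOOK_SP_OBJECT_ID for a in assignments)
print("Event Grid SP has an app role on the webhook app:", has_role)
```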
Verification Steps:
- Verify the app role was created: Your App Registration → App roles. You should see AzureEventGridSecureWebhookSubscriber.
- Verify your user assignment: Enterprise application (your webhook app) → Users and groups. You should see your user with the role AzureEventGridSecureWebhookSubscriber.
- Verify the Event Grid assignment: Same location → Users and groups. You should see Microsoft.EventGrid with the role AzureEventGridSecureWebhookSubscriber.

Sample Flow: Analogy for Simplification
Let's think of it as a construction site/building where you are the owner of the building. Building = Microsoft Entra app (webhook app)

Building (Microsoft Entra App Registration for Webhook)
├─ Building Name: "MyWebhook-App"
├─ Building Address: Application ID
├─ Building Owner: You
├─ Security System: App Roles (the security badges you create)
└─ Security Team: Microsoft Entra and your actual webhook auth code (which validates tokens), like a doorman

Step 1: Create the badge (app role)
You (the building owner) create a special badge:
- Badge name: "AzureEventGridSecureWebhookSubscriber"
- Badge color: Let's say it's GOLD
- Who can have it: Companies (Applications) and People (Users)
This badge is stored in your building's system (the webhook app registration).

Step 2: Give the badge to the Event Grid service
Event Grid: "Hey, I need to deliver messages to your building"
You: "Okay, here's a GOLD badge for your SP"
Event Grid: *wears the badge*
Now Event Grid can:
- Show the badge to Microsoft Entra
- Get tokens that say "I have the GOLD badge"
- Deliver messages to your webhook

Step 3: Give a badge to yourself (or your deployment tool)
You also need a GOLD badge because:
- You want to create Event Grid event subscriptions
- Entra checks: "Does this person have a GOLD badge?"
- If yes: You can create subscriptions
- If no: "Access denied"
Your deployment pipeline also gets a GOLD badge:
- So it can automatically set up event subscriptions during CI/CD deployments

Disclaimer: The sample scripts provided in this article are provided AS IS without warranty of any kind. The author is not responsible for any issues, damages, or problems that may arise from using these scripts. Users should thoroughly test any implementation in their environment before deploying to production. Azure services and APIs may change over time, which could affect the functionality of the provided scripts. Always refer to the latest Azure documentation for the most up-to-date information.

Thanks for reading this blog! I hope you found it helpful and informative for this specific integration use case 😀

Introducing native Service Bus message publishing from Azure API Management (Preview)
We’re excited to announce a preview capability in Azure API Management (APIM) — you can now send messages directly to Azure Service Bus from your APIs using a built-in policy. This enhancement, currently in public preview, simplifies how you connect your API layer with event-driven and asynchronous systems, helping you build more scalable, resilient, and loosely coupled architectures across your enterprise.

Why this matters
Modern applications increasingly rely on asynchronous communication and event-driven designs. With this new integration:
- Any API hosted in API Management can publish to Service Bus — no SDKs, custom code, or middleware required.
- Partners, clients, and IoT devices can send data through standard HTTP calls, even if they don’t support AMQP natively.
- You stay in full control with authentication, throttling, and logging managed centrally in API Management.
- Your systems scale more smoothly by decoupling front-end requests from backend processing.

How it works
The new send-service-bus-message policy allows API Management to forward payloads from API calls directly into Service Bus queues or topics.

High-level flow
1. A client sends a standard HTTP request to your API endpoint in API Management.
2. The policy executes and sends the payload as a message to Service Bus.
3. Downstream consumers such as Logic Apps, Azure Functions, or microservices process those messages asynchronously.
All configurations happen in API Management — no code changes or new infrastructure are required.

Getting started
You can try it out in minutes:
1. Set up a Service Bus namespace and create a queue or topic.
2. Enable a managed identity (system-assigned or user-assigned) on your API Management instance.
3. Grant the identity the “Service Bus data sender” role in Azure RBAC, scoped to your queue/topic.
4. Add the policy to your API operation:

```xml
<send-service-bus-message queue-name="orders">
    <payload>@(context.Request.Body.As<string>())</payload>
</send-service-bus-message>
```

Once saved, each API call publishes its payload to the Service Bus queue or topic. 📖 Learn more.

Common use cases
This capability makes it easy to integrate your APIs into event-driven workflows:
- Order processing – Queue incoming orders for fulfillment or billing.
- Event notifications – Trigger internal workflows across multiple applications.
- Telemetry ingestion – Forward IoT or mobile app data to Service Bus for analytics.
- Partner integrations – Offer REST-based endpoints for external systems while maintaining policy-based control.
Each of these scenarios benefits from simplified integration, centralized governance, and improved reliability.

Secure and governed by design
The integration uses managed identities for secure communication between API Management and Service Bus — no secrets required. You can further apply enterprise-grade controls:
- Enforce rate limits, quotas, and authorization through APIM policies.
- Gain API-level logging and tracing for each message sent.
- Use Service Bus metrics to monitor downstream processing.
Together, these tools help you maintain a consistent security posture across your APIs and messaging layer.

Build modern, event-driven architectures
With this feature, API Management can serve as a bridge to your event-driven backbone. Start small by queuing a single API’s workload, or extend to enterprise-wide event distribution using topics and subscriptions. You’ll reduce architectural complexity while enabling more flexible, scalable, and decoupled application patterns.
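As a quick smoke test of the policy configured above, from the client's perspective, the following sketch posts a sample order to the API with the Python requests library. The gateway URL, path, and subscription key header value are placeholders for your own APIM instance, and whether a subscription key is required depends on how the API is configured.

```python
# Sketch: call an APIM operation that forwards the request body to Service Bus
# via the send-service-bus-message policy. URL and key below are placeholders.
import json
import requests

APIM_ENDPOINT = "https://<your-apim-name>.azure-api.net/orders"   # placeholder
SUBSCRIPTION_KEY = "<your-apim-subscription-key>"                  # placeholder, if the API requires one

order = {"orderId": "1001", "sku": "SKU-1234", "quantity": 2}

resp = requests.post(
    APIM_ENDPOINT,
    data=json.dumps(order),
    headers={
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
    },
)
print(resp.status_code, resp.text)
# If the policy ran successfully, the payload should now be on the "orders" queue;
# check the queue's active message count or a downstream consumer to confirm.
```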
Learn more: Get the full walkthrough and examples in the documentation 👉 here.

From Vibe Coding to Working App: How SRE Agent Completes the Developer Loop
The Most Common Challenge in Modern Cloud Apps
There's a category of bugs that drives engineers crazy: multi-layer infrastructure issues. Your app deploys successfully. Every Azure resource shows "Succeeded." But the app fails at runtime with a vague error like Login failed for user ''. Where do you even start? You're checking the Web App, the SQL Server, the VNet, the private endpoint, the DNS zone, the identity configuration... and each one looks fine in isolation. The problem is how they connect, and that's invisible in the portal.

Networking issues are especially brutal. The error says "Login failed" but the actual causes could be DNS, firewall, identity, or all three. The symptom and the root causes are in completely different resources. Without deep Azure networking knowledge, you're just clicking around hoping something jumps out.

Now imagine you vibe coded the infrastructure. You used AI to generate the Bicep, deployed it, and moved on. When it breaks, you're debugging code you didn't write and configuring resources you don't fully understand. This is where I wanted AI to help: not just to build, but to debug.

Enter SRE Agent + Coding Agent
Here's what I used:

| Layer | Tool | Purpose |
|-------|------|---------|
| Build | VS Code Copilot Agent Mode + Claude Opus | Generate code, Bicep, deploy |
| Debug | Azure SRE Agent | Diagnose infrastructure issues and create a developer issue with suggested fixes in source code (app code and IaC) |
| Fix | GitHub Coding Agent | Create PRs with code and IaC fixes from the GitHub issue created by SRE Agent |

Copilot builds. SRE Agent debugs. Coding Agent fixes.

What I Built
I used VS Code Copilot in Agent Mode with Claude Opus to create a .NET 8 Web App connected to Azure SQL via private endpoint:
- Private networking (no public exposure)
- Entra-only authentication
- Managed identity (no secrets)
Deployed with azd up. All green. Then I tested the health endpoint:

```bash
$ curl https://app-tsdvdfdwo77hc.azurewebsites.net/health/sql
{"status":"unhealthy","error":"Login failed for user ''.","errorType":"SqlException"}
```

Deployment succeeded. App failed. One error.

How I Fixed It: Step by Step

Step 1: Create SRE Agent with Azure Access
I created an SRE Agent with read access to my Azure subscription. You can scope it to specific resource groups. The agent builds a knowledge graph of your resources and their dependencies, visible in the Resource Mapping view.

Step 2: Connect GitHub to SRE Agent using the GitHub MCP server
I connected the GitHub MCP server so the agent could read my repository and create issues.

Step 3: Create a Sub-Agent to Analyze Source Code
I created a sub-agent for analyzing source code using GitHub MCP tools. This lets SRE Agent understand not just Azure resources, but also the Bicep and source code files that created them.
"you are expert in analyzing source code (bicep and app code) from github repos"

Step 4: Invoke the Sub-Agent to Analyze the Error
In the SRE Agent chat, I invoked the sub-agent to diagnose the error I received from my app endpoint. It correlated the runtime error with the infrastructure configuration.

Step 5: Watch SRE Agent Think and Reason
SRE Agent analyzed the error by tracing code in Program.cs, Bicep configurations, and Azure resource relationships: Web App, SQL Server, VNet, private endpoint, DNS zone, and managed identity. Its reasoning process worked through each layer, eliminating possibilities one by one until it identified the root causes.
Step 6: Agent Creates a GitHub Issue
Based on its analysis, SRE Agent summarized the root causes and suggested fixes in a GitHub issue:
Root Causes:
- Private DNS Zone missing VNet link
- Managed identity not created as a SQL user
Suggested Fixes:
- Add a virtualNetworkLinks resource to the Bicep
- Add a SQL setup script to create the user with db_datareader and db_datawriter roles

Step 7: Merge the PR from Coding Agent
Assign the GitHub issue to Coding Agent, which then creates a PR with the fixes. I just reviewed the fix. It made sense and I merged it. Redeployed with azd up, ran the SQL script:

```bash
curl -s https://app-tsdvdfdwo77hc.azurewebsites.net/health/sql | jq .
{
  "status": "healthy",
  "database": "tododb",
  "server": "tcp:sql-tsdvdfdwo77hc.database.windows.net,1433",
  "message": "Successfully connected to SQL Server"
}
```

🎉 From error to fix in minutes, without manually debugging a single Azure resource.

Why This Matters
If you're a developer building and deploying apps to Azure, SRE Agent changes how you work:
- You don't need to be a networking expert. SRE Agent understands the relationships between Azure resources: private endpoints, DNS zones, VNet links, managed identities. It connects dots you didn't know existed.
- You don't need to guess. Instead of clicking through the portal hoping something looks wrong, the agent systematically eliminates possibilities like a senior engineer would.
- You don't break your workflow. SRE Agent suggests fixes in your Bicep and source code, not portal changes. Everything stays version controlled. Deployed through pipelines. No hot fixes at 2 AM.
- You close the loop. AI helps you build fast. Now AI helps you debug fast too.

Try It Yourself
Do you vibe code your app, your infrastructure, or both? How do you debug when things break? Here's a challenge: Vibe code a todo app with a Web App, VNet, private endpoint, and SQL database. "Forget" to link the DNS zone to the VNet. Deploy it. Watch it fail. Then point SRE Agent at it and see how it identifies the root cause, creates a GitHub issue with the fix, and hands it off to Coding Agent for a PR. Share your experience. I'd love to hear how it goes.

Learn More
- Azure SRE Agent documentation
- Azure SRE Agent blogs
- Azure SRE Agent community
- Azure SRE Agent home page
- Azure SRE Agent pricing

JSON Structure: A JSON schema language you'll love
We talk to many customers moving structured data through queues, event streams, and topics, and we see a strong desire to create more efficient and less brittle communication paths governed by rich data definitions well understood by all parties. The way those definitions are usually shared is as schema documents. While there is great need, the available schema options and related tool chains are often not great.

JSON Schema is popular for its relative simplicity in trivial cases, but quickly becomes unmanageable as users employ more complex constructs. The industry has largely settled on "Draft 7," with subsequent releases seeing weak adoption. There's substantial frustration among developers who try to use JSON Schema for code generation or database mapping—scenarios it was never designed for. JSON Schema is a powerful document validation tool, but it is not a data definition language. We believe it's effectively un-toolable for anything beyond pure validation; practically all available code-generation tools agree by failing at various degrees of complexity. Avro and Protobuf schemas are better for code generation, but tightly coupled to their respective serialization frameworks.

For our own work in Microsoft Fabric, we're initially leaning on an Avro-compatible schema with a small set of modifications, but we ultimately need a richer type definition language that ideally builds on people's familiarity with JSON Schema. This isn't just a Microsoft problem. It's an industry-wide gap. That's why we've submitted JSON Structure as a set of Internet Drafts to the IETF, aiming for formal standardization as an RFC. We want a vendor-neutral, standards-track schema language that the entire industry can adopt.

What Is JSON Structure?
JSON Structure is a modern, strictly typed data definition language that describes JSON-encoded data such that mapping to and from programming languages and databases becomes straightforward. It looks familiar—if you've written "type": "object", "properties": {...} before, you'll feel right at home. But there's a key difference: JSON Structure is designed for code generation and data interchange first, with validation as an optional layer rather than the core concern. This means you get:
- Precise numeric types: int32, int64, decimal with precision and scale, float, double
- Rich date/time support: date, time, datetime, duration—all with clear semantics
- Extended compound types: Beyond objects and arrays, you get set, map, tuple, and choice (discriminated unions)
- Namespaces and modular imports: Organize your schemas like code
- Currency and unit annotations: Mark a decimal as USD or a double as kilograms

Here's a compact example that showcases these features. We start with the schema header and the object definition:

```json
{
  "$schema": "https://json-structure.org/meta/extended/v0/#",
  "$id": "https://example.com/schemas/OrderEvent.json",
  "name": "OrderEvent",
  "type": "object",
  "properties": {
```

Objects require a name for clean code generation. The $schema points to the JSON Structure meta-schema, and the $id provides a unique identifier for the schema itself. Now let's define the first few properties—identifiers and a timestamp:

```json
    "orderId": { "type": "uuid" },
    "customerId": { "type": "uuid" },
    "timestamp": { "type": "datetime" },
```

The native uuid type maps directly to Guid in .NET, UUID in Java, and uuid in Python. The datetime type uses RFC3339 encoding and becomes DateTimeOffset in .NET, datetime in Python, or Date in JavaScript. No format strings, no guessing.
Next comes the order status, modeled as a discriminated union:

```json
    "status": {
      "type": "choice",
      "choices": {
        "pending": { "type": "null" },
        "shipped": {
          "type": "object",
          "name": "ShippedInfo",
          "properties": {
            "carrier": { "type": "string" },
            "trackingId": { "type": "string" }
          }
        },
        "delivered": {
          "type": "object",
          "name": "DeliveredInfo",
          "properties": {
            "signedBy": { "type": "string" }
          }
        }
      }
    },
```

The choice type is a discriminated union with typed payloads per case. Each variant can carry its own structured data—shipped includes carrier and tracking information, delivered captures who signed for the package, and pending carries no payload at all. This maps to enums with associated values in Swift, sealed classes in Kotlin, or tagged unions in Rust.

For monetary values, we use precise decimals:

```json
    "total": { "type": "decimal", "precision": 12, "scale": 2 },
    "currency": { "type": "string", "maxLength": 3 },
```

The decimal type with explicit precision and scale ensures exact monetary math—no floating-point surprises. A precision of 12 with scale 2 gives you up to 10 digits before the decimal point and exactly 2 after.

Line items use an array of tuples for compact, positional data:

```json
    "items": {
      "type": "array",
      "items": {
        "type": "tuple",
        "properties": {
          "sku": { "type": "string" },
          "quantity": { "type": "int32" },
          "unitPrice": { "type": "decimal", "precision": 10, "scale": 2 }
        },
        "tuple": ["sku", "quantity", "unitPrice"],
        "required": ["sku", "quantity", "unitPrice"]
      }
    },
```

Tuples are fixed-length typed sequences—ideal for time-series data or line items where position matters. The tuple array specifies the exact order: SKU at position 0, quantity at 1, unit price at 2. The int32 type maps to int in all mainstream languages.

Finally, we add extensible metadata using set and map types:

```json
    "tags": { "type": "set", "items": { "type": "string" } },
    "metadata": { "type": "map", "values": { "type": "string" } }
  },
  "required": ["orderId", "customerId", "timestamp", "status", "total", "currency", "items"]
}
```

The set type represents unordered, unique elements—perfect for tags. The map type provides string keys with typed values, ideal for extensible key-value metadata without polluting the main schema.

Here's what a valid instance of this schema looks like:

```json
{
  "orderId": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
  "customerId": "7c9e6679-7425-40de-944b-e07fc1f90ae7",
  "timestamp": "2025-01-15T14:30:00Z",
  "status": { "shipped": { "carrier": "Litware", "trackingId": "794644790323" } },
  "total": "129.97",
  "currency": "USD",
  "items": [
    ["SKU-1234", 2, "49.99"],
    ["SKU-5678", 1, "29.99"]
  ],
  "tags": ["priority", "gift-wrap"],
  "metadata": { "source": "web", "campaign": "summer-sale" }
}
```

Notice how the choice is encoded as an object with a single key indicating the active case—{"shipped": {...}}—making it easy to parse and route. Tuples serialize as JSON arrays in the declared order. Decimals are encoded as strings to preserve precision across all platforms.

Why Does This Matter for Messaging?
When you're pushing events through Service Bus, Event Hubs, or Event Grid, schema clarity is everything. Your producers and consumers often live in different codebases, different languages, different teams. A schema that generates clean C# classes, clean Python dataclasses, and clean TypeScript interfaces—from the same source—is not a luxury. It's a requirement. JSON Structure's type system was designed with this polyglot reality in mind. The extended primitive types map directly to what languages actually have. A datetime is a DateTimeOffset in .NET, a datetime in Python, a Date in JavaScript. No more guessing whether that "string with format date-time" will parse correctly on the other side.
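To make the encoding rules concrete, here is a small Python sketch (standard library only, not using the JSON Structure SDKs) that builds an OrderEvent instance following the conventions above: decimals serialized as strings, the choice encoded as a single-key object, and line items as positional arrays.

```python
# Sketch: produce a JSON document that follows the OrderEvent encoding conventions above.
# Standard library only; the JSON Structure SDKs would add schema validation on top.
import json
import uuid
from datetime import datetime, timezone
from decimal import Decimal

def money(value: Decimal) -> str:
    # Decimals travel as strings on the wire to preserve precision (scale 2 here).
    return str(value.quantize(Decimal("0.01")))

order_event = {
    "orderId": str(uuid.uuid4()),
    "customerId": str(uuid.uuid4()),
    "timestamp": datetime.now(timezone.utc).isoformat().replace("+00:00", "Z"),  # RFC3339
    # 'choice' encodes as an object with one key naming the active case.
    "status": {"shipped": {"carrier": "Litware", "trackingId": "794644790323"}},
    "total": money(Decimal("129.97")),
    "currency": "USD",
    # Tuples serialize as arrays in the declared order: sku, quantity, unitPrice.
    "items": [
        ["SKU-1234", 2, money(Decimal("49.99"))],
        ["SKU-5678", 1, money(Decimal("29.99"))],
    ],
    "tags": ["priority", "gift-wrap"],
    "metadata": {"source": "web", "campaign": "summer-sale"},
}

print(json.dumps(order_event, indent=2))
```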
SDKs Available Now
We've built SDKs for the languages you're using today: TypeScript, Python, .NET, Java, Go, Rust, Ruby, Perl, PHP, Swift, and C. All SDKs validate both schemas and instances against schemas. A VS Code extension provides IntelliSense and inline diagnostics.

Code and Schema Generation with Structurize
Beyond validation, you often need to generate code or database schemas from your type definitions. The Structurize tool converts JSON Structure schemas into SQL DDL for various database dialects, as well as self-serializing classes for multiple programming languages. It can also convert between JSON Structure and other schema formats like Avro, Protobuf, and JSON Schema.

Here's a simple example: a postal address schema, and the SQL Server table definition generated by running structurize struct2sql postaladdress.json --dialect sqlserver.

JSON Structure Schema:

```json
{
  "$schema": "https://json-structure.org/meta/extended/v0/#",
  "$id": "https://example.com/schemas/PostalAddress.json",
  "name": "PostalAddress",
  "description": "A postal address for shipping or billing",
  "type": "object",
  "properties": {
    "id": { "type": "uuid", "description": "Unique identifier for the address" },
    "street": { "type": "string", "description": "Street address with house number" },
    "city": { "type": "string", "description": "City or municipality" },
    "state": { "type": "string", "description": "State, province, or region" },
    "postalCode": { "type": "string", "description": "ZIP or postal code" },
    "country": { "type": "string", "description": "ISO 3166-1 alpha-2 country code" },
    "createdAt": { "type": "datetime", "description": "When the address was created" }
  },
  "required": ["id", "street", "city", "postalCode", "country"]
}
```

Generated SQL Server DDL:

```sql
CREATE TABLE [PostalAddress] (
    [id] UNIQUEIDENTIFIER,
    [street] NVARCHAR(200),
    [city] NVARCHAR(100),
    [state] NVARCHAR(50),
    [postalCode] NVARCHAR(20),
    [country] NVARCHAR(2),
    [createdAt] DATETIME2,
    PRIMARY KEY ([id], [street], [city], [postalCode], [country])
);
EXEC sp_addextendedproperty 'MS_Description', 'A postal address for shipping or billing',
    'SCHEMA', 'dbo', 'TABLE', 'PostalAddress';
EXEC sp_addextendedproperty 'MS_Description', 'Unique identifier for the address',
    'SCHEMA', 'dbo', 'TABLE', 'PostalAddress', 'COLUMN', 'id';
EXEC sp_addextendedproperty 'MS_Description', 'Street address with house number',
    'SCHEMA', 'dbo', 'TABLE', 'PostalAddress', 'COLUMN', 'street';
-- ... additional column descriptions
```

The uuid type maps to UNIQUEIDENTIFIER, datetime becomes DATETIME2, and the schema's description fields are preserved as SQL Server extended properties. The tool supports PostgreSQL, MySQL, SQLite, and other dialects as well.

Mind that all this code is provided "as-is" and is in a "draft" state, just like the specification set. Feel encouraged to provide feedback and ideas in the GitHub repos for the specifications and SDKs at https://github.com/json-structure/

Learn More
We've submitted JSON Structure as a set of Internet Drafts to the IETF, aiming for formal standardization as an RFC. This is an industry-wide issue, and we believe the solution needs to be a vendor-neutral standard. You can track the drafts at the IETF Datatracker.
- Main site: json-structure.org
- Primer: JSON Structure Primer
- Core specification: JSON Structure Core
- Extensions: Import | Validation | Alternate Names | Units | Composition
- IETF Drafts: IETF Datatracker
- GitHub: github.com/json-structure

What’s New in Azure Event Grid
Azure Event Grid continues to evolve with new capabilities in General Availability and Public Preview – intended to help you enhance performance, security, and interoperability in modern event-driven systems. These enhancements strengthen Azure Event Grid’s foundational messaging layer for distributed systems—supporting real-time telemetry, automation, and hybrid workloads. With support for advanced MQTT features and flexible authentication models, Event Grid now offers:
- Stronger security posture through OAuth 2.0 and Custom Webhook authentication
- Operational efficiency with static client identifiers and message retention
- Broader integration across devices, applications, and cloud services

General Availability Features

MQTT OAuth 2.0 Authentication
Authenticate MQTT clients using JSON Web Tokens (JWTs) issued by any OpenID Connect (OIDC)-compliant identity provider. This enables seamless integration with Microsoft Entra ID (formerly Azure AD), custom identity platforms, or third-party IAM solutions.

MQTT Custom Webhook Authentication
Use external webhooks or Azure Functions to dynamically validate client connections. Authorize using shared access signatures (SAS), API keys, custom credentials, or X.509 certificate fingerprints. This is a powerful feature for scenarios requiring granular control across large, dynamic fleets of devices or multitenant environments.

MQTT Assigned Client Identifiers
Assign deterministic, pre-approved client identifiers to MQTT clients. This enables enhanced session continuity, better device tracking, simplified diagnostics, and improved auditability—critical for managing long-lived connections and operational visibility in regulated industries.

First-Class Integration with Fabric
Route MQTT messages and CloudEvents from an Event Grid namespace to Fabric Event Streams for real-time analytics, storage, and visualization of IoT data without having to hop through Event Hubs.

Public Preview Features

HTTP Publish
Bridge traditional HTTP-based applications into event-driven ecosystems by allowing HTTP clients to publish messages directly to Event Grid topics. This enables RESTful services, legacy systems, and webhooks to participate in real-time event workflows, complementing MQTT and cloud-native integrations.

MQTT Retain Support
Support for retained MQTT messages allows clients to receive the latest value on a topic immediately upon subscription—without waiting for the next publish. This is particularly useful in IoT telemetry scenarios, stateful dashboards, and device shadow synchronization. Retained messages are stored per topic with configurable expiry and can be cleared on demand.
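To illustrate what retained publishing looks like from a client, here is a minimal sketch using the Python paho-mqtt package (2.x API). The namespace hostname, client name, and certificate paths are placeholders, and Event Grid's MQTT broker additionally requires the client to be registered in the namespace with the appropriate permissions, so treat this as an outline rather than a drop-in script.

```python
# Sketch: publish a retained telemetry message to an Event Grid namespace MQTT broker.
# Hostname, client name, and certificate paths are placeholders for your own namespace setup.
import json
import paho.mqtt.client as mqtt

BROKER = "<your-namespace>.<region>-1.ts.eventgrid.azure.net"   # placeholder namespace hostname
CLIENT_ID = "machine-01"                                        # must match a registered client/session

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, client_id=CLIENT_ID, protocol=mqtt.MQTTv5)
client.username_pw_set(username="machine-01")                         # client authentication name (placeholder)
client.tls_set(certfile="machine-01.pem", keyfile="machine-01.key")   # X.509 client-certificate auth

client.connect(BROKER, port=8883)
client.loop_start()

payload = json.dumps({"temperature_c": 71.5, "rpm": 1450})
# retain=True asks the broker to hand this last value to new subscribers immediately on subscribe.
info = client.publish("factory/line1/machine-01/telemetry", payload, qos=1, retain=True)
info.wait_for_publish()

client.loop_stop()
client.disconnect()
```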
Unlocking Smart Factory Insights with Sparkplug B on Azure Event Grid MQTT Broker
In the age of Industry 4.0, factories are becoming smarter, more connected, and increasingly data driven. A key enabler of this transformation is Sparkplug B, an MQTT-based protocol purpose-built for industrial IoT (IIoT). And now, with Azure Event Grid MQTT Broker, Sparkplug B comes to life in the cloud—securely, reliably, and at scale.

What is Sparkplug B?
Think of Sparkplug B as the common language for industrial devices. It defines how sensors, gateways, and SCADA systems talk to each other—sharing not just telemetry data (like temperature or RPM) but also device lifecycle information, such as when a machine comes online (BIRTH) or goes offline (DEATH).

Why it Matters for Manufacturers
- Real-time factory monitoring – View live machine vitals across distributed plants.
- Predictive maintenance – Anticipate failures by analyzing trends.
- Seamless SCADA integration – Auto-discover tags in systems like Ignition SCADA with Cirrus Link.
- Edge-to-cloud bridge – Bring legacy factory systems into Azure for analytics, AI, and automation.

Azure Event Grid MQTT Broker + Sparkplug B
With Azure Event Grid MQTT Broker, manufacturers can run Sparkplug B workloads with enterprise-grade reliability: a connected factory floor where insights flow seamlessly from edge devices to cloud, unlocking efficiency, uptime, and innovation using the following capabilities:
- QoS 1 for reliability (at-least-once delivery).
- Last Will & Testament (LWT) for real-time device state awareness.
- Retained messages to ensure new subscribers always see the last known good value.
- Native support for binary Sparkplug payloads over secure TLS.

From Factory Floor to Cloud Insights
1. Sensors measure machine temperature and RPM.
2. Edge gateways publish Sparkplug B messages to Azure Event Grid MQTT Broker.
3. Ignition SCADA with Chariot Cirrus Link auto-discovers and displays these tags.
4. Azure Data Explorer or Fabric ingests the same data for real-time dashboards, predictive analytics, or automated alerts.

Ready to Get Started?
- Learn more about Azure Event Grid MQTT Broker
- Sparkplug B Support

Logic Apps Aviators Newsletter - July 25
In this issue:
- Ace Aviator of the Month
- News from our product group
- News from our community

Ace Aviator of the Month
July’s Ace Aviator: Şahin Özdemir

What's your role and title? What are your responsibilities?
I currently work for Rubicon Cloud Advisor, a Dutch company specialized in digital transformations, cloud adoption and AI implementation. At Rubicon I fulfil the role of Application and Integration architect, while also being a Professional Scrum Trainer at Scrum.org. Even though this sounds like two completely different roles, in practice both go closely hand in hand. I firmly believe that good architecture, a strong development process, and application of best practices are key pillars for delivering high-quality solutions to my clients. Therefore, both roles come in handy in my day-to-day job (combined with my strong background in software development).
I work closely with companies and their teams in making their journey to Azure - especially Azure Integration Services - successful. Most of the time this journey starts with a business need or challenge, and I work with my clients to get a deeper understanding of their needs. This results in further analysis, capturing requirements, defining architecture, solution design, setting the stage for development (ALM) and being involved in quality assurance. At the same time, I think it’s important to stay relevant from a technical perspective. That’s why I also like being involved with implementing the solution. This way, I hear the technical struggles teams face and I can help them to find the right solution.

Can you give us some insights into your day-to-day activities and what a typical day in your role looks like?
Not a single day is the same, although there are some recurring activities. Specific parts of my day (or sprint) are dedicated to Scrum-related activities - whether it's participating in the daily scrum, having sprint reviews with stakeholders, planning the next sprint, refining the backlog with the team, or just aligning with the PO or stakeholders. I’m frequently involved in cross-organizational meetings focused on projects at scale, where I contribute from the perspective of architecture, technical expertise, and integration strategy. In my role as a solution architect, I'm engaged in designing and implementing a critical integration platform for my client. This platform connects and exchanges data between many internal departments and external vendors - an effort that requires frequent alignment and collaboration. I’m always looking for opportunities to expand our Hybrid Integration Platform itself; exploring how Azure resources may add value to our platform and working closely with the team to realize such improvements to the platform’s capabilities is something I enjoy. Outside of the regular meetings, I often focus on designing new integrations and having working sessions with stakeholders to understand what they want. Based on these discussions, I assess the technical and architectural aspects of the solution. Every integration that lands on the platform is measured against both architectural and development principles and guidelines. I contribute to reviewing the solutions that have been developed, ensuring that each integration is high-quality, consistent, easy to understand, and maintainable. I support the platform team whenever possible, and if time permits, I develop parts of the solution myself – I see this as a great way to stay relevant from a technological perspective.
All the spare time I have, I spend on writing technical articles that may help others.

What motivates and inspires you to be an active member of the Aviators/Microsoft community?
Because I enjoy helping others. Every day I work with a team of smart professionals on integration solutions and custom code within the Azure platform. Along the way, we regularly encounter challenges, limitations, or issues. In those moments, it's incredibly helpful to find solutions online or to have a community that can think along with you. Over the past few years, there have been many occasions where I just couldn’t find a solution online for a technical problem with Logic Apps. In these cases, we either came up with a creative solution ourselves or received support from Microsoft. When the integration community faces a similar challenge, it’s pretty much wasteful to tackle the same hurdles again. By documenting an approach or solution, others may be saving their invaluable time looking for a solution.

Looking back, what advice do you wish you had been given earlier that you'd now share with those looking to get into STEM/technology?
It is ok that you don’t know everything. Just start doing, experiment, stay curious, challenge yourself, don’t be afraid to ask questions, fail, learn and keep going!

What has helped you grow professionally?
I have spent a fair amount of my career at a big consulting firm. I started off as a software engineer and worked all the way up to senior manager and architect. A long journey like that gives great and well-dosed opportunities and learning experiences to focus on your technical (in-depth) skillset first, continued by working on your soft skills like consulting, guiding and leading teams, solutioning and architecture. If I had not followed this path at that company, I would not be the person I am now professionally. Be ok with the fact that growth doesn’t happen overnight - no shortcuts, no magic pills. It's like a good red wine that needs time to mature. So do many challenging projects, become all-round and then choose a specialization, ask for constructive feedback, fail many times and take your time to reflect and learn. And don’t forget to have a strong work ethic and ongoing curiosity to learn new things. In the end I found that - from a technological perspective - quality attributes (the “-illities”), enterprise application integration and scrum made my heart skip a beat. So my advice is to always pursue what brings you joy!

If you had a magic wand that could create a feature in Logic Apps, what would it be and why?
Overall, I must say that I’m happy with the current state of Logic Apps. Nevertheless, if I had a magic wand:
- I would like to see the service plans for Logic Apps Standard brought in line with Function Apps. The plans for Function Apps have way better tiers from a memory, cores and pricing perspective. And being able to scale out and in based on specific metrics is more flexible than Logic Apps Standard currently offers. Having more CPU/memory available in the plans would also improve the overall performance of Logic Apps in general, even though performance optimizations of many actions would also be more than welcome.
- What I currently really miss in the HTTP connector (and possibly others) is the ability to have better control over the request timeouts. Even though the setting is there, it is capped to 4 minutes max. In practice, we need to deliver data to external APIs that work synchronously and take more time to complete.
  Giving better control on these timeouts would make the usability of workflows even better!
- Even though some nice additions to the initialization of variables have been made recently, I would like to see the ability to initialize variables at any point in the workflow. E.g. the foreach loop can be executed in parallel, and therefore the current global variables are not thread-safe, which leads to unexpected behavior.

News from our product group

Logic Apps Live June 2025
Missed Logic Apps Live in June? You can watch it here. We focused on the big Logic Apps announcements from Integrate 2025. There are a lot of great things to check!

Feedback Opportunity: SRE Agent + Logic Apps
Discover the new Applications feature in Azure API Management, enabling OAuth-based access to APIs and products. Streamline secure API access with built-in OAuth 2.0 application-based authorization.

Configure SQL Storage for Standard Logic Apps
Azure Logic Apps traditionally rely on Azure Storage to manage workflow states and runtime data. However, with the introduction of SQL as a storage provider (currently in preview), developers now have a compelling alternative that offers greater control, flexibility, and integration with existing SQL infrastructure. This post explores the benefits, configuration steps, and considerations for using SQL storage with Standard Logic Apps.

Announcing General Availability: Azure Logic Apps Standard Automated Test Framework
We’re excited to announce the General Availability (GA) of the Azure Logic Apps Standard Automated Test Framework—a major step forward in enabling developers to build, test, and maintain enterprise-grade workflows with confidence and agility.

Announcing General Availability: Azure Logic Apps Standard Custom Code with .NET 8
We’re excited to announce the General Availability (GA) of Custom Code support in Azure Logic Apps Standard with .NET 8. This release marks a significant step forward in enabling developers to build more powerful, flexible, and maintainable integration workflows using familiar .NET tools and practices. With this capability, developers can now embed custom .NET 8 code directly within their Logic Apps Standard workflows. This unlocks advanced logic scenarios, promotes code reuse, and allows seamless integration with existing .NET libraries and services—making it easier than ever to build enterprise-grade solutions on Azure.

Business Process Tracking Reaches General Availability
Business Process Tracking provides key insights to business stakeholders from your Logic Apps (Standard) implementation in an efficient and timely manner. Today, we are pleased to announce the General Availability of this capability, allowing customers to leverage it in their production workloads.

Announcement: General Availability of Logic Apps Hybrid Deployment Model
We’re excited to announce the Public Preview of two major integrations that bring the power of Azure Logic Apps to AI Agents in Foundry – Logic Apps as Tools and AI Agent Service Connector. Learn more on our announcement post!

Announcing Public Preview: Organizational Templates in Azure Logic Apps
We’re excited to announce the Public Preview of Organizational Templates in Azure Logic Apps—empowering teams to author, share, and reuse automation patterns across their organization. With this release, we’re also rolling out a brand-new UI experience to easily create templates directly from your workflows—no manual packaging required!
OpenTelemetry in Azure Logic Apps (Standard and Hybrid)
OpenTelemetry provides a unified, vendor-agnostic framework for collecting telemetry data—logs, metrics, and traces—across different services and infrastructure layers. It simplifies monitoring and makes it easier to integrate with a variety of observability backends such as Azure Monitor, Grafana Tempo, Jaeger, and others. For Logic Apps—especially when deployed in hybrid or on-premises scenarios—OpenTelemetry is a powerful addition that elevates diagnostic capabilities beyond the default Application Insights telemetry.

Logic App Standard - When High Memory / CPU usage strikes and what to do
Monitoring your applications is essential, as it ensures that you know what's happening and you are not caught by surprise when something happens. One possible event is the performance of your application starting to decrease and processing becoming slower than usual. This may happen due to various reasons, and in this blog post, we will be discussing High Memory and CPU usage and why it affects your Logic App. We will also observe some possibilities that we've seen deemed as the root cause for some customers.

Introducing Agent in a Day
Agent in a Day represents a fantastic opportunity for customers to participate in hackathon-style contests where attendees learn how to build agents and then can apply them to their unique business use cases. For Partners, Agent in a Day represents a great way to engage your customers by building agents with them and uncovering new use cases.

Introducing Confluent Kafka Connector (Public Preview)
We are pleased to announce the introduction of the Confluent Kafka Connector in Logic Apps (Standard), which allows you to both send and receive messages between Logic Apps and Confluent Kafka. Confluent Kafka is a distributed streaming platform for building real-time data pipelines and streaming applications. It is used across many industries including financial services, omnichannel retail, autonomous cars, fraud detection services, microservices and IoT deployments. Our current connector offering supports both triggers (receive) and sending (publish) within Logic Apps.

News from our community

Logic App Standard: Throw exceptions like a pro!
Post by Şahin Özdemir
Learn how to throw exceptions in Logic App Standard using a simple Compose action—no code needed, just clever workflow design.

Azure Logic Apps: are you handling large blobs? Keep memory usage under control.
Post by Stefano Demiliani
Struggling with large blob files in Logic Apps? Learn how to keep memory usage under control and avoid out-of-memory errors with smart workflow design and a few performance-boosting tricks.

De SOAPing Services: SOAP to REST using Azure API Management
Video by Stephen W Thomas
Struggling with legacy SOAP integrations from BizTalk to Azure? Check out this video on simplifying SOAP-to-REST conversions using Azure API Management and learn how easily you can manage SOAP envelopes and streamline your Logic Apps integrations!

Integrating Entra ID and AI Agent workflows in Azure Logic Apps
Post by Brian Veldman
Discover how to build AI-powered workflows in Azure Logic Apps that interact with Entra ID, automate tasks, and adapt dynamically using agentic tools and OpenAI models.
Advanced KQL Queries for Logic Apps in Application Insights: A Practical Guide
Post by Dieter Gobeyn
Boost Logic App performance with advanced KQL queries in Application Insights—spot bottlenecks, analyze slow actions, and optimize workflows without upgrading your hosting plan.

How to Build an AI Agent with Azure Logic Apps
Post by Cameron McKay
Learn how to build your first AI Agent in Azure Logic Apps using Agent Loop—connect to OpenAI, design smart prompts, and automate tasks like weather reporting with low-code workflows.

You Can Now Initialize All Your Variables In One Single Action
Post by Luis Rigueira
You can now initialize multiple variables in Logic Apps with a single action—making your workflows cleaner, faster, and easier to manage. It is a Friday Fact, brought to you by Luis Rigueira!

Integration Insights Podcast: The Future of Integration
Video by Sagar Sharma and Jochen Toelen
In this two-part episode of the Integration Insights podcast, Sagar, Jochen and Kent dive into how integration is evolving in a cloud-first world. From BizTalk migrations to hybrid deployments with Azure Arc, they share practical insights and best practices to future-proof your integration strategy. A must-listen! You can watch part 2 here.

Event Grid vs Service Bus vs Event Hubs vs Storage Queues: Choosing the Right Messaging Backbone in Azure
Post by Prashant Singh
Confused by Azure’s messaging options? This guide breaks down Event Grid, Service Bus, Event Hubs, and Storage Queues—helping you choose the right tool for real-time events, telemetry, enterprise workflows, or lightweight tasks.

IntelliSense in Logic Apps Just Got Smarter – Matching Brackets in the Expression Editor!
Post by Sandro Pereira
Logic Apps just got a lot friendlier—bracket matching in the expression editor now highlights pairs as you type, making it easier to write and debug complex expressions. A Friday Fact from Sandro Pereira.

How to Build Resilient Integrations for Mission-Critical Systems
Post by Lilan Sameera
Learn how to build resilient integrations for mission-critical systems using Logic Apps, Service Bus, and Event Hub—ensuring reliable data delivery, smart retries, and clean outputs even under pressure.

Reimagining App Modernization for the Era of AI
This blog highlights the key announcements and innovations from Microsoft Build 2025. It focuses on how AI is transforming the software development lifecycle, particularly in app modernization. Key topics include the use of GitHub Copilot for accelerating development and modernization, the introduction of the Azure SRE Agent for managing production systems, and the launch of the App Modernization Guidance to help organizations modernize their applications with AI-first design. The blog emphasizes the strategic approach to modernization, aiming to reduce complexity, improve agility, and deliver measurable business outcomes.

Announcing new features and updates in Azure Event Grid
Discover powerful new features in Azure Event Grid, enhancing its functionality and user experience. This fully managed event broker now supports multi-protocol interoperability, including MQTT, for scalable messaging. It seamlessly connects Microsoft-native and third-party services, enabling robust event-driven applications, and streamlines event management with flexible push-pull communication patterns.

We are thrilled to announce General Availability of cross-tenant delivery to Event Hubs, Service Bus, Storage Queues, and dead-letter storage using managed identity with federated identity credentials (FIC) from Azure Event Grid topics, domains, system topics, and partner topics. New cross-tenant scenarios, currently in Public Preview, enable delivery to Event Hubs, webhooks, and dead-letter storage in Azure Event Grid namespaces. This includes system topics, partner topics, and domains, offering seamless integration. The update enhances flexibility for event-driven applications across tenants. Azure Event Grid now also offers managed identity support for webhook delivery for all its resources. Public Preview features for the new cross-tenant scenarios and managed identity support for webhook delivery are currently available in West Central, West Europe, UK South, and Central US, and more regions will be supported soon.

We are also introducing the Public Preview of Network Security Perimeter (NSP) support in Azure Event Grid topics and domains, for inbound and outbound communication. This perimeter defines a boundary with implicit trust access between each resource, where you can have sets of inbound and outbound access rules. By incorporating these advanced security measures, Azure Event Grid enhances the defense against a wide range of cyber threats, helping organizations to safeguard their event-driven workloads.

In addition to this, Azure Event Grid has introduced message ordering support within single MQTT client sessions, ensuring reliable sequential event delivery, and a connection rate limit of one attempt per second per session, which maintains system stability. Furthermore, the expansion to support up to 15 MQTT topic segments per topic or filter offers greater flexibility in topic hierarchies. High-throughput messaging, supporting up to 1,000 messages per second per session, is now in Public Preview, making it ideal for demanding scenarios such as IoT telemetry and real-time analytics.

Azure Event Grid now also offers OAuth 2.0 JWT authentication for MQTT clients in Public Preview. This feature enables secure client authentication via JSON Web Tokens (JWT) issued by OpenID Connect (OIDC) compliant providers, providing a lightweight, secure, and flexible authentication option for clients not provisioned in Azure. Additionally, Custom Webhook Authentication has been introduced, allowing dynamic client authentication through webhooks or Azure Functions, with Entra ID JWT validation for centralized and customizable strategies. Finally, Assigned Client Identifiers in Public Preview provide consistent client IDs, improving session management and operational control, further enhancing the scalability and flexibility of client authentication workflows.

We believe these updates will greatly enhance your Azure Event Grid experience. We welcome your feedback and appreciate your ongoing partnership as we work to deliver top features and services.

Azure Event Grid Domain Creation: Overcoming AZ CLI's TLS Parameter Limitations with Workaround
Introduction: The Intersection of Security Policies and DevOps Automation
In the modern cloud landscape, organizations increasingly enforce strict security requirements through platform policies. One common requirement is mandating the latest TLS versions, for example TLS 1.2, across all deployed resources to protect data in transit. While this is an excellent security practice, it can sometimes conflict with the available configuration options in deployment tools, particularly in the Azure CLI.

This blog explores a specific scenario that many Azure DevOps teams encounter: how to deploy an Azure Event Grid domain when your organization has a custom policy requiring the latest TLS version (TLS 1.2), but the Azure CLI command doesn't provide a parameter to configure this setting.

The Problem: Understanding the Gap Between Policy and Tooling

What Is Azure Event Grid?
Azure Event Grid is a serverless event routing service that enables event-driven architectures. It manages the routing of events from various sources (like Azure services, custom applications, or SaaS products) to different handlers such as Azure Functions, Logic Apps, or custom webhooks. An Event Grid domain provides a custom topic endpoint that can receive events from multiple sources, offering a way to organize and manage events at scale.

The Policy Requirement:
Many organizations implement Azure Policy to enforce security standards across their cloud infrastructure. A common policy might look like this:

```json
{
  "policyRule": {
    "if": {
      "allOf": [
        { "field": "type", "equals": "Microsoft.EventGrid/domains" },
        {
          "anyOf": [
            { "field": "Microsoft.EventGrid/domains/minimumTlsVersion", "exists": false },
            { "field": "Microsoft.EventGrid/domains/minimumTlsVersion", "notEquals": "1.2" }
          ]
        }
      ]
    },
    "then": { "effect": "deny" }
  }
}
```

This policy blocks the creation of any Event Grid domain that doesn't explicitly set TLS 1.2 as the minimum TLS version.

The CLI Limitation:
Now, let's examine the Azure CLI command to create an Event Grid domain: az eventgrid domain | Microsoft Learn. The TLS property is unrecognized in the latest version of the AZ CLI.

Current Status of This Limitation:
It's worth noting that this limitation has been recognized by the Azure team. There is an official GitHub feature request tracking this issue, which you can find at => Please add TLS support while creation of Azure Event Grid domain through CLI · Issue #31278 · Azure/azure-cli
Before implementing the workaround described in this article, I recommend checking the current status of this feature request. The Azure CLI is continuously evolving, and by the time you're reading this, the limitation might have been addressed. However, as of April 2025, this remains a known limitation in the Azure CLI, necessitating the alternative approach outlined below.

Why This Matters:
This limitation becomes particularly problematic in CI/CD pipelines or Infrastructure as Code (IaC) scenarios where you want to automate the deployment of Event Grid domain resources.
Workaround:

You can use the ARM template below and deploy it through the Azure CLI in your deployment pipeline.

Working ARM template:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "domainName": {
      "type": "string",
      "metadata": {
        "description": "Name of the Event Grid Domain"
      }
    },
    "location": {
      "type": "string",
      "defaultValue": "[resourceGroup().location]",
      "metadata": {
        "description": "Azure region for the domain"
      }
    }
  },
  "resources": [
    {
      "type": "Microsoft.EventGrid/domains",
      "apiVersion": "2025-02-15",
      "name": "[parameters('domainName')]",
      "location": "[parameters('location')]",
      "properties": {
        "minimumTlsVersionAllowed": "1.2"
      }
    }
  ]
}

Please note I've used the latest API version from the official Microsoft documentation below:
Microsoft.EventGrid/domains - Bicep, ARM template & Terraform AzAPI reference | Microsoft Learn

Working AZ CLI command:

az deployment group create --resource-group <rg> --template-file <armtemplate.json> --parameters domainName=<event grid domain name>

You can store this ARM template in your configuration directory as a replacement for the Azure CLI command. It explicitly sets TLS 1.2 for Event Grid domains, ensuring security compliance where the CLI lacks this parameter. For example:

az deployment group create --resource-group <rg> --template-file ./config/<armtemplate.json> --parameters domainName=<event grid domain name>
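As an optional check after deployment, you can read the domain back and confirm that the minimum TLS version was applied. The following is a minimal sketch using the generic az resource show command (the resource group and domain name are placeholders, and the exact output shape can vary with the API version you query):

# Minimal sketch: verify the deployed domain reports TLS 1.2 as the minimum allowed version
az resource show \
  --resource-group <rg> \
  --name <event grid domain name> \
  --resource-type "Microsoft.EventGrid/domains" \
  --api-version 2025-02-15 \
  --query "properties.minimumTlsVersionAllowed"

If the template was applied successfully, the command should return "1.2".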
Disclaimer: The sample scripts provided in this article are provided AS IS without warranty of any kind. The author is not responsible for any issues, damages, or problems that may arise from using these scripts. Users should thoroughly test any implementation in their environment before deploying to production. Azure services and APIs may change over time, which could affect the functionality of the provided scripts. Always refer to the latest Azure documentation for the most up-to-date information.

Thanks for reading this blog! I hope you've found this workaround valuable for addressing the Event Grid domain TLS parameter limitation in Azure CLI. 😊

Azure Event Grid CLI Identity Gaps & Workarounds with Python REST and ARM Templates

Azure Event Grid has become a cornerstone service for building event-driven architectures in the cloud. It provides a scalable event routing service that enables reactive programming patterns, connecting event sources to event handlers seamlessly. However, when working with Event Grid through the Azure CLI, developers often encounter a significant limitation: the inability to configure system-assigned managed identities using CLI commands.

In this blog post, I'll explore this limitation and provide practical workarounds using Python REST API calls and ARM templates with the CLI, so your Event Grid deployments can leverage the security benefits of managed identities without being blocked by tooling constraints.

Problem Statement:

Unlike many other Azure resources that support the --identity or --assign-identity parameter for enabling system-assigned managed identities, Event Grid's CLI commands currently lack this capability when creating an event subscription for a system topic. This means that while the Azure Portal and other methods support managed identities for Event Grid, you can't configure them directly through the CLI for system topic event subscriptions.

For example, you can add a managed identity for delivery through the portal, but not through the Azure CLI. If you try to use the following CLI command:

az eventgrid system-topic event-subscription create \
  --name my-sub \
  --system-topic-name my-topic \
  --resource-group my-rg \
  --endpoint <EH resource id> \
  --endpoint-type eventhub \
  --identity systemassigned

You'll run into a limitation: the --identity flag is not supported or unrecognized for system topic subscriptions in the Azure CLI. Also, --delivery-identity is in preview and under development.

Current Status of This Limitation:

It's worth noting that this limitation has been recognized by the Azure team. There is an official GitHub feature request tracking this issue, which you can find at Use managed identity to command creates an event subscription for an event grid system topic · Issue #26910 · Azure/azure-cli. Before implementing any of the workarounds described in this article, I recommend checking the current status of this feature request. The Azure CLI is continuously evolving, and by the time you're reading this, the limitation might have been addressed. However, as of April 2025, this remains a known limitation in the Azure CLI, necessitating the alternative approaches outlined below.

Why This Matters:

This limitation becomes particularly problematic in CI/CD pipelines or Infrastructure as Code (IaC) scenarios where you want to automate the deployment of Event Grid resources with managed identities.

Solution 1: Using Azure REST API with Python requests library:

The first approach to overcome this limitation is to use the Azure REST API with Python. This provides the most granular control over your Event Grid resources and allows you to enable system-assigned managed identities programmatically.
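Whichever workaround you choose, delivery with a system-assigned identity only succeeds if the system topic actually has that identity enabled and the identity has data-plane rights on the destination Event Hub. The following is a minimal sketch of those prerequisites; the resource names are placeholders, the --identity parameter on az eventgrid system-topic update depends on your CLI version, and you would swap the role if your destination is Service Bus or a Storage queue:

# Sketch: enable a system-assigned identity on the system topic (if not already enabled)
az eventgrid system-topic update \
  --resource-group <rg> \
  --name <system-topic-name> \
  --identity systemassigned

# Sketch: look up the principal ID of the system topic's identity
az eventgrid system-topic show \
  --resource-group <rg> \
  --name <system-topic-name> \
  --query identity.principalId --output tsv

# Sketch: grant that identity send rights on the destination Event Hub
az role assignment create \
  --assignee <principal-id-from-previous-step> \
  --role "Azure Event Hubs Data Sender" \
  --scope <EH resource id>

With these prerequisites in place, either of the two workarounds below can wire the subscription to use the identity for delivery.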
System Topic Event Subscriptions - Create Or Update - REST API (Azure Event Grid) | Microsoft Learn

You can retrieve an Entra access token using the CLI command below:

az account get-access-token

Sample working code & payload:

import requests
import json

# Replace the <> placeholders with your own values before running
subscription_id = <>
resource_group = <>
system_topic_name = <>
event_subscription_name = <>
event_hub_resource_id = <>
access_token = <>

url = f"https://management.azure.com/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/Microsoft.EventGrid/systemTopics/{system_topic_name}/eventSubscriptions/{event_subscription_name}?api-version=2024-12-15-preview"

payload = {
    "identity": {
        "type": "SystemAssigned"
    },
    "properties": {
        "topic": "/subscriptions/<>/resourceGroups/<>/providers/Microsoft.EventGrid/systemTopics/<>",
        "filter": {
            "includedEventTypes": [
                "Microsoft.Storage.BlobCreated",
                "Microsoft.Storage.BlobDeleted"
            ],
            "advancedFilters": [],
            "enableAdvancedFilteringOnArrays": True
        },
        "labels": [],
        "eventDeliverySchema": "EventGridSchema",
        "deliveryWithResourceIdentity": {
            "identity": {
                "type": "SystemAssigned"
            },
            "destination": {
                "endpointType": "EventHub",
                "properties": {
                    "resourceId": "/subscriptions/<>/resourceGroups/rg-sch/providers/Microsoft.EventHub/namespaces/<>/eventhubs/<>",
                    "deliveryAttributeMappings": [
                        {
                            "name": "test",
                            "type": "Static",
                            "properties": {
                                "value": "test",
                                "isSecret": False,
                                "sourceField": ""
                            }
                        },
                        {
                            "name": "id",
                            "type": "Dynamic",
                            "properties": {
                                "value": "abc",
                                "isSecret": False,
                                "sourceField": "data.key"
                            }
                        }
                    ]
                }
            }
        }
    }
}

headers = {
    "Authorization": f"Bearer {access_token}",
    "Content-Type": "application/json"
}

response = requests.put(url, headers=headers, data=json.dumps(payload))

if response.status_code in [200, 201]:
    print("Event subscription created successfully!")

Remember that these tokens are sensitive security credentials, so handle them with appropriate care. They should never be exposed in logs, shared repositories, or other insecure locations.
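If you prefer not to copy tokens around by hand (for example in a pipeline), one possible variation is to let the azure-identity library fetch the ARM-scoped token for you. This is a minimal sketch rather than part of the original workaround; it assumes the azure-identity package is installed and that the environment is already signed in (Azure CLI login, a managed identity, or service principal environment variables):

import requests
from azure.identity import DefaultAzureCredential

# DefaultAzureCredential tries environment credentials, managed identity,
# and the local Azure CLI login in turn, then returns an ARM-scoped token.
credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default").token

headers = {
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json",
}

# 'url' and 'payload' would be the same values built in the sample above:
# response = requests.put(url, headers=headers, data=json.dumps(payload))

This keeps short-lived tokens out of scripts and logs, since the credential object acquires and refreshes them on demand.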
Solution 2: Using ARM Templates & deploying them through CLI

Another solution is to use Azure Resource Manager (ARM) templates, which fully support system-assigned managed identities for Event Grid. This approach works well in existing IaC workflows.

Microsoft.EventGrid/systemTopics/eventSubscriptions - Bicep, ARM template & Terraform AzAPI reference | Microsoft Learn

Here's a sample ARM template that creates an event subscription on an existing system topic with system-assigned managed identity delivery:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "systemTopicName": {
      "type": "string",
      "metadata": {
        "description": "Name of the existing system topic"
      }
    },
    "eventSubscriptionName": {
      "type": "string",
      "metadata": {
        "description": "Name of the event subscription to create"
      }
    },
    "eventHubResourceId": {
      "type": "string",
      "metadata": {
        "description": "Resource ID of the Event Hub to send events to"
      }
    },
    "includedEventType": {
      "type": "string",
      "defaultValue": "Microsoft.Storage.BlobCreated",
      "metadata": {
        "description": "Event type to filter on"
      }
    }
  },
  "resources": [
    {
      "type": "Microsoft.EventGrid/systemTopics/eventSubscriptions",
      "apiVersion": "2024-06-01-preview",
      "name": "[format('{0}/{1}', parameters('systemTopicName'), parameters('eventSubscriptionName'))]",
      "identity": {
        "type": "SystemAssigned"
      },
      "properties": {
        "deliveryWithResourceIdentity": {
          "destination": {
            "endpointType": "EventHub",
            "properties": {
              "resourceId": "[parameters('eventHubResourceId')]"
            }
          },
          "identity": {
            "type": "SystemAssigned"
          }
        },
        "eventDeliverySchema": "EventGridSchema",
        "filter": {
          "includedEventTypes": [
            "[parameters('includedEventType')]"
          ]
        }
      }
    }
  ]
}

How to deploy via Azure CLI:

az deployment group create \
  --resource-group <your-resource-group> \
  --template-file eventgridarmtemplate.json \
  --parameters \
    systemTopicName=<system-topic-name> \
    eventSubscriptionName=<event-subscription-name> \
    eventHubResourceId="/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.EventHub/namespaces/<namespace>/eventhubs/<hub>"

Disclaimer

The sample scripts provided in this article are provided AS IS without warranty of any kind. The author is not responsible for any issues, damages, or problems that may arise from using these scripts. Users should thoroughly test any implementation in their environment before deploying to production. Azure services and APIs may change over time, which could affect the functionality of the provided scripts. Always refer to the latest Azure documentation for the most up-to-date information.

Thanks for reading this blog! I hope you've found these workarounds valuable for addressing the Event Grid identity parameter limitation in Azure CLI. 😊