Security Review for Microsoft Edge version 147
We have reviewed the new settings in Microsoft Edge version 147 and determined that there are no additional security settings that require enforcement. The Microsoft Edge version 139 security baseline continues to be our recommended configuration, which can be downloaded from the Microsoft Security Compliance Toolkit. Microsoft Edge version 147 introduced 9 new Computer and User settings; we have included a spreadsheet listing the new settings to make them easier to find.

Version 147 introduces the "Control the availability of the XSLT feature" policy (XSLTEnabled). This policy exists to support enterprise testing and transition scenarios while the Chromium project works toward deprecating and removing XSLT support from the browser due to security concerns with this legacy feature. XSLT support in modern browsers represents a disproportionate attack surface, and upstream Chromium has announced plans to disable and ultimately remove XSLT in a future release. Organizations should therefore treat continued reliance on client-side XSLT as technical debt and plan migration accordingly. Additional details can be found here. Organizations are encouraged to proactively test setting XSLTEnabled = Disabled to identify application dependencies and remediation requirements ahead of any future default changes or removal of the feature.

As a friendly reminder, all available settings for Microsoft Edge are documented here, and all available settings for Microsoft Edge Update are documented here. Please continue to give us feedback through the Security Baselines Discussion site or this post.

Build Your AI Agent in 5 Minutes with AI Toolkit for VS Code
What if building an AI agent was as easy as filling out a form? No frameworks to install. No boilerplate to copy-paste from GitHub. No YAML to debug at midnight. Just VS Code, one extension, and an idea.

AI Toolkit for VS Code turns agent development into something anyone can do — whether you're a seasoned developer who wants full code control, or someone who's never touched an AI framework and just wants to see something work. Let's build an agent. Then let's explore what else this toolkit can do.

Getting Set Up

You need two things:

- VS Code — download and install if you haven't already
- AI Toolkit extension — open VS Code, go to Extensions (Ctrl+Shift+X), search "AI Toolkit", and install it

That's it. No terminal commands. No dependencies to wrangle. When AI Toolkit installs, it brings everything it needs — including the Microsoft Foundry integration and GitHub Copilot skills for agent development. Once installed, you'll see a new AI Toolkit icon in the left sidebar. Click it. That's your home base for everything we're about to do.

Build an Agent — No Code Required

Open the Command Palette (Ctrl+Shift+P) and type "Create Agent". You'll see a clean panel with two options side by side:

- Design an Agent Without Code — visual builder, perfect for getting started
- Create in Code — full project scaffolding, for when you want complete control

Click "Design an Agent Without Code." Agent Builder opens up. Now fill in three things:

1. Give it a name. Something descriptive. For this example: "Azure Advisor"
2. Pick a model. Click the model dropdown. You'll see a list of available models — GPT-4.1, Claude Opus 4.6, and others. Foundry models appear at the top as recommended options. Pick one. Here's a nice detail: you don't need to know whether your model uses the Chat Completions API or the Responses API. AI Toolkit detects this automatically and handles the switch behind the scenes.
3. Write your instructions. This is where you tell the agent who it is and how to behave.
Think of it as a personality brief.

Hit Run

That's it. Click Run and start chatting with your agent in the built-in playground.

Want More Control? Build in Code

The no-code path is great for prototyping and prompt engineering. But when you need custom tools, business logic, or multi-agent workflows — switch to code. From the Create Agent view, choose "Create in Code with Full Control." You get two options:

1. Scaffold from a template. Pick a pre-built project structure — single agent, multi-agent, or LangGraph workflow. AI Toolkit generates a complete project with proper folder structure, configuration files, and starter code. Open it, customize it, run it.
2. Generate with GitHub Copilot. Describe your agent in plain English in Copilot Chat: "Create a customer support agent that can look up order status, process returns, and escalate to a human when the customer is upset." Copilot generates a full project — agent logic, tool definitions, system prompts, and evaluation tests. It uses the microsoft-foundry skill, the same open-source skill powering GitHub Copilot for Azure. AI Toolkit installs and keeps this skill updated automatically — you never configure it. The output is structured and production-ready. Real folder structure. Real separation of concerns. Not a single-file script.

Either way, you get a project you can version-control, test, and deploy.

Cool Features You Should Know About

Building the agent is just the beginning. Here's where AI Toolkit gets genuinely impressive.

🔧 Add Real Tools with MCP

Your agent can do more than just talk. Click Add Tool in Agent Builder to connect MCP (Model Context Protocol) servers — these give your agent real capabilities:

- Search the web
- Query a database
- Read files
- Call external APIs
- Interact with any service that has an MCP server

You control how much freedom your agent gets. Set tool approval to Auto (tool runs immediately) or Manual (you approve each call).
Perfect for when you trust a read-only search tool but want oversight on anything that takes action. You can also delete MCP servers directly from the Tool Catalog when you no longer need them — no config file editing required.

🧠 Prompt Optimizer

Not sure if your instructions are good enough? Click the Improve button in Agent Builder. The Foundry Prompt Optimizer analyzes your prompt and rewrites it to be clearer, more structured, and more effective. It's like having a prompt engineering expert review your work — except it takes seconds.

🕸️ Agent Inspector

When your agent runs, open Agent Inspector to see what's happening under the hood. It visualizes the entire workflow in real time — which tools are called, in what order, and how the agent makes decisions.

💬 Conversations View

Agent Builder includes a Conversations tab where you can review the full history of interactions with your agent. Scroll through past conversations, compare how your agent handled different scenarios, and spot patterns in where it succeeds or struggles.

📁 Everything in One Sidebar

AI Toolkit puts everything in a single My Resources panel:

- Recent Agents — one-click access to agents you've been working on
- Local Resources — your local models, agents, and tools
- Foundry Resources — remote agents and models (if connected)

Why AI Toolkit?

There are other ways to build agents. What makes this different?

Everything is in VS Code. You don't context-switch between a web UI, a CLI, and an IDE. Discovery, building, testing, debugging, and deployment all happen in one place.

No-code and code-first aren't separate products. They're two views of the same agent. Start in Agent Builder, click View Code, and you have a full project. Or go the other way — build in code and test in the visual playground.

Copilot is deeply integrated. Not as a chatbot bolted on the side — as an actual development tool that understands agent architecture and generates production-quality scaffolding.
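The Auto/Manual tool-approval model described earlier can be sketched as a small gating pattern. This is a conceptual illustration only, not AI Toolkit's actual implementation; the `Tool` class, the `"auto"`/`"manual"` labels, and the `approve` callback are all hypothetical names chosen for the sketch:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    run: Callable[..., str]
    approval: str  # "auto" or "manual" (hypothetical labels for this sketch)

def invoke(tool: Tool, approve: Callable[[str], bool], **kwargs) -> str:
    # Auto tools run immediately; manual tools ask the user first.
    if tool.approval == "manual" and not approve(tool.name):
        return f"{tool.name}: call rejected by user"
    return tool.run(**kwargs)

# A read-only tool can safely be auto-approved; a destructive one should not be.
search = Tool("web_search", lambda q: f"results for {q!r}", "auto")
delete = Tool("delete_file", lambda path: f"deleted {path}", "manual")

print(invoke(search, lambda name: True, q="azure"))      # runs immediately
print(invoke(delete, lambda name: False, path="a.txt"))  # blocked without approval
```

The point of the pattern is that the approval decision lives outside the tool itself, which is what lets a UI like Agent Builder flip a tool between Auto and Manual without changing the tool's code.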
Wrapping Up

📥 Install: AI Toolkit on the VS Code Marketplace
📖 Learn: AI Toolkit Documentation

Open VS Code. Ctrl+Shift+P. Type "Create Agent." Five minutes from now, you'll have an agent running. 🚀

Azure Functions Ignite 2025 Update
Azure Functions is redefining event-driven applications and high-scale APIs in 2025, accelerating innovation for developers building the next generation of intelligent, resilient, and scalable workloads. This year, our focus has been on empowering AI and agentic scenarios: remote MCP server hosting, bulletproofing agents with Durable Functions, and first-class support for critical technologies like OpenTelemetry, .NET 10, and Aspire. With major advances in serverless Flex Consumption, plus enhanced performance, security, and deployment fundamentals across Elastic Premium and Flex, Azure Functions is the platform of choice for building modern, enterprise-grade solutions.

Remote MCP

Model Context Protocol (MCP) has taken the world by storm, offering an agent a mechanism to discover and work deeply with the capabilities and context of tools. When you want to expose MCP tools to your enterprise or the world securely, we recommend building remote MCP servers that are designed to run securely at scale. Azure Functions is uniquely optimized to run your MCP servers at scale, offering the serverless, highly scalable features of the Flex Consumption plan, plus the two flexible programming model options discussed below. All come together on the hardened Functions service with new authentication modes for Entra and OAuth using built-in authentication.

Remote MCP Triggers and Bindings Extension GA

Back in April, we shared a new extension that allows you to author MCP servers using functions with the MCP tool trigger. That MCP extension is now generally available, with support for C# (.NET), Java, JavaScript (Node.js), Python, and TypeScript (Node.js). The MCP tool trigger allows you to focus on what matters most: the logic of the tool you want to expose to agents. Functions takes care of all the protocol and server logistics, with the ability to scale out to support as many sessions as you want to throw at it.
```csharp
[Function(nameof(GetSnippet))]
public object GetSnippet(
    [McpToolTrigger(GetSnippetToolName, GetSnippetToolDescription)] ToolInvocationContext context,
    [BlobInput(BlobPath)] string snippetContent
)
{
    return snippetContent;
}
```

New: Self-hosted MCP Server (Preview)

If you’ve built servers with official MCP SDKs and want to run them as remote cloud-scale servers without rewriting any code, this public preview is for you. You can now self-host your MCP server on Azure Functions—keep your existing Python, TypeScript, .NET, or Java code and get rapid 0-to-N scaling, built-in server authentication and authorization, consumption-based billing, and more from the underlying Azure Functions service. This feature complements the Azure Functions MCP extension for building MCP servers using the Functions programming model (triggers & bindings). Pick the path that fits your scenario—build with the extension or with standard MCP SDKs. Either way, you benefit from the same scalable, secure, and serverless platform.

Use the official MCP SDKs:

```python
@mcp.tool()
async def get_alerts(state: str) -> str:
    """Get weather alerts for a US state.

    Args:
        state: Two-letter US state code (e.g. CA, NY)
    """
    url = f"{NWS_API_BASE}/alerts/active/area/{state}"
    data = await make_nws_request(url)

    if not data or "features" not in data:
        return "Unable to fetch alerts or no alerts found."

    if not data["features"]:
        return "No active alerts for this state."

    alerts = [format_alert(feature) for feature in data["features"]]
    return "\n---\n".join(alerts)
```

Use the Azure Functions Flex Consumption plan's serverless compute via custom handlers in host.json:

```json
{
  "version": "2.0",
  "configurationProfile": "mcp-custom-handler",
  "customHandler": {
    "description": {
      "defaultExecutablePath": "python",
      "arguments": ["weather.py"]
    },
    "http": {
      "DefaultAuthorizationLevel": "anonymous"
    },
    "port": "8000"
  }
}
```

Learn more about the MCP trigger and self-hosted MCP servers at https://aka.ms/remote-mcp

Built-in MCP server authorization (Preview)

The built-in authentication and authorization feature can now be used for MCP server authorization, using a new preview option. You can quickly define identity-based access control for your MCP servers with Microsoft Entra ID or other OpenID Connect providers. Learn more at https://aka.ms/functions-mcp-server-authorization.

Better together with Foundry agents

Microsoft Foundry is the starting point for building intelligent agents, and Azure Functions is the natural next step for extending those agents with remote MCP tools. Running your tools on Functions gives you clean separation of concerns, reuse across multiple agents, and strong security isolation. And with built-in authorization, Functions enables enterprise-ready authentication patterns, from calling downstream services with the agent’s identity to operating on behalf of end users with their delegated permissions. Build your first remote MCP server and connect it to your Foundry agent at https://aka.ms/foundry-functions-mcp-tutorial.

Agents

Microsoft Agent Framework 2.0 (Public Preview Refresh)

We’re excited about the 2.0 preview refresh of Microsoft Agent Framework, which builds on battle-hardened work from Semantic Kernel and AutoGen. Agent Framework is an outstanding solution for building multi-agent orchestrations that are both simple and powerful.
Azure Functions is a strong fit for hosting Agent Framework, with the service’s extreme scale, serverless billing, and enterprise-grade features like VNet networking and built-in auth.

Durable Task Extension for Microsoft Agent Framework (Preview)

The durable task extension for Microsoft Agent Framework transforms how you build production-ready, resilient, and scalable AI agents by bringing the proven durable execution (survives crashes and restarts) and distributed execution (runs across multiple instances) capabilities of Azure Durable Functions directly into the Microsoft Agent Framework. Combined with Azure Functions for hosting and event-driven execution, you can now deploy stateful, resilient AI agents that automatically handle session management, failure recovery, and scaling, freeing you to focus entirely on your agent logic.

Key features of the durable task extension include:

- Serverless Hosting: Deploy agents on Azure Functions with auto-scaling from thousands of instances down to zero, while retaining full control in a serverless architecture
- Automatic Session Management: Agents maintain persistent sessions with full conversation context that survives process crashes, restarts, and distributed execution across instances
- Deterministic Multi-Agent Orchestrations: Coordinate specialized durable agents with predictable, repeatable, code-driven execution patterns
- Human-in-the-Loop with Serverless Cost Savings: Pause for human input without consuming compute resources or incurring costs
- Built-in Observability with Durable Task Scheduler: Deep visibility into agent operations and orchestrations through the Durable Task Scheduler UI dashboard

Create a durable agent:

```python
endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
deployment_name = os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME", "gpt-4o-mini")

# Create an AI agent following the standard Microsoft Agent Framework pattern
agent = AzureOpenAIChatClient(
    endpoint=endpoint,
    deployment_name=deployment_name,
    credential=AzureCliCredential()
).create_agent(
    instructions="""You are a professional content writer who creates engaging,
    well-structured documents for any given topic. When given a topic, you will:
    1. Research the topic using the web search tool
    2. Generate an outline for the document
    3. Write a compelling document with proper formatting
    4. Include relevant examples and citations""",
    name="DocumentPublisher",
    tools=[
        AIFunctionFactory.Create(search_web),
        AIFunctionFactory.Create(generate_outline)
    ]
)

# Configure the function app to host the agent with durable session management
app = AgentFunctionApp(agents=[agent])
app.run()
```

Durable Task Scheduler dashboard for agent and agent workflow observability and debugging

For more information on the durable task extension for Agent Framework, see the announcement: https://aka.ms/durable-extension-for-af-blog.

Flex Consumption Updates

As you know, Flex Consumption means serverless without compromise.
It combines elastic scale and pay-for-what-you-use pricing with the controls you expect: per-instance concurrency, longer executions, VNet/private networking, and Always Ready instances to minimize cold starts. Since launching GA at Ignite 2024, Flex Consumption has seen tremendous growth, with over 1.5 billion function executions per day and nearly 40 thousand apps. Here’s what’s new for Ignite 2025:

- 512 MB instance size (GA): Right-size lighter workloads and scale farther within the default quota.
- Availability Zones (GA): Distribute instances across zones.
- Rolling updates (Public Preview): Unlock zero-downtime deployments of code or config by setting a single configuration. See below for more information.
- Even more improvements, including new diagnostic settings to route logs/metrics, support for Key Vault and App Configuration references, new regions, and Custom Handler support.

To get started, review the Flex Consumption samples, or dive into the documentation to see how Flex can support your workloads.

Migrating to Azure Functions Flex Consumption

Migrating to Flex Consumption is simple with our step-by-step guides and agentic tools. Move your Azure Functions apps or AWS Lambda workloads, update your code and configuration, and take advantage of new automation tools. With Linux Consumption retiring, now is the time to switch. For more information, see:

- Migrate Consumption plan apps to the Flex Consumption plan
- Migrate AWS Lambda workloads to Azure Functions

Durable Functions

Durable Functions introduces powerful new features to help you build resilient, production-ready workflows:

- Distributed Tracing: lets you track requests across components and systems, giving you deep visibility into orchestrations and activities, with support for App Insights and OpenTelemetry.
- Extended Sessions support in .NET isolated: improves performance by caching orchestrations in memory, ideal for fast sequential activities and large fan-out/fan-in patterns.
- Orchestration versioning (public preview): enables zero-downtime deployments and backward compatibility, so you can safely roll out changes without disrupting in-flight workflows.

Durable Task Scheduler Updates

- Durable Task Scheduler Dedicated SKU (GA): Now generally available, the Dedicated SKU offers advanced orchestration for complex workflows and intelligent apps. It provides predictable pricing for steady workloads, automatic checkpointing, state protection, and advanced monitoring for resilient, reliable execution.
- Durable Task Scheduler Consumption SKU (Public Preview): The new Consumption SKU brings serverless, pay-as-you-go orchestration to dynamic and variable workloads. It delivers the same orchestration capabilities with flexible billing, making it easy to scale intelligent applications as needed.

For more information, see: https://aka.ms/dts-ga-blog

OpenTelemetry support now GA

OpenTelemetry support in Azure Functions is now generally available, bringing unified, production-ready observability to serverless applications. Developers can now export logs, traces, and metrics using open standards—enabling consistent monitoring and troubleshooting across every workload. Key capabilities include:

- Unified observability: Standardize logs, traces, and metrics across all your serverless workloads for consistent monitoring and troubleshooting.
- Vendor-neutral telemetry: Integrate seamlessly with Azure Monitor or any OpenTelemetry-compliant backend, ensuring flexibility and choice.
- Broad language support: Works with .NET (isolated), Java, JavaScript, Python, PowerShell, and TypeScript.

Start using OpenTelemetry in Azure Functions today to unlock standards-based observability for your apps. For step-by-step guidance on enabling OpenTelemetry and configuring exporters for your preferred backend, see the documentation.

Deployment with Rolling Updates (Preview)

Achieving zero-downtime deployments has never been easier.
The Flex Consumption plan now offers rolling updates as a site update strategy. Set a single property, and all future code deployments and configuration changes will be released with zero downtime. Instead of restarting all instances at once, the platform drains existing instances in batches while scaling out the latest version to match real-time demand. This ensures uninterrupted in-flight executions and resilient throughput across your HTTP, non-HTTP, and Durable workloads – even during intensive scale-out scenarios. Rolling updates are now in public preview. Learn more at https://aka.ms/functions/rolling-updates.

Secure Identity and Networking Everywhere by Design

Security and trust are paramount. Azure Functions incorporates proven best practices by design, with full support for managed identity—eliminating secrets and simplifying secure authentication and authorization. Flex Consumption and other plans offer enterprise-grade networking features like VNets, private endpoints, and NAT gateways for defense in depth. The Azure portal streamlines secure function creation, and updated scenarios and samples showcase these identity and networking capabilities in action. Built-in authentication (discussed above) enables inbound client traffic to use identity as well. Check out our updated Functions scenarios page with quickstarts or our secure samples gallery to see these identity and networking best practices in action.

.NET 10

Azure Functions now supports .NET 10, bringing a great suite of new features and performance benefits to your code. .NET 10 is supported on the isolated worker model, and it’s available for all plan types except Linux Consumption. As a reminder, support for the legacy in-process model ends on November 10, 2026, and the in-process model is not being updated for .NET 10. To stay supported and take advantage of the latest features, migrate to the isolated worker model.
Aspire

Aspire is an opinionated stack that simplifies development of distributed applications in the cloud. The Azure Functions integration for Aspire enables you to develop, debug, and orchestrate an Azure Functions .NET project as part of an Aspire solution. Aspire publish deploys your functions directly to Azure Functions on Azure Container Apps. Aspire 13 includes an updated preview version of the Functions integration that acts as a release candidate with go-live support. The package will move to GA quality with Aspire 13.1.

Java 25, Node.js 24

Azure Functions now supports Java 25 and Node.js 24 in preview. You can develop functions using these versions locally and deploy them to Azure Functions plans. Learn how to upgrade your apps to these versions here.

In Summary

Ready to build what’s next? Update your Azure Functions Core Tools today and explore the latest samples and quickstarts to unlock new capabilities for your scenarios. The guided quickstarts run and deploy in under 5 minutes and incorporate best practices—from architecture to security to deployment. We’ve made it easier than ever to scaffold, deploy, and scale real-world solutions with confidence. The future of intelligent, scalable, and secure applications starts now—jump in and see what you can create!

Announcing Microsoft Azure Network Adapter (MANA) support for Existing VM SKUs
As a leader in cloud infrastructure, Microsoft ensures that Azure’s IaaS customers always have access to the latest hardware. Our goal is to consistently deliver technology to support business-critical workloads with world-class efficiency, reliability, and security. Customers benefit from cutting-edge performance enhancements and features, helping them future-proof their workloads while maintaining business continuity.

Azure will be deploying the Microsoft Azure Network Adapter (MANA) for existing VM size families, with a deployment timeline to be announced by mid-to-late April. The intent is to provide the benefits of new server hardware to customers of existing VM SKUs as they work toward migrating to newer SKUs. The deployments will be based on capacity needs and won’t be restricted by region. Once the hardware is available in a region, VMs can be deployed to it as needed.

Workloads on operating systems which fully support MANA will benefit from sub-second Network Interface Card (NIC) firmware upgrades, higher throughput, lower latency, increased security, and Azure Boost-enabled data path accelerations. If your workload doesn't support MANA today, you'll still be able to access Azure’s network on MANA-enabled SKUs, but performance will be comparable to previous-generation (non-MANA) hardware.

Check out the Azure Boost overview and the Microsoft Azure Network Adapter (MANA) overview for more detailed information and OS compatibility. To determine whether your VMs are impacted and what actions (if any) you should take, start with MANA support for existing VM SKUs. That article explains which VM sizes are eligible to be deployed on the new MANA-enabled hardware, what actions (if any) you should take, and how to determine whether a workload has been deployed on MANA-enabled hardware.

Code Optimizations for Azure App Service Now Available in VS Code
Today we shipped a feature in the Azure App Service extension for VS Code that answers both questions: Code Optimizations, powered by Application Insights profiler data and GitHub Copilot.

The problem: production performance is a black box

You've deployed your .NET app to Azure App Service. Monitoring shows CPU is elevated, and response times are creeping up. You know something is slow, but reproducing production load patterns locally is nearly impossible. Application Insights can detect these issues, but context-switching between the Azure Portal and your editor to actually fix them adds friction. What if the issues came to you, right where you write code?

What's new

The Azure App Service extension now adds a Code Optimizations node directly under your .NET web apps in the Azure Resources tree view. This node surfaces performance issues detected by the Application Insights profiler - things like excessive CPU or memory usage caused by specific functions in your code. Each optimization tells you:

- Which function is the bottleneck
- Which parent function is calling it
- What category of resource usage is affected (CPU, memory, etc.)
- The impact as a percentage, so you can prioritize what matters

But we didn't stop at surfacing the data. Click Fix with Copilot on any optimization and the extension will:

1. Locate the problematic code in your workspace by matching function signatures from the profiler stack trace against your local source using VS Code's workspace symbol provider
2. Open the file and highlight the exact method containing the bottleneck
3. Launch a Copilot Chat session pre-filled with a detailed prompt that includes the issue description, the recommendation from Application Insights, the full stack trace context, and the source code of the affected method

By including the stack trace, recommendation, impact data, and the actual source code, the prompt gives Copilot enough signal to produce a meaningful, targeted fix rather than generic advice.
For example, the profiler might surface a LINQ-heavy data transformation consuming 38% of CPU in OrderService.CalculateTotals, called from CheckoutController.Submit. The extension then prompts Copilot with the problem, and Copilot offers a fix.

Prerequisites

- A .NET web app deployed to Azure App Service
- Application Insights connected to your app
- The Application Insights profiler enabled (the extension will prompt you if it's not)

For Windows App Service plans

When creating a new web app through the extension, you'll now see an option to enable the Application Insights profiler. For existing apps, the Code Optimizations node will guide you through enabling profiling if it's not already active.

For Linux App Service plans

Profiling on Linux requires a code-level integration rather than a platform toggle. If no issues are found, the extension provides a prompt to help you add profiler support to your application code.

What's next

This is the first step toward bringing production intelligence directly into the inner development loop. We're exploring how to expand this pattern beyond .NET and beyond performance — surfacing reliability issues, exceptions, and other operational insights where developers can act on them immediately.

Install the latest Azure App Service extension and expand the Code Optimizations node under any .NET web app to try it out. We'd love your feedback - file issues on the GitHub repo. Happy Coding <3

ExpressRoute Gateway Microsoft initiated migration
Important: Microsoft-initiated gateway migrations are temporarily paused. You will be notified when migrations resume.

Objective

The backend migration process is an automated upgrade performed by Microsoft to ensure your ExpressRoute gateways use the Standard IP SKU. This migration enhances gateway reliability and availability while maintaining service continuity. You receive notifications about scheduled maintenance windows and have options to control the migration timeline. For guidance on upgrading Basic SKU public IP addresses for other networking services, see Upgrading Basic to Standard SKU.

Important: As of September 30, 2025, Basic SKU public IPs are retired. For more information, see the official announcement.

You can initiate the ExpressRoute gateway migration yourself at a time that best suits your business needs, before the Microsoft team performs the migration on your behalf. This gives you control over the migration timing. Please use the ExpressRoute Gateway Migration Tool to migrate your gateway public IP to the Standard SKU. This tool provides a guided workflow in the Azure portal and PowerShell, enabling a smooth migration with minimal service disruption.

Backend migration overview

The backend migration is scheduled during your preferred maintenance window. During this time, the Microsoft team performs the migration with minimal disruption. You don’t need to take any action. The process includes the following steps:

1. Deploy new gateway: Azure provisions a second virtual network gateway in the same GatewaySubnet alongside your existing gateway. Microsoft automatically assigns a new Standard SKU public IP address to this gateway.
2. Transfer configuration: The process copies all existing configurations (connections, settings, routes) from the old gateway. Both gateways run in parallel during the transition to minimize downtime. You may experience brief connectivity interruptions.
3. Clean up resources: After migration completes successfully and passes validation, Azure removes the old gateway and its associated connections. The new gateway includes a tag CreatedBy: GatewayMigrationByService to indicate it was created through the automated backend migration.

Important: To ensure a smooth backend migration, avoid making non-critical changes to your gateway resources or connected circuits during the migration process. If modifications are absolutely required, you can choose (after the Migrate stage completes) to either commit or abort the migration and then make your changes.

Backend process details

This section provides an overview of the Azure portal experience during backend migration for an existing ExpressRoute gateway. It explains what to expect at each stage and what you see in the Azure portal as the migration progresses. To reduce risk and ensure service continuity, the process performs validation checks before and after every phase. The backend migration follows four key stages:

1. Validate: Checks that your gateway and connected resources meet all migration requirements for the Basic to Standard public IP migration.
2. Prepare: Deploys the new gateway with the Standard IP SKU alongside your existing gateway.
3. Migrate: Cuts over traffic from the old gateway to the new gateway with a Standard public IP.
4. Commit or abort: Finalizes the public IP SKU migration by removing the old gateway, or reverts to the old gateway if needed.

These stages mirror the Gateway Migration Tool process, ensuring consistency across both migration approaches. The Azure resource group RGA serves as a logical container that displays all associated resources as the process updates, creates, or removes them. Before the migration begins, RGA contains the following resources:

This image uses an example ExpressRoute gateway named ERGW-A with two connections (Conn-A and LAconn) in the resource group RGA.
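The four stages above behave like a small state machine: validation gates entry, abort is only possible before commit, and commit is final. A conceptual Python sketch of the allowed transitions (an illustration of the flow described in this article, not Microsoft tooling; the stage names match the article, the code is hypothetical):

```python
# Conceptual sketch of the backend migration stages described above.
# Transition rules are illustrative only.
ALLOWED = {
    "start":    ["validate"],
    "validate": ["prepare"],           # a failed validation stops the migration
    "prepare":  ["migrate", "abort"],  # abort deletes the new _migrate gateway
    "migrate":  ["commit", "abort"],   # commit removes the old gateway
}

def advance(current: str, nxt: str) -> str:
    """Move to the next stage, rejecting transitions the process doesn't allow."""
    if nxt not in ALLOWED.get(current, []):
        raise ValueError(f"cannot go from {current!r} to {nxt!r}")
    return nxt

stage = "start"
for step in ["validate", "prepare", "migrate", "commit"]:
    stage = advance(stage, step)
print(stage)  # commit
```

The useful property this captures is that you cannot skip ahead (for example, from Validate straight to Migrate), and once Commit runs there is no path back to the old gateway.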
Portal walkthrough

Before the backend migration starts, a banner appears in the Overview blade of the ExpressRoute gateway. It notifies you that the gateway uses the deprecated Basic IP SKU and will undergo backend migration between March 7, 2026, and April 30, 2026.

Validate stage

Once the migration starts, the banner on your gateway's Overview page updates to indicate that migration is in progress. In this initial stage, all resources are checked to ensure they are in a Passed state. If any prerequisites aren't met, validation fails and the Azure team doesn't proceed with the migration, avoiding traffic disruptions. No resources are created or modified in this stage. After the validation phase completes successfully, a notification appears indicating that validation passed and the migration can proceed to the Prepare stage.

Prepare stage

In this stage, the backend process provisions a new virtual network gateway in the same region and of the same SKU type as the existing gateway. Azure automatically assigns a new public IP address and re-establishes all connections. This preparation step typically takes up to 45 minutes. To indicate that the new gateway was created by migration, the backend mechanism appends _migrate to the original gateway name. During this phase, the existing gateway is locked to prevent configuration changes, but you retain the option to abort the migration, which deletes the newly created gateway and its connections. After the Prepare stage starts, a notification appears showing that new resources are being deployed to the resource group.

Deployment status

In the resource group RGA, under Settings → Deployments, you can view the status of all resources newly deployed as part of the backend migration process. In the Activity Log blade of the resource group RGA, you can see events related to the Prepare stage.
These events are initiated by GatewayRP, which indicates they are part of the backend process.

Deployment verification

After the Prepare stage completes, you can verify the deployment details in the resource group RGA under Settings > Deployments. This section lists all components created as part of the backend migration workflow. The new gateway ERGW-A_migrate is deployed successfully along with its corresponding connections, Conn-A_migrate and LAconn_migrate.

Gateway tag

The newly created gateway ERGW-A_migrate includes the tag CreatedBy: GatewayMigrationByService, which indicates it was provisioned by the backend migration process.

Migrate stage

After the Prepare stage finishes, the backend process starts the Migrate stage. During this stage, the process switches traffic from the existing gateway ERGW-A to the new gateway ERGW-A_migrate.

Old gateway (ERGW-A) handles traffic:

After the backend team initiates the traffic migration, the process switches traffic from the old gateway to the new gateway. This step can take up to 15 minutes and might cause brief connectivity interruptions.

New gateway (ERGW-A_migrate) handles traffic:

Commit stage

After migration, the Azure team monitors connectivity for 15 days to ensure everything is functioning as expected. The banner automatically updates to indicate completion of the migration. During this validation period, you can't modify resources associated with either the old or the new gateway. To resume normal CRUD operations without waiting 15 days, you have two options:

Commit: Finalize the migration and unlock resources.

Abort: Revert to the old gateway, which deletes the new gateway and its connections.

To initiate a commit before the 15-day window ends, type yes and select Commit in the portal. When the commit is initiated from the backend, you see "Committing migration. The operation may take some time to complete." The old gateway and its connections are then deleted.
The event shows as initiated by GatewayRP in the activity logs. After the old connections are deleted, the old gateway is deleted. Finally, the resource group RGA contains only the resources related to the migrated gateway ERGW-A_migrate.

The ExpressRoute gateway migration from Basic to Standard public IP SKU is now complete.

Frequently asked questions

How long will the Microsoft team wait before committing to the new gateway?

The Microsoft team waits around 15 days after migration to allow you time to validate connectivity and ensure all requirements are met. You can commit at any time during this 15-day period.

What is the traffic impact during migration? Is there packet loss or routing disruption?

Traffic is rerouted seamlessly during migration. Under normal conditions, no packet loss or routing disruption is expected. Brief connectivity interruptions (typically less than 1 minute) might occur during the traffic cutover phase.

Can we make any changes to the ExpressRoute gateway deployment during the migration?

Avoid making non-critical changes to the deployment (gateway resources, connected circuits, and so on). If modifications are absolutely required, you have the option (after the Migrate stage) to either commit or abort the migration.

Calling all Microsoft Q&A contributors: Join Product Champions Program
🎉 Sign-ups are open for the Microsoft Q&A Product Champions Program (2026)!

✅ Sign up: https://aka.ms/AAzhkru
📘 Learn more + Welcome Guide: https://aka.ms/ProductChampionsWelcome

If you love answering questions and helping others on Microsoft Q&A, we'd love to have you join.
Generative AI is revolutionizing software development: it accelerates delivery, but it introduces compliance and security risks if left unchecked. Tools like GitHub Copilot empower developers to write code faster, automate repetitive tasks, and even generate tests and documentation. But speed without safeguards introduces risk. Unchecked AI-assisted development can lead to security vulnerabilities, data leakage, compliance violations, and ethical concerns. In regulated or enterprise environments, this risk multiplies rapidly as AI scales across teams.

The solution? Guardrails: a structured approach to ensure AI-assisted development remains secure, responsible, and enterprise-ready.

In this blog, we explore how to embed responsible AI guardrails directly into developer workflows using:

Azure AI Content Safety
GitHub Copilot enterprise controls
Copilot Studio governance
Azure AI Foundry
CI/CD and ALM integration

The goal: maximize developer productivity without compromising trust, security, or compliance.

Key points:

Why guardrails matter: AI-generated code may include insecure patterns or violate organizational policies.
Azure AI Content Safety: Provides APIs to detect harmful or sensitive content in prompts and outputs, ensuring compliance with ethical and legal standards.
Copilot Studio governance: Enables environment strategies, Data Loss Prevention (DLP), and role-based access to control how AI agents interact with enterprise data.
Azure AI Foundry: Acts as the control plane for generative AI, turning Responsible AI from policy into operational reality.
Integration with GitHub workflows: Guardrails can be enforced in the IDE, Copilot Chat, and CI/CD pipelines using GitHub Actions for automated checks.
Outcome: Developers maintain productivity while ensuring secure, compliant, and auditable AI-assisted development.
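As a taste of what such an automated check might look like, here is a toy content scanner in the spirit of custom safety categories. The regexes and category names are illustrative assumptions of my own; the real Azure AI Content Safety service is a managed API with far broader and more robust coverage.

```python
import re

# Toy illustration of a guardrail that flags sensitive content in prompts
# or generated code before it reaches a repository. These two patterns are
# assumptions for demonstration only, not production-grade detection.

PATTERNS = {
    "secret": re.compile(r"(?i)(api[_-]?key|password)\s*[:=]\s*\S+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan(text):
    """Return the list of category names the text triggers."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

print(scan("api_key = sk-abc123"))            # ['secret']
print(scan("Customer SSN: 123-45-6789"))      # ['us_ssn']
print(scan("def add(a, b): return a + b"))    # []
```

In practice a check like this would run in a pre-commit hook or a CI job and fail the build when any category is triggered.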
Why Guardrails Are Non-Negotiable

AI-generated code and prompts can unintentionally introduce:

Security flaws — injection vulnerabilities, unsafe defaults, insecure patterns
Compliance risks — exposure of PII, secrets, or regulated data
Policy violations — copyrighted content, restricted logic, or non-compliant libraries
Harmful or biased outputs — especially in user-facing or regulated scenarios

Without guardrails, organizations risk shipping insecure code, violating governance policies, and losing customer trust. Guardrails enable teams to move fast without breaking trust.

The Three Pillars of AI Guardrails

Enterprise-grade AI guardrails operate across three core layers of the developer experience. These pillars are centrally governed and enforced through Azure AI Foundry, which provides lifecycle, evaluation, and observability controls across all three.

1. GitHub Copilot Controls (Developer-First Safety)

GitHub Copilot goes beyond autocomplete and includes built-in safety mechanisms designed for enterprise use:

Duplicate detection: Filters code that closely matches public repositories.
Custom instructions: Enhance coding standards via .github/copilot-instructions.md.
Copilot Chat: Provides contextual help for debugging and secure coding practices.

Pro tip: Use Copilot Enterprise controls to enforce consistent policies across repositories and teams.

2. Azure AI Content Safety (Prompt & Output Protection)

This service adds a critical protection layer across prompts and AI outputs:

Prompt injection detection: Blocks malicious attempts to override instructions or manipulate model behavior.
Groundedness checks: Ensures outputs align with trusted sources and expected context.
Protected material detection: Flags copyrighted or sensitive content.
Custom categories: Tailor filters for industry-specific or regulatory requirements.

Example: A financial services app can block outputs containing PII or regulatory violations using custom safety categories.

3.
Copilot Studio Governance (Enterprise-Scale Control)

For organizations building custom copilots, governance is non-negotiable. Copilot Studio enables:

Data Loss Prevention (DLP): Prevent sensitive data from leaking through risky connectors or channels.
Role-based access control (RBAC): Control who can create, test, approve, deploy, and publish copilots.
Environment strategy: Separate dev, test, and production environments.
Testing kits: Validate prompts, responses, and behavior before production rollout.

Why it matters: Governance ensures copilots scale safely across teams and geographies without compromising compliance.

Azure AI Foundry: The Platform That Operationalizes the Three Pillars

While the three pillars define where guardrails are applied, Azure AI Foundry defines how they are governed, evaluated, and enforced at scale. Azure AI Foundry acts as the control plane for generative AI, turning Responsible AI from policy into operational reality.

What Azure AI Foundry adds:

Centralized guardrail enforcement: Define guardrails once and apply them consistently across models, agents, tool calls, and outputs. Guardrails specify:

Risk types (PII, prompt injection, protected material)
Intervention points (input, tool call, tool response, output)
Enforcement actions (annotate or block)

Built-in evaluation and red-teaming: Azure AI Foundry embeds continuous evaluation into the GenAIOps lifecycle:

Pre-deployment testing for safety, groundedness, and task adherence
Adversarial testing to detect jailbreaks and misuse
Post-deployment monitoring using built-in and custom evaluators

Guardrails are measured and validated, not assumed.

Observability and auditability: Foundry integrates with Azure Monitor and Application Insights to provide:

Token usage and cost visibility
Latency and error tracking
Safety and quality signals
Trace-level debugging for agent actions

Every interaction is logged, traceable, and auditable, supporting compliance reviews and incident investigations.
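The guardrail model above (risk types, intervention points, and enforcement actions) can be sketched as data plus a small evaluation function. The field names and decision logic here are illustrative assumptions of my own, not the Azure AI Foundry API.

```python
from dataclasses import dataclass

# Sketch of a guardrail definition: a risk type, where it intervenes, and
# what enforcement action to take. Purely conceptual; it does not mirror
# the Azure AI Foundry object model.

@dataclass(frozen=True)
class Guardrail:
    risk_type: str           # e.g. "pii", "prompt_injection"
    intervention_point: str  # "input", "tool_call", "tool_response", "output"
    action: str              # "annotate" or "block"

def enforce(guardrails, point, detected_risks):
    """Return ("block" | "annotate" | "allow", matched guardrails)."""
    matched = [g for g in guardrails
               if g.intervention_point == point and g.risk_type in detected_risks]
    if any(g.action == "block" for g in matched):
        return "block", matched     # any blocking rule wins
    if matched:
        return "annotate", matched  # flag but allow through
    return "allow", matched

policy = [
    Guardrail("prompt_injection", "input", "block"),
    Guardrail("pii", "output", "annotate"),
]

print(enforce(policy, "input", {"prompt_injection"})[0])  # block
print(enforce(policy, "output", {"pii"})[0])              # annotate
```

The key design point this captures is that "block" always dominates "annotate" at a given intervention point, so a single strict rule cannot be silently downgraded by a softer one.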
Identity-First Security for AI Agents

Each AI agent operates as a first-class identity backed by Microsoft Entra ID:

No secrets embedded in prompts or code
Least-privilege access via Azure RBAC
Full auditability and revocation

Policy-Driven Platform Governance

Azure AI Foundry aligns with the Azure Cloud Adoption Framework, enabling:

Azure Policy enforcement for approved models and regions
Cost and quota controls
Integration with Microsoft Purview for compliance tracking

How to Implement Guardrails in Developer Workflows

Shift security left: Embed guardrails directly into the IDE using GitHub Copilot and Azure AI Content Safety APIs to catch issues early, when they're cheapest to fix.

Automate compliance in CI/CD: Integrate automated checks into GitHub Actions to enforce policies at pull-request and build stages.

Monitor continuously: Use Azure AI Foundry and governance dashboards to track usage, violations, and policy drift.

Educate developers: Conduct readiness sessions and share best practices so developers understand why guardrails exist, not just how they're enforced.

Implementing DLP Policies in Copilot Studio

Access the Power Platform admin center: Navigate to the Power Platform admin center and ensure you have the Tenant Admin or Environment Admin role.

Create a DLP policy: Go to Data Policies → New Policy and define data groups: Business (trusted connectors), Non-business, and Blocked (for example, HTTP or social channels).

Configure enforcement for Copilot Studio: Enable DLP enforcement for copilots using PowerShell:

    Set-PowerVirtualAgentsDlpEnforcement `
        -TenantId <tenant-id> `
        -Mode Enabled

Modes: Disabled (default, no enforcement), SoftEnabled (blocks updates), and Enabled (full enforcement).

Apply the policy to environments: Choose the scope: all environments, specific environments, or exclude certain environments. Block channels (for example, Direct Line, Teams, Omnichannel) and connectors that pose risk.

Validate and monitor: Use Microsoft Purview audit logs for compliance tracking.
Configure user-friendly DLP error messages with admin contact details and "Learn More" links for makers.

Implementing ALM Workflows in Copilot Studio

Environment strategy: Use Managed Environments for structured development. Separate dev, test, and prod clearly. Assign roles for makers and approvers.

Application lifecycle management (ALM): Configure solution-aware agents for packaging and deployment. Use Power Platform pipelines for automated movement across environments.

Govern publishing: Require admin approval before publishing copilots to the organizational catalog. Enforce role-based access and connector governance.

Integrate compliance controls: Apply Microsoft Purview sensitivity labels and enforce retention policies. Monitor telemetry and usage analytics for policy alignment.

Key Takeaways

Guardrails are essential for safe, compliant AI-assisted development.
Combine GitHub Copilot productivity with Azure AI Content Safety for robust protection.
Govern agents and data using Copilot Studio.
Azure AI Foundry operationalizes Responsible AI across the full GenAIOps lifecycle.
Responsible AI is not a blocker; it's an enabler of scale, trust, and long-term innovation.

Azure Event Grid MQTT Broker: Enterprise-Grade Messaging for the Connected World
Azure Messaging · What's New · March 2026

The Enterprise MQTT Broker for Every Connected Ecosystem

Scale to millions of devices, enforce zero-trust security, and route real-time events across every service, system, and application — all on Azure's hyperscaler-grade messaging backbone.

1,000 msg/sec/session · 5M+ concurrent connections · 1 MB large message support · TLS 1.2+ enforced encryption

The Modern Broker for Every Connected Ecosystem

Whether you're connecting vehicles, factory floors, edge devices, retail infrastructure, financial systems, or cloud-native services, Azure Event Grid MQTT Broker is the enterprise messaging backbone that scales from prototype to planet. With deep Azure integration, full MQTT compliance, and hyperscaler-grade security, it's the broker you ship on when it truly matters.

Core Protocol

📡 Full MQTT Protocol Coverage

End-to-end MQTT compliance across all versions, transports, and messaging patterns — so any client, any device, any stack just connects.

MQTT v3.1.1 — full compliance
MQTT v5.0 — rich features + user properties
TCP transport — low-latency, always-on
WebSocket — browser + web-native
HTTP publish — REST-based ingestion

💬 MQTT v5 Enhancements (GA): Richer error signaling, user properties, message expiry, and request–response patterns built in.

🌐 HTTP Publish (REST Bridge) (GA): Non-MQTT services publish via HTTPS. Ideal for REST backends, legacy systems, and webhooks joining real-time workflows.

🔀 Shared Subscriptions (Preview): Load-balance messages across consumer groups. Scale processing horizontally without duplication.

Zero-Trust Security

🔒 Enterprise Authentication Stack

Multi-layered security for every fleet size, from embedded devices to enterprise IAM platforms. Every connection authenticated, every access authorized.

🏛️ Microsoft Entra ID / OAuth 2.0 JWT (GA): Authenticate via any OIDC-compliant identity provider — Entra ID, Auth0, custom IAM platforms.

📜 X.509 Certificate Authentication: Hardware-rooted identity for devices.
Mutual TLS, cert fingerprint validation, and PKI integration. (GA)

🪝 Custom Webhook Authentication (GA): Dynamically validate clients via Azure Functions or external services. SAS keys, API keys, cert fingerprints — full programmatic control.

🌐 TLS 1.2+ Enforced (GA): Transport-layer encryption enforced by default. No downgrade paths. Compliant with regulated industries.

Assigned Client Identifiers — Deterministic Identity

Pre-assign approved MQTT client IDs for session continuity, enhanced diagnostics, and audit trails. Critical for regulated industries, long-lived device connections, and operational compliance across large fleets.

Hyperscaler Performance

⚡ Built for Massive Scale

From startup-scale prototypes to workloads with up to 5M+ concurrent device connections, Event Grid MQTT Broker grows with your ambitions.

1,000 messages/sec/session (ingress): High-frequency industrial and automotive signal processing at full speed, per session. (See Quotas & Limits.)

15 topic segments (GA): Deep hierarchical topic modeling for structured fleets, factories, and telemetry pipelines. (See Quotas & Limits.)

🔢 1 MB Large Message Support (Preview): Send high-res images, video frames, and large telemetry batches. No pre-chunking needed.

📈 Auto Scale Up & Down (Coming soon in preview): Elastic namespace scaling that responds to demand in real time — pay for what you use, never throttle a workload.

🌐 IPv6 Support (Coming soon in preview): Native dual-stack networking for modern infrastructure, next-gen telecom, and global device deployments.

🚀 Up to 5M+ Connections: Running large-scale workloads? Talk to us — we'll onboard your production workload with dedicated hands-on support.

📦 Bulk Client Onboarding API (Preview): Register thousands of devices in a single API call — credentials, certificates, and auth rules in batch. CI/CD-native.

📤 High Egress Throughput: Up to 500 msg/sec/instance egress for high fan-out scenarios. Perfect for dashboards, monitoring, and analytics consumers.
(Preview)

Azure Native Integration

🔀 MQTT Events, Everywhere in Azure

Route MQTT streams directly into Azure's full analytics, automation, and real-time intelligence stack — seamlessly connected to every Azure service you already use: 🌊 Fabric Eventstreams, 📊 Azure Data Explorer, 📨 Event Hubs, ⚙️ Azure Functions, 🔗 Logic Apps.

🌟 First-Class Microsoft Fabric Integration

Route MQTT messages and CloudEvents directly from Event Grid Namespaces to Fabric Eventstreams — enabling real-time analytics, storage, and visualization of IoT data without an Event Hubs intermediary. Reference architectures for every industrial vertical are available on Microsoft Learn and via the RTI Reference Architectures.

State & Presence

💡 Last Will & Testament + Retained Messages

Know the last known state of every device, and be instantly notified when something goes offline — essential for mission-critical connected systems.

📣 Last Will & Testament (LWT) (GA): Immediately notify downstream subscribers when a device disconnects unexpectedly. Critical for industrial automation, fleet monitoring, and health-critical telemetry.

📌 Retained Messages (GA): Subscribers receive the latest device value immediately on connect. Configurable expiry, on-demand clearing, a Get/List API, and a portal experience are coming soon.

Industry 4.0

🏭 Sparkplug B on Azure — Smart Factory, Unlocked

The industrial MQTT standard runs natively on Event Grid MQTT Broker, bringing real-time factory-floor intelligence to Azure analytics pipelines. Learn more about Sparkplug B support.

🔴 Device Lifecycle (BIRTH/DEATH): Track when machines come online and offline. LWT handles unexpected disconnects with zero configuration.

🔧 SCADA Integration: Auto-discover tags in Ignition SCADA with Cirrus Link Chariot. Seamless edge-to-cloud tag sync and live machine vitals.

📊 Azure Data Explorer / Fabric: Industrial binary Sparkplug payloads ingested into Azure Data Explorer or Fabric for real-time dashboards and automated alerting.
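Deep topic hierarchies like the ones above are matched against subscription filters using the standard MQTT wildcards: `+` matches exactly one segment, and `#` (allowed only as the final segment) matches the remainder of the topic. A minimal matcher sketch follows; it illustrates the spec's wildcard rules and deliberately ignores edge cases such as `$`-prefixed system topics.

```python
def topic_matches(filter_, topic):
    """Return True if an MQTT topic filter matches a topic name.

    '+' matches exactly one segment; '#' (final segment only) matches
    the rest of the topic, including zero remaining segments.
    """
    f_parts = filter_.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            # '#' is only valid as the last segment of the filter.
            return i == len(f_parts) - 1
        if i >= len(t_parts):
            return False  # filter is deeper than the topic
        if f != "+" and f != t_parts[i]:
            return False  # literal segment mismatch
    # Without a trailing '#', segment counts must match exactly.
    return len(f_parts) == len(t_parts)

print(topic_matches("factory/+/temperature", "factory/line1/temperature"))  # True
print(topic_matches("factory/#", "factory/line1/motor/rpm"))                # True
print(topic_matches("factory/+", "factory/line1/motor"))                    # False
```

With up to 15 topic segments available, a hierarchy such as plant/area/line/cell/machine/signal can be filtered at any depth with a single `+` or `#` subscription.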
Edge to Cloud: How It Flows

1. Sensors on the factory floor: Machines publish temperature, RPM, and status as Sparkplug B messages via edge gateways.
2. Azure Event Grid MQTT Broker ingests: QoS 1 delivery, TLS secured, with LWT for device-state awareness and retained messages for new subscribers.
3. SCADA auto-discovers tags: Ignition SCADA with Cirrus Link Chariot sees all published tags in real time — zero manual configuration.
4. Azure Data Explorer / Fabric analytics: The same stream is bifurcated — a live operational view plus cloud-scale analytics for predictive maintenance and automated alerting.

Operational Visibility

📊 Deep Metrics for Every Deployment

Full operational telemetry across all broker functions — detect issues, optimize throughput, and ensure SLA compliance at any scale.

⏱️ CONNACK / PUBACK latency: Measure end-to-end connection and publish acknowledgment times. Baseline and alert on latency degradation.

✅ Publish / subscribe success rates: Track success, failure, and throttling events per topic and per session to quickly isolate problems.

🔌 Active connections: Real-time view of live connections, throughput, and session health across your entire device fleet.

Customer Impact

🌍 Built for Every Connected Industry

From connected vehicles to financial trading infrastructure to smart retail, Event Grid MQTT Broker powers the most demanding workloads across every sector.
🚗 Connected Automotive
→ Real-time vehicle telemetry at fleet scale
→ Over-the-air event routing and signaling
→ Edge-to-cloud V2X messaging pipelines
→ TLS/mTLS for per-vehicle identity
→ High-frequency CAN bus signal ingestion

🏭 Smart Manufacturing
→ Sparkplug B + SCADA tag auto-discovery
→ Predictive maintenance event pipelines
→ Factory-floor machine state monitoring
→ Bulk device onboarding for plant rollouts
→ QoS 1 delivery for critical control signals

🖥️ DCIM / Data Centre
→ Real-time rack power and thermal monitoring
→ High-frequency sensor event ingestion
→ Automated alerting and threshold triggers
→ Secure multi-tenant device namespaces
→ Fabric Eventstreams for real-time dashboards

🏦 Finance & Payments
→ Low-latency event streaming for trading signals
→ Real-time payment status notifications
→ Fraud detection event pipelines
→ OAuth 2.0 / Entra ID for regulated access
→ Audit-ready assigned client identifiers

🛒 Retail & Commerce
→ Real-time inventory and shelf-sensor updates
→ Point-of-sale event routing at scale
→ Connected checkout and payment device streams
→ In-store IoT device fleet management
→ Live promotions and pricing event broadcast

🏢 Smart Buildings
→ HVAC, lighting, and energy sensor telemetry
→ Access control and occupancy event streams
→ Elevator and facility equipment monitoring
→ Multi-tenant building namespace isolation
→ Retained messages for last-known device state

What's Next

🔭 Coming Soon to Preview

These capabilities are entering preview soon — start evaluating them for your workloads today.

Large Message Packets (1 MB) (Preview): Send images, video frames, and large telemetry batches without chunking. Streamlines edge-to-cloud ingestion.

Shared Subscriptions (Preview): Load-balance across consumer groups for parallel message processing at scale — no duplication, built-in fault tolerance.

Subscription Identifier (Preview): MQTT v5 compliance — tag subscriptions for routing, multiplexing, and fine-grained handler dispatch.
Bulk Client Onboarding API (Preview): Provision entire device fleets in a single API call. Automate with CI/CD pipelines for factory and field deployments.

Auto Scale Up & Down (Coming soon): Elastic namespace scaling that automatically expands and contracts broker capacity in response to real-time load.

IPv6 Support (Coming soon): Dual-stack connectivity for next-gen networks, modern telecom infrastructure, and global IoT deployments.

Want early access to any of these previews? We're actively onboarding early customers for all upcoming preview capabilities. Whether you're running a connected fleet, an industrial workload, or a large-scale IoT deployment, get in touch and we'll get you started.

Ready to Connect Everything?

Azure Event Grid MQTT Broker is generally available. Start building enterprise-grade connected ecosystems today, or talk to us about your production workload.