Typical Storage access issues troubleshooting
We receive a large number of cases where the Storage Account connection fails, and we often see that customers are not aware of the troubleshooting steps they can take to accelerate the resolution of the issue. As such, we've compiled some scenarios and the usual troubleshooting steps we ask you to take. Always remember that if you have made changes to your infrastructure, consider rolling them back to confirm they are not the root cause. Even a small change that apparently has no effect may cause downtime in your application.

Common messages

The errors shown in the portal when Storage Account connectivity is down are very similar, and they may not correctly indicate the cause. Error messages that surface in the portal for Logic Apps Standard include:

- System.Private.CoreLib: Access to the path 'C:\home\site\wwwroot\host.json' is denied
- Cannot reach host runtime. Error details, Code: 'BadRequest', Message: 'Encountered an error (InternalServerError) from host runtime.'
- System.Private.CoreLib: The format of the specified network name is invalid. : 'C:\\home\\site\\wwwroot\\host.json'
- System.Private.CoreLib: The user name or password is incorrect. : 'C:\home\site\wwwroot\host.json'
- Microsoft.Windows.Azure.ResourceStack: The SSL connection could not be established, see inner exception.
- System.Net.Http: The SSL connection could not be established, see inner exception.
- System.Net.Security: Authentication failed because the remote party has closed the transport stream
- Unexpected error occurred while loading workflow content and artifacts

These errors don't really indicate the root cause, but very commonly the cause is a broken connection with the Storage Account.

What to verify?

There are four major components to verify in these cases:

- Logic App environment variables and network settings
- Storage Account networking settings
- Network settings
- DNS settings

Logic App environment variables and Network

From an app settings point of view, there is not much to verify, but these are important steps that are sometimes overlooked. At this time, all or nearly all Logic Apps have been migrated to the dotnet FUNCTIONS_WORKER_RUNTIME (under the Environment variables tab), but this is good to confirm. It's also good to confirm that your Platform setting is set to 64 Bit (under Configuration tab > General settings). We've seen that some deployments use old templates that set this to 32 Bit, which doesn't make full use of the available resources.

Check if the Logic App has the following environment variables set: WEBSITE_CONTENTOVERVNET set to 1, or WEBSITE_VNET_ROUTE_ALL set to 1, or the site property vnetRouteAllEnabled set to true. See: Configure virtual network integration with application and configuration routing - Azure App Service | Microsoft Learn. These settings can also be replaced with the UI setting in the Virtual Network tab, when you select "Content Storage" in the Configuration routing section. For better understanding, vnetContentShareEnabled takes precedence: if it is set (true/false), WEBSITE_CONTENTOVERVNET is ignored. Only if vnetContentShareEnabled is null is WEBSITE_CONTENTOVERVNET taken into account.
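As a quick check, you can list these settings with the Azure CLI. This is a minimal sketch, assuming a hypothetical Standard logic app named my-logic-app in resource group my-rg; since Standard logic apps are built on the Functions runtime, the functionapp commands apply:

# List the content/VNet-related app settings
az functionapp config appsettings list \
  --name my-logic-app \
  --resource-group my-rg \
  --query "[?contains(name, 'WEBSITE_') || contains(name, 'AzureWebJobsStorage')].{name:name, value:value}" \
  -o table

# If needed, route content share traffic through the VNet
az functionapp config appsettings set \
  --name my-logic-app \
  --resource-group my-rg \
  --settings WEBSITE_CONTENTOVERVNET=1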
Also keep this in mind: Storage considerations for Azure Functions | Microsoft Learn

- WEBSITE_CONTENTAZUREFILECONNECTIONSTRING and AzureWebJobsStorage must contain the connection string as shown in the Storage Account (WEBSITE_CONTENTAZUREFILECONNECTIONSTRING | App settings reference for Azure Functions | Microsoft Learn; AzureWebJobsStorage | App settings reference for Azure Functions | Microsoft Learn)
- WEBSITE_CONTENTSHARE must contain the file share name (WEBSITE_CONTENTSHARE | App settings reference for Azure Functions | Microsoft Learn)

These are the first points to validate.

Storage Account settings

If all of these are matching and properly configured and the Logic App is still in error, we move to the next step: validating the Storage Account network settings. When the Storage Account does not have VNet integration enabled, there should be no issues, because the connection is made through the public endpoints. Even so, you must ensure that at least "Allow storage account key access" is enabled. This is because, at this time, the Logic App depends on the access key to connect to the Storage Account. Although you can set AzureWebJobsStorage to use Managed Identity, you can't fully disable storage account key access for Standard logic apps that use the Workflow Service Plan hosting option. With the ASE v3 hosting option, however, you can disable storage account key access after you finish the steps to set up managed identity authentication. Create example Standard workflow in Azure portal - Azure Logic Apps | Microsoft Learn

If this setting is enabled, check whether the Storage Account is behind a firewall. Access may be enabled for selected networks or fully disabled; both options require Service Endpoints or Private Endpoints to be configured. Deploying Standard Logic App to Storage Account behind Firewall using Service or Private Endpoints | Microsoft Community Hub

So check the Networking tab under the Storage Account and confirm the following:

- If you select the "selected networks" option, confirm that the VNet is the same one the Logic App is integrated with. Your Logic App and Storage Account may be hosted in different VNets, but you must ensure there is full connectivity between them: they must be peered, with HTTPS and SMB traffic allowed (explained further in the Network section). You can select "Disabled" network access as well.
- Confirm that the file share is created. Usually it is created automatically with the Logic App, but if you deploy with Terraform or ARM, the file share may not be created and you must create it manually.
- Confirm that all four Private Endpoints are created and approved (File, Table, Queue, and Blob). All of these resources are used by different components of the Logic App. This is not fully documented, as it is internal engine documentation and not publicly available; for Azure Functions, the runtime base, it is partially documented in Storage considerations for Azure Functions | Microsoft Learn. If a Private Endpoint is missing, create it and link it to the VNet as a shared resource. Missing Private Endpoints can result in runtime errors, connection errors, or trigger failures. For example, if a workflow saves correctly but does not generate its URL, the Table and Queue Private Endpoints may be missing, as we've seen many times with customers.
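If you prefer to verify these points from the command line, here is a minimal sketch using the Azure CLI, assuming a hypothetical storage account mystorageacct in resource group my-rg:

# Confirm the file share exists (management plane, so it works even behind the firewall)
az storage share-rm list \
  --storage-account mystorageacct \
  --resource-group my-rg \
  -o table

# List the private endpoint connections on the storage account and their approval state
az storage account show \
  --name mystorageacct \
  --resource-group my-rg \
  --query "privateEndpointConnections[].{endpoint:privateEndpoint.id, status:privateLinkServiceConnectionState.status}" \
  -o table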
You can read a bit more about integrating a Logic App with a firewall-secured Storage Account, and the required configuration, in these articles: Secure traffic between Standard workflows and virtual networks - Azure Logic Apps | Microsoft Learn; Deploy Standard logic apps to private storage accounts - Azure Logic Apps | Microsoft Learn

You can use the Kudu console (Advanced Tools tab) to further troubleshoot the connection with the Storage Account using some network troubleshooting commands. If the Kudu console is not available, we recommend using a VM in the same VNet as the Logic App to mimic the scenario.

nslookup [hostname or IP] [DNS host IP]
tcpping [hostname or IP]:[PORT]
Test-NetConnection [hostname] -Port [PORT]

If you have a custom DNS, the nslookup command will not return results from your DNS unless you specify its IP address as a parameter. Alternatively, you can use the nameresolver command, which uses the VNet DNS settings to check the endpoint name resolution.

nameresolver [endpoint hostname or IP address]

Networking Related Commands for Azure App Services | Microsoft Community Hub

Vnet configuration

Having a Private Endpoint configured for the Logic App does not affect traffic to the Storage Account. This is because the PE applies only to inbound traffic; the Storage communication is considered outbound traffic, as it's the Logic App that actively communicates with the Storage Account. Secure traffic between Standard workflows and virtual networks - Azure Logic Apps | Microsoft Learn

So consider that the link between these resources must not be interrupted. The Logic App uses both HTTPS and SMB protocols to communicate with the Storage Account, meaning that traffic on ports 443 and 445 must be fully allowed in your VNet. If you have a Network Security Group associated with the Logic App subnet, confirm that its rules allow this traffic. You may need to explicitly create rules such as:

Source port: * | Destination port: 443 | Source: subnet integrated with the Standard logic app | Destination: Storage account | Protocol: TCP | Purpose: Storage account
Source port: * | Destination port: 445 | Source: subnet integrated with the Standard logic app | Destination: Storage account | Protocol: TCP | Purpose: Server Message Block (SMB) file share

If you have forced routing to a Network Virtual Appliance (i.e. a firewall), you must also ensure that this resource is not filtering or blocking the traffic. TLS inspection must also be disabled in your firewall for the Logic App traffic. In short, this is because the firewall replaces the certificate in the message, so the Logic App does not recognize the returned certificate, invalidating the message. You can read more about TLS inspection here: Azure Firewall Premium features | Microsoft Learn

DNS

If you are using Azure DNS, this section should not apply, because all records are created automatically when you create the resources. If you're using a custom DNS, however, the IP address won't be registered in your DNS when you create the Azure resource (e.g. a Storage Private Endpoint), so you must register it manually. You must ensure that all A records are created and maintained, keeping in mind that they need to point to the correct IP and name. If there are mismatches, you may see communication severed between the Logic App and other resources, such as the Storage Account. So double-check all DNS records and confirm that everything is in the proper state and place.
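As a concrete example, here is how those commands might be used from Kudu or a jump-box VM, against a hypothetical storage account named mystorageacct:

# Resolve the file endpoint; behind a private endpoint this should return the PE's private IP
nameresolver mystorageacct.file.core.windows.net

# Test HTTPS (443) and SMB (445) connectivity to the storage endpoints
tcpping mystorageacct.blob.core.windows.net:443
Test-NetConnection mystorageacct.file.core.windows.net -Port 445

If name resolution returns the public IP where you expect a private one, jump to the DNS section below; if resolution is correct but the port tests fail, focus on the NSG and firewall checks.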
And to make it even easier, with the help of my colleague Mohammed_Barqawi, this information is now translated into an easy-to-understand flowchart. If you continue to have issues after all these steps are verified, I suggest you open a case with us, so that we can validate what else may be happening, because either a step may have been missed or some other issue may be occurring.

Boosting Hybrid Cloud Data Efficiency for EDA: The Power of Azure NetApp Files cache volumes
Electronic Design Automation (EDA) is the foundation of modern semiconductor innovation, enabling engineers to design, simulate, and validate increasingly sophisticated chip architectures. As designs push the boundaries of PPA (power, performance, and area) to meet escalating market demands, the volume of associated design data has surged exponentially, with a single System-on-Chip (SoC) project generating multiple petabytes of data during its development lifecycle, making data mobility and accessibility critical bottlenecks. To overcome these challenges, Azure NetApp Files (ANF) cache volumes are purpose-built to optimize data movement and minimize latency, delivering high-speed access to massive design datasets across distributed environments. By mitigating data gravity, Azure NetApp Files cache volumes empower chip designers to leverage cloud-scale compute resources on demand and at scale, accelerating innovation without being constrained by physical infrastructure.

Defender Entity Page w/ Sentinel Events Tab
One device is displaying the Sentinel Events tab, while the other is not. The only difference observed is that one device is Azure AD (AAD) joined and the other is domain joined. Could this difference account for the missing Sentinel events data? Any insight would be appreciated!

Announcing the General Availability (GA) of the Premium v2 tier of Azure API Management
Superior capacity, the highest entity limits, unlimited included calls, and the most comprehensive set of features set the Premium v2 tier apart from other API Management tiers. Customers rely on the Premium v2 tier for running enterprise-wide API programs at scale, with high availability and performance.

The Premium v2 tier has a new architecture that eliminates management traffic from the customer VNet, making private networking much more secure and easier to set up. During the creation of a Premium v2 instance, you can choose between the VNet injection and VNet integration (introduced in the Standard v2 tier) options. In addition, today we are also adding three new features to Premium v2:

- Inbound Private Link: You can now enable private endpoint connectivity to restrict inbound access to your Premium v2 instance. It can be enabled along with VNet injection, with VNet integration, or without a VNet.
- Availability zone support: Premium v2 now supports availability zones (zone redundancy) to enhance the reliability and resilience of your API gateway.
- Custom CA certificates: The Azure API Management v2 gateway can now validate TLS connections with the backend service using custom CA certificates.

New and improved VNet injection

Using VNet injection in Premium v2 no longer requires configuring routes or service endpoints. Customers can secure their API workloads without impacting API Management dependencies, while Microsoft can secure the infrastructure without interfering with customer API workloads. In short, the new VNet injection implementation enables both parties to manage network security and configuration settings independently and without affecting each other. You can now configure your APIs with complete networking flexibility: force-tunnel all outbound traffic to on-premises, send all outbound traffic through an NVA, or add a WAF device to monitor all inbound traffic to your API Management Premium v2 instance, all without constraints.

Inbound Private Link

Customers can now configure an inbound private endpoint for their API Management Premium v2 instance to let API consumers securely access the API Management gateway over Azure Private Link. The private endpoint uses an IP address from the Azure virtual network in which it's hosted. Network traffic between a client on your private network and API Management traverses the virtual network and a Private Link on the Microsoft backbone network, eliminating exposure to the public internet. Further, you can configure custom DNS settings or an Azure DNS private zone to map the API Management hostname to the endpoint's private IP address.

With a private endpoint and Private Link, you can:

- Create multiple Private Link connections to an API Management instance.
- Use the private endpoint to send inbound traffic on a secure connection.
- Apply different API Management policies based on whether traffic comes from the private endpoint.
- Limit incoming traffic to private endpoints only, preventing data exfiltration.
- Combine with inbound virtual network injection or outbound virtual network integration to provide end-to-end network isolation of your API Management clients and backend services.

More details can be found here. Today, only the API Management instance's Gateway endpoint supports inbound Private Link connections. Each API Management instance can support at most 100 Private Link connections.
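As an illustration, an inbound private endpoint can also be created with the Azure CLI. This is a minimal sketch, not the full procedure: all resource names (my-apim, my-rg, my-vnet, pe-subnet) are hypothetical, and it assumes the Gateway group ID noted above:

# Create a private endpoint targeting the API Management gateway
az network private-endpoint create \
  --name my-apim-pe \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --subnet pe-subnet \
  --private-connection-resource-id $(az apim show --name my-apim --resource-group my-rg --query id -o tsv) \
  --group-id Gateway \
  --connection-name my-apim-pe-conn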
Availability zones

Azure API Management Premium v2 now supports availability zone (AZ) redundancy to enhance the reliability and resilience of your API gateway. When deploying an API Management instance in an AZ-enabled region, users can choose to enable zone redundancy. This distributes the service's units, including the gateway, management plane, and developer portal, across multiple physically separate AZs within that region. Learn how to enable AZs here.

CA certificates

If the API Management gateway needs to connect to backends secured with TLS certificates issued by private certificate authorities (CAs), you need to configure custom CA certificates in the API Management instance. Custom CA certificates can be added and managed as authorization credentials in the Backend entities. The Backend entity has been extended with new properties that allow customers to specify a list of certificate thumbprints, or subject name and issuer thumbprint pairs, that the gateway should trust when establishing a TLS connection with the associated backend endpoint. More details can be found here.

Region availability

The Premium v2 tier is now generally available in six public regions (Australia East, East US 2, Germany West Central, Korea Central, Norway East, and UK South), with additional regions coming soon. For pricing information and regional availability, please visit the API Management pricing page.

Learn more

API Management v2 tiers FAQ
API Management v2 tiers documentation
API Management overview documentation

Optimizing Azure DevOps Jira Integration: 5 Practical Use Cases for DevOps Teams
Many teams rely on Azure DevOps (ADO) for development and Jira for project or product management. While each tool is powerful on its own, things often get messy when work items, statuses, and updates live in separate systems. Integrating the two platforms can remove a lot of friction. Below are five common use cases I have seen from real teams, with concrete problems and solutions to make the connection between Jira and Azure DevOps work smoothly.

1. Keeping User Stories and Bugs in Sync

Challenge: Teams use Jira for user requests and Azure DevOps for development tasks. Manually updating both systems is tedious and error-prone.

Solution: Enable two-way synchronization so that changes in Jira automatically reflect in Azure DevOps and vice versa (including comments and status updates). This keeps bugs and stories aligned without duplicate work.

“Before we integrated Jira with Azure DevOps, I spent too much time manually updating task statuses in both systems. Now, with the automatic sync, my team is focused on actual coding work instead of managing project statuses across platforms.” — DevOps Engineer

2. One-Way Sync for Project Management–First Teams

Challenge: Some organizations plan and track everything in Jira but manage code exclusively in Azure DevOps. Developers only need the essentials pushed across.

Solution: Use a one-way sync from Jira → Azure DevOps to bring over metadata like titles, statuses, sprints, and due dates. Developers see the context they need without cluttering both systems with manual updates.

“We rely on Jira for all project planning and management, but the developers need a clean workspace in Azure DevOps. A one-way sync from Jira to ADO helps us keep things efficient and ensures developers always have the latest information without double entry.” — Product Owner

3. Creating Jira Tickets from Azure DevOps Tasks or Bugs

Challenge: External partners or stakeholders may only work in Jira Service Management to manage tickets. Developers in Azure DevOps often need their work mirrored for transparency.

Solution: Configure automated ticket creation in Jira when certain ADO tasks are tagged (a minimal sketch appears at the end of this article). Both teams can track progress in their preferred tool without duplicating effort.

“We use Azure DevOps internally, but our external stakeholders only work in Jira. Automating the creation of Jira tickets based on Azure DevOps tasks or bugs has made collaboration seamless and ensured no work is lost in translation.” — DevOps Lead

4. Syncing Epics, Features, and Work Items

Challenge: High-level epics might live in Jira, while features and tasks are managed in Azure DevOps. Without integration, visibility across systems is fragmented.

Solution: Sync epics and features so Jira provides portfolio-level visibility, while Azure DevOps remains the system of record for detailed development work. This keeps roadmaps and execution aligned.

“Tracking epics in Jira while managing the technical work in Azure DevOps used to cause us to lose visibility. Now, everything from high-level epics to individual tasks is in sync, so we always know where we stand.” — Azure DevOps Product Manager

5. Managing Multiple Jira Projects with One Azure DevOps Project

Challenge: Large organizations often run multiple Jira projects (by team or business unit) but only one Azure DevOps project for development. Syncing everything consistently is tough.

Solution: Map multiple Jira projects to a single Azure DevOps project, syncing only the key data (titles, statuses, sprints, custom fields).
This creates a unified development view without losing project-specific details.

“We have multiple teams using different Jira projects, but we consolidate all development work into a single Azure DevOps project. Syncing across these platforms used to be a nightmare, but now everything stays aligned, and we’re able to track all initiatives in one place.” — Azure DevOps Engineer

💬 Have you integrated Jira with Azure DevOps in your team? What worked well, and what challenges did you run into?
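For use case 3, here is a minimal sketch of what automated ticket creation could look like: a handler that receives an Azure DevOps "work item created" service hook and files an issue through the Jira Cloud REST API. The JIRA_BASE_URL and PROJECT_KEY values and the function name are hypothetical, and error handling is reduced to the essentials; most teams would use an off-the-shelf connector, but this shows how little glue a custom bridge needs:

// Assumes Node 18+ (global fetch) and Jira Cloud basic auth with an API token.
const JIRA_BASE_URL = 'https://your-domain.atlassian.net'; // hypothetical Jira site
const PROJECT_KEY = 'SUP';                                 // hypothetical Jira project key
const auth = Buffer.from(`${process.env.JIRA_USER}:${process.env.JIRA_API_TOKEN}`).toString('base64');

// Called with the JSON body that the ADO service hook posts to your endpoint
async function createJiraIssueFromAdo(serviceHookPayload) {
  const workItem = serviceHookPayload.resource;
  const fields = workItem.fields || {};

  const response = await fetch(`${JIRA_BASE_URL}/rest/api/3/issue`, {
    method: 'POST',
    headers: {
      'Authorization': `Basic ${auth}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      fields: {
        project: { key: PROJECT_KEY },
        issuetype: { name: 'Task' },
        summary: `[ADO #${workItem.id}] ${fields['System.Title']}`,
        // Jira Cloud API v3 expects Atlassian Document Format for descriptions
        description: {
          type: 'doc',
          version: 1,
          content: [{
            type: 'paragraph',
            content: [{ type: 'text', text: `Mirrored from Azure DevOps work item ${workItem.id}` }]
          }]
        }
      }
    })
  });

  if (!response.ok) throw new Error(`Jira returned ${response.status}`);
  return response.json();
}

module.exports = { createJiraIssueFromAdo };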
Preview: Govern, Secure, and Observe A2A APIs with Azure API Management

Today, we’re announcing preview support for A2A (Agent2Agent) APIs in Azure API Management. With this capability, organizations can now manage and govern agent APIs alongside AI model APIs, Model Context Protocol (MCP) tools, and traditional APIs such as REST, SOAP, GraphQL, WebSocket, and gRPC, all within a single, consistent API management plane.

Extending API Governance into the Agentic Ecosystem

As organizations adopt agentic systems, the need for consistent governance, security, and observability grows. With A2A API support, Azure API Management enables you to extend established API practices into the agentic world, ensuring secure access, consistent policy enforcement, and complete visibility for AI agents.

A2A APIs in Azure API Management:

- Mediate JSON-RPC runtime operations with policy support
- Expose and manage agent cards for users, clients, or other agents
- Support OpenTelemetry GenAI semantic conventions when logging traces to Application Insights, including "gen_ai.agent.id" and "gen_ai.agent.name" attributes

How It Works

When you import an A2A API, API Management mediates runtime calls to your agent backend (JSON-RPC only) and exposes the agent card as an operation within the same API (a request sketch appears at the end of this post). The agent card is transformed automatically to represent the A2A API managed by API Management: the hostname is replaced by API Management's gateway address, security schemes are converted to the authentication configured in API Management, and unsupported interfaces are removed. When integrated with Application Insights, API Management enriches traces with GenAI-compliant telemetry attributes, allowing easy identification of the agent and deep correlation between API and agent execution traces for monitoring and debugging.

Try It Out

To import an A2A API:

1. Navigate to the APIs page in the Azure portal and select the A2A Agent tile.
2. Enter your agent card URL. If accessible, the portal will automatically populate relevant settings.
3. Configure the remaining properties, such as the API path in API Management.

This functionality is currently available only in the v2 tiers of API Management and will continue to roll out to all tiers in the coming months.

Start Managing Your Agent APIs

With A2A support in Azure API Management, you can now bring agent APIs under the same governance and security umbrella as your existing APIs, strengthening control, security, and observability across your AI and API ecosystems. Learn more about A2A API support in Azure API Management.
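Once imported, clients call the agent through the API Management gateway using plain A2A JSON-RPC. Below is a minimal sketch of a message/send request; the gateway hostname and API path (my-apim.azure-api.net/my-agent) are hypothetical, it assumes subscription-key authentication, and the payload shape follows the A2A protocol specification rather than any API Management-specific format:

// Send a message to an A2A agent through the APIM gateway (Node 18+, global fetch)
const { randomUUID } = require('node:crypto');

async function sendA2AMessage(text) {
  const response = await fetch('https://my-apim.azure-api.net/my-agent', { // hypothetical gateway URL
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Ocp-Apim-Subscription-Key': process.env.APIM_SUBSCRIPTION_KEY // assumes subscription-key auth
    },
    body: JSON.stringify({
      jsonrpc: '2.0',
      id: 1,
      method: 'message/send', // A2A runtime operation mediated by the gateway
      params: {
        message: {
          role: 'user',
          parts: [{ kind: 'text', text }],
          messageId: randomUUID()
        }
      }
    })
  });
  return response.json();
}

sendA2AMessage('What is the status of order 123?').then(console.log);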
Announcing Public Preview of Agent Loop in Azure Logic Apps Consumption

We’re excited to announce a major leap forward in democratizing AI-powered business process automation: Agent Loop is now available in Azure Logic Apps Consumption, bringing advanced AI agent capabilities to a broader audience with a frictionless, pay-as-you-go experience.

NOTE: This feature is being rolled out and is expected to be in all planned regions by the end of the week.

What’s New?

Agent Loop, previously available only in Logic Apps Standard, is now available in Consumption logic apps, providing developers, small and medium-sized businesses, startups, and enterprise teams with the ability to create autonomous and conversational AI agents without the necessity of provisioning or managing dedicated AI infrastructure. With Agent Loop, customers can develop both autonomous and conversational agents, seamlessly transforming any workflow into an intelligent workflow using the agent loop action. These agents are powered by knowledge and tools through access to over 1,400 connectors and MCPs (to be introduced soon).

Why Does This Matter?

By extending Agent Loop to Logic Apps Consumption, we’re making AI agent capabilities accessible to everyone, from individual developers to large enterprises, without barriers. This move supports rapid prototyping, experimentation, and production workloads, all while maintaining the flexibility to upgrade as requirements evolve.

Key highlights:

- Hosted on Behalf Of (HOBO) model: With this model, customers can harness the power of advanced Foundry models directly within their Logic Apps, without the need to provision or manage AI resources themselves. Microsoft handles all the underlying large language model (LLM) infrastructure, preserving the serverless, low-overhead nature of Consumption Logic Apps that lets you focus purely on building intelligent workflows.
- Frictionless entry point: With Microsoft hosting and managing the Foundry model, customers only need an Azure subscription to set up an agentic workflow. This dramatically reduces entry barriers and enables anyone with access to Azure to leverage powerful AI agent automation right away.
- Pay-as-you-go billing: You’re billed based on the number of tokens used for each agentic iteration, making experimentation and scaling cost-effective. No fixed infrastructure costs or complex setup.
- Extensive connector ecosystem: Access to an extensive ecosystem of over 1,400 connectors facilitates seamless integration with a broad range of enterprise systems, APIs, and data sources.
- Enterprise-grade upgrade path: As your needs grow, whether for higher performance, compliance, or custom model hosting, you can seamlessly graduate to Logic Apps Standard, bringing your own model and unlocking advanced features like VNET support and local development. Refer to https://learn.microsoft.com/en-us/azure/logic-apps/clone-consumption-logic-app-to-standard-workflow
- Security and tenant isolation: The HOBO model ensures strong tenant isolation and security boundaries, so your data and workflows remain protected.
- Chat client authentication: Setting up the chat client is straightforward, with built-in security provided using OAuth policies.

How to Get Started?

Check out the video below to see examples of conversational and autonomous agent workflows in Consumption Logic Apps. For detailed instructions on creating agentic workflows, visit Overview | Logic Apps Labs. Refer to the official documentation for more information on this feature: Workflows with AI Agents and Models - Azure Logic Apps | Microsoft Learn.
Limitations:

- Local development capabilities and VNET integration are not supported with Consumption Logic Apps.
- Regional data residency isn't guaranteed for the agentic actions. If you have any GDPR (General Data Protection Regulation) concerns, use Logic Apps Standard.
- Nested agents and MCP tools are currently unavailable but will be added soon. If you need these features, refer to Logic Apps Standard.
- Currently, West Europe and West US are the supported regions; additional regions will be available soon.

Synthetic Monitoring in Application Insights Using Playwright: A Game-Changer
Monitoring the availability and performance of web applications is crucial to ensuring a seamless user experience. Azure Application Insights provides powerful synthetic monitoring capabilities to help detect issues proactively. However, Microsoft has deprecated two key features:

- (Deprecated) Multi-step web tests: Previously, these allowed developers to record and replay a sequence of web requests to test complex workflows. They were created in Visual Studio Enterprise and uploaded to the portal.
- (Deprecated) URL ping tests: These tests checked whether an endpoint was responding and measured performance. They allowed setting custom success criteria, dependent request parsing, and retries.

With these features being phased out, we are left without built-in logic to test application health beyond simple endpoint checks. The solution? Custom TrackAvailability tests using Playwright.

What is Playwright?

Playwright is a powerful end-to-end testing framework that enables automated browser testing for modern web applications. It supports multiple browsers (Chromium, Firefox, WebKit) and can run tests in headless mode, making it ideal for synthetic monitoring.

Why Use Playwright for Synthetic Monitoring?

- Simulate real user interactions (login, navigate, click, etc.)
- Catch UI failures that simple URL ping tests cannot detect
- Execute complex workflows like authentication and transactions
- Integrate with Azure Functions for periodic execution
- Log availability metrics in Application Insights for better tracking and alerting

Step-by-Step Implementation (Repo link)

Set Up an Azure Function App

1. Navigate to the Azure portal.
2. Create a new Function App.
3. Select Runtime Stack: Node.js.
4. Enable Application Insights.

Install Dependencies

In your local development environment, create a Node.js project:

mkdir playwright-monitoring && cd playwright-monitoring
npm init -y
npm install @azure/functions playwright applicationinsights dotenv

Implement the Timer-Triggered Azure Function

Create timerTrigger1.js:

const { app } = require('@azure/functions');
const { runPlaywrightTests } = require('../playwrightTest.js'); // Import the Playwright test function

app.timer('timerTrigger1', {
  schedule: '0 */5 * * * *', // Runs every 5 minutes
  handler: async (myTimer, context) => {
    try {
      context.log("Executing Playwright test...");
      await runPlaywrightTests(context);
      context.log("Playwright test executed successfully!");
    } catch (error) {
      // The v4 programming model exposes context.error (context.log.error is the v3 API)
      context.error("Error executing Playwright test:", error);
    } finally {
      context.log("Timer function processed request.");
    }
  }
});

Implement the Playwright Test Logic

Create playwrightTest.js:

require('dotenv').config();
const playwright = require('playwright');
const appInsights = require('applicationinsights');

// Debugging: print the env variable to check whether it's loaded correctly
console.log("App Insights Key:", process.env.APPLICATIONINSIGHTS_CONNECTION_STRING);

// Initialize Application Insights
appInsights
  .setup(process.env.APPLICATIONINSIGHTS_CONNECTION_STRING || process.env.APPINSIGHTS_INSTRUMENTATIONKEY)
  .setSendLiveMetrics(true)
  .setDistributedTracingMode(appInsights.DistributedTracingModes.AI_AND_W3C)
  .setAutoDependencyCorrelation(true)
  .setAutoCollectRequests(true)
  .setAutoCollectPerformance(true)
  .setAutoCollectExceptions(true)
  .setAutoCollectDependencies(true)
  .setAutoCollectConsole(true)
  .setUseDiskRetryCaching(true) // Enables retry caching for telemetry
  .setInternalLogging(true, true) // Enables internal logging for debugging
  .start();

const client = appInsights.defaultClient;
async function runPlaywrightTests(context) {
  const timestamp = new Date().toISOString();
  const start = Date.now(); // Used to report the actual test duration
  let browser;
  try {
    context.log(`[${timestamp}] Running Playwright login test...`);

    // Launch browser
    browser = await playwright.chromium.launch({ headless: true });
    const page = await browser.newPage();

    // Navigate to login page
    await page.goto('https://www.saucedemo.com/');

    // Perform login
    await page.fill('#user-name', 'standard_user');
    await page.fill('#password', 'secret_sauce');
    await page.click('#login-button');

    // Verify successful login
    await page.waitForSelector('.inventory_list', { timeout: 5000 });

    // Log success to Application Insights
    client.trackAvailability({
      name: "SauceDemo Login Test",
      success: true,
      duration: Date.now() - start, // Measured execution time
      runLocation: "Azure Function",
      message: "Login successful",
      time: new Date()
    });

    context.log("✅ Playwright login test successful.");
    await browser.close();
  } catch (error) {
    context.error("❌ Playwright login test failed:", error);

    // Log failure to Application Insights
    client.trackAvailability({
      name: "SauceDemo Login Test",
      success: false,
      duration: Date.now() - start,
      runLocation: "Azure Function",
      message: error.message,
      time: new Date()
    });

    if (browser) await browser.close(); // Ensure the browser is not leaked on failure
  }
}

module.exports = { runPlaywrightTests };

Configure Environment Variables

Create a .env file and set your Application Insights connection string:

APPLICATIONINSIGHTS_CONNECTION_STRING=<your_connection_string>

Deploy and Monitor

Deploy the Function App using Azure Functions Core Tools:

func azure functionapp publish <your-function-app-name>

Monitor the availability results in Application Insights → Availability.

Setting Up Alerts for Failed Tests

To get notified when availability tests fail:

1. Open Application Insights in the Azure portal.
2. Go to Alerts → Create Alert Rule.
3. Select Signal Type: Availability Results.
4. Configure a condition where Success = 0 (Failure).
5. Add an action group (email, Teams, etc.).
6. Click Create Alert Rule.

Conclusion

With Playwright-based synthetic monitoring, you can go beyond basic URL ping tests and validate real user interactions in your application. Since Microsoft has deprecated multi-step web tests and URL ping tests, this approach ensures better availability tracking, UI validation, and proactive issue detection in Application Insights.

Accelerating HPC and EDA with Powerful Azure NetApp Files Enhancements
High-Performance Computing (HPC) and Electronic Design Automation (EDA) workloads demand uncompromising performance, scalability, and resilience. Whether you're managing petabyte-scale datasets or running compute-intensive simulations, Azure NetApp Files delivers the agility and reliability needed to innovate without limits.

Building AI Agents: Workflow-First vs. Code-First vs. Hybrid
AI Agents are no longer just a developer’s playground. They’re becoming essential for enterprise automation, decision-making, and customer engagement. But how do you build them? Do you go workflow-first with drag-and-drop designers, code-first with SDKs, or adopt a hybrid approach that blends both worlds? In this article, I’ll walk you through the landscape of AI Agent design. We’ll look at workflow-first approaches with drag-and-drop designers, code-first approaches using SDKs, and hybrid models that combine both. The goal is to help you understand the options and choose the right path for your organization.

Why AI Agents Need Orchestration

Before diving into tools and approaches, let’s talk about why orchestration matters. AI Agents are not just single-purpose bots anymore. They often need to perform multi-step reasoning, interact with multiple systems, and adapt to dynamic workflows. Without orchestration, these agents can become siloed and fail to deliver real business value. Here’s what I’ve observed as the key drivers for orchestration:

- Complexity of enterprise workflows: Modern business processes involve multiple applications, data sources, and decision points. AI Agents need a way to coordinate these steps seamlessly.
- Governance and compliance: Enterprises require control over how AI interacts with sensitive data and systems. Orchestration frameworks provide guardrails for security and compliance.
- Scalability and maintainability: A single agent might work fine for a proof of concept, but scaling to hundreds of workflows requires structured orchestration to avoid chaos.
- Integration with existing systems: AI Agents rarely operate in isolation. They need to plug into ERP systems, CRMs, and custom apps. Orchestration ensures these integrations are reliable and repeatable.

In short, orchestration is the backbone that turns AI Agents from clever prototypes into enterprise-ready solutions.

Behind the Scenes

I’ve always been a pro-code guy. I started my career on open-source coding in Unix and hardly touched the mouse. Then I discovered Visual Studio, and it completely changed my perspective. It showed me the power of a hybrid approach, the best of both worlds. That said, I won’t let my experience bias your ideas of what you’d like to build. This blog is about giving you the full picture so you can make the choice that works best for you.

Workflow-First Approach

Workflow-first platforms are more than visual designers, and not just about drag-and-drop simplicity. They represent a design paradigm where orchestration logic is abstracted into declarative models rather than imperative code. These tools allow you to define agent behaviors, event triggers, and integration points visually, while the underlying engine handles state management, retries, and scaling. For architects, this means faster prototyping and governance baked into the platform. For developers, it offers extensibility through connectors and custom actions without sacrificing enterprise-grade reliability.

Copilot Studio

Building conversational agents becomes intuitive with a visual designer that maps prompts, actions, and connectors into structured flows. Copilot Studio makes this possible by integrating enterprise data and enabling agents to automate tasks and respond intelligently without deep coding.
Building AI Agents using Copilot Studio:

- Design conversation flows with adaptive prompts
- Integrate Microsoft Graph for contextual responses
- Add AI-driven actions using Copilot extensions
- Support multi-turn reasoning for complex queries
- Enable secure access to enterprise data sources
- Extend functionality through custom connectors

Logic Apps

Adaptive workflows and complex integrations are handled through a robust orchestration engine. Logic Apps introduces Agent Loop, allowing agents to reason iteratively, adapt workflows, and interact with multiple systems in real time.

Building AI Agents using Logic Apps:

- Implement Agent Loop for iterative reasoning
- Integrate Azure OpenAI for goal-driven decisions
- Access 1,400+ connectors for enterprise actions
- Support human-in-the-loop for critical approvals
- Enable multi-agent orchestration for complex tasks
- Provide observability and security for agent workflows

Power Automate

Multi-step workflows can be orchestrated across business applications using AI Builder models or external AI APIs. Power Automate enables agents to make decisions, process data, and trigger actions dynamically, all within a low-code environment.

Building AI Agents using Power Automate:

- Automate repetitive tasks with minimal effort
- Apply AI Builder for predictions and classification
- Call Azure OpenAI for natural language processing
- Integrate with hundreds of enterprise connectors
- Trigger workflows based on real-time events
- Combine flows with human approvals for compliance

Azure AI Foundry

Visual orchestration meets pro-code flexibility through Prompt Flow and Connected Agents, enabling multi-step reasoning flows while allowing developers to extend capabilities through SDKs. Azure AI Foundry is ideal for scenarios requiring both agility and deep customization.

Building AI Agents using Azure AI Foundry:

- Design reasoning flows visually with Prompt Flow
- Orchestrate multi-agent systems using Connected Agents
- Integrate with VS Code for advanced development
- Apply governance and deployment pipelines for production
- Use Azure OpenAI models for adaptive decision-making
- Monitor workflows with built-in observability tools

Microsoft Agent Framework (Preview)

I’ve been exploring Microsoft Agent Framework (MAF), an open-source foundation for building AI agents that can run anywhere. It integrates with Azure AI Foundry and Azure services, enabling multi-agent workflows, advanced memory services, and visual orchestration. With the public preview live and GA coming soon, MAF is shaping how we deliver scalable, flexible agentic solutions. Enterprise-scale orchestration is achieved through graph-based workflows, human-in-the-loop approvals, and observability features. The Microsoft Agent Framework lays the foundation for multi-agent systems that are durable and compliant.

Building AI Agents using Microsoft Agent Framework:

- Coordinate multiple specialized agents in a graph
- Implement durable workflows with pause and resume
- Support human-in-the-loop for controlled autonomy
- Integrate with Azure AI Foundry for hosting and governance
- Enable observability through OpenTelemetry integration
- Provide SDK flexibility for custom orchestration patterns

Visual-first platforms make building AI Agents feel less like coding marathons and more like creative design sessions. They’re perfect for those scenarios when you’d rather design than debug and still want the option to dive deeper when complexity calls.
Pro-Code Approach

Remember I told you how I started as a pro-code developer early in my career and later embraced a hybrid approach? I’ll try to stay neutral here as we explore the pro-code world. Pro-code frameworks offer integration with diverse ecosystems, multi-agent coordination, and fine-grained control over logic. While workflow-first and pro-code approaches both provide these capabilities, the difference lies in how they balance factors such as ease of development, ease of maintenance, time to deliver, monitoring capabilities, and other non-functional requirements. Choosing the right path often depends on which of these trade-offs matter most for your scenario.

LangChain

When I first explored LangChain, it felt like stepping into a developer’s playground for AI orchestration. I could stitch together prompts, tools, and APIs like building blocks, and I enjoyed the flexibility (a minimal chain sketch follows at the end of this section). It reminded me why pro-code approaches appeal to those who want full control over logic and integration with diverse ecosystems.

Building AI Agents using LangChain:

- Define custom chains for multi-step reasoning (it is called Lang“Chain”)
- Integrate external APIs and tools for dynamic actions
- Implement memory for context-aware conversations
- Support multi-agent collaboration through orchestration patterns
- Extend functionality with custom Python modules
- Deploy agents across cloud environments for scalability

Semantic Kernel

I’ve worked with Semantic Kernel when I needed more control over orchestration logic, and what stood out was its flexibility. It provides both .NET and Python SDKs, which makes it easy to combine natural language prompts with traditional programming logic. I found the planners and skills especially useful for breaking down goals into smaller steps, and connectors helped integrate external systems without reinventing the wheel.

Building AI Agents using Semantic Kernel:

- Create semantic functions for prompt-driven tasks
- Use planners for dynamic goal decomposition
- Integrate plugins for external system access
- Implement memory for persistent context across sessions
- Combine AI reasoning with deterministic code logic
- Enable observability and telemetry for enterprise monitoring

Microsoft Agent Framework (Preview)

Although I introduced MAF in the earlier section, its SDK-first design makes it relevant here as well for advanced orchestration and its pro-code nature… and so I’ll probably write this again in the Hybrid section. The Agent Framework is designed for developers who need full control over multi-agent orchestration. It provides a pro-code approach for defining agent behaviors, implementing advanced coordination patterns, and integrating enterprise-grade observability.

Building AI Agents using Microsoft Agent Framework:

- Define custom orchestration logic using SDK APIs
- Implement graph-based workflows for multi-agent coordination
- Extend agent capabilities with custom code modules
- Apply durable execution patterns with pause and resume
- Integrate OpenTelemetry for detailed monitoring and debugging
- Securely host and manage agents through Azure AI Foundry integration
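To make the LangChain point concrete, here is a minimal chain sketch using LangChain’s JavaScript packages, kept in Node to match the other code in this post. The model name and prompt are illustrative assumptions, and an OPENAI_API_KEY environment variable is expected:

// A small prompt -> model -> parser chain (LangChain JS, LCEL style)
// npm install @langchain/openai @langchain/core
const { ChatOpenAI } = require('@langchain/openai');
const { ChatPromptTemplate } = require('@langchain/core/prompts');
const { StringOutputParser } = require('@langchain/core/output_parsers');

async function main() {
  const model = new ChatOpenAI({ model: 'gpt-4o-mini' }); // illustrative model name
  const prompt = ChatPromptTemplate.fromTemplate(
    'You are a release manager. Summarize the risk of this change in two sentences: {change}'
  );

  // Chains are composed by piping one runnable into the next
  const chain = prompt.pipe(model).pipe(new StringOutputParser());

  const answer = await chain.invoke({ change: 'Swapped the payment provider SDK from v2 to v3.' });
  console.log(answer);
}

main().catch(console.error);

The same pattern extends naturally: swap the parser for a structured-output parser, or hand the model tools to move from a fixed chain to an agent.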
Hybrid Approach and Decision Framework

I’ve always been a fan of both worlds: the flexibility of pro-code and the simplicity of workflow-style drag-and-drop IDEs and GUIs. A hybrid approach is not about picking one over the other; it’s about balancing them. In practice, this to me means combining the speed and governance of workflow-first platforms with the extensibility and control of pro-code frameworks.

Hybrid design shines when you need agility without sacrificing depth. For example, I can start with Copilot Studio to build a conversational agent using its visual designer. But if the scenario demands advanced logic or integration, I can call an Azure Function for custom processing, trigger a Logic Apps workflow for complex orchestration, or even invoke the Microsoft Agent Framework for multi-agent coordination. This flexibility delivers the best of both worlds: low-code for rapid development (remember RAD?) and pro-code for enterprise-grade customization with complex logic or integrations.

Why go hybrid:

- Balance speed and control: rapid prototyping with workflow-first tools, deep customization with code.
- Extend functionality: call APIs, Azure Functions, or SDK-based frameworks from visual workflows.
- Optimize for non-functional requirements: address maintainability, monitoring, and scalability without compromising ease of development.
- Enable interoperability: combine connectors, plugins, and open standards for diverse ecosystems.
- Support multi-agent orchestration: integrate workflow-driven agents with pro-code agents for complex scenarios.

The hybrid approach to building AI Agents is not just a technical choice but a design philosophy. When I need rapid prototyping or business automation, workflow-first is my choice. For multi-agent orchestration and deep customization, I go with code-first. Hybrid makes sense for regulated industries and large-scale deployments where flexibility and compliance are critical. The choice isn’t binary, it’s strategic.

I’ve worked with both workflow-first tools like Copilot Studio, Power Automate, and Logic Apps, and pro-code frameworks such as LangChain, Semantic Kernel, and the Microsoft Agent Framework. Each approach has its strengths, and the decision often comes down to what matters most for your scenario. If rapid prototyping and business automation are priorities, workflow-first platforms make sense. When multi-agent orchestration, deep customization, and integration with diverse ecosystems are critical, pro-code frameworks give you the flexibility and control you need. Hybrid approaches bring both worlds together for regulated industries and large-scale deployments where governance, observability, and interoperability cannot be compromised. Understanding these trade-offs will help you create AI Agents that work so well, you’ll wonder if they’re secretly applying for your job!

About the author

Pradyumna (Prad) Harish is a technology leader in the WW GSI Partner Organization at Microsoft. He has 26 years of experience in product engineering, partner development, presales, and delivery. He is responsible for revenue growth through Cloud, AI, Cognitive Services, ML, Data & Analytics, Integration, DevOps, open-source software, enterprise architecture, IoT, digital strategies, and other innovative areas for business generation and transformation, achieving revenue targets via extensive experience in managing global functions, global accounts, products, and solution architects across over 26 countries.