Building AI Agents: Workflow-First vs. Code-First vs. Hybrid
AI Agents are no longer just a developer’s playground. They’re becoming essential for enterprise automation, decision-making, and customer engagement. But how do you build them? Do you go workflow-first with drag-and-drop designers, code-first with SDKs, or adopt a hybrid approach that blends both worlds? In this article, I’ll walk you through the landscape of AI Agent design. We’ll look at workflow-first approaches with drag-and-drop designers, code-first approaches using SDKs, and hybrid models that combine both. The goal is to help you understand the options and choose the right path for your organization.

Why AI Agents Need Orchestration

Before diving into tools and approaches, let’s talk about why orchestration matters. AI Agents are not just single-purpose bots anymore. They often need to perform multi-step reasoning, interact with multiple systems, and adapt to dynamic workflows. Without orchestration, these agents can become siloed and fail to deliver real business value. Here’s what I’ve observed as the key drivers for orchestration:

- Complexity of enterprise workflows: Modern business processes involve multiple applications, data sources, and decision points. AI Agents need a way to coordinate these steps seamlessly.
- Governance and compliance: Enterprises require control over how AI interacts with sensitive data and systems. Orchestration frameworks provide guardrails for security and compliance.
- Scalability and maintainability: A single agent might work fine for a proof of concept, but scaling to hundreds of workflows requires structured orchestration to avoid chaos.
- Integration with existing systems: AI Agents rarely operate in isolation. They need to plug into ERP systems, CRMs, and custom apps. Orchestration ensures these integrations are reliable and repeatable.

In short, orchestration is the backbone that turns AI Agents from clever prototypes into enterprise-ready solutions.

Behind the Scenes

I’ve always been a pro-code guy.
I started my career on open-source coding in Unix and hardly touched the mouse. Then I discovered Visual Studio, and it completely changed my perspective. It showed me the power of a hybrid approach, the best of both worlds. That said, I won’t let my experience bias your ideas of what you’d like to build. This blog is about giving you the full picture so you can make the choice that works best for you.

Workflow-First Approach

Workflow-first platforms are more than visual designers and not just about drag-and-drop simplicity. They represent a design paradigm where orchestration logic is abstracted into declarative models rather than imperative code. These tools allow you to define agent behaviors, event triggers, and integration points visually, while the underlying engine handles state management, retries, and scaling. For architects, this means faster prototyping and governance baked into the platform. For developers, it offers extensibility through connectors and custom actions without sacrificing enterprise-grade reliability.

Copilot Studio

Building conversational agents becomes intuitive with a visual designer that maps prompts, actions, and connectors into structured flows. Copilot Studio makes this possible by integrating enterprise data and enabling agents to automate tasks and respond intelligently without deep coding.

Building AI Agents using Copilot Studio:

- Design conversation flows with adaptive prompts
- Integrate Microsoft Graph for contextual responses
- Add AI-driven actions using Copilot extensions
- Support multi-turn reasoning for complex queries
- Enable secure access to enterprise data sources
- Extend functionality through custom connectors

Logic Apps

Adaptive workflows and complex integrations are handled through a robust orchestration engine. Logic Apps introduces Agent Loop, allowing agents to reason iteratively, adapt workflows, and interact with multiple systems in real time.
Building AI Agents using Logic Apps:

- Implement Agent Loop for iterative reasoning
- Integrate Azure OpenAI for goal-driven decisions
- Access 1,400+ connectors for enterprise actions
- Support human-in-the-loop for critical approvals
- Enable multi-agent orchestration for complex tasks
- Provide observability and security for agent workflows

Power Automate

Multi-step workflows can be orchestrated across business applications using AI Builder models or external AI APIs. Power Automate enables agents to make decisions, process data, and trigger actions dynamically, all within a low-code environment.

Building AI Agents using Power Automate:

- Automate repetitive tasks with minimal effort
- Apply AI Builder for predictions and classification
- Call Azure OpenAI for natural language processing
- Integrate with hundreds of enterprise connectors
- Trigger workflows based on real-time events
- Combine flows with human approvals for compliance

Azure AI Foundry

Visual orchestration meets pro-code flexibility through Prompt Flow and Connected Agents, enabling multi-step reasoning flows while allowing developers to extend capabilities through SDKs. Azure AI Foundry is ideal for scenarios requiring both agility and deep customization.

Building AI Agents using Azure AI Foundry:

- Design reasoning flows visually with Prompt Flow
- Orchestrate multi-agent systems using Connected Agents
- Integrate with VS Code for advanced development
- Apply governance and deployment pipelines for production
- Use Azure OpenAI models for adaptive decision-making
- Monitor workflows with built-in observability tools

Microsoft Agent Framework (Preview)

I’ve been exploring Microsoft Agent Framework (MAF), an open-source foundation for building AI agents that can run anywhere. It integrates with Azure AI Foundry and Azure services, enabling multi-agent workflows, advanced memory services, and visual orchestration. With public preview live and GA coming soon, MAF is shaping how we deliver scalable, flexible agentic solutions.
Enterprise-scale orchestration is achieved through graph-based workflows, human-in-the-loop approvals, and observability features. The Microsoft Agent Framework lays the foundation for multi-agent systems that are durable and compliant.

Building AI Agents using Microsoft Agent Framework:

- Coordinate multiple specialized agents in a graph
- Implement durable workflows with pause and resume
- Support human-in-the-loop for controlled autonomy
- Integrate with Azure AI Foundry for hosting and governance
- Enable observability through OpenTelemetry integration
- Provide SDK flexibility for custom orchestration patterns

Visual-first platforms make building AI Agents feel less like coding marathons and more like creative design sessions. They’re perfect for those scenarios when you’d rather design than debug and still want the option to dive deeper when complexity calls.

Pro-Code Approach

Remember I told you how I started as a pro-code developer early in my career and later embraced a hybrid approach? I’ll try to stay neutral here as we explore the pro-code world. Pro-code frameworks offer integration with diverse ecosystems, multi-agent coordination, and fine-grained control over logic. While workflow-first and pro-code approaches both provide these capabilities, the difference lies in how they balance factors such as ease of development, ease of maintenance, time to deliver, monitoring capabilities, and other non-functional requirements. Choosing the right path often depends on which of these trade-offs matter most for your scenario.

LangChain

When I first explored LangChain, it felt like stepping into a developer’s playground for AI orchestration. I could stitch together prompts, tools, and APIs like building blocks, and I enjoyed the flexibility. It reminded me why pro-code approaches appeal to those who want full control over logic and integration with diverse ecosystems.
Building AI Agents using LangChain:

- Define custom chains for multi-step reasoning (it is called Lang“Chain”, after all)
- Integrate external APIs and tools for dynamic actions
- Implement memory for context-aware conversations
- Support multi-agent collaboration through orchestration patterns
- Extend functionality with custom Python modules
- Deploy agents across cloud environments for scalability

Semantic Kernel

I’ve worked with Semantic Kernel when I needed more control over orchestration logic, and what stood out was its flexibility. It provides both .NET and Python SDKs, which makes it easy to combine natural language prompts with traditional programming logic. I found the planners and skills especially useful for breaking down goals into smaller steps, and connectors helped integrate external systems without reinventing the wheel.

Building AI Agents using Semantic Kernel:

- Create semantic functions for prompt-driven tasks
- Use planners for dynamic goal decomposition
- Integrate plugins for external system access
- Implement memory for persistent context across sessions
- Combine AI reasoning with deterministic code logic
- Enable observability and telemetry for enterprise monitoring

Microsoft Agent Framework (Preview)

Although I introduced MAF in the earlier section, its SDK-first, pro-code design makes it equally relevant here. The Agent Framework is designed for developers who need full control over multi-agent orchestration. It provides a pro-code approach for defining agent behaviors, implementing advanced coordination patterns, and integrating enterprise-grade observability.
Building AI Agents using Microsoft Agent Framework:

- Define custom orchestration logic using SDK APIs
- Implement graph-based workflows for multi-agent coordination
- Extend agent capabilities with custom code modules
- Apply durable execution patterns with pause and resume
- Integrate OpenTelemetry for detailed monitoring and debugging
- Securely host and manage agents through Azure AI Foundry integration

Hybrid Approach and Decision Framework

I’ve always been a fan of both worlds: the flexibility of pro-code and the simplicity of drag-and-drop workflow IDEs and GUIs. A hybrid approach is not about picking one over the other; it’s about balancing them. In practice, this means combining the speed and governance of workflow-first platforms with the extensibility and control of pro-code frameworks. Hybrid design shines when you need agility without sacrificing depth. For example, I can start with Copilot Studio to build a conversational agent using its visual designer. But if the scenario demands advanced logic or integration, I can call an Azure Function for custom processing, trigger a Logic Apps workflow for complex orchestration, or even invoke the Microsoft Agent Framework for multi-agent coordination. This flexibility delivers the best of both worlds: low-code for rapid development (remember RAD?) and pro-code for enterprise-grade customization with complex logic or integrations.

Why go Hybrid

- Balance speed and control: Rapid prototyping with workflow-first tools, deep customization with code.
- Extend functionality: Call APIs, Azure Functions, or SDK-based frameworks from visual workflows.
- Optimize for non-functional requirements: Address maintainability, monitoring, and scalability without compromising ease of development.
- Enable interoperability: Combine connectors, plugins, and open standards for diverse ecosystems.
- Support multi-agent orchestration: Integrate workflow-driven agents with pro-code agents for complex scenarios.
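To make the hybrid division of labor concrete, here is a minimal, self-contained Python sketch: a declarative workflow definition (the kind of artifact a visual designer produces) dispatches individual steps to registered code functions (the part you would normally implement in an SDK or an Azure Function). All names here are illustrative, not a real SDK.

```python
# Illustrative only: a declarative "workflow" (the low-code half) whose steps
# dispatch to registered Python functions (the pro-code half).

STEP_REGISTRY = {}

def step(name):
    """Register a code-first handler that a declarative flow can reference by name."""
    def decorator(fn):
        STEP_REGISTRY[name] = fn
        return fn
    return decorator

@step("classify_intent")
def classify_intent(ctx):
    # Custom logic lives in code, where it is easy to test and version.
    text = ctx["message"].lower()
    ctx["intent"] = "refund" if "refund" in text else "general"
    return ctx

@step("route_request")
def route_request(ctx):
    # Complex branching stays in code; the flow definition stays declarative.
    ctx["queue"] = "finance-approvals" if ctx["intent"] == "refund" else "chatbot"
    return ctx

# The workflow definition is pure data, like a visual designer would emit.
workflow = ["classify_intent", "route_request"]

def run(workflow, ctx):
    for name in workflow:
        ctx = STEP_REGISTRY[name](ctx)
    return ctx

result = run(workflow, {"message": "I want a refund for my order"})
print(result["queue"])  # finance-approvals
```

The same shape appears in real hybrid designs: the list of step names is what the visual tool owns, and the registry of handlers is what the pro-code framework owns.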
The hybrid approach for building AI Agents is not just a technical choice but a design philosophy. The choice isn’t binary; it’s strategic. I’ve worked with both workflow-first tools like Copilot Studio, Power Automate, and Logic Apps, and pro-code frameworks such as LangChain, Semantic Kernel, and the Microsoft Agent Framework. Each approach has its strengths, and the decision often comes down to what matters most for your scenario. If rapid prototyping and business automation are priorities, workflow-first platforms make sense. When multi-agent orchestration, deep customization, and integration with diverse ecosystems are critical, pro-code frameworks give you the flexibility and control you need. Hybrid approaches bring both worlds together for regulated industries and large-scale deployments where governance, observability, and interoperability cannot be compromised. Understanding these trade-offs will help you create AI Agents that work so well, you’ll wonder if they’re secretly applying for your job!

About the author

Pradyumna (Prad) Harish is a Technology leader in the WW GSI Partner Organization at Microsoft. He has 26 years of experience in Product Engineering, Partner Development, Presales, and Delivery.
He is responsible for revenue growth through Cloud, AI, Cognitive Services, ML, Data & Analytics, Integration, DevOps, Open-Source Software, Enterprise Architecture, IoT, Digital strategies, and other innovative areas for business generation and transformation, achieving revenue targets via extensive experience in managing global functions, global accounts, products, and solution architects across over 26 countries.

Building a Secure and Compliant Azure AI Landing Zone: Policy Framework & Best Practices
As organizations accelerate their AI adoption on Microsoft Azure, governance, compliance, and security become critical pillars for success. Deploying AI workloads without a structured compliance framework can expose enterprises to data privacy issues, misconfigurations, and regulatory risks. To address this challenge, the Azure AI Landing Zone provides a scalable and secure foundation, bringing together Azure Policy, Blueprints, and Infrastructure-as-Code (IaC) to ensure every resource aligns with organizational and regulatory standards. The Azure Policy & Compliance Framework acts as the governance backbone of this landing zone. It enforces consistency across environments by applying policy definitions, initiatives, and assignments that monitor and remediate non-compliant resources automatically.

This blog will guide you through:

- 🧭 The architecture and layers of an AI Landing Zone
- 🧩 How Azure Policy as Code enables automated governance
- ⚙️ Steps to implement and deploy policies using IaC pipelines
- 📈 Visualizing compliance flows for AI-specific resources

What is Azure AI Landing Zone (AI ALZ)?

AI ALZ is a foundational architecture that integrates core Azure services (ML, OpenAI, Cognitive Services) with best practices in identity, networking, governance, and operations. To ensure consistency, security, and responsibility, a robust policy framework is essential.

Policy & Compliance in AI ALZ

Azure Policy helps enforce standards across subscriptions and resource groups. You define policies (single rules), group them into initiatives (policy sets), and assign them with specific scopes and exemptions. Compliance reporting helps surface non-compliant resources for mitigation.
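The definition → initiative → assignment chain can be sketched as data. The structures below are Python dictionaries whose field names loosely mirror the Azure Policy JSON schema (`policyRule`, `policyDefinitions`, `notScopes`); treat this as a simplified illustration of how the three building blocks reference one another, not a deployable artifact.

```python
# Simplified illustration of the three Azure Policy building blocks.
# Field names mirror the Azure Policy JSON schema, but this is a sketch.

# 1. A single policy definition: one rule with one effect.
definition = {
    "name": "deny-public-storage",
    "mode": "Indexed",
    "policyRule": {
        "if": {
            "allOf": [
                {"field": "type", "equals": "Microsoft.Storage/storageAccounts"},
                {"field": "Microsoft.Storage/storageAccounts/allowBlobPublicAccess",
                 "equals": "true"},
            ]
        },
        "then": {"effect": "Deny"},
    },
}

# 2. An initiative (policy set) groups related definitions by ID.
initiative = {
    "name": "ai-landing-zone-baseline",
    "policyDefinitions": [
        {"policyDefinitionId":
         f"/providers/Microsoft.Authorization/policyDefinitions/{definition['name']}"}
    ],
}

# 3. An assignment applies the initiative at a scope, with optional exclusions.
assignment = {
    "name": "assign-ai-baseline",
    "scope": "/subscriptions/00000000-0000-0000-0000-000000000000",
    "notScopes": [
        "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/sandbox"
    ],
    "policySetDefinitionId":
        f"/providers/Microsoft.Authorization/policySetDefinitions/{initiative['name']}",
}

print(assignment["policySetDefinitionId"])
```

Compliance reporting then evaluates each resource in scope (minus the `notScopes`) against every rule reachable through the assignment.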
In AI workloads, there are some unique considerations:

- Sensitive data (PII, models)
- Model accountability, logging, audit trails
- Cost & performance from heavy compute usage
- Preview features and frequent updates

Scope

This framework covers:

- Azure Machine Learning (AML)
- Azure API Management
- Azure AI Foundry
- Azure App Service
- Azure Cognitive Services
- Azure OpenAI
- Azure Storage Accounts
- Azure Databases (SQL, Cosmos DB, MySQL, PostgreSQL)
- Azure Key Vault
- Azure Kubernetes Service

Core Policy Categories

1. Networking & Access Control
- Restrict resource deployment to approved regions (e.g., Europe only).
- Enforce private link and private endpoint usage for all critical resources.
- Disable public network access for workspaces, storage, search, and key vaults.

2. Identity & Authentication
- Require user-assigned managed identities for resource access.
- Disable local authentication; enforce Microsoft Entra ID (Azure AD) authentication.

3. Data Protection
- Enforce encryption at rest with customer-managed keys (CMK).
- Restrict public access to storage accounts and databases.

4. Monitoring & Logging
- Deploy diagnostic settings to Log Analytics for all key resources.
- Ensure activity/resource logs are enabled and retained for at least one year.

5. Resource-Specific Guardrails
- Apply built-in and custom policy initiatives for OpenAI, Kubernetes, App Services, Databases, etc.

A detailed list of all policies is bundled and attached at the end of this blog. Be sure to check it out for a ready-to-use Excel file (perfect for customer workshops), which includes policy type (Standalone/Initiative), origin (Built-in/Custom), and more.

Implementation: Policy-as-Code using EPAC

To turn policies from Excel/JSON into operational governance, Enterprise Policy as Code (EPAC) is a powerful tool. EPAC transforms policy artifacts into a desired-state repository and handles deployment, lifecycle, versioning, and CI/CD automation.

What is EPAC & Why Use It?
- EPAC is a set of PowerShell scripts and modules to deploy policy definitions, initiatives, assignments, role assignments, and exemptions.
- It supports CI/CD integration (GitHub Actions, Azure DevOps), so policy changes can be treated like code.
- It handles ordering, dependency resolution, and enforcement of a "desired state": any policy resources not in your repo may be pruned (depending on configuration).
- It integrates with Azure Landing Zones (including the governance baseline) out of the box.

References & Further Reading

- EPAC GitHub Repository
- Advanced Azure Policy management - Microsoft Learn
- How to deploy Azure policies the DevOps way (Rabobank)

Building an Enterprise RAG Pipeline in Azure with NVIDIA AI Blueprint for RAG and Azure NetApp Files
Transform your enterprise-grade RAG pipeline with NVIDIA AI and Azure NetApp Files. This post highlights the challenges of scaling RAG solutions and introduces NVIDIA's AI Blueprint adapted for Azure. Discover how Azure NetApp Files boosts performance and handles dynamic demands, enabling robust and efficient RAG workloads.

Granting Azure Resources Access to SharePoint Online Sites Using Managed Identity
When integrating Azure resources like Logic Apps, Function Apps, or Azure VMs with SharePoint Online, you often need secure and granular access control. Rather than handling credentials manually, Managed Identity is the recommended approach to securely authenticate to Microsoft Graph and access SharePoint resources.

High-level steps:

1. Enable Managed Identity (or App Registration)
2. Grant Sites.Selected Permission in Microsoft Entra ID
3. Assign SharePoint Site-Level Permission

Step 1: Enable Managed Identity (or App Registration)

For your Azure resource (e.g., Logic App):

- Navigate to the Azure portal.
- Go to the resource (e.g., Logic App).
- Under Identity, enable System-assigned Managed Identity.
- Note the Object ID and Client ID (you’ll need the Client ID later).

Alternatively, use an App Registration if you prefer a multi-tenant or reusable identity. See: How to register an app in Microsoft Entra ID - Microsoft identity platform | Microsoft Learn

Step 2: Grant Sites.Selected Permission in Microsoft Entra

- Open Microsoft Entra ID > App registrations.
- Select your Logic App’s managed identity or app registration.
- Under API permissions, click Add a permission > Microsoft Graph.
- Select Application permissions and add: Sites.Selected
- Click Grant admin consent.

Note: Sites.Selected ensures least-privilege access; you must explicitly allow site-level access later.

Step 3: Assign SharePoint Site-Level Permission

SharePoint Online requires site-level consent for apps with Sites.Selected. Use the script below to assign access.

Note: You must be a SharePoint Administrator and have the Sites.FullControl.All permission when running this.
PowerShell Script:

```powershell
# Replace with your values
$application = @{
    id          = "{ApplicationID}"    # Client ID of the Managed Identity
    displayName = "{DisplayName}"      # Display name (optional but recommended)
}
$appRole   = "write"                   # Can be "read" or "write"
$spoTenant = "contoso.sharepoint.com"  # SharePoint site host
$spoSite   = "{Sitename}"              # SharePoint site name

# Site ID format for Graph API
$spoSiteId = $spoTenant + ":/sites/" + $spoSite + ":"

# Load Microsoft Graph module
Import-Module Microsoft.Graph.Sites

# Connect with appropriate permissions
Connect-MgGraph -Scope Sites.FullControl.All

# Grant site-level permission
New-MgSitePermission -SiteId $spoSiteId -Roles $appRole -GrantedToIdentities @{ Application = $application }
```

That's it. Your Logic App or Azure resource can now call Microsoft Graph APIs to interact with that specific SharePoint site (e.g., list files, upload documents). You maintain centralized control and least-privilege access, complying with enterprise security standards. By following this approach, you ensure secure, auditable, and scalable access from Azure services to SharePoint Online: no secrets, no user credentials, just managed identity done right.

Streamlining data discovery for AI/ML with OpenMetadata on AKS and Azure NetApp Files
This article contains a step-by-step guide to deploying OpenMetadata on Azure Kubernetes Service (AKS), using Azure NetApp Files for storage. It also covers the deployment and configuration of PostgreSQL and OpenSearch databases to run externally from the Kubernetes cluster, following OpenMetadata best practices, managed by NetApp® Instaclustr®. This comprehensive tutorial aims to assist Microsoft and NetApp customers in overcoming the challenges of identifying and managing their data for AI/ML purposes. By following this guide, users will achieve a fully functional OpenMetadata instance, enabling efficient data discovery, enhanced collaboration, and robust data governance.

Synthetic Monitoring in Application Insights Using Playwright: A Game-Changer
Monitoring the availability and performance of web applications is crucial to ensuring a seamless user experience. Azure Application Insights provides powerful synthetic monitoring capabilities to help detect issues proactively. However, Microsoft has deprecated two key features:

- Multi-step web tests (deprecated): Previously, these allowed developers to record and replay a sequence of web requests to test complex workflows. They were created in Visual Studio Enterprise and uploaded to the portal.
- URL ping tests (deprecated): These tests checked if an endpoint was responding and measured performance. They allowed setting custom success criteria, dependent request parsing, and retries.

With these features being phased out, we are left without built-in logic to test application health beyond simple endpoint checks. The solution? Custom TrackAvailability tests using Playwright.

What is Playwright?

Playwright is a powerful end-to-end testing framework that enables automated browser testing for modern web applications. It supports multiple browsers (Chromium, Firefox, WebKit) and can run tests in headless mode, making it ideal for synthetic monitoring.

Why Use Playwright for Synthetic Monitoring?

- Simulate real user interactions (login, navigate, click, etc.)
- Catch UI failures that simple URL ping tests cannot detect
- Execute complex workflows like authentication and transactions
- Integrate with Azure Functions for periodic execution
- Log availability metrics in Application Insights for better tracking and alerting

Step-by-Step Implementation (Repo link)

Set Up an Azure Function App

1. Navigate to the Azure Portal.
2. Create a new Function App.
3. Select Runtime Stack: Node.js.
4. Enable Application Insights.
Install Dependencies

In your local development environment, create a Node.js project:

```shell
mkdir playwright-monitoring && cd playwright-monitoring
npm init -y
npm install @azure/functions playwright applicationinsights dotenv
```

Implement the Timer-Triggered Azure Function

Create timerTrigger1.js:

```javascript
const { app } = require('@azure/functions');
const { runPlaywrightTests } = require('../playwrightTest.js'); // Import the Playwright test function

app.timer('timerTrigger1', {
    schedule: '0 */5 * * * *', // Runs every 5 minutes
    handler: async (myTimer, context) => {
        try {
            context.log("Executing Playwright test...");
            await runPlaywrightTests(context);
            context.log("Playwright test executed successfully!");
        } catch (error) {
            context.log.error("Error executing Playwright test:", error);
        } finally {
            context.log("Timer function processed request.");
        }
    }
});
```

Implement the Playwright Test Logic

Create playwrightTest.js:

```javascript
require('dotenv').config();
const playwright = require('playwright');
const appInsights = require('applicationinsights');

// Debugging: Print env variable to check if it's loaded correctly
console.log("App Insights Key:", process.env.APPLICATIONINSIGHTS_CONNECTION_STRING);

// Initialize Application Insights
appInsights
    .setup(process.env.APPLICATIONINSIGHTS_CONNECTION_STRING || process.env.APPINSIGHTS_INSTRUMENTATIONKEY)
    .setSendLiveMetrics(true)
    .setDistributedTracingMode(appInsights.DistributedTracingModes.AI_AND_W3C)
    .setAutoDependencyCorrelation(true)
    .setAutoCollectRequests(true)
    .setAutoCollectPerformance(true)
    .setAutoCollectExceptions(true)
    .setAutoCollectDependencies(true)
    .setAutoCollectConsole(true)
    .setUseDiskRetryCaching(true)   // Enables retry caching for telemetry
    .setInternalLogging(true, true) // Enables internal logging for debugging
    .start();

const client = appInsights.defaultClient;

async function runPlaywrightTests(context) {
    const timestamp = new Date().toISOString();
    try {
        context.log(`[${timestamp}] Running Playwright login test...`);

        // Launch Browser
        const browser = await playwright.chromium.launch({ headless: true });
        const page = await browser.newPage();

        // Navigate to login page
        await page.goto('https://www.saucedemo.com/');

        // Perform Login
        await page.fill('#user-name', 'standard_user');
        await page.fill('#password', 'secret_sauce');
        await page.click('#login-button');

        // Verify successful login
        await page.waitForSelector('.inventory_list', { timeout: 5000 });

        // Log Success to Application Insights
        client.trackAvailability({
            name: "SauceDemo Login Test",
            success: true,
            duration: 5000, // Execution time
            runLocation: "Azure Function",
            message: "Login successful",
            time: new Date()
        });

        context.log("✅ Playwright login test successful.");
        await browser.close();
    } catch (error) {
        context.log.error("❌ Playwright login test failed:", error);

        // Log Failure to Application Insights
        client.trackAvailability({
            name: "SauceDemo Login Test",
            success: false,
            duration: 0,
            runLocation: "Azure Function",
            message: error.message,
            time: new Date()
        });
    }
}

module.exports = { runPlaywrightTests };
```

Configure Environment Variables

Create a .env file and set your Application Insights connection string:

```shell
APPLICATIONINSIGHTS_CONNECTION_STRING=<your_connection_string>
```

Deploy and Monitor

Deploy the Function App using Azure CLI:

```shell
func azure functionapp publish <your-function-app-name>
```

Monitor the availability results in Application Insights → Availability.

Setting Up Alerts for Failed Tests

To get notified when availability tests fail:

1. Open Application Insights in the Azure portal.
2. Go to Alerts → Create Alert Rule.
3. Select Signal Type: Availability Results.
4. Configure a condition where Success = 0 (Failure).
5. Add an action group (email, Teams, etc.).
6. Click Create Alert Rule.

Conclusion

With Playwright-based synthetic monitoring, you can go beyond basic URL ping tests and validate real user interactions in your application.
Since Microsoft has deprecated Multi-step web tests and URL ping tests, this approach ensures better availability tracking, UI validation, and proactive issue detection in Application Insights.

Cross-Region Resiliency for Ecommerce Reference Application
Authors: Radu Dilirici (radudilirici@microsoft.com), Ioan Dragan (ioan.dragan@microsoft.com), Ciprian Amzuloiu (camzuloiu@microsoft.com)

Introduction

The initial Resilient Ecommerce Reference Application demonstrated the best practices to achieve regional resiliency using Azure’s availability zones. Expanding on this foundation, in the current article we aim to achieve cross-region resiliency, ensuring high availability and disaster recovery capabilities across multiple geographic regions. This article outlines the enhancements made to extend the application into a cross-region resilient architecture. The app is publicly available on GitHub and can be used for educational purposes or as a starting point for developing cross-region resilient applications.

Overview of Cross-Region Enhancements

The main architectural change needed to extend the application to a cross-region approach was to replicate the existing zonal resilient setup across multiple Azure regions and enable failover mechanisms for seamless operation during regional outages. Below is a visual representation of the new architecture:

Component Details

Networking Architecture

The networking architecture has been extended to support cross-region traffic management. Azure Front Door serves as the global entry point, routing traffic to the primary region. In case of a disaster, the traffic is redirected to the secondary region. Global Virtual Network Peering is used to link together the virtual networks of the two regions. This enables the Redis Caches and SQL Databases to communicate with each other, keeping them in sync and allowing them to perform the switchover procedure. This change allowed us to remove the previous DNS zone groups. Service Endpoints provide secure and direct connectivity with the Azure Virtual Network for the SQL Databases and Key Vault. They allow access to these services without exposing them to the public internet, reducing the attack surface and enhancing security.
Storage Architecture

Azure SQL Database, Azure Cache for Redis, and Azure Container Registry now employ geo-replication to ensure data availability across regions. Azure Key Vault is cross-region resilient by default, as it automatically replicates the data to the Azure paired region. Read more about geo-replication for Azure SQL and Azure Cache for Redis.

Compute Architecture

The Azure Kubernetes Service (AKS) clusters are deployed across multiple regions, with each cluster running in a minimum of three Availability Zones. The autoscaling and load distribution mechanisms from the original setup are retained, ensuring optimal performance and high availability. Read more about multi-region AKS clusters. The application supports both Active-Active and Active-Passive states, determined by the AKS configuration. In an Active-Active state, the secondary AKS is always running, providing a faster switchover at the cost of higher expenses. Conversely, in an Active-Passive state, the secondary AKS is deployed but not started, reducing costs but resulting in a slower switchover. Additionally, the secondary AKS can be configured with fewer resources for further cost savings.

Failover

The failover procedure consists of migrating the compute, storage, and networking services to the secondary region. Firstly, the AKS cluster is started in the secondary region. In an Active-Active configuration, this step is skipped, as the cluster is already running. Then, the SQL Database and Redis Cache are synced with their replicas, and the secondary instances are elevated to the primary role. Finally, the traffic is reconfigured through the Front Door profile to hit the services in the new region. Controlled failover is crucial for keeping systems running smoothly during a disaster. When things go wrong, an on-call engineer can start the failover process to quickly move operations to a backup system, minimizing any potential issues.
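The failover procedure can be pictured as an ordered runbook. The sketch below is illustrative Python: each step function is a placeholder for the real CLI or SDK operation (starting the secondary AKS, promoting the geo-replicas, repointing Front Door), and none of these names correspond to a real API.

```python
# Illustrative runbook for the controlled failover sequence described above.
# Each step mutates a shared state dict standing in for real Azure resources.

def start_secondary_aks(state):
    # In Active-Active mode the secondary cluster is already running.
    if state["mode"] == "active-active":
        return "aks: already running (active-active)"
    state["aks_running"] = True
    return "aks: started in secondary region"

def promote_sql_replica(state):
    # Placeholder for promoting the geo-replicated SQL secondary to primary.
    state["sql_primary"] = state["secondary_region"]
    return "sql: secondary promoted to primary"

def promote_redis_replica(state):
    # Placeholder for switching the Redis geo-replication primary.
    state["redis_primary"] = state["secondary_region"]
    return "redis: secondary promoted to primary"

def repoint_front_door(state):
    # Placeholder for updating the Front Door origin so traffic moves last.
    state["front_door_origin"] = state["secondary_region"]
    return "front door: traffic redirected"

# Order matters: compute first, then storage promotion, then traffic.
RUNBOOK = [start_secondary_aks, promote_sql_replica,
           promote_redis_replica, repoint_front_door]

def fail_over(state):
    return [step(state) for step in RUNBOOK]

state = {"mode": "active-passive", "secondary_region": "westeurope",
         "aks_running": False}
for line in fail_over(state):
    print(line)
```

Keeping the sequence in a single, testable runbook (rather than ad-hoc manual steps) is what lets an on-call engineer trigger it confidently during an outage.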
Follow this guide to start experimenting with failover over the reference application.

Automating End-to-End testing with Playwright and Azure Pipelines
The article discusses the importance of end-to-end testing in software development. It introduces Playwright, an open-source automation framework developed by Microsoft, as a superior alternative to Selenium for writing automated browser tests.

Need inspirations? Real AI Apps stories by Azure customers to help you get started
In this blog, we present a tapestry of authentic stories from real Azure customers. You will read about how AI-empowered applications are revolutionizing enterprises and the myriad ways organizations choose to modernize their software, craft innovative experiences, and unveil new revenue streams. We hope that these stories inspire you to embark upon your own Azure AI journey. Before we begin, be sure to bookmark the newly unveiled Plan on Microsoft Learn, designed for developers and technical managers, to enhance your expertise on this subject.

Inspiration #1: Transform customer service

Intelligent apps today can offer a self-service natural language chat interface that helps customers resolve service issues faster. They can route and divert calls, allowing agents to focus on the most complex cases. These solutions also enable customer service agents to quickly access contextual summaries of prior interactions, offer real-time recommendations, and generally enhance customer service productivity by automating repetitive tasks, such as logging interaction summaries. Prominent use cases across industries include self-service chatbots, real-time guidance for agents during customer engagements, post-interaction analysis and coaching of agents, and automated summarization of customer dialogues.

Below is a sample architecture for airline customer service and support: Azure Kubernetes Service hosts the web UI and integrates with the other components, backed by Azure Database for PostgreSQL. In addition, this app uses RAG, with Azure AI Search as the retrieval system and Azure OpenAI Service providing LLM capabilities, allowing customer service agents and customers to ask questions using natural language.

Air India, the nation's flagship carrier, updated its existing virtual assistant's core natural language processing engine to the latest GPT models, using Azure OpenAI Service.
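The RAG pattern in the sample architecture above, a retriever (Azure AI Search in the reference design) that selects grounding documents before the LLM (Azure OpenAI Service) answers, can be sketched minimally as follows. The index and the keyword scorer are illustrative stand-ins, not the real services.

```python
# Toy RAG sketch: retrieve the most relevant document for a query, then
# build a grounded prompt. A real app would send the prompt to the LLM;
# here we just return it. Index contents are made-up airline FAQ snippets.

def retrieve(query: str, index: dict[str, str], top_k: int = 1) -> list[str]:
    """Keyword-overlap retrieval standing in for Azure AI Search."""
    words = query.lower().split()
    scored = sorted(index.values(),
                    key=lambda doc: sum(w in doc.lower() for w in words),
                    reverse=True)
    return scored[:top_k]

def build_prompt(query: str, index: dict[str, str]) -> str:
    """Ground the user question in retrieved context before the LLM call."""
    context = "\n".join(retrieve(query, index))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

index = {
    "baggage": "Checked baggage allowance is 23 kg on international routes.",
    "refunds": "Refunds are processed within 7 business days.",
}
print(build_prompt("What is the baggage allowance?", index))
```

The value of the pattern is that the model answers from retrieved, current data rather than from its training set, which is what lets a virtual assistant handle policy questions reliably.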
The new AI-based virtual assistant handles 97% of queries with full automation and saves millions of dollars on customer support costs.

"We are on this mission of building a world-class airline with an Indian heart. To accomplish that goal, we are becoming an AI-infused company, and our collaboration with Microsoft is making that happen." — Dr. Satya Ramaswamy, Chief Digital and Technology Officer, Air India

In this customer case, the Azure-powered AI platform also supports Air India customers in other innovative ways. Travelers can save time by scanning visas and passports during web check-in, and then scan baggage tags to track their bags throughout their journeys. The platform's voice recognition also enables analysis of live contact center conversations for quality assurance, training, and improvement.

Inspiration #2: Personalize customer experience

Organizations can now use AI models to present personalized content, products, or services to users based on multimodal user inputs from text, images, and speech, grounded on a deep understanding of their customer profiles. Common solutions we have seen include conversational shopping interfaces, image searches for products, product recommenders, and customized content delivery for each customer. In these cases, product discovery is improved by searching data semantically, and as a result, personalized search and discovery improve engagement, customer satisfaction, and retention.

Three areas are critical to consider when implementing such solutions. First, your development team should examine the ability to integrate multiple data types (e.g., user profiles, real-time inventory data, store sales data, and social data). Second, during testing, ensure that pre-trained AI models can handle multi-modal inputs and can learn from user data to deliver personalized results. Lastly, your cloud administrator should implement scalability measures to meet variable user demands.
ASOS, a global online fashion retailer, leveraged Azure AI Foundry to revolutionize its customer experience by creating an AI-powered virtual stylist that could engage with customers and help them discover new trends.

"Having a conversational interface option gets us closer to our goals of fully engaging the customer and personalizing their experience by showing them the most relevant products at the most relevant time." — Cliff Cohen, Chief Technology Officer, ASOS

In this customer case, Azure AI Foundry enabled ASOS to rapidly develop and deploy its intelligent app, integrating natural language processing and computer vision capabilities. This solution takes advantage of Azure's ability to support cutting-edge AI applications in the retail sector, driving business growth and customer satisfaction.

Inspiration #3: Accelerate product innovation

Building customer-facing custom copilots has the promise of providing enhanced services to your customers. This is typically achieved by using AI to provide data-driven insights that facilitate personalized or unique customer interactions, and to give customers access to a wider range of information while improving search queries and making data more accessible. You can check out a sample architecture for building your copilot below, where requests are answered in near real-time by the AI agent.

DocuSign, a leader in e-signature solutions with 1.6 million global customers, pioneered an entirely new category of agreement management designed to streamline workflows and created Docusign Intelligent Agreement Management (IAM). The IAM platform uses a sophisticated multi-database architecture to efficiently manage various aspects of agreement processing and management. At the heart of the IAM platform is Azure AI, which automates manual tasks and processes agreements using machine learning models.
"We needed to transform how businesses worked with a new platform. With Docusign Intelligent Agreement Management, built with Microsoft Azure, we help our customers create, commit to, manage, and act on agreements in real-time." — Kunal Mukerjee, VP, Technology Strategy and Architecture, Docusign

The workflow begins with agreement data stored in an Azure SQL Database, which is then transferred through an ingestion pipeline to Navigator, an intelligent agreements repository. In addition, the Azure SQL Database Hyperscale service tier serves as the primary transactional engine, providing virtually unlimited storage capacity and the ability to scale compute and storage resources independently.

Inspiration #4: Optimize employee workflows

With AI-powered apps, businesses can organize unstructured data to streamline document and information management, leverage natural language processing to create a conversational search experience for employees, provide more contextual information to increase workplace productivity, and summarize data for further analysis. Increasingly, we have seen solutions such as employee chatbots for HR, professional services assistants (legal/tax/audit), analytics and reporting agents, contact center agent assistants, and employee self-service and knowledge management (IT) centers.

Note that adequate prompt engineering training can improve employee queries, and your team should examine the capability of integrating the copilot with other internal workloads. Lastly, make sure your organization implements continuous innovation and delivery mechanisms to support new internal resources and optimize chatbot dialogs.
Improving the lives of clinicians and patients

Medigold Health, one of the United Kingdom's leading occupational health service providers, migrated applications to Azure OpenAI Service, with Azure Cosmos DB for logging and Azure SQL Database for data storage. By automating clinician processes, including report generation, the company achieved a 58% rise in clinician retention and greater job satisfaction. With Azure App Service, Medigold Health was also able to quickly and efficiently deploy and manage web applications, enhancing the company's ability to respond to client and clinician needs.

"We knew with Microsoft and moving our AI workloads to Azure, we'd get the expert support, plus scalability, security, performance, and resource optimization we needed." — Alex Goldsmith, CEO, Medigold Health

Inspiration #5: Prevent fraud and detect anomalies

Increasingly, organizations leverage AI to identify suspicious financial transactions, false account chargebacks, fraudulent insurance claims, digital theft, unauthorized account access or account takeover, network intrusions or malware attacks, and false product or content reviews. If your company can use similar designs, take a look at the sample architecture for building an interactive fraud analysis app: transactional data lands in Azure Cosmos DB and is available for analytics in real-time (HTAP) using Synapse Link. All the other financial transactions, such as stock trading data, claims, and other documents, are integrated with Microsoft Fabric using Azure Data Factory. This setup allows analysts to see real-time fraud alerts on a custom dashboard. Generative AI here uses RAG, with Azure OpenAI Service as the LLM and Azure AI Search as the retrieval system.
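To make the fraud-alerting idea concrete, here is a deliberately simple anomaly check: flag a transaction whose amount deviates sharply from the account's history. It stands in for the ML models a real platform would run; the z-score threshold and sample data are assumptions for illustration.

```python
# Flag a transaction as suspicious when its z-score against the
# account's historical amounts exceeds a threshold.
from statistics import mean, stdev

def flag_suspicious(history: list[float], new_amount: float,
                    z_threshold: float = 3.0) -> bool:
    """Return True when new_amount is a statistical outlier vs. history."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > z_threshold

history = [20.0, 35.0, 25.0, 30.0, 22.0]
print(flag_suspicious(history, 28.0))   # typical amount -> not flagged
print(flag_suspicious(history, 900.0))  # extreme outlier -> flagged
```

In the architecture above, a check like this would run against the HTAP-replicated transaction stream, with flagged events surfaced on the analysts' dashboard.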
Fighting financial crimes in the gaming world

Kinectify, an anti-money laundering (AML) risk management technology company, built its scalable, robust, Microsoft Azure-powered AML platform with a seamless combination of Azure Cosmos DB, Azure AI Services, Azure Kubernetes Service, and the broader capabilities of Azure cloud services.

"We needed to choose a platform that provided best-in-class security and compliance due to the sensitive data we require and one that also offered best-in-class services as we didn't want to be an infrastructure hosting company. We chose Azure because of its scalability, security, and the immense support it offers in terms of infrastructure management." — Michael Calvin, CTO, Kinectify

With the new solutions in place, Kinectify detects 43% more suspicious activities, reaches decisions 96% faster, and continues to reliably handle a high volume of transactions while identifying patterns, anomalies, and suspicious activity.

Inspiration #6: Unlock organizational knowledge

We have seen companies build intelligent apps to surface insights from vast amounts of data and make them accessible through natural language interactions. Teams can analyze conversations for keywords to spot trends and better understand their customers. Common use cases include knowledge extraction and organization, trend and sentiment analysis, curated content summaries, automated reports, and research generation. Below is a sample architecture for enterprise search and knowledge mining.

H&R Block, the trusted tax preparation company, envisioned using generative AI to create an easy, seamless process that answers filers' tax questions, maintains safeguards to ensure accuracy, and minimizes the time to file.
Valuing Microsoft's leadership in security and AI and the longstanding collaboration between the two companies, H&R Block selected Azure AI Foundry and Azure OpenAI Service to build a new solution on the H&R Block platform to provide real-time, reliable tax filing assistance. By building an intelligent app that automates the extraction of key data from tax documents, H&R Block reduced the time and manual effort involved in document handling. The AI-driven solution significantly increased accuracy while speeding up the overall tax preparation process.

"We conduct about 25 percent of our annual business in a matter of days." — Aditya Thadani, Vice President, H&R Block

Through Azure's intelligent services, H&R Block modernized its operations, improving both productivity and client service while classifying more than 30 million tax documents a year. The solution has allowed the company to handle more clients with greater efficiency, providing a faster, more accurate tax filing experience.

Inspiration #7: Automate document processing

Document intelligence through AI applications helps human counterparts classify, extract, summarize, and gain deeper insights with natural language prompts. When adopting this approach, organizations should also prioritize identifying which tasks to automate, streamline employee access to historical data, and refine downstream workloads to leverage summarized data. Here is a sample architecture for large document summarization.

Volvo Group, one of the world's leading manufacturers of trucks, buses, construction equipment, and marine and industrial engines, streamlined invoice and claims processing, saving over 10,000 manual hours with the help of Microsoft Azure AI services and Azure AI Document Intelligence.
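Large-document summarization architectures like the one referenced above commonly follow a map-reduce shape: split the document into chunks that fit the model's context window, summarize each chunk, then summarize the combined partial summaries. Below is a minimal sketch of that shape; the chunk size is arbitrary and summarize_chunk is a stub standing in for a real model call.

```python
# Map-reduce summarization sketch: chunk -> summarize each chunk (map)
# -> summarize the combined partials (reduce). The "summarizer" here
# just takes the first sentence, purely for illustration.

def chunk(text: str, max_words: int = 50) -> list[str]:
    """Split text into word-bounded chunks of at most max_words words."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def summarize_chunk(text: str) -> str:
    """Stub: a real implementation would call the model."""
    return text.split(".")[0].strip() + "."

def summarize_document(text: str) -> str:
    partials = [summarize_chunk(c) for c in chunk(text)]  # map step
    if len(partials) == 1:
        return partials[0]
    return summarize_chunk(" ".join(partials))            # reduce step

doc = "Invoices are validated on receipt. " * 40
print(summarize_document(doc))
```

The same two-level structure scales to documents far larger than any single context window, which is why it shows up so often in document-processing pipelines.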
"We chose Microsoft Azure AI primarily because of the advanced capabilities offered, especially with AI Document Intelligence." — Malladi Kumara Datta, RPA Product Owner, Volvo Group

Since launch, the company has saved 10,000 manual hours, roughly 850-plus manual hours per month.

Inspiration #8: Accelerate content delivery

Using generative AI, your new applications can automate the creation of web or mobile content, such as product descriptions for online catalogs or visual campaign assets based on marketing narratives, accelerating time to market. It also enables faster iteration and A/B testing to identify the descriptions that resonate best with customers. This pattern generates text or image content based on conversational user input. It combines the capabilities of Image Generation and Text Generation, and the generated content may be personalized to the user; data may be read from a variety of data sources, including a Storage Account, Azure Cosmos DB, Azure Database for PostgreSQL, or Azure SQL.

JATO Dynamics, a global supplier of automotive business intelligence operating in more than 50 countries, developed Sales Link with Azure OpenAI Service, which now helps dealerships quickly produce tailored content by combining market data and vehicle information, saving customers 32 hours per month.

"Data processed through Azure OpenAI Service remains within Azure. This is critical for maintaining the privacy and security of dealer data and the trust of their customers." — Derek Varner, Head of Software Engineering, JATO Dynamics

In addition to Azure OpenAI, JATO Dynamics used Azure Cosmos DB to manage data from millions of transactions across 55 car brands. The database service also provides scalability and quick access to vehicle and dealer transaction data, giving Sales Link a reliable foundation.
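The A/B-testing loop mentioned above, generate several candidate descriptions, serve them, and keep the one that converts best, reduces to a small selection step once the metrics are collected. The variants and numbers below are made-up illustrative data.

```python
# Pick the winning content variant by conversion rate.
# results maps variant text -> (conversions, impressions).

def best_variant(results: dict[str, tuple[int, int]]) -> str:
    """Return the variant with the highest conversions/impressions ratio."""
    return max(results, key=lambda v: results[v][0] / results[v][1])

results = {
    "Durable trail shoe for all weather": (45, 1000),
    "Lightweight shoe built for speed": (61, 1000),
    "Classic comfort, everyday wear": (38, 1000),
}
print(best_variant(results))
```

In practice the generation side would produce the candidate texts and this selection would feed back into which prompts or templates get reused, closing the iteration loop.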
Closing thoughts

From innovative solutions to heartwarming successes, it's clear that a community of AI pioneers is transforming business and customer experiences. Let's continue to push boundaries, embrace creativity, and celebrate every achievement along the way. Here's to many more stories of success and innovation!

Want to be certified as an Azure AI Engineer? Start preparing with this Microsoft Curated Learning Plan.