Azure Resiliency: Proactive Continuity with Agentic Experiences and Frontier Innovation
Introduction

In today’s digital-first world, even brief downtime can disrupt revenue, reputation, and operations. Azure’s new resiliency capabilities empower organizations to anticipate and withstand disruptions, embedding continuity into every layer of their business.

At Microsoft Ignite, we’re unveiling a new era of resiliency in Azure, powered by agentic experiences. The new Azure Copilot resiliency agent brings AI-driven workflows that proactively detect vulnerabilities, automate backups, and integrate cyber recovery for ransomware protection. IT teams can instantly assess risks and deploy solutions across infrastructure, data, and cyber recovery, making resiliency a living capability, not just a checklist.

The Evolution from Azure Business Continuity Center to Resiliency in Azure

Microsoft is excited to announce that the Azure Business Continuity Center (ABCC) is evolving into resiliency capabilities in Azure. This evolution expands its scope from traditional backup and disaster recovery to a holistic resiliency framework. The new experience is delivered directly in the Azure Portal, providing integrated dashboards, actionable recommendations, and one-click access to remediation, so teams can manage resiliency where they already operate. Learn more at aka.ms/Ignite2025/Resiliencyblog. To see the new experience, visit the Azure Portal.

The Three Pillars of Resiliency

Azure’s resiliency strategy is anchored in three foundational pillars, each designed to address a distinct dimension of operational continuity:

- Infrastructure Resiliency: Built-in redundancy and zonal/regional management keep workloads running during disruptions. The resiliency agent in Azure Copilot automates posture checks, risk detection, and remediation.
- Data Resiliency: Automated backup and disaster recovery meet RPO/RTO and compliance needs across Azure, on-premises, and hybrid environments.
- Cyber Recovery: Isolated recovery vaults, immutable backups, and AI-driven insights defend against ransomware and enable rapid restoration.

With these foundational pillars in place, organizations can adopt a lifecycle approach to resiliency, ensuring continuity from day one and adapting as their needs evolve.

The Lifecycle Approach: Start Resilient, Get Resilient, Stay Resilient

While the pillars define what resiliency protects, the lifecycle stages of the resiliency journey define how organizations implement and sustain it over time. For the full framework, see the prior blog; below we focus on what’s new and practical. The resiliency agent in Azure Copilot empowers organizations to embed resiliency at every stage of their cloud journey, making proactive continuity achievable from day one and sustainable over time.

Start Resilient: With the new resiliency agent, teams can “Start Resilient” by leveraging guided experiences and automated posture assessments that help design resilient workloads before deployment. The agent surfaces architecture gaps, validates readiness, and recommends best practices, ensuring resiliency is built in from the outset, not bolted on later.

Get Resilient: As organizations scale, the resiliency agent enables them to “Get Resilient” by providing estate-wide visibility, automated risk assessments, and configuration recommendations. AI-driven insights help identify blind spots, remediate risks, and accelerate the adoption of resilient-by-default architectures, so resiliency is actively achieved across all workloads, not just planned.

Stay Resilient: To “Stay Resilient,” the resiliency agent delivers continuous validation, monitoring, and improvement. Automated failure simulations, real-time monitoring, and attestation reporting allow teams to proactively test recovery workflows and ensure readiness for evolving threats.
One-click failover and ongoing posture checks help sustain compliance and operational continuity, making resiliency a living capability that adapts as your business and technology landscape changes.

Best Practices for Proactive Continuity in Resiliency

To enable proactive continuity, organizations should:

- Architect for high availability across multiple availability zones and regions (prioritize Tier-0/1 workloads).
- Automate recovery with Azure Site Recovery and failover playbooks for orchestrated, rapid restoration.
- Leverage integrated zonal resiliency experiences to uncover blind spots and receive tailored recommendations.
- Continuously validate using Azure Chaos Studio to simulate outages and test recovery workflows.
- Monitor SLAs, RPO/RTO, and posture metrics with Azure Monitor and Azure Policy; iterate for ongoing improvement.
- Use the Azure Copilot resiliency agent for AI-driven posture assessments, remediation scripts, and cost analysis to streamline operations.

Conclusion & Next Steps

Resiliency capabilities in Azure unify infrastructure, data, and cyber recovery while guiding organizations to start, get, and stay resilient. This marks a fundamental shift from reactive recovery to proactive continuity. By embedding resiliency as a living capability, Azure empowers organizations to anticipate, withstand, and recover from disruptions, adapting to new threats and evolving business needs. Organizations adopting resiliency in Azure see measurable impact:

- Accelerated posture improvement with AI-driven insights and actionable recommendations.
- Less manual effort through automation and integrated recovery workflows.
- Continuous operational continuity via ongoing validation and monitoring.

Ready to take the next step?
Explore these resources and sessions:

- Resiliency in Azure (Portal)
- Resiliency in Azure (Learn Docs)
- Agents (preview) in Azure Copilot
- Resiliency Solutions
- Reliability Guides by Service
- Azure Essentials
- Azure Accelerate
- Ignite Announcement

Key Ignite 2025 Sessions to Watch:

- Resilience by Design: Secure, Scalable, AI-Ready Cloud with Azure (BRK217)
- Resiliency & Recovery with Azure Backup and Site Recovery (BRK146)
- Architect Resilient Apps with Azure Backup and Reliability Features (BRK148)
- Architecting for Resiliency on Azure Infrastructure (BRK178)

All sessions are available on demand, perfect for catching up or sharing with your team. Browse the full session catalog and start building resiliency by default today.

Accelerating HPC and EDA with Powerful Azure NetApp Files Enhancements
High-Performance Computing (HPC) and Electronic Design Automation (EDA) workloads demand uncompromising performance, scalability, and resilience. Whether you're managing petabyte-scale datasets or running compute-intensive simulations, Azure NetApp Files delivers the agility and reliability needed to innovate without limits.

Building AI Agents: Workflow-First vs. Code-First vs. Hybrid
AI Agents are no longer just a developer’s playground. They’re becoming essential for enterprise automation, decision-making, and customer engagement. But how do you build them? Do you go workflow-first with drag-and-drop designers, code-first with SDKs, or adopt a hybrid approach that blends both worlds?

In this article, I’ll walk you through the landscape of AI Agent design. We’ll look at workflow-first approaches with drag-and-drop designers, code-first approaches using SDKs, and hybrid models that combine both. The goal is to help you understand the options and choose the right path for your organization.

Why AI Agents Need Orchestration

Before diving into tools and approaches, let’s talk about why orchestration matters. AI Agents are not just single-purpose bots anymore. They often need to perform multi-step reasoning, interact with multiple systems, and adapt to dynamic workflows. Without orchestration, these agents can become siloed and fail to deliver real business value. Here’s what I’ve observed as the key drivers for orchestration:

- Complexity of Enterprise Workflows: Modern business processes involve multiple applications, data sources, and decision points. AI Agents need a way to coordinate these steps seamlessly.
- Governance and Compliance: Enterprises require control over how AI interacts with sensitive data and systems. Orchestration frameworks provide guardrails for security and compliance.
- Scalability and Maintainability: A single agent might work fine for a proof of concept, but scaling to hundreds of workflows requires structured orchestration to avoid chaos.
- Integration with Existing Systems: AI Agents rarely operate in isolation. They need to plug into ERP systems, CRMs, and custom apps. Orchestration ensures these integrations are reliable and repeatable.

In short, orchestration is the backbone that turns AI Agents from clever prototypes into enterprise-ready solutions.

Behind the Scenes

I’ve always been a pro-code guy.
I started my career on open-source coding in Unix and hardly touched the mouse. Then I discovered Visual Studio, and it completely changed my perspective. It showed me the power of a hybrid approach, the best of both worlds. That said, I won’t let my experience bias your ideas of what you’d like to build. This blog is about giving you the full picture so you can make the choice that works best for you.

Workflow-First Approach

Workflow-first platforms are more than visual designers, and not just about drag-and-drop simplicity. They represent a design paradigm where orchestration logic is abstracted into declarative models rather than imperative code. These tools allow you to define agent behaviors, event triggers, and integration points visually, while the underlying engine handles state management, retries, and scaling. For architects, this means faster prototyping with governance baked into the platform. For developers, it offers extensibility through connectors and custom actions without sacrificing enterprise-grade reliability.

Copilot Studio

Building conversational agents becomes intuitive with a visual designer that maps prompts, actions, and connectors into structured flows. Copilot Studio makes this possible by integrating enterprise data and enabling agents to automate tasks and respond intelligently without deep coding.

Building AI Agents using Copilot Studio:
- Design conversation flows with adaptive prompts
- Integrate Microsoft Graph for contextual responses
- Add AI-driven actions using Copilot extensions
- Support multi-turn reasoning for complex queries
- Enable secure access to enterprise data sources
- Extend functionality through custom connectors

Logic Apps

Adaptive workflows and complex integrations are handled through a robust orchestration engine. Logic Apps introduces Agent Loop, allowing agents to reason iteratively, adapt workflows, and interact with multiple systems in real time.
Building AI Agents using Logic Apps:
- Implement Agent Loop for iterative reasoning
- Integrate Azure OpenAI for goal-driven decisions
- Access 1,400+ connectors for enterprise actions
- Support human-in-the-loop for critical approvals
- Enable multi-agent orchestration for complex tasks
- Provide observability and security for agent workflows

Power Automate

Multi-step workflows can be orchestrated across business applications using AI Builder models or external AI APIs. Power Automate enables agents to make decisions, process data, and trigger actions dynamically, all within a low-code environment.

Building AI Agents using Power Automate:
- Automate repetitive tasks with minimal effort
- Apply AI Builder for predictions and classification
- Call Azure OpenAI for natural language processing
- Integrate with hundreds of enterprise connectors
- Trigger workflows based on real-time events
- Combine flows with human approvals for compliance

Azure AI Foundry

Visual orchestration meets pro-code flexibility through Prompt Flow and Connected Agents, enabling multi-step reasoning flows while allowing developers to extend capabilities through SDKs. Azure AI Foundry is ideal for scenarios requiring both agility and deep customization.

Building AI Agents using Azure AI Foundry:
- Design reasoning flows visually with Prompt Flow
- Orchestrate multi-agent systems using Connected Agents
- Integrate with VS Code for advanced development
- Apply governance and deployment pipelines for production
- Use Azure OpenAI models for adaptive decision-making
- Monitor workflows with built-in observability tools

Microsoft Agent Framework (Preview)

I’ve been exploring the Microsoft Agent Framework (MAF), an open-source foundation for building AI agents that can run anywhere. It integrates with Azure AI Foundry and Azure services, enabling multi-agent workflows, advanced memory services, and visual orchestration. With public preview live and GA coming soon, MAF is shaping how we deliver scalable, flexible agentic solutions.
Enterprise-scale orchestration is achieved through graph-based workflows, human-in-the-loop approvals, and observability features. The Microsoft Agent Framework lays the foundation for multi-agent systems that are durable and compliant.

Building AI Agents using Microsoft Agent Framework:
- Coordinate multiple specialized agents in a graph
- Implement durable workflows with pause and resume
- Support human-in-the-loop for controlled autonomy
- Integrate with Azure AI Foundry for hosting and governance
- Enable observability through OpenTelemetry integration
- Provide SDK flexibility for custom orchestration patterns

Visual-first platforms make building AI Agents feel less like coding marathons and more like creative design sessions. They’re perfect for those scenarios when you’d rather design than debug, and they still leave room to dive deeper when complexity calls.

Pro-Code Approach

Remember how I started as a pro-code developer early in my career and later embraced a hybrid approach? I’ll try to stay neutral here as we explore the pro-code world. Pro-code frameworks offer integration with diverse ecosystems, multi-agent coordination, and fine-grained control over logic. While workflow-first and pro-code approaches both provide these capabilities, the difference lies in how they balance factors such as ease of development, ease of maintenance, time to deliver, monitoring capabilities, and other non-functional requirements. Choosing the right path often depends on which of these trade-offs matter most for your scenario.

LangChain

When I first explored LangChain, it felt like stepping into a developer’s playground for AI orchestration. I could stitch together prompts, tools, and APIs like building blocks, and I enjoyed the flexibility. It reminded me why pro-code approaches appeal to those who want full control over logic and integration with diverse ecosystems.
Building AI Agents using LangChain:
- Define custom chains for multi-step reasoning (it is called Lang“Chain”, after all)
- Integrate external APIs and tools for dynamic actions
- Implement memory for context-aware conversations
- Support multi-agent collaboration through orchestration patterns
- Extend functionality with custom Python modules
- Deploy agents across cloud environments for scalability

Semantic Kernel

I’ve worked with Semantic Kernel when I needed more control over orchestration logic, and what stood out was its flexibility. It provides both .NET and Python SDKs, which makes it easy to combine natural language prompts with traditional programming logic. I found the planners and skills especially useful for breaking down goals into smaller steps, and connectors helped integrate external systems without reinventing the wheel.

Building AI Agents using Semantic Kernel:
- Create semantic functions for prompt-driven tasks
- Use planners for dynamic goal decomposition
- Integrate plugins for external system access
- Implement memory for persistent context across sessions
- Combine AI reasoning with deterministic code logic
- Enable observability and telemetry for enterprise monitoring

Microsoft Agent Framework (Preview)

Although I introduced MAF in the earlier section, its SDK-first design makes it equally relevant here, and I’ll probably mention it again in the hybrid section. The Agent Framework is designed for developers who need full control over multi-agent orchestration. It provides a pro-code approach for defining agent behaviors, implementing advanced coordination patterns, and integrating enterprise-grade observability.
Building AI Agents using Microsoft Agent Framework:
- Define custom orchestration logic using SDK APIs
- Implement graph-based workflows for multi-agent coordination
- Extend agent capabilities with custom code modules
- Apply durable execution patterns with pause and resume
- Integrate OpenTelemetry for detailed monitoring and debugging
- Securely host and manage agents through Azure AI Foundry integration

Hybrid Approach and Decision Framework

I’ve always been a fan of both worlds: the flexibility of pro-code and the simplicity of drag-and-drop IDEs and GUIs. A hybrid approach is not about picking one over the other; it’s about balancing them. In practice, this means combining the speed and governance of workflow-first platforms with the extensibility and control of pro-code frameworks. Hybrid design shines when you need agility without sacrificing depth.

For example, I can start with Copilot Studio to build a conversational agent using its visual designer. But if the scenario demands advanced logic or integration, I can call an Azure Function for custom processing, trigger a Logic Apps workflow for complex orchestration, or even invoke the Microsoft Agent Framework for multi-agent coordination. This flexibility delivers the best of both worlds: low-code for rapid development (remember RAD?) and pro-code for enterprise-grade customization with complex logic or integrations.

Why go Hybrid:
- Balance speed and control: Rapid prototyping with workflow-first tools, deep customization with code.
- Extend functionality: Call APIs, Azure Functions, or SDK-based frameworks from visual workflows.
- Optimize for non-functional requirements: Address maintainability, monitoring, and scalability without compromising ease of development.
- Enable interoperability: Combine connectors, plugins, and open standards for diverse ecosystems.
- Support multi-agent orchestration: Integrate workflow-driven agents with pro-code agents for complex scenarios.
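To make the hybrid pattern concrete, here is a minimal, framework-free Python sketch of the idea: a declarative step list plays the role of the visual workflow designer, and an ordinary function plays the role of an Azure Function called out for custom logic. The names and routing rules are illustrative assumptions, not any product's API.

```python
# Conceptual sketch of the hybrid pattern: a declarative workflow definition
# (stand-in for a low-code designer) hands one step off to custom code
# (stand-in for an Azure Function). All names here are hypothetical.

def classify_request(state):
    # Pro-code step: logic too specialized for a visual designer.
    text = state["message"].lower()
    state["intent"] = "escalate" if "refund" in text else "self_service"
    return state

def acknowledge(state):
    # Simple step a low-code platform would typically handle natively.
    state["reply"] = f"Routing your request via {state['intent']}."
    return state

# Declarative step list: the "workflow-first" half of the hybrid.
WORKFLOW = [classify_request, acknowledge]

def run_workflow(message):
    state = {"message": message}
    for step in WORKFLOW:  # the engine executes steps in order
        state = step(state)
    return state

result = run_workflow("I want a refund for my order")
print(result["intent"])  # escalate
```

The design point is that the step list stays readable and governable (the low-code half), while any step can drop into arbitrary code when the scenario demands it (the pro-code half).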
The hybrid approach for building AI Agents is not just a technical choice but a design philosophy. When I need rapid prototyping or business automation, workflow-first is my choice. For multi-agent orchestration and deep customization, I go with code-first. Hybrid makes sense for regulated industries and large-scale deployments where flexibility and compliance are critical. The choice isn’t binary; it’s strategic.

I’ve worked with both workflow-first tools like Copilot Studio, Power Automate, and Logic Apps, and pro-code frameworks such as LangChain, Semantic Kernel, and the Microsoft Agent Framework. Each approach has its strengths, and the decision often comes down to what matters most for your scenario. If rapid prototyping and business automation are priorities, workflow-first platforms make sense. When multi-agent orchestration, deep customization, and integration with diverse ecosystems are critical, pro-code frameworks give you the flexibility and control you need. Hybrid approaches bring both worlds together for regulated industries and large-scale deployments where governance, observability, and interoperability cannot be compromised. Understanding these trade-offs will help you create AI Agents that work so well, you’ll wonder if they’re secretly applying for your job!

About the Author

Pradyumna (Prad) Harish is a technology leader in the WW GSI Partner Organization at Microsoft. He has 26 years of experience in Product Engineering, Partner Development, Presales, and Delivery.
He is responsible for revenue growth through Cloud, AI, Cognitive Services, ML, Data & Analytics, Integration, DevOps, Open-Source Software, Enterprise Architecture, IoT, Digital strategies, and other innovative areas for business generation and transformation, achieving revenue targets through extensive experience managing global functions, global accounts, products, and solution architects across more than 26 countries.

Selecting the Right Agentic Solution on Azure – Part 2 (Security)
Let’s pick up from where we left off in the previous post, Selecting the Right Agentic Solution on Azure - Part 1. Earlier, we explored a decision tree to help identify the most suitable Azure service for building your agentic solution. Following that discussion, we received several requests to dive deeper into the security considerations for each of these services. In this post, we’ll examine the security aspects of each option, one by one. Before taking the security perspective, I highly recommend reviewing the list of Azure AI Services technologies made available by Microsoft; it covers the services that were part of the erstwhile Cognitive Services as well as the latest additions.

Workflows with AI Agents and Models in Azure Logic Apps (Preview)

This approach focuses on running your agents as an action, or as part of an “agent loop” with multiple actions, within Azure Logic Apps. It’s important not to confuse this with the alternative setup, where Azure Logic Apps integrates with AI agents in the Foundry Agent Service, either as a tool or as a trigger (announcement: Power your Agents in Azure AI Foundry Agent Service with Azure Logic Apps | Microsoft Community Hub). In that scenario, your agents are hosted under the Azure AI Foundry Agent Service, which we’ll discuss separately below. To create an agent workflow, you’ll need to establish a connection, either to Azure OpenAI or to an Azure AI Foundry project for connecting to a model. When connected to a Foundry project, you can view agents and threads directly within that project’s lists. Since agents here run as Logic Apps actions, their security is governed by the Logic Apps security framework. Let’s look at the key aspects:

Easy Auth or App Service Auth (Preview): Agent workflows often integrate with a broad range of systems: models, MCPs, APIs, agents, and even human interactions.
You can secure these workflows using Easy Auth, which integrates with Microsoft Entra ID for authentication and authorization. Read more here: Protect Agent Workflows with Easy Auth - Azure Logic Apps | Microsoft Learn.

Securing and encrypting data at rest: Azure Logic Apps stores data in Azure Storage, which uses Microsoft-managed keys for encryption by default. You can further enhance security by:
- Restricting access to Logic App operations via Azure RBAC
- Limiting access to run history data
- Securing inputs and outputs
- Controlling parameter access for webhook-triggered workflows
- Managing outbound call access to external services
More info here: Secure access and data in workflows - Azure Logic Apps | Microsoft Learn.

Securing data in transit: When exposing your Logic App as an HTTP(S) endpoint, consider using:
- Azure API Management for access policies and documentation
- Azure Application Gateway or Azure Front Door for WAF (Web Application Firewall) protection

I highly recommend the labs provided by the Logic Apps product group to learn more about agentic workflows: https://azure.github.io/logicapps-labs/docs/intro.

Azure AI Foundry Agent Service

As of this writing, the Azure AI Foundry Agent Service abstracts the underlying infrastructure where your agents run. Microsoft manages this secure environment, so you don’t need to handle compute, network, or storage resources, though bring-your-own-storage is an option.

Securing and encrypting data at rest: Microsoft guarantees that your prompts and outputs remain private, never shared with other customers or AI providers (such as OpenAI or Meta). Data from messages, threads, runs, and uploads is encrypted using AES-256 and remains stored in the same region where the Agent Service is deployed. You can optionally use customer-managed keys (CMK) for encryption. Read more here: Data, privacy, and security for Azure AI Agent Service - Azure AI Services | Microsoft Learn.
Network security: The service allows integration with your private virtual network using a private endpoint. Note that there are known limitations, such as subnet IP restrictions, the need for a dedicated agent subnet, same-region requirements, and limited regional availability. Read more here: How to use a virtual network with the Azure AI Foundry Agent Service - Azure AI Foundry | Microsoft Learn.

Securing data in transit: Upcoming enhancements include API Management support (soon in public preview) for AI APIs, including model APIs, tool APIs/MCP servers, and agent APIs. There is also a great article about using APIM to safeguard the HTTP APIs exposed by Azure OpenAI that let your applications perform embeddings or completions using OpenAI’s language models.

Agent Orchestrators

We’ve introduced the Agent Framework, which succeeds both AutoGen and Semantic Kernel; according to the product group, it combines the best capabilities of both predecessors. Support for Semantic Kernel, and documentation for AutoGen, will remain available for some time to allow users to transition smoothly to the new framework. When discussing the security aspects of agent orchestrators, it’s important to note that these considerations also extend to the underlying services hosting them, whether on AKS or Container Apps. However, this discussion will not focus on the security features of those hosting environments, as comprehensive resources already exist for them. Instead, we’ll focus on common security concerns applicable across different orchestrators, including AutoGen, Semantic Kernel, and frameworks such as LlamaIndex, LangGraph, and LangChain. Key areas to consider include (but are not limited to):

Secure secrets / key management:
- Avoid hard-coding secrets (e.g., API keys for Foundry, OpenAI, Anthropic, Pinecone, etc.).
- Use secret management solutions such as Azure Key Vault or environment variables.
- Encrypt secrets at rest and enforce strict limits on scope and lifetime.

Access control and least privilege:
- Grant each agent or tool only the minimum required permissions.
- Implement role-based access control (RBAC) and enforce least-privilege principles.
- Use strong authentication (e.g., OAuth2, Microsoft Entra ID) for administrative or tool-level access.
- Restrict the scope of external service credentials (e.g., read-only vs. write) and rotate them regularly.

Isolation / sandboxing:
- Isolate plugin execution and use inter-process separation as needed.
- Prevent user inputs from executing arbitrary code on the host.
- Apply resource limits for model or function execution to mitigate abuse.

Sensitive data protection:
- Encrypt data both at rest and in transit.
- Mask or remove PII before sending data to models.
- Avoid persisting sensitive context unnecessarily.
- Ensure logs and memory do not inadvertently expose secrets or user data.

Prompt and query security:
- Sanitize or escape user input in custom query engines or chat interfaces.
- Protect against prompt injection by implementing guardrails to monitor and filter prompts.
- Set context length limits and use safe output filters (e.g., profanity filters, regex validators).

Observability, logging, and auditing:
- Maintain comprehensive logs, including tool invocations, agent decisions, and execution paths.
- Continuously monitor for anomalies or unexpected behaviour.

I hope this overview assists you in evaluating and implementing the appropriate security measures for your chosen agentic solution.

Agentic Integration with SAP, ServiceNow, and Salesforce
Copilot/Copilot Studio Integration with SAP (No Code)

By integrating SAP Cloud Identity Services with Microsoft Entra ID, organizations can establish secure, federated identity management across platforms. This configuration enables Microsoft Copilot and Teams to seamlessly connect with SAP’s Joule digital assistant, supporting natural language interactions and automating business processes efficiently.

Key resources, as given in the SAP docs (image courtesy SAP):
- Configuring SAP Cloud Identity Services and Microsoft Entra ID for Joule
- Enable Microsoft Copilot and Teams to Pass Requests to Joule

Copilot Studio Integration with ServiceNow and Salesforce (No Code)

Integration with ServiceNow and Salesforce has two main approaches:

Copilot agents using Copilot Studio: Custom agents can be built in Copilot Studio to interact directly with Salesforce CRM data or ServiceNow knowledge bases and helpdesk tickets. This enables organizations to automate sales and support processes using conversational AI.
- Create a custom sales agent using your Salesforce CRM data (YouTube)
- ServiceNow Connect Knowledge Base + Helpdesk Tickets (YouTube)

3rd-party agents using Copilot for Service agent: Microsoft Copilot can be embedded within Salesforce and ServiceNow interfaces, providing users with contextual assistance and workflow automation directly inside these platforms.
- Set up the embedded experience in Salesforce
- Set up the embedded experience in ServiceNow

MCP or Agent-to-Agent (A2A) Interoperability (Pro Code) (image courtesy SAP)

If you choose a pro-code approach, you can either implement the Model Context Protocol (MCP) in a client/server setup for SAP, ServiceNow, and Salesforce, or leverage existing agents for these third-party services using Agent-to-Agent (A2A) integration. Depending on your requirements, you may use either method individually or combine them. The recently released Azure Agent Framework offers practical examples for both MCP and A2A implementations.
For the detailed SAP reference architecture, illustrating how A2A solutions can be layered on top of SAP systems to enable modular, scalable automation and data exchange, see Agent2Agent Interoperability | SAP Architecture Center.

Logic Apps as Integration Actions

Logic Apps is a key component of the Azure integration platform. Alongside its many other connectors, it provides connectors for all three platforms (SAP, ServiceNow, and Salesforce). Logic Apps can be invoked from a custom agent (as a built-in action in Foundry) or from a Copilot agent. The same can be said for Power Platform/Power Automate.

Conclusion

This article provides a comprehensive overview of how Microsoft Copilot, Copilot Studio, Foundry (via A2A/MCP), and Azure Logic Apps can be combined to deliver robust, agentic integrations with SAP, ServiceNow, and Salesforce. It highlights the importance of secure identity federation, modular agent orchestration, and low-code/pro-code automation in building next-generation enterprise solutions.

Selecting the Right Agentic Solution on Azure - Part 1
Recently, we have seen a surge in requests from customers and Microsoft partners seeking guidance on building and deploying agentic solutions at various scales. With the rise of Generative AI, replacing traditional APIs with agents has become increasingly popular. There are several approaches to building, deploying, running, and orchestrating agents on Azure. In this discussion, I will focus exclusively on Azure-specific tools, services, and methodologies, setting aside Copilot and Copilot Studio for now. This article describes the options available as of today.

1. Azure OpenAI Assistants API: This feature within Azure OpenAI Service enables developers to create conversational agents (“assistants”) based on OpenAI models (such as GPT-3.5 and GPT-4). It supports capabilities like memory, tool/function calls, and retrieval (e.g., document search). However, Microsoft has already deprecated version 1 of the Azure OpenAI Assistants API, and version 2 remains in preview. Microsoft strongly recommends migrating all existing Assistants API-based agents to the Agent Service. Additionally, OpenAI is retiring the Assistants API and advises developers to use the modern Responses API instead (see migration detail). Given these developments, it is not advisable to use the Assistants API for building agents. Instead, use the Azure AI Agent Service, which is part of Azure AI Foundry.

2. Workflows with AI agents and models in Azure Logic Apps (Preview): As the name suggests, this feature is currently in public preview and is only available with Logic Apps Standard, not with the Consumption plan. You can enhance your workflow by integrating agentic capabilities. For example, in a visa-processing workflow, decisions can be made based on priority, application type, nationality, and background checks using a knowledge base. The workflow can then route cases to the appropriate queue and prepare messages accordingly.
Workflows can be exposed either as chat assistants or as APIs. If your project is workflow-driven and you are ready to implement agents declaratively, this is a great option. However, there are currently limited choices for models and regional availability. For CI/CD, there is an Azure Logic Apps Standard template available for VS Code that you can use.

3. Azure AI Agent Service: Part of Azure AI Foundry, the Azure AI Agent Service allows you to provision agents declaratively from the UI. You can consume various OpenAI models (with support for non-OpenAI models coming soon) and leverage important tools and knowledge bases such as files, Azure AI Search, SharePoint, and Fabric. You can connect agents together and create hierarchical agent dependencies. SDKs are available for building agents within the Agent Service using Python, C#, or Java. Microsoft manages the infrastructure to host and run these agents in isolated containers. The service offers role-based access control, Microsoft Entra ID integration, and options to bring your own storage for agent state and your own Azure Key Vault keys. You can also incorporate different actions, including invoking a Logic App instance from your agent, and there is an option to trigger an agent from Logic Apps (preview). Microsoft recommends the Agent Service in Azure AI Foundry as the destination for agents, as further enhancements and investments are focused there.

4. Agent Orchestrators: There are several excellent orchestrators available, such as LlamaIndex, LangGraph, LangChain, and two from Microsoft: Semantic Kernel and AutoGen. These options are ideal if you need full control over agent creation, hosting, and orchestration. They are developer-only solutions and do not offer a UI (barring some UI assistance in AutoGen Studio). You can create complex, multi-layered agent connections, and then host and run these agents on your choice of Azure services, such as AKS or App Service.
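These orchestrators differ in API surface, but the core pattern they all manage is the same: the model proposes a tool call, the host executes it, and the result feeds back into the conversation until a final answer emerges. Below is a framework-free sketch of that loop; the "model" is a stub, and no specific orchestrator's API is used here.

```python
# A minimal, framework-free sketch of the tool-calling loop that orchestrators
# such as Semantic Kernel or LangGraph manage for you. The "model" is stubbed;
# a real orchestrator would call an LLM and parse its tool-call output.
def run_agent_loop(model, tools, user_message, max_turns=5):
    """Drive the propose-tool / execute / feed-back loop until a final answer."""
    history = [{"role": "user", "content": user_message}]
    for _ in range(max_turns):
        reply = model(history)
        if "tool" in reply:  # model asked for a tool invocation
            result = tools[reply["tool"]](**reply.get("args", {}))
            history.append({"role": "tool", "content": str(result)})
        else:  # model produced a final answer
            return reply["content"]
    return "max turns reached"

# Stub "model": asks for the add tool once, then answers using the tool result.
def stub_model(history):
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"content": f"The answer is {history[-1]['content']}"}

answer = run_agent_loop(stub_model, {"add": lambda a, b: a + b}, "What is 2 + 3?")
```

What the frameworks add on top of this skeleton is state management, retries, multi-agent hand-offs, and observability; the loop itself stays conceptually this simple.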
Additionally, you have the option to create agents using the Agent Service and then orchestrate them with one of these orchestrators.

Choosing the Right Solution

The choice of agentic solution depends on several factors, including whether you prefer code or no-code approaches, control over the hosting platform, customer needs, scalability, maintenance, orchestration complexity, security, and cost.

Customer need: If agents need to be part of a workflow, use AI agents in Logic Apps; otherwise, consider other options.
No-code: For workflow-based agents, Logic Apps is suitable; for other scenarios, Azure AI Agent Service is recommended.
Hosting and maintenance: If Logic Apps is not an option and you prefer not to maintain your own environment, use Azure AI Agent Service. Otherwise, consider custom agent orchestrators such as Semantic Kernel or AutoGen to build the agents, and services like AKS or App Service to host them.
Orchestration complexity: For simple hierarchical agent connections, Azure AI Agent Service is a good choice. For complex orchestration, use an agent orchestrator.
Versioning: If you need versioning to support a solid CI/CD regime, you may have to choose an agent orchestrator. The Agent Service does not yet offer clear versioning support; workarounds exist, but they are not robust. A better versioning solution will hopefully arrive soon.

Summary: When selecting the right agentic solution on Azure, consider the latest recommendations and platform developments. For most scenarios, Microsoft advises using the Azure AI Agent Service within Azure AI Foundry, as it is the focus of ongoing enhancements and support. For workflow-driven projects, Azure Logic Apps with agentic capabilities may be suitable, while advanced users can leverage orchestrators for custom agent architectures.
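The decision factors above can be condensed into a small helper function. This is an illustrative sketch that encodes the rules of thumb from this section, not an official decision tool; real selections should also weigh security, scalability, and cost.

```python
# Illustrative only: encodes this section's rules of thumb, not an official
# Microsoft decision tool.
def recommend_agent_platform(workflow_based, complex_orchestration, needs_versioning):
    """Suggest a platform based on the decision factors discussed above."""
    if workflow_based:
        # Agents that must live inside a business workflow
        return "Azure Logic Apps (agentic workflows)"
    if complex_orchestration or needs_versioning:
        # Full control, complex topologies, or CI/CD versioning requirements
        return "Agent orchestrator (e.g., Semantic Kernel) hosted on AKS or App Service"
    # Default recommendation for most scenarios
    return "Azure AI Agent Service (Azure AI Foundry)"
```

For example, a workflow-bound agent maps to Logic Apps, while a standalone agent with strict versioning needs maps to an orchestrator you host yourself.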
In the Selecting the Right Agentic Solution on Azure – Part 2 (Security) blog, we will examine the security aspects of each option, one by one.

Accelerating Enterprise AI Adoption with Azure AI Landing Zone
Introduction

As organizations across industries race to integrate Artificial Intelligence (AI) into their business processes and realize tangible value, one question consistently arises — where should we begin? Customers often wonder:

- What should the first steps in AI adoption look like?
- Should we build a unified, enterprise-grade platform for all AI initiatives?
- Who should guide us through this journey — Microsoft, our partners, or both?

This blog aims to demystify these questions by providing a foundational understanding of the Azure AI Landing Zone (AI ALZ) — a unified, scalable, and secure framework for enterprise AI adoption. It explains how AI ALZ builds on two key architectural foundations — the Cloud Adoption Framework (CAF) and the Well-Architected Framework (WAF) — and outlines an approach to setting up an AI Landing Zone in your Azure environment.

Foundational Frameworks Behind the AI Landing Zone

1.1 Cloud Adoption Framework (CAF)

The Azure Cloud Adoption Framework is Microsoft’s proven methodology for guiding customers through their cloud transformation journey. It encompasses the complete lifecycle of cloud enablement across stages such as Strategy, Plan, Ready, Adopt, Govern, Secure, and Manage. The Landing Zone concept sits within the Ready stage — providing a secure, scalable, and compliant foundation for workload deployment. CAF also defines multiple adoption scenarios, one of which focuses specifically on AI adoption, ensuring that AI workloads align with enterprise cloud governance and best practices.

1.2 Well-Architected Framework (WAF)

The Azure Well-Architected Framework complements CAF by providing detailed design guidance across five key pillars:

- Reliability
- Security
- Cost Optimization
- Operational Excellence
- Performance Efficiency

AI Landing Zones integrate these design principles to ensure that AI workloads are not only functional but also resilient, cost-effective, and secure at enterprise scale.
Understanding Azure Landing Zones

To understand an AI Landing Zone, it’s important to first understand Azure Landing Zones in general. An Azure Landing Zone acts as a blueprint or foundation for deploying workloads in a cloud environment — much like a strong foundation is essential for constructing a building or bridge. Each workload type (SAP, Oracle, CRM, AI, etc.) may require a different foundation, but all share the same goal: to provide a consistent, secure, and repeatable environment built on best practices.

Azure Landing Zones provide:

- A governed, scalable foundation aligned with enterprise standards
- Repeatable, automated deployment patterns using Infrastructure as Code (IaC)
- Integrated security and management controls baked into the architecture

For a more insightful understanding of the Azure Landing Zone architecture, please visit the official documentation linked here and refer to the diagram below.

The Role of Azure AI Foundry in AI Landing Zones

Azure AI Foundry is emerging as Microsoft’s unified environment for enterprise AI development and deployment. It acts as a one-stop platform for building, deploying, and managing AI solutions at scale. Key components include:

- Foundry Model Catalog: A collection of foundation and fine-tuned models
- Agent Service: Enables model selection, tool and knowledge integration, and control over data and security
- Search and Machine Learning Services: Integrated capabilities for knowledge retrieval and ML lifecycle management
- Content Safety and Observability: Ensures responsible AI use and operational visibility
- Compute Options: Customers can choose from various Azure compute services based on control and scalability needs: Azure Kubernetes Service (AKS) for full control, App Service and Azure Container Apps for simplified management, and Azure Functions as a fully serverless option

What Is Azure AI Landing Zone (AI ALZ)?
The Azure AI Landing Zone is a workload-specific landing zone designed to help enterprises deploy AI workloads securely and efficiently in production environments.

Key Objectives of AI ALZ

- Accelerate deployment of production-grade AI solutions
- Embed security, compliance, and resilience from the start
- Enable cost and operational optimization through standardized architecture
- Support repeatable patterns for multiple AI use cases using Azure AI Foundry
- Empower customer-centric enablement with extensibility and modularity

By adopting the AI ALZ, organizations can move faster from proof-of-concept (POC) to production, addressing common challenges such as inconsistent architectures, lack of governance, and operational inefficiencies.

Core Components of AI Landing Zone

The AI ALZ is structured around three major components:

1. Design Framework – Based on the Cloud Adoption Framework (CAF) and Well-Architected Framework (WAF).
2. Reference Architectures – Blueprint architectures for common AI workloads.
3. Extensible Implementations – Deployable through Terraform, Bicep, or (soon) Azure Portal templates using Azure Verified Modules (AVM).

Together, these elements allow customers to quickly deploy a secure, standardized, and production-ready AI environment.

Customer Readiness and Discovery

A common question during early customer engagements is: “Can our existing enterprise-scale landing zone support AI workloads, or do we need a new setup?”

To answer this, organizations should start with a discovery and readiness assessment, reviewing their existing enterprise-scale landing zone across key areas such as:

- Identity and Access Management
- Networking and Connectivity
- Data Security and Compliance
- Governance and Policy Controls
- Compute and Deployment Readiness

Based on this assessment, customers can either extend their existing enterprise-scale foundation, or deploy a dedicated AI workload spoke designed specifically for Azure AI Foundry and enterprise-wide AI enablement.
The attached Excel file contains discovery questions for capturing the customer's current setup and proposing an adoption plan that reflects any required architecture changes.

The Journey Toward AI Adoption

The AI Landing Zone represents the first critical step in an organization’s AI adoption journey. It establishes the foundation for:

- Consistent governance and policy enforcement
- Security and networking standardization
- Rapid experimentation and deployment of AI workloads
- Scalable, production-grade AI environments

By aligning with CAF and WAF, customers can be confident that their AI adoption strategy is architecturally sound, secure, and sustainable.

Conclusion

The Azure AI Landing Zone provides enterprises with a structured, secure, and scalable foundation for AI adoption at scale. It bridges the gap between innovation and governance, enabling organizations to deploy AI workloads faster while maintaining compliance, performance, and operational excellence. By leveraging Microsoft’s proven frameworks — CAF and WAF — and adopting Azure AI Foundry as the unified development platform, enterprises can confidently build the next generation of responsible, production-grade AI solutions on Azure.

Get Started

Ready to start your AI Landing Zone journey? Microsoft can help assess your readiness and accelerate deployment through validated reference implementations and expert-led guidance. To help organizations accelerate deployment, Microsoft has published open-source Azure AI Landing Zone templates and automation scripts in Terraform and Bicep that can be directly used to implement the architecture described in this blog.

👉 Explore and deploy the Azure AI Landing Zone (Preview) on GitHub: https://github.com/Azure/AI-Landing-Zones

How Azure NetApp Files Object REST API powers Azure and ISV Data and AI services – on YOUR data
This article introduces the Azure NetApp Files Object REST API, a transformative solution for enterprises seeking seamless, real-time integration between their data and Azure's advanced analytics and AI services. By enabling direct, secure access to enterprise data—without costly transfers or duplication—the Object REST API accelerates innovation, streamlines workflows, and enhances operational efficiency. With S3-compatible object storage support, it empowers organizations to make faster, data-driven decisions while maintaining compliance and data security. Discover how this new capability unlocks business potential and drives a new era of productivity in the cloud.

Validating Scalable EDA Storage Performance: Azure NetApp Files and SPECstorage Solution 2020
Electronic Design Automation (EDA) workloads drive innovation across the semiconductor industry, demanding robust, scalable, and high-performance cloud solutions to accelerate time-to-market and maximize business outcomes. Azure NetApp Files empowers engineering teams to run complex simulations, manage vast datasets, and optimize workflows by delivering industry-leading performance, flexibility, and simplified deployment—eliminating the need for costly infrastructure overprovisioning or disruptive workflow changes. This leads to faster product development cycles, reduced risk of project delays, and the ability to capitalize on new opportunities in a highly competitive market. In a historic milestone, Azure NetApp Files has been independently validated for EDA workloads through Microsoft's publication of the SPECstorage® Solution 2020 EDA_BLENDED benchmark, providing objective proof of its readiness to meet the most demanding enterprise requirements, now and in the future.

Building a Secure and Compliant Azure AI Landing Zone: Policy Framework & Best Practices
As organizations accelerate their AI adoption on Microsoft Azure, governance, compliance, and security become critical pillars for success. Deploying AI workloads without a structured compliance framework can expose enterprises to data privacy issues, misconfigurations, and regulatory risks. To address this challenge, the Azure AI Landing Zone provides a scalable and secure foundation — bringing together Azure Policy, Blueprints, and Infrastructure-as-Code (IaC) to ensure every resource aligns with organizational and regulatory standards.

The Azure Policy & Compliance Framework acts as the governance backbone of this landing zone. It enforces consistency across environments by applying policy definitions, initiatives, and assignments that monitor and remediate non-compliant resources automatically.

This blog will guide you through:

🧭 The architecture and layers of an AI Landing Zone
🧩 How Azure Policy as Code enables automated governance
⚙️ Steps to implement and deploy policies using IaC pipelines
📈 Visualizing compliance flows for AI-specific resources

What is Azure AI Landing Zone (AI ALZ)?

AI ALZ is a foundational architecture that integrates core Azure services (ML, OpenAI, Cognitive Services) with best practices in identity, networking, governance, and operations. To ensure consistency, security, and responsibility, a robust policy framework is essential.

Policy & Compliance in AI ALZ

Azure Policy helps enforce standards across subscriptions and resource groups. You define policies (single rules), group them into initiatives (policy sets), and assign them with specific scopes and exemptions. Compliance reporting surfaces non-compliant resources for remediation.
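An Azure Policy definition is a JSON document with an if/then policy rule. The sketch below builds one in Python purely for illustration; the display name and property alias are examples for this blog, not an official built-in policy, so verify aliases before real use. It denies storage accounts whose public network access is not disabled, matching the data-protection guardrails in this framework.

```python
import json

# Illustrative sketch of a custom Azure Policy definition as a Python dict.
# The display name and field alias are examples, not an official built-in
# policy; check current aliases (e.g., with `az policy alias` tooling) first.
policy_definition = {
    "properties": {
        "displayName": "Deny storage accounts with public network access",
        "policyType": "Custom",
        "mode": "All",
        "policyRule": {
            "if": {
                "allOf": [
                    {"field": "type",
                     "equals": "Microsoft.Storage/storageAccounts"},
                    {"field": "Microsoft.Storage/storageAccounts/publicNetworkAccess",
                     "notEquals": "Disabled"},
                ]
            },
            "then": {"effect": "Deny"},
        },
    }
}

print(json.dumps(policy_definition, indent=2))
```

Definitions like this are what get grouped into initiatives and assigned at scope; in a policy-as-code setup they live as JSON files in the repository rather than in Python.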
In AI workloads, there are some unique considerations:

- Sensitive data (PII, models)
- Model accountability, logging, audit trails
- Cost & performance from heavy compute usage
- Preview features and frequent updates

Scope

This framework covers:

- Azure Machine Learning (AML)
- Azure API Management
- Azure AI Foundry
- Azure App Service
- Azure Cognitive Services
- Azure OpenAI
- Azure Storage Accounts
- Azure Databases (SQL, Cosmos DB, MySQL, PostgreSQL)
- Azure Key Vault
- Azure Kubernetes Service

Core Policy Categories

1. Networking & Access Control
- Restrict resource deployment to approved regions (e.g., Europe only).
- Enforce private link and private endpoint usage for all critical resources.
- Disable public network access for workspaces, storage, search, and key vaults.

2. Identity & Authentication
- Require user-assigned managed identities for resource access.
- Disable local authentication; enforce Microsoft Entra ID (Azure AD) authentication.

3. Data Protection
- Enforce encryption at rest with customer-managed keys (CMK).
- Restrict public access to storage accounts and databases.

4. Monitoring & Logging
- Deploy diagnostic settings to Log Analytics for all key resources.
- Ensure activity/resource logs are enabled and retained for at least one year.

5. Resource-Specific Guardrails
- Apply built-in and custom policy initiatives for OpenAI, Kubernetes, App Services, Databases, etc.

A detailed list of all policies is bundled and attached at the end of this blog. Be sure to check it out for a ready-to-use Excel file — perfect for customer workshops — which includes policy type (Standalone/Initiative), origin (Built-in/Custom), and more.

Implementation: Policy-as-Code using EPAC

To turn policies from Excel/JSON into operational governance, Enterprise Policy as Code (EPAC) is a powerful tool. EPAC transforms policy artifacts into a desired-state repository and handles deployment, lifecycle, versioning, and CI/CD automation.

What is EPAC & Why Use It?
- EPAC is a set of PowerShell scripts/modules to deploy policy definitions, initiatives, assignments, role assignments, and exemptions (Enterprise Policy As Code (EPAC)).
- It supports CI/CD integration (GitHub Actions, Azure DevOps), so policy changes can be treated like code.
- It handles ordering, dependency resolution, and enforcement of a “desired state” — any policy resources not in your repo may be pruned (depending on configuration).
- It integrates with Azure Landing Zones (including the governance baseline) out of the box.

References & Further Reading

- EPAC GitHub Repository
- Advanced Azure Policy management - Microsoft Learn
- How to deploy Azure policies the DevOps way - Rabobank