Building AI Agents: Workflow-First vs. Code-First vs. Hybrid
AI Agents are no longer just a developer's playground. They're becoming essential for enterprise automation, decision-making, and customer engagement. But how do you build them? Do you go workflow-first with drag-and-drop designers, code-first with SDKs, or adopt a hybrid approach that blends both worlds?

In this article, I'll walk you through the landscape of AI Agent design. We'll look at workflow-first approaches with drag-and-drop designers, code-first approaches using SDKs, and hybrid models that combine both. The goal is to help you understand the options and choose the right path for your organization.

Why AI Agents Need Orchestration

Before diving into tools and approaches, let's talk about why orchestration matters. AI Agents are not just single-purpose bots anymore. They often need to perform multi-step reasoning, interact with multiple systems, and adapt to dynamic workflows. Without orchestration, these agents can become siloed and fail to deliver real business value. Here's what I've observed as the key drivers for orchestration:

- Complexity of enterprise workflows: Modern business processes involve multiple applications, data sources, and decision points. AI Agents need a way to coordinate these steps seamlessly.
- Governance and compliance: Enterprises require control over how AI interacts with sensitive data and systems. Orchestration frameworks provide guardrails for security and compliance.
- Scalability and maintainability: A single agent might work fine for a proof of concept, but scaling to hundreds of workflows requires structured orchestration to avoid chaos.
- Integration with existing systems: AI Agents rarely operate in isolation. They need to plug into ERP systems, CRMs, and custom apps. Orchestration ensures these integrations are reliable and repeatable.

In short, orchestration is the backbone that turns AI Agents from clever prototypes into enterprise-ready solutions.

Behind the Scenes

I've always been a pro-code guy. I started my career in open-source development on Unix and hardly touched the mouse. Then I discovered Visual Studio, and it completely changed my perspective. It showed me the power of a hybrid approach: the best of both worlds. That said, I won't let my experience bias your ideas of what you'd like to build. This blog is about giving you the full picture so you can make the choice that works best for you.

Workflow-First Approach

Workflow-first platforms are more than visual designers, and not just about drag-and-drop simplicity. They represent a design paradigm where orchestration logic is abstracted into declarative models rather than imperative code. These tools allow you to define agent behaviors, event triggers, and integration points visually, while the underlying engine handles state management, retries, and scaling. For architects, this means faster prototyping and governance baked into the platform. For developers, it offers extensibility through connectors and custom actions without sacrificing enterprise-grade reliability.

Copilot Studio

Building conversational agents becomes intuitive with a visual designer that maps prompts, actions, and connectors into structured flows. Copilot Studio makes this possible by integrating enterprise data and enabling agents to automate tasks and respond intelligently without deep coding.
Building AI Agents using Copilot Studio:
- Design conversation flows with adaptive prompts
- Integrate Microsoft Graph for contextual responses
- Add AI-driven actions using Copilot extensions
- Support multi-turn reasoning for complex queries
- Enable secure access to enterprise data sources
- Extend functionality through custom connectors

Logic Apps

Adaptive workflows and complex integrations are handled through a robust orchestration engine. Logic Apps introduces Agent Loop, allowing agents to reason iteratively, adapt workflows, and interact with multiple systems in real time.

Building AI Agents using Logic Apps:
- Implement Agent Loop for iterative reasoning
- Integrate Azure OpenAI for goal-driven decisions
- Access 1,400+ connectors for enterprise actions
- Support human-in-the-loop for critical approvals
- Enable multi-agent orchestration for complex tasks
- Provide observability and security for agent workflows

Power Automate

Multi-step workflows can be orchestrated across business applications using AI Builder models or external AI APIs. Power Automate enables agents to make decisions, process data, and trigger actions dynamically, all within a low-code environment.

Building AI Agents using Power Automate:
- Automate repetitive tasks with minimal effort
- Apply AI Builder for predictions and classification
- Call Azure OpenAI for natural language processing
- Integrate with hundreds of enterprise connectors
- Trigger workflows based on real-time events
- Combine flows with human approvals for compliance

Azure AI Foundry

Visual orchestration meets pro-code flexibility through Prompt Flow and Connected Agents, enabling multi-step reasoning flows while allowing developers to extend capabilities through SDKs. Azure AI Foundry is ideal for scenarios requiring both agility and deep customization.

Building AI Agents using Azure AI Foundry:
- Design reasoning flows visually with Prompt Flow
- Orchestrate multi-agent systems using Connected Agents
- Integrate with VS Code for advanced development
- Apply governance and deployment pipelines for production
- Use Azure OpenAI models for adaptive decision-making
- Monitor workflows with built-in observability tools

Microsoft Agent Framework (Preview)

I've been exploring the Microsoft Agent Framework (MAF), an open-source foundation for building AI agents that can run anywhere. It integrates with Azure AI Foundry and Azure services, enabling multi-agent workflows, advanced memory services, and visual orchestration. With the public preview live and GA coming soon, MAF is shaping how we deliver scalable, flexible agentic solutions. Enterprise-scale orchestration is achieved through graph-based workflows, human-in-the-loop approvals, and observability features. The Microsoft Agent Framework lays the foundation for multi-agent systems that are durable and compliant.

Building AI Agents using Microsoft Agent Framework:
- Coordinate multiple specialized agents in a graph
- Implement durable workflows with pause and resume
- Support human-in-the-loop for controlled autonomy
- Integrate with Azure AI Foundry for hosting and governance
- Enable observability through OpenTelemetry integration
- Provide SDK flexibility for custom orchestration patterns

Visual-first platforms make building AI Agents feel less like coding marathons and more like creative design sessions. They're perfect for those scenarios when you'd rather design than debug and still want the option to dive deeper when complexity calls, for example by handing a prompt to an Azure OpenAI deployment from a custom action (see the sketch below).
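Several of the platforms above ultimately hand the language-model call to an Azure OpenAI deployment (Power Automate, Logic Apps, and Azure AI Foundry all list it). As a reference point, here is a minimal, hedged sketch of what that call might look like inside a custom action or Azure Function using the OpenAI Python SDK; the endpoint, key, API version, and deployment name are placeholders, not values prescribed by any of these products.

```python
# Hedged sketch: a custom action backing a low-code workflow that forwards a
# prompt to an Azure OpenAI deployment. Endpoint, key, API version and
# deployment name come from app settings and are placeholders here.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

def answer(prompt: str) -> str:
    """Single-turn completion the workflow can call as one step."""
    response = client.chat.completions.create(
        model="my-gpt-4o-deployment",  # Azure OpenAI *deployment* name (placeholder)
        messages=[
            {"role": "system", "content": "You are a concise enterprise assistant."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer("Summarize the status of purchase order 12345 in one sentence."))
```

In a workflow-first design, the visual flow passes the user's text into a step like this and consumes the returned string, while retries, approvals, and logging remain in the designer.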
Pro-Code Approach

Remember I told you how I started as a pro-code developer early in my career and later embraced a hybrid approach? I'll try to stay neutral here as we explore the pro-code world. Pro-code frameworks offer integration with diverse ecosystems, multi-agent coordination, and fine-grained control over logic. While workflow-first and pro-code approaches both provide these capabilities, the difference lies in how they balance factors such as ease of development, ease of maintenance, time to deliver, monitoring capabilities, and other non-functional requirements. Choosing the right path often depends on which of these trade-offs matter most for your scenario.

LangChain

When I first explored LangChain, it felt like stepping into a developer's playground for AI orchestration. I could stitch together prompts, tools, and APIs like building blocks, and I enjoyed the flexibility. It reminded me why pro-code approaches appeal to those who want full control over logic and integration with diverse ecosystems.

Building AI Agents using LangChain:
- Define custom chains for multi-step reasoning (it is called Lang"Chain", after all)
- Integrate external APIs and tools for dynamic actions
- Implement memory for context-aware conversations
- Support multi-agent collaboration through orchestration patterns
- Extend functionality with custom Python modules
- Deploy agents across cloud environments for scalability

(A minimal LangChain-style sketch follows at the end of this section.)

Semantic Kernel

I've worked with Semantic Kernel when I needed more control over orchestration logic, and what stood out was its flexibility. It provides both .NET and Python SDKs, which makes it easy to combine natural language prompts with traditional programming logic. I found the planners and skills especially useful for breaking down goals into smaller steps, and connectors helped integrate external systems without reinventing the wheel.

Building AI Agents using Semantic Kernel:
- Create semantic functions for prompt-driven tasks
- Use planners for dynamic goal decomposition
- Integrate plugins for external system access
- Implement memory for persistent context across sessions
- Combine AI reasoning with deterministic code logic
- Enable observability and telemetry for enterprise monitoring

Microsoft Agent Framework (Preview)

Although I introduced MAF in the earlier section, its SDK-first design and pro-code nature make it relevant here as well for advanced orchestration... and so I'll probably write this again in the Hybrid section. The Agent Framework is designed for developers who need full control over multi-agent orchestration. It provides a pro-code approach for defining agent behaviors, implementing advanced coordination patterns, and integrating enterprise-grade observability.

Building AI Agents using Microsoft Agent Framework:
- Define custom orchestration logic using SDK APIs
- Implement graph-based workflows for multi-agent coordination
- Extend agent capabilities with custom code modules
- Apply durable execution patterns with pause and resume
- Integrate OpenTelemetry for detailed monitoring and debugging
- Securely host and manage agents through Azure AI Foundry integration
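To make the pro-code style concrete, here is a minimal sketch in the LangChain expression style, where each step is an ordinary Python object composed with the pipe operator and one step's output feeds the next. It assumes the langchain-core and langchain-openai packages and an OpenAI API key in the environment; the model name and prompts are illustrative placeholders rather than anything mandated by the framework.

```python
# Minimal sketch of a two-step chain: classify a request, then draft a reply.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # placeholder model name

# Step 1: classify the incoming request.
classify = (
    ChatPromptTemplate.from_template(
        "Classify this support request as 'billing', 'technical' or 'other': {request}"
    )
    | llm
    | StrOutputParser()
)

# Step 2: draft a reply, reusing the classification from step 1.
respond = (
    ChatPromptTemplate.from_template(
        "Write a short, polite reply to a {category} support request: {request}"
    )
    | llm
    | StrOutputParser()
)

request = "I was charged twice for my subscription this month."
category = classify.invoke({"request": request})
reply = respond.invoke({"category": category, "request": request})
print(category)
print(reply)
```

The same composition pattern extends to tools, retrievers, and memory; the appeal is that every step can be unit-tested, versioned, and swapped like any other code.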
Hybrid Approach and Decision Framework

I've always been a fan of both worlds: the flexibility of pro-code and the simplicity of drag-and-drop workflow designers and GUIs. A hybrid approach is not about picking one over the other; it's about balancing them. In practice, this means combining the speed and governance of workflow-first platforms with the extensibility and control of pro-code frameworks.

Hybrid design shines when you need agility without sacrificing depth. For example, I can start with Copilot Studio to build a conversational agent using its visual designer. But if the scenario demands advanced logic or integration, I can call an Azure Function for custom processing (see the sketch below), trigger a Logic Apps workflow for complex orchestration, or even invoke the Microsoft Agent Framework for multi-agent coordination. This flexibility delivers the best of both worlds: low-code for rapid development (remember RAD?) and pro-code for enterprise-grade customization with complex logic or integrations.
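As an illustration of the "call an Azure Function for custom processing" step, here is a hedged sketch of an HTTP-triggered function using the Azure Functions Python v2 programming model. The route, payload shape, and scoring logic are invented for the example; a Copilot Studio action or a Power Automate/Logic Apps HTTP step would simply POST JSON to this endpoint and consume the reply.

```python
# Hedged sketch: a pro-code step exposed over HTTP for low-code workflows to call.
import json

import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)

@app.route(route="enrich", methods=["POST"])
def enrich(req: func.HttpRequest) -> func.HttpResponse:
    """Custom processing invoked from Copilot Studio, Power Automate or Logic Apps."""
    payload = req.get_json()
    text = payload.get("text", "")

    # Placeholder for the real logic: scoring, lookups, calling an SDK, etc.
    result = {
        "length": len(text),
        "priority": "high" if "urgent" in text.lower() else "normal",
    }
    return func.HttpResponse(json.dumps(result), mimetype="application/json")
```

The low-code side keeps the conversation, connectors, and approvals; the function keeps the logic you want to test and version like regular code.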
Why go hybrid:
- Balance speed and control: rapid prototyping with workflow-first tools, deep customization with code.
- Extend functionality: call APIs, Azure Functions, or SDK-based frameworks from visual workflows.
- Optimize for non-functional requirements: address maintainability, monitoring, and scalability without compromising ease of development.
- Enable interoperability: combine connectors, plugins, and open standards for diverse ecosystems.
- Support multi-agent orchestration: integrate workflow-driven agents with pro-code agents for complex scenarios.

The hybrid approach to building AI Agents is not just a technical choice but a design philosophy. When I need rapid prototyping or business automation, workflow-first is my choice. For multi-agent orchestration and deep customization, I go with code-first. Hybrid makes sense for regulated industries and large-scale deployments where flexibility and compliance are critical. The choice isn't binary; it's strategic.

I've worked with both workflow-first tools like Copilot Studio, Power Automate, and Logic Apps, and pro-code frameworks such as LangChain, Semantic Kernel, and the Microsoft Agent Framework. Each approach has its strengths, and the decision often comes down to what matters most for your scenario. If rapid prototyping and business automation are priorities, workflow-first platforms make sense. When multi-agent orchestration, deep customization, and integration with diverse ecosystems are critical, pro-code frameworks give you the flexibility and control you need. Hybrid approaches bring both worlds together for regulated industries and large-scale deployments where governance, observability, and interoperability cannot be compromised. Understanding these trade-offs will help you create AI Agents that work so well, you'll wonder if they're secretly applying for your job!

About the author

Pradyumna (Prad) Harish is a technology leader in the WW GSI Partner Organization at Microsoft. He has 26 years of experience in product engineering, partner development, presales, and delivery. He is responsible for revenue growth through Cloud, AI, Cognitive Services, ML, Data & Analytics, Integration, DevOps, open-source software, enterprise architecture, IoT, digital strategies, and other innovative areas for business generation and transformation, achieving revenue targets through extensive experience managing global functions, global accounts, products, and solution architects across more than 26 countries.

Boosting Hybrid Cloud Data Efficiency for EDA: The Power of Azure NetApp Files cache volumes

Electronic Design Automation (EDA) is the foundation of modern semiconductor innovation, enabling engineers to design, simulate, and validate increasingly sophisticated chip architectures. As designs push the boundaries of PPA (power, performance, and area) to meet escalating market demands, the volume of associated design data has surged exponentially, with a single System-on-Chip (SoC) project generating multiple petabytes of data during its development lifecycle and making data mobility and accessibility critical bottlenecks. To overcome these challenges, Azure NetApp Files (ANF) cache volumes are purpose-built to optimize data movement and minimize latency, delivering high-speed access to massive design datasets across distributed environments. By mitigating data gravity, Azure NetApp Files cache volumes empower chip designers to leverage cloud-scale compute resources on demand and at scale, accelerating innovation without being constrained by physical infrastructure.

Accelerating Enterprise AI Adoption with Azure AI Landing Zone
Introduction

As organizations across industries race to integrate Artificial Intelligence (AI) into their business processes and realize tangible value, one question consistently arises: where should we begin? Customers often wonder: What should the first steps in AI adoption look like? Should we build a unified, enterprise-grade platform for all AI initiatives? Who should guide us through this journey: Microsoft, our partners, or both?

This blog aims to demystify these questions by providing a foundational understanding of the Azure AI Landing Zone (AI ALZ), a unified, scalable, and secure framework for enterprise AI adoption. It explains how the AI ALZ builds on two key architectural foundations, the Cloud Adoption Framework (CAF) and the Well-Architected Framework (WAF), and outlines an approach to setting up an AI Landing Zone in your Azure environment.

Foundational Frameworks Behind the AI Landing Zone

1.1 Cloud Adoption Framework (CAF)
The Azure Cloud Adoption Framework is Microsoft's proven methodology for guiding customers through their cloud transformation journey. It encompasses the complete lifecycle of cloud enablement across stages such as Strategy, Plan, Ready, Adopt, Govern, Secure, and Manage. The Landing Zone concept sits within the Ready stage, providing a secure, scalable, and compliant foundation for workload deployment. CAF also defines multiple adoption scenarios, one of which focuses specifically on AI adoption, ensuring that AI workloads align with enterprise cloud governance and best practices.

1.2 Well-Architected Framework (WAF)
The Azure Well-Architected Framework complements CAF by providing detailed design guidance across five key pillars:
- Reliability
- Security
- Cost Optimization
- Operational Excellence
- Performance Efficiency
AI Landing Zones integrate these design principles to ensure that AI workloads are not only functional but also resilient, cost-effective, and secure at enterprise scale.

Understanding Azure Landing Zones

To understand an AI Landing Zone, it's important to first understand Azure Landing Zones in general. An Azure Landing Zone acts as a blueprint or foundation for deploying workloads in a cloud environment, much like a strong foundation is essential for constructing a building or bridge. Each workload type (SAP, Oracle, CRM, AI, etc.) may require a different foundation, but all share the same goal: to provide a consistent, secure, and repeatable environment built on best practices.

Azure Landing Zones provide:
- A governed, scalable foundation aligned with enterprise standards
- Repeatable, automated deployment patterns using Infrastructure as Code (IaC)
- Integrated security and management controls baked into the architecture

For a more detailed understanding of the Azure Landing Zone architecture, please visit the official documentation linked here and refer to the diagram below.

The Role of Azure AI Foundry in AI Landing Zones

Azure AI Foundry is emerging as Microsoft's unified environment for enterprise AI development and deployment. It acts as a one-stop platform for building, deploying, and managing AI solutions at scale.
Key components include:
- Foundry model catalog: a collection of foundation and fine-tuned models
- Agent Service: enables model selection, tool and knowledge integration, and control over data and security
- Search and machine learning services: integrated capabilities for knowledge retrieval and ML lifecycle management
- Content safety and observability: ensures responsible AI use and operational visibility
- Compute options: customers can choose from various Azure compute services based on control and scalability needs:
  - Azure Kubernetes Service (AKS): full control
  - App Service and Azure Container Apps: simplified management
  - Azure Functions: fully serverless option

What Is the Azure AI Landing Zone (AI ALZ)?

The Azure AI Landing Zone is a workload-specific landing zone designed to help enterprises deploy AI workloads securely and efficiently in production environments.

Key objectives of the AI ALZ:
- Accelerate deployment of production-grade AI solutions
- Embed security, compliance, and resilience from the start
- Enable cost and operational optimization through standardized architecture
- Support repeatable patterns for multiple AI use cases using Azure AI Foundry
- Empower customer-centric enablement with extensibility and modularity

By adopting the AI ALZ, organizations can move faster from proof of concept (POC) to production, addressing common challenges such as inconsistent architectures, lack of governance, and operational inefficiencies.

Core Components of the AI Landing Zone

The AI ALZ is structured around three major components:
1. Design framework: based on the Cloud Adoption Framework (CAF) and Well-Architected Framework (WAF).
2. Reference architectures: blueprint architectures for common AI workloads.
3. Extensible implementations: deployable through Terraform, Bicep, or (soon) Azure portal templates using Azure Verified Modules (AVM).
Together, these elements allow customers to quickly deploy a secure, standardized, and production-ready AI environment.

Customer Readiness and Discovery

A common question during early customer engagements is: "Can our existing enterprise-scale landing zone support AI workloads, or do we need a new setup?" To answer this, organizations should start with a discovery and readiness assessment, reviewing their existing enterprise-scale landing zone across key areas such as:
- Identity and access management
- Networking and connectivity
- Data security and compliance
- Governance and policy controls
- Compute and deployment readiness

Based on this assessment, customers can either:
- Extend their existing enterprise-scale foundation, or
- Deploy a dedicated AI workload spoke designed specifically for Azure AI Foundry and enterprise-wide AI enablement.

The attached Excel file contains discovery questions for assessing a customer's current setup and proposing an adoption plan that reflects any required architecture changes.

The Journey Toward AI Adoption

The AI Landing Zone represents the first critical step in an organization's AI adoption journey. It establishes the foundation for:
- Consistent governance and policy enforcement
- Security and networking standardization
- Rapid experimentation and deployment of AI workloads
- Scalable, production-grade AI environments
By aligning with CAF and WAF, customers can be confident that their AI adoption strategy is architecturally sound, secure, and sustainable.

Conclusion

The Azure AI Landing Zone provides enterprises with a structured, secure, and scalable foundation for AI adoption at scale.
It bridges the gap between innovation and governance, enabling organizations to deploy AI workloads faster while maintaining compliance, performance, and operational excellence. By leveraging Microsoft's proven frameworks, CAF and WAF, and adopting Azure AI Foundry as the unified development platform, enterprises can confidently build the next generation of responsible, production-grade AI solutions on Azure.

Get Started

Ready to start your AI Landing Zone journey? Microsoft can help assess your readiness and accelerate deployment through validated reference implementations and expert-led guidance. To help organizations accelerate deployment, Microsoft has published open-source Azure AI Landing Zone templates and automation scripts in Terraform and Bicep that can be used directly to implement the architecture described in this blog.

👉 Explore and deploy the Azure AI Landing Zone (Preview) on GitHub: https://github.com/Azure/AI-Landing-Zones

Validating Scalable EDA Storage Performance: Azure NetApp Files and SPECstorage Solution 2020
Electronic Design Automation (EDA) workloads drive innovation across the semiconductor industry, demanding robust, scalable, and high-performance cloud solutions to accelerate time to market and maximize business outcomes. Azure NetApp Files empowers engineering teams to run complex simulations, manage vast datasets, and optimize workflows by delivering industry-leading performance, flexibility, and simplified deployment, eliminating the need for costly infrastructure overprovisioning or disruptive workflow changes. This leads to faster product development cycles, reduced risk of project delays, and the ability to capitalize on new opportunities in a highly competitive market. In a historic milestone, Azure NetApp Files has been independently validated for EDA workloads through the publication of the SPECstorage® Solution 2020 EDA_BLENDED benchmark, providing objective proof of its readiness to meet the most demanding enterprise requirements, now and in the future.

AI Azure Landing Zone: Shared Capabilities and Models to Enable AI as a Platform
This architecture diagram illustrates a Microsoft Azure AI Landing Zone pattern: a scalable, secure, and well-governed framework for deploying AI workloads across multiple subscriptions in an enterprise environment. Let's walk through it end to end, breaking down each section, the flow, and the key Azure services involved.

🧭 Overview

The architecture is split into four major landing zones:
1. Connectivity Subscription
2. AI Apps Landing Zone Subscription
3. AI Hub Landing Zone Subscription
4. AI Services Landing Zone Subscription

🔁 Step-by-Step Breakdown

🔹 1. Users → Application Gateway (WAF)
Users (e.g., enterprise employees or external users) access the system via the Application Gateway with Web Application Firewall (WAF). This is part of the Connectivity Subscription and provides:
- Centralized ingress control
- Zone redundancy
- Protection against common exploits

🔹 2. Route to AI Apps Landing Zone Subscription
Traffic is routed to the AI Apps Landing Zone Subscription via the Application Gateway. This subscription hosts applications that use AI services, typically in a containerized or App Service-based architecture.

🔹 3. AI Apps Workload Components
This section includes:
- App hosting: Azure App Services, Container Apps (with Container Registry)
- Networking: private endpoints, subnets, network security groups
- Monitoring: Log Analytics workspaces, diagnostic settings
- App agents: container/app service instances (Agent 1, 2, 3)

🔹 4. Integration with AI Services & Secrets Management
These apps securely connect to:
- Azure Key Vault (secrets, credentials)
- Azure AI Search
- Azure Cosmos DB
- Azure Storage
- Azure OpenAI
App Insights is used for application performance monitoring. Logic Apps and Functions handle knowledge management processing, LLM integration, and workflows. (A minimal code sketch of this secure access pattern follows after this breakdown.)

🔹 5 & 6. Connectivity to Centralized Services
Virtual network peering connects the AI Apps Landing Zone with:
- The Connectivity Subscription
- The hub virtual network in the Platform Landing Zone Subscription
These provide access to shared infrastructure: Azure Firewall, Azure Bastion, VPN Gateway / ExpressRoute, Azure DNS / Private Resolver, and Azure DDoS Protection.

🔹 7. AI Hub Landing Zone Subscription
This acts as a centralized workload processing zone with components like Event Hubs, Azure Key Vault, App Insights, Power BI, Cosmos DB, and API Management (OpenAI endpoints). It is used for observability, usage processing, and API integration.

🔹 8 & 9. FTU Usage Processing & Reporting
Function Apps and Logic Apps process usage data (e.g., for chargebacks and monitoring). FTU stands for "Fair Tenant Usage". Reporting is done with Power BI, and data is stored in Cosmos DB.

🔹 10 & 11. Network Peering to Platform Zone
The AI Hub connects back to the Platform Landing Zone via virtual network peering, providing access to shared DNS zones and network services.

🔹 12. AI Services Landing Zone Subscription
This is where core AI capabilities live, such as:
- Azure OpenAI
- Azure AI Services: Speech, Vision, Language, Machine Learning
- Foundry project: OpenAI agents, Agent Service dependencies
- Models hosted in Azure (e.g., GPT)
This zone is accessed securely via private endpoints, Azure Key Vault, and network rules.

📦 Subscription Vending (All Zones)
Each subscription includes a subscription vending framework for:
- Spoke VNet placement
- Route configurations
- Policy and role assignments
- Defender for Cloud and cost management
This ensures a consistent and compliant environment across the enterprise.
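To illustrate the secure-access pattern in step 4, here is a minimal, hedged sketch of a workload reading a secret with its managed identity rather than embedded credentials, using the Azure SDK for Python. The vault URL and secret name are placeholders; DefaultAzureCredential resolves to the app's managed identity when running in App Service, Container Apps, or Functions, and to developer credentials locally.

```python
# Minimal sketch: read a connection secret with the app's managed identity.
# Assumes azure-identity and azure-keyvault-secrets are installed and that the
# identity has secret-read permission on the vault (names below are placeholders).
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()
vault = SecretClient(
    vault_url="https://contoso-ai-kv.vault.azure.net",  # placeholder vault name
    credential=credential,
)

cosmos_conn = vault.get_secret("cosmos-connection-string").value  # placeholder secret
print("Retrieved secret of length:", len(cosmos_conn))
```

Because the call goes over a private endpoint in this design, no credential or traffic leaves the virtual network; the same pattern applies to the Storage, Cosmos DB, and AI Search connections listed above.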
📌 Key Architectural Benefits

- 🔐 Zero-trust network: controlled access via WAF and private endpoints
- 📡 Scalable AI apps: Container Apps and App Services
- 🧠 Central AI services: managed in isolated subscriptions
- 🔍 Monitoring: deep insights via App Insights and Log Analytics
- 🧾 Governance: role-based access and policy enforcement
- 🔌 Secure integration: VNet peering, Azure Key Vault, API Management

🔚 End-to-End Data Flow Summary

1. Users access the app through the Application Gateway (WAF).
2. Apps in the AI Apps Landing Zone process the input.
3. Apps call AI services (OpenAI, Cognitive Services) via private endpoints.
4. Data usage and insights flow to the AI Hub for logging and analysis.
5. FTU and usage metrics are processed and stored.
6. Platform services support routing, DNS, and security.

🎯 Goal of the User Journey

The user interacts with an AI-powered application (e.g., chatbot, document summarizer, recommendation engine) deployed on Azure. The app is secure, scalable, and integrated with advanced Azure AI services (like OpenAI).

👣 User Journey: Step-by-Step Breakdown

✅ 1. User access (public entry point)
The user (browser or mobile app) sends a request (e.g., opens an AI web app or sends a prompt to a chatbot). The request hits the Azure Application Gateway with Web Application Firewall (WAF), which filters and protects against malicious traffic and ensures high availability with zone redundancy. 🧠 Think of it as the front door to the AI platform.

✅ 2. Routing to the AI application
The Application Gateway securely routes the request to the AI Apps Landing Zone Subscription. The request reaches the App Service or Container App hosting the AI-based application logic. Example: a user submits a product question via a chatbot UI hosted here.

✅ 3. Processing the request (app logic)
The app receives the input and begins processing:
- The app uses App Insights for performance telemetry.
- Secrets or configuration (API keys, connection strings) are securely pulled from Azure Key Vault.
- Based on the business logic, the app needs to call an AI model (e.g., OpenAI).

✅ 4. Calling AI services (via private endpoints)
The app securely connects (using private endpoints) to the AI Services Landing Zone to:
- 🔹 Call Azure OpenAI (e.g., ChatGPT, DALL·E, embeddings)
- 🔹 Use Azure Cognitive Services (e.g., speech, vision, search)
These services are isolated in their own subscription for security, scalability, and cost governance. 🧠 Here's where the "AI magic" happens.

✅ 5. Retrieval-Augmented Generation (optional)
If the AI needs additional knowledge (the RAG pattern), the app can:
- Query Azure AI Search for documents.
- Pull knowledge from Azure Cosmos DB or Azure Storage.
AI results are processed via Logic Apps / Functions (e.g., post-processing, formatting). A minimal sketch of this retrieval step appears after the real-world example below.

✅ 6. Return the response to the user
The application receives the AI-generated output, formats the result (e.g., chatbot message, visual, PDF), and returns it to the user via the original secure path.

✅ 7. Observability & usage logging
App and AI service usage and telemetry are logged in Log Analytics / App Insights and streamed via Event Hub to the AI Hub Landing Zone. This enables centralized monitoring and analytics (Power BI dashboards, anomaly detection, etc.).

✅ 8. Usage reporting & governance
A Function App and Logic App in the AI Hub Landing Zone process usage logs. Usage is stored in Azure Cosmos DB. FTU (Fair Tenant Usage) policies are enforced and reported via Power BI dashboards.

✅ 9. Admin/platform layer
All resources and subscriptions are governed via the Platform Landing Zone:
- Shared services like DNS, security policies, and firewalls
- Cost controls, Defender for Cloud, DDoS protection
- Subscription vending and network segmentation

🗺️ Visual Recap: User Journey Flow
User → App Gateway (WAF) → App in AI Apps Landing Zone → Call to Azure OpenAI / AI Services → (Optional: knowledge retrieval) → AI response → Returned to user → Usage logged & monitored → Usage reporting in AI Hub
(Figure: User Workflow)

🔐 Security Throughout the Journey
- App Gateway: Web Application Firewall
- App hosting: private endpoints, managed identity
- Secrets: Azure Key Vault
- Network: virtual network peering, NSGs
- Governance: role-based access, policy assignments

🧠 Example: Real-World Use Case
Scenario: a doctor uses a medical AI assistant to analyze patient notes.
1. Logs in via a secure portal (WAF gateway)
2. Submits patient notes (App Service)
3. The app calls OpenAI with the prompt: "Summarize this diagnosis."
4. The app also queries an internal document store (RAG)
5. OpenAI returns the result, which is displayed in the UI
6. Usage is tracked for audit and reporting
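Here is a minimal, hedged sketch of the retrieval step from step 5 and the clinician example: query Azure AI Search for supporting documents, then ground the Azure OpenAI prompt with what comes back. The endpoints, index name, field name, and deployment name are placeholders, and in this architecture the keys would come from Key Vault or managed identities rather than environment variables.

```python
# Minimal RAG sketch: retrieve context from Azure AI Search, then ask Azure OpenAI.
# Assumes azure-search-documents and openai are installed; all names are placeholders.
import os

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

search = SearchClient(
    endpoint="https://contoso-search.search.windows.net",  # placeholder search service
    index_name="clinical-notes",                            # placeholder index
    credential=AzureKeyCredential(os.environ["SEARCH_API_KEY"]),
)
results = search.search("latest visit notes for the patient", top=3)
context = "\n".join(doc["content"] for doc in results)      # assumes a 'content' field

aoai = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)
answer = aoai.chat.completions.create(
    model="gpt-4o-deployment",  # placeholder deployment name
    messages=[
        {"role": "system", "content": "Answer only from the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nSummarize this diagnosis."},
    ],
)
print(answer.choices[0].message.content)
```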
🧭 User Journey Flow

1. Users: end users initiate a request (e.g., accessing an AI-powered app).
2. Application Gateway + WAF (Connectivity Subscription): the request is routed through the Application Gateway with Web Application Firewall for security and traffic filtering.
3. AI Apps Landing Zone Subscription: the request enters the AI Apps subscription. Workloads run on App Services or Container Apps (Agents 1, 2, 3).
4. Secure access: application services authenticate and securely retrieve data from Azure Key Vault, Cosmos DB, Azure Storage, and Azure AI Search.
5. Knowledge management processing: Logic Apps / Function Apps process the request, enabling workflows, integrations, and knowledge enrichment.
6. AI Hub gateway application: requests requiring AI services are routed to the AI Hub for centralized management.
7. API Management (OpenAI endpoints): APIs handle communication with downstream AI services.
8. Event Hub + App Insights: telemetry and logs are captured for monitoring and troubleshooting.
9. Power BI + Cosmos DB: usage data is aggregated and analyzed for reporting (FTU usage tracking).
10. AI Services Subscription: API calls are directed to the AI Services subscription.
11. Azure AI model execution: requests hit Azure OpenAI, Azure AI Foundry, and Cognitive Services (Speech, Vision, Search, etc.). Foundry/agent services provide additional AI processing.
12. Response back to the user: the processed AI output is routed back through the pipeline → API → Hub → Apps → Application Gateway → returned to the user.

(Figures: High-Level Architecture Diagram; Security & Governance Overview; AI Landing Zone Lifecycle Workflow)

Reference architectures: Baseline Azure AI Foundry Chat Reference Architecture in an Azure Landing Zone - Azure Architecture Center | Microsoft Learn
Repo link for the AI Landing Zone: https://github.com/Azure/AI-Landing-Zones

Building a Secure and Compliant Azure AI Landing Zone: Policy Framework & Best Practices
As organizations accelerate their AI adoption on Microsoft Azure, governance, compliance, and security become critical pillars for success. Deploying AI workloads without a structured compliance framework can expose enterprises to data privacy issues, misconfigurations, and regulatory risks. To address this challenge, the Azure AI Landing Zone provides a scalable and secure foundation, bringing together Azure Policy, Blueprints, and Infrastructure as Code (IaC) to ensure every resource aligns with organizational and regulatory standards.

The Azure Policy & Compliance Framework acts as the governance backbone of this landing zone. It enforces consistency across environments by applying policy definitions, initiatives, and assignments that monitor and remediate non-compliant resources automatically.

This blog will guide you through:
- 🧭 The architecture and layers of an AI Landing Zone
- 🧩 How Azure Policy as Code enables automated governance
- ⚙️ Steps to implement and deploy policies using IaC pipelines
- 📈 Visualizing compliance flows for AI-specific resources

What is the Azure AI Landing Zone (AI ALZ)?

The AI ALZ is a foundational architecture that integrates core Azure services (ML, OpenAI, Cognitive Services) with best practices in identity, networking, governance, and operations. To ensure consistency, security, and responsibility, a robust policy framework is essential.

Policy & Compliance in the AI ALZ

Azure Policy helps enforce standards across subscriptions and resource groups. You define policies (single rules), group them into initiatives (policy sets), and assign them with specific scopes and exemptions. Compliance reporting surfaces non-compliant resources for mitigation. AI workloads bring some unique considerations:
- Sensitive data (PII, models)
- Model accountability, logging, and audit trails
- Cost and performance impact of heavy compute usage
- Preview features and frequent updates

Scope

This framework covers:
- Azure Machine Learning (AML)
- Azure API Management
- Azure AI Foundry
- Azure App Service
- Azure Cognitive Services
- Azure OpenAI
- Azure Storage Accounts
- Azure Databases (SQL, Cosmos DB, MySQL, PostgreSQL)
- Azure Key Vault
- Azure Kubernetes Service

Core Policy Categories

1. Networking & access control
- Restrict resource deployment to approved regions (e.g., Europe only).
- Enforce private link and private endpoint usage for all critical resources.
- Disable public network access for workspaces, storage, search, and key vaults.

2. Identity & authentication
- Require user-assigned managed identities for resource access.
- Disable local authentication; enforce Microsoft Entra ID (Azure AD) authentication.

3. Data protection
- Enforce encryption at rest with customer-managed keys (CMK).
- Restrict public access to storage accounts and databases.

4. Monitoring & logging
- Deploy diagnostic settings to Log Analytics for all key resources.
- Ensure activity/resource logs are enabled and retained for at least one year.

5. Resource-specific guardrails
- Apply built-in and custom policy initiatives for OpenAI, Kubernetes, App Services, databases, etc.

A detailed list of all policies is bundled and attached at the end of this blog. Be sure to check it out for a ready-to-use Excel file, perfect for customer workshops, which includes policy type (standalone/initiative), origin (built-in/custom), and more. A small illustration of what one of these guardrails looks like as a policy definition follows below.
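To make the guardrail idea concrete, here is a hedged illustration of the kind of rule the "disable public network access" guardrail could carry in a policy definition, built as a Python dict purely for readability (the definition itself is JSON). The alias and effect shown are indicative only; the built-in or custom definition your team adopts may use a different alias or effect, so verify against the resource provider's published aliases before use.

```python
# Illustrative policy rule: deny Cognitive Services accounts that allow public network access.
# The alias below is indicative; confirm the exact alias for your provider before assigning.
import json

policy_rule = {
    "if": {
        "allOf": [
            {"field": "type", "equals": "Microsoft.CognitiveServices/accounts"},
            {
                "field": "Microsoft.CognitiveServices/accounts/publicNetworkAccess",
                "notEquals": "Disabled",
            },
        ]
    },
    "then": {"effect": "deny"},
}

print(json.dumps(policy_rule, indent=2))
```

In the EPAC flow described next, a rule like this would live in the repository as a policy definition file and be assigned to the relevant management group or subscription scope.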
Implementation: Policy as Code Using EPAC

To turn policies from Excel/JSON into operational governance, Enterprise Policy as Code (EPAC) is a powerful tool. EPAC transforms policy artifacts into a desired-state repository and handles deployment, lifecycle, versioning, and CI/CD automation.

What is EPAC, and why use it?
- EPAC is a set of PowerShell scripts and modules for deploying policy definitions, initiatives, assignments, role assignments, and exemptions (Enterprise Policy as Code, EPAC).
- It supports CI/CD integration (GitHub Actions, Azure DevOps), so policy changes can be treated like code.
- It handles ordering, dependency resolution, and enforcement of a "desired state": any policy resources not in your repo may be pruned (depending on configuration).
- It integrates with Azure Landing Zones (including the governance baseline) out of the box.

References & further reading
- EPAC GitHub repository
- Advanced Azure Policy management - Microsoft Learn
- How to deploy Azure policies the DevOps way (Rabobank)

Azure OpenAI Landing Zone reference architecture
This article delves into the synergy of Azure Landing Zones and the Azure OpenAI Service to build a secure and scalable AI environment. It unpacks the Azure OpenAI Landing Zone architecture, which integrates numerous Azure services for optimal AI workloads, and explores robust security measures and the significance of monitoring for operational success. This journey of deploying Azure OpenAI evolves alongside Azure's continual innovation.

Azure Course Blueprints
Overview

The Course Blueprint is a comprehensive visual guide to the Azure ecosystem, integrating all the resources, tools, structures, and connections covered in the course into one inclusive diagram. It enables students to map out and understand the elements they've studied, providing a clear picture of their place within the larger Azure ecosystem. It serves as a 1:1 representation of all the topics officially covered in the instructor-led training. Formats available include PDF, Visio, Excel, and Video.

- Links: each icon in the blueprint has a hyperlink to the pertinent document in the learning path on Learn.
- Layers: you can filter layers to concentrate on segments of the course.
- Integration: the Visio Template+ for expert courses like SC-100 and AZ-305 includes an additional layer that enables you to compare SC-100, AZ-500, and SC-300 within the same diagram. Similarly, you can compare any combination of AZ-305, AZ-700, AZ-204, and AZ-104 to identify differences and study gaps. Since SC-300 and AZ-500 are potential prerequisites for the expert certification associated with SC-100, and AZ-204 or AZ-104 for the expert certification associated with AZ-305, this comparison is particularly useful for understanding the extra knowledge or skills required to advance to the next level.

Advantages for students
- Defined goals: the blueprint presents learners with a clear vision of what they are expected to master and achieve by the course's end.
- Focused learning: by spotlighting the course content and learning targets, it steers learners' efforts toward essential areas, leading to more productive learning.
- Progress tracking: the blueprint allows learners to track their advancement and assess their command of the course material.
- Topic list: a comprehensive list of topics for each slide deck is now available in a downloadable .xlsx file. Each entry includes a link to Learn and its dependencies.

Download links

Associate level (PDF, Visio, Contents, Video overview):
- AZ-104 Azure Administrator Associate (R: 12/14/2023, U: 04/16/2025): Blueprint, Visio, Excel, Video: Mod 01
- AZ-204 Azure Developer Associate (R: 11/05/2024, U: 11/11/2024): Blueprint, Visio, Excel
- AZ-500 Azure Security Engineer Associate (R: 01/09/2024, U: 10/10/2024): Blueprint, Visio+, Excel
- AZ-700 Azure Network Engineer Associate (R: 01/25/2024, U: 11/04/2024): Blueprint, Visio, Excel
- SC-200 Security Operations Analyst Associate (R: 04/03/2025, U: 04/09/2025): Blueprint, Visio, Excel
- SC-300 Identity and Access Administrator Associate (R: 10/10/2024): Blueprint, Excel

Specialty (PDF, Visio):
- AZ-140 Azure Virtual Desktop Specialty (R: 01/03/2024, U: 02/27/2025): Blueprint, Visio, Excel

Expert level (PDF, Visio):
- AZ-305 Designing Microsoft Azure Infrastructure Solutions (R: 05/07/2024, U: 02/05/2025): Blueprint, Visio+ (AZ-104, AZ-204, AZ-700, AZ-140), Excel
- SC-100 Microsoft Cybersecurity Architect (R: 10/10/2024, U: 04/09/2025): Blueprint, Visio+ (AZ-500, SC-300, SC-200), Excel

Skill-based credentialing (PDF):
- AZ-1002 Configure secure access to your workloads using Azure virtual networking (R: 05/27/2024): Blueprint, Visio, Excel
- AZ-1003 Secure storage for Azure Files and Azure Blob Storage (R: 02/07/2024, U: 02/05/2024): Blueprint, Excel

Subscribe if you want to get notified of any update, like new releases or updates.

Author: Ilan Nyska, Microsoft Technical Trainer
Email: ilan.nyska@microsoft.com
LinkedIn: https://www.linkedin.com/in/ilan-nyska/

I've received so many kind messages, thank-you notes, and reshares, and I'm truly grateful.
But here's the reality:
💬 The only thing I can use internally to justify continuing this project is your engagement, through this survey: https://lnkd.in/gnZ8v4i8
⏳ Unless I receive enough support via this short survey, the project will be sunset.
Thank you for your support!

Benefits for trainers

Trainers can follow this plan to design a tailored diagram for their course, filled with notes. They can construct this comprehensive diagram during class on a whiteboard and continuously add to it in each session. This evolving visual aid can be shared with students to enhance their grasp of the subject matter.

Explore Azure Course Blueprints! | Microsoft Community Hub
Visio stencils: Azure icons - Azure Architecture Center | Microsoft Learn

Are you curious how grounding Copilot in Azure Course Blueprints transforms your study journey into a smarter, more visual experience?
- 🧭 Clickable guides that transform modules into intuitive roadmaps
- 🌐 Dynamic visual maps revealing how Azure services connect
- ⚖️ Side-by-side comparisons that clarify roles, services, and security models
Whether you're a trainer, a student, or just certification-curious, Copilot becomes your shortcut to clarity, confidence, and mastery.

Navigating Azure Certifications with Copilot and Azure Course Blueprints | Microsoft Community Hub