Azure Course Blueprints
Overview

The Course Blueprint is a comprehensive visual guide to the Azure ecosystem, integrating all the resources, tools, structures, and connections covered in the course into one inclusive diagram. It enables students to map out and understand the elements they've studied, providing a clear picture of their place within the larger Azure ecosystem. It serves as a 1:1 representation of all the topics officially covered in the instructor-led training. Formats available include PDF, Visio, Excel, and Video.

Links

Each icon in the blueprint has a hyperlink to the pertinent document in the learning path on Learn.

Layers

The Visio Template+ for expert-level courses like SC-100 and AZ-305 includes an additional layer for each associate-level certification, allowing you to compare courses along the same path—such as SC-100, SC-200, SC-300, and AZ-500—within a single diagram. Likewise, you can compare any combination of AZ-305, AZ-104, AZ-204, AZ-700, and AZ-140 to identify differences and study gaps. Since associate certifications like SC-300 and AZ-500 are potential prerequisites for SC-100, and AZ-104 or AZ-204 for AZ-305, this comparison is especially useful for understanding the extra knowledge and skills required to advance to the next level.

Advantages for Students

- Defined Goals: The blueprint presents learners with a clear vision of what they are expected to master and achieve by the course's end.
- Focused Learning: By spotlighting the course content and learning targets, it steers learners' efforts towards essential areas, leading to more productive learning.
- Progress Tracking: The blueprint allows learners to track their advancement and assess their command of the course material.
- Topic List: A comprehensive list of topics for each slide deck is now available in a downloadable .xlsx file. Each entry includes a link to Learn and its dependencies.

Demos are an integral part of achieving end-to-end understanding of Azure.
To make this seamless, we partnered with Peter De Tender to align the Trainer Demo Deploy with the corresponding Blueprint, enabling trainers to easily deploy the full scenario and reinforce learning objectives.

Download links

Associate Level (PDF - Demo Visio Contents)
- AZ-104 Azure Administrator Associate | R: 12/14/2023 U: 04/16/2025 | Blueprint, Demo, Mod 01 Video, Visio, Excel
- AZ-204 Azure Developer Associate | R: 11/05/2024 U: 11/11/2024 | Blueprint, Demo, Visio, Excel
- AZ-500 Azure Security Engineer Associate | R: 01/09/2024 U: 10/10/2024 | Blueprint, Demo, Visio+, Excel
- AZ-700 Azure Network Engineer Associate | R: 01/25/2024 U: 11/04/2024 | Blueprint, Demo, Visio, Excel
- SC-200 Security Operations Analyst Associate | R: 04/03/2025 U: 04/09/2025 | Blueprint, Demo, Visio, Excel
- SC-300 Identity and Access Administrator Associate | R: 10/10/2024 | Blueprint, Demo, Excel

Specialty (PDF, Visio)
- AZ-140 Azure Virtual Desktop Specialty | R: 01/03/2024 U: 02/27/2025 | Blueprint, Demo, Visio, Excel

Expert Level (PDF, Visio)
- AZ-305 Designing Microsoft Azure Infrastructure Solutions | R: 05/07/2024 U: 02/05/2025 | Blueprint, Demo, Visio+ (AZ-104, AZ-204, AZ-700, AZ-140), Excel
- SC-100 Microsoft Cybersecurity Architect | R: 10/10/2024 U: 04/09/2025 | Blueprint, Demo, Visio+ (AZ-500, SC-300, SC-200), Excel

Skill-based Credentialing (PDF)
- AZ-1002 Configure secure access to your workloads using Azure virtual networking | R: 05/27/2024 | Blueprint, Visio, Excel
- AZ-1003 Secure storage for Azure Files and Azure Blob Storage | R: 02/07/2024 U: 02/05/2024 | Blueprint, Excel

Subscribe if you want to be notified of updates such as new releases.

Author: Ilan Nyska, Microsoft Technical Trainer
Email: ilan.nyska@microsoft.com
LinkedIn: https://www.linkedin.com/in/ilan-nyska/

I've received so many kind messages, thank-you notes, and reshares — and I'm truly grateful.
But here's the reality: 💬 The only thing I can use internally to justify continuing this project is your engagement — through this survey https://lnkd.in/gnZ8v4i8

Benefits for Trainers

Trainers can follow this plan to design a tailored diagram for their course, filled with notes. They can construct this comprehensive diagram during class on a whiteboard and continuously add to it in each session. This evolving visual aid can be shared with students to enhance their grasp of the subject matter.

Explore Azure Course Blueprints! | Microsoft Community Hub
Visio stencils: Azure icons - Azure Architecture Center | Microsoft Learn

Are you curious how grounding Copilot in Azure Course Blueprints transforms your study journey into a smarter, more visual experience?
🧭 Clickable guides that transform modules into intuitive roadmaps
🌐 Dynamic visual maps revealing how Azure services connect
⚖️ Side-by-side comparisons that clarify roles, services, and security models

Whether you're a trainer, a student, or just certification-curious, Copilot becomes your shortcut to clarity, confidence, and mastery. Navigating Azure Certifications with Copilot and Azure Course Blueprints | Microsoft Community Hub

Streamline Azure NetApp Files Management—Right from Your IDE
The Azure NetApp Files VS Code Extension is designed to streamline storage provisioning and management directly within the developer's IDE. Traditional workflows often require extensive portal navigation, manual configuration, and policy management, leading to inefficiencies and context switching. The extension addresses these challenges by enabling AI-powered automation through natural language commands, reducing provisioning time from hours to minutes while minimizing errors and improving compliance. Key capabilities include generating production-ready ARM templates, validating resources, and delivering optimization insights—all without leaving the coding environment.

How Azure NetApp Files Object REST API powers Azure and ISV Data and AI services – on YOUR data
This article introduces the Azure NetApp Files Object REST API, a transformative solution for enterprises seeking seamless, real-time integration between their data and Azure's advanced analytics and AI services. By enabling direct, secure access to enterprise data—without costly transfers or duplication—the Object REST API accelerates innovation, streamlines workflows, and enhances operational efficiency. With S3-compatible object storage support, it empowers organizations to make faster, data-driven decisions while maintaining compliance and data security. Discover how this new capability unlocks business potential and drives a new era of productivity in the cloud.

Accelerating HPC and EDA with Powerful Azure NetApp Files Enhancements
High-Performance Computing (HPC) and Electronic Design Automation (EDA) workloads demand uncompromising performance, scalability, and resilience. Whether you're managing petabyte-scale datasets or running compute-intensive simulations, Azure NetApp Files delivers the agility and reliability needed to innovate without limits.

Azure Resiliency: Proactive Continuity with Agentic Experiences and Frontier Innovation
Introduction

In today's digital-first world, even brief downtime can disrupt revenue, reputation, and operations. Azure's new resiliency capabilities empower organizations to anticipate and withstand disruptions—embedding continuity into every layer of their business. At Microsoft Ignite, we're unveiling a new era of resiliency in Azure, powered by agentic experiences. The new Azure Copilot resiliency agent brings AI-driven workflows that proactively detect vulnerabilities, automate backups, and integrate cyber recovery for ransomware protection. IT teams can instantly assess risks and deploy solutions across infrastructure, data, and cyber recovery—making resiliency a living capability, not just a checklist.

The Evolution from Azure Business Continuity Center to Resiliency in Azure

Microsoft is excited to announce that the Azure Business Continuity Center (ABCC) is evolving into resiliency capabilities in Azure. This evolution expands its scope from traditional backup and disaster recovery to a holistic resiliency framework. This new experience is delivered directly in the Azure Portal, providing integrated dashboards, actionable recommendations, and one-click access to remediation—so teams can manage resiliency where they already operate. Learn more about this: Resiliency. To see the new experience, visit the Azure Portal.

The Three Pillars of Resiliency

Azure's resiliency strategy is anchored in three foundational pillars, each designed to address a distinct dimension of operational continuity:

- Infrastructure Resiliency: Built-in redundancy and zonal/regional management keep workloads running during disruptions. The resiliency agent in Azure Copilot automates posture checks, risk detection, and remediation.
- Data Resiliency: Automated backup and disaster recovery meet RPO/RTO and compliance needs across Azure, on-premises, and hybrid.
- Cyber Recovery: Isolated recovery vaults, immutable backups, and AI-driven insights defend against ransomware and enable rapid restoration.

With these foundational pillars in place, organizations can adopt a lifecycle approach to resiliency—ensuring continuity from day one and adapting as their needs evolve.

The Lifecycle Approach: Start Resilient, Get Resilient, Stay Resilient

While the pillars define what resiliency protects, the lifecycle stages in the resiliency journey define how organizations implement and sustain it over time. For the full framework, see the prior blog; below we focus on what's new and practical. The resiliency agent in Azure Copilot empowers organizations to embed resiliency at every stage of their cloud journey—making proactive continuity achievable from day one and sustainable over time.

- Start Resilient: With the new resiliency agent, teams can "Start Resilient" by leveraging guided experiences and automated posture assessments that help design resilient workloads before deployment. The agent surfaces architecture gaps, validates readiness, and recommends best practices—ensuring resiliency is built in from the outset, not bolted on later.
- Get Resilient: As organizations scale, the resiliency agent enables them to "Get Resilient" by providing estate-wide visibility, automated risk assessments, and configuration recommendations. AI-driven insights help identify blind spots, remediate risks, and accelerate the adoption of resilient-by-default architectures—so resiliency is actively achieved across all workloads, not just planned.
- Stay Resilient: To "Stay Resilient," the resiliency agent delivers continuous validation, monitoring, and improvement. Automated failure simulations, real-time monitoring, and attestation reporting allow teams to proactively test recovery workflows and ensure readiness for evolving threats.
One-click failover and ongoing posture checks help sustain compliance and operational continuity, making resiliency a living capability that adapts as your business and technology landscape changes.

Best Practices for Proactive Continuity in Resiliency

To enable proactive continuity, organizations should:

- Architect for high availability across multiple availability zones and regions (prioritize Tier-0/1 workloads).
- Automate recovery with Azure Site Recovery and failover playbooks for orchestrated, rapid restoration.
- Leverage integrated zonal resiliency experiences to uncover blind spots and receive tailored recommendations.
- Continuously validate using Chaos Studio to simulate outages and test recovery workflows.
- Monitor SLAs, RPO/RTO, and posture metrics with Azure Monitor and Policy; iterate for ongoing improvement.
- Use the Azure Copilot resiliency agent for AI-driven posture assessments, remediation scripts, and cost analysis to streamline operations.

Conclusion & Next Steps

Resiliency capabilities in Azure unify infrastructure, data, and cyber recovery while guiding organizations to start, get, and stay resilient. Teams adopting these capabilities see faster posture improvements, less manual effort, and continuous operational continuity. This marks a fundamental shift—from reactive recovery to proactive continuity. By embedding resiliency as a living capability, Azure empowers organizations to anticipate, withstand, and recover from disruptions, adapting to new threats and evolving business needs.

Organizations adopting Resiliency in Azure see measurable impact:

- Accelerated posture improvement with AI-driven insights and actionable recommendations.
- Less manual effort through automation and integrated recovery workflows.
- Continuous operational continuity via ongoing validation and monitoring.

Ready to take the next step?
Explore these resources and sessions:

- Resiliency in Azure (Portal)
- Resiliency in Azure (Learn Docs)
- Agents (preview) in Azure Copilot
- Resiliency Solutions
- Reliability Guides by Service
- Azure Essentials
- Azure Accelerate
- Ignite Announcement

Key Ignite 2025 Sessions to Watch:

- Resilience by Design: Secure, Scalable, AI-Ready Cloud with Azure (BRK217)
- Resiliency & Recovery with Azure Backup and Site Recovery (BRK146)
- Architect Resilient Apps with Azure Backup and Reliability Features (BRK148)
- Architecting for Resiliency on Azure Infrastructure (BRK178)

All sessions are available on demand—perfect for catching up or sharing with your team. Browse the full session catalog and start building resiliency by default today.

Boosting Hybrid Cloud Data Efficiency for EDA: The Power of Azure NetApp Files cache volumes
Electronic Design Automation (EDA) is the foundation of modern semiconductor innovation, enabling engineers to design, simulate, and validate increasingly sophisticated chip architectures. As designs push the boundaries of PPA (power, performance, and area) to meet escalating market demands, the volume of associated design data has surged exponentially, with a single System-on-Chip (SoC) project generating multiple petabytes of data during its development lifecycle, making data mobility and accessibility critical bottlenecks. To overcome these challenges, Azure NetApp Files (ANF) cache volumes are purpose-built to optimize data movement and minimize latency, delivering high-speed access to massive design datasets across distributed environments. By mitigating data gravity, Azure NetApp Files cache volumes empower chip designers to leverage cloud-scale compute resources on demand and at scale, thus accelerating innovation without being constrained by physical infrastructure.

Agentic Integration with SAP, ServiceNow, and Salesforce
Copilot/Copilot Studio Integration with SAP (No Code)

By integrating SAP Cloud Identity Services with Microsoft Entra ID, organizations can establish secure, federated identity management across platforms. This configuration enables Microsoft Copilot and Teams to seamlessly connect with SAP's Joule digital assistant, supporting natural language interactions and automating business processes efficiently.

Key resources, as given in the SAP docs (image courtesy SAP):
- Configuring SAP Cloud Identity Services and Microsoft Entra ID for Joule
- Enable Microsoft Copilot and Teams to Pass Requests to Joule

Copilot Studio Integration with ServiceNow and Salesforce (No Code)

Integration with ServiceNow and Salesforce has two main approaches:

Copilot Agents using Copilot Studio: Custom agents can be built in Copilot Studio to interact directly with Salesforce CRM data or ServiceNow knowledge bases and helpdesk tickets. This enables organizations to automate sales and support processes using conversational AI.
- Create a custom sales agent using your Salesforce CRM data (YouTube)
- ServiceNow Connect Knowledge Base + Helpdesk Tickets (YouTube)

3rd Party Agents using Copilot for Service Agent: Microsoft Copilot can be embedded within Salesforce and ServiceNow interfaces, providing users with contextual assistance and workflow automation directly inside these platforms.
- Set up the embedded experience in Salesforce
- Set up the embedded experience in ServiceNow

MCP or Agent-to-Agent (A2A) Interoperability (Pro Code) — (image courtesy SAP)

If you choose a pro-code approach, you can either implement the Model Context Protocol (MCP) in a client/server setup for SAP, ServiceNow, and Salesforce, or leverage existing agents for these third-party services using Agent-to-Agent (A2A) integration. Depending on your requirements, you may use either method individually or combine them. The recently released Azure Agent Framework offers practical examples for both MCP and A2A implementations.
Below is the detailed SAP reference architecture, illustrating how A2A solutions can be layered on top of SAP systems to enable modular, scalable automation and data exchange.

Agent2Agent Interoperability | SAP Architecture Center

Logic Apps as Integration Actions

Logic Apps is a key component of the Azure integration platform. As with so many other services, it has connectors for all three platforms (SAP, ServiceNow, Salesforce). Logic Apps can be invoked from a custom agent (a built-in action in Foundry) or from a Copilot agent. The same applies to Power Platform/Power Automate.

Conclusion

This article provides a comprehensive overview of how Microsoft Copilot, Copilot Studio, Foundry via A2A/MCP, and Azure Logic Apps can be combined to deliver robust, agentic integrations with SAP, ServiceNow, and Salesforce. The narrative highlights the importance of secure identity federation, modular agent orchestration, and low-code/pro-code automation in building next-generation enterprise solutions.

Selecting the Right Agentic Solution on Azure - Part 1
Recently, we have seen a surge in requests from customers and Microsoft partners seeking guidance on building and deploying agentic solutions at various scales. With the rise of Generative AI, replacing traditional APIs with agents has become increasingly popular. There are several approaches to building, deploying, running, and orchestrating agents on Azure. In this discussion, I will focus exclusively on Azure-specific tools, services, and methodologies, setting aside Copilot and Copilot Studio for now. This article describes the options available as of today.

1. Azure OpenAI Assistants API: This feature within Azure OpenAI Service enables developers to create conversational agents ("assistants") based on OpenAI models (such as GPT-3.5 and GPT-4). It supports capabilities like memory, tool/function calls, and retrieval (e.g., document search). However, Microsoft has already deprecated version 1 of the Azure OpenAI Assistants API, and version 2 remains in preview. Microsoft strongly recommends migrating all existing Assistants API-based agents to the Agent Service. Additionally, OpenAI is retiring the Assistants API and advises developers to use the modern Responses API instead (see migration details). Given these developments, it is not advisable to use the Assistants API for building agents. Instead, you should use the Azure AI Agent Service, which is part of Azure AI Foundry.

2. Workflows with AI agents and models in Azure Logic Apps (Preview): As the name suggests, this feature is currently in public preview and is only available with Logic Apps Standard, not with the Consumption plan. You can enhance your workflow by integrating agentic capabilities. For example, in a visa-processing workflow, decisions can be made based on priority, application type, nationality, and background checks using a knowledge base. The workflow can then route cases to the appropriate queue and prepare messages accordingly.
Workflows can be implemented either as chat assistants or as APIs. If your project is workflow-dependent and you are ready to implement agents in a declarative way, this is a great option. However, there are currently limited choices for models and regional availability. For CI/CD, there is an Azure Logic Apps Standard template for VS Code you can use.

3. Azure AI Agent Service: Part of Azure AI Foundry, the Azure AI Agent Service allows you to provision agents declaratively from the UI. You can consume various OpenAI models (with support for non-OpenAI models coming soon) and leverage important tools and knowledge bases such as files, Azure AI Search, SharePoint, and Fabric. You can connect agents together and create hierarchical agent dependencies. SDKs are available for building agents within the Agent Service using Python, C#, or Java. Microsoft manages the infrastructure to host and run these agents in isolated containers. The service offers role-based access control, Microsoft Entra ID integration, and options to bring your own storage for agent state and your own Azure Key Vault keys. You can also incorporate different actions, including invoking a Logic App instance from your agent. There is also an option to trigger an agent using Logic Apps (preview). Microsoft recommends using the Agent Service in Azure AI Foundry as the destination for agents, as further enhancements and investments are focused there.

4. Agent Orchestrators: There are several excellent orchestrators available, such as LlamaIndex, LangGraph, LangChain, and two from Microsoft—Semantic Kernel and AutoGen. These options are ideal if you need full control over agent creation, hosting, and orchestration. They are developer-only solutions and do not offer a UI (though AutoGen Studio provides some UI assistance). You can create complex, multi-layered agent connections, and then host and run these agents in your choice of Azure services, such as AKS or App Service.
Additionally, you have the option to create agents using the Agent Service and then orchestrate them with one of these orchestrators.

Choosing the Right Solution

The choice of agentic solution depends on several factors, including whether you prefer code or no-code approaches, control over the hosting platform, customer needs, scalability, maintenance, orchestration complexity, security, and cost.

- Customer need: If agents need to be part of a workflow, use AI agents in Logic Apps; otherwise, consider other options.
- No-code: For workflow-based agents, Logic Apps is suitable; for other scenarios, Azure AI Agent Service is recommended.
- Hosting and maintenance: If Logic Apps is not an option and you prefer not to maintain your own environment, use Azure AI Agent Service. Otherwise, consider custom agent orchestrators like Semantic Kernel or AutoGen to build the agents, and services like AKS or App Service to host them.
- Orchestration complexity: For simple hierarchical agent connections, Azure AI Agent Service is a good choice. For complex orchestration, use an agent orchestrator.
- Versioning: If versioning matters to you for a solid CI/CD regime, you may have to choose an agent orchestrator. The Agent Service still lacks clarity on this feature; there are workarounds, but no robust implementation yet. Hopefully a better versioning solution will arrive soon.

Summary: When selecting the right agentic solution on Azure, consider the latest recommendations and platform developments. For most scenarios, Microsoft advises using the Azure AI Agent Service within Azure AI Foundry, as it is the focus of ongoing enhancements and support. For workflow-driven projects, Azure Logic Apps with agentic capabilities may be suitable, while advanced users can leverage orchestrators for custom agent architectures.
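The selection guidance above can be condensed into a small rule chain. The sketch below is illustrative only — the function name, flags, and return strings are mine, and real decisions also weigh cost, security, and scale:

```python
def recommend_agentic_solution(workflow_based: bool,
                               self_host: bool,
                               complex_orchestration: bool,
                               needs_versioning: bool) -> str:
    """Condense the selection guidance into a simple rule chain.

    Illustrative only: real decisions also weigh cost, security,
    scalability, and team skills.
    """
    if workflow_based:
        # Workflow-dependent, declarative agents -> Logic Apps (preview).
        return "Azure Logic Apps AI agent workflows"
    if needs_versioning or complex_orchestration or self_host:
        # Full control over build/host/orchestrate -> orchestrator + AKS/App Service.
        return "Agent orchestrator (e.g., Semantic Kernel) on AKS/App Service"
    # Default: managed, declarative agents in Azure AI Foundry.
    return "Azure AI Agent Service (Azure AI Foundry)"

choice = recommend_agentic_solution(
    workflow_based=False, self_host=False,
    complex_orchestration=True, needs_versioning=False,
)
```

For instance, a non-workflow project with complex orchestration needs lands on an orchestrator, matching the guidance above.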
In the Selecting the Right Agentic Solution on Azure – Part 2 (Security) blog, we will examine the security aspects of each option, one by one.

How Great Engineers Make Architectural Decisions — ADRs, Trade-offs, and an ATAM-Lite Checklist
Why Decision-Making Matters

Without a shared framework, context fades and teams re-debate old choices. ADRs (Architecture Decision Records) solve that by recording the why behind design decisions — what problem we solved, what options we considered, and what trade-offs we accepted.

A good ADR:
- Lives next to the code in your repo.
- Explains reasoning in plain language.
- Survives personnel changes and version history.

Think of it as your team's engineering memory.

The Five Pillars of Trade-offs

At Microsoft, we frame every major design discussion using the Azure Well-Architected pillars:

- Reliability – Will the system recover gracefully from failures?
- Performance Efficiency – Can it meet latency and throughput targets?
- Cost Optimization – Are we using resources efficiently?
- Security – Are we minimizing blast radius and exposure?
- Operational Excellence – Can we deploy, monitor, and fix quickly?

No decision optimizes all five. Great engineers make conscious trade-offs — and document them.

A Practical Decision Flow

| Step | What to Do | Output |
|---|---|---|
| 1. Frame It | Clarify the problem, constraints, and quality goals (SLOs, cost caps). | Problem statement |
| 2. List Options | Identify 2-4 realistic approaches. | Options list |
| 3. Score Trade-offs | Use a Decision Matrix to rate options (1–5) against pillars. | Table of scores |
| 4. ATAM-Lite Review | List scenarios, identify sensitivity points (small changes with big impact) and risks. | Risk notes |
| 5. Record It as an ADR | Capture everything in one markdown doc beside the code. | ADR file |

Example: Adding a Read-Through Cache

Decision: Add a Redis cache in front of Cosmos DB to reduce read latency.
Context: Average P95 latency from the DB is 80 ms; target is < 15 ms.
Options:
A) Query the DB directly
B) Add a read-through cache using Redis

Trade-offs:
- Performance: + Massive improvement in read speed.
- Cost: + Fewer RU/s on Cosmos DB.
- Reliability: − Risk of stale data if cache invalidation fails.
- Operational: − Added complexity for monitoring and TTLs.
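The "Score Trade-offs" step lends itself to a few lines of code. The sketch below mirrors the weights and 1–5 ratings from the Decision Matrix Example in the templates section; the `weighted_total` helper and option labels are mine:

```python
# Weighted decision matrix: total = sum(weight * rating) per option.
weights = {"Reliability": 5, "Performance": 4, "Cost": 3,
           "Security": 4, "Operational Excellence": 3}

ratings = {
    "Option A (direct DB reads)": {"Reliability": 3, "Performance": 2, "Cost": 4,
                                   "Security": 4, "Operational Excellence": 4},
    "Option B (Redis cache)":     {"Reliability": 4, "Performance": 5, "Cost": 5,
                                   "Security": 4, "Operational Excellence": 3},
}

def weighted_total(option_ratings: dict, weights: dict) -> int:
    """Sum of weight x rating across all pillars."""
    return sum(weights[pillar] * rating
               for pillar, rating in option_ratings.items())

totals = {name: weighted_total(r, weights) for name, r in ratings.items()}
best = max(totals, key=totals.get)  # best overall score wins
```

With these numbers, Option B (the Redis cache) wins 80 to 63 — matching the narrative conclusion of the example.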
Templates You Can Re-use

ADR Template

# ADR-001: Add Read-through Cache in Front of Cosmos DB
Status: Accepted
Date: 2025-10-21
Context: High read latency; P95 = 80ms, target <15ms
Options:
  A) Direct DB reads
  B) Redis cache for hot keys ✅
Decision: Adopt Redis cache for performance and cost optimization.
Consequences:
- Improved read latency and reduced RU/s cost
- Risk of data staleness during cache invalidation
- Added operational complexity
Links: PR#3421, Design Doc #204, Azure Monitor dashboard

Decision Matrix Example

| Pillar | Weight | Option A | Option B | Notes |
|---|---|---|---|---|
| Reliability | 5 | 3 | 4 | Redis clustering handles failover |
| Performance | 4 | 2 | 5 | In-memory reads |
| Cost | 3 | 4 | 5 | Reduced RU/s |
| Security | 4 | 4 | 4 | Same auth posture |
| Operational Excellence | 3 | 4 | 3 | More moving parts |

Weighted total = Σ(weight × score) → the best overall score wins.

Team Guidelines

- Create a /docs/adr folder in each repo.
- One ADR per significant change; supersede old ones instead of editing history.
- Link ADRs in design reviews and PRs.
- Revisit when constraints change (incidents, new SLOs, cost shifts).
- Publish insights as follow-up blogs to grow shared knowledge.

Why It Works

This practice connects the theory of trade-offs with Microsoft's engineering culture of reliability and transparency. It improves onboarding, enables faster design reviews, and builds a traceable record of engineering evolution.

Join the Conversation

Have you tried ADRs or other decision frameworks in your projects? Share your experience in the comments or link to your own public templates — let's make architectural reasoning part of our shared language.