# The Microsoft Azure Infra Summit 2026 Schedule Is Live
Hello Folks, I promised the full agenda would drop soon. Today’s the day. The schedule is locked in, the approved sessions are on the board, and I want to walk you through what three days of deep-technical, engineering-led Azure content looks like.

A quick refresher before we get into the content: this event is free, it’s virtual, and it’s built by engineering for engineering. Most sessions are at the L300–L400 level, which means we’re skipping the marketing slides and getting straight to the architecture, the gotchas, and the “here’s what actually happens in production” stories you came for. We’re starting at 8:00 AM Pacific each day and running solid technical content through the afternoon. You can still register here (https://aka.ms/MAIS-reg).

We organized the three days around the pillars our community keeps coming back to: Build, Operate, and Optimize. Day 1 leans into Build so you leave the keynote with momentum, Day 2 bridges Build into Operate (where most of us actually spend our workdays), and Day 3 is pure Optimize (resiliency, cost, performance, and networking) before we close things out.
## The full 3-day agenda (all times Pacific)

The full online schedule is linked from the event site.

**Day 1, Tue, May 19 · BUILD**

- 8:00, KEYNOTE: Welcome & Azure Infrastructure Vision
- 9:00, Build a Sovereign Private Cloud with Azure Local
- 9:45, The Azure Deployment Agent: How AI Turns a Prompt into a Production-Ready Workload
- 10:15, ALZ IaC Accelerator: Deploy Your Azure Platform Landing Zone with IaC
- 11:00, Building Secure, Well-Architected Azure Workloads by Default with Azure Verified Modules and GitHub Copilot
- 11:45, Best Practices for Infrastructure as Code CI/CD on Azure
- 12:30, Modern Ingress for AKS: Introducing Application Gateway for Containers (AGC)
- 13:15, End-to-End Security on AKS Using Azure Application Gateway for Containers with Managed Cilium
- 14:00, Deployment Stacks: Getting Started
- 14:30, Accelerating Automated VM Image Pipelines with Azure Image Builder and Azure Compute Gallery
- 15:00, Troubleshooting Kubernetes Networking with an AI Diagnostic Assistant

**Day 2, Wed, May 20 · BUILD + OPERATE**

- 8:00, Build and Optimize a Data Lakehouse for Unified Data Intelligence
- 8:45, Designing Azure Networks That Scale: From Small Deployments to Enterprise-Grade
- 9:30, From Alert to Resolved: Building a Self-Healing Azure Platform with SRE Agent
- 10:15, Agentic Migrations & Modernization
- 10:45, Simplifying File Share Management and Control for Azure Files
- 11:30, Marketplace Image Protection: Safeguarding Workloads Through Patching and Graceful Deprecation
- 12:00, Operating Hybrid at Scale: Real-World Azure Arc Patterns for Governance, Security, and Cost Control
- 12:45, Run At-Scale On-Premises and Cloud Assessments and Migrations to Azure Storage
- 13:30, Modernize VDI with Azure Files and Entra Cloud-Native Identities
- 14:15, Operating Azure Backup at Scale: Day-2 Excellence for IaaS, PaaS, and Storage Workloads

**Day 3, Thu, May 21 · OPTIMIZE + Closing**

- 8:00, Achieving Zonal Resiliency in Azure Infrastructure
- 8:30, Architecting Resilient Azure Platforms: Durable Functions, Cosmos DB, and DR by Design
- 9:00, Optimizing EDA & HPC Pipelines on Azure: High-Performance Shared Storage with Azure NetApp Files
- 9:30, Elastic SAN for AVS Datastores: Best Price-Performance External Storage
- 10:00, Premium SSD v2 Disk: Best Price-Performance Block Storage for VMs and Containers
- 10:45, Optimizing File Storage for AI and Cloud-Native Workloads on Azure
- 11:30, Cut Storage Costs, Boost ROI: Optimizing Your Storage TCO on Azure Object Storage
- 12:15, How to Build Resilient Networks Using Azure Networking: What’s New in Azure Software Load Balancing
- 13:00, AKS Networking at Scale: CNI, Security, and Multi-Cluster Networking with Accelerated Performance
- 13:45, Kubenet Deprecation: Futureproofing AKS IPAM and Dataplane Configurations
- 14:15, Implement Zero-Tolerance Downtime Web Apps with Azure Front Door
- 14:45, Closing: Azure Infrastructure Applied Skills and Certifications

## What to do right now

- Block your calendar: May 19, 20, and 21, 8:00 AM PT start each day.
- Check out www.azureinfrasummit.com for more information.
- Register; it’s free.
- Pick your sessions; the online schedule has ICS files for each session. Build your personal track across Build, Operate, and Optimize.
- Bring your team; the agenda is deliberately wide: platform engineers, SREs, storage folks, network folks, AKS operators, IaC builders, and backup/DR owners will all find their sessions.

We put a lot of work into making sure every slot earned its place: these are engineering-delivered, production-grounded, no-fluff sessions. The speakers are the people shipping the features you’re using in Azure.

Can’t wait to see you online May 19–21. Until then, Cheers!

Pierre Roman

# High Expert Summit 2026 - United by Community, Cloud, and AI
Community-driven events continue to be one of the strongest pillars of the Microsoft ecosystem—and the High Expert Summit, organized by MVPs and the High Expert community, is a powerful example of that impact in action. Hotmart’s headquarters in Belo Horizonte offered a wonderful venue for the conference, with a modern auditorium and solid event infrastructure. The summit delivered two intense days of immersion, combining technical depth, strategic discussions, and meaningful connections. With more than 150 in-person participants, attendees were highly engaged and focused on advanced Azure and AI topics—making it one of the most impactful community Azure events in Brazil.

## Belo Horizonte: Strategic Location, Real Impact

From both participant and speaker perspectives, the choice of Belo Horizonte played a defining role in the event’s success. Although São Paulo often concentrates major technology events, Belo Horizonte—home to approximately 2.5 million people—has a strong industrial, technological, and innovation footprint. The region hosts major organizations such as ArcelorMittal, a global leader in steel and mining, and Localiza, one of Latin America’s largest mobility companies, founded in Belo Horizonte and operating across multiple countries, among many others. Belo Horizonte also has a solid startup ecosystem (San Pedro Valley, the first startup community created in Brazil; the BH-TEC technology park; and Seed, the Startups and Entrepreneurship Ecosystem Development governmental startup acceleration program) that pushes local entrepreneurs to create new technology-based goods and services. Professionals from these companies were actively present throughout the event, reinforcing how regional hubs outside the traditional tech “center” are deeply invested in cloud and AI transformation. For many attendees, this was the first event in months—or even years—of this scale and quality in the region.
The summit clearly addressed a local demand, delivering an experience that had a visible and lasting regional impact.

## A Community-Led Event, Built by MVPs and New Voices

From the very first moments—reception, venue, logistics, and overall organization—the conference demonstrated exceptional care and professionalism. The High Expert team, led by Guilherme Maia, delivered an experience widely praised by attendees for its structure, attention to detail, and high standards. A defining aspect of the event was the strong MVP presence, combined with intentional space for new and first‑time speakers. Most sessions were delivered by Microsoft MVPs, alongside Microsoft professionals and specialists working directly in the market—creating a balance between recognized expertise and fresh perspectives.

One particularly meaningful moment was the first public presentation by Matheus Faria Nogueira, who shared a real-world use case focused on security posture management in a web application architecture on Azure. His session demonstrated how security can be embedded into architecture decisions from the start—highlighting both technical rigor and the importance of encouraging new community voices.

## Deep Technical Content with a Strong Security Focus

Over two days, participants explored strategic and technical content covering Azure architecture, DevOps, Artificial Intelligence, innovation, career development, market trends, and the real challenges organizations face today. Security emerged as a key theme throughout the agenda. Among the highlights was the participation of Paulo Silva as a new speaker, presenting practical scenarios combining Microsoft Defender for Cloud and Microsoft Sentinel. His session showcased how organizations can achieve better visibility, detection, and response across hybrid and cloud environments using Microsoft’s security stack. Across sessions, a consistent message resonated with attendees: the value of hands‑on, experience‑driven content.
Speakers went beyond slides, focusing on implementation details, lessons learned, and actionable guidance—an approach many participants highlighted as one of the event’s strongest differentiators.

## Networking, Connections, and Industry Impact

Beyond technical sessions, the summit created space for high‑quality networking and collaboration. Conversations between architects, developers, MVPs, Microsoft professionals, and industry leaders fostered valuable exchanges among those actively shaping the future of Cloud and AI. These interactions led to concrete follow‑ups after the event, including discussions around applying Azure AI and object recognition technologies in industrial environments—demonstrating how community events often become catalysts for real innovation.

## Gratitude to the Community Behind the Event

The success of the High Expert Summit was the result of collective effort. Special recognition goes to the event team, who worked behind the scenes to deliver what many described as one of their most challenging—and rewarding—projects to date. The event was elevated by outstanding speakers, including Johnson de Souza Cruz, Claudenir Andrade, Francisco Ferreira, Henrique Eduardo Souza, Elton Bordim, Osanam Giordane da Costa Junior, Gilson Banin, Rodrigo Fonseca, Daniel Ribeiro, Ieso Dias, Roberta Santos, Professor Rodrigo Moreira, and others—each contributing deep expertise, practical insight, and pride in representing the Microsoft MVP community. Support from the sponsors BHS, Advanced Informatica Ltda., and DCIT Tecnologia also played a key role in making the experience possible. Above all, sincere thanks go to every participant who invested their time, energy, and curiosity, turning the summit into a truly memorable community moment.

## Looking Ahead

The event may have concluded, but the movement continues. The conversations, connections, and learning sparked in Belo Horizonte are already shaping what comes next.
With overwhelmingly positive feedback and strong regional engagement, expectations are set high for future editions, including the next High Expert Summit anticipated in 2027. Once again, the Microsoft MVP community demonstrated its power to learn, connect, and build the future—together.

## Want to Learn More About the MVP Program?

To find an MVP and learn more about the MVP Program, visit the MVP Communities website and follow our updates on LinkedIn or #mvpbuzz. Join us for a future live session through the Microsoft Reactor, where we walk through what the MVP program is about, what we look for, and how nominations work. These sessions are designed to help you connect the dots between the work you’re already doing and the impact the MVP Program recognizes — with time for questions, examples, and real conversations.

# Network Connectivity Check APIs for Logic App Standard
## Introduction

When your Logic App Standard is integrated with a Virtual Network (VNET), you can use these APIs to troubleshoot connectivity issues to downstream resources such as SQL databases, Storage Accounts, Service Bus, Key Vault, and more. The checks run directly from the worker hosting your Logic App, so the results reflect the actual network path your workflows use.

## API Overview

| API | HTTP Method | Route Suffix | Purpose |
| --- | --- | --- | --- |
| ConnectivityCheck | POST | /connectivityCheck | Validates end-to-end connectivity to an Azure resource (SQL, Key Vault, Storage, Service Bus, etc.) |
| DnsCheck | POST | /dnsCheck | Performs DNS resolution for a hostname |
| TcpPingCheck | POST | /tcpPingCheck | Performs a TCP ping to a host and port |

## How to Call

Using the Azure API Playground, sign in with your Azure account at https://portal.azure.com/#view/Microsoft_Azure_Resources/ArmPlayground.ReactView and use the POST method with the URLs below. Instead of the API Playground, you can also use PowerShell or `az rest`.

URL pattern for the production slot:

POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/sites/{logicAppName}/connectivityCheck?api-version=2026-03-01-preview

URL pattern for a deployment slot:

POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/sites/{logicAppName}/slots/{slotName}/connectivityCheck?api-version=2026-03-01-preview

Replace connectivityCheck with dnsCheck or tcpPingCheck as needed. All request bodies should be JSON.

## 1. ConnectivityCheck

Tests end-to-end connectivity from your Logic App to an Azure resource. This validates DNS, TCP, and authentication in a single call.
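To script these checks, the URL and a request body can be composed with a short helper. The sketch below is purely illustrative (there is no official SDK for these preview APIs, and helper names such as `build_check_url` are hypothetical); POST the resulting body to the URL with a bearer token via `az rest` or PowerShell, as noted above.

```python
# Illustrative sketch, not an official SDK. Builds the management-plane URL
# and a request body for the network checks described above.
import json

API_VERSION = "2026-03-01-preview"  # preview api-version from the URL pattern above

def build_check_url(sub_id, rg, app, check, slot=None):
    """ARM URL for connectivityCheck, dnsCheck, or tcpPingCheck."""
    base = (f"https://management.azure.com/subscriptions/{sub_id}"
            f"/resourceGroups/{rg}/providers/Microsoft.Web/sites/{app}")
    if slot:  # deployment-slot variant inserts /slots/{slotName}
        base += f"/slots/{slot}"
    return f"{base}/{check}?api-version={API_VERSION}"

def tcp_ping_body(host, port):
    """Body for TcpPingCheck; note the samples pass the port as a string."""
    return json.dumps({"properties": {"host": host, "port": str(port)}})
```

For example, `build_check_url(sub, rg, "mylogicapp", "dnsCheck")` yields the production-slot dnsCheck URL, and passing `slot="staging"` targets a deployment slot.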
### Supported Provider Types

| ProviderType | Use For |
| --- | --- |
| KeyVault | Azure Key Vault |
| SQL | Azure SQL Database / SQL Server |
| ServiceBus | Azure Service Bus |
| EventHubs | Azure Event Hubs |
| BlobStorage | Azure Blob Storage |
| FileShare | Azure File Share (see the Port 445 limitation; only port 443 is tested) |
| QueueStorage | Azure Queue Storage |
| TableStorage | Azure Table Storage |
| Web | Any HTTP/HTTPS endpoint |

### Credential Types

| CredentialType | When to Use |
| --- | --- |
| ConnectionString | You have a connection string to provide directly |
| Authentication | You have an endpoint URL with username and password |
| CredentialReference | You want to reference an existing connection string or app setting by name |
| AppSetting | You want to reference an app setting configured on the Logic App |
| ManagedIdentity | Your Logic App uses Managed Identity to authenticate |

### Sample Request — Connection String (SQL Database)

POST https://management.azure.com/subscriptions/{subId}/resourceGroups/{rg}/providers/Microsoft.Web/sites/{logicAppName}/connectivityCheck?api-version=2026-03-01-preview
Content-Type: application/json

```json
{
  "properties": {
    "providerType": "SQL",
    "credentials": {
      "credentialType": "ConnectionString",
      "connectionString": "Server=tcp:myserver.database.windows.net,1433;Database=mydb;User ID=myuser;Password=mypassword;Encrypt=True;TrustServerCertificate=False;"
    },
    "resourceMetadata": { "entityName": "" }
  }
}
```

### Sample Request — App Setting Reference (Service Bus)

Use this when your connection string is stored in an app setting on the Logic App (e.g., ServiceBusConnection).

```json
{
  "properties": {
    "providerType": "ServiceBus",
    "credentials": {
      "credentialType": "AppSetting",
      "appSetting": "ServiceBusConnection"
    },
    "resourceMetadata": { "entityName": "myqueue" }
  }
}
```

### Sample Request — Managed Identity (Blob Storage)

Use this when your Logic App authenticates using Managed Identity.
```json
{
  "properties": {
    "providerType": "BlobStorage",
    "credentials": {
      "credentialType": "ManagedIdentity",
      "managedIdentity": {
        "targetResourceUrl": "https://mystorageaccount.blob.core.windows.net",
        "clientId": ""
      }
    },
    "resourceMetadata": { "entityName": "" }
  }
}
```

Tip: Leave clientId empty to use the system-assigned managed identity. Provide a client ID to use a specific user-assigned managed identity.

## 2. DnsCheck

Tests whether a hostname can be resolved from your Logic App's worker. This is useful for verifying that private DNS zones and private endpoints are configured correctly.

Sample request:

POST https://management.azure.com/subscriptions/{subId}/resourceGroups/{rg}/providers/Microsoft.Web/sites/{logicAppName}/dnsCheck?api-version=2026-03-01-preview
Content-Type: application/json

```json
{
  "properties": {
    "dnsName": "myserver.database.windows.net"
  }
}
```

## 3. TcpPingCheck

Tests whether a TCP connection can be established from your Logic App to a specific host and port. This is useful for checking whether a port is open and reachable through your VNET.

Sample request:

POST https://management.azure.com/subscriptions/{subId}/resourceGroups/{rg}/providers/Microsoft.Web/sites/{logicAppName}/tcpPingCheck?api-version=2026-03-01-preview
Content-Type: application/json

```json
{
  "properties": {
    "host": "myserver.database.windows.net",
    "port": "1433"
  }
}
```

## Port 445 (SMB / Azure File Share) — Known Limitation

Port 445 cannot be reliably tested using TcpPingCheck or ConnectivityCheck with the FileShare provider type.

### Restricted Outgoing Ports

Regardless of the destination address, applications cannot connect anywhere using ports 445, 137, 138, and 139. In other words, even when connecting to a non-private IP address or an address within a virtual network, connections to ports 445, 137, 138, and 139 are not permitted.

# Stop Experimenting, Start Building: AI Apps & Agents Dev Days Has You Covered
The AI landscape has shifted. The question is no longer “Can we build AI applications?”; it’s “Can we build AI applications that actually work in production?” Demos are easy. Reliable, scalable, resilient AI systems that handle real-world complexity? That’s where most teams struggle. If you’re an AI developer, software engineer, or solution architect who’s ready to move beyond prototypes and into production-grade AI, there’s a series built specifically for you.

## What Is AI Apps & Agents Dev Days?

AI Apps & Agents Dev Days is a monthly technical series from Microsoft Reactor, delivered in partnership with Microsoft and NVIDIA. You can explore the full series at https://developer.microsoft.com/en-us/reactor/series/s-1590/

This isn’t a slide deck marathon. The series tagline says it best: “It’s not about slides, it’s about building.” Each session tackles real-world challenges, shares patterns that actually work, and digs into what’s next in AI-driven app and agent design. You bring your curiosity, your code, and your questions. You leave with something you can ship.

The sessions are led by experienced engineers and advocates from both Microsoft and NVIDIA, people like Pamela Fox, Bruno Capuano, Anthony Shaw, Gwyneth Peña-Siguenza, and solutions architects from NVIDIA’s Cloud AI team. These aren’t theorists; they’re practitioners who build and ship the tools you use every day.

## What You’ll Learn

The series covers the full spectrum of building AI applications and agent-based systems. Here are the key themes:

### Building AI Applications with Azure, GitHub, and Modern Tooling

Sessions walk through how to wire up AI capabilities using Azure services, GitHub workflows, and the latest SDKs. The focus is always on code-first learning: you’ll see real implementations, not abstract architecture diagrams.

### Designing and Orchestrating AI Agents

Agent development is one of the series’ strongest threads.
Sessions cover how to build agents that orchestrate long-running workflows, persist state automatically, recover from failures, and pause for human-in-the-loop input, without losing progress. For example, the session “AI Agents That Don’t Break Under Pressure” demonstrates building durable, production-ready AI agents using the Microsoft Agent Framework, running on Azure Container Apps with NVIDIA serverless GPUs.

### Scaling LLM Inference and Deploying to Production

Moving from a working prototype to a production deployment means grappling with inference performance, GPU infrastructure, and cost management. The series covers how to leverage NVIDIA GPU infrastructure alongside Azure services to scale inference effectively, including patterns for serverless GPU compute.

### Real-World Architecture Patterns

Expect sessions on container-based deployments, distributed agent systems, and enterprise-grade architectures. You’ll learn how to use services like Azure Container Apps to host resilient AI workloads, how Foundry IQ fits into agent architectures as a trusted knowledge source, and how to make architectural decisions that balance performance, cost, and scalability.

## Why This Matters for Your Day Job

There’s a critical gap between what most AI tutorials teach and what production systems actually require. This series bridges that gap:

- Production-ready patterns, not demos. Every session focuses on code and architecture you can take directly into your projects. You’ll learn patterns for state persistence, failure recovery, and durable execution — the things that break at 2 AM.
- Enterprise applicability. The scenarios covered — travel planning agents, multi-step workflows, GPU-accelerated inference — map directly to enterprise use cases. Whether you’re building internal tooling or customer-facing AI features, the patterns transfer.
- Honest trade-off discussions. The speakers don’t shy away from the hard questions: When do you need serverless GPUs versus dedicated compute?
How do you handle agent failures gracefully? What does it actually cost to run these systems at scale?

## Watch On-Demand, Build at Your Own Pace

Every session is available on-demand. You can watch, pause, and build along at your own pace, no need to rearrange your schedule. The full playlist is available at https://www.youtube.com/playlist?list=PLmsFUfdnGr3znh-5zg1xFTK5dmaSE44br

This is particularly valuable for technical content. Pause a session while you replicate the architecture in your own environment. Rewind when you need to catch a configuration detail. Build alongside the presenters rather than just watching passively.

## What You’ll Walk Away With

After working through the series, you’ll have:

- Practical agent development skills — how to design, orchestrate, and deploy AI agents that handle real-world complexity, including state management, failure recovery, and human-in-the-loop patterns
- Production architecture patterns — battle-tested approaches for deploying AI workloads on Azure Container Apps, leveraging NVIDIA GPU infrastructure, and building resilient distributed systems
- Infrastructure decision-making confidence — a clearer understanding of when to use serverless GPUs, how to optimise inference costs, and how to choose the right compute strategy for your workload
- Working code and reference implementations — the sessions are built around live coding and sample applications (like the Travel Planner agent demo), giving you starting points you can adapt immediately
- A framework for continuous learning — with new sessions each month, you’ll stay current as the AI platform evolves and new capabilities emerge

## Start Building

The AI applications that will matter most aren’t the ones with the flashiest demos — they’re the ones that work reliably, scale gracefully, and solve real problems. That’s exactly what this series helps you build.
Whether you’re designing your first AI agent system or hardening an existing one for production, the AI Apps & Agents Dev Days sessions give you the patterns, tools, and practical knowledge to move forward with confidence. Explore the series at https://developer.microsoft.com/en-us/reactor/series/s-1590/ and start watching the on-demand sessions at https://www.youtube.com/playlist?list=PLmsFUfdnGr3znh-5zg1xFTK5dmaSE44br

The best time to level up your AI engineering skills was yesterday. The second-best time is right now, and these sessions make it easy to start.

# AI apps and agents: choosing your Marketplace offer type
Choosing your Marketplace offer type is one of the earliest—and most consequential—decisions you’ll make when preparing an AI app for Microsoft Marketplace. It’s also one of the hardest to change later. This post is the second in our Marketplace‑ready AI app series. Its goal is not to push you toward a specific option, but to help you understand how Marketplace offer types map to different AI delivery models—so you can make an informed decision before architecture, security, and publishing work begins. You can always get curated step-by-step guidance on building, publishing, and selling apps for Marketplace through App Advisor.

This post is part of a series on building and publishing well-architected AI apps and agents in Microsoft Marketplace. The series focuses on AI apps and agents that are architected, hosted, and operated on Azure, with guidance aligned to building and selling solutions through Microsoft Marketplace.

## Why offer type is an important Marketplace decision

Offer type is more than a packaging choice. It defines the operating model of your AI app on Marketplace:

- How customers acquire your solution
- Where the AI runtime executes
- The security and business boundaries for the AI solution and its associated contextual data
- Who operates and updates the system
- How transactions and billing are handled

Once an offer type is selected, it cannot be changed without creating a new offer. Teams that choose too quickly often discover later that the decision creates friction across architecture, security boundaries, or publishing requirements. Microsoft’s Publishing guide by offer type explains the structural differences between offer types and why this decision must be made up front.
## How Marketplace offer types map to AI delivery models

AI apps differ from traditional software in a few critical ways:

- Contextual data may need to remain in a specific tenant or geography
- Agents may operate autonomously and continuously
- Control over infrastructure often determines trust and compliance
- How the solution is charged and monetized matters, including whether pricing is usage‑based, metered, or subscription‑driven (for example, billing per inference, per workflow execution, or as a flat monthly fee)
- The buyer’s technical capability matters, including the level of engineering expertise required to deploy and operate the solution (for example, SaaS is generally easier to consume, while container‑based and managed application offers often require stronger cloud engineering and DevOps skills)

Marketplace offer types don’t describe features. They define responsibility boundaries—who controls the AI runtime, who owns the infrastructure, and where customer data is processed. At a high level, Marketplace supports four primary delivery models for AI solutions:

- SaaS
- Azure Managed Application
- Azure Container
- Virtual Machine

Each represents a different balance between publisher control and customer control. The sections below unpack each offer type and what it means in practice; check out the interactive offer selection wizard in App Advisor for decision support.

## SaaS offers for AI apps

SaaS is the most common model for AI apps and agents on Marketplace—and often the default starting point. With a SaaS offer, the AI service runs in the publisher’s Azure environment and is accessed by customers through a centralized endpoint. This model works well for:

- Multi‑tenant AI platforms and agents
- Continuous model and prompt updates
- Rapid experimentation and iteration
- Usage‑based or subscription billing

Because the service is centrally hosted, publishers retain full control over deployment, updates, and operational behavior.
For multi-tenant AI apps, this also means making early decisions about Microsoft Entra ID configuration—such as how customers are onboarded, whether access is granted through tenant-level consent or external identities, and how user identities, roles, and data are isolated across tenants to prevent cross-tenant access or data leakage. For official guidance, see the SaaS section of the Marketplace publishing guide and the AI agent overview, which describes SaaS‑based agent deployments. See also: Plan a SaaS offer for Microsoft Marketplace.

## Azure Managed Applications for AI solutions

In this model, the solution is deployed into the customer’s Azure subscription, not the publisher’s. There are two variants:

- Managed applications, where the publisher retains permissions to operate and update the deployed resources
- Solution templates, where the customer fully manages the deployment after installation

This model is a strong fit when AI workloads must run inside customer‑controlled environments, such as:

- Regulated or sensitive data scenarios
- Customer‑owned networking and identity boundaries
- Infrastructure‑heavy AI solutions that can’t be centralized
- Scenarios where the customer or their IT team needs (or wants) to tailor the app to the end customer’s specific environment

Managed Applications sit between SaaS and fully customer‑run deployments. They offer more customer control than SaaS, while still allowing publishers to manage lifecycle aspects when appropriate. Marketplace guidance for Azure Applications is covered in the publishing guide. For more information, see Plan an Azure managed application for an Azure application offer.

## Azure Container offers for AI workloads

With container offers, the customer runs the AI workload—typically on AKS—using container images supplied by the publisher.
This model is best suited for scenarios that require:

- Strict data residency
- Air‑gapped or tightly controlled environments
- Customer‑managed Kubernetes infrastructure

The publisher delivers the container artifacts, but deployment, scaling, and runtime operations occur in the customer’s environment. This shifts operational responsibility, risk, and compute costs away from the publisher and toward the customer. Container offer requirements are covered in the Marketplace publishing guide; see Plan a Microsoft Marketplace Container offer.

## Virtual Machine offers for AI solutions

Virtual Machine offers still play a role, particularly for specialized or legacy AI solutions. VM offers package a pre‑configured AI environment that customers deploy into their Azure subscription. Compared to other models:

- Updates and scaling are more tightly scoped
- Iteration cycles tend to be longer
- The solution is more closely aligned with specific OS or hardware requirements

They are most commonly used for:

- Legacy AI stacks
- Fixed‑function AI appliances
- Solutions with specialized hardware or driver dependencies

VM publishing requirements are also documented in the Marketplace publishing guide; see Plan a virtual machine offer for Microsoft Marketplace.

## Comparing offer types across AI‑specific decision dimensions

Rather than asking “which offer type is best,” it’s more useful to ask what trade‑offs you’re making.
Key lenses to consider include:

- Who operates the AI runtime day‑to‑day
- Where customer data and AI prompts, inputs, and outputs are processed and stored
- How quickly models, prompts, and logic can evolve
- The balance between publisher control and customer control
- How Marketplace transactions and billing align with runtime behavior

| | SaaS | Container (AKS / ACI) | Virtual Machine (VM) | Azure Managed Application |
| --- | --- | --- | --- | --- |
| What it is | Fully managed, externally hosted app integrated with Marketplace for billing and entitlement | Containerized app deployed into customer-managed Azure container environments | VM image deployed directly into the customer’s Azure subscription | Azure-native solution deployed into the customer’s subscription, managed by the publisher |
| Control plane | Publisher‑owned | Customer-owned | Customer-owned | Customer-owned (with publisher access) |
| Operational model | Centralized operations, updates, and scaling | Customer operates infra; publisher provides containers | Customer operates infra; publisher provides VM image | Per-customer deployment and lifecycle |
| Good fit scenarios | Multi‑tenant AI apps serving many customers; fast onboarding and trials; frequent model or feature updates; publisher has full runtime control | AI apps or agents built as microservices; customers standardize on Kubernetes; portability across environments is important | Legacy or lift-and-shift AI workloads; custom OS or driver dependencies; specialized runtime requirements | Enterprise AI solutions requiring customer-owned infrastructure; tight integration with customer Azure resources; strong compliance and governance needs |
| Avoid when | Customers require deployment into their own subscription; strict data residency mandates customer control; offline or air‑gapped environments are required | | | |
| Typical AI usage pattern | Centralized inference and orchestration across tenants | | | |

Different AI solutions land in different places across these dimensions.
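As a purely illustrative way of applying these lenses in code (this is not an official Microsoft tool; the App Advisor wizard is the supported decision aid, and the function name and rules below are hypothetical simplifications), a rule-of-thumb helper might look like:

```python
# Purely illustrative rule-of-thumb, not an official decision tool.
# It encodes only a few of the lenses discussed above.
def suggest_offer_type(customer_subscription_required: bool,
                       kubernetes_standardized: bool,
                       custom_os_or_drivers: bool,
                       publisher_operates_runtime: bool) -> str:
    """Map a handful of decision lenses to a candidate Marketplace offer type."""
    if not customer_subscription_required and publisher_operates_runtime:
        return "SaaS"                       # centralized, publisher-controlled runtime
    if custom_os_or_drivers:
        return "Virtual Machine"            # OS- or driver-level dependencies
    if kubernetes_standardized:
        return "Container"                  # customer-managed AKS environments
    return "Azure Managed Application"      # customer subscription, publisher lifecycle
```

A real decision would weigh all the dimensions in the comparison above (billing alignment, data residency, buyer skill set), but sketching it this way makes the responsibility-boundary trade-off explicit.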
The right choice is the one that matches your operational reality, not just your product vision.

Note: If your solution primarily delivers virtual machines or containerized workloads, use a Virtual Machine or Container offer instead of an Azure Managed Application.

Supported sales models and pricing options by Marketplace offer type

Marketplace offer types don't just define how an AI app or agent is deployed; they also determine how it can be sold, transacted, and expanded through Microsoft Marketplace. Understanding the supported sales models early helps avoid misalignment between architecture, pricing, and go‑to‑market strategy.

Supported sales models

| Offer type | Transactable listing | Public listing | Private offers | Resale enabled | Multiparty private offers | Azure IP Co‑sell eligible |
|---|---|---|---|---|---|---|
| SaaS | Yes | Yes | Yes | Yes | Yes | Yes |
| Container | Yes | Yes | Yes | No | Yes | Yes |
| Virtual Machine | Yes | Yes | Yes | Yes | Yes | Yes |
| Azure Managed Application | Yes | Yes | Yes | No | Yes | Yes |

What these sales models mean

- Transactable listing: A Marketplace listing that allows customers to purchase the solution directly through Microsoft Marketplace, with billing handled by Microsoft.
- Public listing: A listing that is discoverable by any customer browsing Microsoft Marketplace and available for self‑service acquisition.
- Private offers: Customer‑specific offers created by the publisher with negotiated pricing, terms, or configurations, purchased through Marketplace.
- Resale enabled: Software companies can authorize their channel partners to sell their existing Marketplace offers on their behalf. After authorization, channel partners can independently create and sell private offers without direct involvement from the software company.
- Multiparty private offers: Private offers that involve one or more Microsoft partners (such as resellers or system integrators) as part of the transaction.
- Azure IP Co‑sell eligible: When all requirements are met, purchases of your offers can count toward customers' Microsoft Azure Consumption Commitments (MACC).

Pricing options

Marketplace offer types determine the pricing models available. Make sure you build toward a Marketplace offer type that aligns with how you want to deploy and price your solution.

- SaaS – Subscription or flat‑rate pricing, per‑user pricing, and usage‑based (metered) pricing.
- Container – Kubernetes‑based offers support multiple Marketplace‑transactable pricing models aligned to how the workload runs in the customer's environment, including per core, per core in cluster, per node, per node in cluster, per pod, or per cluster pricing, all billed on a usage basis. Container offers can also support custom metered dimensions for application‑specific usage. Alternatively, publishers may offer Bring Your Own License (BYOL) plans, where customers deploy through Marketplace but bring an existing software license.
- Virtual Machine – Usage‑based hourly pricing (flat rate, per vCPU, or per vCPU size), with optional 1‑year or 3‑year reservation discounts. Publishers may also offer BYOL plans, where customers bring an existing software license and are billed only for Azure infrastructure.
- Azure Managed Application – A monthly management or service fee charged by the publisher; Azure infrastructure consumption is billed separately to the customer.

Note: Azure Managed Applications are designed to charge for management and operational services, not for SaaS‑style application usage or underlying infrastructure consumption.
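For custom metered dimensions, the publisher's application reports consumption to the Marketplace metered billing API. As a rough, non-authoritative sketch of what that involves, the Python snippet below assembles a single usage event. The endpoint and field names follow the public commercial-marketplace documentation, but the resource ID, plan ID, and dimension name are invented placeholders, and authentication (a Microsoft Entra bearer token) is deliberately omitted.

```python
# Hedged sketch: building a usage event for the Marketplace metered
# billing API. Field names follow the public docs; the concrete values
# (resource ID, plan, dimension) are hypothetical examples.
import datetime
import json

METERING_ENDPOINT = (
    "https://marketplaceapi.microsoft.com/api/usageEvent?api-version=2018-08-31"
)

def build_usage_event(resource_id: str, plan_id: str,
                      dimension: str, quantity: float) -> dict:
    """Build the JSON body for one custom-meter usage event."""
    return {
        "resourceId": resource_id,   # SaaS subscription or managed-app resource ID
        "planId": plan_id,           # the plan the customer purchased
        "dimension": dimension,      # custom meter defined in Partner Center
        "quantity": quantity,        # units consumed since the last event
        "effectiveStartTime": datetime.datetime.now(
            datetime.timezone.utc
        ).isoformat(),               # when the usage occurred
    }

# Example: report 42 units of a hypothetical "tokens_1k" dimension.
event = build_usage_event("sub-00000000", "premium-plan", "tokens_1k", 42.0)
print(json.dumps(event, indent=2))
```

In production, this body would be POSTed to `METERING_ENDPOINT` with an `Authorization: Bearer <token>` header, and the response checked for duplicate or late-arrival errors before the event is considered recorded.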
Buyer‑side assumptions to be aware of

For customers to purchase AI apps and agents through these sales models:

- The customer must be able to purchase through Microsoft Marketplace using their existing Microsoft procurement setup
- Marketplace purchases align with enterprise buying and governance controls, rather than ad‑hoc vendor contracts
- For private and multiparty private offers, the customer must be willing to engage in a negotiated Marketplace transaction rather than pure self‑service checkout

Important clarification

Supported sales models are largely consistent across Marketplace offer types. What varies by offer type is how the solution is provisioned, billed, operated, and updated. Sales flexibility alone should not drive offer‑type selection; the choice must align with the architecture and operating model of the AI app or agent.

How this decision impacts everything that follows

Offer type decisions ripple through the rest of the Marketplace journey. They directly shape:

- Architecture design choices
- Security and compliance boundaries
- Fulfillment APIs and billing integration
- Publishing and certification requirements
- Cost, scalability, and operational responsibility

Follow the series for updates on new posts.

What's next in the journey

With the offer type decision in place, the focus shifts to turning that choice into a production‑ready solution. This includes designing an architecture that aligns with your delivery model, establishing clear security and compliance boundaries, and preparing the operational foundations required to run, update, and scale your AI app or agent confidently in customer environments. Getting these elements right early reduces rework and sets the stage for a smoother Marketplace journey. See the next post in the series: Designing Production‑Ready AI App and Agent Architectures for Microsoft Marketplace.
Key resources

- See curated, step-by-step guidance to help you build, publish, or sell your app or agent (no matter where you start) in App Advisor
- The Quick-Start Development Toolkit can connect you with code templates for AI solution patterns
- Microsoft AI Envisioning Day events
- How to build and publish AI apps and agents for Microsoft Marketplace
- Get over $126K USD in benefits and technical consultations to help you replicate and publish your app with ISV Success

Join us at Microsoft Azure Infra Summit 2026 for deep technical Azure infrastructure content
Microsoft Azure Infra Summit 2026 is a free, engineering-led virtual event created for IT professionals, platform engineers, SREs, and infrastructure teams who want to go deeper on how Azure really works in production. It will take place May 19–21, 2026.

This event is built for the people responsible for keeping systems running, making sound architecture decisions, and dealing with the operational realities that show up long after deployment day. Over the past year, one message has come through clearly from the community: infrastructure and operations audiences want more in-depth technical content. They want fewer surface-level overviews and more practical guidance from the engineers and experts who build, run, and support these systems every day. That is exactly what Azure Infra Summit aims to deliver. All content is created AND delivered by engineering, targeting folks working with Azure infrastructure and operating production environments.

- Who is this for: IT professionals, platform engineers, SREs, and infrastructure teams
- When: May 19–21, 2026, 8:00 AM–1:00 PM Pacific Time, all 3 days
- Where: Online (virtual)
- Cost: Free
- Level: Most sessions are advanced (L300–L400)
- Register here: https://aka.ms/MAIS-Reg

Built for the people who run workloads on Azure

Azure Infra Summit is for the people who do more than deploy to Azure. It is for the people who run it. If your day involves uptime, patching, governance, monitoring, reliability, networking, identity, storage, or hybrid infrastructure, this event is for you. Whether you are an IT professional managing enterprise environments, a platform engineer designing landing zones, an Azure administrator, an architect, or an SRE responsible for resilience and operational excellence, you will find content built with your needs in mind. We are intentionally shaping this event around peer-to-peer technical learning.
That means engineering-led sessions, practical examples, and candid discussion about architecture, failure modes, operational tradeoffs, and what breaks in production. The promise here is straightforward: less fluff, more infrastructure.

What to expect

Azure Infra Summit will feature deep technical content in the 300 to 400 level range, with sessions designed by engineering to help you build, operate, and optimize Azure infrastructure more effectively. The event will include a mix of live and pre-recorded sessions and live Q&A. Throughout the three days, we will dig into topics such as:

- Hybrid operations and management
- Networking at scale
- Storage, backup, and disaster recovery
- Observability, SLOs, and day-2 operations
- Confidential compute
- Architecture, automation, governance, and optimization in Azure core environments
- And more

The goal is simple: to give you practical guidance you can take back to your environment and apply right away. We want attendees to leave with stronger mental models, a better understanding of how Azure behaves in the real world, and clearer patterns for designing and operating infrastructure with confidence.

Why this event matters

Infrastructure decisions have a long tail. The choices we make around architecture, operations, governance, and resilience show up later in the form of performance issues, outages, cost, complexity, and recovery challenges. That is why deep technical learning matters, and why events like this matter.

Join us

I hope you will join us for Microsoft Azure Infra Summit 2026, happening May 19–21, 2026. If you care about how Azure infrastructure behaves in the real world, and you want practical, engineering-led guidance on how to build, operate, and optimize it, this event was built for you.

Register here: https://aka.ms/MAIS-Reg

Cheers! Pierre Roman

Success with AI apps and agents on Marketplace: the end-to-end
Preparing an AI app or agent for Microsoft Marketplace requires solving a broader set of problems, ones that extend beyond the model and into architecture, security, compliance, operations, and commerce. These requirements often surface late, when teams are already moving toward launch.

Teams often reach the same milestone: the AI works, the demo is compelling, and early customers are interested. But when it's time to publish, transact, and operate that solution through Marketplace, gaps emerge around security, compliance, reliability, operations, or commerce integration. Whether you are demo ready or starting with a great AI idea, this series is designed to address those challenges through a connected, end‑to‑end journey. It brings together the decisions and requirements needed to build AI apps and agents that are not only functional, but Marketplace‑ready from day one. You can always get curated, step-by-step guidance through building, publishing, and selling apps for Marketplace through App Advisor.

This post is part of a series on building and publishing well-architected AI apps and agents on Microsoft Marketplace. The series focuses on AI apps and agents that are architected, hosted, and operated on Azure, with guidance aligned to building and selling solutions through Microsoft Marketplace.

Why an end‑to‑end journey matters

A working AI app does not automatically mean a Marketplace‑ready AI app. Marketplace readiness spans far more than model selection or prompt engineering. It requires a holistic approach across:

- Architecture and hosting design
- Security and AI guardrails
- Compliance and governance
- Operational maturity
- Commerce, billing, and lifecycle integration

While guidance exists across each of these areas, it is often fragmented. This series connects those pieces into a single, reusable mental model that software companies can use to design, build, publish, and operate AI apps and agents with confidence. This first post frames the journey.
Each subsequent post goes deep into one stage.

The Marketplace‑ready AI app and agent lifecycle

At a high level, Marketplace‑ready AI apps and agents follow this lifecycle:

1. Define how the AI app and agent will be delivered
2. Identify industry compliance and regulatory requirements
3. Design a production‑ready AI architecture
4. Embed security and AI guardrails into the design
5. Validate compliance and governance
6. Build and test an MVP with potential customers
7. Build for quality, reliability, and scale
8. Integrate with Marketplace commerce
9. Prepare for publishing and go‑live
10. Operate, monitor, and evolve safely
11. Promote your AI app and agent to close initial sales

This lifecycle is intentionally introduced once, at a high level. Decisions made early will shape everything that follows. Throughout the series, this lifecycle serves as a shared reference point.

Step 1: Decide how your AI app and agent will be packaged and delivered

The first decision is how the AI app and agent will be delivered through Marketplace. Offer types, such as SaaS, Azure Managed Applications, Containers, and Virtual Machines, are not just listing formats. They are delivery models that directly impact:

- Identity and authentication
- Billing and metering
- Deployment responsibilities
- Operational ownership
- Customer onboarding experience
- Supported sales models

Choosing an offer type early helps avoid costly redesigns later.

Step 2: Design a production‑ready AI architecture

Marketplace AI apps and agents are expected to meet enterprise customer expectations for performance, reliability, and security. Architecture decisions must account for:

- Customer business, compliance, and security needs
- Offer‑specific hosting best practices

For example, SaaS offers typically require:

- Tenant isolation
- Environment separation
- Strong identity boundaries

Architecture must also support both AI behavior and Marketplace lifecycle events, such as provisioning, subscription changes, and entitlement checks.
Step 3: Secure the AI app and agent and define guardrails

Security cannot be treated as a certification checklist at the end of the process. AI introduces new risks beyond traditional applications, including expanded attack surfaces through prompts and inputs. Frameworks such as the OWASP GenAI Top 10 provide a useful lens for identifying these risks. Guardrails must be enforced:

- At design time, through architecture and policy decisions
- At runtime, through monitoring, enforcement, and response

AI‑specific incident response must also factor in privacy regulations and customer trust.

Step 4: Treat AI agents as compliance‑governed systems

AI agents and their data are first‑class compliance subjects. This includes:

- Prompts and responses
- Contextual and training data
- Actions taken by the agent

These artifacts must be auditable and governed inline, not retroactively. At the same time, publishers must balance compliance with intellectual property protection by enabling explainability and transparency without exposing proprietary logic.

Step 5: Build for quality, reliability, and scale

Marketplace buyers expect predictable behavior. AI apps and agents should formalize:

- Quality and evaluation frameworks
- Reliability and performance targets
- Scaling and cost optimization strategies

Quality, reliability, and performance directly influence customer trust and satisfaction.

Step 6: Integrate with Marketplace commerce and lifecycle APIs

Marketplace is not "just a listing." For transactable offers, which let you sell globally, directly to customers or through the channel, and allow customers to count purchases of your app against their cloud commitments, Marketplace becomes an operational contract. Subscription state, entitlements, billing, and metering are runtime responsibilities of the application. For SaaS offers, the SaaS Fulfillment APIs define the source of truth for subscription lifecycle events.
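As a rough illustration of what that operational contract involves (a sketch, not an official sample), the Python snippet below constructs the two calls a SaaS offer's landing page typically makes against the SaaS Fulfillment API v2: `resolve`, which exchanges the marketplace purchase token for the subscription details, and `activate`, which confirms provisioning and starts billing. The endpoint shapes follow the public API reference; the token and ID values are placeholders, and the actual HTTP send plus Microsoft Entra authentication are omitted.

```python
# Hedged sketch of the SaaS Fulfillment API v2 landing-page flow.
# Only request construction is shown; sending and auth are out of scope.
BASE = "https://marketplaceapi.microsoft.com/api/saas/subscriptions"
API_VERSION = "2018-08-31"

def resolve_request(purchase_token: str, bearer_token: str):
    """POST /resolve: exchange the token passed to the landing page
    (the ?token=... query parameter) for the subscription details."""
    return (
        "POST",
        f"{BASE}/resolve?api-version={API_VERSION}",
        {
            "Authorization": f"Bearer {bearer_token}",
            "x-ms-marketplace-token": purchase_token,
        },
        None,  # no request body
    )

def activate_request(subscription_id: str, plan_id: str, bearer_token: str):
    """POST /{id}/activate: signal that provisioning finished,
    which moves the subscription to Subscribed and starts billing."""
    return (
        "POST",
        f"{BASE}/{subscription_id}/activate?api-version={API_VERSION}",
        {
            "Authorization": f"Bearer {bearer_token}",
            "Content-Type": "application/json",
        },
        {"planId": plan_id},
    )

method, url, headers, body = activate_request(
    "<subscription-guid>", "premium-plan", "<entra-token>"
)
print(method, url)
```

A real implementation would also subscribe to the lifecycle webhook so that plan changes, suspensions, and cancellations flowing from Marketplace are reflected in the application's own entitlement state.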
Integrate Marketplace lead flows with your CRM using the Marketplace lead connector for CRM.

Step 7: Prepare for publishing, certification, and go‑live

Publishing requires more than code completion. Marketplace certification validates:

- Security posture
- Customer experience
- Operational readiness

Using templates, checklists, and tooling such as the Quick Start Development Toolkit, Marketplace Rewards resources, and App Advisor reduces friction and rework.

Step 8: Operate and evolve safely after go‑live

Launch is not the end of the journey. AI apps and agents evolve continuously, making the following essential for protecting both customers and publishers:

- Safe deployment strategies
- CI/CD discipline
- Rollback and monitoring practices

Operational maturity also includes maintaining Marketplace offer assets (store images) as the product evolves. Use this framework to help you build a production-ready AI app and agent: well architected, secured, reliable, scalable, and integrated with the Microsoft Marketplace global commerce engine.

Step 9: Promote your AI app and agent

Becoming Marketplace‑ready does not end at publication. AI app and agent success also depends on how effectively the solution is discovered, evaluated, and trusted by customers within Microsoft Marketplace and the broader Microsoft ecosystem. Promotion in Microsoft Marketplace is tightly integrated with how customers discover and purchase solutions. AI apps and agents are surfaced through Marketplace search, categories, and in‑product experiences, and once your AI app or agent becomes Azure IP co-sell eligible, the purchase of your offer can count toward your customers' Microsoft Azure Consumption Commitments (MACC), motivating customers to buy your offer. This reduces buying friction and accelerates evaluation‑to‑purchase transitions.
Top activities to grow your sales:

- Optimize your listing once you publish your app by getting an agentic review of your published listing in seconds, based on Marketplace listing best practices and expert Microsoft editorial guidance
- Promote your Marketplace offer and track your engagement following best practices
- Manage and nurture leads, from trials to purchase and from purchase to higher-level SKUs
- Use private offers to create customer-specific or negotiated offers directly through Marketplace, including multiparty private offers involving Microsoft channel partners
- Sell through the channel: use resale-enabled offers so resellers and channel partners can sell your app to their customers
- Pursue co-sell motions, where eligible AI apps and agents are sold jointly with Microsoft sellers and count toward customer cloud consumption commitments

Effective customer engagement depends on alignment between how the AI app and agent is positioned and how it is delivered. Clear descriptions, accurate architectural boundaries, and transparent operational expectations help customers move confidently from discovery to production adoption. For publishers, programs such as ISV Success provide guidance and tooling to help align technical readiness, Marketplace requirements, and go‑to‑market execution as AI apps and agents scale through Microsoft Marketplace. Sales don't happen by accident; it's essential that you actively promote your offer. When promotion is treated as a first‑class step in the lifecycle, it reinforces trust, accelerates evaluation, and increases the likelihood that an AI app and agent transitions from initial interest to sustained use.
How to use this series

This series is designed to be used in two ways:

- Read sequentially to understand the full Marketplace‑ready journey
- Use individual posts alongside Microsoft Learn content, App Advisor, and Quick Start resources for deeper implementation guidance

This series provides a structured, end‑to‑end view of what it takes to move from a working AI app and agent to a solution that customers can trust, deploy, and buy through Marketplace. It is designed to complement hands‑on implementation guidance, including Microsoft Learn resources such as Publishing AI agents to the Microsoft marketplace, and to help software companies navigate Marketplace readiness with fewer surprises and less rework. The upcoming post is about choosing your Marketplace offer type, which defines the operating model of your AI app or agent on Marketplace and influences key architectural decisions.

Key resources

- See curated, step-by-step guidance to help you build, publish, or sell your app or agent (no matter where you start) in App Advisor
- Quick-Start Development Toolkit
- Microsoft AI Envisioning Day events
- How to build and publish AI apps and agents for Microsoft Marketplace
- Get over $126K USD in benefits and technical consultations to help you replicate and publish your app with ISV Success

When cloud apps become a weak link: How FortiAppSec Cloud in Microsoft Marketplace bridges the gap
In this guest blog post, Srija Reddy Allam, Cloud Security/DevOps Architect at Fortinet, discusses the increase in attacks targeting web applications and APIs, and how FortiAppSec Cloud in Microsoft Marketplace provides a layer of adaptive security to address the challenge.

If You're Building AI on Azure, ECS 2026 is Where You Need to Be
Let me be direct: there's a lot of noise in the conference calendar. Generic cloud events. Vendor showcases dressed up as technical content. Sessions that look great on paper but leave you with nothing you can actually ship on Monday. ECS 2026 isn't that.

As someone who will be on stage in Cologne this May, I can tell you the European Collaboration Summit, combined with the European AI & Cloud Summit and European BizApps Summit, is one of the few events I've seen where engineers leave with real, production-applicable knowledge. Three days. Three summits. 3,000+ attendees. One of the largest Microsoft-focused events in Europe, and it keeps getting better. If you're building AI systems on Azure, designing cloud-native architectures, or trying to figure out how to take your AI experiments to production, this is where the conversation is happening.

What ECS 2026 Actually Is

ECS 2026 runs May 5–7 at Confex in Cologne, Germany. It brings together three co-located summits under one roof:

- European Collaboration Summit: Microsoft 365, Teams, Copilot, and governance
- European AI & Cloud Summit: Azure architecture, AI agents, cloud security, responsible AI
- European BizApps Summit: Power Platform, Microsoft Fabric, Dynamics

For Azure engineers and AI developers, the European AI & Cloud Summit is your primary destination. But don't ignore the overlap; some of the most interesting AI conversations happen at the intersection of collaboration tooling and cloud infrastructure. The scale matters here: 3,000+ attendees, 100+ sessions, multiple deep-dive tracks, and a speaker lineup that includes Microsoft executives, Regional Directors, and MVPs who have built, broken, and rebuilt production systems.

The Azure + AI Track: What's Actually on the Agenda

The AI & Cloud Summit agenda is built around real technical depth. Not "intro to AI" content, but actual architecture decisions, patterns that work, and lessons from things that didn't.
Here's what you can expect:

AI Agents and Agentic Systems

This is where the energy is right now, and ECS is leaning in. Expect sessions covering how to design agent workflows, chain reasoning steps, handle memory and state, and integrate with Azure AI services. Marco Casalaina, VP of Products for Azure AI at Microsoft, is speaking; if you want to understand the direction of the Azure AI platform from the people building it, this is a direct line.

Azure Architecture at Scale

Cloud-native patterns, microservices, containers, and the architectural decisions that determine whether your system holds up under real load. These sessions go beyond theory; you'll hear from engineers who've shipped these designs at enterprise scale.

Observability, DevOps, and Production AI

Getting AI to production is harder than the demos suggest. Sessions here cover monitoring AI systems, integrating LLMs into CI/CD pipelines, and building the operational practices that keep AI in production reliable and governable.

Cloud Security and Compliance

Security isn't optional when you're putting AI in front of users or connecting it to enterprise data. Tracks cover identity, access patterns, responsible AI governance, and how to design systems that satisfy compliance requirements without becoming unmaintainable.

Pre-Conference Deep Dives

One underrated part of ECS: the pre-conference workshops. These are extended, hands-on sessions, typically 3–6 hours, that let you go deep on a single topic with an expert. Think of them as intensive short courses where you can actually work through the material, not just watch slides. If you're newer to a particular area of Azure AI, or you want to build fluency in a specific pattern before the main conference sessions, these are worth the early travel.

The Speaker Quality Is Different Here

The ECS speaker roster includes Microsoft executives, Microsoft MVPs, and Regional Directors: people who have real accountability for the products and patterns they're presenting.
You'll hear from over 20 Microsoft speakers, including:

- Marco Casalaina, VP of Products, Azure AI at Microsoft
- Adam Harmetz, VP of Product at Microsoft, Enterprise Agent

And dozens of MVPs and Regional Directors who are in the field every day, solving the same problems you are. These aren't keynote-only speakers; they're in the session rooms, at the hallway track, available for real conversations.

The Hallway Track Is Not a Cliché

I know "networking" sounds like a corporate afterthought. At ECS it genuinely isn't. When you put 3,000 practitioners (engineers, architects, DevOps leads, security specialists) in one venue for three days, the conversations between sessions are often more valuable than the sessions themselves. You get candid answers to "how are you actually handling X in production?" that you won't find in documentation. The European Microsoft community is tight-knit and collaborative. ECS is where that community concentrates.

Why This Matters Right Now

We're in a period where AI development is moving fast but the engineering discipline around it is still maturing. Most teams are figuring out:

- How to move from AI prototype to production system
- How to instrument and observe AI behaviour reliably
- How to design agent systems that don't become unmaintainable
- How to satisfy security and compliance requirements in AI-integrated architectures

ECS 2026 is one of the few places where you can get direct answers to these questions from people who've solved them, not theoretically, but in production, on Azure, in the last 12 months. If you go, you'll come back with practical patterns you can apply immediately. That's the bar I hold events to. ECS consistently clears it.

Register and Explore the Agenda

- Register for ECS 2026: ecs.events
- Explore the AI & Cloud Summit agenda: cloudsummit.eu/en/agenda
- Dates: May 5–7, 2026
- Location: Confex, Cologne, Germany

Early registration is worth it; the pre-conference workshops fill up.
And if you're coming, find me; I'll be the one talking too much about AI agents and Azure deployments. See you in Cologne.