Building the agentic future together at JDConf 2026
JDConf 2026 is just weeks away, and I’m excited to welcome Java developers, architects, and engineering leaders from around the world for two days of learning and connection. Now in its sixth year, JDConf has become a place where the Java community compares notes on real-world production experience: patterns, tooling, and hard-earned lessons you can take back to your team, while we keep moving the Java systems that run businesses and services forward in the AI era.

This year’s program lines up with a shift many of us are seeing first-hand: delivery is getting more intelligent, more automated, and more tightly coupled to the systems and data we already own. Agentic approaches are moving from demos to backlog items, and that raises practical questions: what’s the right architecture, where do you draw trust boundaries, how do you keep secrets safe, and how do you ship without trading reliability for novelty?

JDConf is for and by the people who build and manage the mission-critical apps powering organizations worldwide. Across three regional livestreams, you’ll hear from open source and enterprise practitioners who are making the same tradeoffs you are: velocity vs. safety, modernization vs. continuity, experimentation vs. operational excellence. Expect sessions that go beyond “what” and get into “how”: design choices, integration patterns, migration steps, and the guardrails that make AI features safe to run in production.

You’ll find several practical themes for shipping Java in the AI era: connecting agents to enterprise systems with clear governance; frameworks and runtimes adapting to AI-native workloads; and how testing and delivery pipelines evolve as automation gets more capable.
To make this more concrete, a sampling of sessions includes Secrets of Agentic Memory Management (patterns for short- and long-term memory and safe retrieval), Modernizing a Java App with GitHub Copilot (end-to-end upgrade and migration with AI-powered technologies), and Docker Sandboxes for AI Agents (guardrails for running agent workflows without risking your filesystem or secrets). The goal is to help you adopt what’s new while hardening your long-lived codebases.

JDConf is built for community learning: free to attend, accessible worldwide, and designed for an interactive live experience across three time zones. You’ll get 23 practitioner-led sessions with production-ready guidance, plus free on-demand access after the event to re-watch with your whole team. Pro tip: join live and get more value by discussing practical implications and ideas with your peers in the chat. This is where the “how” details and tradeoffs become clearer.

JDConf 2026 Keynote: Building the Agentic Future Together
Rod Johnson, Embabel | Bruno Borges, Microsoft | Ayan Gupta, Microsoft

The JDConf 2026 keynote features Rod Johnson, creator of the Spring Framework and founder of Embabel, joined by Bruno Borges and Ayan Gupta to explore where the Java ecosystem is headed in the agentic era. Expect a practitioner-level discussion on how frameworks like Spring continue to evolve, how MCP is changing the way agents interact with enterprise systems, and what Java developers should be paying attention to right now.

Register. Attend. Earn.

Register for JDConf 2026 to earn Microsoft Rewards points, which you can use for gift cards, sweepstakes entries, and more. Earn 1,000 points simply by signing up: when you register for any regional JDConf 2026 event with your Microsoft account, you’ll automatically receive these points. Get 5,000 additional points for attending live (limited to the first 300 attendees per stream).
On the day of your regional event, check in through the Reactor page or your email confirmation link to qualify.

Disclaimer: Points are added to your Microsoft account within 60 days after the event. Must register with a Microsoft account email. Up to 10,000 developers eligible. Points will be applied upon registration and attendance and will not be counted multiple times for registering or attending different events. Terms | Privacy

JDConf 2026 Regional Live Streams

Americas – April 8, 8:30 AM – 12:30 PM PDT (UTC-7)
Bruno Borges hosts the Americas stream, discussing practical agentic Java topics like memory management, multi-agent system design, LLM integration, modernization with AI, and dependency security. Experts from Redis, IBM, Hammerspace, HeroDevs, AI Collective, Tekskills, and Microsoft share their insights.
Register for Americas →

Asia-Pacific – April 9, 10:00 AM – 2:00 PM SGT (UTC+8)
Brian Benz and Ayan Gupta co-host the APAC stream, highlighting Java frameworks and practices for agentic delivery. Topics include Spring AI, multi-agent orchestration, spec-driven development, scalable DevOps, and legacy modernization, with speakers from Broadcom, Alibaba, CERN, MHP (A Porsche Company), and Microsoft.
Register for Asia-Pacific →

Europe, Middle East and Africa – April 9, 9:00 AM – 12:30 PM GMT (UTC+0)
The EMEA stream, hosted by Sandra Ahlgrimm, addresses running agentic Java in production. Topics include self-improving systems built with Spring AI, Docker sandboxes for managing agent workflows, Retrieval-Augmented Generation (RAG) pipelines, modernization lessons from a national tax authority, and AI-driven CI/CD enhancements. Speakers come from Broadcom, Docker, Elastic, Azul Systems, IBM, Team Rockstars IT, and Microsoft.
Register for EMEA →

Make It Interactive: Join Live

Come prepared with an actual challenge you’re facing, whether you’re modernizing a legacy application, connecting agents to internal APIs, or refining CI/CD processes. Test your strategies by participating in live chats and Q&As with presenters and fellow professionals. If you’re attending with your team, schedule a debrief after the live stream to discuss how to quickly apply key takeaways and insights in your pilots and projects.

Learning Resources

- Java and AI for Beginners Video Series: Practical, episode-based walkthroughs on MCP, GenAI integration, and building AI-powered apps from scratch.
- Modernize Java Apps Guide: Step-by-step guide using GitHub Copilot agent mode for legacy Java project upgrades, automated fixes, and cloud-ready migrations.
- AI Agents for Java Webinar: Embedding AI agent capabilities into Java applications using Microsoft Foundry, from project setup to production deployment.
- Java Practitioner’s Guide: Learning plan for deploying, managing, and optimizing Java applications on Azure using modern cloud-native approaches.

Register Now

JDConf 2026 is a free global event for Java teams. Join live to ask questions, connect, and gain practical patterns. All 23 sessions will be available on-demand. Register now to earn Microsoft Rewards points for attending. Register at JDConf.com.

Microsoft Azure at KubeCon Europe 2026 | Amsterdam, NL - March 23-26
Microsoft Azure is coming back to Amsterdam for KubeCon + CloudNativeCon Europe 2026 in two short weeks, from March 23-26! As a Diamond Sponsor, we have a full week of sessions, hands-on activities, and ways to connect with the engineers behind AKS and our open-source projects. Here's what's on the schedule:

Azure Day with Kubernetes: 23 March 2026

Before the main conference begins, join us at Hotel Casa Amsterdam for a free, full-day technical event built around AKS (registration required for entry - capacity is limited!). Whether you're early in your Kubernetes journey, running clusters at scale, or building AI apps, the day is designed to give you practical guidance from Microsoft product and engineering teams. Morning sessions cover what's new in AKS, including how teams are building and running AI apps on Kubernetes. In the afternoon, pick your track:

- Hands-on AKS Labs: Instructor-led labs to put the morning's concepts into practice.
- Expert Roundtables: Small-group conversations with AKS engineers on topics like security, autoscaling, AI workloads, and performance. Bring your hard questions.
- Evening: Drinks on us.

Capacity is limited, so secure your spot before it closes: aka.ms/AKSDayEU

KubeCon + CloudNativeCon: 24-26 March 2026

There will be lots going on at the main conference! Here's what to add to your calendar:

- Keynote (24 March): Jorge Palma takes the stage to tackle a question the industry is actively wrestling with: can AI agents reliably operate and troubleshoot Kubernetes at scale, and should they?
- Customer Keynote (24 March): Wayve's Mukund Muralikrishnan shares how they handle GPU scheduling across multi-tenant inference workloads using Kueue, providing a practical look at what production AI infrastructure actually requires.
- Demo Theatre (25 March): Anson Qian and Jorge Palma walk through a Kubernetes-native approach to cross-cloud AI inference, covering elastic autoscaling with Karpenter and GPU capacity scheduling across clouds.
- Sessions: Microsoft engineers are presenting across all three days on topics spanning multi-cluster networking, supply chain security, observability, Istio in production, and more. Full list below.
- Project Pavilion: Find our team at kiosks for Inspektor Gadget, Headlamp, Drasi, Radius, Notary Project, Flatcar, ORAS, Ratify, and Istio.

Brendan Burns, Kubernetes co-founder and Microsoft CVP & Technical Fellow, will also share his thoughts on the latest developments and key Microsoft announcements related to open-source, cloud native, and AI application development in his KubeCon Europe blog on March 24.

Come find us at Microsoft Azure booth #200 all three days. We'll be running short demos and sessions on AKS, running Kubernetes at scale, AI workloads, and cloud-native topics throughout the show, plus fun activations and opportunities to unlock special swag. Read on below for full details on our KubeCon sessions and booth theater presentations.

Sponsored Keynote

Date: Tues 24 March 2026
Start Time: 10:18 AM CET
Room: Hall 12
Title: Scaling Platform Ops with AI Agents: Troubleshooting to Remediation
Speakers: Jorge Palma, Natan Yellin (Robusta)

As AI agents increasingly write our code, can they also operate and troubleshoot our infrastructure? More importantly, should they? This keynote explores the practical reality of deploying AI agents to maintain Kubernetes clusters at scale. We'll demonstrate HolmesGPT, an open-source CNCF sandbox project that connects LLMs to operational and observability data to diagnose production issues. You'll see how agents reduce MTTR by correlating logs, metrics, and cluster state far faster than manual investigation. Then we'll tackle the harder problem: moving from diagnosis to remediation. We'll show how agents with remediation policies can detect and fix issues autonomously, within strict RBAC boundaries, approval workflows, and audit trails.
We'll be honest about challenges: LLM non-determinism, building trust, and why guardrails are non-negotiable. This isn't about replacing SREs; it's about multiplying their effectiveness so they can focus on creative problem-solving and system design.

Customer Keynote

Date: Tues 24 March 2026
Start Time: 9:37 AM CET
Room: Hall 12
Title: Rules of the road for shared GPUs: AI inference scheduling at Wayve
Speaker: Mukund Muralikrishnan, Wayve Technologies

As AI inference workloads grow in both scale and diversity, predictable access to GPUs becomes as important as raw throughput, especially in large, multi-tenant Kubernetes clusters. At Wayve, Kubernetes underpins a wide range of inference workloads, from latency-sensitive evaluation and validation to large-scale synthetic data generation supporting the development of an end-to-end self-driving system. These workloads run side by side, have very different priorities, and all compete for the same GPU capacity. In this keynote, we will share how we manage scheduling and resources for multi-tenant AI inference on Kubernetes. We will explain why default Kubernetes scheduling falls short, and how we use Kueue, a Kubernetes-native queueing and admission control solution, to operate shared GPU clusters reliably at scale. This approach gives teams predictable GPU allocations, improves cluster utilisation, and reduces operational noise. We will close by briefly showing how frameworks like Ray fit into this model as Wayve scales its AI Driver platform.

KubeCon Theatre Demo

Date: Wed 25 March 2026
Start Time: 13:15 CET
Room: Hall 1-5 | Solutions Showcase | Demo Theater
Title: Building cross-cloud AI inference on Kubernetes with OSS
Speakers: Anson Qian, Jorge Palma

Operating AI inference under bursty, latency-sensitive workloads is hard enough on a single cluster. It gets harder when GPU capacity is fragmented across regions and cloud providers.
This demo walks through a Kubernetes-native pattern for cross-cloud AI inference, using an incident triage and root cause analysis workflow as the example. The stack is built on open-source capabilities for lifecycle management, inference, autoscaling, and cross-cloud capacity scheduling. We will specifically highlight Karpenter for elastic autoscaling and a GPU flex nodes project for scheduling capacity across multiple cloud providers into a single cluster. Models, inference endpoints, and GPU resources are treated as first-class Kubernetes objects, enabling elastic scaling, stable routing under traffic spikes, and cross-provider failover without a separate AI control plane.

KubeCon Europe 2026 Sessions with Microsoft Speakers

- Jorge Palma | Microsoft keynote: Scaling Platform Ops with AI Agents: Troubleshooting to Remediation
- Anson Qian, Jorge Palma | Microsoft demo: Building cross-cloud AI inference on Kubernetes with OSS
- Will Tsai | Leveling up with Radius: Custom Resources and Headlamp Integration for Real-World Workloads
- Simone Rodigari | Demystifying the Kubernetes Network Stack (From Pod to Pod)
- Joaquin Rodriguez | Privacy as Infrastructure: Declarative Data Protection for AI on Kubernetes
- Cijo Thomas | ⚡Lightning Talk: “Metrics That Lie”: Understanding OpenTelemetry’s Cardinality Capping and Its Implications
- Gaurika Poplai | ⚡Lightning Talk: Compliance as Code Meets Developer Portals: Kyverno + Backstage in Action
- Mereta Degutyte & Anubhab Majumdar | Network Flow Aggregation: Pay for the Logs You Care About!
- Niranjan Shankar | Expl(AI)n Like I’m 5: An Introduction To AI-Native Networking
- Danilo Chiarlone | Running Wasmtime in Hardware-Isolated Microenvironments
- Jack Francis | Cluster Autoscaler Evolution
- Jackie Maertens | Cloud Native Theater | Istio Day: Running State of the Art Inference with Istio and LLM-D
- Jackie Maertens & Mitch Connors | Bob and Alice Revisited: Understanding Encryption in Kubernetes
- Mitch Connors | Istio in Production: Expected Value, Results, and Effort at GitHub Scale
- Mitch Connors | Evolution or Revolution: Istio as the Network Platform for Cloud Native
- René Dudfield | Ping SRE? I Am the SRE! Awesome Fun I Had Drawing a Zine for Troubleshooting Kubernetes Deployments
- René Dudfield & Santhosh Nagaraj | Does Your Project Want a UI in Kubernetes-SIGs/headlamp?
- Bridget Kromhout | How Will Customized Kubernetes Distributions Work for You? A Discussion on Options and Use Cases
- Kenneth Kilty | AI-Powered Cloud Native Modernization: From Real Challenges to Concrete Solutions
- Mike Morris | Building the Next Generation of Multi-Cluster with Gateway API
- Toddy Mladenov, Flora Taagen & Dallas Delaney | Beyond Image Pull-Time: Ensuring Runtime Integrity With Image Layer Signing

Microsoft Booth Theatre Sessions

Tues 24 March (11:00 - 18:00)

- Zero-Migration AI with Drasi: Bridge Your Existing Infrastructure to Modern Workflows
- Bringing real-time Kubernetes observability to AI agents via Model Context Protocol
- Secure Kubernetes Across the Stack: Supply Chain to Runtime
- Cut the Noise, Cut the Bill: Cost‑Smart Network Observability for Kubernetes
- AKS everywhere: one Kubernetes experience from Cloud to Edge
- Teaching AI to Build Better AKS Clusters with Terraform
- AKS-Flex: autoscale GPU nodes from Azure and neoclouds like Nebius using Karpenter
- Block Game with Block Storage: Running Minecraft on Kubernetes with local NVMe
- When One Cluster Fails: Keeping Kubernetes Services Online with Cilium ClusterMesh
- You Spent How Much? Controlling Your AI Spend with Istio + agentgateway
- Azure Front Door Edge Actions: Hardware-protected CDN functions in Azure
- Secure Your Sensitive Workloads with Confidential Containers on Azure Red Hat OpenShift
- AKS Automatic
- Anyscale on Azure

Wed 25 March

- Kubernetes Answers without AI (And That's Okay)
- Accelerating Cloud‑Native and AI Workloads with Azure Linux on AKS
- Codeless OpenTelemetry: Auto‑Instrumenting Kubernetes Apps in Minutes
- Life After ingress-nginx: Modern Kubernetes Ingress on AKS
- Modern Apps, Faster: Modernization with AKS + GitHub Copilot App Mod
- Get started developing on AKS
- Encrypt Everything, Complicate Nothing: Rethinking Kubernetes Workload Network Security
- From Repo to Running on AKS with GitHub Copilot
- Simplify Multi‑Cluster App Traffic with Azure Kubernetes Application Network
- Open Source with Chainguard and Microsoft: Better Together on AKS
- Accelerating Cloud-Native Delivery for Developers: API-Driven Platforms with Radius
- Operate Kubernetes at Scale with Azure Kubernetes Fleet Manager

Thurs 26 March

- Oooh Wee! An AKS GUI! – Deploy, Secure & Collaborate in Minutes (No CLI Required)
- Sovereign Kubernetes: Run AKS Where the Cloud Can’t Go
- Thousand Pods, One SAN: Burst-Scaling Stateful Apps with Azure Container Storage + Elastic SAN

There will also be a wide variety of demos running at our booth throughout the show – be sure to swing by to chat with the team. We look forward to seeing you at KubeCon Europe 2026 in Amsterdam!

Psst! Local or coming in to Amsterdam early? You can also catch the Microsoft team at:

- Cloud Native Rejekts on 21 March
- Maintainer Summit on 22 March

Building AI apps and agents for the new frontier
Every new wave of applications brings with it the promise of reshaping how we work, build, and create. From digitization to web, from cloud to mobile, these shifts have made us all more connected, more engaged, and more powerful. The incoming wave of agentic applications, estimated to number 1.3 billion over the next two years[1], is no different. But the expectations of these new services are unprecedented, in part because of how they will uniquely operate with both intelligence and agency: acting on our behalf, integrated as members of our teams and as part of our everyday lives.

The businesses already achieving the greatest impact from agents are what we call Frontier Organizations. This week at Microsoft Ignite we’re showcasing what the best frontier organizations are delivering, for their employees, for their customers, and for their markets. And we’re introducing an incredible slate of innovative services and tools that will help every organization achieve this same frontier transformation.

What excites me most is how frontier organizations are applying AI to achieve their greatest level of creativity and problem solving. Beyond incremental increases in efficiency or cost savings, frontier firms use AI to accelerate the pace of innovation, shortening the gap from prototype to production and continuously refining services to drive market fit. Frontier organizations aren’t just moving faster; they are using AI and agents to operate in novel ways, redefining traditional business processes, evolving traditional roles, and using agent fleets to augment and expand their workforce. To do this they build with intent, build for impact, and ground services in a deep, continuously evolving context of you, your organization, and your market, making every service and every interaction hyper-personalized, relevant, and engaging.

Today we’re announcing new capabilities that help you build what was previously impossible.
To launch and scale fleets of agents in an open system across models, tools, and knowledge. And to run and operate agents with the confidence that every service is secure, governed and trusted. The question is, how do you get there? How do you build the AI apps and agents fueling the future? Read further for just a few highlights of how Microsoft can help you become frontier: Build with agentic DevOps Perhaps the greatest area of agentic innovation today is in service of developers. Microsoft’s strategy for agentic DevOps is redefining the developer experience to be AI-native, extending the power of AI to every stage of the software lifecycle and integrating AI services into the tools embraced by millions of developers. At Ignite, we’re helping every developer build faster, build with greater quality and security and deliver increasingly innovative apps that will shape their businesses. Across our developer services, AI agents now operate like an active member of your development and operations teams – collaborating, automating, and accelerating every phase of the software development lifecycle. From planning and coding to deployment and production, agents are reshaping how we build. And developers can now orchestrate fleets of agents, assigning tasks to agents to execute code reviews, testing, defect resolution, and even modernization of legacy Java and .NET applications. We continue to take this strategy forward with a new generation of AI-powered tools, with GitHub Agent HQ making coding agents like Codex, Claude Code, and Jules available soon directly in GitHub and Visual Studio Code, to Custom Agents to encode domain expertise, and “bring your own models” to empower teams to adapt and innovate. It’s these advancements that make GitHub Copilot, the world’s the most popular AI pair programmer, serving over 26 million users and helping organizations like Pantone, Ahold Delhaize USA, and Commerzbank streamline processes and save time. 
Within Microsoft’s own developer teams, we’re seeing transformative results with agentic DevOps. GitHub Copilot coding agent is now a top contributor, not only to GitHub’s core application but also to our major open-source projects like the Microsoft Agent Framework and Aspire. Copilot is reducing task completion time from hours to minutes and eliminating up to two weeks of manual development effort for complex work. Across Microsoft, 90% of pull requests are now covered by GitHub Copilot code review, increasing the pace of PR completion. Our AI-powered assistant for Microsoft’s engineering ecosystem is deeply integrated into VS Code, Teams, and other tools, giving engineers and product managers real-time, context-aware answers where they work, saving 2.2k developer days in September alone. For app modernization, GitHub Copilot has reduced modernization project timelines by as much as 88%. In production environments, Azure SRE Agent has handled over 7K incidents and collected diagnostics on over 18K incidents, saving over 10,000 hours for on-call engineers. These results underscore how agentic workflows are redefining speed, scale, and reliability across the software lifecycle at Microsoft.

Launch at speed and scale with a full-stack AI app and agent platform

We’re making it easier to build, run, and scale AI agents that deliver real business outcomes. To accelerate the path to production for advanced AI applications and agents, Microsoft is delivering a complete and flexible foundation that helps every organization move with speed and intelligence without compromising security, governance, or operations. Microsoft Foundry helps organizations move from experimentation to execution at scale, providing the organization-wide observability and control that production AI requires. More than 80,000 customers, including 80% of the Fortune 500, use Microsoft Foundry to build, optimize, and govern AI apps and agents today.
Foundry supports open frameworks like the Microsoft Agent Framework for orchestration, standard protocols like Model Context Protocol (MCP) for tool calling, and expansive integrations that enable context-aware, action-oriented agents. Companies like Nasdaq, Softbank, Sierra AI, and Blue Yonder are shipping innovative solutions with speed and precision. New at Ignite this year:

- Foundry Models: With more than 11,000 models at their fingertips, including OpenAI’s GPT-5, Anthropic’s Claude, and Microsoft’s Phi, developers get the broadest model selection on any cloud, with the power to benchmark, compare, and dynamically route models to optimize performance for every task. Model router is now generally available in Microsoft Foundry and in public preview in Foundry Agent Service.
- Foundry IQ: Delivers the deep context needed to make every agent grounded, productive, and reliable. Foundry IQ, now available in public preview, reimagines retrieval-augmented generation (RAG) as a dynamic reasoning process rather than a one-time lookup. Powered by Azure AI Search, it centralizes RAG workflows into a single grounding API, simplifying orchestration and improving response quality while respecting user permissions and data classifications.
- Foundry Agent Service: Now offers Hosted Agents, multi-agent workflows, built-in memory, and the ability to deploy agents directly to Microsoft 365 and Agent 365 in public preview.
- Foundry Tools: Empowers developers to create agents with secure, real-time access to business systems, business logic, and multimodal capabilities through governed integration with 1,400+ systems and APIs.
- Foundry Control Plane: Now in public preview, centralizes identity, policy, observability, and security signals and capabilities for AI developers in one portal.
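Foundry IQ's actual grounding API is not reproduced here, but the retrieval pattern it builds on, permission-trimmed retrieval feeding a grounded prompt, can be sketched generically. In the sketch below the function name, document shape (`text` plus an `acl` set), and term-overlap scoring are all illustrative assumptions, not the Foundry interface:

```python
def ground_query(query, documents, user_groups, max_snippets=3):
    """Illustrative grounding step: retrieve permitted, relevant snippets
    and assemble them into a grounded prompt for the model."""
    # 1. Permission trimming: drop documents the caller may not see.
    visible = [d for d in documents if d["acl"] & set(user_groups)]
    # 2. Naive relevance scoring: rank by query-term overlap.
    terms = set(query.lower().split())
    scored = sorted(
        visible,
        key=lambda d: len(terms & set(d["text"].lower().split())),
        reverse=True,
    )
    # 3. Build the grounded prompt from the top permitted snippets only.
    snippets = [d["text"] for d in scored[:max_snippets]]
    context = "\n".join(f"- {s}" for s in snippets)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The key design point the announcement describes is the ordering: access control is applied before relevance ranking, so a snippet the user cannot see never reaches the prompt, regardless of how well it matches the query.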
Build on an AI-ready foundation for all applications

- Managed Instance on Azure App Service lets organizations migrate existing .NET web applications to the cloud without the cost or effort of rewriting code, moving them directly into a fully managed platform-as-a-service (PaaS) environment. With Managed Instance, organizations can keep operating applications with critical dependencies on local Windows services, third-party vendor libraries, and custom runtimes without requiring any code changes. The result is faster modernization with lower overhead, plus access to cloud-native scalability, built-in security, and Azure’s AI capabilities.
- MCP Governance with Azure API Management now delivers a unified control plane for APIs and MCP servers, enabling enterprises to extend their existing API investments directly into the agentic ecosystem with trusted governance, secure access, and full observability.
- Agent Loop and native AI integrations in Azure Logic Apps enable customers to move beyond rigid workflows to intelligent, adaptive automation that saves time and reduces complexity. These capabilities make it easier to build AI-powered, context-aware applications using low-code tools, accelerating innovation without heavy development effort.
- Azure Functions now supports hosting production-ready, reliable AI agents with stateful sessions, durable tool calls, and deterministic multi-agent orchestrations through the durable extension for Microsoft Agent Framework. Developers gain automatic session management, built-in HTTP endpoints, and elastic scaling from zero to thousands of instances, all with pay-per-use pricing and automated infrastructure.
- Azure Container Apps agents and security supercharge agentic workloads with automated deployment of multi-container agents, on-demand dynamic execution environments, and built-in security for runtime protection and data confidentiality.
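To make "deterministic multi-agent orchestration" concrete, here is a conceptual sketch of the replay idea behind durable orchestrations. It is plain Python with hypothetical names, not the Microsoft Agent Framework or the durable extension API: each agent step is recorded in a history, and a restarted orchestration replays recorded outputs instead of re-executing the agents.

```python
def orchestrate(task, agents, history=None):
    """Replay-style orchestration sketch (illustrative, not the real API).

    `agents` is an ordered list of (name, callable) steps. Each step's output
    is appended to `history`; on restart, steps already present in `history`
    are replayed rather than re-executed, so each agent runs at most once and
    the final result is deterministic."""
    history = history if history is not None else []
    result = task
    for i, (name, agent) in enumerate(agents):  # `name` labels the step
        if i < len(history):
            result = history[i]       # replay: reuse the recorded output
        else:
            result = agent(result)    # execute: run the agent once...
            history.append(result)    # ...and durably record its output
    return result, history
```

Resuming with a partially filled history skips the completed steps and executes only the remainder, which is the property that lets a durable orchestration survive process restarts without repeating side effects.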
Run and operate agents with confidence

New at Ignite, we’re also expanding the use of agents to keep every application secure, managed, and operating without compromise. Expanded agentic capabilities protect applications from code to cloud and continuously monitor and remediate production issues, while minimizing the effort required of developers, operators, and security teams.

Microsoft Defender for Cloud and GitHub Advanced Security: With the rise of multi-agent systems, the security threat surface continues to expand. Increased alert volumes, unprioritized threat signals, unresolved threats, and a growing backlog of vulnerabilities are increasing risk for businesses, while security teams and developers often operate in disconnected tools, making collaboration and remediation even more challenging. The new Defender for Cloud and GitHub Advanced Security integration closes this gap, connecting runtime context to code for faster alert prioritization and AI-powered remediation. Runtime context prioritizes security risks with insights that let teams focus on what matters most. When Defender for Cloud finds a threat exposed in production, it can now link to the exact code in GitHub: developers receive AI-suggested fixes directly inside GitHub, while security teams track progress in Defender for Cloud in real time. This gives both sides a faster, more connected way to identify issues, drive remediation, and keep AI systems secure throughout the app lifecycle.

Azure SRE Agent is an always-on, AI-powered partner for cloud reliability, enabling production environments to become self-healing, proactively resolve issues, and optimize performance. Seamlessly integrated with Azure Monitor, GitHub Copilot, and incident management tools, Azure SRE Agent reduces operational toil. The latest update introduces no-code automation, empowering teams to tailor processes to their unique environments with minimal engineering overhead.
Event-driven triggers enable proactive checks and faster incident response, helping minimize downtime. Expanded observability across Azure and third-party sources is designed to help teams troubleshoot production issues more efficiently, while orchestration capabilities support integration with MCP-compatible tools for comprehensive process automation. Finally, its adaptive memory system is designed to learn from interactions, helping improve incident handling and reduce operational toil, so organizations can achieve greater reliability and cost efficiency.

The future is yours to build

We are living in an extraordinary time, and across Microsoft we’re focused on helping every organization shape their future with AI. Today’s announcements are a big step forward on this journey. Whether you’re a startup fostering the next great concept or a global enterprise shaping your future, we can help you deliver on this vision. The frontier is open. Let’s build beyond expectations and build the future!

Check out all the learning at Microsoft Ignite on-demand and read more about the announcements making it happen at:

Recommended sessions

- BRK113: Connected, managed, and complete
- BRK103: Modernize your apps in days, not months, with GitHub Copilot
- BRK110: Build AI Apps fast with GitHub and Microsoft Foundry in action
- BRK100: Best practices to modernize your apps and databases at scale
- BRK114: AI Agent architectures, pitfalls and real-world business impact
- BRK115: Inside Microsoft's AI transformation across the software lifecycle

Announcements

- aka.ms/AgentFactory
- aka.ms/AppModernizationBlog
- aka.ms/SecureCodetoCloudBlog
- aka.ms/AppPlatformBlog

[1] IDC Info Snapshot, sponsored by Microsoft, 1.3 Billion AI Agents by 2028, #US53361825, May 2025.