best practices
Restrict Users from Creating New Teams in Microsoft Teams
Hello everyone, I followed the steps here: https://learn.microsoft.com/en-us/previous-versions/microsoft-365/solutions/manage-creation-of-groups?view=o365-worldwide#step-2-run-powershell-commands I created a group for the users who are allowed to create new Teams channels and ran the script, but the updated settings do not show the group ID. What am I doing wrong? At the moment no one can create new Teams channels, not even the members of the GroupCreationAllowed group. Can somebody help me figure out what the problem with the script is? Thanks, Peter

Planner task comments no longer send email notifications – critical regression
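One way to verify whether the script actually wrote the group ID is to read the tenant's directory settings back from Microsoft Graph, where the Group.Unified template (including EnableGroupCreation and GroupCreationAllowedGroupId) is exposed under the /beta/settings endpoint. The sketch below is a minimal illustration, not Microsoft's official script: the `check_tenant` helper is hypothetical, and acquiring `access_token` (e.g. via MSAL with Directory.Read.All) is assumed to happen elsewhere.

```python
import json
import urllib.request

# Directory settings (including the Group.Unified template) are exposed here.
GRAPH_SETTINGS_URL = "https://graph.microsoft.com/beta/settings"

def group_creation_setting(settings_json):
    """Pull EnableGroupCreation / GroupCreationAllowedGroupId out of a
    directory-settings response for the Group.Unified template."""
    for setting in settings_json.get("value", []):
        if setting.get("displayName") == "Group.Unified":
            values = {v["name"]: v["value"] for v in setting.get("values", [])}
            return {
                "EnableGroupCreation": values.get("EnableGroupCreation"),
                "GroupCreationAllowedGroupId": values.get("GroupCreationAllowedGroupId"),
            }
    # If we get here, the Group.Unified settings object was never created,
    # which would explain why the group ID does not appear.
    return None

def check_tenant(access_token):
    # access_token is assumed to come from your own auth flow (e.g. MSAL);
    # acquiring it is outside the scope of this sketch.
    req = urllib.request.Request(
        GRAPH_SETTINGS_URL,
        headers={"Authorization": f"Bearer {access_token}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return group_creation_setting(json.load(resp))
```

If `group_creation_setting` returns None, the Group.Unified settings object was never created in the tenant, which matches the symptom of the group ID not appearing after running the script.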
This change removed previously existing core functionality without providing an adequate replacement. With the new Planner experience, task comments no longer trigger automatic email notifications to assigned users. This breaks a critical communication mechanism that many teams relied on for reliable task coordination. As a result, assigned users are no longer consistently informed about updates, introducing a high risk of missed information and operational issues in day-to-day work. There is currently no supported or enforceable alternative to ensure users are notified.

Previous behavior:
- Task comments triggered automatic email notifications
- Assigned users were reliably informed
- Communication was traceable and consistent

Current behavior:
- No automatic email notifications
- No configuration to restore this
- @mentions required (manual, error-prone, not enforceable)

Microsoft Support has confirmed that this is by design and cannot be reverted. From an enterprise perspective, this is not just a design change but a regression of critical functionality without an equivalent replacement.

Request: Please restore automatic email notifications for task discussions or provide a reliable, enforceable alternative for notifying assigned users.

Question to the community: How are you handling this change in real-world scenarios? Switching tools? Enforcing @mentions? Moving communication out of Planner? Would appreciate hearing how others are dealing with this.

Viva Engage for recognition feels limited, what are you all using?
We rolled out Viva Engage hoping the praise feature would help build a recognition culture, but honestly it's kinda buried and most employees forget it's there. Our managers want something where giving recognition is part of the daily flow, like right inside a Teams chat or after a meeting. Are you running into difficulties with praise? Any best practices you can recommend so we can use it better? Or are you using something else?

Microsoft's New In‑House AI Models (MAI‑Transcribe, MAI‑Voice, MAI‑Image)
What Are the New MAI Models?

MAI‑Transcribe‑1 (Speech‑to‑Text)

MAI‑Transcribe‑1 is Microsoft's first‑generation in‑house speech recognition model. It supports 25 languages and is optimized for real‑world, noisy enterprise audio, such as meetings and call centers.

Key highlights:
- Enterprise‑grade transcription accuracy
- Designed for multilingual and accented speech
- Lower GPU cost compared to prior Azure speech offerings

MAI‑Voice‑1 (Text‑to‑Speech)

MAI‑Voice‑1 is a high‑fidelity voice generation model capable of producing natural, expressive speech while preserving speaker identity over long‑form audio.

Key highlights:
- Generates up to 60 seconds of audio in ~1 second
- Supports custom voice creation
- Optimized for voice agents and conversational systems

MAI‑Image‑2 (Text‑to‑Image)

MAI‑Image‑2 is Microsoft's highest‑capability text‑to‑image model, already ranking among the top image models used in production Copilot experiences.

Key highlights:
- High‑quality photorealistic image generation
- Accurate in‑image text rendering
- Production‑ready latency and cost profile

Why This Matters for Azure Developers

For Azure developers, this launch changes three things fundamentally:
- First‑party AI stack: developers can now build speech, voice, and image workloads without relying on external AI providers.
- Enterprise‑ready by default: these models inherit Azure RBAC, Managed Identity, compliance, and governance through Microsoft Foundry.
- Agent‑first design: MAI models are designed to be embedded inside AI agents, not just called as single APIs.

Below is a common enterprise architecture using MAI models.

Sample Code Calling MAI‑Transcribe‑1:

What Changed with MAI Models: Before vs After (Developer Perspective)

Microsoft's MAI models are not just new endpoints; they represent a fundamental shift in how Azure developers build multimodal and agent‑based AI solutions.
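The sample-code section above appears to have lost its snippet in publishing. As a stand-in, here is a minimal, heavily hedged sketch of what calling a speech-to-text deployment might look like: the endpoint URL, request path, and request-body shape are all hypothetical placeholders (the source does not document the MAI API contract), so check your own Microsoft Foundry project for the real values.

```python
import json
import urllib.request

# Hypothetical placeholders -- substitute the values from your Foundry project.
ENDPOINT = "https://<your-foundry-resource>.services.ai.azure.com"
DEPLOYMENT = "mai-transcribe-1"

def build_transcription_request(audio_url, language="en"):
    """Assemble a transcription request body. The field names here are an
    assumption for illustration, not the documented MAI API contract."""
    return {
        "model": DEPLOYMENT,
        "audio_url": audio_url,
        "language": language,
    }

def transcribe(audio_url, api_key, language="en"):
    """POST the request to a (hypothetical) transcription route and return
    the parsed JSON response."""
    body = json.dumps(build_transcription_request(audio_url, language)).encode()
    req = urllib.request.Request(
        f"{ENDPOINT}/deployments/{DEPLOYMENT}/transcriptions",  # hypothetical path
        data=body,
        headers={"api-key": api_key, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)
```

The separation between building the payload and sending it makes the request shape easy to unit-test without network access.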
High‑Level Comparison (Before MAI vs. After MAI)

Model Ownership
  Before: Heavy dependency on third‑party models (OpenAI, external TTS/STT providers)
  After: First‑party Microsoft‑built models, operated and optimized by Microsoft

Enterprise Integration
  Before: AI models integrated into Azure
  After: AI models native to Microsoft Foundry

Governance & Compliance
  Before: Mixed controls depending on model provider
  After: Unified Azure RBAC, Entra ID, Purview, Managed Identity

Agent Readiness
  Before: Primarily single‑request / single‑response APIs
  After: Designed for agent‑oriented, long‑running workflows

Cost Predictability
  Before: Token‑based or mixed pricing models
  After: Enterprise‑optimized price‑to‑performance models

Operational Consistency
  Before: Different SDKs, APIs, quotas
  After: Single Foundry tooling and SDK surface

Stop Experimenting, Start Building: AI Apps & Agents Dev Days Has You Covered
The AI landscape has shifted. The question is no longer "Can we build AI applications?"; it's "Can we build AI applications that actually work in production?" Demos are easy. Reliable, scalable, resilient AI systems that handle real-world complexity? That's where most teams struggle. If you're an AI developer, software engineer, or solution architect who's ready to move beyond prototypes and into production-grade AI, there's a series built specifically for you.

What Is AI Apps & Agents Dev Days?

AI Apps & Agents Dev Days is a monthly technical series from Microsoft Reactor, delivered in partnership with Microsoft and NVIDIA. You can explore the full series at https://developer.microsoft.com/en-us/reactor/series/s-1590/

This isn't a slide deck marathon. The series tagline says it best: "It's not about slides, it's about building." Each session tackles real-world challenges, shares patterns that actually work, and digs into what's next in AI-driven app and agent design. You bring your curiosity, your code, and your questions. You leave with something you can ship.

The sessions are led by experienced engineers and advocates from both Microsoft and NVIDIA, people like Pamela Fox, Bruno Capuano, Anthony Shaw, Gwyneth Peña-Siguenza, and solutions architects from NVIDIA's Cloud AI team. These aren't theorists; they're practitioners who build and ship the tools you use every day.

What You'll Learn

The series covers the full spectrum of building AI applications and agent-based systems. Here are the key themes:

Building AI Applications with Azure, GitHub, and Modern Tooling
Sessions walk through how to wire up AI capabilities using Azure services, GitHub workflows, and the latest SDKs. The focus is always on code-first learning: you'll see real implementations, not abstract architecture diagrams.

Designing and Orchestrating AI Agents
Agent development is one of the series' strongest threads.
Sessions cover how to build agents that orchestrate long-running workflows, persist state automatically, recover from failures, and pause for human-in-the-loop input, without losing progress. For example, the session "AI Agents That Don't Break Under Pressure" demonstrates building durable, production-ready AI agents using the Microsoft Agent Framework, running on Azure Container Apps with NVIDIA serverless GPUs.

Scaling LLM Inference and Deploying to Production
Moving from a working prototype to a production deployment means grappling with inference performance, GPU infrastructure, and cost management. The series covers how to leverage NVIDIA GPU infrastructure alongside Azure services to scale inference effectively, including patterns for serverless GPU compute.

Real-World Architecture Patterns
Expect sessions on container-based deployments, distributed agent systems, and enterprise-grade architectures. You'll learn how to use services like Azure Container Apps to host resilient AI workloads, how Foundry IQ fits into agent architectures as a trusted knowledge source, and how to make architectural decisions that balance performance, cost, and scalability.

Why This Matters for Your Day Job

There's a critical gap between what most AI tutorials teach and what production systems actually require. This series bridges that gap:

Production-ready patterns, not demos. Every session focuses on code and architecture you can take directly into your projects. You'll learn patterns for state persistence, failure recovery, and durable execution, the things that break at 2 AM.

Enterprise applicability. The scenarios covered (travel planning agents, multi-step workflows, GPU-accelerated inference) map directly to enterprise use cases. Whether you're building internal tooling or customer-facing AI features, the patterns transfer.

Honest trade-off discussions. The speakers don't shy away from the hard questions: When do you need serverless GPUs versus dedicated compute?
How do you handle agent failures gracefully? What does it actually cost to run these systems at scale?

Watch On-Demand, Build at Your Own Pace

Every session is available on-demand. You can watch, pause, and build along at your own pace, no need to rearrange your schedule. The full playlist is linked from the series page above.

This is particularly valuable for technical content. Pause a session while you replicate the architecture in your own environment. Rewind when you need to catch a configuration detail. Build alongside the presenters rather than just watching passively.

What You'll Walk Away With

After working through the series, you'll have:

- Practical agent development skills: how to design, orchestrate, and deploy AI agents that handle real-world complexity, including state management, failure recovery, and human-in-the-loop patterns
- Production architecture patterns: battle-tested approaches for deploying AI workloads on Azure Container Apps, leveraging NVIDIA GPU infrastructure, and building resilient distributed systems
- Infrastructure decision-making confidence: a clearer understanding of when to use serverless GPUs, how to optimise inference costs, and how to choose the right compute strategy for your workload
- Working code and reference implementations: the sessions are built around live coding and sample applications (like the Travel Planner agent demo), giving you starting points you can adapt immediately
- A framework for continuous learning: with new sessions each month, you'll stay current as the AI platform evolves and new capabilities emerge

Start Building

The AI applications that will matter most aren't the ones with the flashiest demos; they're the ones that work reliably, scale gracefully, and solve real problems. That's exactly what this series helps you build.
Whether you're designing your first AI agent system or hardening an existing one for production, the AI Apps & Agents Dev Days sessions give you the patterns, tools, and practical knowledge to move forward with confidence. Explore the series at https://developer.microsoft.com/en-us/reactor/series/s-1590/ and start watching the on-demand sessions at the link above. The best time to level up your AI engineering skills was yesterday. The second-best time is right now, and these sessions make it easy to start.

Microsoft 365 multi-agent workflow with Microsoft Agent Framework
Learn how to design and run a multi‑agent workflow with Microsoft Agent Framework: from building a coordinated set of specialized agents and tools, to hosting and deploying them with Azure AI Foundry, and finally exposing the same workflow to users in Microsoft 365 (Teams or Copilot). This walkthrough demonstrates a practical end‑to‑end pattern for orchestrating agents, adding tools, and packaging the solution for real‑world applications.

What's the best practice for on-call duty via Teams external calling?
Hey community, I'm struggling a bit with setting up our Teams Operator Connect phone system. We have an auto attendant that offers different menus (Press 1..., etc.). We're planning to set up a 24x7 on-call duty where customers can call and get redirected to the mobile phones of our technicians. I saw the option to forward to one number, but there isn't an option to forward to multiple numbers. How do you solve such a scenario, where you have to wake up colleagues in the middle of the night? We change shifts weekly, always 2, sometimes 3 people on shift. Thanks in advance, Schnittlauch

If You're Building AI on Azure, ECS 2026 is Where You Need to Be
Let me be direct: there's a lot of noise in the conference calendar. Generic cloud events. Vendor showcases dressed up as technical content. Sessions that look great on paper but leave you with nothing you can actually ship on Monday. ECS 2026 isn't that.

As someone who will be on stage in Cologne this May, I can tell you the European Collaboration Summit, combined with the European AI & Cloud Summit and European BizApps Summit, is one of the few events I've seen where engineers leave with real, production-applicable knowledge. Three days. Three summits. 3,000+ attendees. One of the largest Microsoft-focused events in Europe, and it keeps getting better. If you're building AI systems on Azure, designing cloud-native architectures, or trying to figure out how to take your AI experiments to production, this is where the conversation is happening.

What ECS 2026 Actually Is

ECS 2026 runs May 5–7 at Confex in Cologne, Germany. It brings together three co-located summits under one roof:

- European Collaboration Summit: Microsoft 365, Teams, Copilot, and governance
- European AI & Cloud Summit: Azure architecture, AI agents, cloud security, responsible AI
- European BizApps Summit: Power Platform, Microsoft Fabric, Dynamics

For Azure engineers and AI developers, the European AI & Cloud Summit is your primary destination. But don't ignore the overlap: some of the most interesting AI conversations happen at the intersection of collaboration tooling and cloud infrastructure. The scale matters here: 3,000+ attendees, 100+ sessions, multiple deep-dive tracks, and a speaker lineup that includes Microsoft executives, Regional Directors, and MVPs who have built, broken, and rebuilt production systems.

The Azure + AI Track: What's Actually On the Agenda

The AI & Cloud Summit agenda is built around real technical depth. Not "intro to AI" content, but actual architecture decisions, patterns that work, and lessons from things that didn't.
Here's what you can expect:

AI Agents and Agentic Systems
This is where the energy is right now, and ECS is leaning in. Expect sessions covering how to design agent workflows, chain reasoning steps, handle memory and state, and integrate with Azure AI services. Marco Casalaina, VP of Products for Azure AI at Microsoft, is speaking; if you want to understand the direction of the Azure AI platform from the people building it, this is a direct line.

Azure Architecture at Scale
Cloud-native patterns, microservices, containers, and the architectural decisions that determine whether your system holds up under real load. These sessions go beyond theory; you'll hear from engineers who've shipped these designs at enterprise scale.

Observability, DevOps, and Production AI
Getting AI to production is harder than the demos suggest. Sessions here cover monitoring AI systems, integrating LLMs into CI/CD pipelines, and building the operational practices that keep AI in production reliable and governable.

Cloud Security and Compliance
Security isn't optional when you're putting AI in front of users or connecting it to enterprise data. Tracks cover identity, access patterns, responsible AI governance, and how to design systems that satisfy compliance requirements without becoming unmaintainable.

Pre-Conference Deep Dives
One underrated part of ECS: the pre-conference workshops. These are extended, hands-on sessions, typically 3–6 hours, that let you go deep on a single topic with an expert. Think of them as intensive short courses where you can actually work through the material, not just watch slides. If you're newer to a particular area of Azure AI, or you want to build fluency in a specific pattern before the main conference sessions, these are worth the early travel.

The Speaker Quality Is Different Here

The ECS speaker roster includes Microsoft executives, Microsoft MVPs, and Regional Directors: people who have real accountability for the products and patterns they're presenting.
You'll hear from over 20 Microsoft speakers:

- Marco Casalaina, VP of Products, Azure AI at Microsoft
- Adam Harmetz, VP of Product at Microsoft, Enterprise Agent

And dozens of MVPs and Regional Directors who are in the field every day, solving the same problems you are. These aren't keynote-only speakers; they're in the session rooms, at the hallway track, available for real conversations.

The Hallway Track Is Not a Cliché

I know "networking" sounds like a corporate afterthought. At ECS it genuinely isn't. When you put 3,000 practitioners (engineers, architects, DevOps leads, security specialists) in one venue for three days, the conversations between sessions are often more valuable than the sessions themselves. You get candid answers to "how are you actually handling X in production?" that you won't find in documentation. The European Microsoft community is tight-knit and collaborative. ECS is where that community concentrates.

Why This Matters Right Now

We're in a period where AI development is moving fast but the engineering discipline around it is still maturing. Most teams are figuring out:

- How to move from AI prototype to production system
- How to instrument and observe AI behaviour reliably
- How to design agent systems that don't become unmaintainable
- How to satisfy security and compliance requirements in AI-integrated architectures

ECS 2026 is one of the few places where you can get direct answers to these questions from people who've solved them, not theoretically, but in production, on Azure, in the last 12 months. If you go, you'll come back with practical patterns you can apply immediately. That's the bar I hold events to. ECS consistently clears it.

Register and Explore the Agenda

Register for ECS 2026: ecs.events
Explore the AI & Cloud Summit agenda: cloudsummit.eu/en/agenda
Dates: May 5–7, 2026 | Location: Confex, Cologne, Germany

Early registration is worth it: the pre-conference workshops fill up.
And if you're coming, find me: I'll be the one talking too much about AI agents and Azure deployments. See you in Cologne.

NFS Permission Denied in Azure App Service on Linux: What It Means and What to Do
If your Azure App Service on Linux uses an Azure Files NFS share, you may sometimes see errors like Permission denied or Errno 13 when your app tries to write to the mounted path. Azure Files supports NFS for Linux and Unix workloads, and NFS uses Unix-style numeric ownership and permissions (UID/GID), which can behave differently from SMB-based file sharing.

Overview

This post is for customers using Azure App Service on Linux together with an Azure Files NFS share for persistent storage. Azure Files NFS is designed for Linux and Unix-style workloads, supports POSIX-style permissions, and does not support Windows clients or NFS ACLs. In this setup, a write failure does not always mean the file is corrupted. Sometimes it means the file ownership seen by the running app no longer matches the identity context currently used to access the NFS share. In containerized Linux environments, user IDs inside a container can be mapped differently outside the container, and Docker documents that this can affect access to host-mounted resources.

Common signs

You may notice:
- Permission denied
- Errno 13
- your app can read files but cannot update or overwrite them
- file ownership looks different than expected when you inspect the mounted path

These symptoms are consistent with how NFS handles Unix-style ownership and permissions. Azure documents that NFS permissions are enforced through the operating system and NFS model rather than SMB-style user authentication.

Why this can happen

At a high level, NFS uses numeric ownership such as UID and GID. In container-based Linux environments, the identity that appears inside the container is not always the same as the identity seen outside the container. Docker's user namespace documentation explains that a container user such as root can be mapped to a less-privileged user on the host, and that mounted-resource access can become more complex because of that mapping.
That means a file created earlier under one effective identity context may later be accessed under a different one. When that happens, the app may no longer be able to write to the file even though the file itself is still present and intact.

What to check first

Start by checking the mounted share from the app's runtime context:

    ls -l /mount/path/file
    ls -ln /mount/path/file
    id -u
    id -g

The ls -ln output is especially useful because it shows the numeric UID and GID directly. If you need shell access for investigation, App Service supports SSH into Linux containers, and Microsoft notes that Linux custom containers may need extra SSH configuration. You should also review the NFS share's squash setting. Azure Files NFS supports No Root Squash, Root Squash, and All Squash. Microsoft documents these options in the root squash guidance.

A practical mitigation

If the main issue is inconsistent ownership behavior, a practical mitigation is often to use All Squash on the NFS share. Azure documents All Squash as a supported NFS setting, and squash settings are specifically intended to control how client identities are handled when they access the share. One important note: changing the squash setting does not automatically rewrite old files. If existing data was created under a different ownership context, you may still need to migrate that data to a new share configured the way you want.

Recommended approach

A simple and cautious approach is:
1. Create a new Azure Files NFS share.
2. Configure it with All Squash if that matches your workload needs.
3. Mount both the old share and the new share on a Linux environment.
4. Copy the data from old to new.
5. Validate that the app can read and write correctly.
6. Repoint production to the validated share.

Azure Files supports NFS shares and squash configuration, and Azure also documents how to mount NFS shares on Linux if you need a separate environment for validation or migration.
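The same ownership check can be scripted from the app's runtime, for example as part of a startup probe. This is a minimal sketch using only the Python standard library on a Unix-like system; the mounted path you would pass in is a placeholder for your own NFS mount.

```python
import os
import stat

def describe_ownership(path):
    """Report the file's numeric owner/group (what `ls -ln` shows) and
    whether the current effective identity can write to it."""
    st = os.stat(path)
    return {
        "file_uid": st.st_uid,          # numeric owner, as NFS enforces it
        "file_gid": st.st_gid,
        "process_uid": os.geteuid(),    # identity the app is running as
        "process_gid": os.getegid(),
        "mode": stat.filemode(st.st_mode),
        "writable": os.access(path, os.W_OK),
    }

if __name__ == "__main__":
    # "/mount/path/file" is a placeholder for your NFS-mounted path.
    print(describe_ownership("."))
```

If `file_uid` differs from `process_uid` and `writable` is False, you are looking at the ownership mismatch described above rather than a damaged file.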
Final takeaway

If your App Service on Linux starts hitting NFS permission denied errors, focus first on ownership, UID/GID behavior, and squash settings before assuming the files are damaged. For many users, the most effective path is to validate the current ownership model, review the NFS squash setting and, if needed, migrate data to a share configured with All Squash.

References
- NFS file shares in Azure Files | Microsoft Learn
- Configure Root Squash Settings for NFS Azure File Shares | Microsoft Learn
- SSH Access for Linux and Windows Containers - Azure App Service | Microsoft Learn
- Isolate containers with a user namespace | Docker Docs

Automate cybersecurity at scale with Microsoft Security Copilot agents
When we introduced Microsoft Security Copilot last year, we set out to transform the way defenders approach cybersecurity. As one of the industry's first generative AI solutions for security and IT teams, Security Copilot is empowering teams to catch what others miss, respond faster, and strengthen team expertise in an evolving threat landscape. Customers like Eastman are already seeing the impact. “I’m finding that I can ask [Security Copilot] about attack factors that I’ve never seen before and get answers much faster”, said David Yates, Senior Cybersecurity Analyst at Eastman. “That helps me to make a better decision and respond faster to an attacker.” A recent study of Copilot users showed that using Security Copilot reduced mean time to resolution by 30%, helping accelerate response times and minimizing the impact of security incidents. But as defenders evolve, so have attackers. Adversaries are now leveraging AI to launch more sophisticated attacks with unprecedented speed and scale. Security and IT teams – already overwhelmed by a huge volume of alerts, data, and threats – are struggling to keep up. Traditional automation, while useful, lacks the flexibility and adaptability to keep up. Today, we’re taking the next leap forward in generative AI-powered cybersecurity. I am thrilled to introduce agents in Microsoft Security Copilot. AI-powered agents represent the natural evolution of Security Copilot, going beyond AI assistant capabilities. They autonomously manage high-volume security and IT tasks, seamlessly integrated with Microsoft Security solutions and partner solutions. Purpose-built for security, these agents learn from feedback, adapt to organizational workflows with your team fully in-control, and operate securely within Microsoft’s Zero-Trust framework. 
Delivering powerful automation across threat protection, identity management, data security, and IT operations, these agents empower teams to accelerate responses, prioritize risks, and drive efficiency at scale. By reducing manual workloads, they enhance operational effectiveness and strengthen overall security posture, allowing defenders to stay ahead. To bring this automation to life, we're introducing six security agents from Microsoft and five security agents from partners, which will be available for preview in April.

Empowering security and IT teams with Security Copilot agents

Our goal is to provide generative AI-powered security for everyone. Integrating Copilot with Microsoft Security products helps IT and security teams benefit from increased speed and accuracy. Now, you can use embedded Security Copilot agents with capabilities specific to use cases for your role in the products you know and love:

Security Alert Triage Agent (previously named Phishing Triage Agent)

SOC analysts often face the challenge of managing hundreds of user-submitted phishing alerts each week, with each alert taking up to 30 minutes for manual triage. This process requires meticulous sifting through submissions to find the needle in the haystack: the genuine threat amidst all the noise. Security Copilot solves this challenge with an AI-powered agent embedded in Microsoft Defender that works in the background to autonomously triage user-submitted phishing incidents. Powered by advanced multi-modal AI tools, it determines whether an alert is a genuine phishing attempt or a false alarm with exceptional precision. The agent not only delivers natural language explanations for its decisions but also dynamically refines its detection capabilities based on analyst feedback. By alleviating the burden of reactive work, it empowers SOC analysts to focus on proactive security measures, ultimately strengthening the organization's overall security posture.
Note: The Phishing Triage Agent has since been expanded and is now called the Security Alert Triage Agent. Learn more at aka.ms/SATA

Alert Triage Agents for Data Loss Prevention and Insider Risk Management

Data security admins regularly struggle to manage the volume of alerts they receive daily, addressing only about 60% of them due to time and resource constraints. The Alert Triage Agents in Microsoft Purview Data Loss Prevention (DLP) and Insider Risk Management (IRM) identify the alerts that pose the greatest risk to your organization and should be prioritized first. These agents analyze the content and potential intent involved in an alert, based on the organization's chosen parameters and selected policies, to categorize alerts based on the impact they have on sensitive data. Additionally, they provide a comprehensive explanation of the logic behind that categorization, allowing admins to analyze a risk in just a few minutes. These agents empower data security teams to focus on the most important alerts and concentrate on the critical threats, with a dynamic process that takes inputs from data security admins in natural language and fine-tunes the triage results to better match the organization's priorities. The agent learns from this feedback, using that rationale to calibrate the prioritization of future alerts in DLP and IRM. Learn more about the Alert Triage Agents for DLP and IRM here.

Conditional Access Optimization Agent

As organizations grow, identity and IT admins must continuously ensure that access policies adapt to new employees, contractors, SaaS apps, and more, keeping security intact without adding complexity. But as their environments evolve, keeping Conditional Access (CA) policies up to date becomes increasingly difficult. New users and apps can slip through, and exclusions can go unaddressed, creating security risks.
Even with routine reviews, manually auditing policies and adjusting coverage can take days or weeks, yet gaps can still go unnoticed. The CA Optimization Agent in Microsoft Entra changes that for admins, automating the detection and resolution of policy drift. This agent continuously monitors for newly created users and applications, analyzes their alignment with existing CA policies, and proactively detects security gaps in real time. Unlike static automation, it recommends optimizations and provides one-click fixes, helping admins refine policy coverage effortlessly while ensuring a strong, adaptive security posture. Learn more about the CA Optimization Agent here.

Vulnerability Remediation Agent

Managing security vulnerabilities is a growing challenge for organizations, as the volume of CVEs and limited resources make it difficult to prioritize and implement critical fixes effectively. Microsoft Intune is designed for organizations that need a modern, cloud-powered approach to endpoint management, one that not only simplifies IT operations but strengthens security in an evolving threat landscape. IT admins require more than just visibility into vulnerabilities; they need a proactive, risk-based security strategy that continuously assesses risk and automates remediation to minimize exposure. That's why Intune is introducing the Vulnerability Remediation Agent, a solution built to help organizations stay ahead of emerging threats. By leveraging Microsoft Defender Vulnerability Management, the agent automatically identifies, evaluates, and prioritizes vulnerabilities. It continuously monitors newly published threats, assesses their risk levels, and offers clear, actionable recommendations for remediation. With continuous vulnerability detection, risk-based prioritization, and guided remediation, the agent reduces exposure time while freeing up IT teams to focus on strategic initiatives. This is the first step toward designing vulnerability remediation at scale.
A future, comprehensive approach will work across device platforms, address vulnerabilities in third-party applications, and remediate using configuration changes. Learn more about the Vulnerability Remediation Agent here.

Threat Intelligence Briefing Agent

Cyber threat intelligence analysts often face data overload and resource constraints when sourcing the threat intelligence needed to help their organizations understand, prioritize, and respond to critical threats. Crafting a threat intelligence briefing for security teams and executives can take hours, or even days, due to the constant evolution of both the threat landscape and an organization's attack surface. The Threat Intelligence Briefing Agent in Security Copilot dramatically expedites this process. It automatically curates up-to-date, context-specific intelligence tailored to your organization's unique profile and attack surface. Operating autonomously in the background, it taps into Microsoft's extensive threat intelligence resources (including Microsoft Defender Threat Intelligence and Microsoft Defender External Attack Surface Management) to deliver prioritized reports in just 4–5 minutes. This tool not only cuts down on manual effort but also highlights the most pressing threats and provides actionable recommendations, ensuring your team stays well-informed and ready to respond. Learn more about the Threat Intelligence Briefing Agent here.

Extending agentic capabilities with partner solutions

We are grateful to our partners who continue to play a vital role in empowering everyone to confidently adopt safe and responsible AI. Our growing partner ecosystem seamlessly integrates Security Copilot with established tools across various applications. Today, I am pleased to share five new upcoming agents in partner solutions, with many more to come.
- Privacy Breach Response Agent by OneTrust analyzes a data breach based on type of data, geographic jurisdiction, and regulatory requirements to generate guidance for the privacy team on how to meet those requirements.
- Network Supervisor Agent by Aviatrix determines why a VPN, Gateway, or Site2Cloud connection is down and provides information about the failure.
- SecOps Tooling Agent by BlueVoyant assesses your security operations center (SOC) and state of controls to make recommendations that optimize security operations and improve controls, efficacy, and compliance.
- Alert Triage Agent by Tanium provides analysts with the necessary context to quickly and confidently make a decision on each alert.
- Task Optimizer Agent by Fletch helps organizations forecast and prioritize the most critical threat alerts to reduce alert fatigue and improve security.

Learn more about our partner integrations at aka.ms/partnerintegrations.

Get Started with Security Copilot Agents

Microsoft Security Copilot agents will be available in preview starting April 2025. To get started with Security Copilot, check out the website for more information. Already using Security Copilot? Make sure you're signed up for the Security Copilot Customer Connection Program (CCP) to receive the latest updates and features. Join today at aka.ms/JoinCCP. Learn more about the latest innovations at the Microsoft Secure digital event on April 9, 2025. Register now.

With agents, Security Copilot continues to lead the way in AI-powered cybersecurity, helping organizations defend against threats faster, smarter, and with greater confidence.