How to disable Microsoft Defender sign-in prompt
We're using the default Microsoft Defender on Windows 11 Pro workstations on a domain network. On this network, access to services like OneDrive is flat out not allowed for security reasons. Users can sign in to things like the Microsoft Store, Google accounts, etc. Every time a user logs on with domain credentials, they get a popup from the Microsoft Defender icon in the systray telling them to sign in for "best protection". Since they aren't permitted to use any remote sites for things like data storage, there is no need for them to sign in to anything remote. The buttons on the popup are "Sign In" and "Dismiss", and per our GPO setup they flat out can't sign in even if they tried. Is there a way in GPO to completely eliminate this popup? I've looked through the Defender GPO settings for both Computer and User Configuration, but nothing jumps out that would do this short of totally disabling Microsoft Defender. TIA. - Carl

Behind the Build with RSA: Identity Resilience in the Age of AI
Behind the Build is an ongoing series spotlighting standout Microsoft partner collaborations. Each edition dives into the technical and strategic decisions that shape real-world integrations, highlighting engineering excellence, innovation, and the shared customer value created through partnership.

RSA and Microsoft share a long, multiyear partnership shaped not by a single product or integration, but by shared customers grappling with some of today's most complex security challenges, from cloud migration and identity sprawl to AI-driven threats. In this Behind the Build blog, we feature Dave Taku, RSA's Vice President of Product Management and User Experience, to dive deeper into how that collaboration works at a technical level, how RSA and Microsoft engineers partner to solve real customer problems, and how recent work spanning Microsoft Entra, Microsoft Sentinel, and AI-driven security capabilities is shaping what comes next.

Meet Dave Taku

Dave Taku has spent nearly 25 years in cybersecurity, working across domains such as telecommunications and network security. Most of that time, however, has been focused squarely on identity, in areas like authentication, access management, and governance and lifecycle in particular. He has been with RSA for two decades. When asked what makes a great VP of product, Dave describes his role as one centered on enablement. "My job is really to provide clarity and empower the team, to help them be successful." That team-oriented mindset carries through RSA's broader approach to engineering and partnerships.

A Customer-Driven Partnership with Microsoft

RSA's collaboration with Microsoft has largely been shaped by shared customers, many of them large, complex enterprises navigating the shift from on-premises environments to cloud-first architectures. "These efforts are almost always customer initiated," Dave notes.
"Customers want us working together to make their journey successful." That alignment has led to a wide range of joint initiatives over the years, spanning identity control planes, hybrid and multi-cloud scenarios, and, more recently, deeper analytics and AI-driven security workflows.

Identity as the Foundation

Identity sits at the center of RSA's partnership with Microsoft, particularly through integrations with Microsoft Entra. While organizations increasingly adopt Entra for cloud identity, many still operate complex hybrid estates and highly regulated environments. RSA can help in those mixed-use cases by extending identity controls beyond a single platform, providing behavioral analytics and risk-based authentication that complement Entra's native features. "At RSA, we're laser focused on answering two questions for our customers," Dave explains. "Who is this user (can we be absolutely sure)? And is their access appropriate from a zero-trust perspective?"

A standout example of the collaboration is RSA's early adoption of External Authentication Methods (EAM), where RSA served as a day-one launch partner. EAM built on prior generations of integration between RSA and Microsoft identity technologies and has been critical for customers migrating sensitive workloads to the cloud without disrupting existing security postures.

At the end of the day, it is customers who drive this kind of innovation. Dave points to large global financial institutions as clear bellwethers. As these organizations shift toward cloud-first models and embrace Azure and SaaS, they face the challenge of modernizing identity without disrupting environments long secured by RSA or introducing new risks during migration.
EAM has been critical in enabling that transition, allowing established RSA authentication and policy controls to carry forward into Microsoft Entra so customers can adopt cloud services while preserving the security models and operational consistency they depend on.

From Identity Signals to Agentic AI with Sentinel

More recently, RSA and Microsoft have expanded their collaboration through deeper integrations with Microsoft Sentinel, including work involving Sentinel's data lake and Security Copilot. Together, the companies are advancing AI-driven capabilities that strengthen identity-security insights and automation, defend against AI-powered attacks, and address the growing need to secure non-human identities as autonomous agents become more prevalent in enterprise environments.

RSA's approach starts with administrative telemetry from RSA ID Plus. Those events are ingested through a Sentinel connector and stored in the Microsoft Sentinel data lake, which enables cost-effective long-term retention of identity telemetry and makes it available for advanced analytics. Security Copilot agents then assess this data to surface anomalous or risky administrative behavior. "Admin accounts are increasingly a target," says Dave. "If you don't know when an admin is behaving unusually, you're already too late." This integration enables security teams to analyze identity-related activity alongside broader organizational telemetry, helping analysts detect compromised credentials earlier and respond faster. "Human operators can't keep up anymore," Dave says. "As identities become more dynamic and more automated, we need AI-driven assistance to maintain zero trust at scale."

Looking Ahead

As RSA and Microsoft look ahead, their collaboration is increasingly shaped by how identity security must evolve in an AI-driven world. Dave outlines three core areas where both teams see significant opportunities for continued innovation.
AI will play a growing role in helping organizations make sense of increasingly fluid identity environments, enabling better insight, decision-making, and, over time, more autonomous responses as manual oversight becomes less viable. At the same time, the rise of AI-powered attacks is placing new strain on traditional identity trust models, pushing the industry toward more adaptive, analytics-driven signals. Finally, as enterprises adopt AI agents that act independently or on behalf of users, identity security is expanding beyond humans altogether, making the protection of non-human identities an essential frontier for the future of cybersecurity.

Programs like the Microsoft Intelligent Security Association (MISA) help enable this kind of deep technical collaboration, providing a framework for RSA and Microsoft to align on emerging scenarios, validate integrations, and bring new capabilities to market faster. "It's been a long journey together," Dave reflects. "And we're just getting started."

Windows Sandbox
I had to reinstall my computer after the NVMe drive failed and was replaced. The current version of Windows is Windows 11 Business, version 25H2 (OS build 26200.7462). I enabled Windows Sandbox, but it fails to run with the error: "Windows Sandbox failed to initialize. The media is write-protected (0x80070013)". I previously had this Windows optional feature enabled without any issues. I have disabled BitLocker, turned off some core isolation features, disabled other Microsoft Defender antivirus features, and followed the recommended steps in Microsoft Support documentation, but the issue persists. Any help would be greatly appreciated.

Introducing Security Dashboard for AI (Now in Public Preview)
AI proliferation in the enterprise, combined with the emergence of AI governance committees and evolving AI regulations, leaves CISOs and AI risk leaders needing a clear view of their AI risks, such as data leaks, model vulnerabilities, misconfigurations, and unethical agent actions, across their entire AI estate, spanning AI platforms, apps, and agents. 53% of security professionals say their current AI risk management needs improvement, presenting an opportunity to identify, assess, and manage risk more effectively. [1] At the same time, 86% of leaders prefer integrated platforms over fragmented tools, citing better visibility, fewer alerts, and improved efficiency. [2]

To address these needs, we are excited to announce that the Security Dashboard for AI, previously announced at Microsoft Ignite, is available in public preview. This unified dashboard aggregates posture and real-time risk signals from Microsoft Defender, Microsoft Entra, and Microsoft Purview, enabling users to see left-to-right across purpose-built security tools from within a single pane of glass. The dashboard equips CISOs and AI risk leaders with a governance tool to discover agents and AI apps, track AI posture and drift, and correlate risk signals to investigate and act across their entire AI ecosystem. Security teams can continue using the tools they trust while security leaders govern and collaborate effectively.

Gain Unified AI Risk Visibility

Consolidating risk signals from across purpose-built tools can simplify AI asset visibility and oversight, increase security teams' efficiency, and reduce the opportunity for human error. The Security Dashboard for AI provides leaders with unified AI risk visibility by aggregating security, identity, and data risk across Defender, Entra, and Purview into a single interactive dashboard experience.
The Overview tab of the dashboard provides an AI risk scorecard, giving immediate visibility into where there may be risks for security teams to address. It also assesses an organization's implementation of Microsoft security for AI capabilities and provides recommendations for improving AI security posture. The dashboard also features an AI inventory with comprehensive views to support AI asset discovery, risk assessment, and remediation actions, for broad coverage of AI agents, models, MCP servers, and applications.

The dashboard covers all Microsoft AI solutions supported by Entra, Defender, and Purview, including Microsoft 365 Copilot, Microsoft Copilot Studio agents, and Microsoft Foundry applications and agents, as well as third-party AI models, applications, and agents such as Google Gemini, OpenAI ChatGPT, and MCP servers. This supports comprehensive visibility and control regardless of where applications and agents are built.

Prioritize Critical Risk with Security Copilot's AI-Powered Insights

Risk leaders must do more than just recognize existing risks; they also need to determine which ones pose the greatest threat to their business. The dashboard provides a consolidated view of AI-related security risks and leverages Security Copilot's AI-powered insights to help find the most critical risks within an environment. For example, Security Copilot's natural language interaction improves agent discovery and categorization, helping leaders identify unmanaged and shadow AI agents to enhance security posture. Security Copilot also allows leaders to investigate AI risks and agent activities through prompt-based exploration, putting them in the driver's seat for additional risk investigation.

Drive Risk Mitigation

By streamlining risk mitigation recommendations and automating task delegation, organizations can significantly improve the efficiency of their AI risk management processes.
This approach can reduce potential hidden AI risk and accelerate compliance efforts, helping to ensure that risk mitigation is timely and accurate. The Security Dashboard for AI evaluates how organizations put Microsoft's AI security features into practice and offers tailored suggestions to strengthen AI security posture. It leverages Microsoft's productivity tools for immediate action within the practitioner portal, making it easy for administrators to delegate recommended tasks to designated users.

With the Security Dashboard for AI, CISOs and risk leaders gain a clear, consolidated view of AI risks across agents, apps, and platforms, eliminating fragmented visibility, disconnected posture insights, and governance gaps as AI adoption scales. Best of all, the Security Dashboard for AI is included with eligible Microsoft security products customers already use. If an organization is already using Microsoft security products to secure AI, it is already a Security Dashboard for AI customer.

Getting Started

Existing Microsoft Security customers can start using the Security Dashboard for AI today. It is included when a customer has the Microsoft Security products (Defender, Entra, and Purview), with no additional licensing required. To begin, visit http://ai.security.microsoft.com or access the dashboard from the Defender, Entra, or Purview portals. Learn more about the Security Dashboard for AI at Microsoft Security MS Learn.

[1] AuditBoard & Ascend2 Research. The Connected Risk Report: Uniting Teams and Insights to Drive Organizational Resilience. AuditBoard, October 2024.
[2] Microsoft. 2026 Data Security Index: Unifying Data Protection and AI Innovation. Microsoft Security, 2026.

Problem creating a subfolder or modifying the contents of a folder
I'm having a problem that I had before and that seemed to have more or less resolved itself at the time. When I modify the contents of a folder (add a new subfolder, rename a file, ...), the change does not appear. I'm forced to leave the folder in Explorer and re-enter it to see that the subfolder was indeed created or that the file was renamed. This is obviously very painful to use. When this happened a few months ago, I saw that others had had the same problem, and I tested suggestions without success until one morning the problem disappeared. It has now reappeared, but I no longer remember what I was told to do. Does anyone on this forum know this issue and can explain it and suggest a way to resolve it?

Urgent: Stop the "Security Theater." UAC Needs Parent Process Visibility NOW.
Subject: Urgent: Stop the "Security Theater." UAC Needs Parent Process Visibility NOW.

To the Windows Shell & Security Team,

I am writing to demand a critical rectification of the User Account Control (UAC) design. The current implementation of UAC is not just outdated; it is fundamentally broken and fosters dangerous user habits due to a lack of transparency.

The Core Problem: Context Is Everything

Your current design only answers "WHAT is running" (e.g., cmd.exe executing netsh winsock reset), but it deliberately hides "WHO requested it." This obfuscation renders the security prompt useless. Let me give you a simple analogy: if someone tells me to "go home" at night, my reaction depends entirely on the speaker. If it is my father, it is an instruction of care. If it is a stranger in the shadows, it is a threat. Right now, Windows is that stranger in the dark. It throws a command in my face without identifying the source. When a generic system process requests high privileges, how is a user supposed to distinguish between a legitimate driver update and a malicious script?

The "Safety" Excuse Is Invalid

Do not hide behind the excuse that "the Parent Process ID (PPID) can be spoofed." Even a potentially spoofable path is infinitely better than a complete blindfold. By hiding the call stack, you are forcing users to play Russian roulette with their "Yes/No" buttons.

You Are Training Users to Be Vulnerable

Because you refuse to provide the source context, users have learned that they cannot verify the prompt. Consequently, they are conditioned to blindly click "Yes" just to make the annoying window go away. This is security theater at its worst. You are not protecting the user; you are confusing them.

The Demand

We are in 2026. The technical barrier to displaying the initiating process in the UAC dialog is non-existent.

1. Show the Parent Process: Display clearly which application triggered the UAC request (e.g., "Initiated by: Steam.exe").
2. Show the Hierarchy: Give advanced users the option to expand the process tree right there in the dialog.

Stop being lazy. Stop assuming users do not need to know. Give us the information we need to make actual security decisions.

Disappointed and Expecting Change,
A Windows User who refuses to click "Yes" blindly.
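To illustrate the letter's point that parent-process information is cheap to obtain: any user-mode process can fetch its own parent PID with a single standard-library call. A minimal, illustrative Python sketch (not how the UAC dialog itself would be implemented, and subject to the PPID-spoofing caveat acknowledged above):

```python
import os

def initiating_process():
    """Return (pid, ppid) for the current process.

    The immediate parent PID is one standard-library call away on both
    Windows and Unix. Resolving the parent's executable name requires
    platform APIs (e.g. Toolhelp32 on Windows, /proc/<ppid>/comm on
    Linux), but that lookup is equally inexpensive, which is the point:
    a UAC-style prompt could surface this context at negligible cost.
    """
    return os.getpid(), os.getppid()

pid, ppid = initiating_process()
print(f"This process (PID {pid}) was initiated by PID {ppid}")
```

One PPID alone does not give the full hierarchy the letter asks for in point 2, but walking the chain is just repeating the same lookup per ancestor.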