Microsoft Defender for Endpoint
Learn more about Microsoft Security Communities
In the last five years, Microsoft has increased its emphasis on community programs, specifically within the security, compliance, and management space. These communities fall into two categories: public and private (NDA only). In this blog, we share a breakdown of each community and how to join.

Onboarding MDE with Defender for Cloud (Problem)
Hello Community, I have a strange problem at one of our customers. We onboarded servers with Azure Arc and activated Defender for Cloud with only the Endpoint protection plan. Some of these machines onboarded into the Microsoft Defender portal but do not appear as devices, so I have no way to put them into a group and apply policy. I checked the Azure Arc agent and everything works fine: the machines are in Azure Arc, they show up in the Defender portal, and I can see them in Intune (managed by MDE).
[Screenshot: Intune portal] [Screenshot: Defender portal]
However, unlike the other machines, only the enterprise application exists in Entra ID, not a device object. For comparison, here is an example of a device that works correctly (onboarded with the same method). Has anyone seen or had this problem? Thanks and regards, Guido

Defender for servers (P1)
Hey guys, I enabled my Defender for Cloud trial license (P1) for my Windows servers. They are onboarded and I can see them in the EDR portal (security.microsoft.com). My enforcement scope is set to Intune, so all my AV policies and so on are created there. I want to create an AV policy for my Windows servers, but I can't see the objects in Entra. What is best practice: register them in Entra manually, or should an object be created in Entra automatically during the onboarding process? I can't create and assign an AV policy until I create a group and put all my servers into it. Any ideas? It might also be worth mentioning that the servers show as managed by "unknown" rather than Microsoft Sense. Should I point the scope back to the Defender portal, while my endpoints stay managed by Intune?

Monthly news - December 2025
Microsoft Defender Monthly news - December 2025 Edition

This is our monthly "What's new" blog post, summarizing product updates and various new assets we released over the past month across our Defender products. In this edition, we are looking at all the goodness from November 2025. Defender for Cloud has its own Monthly News post; have a look at their blog space.

😎 Microsoft Ignite 2025 - now on-demand!
🚀 New Virtual Ninja Show episode: Advancements in Attack Disruption
Vulnerability Remediation Agent in Microsoft Intune

Microsoft Defender
- Ignite 2025: What's new in Microsoft Defender? This blog summarizes the big announcements we made at Ignite.
- (Public Preview) Defender XDR now includes the predictive shielding capability, which uses predictive analytics and real-time insights to dynamically infer risk, anticipate attacker progression, and harden your environment before threats materialize. Learn more about predictive shielding.
- Security Copilot for SOC: bringing agentic AI to every defender. This blog post gives a great overview of the various agents supporting SOC teams.
- Account correlation links related accounts and their corresponding insights to provide identity-level visibility to the SOC. Coordinated response allows defenders to take action comprehensively across connected accounts, accelerating response and minimizing the potential for lateral movement.
- Enhancing visibility into your identity fabric with Microsoft Defender. This blog describes new enhancements to the identity security experience within Defender that help enrich your security team's visibility into, and understanding of, your unique identity fabric.
- (Public Preview) The IdentityAccountInfo table in advanced hunting is now available for preview. This table contains account information from various sources, including Microsoft Entra ID, as well as information about, and a link to, the identity that owns the account. (A hedged query sketch follows this section.)
- Microsoft Sentinel customers using the Defender portal, or the Azure portal with the Microsoft Sentinel Defender XDR data connector, now also benefit from Microsoft Threat Intelligence alerts that highlight activity from nation-state actors, major ransomware campaigns, and fraudulent operations. For more information, see Incidents and alerts in the Microsoft Defender portal.
- (Public Preview) New User and Entity Behavior Analytics (UEBA) experiences in the Defender portal! Microsoft Sentinel introduces new UEBA experiences in the Defender portal, bringing behavioral insights directly into key analyst workflows. These enhancements help analysts prioritize investigations and apply UEBA context more effectively. Learn more in our docs.
- (Public Preview) A new Restrict pod access response action is now available when investigating container threats in the Defender portal. This response action blocks sensitive interfaces that allow lateral movement and privilege escalation.
- (Public Preview) Threat analytics now has an Indicators tab that provides a list of all indicators of compromise (IOCs) associated with a threat. Microsoft researchers update these IOCs in real time as they find new evidence related to the threat. This information helps your security operations center (SOC) and threat intelligence analysts with remediation and proactive hunting. Learn more. In addition, the overview section of threat analytics now includes additional details about a threat, such as alias, origin, and related intelligence, providing more insight into what the threat is and how it might impact your organization.
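As a rough illustration of pulling the new IdentityAccountInfo preview table programmatically, the sketch below submits an advanced hunting query through the Microsoft Graph runHuntingQuery endpoint. It assumes the azure-identity and requests Python packages, a credential granted the ThreatHunting.Read.All permission, and that the preview table is already exposed to the API in your tenant; none of this is part of the announcement above.

```python
import requests
from azure.identity import DefaultAzureCredential

# Acquire a Graph token; DefaultAzureCredential picks up environment variables,
# a managed identity, or a developer sign-in, depending on where this runs.
token = DefaultAzureCredential().get_token("https://graph.microsoft.com/.default").token

# Small sample query against the preview table; adjust to your own hunting needs.
query = "IdentityAccountInfo | take 10"

resp = requests.post(
    "https://graph.microsoft.com/v1.0/security/runHuntingQuery",
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    json={"Query": query},
    timeout=60,
)
resp.raise_for_status()

# Each entry in "results" is a dictionary keyed by the table's column names.
for row in resp.json().get("results", []):
    print(row)
```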
Microsoft Defender for Identity
- (Public Preview) In addition to the GA release of scoping by Active Directory domains a few months ago, you can now scope by Organizational Units (OUs) as part of XDR Unified Role-Based Access Control. This enhancement provides even more granular control over which entities and resources are included in security analysis. For more information, see Configure scoped access for Microsoft Defender for Identity.
- (Public Preview) New security posture assessment: Change password for on-prem account with potentially leaked credentials. The new assessment lists users whose valid credentials have been leaked. For more information, see: Change password for on-prem account with potentially leaked credentials.
- Defender for Identity is slowly rolling out automatic Windows event auditing for v3.x sensors, streamlining deployment by applying the required auditing settings to new sensors and fixing misconfigurations on existing ones. As it becomes available, you will be able to enable automatic Windows event auditing in the Advanced settings section of the Defender portal, or by using the Graph API.
- Identity Inventory enhancements: the Accounts tab, manual account linking and unlinking, and expanded remediation actions are now available. Learn more in our docs.

Microsoft Defender for Cloud Apps
- (Public Preview) Defender for Cloud Apps automatically discovers AI agents created in Microsoft Copilot Studio and Azure AI Foundry, collects audit logs, continuously monitors for suspicious activity, and integrates detections and alerts into the XDR Incidents and Alerts experience with a dedicated Agent entity. For more information, see Protect your AI agents.

Microsoft Defender for Endpoint
- Ignite 2025: Microsoft Defender now prevents threats on endpoints during an attack. This year at Microsoft Ignite, Microsoft Defender announced innovations for endpoint protection that help security teams deploy faster, gain more visibility, and proactively block attackers during active attacks.
- (Public Preview) Defender for Endpoint now includes the GPO hardening and Safeboot hardening response actions. These actions are part of the predictive shielding feature, which anticipates and mitigates potential threats before they materialize.
- (Public Preview) Custom data collection enables organizations to expand and customize telemetry collection beyond default configurations to support specialized threat hunting and security monitoring needs.
- (Public Preview) Native root detection support for Microsoft Defender on Android. This enables proactive detection of rooted devices without requiring Intune policies, validating that Defender is running on an uncompromised device and ensuring more reliable telemetry that is not vulnerable to attacker manipulation.
- (Public Preview) The new Defender deployment tool is a lightweight, self-updating application that streamlines onboarding devices to the Defender endpoint security solution. The tool takes care of prerequisites, automates migrations from older solutions, and removes the need for complex onboarding scripts, separate downloads, and manual installations. It currently supports Windows and Linux devices. Defender deployment tool: for Windows devices, for Linux devices. (A hedged sketch for verifying onboarded devices via the Defender for Endpoint API follows this section.)
- (Public Preview) Defender endpoint security solution for Windows 7 SP1 and Windows Server 2008 R2 SP1. A Defender for Endpoint security solution is now available for legacy Windows 7 SP1 and Windows Server 2008 R2 SP1 devices. The solution provides advanced protection capabilities and improved functionality for these devices compared to other solutions. The new solution is available through the new Defender deployment tool.
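Not part of the announcement itself, but a minimal sketch of one way to verify onboarding after using the deployment tool: list machines through the Defender for Endpoint API and check their health and onboarding status. It assumes the azure-identity and requests packages and an identity granted Machine.Read.All on the Defender for Endpoint API; field names are taken from the public machine entity documentation.

```python
import requests
from azure.identity import DefaultAzureCredential

# Token for the Defender for Endpoint API (not Microsoft Graph).
token = DefaultAzureCredential().get_token(
    "https://api.securitycenter.microsoft.com/.default"
).token

resp = requests.get(
    "https://api.securitycenter.microsoft.com/api/machines",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()

# Print a quick inventory line per machine so gaps in onboarding stand out.
for machine in resp.json().get("value", []):
    print(
        machine.get("computerDnsName"),
        machine.get("osPlatform"),
        machine.get("healthStatus"),
        machine.get("onboardingStatus"),
    )
```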
Microsoft Defender Vulnerability Management
- (Public Preview) The Vulnerability Management section in the Microsoft Defender portal is now located under Exposure management. This change is part of the vulnerability management integration into Microsoft Security Exposure Management, which significantly expands the scope and capabilities of the platform. Learn more.
- (General Availability) Microsoft Secure Score now includes new recommendations to help organizations proactively prevent common endpoint attack techniques (a hedged registry-check sketch appears at the end of this post):
  - Require LDAP client signing and Require LDAP server signing - help ensure the integrity of directory requests so attackers can't tamper with or manipulate group memberships or permissions in transit.
  - Encrypt LDAP client traffic - prevents exposure of credentials and sensitive user information by enforcing encrypted communication instead of clear-text LDAP.
  - Enforce LDAP channel binding - prevents man-in-the-middle relay attacks by ensuring the authentication is cryptographically tied to the TLS session. If the TLS channel changes, the bind fails, stopping credential replay.
- (General Availability) These Microsoft Secure Score recommendations are now generally available: Block web shell creation on servers; Block use of copied or impersonated system tools; Block rebooting a machine in Safe Mode.

Microsoft Defender for Office 365
- Microsoft Ignite 2025: Transforming Phishing Response with Agentic Innovation. This blog post summarizes the following announcements: General Availability of the Security Copilot Phishing Triage Agent; Agentic Email Grading System in Microsoft Defender; Cisco and VIPRE Security Group join the Microsoft Defender ICES ecosystem. A separate blog explains these best practices in more detail and outlines three other routing techniques commonly used across ICES vendors.
- Blog series: Best practices from the Microsoft Community. Microsoft Defender for Office 365: Fine-Tuning - this blog covers our top recommendations for fine-tuning a Microsoft Defender for Office 365 configuration, drawn from hundreds of deployments and recovery engagements, by Microsoft MVP Joe Stocker.
- You may be right after all! Disputing Submission Responses in Microsoft Defender for Office 365 - Microsoft MVP Mona Ghadiri spotlights a new place where AI has been inserted into a workflow to make it better: a feature that elevates the transparency and responsiveness of threat management, the ability to dispute a submission response directly within Microsoft Defender for Office 365.
- Blog post: Strengthening calendar security through enhanced remediation.
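Closing the loop on the registry-check reference above: the sketch below reads, on a single Windows host, the registry values commonly associated with the LDAP signing and channel binding settings called out in the Secure Score items. The paths and expected values are assumptions drawn from public documentation rather than from the Secure Score checks themselves, so verify them against current Microsoft guidance before acting on the output.

```python
import winreg

# Value locations and "hardened" data commonly documented for these settings;
# treat them as a starting point, not an authoritative compliance check.
CHECKS = [
    (r"SYSTEM\CurrentControlSet\Services\NTDS\Parameters", "LDAPServerIntegrity", 2),
    (r"SYSTEM\CurrentControlSet\Services\LDAP", "LDAPClientIntegrity", 2),
    (r"SYSTEM\CurrentControlSet\Services\NTDS\Parameters", "LdapEnforceChannelBinding", 2),
]

for path, name, wanted in CHECKS:
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            value, _ = winreg.QueryValueEx(key, name)
    except FileNotFoundError:
        value = None  # value (or key) not present on this host
    status = "OK" if value == wanted else f"review (found {value!r}, expected {wanted})"
    print(f"{name}: {status}")
```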
Artificial Intelligence & Security

Understanding Artificial Intelligence

Artificial intelligence (AI) refers to computational systems that perform tasks associated with human intelligence, such as learning, reasoning, problem-solving, perception, and language understanding, by applying algorithmic and statistical methods to analyse data and make informed decisions. AI can also be described as the simulation of human intelligence by machines programmed to learn, reason, and act. It blends statistics, machine learning, and robotics to deliver the following outcomes:
Prediction: the application of statistical modelling and machine learning techniques to anticipate future outcomes, such as detecting fraudulent transactions.
Automation: the use of robotics and artificial intelligence to streamline and execute routine processes, exemplified by automated invoice processing.
Augmentation: the enhancement of human decision-making and operational capabilities through AI-driven tools, for instance AI-assisted sales enablement.

Artificial Intelligence: Core Capabilities and Market Outlook

Key capabilities of AI include:
Data-driven decision-making: analysing large datasets to generate actionable insights and optimise outcomes.
Anomaly detection: identifying irregular patterns or deviations in data for risk mitigation and quality assurance.
Visual interpretation: processing and understanding visual inputs such as images and videos for applications like computer vision.
Natural language understanding: comprehending and interpreting human language to enable accurate information extraction and contextual responses.
Conversational engagement: facilitating human-like interactions through chatbots, virtual assistants, and dialogue systems.

With the exponential growth of data, machine learning models, and computing power, AI is advancing rapidly. According to industry analyst reports, breakthroughs in deep learning and neural network architectures have enabled highly sophisticated applications across diverse sectors, including healthcare, finance, manufacturing, and retail. The global AI market is projected to expand nearly 5X by 2030, from $391 billion in 2025 to $1.81 trillion, corresponding to a compound annual growth rate (CAGR) of 35.9% over the forecast period. These projections are estimates and subject to change given the rapid pace of advancement in AI.

AI and Cloud Synergy

AI and cloud computing form a powerful combination. Digital assistants offer scalable, cloud-powered intelligence, and cloud platforms such as Azure provide pre-trained models and services, enabling businesses to deploy AI solutions efficiently.

Core AI Workloads and Capabilities

Machine Learning
Machine learning (ML) underpins most AI systems by enabling models to learn from historical and real-time data to make predictions, classifications, and recommendations. These models adapt over time as they are exposed to new data, improving accuracy and robustness.
Example use cases: credit risk scoring in banking, demand forecasting in retail, and predictive maintenance in manufacturing.

Anomaly Detection
Anomaly detection techniques identify deviations from expected patterns in data, systems, or processes. This capability is critical for risk management and operational resilience, as it enables early detection of fraud, security breaches, or equipment failures.
Example use cases: fraud detection in financial transactions, network intrusion monitoring in cybersecurity, and quality control in industrial production. (A brief illustrative sketch follows.)
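As a generic illustration of the anomaly detection workload described above (not tied to any specific Defender or Purview capability), here is a minimal scikit-learn sketch that flags synthetic outlier records with an Isolation Forest; the data, contamination rate, and thresholds are invented purely for demonstration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))   # "typical" records
outliers = rng.uniform(low=6.0, high=9.0, size=(10, 2))  # injected anomalies
X = np.vstack([normal, outliers])

# contamination is the expected share of anomalies; tune it for real data.
model = IsolationForest(contamination=0.02, random_state=0)
labels = model.fit_predict(X)  # -1 = anomaly, 1 = normal

print(f"Flagged {int((labels == -1).sum())} of {len(X)} records as anomalous")
```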
Natural Language Processing (NLP)
NLP focuses on enabling machines to understand, interpret, and generate human language in both text and speech. This capability powers a wide range of applications that require contextual comprehension and semantic accuracy.
Example use cases: sentiment analysis of customer feedback, document summarisation for legal and compliance teams, and multilingual translation for global operations.

Principles of Responsible AI

To ensure ethical and trustworthy AI, organisations must embrace:
Reliability & Safety
Privacy & Security
Inclusiveness
Fairness
Transparency
Accountability

These principles are embedded in frameworks like the Responsible AI Standard and reinforced by governance models such as the Microsoft AI Governance Framework. See Responsible AI Principles and Approach | Microsoft AI.

AI and Security

AI introduces both opportunities and risks. A responsible approach to AI security involves three dimensions:
Risk mitigation: addressing threats from immature or malicious AI applications.
Security applications: using AI to enhance security and public safety.
Governance systems: establishing frameworks to manage AI risks and ensure safe development.

Security Risks and Opportunities of the AI Transformation

AI's transformative nature brings new challenges:
Cybersecurity: AI creates opportunities to track, detect, and act on vulnerabilities in infrastructure and in the learning models themselves.
Data security: tools and solutions such as Microsoft Purview help protect data by performing assessments, creating data loss prevention policies, and applying sensitivity labels.
Information security: securing information remains the biggest risk, and in the AI era it increasingly relies on applying established AI security frameworks.

These concerns are echoed in The Crucial Role of Data Security Posture Management in the AI Era, which highlights insider threats, generative AI risks, and the need for robust data governance.

AI in Security Applications

AI's capabilities in data analysis and decision-making enable innovative security solutions:
Network protection: AI algorithms for intrusion detection, malware detection, security situational awareness, and early threat warning.
Data management: AI technologies that support data protection objectives such as hierarchical classification, leak prevention, and leak traceability.
Intelligent security: AI that moves the security field from passive defence toward active judgment and timely early warning.
Financial risk control: AI that improves the efficiency and accuracy of credit assessment and risk management, and assists governments in regulating financial transactions.

AI Security Management

Effective AI security requires:
Regulations & policies: safety and governance laws designed for regulatory authorities, together with management policies for key AI application domains and prominent security risks.
Standards & specifications: industry-wide benchmarks, along with international and domestic standards, to support AI safety.
Technological methods: early detection using modern tools such as Defender for AI to detect, mitigate, and remediate AI threats.
Security assessments: appropriate tools and platforms for evaluating AI risks, with assessments performed regularly using an automated approach.

Conclusion

AI is transforming how organizations operate, innovate, and secure their environments. As AI capabilities evolve, integrating security and governance considerations from the outset remains critical. By combining responsible AI principles, effective governance, and appropriate security measures, organizations can work toward deploying AI technologies in a manner that supports both innovation and trust. Industry projections suggest continued growth in AI-related security investments over the coming years, reflecting increased focus on managing AI risks alongside its benefits. These estimates are subject to change and should be interpreted in the context of evolving technologies and regulatory developments.

Disclaimer: References to Microsoft products and frameworks are for informational purposes only and do not imply endorsement, guarantee, or contractual commitment. Market projections referenced are based on publicly available industry analyses and are subject to change.

Issue when ingesting Defender XDR table in Sentinel
Hello, we are migrating our on-premises SIEM solution to Microsoft Sentinel since we have E5 licences for all our users. The integration between Defender XDR and Sentinel convinced us to make the move. We have a limited budget for Sentinel, and we found that the Auxiliary/Data Lake tier is sufficient for verbose log sources such as network logs. We would like to retain Defender XDR data for more than 30 days (the default retention period), so we implemented the solution described in this blog post: https://jeffreyappel.nl/how-to-store-defender-xdr-data-for-years-in-sentinel-data-lake-without-expensive-ingestion-cost/ However, we are facing an issue with two tables: DeviceImageLoadEvents and DeviceFileCertificateInfo. The rows forwarded by Defender to Sentinel for these tables arrive empty, like the example row in the screenshot. We created a support ticket but so far we haven't received a solution. If anyone has experienced this issue, we would appreciate your feedback. Lucas
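Not an official fix, but while waiting on support, a quick check along these lines can quantify how many forwarded rows actually carry data. It assumes the azure-identity and azure-monitor-query packages, a Log Analytics workspace ID (placeholder below), and that the two tables are queryable from the tier they were routed to; auxiliary/data-lake tables have query limitations, so results may differ there.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-guid>"  # placeholder, not from the post
client = LogsQueryClient(DefaultAzureCredential())

# Compare total rows vs. rows that carry an actual DeviceName for each table.
for table_name in ("DeviceImageLoadEvents", "DeviceFileCertificateInfo"):
    query = (
        f"{table_name} "
        "| summarize TotalRows = count(), NonEmptyRows = countif(isnotempty(DeviceName))"
    )
    result = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(days=1))
    for table in result.tables:
        for row in table.rows:
            print(table_name, list(row))
```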
From “No” to “Now”: A 7-Layer Strategy for Enterprise AI Safety

The “block” posture on generative AI has failed. In a global enterprise, banning these tools doesn't stop usage; it simply pushes intellectual property into unmanaged channels and creates a massive visibility gap in corporate telemetry. The priority has shifted from stopping AI to hardening the environment so that innovation can run at velocity without compromising data sovereignty. Traditional security perimeters are ineffective against the “slow bleed” of AI leakage, where data moves through prompts, clipboards, and autonomous agents rather than bulk file transfers. To secure this environment, a 7-layer defense-in-depth model is required that treats the conversation itself as the new perimeter.

1. Identity: The Only Verifiable Perimeter

Identity is the primary control plane. Access to AI services must be treated with the same rigor as administrative access to core infrastructure. The strategy centers on enforcing device-bound Conditional Access, where access is strictly contingent on device health. To solve the "account leak" problem, deploying Tenant Restrictions v2 (TRv2) is essential to prevent users from signing into personal tenants on corporate-managed devices. For broader coverage, Universal Tenant Restrictions (UTR) via Global Secure Access (GSA) allows consistent enforcement at the cloud edge. While TRv2 authentication-plane protection is GA, data-plane protection is GA for the Microsoft 365 admin center and remains in preview for other workloads such as SharePoint and Teams. A hedged sketch of a device-bound Conditional Access policy follows.
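To make layer 1 concrete, here is a minimal sketch of creating a report-only Conditional Access policy through the Microsoft Graph API that requires a compliant (or hybrid-joined) device for a sanctioned AI app. The group ID and application ID are placeholders, the azure-identity and requests packages are assumed, and the calling identity needs at least the Policy.ReadWrite.ConditionalAccess permission; treat this as a starting point under those assumptions, not the article's prescribed implementation.

```python
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://graph.microsoft.com/.default").token

# Hypothetical appId of the sanctioned AI service; replace with the real value
# from your tenant's enterprise applications.
AI_APP_ID = "00000000-0000-0000-0000-000000000000"

policy = {
    "displayName": "Require compliant device for sanctioned AI app (sketch)",
    "state": "enabledForReportingButNotEnforced",  # report-only while validating
    "conditions": {
        "clientAppTypes": ["all"],
        "users": {"includeGroups": ["<pilot-group-object-id>"]},  # placeholder
        "applications": {"includeApplications": [AI_APP_ID]},
    },
    "grantControls": {
        "operator": "OR",
        "builtInControls": ["compliantDevice", "domainJoinedDevice"],
    },
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print("Created policy:", resp.json().get("id"))
```

Starting in report-only mode is deliberate: it lets you observe which sign-ins would have been blocked before enforcing the control on production users.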
2. Eliminating the Visibility Gap (Shadow AI)

You can’t secure what you can't see. Microsoft Defender for Cloud Apps (MDCA) serves to discover and govern the enterprise AI footprint, while Purview DSPM for AI (formerly AI Hub) monitors Copilot and third-party interactions. By categorizing tools using MDCA risk scores and compliance attributes, organizations can apply automated sanctioning decisions and enforce session controls for high-risk endpoints.

3. Data Hygiene: Hardening the “Work IQ”

AI acts as a mirror of internal permissions. In a "flat" environment, AI behaves like a search engine for your over-shared data. Hardening the foundation requires automated sensitivity labeling in Purview Information Protection. Identifying PII and proprietary code before assigning AI licenses ensures that labels travel with the data, preventing labeled content from being exfiltrated via prompts or unauthorized sharing.

4. Session Governance: Solving the “Clipboard Leak”

The most common leak in 2025 is not a file upload; it’s a simple copy-paste action or a USB transfer. Deploying Conditional Access App Control (CAAC) via MDCA session policies allows sanctioned apps to function while specifically blocking cut/copy/paste. This is complemented by Endpoint DLP, which extends governance to the physical device level, preventing sensitive data from being moved to unmanaged USB storage or printers during an AI-assisted workflow. Purview Information Protection with IRM rounds this out by enforcing encryption and usage rights on the files themselves. When a user tries to print a "Do Not Print" document, Purview triggers an alert that flows into Microsoft Sentinel. This gives the SOC visibility into actual policy violations instead of having to hunt through generic activity logs.

5. The “Agentic” Era: Agent 365 & Sharing Controls

Now that we're moving from "chat" to "agents", Agent 365 and Entra Agent ID provide the necessary identity and control plane for autonomous entities. A quick tip: in large-scale tenants, default settings often present a governance risk. A critical first step is navigating to the Microsoft 365 admin center (Copilot > Agents) to disable the default “Anyone in organization” sharing option. Restricting agent creation and sharing to a validated security group is essential to prevent unvetted agent sprawl and ensure that only compliant agents are discoverable.

6. The Human Layer: “Safe Harbors” over Bans

Security fails when it creates more friction than the risk it seeks to mitigate. Instead of an outright ban, investing in AI skilling, teaching users context minimization (redacting specifics before interacting with a model), is the better path. Providing a sanctioned, enterprise-grade "safe harbor" like M365 Copilot offers a superior tool that naturally cuts down the use of shadow AI.

7. Continuous Ops: Monitoring & Regulatory Audit

Security is not a “set and forget” project, particularly with the EU AI Act on the horizon. Correlating AI interactions and DLP alerts in Microsoft Sentinel using Purview Audit data (specifically the CopilotInteraction logs) allows for real-time responses. Automated SOAR playbooks can then trigger protective actions, such as revoking an Agent ID, if an entity attempts to access sensitive HR or financial data.

Final Thoughts

Securing AI at scale is an architectural shift. By layering identity, session governance, and agentic identity, AI moves from being a fragmented risk to a governed tool that actually works for the modern workplace.
Updating SDK for Java used by Defender for Server/CSPM in AWS

Hi, I have a customer who is running Defender for Cloud/CSPM in AWS. Last week, the AWS Health Dashboard lit up with a recommendation about the use of AWS SDK for Java 1.x in their organization. This version will reach end of support on December 31, 2025, and the recommendation is to migrate to AWS SDK for Java 2.x. The issue is present in all of their AWS workload accounts. They found that a large number of these alerts are caused by the Defender CSPM service, running remotely and using AWS SDK for Java 1.x. The customer is attaching a couple of sample events gathered from the CloudTrail logs. Please note that in both cases:
assumed-role: DefenderForCloud-Ciem
sourceIP: 20.237.136.191 (MS Azure range)
userAgent: aws-sdk-java/1.12.742 Linux/6.6.112.1-2.azl3 OpenJDK_64-Bit_Server_VM/21.0.9+10-LTS java/21.0.9 kotlin/1.6.20 vendor/Microsoft cfg/retry-mode/legacy cfg/auth-source#unknown
Can someone provide guidance on this? How can we find out whether Defender for Cloud is going to leverage AWS SDK for Java 2.x after December 31, 2025? Thanks, Terru
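Not an answer to the product-roadmap question, but for anyone who wants to reproduce this kind of analysis, a rough sketch like the following (using boto3 and assuming CloudTrail LookupEvents access in the target account) can surface which callers are still presenting an aws-sdk-java/1.x user agent:

```python
import json
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")
start = datetime.now(timezone.utc) - timedelta(days=1)  # look back one day

hits = []
paginator = cloudtrail.get_paginator("lookup_events")
for page in paginator.paginate(StartTime=start):
    for event in page["Events"]:
        detail = json.loads(event["CloudTrailEvent"])
        agent = detail.get("userAgent", "")
        if agent.startswith("aws-sdk-java/1."):
            identity = detail.get("userIdentity", {}).get("arn", "unknown")
            hits.append((detail.get("eventName"), identity, agent))

for name, who, agent in hits:
    print(f"{name} by {who} via {agent}")
```

LookupEvents only covers management events and is rate-limited, so for large accounts querying a CloudTrail lake or Athena table over the trail's S3 bucket scales better.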
Host Microsoft Defender data locally in the United Arab Emirates

We are pleased to announce that local data residency support in the UAE is now generally available for Microsoft Defender for Endpoint and Microsoft Defender for Identity. This announcement reinforces our ongoing commitment to delivering secure, compliant services aligned with local data sovereignty requirements. Customers can now confidently onboard to Defender for Endpoint and Defender for Identity in the UAE, knowing that this Defender data will remain at rest within the UAE data boundary. This allows customers to meet their regulatory obligations and maintain control over their data. For more details on Defender data storage and privacy policies, refer to Microsoft Defender for Endpoint data storage and privacy and Microsoft Defender for Identity data security and privacy.

Note: Defender for Endpoint and Defender for Identity may use other Microsoft services (for example, Microsoft Intune for security settings management). Each Microsoft service is governed by its own data storage and privacy policies and may have varying regional availability. For more information, refer to our Online Product Terms.

In addition to the UAE, Defender data residency capabilities are available in the United States, the European Union, the United Kingdom, Australia, Switzerland, and India (see our recent announcement for local data hosting in India).

Customers with existing deployments of Defender for Endpoint and/or Defender for Identity
Existing customers can check their deployment geo within the portal by going to Settings > Microsoft Defender XDR > Account to see where the service is storing your data at rest. For example, in the image below, the service location for the Defender XDR tenant is UAE.
[Screenshot: service location information]
If you would like to update your service location, please reach out to Customer Service and Support for a tenant reset. Support can be accessed by clicking the “?” icon in the top right corner of the portal when signed in as an admin (see image below). If you are a Microsoft Unified support customer, please reach out to your Customer Success Account Manager for assistance with the migration process.

More information:
Ready to go local? Read our documentation for more information on how to get started: Microsoft Defender XDR data center location.
Not yet a customer? Take Defender XDR for a spin via a 90-day trial for Office 365 E5, or try Defender for Endpoint via a 90-day trial for Defender for Endpoint.
Check out the Defender for Endpoint website to learn more about our industry-leading endpoint protection platform.
Check out the Defender for Identity website to learn how to keep your organization safe against rising identity threats.