Microsoft Sentinel
Planning your move to Microsoft Defender portal for all Microsoft Sentinel customers
In November 2023, Microsoft announced our strategy to unify security operations by bringing the best of XDR and SIEM together. Our first step was bringing Microsoft Sentinel into the Microsoft Defender portal, giving teams a single, comprehensive view of incidents, reducing queue management, enriching threat intel, streamlining response and enabling SOC teams to take advantage of Gen AI in their day-to-day workflow. Since then, considerable progress has been made with thousands of customers using this new unified experience; to enhance the value customers gain when using Sentinel in the Defender portal, multi-tenancy and multi-workspace support was added to help customers with more sophisticated deployments. Our mission is to unify security operations by bringing all your data, workflows, and people together to unlock new capabilities and drive better security outcomes. As a strong example of this, last year we added extended posture management, delivering powerful posture insights to the SOC team. This integration helps build a closed-loop feedback system between your pre- and post-breach efforts. Exposure Management is just one example. By bringing everything together, we can take full advantage of AI and automation to shift from a reactive to predictive SOC that anticipates threats and proactively takes action to defend against them. Beyond Exposure Management, Microsoft has been constantly innovating in the Defender experience, adding not just SIEM but also Security Copilot. The Sentinel experience within the Defender portal is the focus of our innovation energy and where we will continue to add advanced Sentinel capabilities going forward. Onboarding to the new unified experience is easy and doesn’t require a typical migration. Just a few clicks and permissions. Customers can continue to use Sentinel in the Azure portal while it is available even after choosing to transition. Today, we’re announcing that we are moving to the next phase of the transition with a target to retire the Azure portal for Microsoft Sentinel by July 1, 2026. Customers not yet using the Defender portal should plan their transition accordingly. “Really amazing to see that coming, because cross querying with tables in one UI is really cool! Amazing, big step forward to the unified [Defender] portal.” Glueckkanja AG “The biggest benefit of a unified security operations solution (Microsoft Sentinel + Microsoft Defender XDR) has been the ability to combine data in Defender XDR with logs from third party security tools. Another advantage developed has been to eliminate the need to switch between Defender XDR and Microsoft Sentinel portals, now having a single pane of glass, which the team has been wanting for some years.” Robel Kidane, Group Information Security Manager, Renishaw PLC Delivering the SOC of the future Unifying threat protection, exposure management and security analytics capabilities in one pane of glass not only streamlines the user experience, but also enables Sentinel customers to realize security outcomes more efficiently: Analyst efficiency: A single portal reduces context switching, simplifies workflows, reduces training overhead, and improves team agility. Integrated insights: SOC-focused case management, threat intelligence, incident correlation, advanced hunting, exposure management, and a prioritized incident queue enriched with business and sensitivity context—enabling faster, more informed detection and response across all products. 
SOC optimization: Security controls that can be adjusted as threats and business priorities change to control costs and provide better coverage and utilization of data, thus maximizing ROI from the SIEM. Accelerated response: AI-driven detection and response which reduces mean time to respond (MTTR) by 30%, increases security response efficiency by 60%, and enables embedded Gen AI and agentic workflows. What’s next: Preparing for the retirement of the Sentinel Experience in the Azure Portal Microsoft is committed to supporting every single customer in making that transition over the next 12 months. Beginning July 1, 2026, Sentinel users will be automatically redirected to the Defender portal. After helping thousands of customers smoothly make the transition, we recommend that security teams begin planning their migration and change management now to ensure continuity and avoid disruption. While the technical process is very straightforward, we have found that early preparation allows time for workflow validation, training, and process alignment to take full advantage of the new capabilities and experience. Tips for a Successful Migration to Microsoft Defender 1. Leverage Microsoft’s help: Leverage Microsoft documentation, instructional videos, guidance, and in-product support to help you be successful. A good starting point is the documentation on Microsoft Learn. 2. Plan early: Engage stakeholders early including SOC and IT Security leads, MSSPs, and compliance teams to align on timing, training and organizational needs. Make sure you have an actionable timeline and agreement in the organization around when you can prioritize this transition to ensure access to the full potential of the new experience. 3. Prepare your environment: Plan and design your environment thoroughly. This includes understanding the prerequisites for onboarding Microsoft Sentinel workspaces, reviewing and deciding on access controls, and planning the architecture of your tenant and workspace. Proper planning will ensure a smooth transition and help avoid any disruptions to your security operations. 4. Leverage Advanced Threat Detection: The Defender portal offers enhanced threat detection capabilities with advanced AI and machine learning for Microsoft Sentinel. Make sure to leverage these features for faster and more accurate threat detection and response. This will help you identify and address critical threats promptly, improving your overall security posture. 5. Utilize Unified Hunting and Incident Management: Take advantage of the enhanced hunting, incident, and investigation capabilities in Microsoft Defender. This provides a comprehensive view for more efficient threat detection and response. By consolidating all security incidents, alerts, and investigations into a single unified interface, you can streamline your operations and improve efficiency. 6. Optimize Cost and Data Management The Defender portal offers cost and data optimization features, such as SOC Optimization and Summary Rules. Make sure to utilize these features to optimize your data management, reduce costs, and increase coverage and SIEM ROI. This will help you manage your security operations more effectively and efficiently. Unleash the full potential of your Security team The unified SecOps experience available in the Defender portal is designed to support the evolving needs of modern SOCs. The Defender portal is not just a new home for Microsoft Sentinel - it’s a foundation for integrated, AI-driven security operations. 
We’re committed to helping you make this transition smoothly and confidently. If you haven’t already joined the thousands of security organizations that have done so, now is the time to begin.
Resources
AI-Powered Security Operations Platform | Microsoft Security
Microsoft Sentinel in the Microsoft Defender portal | Microsoft Learn
Shifting your Microsoft Sentinel Environment to the Defender Portal | Microsoft Learn
Microsoft Sentinel is now in Defender | YouTube
Microsoft Sentinel’s New Data Lake: Cut Costs & Boost Threat Detection
Microsoft Sentinel is leveling up! Already a trusted cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation and Response (SOAR) solution, it empowers security teams to detect, investigate, and respond to threats with speed and precision. Now, with the introduction of its new Data Lake architecture, Sentinel is transforming how security data is stored, accessed, and analyzed, bringing unmatched flexibility and scale to threat investigation. Unlike Microsoft Fabric OneLake, which supports analytics across the organization, Sentinel’s Data Lake is purpose-built for security. It centralizes raw structured, semi-structured, and unstructured data in its original format, enabling advanced analytics without rigid schemas. This article is written by someone who’s spent years helping security teams navigate Microsoft’s evolving ecosystem, translating complex capabilities into practical strategies. What follows is a hands-on look at the key features, benefits, and challenges of Sentinel’s Data Lake, designed to help you make the most of this powerful new architecture. Current Sentinel Features To tackle the challenges security teams face today—like explosive data growth, integration of varied sources, and tight compliance requirements—organizations need scalable, efficient architectures. Legacy SIEMs often become costly and slow when analyzing multi-year data or correlating diverse events. Security data lakes address these issues by enabling seamless ingestion of logs from any source, schema-on-read flexibility, and parallelized queries over massive datasets. This schema-on-read allows SOC analysts to define how data is interpreted at the time of analysis, rather than when it is stored. This means analysts can flexibly adapt queries and threat detection logic to evolving threats, without reformatting historical data, making investigations more agile and responsive to change. This empowers security operations to conduct deep historical analysis, automate enrichment, and apply advanced analytics, such as machine learning, while retaining strict control over data access and residency. Ultimately, decoupling storage and compute allows teams to boost detection and response speed, maintain compliance, and adapt their Security Operations Center (SOC) to future security demands. As organizations manage increasing data volumes with limited budgets, many are moving from legacy SIEMs to advanced cloud-native options. Microsoft Sentinel’s Data Lake separates storage from compute, offering scalable and cost-effective analytics and compliance. For instance, storing 500 TB of logs in Sentinel Data Lake can cut costs by 60–80% compared to Log Analytics, due to lower storage costs and flexible retention. Integration with modern tools and open formats enables efficient threat response and regulatory compliance.
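To make the long-retention scenario concrete, here is a minimal KQL sketch of the kind of long-horizon hunt the data lake is built for. It assumes firewall or CEF telemetry lands in the standard CommonSecurityLog table and uses a placeholder indicator; against lake-tier data you would typically run this through the data lake exploration experience or a scheduled KQL job rather than an interactive analytics-tier query.
// Illustrative only: hunt a year of firewall telemetry for traffic involving a known-bad IP.
// CommonSecurityLog is the standard CEF/firewall table; substitute whichever table your
// connectors populate, and replace the placeholder IP with a real indicator.
let suspiciousIp = "203.0.113.45";        // placeholder (TEST-NET-3 documentation range)
CommonSecurityLog
| where TimeGenerated > ago(365d)         // long horizon enabled by data lake retention
| where SourceIP == suspiciousIp or DestinationIP == suspiciousIp
| summarize Connections = count(),
            FirstSeen   = min(TimeGenerated),
            LastSeen    = max(TimeGenerated)
          by SourceIP, DestinationIP, DestinationPort
| order by Connections desc
Because storage and compute are decoupled, widening the time window mainly changes what you pay to analyze, not what you pay to retain.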
Microsoft Sentinel data lake pricing (preview)
Sentinel Data Lake Use Cases
Log Retention: Long-term retention of security logs for compliance and forensic investigations
Hunting: Advanced threat hunting using historical data
Interoperability: Integration with Microsoft Fabric and other analytics platforms
Cost: Efficient storage prices for high-volume data sources
How Microsoft Sentinel Data Lake Helps
Microsoft Sentinel’s Data Lake introduces a powerful paradigm shift for security operations by architecting the separation of storage and compute, enabling organizations to achieve petabyte-scale data retention without the traditional overhead and cost penalties of legacy SIEM solutions. Built atop highly scalable, cloud-native infrastructure, Sentinel Data Lake empowers SOCs to ingest telemetry from virtually unlimited sources—ranging from on-premises firewalls, proxies, and endpoint logs to SaaS, IaaS, and PaaS environments—while leveraging schema-on-read, a method that allows analysts to define how data is interpreted at query time rather than when it is stored, offering greater flexibility in analytics. For example, a security analyst can adapt the way historical data is examined as new threats emerge, without needing to reformat or restructure the data stored in the Data Lake.
From Microsoft Learn – Retention and data tiering
Storing raw security logs in open formats like Parquet (a columnar storage file format optimized for efficient data compression and retrieval, commonly used in big data processing frameworks like Apache Spark and Hadoop) enables easy integration with analytics tools and Microsoft Fabric, letting analysts efficiently query historical data using KQL, SQL, or Spark. This approach eliminates the need for complex ETL and archived data rehydration, making incident response faster; for instance, a SOC analyst can quickly search years of firewall logs for threat detection.
From Microsoft Learn – Flexible querying with Kusto Query Language
Granular data governance and access controls allow organizations to manage sensitive information and meet legal requirements. Storing raw security logs in open formats enables fast investigations of long-term data incidents, while automated lifecycle management reduces costs and ensures compliance. Data Lakes integrate with Microsoft platforms and other tools for unified analytics and security. Machine learning helps detect unusual login activity across years, overcoming previous storage issues.
From Microsoft Learn – Powerful analytics using Jupyter notebooks
Pros and Cons
The following table highlights the advantages and potential challenges of Microsoft Sentinel Data Lake. It follows the same Pay-As-You-Go pricing model currently available with Sentinel.
Pros | Cons | License Needed
Scalable, cost-effective long-term retention of security data | Requires adaptation to new architecture | Pay-As-You-Go model
Seamless integration with Microsoft Fabric and open data formats | Initial setup and integration may involve a learning curve | Pay-As-You-Go model
Efficient processing of petabyte-scale datasets | Transitioning existing workflows may require planning | Pay-As-You-Go model
Advanced analytics, threat hunting, and AI/ML across historical data | Some features may depend on integration with other services | Pay-As-You-Go model
Supports compliance use cases with robust data governance and audit trails | Complexity in new data governance features | Pay-As-You-Go model
The Microsoft Sentinel Data Lake solution advances cloud-native security by overcoming traditional SIEM limitations, allowing organizations to better retain, analyze, and respond to security data. As cyber threats grow, Sentinel Data Lake offers flexible, cost-efficient storage for long-term retention, supporting detection, compliance, and audits without significant expense or complexity.
Quick Guide: Deploy Microsoft Sentinel Data Lake
Assess Needs: Identify your security data volume, retention, and compliance requirements - Sentinel Data Lake Overview.
Prepare Environment: Ensure Azure permissions and workspace readiness - Onboarding Guide.
Enable Data Lake: Use Azure CLI or the Defender portal to activate - Setup Instructions.
Ingest & Import Data: Connect sources and migrate historical logs - Microsoft Sentinel Data Connectors.
Integrate Analytics: Use KQL, notebooks, and Microsoft Fabric for scalable analysis - Fabric Overview.
Train & Optimize: Educate your team and monitor performance - Best Practices.
About the Author: Hi! Jacques “Jack” here. I’m a Microsoft Technical Trainer at Microsoft. I wanted to share this because it’s something I’m often asked about during my security trainings. It builds on Microsoft Sentinel’s already impressive feature stack, helping the Defender community secure their environments in an increasingly hostile threat landscape. I’ve been working with Microsoft Sentinel since September 2019, and I have been teaching learners about this SIEM since March 2020. I have experience using Security Copilot and Security AI Agents, which have been effective in improving my incident response and compromise recovery times.
Hacking Made Easy, Patching Made Optional: A Modern Cyber Tragedy
In today’s cyber threat landscape, the tools and techniques required to compromise enterprise environments are no longer confined to highly skilled adversaries or state-sponsored actors. While artificial intelligence is increasingly being used to enhance the sophistication of attacks, the majority of breaches still rely on simple, publicly accessible tools and well-established social engineering tactics. Another major issue is the persistent failure of enterprises to patch common vulnerabilities in a timely manner—despite the availability of fixes and public warnings. This negligence continues to be a key enabler of large-scale breaches, as demonstrated in several recent incidents.
The Rise of AI-Enhanced Attacks
Attackers are now leveraging AI to increase the credibility and effectiveness of their campaigns. One notable example is the use of deepfake technology—synthetic media generated using AI—to impersonate individuals in video or voice calls. North Korean threat actors, for instance, have been observed using deepfake videos and AI-generated personas to conduct fraudulent job interviews with HR departments at Western technology companies. These scams are designed to gain insider access to corporate systems or to exfiltrate sensitive intellectual property under the guise of legitimate employment.
Social Engineering: Still the Most Effective Entry Point
And yet, many recent breaches have begun with classic social engineering techniques. In the cases of Coinbase and Marks & Spencer, attackers impersonated employees through phishing or fraudulent communications. Once they had gathered sufficient personal information, they contacted support desks or mobile carriers, convincingly posing as the victims to request password resets or SIM swaps. This impersonation enabled attackers to bypass authentication controls and gain initial access to sensitive systems, which they then leveraged to escalate privileges and move laterally within the network. Threat groups such as Scattered Spider have demonstrated mastery of these techniques, often combining phishing with SIM swap attacks and MFA bypass to infiltrate telecom and cloud infrastructure. Similarly, Solt Thypoon (formerly DEV-0343), linked to North Korean operations, has used AI-generated personas and deepfake content to conduct fraudulent job interviews—gaining insider access under the guise of legitimate employment. These examples underscore the evolving sophistication of social engineering and the need for robust identity verification protocols.
Built for Defense, Used for Breach
Despite the emergence of AI-driven threats, many of the most successful attacks continue to rely on simple, freely available tools that require minimal technical expertise. These tools are widely used by security professionals for legitimate purposes such as penetration testing, red teaming, and vulnerability assessments. However, they are also routinely abused by attackers to compromise systems. Case studies abound for tools like Nmap, Metasploit, Mimikatz, BloodHound, and Cobalt Strike. The dual-use nature of these tools underscores the importance of not only detecting their presence but also understanding the context in which they are being used.
From CVE to Compromise
While social engineering remains a common entry point, many breaches are ultimately enabled by known vulnerabilities that remain unpatched for extended periods.
For example, the MOVEit Transfer vulnerability (CVE-2023-34362) was exploited by the Cl0p ransomware group to compromise hundreds of organizations, despite a patch being available. Similarly, the OpenMetadata vulnerability (CVE-2024-28255, CVE-2024-28847) allowed attackers to gain access to Kubernetes workloads and leverage them for cryptomining activity days after a fix had been issued. Advanced persistent threat groups such as APT29 (also known as Cozy Bear) have historically exploited unpatched systems to maintain long-term access and conduct stealthy operations. Their use of credential harvesting tools like Mimikatz and lateral movement frameworks such as Cobalt Strike highlights the critical importance of timely patch management—not just for ransomware defense, but also for countering nation-state actors. Recommendations To reduce the risk of enterprise breaches stemming from tool misuse, social engineering, and unpatched vulnerabilities, organizations should adopt the following practices: 1. Patch Promptly and Systematically Ensure that software updates and security patches are applied in a timely and consistent manner. This involves automating patch management processes to reduce human error and delay, while prioritizing vulnerabilities based on their exploitability and exposure. Microsoft Intune can be used to enforce update policies across devices, while Windows Autopatch simplifies the deployment of updates for Windows and Microsoft 365 applications. To identify and rank vulnerabilities, Microsoft Defender Vulnerability Management offers risk-based insights that help focus remediation efforts where they matter most. 2. Implement Multi-Factor Authentication (MFA) To mitigate credential-based attacks, MFA should be enforced across all user accounts. Conditional access policies should be configured to adapt authentication requirements based on contextual risk factors such as user behavior, device health, and location. Microsoft Entra Conditional Access allows for dynamic policy enforcement, while Microsoft Entra ID Protection identifies and responds to risky sign-ins. Organizations should also adopt phishing-resistant MFA methods, including FIDO2 security keys and certificate-based authentication, to further reduce exposure. 3. Identity Protection Access Reviews and Least Privilege Enforcement Conducting regular access reviews ensures that users retain only the permissions necessary for their roles. Applying least privilege principles and adopting Microsoft Zero Trust Architecture limits the potential for lateral movement in the event of a compromise. Microsoft Entra Access Reviews automates these processes, while Privileged Identity Management (PIM) provides just-in-time access and approval workflows for elevated roles. Just-in-Time Access and Risk-Based Controls Standing privileges should be minimized to reduce the attack surface. Risk-based conditional access policies can block high-risk sign-ins and enforce additional verification steps. Microsoft Entra ID Protection identifies risky behaviors and applies automated controls, while Conditional Access ensures access decisions are based on real-time risk assessments to block or challenge high-risk authentication attempts. Password Hygiene and Secure Authentication Promoting strong password practices and transitioning to passwordless authentication enhances security and user experience. 
Microsoft Authenticator supports multi-factor and passwordless sign-ins, while Windows Hello for Business enables biometric authentication using secure hardware-backed credentials.
4. Deploy SIEM and XDR for Detection and Response
A robust detection and response capability is vital for identifying and mitigating threats across endpoints, identities, and cloud environments. Microsoft Sentinel serves as a cloud-native SIEM that aggregates and analyzes security data, while Microsoft Defender XDR integrates signals from multiple sources to provide a unified view of threats and automate response actions.
5. Map and Harden Attack Paths
Organizations should regularly assess their environments for attack paths such as privilege escalation and lateral movement. Tools like Microsoft Defender for Identity help uncover Lateral Movement Paths, while Microsoft Identity Threat Detection and Response (ITDR) integrates identity signals with threat intelligence to automate response. These capabilities are accessible via the Microsoft Defender portal, which includes an attack path analysis feature for prioritizing multicloud risks.
6. Stay Current with Threat Actor TTPs
Monitor the evolving tactics, techniques, and procedures (TTPs) employed by sophisticated threat actors. Understanding these behaviors enables organizations to anticipate attacks and strengthen defenses proactively. Microsoft Defender Threat Intelligence provides detailed profiles of threat actors and maps their activities to the MITRE ATT&CK framework. Complementing this, Microsoft Sentinel allows security teams to hunt for these TTPs across enterprise telemetry and correlate signals to detect emerging threats.
7. Build Organizational Awareness
Organizations should train staff to identify phishing, impersonation, and deepfake threats. Simulated attacks help improve response readiness and reduce human error. Use Attack Simulation Training in Microsoft Defender for Office 365 to run realistic phishing scenarios and assess user vulnerability. Additionally, educate users about consent phishing, where attackers trick individuals into granting access to malicious apps.
Conclusion
The democratization of offensive security tooling, combined with the persistent failure to patch known vulnerabilities, has significantly lowered the barrier to entry for cyber attackers. Organizations must recognize that the tools used against them are often the same ones available to their own security teams. The key to resilience lies not in avoiding these tools, but in mastering them—using them to simulate attacks, identify weaknesses, and build a proactive defense. Cybersecurity is no longer a matter of if, but when. The question is: will you detect the attacker before they achieve their objective? Will you be able to stop them before they reach your most sensitive data?
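As a concrete starting point for the TTP-hunting guidance in recommendation 6, below is a deliberately simple advanced hunting sketch in KQL, usable in Microsoft Defender XDR or in Microsoft Sentinel when Defender XDR data is connected. The keyword list, table choice, and 30-day window are illustrative assumptions; name- and string-based matching is easy for attackers to evade, so treat this as a seed for your own hunts rather than a finished detection.
// Deliberately simple, name/keyword-based hunt for common credential-dumping tooling.
// Attackers routinely rename binaries, so use this as a seed query, not a detection rule.
DeviceProcessEvents
| where Timestamp > ago(30d)
| where ProcessCommandLine has_any ("sekurlsa", "mimikatz", "lsadump", "kerberoast")
    or (FileName =~ "procdump.exe" and ProcessCommandLine has "lsass")
| project Timestamp, DeviceName, AccountName, FileName,
          ProcessCommandLine, InitiatingProcessFileName
| order by Timestamp desc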
Additional read:
Gartner Predicts 30% of Enterprises Will Consider Identity Verification and Authentication Solutions Unreliable in Isolation Due to AI-Generated Deepfakes by 2026
Cyber security breaches survey 2025 - GOV.UK
Jasper Sleet: North Korean remote IT workers’ evolving tactics to infiltrate organizations | Microsoft Security Blog
MOVEit Transfer vulnerability
Solt Thypoon
Scattered Spider
SIM swaps
Attackers exploiting new critical OpenMetadata vulnerabilities on Kubernetes clusters | Microsoft Security Blog
Microsoft Defender Vulnerability Management - Microsoft Defender Vulnerability Management | Microsoft Learn
Zero Trust Architecture | NIST
tactics, techniques, and procedures (TTP) - Glossary | CSRC
https://learn.microsoft.com/en-us/security/zero-trust/deploy/overview
Investigating M365 Copilot Activity with Sentinel & Defender XDR
As organizations embrace AI-powered tools like Microsoft Copilot, ChatGPT, and other generative assistants, one thing becomes immediately clear: AI is only as trustworthy as the data it can see. These systems are increasingly woven into everyday workstreams, surfacing insights, drafting content, and answering questions based on enterprise data signals. Yet behind the magic lies a new security frontier: making sure AI only accesses the right data, the right way, at the right time. That’s where Data Security Posture Management (DSPM) comes into play. Data Security Posture Management (DSPM) for AI is a Microsoft Purview capability designed to help organizations discover, secure, and apply compliance controls for AI usage across the enterprise. With personalized recommendations and one-click policies, it helps you protect your data, comply with regulatory requirements, and get ahead of questions like:
Where is my sensitive data stored and who has access?
Are we protecting data from potential oversharing risks?
Are we protecting sensitive data references in Copilot and agent responses?
How do we maintain compliance and governance over data accessed by AI?
Are we empowering users with AI safely and responsibly, backed by security?
In this blog, we will explore how Microsoft Sentinel and Defender XDR can help security teams operationalize DSPM for AI—from capturing Copilot interaction telemetry to building investigations and accelerating response. To learn more about Data Security Posture Management (DSPM) for AI, please visit DSPM for AI.
M365 Copilot activity in the SOC
Getting Started: The CloudAppEvents advanced hunting table is populated by records from Microsoft Defender for Cloud Apps. If your organization hasn’t deployed the service in Microsoft Defender XDR, queries that use the table won’t return any results. To make sure the CloudAppEvents table is populated, enable Microsoft 365 activities. Follow this article for detailed steps. For more information about how to deploy Defender for Cloud Apps in Defender XDR, refer to Deploy supported services. You can perform advanced hunting on Microsoft 365 Copilot data through CloudAppEvents. CloudAppEvents is a powerful table in Microsoft Defender XDR’s advanced hunting schema that captures user and admin activities across Microsoft cloud apps. To make sure the CloudAppEvents table is populated, follow the steps mentioned in the article here. The CloudAppEvents table contains enriched logs from all SaaS applications connected to Microsoft Defender for Cloud Apps; refer to Apps and Services covered by CloudAppEvents.
DSPM for AI and CloudAppEvents
Activity Explorer is the central investigative hub in Data Security Posture Management (DSPM) for AI. It surfaces granular telemetry about AI interactions, capturing prompts, responses, user identities, and sensitive information types (SITs), provided you have the right permissions and policies enabled. Whether the activity originates from Microsoft Copilot, third-party GenAI apps, or custom enterprise agents, Activity Explorer provides the visibility needed to assess risk and take action. Microsoft Purview’s Data Security Posture Management (DSPM) for AI provides visibility, and it’s tightly integrated with Microsoft Sentinel and Defender XDR through the CloudAppEvents table.
The Flow Explained
Event Generation in DSPM for AI
Every AI interaction, whether from Copilot, Fabric, or unmanaged apps like DeepSeek, is captured in the Microsoft 365 Unified Audit Log.
These logs include metadata like user identity, app name, agent name, prompt content, and sensitivity label matches.
Ingestion into CloudAppEvents
The audit data flows into the CloudAppEvents table within Microsoft Defender XDR if you’ve enabled the app connector. Follow this article for detailed steps. This table is part of the advanced hunting schema and includes telemetry for user and object activities across Microsoft 365 and other cloud apps.
Availability in Microsoft Sentinel
Because CloudAppEvents is also exposed in Microsoft Sentinel, customers can query AI-related activities using KQL for threat hunting, incident correlation, and compliance investigations. This enables a unified view across Sentinel and XDR without needing a separate connector.
What You Can Do with CloudAppEvents
Advanced Hunting: Use KQL to search for AI interactions that match specific sensitivity labels, user risk scores, or app types.
Incident Investigation: Correlate AI activity with alerts from Office 365.
Compliance Audits: Track AI activity for audit and compliance purposes.
Custom Dashboards: Visualize AI usage patterns in Power BI, Sentinel dashboards, or Workbooks.
Example KQL Query
Best Practice: Build KQL queries that filter by Application == "Microsoft 365 Copilot" and ActionType == "Interactwithcopilot" to surface relevant events. For example, a simple query to get started analyzing M365 Copilot interactions:
CloudAppEvents
| where Application in ("Microsoft 365", "Microsoft 365 Copilot Chat")
| where ActionType == "Interactwithcopilot"
Known Gaps
The CloudAppEvents table, which ingests AI activity from the Microsoft 365 Unified Audit Log, is incredibly useful for activity hunting. It gives you metadata like:
Timestamp
User identity
App and agent name
Action type (e.g., AIInteraction)
You won’t see the actual prompt or response from the AI interaction, and you won’t get DSPM enrichment such as Sensitive Information Types (SITs) or policy hits. These records only contain message metadata. Navigate to Purview’s DSPM for AI Activity Explorer to review the prompts and responses. While CloudAppEvents is great for identifying patterns and correlating activity across users and apps, it doesn’t give you the full picture needed for deep investigation or compliance auditing. If you need that level of detail, you’ll want to pivot into DSPM for AI’s Activity Explorer, where you can inspect the full interaction including prompt, response, and policy context.
Acknowledgements: Special thanks to Martin Gagné, Principal Group Engineering Manager, for reviewing this blog and providing valuable feedback.
Introducing Microsoft Sentinel data lake
Today, we announced a significant expansion of Microsoft Sentinel’s capabilities through the introduction of Sentinel data lake, now rolling out in public preview. Security teams cannot defend what they cannot see and analyze. With exploding volumes of security data, organizations are struggling to manage costs while maintaining effective threat coverage. Do-it-yourself security data architectures have perpetuated data silos, which in turn have reduced the effectiveness of AI solutions in security operations. With Sentinel data lake, we are taking a major step to address these challenges. Microsoft Sentinel data lake enables a fully managed, cloud-native, data lake that is purposefully designed for security, right inside Sentinel. Built on a modern lake architecture and powered by Azure, Sentinel data lake simplifies security data management, eliminates security data silos, and enables cost-effective long-term security data retention with the ability to run multiple forms of analytics on a single copy of that data. Security teams can now store and manage all security data. This takes the market-leading capabilities of Sentinel SIEM and supercharges it even further. Customers can leverage the data lake for retroactive TI matching and hunting over a longer time horizon, track low and slow attacks, conduct forensics analysis, build anomaly insights, and meet reporting & compliance needs. By unifying security data, Sentinel data lake provides the AI ready data foundation for AI solutions. Let’s look at some of Sentinel data lake’s core features. Simplified onboarding and enablement inside Defender Portal: Customers can easily discover and enable the new data lake from within the Defender portal, either from the banner on the home page or from settings. Setting up a modern data lake now is just a click away, empowering security teams to get started quickly without a complex setup. Simplified security data management: Sentinel data lake works seamlessly with existing Sentinel connectors. It brings together security logs from Microsoft services across M365, Defender, Azure, Entra, Purview, Intune plus third-party sources like AWS, GCP, network and firewall data from 350+ connectors and solutions. The data lake supports Sentinel’s existing table schemas while customers can also create custom connectors to bring raw data into the data lake or transform it during ingestion. In the future, we will enable additional industry-standard schemas. The data lake expands beyond just activity logs by including a native asset store. Critical asset information is added to the data lake using new Sentinel data connectors for Microsoft 365, Entra, and Azure, enabling a single place to analyze activity and asset data enriched with Threat intelligence. A new table management experience makes it easy for customers to choose where to send and store data, as well as set related retention policies to optimize their security data estate. Customers can easily send critical, high-fidelity security data to the analytics tier or choose to send high-volume, low fidelity logs to the new data lake tier. Any data brought into the analytics tier is automatically mirrored into the data lake at no additional charge, making data lake the central location for all security data. Advanced data analysis capabilities over data in the data lake: Sentinel data lake stores all security data in an open format to enable analysts to do multi-modal security analytics on a single copy of data. 
Through the new data lake exploration experience in the Defender portal, customers can leverage Kusto Query Language to analyze historical data using the full power of Kusto. Since the data lake supports the Sentinel table schema, advanced hunting queries can be run directly on the data lake. Customers can also run long-running jobs, either once or on a schedule, that perform complex analysis on historical data for in-depth security insights. Insights generated from the data lake can be easily elevated to the analytics tier and leveraged in Sentinel for threat investigation and response. Additionally, as part of the public preview, we are also releasing a new Sentinel Visual Studio Code extension that enables security teams to easily connect to the same data lake data and use Python notebooks, as well as Spark and ML libraries, to deeply analyze lake data for anomalies. Since the environment is fully managed, there is no compute infrastructure to set up. Customers can just install the Visual Studio Code extension and use AI coding agents like GitHub Copilot to build a notebook and execute it in the managed environment. These notebooks can also be scheduled as jobs, and the resulting insights can be elevated to the analytics tier and leveraged in Sentinel for threat investigation and response. Flexible business model: Sentinel data lake enables customers to separate their data ingestion and retention needs from their security analytics needs, allowing them to ingest and store data cost-effectively and then pay separately when analyzing data for their specific needs. Let’s put this all together and show an example of how a customer can operationalize and derive value from the data lake for retrospective threat intelligence matching in Microsoft Sentinel. Network logs are typically high volume but often contain key insights for detecting the initial entry point of an attack, command-and-control connections, lateral movement, or an exfiltration attempt. Customers can now send these high-volume logs to the data lake tier. Next, they can create a Python notebook that joins the latest threat intelligence from Microsoft Defender Threat Intelligence to scan network logs for any connections to or from a suspicious IP or domain. They can run this notebook as a scheduled job, and any insights can then be promoted to the analytics tier and leveraged to enrich ongoing investigations, hunts, response, or forensic analysis. All this is possible cost-effectively without having to set up any complex infrastructure, enabling security teams to achieve deeper insights. This preview is now rolling out for customers in the Defender portal in our supported regions. To learn more, check out our Mechanics video and our documentation, or talk to your account teams.
Get started today
Join us as we redefine what’s possible in security operations:
Onboard Sentinel data lake: https://aka.ms/sentineldatalakedocs
Explore our pricing: https://aka.ms/sentinel/pricingblog
For the supported regions, please refer to https://aka.ms/sentinel/datalake/geos
Learn more about our MDTI news: http://aka.ms/mdti-convergence
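For readers who prefer to prototype in KQL before building the Python notebook described above, here is a minimal sketch of that retrospective TI-matching scenario. It assumes the ThreatIntelligenceIndicator and CommonSecurityLog tables are populated by your connectors; the look-back windows are placeholders, and in practice you would schedule this as a data lake job and promote any hits to the analytics tier.
// Sketch of retrospective TI matching: join recent IP indicators against months of
// network telemetry. Table names assume the Microsoft Defender Threat Intelligence and
// CEF/firewall connectors are enabled; adjust look-back windows to your retention.
let indicators =
    ThreatIntelligenceIndicator
    | where TimeGenerated > ago(14d) and Active == true
    | where isnotempty(NetworkIP)
    | summarize arg_max(TimeGenerated, ConfidenceScore, Description) by NetworkIP;
CommonSecurityLog
| where TimeGenerated > ago(180d)                 // long-horizon scan over lake-tier data
| where isnotempty(DestinationIP)
| join kind=inner indicators on $left.DestinationIP == $right.NetworkIP
| summarize Hits = count(),
            FirstSeen = min(TimeGenerated),
            LastSeen  = max(TimeGenerated)
          by DestinationIP, ConfidenceScore, Description
| order by Hits desc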
Graph RAG for Security: Insights from a Microsoft Intern
As a software engineering intern at Microsoft Security, I had the exciting opportunity to explore how Graph Retrieval-Augmented Generation (Graph RAG) can enhance data security investigations. This blog post shares my learning journey and insights from working with this evolving technology.
Microsoft Sentinel data lake pricing (preview)
This article describes the pricing for the new Microsoft Sentinel data lake billed components in preview. See What is Microsoft Sentinel data lake (preview) to learn more about the data lake, or the Microsoft Sentinel pricing page for an overview of Microsoft Sentinel pricing.
Pricing
Important: The preview pricing below is provided in USD for reference only, based on pricing in the East US region, and does not include taxes or currency adjustments. Preview pricing is subject to change; see the terms described in our preview terms. Starting August 4, 2025, refer to the Microsoft Sentinel pricing page for current pricing within your relevant region.
SKU | Meter type | Price
Data lake ingestion | Data Processed (GB) | $0.05
Data processing | Data Processed (GB) | $0.10
Data lake storage | Data Stored (GB/Month) | $0.026
Data lake query | Data Analyzed (GB) | $0.005
Advanced data insights | Compute Hour | $0.15
Notes
Prices are estimates only and are not intended as actual price quotes. Actual pricing may vary depending on the type of agreement entered with Microsoft, date of purchase, the currency exchange rate, and taxes which may be applicable. During preview, the data lake tier includes 30 days of free storage. Data processing in the data lake is also available at no cost during this time. For more information on these meters, see Data lake tier.
Fishing for Syslog with Azure Kubernetes and Logstash
Deploy Secure Syslog Collection on Azure Kubernetes Service with Terraform Organizations managing distributed infrastructure face a common challenge: collecting syslog data securely and reliably from various sources. Whether you're aggregating logs from network devices, Linux servers, or applications, you need a solution that scales with your environment while maintaining security standards. This post walks through deploying Logstash on Azure Kubernetes Service (AKS) to collect RFC 5425 syslog messages over TLS. The solution uses Terraform for infrastructure automation and forwards collected logs to Azure Event Hubs for downstream processing. You'll learn how to build a production-ready deployment that integrates with Azure Sentinel, Azure Data Explorer, or other analytics platforms. Solution Architecture The deployment consists of several Azure components working together: Azure Kubernetes Service (AKS): Hosts the Logstash deployment with automatic scaling capabilities Internal Load Balancer: Provides a static IP endpoint for syslog sources within your network Azure Key Vault: Stores TLS certificates for secure syslog transmission Azure Event Hubs: Receives processed syslog data using the Kafka protocol Log Analytics Workspace: Monitors the AKS cluster health and performance Syslog sources send RFC 5425-compliant messages over TLS to the Load Balancer on port 6514. Logstash processes these messages and forwards them to Event Hubs, where they can be consumed by various Azure services or third-party tools. Prerequisites Before starting the deployment, ensure you have these tools installed and configured: Terraform: Version 1.5 or later Azure CLI: Authenticated to your Azure subscription kubectl: For managing Kubernetes resources after deployment Several Azure resources must be created manually before running Terraform, as the configuration references them. This approach provides flexibility in organizing resources across different teams or environments. Step 1: Create Resource Groups Create three resource groups to organize the solution components: az group create --name rg-syslog-prod --location eastus az group create --name rg-network-prod --location eastus az group create --name rg-data-prod --location eastus Each resource group serves a specific purpose: rg-syslog-prod : Contains the AKS cluster, Key Vault, and Log Analytics Workspace rg-network-prod : Holds networking resources (Virtual Network and Subnets) rg-data-prod : Houses the Event Hub Namespace for data ingestion Step 2: Configure Networking Create a Virtual Network with dedicated subnets for AKS and the Load Balancer: az network vnet create \ --resource-group rg-network-prod \ --name vnet-syslog-prod \ --address-prefixes 10.0.0.0/16 \ --location eastus az network vnet subnet create \ --resource-group rg-network-prod \ --vnet-name vnet-syslog-prod \ --name snet-aks-prod \ --address-prefixes 10.0.1.0/24 az network vnet subnet create \ --resource-group rg-network-prod \ --vnet-name vnet-syslog-prod \ --name snet-lb-prod \ --address-prefixes 10.0.2.0/24 The network design uses non-overlapping CIDR ranges to prevent routing conflicts. The Load Balancer subnet will later be assigned the static IP address 10.0.2.100 . 
Step 3: Set Up Event Hub Namespace Create an Event Hub Namespace with a dedicated Event Hub for syslog data: az eventhubs namespace create \ --resource-group rg-data-prod \ --name eh-syslog-prod \ --location eastus \ --sku Standard az eventhubs eventhub create \ --resource-group rg-data-prod \ --namespace-name eh-syslog-prod \ --name syslog The Standard SKU provides Kafka protocol support, which Logstash uses for reliable message delivery. The namespace automatically includes a RootManageSharedAccessKey for authentication. Step 4: Configure Key Vault and TLS Certificate Create a Key Vault to store the TLS certificate: az keyvault create \ --resource-group rg-syslog-prod \ --name kv-syslog-prod \ --location eastus For production environments, import a certificate from your Certificate Authority: az keyvault certificate import \ --vault-name kv-syslog-prod \ --name cert-syslog-prod \ --file certificate.pfx \ --password <pfx-password> For testing purposes, you can generate a self-signed certificate: az keyvault certificate create \ --vault-name kv-syslog-prod \ --name cert-syslog-prod \ --policy "$(az keyvault certificate get-default-policy)" Important: The certificate's Common Name (CN) or Subject Alternative Name (SAN) must match the DNS name your syslog sources will use to connect to the Load Balancer. Step 5: Create Log Analytics Workspace Set up a Log Analytics Workspace for monitoring the AKS cluster: az monitor log-analytics workspace create \ --resource-group rg-syslog-prod \ --workspace-name log-syslog-prod \ --location eastus Understanding the Terraform Configuration With the prerequisites in place, let's examine the Terraform configuration that automates the remaining deployment. The configuration follows a modular approach, making it easy to customize for different environments. Referencing Existing Resources The Terraform configuration begins by importing references to the manually created resources: data "azurerm_client_config" "current" {} data "azurerm_resource_group" "rg-main" { name = "rg-syslog-prod" } data "azurerm_resource_group" "rg-network" { name = "rg-network-prod" } data "azurerm_resource_group" "rg-data" { name = "rg-data-prod" } data "azurerm_virtual_network" "primary" { name = "vnet-syslog-prod" resource_group_name = data.azurerm_resource_group.rg-network.name } data "azurerm_subnet" "kube-cluster" { name = "snet-aks-prod" resource_group_name = data.azurerm_resource_group.rg-network.name virtual_network_name = data.azurerm_virtual_network.primary.name } data "azurerm_subnet" "kube-lb" { name = "snet-lb-prod" resource_group_name = data.azurerm_resource_group.rg-network.name virtual_network_name = data.azurerm_virtual_network.primary.name } These data sources establish connections to existing infrastructure, ensuring the AKS cluster and Load Balancer deploy into the correct network context. 
Deploying the AKS Cluster The AKS cluster configuration balances security, performance, and manageability: resource "azurerm_kubernetes_cluster" "primary" { name = "aks-syslog-prod" location = data.azurerm_resource_group.rg-main.location resource_group_name = data.azurerm_resource_group.rg-main.name dns_prefix = "aks-syslog-prod" default_node_pool { name = "default" node_count = 2 vm_size = "Standard_DS2_v2" vnet_subnet_id = data.azurerm_subnet.kube-cluster.id } identity { type = "SystemAssigned" } network_profile { network_plugin = "azure" load_balancer_sku = "standard" network_plugin_mode = "overlay" } oms_agent { log_analytics_workspace_id = data.azurerm_log_analytics_workspace.logstash.id } } Key configuration choices: System-assigned managed identity: Eliminates the need for service principal credentials Azure CNI in overlay mode: Provides efficient pod networking without consuming subnet IPs Standard Load Balancer SKU: Enables zone redundancy and higher performance OMS agent integration: Sends cluster metrics to Log Analytics for monitoring The cluster requires network permissions to create the internal Load Balancer: resource "azurerm_role_assignment" "aks-netcontrib" { scope = data.azurerm_virtual_network.primary.id principal_id = azurerm_kubernetes_cluster.primary.identity[0].principal_id role_definition_name = "Network Contributor" } Configuring Logstash Deployment The Logstash deployment uses Kubernetes resources for reliability and scalability. First, create a dedicated namespace: resource "kubernetes_namespace" "logstash" { metadata { name = "logstash" } } The internal Load Balancer service exposes Logstash on a static IP: resource "kubernetes_service" "loadbalancer-logstash" { metadata { name = "logstash-lb" namespace = kubernetes_namespace.logstash.metadata[0].name annotations = { "service.beta.kubernetes.io/azure-load-balancer-internal" = "true" "service.beta.kubernetes.io/azure-load-balancer-ipv4" = "10.0.2.100" "service.beta.kubernetes.io/azure-load-balancer-internal-subnet" = data.azurerm_subnet.kube-lb.name "service.beta.kubernetes.io/azure-load-balancer-resource-group" = data.azurerm_resource_group.rg-network.name } } spec { type = "LoadBalancer" selector = { app = kubernetes_deployment.logstash.metadata[0].name } port { name = "logstash-tls" protocol = "TCP" port = 6514 target_port = 6514 } } } The annotations configure Azure-specific Load Balancer behavior, including the static IP assignment and subnet placement. Securing Logstash with TLS Kubernetes Secrets store the TLS certificate and Logstash configuration: resource "kubernetes_secret" "logstash-ssl" { metadata { name = "logstash-ssl" namespace = kubernetes_namespace.logstash.metadata[0].name } data = { "server.crt" = data.azurerm_key_vault_certificate_data.logstash.pem "server.key" = data.azurerm_key_vault_certificate_data.logstash.key } type = "Opaque" } The certificate data comes directly from Key Vault, maintaining a secure chain of custody. 
Logstash Container Configuration The deployment specification defines how Logstash runs in the cluster: resource "kubernetes_deployment" "logstash" { metadata { name = "logstash" namespace = kubernetes_namespace.logstash.metadata[0].name } spec { selector { match_labels = { app = "logstash" } } template { metadata { labels = { app = "logstash" } } spec { container { name = "logstash" image = "docker.elastic.co/logstash/logstash:8.17.4" security_context { run_as_user = 1000 run_as_non_root = true allow_privilege_escalation = false } resources { requests = { cpu = "500m" memory = "1Gi" } limits = { cpu = "1000m" memory = "2Gi" } } volume_mount { name = "logstash-config-volume" mount_path = "/usr/share/logstash/pipeline/logstash.conf" sub_path = "logstash.conf" read_only = true } volume_mount { name = "logstash-ssl-volume" mount_path = "/etc/logstash/certs" read_only = true } } } } } } Security best practices include: Running as a non-root user (UID 1000) Disabling privilege escalation Mounting configuration and certificates as read-only Setting resource limits to prevent runaway containers Automatic Scaling Configuration The Horizontal Pod Autoscaler ensures Logstash scales with demand: resource "kubernetes_horizontal_pod_autoscaler" "logstash_hpa" { metadata { name = "logstash-hpa" namespace = kubernetes_namespace.logstash.metadata[0].name } spec { scale_target_ref { kind = "Deployment" name = kubernetes_deployment.logstash.metadata[0].name api_version = "apps/v1" } min_replicas = 1 max_replicas = 30 target_cpu_utilization_percentage = 80 } } This configuration maintains between 1 and 30 replicas, scaling up when CPU usage exceeds 80%. Logstash Pipeline Configuration The Logstash configuration file defines how to process syslog messages: input { tcp { port => 6514 type => "syslog" ssl_enable => true ssl_cert => "/etc/logstash/certs/server.crt" ssl_key => "/etc/logstash/certs/server.key" ssl_verify => false } } output { stdout { codec => rubydebug } kafka { bootstrap_servers => "${name}.servicebus.windows.net:9093" topic_id => "syslog" security_protocol => "SASL_SSL" sasl_mechanism => "PLAIN" sasl_jaas_config => 'org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://${name}.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=${primary_key};EntityPath=syslog";' codec => "json" } } The configuration: Listens on port 6514 for TLS-encrypted syslog messages Outputs to stdout for debugging (visible in container logs) Forwards processed messages to Event Hubs using the Kafka protocol Deploying the Solution With all components configured, deploy the solution using Terraform: Initialize Terraform in your project directory: terraform init Review the planned changes: terraform plan Apply the configuration: terraform apply Connect to the AKS cluster: az aks get-credentials \ --resource-group rg-syslog-prod \ --name aks-syslog-prod Verify the deployment: kubectl -n logstash get pods kubectl -n logstash get svc kubectl -n logstash get hpa Configuring Syslog Sources After deployment, configure your syslog sources to send messages to the Load Balancer: Create a DNS record pointing to the Load Balancer IP (10.0.2.100). 
For example: syslog.yourdomain.com
Configure syslog clients to send RFC 5425 messages over TLS to port 6514
Install the certificate chain on syslog clients if using a private CA or self-signed certificate
Example rsyslog configuration for a Linux client: *.* @@syslog.yourdomain.com:6514;RSYSLOG_SyslogProtocol23Format
Monitoring and Troubleshooting
Monitor the deployment using several methods:
View Logstash logs to verify message processing: kubectl -n logstash logs -l app=logstash --tail=50
Check autoscaling status: kubectl -n logstash describe hpa logstash-hpa
Monitor in Azure Portal:
Navigate to the Log Analytics Workspace to view AKS metrics
Check Event Hub metrics to confirm message delivery
Review Load Balancer health probes and connection statistics
Security Best Practices
This deployment incorporates several security measures:
TLS encryption: All syslog traffic is encrypted using certificates from Key Vault
Network isolation: The internal Load Balancer restricts access to the virtual network
Managed identities: No credentials are stored in the configuration
Container security: Logstash runs as a non-root user with minimal privileges
For production deployments, consider these additional measures:
Enable client certificate validation in Logstash for mutual TLS
Add Network Security Groups to restrict source IPs
Implement Azure Policy for compliance validation
Enable Azure Defender for Kubernetes
Integration with Azure Services
Once syslog data flows into Event Hubs, you can integrate with various Azure services:
Azure Sentinel: Configure Data Collection Rules to ingest syslog data for security analytics. See the Azure Sentinel documentation for detailed steps.
Azure Data Explorer: Create a data connection to analyze syslog data with KQL queries.
Azure Stream Analytics: Process syslog streams in real-time for alerting or transformation.
Logic Apps: Trigger workflows based on specific syslog patterns or events.
Cost Optimization
To optimize costs while maintaining performance:
Right-size the AKS node pool based on actual syslog volume
Use Azure Spot instances for non-critical environments
Configure Event Hub retention based on compliance requirements
Enable auto-shutdown for development environments
Conclusion
This Terraform-based solution provides a robust foundation for collecting syslog data in Azure. The combination of AKS, Logstash, and Event Hubs creates a scalable pipeline that integrates seamlessly with Azure’s security and analytics services. The modular design allows easy customization for different environments and requirements. Whether you’re collecting logs from a handful of devices or thousands, this architecture scales to meet your needs while maintaining security and reliability. For next steps, consider implementing additional Logstash filters for data enrichment, setting up automated certificate rotation, or expanding the solution to collect other log formats. The flexibility of this approach ensures it can grow with your organization’s logging requirements.
Microsoft Security in Action: Deploying and Maximizing Advanced Identity Protection
As cyber threats grow in sophistication, identity remains the first line of defense. With credentials being a primary target for attackers, organizations must implement advanced identity protection to prevent unauthorized access, reduce the risk of breaches, and maintain regulatory compliance. This blog outlines a phased deployment approach to implement Microsoft’s identity solutions, helping ensure a strong Zero Trust foundation by enhancing security without compromising user experience. Phase 1: Deploy advanced identity protection Step 1: Build your hybrid identity foundation with synchronized identity Establishing a synchronized identity is foundational for seamless user experiences across on-premises and cloud environments. Microsoft Entra Connect synchronizes Active Directory identities with Microsoft Entra ID, enabling unified governance while enabling users to securely access resources across hybrid environments. To deploy, install Microsoft Entra Connect, configure synchronization settings to sync only necessary accounts, and monitor health through built-in tools to detect and resolve sync issues. A well-implemented hybrid identity enables consistent authentication, centralized management, and a frictionless user experience across all environments. Step 2: Enforce strong authentication with MFA and Conditional Access Multi-Factor Authentication (MFA) is the foundation of identity security. By requiring an additional verification step, MFA significantly reduces the risk of account compromise—even if credentials are stolen. Start by enforcing MFA for all users, prioritizing high-risk accounts such as administrators, finance teams, and executives. Microsoft recommends deploying passwordless authentication methods, such as Windows Hello, FIDO2 security keys, and Microsoft Authenticator, to further reduce phishing risks. Next, to balance security with usability, use Conditional Access policies to apply adaptive authentication requirements based on conditions such as user behavior, device health, and risk levels. For example, block sign-ins from non-compliant or unmanaged devices while allowing access from corporate-managed endpoints. Step 3: Automate threat detection with Identity Protection Implementing AI-driven risk detection is crucial to identifying compromised accounts before attackers can exploit them. Start by enabling Identity Protection to analyze user behavior and detect anomalies such as impossible travel logins, leaked credentials, and atypical access patterns. To reduce security risk, evolve your Conditional Access policies with risk signals that trigger automatic remediation actions. For low-risk sign-ins, require additional authentication (such as MFA), while high-risk sign-ins should be blocked entirely. By integrating Identity Protection with Conditional Access, security teams can enforce real-time access decisions based on risk intelligence, strengthening identity security across the enterprise. Step 4: Secure privileged accounts with Privileged Identity Management (PIM) Privileged accounts are prime targets for attackers, making Privileged Identity Management (PIM) essential for securing administrative access. PIM allows organizations to apply the principle of least privilege by granting Just-in-Time (JIT) access, meaning users only receive elevated permissions when needed—and only for a limited time. Start by identifying all privileged roles and moving them to PIM-managed access policies. 
Configure approval workflows for high-risk roles like Global Admin or Security Admin, requiring justification and multi-factor authentication before privilege escalation. Next, to maintain control, enable privileged access auditing, which logs all administrative activities and generates alerts for unusual role assignments or excessive privilege usage. Regular access reviews further ensure that only authorized users retain elevated permissions.
Step 5: Implement self-service and identity governance tools
Start by deploying Self-Service Password Reset (SSPR). SSPR enables users to recover their accounts securely without help desk intervention. Also integrate SSPR with MFA, so that only authorized users regain access. Next, organizations should implement automated Access Reviews on all users, not just privileged accounts, to periodically validate role assignments and remove unnecessary permissions. This helps mitigate privilege creep, where users accumulate excessive permissions over time.
Phase 2: Optimize identity security and automate response
With core identity protection mechanisms deployed, the next step is to enhance security operations with automation, continuous monitoring, and policy refinement.
Step 1: Enhance visibility with centralized monitoring
Start by integrating Microsoft Entra logs with Microsoft Sentinel to gain real-time visibility into identity-based threats. By analyzing failed login attempts, suspicious sign-ins, and privilege escalations, security teams can detect and mitigate identity-based attacks before they escalate (a sample sign-in monitoring query appears at the end of this post).
Step 2: Apply advanced Conditional Access scenarios
To further tighten access control, implement session-based Conditional Access policies. For example, allow read-only access to SharePoint Online from unmanaged devices and block data downloads entirely. By refining policies based on user roles, locations, and device health, organizations can strengthen security while ensuring seamless collaboration.
Phase 3: Enable secure collaboration across teams
Identity security is not just about protection—it also enables secure collaboration across employees, partners, and customers.
Step 1: Secure external collaboration
Collaboration with partners, vendors, and contractors requires secure, managed access without the complexity of managing external accounts. Microsoft Entra External Identities allows organizations to provide seamless authentication for external users while enforcing security policies like MFA and Conditional Access. By enabling lifecycle management policies, organizations can automate external user access reviews and expirations, ensuring least-privilege access at all times.
Step 2: Automate identity governance with entitlement management
To streamline access requests and approvals, Microsoft Entra Entitlement Management lets organizations create pre-configured access packages for both internal and external users. External guests can request access to pre-approved tools and resources without IT intervention. Automated access reviews and expiration policies ensure users retain access only for as long as needed. This reduces administrative overhead while enhancing security and compliance.
Strengthening identity security for the future
Deploying advanced identity protection in a structured, phased approach allows organizations to proactively defend against identity-based threats while maintaining secure, seamless access. Ready to take the next step?
Explore these Microsoft identity security deployment resources:
Microsoft Entra Identity Protection Documentation
Conditional Access Deployment Guide
Privileged Identity Management Configuration Guide
The Microsoft Security in Action blog series is an evolving collection of posts that explores practical deployment strategies, real-world implementations, and best practices to help organizations secure their digital estate with Microsoft Security solutions. Stay tuned for our next blog on deploying and maximizing your investments in Microsoft Threat Protection solutions.
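As referenced in Phase 2, Step 1, here is a minimal KQL sketch for the centralized monitoring scenario, assuming the Microsoft Entra ID (SigninLogs) connector is enabled in Microsoft Sentinel. The thresholds are placeholders to tune for your environment; treat this as a triage aid rather than a finished detection.
// A rough password-spray / brute-force triage signal: accounts with many failed
// sign-ins, or failures from many IPs, in the last 24 hours. Thresholds are placeholders.
SigninLogs
| where TimeGenerated > ago(1d)
| where ResultType != "0"                      // "0" indicates a successful sign-in
| summarize FailedAttempts = count(),
            DistinctIPs    = dcount(IPAddress),
            Apps           = make_set(AppDisplayName, 10)
          by UserPrincipalName
| where FailedAttempts > 20 or DistinctIPs > 5
| order by FailedAttempts desc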