Microsoft Ignite 2025: Top Security Innovations You Need to Know
Security & AI - The Big Story This Year

2025 marks a turning point for cybersecurity. Rapid adoption of AI across enterprises has unlocked innovation but introduced new risks. AI agents are now part of everyday workflows, automating tasks and interacting with sensitive data, creating new attack surfaces that traditional security models cannot fully address. Threat actors are leveraging AI to accelerate attacks, making speed and automation critical for defense. Organizations need solutions that deliver visibility, governance, and proactive risk management for both human and machine identities. Microsoft Ignite 2025 reflects this shift with announcements focused on securing AI at scale, extending Zero Trust principles to AI agents, and embedding intelligent automation into security operations. As a Senior Cybersecurity Solution Architect, I've curated the top security announcements from Microsoft Ignite 2025 to help you stay ahead of evolving threats and understand the latest innovations in enterprise security.

Agent 365: Control Plane for AI Agents

Agent 365 is a centralized platform that gives organizations full visibility, governance, and risk management over AI agents across Microsoft and third-party ecosystems.

Why it matters: Unmanaged AI agents can introduce compliance gaps and security risks. Agent 365 ensures full lifecycle control.

Key Features:
- Complete agent registry and discovery
- Access control and conditional policies
- Visualization of agent interactions and risk posture
- Built-in integration with Defender, Entra, and Purview
- Available via the Frontier Program

Microsoft Agent 365: The control plane for AI agents | Deep dive blog on Agent 365

Entra Agent ID: Zero Trust for AI Identities

Microsoft Entra is the identity and access management suite (covering Azure AD, permissions, and secure access). Entra Agent ID extends Zero Trust identity principles to AI agents, ensuring they are governed like human identities.

Why it matters: Unmanaged or over-privileged AI agents can create major security gaps. Agent ID enforces identity governance on AI agents and reduces automation risks.

Key Features:
- Unique identities for AI agents
- Lifecycle governance and sponsorship for agents
- Conditional Access policies applied to agent activity
- Integration with open SDKs/APIs for third-party platforms

Microsoft Entra Agent ID Overview | Entra Ignite 2025 announcements | Public Preview details

Security Copilot Expansion

Security Copilot is Microsoft's AI assistant for security teams, now expanded to automate threat hunting, phishing triage, identity risk remediation, and compliance tasks.

Why it matters: Security teams face alert fatigue and resource constraints. Copilot accelerates response and reduces manual effort.

Key Features:
- 12 new Microsoft-built agents across Defender, Entra, Intune, and Purview
- 30+ partner-built agents available in the Microsoft Security Store
- Automates threat hunting, phishing triage, identity risk remediation, and compliance tasks
- Included for Microsoft 365 E5 customers at no extra cost

Security Copilot inclusion in Microsoft 365 E5 | Security Copilot Ignite blog

Security Dashboard for AI

A unified dashboard for CISOs and risk leaders to monitor AI risks, aggregate signals from Microsoft security services, and assign tasks via Security Copilot - included at no extra cost.

Why it matters: Provides a single pane of glass for AI risk management, improving visibility and decision-making.
Key Features:
- Aggregates signals from Entra, Defender, and Purview
- Supports natural language queries for risk insights
- Enables task assignment via Security Copilot

Ignite Session: Securing AI at Scale | Microsoft Security Blog

Microsoft Defender Innovations

Microsoft Defender serves as Microsoft's CNAPP solution, offering comprehensive, AI-driven threat protection that spans endpoints, email, cloud workloads, and SIEM/SOAR integrations.

Why it matters: Modern attacks target multi-cloud environments and software supply chains. These innovations provide proactive defense, reduce breach risks before exploitation, and extend protection beyond Microsoft ecosystems, helping organizations secure endpoints, identities, and workloads at scale.

Key Features:
- Predictive Shielding: Proactively hardens attack paths before adversaries pivot.
- Automatic Attack Disruption: Extended to AWS, Okta, and Proofpoint via Sentinel.
- Supply Chain Security: Defender for Cloud now integrates with GitHub Advanced Security.

What's new in Microsoft Defender at Ignite | Defender for Cloud innovations

Global Secure Access & AI Gateway

Part of Microsoft Entra's secure access portfolio, providing secure connectivity and inspection for web and AI traffic.

Why it matters: Protects against lateral movement and AI-specific threats while maintaining secure connectivity.

Key Features:
- TLS inspection, URL/file filtering
- AI prompt injection protection
- Private access for domain controllers to prevent lateral movement attacks

Learn about Secure Web and AI Gateway for agents | Microsoft Entra: What's new in secure access on the AI frontier

Purview Enhancements

Microsoft Purview is the data governance and compliance platform, ensuring sensitive data is classified, protected, and monitored.

Why it matters: Ensures sensitive data remains protected and compliant in AI-driven environments.

Key Features:
- AI Observability: Monitor agent activities and prevent sensitive data leakage.
- Compliance Guardrails: Communication compliance for AI interactions.
- Expanded DSPM: Data Security Posture Management for AI workloads.

Announcing new Microsoft Purview capabilities to protect GenAI agents

Intune Updates

Microsoft Intune is a cloud-based endpoint management solution that secures apps, devices, and data across platforms. It simplifies endpoint security management and accelerates response to device risks using AI.

Why it matters: Endpoint security is critical as organizations manage diverse devices in hybrid environments. These updates reduce complexity, speed up remediation, and leverage AI-driven automation, helping security teams stay ahead of evolving threats.

Key Features:
- Security Copilot agents automate policy reviews, device offboarding, and risk-based remediation.
- Enhanced remote management for the Windows Recovery Environment (WinRE).
- The Policy Configuration Agent in Intune lets IT admins create and validate policies using natural language.

What's new in Microsoft Intune at Ignite | Your guide to Intune at Ignite

Closing Thoughts

Microsoft Ignite 2025 signals the start of an AI-driven security era. From visibility and governance for AI agents to Zero Trust for machine identities, automation in security operations, and stronger compliance for AI workloads, these innovations empower organizations to anticipate threats, simplify governance, and accelerate secure AI adoption without compromising compliance or control.

Full Coverage: Microsoft Ignite 2025 Book of News

Empowering organizations with integrated data security: What's new in Microsoft Purview
Today, data moves across clouds, apps, and devices at unprecedented speed, often outside the visibility of siloed legacy tools. The rise of autonomous agents, generative AI, and distributed data ecosystems means that traditional perimeter-based security models are no longer sufficient. Even though companies are spending more than $213 billion globally on security, they still face several persistent challenges:
- Fragmented tools don't integrate well and leave customers without full visibility of their data security risks
- The growing use of AI in the workplace is creating new data risks for companies to manage
- The shortage of skilled cybersecurity professionals is making it difficult to accomplish data security objectives

Microsoft is a global leader in cloud, productivity, and security solutions. Microsoft Purview benefits from this breadth of offerings, integrating seamlessly across Microsoft 365, Azure, Microsoft Fabric, and other Microsoft platforms, while also working in harmony with complementary security tools. Unlike fragmented point solutions, Purview delivers an end-to-end data security platform built into the productivity and collaboration tools organizations already rely on. This deep understanding of data within Microsoft environments, combined with continually improving external data risk detections, allows customers to simplify their security stack, increase visibility, and act on data risks more quickly. At Ignite, we're introducing the next generation of data security, delivering advanced protection and operational efficiency so security teams can move at business speed while maintaining control of their data.

Go beyond visibility into action, across your data estate

Many customers today lack a comprehensive view of how to holistically address data security risks and properly manage their data security posture. To help customers strengthen data security across their data estate, we are excited to announce the new, enhanced Microsoft Purview Data Security Posture Management (DSPM). This new AI-powered DSPM experience unifies current Purview DSPM and DSPM for AI capabilities to create a central entry point for data security insights and controls, from which organizations can take action to continually improve their data security posture and prioritize risks. The new capabilities in the enhanced DSPM experience are:
- Outcome-based workflows: Choose a data security objective and see related metrics, risk patterns, a recommended action plan, and its impact, going from insight to action.
- Expanded coverage and remediation in Data Risk Assessments: Conduct item-level analysis with new remediation actions such as bulk disabling of overshared SharePoint links.
- Out-of-box posture reports: Uncover data protection gaps and track security posture improvements with out-of-box reports that provide rich context on label usage, auto-labeling effectiveness, posture drift through label transitions, and DLP policy activities.
- AI Observability: Surface an organization's agent inventory with assigned agent risk levels and agent posture metrics based on agentic interactions with the organization's data.
- New Security Copilot agent: Accelerate the discovery and analysis of sensitive data to uncover hidden risks across files, emails, and messages.
- Gain visibility of non-Microsoft data within your data estate: Enable a unified view of data risks by gaining visibility into Salesforce, Snowflake, Google Cloud Platform, and Databricks, available through integrations with external partners via Microsoft Sentinel.

These DSPM enhancements will be available in public preview in the coming weeks. Learn more in our blog dedicated to the announcement of the new Microsoft Purview DSPM.

Together, these innovations reflect a larger shift: data security is no longer about silos. It is about unified visibility and control everywhere data lives, and a comprehensive understanding of the data estate to detect and prevent data risks. Organizations trust Microsoft for their productivity and security platforms, but their footprint spans third-party data environments too. That's why Purview continues to expand protection beyond Microsoft environments. In addition to bringing third-party data into DSPM, we are also expanding auto-labeling to three new Data Map sources, adding to the data sources we previously announced. Currently in public preview, the new sources include Snowflake, SQL Server, and Amazon S3. Once connected to Purview, admins gain an at-a-glance view of all data sources and can automatically apply sensitivity labels, enforcing consistent security policies without manual effort. This helps organizations discover sensitive information at scale, reduce the risk of data exposure, and ensure safer AI adoption, all while simplifying governance through centralized policy management and visibility across their entire data estate.

Enable AI adoption and prevent data oversharing

As organizations adopt more autonomous agents, new risks emerge, such as unsupervised data access and creation, cascading agent interactions, and unclear data activity accountability. Beyond the AI Observability view in DSPM, which details the inventory and risk level of agents, Purview is expanding its industry-leading data security and compliance capabilities to secure and govern both agents that inherit users' policies and controls and agents that have their own unique IDs, policies, and controls. This includes agent types across Microsoft 365 Copilot, Copilot Studio, Microsoft Foundry, and third-party platforms. Key enhancements include:
- Extension of Purview Information Protection and Data Loss Prevention policies to autonomous agents: Scope autonomous agents with an Agent ID into the same Purview policies that apply to users across Microsoft 365 apps, including Exchange, SharePoint, and Teams.
- Microsoft Purview Insider Risk Management for Agents: Dedicated indicators and behavioral analytics flag specific risky agent activities and enable proactive investigation by assigning a risk level to each agent.
- Extension of Purview data compliance capabilities to agent interactions: Microsoft Purview Communication Compliance, Data Lifecycle Management, Audit, and eDiscovery extend to agent interactions, supporting responsible use, secure retention, and agentic accountability.
- Purview SDK embedded in the Agent Framework SDK: The Purview SDK embedded in the Agent Framework SDK enables developers to integrate enterprise-grade security, compliance, and governance into AI agents. It delivers automatic data classification, prevents sensitive data leaks and oversharing, and provides visibility and control for regulatory compliance, empowering secure adoption of AI agents in complex environments.
- Purview integration with Foundry: Purview is now enabled within Foundry, allowing Foundry admins to activate Microsoft Purview on their subscription. Once enabled, interaction data from all apps and agents flows into Purview for centralized compliance, governance, and posture management of AI data.
- Azure AI Search honors Purview labels and policies: Azure AI Search now ingests Microsoft Purview sensitivity labels and enforces the corresponding protection policies through built-in indexers (SharePoint, OneLake, Azure Blob, ADLS Gen2). This ensures secure, policy-aligned search over enterprise data, enabling agentic RAG scenarios where only authorized documents are returned or sent to LLMs, preventing oversharing and aligning with enterprise data protection standards.
- Extension of Purview Data Loss Prevention policies to Copilot Mode in Edge for Business: This week, Microsoft Edge for Business introduced Copilot Mode, transforming the browser into a proactive, agentic partner. This AI-assisted browsing will honor the user's existing DLP protections, such as endpoint DLP policies that prevent pasting to sensitive service domains or summarizing sensitive page content.

Learn more in our blog dedicated to the announcements of Microsoft Purview for Agents.

New capabilities in Microsoft Purview, now in public preview, to help prevent data oversharing and leakage through AI include:
- Expansion of Microsoft Purview Data Loss Prevention (DLP) for Microsoft 365 Copilot: Previously, we introduced DLP for Microsoft 365 Copilot to prevent labeled files and emails from being used as grounding data for responses, reducing the risk of oversharing. Today, we are expanding DLP for Microsoft 365 Copilot to safeguard prompts containing sensitive data. This real-time control helps organizations mitigate data leakage and oversharing risks by preventing Microsoft 365 Copilot, Copilot Chat, and Microsoft 365 Copilot agents from returning a response when prompts contain sensitive data, or from using that sensitive data for grounding in Microsoft 365 or on the web. For example, if a user asks, "Can you tell me more about my customer based on their address: 1234 Main Street," Copilot will inform the user that organizational policies prevent it from responding to their prompt and will block any web queries to Bing for "1234 Main Street."
- Enhancements to inline data protection in Edge for Business: Earlier this year, we introduced inline data protection in Edge for Business to prevent sensitive data from being leaked to unmanaged consumer AI apps, starting with ChatGPT, Google Gemini, and DeepSeek. We are not only making this capability generally available for the initial set of AI apps, but also expanding it to 30+ new apps in public preview and supporting file upload activity in addition to text. This addresses potential data leakage that can occur when employees send organizational files or data to consumer AI apps for help with work-related tasks, such as document creation or code reviews.
- Inline data protection for the network: For user activity outside of the browser, we are also enabling inline data protection at the network layer. Earlier this year, we introduced integrations with supported secure service edge (SSE) providers to detect when sensitive data is shared to unmanaged cloud locations, such as consumer AI apps or personal cloud storage, even if sharing occurs outside of the Edge browser.
In addition to the discovery of sensitive data, these integrations now support protection controls that block sensitive data from leaving a user device and reaching an unmanaged cloud service or application. These capabilities are now generally available through the Netskope and iboss integrations, and inline data discovery is available in public preview through the Palo Alto Networks integration.
- Extension of Purview protection to on-device AI: Purview DLP policies now extend to the Recall experience on Copilot+ PC devices to prevent sensitive organizational data from being undesirably captured and retained. Admins can now block Recall snapshots based on sensitivity label or the presence of Purview sensitive information types (SITs) in a document open on the device, or simply honor and display the sensitivity labels of content captured in the Recall snapshot library. For example, a DLP policy can be configured to prevent Recall from taking snapshots of any document labeled "Highly Confidential," or of a product design file that contains intellectual property. Learn more in the Windows IT Pro blog.

Best-in-class data security for Microsoft environments

Microsoft Purview sets the standard for data security within its own ecosystem. Organizations benefit from unified security policies and seamless compliance controls that are purpose-built for Microsoft environments, ensuring sensitive data remains secure without compromising productivity. We are also constantly investing in expanding protections and controls to Microsoft collaboration tools, including SharePoint, Teams, Fabric, Azure, and across Microsoft 365.
- On-demand classification adds meeting transcript coverage and new enhancements: To help organizations protect sensitive data at rest, on-demand classification now extends to meeting transcripts, enabling the discovery and classification of sensitive information shared in existing recorded meeting transcripts. Once classified, admins can set up DLP or Data Lifecycle Management (DLM) policies to properly protect and retain this data according to organizational policies. This is now generally available, empowering organizations to strengthen data security, streamline compliance, and ensure even sensitive information at rest is discovered, protected, and governed more effectively. In addition, on-demand classification for endpoints is also generally available, giving organizations even broader coverage across their data estate.
- New usage posture and consumption reports: We're introducing new usage posture and consumption reports, now in public preview. Admins can quickly identify compliance gaps, optimize Purview seat assignments, and understand how consumptive features are driving spend. With granular insights by feature, policy, and user type, admins can analyze usage trends, forecast costs, and toggle consumptive features on and off directly, all from a unified dashboard. The result: stronger compliance, easier cost management, and better alignment of Purview investments to your organization's needs.
- Enable DLP and Copilot protection with extended SharePoint permissions: Extended SharePoint permissions, now generally available, make it simple to protect and manage files in SharePoint by allowing library owners to apply a default sensitivity label to an entire document library. When this is enabled, the label is dynamically enforced across all unprotected files in the library, both new and existing.
Downloaded files are automatically encrypted, and access is managed based on SharePoint site membership, giving organizations powerful, scalable access control. With extended SharePoint permissions, teams can consistently apply labels at scale, automate DLP policy enforcement, and confidently deploy Copilot, all without the need to manually label files. Whether for internal teams, external partners, or any group where permissions need to be tightly controlled, extended SharePoint permissions streamline protection and compliance in SharePoint.
- Network file filtering via Entra GSA integration: We are integrating Purview with Microsoft Entra to enable file filtering at the network layer. These filtering controls help prevent sensitive content from being shared to unauthorized services based on properties such as sensitivity labels or the presence of Purview sensitive information types (SITs) within the file. For example, Entra admins can now create a file policy to block files containing credit card numbers from passing through the network. Learn more here.
- Expanded protection scenarios enabled by Purview endpoint DLP: We are introducing several noteworthy enhancements to Purview endpoint DLP to protect an even broader range of exfiltration or leakage scenarios from organizational devices, without hindering user productivity. These enhancements, initially available on Windows devices, include:
  - Extending protection to unsaved files: Files no longer need to be saved to disk to be protected under a DLP policy. With this improvement, unsaved files undergo a point-in-time evaluation to detect the presence of sensitive data and apply the appropriate protections.
  - Expanded support for removable media: Admins can now prevent data exfiltration to a broader list of removable media devices, including iPhones, Android devices, and CD-ROMs.
  - Protection for Outlook attachments downloaded to removable media or network shares: Admins can now prevent exfiltration of email attachments when users attempt to drag and drop them onto USB devices, network shares, and other removable media.
- Expanded capability support for macOS: In addition to the new endpoint DLP protections introduced above, we are also expanding the following capabilities, already available for Windows devices, to devices running macOS: expanded file type coverage to 110+ file types, blanket protections for non-Office or PDF file types, the addition of "allow" and "off" policy actions, device-based policy scoping to target specific devices or device groups (or apply exclusions), and integration with Power Automate.
- Manageability and alert investigation improvements in Purview DLP: Lastly, we are also introducing device manageability and alert investigation improvements in Purview DLP to simplify the day-to-day experience for admins. These improvements include:
  - Reporting and troubleshooting improvements for devices onboarded to endpoint DLP: We are introducing additional tools for admins to build confidence in their Purview DLP protections for endpoint devices. These enhancements, designed to maximize reliability and enable better troubleshooting of potential issues, include near real-time reporting of policy syncs initiated on devices and policy health insights into devices' compliance status and readiness to receive policies.
  - Enhancements to always-on diagnostics: Earlier this year, we introduced always-on diagnostics to automatically collect logs from Windows endpoint devices, eliminating the need to reproduce issues when submitting an investigation request or raising a support ticket. This capability is expanding so that admins now have on-demand access to diagnostic logs from users' devices without interrupting their work. This further streamlines the issue resolution process for DLP admins while minimizing end user disruption.
  - Simplified DLP alert investigation, including easier navigation to crucial alert details in just one click, and the ability to aggregate alerts originating from a single user for more streamlined investigation and response. For organizations that manage Purview DLP alerts within their broader incident management process in Microsoft Defender, we are pleased to share that alert severities will now be synced between the Purview portal and the Defender portal.
- Expanding enterprise-grade data security to small and medium businesses (SMBs): Purview is extending its reach beyond large enterprises by introducing a new add-on for Microsoft 365 Business Premium, bringing advanced data security and compliance capabilities to SMBs. The Microsoft Purview suite for Business Premium brings the same enterprise-grade protection, such as sensitivity labeling, data loss prevention, and compliance management, to organizations with up to 300 users. This enables SMBs to operate with the same level of compliance and data security as large enterprises, all within a simplified, cost-effective experience built for smaller teams.

Stepping into the new era of technology with AI-powered data security

Globally, there is a shortage of skilled cybersecurity professionals. At the same time, the volume of alerts and incidents keeps growing. By infusing AI into data security solutions, admins can scale their impact: reducing manual workloads enhances operational effectiveness and strengthens overall security posture, allowing defenders to stay ahead. In 2025, 82% of organizations have developed plans to use GenAI to fortify their data security programs.

With its generative AI-powered investigative capabilities, Microsoft Purview Data Security Investigations (DSI) is transforming and scaling how data security admins analyze incident-related data. Since its release to public preview in April, the product has made a big impact with customers like Toyota Motors North America. "Data Security Investigations eliminates manual work, automating investigations in minutes. It's designed to handle the scale and complexity of large data sets by correlating user activity with data movement, giving analysts a faster, more efficient path to meaningful insights," said solution architect Dan Garawecki.

This Ignite, we are introducing several new capabilities in DSI, including:
- DSI integration with DSPM: View proactive, summary insights and launch a Data Security Investigation directly from DSPM. This integration brings the full power of DSI analysis to your fingertips, enabling admins to drill into data risks surfaced in DSPM with speed and precision.
- Enhancements to DSI's AI-powered deep content analysis: Admins can now add context before AI analysis for higher-quality, more efficient investigations. A new AI-powered natural language search function lets admins locate specific files using keywords, metadata, and embeddings.
Vector search and content categorization enhancements allow admins to better identify risky assets. Together, these enhancements equip admins with sharper, faster tools for identifying buried data risks, both proactively and reactively.
- DSI cost transparency report and in-product estimator: To help customers manage pay-as-you-go billing, DSI is adding a lightweight in-product cost estimator and a transparency report.

We are also expanding Security Copilot in Microsoft Purview with AI-powered capabilities that strengthen both the protection and investigation of sensitive data by introducing the Data Security Posture Agent and the Data Security Triage Agent.
- Data Security Posture Agent: Available in preview, the new Data Security Posture Agent uses LLMs to help admins answer "Is this happening?" across thousands of files, delivering fast, intent-driven discovery and risk profiling even when explicit keywords are absent. Integrated with Purview DSPM, it surfaces actionable insights and improves compliance, helping teams reduce risk and respond to threats before they escalate.
- Data Security Triage Agent: Alongside this, the Data Security Triage Agent, now generally available, enables analysts to efficiently triage and remediate the most critical alerts, automating incident response and surfacing the threats that matter most.

Together, these agentic capabilities convert high-volume signals into consistent, closed-loop action, accelerate investigations and remediation, reduce policy-violation dwell time, and improve audit readiness, all natively integrated within Microsoft 365 and Purview so security teams can scale outcomes without scaling headcount. To make the agents easily accessible and help teams get started more quickly, we are excited to announce that Security Copilot will be available to all Microsoft 365 E5 customers. Rollout starts today for existing Security Copilot customers with Microsoft 365 E5 and will continue in the coming months for all Microsoft 365 E5 customers. Customers will receive advance notice before activation. Learn more: https://aka.ms/SCP-Ignite25

Data security that keeps innovating alongside you

As we look ahead, Microsoft Purview remains focused on empowering organizations with scalable solutions that address the evolving challenges of data security. While we deliver best-in-class security for Microsoft environments, we recognize that today's organizations rarely operate in a single cloud; many businesses rely on a diverse mix of platforms to power their operations and innovation. That's why we have been extending Purview's capabilities beyond Microsoft environments, helping customers protect data across their entire digital estate.

In a world where data is the lifeblood of innovation, securing it must be more than a checkbox; it must be a catalyst for progress. As organizations embrace AI, autonomous agents, and increasingly complex digital ecosystems, Microsoft Purview empowers them to move forward with confidence. By unifying visibility, governance, and protection across the entire data estate, Purview transforms security from a fragmented challenge into a strategic advantage. The future of data security isn't just about defense; it's about enabling bold, responsible innovation at scale. Let's build that future together.

Microsoft Security Store: Now Generally Available
When we launched the Microsoft Security Store in public preview on September 30, our goal was simple: make it easier for organizations to discover, purchase, and deploy trusted security solutions and AI agents that integrate seamlessly with Microsoft Security products. Today, Microsoft Security Store is generally available, with three major enhancements:
- Embedded where you work: Security Store is now built into Microsoft Defender, featuring SOC-focused agents, and into Microsoft Entra for Verified ID and External ID scenarios like fraud protection. By bringing these capabilities into familiar workflows, organizations can combine Microsoft and partner innovation to strengthen security operations and outcomes.
- Expanded catalog: Security Store now offers more than 100 third-party solutions, including advanced fraud prevention, forensic analysis, and threat intelligence agents.
- Security services available: Partners can now list and sell services such as managed detection and response and threat hunting directly through Security Store.

Real-World Impact: What We Learned in Public Preview

Thousands of customers explored Microsoft Security Store and tried a growing catalog of agents and SaaS solutions. While we are at the beginning of our journey, customer feedback shows these solutions are helping teams apply AI to improve security operations and reduce manual effort.

Spairliners, a cloud-first aviation services joint venture between Air France and Lufthansa, strengthened identity and access controls by deploying Glueckkanja's Privileged Admin Watchdog to enforce just-in-time access. "Using the Security Store felt easy, like adding an app in Entra. For a small team, being able to find and deploy security innovations in minutes is huge." - Jonathan Mayer, Head of Innovation, Data and Quality

GTD, a Chilean technology and telecommunications company, is testing a variety of agents from the Security Store: "As any security team, we're always looking for ways to automate and simplify our operations. We are exploring and applying the world of agents more and more each day, so having the Security Store is convenient; it's easy to find and deploy agents. We're excited about the possibilities for further automation and integrations into our workflows, like event-triggered agents, deeper Outlook integration, and more." - Jonathan Lopez Saez, Cybersecurity Architect

Partners echoed the momentum they are seeing with the Security Store:

"We're excited by the early momentum with Security Store. We've already received multiple new leads since going live, including one in a new market for us, and we have multiple large deals we're looking to drive through Security Store this quarter." - Kim Brault, Head of Alliances, Delinea

"Partnering with Microsoft through the Security Store has unlocked new ways to reach enterprise customers at scale. The store is pivotal as the industry shifts toward AI, enabling us to monetize agents without building our own billing infrastructure. With the new embedded experience, our solutions appear at the exact moment customers are looking to solve real problems. And by working with Microsoft's vetting process, we help provide customers confidence to adopt AI agents." - Milan Patel, Co-founder and CEO, BlueVoyant

"Agents and the Microsoft Security Store represent a major step forward in bringing AI into security operations.
We've turned years of service experience into agentic automations, and it's resonating with customers; we've been positively surprised by how quickly they're adopting these solutions and embedding our automated agentic expertise into their workflows." - Christian Kanja, Founder and CEO of glueckkanja

New at GA: Embedded in Defender and Entra - Security Solutions Right Where You Work

Microsoft Security Store is now embedded in the Defender and Entra portals with partner solutions that extend your Microsoft Security products. By placing Security Store in front of security practitioners, it's now easier than ever to use the best of partner and Microsoft capabilities in combination to drive stronger security outcomes. As Dorothy Li, Corporate Vice President of Security Copilot and Ecosystem, put it, "Embedding the Security Store in our core security products is about giving customers access to innovative solutions that tap into the expertise of our partners. These solutions integrate with Microsoft Security products to complete end-to-end workflows, helping customers improve their security."

Within the Microsoft Defender portal, SOC teams can now discover Copilot agents from both Microsoft and partners in the embedded Security Store and run them all from a single, familiar interface. Let's look at an example of how these agents might help during a day in the life of a SOC analyst. The day starts with Watchtower (BlueVoyant) confirming Sentinel connectors and Defender sensors are healthy, so investigations begin with full visibility. As alerts arrive, the Microsoft Defender Copilot Alert Triage Agent groups related signals, extracts key evidence, and proposes next steps; identity-related cases are then validated with Login Investigator (adaQuest), which baselines recent sign-in behavior and device posture to cut false positives. To stay ahead of emerging campaigns, the analyst checks the Microsoft Threat Intelligence Briefing Agent for concise threat rundowns tied to relevant indicators, informing hunts and temporary hardening. When HR flags an offboarding, GuardianIQ (People Tech Group) correlates activity across Entra ID, email, and files to surface possible data exfiltration with evidence and risk scores. After containment, Automated Closing Comment Generator (Ascent Global Inc.) produces clear, consistent closure notes from Defender incident details, keeping documentation tight without hours of writing. Together, these Microsoft and partner agents maintain platform health, accelerate triage, sharpen identity decisions, add timely threat context, reduce insider risk blind spots, and standardize reporting, all inside the Defender portal. You can read more about the new agents available in the Defender portal in this blog.

In addition, Security Store is now integrated into Microsoft Entra, focused on identity-centric solutions. Identity admins can discover and activate partner offerings for DDoS protection, intelligent bot defense, and government ID-based verification for account recovery, all within the Entra portal. With these capabilities, Microsoft Entra delivers a seamless, multi-layered defense that combines built-in identity protection with best-in-class partner technologies, making it easier than ever for enterprises to strengthen resilience against modern identity threats. Learn more here.

Levent Besik, VP of Microsoft Entra, shared: "This sets a new benchmark for identity security and partner innovation at Microsoft. Attacks on digital identities can come from anywhere.
True security comes from defense in depth, layering protection across the entire user journey so every interaction, from the first request to identity recovery, stays secure. This launch marks only the beginning; we will continue to introduce additional layers of protection to safeguard every aspect of the identity journey."

New at GA: Services Added to a Growing Catalog of Agents and SaaS

For the first time, partners can offer their security services directly through the Security Store. Customers can now find, buy, and activate managed detection and response, threat hunting, and other expert services, making it easier to augment internal teams and scale security operations. Every listing carries an MXDR Verification certifying that the partner provides next-generation advanced threat detection and response services. You can browse all the services available at launch here, and read about some of our exciting partners below:

"Avanade is proud to be a launch partner for professional services in the Microsoft Security Store. As a leading global Microsoft Security Services provider, we're excited to make our offerings easier to find and help clients strengthen cyber defenses faster through this streamlined platform." - Jason Revill, Avanade Global Security Technology Lead

"ProServeIT partnering with Microsoft to have our offers in the Microsoft Security Store helps ProServeIT protect our joint customers and allows us to sell better with Microsoft sellers. It shows customers how our technology and services support each other to create a safe and secure platform." - Eric Sugar, President

"Having Reply's security services showcased in the Microsoft Security Store is a significant milestone for us. It amplifies our ability to reach customers at the exact point where they evaluate and activate Microsoft security solutions, ensuring our offerings are visible alongside Microsoft's trusted technologies."

Notable New Selections

Since public preview, the Security Store catalog has grown significantly. Customers can now choose from over 100 third-party solutions, including 60+ SaaS offerings and 50+ Security Copilot agents, with new additions every week. Recent highlights include Cisco Duo and Rubrik:
- Cisco Duo IAM delivers comprehensive, AI-driven identity protection combining MFA, SSO, passwordless, and unified directory management. Duo IAM integrates across the Microsoft Security suite in just a few clicks: it enhances Entra ID with risk-based authentication and unified access policy management across cloud and on-premises applications, works with Intune for device compliance and access enforcement, and feeds Sentinel for centralized security monitoring and threat detection through ingestion of critical logs covering authentication events, administrator actions, and risk-based alerts, providing real-time visibility across the identity stack.
- Rubrik's data security platform delivers complete cyber resilience across enterprise, cloud, and SaaS alongside Microsoft. Through the Microsoft Sentinel integration, Rubrik's data management capabilities are combined with Sentinel's security analytics to accelerate issue resolution, enabling unified visibility and streamlined responses. Furthermore, Rubrik empowers organizations to reduce identity risk and ensure operational continuity with real-time protection, unified visibility, and rapid recovery across Microsoft Active Directory and Entra ID infrastructure.

The Road Ahead

This is just the beginning.
Microsoft Security Store will continue to make it even easier for customers to improve their security outcomes by tapping into the innovation and expertise of our growing partner ecosystem. The momentum we're seeing is clear: customers are already gaining real efficiencies and stronger outcomes by adopting AI-powered agents. As we work together with partners, we'll unlock even more automation, deeper integrations, and new capabilities that help security teams move faster and respond smarter. Explore the Security Store today to see what's possible. For a more detailed walk-through of the capabilities, read our previous public preview Tech Community post. If you're a partner, now is the time to list your solutions and join us in shaping the future of security.

Planning your move to Microsoft Defender portal for all Microsoft Sentinel customers
In November 2023, Microsoft announced our strategy to unify security operations by bringing the best of XDR and SIEM together. Our first step was bringing Microsoft Sentinel into the Microsoft Defender portal, giving teams a single, comprehensive view of incidents, reducing queue management, enriching threat intel, streamlining response, and enabling SOC teams to take advantage of Gen AI in their day-to-day workflow. Since then, considerable progress has been made, with thousands of customers using this new unified experience; to enhance the value customers gain when using Sentinel in the Defender portal, multi-tenancy and multi-workspace support was added to help customers with more sophisticated deployments.

Our mission is to unify security operations by bringing all your data, workflows, and people together to unlock new capabilities and drive better security outcomes. As a strong example of this, last year we added extended posture management, delivering powerful posture insights to the SOC team. This integration helps build a closed-loop feedback system between your pre- and post-breach efforts. Exposure Management is just one example. By bringing everything together, we can take full advantage of AI and automation to shift from a reactive to a predictive SOC that anticipates threats and proactively takes action to defend against them. Beyond Exposure Management, Microsoft has been constantly innovating in the Defender experience, adding not just SIEM but also Security Copilot. The Sentinel experience within the Defender portal is the focus of our innovation energy and where we will continue to add advanced Sentinel capabilities going forward.

Onboarding to the new unified experience is easy and doesn't require a typical migration - just a few clicks and permissions. Customers can continue to use Sentinel in the Azure portal while it remains available, even after choosing to transition. Today, we're announcing that we are moving to the next phase of the transition, with a target to retire the Azure portal for Microsoft Sentinel by July 1, 2026. Customers not yet using the Defender portal should plan their transition accordingly.

"Really amazing to see that coming, because cross querying with tables in one UI is really cool! Amazing, big step forward to the unified [Defender] portal." - Glueckkanja AG

"The biggest benefit of a unified security operations solution (Microsoft Sentinel + Microsoft Defender XDR) has been the ability to combine data in Defender XDR with logs from third party security tools. Another advantage developed has been to eliminate the need to switch between Defender XDR and Microsoft Sentinel portals, now having a single pane of glass, which the team has been wanting for some years." - Robel Kidane, Group Information Security Manager, Renishaw PLC

Delivering the SOC of the future

Unifying threat protection, exposure management, and security analytics capabilities in one pane of glass not only streamlines the user experience, but also enables Sentinel customers to realize security outcomes more efficiently:
- Analyst efficiency: A single portal reduces context switching, simplifies workflows, reduces training overhead, and improves team agility.
- Integrated insights: SOC-focused case management, threat intelligence, incident correlation, advanced hunting, exposure management, and a prioritized incident queue enriched with business and sensitivity context, enabling faster, more informed detection and response across all products.
- SOC optimization: Security controls that can be adjusted as threats and business priorities change to control costs and provide better coverage and utilization of data, thus maximizing ROI from the SIEM.
- Accelerated response: AI-driven detection and response that reduces mean time to respond (MTTR) by 30%, increases security response efficiency by 60%, and enables embedded Gen AI and agentic workflows.

What's next: Preparing for the retirement of the Sentinel experience in the Azure portal

Microsoft is committed to supporting every single customer in making that transition over the next 12 months. Beginning July 1, 2026, Sentinel users will be automatically redirected to the Defender portal. After helping thousands of customers smoothly make the transition, we recommend that security teams begin planning their migration and change management now to ensure continuity and avoid disruption. While the technical process is very straightforward, we have found that early preparation allows time for workflow validation, training, and process alignment to take full advantage of the new capabilities and experience.

Tips for a Successful Migration to Microsoft Defender

1. Leverage Microsoft's help: Use Microsoft documentation, instructional videos, guidance, and in-product support to help you be successful. A good starting point is the documentation on Microsoft Learn.
2. Plan early: Engage stakeholders early, including SOC and IT security leads, MSSPs, and compliance teams, to align on timing, training, and organizational needs. Make sure you have an actionable timeline and agreement in the organization on when you can prioritize this transition to ensure access to the full potential of the new experience.
3. Prepare your environment: Plan and design your environment thoroughly. This includes understanding the prerequisites for onboarding Microsoft Sentinel workspaces, reviewing and deciding on access controls, and planning the architecture of your tenant and workspace. Proper planning will ensure a smooth transition and help avoid any disruptions to your security operations.
4. Leverage advanced threat detection: The Defender portal offers enhanced threat detection capabilities with advanced AI and machine learning for Microsoft Sentinel. Make sure to leverage these features for faster and more accurate threat detection and response. This will help you identify and address critical threats promptly, improving your overall security posture.
5. Utilize unified hunting and incident management: Take advantage of the enhanced hunting, incident, and investigation capabilities in Microsoft Defender. This provides a comprehensive view for more efficient threat detection and response. By consolidating all security incidents, alerts, and investigations into a single unified interface, you can streamline your operations and improve efficiency.
6. Optimize cost and data management: The Defender portal offers cost and data optimization features, such as SOC Optimization and Summary Rules. Make sure to utilize these features to optimize your data management, reduce costs, and increase coverage and SIEM ROI. This will help you manage your security operations more effectively and efficiently.

Unleash the full potential of your security team

The unified SecOps experience available in the Defender portal is designed to support the evolving needs of modern SOCs. The Defender portal is not just a new home for Microsoft Sentinel - it's a foundation for integrated, AI-driven security operations.
We're committed to helping you make this transition smoothly and confidently. If you haven't already joined the thousands of security organizations that have done so, now is the time to begin.

Resources
- AI-Powered Security Operations Platform | Microsoft Security
- Microsoft Sentinel in the Microsoft Defender portal | Microsoft Learn
- Shifting your Microsoft Sentinel Environment to the Defender Portal | Microsoft Learn
- Microsoft Sentinel is now in Defender | YouTube

Sensitivity Auto-labelling via Document Property
Why is this needed?

Sensitivity labels are generally relevant within an organisation only. If a file is labelled within one environment and then moved to another environment, sensitivity label content markings may be visible, but by default, the applied sensitivity label will not be understood. This can lead to scenarios where information that has been generated externally is not adequately protected.

My favourite analogy for these scenarios is to consider the parallels between receiving sensitive information and unpacking groceries. When unpacking groceries, you might sit your grocery bag on a counter or on the floor next to the pantry. You'll likely then unpack each item, take a look at it and then decide where to place it. Without looking at an item to determine its correct location, you might place it in the wrong location. Porridge might be safe from the kids on the bottom shelf. If you place items that need to be protected, such as chocolate, on the bottom shelf, it's not likely to last very long. So, I affectionately refer to information that hasn't been evaluated as "porridge", as until it has been checked, it will end up on the bottom shelf of the pantry where it is quite accessible. Label-based security controls, such as Data Loss Prevention (DLP) policies using conditions of "content contains sensitivity label", will not apply to these items. To ensure the security of any contained sensitive information, we should look for potential clues to its sensitivity and then utilize these clues to ensure that the contained information is adequately protected - we take a closer look at the "porridge", determine whether it's an item that needs protection and, if so, move it to a higher shelf in the pantry so that it's out of reach of the kids.

Effective use of Purview revolves around the use of "know your data" strategies. We should be using as many methods as possible to try to determine the sensitivity of items. This can include the use of Sensitive Information Types (SITs) containing keyword or pattern-based classifiers, trainable classifiers, Exact Data Match, document fingerprinting, etc. Matching items via SITs present in an item's content can be problematic due to false positives. Keywords like "Sensitive" or "Protected" may be mentioned out of context, such as when referring to a classification or an environment. When classifications have been stamped via a property, it allows us to match via context rather than content. We don't need to guess at an item's sensitivity if another system has already established what the item's classification is. These methods are much less prone to false positives.

Why isn't everyone doing this?

Document properties are often not considered in Purview deployments. SharePoint metadata management seems to be a dying artform, and most compliance or security resources completing Purview configurations don't have this skill set. There's also a lack of understanding of the relevance of checking for item properties. Microsoft haven't helped, as the documentation in this space is somewhat lacking and needs to be unpicked via some aligning DLP guidance (Create a DLP policy to protect documents with FCI or other properties). Many of these configurations will also be tied to regional requirements. Document properties being used by systems where I'm from, in Australia, will likely be very different to those used in other parts of the world. In the following sections, we'll take a look at applicable use cases and walk through how to enable these configurations.
Scenarios for use

Labelling via document property isn't for everyone. If your organisation is new to classification or you don't have external partners that you collaborate with at higher sensitivity levels, then this likely isn't for you. For those that collaborate heavily and have a shared classification framework, as is often seen across government, this is a must! This approach will also be highly relevant to multi-tenant organisations or conglomerates where information is regularly shared between environments. The following scenarios are examples of where this configuration will be relevant:

1. Migrating from 3rd party classification tools
If an item has been previously stamped by a 3rd party classification tool, then evaluating its applied document properties will provide a clear picture of its security classification. These properties can then be used in service-based auto-labelling policies to effectively transition items from 3rd party tools to Microsoft Purview sensitivity labels. As labels are applied to items, they will be brought into scope of label-based controls.

2. Detecting data spill
Data spill is a term used to describe situations where information of a higher than permitted security classification lands in an environment. Consider a Microsoft 365 tenant that is approved for the storage of Official information, but Top Secret files are uploaded to it. Document properties that align with higher than permitted classifications provide us with an almost guaranteed method of identifying spilled items. Pairing this document property with an auto-labelling policy allows for the application of encryption to lock unauthorized users out of the items. Tools like Content Explorer and eDiscovery can then be used to easily perform cleanup activities. If using document properties and auto-labelling for this purpose, keep in mind that you'll need to create sensitivity labels for higher than permitted classifications in order to catch spilled items. These labels won't impact usability as you won't publish them to users. You will, however, need to publish them to a single user or break glass account so that they're not ignored by auto-labelling.

3. Blocking access by AI tools
If your organisation is concerned about items with certain properties applied being accessed by generative AI tools, such as Copilot, you could use auto-labelling to apply a sensitivity label that restricts EXTRACT permissions. You can find some information on this at Microsoft 365 Copilot data protection architecture | Microsoft Learn. This should be relevant for spilled data, but might also be useful in situations where there are certain records that have been marked via properties and which should not be Copilot accessible.

4. External Microsoft Purview configurations
Sensitivity labels are relevant internally only. A label, in its raw form, is essentially a piece of metadata with an ID (or GUID) that we stamp on pieces of information. These GUIDs are understood by your tenant only. If an item marked with a GUID shows up in another Microsoft 365 tenant, the GUID won't correspond with any of that tenant's labels or label-based controls. The art in Microsoft Purview lies in interpreting the sensitivity of items based on content markings and other identifiers, so that data security can be maintained. Document properties applied by Purview, such as ClassificationContentMarkingHeaderText, are not relevant to a specific tenant, which makes them portable.
We can use these properties to help maintain classifications as items move between environments.

5. Utilizing metadata applied by Records Management solutions
Some EDRMS, Records or Content Management solutions will apply properties to items. If an item has been previously managed and then stamped with properties, potentially including a security classification, via one of these systems, we could use this information to inform sensitivity label application.

6. 3rd-party classification tools used externally
Even if your organisation hasn't been using 3rd-party classification tools, you should consider that partner organisations, such as other Government departments, might be. Evaluating the properties applied by external organisations to items that you receive will allow you to extend protections to these items. If classification tools like Janus or Titus are used in your geography/industry, then you may want to consider checking for their properties.

Regarding the use of auto-classification tools
Some organisations, particularly those in Government, will have organisational policies that prevent the use of automatic classification capabilities. These policies are intended to ensure that each item is assessed by an actual person for risk of disclosure, rather than via an automated service that could be prone to error. However, when auto-labelling is used to interpret and honour existing classifications, we are lowering rather than raising the risk profile. If the item's existing classification (applied via property) is ignored, the item will be treated as porridge and is likely to be at risk. If auto-labelling is able to identify a high-risk item and apply the relevant label, it will then be within scope of Purview's data security controls, including label-based DLP, groups and sites data out of place alerting, and potentially even item encryption. The outcome is that, through the use of auto-labelling, we are able to significantly reduce the risk of inappropriate or unintended disclosure.

Configuration Process
Setting up document property-based auto-labelling is fairly straightforward. We need to set up a managed property and then utilize it in an auto-labelling policy. Below, I've split this process into 6 steps.

Step 1 - Prepare your files
In order to make use of document properties, an item with the properties applied will first need to be indexed by SharePoint. SharePoint will record the properties as 'crawled properties', which we'll then need to convert into 'managed properties' to make them useful. If you already have items with the relevant properties stored in SharePoint, then they are likely already indexed. If not, you'll need to upload or create an item or items with the properties applied. For testing, you'll want to create a file with each property/value combination so that you can confirm that your auto-labelling policies are all working correctly. This could require quite a few files depending on the number of properties you're looking for. To kick off your crawled property generation though, you could create or upload a single file with the correct properties applied. For example:
In the above, I've created properties for ClassificationContentMarkingHeaderText and ClassificationContentMarkingFooterText, which you'll often see applied by Purview when an item has a sensitivity label content marking applied to it. I've also included properties to help identify items classified via JanusSeal, Titus and Objective.
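If you would rather script your test files than create them by hand, custom document properties can also be stamped programmatically. The following is a minimal PowerShell sketch rather than a supported procedure: it assumes Word is installed on the machine you run it from, that C:\Temp exists, and it simply reuses the property name and value from the example above.

# Create a test .docx containing a custom document property (requires Word to be installed)
$word = New-Object -ComObject Word.Application
$word.Visible = $false
$doc = $word.Documents.Add()

# CustomDocumentProperties is late-bound, so the Add method is invoked via reflection
$props = $doc.CustomDocumentProperties
[System.__ComObject].InvokeMember("Add", [System.Reflection.BindingFlags]::InvokeMethod, $null, $props,
    @("ClassificationContentMarkingHeaderText", $false, 4, "PROTECTED"))   # 4 = msoPropertyTypeString

$doc.SaveAs2("C:\Temp\PROTECTED-test.docx")
$doc.Close($false)
$word.Quit()

Repeat the InvokeMember call for each property/value combination you want to test, then upload the resulting files to SharePoint.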
Step 2 - Index the files
After creating or uploading your file, we then need SharePoint to index it. This should happen fairly quickly depending on the size of your environment; I'd expect to wait some time between 10 minutes and 24 hours. If you're not in a hurry, then I'd recommend just checking back the next day. You'll know this has completed when you head into SharePoint Admin > Search > Managed Search Schema > Crawled Properties and can find your newly indexed properties:

Step 3 - Configure managed properties
Next, the properties need to be configured as managed properties. To do this, go to SharePoint Admin > More features > Search > Managed Search Schema > Managed Properties. Create a new managed property and give it a name. Note that there are some character restrictions in naming, but you should be able to get it close to your document property name. Set the property's type to text, and select queryable and retrievable. Under 'mappings to crawled properties', choose add mapping, then search for and select the property indexed from the file property. Note that the crawled property will have the same name as your document property, so there's no need to browse through all of them:
Repeat this so that you have a managed property for each document property that you want to look for.

Step 4 - Configure Auto-labelling policies
Next up, create some auto-labelling policies. You'll need one for each label that you want to apply, not one per property, as you can check multiple properties within the one auto-labelling policy.
- From within Purview, head to Information Protection > Policies > Auto-labelling policies.
- Create a new policy using the custom policy template.
- Give your policy an appropriate name (e.g. Label PROTECTED via property).
- Select the label that you want to apply (e.g. PROTECTED).
- Select SharePoint based services (SharePoint and OneDrive).
- Name your auto-labelling rules appropriately (e.g. SPO - Contains PROTECTED property).
- Enter your conditions as a long string with property and value separated via a colon and multiple entries separated with a comma. For example:
ClassificationContentMarkingHeaderText:PROTECTED,ClassificationContentMarkingFooterText:PROTECTED,Objective-Classification:PROTECTED,PMDisplay:PROTECTED,TitusSEC:PROTECTED
Note that the properties you are referencing are the managed properties rather than the document properties. This will be relevant if your managed property ended up having a different name due to character restrictions. After pasting your string into the UI, the resultant rule should look something like this:
When done, you can either leave your policy in simulation mode or save it and then turn it on from the auto-labelling policies screen. Just be aware of any potential impacts, such as accidentally locking users out by automatically deploying a label with an encryption configuration. You can reduce any potential impact by targeting your auto-labelling policy at a site or set of sites initially and then expanding its scope after testing.
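If you prefer to script this step, the same policy and rule can be sketched in Security & Compliance PowerShell (the ExchangeOnlineManagement module). Treat the following as a hedged sketch rather than a canonical procedure: the label name, locations and property string come from the example above, and the parameter names, in particular -ContentPropertyContainsWords, should be verified against the cmdlet documentation for your module version; if the parameter isn't available to you, create the rule in the UI as described.

# Connect to Security & Compliance PowerShell
Connect-IPPSSession

# Create the auto-labelling policy targeting SharePoint and OneDrive, starting in simulation mode
New-AutoSensitivityLabelPolicy -Name "Label PROTECTED via property" `
    -ApplySensitivityLabel "PROTECTED" `
    -SharePointLocation All -OneDriveLocation All `
    -Mode TestWithoutNotifications

# Add a rule that matches any of the managed properties configured in Step 3
$propertyConditions = @(
    "ClassificationContentMarkingHeaderText:PROTECTED",
    "ClassificationContentMarkingFooterText:PROTECTED",
    "Objective-Classification:PROTECTED",
    "PMDisplay:PROTECTED",
    "TitusSEC:PROTECTED"
)
New-AutoSensitivityLabelRule -Policy "Label PROTECTED via property" `
    -Name "SPO - Contains PROTECTED property" `
    -ContentPropertyContainsWords $propertyConditions

The same document property condition is exposed on New-DlpComplianceRule, which is what the DLP expansion in Step 6 builds on.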
Step 5 - Test
Testing your configuration will be as easy as uploading or creating a set of files with the relevant document properties in place. Once uploaded, you'll need to give SharePoint some time to index the items and then the auto-labelling policy some time to apply sensitivity labels to them. To confirm label application, you can head to the document library where your test files are located and enable the sensitivity column. Files that have been auto-labelled will have their label listed:
You could also check for auto-labelling activity in Purview via Activity explorer:

Step 6 - Expand into DLP
If you've spent the time setting up managed properties, then you really should consider capitalizing on them in your DLP configurations. DLP policy conditions can be configured in the same manner that we configured auto-labelling in Step 4 above. The document property also gives us an anchor for DLP conditions that is independent of an item's sensitivity label. You may wish to consider the following:
- DLP policies blocking external sharing of items with certain properties applied. This might be handy for situations where auto-labelling hasn't yet labelled an item.
- DLP policies blocking the external sharing of items where the applied sensitivity label doesn't match the applied document property. This could provide an indication of risky label downgrade.
You could extend such policies into Insider Risk Management (IRM) by creating IRM policies that are aligned with the above DLP policies. This will allow for document properties to be considered in user risk calculation, which can inform controls like Adaptive Protection. Here's an example of a policy from the DLP rule summary screen that shows conditions of item contains a label or one of our configured document properties:
Thanks for reading and I hope this article has been of use. If you have any questions or feedback, please feel free to reach out.

Step by Step: 2-Tier PKI Lab
Purpose of this blog Public Key Infrastructure (PKI) is the backbone of secure digital identity management, enabling encryption, digital signatures, and certificate-based authentication. However, neither setting up a PKI nor management of certificates is something most IT pros do on a regular basis and given the complexity and vastness of the subject it only makes sense to revisit the topic from time to time. What I have found works best for me is to just set up a lab and get my hands dirty with the topic that I want to revisit. One such topic that I keep coming back to is PKI - be it for creating certificate templates, enrolling clients, or flat out creating a new PKI itself. But every time I start deploying a lab or start planning a PKI setup, I end up spending too much time sifting through the documentations and trying to figure out why my issuing certificate authority won't come online! To make my life easier I decided to create a cheatsheet to deploy a simple but secure 2-tier PKI lab based on industry best practices that I thought would be beneficial for others like me, so I decided to polish it and make it into a blog. This blog walks through deploying a two-tier PKI hierarchy using Active Directory Certificate Services (AD CS) on Windows Server: an offline Root Certification Authority (Root CA) and an online Issuing Certification Authority (Issuing CA). Weâll cover step-by-step deployment and best practices for securing the root CA, conducting key ceremonies, and maintaining Certificate Revocation Lists (CRLs). Overview: Two-Tier PKI Architecture and Components In a two-tier PKI, the Root CA sits at the top of the trust hierarchy and issues a certificate only to the subordinate Issuing CA. The Root CA is kept offline (disconnected from networks) to protect its private key and is typically a standalone CA (not domain-joined). The Issuing CA (sometimes called a subordinate or intermediate CA) is kept online to issue certificates to end-entities (users, computers, services) and is usually an enterprise CA integrated with Active Directory for automation and certificate template support. Key components: Offline Root CA: A standalone CA, often on a workgroup server, powered on only when necessary (initial setup, subordinate CA certificate signing, or periodic CRL publishing). By staying offline, it is insulated from network threats. Its self-signed certificate serves as the trust anchor for the entire PKI. The Root CAâs private key must be rigorously protected (ideally by a Hardware Security Module) because if the root is compromised, all certificates in the hierarchy are compromised. Online Issuing CA: An enterprise subordinate CA (domain-joined) that handles day-to-day certificate issuance for the organization. It trusts the Root CA (via the rootâs certificate) and is the one actually responding to certificate requests. Being online, it must also be secured, but its key is kept online for operations. Typically, the Issuing CA publishes certificates and CRLs to Active Directory and/or HTTP locations for clients to download. 
The following diagram shows a simplified view of this implementation:
The table below summarizes the roles and differences:

| Aspect | Offline Root CA | Online Issuing CA |
|---|---|---|
| Role | Standalone Root CA (workgroup) | Enterprise Subordinate CA (domain member) |
| Network Connectivity | Kept offline (powered off or disconnected when not issuing) | Online (running continuously to serve requests) |
| Usage | Signs only one certificate (the subordinate CA's cert) and CRLs | Issues end-entity certificates (users, computers, services) |
| Active Directory | Not a member of an AD domain; doesn't use templates or auto-enrollment | Integrated with AD DS; uses certificate templates for streamlined issuance |
| Security | Extremely high: physically secured, limited access, often protected by HSM | Very high: server hardened, but accessible on network; HSM recommended for private key |
| CRL Publication | Manual. Admin must periodically connect, generate, and distribute the CRL. Delta CRLs usually disabled. | Automatic. Publishes CRLs to configured CDP locations (AD DS, HTTP) at scheduled intervals. |
| Validity Period | Longer (e.g. 5-10+ years for the CA certificate) to reduce frequency of renewal. | Shorter (e.g. 2 years) to align with organizational policy; renewed under the root when needed. |

In this lab setup, we will create a Contoso Root CA (offline) and a Contoso Issuing CA (online) as an example. This mirrors the real-world best practice, which is to 'deploy a standalone offline root CA and an online enterprise subordinate CA'.

Deploying the Offline Root CA
Setting up the offline Root CA involves preparing a dedicated server, installing AD CS, configuring it as a root CA, and then securing it. We'll also configure certificate CDP/AIA (CRL Distribution Point and Authority Information Access) locations so that issued certificates will point clients to the correct locations to fetch the CA's certificate and revocation list.

Step 1: Prepare the Root CA Server (Offline)
Provision an isolated server: Install a Windows Server OS (e.g., Windows Server 2022) on the machine designated to be the Root CA, preferably a portable, enterprise-grade physical server that can be stored in a safe. Do not join this server to any domain - it should function in a workgroup to remain independent of your AD forest.
System configuration: Give the server a descriptive name (e.g., ROOTCA) and assign a static IP (even though it will be offline, a static IP helps when connecting it temporarily for management). Install the latest updates and security patches while it's still able to go online.
Lock down network access: Once setup is complete, disable or unplug network connections. If the server must remain powered on for any reason, ensure all unnecessary services/ports are disabled to minimize exposure. In practice, you will keep this server shut down or physically disconnected except when performing CA maintenance.

Step 2: Install the AD CS Role on the Root CA
Add the Certification Authority role: On the Root CA server, open Server Manager and add the Active Directory Certificate Services role. During the wizard, select the Certification Authority role service (no need for web enrollment or others on the root). Proceed through the wizard and complete the installation. You can also install the CA role and management tools via PowerShell:

Install-WindowsFeature AD-Certificate -IncludeManagementTools

Post-install configuration: After the binary installation, click Configure Active Directory Certificate Services (a notification in Server Manager). In the configuration wizard:
Role Services: Choose Certification Authority.
Setup Type: Select Standalone CA (since this root CA is not domain-joined).
CA Type: Select Root CA.
Private Key: Choose "Create a new private key."
Cryptography: If using an HSM, select the HSM's Cryptographic Service Provider (CSP) here; otherwise use the default. Choose a strong key length (e.g., 2048 or 4096 bits) and a secure hash algorithm (SHA-256 or higher).
CA Name: Provide a common name for the CA (e.g., "Contoso Root CA"). This name will appear in issued certificates as the Issuer. Avoid using a machine DNS name here for security - pick a name that does not reveal the server's actual hostname.
Validity Period: Set a long validity (e.g., 10 years) for the root CA's self-signed certificate. A decade is common for enterprise roots, reducing how often you must touch the offline CA for renewal.
Database: Specify locations for the CA database and logs (the defaults are fine for a lab).
Review settings and complete the configuration. This process will generate the root CA's key pair and self-signed certificate, establishing the Root CA.

You can also perform this configuration via PowerShell in one line:

Install-AdcsCertificationAuthority `
    -CAType StandaloneRootCA `
    -CryptoProviderName "YourHSMProvider" `
    -HashAlgorithmName SHA256 -KeyLength 2048 `
    -CACommonName "Contoso Root CA" `
    -ValidityPeriod Years -ValidityPeriodUnits 10

This would set up a standalone Root CA named "Contoso Root CA" with a 2048-bit key on an HSM provider, valid for 10 years.

Step 3: Integrate an HSM (Optional but Recommended)
If your lab has a Hardware Security Module, use it to secure the Root CA's keys. Using an HSM provides dedicated, tamper-resistant storage for CA private keys and can further protect against key compromise. To integrate:
Install the HSM vendor's software and drivers on the Root CA server.
Initialize the HSM and create a security world or partition as per the vendor instructions.
Before or during the CA configuration (Step 2 above), ensure the HSM is ready to generate/store the key.
When running the AD CS configuration, select the HSM's CSP/KSP for the cryptographic provider so that the CA's private key is generated on the HSM.
Secure any HSM admin tokens or smartcards. For a root CA, you might employ M of N key splits - requiring multiple key custodians to collaborate to activate the HSM or key - as part of the key ceremony (discussed later).
(If an HSM is not available, the root key will be stored on the server's disk. At minimum, protect it with a strong admin passphrase when prompted, and consider enabling the option to require administrator interaction (e.g., a password) whenever the key is accessed.)

Step 4: Configure CA Extensions (CDP/AIA)
It's critical to configure how the Root CA publishes its certificate and revocation list, since the root is offline and cannot use Active Directory auto-publishing. Open the Certification Authority management console (certsrv.msc), right-click the CA name > Properties, and go to the Extensions tab. We will set the CRL Distribution Point (CDP) and Authority Information Access (AIA) URLs:
CRL Distribution Point (CDP): This is where certificates will tell clients to fetch the CRL for the Root CA. By default, a standalone CA might have a file:// path or no HTTP URL.
Click Add and specify an HTTP URL that will be accessible to all network clients, such as: http://<IssuingCA_Server>/CertEnroll/<CaName><CRLNameSuffix><DeltaCRLAllowed>.crl For example, if your issuing CAâs server name is ISSUINGCA.contoso.local, the URL might be http://issuingca.contoso.local/CertEnroll/Contoso%20Root%20CA.crl This assumes the Issuing CA (or another web server) will host the Root CAâs CRL in the CertEnroll directory. Check the boxes for âInclude in the CDP extension of issued certificatesâ and âInclude in all CRLs. Clients use this to find Delta CRLsâ (you can uncheck the delta CRL publication on the root, as we wonât use delta CRLs on an offline root). Since the root CA wonât often revoke its single issued cert (the subordinate CA), delta CRLs arenât necessary. Note: If your Active Directory is in use and you want to publish the Root CAâs CRL to AD, you can also add an ldap:///CN=... path and check âPublish in Active Directoryâ. However, publishing to AD from an offline CA must be done manually using the following command when the root is temporarily connected. certutil -dspublish Many setups skip LDAP for offline roots and rely on HTTP distribution. Authority Information Access (AIA): This is where the Root CAâs certificate will be published for clients to download (to build certificate chains). Add an HTTP URL similarly, for example: http://<IssuingCA_Server>/CertEnroll/<ServerDNSName>_<CaName><CertificateName>.crt This would point to a copy of the Root CAâs certificate that will be hosted on the issuing CA web server. Check âInclude in the AIA extension of issued certificatesâ. This way, any certificate signed by the Root CA (like your subordinate CAâs cert) contains a URL where clients can fetch the Root CAâs cert if they donât already have it. After adding these, remove any default entries that are not applicable (e.g., LDAP if the root isnât going to publish to AD, or file paths that wonât be used by clients). These settings ensure that certificates issued by the Root CA (in practice, just the subordinate CAâs certificate) will carry the correct URLs for chain building and revocation checking. Step 5: Back Up the Root CA and Issue the Subordinate Certificate With the Root CA configured, we need to issue a certificate for the Issuing CA (subordinate). Weâll perform that in the next section from the Issuing CAâs side via a request file. Before taking the root offline, ensure you: Back up the CAâs private key and certificate: In the Certification Authority console, or via the CA Backup wizard, export the Root CAâs key pair and CA certificate. Protect this backup (store it offline in a secure location, e.g., on encrypted removable media in a safe). This backup is crucial for disaster recovery or if the Root CA needs to be migrated or restored. Save the Root CA Certificate: You will need the Root CAâs public certificate (*.crt) to distribute to other systems. Have it exported (Base-64 or DER format) for use on the Issuing CA and for clients. Initial CRL publication: Manually publish the first CRL so that it can be distributed. Open an elevated Command Prompt on the Root CA and run: certutil -crl This generates a new CRL file (in the CAâs configured CRL folder, typically %windir%\system32\CertSrv\CertEnroll). Take that CRL file and copy it to the designated distribution point (for example, to the CertEnroll directory on the Issuing CAâs web server, as per the HTTP URL configured). 
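If you want a repeatable way to do this hand-off, the publish-and-copy step can be scripted. The following is a small sketch to run on the Root CA while it is temporarily connected to the lab network (or adapt the copy step to removable media); the UNC path is an assumption based on the CertEnroll URL configured above.

# Publish a fresh CRL from the Root CA
certutil -crl

# Copy the CRL and the root certificate to whatever serves your HTTP CDP/AIA URLs
$certEnroll = "$env:windir\System32\CertSrv\CertEnroll"
Copy-Item -Path "$certEnroll\*.crl" -Destination "\\ISSUINGCA\CertEnroll\" -Force
Copy-Item -Path "$certEnroll\*.crt" -Destination "\\ISSUINGCA\CertEnroll\" -Force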
If using Active Directory for CRL distribution, you would also publish it to AD now (e.g., certutil -dspublish -f RootCA.crl on a domain-connected machine). In most lab setups, copying to an HTTP share is sufficient. With these tasks done, the Root CA is ready. At this point, disconnect or power off the Root CA and store it securely â it should remain offline except when itâs absolutely needed (like publishing a new CRL or renewing the subordinate CAâs certificate in the far future). Keeping the root CA offline maximizes its security by minimizing exposure to compromise. Best Practices for Securing the Root CA: The Root CA is the trust anchor, so apply stringent security practices: Physical security: Store the Root CA machine in a locked, secure location. If itâs a virtual machine, consider storing it on a disconnected hypervisor or a USB drive locked in a safe. Only authorized PKI team members should have access. An offline CA should be treated like crown jewels â offline CAs should be stored in secure locations. Minimal exposure: Keep the Root CA powered off and disconnected when not in use. It should not be left running or connected to any network. Routine operations (like issuing end-entity certs) should never involve the root. Admin access control: Limit administrative access on the Root CA server. Use dedicated accounts for PKI administration. Enable auditing on the CA for any changes or issuance events. No additional roles or software: Do not use the Root CA server for any other function (no web browsing, no email, etc.). Fewer installed components means fewer potential vulnerabilities. Protect the private key: Use an HSM if possible; if not, ensure the key is at least protected by a strong password and consider splitting knowledge of that password among multiple people (so no single person can activate the CA). Many organizations opt for an offline root key ceremony (see below) to generate and handle the root key with multiple witnesses and strict procedures. Keep system time and settings consistent: If the Root CA is powered off for long periods, ensure its clock is accurate whenever it is started (to avoid issuing a CRL or certificate with a wrong date). Donât change the server name or CA name after installation (doing so invalidates issued certs). Periodic health checks: Even though offline, plan to turn on the Root CA at a secure interval (e.g., semi-annually or annually) to perform tasks like CRL publishing and system updates. Make sure to apply OS security updates during these maintenance windows, as offline does not mean immune to vulnerabilities (especially if it ever connects to a network for CRL publication or uses removable media). Deploying the Online Issuing CA Next, set up the Issuing CA server which will actually issue certificates to end entities in the lab. This server will be domain-joined (if using AD integration) and will obtain its CA certificate from the Root CA we just configured. Step 1: Prepare the Issuing CA Server Provision the server: Install Windows Server on a new machine (or VM) that will be the Issuing CA. Join this server to the Active Directory domain (e.g., Contoso.local). Being an enterprise CA, it needs domain membership to publish templates and integrate with AD security groups. Rename the server to something descriptive like ISSUINGCA for clarity. Assign a static IP and ensure it can communicate on the network. 
IIS for web enrollment (optional): If you plan to use the Web Enrollment or Certificate Enrollment Web Services, ensure IIS is installed. (The AD CS installation wizard can add it if you include those role services.) For this guide, we will include the Web Enrollment role so that the CertEnroll directory is set up for hosting certificate and CRL files. Step 2: Install AD CS Role on Issuing CA On the Issuing CA server, add the Active Directory Certificate Services role via Server Manager or PowerShell. This time, select both Certification Authority and Certification Authority Web Enrollment role services (Web Enrollment will set up the HTTP endpoints for certificate requests if needed). For example, using PowerShell: Install-WindowsFeature AD-Certificate, ADCS-Web-Enrollment -IncludeManagementTools After installation, launch the AD CS configuration wizard: Role Services: Choose Certification Authority (and Web Enrollment if prompted). Setup Type: Select Enterprise CA (since this CA will integrate with AD DS). CA Type: Select Subordinate CA (this indicates it will get its cert from an existing root CA). Private Key: Choose âCreate a new private keyâ (weâll generate a new key pair for this CA). Cryptography: If using an HSM here as well, select the HSMâs CSP/KSP for the issuing CAâs key. Otherwise, choose a strong key length (2048+ bits, SHA256 or better for hash). CA Name: Provide a name (e.g., âContoso Issuing CAâ). This name will appear as the Issuer on certificates it issues. Certificate Request: The wizard will ask how you want to get the subordinate CAâs certificate. Choose âSave a certificate request to fileâ. Specify a path, e.g., C:\CertRequest\issuingCA.req. The wizard will generate a request file that we need to take to the Root CA for signing. (Since our Root CA is offline, this file transfer might be via secure USB or a network share when the root is temporarily online.) CA Database: Choose locations or accept defaults for the certificate DB and logs. Finish the configuration wizard, which will complete pending because the CA doesnât have a certificate yet. The AD CS service on this server wonât start until we import the issued cert from the root. Step 3: Integrate HSM on Issuing CA (Optional) If available, repeat the HSM setup on the Issuing CA: install HSM drivers, initialize it, and generate/secure the key for the subordinate CA on the HSM. Ensure you chose the HSM provider during the above configuration so that the issuing CAâs private key is stored in the HSM. Even though this CA is online, an HSM still greatly enhances security by protecting the private key from extraction. The issuing CAâs HSM may not require multiple custodians to activate (as it needs to run continuously), but should still be physically secured. Step 4: Obtain the Issuing CAâs Certificate from the Root CA Now we have a pending request (issuingCA.req) for the subordinate CA. To get its certificate: Transport the request to the Root CA: Copy the request file to the offline Root CA (via secure means â e.g., formatted new USB stick). Start up the Root CA (in a secure, offline setting) and open the Certification Authority console. Submit the request on Root CA: Right-click the Root CA in the CA console -> All Tasks -> Submit new request, and select the .req file. The request will appear in the Pending Requests on the root. Issue the subordinate CA certificate: Find the pending request (it will list the Issuing CAâs name). Right-click and choose All Tasks > Issue. 
The subordinate CAâs certificate is now issued by the Root CA. Export the issued certificate: Still on the Root CA, go to Issued Certificates, find the newly issued subordinate CA cert (you can identify it by the Request ID or by the name). Right-click it and choose Open or All Tasks > Export to get the certificate in a file form. If using the consoleâs built-in âExportâ it might only allow binary; alternatively use the certutil command: certutil -dup <RequestID> .\ContosoIssuingCA.cer or simply open and copy to file. Save the certificate as issuingCA.cer. Also make sure you have a copy of the Root CAâs certificate (if not already done). Publish Root CA cert and CRL as needed: Before leaving the Root CA, you may also want to ensure the Rootâs own certificate and latest CRL are available to the issuing CA and clients. If not already done in Step 5 of root deployment, export the Root CA cert (DER format) and copy the CRL file. You might use certutil -crl again if some time has passed since initial CRL. Now take the issuingCA.cer file (and root cert/CRL files) and move them back to the Issuing CA server. Step 5: Install the Issuing CAâs Certificate and Complete Configuration On the Issuing CA server (which is still waiting for its CA cert): Install the subordinate CA certificate: In Server Manager or the Certification Authority console on the Issuing CA, there should be an option to âInstall CA Certificateâ (if the AD CS configuration wizard is still open, it will prompt for the file; or otherwise, in the CA console right-click the CA name > All Tasks > Install CA Certificate). Provide the issuingCA.cer file obtained from the root. This will install the CAâs own certificate and start the CA service. The Issuing CA is now operational as a subordinate CA. Alternatively, use PowerShell: certutil -installcert C:\CertRequest\issuingCA.cer This installs the cert and associates it with the pending key. Trust the Root CA certificate: Because the Issuing CA is domain-joined, when you install the subordinate cert, it might automatically place the Root CAâs certificate in the Trusted Root Certification Authorities store on that server (and possibly publish it to AD). If not, you should manually install the Root CAâs certificate into the Trusted Root CA store on the Issuing CA machine (using the Certificates MMC or certutil -addstore -f Root rootCA.cer). This step prevents any âchain not trustedâ warnings on the Issuing CA and ensures it trusts its parent. In an enterprise environment, you would also distribute the root certificate to all client machines (e.g., via Group Policy) so that they trust the whole chain. Import Root CRL: Copy the Root CAâs CRL (*.crl file) to the Issuing CAâs CRL distribution point location (e.g., C:\Windows\System32\CertSrv\CertEnroll\ if thatâs the directory served by the web server). This matches the HTTP URL we configured on the root. Place the CRL file there and ensure it is accessible (the Issuing CAâs IIS might need to serve static .crl files; often, if Web Enrollment is installed, the CertEnroll folder is under C:\Inetpub\wwwroot\CertEnroll). At this point, the subordinate CA and any client hitting the HTTP URL can retrieve the rootâs CRL. The subordinate CA is now fully established. It holds a certificate issued by the Root CA (forming a complete chain of trust), and itâs ready to issue end-entity certificates. 
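For reference, the request/issue/install round trip described in Steps 4 and 5 can also be driven entirely from the command line, which is convenient for scripted lab rebuilds. The CA names, transfer paths and request ID below are placeholders for this lab, so adjust them to your environment.

# --- On the Root CA (while temporarily online), with the request file copied over ---
# Submit the request; on a standalone CA it will sit in Pending Requests
certreq -submit -config "ROOTCA\Contoso Root CA" C:\Transfer\issuingCA.req

# Issue the pending request (replace 2 with the Request ID reported by the previous command)
certutil -resubmit 2

# Retrieve the issued subordinate CA certificate to a file
certreq -retrieve -config "ROOTCA\Contoso Root CA" 2 C:\Transfer\issuingCA.cer

# --- Back on the Issuing CA ---
certutil -installcert C:\Transfer\issuingCA.cer
Restart-Service certsvc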
Step 6: Configure Issuing CA Settings and Start Services Start the Certificate Services: If the CA service (CertSvc) isnât started automatically, start or restart it. On PowerShell: Restart-Service certsvc The CA should show as running in the CA console with the name âContoso Issuing CAâ (or your chosen name). Configure Certificate Templates: Because this is an Enterprise CA, it can utilize certificate templates stored in Active Directory to simplify issuing common cert types (user auth, computer auth, web server SSL, etc.). By default, some templates (e.g., User, Computer) are available but not issued. In the Certification Authority console under Certificate Templates, you can choose which templates to issue (e.g., right-click > New > Certificate Template to Issue, then select templates like âUserâ or âComputerâ). This lab guide doesnât require specific templates but know that only Enterprise CAs can use templates. Templates define the policies and settings (cryptography, enrollment permissions, etc.) for issued certificates. Ensure you enable only the templates needed and configure their permissions appropriately (e.g., allow the appropriate groups to enroll). Set CRL publishing schedule: The Issuing CA will automatically publish its own CRL (for certificates it issues) at intervals. You can adjust the CRL and Delta CRL publication interval in the CAâs Properties > CRL Period. A common practice is a small base CRL period (e.g., 1 week or 2 weeks) for issuing CAs, because they may revoke user certs more frequently; and enable Delta CRLs (published daily) for timely revocation information. Make sure the CDP/AIA for the Issuing CA itself are properly configured too (the wizard usually sets LDAP and HTTP locations, but verify in the Extensions tab). In a lab, the default settings are fine. Web Enrollment (if installed): You can verify the web enrollment by browsing to http://<IssuingCA>/certsrv. This web UI allows browser-based certificate requests. Itâs a legacy interface mostly, but for testing it can be used if your clients arenât domain-joined or if you want a manual request method. In modern use, the Certificate Enrollment Web Service/Policy roles or auto-enrollment via Group Policy are preferred for remote and automated enrollment. At this stage, your PKI is operational: the Issuing CA trusts the offline Root CA and can issue certificates. The Root CA can be kept offline with confidence that the subordinate will handle all regular work. Validation and Testing of the PKI Itâs important to verify that the PKI is configured correctly: Check CA status: On the Issuing CA, open the Certification Authority console and ensure no errors. Verify that the Issuing CAâs certificate shows OK (no red X). On the Root CA (offline most of the time), you can use the Pkiview.msc snap-in (Microsoft PKI Health Tool) on a domain-connected machine to check the health of the PKI. This tool will show if the CDPs/AIA are reachable and if certificates are properly published. Trust chain on clients: On a domain-joined client PC, the Root CA certificate should be present in the Trusted Root Certification Authorities store (if the Issuing CA was installed as Enterprise CA, it likely published the root cert to AD automatically; you can also distribute it via Group Policy or manually). The Issuing CAâs certificate should appear in the Intermediate Certification Authorities store. This establishes the chain of trust. If not, import the root cert into the domainâs Group Policy for Trusted Roots. 
A quick test: on a client, run certutil -config "ISSUINGCA\\Contoso Issuing CA" -ping to see if it can contact the CA (or use the Certification Authority MMC targeting the issuing CA). Enroll a test certificate: Try to enroll for a certificate from the Issuing CA. For instance, from a domain-joined client, use the Certificates MMC (in Current User or Computer context) and initiate a certificate request for a User or Computer certificate (depending on templates issued). If auto-enrollment is configured via Group Policy for a template, you can simply log on a client and see if it automatically receives a certificate. Alternatively, use the web enrollment page or certreq command to submit a request. The request should be approved and a certificate issued by "Contoso Issuing CA". After enrollment, inspect the issued certificate: it should chain up to "Contoso Root CA" without errors. Ensure that the certificateâs CDP points to the URL we set (and try to browse that URL to see the CRL file), and that the AIA points to the root cert location. Revocation test (optional): To test CRL behavior, you could revoke a test certificate on the Issuing CA (using the CA console) and publish a new CRL. On the client, after updating the CRL, the revoked certificate should show as revoked. For the Root CA, since it shouldnât issue end-entity certs, you wouldnât normally revoke anything except potentially the subordinate CAâs certificate (which would be a drastic action in case of compromise). By issuing a test certificate and validating the chain and revocation, you confirm that your two-tier PKI lab is functioning correctly. Maintaining the PKI: CRLs, Key Ceremonies, and Security Procedures Deploying the PKI is only the beginning. Proper maintenance and operational procedures are crucial to ensure the PKI remains secure and reliable over time. Periodic CRL Updates for the Offline Root: The Root CAâs CRL has a defined validity period (set during configuration, often 6 or 12 months for offline roots). Before the CRL expires, the Root CA must be brought online (in a secure environment) to issue a new CRL. Itâs recommended to schedule CRL updates periodically (e.g., semi-annually) to prevent the CRL from expiring. An expired CRL can cause certificate chain validation to fail, potentially disrupting services. Typically, organizations set the offline root CRL validity so that publishing 1-2 times a year is sufficient. When the time comes: Start the Root CA (ensuring the system clock is correct). Run certutil -crl to issue a fresh CRL. Distribute the new CRL: copy it to the HTTP CDP location (overwrite the old file) and, if applicable, use certutil -dspublish -f RootCA.crl to update it in Active Directory. Verify that the new CRLâs next update date is extended appropriately (e.g., another 6 months out). Clients and the Issuing CA will automatically pick up the new CRL when checking for revocation. (The Issuing CA, if configured, might cache the root CRL and need a restart or certutil -setreg ca\CRLFlags +CRLF_REVCHECK_IGNORE_OFFLINE tweak if the root CRL expires unexpectedly. Keeping the schedule prevents such issues.) Issuing CA CRL and OCSP: The Issuing CAâs CRLs are published automatically as it is online. Ensure the IIS or file share hosting the CRL is accessible. Optionally, consider setting up an Online Responder (OCSP) for real-time status checking, especially if CRLs are large or you need faster revocation information. 
OCSP is another AD CS role service that can be configured on the issuing CA or another server to answer certificate status queries. This might be beyond a simple lab, but itâs worth mentioning for completeness. Key Ceremonies and Documentation: For production environments (and good practice even in labs), formalize the process of handling CA keys in a Key Ceremony. A key ceremony is a carefully controlled process for activities like generating the Root CAâs key pair, installing the CA, and signing subordinate certificates. It often involves multiple people to ensure no single person has unilateral control (principle of dual control) and to witness the process. Best practices for a Root CA key ceremony include: Advance Planning: Create a step-by-step script of the ceremony tasks. Include who will do what, what materials are needed (HSMs, installation media, backup devices, etc.), and the order of operations. Multiple trusted individuals present: Roles might include a Ceremony Administrator (leads the process), a Security Officer (responsible for HSM or key material handling), an Auditor (to observe and record), etc. This prevents any one person from manipulating the process and increases trust. Secure environment: Conduct the ceremony in a secure location (e.g., a locked room) free of recording devices or unauthorized personnel. Ensure the Root CA machine is isolated (no network), and ideally that BIOS/USB access controls are in place to prevent any malware. Generate keys with proper controls: If using an HSM, initialize and generate the key with the required number of key custodians each providing part of the activation material (e.g., smartcards or passphrases). Immediately back up the HSM partition or key to secure media (requiring the same custodians to restore). Sign subordinate CA certificate: As part of the ceremony, once the root key is ready, sign the subordinateâs request. This might also be a witnessed step. Document every action: Write down each command run, each key generated, serial numbers of devices used, and have all participants sign an acknowledgment of the outcomes. Also record the fingerprints of the generated Root CA certificate and any subordinate certificate to ensure they are exactly as expected. Secure storage: After the ceremony, store the Root CA machine (if itâs a laptop or VM) and HSM tokens in a tamper-evident bag or safe. The idea is to make it evident if someone tries to access the root outside of an authorized ceremony. While a full key ceremony might be overkill for a small lab, understanding these practices is important. Even in a lab, you can simulate some aspects (for learning), like documenting the procedure of taking the root online to sign the request and then locking it away. These practices greatly increase the trust in a production PKI by ensuring transparency and accountability for critical operations. Backup and Recovery Plans: Both CAsâ data should be regularly backed up: For the Root CA: since itâs rarely online, backup after any change. Typically, youâd back up the CAâs private key and certificate once (right after setup or any renewal). Store this securely offline (separate from the server itself). Also back up the CA database if it ever issues more than one cert (for root it might not issue many). For the Issuing CA: schedule automated backups of the CA database and private key. You can use the built-in certutil -backup or Windows Server Backup (which is aware of the AD CS database). Keep backups secure and test restoration procedures. 
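As a concrete example of the scheduled backup mentioned above, the ADCSAdministration module ships a Backup-CARoleService cmdlet. A minimal sketch, assuming a local staging folder that you then sweep to protected storage (the path is an example only):

# Run on the Issuing CA: back up the CA database and private key to a dated folder
$backupPath = "D:\CABackup\$(Get-Date -Format yyyy-MM-dd)"
New-Item -ItemType Directory -Path $backupPath -Force | Out-Null

# -Password protects the exported key; use -DatabaseOnly instead if the key lives in an HSM
Backup-CARoleService -Path $backupPath -Password (Read-Host -Prompt "Backup password" -AsSecureString)

Wrapping this in a scheduled task gives you the automated backup cadence described above.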
Having a documented recovery procedure for the CA is crucial for continuity. Also consider backup of templates and any scripts. Maintain spare hardware or VMs in case you need to restore the CA on new hardware (especially for the root, having a procedure to restore on a new machine if the original is destroyed). Security maintenance: Apply OS updates to the CAs carefully. For the offline root, patch it offline if possible (offline servicing or connecting it briefly to a management network). For the issuing CA, treat it as a critical infrastructure server: limit its exposure (firewall it so only required services are reachable), monitor its event logs (enable auditing for Certificate Services events, which can log each issuance and revocation), and employ anti-malware tools with caution (whitelisting the CA processes to avoid interference). Also, periodically review the CAâs configuration and certificate templates to ensure they meet current security standards (for example, deprecate any weak cryptography or adjust validity periods if needed). By following these maintenance steps and best practices, your two-tier PKI will remain secure and trustworthy over time. Remember that PKI is not âset and forgetâ â it requires operational diligence, but the payoff is a robust trust infrastructure for your organizationâs security. Additional AD CS Features and References Active Directory Certificate Services provides more capabilities than covered in this basic lab. Depending on your needs, you might explore: Certificate Templates: We touched on templates; they are a powerful feature on Enterprise CAs to enforce standardized certificate settings. Administrators can create custom templates for various use cases (SSL, S/MIME email, code signing) and control enrollment permissions. Understanding template versions and permissions is key for enterprise deployments. (Refer to Microsoftâs documentation on Certificate template concepts in Windows Server for details on how templates work and can be customized.) Web Services for Enrollment: In scenarios with remote or non-domain clients, AD CS offers the Certificate Enrollment Web Service (CES) and Certificate Enrollment Policy Web Service (CEP) role services. These allow clients to fetch enrollment policy information and request certificates over HTTP or HTTPS, even when not connected directly to the domain. They work with the certificate templates to enable similar auto-enrollment experiences over the web. See Microsoftâs guides on the Certificate Enrollment Web Service overview and Certificate Enrollment Policy Web Service overview for when to use these. Network Device Enrollment Service (NDES): This AD CS role service implements the Simple Certificate Enrollment Protocol (SCEP) to allow devices like routers, switches, and mobile devices to obtain certificates from the CA without domain credentials. NDES acts as a proxy (Registration Authority) between devices and the CA, using one-time passwords for authentication. If you need to issue certificates to network equipment or MDM-managed mobile devices, NDES is the solution. Microsoft Docs provide a Network Device Enrollment Service(NDES) overview and even details on using a policy module with NDES for advanced scenarios (like customizing how requests are processed or integrating with custom policies). 
Online Responders (OCSP): As mentioned, an Online Responder can be configured to answer revocation status queries more efficiently than CRLs, which is especially useful if your CRLs grow large or you have high-volume certificate validation (VPNs, etc.). The AD CS Online Responder role service can be installed on a member server and configured with an OCSP Response Signing certificate from your Issuing CA.
Monitoring and Auditing: Windows Server has options to audit CA events. Enabling auditing can log events such as certificate issuance, revocation, or changes to the CA configuration. These logs are important in enterprise PKI to track who did what (for compliance and security forensics). Also, tools like the PKI Health Tool (pkiview.msc) and PowerShell cmdlets (like Get-CertificationAuthority and Get-CertificationAuthorityCertificate) can help monitor the health and configuration of your CAs.

Conclusion
By following this guide, you have set up a secure two-tier PKI environment consisting of an offline Root CA and an online Issuing CA. This design, which uses an offline root, is considered a security best practice for enterprise PKI deployments because it reduces the risk of your root key being compromised. With the offline Root CA acting as a hardened trust anchor and the enterprise Issuing CA handling day-to-day certificate issuance, your lab PKI can issue certificates for various purposes (HTTPS, code signing, user authentication, etc.) in a way that models real-world deployments. As you expand this lab or move to production, always remember that PKI security is as much about process as technology. Applying strict controls to protect CA keys, keeping software up to date, and monitoring your PKI's health are all part of the journey.
For further reading and official guidance, refer to these Microsoft documentation resources:
- AD CS PKI Design Considerations: PKI design considerations using Active Directory Certificate Services in Windows Server helps in planning a PKI deployment (number of CAs, hierarchy depth, naming, key lengths, validity periods, etc.). This is useful to read when adapting this lab design to a production environment. It also covers configuring CDP/AIA and why offline roots usually don't need delta CRLs.
- AD CS Step-by-Step Guides: Microsoft's Test Lab Guide, 'Test Lab Guide: Deploying an AD CS Two-Tier PKI Hierarchy', walks through a similar scenario.

Evolving the Windows User Model - Introducing Administrator Protection
Previously, in part one, we outlined the history of the multi-user model in Windows, how Microsoft introduced features to secure it, and in what ways we got it right (and wrong). In the final part of this series, we will describe how Microsoft intends to raise the security bar via its new Administrator protection (AP) feature. Core Principles for Administrator Protection As the main priority, Administrator protection aims to provide a strong security boundary between elevated and non-elevated user contexts. There are several additional usability goals that we will cover later, but for security, Administrator protection can be summarized by the following five principles: Users operate within the Principle of Least Privilege Administrator privileges only persist for the duration of the task for which they were invoked Strong separation between elevated and non-elevated user accounts, except for paths of intentional access Elevation actions must be explicit (e.g. no silent elevations) Allowing a more granular use of elevated privileges by applications, rather than the âup-frontâ elevation practice common in User Account Control (UAC) Specifically, principles two and three represent major changes to the existing design of the Windows user model, while principles one and four are intent on fulfilling promises of previous features (standard user and, to a lesser extent, UAC) and rolling back changes which degraded security (auto-elevation), respectively. What Does Administrator Protection Fix and How? Administrator protection is nearly as much about what it removes as to what it adds. Recall, beginning with Windows Vista, the split-token administrator user type was added to allow a user to run as both standard user and administrator depending on the level of privilege required for a specific task. It was originally seen to make standard user more viable for wide-spread adoption and to enforce the Principle of Least Privilege. However, the features did not fully live up to expectations as UAC bypasses were numerous following the release of Windows 7. As a refresher, when a user was configured as a split-token admin, they would receive two access tokens upon logon â a full privilege, âelevatedâ administrator token with admin group policy set to âEnabledâ and a restricted, âunelevatedâ access token with admin group policy set to âDenyOnlyâ. Depending on the required run level of an application, one token or the other would be used to create the process. Administrator protection changes the paradigm via System Managed Administrator Accounts (SMAA) â a local administrator account which is linked to a specific standard user account. Upon elevation, if a SMAA does not exist already it is created. Each SMAA is a separate user profile and member of the Administrators group. It is a local account named via the following scheme utilizing extra digits in the unlikely event of a collision: Local Account: WIN-ABC123\BobFoo SMAA: WIN-ABC123\admin_BobFoo Or on collision: Local Account: WIN-ABC123\BobFoo (the account to be SMAA-linked) Local Account: WIN-ABC123\admin_BobFoo (another standard user account, oddly named) SMAA: WIN-ABC123\admin1_BobFoo Similarly, for domain accounts, the scheme remains the same, except the SMAA will still be a local account: Domain Account: Redmond\BobFoo SMAA: WIN-ABC123\admin_BobFoo To ensure these accounts canât be abused, they are created as password-less accounts with additional logon restrictions to ensure only specific, SYSTEM processes are permitted to logon as the SMAA. 
Specifically, following an elevation request, a logon request is made via the Local Security Authority (LSA), and the following conditions are checked:
Access Check. Call NtAccessCheck, including both an ACE for the SYSTEM account and a SYSTEM IL mandatory ACE with no read up, no write up, and no execute up. The access check must pass.
Process Path. Call NtOpenProcess with the caller's PID to obtain a process handle, then check the process image path via QueryFullProcessImageName. Compare the path to the hardcoded allow-list of binaries that are allowed to log on SMAA accounts.
The astute reader may notice that process path checks are not enforceable security boundaries in Windows; rather, the check is a defense-in-depth measure to prevent SYSTEM processes such as WinLogon or RDP from exposing SMAA logon surface to the user. In fact, Process Environment Block (PEB) spoofing was a class of UAC bypass in which a trusted image path was faked by a malicious process. However, in this case the PEB is not queried; instead, the kernel EPROCESS object is used to query the image path. As such, the process path check will be used alongside an allowlist to prevent current and future system components from misusing SMAA.

Splitting the Hive
A major design compromise made with the split-token administrator model was that both 'halves' of the user shared a common profile. Despite each token being appropriately restricted in its use, both restricted and admin-level processes could access shared resources such as the user file system and the registry. As such, improper access restrictions on a given file or registry key would allow a restricted user the ability to influence a privileged process. In fact, improper access controls on shared resources were the source of many classic UAC bypasses. As an example, when the Event Viewer application, 'eventvwr.exe', attempts to launch 'mmc.exe' as a High Integrity Level (IL) process, it searches two registry locations to find the executable path (1):
HKCU\Software\Classes\mscfile\shell\open\command
HKCR\mscfile\shell\open\command
In most circumstances, the first registry location does not exist, so the second is used to launch the process. However, an unprivileged process running within the restricted user context can create the missing key; this would then allow the attacker to run any executable it wished at High IL. As a bonus for the attacker, this attack was silent, as Event Viewer is a trusted Windows application and allows for 'auto-elevation', meaning no UAC prompt would be displayed.

$registryPath = "HKCU:\Software\Classes\mscfile\shell\open\command"
$newValue = "C:\Windows\System32\cmd.exe"

# Create the registry key (and any missing parent keys) if it doesn't exist
if (-not (Test-Path $registryPath)) {
    New-Item -Path $registryPath -Force | Out-Null
}

# Set the registry value that the mscfile handler will read
Set-ItemProperty -Path $registryPath -Name "(default)" -Value $newValue

# Launch Event Viewer, which auto-elevates and follows the hijacked handler to run cmd.exe at High IL
Start-Process "eventvwr.exe"

Similarly, the Windows Task Scheduler - which configures processes to run periodically - could be exploited to run arbitrary commands or executables in an elevated context. These attacks worked similarly in that they used writable local environment variables to overload system variables such as %WINDIR%, allowing an attacker to execute arbitrary applications with elevated privileges - with SilentCleanup being a particular favorite (2).
Such attacks were attractive as an unprivileged process could also trigger the scheduled task to run at any time.

New-ItemProperty -Path "HKCU:\Environment" -Name "windir" -Value "cmd.exe /k whoami & " -PropertyType ExpandString
schtasks.exe /Run /TN \Microsoft\Windows\DiskCleanup\SilentCleanup /I

As separate-but-linked accounts, each with its own profile, registry hives are no longer shared. Thus, classic UAC bypasses, such as the registry key manipulation and environment variable overloading attacks (like many things in Windows, environment variables are backed by the registry), are mitigated. As an added benefit, administrator tokens can now be created on-demand and discarded just as quickly, thus limiting exposure of the privileged token to the lifetime of the requesting process.

Rolling Back Auto-Elevations
When auto-elevation was added in Windows 7, it was primarily done to improve the user experience and allow simpler administration of a Windows machine. Unfortunately, despite several restrictions placed on applications allowed to auto-elevate, the feature introduced a huge hole in the Windows security model and opened a number of new avenues for UAC bypass. Most prevalent of these bypasses were those which exploited the auto-elevating COM interface IFileOperation. Attackers would leverage this interface to write malicious DLLs to secure locations - a so-called 'DLL hijacking' attack. The attack would work whenever a process met all of the conditions for auto-elevation but ran at the Medium Integrity Level (IL). The malicious process would inject code into the target process and request the DLL payload be written to a secure path via IFileOperation. Whenever the DLL was loaded by an elevated process, the malicious code would be run, giving the attacker full privileges on the system.
With Administrator protection, auto-elevation is removed. Users will notice an increase in consent prompts, though many fewer than the Vista days, as much work has been done to clean up elevation points in most workflows. Additionally, users and administrators will have the option to configure elevation prompts as 'credentialed' (biometric/password/PIN) via Windows Hello or as simple confirmation prompts. This simple change trades some user convenience for a reduction in attack surface of roughly 92 auto-elevating COM interfaces, 11 DLL hijacks, and 23 auto-elevating apps. Of the 79 known UAC bypasses tested, all but one are now fully or partially mitigated. The remaining open issue around token manipulation attacks has been assigned MSRC cases and will be addressed.
It should be noted that not all auto-elevations have been removed. Namely, the Run and RunOnce registry keys found in the HKEY_LOCAL_MACHINE hive will still auto-elevate as needed. Appropriately, these keys are ACL'd such that only an administrator can modify them.

Improving Usability
Administrator protection is not limited to security-focused changes only - improved usability is also a major focal point of the feature. Chief amongst the areas targeted for improvement is the removal of unnecessary elevations and 'dead-ends'. Specifically, dead-ends occur when a functional pathway which requires administrator privileges does not account for a user operating as a standard user and thus presents no elevation path at all, resulting in the user interface either displaying the setting as disabled or not displaying it at all.
In such cases, a so-called "over-the-shoulder" elevation is required, the same underlying mechanism used when elevating to the SMAA user in AP. Such scenarios represent a huge inconvenience for non-administrator accounts in both AP and non-AP configurations. One example of this scenario was the Group Policy editor (gpedit.msc): when launched as a standard user, an error prompt would be displayed and the app would open in an unusable state.

More Work To Be Done

Administrator protection represents a huge jump in the security of the Windows OS. However, as always, there is more work to be done. While AP has mitigated large classes of vulnerabilities, some remain, albeit in a diminished state.

DLL hijacking attacks prior to AP primarily relied on abusing the auto-elevating IFileOperation COM interface to write a malicious DLL to a secure path. As auto-elevation has been removed, this path no longer exists. However, situations where an unsigned DLL is loaded from an insecure path still represent a potential AP bypass. Note that the user will still be prompted for elevation in such a scenario but may not be aware that a malicious DLL is being loaded into the elevated process.

Token manipulation bypasses, such as those shown by James Forshaw and splinter_code, remain a class of potential exploitation. Elevation prompts are shown only before creation of an elevated token, not before its use. Therefore, should additional pathways be discovered by which a malicious process can obtain an elevated token, AP would not be positioned to stop it from silently elevating. However, MSRC cases for known variants of token manipulation and token reuse attacks have been filed, and fixes are currently in development.

Lastly, attacks which rely on obtaining a UIAccess capability from another running process are partially mitigated by AP. Previously, UAC bypass attacks would launch an auto-elevating app, such as mmc.exe, and then obtain a UIAccess-enabled token: a token which gives a lower-privileged process the ability to manipulate the UI of a higher-privileged process, typically used for accessibility features. With AP enabled, any attempt to launch an elevated process is met with a consent prompt, which an attacker cannot manipulate with a UIAccess token alone. However, in situations where a user has previously elevated a running process, an attacker would still be able to obtain a UIAccess token and manipulate that process's UI with no additional consent prompts.

This list is not exhaustive; it is likely that edge cases will pop up which will require attention. Fortunately, Administrator protection is covered by the Windows Insider Bug Bounty Program, and internal efforts by MORSE and others will continue to identify remaining issues.

A Welcome Security Boundary

We in MORSE review quite a few features in Windows, and we are big fans of Administrator protection. It addresses many gaps left by UAC today and adds protections which, for all intents and purposes, simply did not exist before. The feature is far from complete: usability improvements are needed, and there are some remaining bugs which will take time to resolve. However, the short-term inconvenience is worth the long-term security benefit to users. While Administrator protection will certainly experience some growing pains, even in its current state it is a leap forward for user security. Going forward, we encourage users who prioritize strong security to give Administrator protection a try. If you encounter an issue, send us feedback using the feedback tool.
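For readers who want to check whether a given machine already has Administrator protection turned on, a minimal sketch follows. It assumes the feature is surfaced through the TypeOfAdminApprovalMode value backing the "User Account Control: Configure type of Admin Approval Mode" policy, with a value of 2 indicating Administrator protection; treat the value name and its meaning as assumptions to verify against current documentation.

# Minimal sketch; assumes Administrator protection is reflected in the
# TypeOfAdminApprovalMode policy value (assumed: 2 = Admin Approval Mode with
# Administrator protection). Verify against current documentation.
$key = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"
$mode = (Get-ItemProperty -Path $key -ErrorAction SilentlyContinue).TypeOfAdminApprovalMode
if ($mode -eq 2) {
    "Administrator protection appears to be enabled."
} else {
    "Administrator protection does not appear to be enabled (TypeOfAdminApprovalMode = $mode)."
}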
Lastly, for app developers, we ask that they update their applications to support Administrator protection, as it will eventually become the default option in Windows.

References

1. UAC Bypass - Event Viewer - Penetration Testing Lab
2. Tyranid's Lair: Exploiting Environment Variables in Scheduled Tasks for UAC Bypass
3. Tyranid's Lair: Bypassing UAC in the most Complex Way Possible!
4. Bypassing UAC with SSPI Datagram Contexts
5. Administrator protection on Windows 11 | Microsoft Community Hub