Introducing Microsoft Security Store
Security is being reengineered for the AI era—moving beyond static, rulebound controls and after-the-fact response toward platform-led, machine-speed defense. We recognize that defending against modern threats requires the full strength of an ecosystem, combining our unique expertise and shared threat intelligence. But with so many options out there, it’s tough for security professionals to cut through the noise, and even tougher to navigate long procurement cycles and stitch together tools and data before seeing meaningful improvements. That’s why we built Microsoft Security Store - a storefront designed for security professionals to discover, buy, and deploy security SaaS solutions and AI agents from our ecosystem partners such as Darktrace, Illumio, and BlueVoyant. Security SaaS solutions and AI agents on Security Store integrate with Microsoft Security products, including Sentinel platform, to enhance end-to-end protection. These integrated solutions and agents collaborate intelligently, sharing insights and leveraging AI to enhance critical security tasks like triage, threat hunting, and access management. In Security Store, you can: Buy with confidence – Explore solutions and agents that are validated to integrate with Microsoft Security products, so you know they’ll work in your environment. Listings are organized to make it easy for security professionals to find what’s relevant to their needs. For example, you can filter solutions based on how they integrate with your existing Microsoft Security products. You can also browse listings based on their NIST Cybersecurity Framework functions, covering everything from network security to compliance automation — helping you quickly identify which solutions strengthen the areas that matter most to your security posture. Simplify purchasing – Buy solutions and agents with your existing Microsoft billing account without any additional payment setup. For Azure benefit-eligible offers, eligible purchases contribute to your cloud consumption commitments. You can also purchase negotiated deals through private offers. Accelerate time to value – Deploy agents and their dependencies in just a few steps and start getting value from AI in minutes. Partners offer ready-to-use AI agents that can triage alerts at scale, analyze and retrieve investigation insights in real time, and surface posture and detection gaps with actionable recommendations. A rich ecosystem of solutions and AI agents to elevate security posture In Security Store, you’ll find solutions covering every corner of cybersecurity—threat protection, data security and governance, identity and device management, and more. To give you a flavor of what is available, here are some of the exciting solutions on the store: Darktrace’s ActiveAI Security SaaS solution integrates with Microsoft Security to extend self-learning AI across a customer's entire digital estate, helping detect anomalies and stop novel attacks before they spread. The Darktrace Email Analysis Agent helps SOC teams triage and threat hunt suspicious emails by automating detection of risky attachments, links, and user behaviors using Darktrace Self-Learning AI, integrated with Microsoft Defender and Security Copilot. This unified approach highlights anomalous properties and indicators of compromise, enabling proactive threat hunting and faster, more accurate response. 
Illumio for Microsoft Sentinel combines Illumio Insights with Microsoft Sentinel data lake and Security Copilot to enhance detection and response to cyber threats. It fuses data from Illumio and all the other sources feeding into Sentinel to deliver a unified view of threats across millions of workloads. AI-driven breach containment from Illumio gives SOC analysts, incident responders, and threat hunters unified visibility into lateral traffic threats and attack paths across hybrid and multi-cloud environments, to reduce alert fatigue, prioritize threat investigation, and instantly isolate workloads.

Netskope's Security Service Edge (SSE) platform integrates with Microsoft 365, Defender, Sentinel, Entra, and Purview for identity-driven, label-aware protection across cloud, web, and private apps. Netskope's inline controls (SWG, CASB, ZTNA) and advanced DLP, with Entra signals and Conditional Access, provide real-time, context-rich policies based on user, device, and risk. Telemetry and incidents flow into Defender and Sentinel for automated enrichment and response, ensuring unified visibility, faster investigations, and consistent Zero Trust protection for cloud, data, and AI everywhere.

PERFORMANTA Email Analysis Agent automates deep investigations into email threats, analyzing metadata (headers, indicators, attachments) against threat intelligence to expose phishing attempts. Complementing this, the IAM Supervisor Agent triages identity risks by scrutinizing user activity for signs of credential theft, privilege misuse, or unusual behavior. These agents deliver unified, evidence-backed reports directly to you, providing instant clarity and slashing incident response time.

Tanium Autonomous Endpoint Management (AEM) pairs real-time endpoint visibility with AI-driven automation to keep IT environments healthy and secure at scale. Tanium is integrated with the Microsoft Security suite—including Microsoft Sentinel, Defender for Endpoint, Entra ID, Intune, and Security Copilot. Tanium streams current-state telemetry into Microsoft's security and AI platforms and lets analysts pivot from investigation to remediation without tool switching. Tanium even executes remediation actions from the Sentinel console. The Tanium Security Triage Agent accelerates alert triage, enabling security teams to make swift, informed decisions using Tanium Threat Response alerts and real-time endpoint data.

Walkthrough of Microsoft Security Store

Now that you've seen the types of solutions available in Security Store, let's walk through how to find the right one for your organization. You can get started by going to the Microsoft Security Store portal. From there, you can search and browse solutions that integrate with Microsoft Security products, including a dedicated section for AI agents—all in one place. If you are using Microsoft Security Copilot, you can also open the store from within Security Copilot to find AI agents - read more here. Solutions are grouped by how they align with industry frameworks like NIST CSF 2.0, making it easier to see which areas of security each one supports. You can also filter by integration type—e.g., Defender, Sentinel, Entra, or Purview—and by compliance certifications to narrow results to what fits your environment. To explore a solution, click into its detail page to view descriptions, screenshots, integration details, and pricing.
For AI agents, you’ll also see the tasks they perform, the inputs they require, and the outputs they produce —so you know what to expect before you deploy. Every listing goes through a review process that includes partner verification, security scans on code packages stored in a secure registry to protect against malware, and validation that integrations with Microsoft Security products work as intended. Customers with the right permissions can purchase agents and SaaS solutions directly through Security Store. The process is simple: choose a partner solution or AI agent and complete the purchase in just a few clicks using your existing Microsoft billing account—no new payment setup required. Qualifying SaaS purchases also count toward your Microsoft Azure Consumption Commitment (MACC), helping accelerate budget approvals while adding the security capabilities your organization needs. Security and IT admins can deploy solutions directly from Security Store in just a few steps through a guided experience. The deployment process automatically provisions the resources each solution needs—such as Security Copilot agents and Microsoft Sentinel data lake notebook jobs—so you don’t have to do so manually. Agents are deployed into Security Copilot, which is built with security in mind, providing controls like granular agent permissions and audit trails, giving admins visibility and governance. Once deployment is complete, your agent is ready to configure and use so you can start applying AI to expand detection coverage, respond faster, and improve operational efficiency. Security and IT admins can view and manage all purchased solutions from the “My Solutions” page and easily navigate to Microsoft Cost Management tools to track spending and manage subscriptions. Partners: grow your business with Microsoft For security partners, Security Store opens a powerful new channel to reach customers, monetize differentiated solutions, and grow with Microsoft. We will showcase select solutions across relevant Microsoft Security experiences, starting with Security Copilot, so your offerings appear in the right context for the right audience. You can monetize both SaaS solutions and AI agents through built-in commerce capabilities, while tapping into Microsoft’s go-to-market incentives. For agent builders, it’s even simpler—we handle the entire commerce lifecycle, including billing and entitlement, so you don’t have to build any infrastructure. You focus on embedding your security expertise into the agent, and we take care of the rest to deliver a seamless purchase experience for customers. Security Store is built on top of Microsoft Marketplace, which means partners publish their solution or agent through the Microsoft Partner Center - the central hub for managing all marketplace offers. From there, create or update your offer with details about how your solution integrates with Microsoft Security so customers can easily discover it in Security Store. Next, upload your deployable package to the Security Store registry, which is encrypted for protection. Then define your license model, terms, and pricing so customers know exactly what to expect. Before your offer goes live, it goes through certification checks that include malware and virus scans, schema validation, and solution validation. These steps help give customers confidence that your solutions meet Microsoft’s integration standards. 
Get started today

By creating a storefront optimized for security professionals, we are making it simple to find, buy, and deploy solutions and AI agents that work together. Microsoft Security Store helps you put the right AI-powered tools in place so your team can focus on what matters most—defending against attackers with speed and confidence. Get started today by visiting Microsoft Security Store. If you're a partner looking to grow your business with Microsoft, start by visiting Microsoft Security Store - Partner with Microsoft to become a partner. Partners can list their solution or agent if their solution has a qualifying integration with Microsoft Security products, such as a Sentinel connector or Security Copilot agent, or another qualifying MISA solution integration. You can learn more about qualifying integrations and the listing process in our documentation here.

Cybersecurity: What Every Business Leader Needs to Know Now
As a Senior Cybersecurity Solution Architect, I’ve had the privilege of supporting organisations across the United Kingdom, Europe, and the United States—spanning sectors from finance to healthcare—in strengthening their security posture. One thing has become abundantly clear: cybersecurity is no longer the sole domain of IT departments. It is a strategic imperative that demands attention at board-level. This guide distils five key lessons drawn from real-world engagements to help executive leaders navigate today’s evolving threat landscape. These insights are not merely technical—they are cultural, operational, and strategic. If you’re a C-level executive, this article is a call to action: reassess how your organisation approaches cybersecurity before the next breach forces the conversation. In this article, I share five lessons (and quotes) from the field that help demystify how to enhance an organisation’s security posture. 1. Shift the Mindset “This has always been our approach, and we’ve never experienced a breach—so why should we change it?” A significant barrier to effective cybersecurity lies not in the sophistication of attackers, but in the predictability of human behaviour. If you’ve never experienced a breach, it’s tempting to maintain the status quo. However, as threats evolve, so too must your defences. Many cyber threats exploit well-known vulnerabilities that remain unpatched or rely on individuals performing routine tasks in familiar ways. Human nature tends to favour comfort and habit—traits that adversaries are adept at exploiting. Unlike many organisations, attackers readily adopt new technologies to advance their objectives, including AI-powered ransomware to execute increasingly sophisticated attacks. It is therefore imperative to recognise—without delay—that the advent of AI has dramatically reduced both the effort and time required to compromise systems. As the UK’s National Cyber Security Centre (NCSC) has stated: “AI lowers the barrier for novice cyber criminals, hackers-for-hire and hacktivists to carry out effective access and information gathering operations. This enhanced access will likely contribute to the global ransomware threat over the next two years.” Similarly, McKinsey & Company observed: “As AI quickly advances cyber threats, organisations seem to be taking a more cautious approach, balancing the benefits and risks of the new technology while trying to keep pace with attackers’ increasing sophistication.” To counter this evolving threat landscape, organisations must proactively leverage AI in their cyber defence strategies. Examples include: Identity and Access Management (IAM): AI enhances IAM by analysing real-time signals across systems to detect risky sign-ins and enforce adaptive access controls. Example: Microsoft Entra Agents for Conditional Access use AI to automate policy recommendations, streamlining access decisions with minimal manual input. Figure 1: Microsoft Entra Agents Threat Detection: AI accelerates detection, response, and recovery, helping organisations stay ahead of sophisticated threats. Example: Microsoft Defender for Cloud’s AI threat protection identifies prompt injection, data poisoning, and wallet attacks in real time. Incident Response: AI facilitates real-time decision-making, removing emotional bias and accelerating containment and recovery during security incidents. Example: Automatic Attack Disruption in Defender XDR, which can automatically contain a breach in progress. 
AI Security Posture Management AI workloads require continuous discovery, classification, and protection across multi-cloud environments. Example: Microsoft Defender for Cloud’s AI Security Posture Management secures custom AI apps across Azure, AWS, and GCP by detecting misconfigurations, vulnerabilities, and compliance gaps. Data Security Posture Management (DSPM) for AI AI interactions must be governed to ensure privacy, compliance, and insider risk mitigation. Example: Microsoft Purview DSPM for AI enables prompt auditing, applies Data Loss Prevention (DLP) policies to third-party AI apps like ChatGPT, and supports eDiscovery and lifecycle management. AI Threat Protection Organisations must address emerging AI threat vectors, including prompt injection, data leakage, and model exploitation. Example: Defender for AI (private preview) provides model-level security, including governance, anomaly detection, and lifecycle protection. Embracing innovation, automation, and intelligent defence is the secret sauce for cyber resilience in 2026. 2. Avoid One-Off Purchases – Invest with a Strategy “One MDE and one Sentinel to go, please.” Organisations often approach me intending to purchase a specific cybersecurity product—such as Microsoft Defender for Endpoint (MDE)—without a clearly articulated strategic rationale. My immediate question is: what is the broader objective behind this purchase? Is it driven by perceived value or popularity, or does it form part of a well-considered strategy to enhance endpoint security? Cybersecurity investments should be guided by a long-term, holistic strategy that spans multiple years and is periodically reassessed to reflect evolving threats. Strengthening endpoint protection must be integrated into a wider effort to improve the organisation’s overall security posture. This includes ensuring seamless integration between security solutions and avoiding operational silos. For example, deploying robust endpoint protection is of limited value if identities are not safeguarded with multi-factor authentication (MFA), or if storage accounts remain publicly accessible. A cohesive and forward-looking approach ensures that all components of the security architecture work in concert to mitigate risk effectively. Security Adoption Journey (Based on Zero Trust Framework) Assess – Evaluate the threat landscape, attack surface, vulnerabilities, compliance obligations, and critical assets. Align – Link security objectives to broader business goals to ensure strategic coherence. Architect – Design integrated and scalable security solutions, addressing gaps and eliminating operational silos. Activate – Implement tools with robust governance and automation to ensure consistent policy enforcement. Advance – Continuously monitor, test, and refine the security posture to stay ahead of evolving threats. Security tools are not fast food—they work best as part of a long-term plan, not a one-off order. This piecemeal approach runs counter to the modern Zero Trust security model, which assumes no single tool will prevent every breach and instead implements layered defences and integration. 3. 
Legacy Systems Are Holding You Back “Unfortunately, we are unable to implement phishing-resistant MFA, as our legacy app does not support integration with the required protocols.” A common challenge faced by many organisations I have worked with is the constraint on innovation within their cybersecurity architecture, primarily due to continued reliance on legacy applications—often driven by budgetary or operational necessity. These outdated systems frequently lack compatibility with modern security technologies and may introduce significant vulnerabilities. A notable example is the deployment of phishing-resistant multi-factor authentication (MFA)—such as FIDO2 security keys or certificate-based authentication—which requires advanced identity protocols and conditional access policies. These capabilities are available exclusively through Microsoft Entra ID. To address this issue effectively, it is essential to design security frameworks based on the organisation’s future aspirations rather than its current limitations. By adopting a forward-thinking approach, organisations can remain receptive to emerging technologies that align with their strategic cybersecurity objectives. Moreover, this perspective encourages investment in acquiring the necessary talent, thereby reducing reliance on extensive change management and staff retraining. I advise designing for where you want to be in the next 1–3 years—ideally cloud-first and identity-driven—essentially adopting a Zero Trust architecture, rather than being constrained by the limitations of legacy systems. 4. Collaboration Is a Security Imperative “This item will need to be added to the dev team's backlog. Given their current workload, they will do their best to implement GitHub Security in Q3, subject to capacity.” Cybersecurity threats may originate from various parts of an organisation, and one of the principal challenges many face is the fragmented nature of their defence strategies. To effectively mitigate such risks, cybersecurity must be embedded across all departments and functions, rather than being confined to a single team or role. In many organisations, the Chief Information Security Officer (CISO) operates in isolation from other C-level executives, which can limit their influence and complicate the implementation of security measures across the enterprise. Furthermore, some teams may lack the requisite expertise to execute essential security practices. For instance, an R&D lead responsible for managing developers may not possess the necessary skills in DevSecOps. To address these challenges, it is vital to ensure that the CISO is empowered to act without political or organisational barriers and is supported in implementing security measures across all business units. When the CISO has backing from the COO and HR, initiatives such as MFA rollout happen faster and more thoroughly. 
Cross-Functional Security Responsibilities

R&D: Adopt DevSecOps practices; identify vulnerabilities early; manage code dependencies; detect exposed secrets; embed security in CI/CD pipelines.
CIO: Ensure visibility over organizational data; implement Data Loss Prevention (DLP); safeguard the sensitive data lifecycle; ensure regulatory compliance.
CTO: Secure cloud environments (CSPM); manage SaaS security posture (SSPM); ensure hardware and endpoint protection.
COO: Protect digital assets; secure domain management; mitigate impersonation threats; safeguard digital marketing channels and customer PII.
Support & Vendors: Deliver targeted training; prevent social engineering attacks; improve awareness of threat vectors.
HR: Train employees on AI-related threats; manage insider risks; secure employee data; oversee cybersecurity across the employee lifecycle.

Empowering the CISO to act across departments helps organisations shift towards a security-first culture—embedding cybersecurity into every function, not just IT.

5. Compliance Is Not Security

"We're compliant, so we must be secure."

Many organisations mistakenly equate passing audits—such as ISO 27001 or SOC 2—with being secure. While compliance frameworks help establish a baseline for security, they are not a guarantee of protection. Determined attackers are not deterred by audit checklists; they exploit gaps, misconfigurations, and human error regardless of whether an organisation is certified. Moreover, due to the rapidly evolving nature of the cyber threat landscape, compliance frameworks often struggle to keep pace. By the time a standard is updated, attackers may already be exploiting new techniques that fall outside its scope. This lag creates a false sense of security for organisations that rely solely on regulatory checkboxes. Security is a continuous risk management process—not a one-time certification. It must be embedded into every layer of the enterprise and treated with the same urgency as other core business priorities. Compliance may be the starting line, not the finish line. Effective security goes beyond meeting regulatory requirements—it demands ongoing vigilance, adaptability, and a proactive mindset.

Conclusion: Cybersecurity Is a Continuous Discipline

Cybersecurity is not a destination—it is a continuous journey. By embracing strategic thinking, cross-functional collaboration, and emerging technologies, organisations can build resilience against today's threats and tomorrow's unknowns. The lessons shared throughout this article are not merely technical—they are cultural, operational, and strategic. If there is one key takeaway, it is this: avoid piecemeal fixes and instead adopt an integrated, future-ready security strategy. Due to the rapidly evolving nature of the cyber threat landscape, compliance frameworks alone cannot keep pace. Security must be treated as a dynamic, ongoing process—one that is embedded into every layer of the enterprise and reviewed regularly. Organisations should conduct periodic security posture reviews, leveraging tools such as Microsoft Secure Score or monthly risk reports, and stay informed about emerging threats through threat intelligence feeds and resources like the Microsoft Digital Defence Report, CISA (Cybersecurity and Infrastructure Security Agency), NCSC (UK National Cyber Security Centre), and other open-source intelligence platforms.
As Ann Johnson aptly stated in her blog: "The most prepared organisations are those that keep asking the right questions and refining their approach together." Cyber resilience demands ongoing investment—in people (through training and simulation drills), in processes (via playbooks and frameworks), and in technology (through updates and adoption of AI-driven defences). To reduce cybersecurity risk over time, resilient organisations must continually refine their approach and treat cybersecurity as an ongoing discipline. The time to act is now.

Resources:
https://www.ncsc.gov.uk/report/impact-of-ai-on-cyber-threat
Defend against cyber threats with AI solutions from Microsoft - Microsoft Industry Blogs
Generative AI Cybersecurity Solutions | Microsoft Security
Require phishing-resistant multifactor authentication for Microsoft Entra administrator roles - Microsoft Entra ID | Microsoft Learn
AI is the greatest threat—and defense—in cybersecurity today. Here's why.
Microsoft Entra Agents - Microsoft Entra | Microsoft Learn
Smarter identity security starts with AI
https://www.microsoft.com/en-us/security/blog/2025/06/12/cyber-resilience-begins-before-the-crisis/
https://www.microsoft.com/en-us/security/security-insider/threat-landscape/microsoft-digital-defense-report-2023-critical-cybersecurity-challenges

Step by Step: 2-Tier PKI Lab
Purpose of this blog Public Key Infrastructure (PKI) is the backbone of secure digital identity management, enabling encryption, digital signatures, and certificate-based authentication. However, neither setting up a PKI nor management of certificates is something most IT pros do on a regular basis and given the complexity and vastness of the subject it only makes sense to revisit the topic from time to time. What I have found works best for me is to just set up a lab and get my hands dirty with the topic that I want to revisit. One such topic that I keep coming back to is PKI - be it for creating certificate templates, enrolling clients, or flat out creating a new PKI itself. But every time I start deploying a lab or start planning a PKI setup, I end up spending too much time sifting through the documentations and trying to figure out why my issuing certificate authority won't come online! To make my life easier I decided to create a cheatsheet to deploy a simple but secure 2-tier PKI lab based on industry best practices that I thought would be beneficial for others like me, so I decided to polish it and make it into a blog. This blog walks through deploying a two-tier PKI hierarchy using Active Directory Certificate Services (AD CS) on Windows Server: an offline Root Certification Authority (Root CA) and an online Issuing Certification Authority (Issuing CA). We’ll cover step-by-step deployment and best practices for securing the root CA, conducting key ceremonies, and maintaining Certificate Revocation Lists (CRLs). Overview: Two-Tier PKI Architecture and Components In a two-tier PKI, the Root CA sits at the top of the trust hierarchy and issues a certificate only to the subordinate Issuing CA. The Root CA is kept offline (disconnected from networks) to protect its private key and is typically a standalone CA (not domain-joined). The Issuing CA (sometimes called a subordinate or intermediate CA) is kept online to issue certificates to end-entities (users, computers, services) and is usually an enterprise CA integrated with Active Directory for automation and certificate template support. Key components: Offline Root CA: A standalone CA, often on a workgroup server, powered on only when necessary (initial setup, subordinate CA certificate signing, or periodic CRL publishing). By staying offline, it is insulated from network threats. Its self-signed certificate serves as the trust anchor for the entire PKI. The Root CA’s private key must be rigorously protected (ideally by a Hardware Security Module) because if the root is compromised, all certificates in the hierarchy are compromised. Online Issuing CA: An enterprise subordinate CA (domain-joined) that handles day-to-day certificate issuance for the organization. It trusts the Root CA (via the root’s certificate) and is the one actually responding to certificate requests. Being online, it must also be secured, but its key is kept online for operations. Typically, the Issuing CA publishes certificates and CRLs to Active Directory and/or HTTP locations for clients to download. 
The following diagram shows a simplified view of this implementation. The table below summarizes the roles and differences:

Role – Offline Root CA: Standalone Root CA (workgroup). Online Issuing CA: Enterprise Subordinate CA (domain member).
Network Connectivity – Offline Root CA: kept offline (powered off or disconnected when not issuing). Online Issuing CA: online (running continuously to serve requests).
Usage – Offline Root CA: signs only one certificate (the subordinate CA's cert) and CRLs. Online Issuing CA: issues end-entity certificates (users, computers, services).
Active Directory – Offline Root CA: not a member of the AD domain; doesn't use templates or auto-enrollment. Online Issuing CA: integrated with AD DS; uses certificate templates for streamlined issuance.
Security – Offline Root CA: extremely high: physically secured, limited access, often protected by an HSM. Online Issuing CA: very high: server hardened, but accessible on the network; HSM recommended for the private key.
CRL Publication – Offline Root CA: manual; an admin must periodically connect, generate, and distribute the CRL; delta CRLs usually disabled. Online Issuing CA: automatic; publishes CRLs to configured CDP locations (AD DS, HTTP) at scheduled intervals.
Validity Period – Offline Root CA: longer (e.g., 5-10+ years for the CA certificate) to reduce the frequency of renewal. Online Issuing CA: shorter (e.g., 2 years) to align with organizational policy; renewed under the root when needed.

In this lab setup, we will create a Contoso Root CA (offline) and a Contoso Issuing CA (online) as an example. This mirrors the real-world best practice of deploying a standalone offline root CA and an online enterprise subordinate CA.

Deploying the Offline Root CA

Setting up the offline Root CA involves preparing a dedicated server, installing AD CS, configuring it as a root CA, and then securing it. We'll also configure certificate CDP/AIA (CRL Distribution Point and Authority Information Access) locations so that issued certificates will point clients to the correct locations to fetch the CA's certificate and revocation list.

Step 1: Prepare the Root CA Server (Offline)

Provision an isolated server: Install a Windows Server OS (e.g., Windows Server 2022) on the machine designated to be the Root CA, preferably a portable, enterprise-grade physical server that can be stored in a safe. Do not join this server to any domain – it should function in a workgroup to remain independent of your AD forest.
System configuration: Give the server a descriptive name (e.g., ROOTCA) and assign a static IP (even though it will be offline, a static IP helps when connecting it temporarily for management). Install the latest updates and security patches while it's still able to go online.
Lock down network access: Once setup is complete, disable or unplug network connections. If the server must remain powered on for any reason, ensure all unnecessary services/ports are disabled to minimize exposure. In practice, you will keep this server shut down or physically disconnected except when performing CA maintenance.

Step 2: Install the AD CS Role on the Root CA

Add the Certification Authority role: On the Root CA server, open Server Manager and add the Active Directory Certificate Services role. During the wizard, select the Certification Authority role service (no need for web enrollment or others on the root). Proceed through the wizard and complete the installation. You can also install the CA role and management tools via PowerShell:

Install-WindowsFeature AD-Certificate -IncludeManagementTools

Post-install configuration: After the binary installation, click Configure Active Directory Certificate Services (a notification in Server Manager). In the configuration wizard:

Role Services: Choose Certification Authority.
Setup Type: Select Standalone CA (since this root CA is not domain-joined).
CA Type: Select Root CA.
Private Key: Choose "Create a new private key."
Cryptography: If using an HSM, select the HSM's Cryptographic Service Provider (CSP) here; otherwise use the default. Choose a strong key length (e.g., 2048 or 4096 bits) and a secure hash algorithm (SHA-256 or higher).
CA Name: Provide a common name for the CA (e.g., "Contoso Root CA"). This name will appear in issued certificates as the Issuer. Avoid using a machine DNS name here for security – pick a name without revealing the server's actual hostname.
Validity Period: Set a long validity (e.g., 10 years) for the root CA's self-signed certificate. A decade is common for enterprise roots, reducing how often you must touch the offline CA for renewal.
Database: Specify locations for the CA database and logs (the defaults are fine for a lab).

Review the settings and complete the configuration. This process will generate the root CA's key pair and self-signed certificate, establishing the Root CA.

You can also perform this configuration via PowerShell in one line:

Install-AdcsCertificationAuthority `
  -CAType StandaloneRootCA `
  -CryptoProviderName "YourHSMProvider" `
  -HashAlgorithmName SHA256 -KeyLength 2048 `
  -CACommonName "Contoso Root CA" `
  -ValidityPeriod Years -ValidityPeriodUnits 10

This would set up a standalone Root CA named "Contoso Root CA" with a 2048-bit key on an HSM provider, valid for 10 years.

Step 3: Integrate an HSM (Optional but Recommended)

If your lab has a Hardware Security Module, use it to secure the Root CA's keys. An HSM provides dedicated, tamper-resistant storage for CA private keys and can further protect against key compromise. To integrate:

Install the HSM vendor's software and drivers on the Root CA server.
Initialize the HSM and create a security world or partition as per the vendor instructions.
Before or during the CA configuration (Step 2 above), ensure the HSM is ready to generate/store the key.
When running the AD CS configuration, select the HSM's CSP/KSP for the cryptographic provider so that the CA's private key is generated on the HSM.
Secure any HSM admin tokens or smartcards. For a root CA, you might employ M of N key splits – requiring multiple key custodians to collaborate to activate the HSM or key – as part of the key ceremony (discussed later).

(If an HSM is not available, the root key will be stored on the server's disk. At minimum, protect it with a strong admin passphrase when prompted, and consider enabling the option to require administrator interaction (e.g., a password) whenever the key is accessed.)

Step 4: Configure CA Extensions (CDP/AIA)

It's critical to configure how the Root CA publishes its certificate and revocation list, since the root is offline and cannot use Active Directory auto-publishing. Open the Certification Authority management console (certsrv.msc), right-click the CA name > Properties, and go to the Extensions tab. We will set the CRL Distribution Points (CDP) and Authority Information Access (AIA) URLs:

CRL Distribution Point (CDP): This is where certificates will tell clients to fetch the CRL for the Root CA. By default, a standalone CA might have a file:// path or no HTTP URL.
Click Add and specify an HTTP URL that will be accessible to all network clients, such as: http://<IssuingCA_Server>/CertEnroll/<CaName><CRLNameSuffix><DeltaCRLAllowed>.crl For example, if your issuing CA’s server name is ISSUINGCA.contoso.local, the URL might be http://issuingca.contoso.local/CertEnroll/Contoso%20Root%20CA.crl This assumes the Issuing CA (or another web server) will host the Root CA’s CRL in the CertEnroll directory. Check the boxes for “Include in the CDP extension of issued certificates” and “Include in all CRLs. Clients use this to find Delta CRLs” (you can uncheck the delta CRL publication on the root, as we won’t use delta CRLs on an offline root). Since the root CA won’t often revoke its single issued cert (the subordinate CA), delta CRLs aren’t necessary. Note: If your Active Directory is in use and you want to publish the Root CA’s CRL to AD, you can also add an ldap:///CN=... path and check “Publish in Active Directory”. However, publishing to AD from an offline CA must be done manually using the following command when the root is temporarily connected. certutil -dspublish Many setups skip LDAP for offline roots and rely on HTTP distribution. Authority Information Access (AIA): This is where the Root CA’s certificate will be published for clients to download (to build certificate chains). Add an HTTP URL similarly, for example: http://<IssuingCA_Server>/CertEnroll/<ServerDNSName>_<CaName><CertificateName>.crt This would point to a copy of the Root CA’s certificate that will be hosted on the issuing CA web server. Check “Include in the AIA extension of issued certificates”. This way, any certificate signed by the Root CA (like your subordinate CA’s cert) contains a URL where clients can fetch the Root CA’s cert if they don’t already have it. After adding these, remove any default entries that are not applicable (e.g., LDAP if the root isn’t going to publish to AD, or file paths that won’t be used by clients). These settings ensure that certificates issued by the Root CA (in practice, just the subordinate CA’s certificate) will carry the correct URLs for chain building and revocation checking. Step 5: Back Up the Root CA and Issue the Subordinate Certificate With the Root CA configured, we need to issue a certificate for the Issuing CA (subordinate). We’ll perform that in the next section from the Issuing CA’s side via a request file. Before taking the root offline, ensure you: Back up the CA’s private key and certificate: In the Certification Authority console, or via the CA Backup wizard, export the Root CA’s key pair and CA certificate. Protect this backup (store it offline in a secure location, e.g., on encrypted removable media in a safe). This backup is crucial for disaster recovery or if the Root CA needs to be migrated or restored. Save the Root CA Certificate: You will need the Root CA’s public certificate (*.crt) to distribute to other systems. Have it exported (Base-64 or DER format) for use on the Issuing CA and for clients. Initial CRL publication: Manually publish the first CRL so that it can be distributed. Open an elevated Command Prompt on the Root CA and run: certutil -crl This generates a new CRL file (in the CA’s configured CRL folder, typically %windir%\system32\CertSrv\CertEnroll). Take that CRL file and copy it to the designated distribution point (for example, to the CertEnroll directory on the Issuing CA’s web server, as per the HTTP URL configured). 
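As a side note, the Step 4 extension settings can also be applied from the command line instead of the Extensions tab. The sketch below uses the example hostname from above; the numeric prefixes are publication bit flags (1 publishes to that location, 2 includes the URL in the CDP or AIA extension of issued certificates), and the %-tokens are the CA's built-in variables (%3 = CA name, %8 = CRL name suffix, %9 = delta CRL allowed, %1 = server DNS name, %4 = certificate name). Treat it as a lab sketch rather than the only valid flag combination.

# Run on the Root CA in an elevated PowerShell prompt
certutil -setreg CA\CRLPublicationURLs "1:$env:windir\system32\CertSrv\CertEnroll\%3%8%9.crl\n2:http://issuingca.contoso.local/CertEnroll/%3%8%9.crl"
certutil -setreg CA\CACertPublicationURLs "1:$env:windir\system32\CertSrv\CertEnroll\%1_%3%4.crt\n2:http://issuingca.contoso.local/CertEnroll/%1_%3%4.crt"
Restart-Service certsvc   # the CA service must be restarted to pick up the new URLs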
If using Active Directory for CRL distribution, you would also publish it to AD now (e.g., certutil -dspublish -f RootCA.crl on a domain-connected machine). In most lab setups, copying to an HTTP share is sufficient. With these tasks done, the Root CA is ready. At this point, disconnect or power off the Root CA and store it securely – it should remain offline except when it’s absolutely needed (like publishing a new CRL or renewing the subordinate CA’s certificate in the far future). Keeping the root CA offline maximizes its security by minimizing exposure to compromise. Best Practices for Securing the Root CA: The Root CA is the trust anchor, so apply stringent security practices: Physical security: Store the Root CA machine in a locked, secure location. If it’s a virtual machine, consider storing it on a disconnected hypervisor or a USB drive locked in a safe. Only authorized PKI team members should have access. An offline CA should be treated like crown jewels – offline CAs should be stored in secure locations. Minimal exposure: Keep the Root CA powered off and disconnected when not in use. It should not be left running or connected to any network. Routine operations (like issuing end-entity certs) should never involve the root. Admin access control: Limit administrative access on the Root CA server. Use dedicated accounts for PKI administration. Enable auditing on the CA for any changes or issuance events. No additional roles or software: Do not use the Root CA server for any other function (no web browsing, no email, etc.). Fewer installed components means fewer potential vulnerabilities. Protect the private key: Use an HSM if possible; if not, ensure the key is at least protected by a strong password and consider splitting knowledge of that password among multiple people (so no single person can activate the CA). Many organizations opt for an offline root key ceremony (see below) to generate and handle the root key with multiple witnesses and strict procedures. Keep system time and settings consistent: If the Root CA is powered off for long periods, ensure its clock is accurate whenever it is started (to avoid issuing a CRL or certificate with a wrong date). Don’t change the server name or CA name after installation (doing so invalidates issued certs). Periodic health checks: Even though offline, plan to turn on the Root CA at a secure interval (e.g., semi-annually or annually) to perform tasks like CRL publishing and system updates. Make sure to apply OS security updates during these maintenance windows, as offline does not mean immune to vulnerabilities (especially if it ever connects to a network for CRL publication or uses removable media). Deploying the Online Issuing CA Next, set up the Issuing CA server which will actually issue certificates to end entities in the lab. This server will be domain-joined (if using AD integration) and will obtain its CA certificate from the Root CA we just configured. Step 1: Prepare the Issuing CA Server Provision the server: Install Windows Server on a new machine (or VM) that will be the Issuing CA. Join this server to the Active Directory domain (e.g., Contoso.local). Being an enterprise CA, it needs domain membership to publish templates and integrate with AD security groups. Rename the server to something descriptive like ISSUINGCA for clarity. Assign a static IP and ensure it can communicate on the network. 
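If you are building the lab VM from scratch, this basic server preparation can be scripted. A minimal sketch follows; the computer name, interface alias, IP addresses, DNS server, and domain name are placeholders for this walkthrough.

# Minimal lab prep for the Issuing CA VM
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 10.0.0.20 -PrefixLength 24 -DefaultGateway 10.0.0.1
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 10.0.0.10
# Rename and domain-join in one step, then reboot
Add-Computer -DomainName "contoso.local" -NewName "ISSUINGCA" -Credential (Get-Credential) -Restart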
IIS for web enrollment (optional): If you plan to use the Web Enrollment or Certificate Enrollment Web Services, ensure IIS is installed. (The AD CS installation wizard can add it if you include those role services.) For this guide, we will include the Web Enrollment role so that the CertEnroll directory is set up for hosting certificate and CRL files. Step 2: Install AD CS Role on Issuing CA On the Issuing CA server, add the Active Directory Certificate Services role via Server Manager or PowerShell. This time, select both Certification Authority and Certification Authority Web Enrollment role services (Web Enrollment will set up the HTTP endpoints for certificate requests if needed). For example, using PowerShell: Install-WindowsFeature AD-Certificate, ADCS-Web-Enrollment -IncludeManagementTools After installation, launch the AD CS configuration wizard: Role Services: Choose Certification Authority (and Web Enrollment if prompted). Setup Type: Select Enterprise CA (since this CA will integrate with AD DS). CA Type: Select Subordinate CA (this indicates it will get its cert from an existing root CA). Private Key: Choose “Create a new private key” (we’ll generate a new key pair for this CA). Cryptography: If using an HSM here as well, select the HSM’s CSP/KSP for the issuing CA’s key. Otherwise, choose a strong key length (2048+ bits, SHA256 or better for hash). CA Name: Provide a name (e.g., “Contoso Issuing CA”). This name will appear as the Issuer on certificates it issues. Certificate Request: The wizard will ask how you want to get the subordinate CA’s certificate. Choose “Save a certificate request to file”. Specify a path, e.g., C:\CertRequest\issuingCA.req. The wizard will generate a request file that we need to take to the Root CA for signing. (Since our Root CA is offline, this file transfer might be via secure USB or a network share when the root is temporarily online.) CA Database: Choose locations or accept defaults for the certificate DB and logs. Finish the configuration wizard, which will complete pending because the CA doesn’t have a certificate yet. The AD CS service on this server won’t start until we import the issued cert from the root. Step 3: Integrate HSM on Issuing CA (Optional) If available, repeat the HSM setup on the Issuing CA: install HSM drivers, initialize it, and generate/secure the key for the subordinate CA on the HSM. Ensure you chose the HSM provider during the above configuration so that the issuing CA’s private key is stored in the HSM. Even though this CA is online, an HSM still greatly enhances security by protecting the private key from extraction. The issuing CA’s HSM may not require multiple custodians to activate (as it needs to run continuously), but should still be physically secured. Step 4: Obtain the Issuing CA’s Certificate from the Root CA Now we have a pending request (issuingCA.req) for the subordinate CA. To get its certificate: Transport the request to the Root CA: Copy the request file to the offline Root CA (via secure means – e.g., formatted new USB stick). Start up the Root CA (in a secure, offline setting) and open the Certification Authority console. Submit the request on Root CA: Right-click the Root CA in the CA console -> All Tasks -> Submit new request, and select the .req file. The request will appear in the Pending Requests on the root. Issue the subordinate CA certificate: Find the pending request (it will list the Issuing CA’s name). Right-click and choose All Tasks > Issue. 
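For reference, the request generation from Step 2 and the root-side submission and issuance just described can also be scripted. A sketch using this lab's names and paths is below: run the first command on the Issuing CA, the remaining commands on the Root CA after copying the .req file across, and substitute the request ID that the submission reports for the <RequestID> placeholder.

# On the Issuing CA: configure a pending Enterprise Subordinate CA and write the request file
Install-AdcsCertificationAuthority `
  -CAType EnterpriseSubordinateCA `
  -CACommonName "Contoso Issuing CA" `
  -HashAlgorithmName SHA256 -KeyLength 4096 `
  -OutputCertRequestFile "C:\CertRequest\issuingCA.req"

# On the Root CA: submit the request, note the request ID it reports, then issue it
certreq -submit -config "ROOTCA\Contoso Root CA" C:\CertRequest\issuingCA.req
certutil -resubmit <RequestID>   # approves the pending request so the root signs it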
The subordinate CA's certificate is now issued by the Root CA. Export the issued certificate: Still on the Root CA, go to Issued Certificates, find the newly issued subordinate CA cert (you can identify it by the Request ID or by the name). Right-click it and choose Open or All Tasks > Export to get the certificate in file form. If the console's built-in "Export" only allows a binary format, you can alternatively retrieve it from the CA database by Request ID with certreq: certreq -retrieve -config "ROOTCA\Contoso Root CA" <RequestID> .\ContosoIssuingCA.cer, or simply open the certificate and copy it to a file. Save the certificate as issuingCA.cer. Also make sure you have a copy of the Root CA's certificate (if not already done). Publish Root CA cert and CRL as needed: Before leaving the Root CA, you may also want to ensure the root's own certificate and latest CRL are available to the issuing CA and clients. If not already done in Step 5 of root deployment, export the Root CA cert (DER format) and copy the CRL file. You might use certutil -crl again if some time has passed since the initial CRL. Now take the issuingCA.cer file (and root cert/CRL files) and move them back to the Issuing CA server.

Step 5: Install the Issuing CA's Certificate and Complete Configuration

On the Issuing CA server (which is still waiting for its CA cert): Install the subordinate CA certificate: In Server Manager or the Certification Authority console on the Issuing CA, there should be an option to "Install CA Certificate" (if the AD CS configuration wizard is still open, it will prompt for the file; otherwise, in the CA console right-click the CA name > All Tasks > Install CA Certificate). Provide the issuingCA.cer file obtained from the root. This will install the CA's own certificate and start the CA service. The Issuing CA is now operational as a subordinate CA. Alternatively, use PowerShell: certutil -installcert C:\CertRequest\issuingCA.cer This installs the cert and associates it with the pending key. Trust the Root CA certificate: Because the Issuing CA is domain-joined, when you install the subordinate cert, it might automatically place the Root CA's certificate in the Trusted Root Certification Authorities store on that server (and possibly publish it to AD). If not, you should manually install the Root CA's certificate into the Trusted Root CA store on the Issuing CA machine (using the Certificates MMC or certutil -addstore -f Root rootCA.cer). This step prevents any "chain not trusted" warnings on the Issuing CA and ensures it trusts its parent. In an enterprise environment, you would also distribute the root certificate to all client machines (e.g., via Group Policy) so that they trust the whole chain. Import Root CRL: Copy the Root CA's CRL (*.crl file) to the Issuing CA's CRL distribution point location (e.g., C:\Windows\System32\CertSrv\CertEnroll\ if that's the directory served by the web server). This matches the HTTP URL we configured on the root. Place the CRL file there and ensure it is accessible (the Issuing CA's IIS might need to serve static .crl files; often, if Web Enrollment is installed, the CertEnroll folder is under C:\Inetpub\wwwroot\CertEnroll). At this point, the subordinate CA and any client hitting the HTTP URL can retrieve the root's CRL. The subordinate CA is now fully established. It holds a certificate issued by the Root CA (forming a complete chain of trust), and it's ready to issue end-entity certificates.
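If you prefer to perform the whole of Step 5 from an elevated PowerShell prompt on the Issuing CA, a consolidated sketch is below. The C:\CertTransfer folder and file names are just example names for this walkthrough, and the CertEnroll path may instead be under C:\Inetpub\wwwroot depending on how Web Enrollment was set up.

# Trust the offline Root CA on this server (distribute it to clients via GPO in a real environment)
certutil -addstore -f Root C:\CertTransfer\ContosoRootCA.cer

# Install the subordinate CA certificate issued by the root, then start the CA service
certutil -installcert C:\CertTransfer\issuingCA.cer
Start-Service certsvc

# Host the root CA's certificate and CRL at the HTTP AIA/CDP location configured earlier
Copy-Item C:\CertTransfer\ContosoRootCA.cer "$env:windir\System32\CertSrv\CertEnroll\"
Copy-Item C:\CertTransfer\ContosoRootCA.crl "$env:windir\System32\CertSrv\CertEnroll\"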
Step 6: Configure Issuing CA Settings and Start Services Start the Certificate Services: If the CA service (CertSvc) isn’t started automatically, start or restart it. On PowerShell: Restart-Service certsvc The CA should show as running in the CA console with the name “Contoso Issuing CA” (or your chosen name). Configure Certificate Templates: Because this is an Enterprise CA, it can utilize certificate templates stored in Active Directory to simplify issuing common cert types (user auth, computer auth, web server SSL, etc.). By default, some templates (e.g., User, Computer) are available but not issued. In the Certification Authority console under Certificate Templates, you can choose which templates to issue (e.g., right-click > New > Certificate Template to Issue, then select templates like “User” or “Computer”). This lab guide doesn’t require specific templates but know that only Enterprise CAs can use templates. Templates define the policies and settings (cryptography, enrollment permissions, etc.) for issued certificates. Ensure you enable only the templates needed and configure their permissions appropriately (e.g., allow the appropriate groups to enroll). Set CRL publishing schedule: The Issuing CA will automatically publish its own CRL (for certificates it issues) at intervals. You can adjust the CRL and Delta CRL publication interval in the CA’s Properties > CRL Period. A common practice is a small base CRL period (e.g., 1 week or 2 weeks) for issuing CAs, because they may revoke user certs more frequently; and enable Delta CRLs (published daily) for timely revocation information. Make sure the CDP/AIA for the Issuing CA itself are properly configured too (the wizard usually sets LDAP and HTTP locations, but verify in the Extensions tab). In a lab, the default settings are fine. Web Enrollment (if installed): You can verify the web enrollment by browsing to http://<IssuingCA>/certsrv. This web UI allows browser-based certificate requests. It’s a legacy interface mostly, but for testing it can be used if your clients aren’t domain-joined or if you want a manual request method. In modern use, the Certificate Enrollment Web Service/Policy roles or auto-enrollment via Group Policy are preferred for remote and automated enrollment. At this stage, your PKI is operational: the Issuing CA trusts the offline Root CA and can issue certificates. The Root CA can be kept offline with confidence that the subordinate will handle all regular work. Validation and Testing of the PKI It’s important to verify that the PKI is configured correctly: Check CA status: On the Issuing CA, open the Certification Authority console and ensure no errors. Verify that the Issuing CA’s certificate shows OK (no red X). On the Root CA (offline most of the time), you can use the Pkiview.msc snap-in (Microsoft PKI Health Tool) on a domain-connected machine to check the health of the PKI. This tool will show if the CDPs/AIA are reachable and if certificates are properly published. Trust chain on clients: On a domain-joined client PC, the Root CA certificate should be present in the Trusted Root Certification Authorities store (if the Issuing CA was installed as Enterprise CA, it likely published the root cert to AD automatically; you can also distribute it via Group Policy or manually). The Issuing CA’s certificate should appear in the Intermediate Certification Authorities store. This establishes the chain of trust. If not, import the root cert into the domain’s Group Policy for Trusted Roots. 
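Before the quick connectivity test described next, you can also spot-check the stores from a client or from the Issuing CA itself with certutil; the CA names below are the ones used in this lab.

# The root should be in the machine's Trusted Root Certification Authorities store
certutil -store Root "Contoso Root CA"

# The issuing CA should be in the Intermediate Certification Authorities store
certutil -store CA "Contoso Issuing CA"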
A quick test: on a client, run certutil -config "ISSUINGCA\\Contoso Issuing CA" -ping to see if it can contact the CA (or use the Certification Authority MMC targeting the issuing CA). Enroll a test certificate: Try to enroll for a certificate from the Issuing CA. For instance, from a domain-joined client, use the Certificates MMC (in Current User or Computer context) and initiate a certificate request for a User or Computer certificate (depending on templates issued). If auto-enrollment is configured via Group Policy for a template, you can simply log on a client and see if it automatically receives a certificate. Alternatively, use the web enrollment page or certreq command to submit a request. The request should be approved and a certificate issued by "Contoso Issuing CA". After enrollment, inspect the issued certificate: it should chain up to "Contoso Root CA" without errors. Ensure that the certificate’s CDP points to the URL we set (and try to browse that URL to see the CRL file), and that the AIA points to the root cert location. Revocation test (optional): To test CRL behavior, you could revoke a test certificate on the Issuing CA (using the CA console) and publish a new CRL. On the client, after updating the CRL, the revoked certificate should show as revoked. For the Root CA, since it shouldn’t issue end-entity certs, you wouldn’t normally revoke anything except potentially the subordinate CA’s certificate (which would be a drastic action in case of compromise). By issuing a test certificate and validating the chain and revocation, you confirm that your two-tier PKI lab is functioning correctly. Maintaining the PKI: CRLs, Key Ceremonies, and Security Procedures Deploying the PKI is only the beginning. Proper maintenance and operational procedures are crucial to ensure the PKI remains secure and reliable over time. Periodic CRL Updates for the Offline Root: The Root CA’s CRL has a defined validity period (set during configuration, often 6 or 12 months for offline roots). Before the CRL expires, the Root CA must be brought online (in a secure environment) to issue a new CRL. It’s recommended to schedule CRL updates periodically (e.g., semi-annually) to prevent the CRL from expiring. An expired CRL can cause certificate chain validation to fail, potentially disrupting services. Typically, organizations set the offline root CRL validity so that publishing 1-2 times a year is sufficient. When the time comes: Start the Root CA (ensuring the system clock is correct). Run certutil -crl to issue a fresh CRL. Distribute the new CRL: copy it to the HTTP CDP location (overwrite the old file) and, if applicable, use certutil -dspublish -f RootCA.crl to update it in Active Directory. Verify that the new CRL’s next update date is extended appropriately (e.g., another 6 months out). Clients and the Issuing CA will automatically pick up the new CRL when checking for revocation. (The Issuing CA, if configured, might cache the root CRL and need a restart or certutil -setreg ca\CRLFlags +CRLF_REVCHECK_IGNORE_OFFLINE tweak if the root CRL expires unexpectedly. Keeping the schedule prevents such issues.) Issuing CA CRL and OCSP: The Issuing CA’s CRLs are published automatically as it is online. Ensure the IIS or file share hosting the CRL is accessible. Optionally, consider setting up an Online Responder (OCSP) for real-time status checking, especially if CRLs are large or you need faster revocation information. 
OCSP is another AD CS role service that can be configured on the issuing CA or another server to answer certificate status queries. This might be beyond a simple lab, but it’s worth mentioning for completeness. Key Ceremonies and Documentation: For production environments (and good practice even in labs), formalize the process of handling CA keys in a Key Ceremony. A key ceremony is a carefully controlled process for activities like generating the Root CA’s key pair, installing the CA, and signing subordinate certificates. It often involves multiple people to ensure no single person has unilateral control (principle of dual control) and to witness the process. Best practices for a Root CA key ceremony include: Advance Planning: Create a step-by-step script of the ceremony tasks. Include who will do what, what materials are needed (HSMs, installation media, backup devices, etc.), and the order of operations. Multiple trusted individuals present: Roles might include a Ceremony Administrator (leads the process), a Security Officer (responsible for HSM or key material handling), an Auditor (to observe and record), etc. This prevents any one person from manipulating the process and increases trust. Secure environment: Conduct the ceremony in a secure location (e.g., a locked room) free of recording devices or unauthorized personnel. Ensure the Root CA machine is isolated (no network), and ideally that BIOS/USB access controls are in place to prevent any malware. Generate keys with proper controls: If using an HSM, initialize and generate the key with the required number of key custodians each providing part of the activation material (e.g., smartcards or passphrases). Immediately back up the HSM partition or key to secure media (requiring the same custodians to restore). Sign subordinate CA certificate: As part of the ceremony, once the root key is ready, sign the subordinate’s request. This might also be a witnessed step. Document every action: Write down each command run, each key generated, serial numbers of devices used, and have all participants sign an acknowledgment of the outcomes. Also record the fingerprints of the generated Root CA certificate and any subordinate certificate to ensure they are exactly as expected. Secure storage: After the ceremony, store the Root CA machine (if it’s a laptop or VM) and HSM tokens in a tamper-evident bag or safe. The idea is to make it evident if someone tries to access the root outside of an authorized ceremony. While a full key ceremony might be overkill for a small lab, understanding these practices is important. Even in a lab, you can simulate some aspects (for learning), like documenting the procedure of taking the root online to sign the request and then locking it away. These practices greatly increase the trust in a production PKI by ensuring transparency and accountability for critical operations. Backup and Recovery Plans: Both CAs’ data should be regularly backed up: For the Root CA: since it’s rarely online, backup after any change. Typically, you’d back up the CA’s private key and certificate once (right after setup or any renewal). Store this securely offline (separate from the server itself). Also back up the CA database if it ever issues more than one cert (for root it might not issue many). For the Issuing CA: schedule automated backups of the CA database and private key. You can use the built-in certutil -backup or Windows Server Backup (which is aware of the AD CS database). Keep backups secure and test restoration procedures. 
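For the Issuing CA, a scheduled backup can be as simple as the sketch below. The destination path and date-stamped folder are arbitrary lab choices; certutil -backup expects a new or empty target directory (add -f to overwrite) and protects the exported key with the supplied password.

# Back up the CA database and private key to a fresh, date-stamped folder
$dest = "D:\CABackup\{0:yyyy-MM-dd}" -f (Get-Date)
New-Item -ItemType Directory -Path $dest -Force | Out-Null
certutil -p (Read-Host "Backup password") -backup $dest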
Having a documented recovery procedure for the CA is crucial for continuity. Also consider backup of templates and any scripts. Maintain spare hardware or VMs in case you need to restore the CA on new hardware (especially for the root, having a procedure to restore on a new machine if the original is destroyed). Security maintenance: Apply OS updates to the CAs carefully. For the offline root, patch it offline if possible (offline servicing or connecting it briefly to a management network). For the issuing CA, treat it as a critical infrastructure server: limit its exposure (firewall it so only required services are reachable), monitor its event logs (enable auditing for Certificate Services events, which can log each issuance and revocation), and employ anti-malware tools with caution (whitelisting the CA processes to avoid interference). Also, periodically review the CA’s configuration and certificate templates to ensure they meet current security standards (for example, deprecate any weak cryptography or adjust validity periods if needed). By following these maintenance steps and best practices, your two-tier PKI will remain secure and trustworthy over time. Remember that PKI is not “set and forget” – it requires operational diligence, but the payoff is a robust trust infrastructure for your organization’s security. Additional AD CS Features and References Active Directory Certificate Services provides more capabilities than covered in this basic lab. Depending on your needs, you might explore: Certificate Templates: We touched on templates; they are a powerful feature on Enterprise CAs to enforce standardized certificate settings. Administrators can create custom templates for various use cases (SSL, S/MIME email, code signing) and control enrollment permissions. Understanding template versions and permissions is key for enterprise deployments. (Refer to Microsoft’s documentation on Certificate template concepts in Windows Server for details on how templates work and can be customized.) Web Services for Enrollment: In scenarios with remote or non-domain clients, AD CS offers the Certificate Enrollment Web Service (CES) and Certificate Enrollment Policy Web Service (CEP) role services. These allow clients to fetch enrollment policy information and request certificates over HTTP or HTTPS, even when not connected directly to the domain. They work with the certificate templates to enable similar auto-enrollment experiences over the web. See Microsoft’s guides on the Certificate Enrollment Web Service overview and Certificate Enrollment Policy Web Service overview for when to use these. Network Device Enrollment Service (NDES): This AD CS role service implements the Simple Certificate Enrollment Protocol (SCEP) to allow devices like routers, switches, and mobile devices to obtain certificates from the CA without domain credentials. NDES acts as a proxy (Registration Authority) between devices and the CA, using one-time passwords for authentication. If you need to issue certificates to network equipment or MDM-managed mobile devices, NDES is the solution. Microsoft Docs provide a Network Device Enrollment Service(NDES) overview and even details on using a policy module with NDES for advanced scenarios (like customizing how requests are processed or integrating with custom policies). 
Online Responders (OCSP): As mentioned, an Online Responder can be configured to answer revocation status queries more efficiently than CRLs, especially useful if your CRLs grow large or you have high-volume certificate validation (VPNs, etc.). AD CS’s Online Responder role service can be installed on a member server and configured with the OCSP Response Signing certificate from your Issuing CA.

Monitoring and Auditing: Windows Servers have options to audit CA events. Enabling auditing can log events such as certificate issuance, revocation, or changes to the CA configuration. These logs are important in enterprise PKI to track who did what (for compliance and security forensics). Also, tools like the PKI Health Tool (pkiview.msc) and PowerShell cmdlets (like Get-CertificationAuthority, Get-CertificationAuthorityCertificate) can help monitor the health and configuration of your CAs.

Conclusion

By following this guide, you have set up a secure two-tier PKI environment consisting of an offline Root CA and an online Issuing CA. This design, which uses an offline root, is considered a security best practice for enterprise PKI deployments because it reduces the risk of your root key being compromised. With the offline Root CA acting as a hardened trust anchor and the enterprise Issuing CA handling day-to-day certificate issuance, your lab PKI can issue certificates for various purposes (HTTPS, code signing, user authentication, etc.) in a way that models real-world deployments. As you expand this lab or move to production, always remember that PKI security is as much about process as technology. Applying strict controls to protect CA keys, keeping software up to date, and monitoring your PKI’s health are all part of the journey.

For further reading and official guidance, refer to these Microsoft documentation resources:

📖 AD CS PKI Design Considerations: PKI design considerations using Active Directory Certificate Services in Windows Server helps in planning a PKI deployment (number of CAs, hierarchy depth, naming, key lengths, validity periods, etc.). This is useful to read when adapting this lab design to a production environment. It also covers configuring CDP/AIA and why offline roots usually don’t need delta CRLs.
📖 AD CS Step-by-Step Guides: Microsoft’s Test Lab Guide: Deploying an AD CS Two-Tier PKI Hierarchy walks through a similar scenario.

Unveiling the Shadows: Extended Critical Asset Protection with MSEM
As cybersecurity evolves, identifying critical assets becomes an essential step in exposure management, as it allows for the prioritization of the most significant assets. This task is challenging because each type of critical asset requires different data to indicate its criticality. The challenge is even greater when a critical asset is not managed by a security agent such as EDR or AV, making the relevant data unreachable. Breaking traditional boundaries, Microsoft Security Exposure Management leverages multiple insights and signals to provide enhanced visibility into both managed and unmanaged critical assets. This approach allows customers to enhance visibility and facilitates more proactive defense strategies by maintaining an up-to-date, prioritized inventory of assets.

Visibility is the Key

Attackers often exploit unmanaged assets to compromise systems, pivot, or target sensitive data. The risk escalates if these devices are critical and have access to valuable information. Thus, organizations must ensure comprehensive visibility across their networks. This blog post discusses the methods Microsoft Security Exposure Management uses to improve visibility into both managed and unmanaged critical assets.

Case Study: Domain Controllers

A domain controller server is one of the most critical assets within an organization’s environment. It authenticates users, stores sensitive Active Directory data such as user password hashes, and enforces security policies. Threat actors frequently target domain controller servers because once they are compromised, they gain high privileges, which allow full control over the network. This can result in a massive impact, such as organization-wide encryption. Therefore, having the right visibility into both managed and unmanaged domain controllers is crucial to protecting the organization's network. Microsoft Security Exposure Management creates this visibility by collecting and analyzing signals and events from Microsoft Defender for Endpoint (MDE) onboarded devices. This approach extends, enriches, and improves the customer’s device inventory, ensuring comprehensive insight into both managed and unmanaged domain controller assets.

Domain Controller Discovery Methods

Microsoft Browser Protocol

The Microsoft Browser protocol, a component of the SMB protocol, facilitates the discovery and connection of network resources within a Windows environment. Once a Windows server is promoted to a domain controller, the operating system automatically broadcasts Microsoft Browser packets to the local network, indicating that the originating server is a domain controller. These packets hold meaningful information such as the device’s name, operating system-related information, and more.

Figure 1: An MSBrowser packet originating from a domain controller.

Microsoft Security Exposure Management leverages Microsoft Defender for Endpoint’s deep packet inspection capabilities to parse and extract valuable data, such as the domain controller’s NetBIOS name and operating system version, from the Microsoft Browser protocol.
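Signals like these are what allow the device inventory to be enriched even for machines that never had an agent installed. If you want to review that discovered-but-unmanaged population yourself, here is a quick advanced hunting sketch; it is not the product's internal discovery logic, and the seven-day window and grouping columns are arbitrary choices:

```kusto
// Devices that Defender for Endpoint has discovered but that are not onboarded,
// grouped by device type and platform.
DeviceInfo
| where Timestamp > ago(7d)
| summarize arg_max(Timestamp, *) by DeviceId      // keep the latest record per device
| where OnboardingStatus != "Onboarded"
| summarize UnmanagedDevices = count() by DeviceType, OSPlatform
| order by UnmanagedDevices desc
```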
Group Policy Events

Group Policy (GPO) is a key component in every Active Directory environment. GPO allows administrators to manage and configure operating systems, applications, and user settings in an Active Directory domain-joined environment. Depending on the configuration, every domain-joined device locates the relevant domain controller within the same Active Directory site and pulls the relevant group policies that should be applied. During this process, the client's operating system audits valuable information within the Windows event log. Once the relevant event has been observed on an MDE-onboarded device, valuable information such as the domain controller’s FQDN and IP address is extracted from it.

LDAP Protocol

A domain controller stores the Active Directory configuration in a central database that is replicated between the domain controllers within the same domain. This database holds user data, user groups, security policies, and more. To query and update information in this database, a dedicated network protocol, LDAP (Lightweight Directory Access Protocol), is used. For example, to retrieve a user’s display name or determine their group membership, an LDAP query is directed to the domain controller for the relevant information. This same database also holds details about other domain controllers, configured domain trusts, and additional domain-related metadata.

Figure 3: Domain controller computer account in the Active Directory Users and Computers management console.

Once a domain controller is onboarded to Microsoft Defender for Endpoint, the LDAP protocol is used to identify all other domain controllers within the same domain, along with their operating system information, FQDN, and more.

Identifying what is critical

After gaining visibility through various protocols, it's crucial to identify which domain controllers are production systems containing sensitive data and to distinguish them from test assets in a testing environment. Microsoft Security Exposure Management uses several techniques, including tracking the number of devices, users, and logins, to accurately identify production domain controllers. Domain controllers and other important assets not identified as production assets are not automatically classified as critical assets by the system. However, they remain visible under the relevant classification, allowing customers to manually override the system’s decision and classify them as critical.

Building the Full Picture

In addition to classifying assets as domain controllers, Microsoft Security Exposure Management provides customers with additional visibility by automatically classifying other critical devices and identities such as Exchange servers, VMware vCenter, backup servers, and more.

Figure 4: Microsoft Defender XDR Critical Asset Management settings page.

Identifying critical assets and distinguishing them from other assets empowers analysts and administrators with additional information to prioritize tasks related to these assets. The context of asset criticality is integrated within various Microsoft Defender XDR experiences, including the device page, incidents, and more. This empowers customers to streamline SOC operations, swiftly prioritize and address threats to critical assets, implement targeted security recommendations, and disrupt ongoing attacks.
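The resulting criticality context is also queryable from advanced hunting. The sketch below is a minimal example; the property paths under NodeProperties (rawData.criticalityLevel and ruleNames) and the assumption that lower numeric levels mean higher criticality should be verified against a few rows from your own tenant before you rely on them:

```kusto
// List assets that carry a criticality classification in the exposure graph.
// Inspect the raw shape first with: ExposureGraphNodes | take 5
ExposureGraphNodes
| where isnotnull(NodeProperties.rawData.criticalityLevel)
| extend CriticalityLevel = toint(NodeProperties.rawData.criticalityLevel.criticalityLevel),
         Classifications  = NodeProperties.rawData.criticalityLevel.ruleNames
| project NodeName, NodeLabel, CriticalityLevel, Classifications
| top 100 by CriticalityLevel asc
```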
For those looking to learn more about critical assets and exposure management, here are some additional resources you can explore:

Overview of critical asset protection - Overview of critical asset management in Microsoft Security Exposure Management - Microsoft Security Exposure Management | Microsoft Learn
Learn about predefined classifications - Criticality Levels for Classifications - Microsoft Security Exposure Management | Microsoft Learn
Overview of critical assets protection blog post - Critical Asset Protection with Microsoft Security Exposure Management | Microsoft Community Hub

Unlock Proactive Defense: Microsoft Security Exposure Management Now Generally Available
As the digital landscape grows increasingly interconnected, defenders face a critical challenge: the data and insights from various security tools are often siloed or, at best, loosely integrated. This fragmented approach makes it difficult to gain a holistic view of threats or assess their potential impact on critical assets. In a world where a single compromised asset can trigger a domino effect across connected resources, thinking in graphs has become essential for defenders. This approach breaks down silos, allowing them to visualize relationships between assets, vulnerabilities, and threats, ultimately enabling proactive risk management and strengthening their stance against attackers. Traditional vulnerability management is no longer sufficient. While patching every potential weakness might seem like a solution, it's neither practical nor effective. Instead, modern security strategies must focus on the exposures that are easiest for attackers to exploit, prioritizing vulnerabilities that present the greatest risk. This shift marks the evolution of vulnerability management into what we now call exposure management. Earlier this year, we launched Microsoft Security Exposure Management in public preview, introducing defenders to powerful foundational capabilities for holistic exposure management. Backed by extensive threat research and Microsoft’s vast visibility into security signals, these tools provide coverage for commonly observed attack techniques. Exposure Management includes Attack Surface Management, Attack Path Analysis, and Unified Exposure Insights— solutions that offer security teams unmatched visibility and insight into their risk landscape. Attack Surface Management offers a complete, continuous view of an organization’s attack surface, enabling teams to fully explore assets, uncover interdependencies, and monitor exposure across the entire digital estate. Central to this is the identification of critical assets, which are often prime targets for attackers. By highlighting these key assets, security teams can prioritize their efforts and better understand which areas require the most protection. By giving security teams a clear map of their exposure points, Attack Surface Management empowers a more informed and comprehensive defense strategy. Attack Path Analysis takes this a step further, guiding teams in visualizing and prioritizing high-risk attack paths across diverse environments, with a specific focus on critical assets. This capability allows for targeted, effective remediation of vulnerabilities that impact these key assets, helping to significantly reduce exposure and the likelihood of a breach by focusing on the most impactful pathways an attacker might exploit. Unified Exposure Insights gives decision-makers a clear view of an organization's threat exposure, helping security teams address key questions about their posture. Through Security Initiatives, teams focus on priority areas like cloud security and ransomware, supported by actionable metrics to track progress, prioritize risks, and align remediation with business goals for proactive risk management. Exposure Management translates vulnerabilities and exposures into more understandable language about risk and actionable initiatives related to our environment, which helps stakeholders and leadership grasp the impact more clearly. 
- Bjorn Pauwels Cyber Security Architect Atlas Copco Throughout the public preview, we collaborated closely with customers and industry experts, refining Microsoft Security Exposure Management based on real-world usage and feedback. This partnership revealed that the biggest challenges extended beyond deploying the right tools; they involved enhancing organizational maturity, evolving processes, and fostering a proactive security mindset. These insights drove strategic enhancements to features and user experience, ensuring the solution effectively supports organizations aiming to shift from reactive to proactive threat management. For example, several organizations created a 'RiskOps' role specifically to champion cross-domain threat exposure reduction, breaking down silos and unifying teams around common security goals. Security Operations (SecOps) teams now report significantly streamlined processes by leveraging asset criticality in incident prioritization, helping them address the most impactful threats faster than previously possible. Likewise, vulnerability management teams are using enhanced attack map and path analysis features to refine patching strategies, focusing more precisely on vulnerabilities most likely to lead to real risks. These examples underscore Exposure Management's ability to drive practical, measurable improvements across diverse teams, empowering them to stay ahead of evolving threats with a targeted, collaborative approach to risk management. Exposure Management enables organizations to zero in on their most critical exposures and act quickly. By breaking down silos and connecting security insights across the entire digital estate, organizations gain a holistic view of their risk posture. This comprehensive visibility is crucial for making faster, more informed decisions—reducing exposure before attackers can exploit it. We are excited to announce the general availability of Microsoft Security Exposure Management This release includes several new capabilities designed to help you build and enhance a Continuous Threat Exposure Management (CTEM) program, ensuring that you stay ahead of threats by continuously identifying, prioritizing, and mitigating risks across your digital landscape. Global rollout started 19 Nov, 2024 so keep an eye out for Exposure Management in your Defender portal, https://security.microsoft.com Cyber Asset Attack Surface Management To help you establish a comprehensive, single source of truth for your assets, we are expanding our signal collection beyond Microsoft solutions to include third-party integrations. The new Exposure connectors gallery offers a range of connectors to popular security vendors. Data collected through these connectors is normalized within our exposure graph, enhancing your device inventory, mapping relationships, and revealing new attack paths for comprehensive attack surface visibility. Additional insights like asset criticality, internet exposure and business application or operational affiliation are incorporated from the connected tools to enrich the context that Exposure Management can apply on the collected assets. This integrated data can be visualized through the Attack Map tool or explored using advanced hunting queries via KQL (Kusto Query Language). External data connectors to non-Microsoft security tools are currently in public preview, we are continuously working to add more connectors for leading market solutions, ensuring you have the broadest possible visibility across your security ecosystem. 
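For instance, a couple of starter KQL queries (run each separately in advanced hunting) give a feel for what the normalized exposure graph contains. Table and column names below follow the documented ExposureGraphNodes and ExposureGraphEdges schema:

```kusto
// What kinds of assets are in the graph, and how many of each?
ExposureGraphNodes
| summarize Assets = count() by NodeLabel
| order by Assets desc

// Which relationship types connect which asset types?
ExposureGraphEdges
| summarize Relationships = count() by EdgeLabel, SourceNodeLabel, TargetNodeLabel
| order by Relationships desc
```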
Discover more about data connectors in our documentation. Extended Attack Path Analysis Attack Path Analysis provides organizations with a crucial attacker’s-eye perspective, revealing how adversaries might exploit vulnerabilities and move laterally across both cloud and on-premise environments. By identifying and visualizing potential paths – from initial access points, such as internet-exposed devices, to critical assets – security teams gain valuable insight into the paths attackers could take, including hybrid attack paths that traverse cloud and on-prem infrastructure. Microsoft Security Exposure Management addresses the challenge of fragmented visibility by offering defenders an integrated view of their most critical assets and the likely routes attackers might exploit. This approach moves beyond isolated vulnerabilities, allowing teams to see their environment as a connected landscape of risks across hybrid infrastructures, ultimately enhancing their ability to secure critical assets and discover potential entry points. We are excited to update on our solution’s latest enhancement, which includes a high-level overview experience, offering a clear understanding of top attack scenarios, entry points, and common target types. Additionally, Exposure Management highlights chokepoints with a dedicated experience – these chokepoints are assets that appear in multiple attack paths, enabling cost-effective mitigation. Chokepoints also support blast radius querying, showing how attackers might exploit these assets to reach critical targets. In addition, we are adding support for new adversarial techniques including: DACL Support: We now include Discretionary Access Control Lists (DACLs) in our attack path analysis, through which more extensive attack paths are uncovered, particularly those that exploit misconfigurations or excessive permissions within access control lists. Hybrid Attack Paths: Our expanded analysis now identifies hybrid attack paths, capturing routes that originate on-premises and extend into cloud environments, providing defenders with a more complete view of potential threats across both infrastructures. In essence, attack path management allows defenders to transform isolated vulnerabilities into actionable insights across hybrid infrastructures. This comprehensive perspective enables security teams to shift from reactive to proactive defense, strengthening resilience by focusing on the most critical threats across their entire environment. Unified Exposure Insights With Microsoft Security Exposure Management, organizations can transform raw technical data into actionable insights that bridge the gap between cybersecurity teams and business decision-makers. By offering clear, real-time metrics, this platform answers key questions such as "How secure are we?", "What risks do we face?", and "Where should we focus first to reduce our exposure?" These insights not only provide a comprehensive view of your security posture but also guide prioritization and remediation efforts. To help your organization embrace a proactive security mindset, we introduced Security Initiatives—a strategic framework to focus your teams on critical aspects of your attack surface. These initiatives help teams to scope, discover, prioritize, and validate security findings while ensuring effective communication with stakeholders. Now, we are enhancing these capabilities to offer even greater visibility and control. 
The expanded initiative catalog now features new programs targeting high-priority areas like SaaS security, IoT, OT, and alongside existing domain and threat-focused initiatives. Each initiative continues to provide real-time metrics, expert-curated recommendations, and progress tracking, empowering security teams to drive maturity across their security programs. With this expanded toolset, organizations can further align their security efforts with evolving risks, ensuring a continuous, dynamic response to the threat landscape. SaaS Security Initiative (Powered by Microsoft Defender for Cloud Apps): Effective SaaS posture management is essential for proactively preventing SaaS-related attacks. The SaaS Security initiative delivers a comprehensive view of your SaaS security coverage, health, configuration, and performance and consolidates all best-practice recommendations for configuring SaaS apps into measurable metrics to help security teams efficiently manage and prioritize critical security controls. To optimize this initiative, activate key application connectors in Defender for Cloud Apps, including Microsoft 365, Salesforce, ServiceNow, GitHub, Okta, Citrix ShareFile, DocuSign, Dropbox, Google Workspace, NetDocuments, Workplace (preview), Zendesk, Zoom (preview), and Atlassian. For more information, check out https://aka.ms/Ignite2024MDA OT Security Initiative (Powered by Microsoft Defender for IoT): The convergence of Operational Technology (OT) and Information Technology (IT) has transformed industries worldwide, but it has also introduced significant new security challenges, particularly for industrial operations and critical infrastructure. The modern threat landscape, now accelerated by the growing capabilities of AI, demands specialized security solutions for these sensitive environments. The OT Security Initiative addresses these challenges by providing practitioners with a comprehensive solution to identify, monitor, and mitigate risks within OT environments, ensuring both operational reliability and safety. By leveraging Microsoft Defender for Endpoint discovery, the initiative offers unified visibility across enterprise and OT networks, empowering organizations to identify unprotected OT assets, assess their risk levels, and implement security measures across all physical sites. Enterprise IoT Security Initiative (Powered by Microsoft Defender for IoT): This initiative delivers comprehensive visibility into the risks associated with IoT devices within the enterprise, enabling organizations to assess their resilience against these emerging threats. As IoT devices frequently connect to endpoints, one another, or the internet, they become prime targets for cyberattacks. Therefore, businesses must continuously monitor the security of these devices, tracking their distribution, configuration, connectivity, exposure, and behavior to prevent the introduction of hidden vulnerabilities. By leveraging this initiative, organizations can proactively manage IoT risks and safeguard their digital landscape. Proactively understand how system updates affect scores The new versioning feature offers proactive notifications about upcoming version updates, giving users advanced visibility into anticipated metric changes and their impact on related initiatives. A dedicated side panel provides comprehensive details about each update, including the expected release date, release notes, current and updated metric values, and any changes to related initiative scores. 
Additionally, users can share direct feedback on the updates within the platform, fostering continuous improvement and responsiveness to user needs. Exposure History With the new history-reasoning feature, users can investigate metric changes by reviewing detailed asset exposure updates. In the initiative's history tab, selecting a specific metric now reveals a list of assets where exposure has been either added or removed, providing clearer insight into exposure shifts over time. Unified Role-Based Access Control (URBAC) Support We are excited to introduce the capability to manage user privileges and access to Microsoft Security Exposure Management through custom roles within the Microsoft Defender XDR Unified Role-Based Access Control (URBAC) system. This enhancement ensures higher productivity and efficient access control on a single, centralized platform. The unified RBAC permissions model offers administrators an alternative to Entra ID directory roles, allowing for more granular permission management and customization. This model complements Entra ID global roles by enabling administrators to implement access policies based on the principle of least privilege, thereby assigning users only the permissions they need for their daily tasks. We recommend maintaining three custom roles that align with organizational posture personas: Posture Reader: Users with read-only access to Exposure Management data. Posture Contributor: Users with read and manage permissions, enabling them to handle security initiatives and metrics, as well as manage posture recommendations. Posture Admin: Users who likely already hold higher-level permissions within the Microsoft Defender portal and can now perform sensitive posture-related actions within Exposure Management experiences. To learn more about the Microsoft XDR Unified RBAC permissions model, click here. For more information on Microsoft Security Exposure Management access management with unified RBAC, click here. How to get Microsoft Security Exposure Management Exposure Management is available in the Microsoft Defender portal at https://security.microsoft.com. Access to the exposure management blade and features in the Microsoft Defender portal is available with any of the following licenses: Microsoft 365 E5 or A5 Microsoft 365 E3 Microsoft 365 E3 with the Microsoft Enterprise Mobility + Security E5 add-on Microsoft 365 A3 with the Microsoft 365 A5 security add-on Microsoft Enterprise Mobility + Security E5 or A5 Microsoft Defender for Endpoint (Plan 1 and 2) Microsoft Defender for Identity Microsoft Defender for Cloud Apps Microsoft Defender for Office 365 (Plans 1 and 2) Microsoft Defender Vulnerability Management Integration of data from the above tools, as well as other Microsoft security tools like Microsoft Defender for Cloud, Microsoft Defender Cloud Security Posture Management, and Microsoft Defender External Attack Surface Management, is available with these licenses. Integration of non-Microsoft security tools will incur a consumption-based cost based on the number of assets in the connected security tool. The external connectors are currently in public preview, with plans to reach general availability (GA) by the end of Q1 2025. Pricing will be announced before billing for external connectors begins at GA. Learn More The threat landscape is constantly shifting, and the attack surface continues to grow, leaving organizations exposed. Outpacing threat actors through patching alone is no longer feasible. 
Now is the time to evolve your vulnerability management strategy to be smarter, more dynamic, and more powerful — focused on exposures and built on a proactive mindset. By adopting a Continuous Threat Exposure Management (CTEM) process, you can stay ahead of attackers. Microsoft Security Exposure Management equips you with the tools to scope, discover, prioritize, validate, and mobilize your teams, empowering you to defend your organization with precision and confidence. Embrace the future of cybersecurity resilience—contact us today to learn more, sign up for a demo, or speak with our team about how Microsoft Security Exposure Management can transform your defense strategy. Don’t wait to secure your organization. Get started today.

Explore overview and core scenarios on our website
Learn about capabilities and scenarios in blog posts written by our engineering and research teams

Attack path management with Microsoft Security Exposure Management
Imagine trying to assemble a jigsaw puzzle without the box showing the final picture. You have all the pieces scattered in front of you, but without knowing how they connect, you’re left guessing and constantly reworking pieces in search of the right fit. Now, imagine someone hands you the completed picture. Suddenly, everything clicks into place, and each piece’s role in the larger image becomes clear. This is the new superpower defenders gain with Microsoft Security Exposure Management’s attack path management – a full perspective that turns scattered vulnerabilities, assets, and risks into a cohesive map. In this sense, an attack path can be defined as the route an adversary takes by leveraging the connections between multiple assets. It represents the sequence of steps or methods an attacker uses to exploit security gaps and traverse through an organization’s environment to reach a target. As the attack surface has grown increasingly complex in recent years, defenders have struggled to piece together a comprehensive view of their exposure. Even with a strong grasp of critical assets, they often lack visibility into how attackers might exploit seemingly unrelated gaps, several steps or "hops" away, to reach these targets. Microsoft Security Exposure Management addresses this by unifying fragmented information and highlighting the most probable attack paths, enabling defenders to drive resilience with clarity and precision. In a recent blog post, we emphasized the importance of prioritizing critical assets. Now, we’ll shift our focus to how Microsoft Security Exposure Management utilizes attack path analysis and management to help organizations effectively reduce their attack surface.

To know your enemy, you must become your enemy

Unlike defenders, attackers are not focused on individual vulnerabilities or security gaps. Instead, they are driven by their end goal, which is often to target critical assets for disruption, financial gain, or other malicious motives. In this regard, each security gap is merely a step on a path toward achieving their objective. To counter this, defenders should embrace a similar approach and view security vulnerabilities and gaps as building blocks of a potential attack, as valuable context and insights are often missed when vulnerabilities or other findings are examined independently. For example, a vulnerability on-prem may seem less significant when viewed separately but could lead to sensitive cloud resources when considered in context. Attack path management bridges the gap by viewing the entire organization – across both on-premises and cloud infrastructure – as a connected network of assets with their relevant security findings. This change in mindset is becoming increasingly critical as security teams struggle to manage the expanding enterprise attack surface, leading to what can be described as “risk fatigue” from the overwhelming number of findings they face.

Figure: top attack scenarios, top targets, and more.

Moving from resolving security vulnerabilities to increasing cyber resilience

Consider how organizations address security vulnerabilities today: an endless queue of issues and devices to patch. Instead of patching thousands of devices for a single high-risk vulnerability, imagine if defenders could see how that vulnerability contributes to attack paths across on-premises and cloud environments. With this context, defenders can prioritize critical assets and findings involved in potential attack paths or apply quicker, less disruptive mitigations.
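To make the path-oriented view a little more concrete, the underlying exposure graph can already be explored from advanced hunting. The sketch below is only a rough approximation of the "highly connected asset" idea (a crude stand-in for the chokepoints discussed later); the real chokepoint computation is done by Exposure Management itself, and the top-20 cutoff is arbitrary:

```kusto
// Assets that many other assets have edges pointing at - a rough starting point
// for spotting highly connected nodes in the exposure graph.
ExposureGraphEdges
| summarize ConnectedSources = dcount(SourceNodeId) by TargetNodeId, TargetNodeName, TargetNodeLabel
| top 20 by ConnectedSources
```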
Moreover, it's important to recognize that fixing a security issue doesn’t always translate to meaningful risk reduction. Often, while resolving such an issue may improve compliance with security policies or standards, it doesn’t necessarily strengthen resilience against real threats. On the other hand, focusing on attack paths usually leads to addressing the most critical gaps, resulting in greater risk reduction and enhanced resilience. If attackers think in graphs, defenders should think in paths Microsoft Security Exposure Management offers defenders a fresh perspective on their exposure and risks. With attack path management, defenders can identify emerging threat patterns in the form of attack paths and are equipped with the tools to reassess and prioritize risk mitigation. This includes automatic discovery of potential attack paths, risk assessment for each path, identification of chokepoints – assets involved in multiple attack paths – and tailored recommendations for mitigating these paths. Users can view a comprehensive list of all discovered attack paths in their environment. Each path is assigned a risk score that reflects the likelihood of its exploitation and potential impact, based on factors such as path complexity, involved assets, and security findings. Additionally, by using the Attack Surface Map, users gain enhanced exploration capabilities, allowing for further investigation into attack paths. Exposure Management also provides visibility into chokepoints – assets that multiple attack paths pass through. By focusing on chokepoints, security teams can adopt a cost-effective approach to risk reduction, addressing significant threats by targeting key assets. Customers can review chokepoints, learn more about these assets, and visually explore their role in attack paths using Exposure Management's "Blast Radius Analysis" feature. Spotlight: From a vulnerable device to entire domain control As mentioned, attackers are driven by their end goal, often targeting critical assets through a sequence of other weaknesses and assets. They often can’t breach the most critical asset right away; instead, they focus on finding a way to reach it. Once inside, they can navigate through the organization’s environment to reach the crown jewels, that is, unless the organization has the visibility and the measures in place to prevent it. To see this in action, let’s walk through an example from the attacker’s point of view: After multiple reconnaissance attempts, an attacker identifies a vulnerable internet-facing web service running on a development server. John, a developer at the company, had set up this server for testing purposes but inadvertently left it accessible to the public internet without proper security measures. While this oversight might seem insignificant to John, the exposed server now serves as an entry point for the attacker to infiltrate the company's network. Once inside, the attacker begins to explore options for lateral movement within the organization, utilizing John’s access, with the target of reaching critical assets. During this reconnaissance, the attacker identifies a server accessible via RDP to all employees, including John – TERM-SRV. By using pass-the-hash (an attack technique that involves passing the hashed credentials to authenticate to another resource) with John’s credentials the attacker can start an RDP session into the TERM-SRV server. By exploiting another vulnerability, the attacker can perform an elevation of privileges. 
With these privileges, the attacker can now enumerate and dump all logged-on users’ credentials using Mimikatz.

Figure: Mimikatz enumerating all user credentials on the server and finding Alex’s credentials.

One of these users is Alex, an IT Administrator who is part of the IT Admins group, which maintains servers like Domain Controllers, SCCM servers, etc. This means that the attacker can use Alex’s permissions to remotely execute code on the Domain Controller as admin, practically gaining control over the entire domain. From the defender's perspective, limited visibility makes the task of identifying and countering such an attack a real challenge. Moreover, the attacker's steps outlined above represent just one of many potential paths that could be taken at every turn. Defending against threats like the one described traditionally requires manual work of analyzing data and events from an extensive array of tools and solutions. To prevent an attack path like the one outlined, defenders would need to patch many vulnerable devices, identify all internet-exposed endpoints, cross-reference these with identity data, and map out permissions across systems. Detecting and mitigating such attack paths remains a challenge with conventional methods, leaving defenders constantly trying to catch up. To overcome these challenges, Microsoft Security Exposure Management leverages advanced graph algorithms that mimic adversarial behavior. Applying these algorithms continuously allows for ongoing monitoring of the customer environment and its changes to discover attack paths covering various adversary techniques across both on-premises and cloud environments.

Getting Started with Attack Path Management

Here are some tips for getting started with attack path management concepts and features:

Define your critical assets: Use the Critical Asset Management module to create custom queries for discovering and flagging your critical assets. Once an asset is defined as critical, Microsoft Security Exposure Management automatically marks it as a potential target and identifies attack paths leading to it.

Explore attack path management in Exposure Management: Review the Attack Path Overview page to gain a high-level overview of the risks discovered in your environment. Switch to the Attack Path list tab to view a comprehensive list of all identified attack paths, and utilize the filters and group-by features to focus on the paths that are most relevant to you.

Utilize attack path recommendations to resolve paths and reduce risk: After identifying an attack path of interest, navigate to the Recommendations tab in the attack path side pane to review the necessary actions required to “break” the attack path.

Explore an attack path in the Attack Surface Map: From the attack path list screen, you can select a specific attack path and choose to view it in the Attack Surface Map for enhanced exploration capabilities.

Focus on chokepoints: In the attack path area of Exposure Management, navigate to the Chokepoints tab to review assets involved in multiple attack paths. Focus on resolving the issues associated with these assets to maximize the impact of your risk mitigation efforts. Additionally, chokepoints will be marked in the Attack Surface Map with a distinctive design (outlined with a dashed border).

View chokepoint blast radius: Use the Blast Radius functionality to visualize the attack paths a chokepoint is involved in. This functionality is available for chokepoints in the asset side pane within the Chokepoint screen and in the Attack Surface Map.
Integrate the Continuous Threat Exposure Management (CTEM) framework into your strategy: Focusing on prioritization and validation, shift your perspective to view vulnerabilities and exposures through the lens of an attacker. Utilize the Attack Path Management capabilities in Microsoft Security Exposure Management to identify and prioritize critical gaps. Encourage your team to engage in regular reviews of attack paths and chokepoints. This mindset shift will enable faster and more effective mitigation of risks.

Enhance Defender deployment: It’s important to note that the capabilities of the Microsoft Security Exposure Management attack path management module are enhanced when visibility is increased. This means that the broader the deployment of Defender products, the greater our visibility, and consequently, our ability to identify potential paths. Key products include Microsoft Defender for Endpoint and Microsoft Defender for Identity for on-premises attack paths, and the Microsoft Defender for Cloud DCSPM plan for cloud-based attack paths.

To summarize, Microsoft Security Exposure Management enables security teams to adopt a contextual, risk-based approach by considering both the criticality of assets and the likelihood of their compromise through automatic attack path discovery. With Exposure Management, teams can strategically prioritize activities that have the greatest security impact, while enhancing the organization's overall resilience. In today's challenging and evolving threat landscape, defenders should not only adopt an attacker's mindset, but also leverage their visibility to advance even further. If, as the saying goes, defenders think in lists while attackers think in graphs, Exposure Management allows defenders to evolve beyond graphs to “think in paths”.

For those looking to learn more about attack paths, critical assets, and exposure management in general, here are some additional resources you can explore:

Attack path management documentation: Overview of attack paths in Microsoft Security Exposure Management - Microsoft Security Exposure Management | Microsoft Learn
Critical asset protection documentation: Overview of critical asset management in Microsoft Security Exposure Management - Microsoft Security Exposure Management | Microsoft Learn
Microsoft Security Exposure Management website: Microsoft Security Exposure Management | Microsoft Security

Microsoft Security Exposure Management Graph: Prioritization is the king
Unlock the full potential of Microsoft’s Security Exposure Management graph in Advanced Hunting. This blog post delves into essential concepts like Blast Radius and Asset Exposure, equipping you with powerful queries to enhance your security posture.
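As a taste of what such queries look like, here is a hedged sketch of a blast-radius-style hunt over the exposure graph tables. The starting asset name is a placeholder, the three-hop limit is arbitrary, and the criticality property path is an assumption, so inspect NodeProperties in your own tenant and adjust before relying on the results:

```kusto
// From one asset, list what it can reach within three hops in the exposure graph,
// then keep only targets that carry a criticality classification.
let startingAsset = "JOHN-DEV-SRV";   // placeholder device name
let nodes = ExposureGraphNodes;
ExposureGraphEdges
| make-graph SourceNodeId --> TargetNodeId with nodes on NodeId
| graph-match (source)-[edges*1..3]->(target)
    where source.NodeName == startingAsset
    project Source = source.NodeName,
            Target = target.NodeName,
            TargetType = target.NodeLabel,
            TargetProperties = target.NodeProperties
| where isnotnull(TargetProperties.rawData.criticalityLevel)
| project-away TargetProperties
```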
Learn how to customize and optimize Copilot for Security with the custom Data Security plugin

This is a step-by-step guided walkthrough of how to use the custom Copilot for Security pack for Microsoft Data Security and how it can empower your organization to understand its cybersecurity risks in a context that helps it achieve more.