Fake Employees, Real Threat: Decentralized Identity to Combat Deepfake Hiring?
In recent months, cybersecurity experts have sounded the alarm on a surge of fake “employees” – job candidates who are not who they claim to be. These fraudsters use everything from fabricated CVs and stolen identities to AI-generated deepfake videos in interviews to land jobs under false pretenses. It’s a global phenomenon making headlines on LinkedIn and in the press. With the topic surfacing everywhere, I wanted to take a closer look at what’s really going on — and explore the solutions that could help organizations respond to this growing challenge. And as it happens, one solution is finally reaching maturity at exactly the right moment: decentralized identity. Let me walk you through it.

But first, let’s look at a few troubling facts. Even tech giants aren’t immune. Amazon’s Chief Security Officer revealed that since April 2024 the company has blocked over 1,800 suspected North Korean scammers from getting hired, and that the volume of such fake applicants jumped 27% each quarter this year (1.1). In fact, a coordinated scheme involving North Korean IT operatives posing as remote workers has infiltrated over 300 U.S. companies since 2020, generating at least $6.8 million in revenue for the regime (2.1). CrowdStrike also reported more than 320 confirmed incidents in the past year alone, marking a 220% surge in activity (2.2). And it’s not just North Korea: organized crime groups globally are adopting similar tactics.

This trend is not a small blip; it’s likely a sign of things to come. Gartner predicts that by 2028, one in four job applicant profiles could be fake in some way (3). Think about that – in a few years, 25% of the people applying to your jobs might be bots or impostors trying to trick their way in. We’re not just talking about exaggerated resumes; we’re talking about full-scale deception: people hiring stand-ins for interviews, AI bots filling out assessments, and deepfake avatars smiling through video calls. It’s a hiring manager’s nightmare — no one wants to waste time interviewing bots or deepfakes — and a CISO’s worst-case scenario rolled into one.

The Rise of the Deepfake Employee

What does a “fake employee” actually do? In many cases, these impostors are part of organized schemes (even state-sponsored) to steal money or data. They might forge impressive résumés and create a minimal but believable online presence. During remote interviews, some have been caught using deepfake video filters – basically digital masks – to appear as someone else. In one case, Amazon investigators noticed an interviewee’s typing did not sync with the on-screen video (the keystrokes had a 110ms lag); it turned out to be a North Korean hacker remotely controlling a fake persona on the video call (1.2). Others refuse video entirely, claiming technical issues, so you only hear a voice. Some even hire proxy interviewees – a real person who interviews in their place. The level of creativity is frightening.

Once inside, a fake employee can do serious damage. They gain legitimate access to internal systems, data, and tools. Some have stolen sensitive source code and threatened to leak it unless the company paid a ransom (1). Others quietly set up backdoor access for future cyberattacks. And as noted, if they’re part of a nation-state operation, the salary you pay them is funding adversaries. The U.S. Department of Justice recently warned that many North Korean IT workers send the majority of their pay back to the regime’s illicit weapons programs (1)(2.3).
Beyond the financial angle, think of the security breach: a malicious actor is now an “insider” with an access badge.

No sector is safe. While tech companies with lots of remote jobs were the first targets, the scam has expanded. According to the World Economic Forum, about half of the companies targeted by these attacks aren’t in the tech industry at all (4). Financial services, healthcare, media, energy – any business that hires remote freelancers or IT staff could be at risk. Many Fortune 500 firms have quietly admitted to Charles Carmakal (Chief Technology Officer at Google Cloud’s Mandiant) that they’ve encountered fake candidates (2.3). Brandon Wales — former Executive Director of the Cybersecurity and Infrastructure Security Agency (CISA) and now VP of Cybersecurity Strategy at SentinelOne — warned that the “scale and speed” of these operations is unlike anything seen before (2.3). Rivka Little, Chief Growth Officer at Socure, put it bluntly: “Every Fortune 100 and potentially Fortune 500 has a pretty high number of risky employees on their books” right now (1).

If you’re in charge of security or IT, this should send a chill down your spine. How do you defend against an attack that walks in through your front door (virtually) with HR’s approval? It calls for rethinking some fundamental practices, which leads us to the biggest gap these scams have exposed: identity verification in the hiring process.

The Identity Verification Gap in Hiring

Let’s face it: traditional hiring and onboarding operate on a lot of trust. You collect a résumé, maybe call some references, and run a background check that might catch a criminal record but won’t catch a well-crafted fake identity. You might ask for a copy of a driver’s license or passport to satisfy HR paperwork, but how thoroughly is it checked? And once the person is hired and given an employee account, how often do we re-confirm that person’s identity in the months or years that follow? Almost never.

Now let’s look at the situation from the reverse perspective. During your last recruitment, or when you became a new vendor for a client, were you asked to send over a full copy of your ID via email? Most likely, yes. You send a scan of your passport or ID card to an HR representative or a partner’s portal, and you have no idea where that image gets stored, who can see it, or how long it will sit around. It feels uncomfortable, but we do it because we need to prove who we are. In reality, we’re making a leap of faith that the process is secure.

This is the identity verification gap. Companies are trusting documents and self-assertions that can be forged, and they rarely have a way to verify those beyond a cursory glance. Fraudsters exploit this gap mercilessly. They provide fake documents that look real, or steal someone else’s identity details to pass background checks. Once they’ve cleared that initial hurdle, the organization treats them as legit. IT sets up accounts, security gives them access, and from then on the “user identity” is assumed to be genuine. Forever.

Moreover, once an employee is on board, internal processes often default to trust. Need a password reset? The helpdesk might ask for your birthdate or employee ID – pieces of info a savvy attacker can learn or steal. We don’t usually ask an employee who calls IT to re-prove that they are the same person HR hired months or years ago. All of this stands in contrast to the principle of Zero Trust security that many companies are now adopting.
Coined by John Kindervag (Forrester, 2009), Zero Trust says “never trust, always verify” for each access request. But how can you verify if the underlying identity was fake to start with? At Microsoft, we often say that “identity is the new perimeter” – meaning the primary defense line is verifying identities, not just securing network walls. If that identity perimeter is built on shaky ground (unverified people), the whole security model is weak.

So, what can be done? Security leaders and even the World Economic Forum are advocating for stronger identity proofing in hiring. The WEF specifically recommends “verifiable government ID checks at multiple stages of recruitment and into employment” (4). In other words, don’t just verify once and forget it – verify early, verify often. That might mean an ID and background check when offering the job, another verification during onboarding, and perhaps periodic re-checks, or at least checks on certain events (like when the employee requests elevated privileges). Amazon’s CSO, S. Schmidt, echoed this after battling North Korean fakes; he advised companies to “Implement identity verification at multiple hiring stages and monitor for anomalous technical behavior” as a key defense (1).

Of course, doing this manually is tough. You can’t very well ask each candidate to fly in on their first day just to show their passport in person, especially with global and remote workforces. That’s where technology is stepping up. Enter the world of Verified ID and decentralized identity.

Enter Microsoft Entra Verified ID: Proving Identity, Not Just Checking a Box

Imagine if, instead of emailing copies of your passport to every new employer or partner, you could carry a digital identity credential that is already verified and can be trusted by others instantly. That’s the idea behind Microsoft Entra Verified ID. It’s essentially a system for issuing and verifying cryptographically secure digital identity credentials. Let’s break down what that means in plain terms.

At its core, a Verified ID credential is like a digital ID card that lives in an app on your phone. But unlike a photocopy of your driver’s license (which anyone could copy, steal or tamper with), this digital credential is signed with cryptographic keys that make it tamper-proof and verifiable. It’s based on open standards: Microsoft has been heavily involved in the development of Decentralized Identifiers (DID) and W3C Verifiable Credentials standards over the past few years (7). The benefit of standards is that this isn’t a proprietary Microsoft-only thing – it’s part of a broader move toward decentralized identity, where the user is in control of their own credentials.

Here’s a real-life analogy: when you go to a bar and need to prove you’re over 18, you show your driver’s license, national ID or passport. The bouncer checks your birth date and maybe the hologram, but they don’t photocopy your entire ID and keep it; they just verify it and hand it back. You remain in possession of your ID. Now translate that to digital interactions: with Verified ID, you could have a credential on your phone that says “Government ID verified: [Your Name], age 25”. When a verifier (like an employer or service) needs proof, you share that credential through a secure app. The verifier’s system checks the credential’s digital signature to confirm it was issued by a trusted authority (for example, a background check company or a government agency) and that it hasn’t been altered.
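To make the verifier’s side concrete, here is a rough sketch of what such a check can look like with the Entra Verified ID Request Service REST API. This is illustrative only: the tenant DID, callback URL, API key, and the VerifiedEmployee credential type are placeholder assumptions, and acquiring the service access token is assumed to have happened already.

# Sketch only: create a presentation request with the Entra Verified ID
# Request Service REST API. All identifiers below are placeholders.
$accessToken = "<bearer token for the Verified ID Request Service>"  # acquisition not shown
$body = @{
    authority    = "did:web:verifiedid.contoso.com"       # your tenant's DID (assumed)
    registration = @{ clientName = "Contoso Hiring Portal" }
    callback     = @{
        url     = "https://contoso.example/api/verifier/callback"   # receives the outcome
        state   = [guid]::NewGuid().ToString()
        headers = @{ "api-key" = "<pre-shared callback secret>" }
    }
    requestedCredentials = @(
        @{
            type            = "VerifiedEmployee"          # credential type (example)
            purpose         = "Verify your identity for onboarding"
            acceptedIssuers = @("did:web:verifiedid.contoso.com")
        }
    )
} | ConvertTo-Json -Depth 5

$uri = "https://verifiedid.did.msidentity.com/v1.0/verifiableCredentials/createPresentationRequest"
$response = Invoke-RestMethod -Method Post -Uri $uri `
    -Headers @{ Authorization = "Bearer $accessToken" } `
    -ContentType "application/json" -Body $body

# $response.url is rendered as a QR code or deep link; the wallet answers it,
# and your callback receives the verified (or failed) result.

The point of the design: your application only ever sees the cryptographic verification result delivered to the callback, never a raw identity document.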
You don’t have to send over a scan of your actual passport or reveal extra info like your full birthdate or address – the credential can be designed to reveal only the necessary facts (e.g. “is over 18” = yes). This concept is called selective disclosure, and it’s a big win for privacy. Crucially, you decide which credentials to share and with whom. You might have one that proves your legal name and age (from a government issuer), another that proves your employment status (from your employer), another that proves a certification or degree (from a university). And you only share them when needed. They can also have expiration dates or be revoked. For instance, an employment credential could automatically expire when you leave the company. This means if someone tries to use an old credential, it would fail verification – another useful security feature.

Now, how do these credentials get issued in the first place? This is where the integration with our Microsoft partner IDEMIA comes in, which was a highlight of Microsoft Ignite 2025. IDEMIA is a company you might not have heard of, but they’re a huge player in the identity world – they’re the folks behind many government ID and biometric systems (think passport chips, national ID programs, biometric border control, etc.). Microsoft announced that Entra Verified ID now integrates IDEMIA’s identity verification services. In practice, this means when you need a high-assurance credential (like proving your real identity for a job), the system can invoke IDEMIA to do a thorough check.

For example, as part of a remote onboarding process, an employer using Verified ID could ask the new hire to verify their identity through IDEMIA. The new hire gets a link or prompt, and is guided to scan their official government ID and take a live selfie video. IDEMIA’s system checks that the ID is authentic (not a forgery) and matches the person’s face, doing so in a privacy-protecting way (for instance, biometric data might be used momentarily to match and then not stored long-term, depending on the service policies). This process confirms: “Yes, this is Alice, and we’ve confirmed her identity with a passport and live face check.” At that point, Microsoft Entra Verified ID can issue a credential to Alice, such as “Alice – identity verified by Contoso Corp on [Date]”. Alice stores this credential in her digital wallet (for instance, the Microsoft Authenticator app).

Now Alice can present that credential to apps or IT systems to prove it’s really Alice. The employer might require it to activate her accounts, or later, if Alice calls IT support, they might ask her to present the credential to prove her identity for a password reset. The verification of the credential is cryptographically secure and instantaneous – the IT system just checks the digital signature. There’s no need to manually pull up Alice’s passport scan from HR files or interrogate her with personal questions. Plus, Alice isn’t repeatedly sending sensitive personal documents; she shared them once with a trusted verifier (IDEMIA via the Verified ID app flow), not with every individual who asks for ID. This reduces the exposure of her personal data.
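On the employer’s side, issuing Alice’s credential after a successful proofing check is another call to the same Request Service. Again a hedged sketch, not the definitive flow: the manifest URL, claim names, and DIDs below are placeholders you would take from your own credential definition in the Entra portal.

# Sketch only: issue a credential to the wallet after identity proofing.
# Field names follow the public createIssuanceRequest API; values are placeholders.
$accessToken           = "<bearer token for the Verified ID Request Service>"
$credentialManifestUrl = "<manifest URL from your credential definition in the Entra portal>"
$body = @{
    authority    = "did:web:verifiedid.contoso.com"
    registration = @{ clientName = "Contoso Onboarding" }
    callback     = @{
        url   = "https://contoso.example/api/issuer/callback"
        state = [guid]::NewGuid().ToString()
    }
    type     = "VerifiedEmployee"
    manifest = $credentialManifestUrl
    claims   = @{ givenName = "Alice"; surname = "Example" }   # illustrative claims
} | ConvertTo-Json -Depth 5

$uri = "https://verifiedid.did.msidentity.com/v1.0/verifiableCredentials/createIssuanceRequest"
$response = Invoke-RestMethod -Method Post -Uri $uri `
    -Headers @{ Authorization = "Bearer $accessToken" } `
    -ContentType "application/json" -Body $body
# Alice opens $response.url in Microsoft Authenticator and accepts the credential.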
From the company’s perspective, this approach dramatically improves security and streamlines processes. During onboarding, it’s actually faster to have someone go through an automated ID verification flow than to coordinate an in-person verification or trust slow manual checks. Organizations also avoid collecting and storing piles of personal documents, which is a compliance headache and a breach risk. Instead, they get a cryptographic assurance. Think of it like the difference between keeping copies of everyone’s credit card versus using a payment token – the latter is safer and just as effective for the transaction.

Microsoft has been laying the groundwork for this for years. Back in 2020 (and even 2017), Microsoft discussed decentralized identity concepts where users own their identity data and apps verify facts about you through digital attestations (7). Now it’s reality: Entra Verified ID uses those open standards (DID and Verifiable Credentials) under the hood. Plus, the integration with IDEMIA and others means it’s not just theoretical — it’s operational and scalable. As Ankur Patel, one of our product leaders for Microsoft Entra, said about these integrations: they enable “high assurance verification without custom business contracts or technical implementations” (6). In other words, companies can now easily plug this capability in, rather than building their own verification processes from scratch.

A quote from IDEMIA’s leadership underscores the value: “With more than 40 years of experience in identity issuance, verification and advanced biometrics, our collaboration with Microsoft enables secure authentication with verified identities organizations can rely on to ensure individuals are who they claim to be and critical services can be accessed seamlessly and securely.” – Amit Sharma, Head of Digital Strategy, IDEMIA (6). That quote basically says it all: verified identities that organizations can rely on, enabling seamless and secure access. Now, let’s see how that translates into real-world usage.

Use Cases and Benefits: From Onboarding to Recovery

How can Verified ID (plus IDEMIA’s verification services) be applied in day-to-day business? There are several high-impact use cases.

Remote Employee Onboarding (aka Hire with Confidence): This is the most straightforward scenario. When bringing in a new hire you haven’t met in person, you can integrate an identity verification step. As described earlier, the new employee verifies their government ID and face once, gets a credential, and uses that to start their work. The hiring team can trust that “this person is real and is who they say they are.” This directly prevents many fake-employee scams. In fact, some companies have already tried informal versions of this: The Register reported a story of an identity verification company (ironically) that, after seeing suspicious candidates, told one applicant “next interview we’ll do a document verification, it’s easy, we’ll send you a barcode to scan your ID” – and that candidate never showed up for the next round because they knew they’d be caught (1). With Verified ID, this becomes a standard, automated part of the process, not an ad-hoc test. As a bonus, the employee’s Verified ID credential can also speed up IT onboarding (auto-provisioning accounts when the verified credential is presented) and even simplify things like proving work authorization to other services (think how often you have to send copies of IDs to benefits providers or background screeners – a credential could replace that). The new hire starts faster, and with less anxiety, because they know there’s a strong proof attached to their identity, and the company has less risk from day one.
Oh, and HR isn’t stuck babysitting sensitive documents – governance and privacy risk go down.

Stronger Helpdesk and Support Authentication: Helpdesk fraud is a common way attackers exploit weak verification. Instead of asking employees for their first pet’s name or a short code (which an attacker might phish), support can use Verified ID to confirm the person’s identity. For example, if someone calls IT saying “I’m locked out of my account,” the support portal can send a push notification asking the user to present their Verified Employee credential or do a quick re-verify via the app. If the person on the phone is an impostor, they’ll fail this check. If it’s the real employee, it’s an easy tap on their phone to prove it’s them. This approach secures processes like password resets, unlocking accounts, or granting temporary access. Think of it as caller-ID on steroids. Instead of taking someone’s word that “I am Alice from Finance,” the system actually asks for proof. And because the proof is cryptographically verified, it’s much harder to trick than a human support agent with a sob story. This reduces the burden on support too – less time playing detective with personal questions, more confidence in automating certain requests.

Account Recovery and On-Demand Re-Verification: We’ve all dealt with the hassle of account recovery when we lose a password or device. Often it’s a weak link: backup codes, personal Q&A, the support team asking some manager who can’t even tell if it’s really you, or asking for a copy of your ID… With Verified ID, organizations can offer secure self-service recovery that doesn’t rely on shared secrets. For instance, if you lose access to your multi-factor auth and need to regain entry, you could be prompted to verify your identity with a government ID check through the Verified ID system. Once you pass, you might be allowed to reset your authentication methods. Microsoft is already moving in this direction – there’s talk of replacing security questions with Verified ID checks for Entra ID account recovery (6). The benefit here is you get high assurance that the person recovering the account is the legitimate owner. This is especially important for administrators and other highly privileged users. And it’s still faster for the user than, say, waiting days for IT to manually vet and approve a request. Additionally, companies could have policies where, every X months, employees engaged in sensitive work get a prompt to reaffirm their identity. It keeps everyone honest and catches anomalies (imagine an attacker who somehow compromised an account – when faced with an unexpected ID check, they wouldn’t be able to comply, raising a red flag).

Step-Up Authentication for Sensitive Actions: Not every action an employee takes needs this level of verification, but some absolutely do. For example, a finance officer making a $10 million wire transfer, an engineer pushing code to a production environment, or an HR admin downloading an entire employee database – these could all trigger a step-up authentication that includes verifying the user’s identity credential. In practice, the user might get a pop-up saying “Please present your Verified ID to continue.” It might even ask for a quick fresh selfie depending on the sensitivity, which can be matched against the one on file (using Face Match in a privacy-conscious way).
This is like saying: “We know you logged in with your password and MFA earlier, but this action is so critical that we want to double-check you are still the one executing it – not someone who stole your session or is using your computer.” It’s analogous to how some banks send a one-time code for high-value transactions, but instead of just a code (which could be stolen), it’s verifying you. This dramatically reduces the risk of insider threats and account takeovers causing catastrophic damage. And for the user, it’s usually a simple extra step whose importance they’ll understand, especially in high-stakes fields. It builds trust both ways: the company trusts the user enough to grant access, and the verification ensures no one is impersonating them.

In all these cases, Verified ID adds security without a huge usability cost. In fact, many users might prefer it to the status quo: I’d rather verify my identity once properly than have to answer a bunch of security questions or have an IT person eyeballing my ID over a grainy video call. It also introduces transparency and control. As an employee, if I’m using a Verified ID, I know exactly what credential I’m sharing and why, and I have a log of it. It’s not an opaque process where I send documents into a void. From a governance perspective, using Verified ID means less widespread personal data to protect, and a clearer audit trail of “this action was taken by Alice, whose identity was verified by method X at time Y.” It can even help with regulatory compliance – for instance, proving that you really know who has access to sensitive financial data (important for things like SOX compliance or other audits).

And circling back to the theme of fake employees: if such a system is in place, it’s a massive deterrent. The barrier to entry for fraudsters becomes much higher. It’s not impossible (nothing is, and you still need to assume breach), but now they’d have to fool a top-tier document verification and biometric check – not just an overworked recruiter. That likely requires physical presence and high-quality fake documents, which are riskier and more costly for attackers. The more companies adopt such measures, the less “return on investment” these hiring scams will have for cybercriminals.

The Bigger Picture: Verified Identity as the New Security Frontier

The convergence of trends here is interesting. On one hand, we have digital transformation and remote work, which opened the door to these novel attacks. On the other hand, we have new security philosophies like Zero Trust that emphasize continuous verification of identity and context. Verified ID is essentially Zero Trust for the hiring and identity side of things: “never trust an identity claim, always verify it.” What’s exciting is that this can now be done without turning the enterprise into a surveillance state or creating unbearable friction for legitimate users. It leverages cryptography and user-centric design to raise security and preserve privacy.

Microsoft’s involvement in decentralized identity and the integration of partners like IDEMIA signal that this approach is maturing. It’s moving from pilot projects to being built into mainstream products (Entra ID, Microsoft 365; LinkedIn even offers verification badges via Entra Verified ID now (5)). It’s worth noting LinkedIn’s angle here: job seekers can verify where they work or their government ID on their LinkedIn profile, which could also help employers spot fakes (though it’s voluntary and early-stage).
For CISOs and identity architects, Verified ID offers a concrete tool to address what was previously a very squishy problem. Instead of just crossing your fingers that employees are who they say they are, you can enforce it. It’s analogous to the evolution of payments security: we moved from signatures (which were rarely checked) to PIN codes and chips, and now to contactless cryptographic payments. Hiring and access management can undergo a similar upgrade from assumption-based to verification-based.

Of course, adopting Verified ID or any new identity tech requires planning. Organizations will need to update their onboarding processes, train HR and IT staff on the new procedure, and ensure employees are comfortable with it. Privacy considerations must be addressed (e.g., clarifying that biometric data used for verification isn’t stored indefinitely). But compared to the alternative – doing nothing and hoping to avoid being the next company in a scathing news headline about North Korean fake workers – the effort is worthwhile.

In summary, human identity has become the new primary perimeter for cybersecurity. We can build all the firewalls and endpoint protections we want, but if a malicious actor can legitimately log in through the front door as an employee, those defenses may not matter. Verified identity solutions like Microsoft Entra Verified ID (with partners like IDEMIA) provide a way to fortify that perimeter with strong, real-time checks. They bring trust back into remote interactions by shifting from “trust by default” to “trust because verified.” This is not just a theoretical future; it’s happening now. As of late 2025, these tools are generally available and being rolled out in enterprises. Early adopters will likely be those in highly targeted sectors or with regulatory pressures – think defense contractors, financial institutions, and tech companies burned by experience. But I suspect it will trickle into standard best practices over the next few years, much like multi-factor authentication did.

The fight against fake employees and deepfake hiring scams will continue, and attackers will evolve new tricks (perhaps trying to fake the verifications themselves). But having this layer in place tilts the balance back in favor of the defenders. It forces attackers to take more risks and expend more resources, which in turn dissuades many from even trying.

To end on a practical note: if you’re a security decision-maker, now is a good time to evaluate your organization’s hiring and identity verification practices. Conduct a risk assessment – do you have any way to truly verify a new remote hire’s identity? How confident are you that all your current employees are real? If those questions make you uncomfortable, it’s worth looking into solutions like Verified ID. We’re entering an era where digital identity proofing will be as standard as background checks in HR processes. The technology has caught up to the threat, and embracing it could save your company from a very costly “lesson learned.” Remember: trust is good, but verified trust is better. By making identity verification a seamless part of the employee lifecycle, we can help ensure that the only people on the payroll are the ones we intended to hire. In a world of sophisticated fakes, that confidence is priceless.

Sources:

(1.1) The Register – Amazon blocked 1,800 suspected North Korean scammers seeking jobs (Dec 18, 2025) – S. Schmidt comments on DPRK fake workers and advises multi-stage identity verification.
https://www.theregister.com/2025/12/18/amazon_blocked_fake_dprk_workers (“We believe, at this point, every Fortune 100 and potentially Fortune 500 has a pretty high number of risky employees on their books” – Rivka Little, Chief Growth Officer, Socure) & https://www.linkedin.com/posts/stephenschmidt1_over-the-past-few-years-north-korean-dprk-activity-7407485036142276610-dot7 (“Implement identity verification at multiple hiring stages and monitor for anomalous technical behavior” – S. Schmidt, Amazon CSO)

(1.2) Heal Security – Amazon Catches North Korean IT Worker by Tracking Tiny 110ms Keystroke Delays (Dec 19, 2025). https://healsecurity.com/amazon-catches-north-korean-it-worker-by-tracking-tiny-110ms-keystroke-delays/

(2.1) U.S. Department of Justice – Charges and Seizures Brought in Fraud Scheme Aimed at Denying Revenue for Workers Associated with North Korea (May 16, 2024). https://www.justice.gov/usao-dc/pr/charges-and-seizures-brought-fraud-scheme-aimed-denying-revenue-workers-associated-north

(2.2) PCMag – Remote Scammers Infiltrate 300+ Companies (Aug 4, 2025). https://www.pcmag.com/news/is-your-coworker-a-north-korean-remote-scammers-infiltrate-300-plus-companies

(2.3) POLITICO – Tech companies have a big remote worker problem: North Korean operatives (May 12, 2025). https://www.politico.com/news/2025/05/12/north-korea-remote-workers-us-tech-companies-00340208 (“I’ve talked to a lot of CISOs at Fortune 500 companies, and nearly every one that I’ve spoken to about the North Korean IT worker problem has admitted they’ve hired at least one North Korean IT worker, if not a dozen or a few dozen” – Charles Carmakal, Chief Technology Officer, Google Cloud’s Mandiant) & North Koreans posing as remote IT workers infiltrated 136 U.S. companies (Nov 14, 2025). https://www.politico.com/news/2025/11/14/north-korean-remote-work-it-scam-00652866

(3) HR Dive – By 2028, 1 in 4 candidate profiles will be fake, Gartner predicts (Aug 8, 2025) – Gartner research highlighting rising candidate fraud and the 25% fake profile forecast. https://www.hrdive.com/news/fake-job-candidates-ai/757126/

(4) World Economic Forum – Unmasking the AI-powered, remote IT worker scams threatening businesses (Dec 15, 2025) – Overview of deepfake hiring threats; recommends government ID checks at multiple hiring stages. https://www.weforum.org/stories/2025/12/unmasking-ai-powered-remote-it-worker-scams-threatening-businesses-worldwide/

(5) The Verge – LinkedIn gets a free verified badge that lets you prove where you work (Apr 12, 2023) – Describes LinkedIn’s integration with Microsoft Entra for profile verification. https://www.theverge.com/2023/4/12/23679998/linkedin-verification-badge-system-clear-microsoft-entra

(6) Microsoft Tech Community – Building defense in depth: Simplifying identity security with new partner integrations (Nov 24, 2025, by P. Nrisimha) – Microsoft Entra blog announcing Verified ID GA, including the IDEMIA integration and quotes (Amit Sharma, Ankur Patel). https://techcommunity.microsoft.com/t5/microsoft-entra-blog/building-defense-in-depth-simplifying-identity-security-with-new/ba-p/4468733 & https://www.linkedin.com/posts/idemia-public-security_synced-passkeys-and-high-assurance-account-activity-7407061181879709696-SMi7 & https://www.linkedin.com/posts/4ankurpatel_synced-passkeys-and-high-assurance-account-activity-7406757097578799105-uFZz (“high assurance verification without custom business contracts or technical implementations” – Ankur Patel)

(7) Microsoft Entra Blog – Building trust into digital experiences with decentralized identities (June 10, 2020, by A. Simons & A. Patel) – Background on Microsoft’s approach to decentralized identity (DID, Verifiable Credentials). https://techcommunity.microsoft.com/t5/microsoft-entra-blog/building-trust-into-digital-experiences-with-decentralized/ba-p/1257362 & Decentralized digital identities and blockchain: The future as we see it (Feb 12, 2018). https://www.microsoft.com/en-us/microsoft-365/blog/2018/02/12/decentralized-digital-identities-and-blockchain-the-future-as-we-see-it/ & Partnering for a path to digital identity (Jan 22, 2018). https://blogs.microsoft.com/blog/2018/01/22/partnering-for-a-path-to-digital-identity/

About the Author

I'm Samuel Gaston-Raoul, Partner Solution Architect at Microsoft, working across the EMEA region with the diverse ecosystem of Microsoft partners—including System Integrators (SIs) and strategic advisory firms, Independent Software Vendors (ISVs) / Software Development Companies (SDCs), and Startups. I engage with our partners to build, scale, and innovate securely on Microsoft Cloud and Microsoft Security platforms. With a strong focus on cloud and cybersecurity, I help shape strategic offerings and guide the development of security practices—ensuring alignment with market needs, emerging challenges, and Microsoft’s product roadmap. I also engage closely with our product and engineering teams to foster early technical dialogue and drive innovation through collaborative design. Whether through architecture workshops, technical enablement, or public speaking engagements, I aim to evangelize Microsoft’s security vision while co-creating solutions that meet the evolving demands of the AI and cybersecurity era.

Step by Step: 2-Tier PKI Lab
Purpose of this blog

Public Key Infrastructure (PKI) is the backbone of secure digital identity management, enabling encryption, digital signatures, and certificate-based authentication. However, neither setting up a PKI nor managing certificates is something most IT pros do on a regular basis, and given the complexity and vastness of the subject, it makes sense to revisit the topic from time to time. What works best for me is to set up a lab and get my hands dirty with the topic I want to revisit. One such topic I keep coming back to is PKI – be it for creating certificate templates, enrolling clients, or flat out creating a new PKI itself. But every time I start deploying a lab or planning a PKI setup, I end up spending too much time sifting through the documentation and trying to figure out why my issuing certification authority won't come online! To make my life easier I created a cheatsheet for deploying a simple but secure 2-tier PKI lab based on industry best practices, and since I thought it would be beneficial for others like me, I decided to polish it and turn it into a blog.

This blog walks through deploying a two-tier PKI hierarchy using Active Directory Certificate Services (AD CS) on Windows Server: an offline Root Certification Authority (Root CA) and an online Issuing Certification Authority (Issuing CA). We’ll cover step-by-step deployment and best practices for securing the root CA, conducting key ceremonies, and maintaining Certificate Revocation Lists (CRLs).

Overview: Two-Tier PKI Architecture and Components

In a two-tier PKI, the Root CA sits at the top of the trust hierarchy and issues a certificate only to the subordinate Issuing CA. The Root CA is kept offline (disconnected from networks) to protect its private key and is typically a standalone CA (not domain-joined). The Issuing CA (sometimes called a subordinate or intermediate CA) is kept online to issue certificates to end-entities (users, computers, services) and is usually an enterprise CA integrated with Active Directory for automation and certificate template support.

Key components:

Offline Root CA: A standalone CA, often on a workgroup server, powered on only when necessary (initial setup, subordinate CA certificate signing, or periodic CRL publishing). By staying offline, it is insulated from network threats. Its self-signed certificate serves as the trust anchor for the entire PKI. The Root CA’s private key must be rigorously protected (ideally by a Hardware Security Module) because if the root is compromised, all certificates in the hierarchy are compromised.

Online Issuing CA: An enterprise subordinate CA (domain-joined) that handles day-to-day certificate issuance for the organization. It trusts the Root CA (via the root’s certificate) and is the one actually responding to certificate requests. Being online, it must also be secured, but its key is kept online for operations. Typically, the Issuing CA publishes certificates and CRLs to Active Directory and/or HTTP locations for clients to download.
The following diagram shows a simplified view of this implementation. The table below summarizes the roles and differences:

Aspect | Offline Root CA | Online Issuing CA
Role | Standalone Root CA (workgroup) | Enterprise Subordinate CA (domain member)
Network Connectivity | Kept offline (powered off or disconnected when not issuing) | Online (running continuously to serve requests)
Usage | Signs only one certificate (the subordinate CA’s cert) and CRLs | Issues end-entity certificates (users, computers, services)
Active Directory | Not a member of an AD domain; doesn’t use templates or auto-enrollment | Integrated with AD DS; uses certificate templates for streamlined issuance
Security | Extremely high: physically secured, limited access, often protected by HSM | Very high: server hardened, but accessible on network; HSM recommended for private key
CRL Publication | Manual: admin must periodically connect, generate, and distribute the CRL; delta CRLs usually disabled | Automatic: publishes CRLs to configured CDP locations (AD DS, HTTP) at scheduled intervals
Validity Period | Longer (e.g., 5-10+ years for the CA certificate) to reduce frequency of renewal | Shorter (e.g., 2 years) to align with organizational policy; renewed under the root when needed

In this lab setup, we will create a Contoso Root CA (offline) and a Contoso Issuing CA (online) as an example. This mirrors real-world best practice, which is to “deploy a standalone offline root CA and an online enterprise subordinate CA”.

Deploying the Offline Root CA

Setting up the offline Root CA involves preparing a dedicated server, installing AD CS, configuring it as a root CA, and then securing it. We’ll also configure the certificate CDP/AIA (CRL Distribution Point and Authority Information Access) locations so that issued certificates point clients to the correct places to fetch the CA’s certificate and revocation list.

Step 1: Prepare the Root CA Server (Offline)

Provision an isolated server: Install a Windows Server OS (e.g., Windows Server 2022) on the machine designated to be the Root CA – preferably a portable, enterprise-grade physical server that can be stored in a safe. Do not join this server to any domain – it should function in a workgroup to remain independent of your AD forest.

System configuration: Give the server a descriptive name (e.g., ROOTCA) and assign a static IP (even though it will be offline, a static IP helps when connecting it temporarily for management). Install the latest updates and security patches while it’s still able to go online.

Lock down network access: Once setup is complete, disable or unplug network connections. If the server must remain powered on for any reason, ensure all unnecessary services/ports are disabled to minimize exposure. In practice, you will keep this server shut down or physically disconnected except when performing CA maintenance.

Step 2: Install the AD CS Role on the Root CA

Add the Certification Authority role: On the Root CA server, open Server Manager and add the Active Directory Certificate Services role. During the wizard, select the Certification Authority role service (no need for web enrollment or others on the root). Proceed through the wizard and complete the installation. You can also install the CA role and management tools via PowerShell:

Install-WindowsFeature AD-Certificate -IncludeManagementTools

Post-install configuration: After the binary installation, click Configure Active Directory Certificate Services (a notification in Server Manager). In the configuration wizard:

Role Services: Choose Certification Authority.
Setup Type: Select Standalone CA (since this root CA is not domain-joined).
CA Type: Select Root CA.
Private Key: Choose “Create a new private key.”
Cryptography: If using an HSM, select the HSM’s Cryptographic Service Provider (CSP) here; otherwise use the default. Choose a strong key length (e.g., 2048 or 4096 bits) and a secure hash algorithm (SHA-256 or higher).
CA Name: Provide a common name for the CA (e.g., “Contoso Root CA”). This name will appear in issued certificates as the Issuer. Avoid using a machine DNS name here for security – pick a name that doesn’t reveal the server’s actual hostname.
Validity Period: Set a long validity (e.g., 10 years) for the root CA’s self-signed certificate. A decade is common for enterprise roots, reducing how often you must touch the offline CA for renewal.
Database: Specify locations for the CA database and logs (the defaults are fine for a lab).

Review the settings and complete the configuration. This process will generate the root CA’s key pair and self-signed certificate, establishing the Root CA. You can also perform this configuration via PowerShell in one line:

Install-AdcsCertificationAuthority `
    -CAType StandaloneRootCA `
    -CryptoProviderName "YourHSMProvider" `
    -HashAlgorithmName SHA256 -KeyLength 2048 `
    -CACommonName "Contoso Root CA" `
    -ValidityPeriod Years -ValidityPeriodUnits 10

This sets up a standalone Root CA named "Contoso Root CA" with a 2048-bit key on an HSM provider, valid for 10 years.

Step 3: Integrate an HSM (Optional but Recommended)

If your lab has a Hardware Security Module, use it to secure the Root CA’s keys. An HSM provides dedicated, tamper-resistant storage for CA private keys and further protects against key compromise. To integrate: install the HSM vendor’s software and drivers on the Root CA server. Initialize the HSM and create a security world or partition per the vendor instructions. Before or during the CA configuration (Step 2 above), ensure the HSM is ready to generate/store the key. When running the AD CS configuration, select the HSM’s CSP/KSP as the cryptographic provider so that the CA’s private key is generated on the HSM. Secure any HSM admin tokens or smartcards. For a root CA, you might employ M of N key splits – requiring multiple key custodians to collaborate to activate the HSM or key – as part of the key ceremony (discussed later).

(If an HSM is not available, the root key will be stored on the server’s disk. At minimum, protect it with a strong admin passphrase when prompted, and consider enabling the option to require administrator interaction (e.g., a password) whenever the key is accessed.)

Step 4: Configure CA Extensions (CDP/AIA)

It’s critical to configure how the Root CA publishes its certificate and revocation list, since the root is offline and cannot use Active Directory auto-publishing. Open the Certification Authority management console (certsrv.msc), right-click the CA name > Properties, and go to the Extensions tab. We will set the CRL Distribution Point (CDP) and Authority Information Access (AIA) URLs.

CRL Distribution Point (CDP): This is where certificates will tell clients to fetch the CRL for the Root CA. By default, a standalone CA might have a file:// path or no HTTP URL.
Click Add and specify an HTTP URL that will be accessible to all network clients, such as:

http://<IssuingCA_Server>/CertEnroll/<CaName><CRLNameSuffix><DeltaCRLAllowed>.crl

For example, if your issuing CA’s server name is ISSUINGCA.contoso.local, the URL might be http://issuingca.contoso.local/CertEnroll/Contoso%20Root%20CA.crl. This assumes the Issuing CA (or another web server) will host the Root CA’s CRL in the CertEnroll directory. Check the boxes for “Include in the CDP extension of issued certificates” and “Include in all CRLs. Clients use this to find Delta CRLs” (you can uncheck delta CRL publication on the root, as we won’t use delta CRLs on an offline root). Since the root CA won’t often revoke its single issued cert (the subordinate CA), delta CRLs aren’t necessary.

Note: If Active Directory is in use and you want to publish the Root CA’s CRL to AD, you can also add an ldap:///CN=... path and check “Publish in Active Directory”. However, publishing to AD from an offline CA must be done manually, using certutil -dspublish when the root is temporarily connected. Many setups skip LDAP for offline roots and rely on HTTP distribution.

Authority Information Access (AIA): This is where the Root CA’s certificate will be published for clients to download (to build certificate chains). Add an HTTP URL similarly, for example:

http://<IssuingCA_Server>/CertEnroll/<ServerDNSName>_<CaName><CertificateName>.crt

This would point to a copy of the Root CA’s certificate hosted on the issuing CA’s web server. Check “Include in the AIA extension of issued certificates”. This way, any certificate signed by the Root CA (like your subordinate CA’s cert) contains a URL where clients can fetch the Root CA’s cert if they don’t already have it.

After adding these, remove any default entries that are not applicable (e.g., LDAP if the root isn’t going to publish to AD, or file paths that won’t be used by clients). These settings ensure that certificates issued by the Root CA (in practice, just the subordinate CA’s certificate) will carry the correct URLs for chain building and revocation checking.
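If you prefer scripting this instead of clicking through the Extensions tab, the same CDP/AIA settings can be applied with certutil registry edits. A minimal sketch, assuming the lab URLs above: the numeric prefixes are the AD CS publication flags (1 = publish to this location, 2 = include in the extension of issued certificates), and the %3/%8/%9 and %1/%4 tokens are the same placeholders the GUI uses.

# Run in an elevated prompt on the Root CA, then restart the CA service.
certutil -setreg CA\CRLPublicationURLs "1:C:\Windows\system32\CertSrv\CertEnroll\%3%8%9.crl\n2:http://issuingca.contoso.local/CertEnroll/%3%8%9.crl"
certutil -setreg CA\CACertPublicationURLs "1:C:\Windows\system32\CertSrv\CertEnroll\%1_%3%4.crt\n2:http://issuingca.contoso.local/CertEnroll/%1_%3%4.crt"
Restart-Service certsvc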
Step 5: Back Up the Root CA and Issue the Subordinate Certificate

With the Root CA configured, we need to issue a certificate for the Issuing CA (subordinate). We’ll perform that in the next section from the Issuing CA’s side via a request file. Before taking the root offline, ensure you:

Back up the CA’s private key and certificate: In the Certification Authority console, or via the CA Backup wizard, export the Root CA’s key pair and CA certificate. Protect this backup (store it offline in a secure location, e.g., on encrypted removable media in a safe). This backup is crucial for disaster recovery or if the Root CA needs to be migrated or restored.

Save the Root CA certificate: You will need the Root CA’s public certificate (*.crt) to distribute to other systems. Export it (Base-64 or DER format) for use on the Issuing CA and for clients.

Initial CRL publication: Manually publish the first CRL so that it can be distributed. Open an elevated Command Prompt on the Root CA and run:

certutil -crl

This generates a new CRL file (in the CA’s configured CRL folder, typically %windir%\system32\CertSrv\CertEnroll). Take that CRL file and copy it to the designated distribution point (for example, to the CertEnroll directory on the Issuing CA’s web server, as per the HTTP URL configured). If using Active Directory for CRL distribution, you would also publish it to AD now (e.g., certutil -dspublish -f RootCA.crl on a domain-connected machine). In most lab setups, copying to an HTTP share is sufficient.

With these tasks done, the Root CA is ready. At this point, disconnect or power off the Root CA and store it securely – it should remain offline except when it’s absolutely needed (like publishing a new CRL or renewing the subordinate CA’s certificate in the far future). Keeping the root CA offline maximizes its security by minimizing exposure to compromise.

Best Practices for Securing the Root CA

The Root CA is the trust anchor, so apply stringent security practices:

Physical security: Store the Root CA machine in a locked, secure location. If it’s a virtual machine, consider storing it on a disconnected hypervisor or a USB drive locked in a safe. Only authorized PKI team members should have access. An offline CA should be treated like the crown jewels and stored in a secure location.

Minimal exposure: Keep the Root CA powered off and disconnected when not in use. It should not be left running or connected to any network. Routine operations (like issuing end-entity certs) should never involve the root.

Admin access control: Limit administrative access on the Root CA server. Use dedicated accounts for PKI administration. Enable auditing on the CA for any changes or issuance events.

No additional roles or software: Do not use the Root CA server for any other function (no web browsing, no email, etc.). Fewer installed components mean fewer potential vulnerabilities.

Protect the private key: Use an HSM if possible; if not, ensure the key is at least protected by a strong password and consider splitting knowledge of that password among multiple people (so no single person can activate the CA). Many organizations opt for an offline root key ceremony (see below) to generate and handle the root key with multiple witnesses and strict procedures.

Keep system time and settings consistent: If the Root CA is powered off for long periods, ensure its clock is accurate whenever it is started (to avoid issuing a CRL or certificate with a wrong date). Don’t change the server name or CA name after installation (doing so invalidates issued certs).

Periodic health checks: Even though it is offline, plan to turn on the Root CA at a secure interval (e.g., semi-annually or annually) to perform tasks like CRL publishing and system updates. Apply OS security updates during these maintenance windows; offline does not mean immune to vulnerabilities (especially if the machine ever connects to a network for CRL publication or uses removable media).

Deploying the Online Issuing CA

Next, set up the Issuing CA server, which will actually issue certificates to end entities in the lab. This server will be domain-joined (if using AD integration) and will obtain its CA certificate from the Root CA we just configured.

Step 1: Prepare the Issuing CA Server

Provision the server: Install Windows Server on a new machine (or VM) that will be the Issuing CA. Join this server to the Active Directory domain (e.g., Contoso.local). Being an enterprise CA, it needs domain membership to publish templates and integrate with AD security groups. Rename the server to something descriptive like ISSUINGCA for clarity. Assign a static IP and ensure it can communicate on the network.
IIS for web enrollment (optional): If you plan to use the Web Enrollment or Certificate Enrollment Web Services, ensure IIS is installed. (The AD CS installation wizard can add it if you include those role services.) For this guide, we will include the Web Enrollment role so that the CertEnroll directory is set up for hosting certificate and CRL files.

Step 2: Install the AD CS Role on the Issuing CA

On the Issuing CA server, add the Active Directory Certificate Services role via Server Manager or PowerShell. This time, select both the Certification Authority and Certification Authority Web Enrollment role services (Web Enrollment will set up the HTTP endpoints for certificate requests if needed). For example, using PowerShell:

Install-WindowsFeature AD-Certificate, ADCS-Web-Enrollment -IncludeManagementTools

After installation, launch the AD CS configuration wizard:

Role Services: Choose Certification Authority (and Web Enrollment if prompted).
Setup Type: Select Enterprise CA (since this CA will integrate with AD DS).
CA Type: Select Subordinate CA (this indicates it will get its cert from an existing root CA).
Private Key: Choose “Create a new private key” (we’ll generate a new key pair for this CA).
Cryptography: If using an HSM here as well, select the HSM’s CSP/KSP for the issuing CA’s key. Otherwise, choose a strong key length (2048+ bits, SHA-256 or better for the hash).
CA Name: Provide a name (e.g., “Contoso Issuing CA”). This name will appear as the Issuer on certificates it issues.
Certificate Request: The wizard will ask how you want to get the subordinate CA’s certificate. Choose “Save a certificate request to file”. Specify a path, e.g., C:\CertRequest\issuingCA.req. The wizard will generate a request file that we need to take to the Root CA for signing. (Since our Root CA is offline, this file transfer might be via secure USB or a network share while the root is temporarily online.)
CA Database: Choose locations or accept the defaults for the certificate DB and logs.

Finish the configuration wizard; it will complete as pending because the CA doesn’t have a certificate yet. The AD CS service on this server won’t start until we import the issued cert from the root.
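As with the root, the wizard steps can be condensed into a single PowerShell command. A sketch mirroring the settings above (swap in your HSM provider via -CryptoProviderName if you use one; the request file path matches the wizard example):

Install-AdcsCertificationAuthority `
    -CAType EnterpriseSubordinateCA `
    -CACommonName "Contoso Issuing CA" `
    -HashAlgorithmName SHA256 -KeyLength 2048 `
    -OutputCertRequestFile C:\CertRequest\issuingCA.req

This leaves the CA in the same pending state as the wizard: the request file still has to be signed by the Root CA.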
Step 3: Integrate an HSM on the Issuing CA (Optional)

If available, repeat the HSM setup on the Issuing CA: install the HSM drivers, initialize it, and generate/secure the key for the subordinate CA on the HSM. Ensure you chose the HSM provider during the above configuration so that the issuing CA’s private key is stored in the HSM. Even though this CA is online, an HSM still greatly enhances security by protecting the private key from extraction. The issuing CA’s HSM may not require multiple custodians to activate (as it needs to run continuously), but it should still be physically secured.

Step 4: Obtain the Issuing CA’s Certificate from the Root CA

Now we have a pending request (issuingCA.req) for the subordinate CA. To get its certificate:

Transport the request to the Root CA: Copy the request file to the offline Root CA (via secure means – e.g., a freshly formatted USB stick). Start up the Root CA (in a secure, offline setting) and open the Certification Authority console.

Submit the request on the Root CA: Right-click the Root CA in the CA console > All Tasks > Submit new request, and select the .req file. The request will appear in Pending Requests on the root.

Issue the subordinate CA certificate: Find the pending request (it will list the Issuing CA’s name). Right-click it and choose All Tasks > Issue. The subordinate CA’s certificate is now issued by the Root CA.

Export the issued certificate: Still on the Root CA, go to Issued Certificates and find the newly issued subordinate CA cert (you can identify it by the Request ID or by the name). Open it and use Copy to File on the Details tab (or query it with certutil -view) to export it. Save the certificate as issuingCA.cer. Also make sure you have a copy of the Root CA’s certificate (if not already done).

Publish the Root CA cert and CRL as needed: Before leaving the Root CA, ensure the root’s own certificate and latest CRL are available to the issuing CA and clients. If not already done in Step 5 of the root deployment, export the Root CA cert (DER format) and copy the CRL file. You might run certutil -crl again if some time has passed since the initial CRL. Now take the issuingCA.cer file (and the root cert/CRL files) and move them back to the Issuing CA server.

Step 5: Install the Issuing CA’s Certificate and Complete Configuration

On the Issuing CA server (which is still waiting for its CA cert):

Install the subordinate CA certificate: In Server Manager or the Certification Authority console on the Issuing CA, there should be an option to “Install CA Certificate” (if the AD CS configuration wizard is still open, it will prompt for the file; otherwise, in the CA console right-click the CA name > All Tasks > Install CA Certificate). Provide the issuingCA.cer file obtained from the root. This will install the CA’s own certificate and start the CA service. The Issuing CA is now operational as a subordinate CA. Alternatively, from an elevated prompt:

certutil -installcert C:\CertRequest\issuingCA.cer

This installs the cert and associates it with the pending key.

Trust the Root CA certificate: Because the Issuing CA is domain-joined, installing the subordinate cert might automatically place the Root CA’s certificate in the Trusted Root Certification Authorities store on that server (and possibly publish it to AD). If not, manually install the Root CA’s certificate into the Trusted Root CA store on the Issuing CA machine (using the Certificates MMC or certutil -addstore -f Root rootCA.cer). This step prevents any “chain not trusted” warnings on the Issuing CA and ensures it trusts its parent. In an enterprise environment, you would also distribute the root certificate to all client machines (e.g., via Group Policy) so that they trust the whole chain.

Import the Root CRL: Copy the Root CA’s CRL (*.crl file) to the Issuing CA’s CRL distribution point location (e.g., C:\Windows\System32\CertSrv\CertEnroll\ if that’s the directory served by the web server). This matches the HTTP URL we configured on the root. Place the CRL file there and ensure it is accessible (the Issuing CA’s IIS might need to serve static .crl files; often, if Web Enrollment is installed, the CertEnroll folder is under C:\Inetpub\wwwroot\CertEnroll). At this point, the subordinate CA and any client hitting the HTTP URL can retrieve the root’s CRL.

The subordinate CA is now fully established. It holds a certificate issued by the Root CA (forming a complete chain of trust), and it’s ready to issue end-entity certificates.
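The whole request/issue/install round trip can also be done from the command line, which is handy when working on the offline root over a bare console session. A sketch, assuming the lab names used throughout and that the request gets ID 2 on the root (use whatever ID certreq reports when you submit):

# On the Root CA (elevated). Submit the request; a standalone CA holds it as pending.
certreq -submit -config "ROOTCA\Contoso Root CA" C:\Transfer\issuingCA.req
certutil -resubmit 2                          # issue pending request ID 2
certreq -retrieve -config "ROOTCA\Contoso Root CA" 2 C:\Transfer\issuingCA.cer

# Back on the Issuing CA: install the signed certificate and start the service.
certutil -installcert C:\Transfer\issuingCA.cer
Start-Service certsvc

# Sanity check: verify the full chain and that the CDP/AIA URLs resolve.
certutil -verify -urlfetch C:\Transfer\issuingCA.cer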
Step 6: Configure Issuing CA Settings and Start Services

Start the Certificate Services: If the CA service (CertSvc) isn’t started automatically, start or restart it from PowerShell:

Restart-Service certsvc

The CA should show as running in the CA console with the name “Contoso Issuing CA” (or your chosen name).

Configure Certificate Templates: Because this is an Enterprise CA, it can use certificate templates stored in Active Directory to simplify issuing common certificate types (user authentication, computer authentication, web server SSL, etc.). By default, some templates (e.g., User, Computer) are available but not issued. In the Certification Authority console under Certificate Templates, choose which templates to issue (right-click > New > Certificate Template to Issue, then select templates such as “User” or “Computer”). This lab guide doesn’t require specific templates, but note that only Enterprise CAs can use templates. Templates define the policies and settings (cryptography, enrollment permissions, etc.) for issued certificates. Enable only the templates you need and configure their permissions appropriately (e.g., allow the appropriate groups to enroll).

Set the CRL publishing schedule: The Issuing CA automatically publishes its own CRL (for certificates it issues) at intervals. You can adjust the base and delta CRL publication intervals in the Revoked Certificates Properties in the CA console (a scripted example follows this step). A common practice for issuing CAs is a short base CRL period (e.g., one or two weeks), because they revoke end-entity certificates more frequently, combined with delta CRLs published daily for timely revocation information. Make sure the CDP/AIA extensions for the Issuing CA itself are properly configured too (the wizard usually sets LDAP and HTTP locations, but verify them in the Extensions tab). In a lab, the default settings are fine.

Web Enrollment (if installed): You can verify web enrollment by browsing to http://<IssuingCA>/certsrv. This web UI allows browser-based certificate requests. It is largely a legacy interface, but for testing it can be used if your clients aren’t domain-joined or if you want a manual request method. In modern use, the Certificate Enrollment Web Service/Policy roles or auto-enrollment via Group Policy are preferred for remote and automated enrollment.

At this stage, your PKI is operational: the Issuing CA trusts the offline Root CA and can issue certificates. The Root CA can be kept offline with confidence that the subordinate will handle all regular work.
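The CRL schedule from Step 6 can also be set from the command line. A sketch using certutil’s registry verbs; the weekly base CRL and daily delta values are illustrative lab values, not a recommendation for every environment:

```powershell
# Base CRL published weekly, delta CRL published daily (illustrative values).
certutil -setreg CA\CRLPeriodUnits 1
certutil -setreg CA\CRLPeriod "Weeks"
certutil -setreg CA\CRLDeltaPeriodUnits 1
certutil -setreg CA\CRLDeltaPeriod "Days"

# Restart the CA service so the new schedule takes effect, then publish a CRL now.
Restart-Service certsvc
certutil -crl
```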
Validation and Testing of the PKI

It’s important to verify that the PKI is configured correctly:

- Check CA status: On the Issuing CA, open the Certification Authority console and ensure there are no errors; the Issuing CA’s certificate should show as OK (no red X). For the Root CA (offline most of the time), use the Pkiview.msc snap-in (Microsoft PKI Health Tool) on a domain-connected machine to check the health of the PKI. This tool shows whether the CDP/AIA locations are reachable and whether certificates are properly published.
- Trust chain on clients: On a domain-joined client PC, the Root CA certificate should be present in the Trusted Root Certification Authorities store (if the Issuing CA was installed as an Enterprise CA, it likely published the root certificate to AD automatically; you can also distribute it via Group Policy or manually). The Issuing CA’s certificate should appear in the Intermediate Certification Authorities store. This establishes the chain of trust. If not, import the root certificate into the domain’s Group Policy for Trusted Roots. A quick test: on a client, run certutil -config "ISSUINGCA\Contoso Issuing CA" -ping to see if it can contact the CA (or use the Certification Authority MMC targeting the issuing CA).
- Enroll a test certificate: Try to enroll for a certificate from the Issuing CA. For instance, from a domain-joined client, use the Certificates MMC (in the Current User or Computer context) and initiate a certificate request for a User or Computer certificate (depending on the templates issued). If auto-enrollment is configured via Group Policy for a template, simply log on at a client and see if it automatically receives a certificate. Alternatively, use the web enrollment page or the certreq command to submit a request. The request should be approved and a certificate issued by “Contoso Issuing CA”. After enrollment, inspect the issued certificate: it should chain up to “Contoso Root CA” without errors. Ensure that the certificate’s CDP points to the URL we set (and try to browse that URL to see the CRL file), and that the AIA points to the root certificate location.
- Revocation test (optional): To test CRL behavior, revoke a test certificate on the Issuing CA (using the CA console) and publish a new CRL. On the client, after the CRL is refreshed, the revoked certificate should show as revoked. For the Root CA, since it shouldn’t issue end-entity certificates, you wouldn’t normally revoke anything except, potentially, the subordinate CA’s certificate (a drastic action reserved for compromise).

By issuing a test certificate and validating the chain and revocation, you confirm that your two-tier PKI lab is functioning correctly.
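Much of this validation can be scripted from an elevated prompt on a domain-joined client. A minimal sketch; testcert.cer is a placeholder for any certificate you exported after the test enrollment:

```powershell
# Confirm the issuing CA is reachable (ICertRequest ping).
certutil -config "ISSUINGCA\Contoso Issuing CA" -ping

# Verify the full chain and retrieve every AIA/CDP URL for the test certificate.
# Any failure to fetch the CRL or the root certificate shows up in the output.
certutil -verify -urlfetch .\testcert.cer
```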
Maintaining the PKI: CRLs, Key Ceremonies, and Security Procedures

Deploying the PKI is only the beginning. Proper maintenance and operational procedures are crucial to ensure the PKI remains secure and reliable over time.

Periodic CRL updates for the offline root: The Root CA’s CRL has a defined validity period (set during configuration, often 6 or 12 months for offline roots). Before the CRL expires, the Root CA must be brought online (in a secure environment) to issue a new one. Schedule CRL updates periodically (e.g., semi-annually) so the CRL never lapses: an expired root CRL can cause certificate chain validation to fail, potentially disrupting services. Typically, organizations set the offline root CRL validity so that publishing once or twice a year is sufficient. When the time comes:

1. Start the Root CA (ensuring the system clock is correct).
2. Run certutil -crl to issue a fresh CRL.
3. Distribute the new CRL: copy it to the HTTP CDP location (overwriting the old file) and, if applicable, run certutil -dspublish -f RootCA.crl to update it in Active Directory.
4. Verify that the new CRL’s next update date is extended appropriately (e.g., another 6 months out).

Clients and the Issuing CA will automatically pick up the new CRL when checking for revocation. (If the root CRL expires unexpectedly, the Issuing CA may cache the stale CRL and need a restart, or a temporary certutil -setreg ca\CRLFlags +CRLF_REVCHECK_IGNORE_OFFLINE tweak. Keeping to the schedule prevents such issues.)

Issuing CA CRL and OCSP: The Issuing CA’s CRLs are published automatically since it is online. Ensure the IIS site or file share hosting the CRL is accessible. Optionally, consider setting up an Online Responder (OCSP) for real-time status checking, especially if CRLs are large or you need faster revocation information. OCSP is another AD CS role service that can be configured on the issuing CA or another server to answer certificate status queries. This may be beyond a simple lab, but it’s worth mentioning for completeness.

Key ceremonies and documentation: For production environments (and good practice even in labs), formalize the handling of CA keys in a Key Ceremony: a carefully controlled process for activities like generating the Root CA’s key pair, installing the CA, and signing subordinate certificates. It typically involves multiple people so that no single person has unilateral control (the principle of dual control) and so the process is witnessed. Best practices for a Root CA key ceremony include:

- Advance planning: Create a step-by-step script of the ceremony tasks, including who will do what, what materials are needed (HSMs, installation media, backup devices, etc.), and the order of operations.
- Multiple trusted individuals present: Roles might include a Ceremony Administrator (leads the process), a Security Officer (responsible for HSM or key material handling), and an Auditor (observes and records). This prevents any one person from manipulating the process and increases trust.
- Secure environment: Conduct the ceremony in a secure location (e.g., a locked room) free of recording devices or unauthorized personnel. Ensure the Root CA machine is isolated (no network), and ideally that BIOS/USB access controls are in place to prevent any malware.
- Generate keys with proper controls: If using an HSM, initialize and generate the key with the required number of key custodians, each providing part of the activation material (e.g., smartcards or passphrases). Immediately back up the HSM partition or key to secure media (requiring the same custodians to restore).
- Sign the subordinate CA certificate: Once the root key is ready, sign the subordinate’s request as part of the ceremony. This may also be a witnessed step.
- Document every action: Write down each command run, each key generated, and the serial numbers of devices used, and have all participants sign an acknowledgment of the outcomes. Also record the fingerprints of the generated Root CA certificate and any subordinate certificate to ensure they are exactly as expected.
- Secure storage: After the ceremony, store the Root CA machine (if it’s a laptop or VM) and HSM tokens in a tamper-evident bag or safe. The goal is to make it evident if someone tries to access the root outside of an authorized ceremony.

While a full key ceremony might be overkill for a small lab, understanding these practices is important. Even in a lab, you can simulate some aspects for learning, such as documenting the procedure of taking the root online to sign the request and then locking it away. These practices greatly increase the trust in a production PKI by ensuring transparency and accountability for critical operations.

Backup and recovery plans: Both CAs’ data should be regularly backed up.

- For the Root CA: since it’s rarely online, back it up after any change. Typically, you back up the CA’s private key and certificate once (right after setup or any renewal) and store them securely offline, separate from the server itself. Also back up the CA database if it ever issues more than a handful of certificates (a root usually doesn’t).
- For the Issuing CA: schedule automated backups of the CA database and private key. You can use the built-in certutil -backup or Windows Server Backup (which is aware of the AD CS database). Keep backups secure and test restoration procedures.
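For the Issuing CA, the backup bullet above can be automated. A sketch using the built-in ADCSAdministration cmdlet, with certutil as the equivalent; the backup path is an assumption and should point at secured storage:

```powershell
# Back up the CA database and private key, protecting the key with a password.
$backupPwd = Read-Host -AsSecureString -Prompt "CA backup password"
Backup-CARoleService -Path "C:\CABackup" -Password $backupPwd

# certutil equivalent (see certutil -backup -? for password options).
certutil -backup C:\CABackup
```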
Having a documented recovery procedure for the CA is crucial for continuity. Also consider backing up templates and any scripts, and maintain spare hardware or VMs in case you need to restore a CA on new hardware (especially for the root, have a procedure to restore onto a new machine if the original is destroyed).

Security maintenance: Apply OS updates to the CAs carefully. For the offline root, patch it offline if possible (offline servicing, or connecting it briefly to a management network). For the issuing CA, treat it as a critical infrastructure server: limit its exposure (firewall it so only required services are reachable), monitor its event logs (enable auditing for Certificate Services events, which can log each issuance and revocation; a sketch appears at the end of this section), and employ anti-malware tools with caution (whitelisting the CA processes to avoid interference). Also, periodically review the CA’s configuration and certificate templates to ensure they meet current security standards (for example, deprecate any weak cryptography or adjust validity periods if needed).

By following these maintenance steps and best practices, your two-tier PKI will remain secure and trustworthy over time. Remember that PKI is not “set and forget” – it requires operational diligence, but the payoff is a robust trust infrastructure for your organization’s security.
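To enable the Certificate Services auditing mentioned above, two settings are involved: the Windows audit subcategory and the CA’s own audit filter. A sketch (the value 127 turns on all seven CA audit event classes; narrow it if that is too chatty):

```powershell
# Enable the Security log subcategory that carries AD CS audit events.
auditpol /set /subcategory:"Certification Services" /success:enable /failure:enable

# Turn on all CA audit event classes (bitmask 0x7F = 127) and restart the service.
certutil -setreg CA\AuditFilter 127
Restart-Service certsvc
```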
Additional AD CS Features and References

Active Directory Certificate Services provides more capabilities than covered in this basic lab. Depending on your needs, you might explore:

- Certificate Templates: We touched on templates; they are a powerful feature of Enterprise CAs for enforcing standardized certificate settings. Administrators can create custom templates for various use cases (SSL, S/MIME email, code signing) and control enrollment permissions. Understanding template versions and permissions is key for enterprise deployments. (Refer to Microsoft’s documentation on certificate template concepts in Windows Server for details on how templates work and can be customized.)
- Web Services for Enrollment: In scenarios with remote or non-domain clients, AD CS offers the Certificate Enrollment Web Service (CES) and Certificate Enrollment Policy Web Service (CEP) role services. These allow clients to fetch enrollment policy information and request certificates over HTTP or HTTPS, even when not connected directly to the domain. They work with certificate templates to enable a similar auto-enrollment experience over the web. See Microsoft’s guides on the Certificate Enrollment Web Service overview and Certificate Enrollment Policy Web Service overview for when to use these.
- Network Device Enrollment Service (NDES): This AD CS role service implements the Simple Certificate Enrollment Protocol (SCEP) so that devices like routers, switches, and mobile devices can obtain certificates from the CA without domain credentials. NDES acts as a proxy (Registration Authority) between devices and the CA, using one-time passwords for authentication. If you need to issue certificates to network equipment or MDM-managed mobile devices, NDES is the solution. Microsoft Docs provide a Network Device Enrollment Service (NDES) overview and details on using a policy module with NDES for advanced scenarios (such as customizing how requests are processed or integrating with custom policies).
- Online Responder (OCSP): As mentioned, an Online Responder can answer revocation status queries more efficiently than CRLs, which is especially useful if your CRLs grow large or you have high-volume certificate validation (VPNs, etc.). The Online Responder role service can be installed on a member server and configured with an OCSP Response Signing certificate from your Issuing CA.
- Monitoring and auditing: Windows Server can audit CA events. Enabling auditing logs events such as certificate issuance, revocation, and changes to the CA configuration. These logs are important in an enterprise PKI for tracking who did what (for compliance and security forensics). Tools like the PKI Health Tool (pkiview.msc) and PowerShell cmdlets (like Get-CertificationAuthority and Get-CertificationAuthorityCertificate) can help monitor the health and configuration of your CAs.

Conclusion

By following this guide, you have set up a secure two-tier PKI environment consisting of an offline Root CA and an online Issuing CA. This design is considered a security best practice for enterprise PKI deployments because it reduces the risk of the root key being compromised. With the offline Root CA acting as a hardened trust anchor and the enterprise Issuing CA handling day-to-day certificate issuance, your lab PKI can issue certificates for various purposes (HTTPS, code signing, user authentication, etc.) in a way that models real-world deployments.

As you expand this lab or move to production, remember that PKI security is as much about process as technology. Applying strict controls to protect CA keys, keeping software up to date, and monitoring your PKI’s health are all part of the journey. For further reading and official guidance, refer to these Microsoft documentation resources:

📖 AD CS PKI design considerations: “PKI design considerations using Active Directory Certificate Services in Windows Server” helps in planning a PKI deployment (number of CAs, hierarchy depth, naming, key lengths, validity periods, etc.). It is useful reading when adapting this lab design to a production environment, and it also covers configuring CDP/AIA and why offline roots usually don’t need delta CRLs.

📖 AD CS step-by-step guides: Microsoft’s “Test Lab Guide: Deploying an AD CS Two-Tier PKI Hierarchy” walks through a similar scenario.
Cybersecurity: What Every Business Leader Needs to Know Now

As a Senior Cybersecurity Solution Architect, I’ve had the privilege of supporting organisations across the United Kingdom, Europe, and the United States—spanning sectors from finance to healthcare—in strengthening their security posture. One thing has become abundantly clear: cybersecurity is no longer the sole domain of IT departments. It is a strategic imperative that demands attention at board level.

This guide distils five key lessons (and quotes) drawn from real-world engagements to help executive leaders navigate today’s evolving threat landscape. These insights are not merely technical—they are cultural, operational, and strategic. If you’re a C-level executive, this article is a call to action: reassess how your organisation approaches cybersecurity before the next breach forces the conversation.

1. Shift the Mindset

“This has always been our approach, and we’ve never experienced a breach—so why should we change it?”

A significant barrier to effective cybersecurity lies not in the sophistication of attackers, but in the predictability of human behaviour. If you’ve never experienced a breach, it’s tempting to maintain the status quo. However, as threats evolve, so too must your defences. Many cyber threats exploit well-known vulnerabilities that remain unpatched, or rely on individuals performing routine tasks in familiar ways. Human nature tends to favour comfort and habit—traits that adversaries are adept at exploiting. Unlike many organisations, attackers readily adopt new technologies to advance their objectives, including AI-powered ransomware to execute increasingly sophisticated attacks. It is therefore imperative to recognise—without delay—that the advent of AI has dramatically reduced both the effort and time required to compromise systems.

As the UK’s National Cyber Security Centre (NCSC) has stated: “AI lowers the barrier for novice cyber criminals, hackers-for-hire and hacktivists to carry out effective access and information gathering operations. This enhanced access will likely contribute to the global ransomware threat over the next two years.”

Similarly, McKinsey & Company observed: “As AI quickly advances cyber threats, organisations seem to be taking a more cautious approach, balancing the benefits and risks of the new technology while trying to keep pace with attackers’ increasing sophistication.”

To counter this evolving threat landscape, organisations must proactively leverage AI in their cyber defence strategies. Examples include:

- Identity and Access Management (IAM): AI enhances IAM by analysing real-time signals across systems to detect risky sign-ins and enforce adaptive access controls. Example: Microsoft Entra Agents for Conditional Access use AI to automate policy recommendations, streamlining access decisions with minimal manual input. (Figure 1: Microsoft Entra Agents)
- Threat Detection: AI accelerates detection, response, and recovery, helping organisations stay ahead of sophisticated threats. Example: Microsoft Defender for Cloud’s AI threat protection identifies prompt injection, data poisoning, and wallet attacks in real time.
- Incident Response: AI facilitates real-time decision-making, removing emotional bias and accelerating containment and recovery during security incidents. Example: Automatic Attack Disruption in Defender XDR, which can automatically contain a breach in progress.
- AI Security Posture Management: AI workloads require continuous discovery, classification, and protection across multi-cloud environments. Example: Microsoft Defender for Cloud’s AI Security Posture Management secures custom AI apps across Azure, AWS, and GCP by detecting misconfigurations, vulnerabilities, and compliance gaps.
- Data Security Posture Management (DSPM) for AI: AI interactions must be governed to ensure privacy, compliance, and insider-risk mitigation. Example: Microsoft Purview DSPM for AI enables prompt auditing, applies Data Loss Prevention (DLP) policies to third-party AI apps like ChatGPT, and supports eDiscovery and lifecycle management.
- AI Threat Protection: Organisations must address emerging AI threat vectors, including prompt injection, data leakage, and model exploitation. Example: Defender for AI (private preview) provides model-level security, including governance, anomaly detection, and lifecycle protection.

Embracing innovation, automation, and intelligent defence is the secret sauce for cyber resilience in 2026.

2. Avoid One-Off Purchases – Invest with a Strategy

“One MDE and one Sentinel to go, please.”

Organisations often approach me intending to purchase a specific cybersecurity product—such as Microsoft Defender for Endpoint (MDE)—without a clearly articulated strategic rationale. My immediate question is: what is the broader objective behind this purchase? Is it driven by perceived value or popularity, or does it form part of a well-considered strategy to enhance endpoint security?

Cybersecurity investments should be guided by a long-term, holistic strategy that spans multiple years and is periodically reassessed to reflect evolving threats. Strengthening endpoint protection must be integrated into a wider effort to improve the organisation’s overall security posture. This includes ensuring seamless integration between security solutions and avoiding operational silos. For example, deploying robust endpoint protection is of limited value if identities are not safeguarded with multi-factor authentication (MFA), or if storage accounts remain publicly accessible. A cohesive and forward-looking approach ensures that all components of the security architecture work in concert to mitigate risk effectively.

Security Adoption Journey (based on the Zero Trust framework):

1. Assess – Evaluate the threat landscape, attack surface, vulnerabilities, compliance obligations, and critical assets.
2. Align – Link security objectives to broader business goals to ensure strategic coherence.
3. Architect – Design integrated and scalable security solutions, addressing gaps and eliminating operational silos.
4. Activate – Implement tools with robust governance and automation to ensure consistent policy enforcement.
5. Advance – Continuously monitor, test, and refine the security posture to stay ahead of evolving threats.

Security tools are not fast food—they work best as part of a long-term plan, not a one-off order. A piecemeal approach runs counter to the modern Zero Trust security model, which assumes no single tool will prevent every breach and instead implements layered defences and integration.
3. Legacy Systems Are Holding You Back

“Unfortunately, we are unable to implement phishing-resistant MFA, as our legacy app does not support integration with the required protocols.”

A common challenge faced by many organisations I have worked with is the constraint on innovation within their cybersecurity architecture, primarily due to continued reliance on legacy applications—often driven by budgetary or operational necessity. These outdated systems frequently lack compatibility with modern security technologies and may introduce significant vulnerabilities. A notable example is the deployment of phishing-resistant multi-factor authentication (MFA)—such as FIDO2 security keys or certificate-based authentication—which requires modern identity protocols and Conditional Access policies; in the Microsoft ecosystem, these capabilities are delivered through Microsoft Entra ID (a sketch of such a policy follows at the end of this section).

To address this issue effectively, it is essential to design security frameworks based on the organisation’s future aspirations rather than its current limitations. By adopting a forward-thinking approach, organisations can remain receptive to emerging technologies that align with their strategic cybersecurity objectives. Moreover, this perspective encourages investment in acquiring the necessary talent, thereby reducing reliance on extensive change management and staff retraining. I advise designing for where you want to be in the next 1–3 years—ideally cloud-first and identity-driven—essentially adopting a Zero Trust architecture, rather than being constrained by the limitations of legacy systems.
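To make the target state concrete, here is a sketch using Microsoft Graph PowerShell to require the built-in phishing-resistant MFA authentication strength for administrators. The display name, report-only state, and role scoping are assumptions to adapt, and any such policy should be piloted before enforcement:

```powershell
# Requires the Microsoft.Graph module; connect with rights to manage CA policies.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "Require phishing-resistant MFA for admins (sketch)"
    state       = "enabledForReportingButNotEnforced"   # report-only while piloting
    conditions  = @{
        # Scope to the Global Administrator role template as an example.
        users        = @{ includeRoles = @("62e90394-69f5-4237-9190-012177145e10") }
        applications = @{ includeApplications = @("All") }
    }
    grantControls = @{
        operator = "AND"
        # Built-in "Phishing-resistant MFA" authentication strength policy ID.
        authenticationStrength = @{ id = "00000000-0000-0000-0000-000000000004" }
    }
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```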
4. Collaboration Is a Security Imperative

“This item will need to be added to the dev team’s backlog. Given their current workload, they will do their best to implement GitHub Security in Q3, subject to capacity.”

Cybersecurity threats may originate from various parts of an organisation, and one of the principal challenges many face is the fragmented nature of their defence strategies. To effectively mitigate such risks, cybersecurity must be embedded across all departments and functions, rather than being confined to a single team or role. In many organisations, the Chief Information Security Officer (CISO) operates in isolation from other C-level executives, which can limit their influence and complicate the implementation of security measures across the enterprise. Furthermore, some teams may lack the requisite expertise to execute essential security practices. For instance, an R&D lead responsible for managing developers may not possess the necessary skills in DevSecOps. To address these challenges, it is vital to ensure that the CISO is empowered to act without political or organisational barriers and is supported in implementing security measures across all business units. When the CISO has backing from the COO and HR, initiatives such as an MFA rollout happen faster and more thoroughly.

Cross-functional security responsibilities:

- R&D: Adopt DevSecOps practices; identify vulnerabilities early; manage code dependencies; detect exposed secrets; embed security in CI/CD pipelines.
- CIO: Ensure visibility over organisational data; implement Data Loss Prevention (DLP); safeguard the sensitive-data lifecycle; ensure regulatory compliance.
- CTO: Secure cloud environments (CSPM); manage SaaS security posture (SSPM); ensure hardware and endpoint protection.
- COO: Protect digital assets; secure domain management; mitigate impersonation threats; safeguard digital marketing channels and customer PII.
- Support & Vendors: Deliver targeted training; prevent social-engineering attacks; improve awareness of threat vectors.
- HR: Train employees on AI-related threats; manage insider risks; secure employee data; oversee cybersecurity across the employee lifecycle.

Empowering the CISO to act across departments helps organisations shift towards a security-first culture—embedding cybersecurity into every function, not just IT.

5. Compliance Is Not Security

“We’re compliant, so we must be secure.”

Many organisations mistakenly equate passing audits—such as ISO 27001 or SOC 2—with being secure. While compliance frameworks help establish a baseline for security, they are not a guarantee of protection. Determined attackers are not deterred by audit checklists; they exploit gaps, misconfigurations, and human error regardless of whether an organisation is certified. Moreover, due to the rapidly evolving nature of the cyber threat landscape, compliance frameworks often struggle to keep pace. By the time a standard is updated, attackers may already be exploiting new techniques that fall outside its scope. This lag creates a false sense of security for organisations that rely solely on regulatory checkboxes.

Security is a continuous risk-management process—not a one-time certification. It must be embedded into every layer of the enterprise and treated with the same urgency as other core business priorities. Compliance may be the starting line, but it is not the finish line. Effective security goes beyond meeting regulatory requirements—it demands ongoing vigilance, adaptability, and a proactive mindset.

Conclusion: Cybersecurity Is a Continuous Discipline

Cybersecurity is not a destination—it is a continuous journey. By embracing strategic thinking, cross-functional collaboration, and emerging technologies, organisations can build resilience against today’s threats and tomorrow’s unknowns. The lessons shared throughout this article are not merely technical—they are cultural, operational, and strategic. If there is one key takeaway, it is this: avoid piecemeal fixes and instead adopt an integrated, future-ready security strategy. Because compliance frameworks alone cannot keep pace with the rapidly evolving threat landscape, security must be treated as a dynamic, ongoing process—one that is embedded into every layer of the enterprise and reviewed regularly. Organisations should conduct periodic security posture reviews, leveraging tools such as Microsoft Secure Score or monthly risk reports, and stay informed about emerging threats through threat intelligence feeds and resources like the Microsoft Digital Defence Report, CISA (Cybersecurity and Infrastructure Security Agency), NCSC (UK National Cyber Security Centre), and other open-source intelligence platforms.
As Ann Johnson aptly stated in her blog: “The most prepared organisations are those that keep asking the right questions and refining their approach together.”

Cyber resilience demands ongoing investment—in people (through training and simulation drills), in processes (via playbooks and frameworks), and in technology (through updates and adoption of AI-driven defences). To reduce cybersecurity risk over time, resilient organisations must continually refine their approach and treat cybersecurity as an ongoing discipline. The time to act is now.

Resources:

- https://www.ncsc.gov.uk/report/impact-of-ai-on-cyber-threat
- Defend against cyber threats with AI solutions from Microsoft - Microsoft Industry Blogs
- Generative AI Cybersecurity Solutions | Microsoft Security
- Require phishing-resistant multifactor authentication for Microsoft Entra administrator roles - Microsoft Entra ID | Microsoft Learn
- AI is the greatest threat—and defense—in cybersecurity today. Here’s why.
- Microsoft Entra Agents - Microsoft Entra | Microsoft Learn
- Smarter identity security starts with AI
- https://www.microsoft.com/en-us/security/blog/2025/06/12/cyber-resilience-begins-before-the-crisis/
- https://www.microsoft.com/en-us/security/security-insider/threat-landscape/microsoft-digital-defense-report-2023-critical-cybersecurity-challenges

Navigating Mergers and Acquisitions: IT Consolidation Best Practices and Approach
In today’s business environment, mergers, divestitures, and acquisitions (M&A) are becoming increasingly common. With these business transformations, however, come the challenges of consolidating multiple IT services and applications. Effective consolidation streamlines operations, enhances accessibility, provides a single pane of glass for management, security, and scalability, and, most importantly, reduces costs. At Microsoft, my team specializes in helping customers tackle these challenges by delivering top-notch solutions. In this article, I’ll walk you through some key activities to consider when faced with IT consolidation during M&A.

Step 1: Define the Scope of Consolidation

The first critical step in any consolidation project is identifying the scope. Where is the consolidation happening, and what are the source and target environments? Once these are defined, the next step is to determine how to consolidate the apps, identities, and infrastructure across the two entities.

Step 2: Data is Key

When it comes to IT, everything revolves around data. In a consolidation scenario, the most valuable data typically comes from Configuration Items (CIs), customer asset management systems, and server management teams. If the organization maintains robust documentation of these CIs, it can greatly simplify the process. But what if documentation is sparse? In such cases, we rely on interviews with various teams, including application, server, and infrastructure teams, as well as vendors who support the current infrastructure and applications. These interviews help us gather the essential information needed for a smooth consolidation and migration.

Step 3: Leverage Tools for Data Collection

Even with comprehensive interviews, there may still be gaps in the available information. This is where tools like the MAP Toolkit, connection loggers, and network traces come into play. The list of tools could be longer; I mention a few here just for reference. These tools help identify the applications running within the environment and provide details about them. Our team at Microsoft is well-versed in using these tools and can assist customers in creating an accurate inventory of applications hosted in their environment, which ultimately helps define the application scope for migrations.

Step 4: Prioritize and Plan Migration

Once we have a comprehensive list of the applications, servers, and identities involved, the next step is to prioritize and plan the migration. A thorough discussion with stakeholders will determine which elements should be migrated from one domain to another. For Active Directory (AD) user, group, and computer migrations, we leverage the expertise of our Microsoft IMS (Identity Migration Service) team. For server and application migrations, the approach will depend on the methodologies we plan to follow.

Approach to consider for user/group and computer migration (see the sketch after this list):

1. Prepare source and target domain readiness.
2. Define the migration approach and tool selection.
3. Migrate the users, groups, and computers to the target.
4. Test the resources that have been migrated and confirm that users can log in to the target domain.
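As referenced above, the domain-readiness item can be partially scripted. A minimal sketch using the RSAT ActiveDirectory module; it only surfaces the trust picture and secure-channel health, and the remaining readiness work (DNS, sites, permissions) still needs human review:

```powershell
# Requires RSAT / the ActiveDirectory PowerShell module.
Import-Module ActiveDirectory

# Enumerate trusts visible from this domain and confirm direction and type.
Get-ADTrust -Filter * |
    Select-Object Name, Direction, TrustType, ForestTransitive

# Validate this machine's secure channel to its own domain.
Test-ComputerSecureChannel -Verbose
```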
Step 5: Server / Application Migration Scenarios

In some cases, servers may already be tied or tagged to a specific domain. However, if users are available in the new domain and the required groups and other settings are configured (using the Identity Migration Service team to migrate resources to the target domain), we can switch the server’s domain and reapply the necessary user- and group-specific permissions (a sketch of this domain switch appears at the end of this article). This is the easiest option when the application hosted on the server is simple and does not require any code changes or reprogramming. In more complex scenarios, switching the domain isn’t possible. In these cases, we may choose to clone the server and create a mirrored environment in the target Active Directory domain, or we may need to set up a parallel environment for the applications. From there, we can reconfigure the applications, endpoints, and any other settings necessary for the app to function properly.

Some common approaches to consider:

1. Assess the current environment.
2. Set migration goals.
3. Create a migration plan.
4. Test the migration plan in a lower environment.
5. Migrate servers and databases (if applicable).
6. Obtain sign-off from the respective teams.

Step 6: A Good Rollback Plan

Always create a solid rollback plan for any migration activity. This is a key requirement for a successful migration, and it helps you prepare in case something goes wrong. A good rollback plan acts as a lifesaver, minimizing the impact and helping you recover quickly in case of failure.

Step 7: Post-Migration Cleanup

Cleanup is an essential step in any migration or consolidation activity. You will need to clean up any resources left behind in the source domain. This includes the removal of Active Directory resources such as users, groups, Group Policies, DNS entries, and trust relationships between the domains.

Proven Success in IT Consolidation

We have successfully guided numerous customers through consolidation projects, helping them achieve their goals efficiently and securely. If you’re currently facing an M&A scenario and need assistance with your IT consolidation efforts, don’t hesitate to reach out to Microsoft. Our team is ready to help.

Security First: Protecting Your Environment

At Microsoft, security is always a top priority. As we help you consolidate and manage your IT infrastructure, we ensure that security remains at the forefront. We not only help with the technical aspects of consolidation but also with safeguarding your environment throughout the process.
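Returning to the simple scenario in Step 5, once the identities and groups exist in the target, the domain switch itself is a short scripted step per server. A sketch with hypothetical domain names; test it in a lower environment first, and expect to reapply local permissions afterwards:

```powershell
# Credentials for leaving the source domain and joining the target domain.
$sourceCred = Get-Credential -Message "Source domain admin account"
$targetCred = Get-Credential -Message "Target domain admin account"

# Move the server to the target domain and restart to complete the join.
Add-Computer -DomainName "target.contoso.com" `
    -Credential $targetCred `
    -UnjoinDomainCredential $sourceCred `
    -Restart
```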
Microsoft Security in Action: Deploying and Maximizing Advanced Identity Protection

As cyber threats grow in sophistication, identity remains the first line of defense. With credentials being a primary target for attackers, organizations must implement advanced identity protection to prevent unauthorized access, reduce the risk of breaches, and maintain regulatory compliance. This blog outlines a phased deployment approach to implement Microsoft’s identity solutions, helping ensure a strong Zero Trust foundation by enhancing security without compromising user experience.

Phase 1: Deploy advanced identity protection

Step 1: Build your hybrid identity foundation with synchronized identity

Establishing a synchronized identity is foundational for seamless user experiences across on-premises and cloud environments. Microsoft Entra Connect synchronizes Active Directory identities with Microsoft Entra ID, enabling unified governance while letting users securely access resources across hybrid environments. To deploy, install Microsoft Entra Connect, configure synchronization settings to sync only the necessary accounts, and monitor health through the built-in tools to detect and resolve sync issues. A well-implemented hybrid identity enables consistent authentication, centralized management, and a frictionless user experience across all environments.

Step 2: Enforce strong authentication with MFA and Conditional Access

Multi-Factor Authentication (MFA) is the foundation of identity security. By requiring an additional verification step, MFA significantly reduces the risk of account compromise, even if credentials are stolen. Start by enforcing MFA for all users, prioritizing high-risk accounts such as administrators, finance teams, and executives. Microsoft recommends deploying passwordless authentication methods, such as Windows Hello, FIDO2 security keys, and Microsoft Authenticator, to further reduce phishing risks. Next, to balance security with usability, use Conditional Access policies to apply adaptive authentication requirements based on conditions such as user behavior, device health, and risk levels. For example, block sign-ins from non-compliant or unmanaged devices while allowing access from corporate-managed endpoints.

Step 3: Automate threat detection with Identity Protection

Implementing AI-driven risk detection is crucial to identifying compromised accounts before attackers can exploit them. Start by enabling Identity Protection to analyze user behavior and detect anomalies such as impossible-travel logins, leaked credentials, and atypical access patterns. To reduce security risk, evolve your Conditional Access policies with risk signals that trigger automatic remediation actions: require additional authentication (such as MFA) for low-risk sign-ins, and block high-risk sign-ins entirely (a sketch follows this step). By integrating Identity Protection with Conditional Access, security teams can enforce real-time access decisions based on risk intelligence, strengthening identity security across the enterprise.
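To connect Step 3’s risk signals to enforcement, a risk-based Conditional Access policy can be created with Microsoft Graph PowerShell. A sketch only; blocking on high sign-in risk and starting in report-only mode are assumptions to tune for your tenant:

```powershell
# Requires the Microsoft.Graph module; connect with rights to manage CA policies.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "Block high-risk sign-ins (sketch)"
    state       = "enabledForReportingButNotEnforced"   # report-only while piloting
    conditions  = @{
        users            = @{ includeUsers = @("All") }
        applications     = @{ includeApplications = @("All") }
        signInRiskLevels = @("high")    # Identity Protection risk signal
    }
    grantControls = @{
        operator        = "OR"
        builtInControls = @("block")
    }
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```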
Step 4: Secure privileged accounts with Privileged Identity Management (PIM)

Privileged accounts are prime targets for attackers, making Privileged Identity Management (PIM) essential for securing administrative access. PIM allows organizations to apply the principle of least privilege by granting Just-in-Time (JIT) access, meaning users only receive elevated permissions when needed, and only for a limited time. Start by identifying all privileged roles and moving them under PIM-managed access policies. Configure approval workflows for high-risk roles like Global Admin or Security Admin, requiring justification and multi-factor authentication before privilege escalation. To maintain control, enable privileged access auditing, which logs all administrative activities and generates alerts for unusual role assignments or excessive privilege usage. Regular access reviews further ensure that only authorized users retain elevated permissions.

Step 5: Implement self-service and identity governance tools

Start by deploying Self-Service Password Reset (SSPR), which enables users to recover their accounts securely without help desk intervention. Integrate SSPR with MFA so that only authorized users regain access. Next, implement automated access reviews for all users, not just privileged accounts, to periodically validate role assignments and remove unnecessary permissions. This helps mitigate privilege creep, where users accumulate excessive permissions over time.

Phase 2: Optimize identity security and automate response

With core identity protection mechanisms deployed, the next step is to enhance security operations with automation, continuous monitoring, and policy refinement.

Step 1: Enhance visibility with centralized monitoring

Start by integrating Microsoft Entra logs with Microsoft Sentinel to gain real-time visibility into identity-based threats. By analyzing failed login attempts, suspicious sign-ins, and privilege escalations, security teams can detect and mitigate identity-based attacks before they escalate.

Step 2: Apply advanced Conditional Access scenarios

To further tighten access control, implement session-based Conditional Access policies. For example, allow read-only access to SharePoint Online from unmanaged devices and block data downloads entirely. By refining policies based on user roles, locations, and device health, organizations can strengthen security while ensuring seamless collaboration.

Phase 3: Enable secure collaboration across teams

Identity security is not just about protection; it also enables secure collaboration across employees, partners, and customers.

Step 1: Secure external collaboration

Collaboration with partners, vendors, and contractors requires secure, managed access without the complexity of managing external accounts. Microsoft Entra External Identities allows organizations to provide seamless authentication for external users while enforcing security policies like MFA and Conditional Access. By enabling lifecycle management policies, organizations can automate external user access reviews and expirations, ensuring least-privilege access at all times.

Step 2: Automate identity governance with entitlement management

To streamline access requests and approvals, Microsoft Entra Entitlement Management lets organizations create pre-configured access packages for both internal and external users. External guests can request access to pre-approved tools and resources without IT intervention. Automated access reviews and expiration policies ensure users retain access only as long as needed. This reduces administrative overhead while enhancing security and compliance.

Strengthening identity security for the future

Deploying advanced identity protection in a structured, phased approach allows organizations to proactively defend against identity-based threats while maintaining secure, seamless access. Ready to take the next step?
Explore these Microsoft identity security deployment resources:

- Microsoft Entra Identity Protection Documentation
- Conditional Access Deployment Guide
- Privileged Identity Management Configuration Guide

The Microsoft Security in Action blog series is an evolving collection of posts that explores practical deployment strategies, real-world implementations, and best practices to help organizations secure their digital estate with Microsoft Security solutions. Stay tuned for our next blog on deploying and maximizing your investments in Microsoft Threat Protection solutions.
New Blog | Streamlining AI Compliance: Introducing the Premium Template for Indonesia's PDP Law

By Manny Sahota

Accelerating Compliance in the AI Era: Introducing the Premium Assessment Template for Indonesia’s PDP Law in Purview Compliance Manager

In an increasingly complex regulatory landscape, businesses are under growing pressure to comply with both local and global data privacy laws while simultaneously building trust with their customers. As AI and digital technologies continue to transform industries, aligning solutions with regulatory requirements has never been more critical. To help organizations navigate these challenges, we’re excited to introduce the Premium Assessment Template for Indonesia’s Personal Data Protection (PDP) Law in Microsoft Purview Compliance Manager.

Read the full post here: Streamlining AI Compliance: Introducing the Premium Template for Indonesia's PDP Law in Purview
Request help to add a new company outside of network to MIM

The company I work for recently acquired another company. We would like to use MIM to connect to their HR source and export the results to their company's Active Directory. We do have a two-way trust with their Active Directory. Any procedures or thoughts on how to go about this?