Microsoft Ignite 2025: Top Security Innovations You Need to Know
Security & AI: The Big Story This Year

2025 marks a turning point for cybersecurity. Rapid adoption of AI across enterprises has unlocked innovation but introduced new risks. AI agents are now part of everyday workflows, automating tasks and interacting with sensitive data, creating new attack surfaces that traditional security models cannot fully address. Threat actors are leveraging AI to accelerate attacks, making speed and automation critical for defense. Organizations need solutions that deliver visibility, governance, and proactive risk management for both human and machine identities. Microsoft Ignite 2025 reflects this shift with announcements focused on securing AI at scale, extending Zero Trust principles to AI agents, and embedding intelligent automation into security operations.

As a Senior Cybersecurity Solution Architect, I've curated the top security announcements from Microsoft Ignite 2025 to help you stay ahead of evolving threats and understand the latest innovations in enterprise security.

Agent 365: Control Plane for AI Agents

Agent 365 is a centralized platform that gives organizations full visibility, governance, and risk management over AI agents across Microsoft and third-party ecosystems.

Why it matters: Unmanaged AI agents can introduce compliance gaps and security risks. Agent 365 ensures full lifecycle control.

Key Features:
- Complete agent registry and discovery
- Access control and conditional policies
- Visualization of agent interactions and risk posture
- Built-in integration with Defender, Entra, and Purview
- Available via the Frontier Program

Learn more: Microsoft Agent 365: The control plane for AI agents | Deep dive blog on Agent 365

Entra Agent ID: Zero Trust for AI Identities

Microsoft Entra is the identity and access management suite (covering Azure AD, permissions, and secure access). Entra Agent ID extends Zero Trust identity principles to AI agents, ensuring they are governed like human identities.

Why it matters: Unmanaged or over-privileged AI agents can create major security gaps. Agent ID enforces identity governance on AI agents and reduces automation risks.

Key Features:
- Unique identities for AI agents
- Lifecycle governance and sponsorship for agents
- Conditional Access policies applied to agent activity
- Integration with open SDKs/APIs for third-party platforms

Learn more: Microsoft Entra Agent ID Overview | Entra Ignite 2025 announcements | Public Preview details

Security Copilot Expansion

Security Copilot is Microsoft's AI assistant for security teams, now expanded to automate threat hunting, phishing triage, identity risk remediation, and compliance tasks.

Why it matters: Security teams face alert fatigue and resource constraints. Copilot accelerates response and reduces manual effort.

Key Features:
- 12 new Microsoft-built agents across Defender, Entra, Intune, and Purview
- 30+ partner-built agents available in the Microsoft Security Store
- Automation of threat hunting, phishing triage, identity risk remediation, and compliance tasks
- Included for Microsoft 365 E5 customers at no extra cost

Learn more: Security Copilot inclusion in Microsoft 365 E5 | Security Copilot Ignite blog

Security Dashboard for AI

A unified dashboard for CISOs and risk leaders to monitor AI risks, aggregate signals from Microsoft security services, and assign tasks via Security Copilot - included at no extra cost.

Why it matters: Provides a single pane of glass for AI risk management, improving visibility and decision-making.
Key Features:
- Aggregates signals from Entra, Defender, and Purview
- Supports natural language queries for risk insights
- Enables task assignment via Security Copilot

Learn more: Ignite Session: Securing AI at Scale | Microsoft Security Blog

Microsoft Defender Innovations

Microsoft Defender serves as Microsoft's CNAPP solution, offering comprehensive, AI-driven threat protection that spans endpoints, email, cloud workloads, and SIEM/SOAR integrations.

Why it matters: Modern attacks target multi-cloud environments and software supply chains. These innovations provide proactive defense, reduce breach risks before exploitation, and extend protection beyond the Microsoft ecosystem, helping organizations secure endpoints, identities, and workloads at scale.

Key Features:
- Predictive Shielding: proactively hardens attack paths before adversaries pivot.
- Automatic Attack Disruption: extended to AWS, Okta, and Proofpoint via Sentinel.
- Supply Chain Security: Defender for Cloud now integrates with GitHub Advanced Security.

Learn more: What's new in Microsoft Defender at Ignite | Defender for Cloud innovations

Global Secure Access & AI Gateway

Part of Microsoft Entra's secure access portfolio, providing secure connectivity and inspection for web and AI traffic.

Why it matters: Protects against lateral movement and AI-specific threats while maintaining secure connectivity.

Key Features:
- TLS inspection, URL/file filtering
- AI prompt injection protection
- Private access for domain controllers to prevent lateral movement attacks

Learn more: Secure Web and AI Gateway for agents | Microsoft Entra: What's new in secure access on the AI frontier

Purview Enhancements

Microsoft Purview is the data governance and compliance platform, ensuring sensitive data is classified, protected, and monitored.

Why it matters: Ensures sensitive data remains protected and compliant in AI-driven environments.

Key Features:
- AI Observability: monitor agent activities and prevent sensitive data leakage.
- Compliance Guardrails: communication compliance for AI interactions.
- Expanded DSPM: Data Security Posture Management for AI workloads.

Learn more: Announcing new Microsoft Purview capabilities to protect GenAI agents

Intune Updates

Microsoft Intune is a cloud-based endpoint management solution that secures apps, devices, and data across platforms. It simplifies endpoint security management and accelerates response to device risks using AI.

Why it matters: Endpoint security is critical as organizations manage diverse devices in hybrid environments. These updates reduce complexity, speed up remediation, and leverage AI-driven automation, helping security teams stay ahead of evolving threats.

Key Features:
- Security Copilot agents automate policy reviews, device offboarding, and risk-based remediation.
- Enhanced remote management for the Windows Recovery Environment (WinRE).
- The Policy Configuration Agent in Intune lets IT admins create and validate policies with natural language.

Learn more: What's new in Microsoft Intune at Ignite | Your guide to Intune at Ignite

Closing Thoughts

Microsoft Ignite 2025 signals the start of an AI-driven security era. From visibility and governance for AI agents to Zero Trust for machine identities, automation in security operations, and stronger compliance for AI workloads, these innovations empower organizations to anticipate threats, simplify governance, and accelerate secure AI adoption without compromising compliance or control.

Full Coverage: Microsoft Ignite 2025 Book of News


Transforming Security Analysis into a Repeatable, Auditable, and Agentic Workflow
Author(s): Animesh Jain, Vinay Yadav

This work is shaped by investigations into the strategic question of what it takes for Windows to achieve world-leading security, and by the practical engineering needed to explore agentic workflows, and their interfaces, at scale. Our work in Windows Servicing & Delivery (WSD) is guided by two prompts from leadership: "What does it take for Windows to achieve world-leading security?" and "How do we responsibly integrate AI into systems as large and high-churn as Windows?". Reasoning models open new possibilities on both fronts. As we continue experimenting, one issue repeatedly surfaces as the bottleneck for scalable security assurance: variant vulnerabilities. They are subtle, recurring, and easy to miss, making them an ideal proving ground for the enterprise-grade workflow we present here.

Security Analysis at Windows Scale

Security analysis shouldn't be an afterthought; it should be a continuous, auditable, and intelligence-driven process built directly into the engineering workflow. This work introduces an agentic security analysis pipeline that uses reasoning models and tool-based agents to detect variant vulnerabilities across large, fast-changing codebases. By combining automation with explainability, it transforms security validation from a manual, point-in-time task into a repeatable and trustworthy part of every build.

Why are variants the hard part?

Security flaws rarely occur in isolation. Once a vulnerability is fixed, its logical or structural pattern often reappears elsewhere in the codebase, hidden behind different variables, layers, or call paths. These recurring patterns are variants: the quiet echoes of known issues that can persist across millions of lines of code. Finding them manually is slow, repetitive, and incomplete. As engineering velocity increases, so does the likelihood of variant drift, the same vulnerability class re-emerging in a slightly altered form. Each missed variant carries a downstream cost: regression, re-servicing, or, in the worst cases, re-exploitation.

Modern systems like Windows are too large, too interconnected, and ship too frequently for manual vulnerability discovery to keep pace. Traditional static analyzers and deterministic class-based scanners either struggle to generalize these patterns or create too much noise, while targeted fuzzing campaigns often fail to trigger the nuanced runtime conditions that expose them. To stay ahead, automation must evolve. We need systems that reason, not just scan: systems capable of understanding relationships between code regions and applying logical analogies instead of brute-force enumeration.

Reasoning Models: A Turning Point in Security Research

Recent advances in AI reasoning have demonstrated that large language models can uncover vulnerabilities previously missed by deterministic tools. For example, Google's Big Sleep agent surfaced an exploitable SQLite flaw (CVE-2025-6965) that bypassed traditional fuzzers due to configuration-sensitive logic. Similarly, an o-series reasoning model helped identify a critical Linux SMB logoff use-after-free (CVE-2025-37899), proving that reasoning-driven automation can detect complex, context-dependent flaws in mature kernel code. These breakthroughs show what's possible when systems can form, test, and refine hypotheses about software behavior. The challenge now is scaling that intelligence into repeatable, auditable, enterprise-grade workflows, where every result is traceable, reviewable, and integrated into the developer's daily workflow.
A Framework for Agentic Security Analysis

To address this challenge, we've developed an agentic security analysis framework that applies reasoning models within a structured, enterprise-grade workflow pattern. It combines large language model agents, specialized analysis tools, and structured artifact generation to make vulnerability discovery continuous, explainable, and auditable. It is interfaced as a first-class Azure DevOps (ADO) pipeline and can be integrated natively into enterprise CI/CD processes. For security analysis, it continuously reasons over large, evolving codebases to identify and validate variant vulnerabilities earlier in the release cycle. Together, these components form a repeatable workflow that helps surface variant patterns with greater consistency and clarity.

Core Technical Pillars

Scale - Autonomous Code Reasoning: Long-context models extend analysis across massive, evolving codebases. They infer analogies, relationships, and behavioral patterns between code regions, enabling scalable reasoning that adapts as systems grow.

Tool-Agent Collaboration: Specialized agents coordinate to perform semantic search, graph traversal, and both static and dynamic interpretation. This distributed reasoning approach ensures resilience and precision across diverse enterprise environments.

Structured Artifact Generation: Every step produces versioned, auditable artifacts that document the reasoning process. These artifacts help provide reproducibility, compliance, and transparency, which are critical for enterprise governance and regulated industries.

Together, these pillars enable scalable, explainable, and repeatable vulnerability discovery across large software ecosystems such as Windows. Every stage, from reasoning to validation, is logged and traceable, designed to make each discovery reproducible and reviewable.

Inside the Framework

Agent-Led, Human-Reviewed: The system is agent-led from start to finish and human-reviewed only at decision boundaries. Agents form hypotheses from recent fixes or vulnerability classes, test them against context, perform validation passes, and generate evidence-backed reports for reviewer confirmation, carrying out their tasks based on templatized prompts. The workflow mirrors how seasoned security engineers operate, only faster and continuously.

Tool Specialists as Agents: Each analytical tool functions as a domain-specific agent, performing semantic search, file inspection, or function-graph traversal. These agents collaborate through structured orchestration, maintaining specialization without sacrificing coherence.

Agentic Patterns and Orchestration: The framework employs reusable reasoning patterns - reflective reasoning, actor-validator loops, and parallel tool dialogues - for accuracy and scale. A central conductor agent governs task coordination, context flow, and artifact persistence across runs.

Auditability Through Artifacts: Every investigation yields a transparent chain of artifacts:
- Analysis Notes summarize candidate issues.
- Critique Notes document reasoning and counter-evidence.
- Synthesis Reports provide developer-ready summaries, diffs, call graphs, and exploitability insights.
- Agentic Conversation Logs let developers backtrack through the reasoning and gather more context.
This structure makes each discovery fully traceable and auditable.

CI/CD-Native Integration: The interface operates as a first-class Azure DevOps pipeline, attachable to pull requests, nightly builds, or release triggers. Each run publishes versioned artifacts and validation notes directly into the developer workflow, making reasoning-driven security a seamless part of software delivery.
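To make the CI/CD attachment point concrete, the snippet below shows one generic way a pipeline of this kind could be queued from automation using the Azure DevOps REST API from PowerShell. It is purely illustrative: the organization, project, pipeline ID, and the use of a personal access token are assumptions for the sketch, not details of the framework described here.

# Illustrative only: queue an Azure DevOps pipeline run via the REST API.
# Organization, project, pipeline ID and PAT are placeholders, not framework specifics.
$org        = "contoso"
$project    = "windows-servicing"
$pipelineId = 42
$pat        = $env:AZDO_PAT   # personal access token with Build (Read & execute) scope

$headers = @{
    Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
}

$uri  = "https://dev.azure.com/$org/$project/_apis/pipelines/$pipelineId/runs?api-version=7.1"
$body = @{ resources = @{ repositories = @{ self = @{ refName = "refs/heads/main" } } } } | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post -Uri $uri -Headers $headers -Body $body -ContentType "application/json"

The same call can be wired to a pull request or nightly trigger instead of being invoked on demand; the point is simply that the analysis runs through ordinary pipeline plumbing and publishes its artifacts back into the developer workflow.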
What It Can Do Today

- Seeded Variant Hunts: start from a recent fix or known pattern to enumerate analogous cases, analyze helper functions, and test reachability.
- Evidence-First Reporting: every finding includes reproducible evidence - code snippets, diffs, and caller graphs - delivered within the PR or work item.
- Scalable Coverage: runs across servicing branches, producing consistent and auditable validation artifacts.
- Improved Precision: a reasoning-based validation pass has significantly reduced false positives in internal testing.

Case Study: CVE-2025-55325

During a sweep of "*_DEFAULTS" deserializers, the agentic pipeline independently identified GetPoolDefaults trusting a user-controlled size field and copying that many bytes from a caller buffer. The missing runtime bounds check, guarded only by an assertion in debug builds, enabled a potential read access violation and information disclosure. The mitigation mirrored a hardened sibling helper: enforcing runtime bounds on Size versus BytesAvailable/Version before allocation and copy. The finding was later validated by the servicing teams, confirming it matched an issue already under active investigation, illustrating how the automated reasoning process can independently surface real-world vulnerabilities that align with expert analysis.

Beyond Variant Analysis

The underlying architecture of this framework extends naturally beyond variant detection:
- Net-new vulnerability discovery through cross-binary pattern matching
- Model-assisted fuzzing and static analysis orchestrated through CI/CD integration
- Regression detection via historical code comparisons
- Security Development Lifecycle (SDL) enforcement and reproducibility checks
These capabilities open the door to applying reasoning-driven workflows across a broader range of security and validation tasks.

The Road Ahead

Looking ahead, this trajectory naturally leads toward autonomous cybersecurity pipelines powered by reasoning agents that apply reflective analysis, validation loops, and structured tool interactions to complex codebases. By structuring each step as an auditable artifact, the approach supports security and validation analysis that is both explainable and repeatable. These agents could help validate security posture, analyze historical and real-time signals, and detect anomalous patterns early in the lifecycle.
References

- Google Cloud Blog - Big Sleep and AI-Assisted Vulnerability Discovery, "A summer of security: empowering cyber defenders with AI." https://blog.google/technology/safety-security/cybersecurity-updates-summer-2025
- The Hacker News - Google AI "Big Sleep" Stops Exploitation of Critical SQLite Flaw. https://thehackernews.com/2025/07/google-ai-big-sleep-stops-exploitation.html
- NIST National Vulnerability Database - CVE-2025-6965 (SQLite). https://nvd.nist.gov/vuln/detail/CVE-2025-6965
- Sean Heelan - "Reasoning Models and the ksmbd Use-After-Free." https://simonwillison.net/2025/May/24/sean-heelan
- The Cyber Express - AI Finds CVE-2025-37899 Zero-Day in Linux SMB Kernel. https://thecyberexpress.com/cve-2025-37899-zero-day-in-linux-smb-kernel
- NIST National Vulnerability Database - CVE-2025-37899 (Linux SMB Use-After-Free). https://nvd.nist.gov/vuln/detail/CVE-2025-37899
- NIST National Vulnerability Database - CVE-2025-55325 (Windows Storage Management Provider Buffer Over-read). https://nvd.nist.gov/vuln/detail/CVE-2025-55325
- Microsoft Security Response Center - Vulnerability Details for CVE-2025-55325. https://msrc.microsoft.com/update-guide/vulnerability/CVE-2025-55325


Microsoft Security Store: Now Generally Available
When we launched the Microsoft Security Store in public preview on September 30, our goal was simple: make it easier for organizations to discover, purchase, and deploy trusted security solutions and AI agents that integrate seamlessly with Microsoft Security products. Today, Microsoft Security Store is generally available, with three major enhancements:

- Embedded where you work: Security Store is now built into Microsoft Defender, featuring SOC-focused agents, and into Microsoft Entra for Verified ID and External ID scenarios like fraud protection. By bringing these capabilities into familiar workflows, organizations can combine Microsoft and partner innovation to strengthen security operations and outcomes.
- Expanded catalog: Security Store now offers more than 100 third-party solutions, including advanced fraud prevention, forensic analysis, and threat intelligence agents.
- Security services available: Partners can now list and sell services such as managed detection and response and threat hunting directly through Security Store.

Real-World Impact: What We Learned in Public Preview

Thousands of customers explored Microsoft Security Store and tried a growing catalog of agents and SaaS solutions. While we are at the beginning of our journey, customer feedback shows these solutions are helping teams apply AI to improve security operations and reduce manual effort.

Spairliners, a cloud-first aviation services joint venture between Air France and Lufthansa, strengthened identity and access controls by deploying Glueckkanja's Privileged Admin Watchdog to enforce just-in-time access. "Using the Security Store felt easy, like adding an app in Entra. For a small team, being able to find and deploy security innovations in minutes is huge." - Jonathan Mayer, Head of Innovation, Data and Quality

GTD, a Chilean technology and telecommunications company, is testing a variety of agents from the Security Store: "As any security team, we're always looking for ways to automate and simplify our operations. We are exploring and applying the world of agents more and more each day, so having the Security Store is convenient - it's easy to find and deploy agents. We're excited about the possibilities for further automation and integrations into our workflows, like event-triggered agents, deeper Outlook integration, and more." - Jonathan Lopez Saez, Cybersecurity Architect

Partners echoed the momentum they are seeing with the Security Store:

"We're excited by the early momentum with Security Store. We've already received multiple new leads since going live, including one in a new market for us, and we have multiple large deals we're looking to drive through Security Store this quarter." - Kim Brault, Head of Alliances, Delinea

"Partnering with Microsoft through the Security Store has unlocked new ways to reach enterprise customers at scale. The store is pivotal as the industry shifts toward AI, enabling us to monetize agents without building our own billing infrastructure. With the new embedded experience, our solutions appear at the exact moment customers are looking to solve real problems. And by working with Microsoft's vetting process, we help provide customers confidence to adopt AI agents." - Milan Patel, Co-founder and CEO, BlueVoyant

"Agents and the Microsoft Security Store represent a major step forward in bringing AI into security operations.
We've turned years of service experience into agentic automations, and it's resonating with customers - we've been positively surprised by how quickly they're adopting these solutions and embedding our automated agentic expertise into their workflows." - Christian Kanja, Founder and CEO of glueckkanja

New at GA: Embedded in Defender and Entra - Security Solutions Right Where You Work

Microsoft Security Store is now embedded in the Defender and Entra portals with partner solutions that extend your Microsoft Security products. By placing Security Store in front of security practitioners, it's now easier than ever to use the best of partner and Microsoft capabilities in combination to drive stronger security outcomes. As Dorothy Li, Corporate Vice President of Security Copilot and Ecosystem, put it: "Embedding the Security Store in our core security products is about giving customers access to innovative solutions that tap into the expertise of our partners. These solutions integrate with Microsoft Security products to complete end-to-end workflows, helping customers improve their security."

Within the Microsoft Defender portal, SOC teams can now discover Copilot agents from both Microsoft and partners in the embedded Security Store, and run them all from a single, familiar interface. Let's look at an example of how these agents might help in the day in the life of a SOC analyst. The day starts with Watchtower (BlueVoyant) confirming Sentinel connectors and Defender sensors are healthy, so investigations begin with full visibility. As alerts arrive, the Microsoft Defender Copilot Alert Triage Agent groups related signals, extracts key evidence, and proposes next steps; identity-related cases are then validated with Login Investigator (adaQuest), which baselines recent sign-in behavior and device posture to cut false positives. To stay ahead of emerging campaigns, the analyst checks the Microsoft Threat Intelligence Briefing Agent for concise threat rundowns tied to relevant indicators, informing hunts and temporary hardening. When HR flags an offboarding, GuardianIQ (People Tech Group) correlates activity across Entra ID, email, and files to surface possible data exfiltration with evidence and risk scores. After containment, the Automated Closing Comment Generator (Ascent Global Inc.) produces clear, consistent closure notes from Defender incident details, keeping documentation tight without hours of writing. Together, these Microsoft and partner agents maintain platform health, accelerate triage, sharpen identity decisions, add timely threat context, reduce insider risk blind spots, and standardize reporting - all inside the Defender portal. You can read more about the new agents available in the Defender portal in this blog.

In addition, Security Store is now integrated into Microsoft Entra, focused on identity-centric solutions. Identity admins can discover and activate partner offerings for DDoS protection, intelligent bot defense, and government ID-based verification for account recovery, all within the Entra portal. With these capabilities, Microsoft Entra delivers a seamless, multi-layered defense that combines built-in identity protection with best-in-class partner technologies, making it easier than ever for enterprises to strengthen resilience against modern identity threats. Learn more here.

Levent Besik, VP of Microsoft Entra, shared that "This sets a new benchmark for identity security and partner innovation at Microsoft. Attacks on digital identities can come from anywhere.
True security comes from defense in depth, layering protection across the entire user journey so every interaction, from the first request to identity recovery, stays secure. This launch marks only the beginning; we will continue to introduce additional layers of protection to safeguard every aspect of the identity journey."

New at GA: Services Added to a Growing Catalog of Agents and SaaS

For the first time, partners can offer their security services directly through the Security Store. Customers can now find, buy, and activate managed detection and response, threat hunting, and other expert services, making it easier to augment internal teams and scale security operations. Every listing carries an MXDR Verification certifying that the partner provides next-generation advanced threat detection and response services. You can browse all the services available at launch here, and read about some of our exciting partners below:

"Avanade is proud to be a launch partner for professional services in the Microsoft Security Store. As a leading global Microsoft Security Services provider, we're excited to make our offerings easier to find and help clients strengthen cyber defenses faster through this streamlined platform." - Jason Revill, Avanade Global Security Technology Lead

"ProServeIT partnering with Microsoft to have our offers in the Microsoft Security Store helps ProServeIT protect our joint customers and allows us to sell better with Microsoft sellers. It shows customers how our technology and services support each other to create a safe and secure platform." - Eric Sugar, President

"Having Reply's security services showcased in the Microsoft Security Store is a significant milestone for us. It amplifies our ability to reach customers at the exact point where they evaluate and activate Microsoft security solutions, ensuring our offerings are visible alongside Microsoft's trusted technologies."

Notable New Selections

Since public preview, the Security Store catalog has grown significantly. Customers can now choose from over 100 third-party solutions, including 60+ SaaS offerings and 50+ Security Copilot agents, with new additions every week. Recent highlights include Cisco Duo and Rubrik:

Cisco Duo IAM delivers comprehensive, AI-driven identity protection combining MFA, SSO, passwordless, and unified directory management. Duo IAM integrates across the Microsoft Security suite in just a few clicks:
- Entra ID: risk-based authentication and unified access policy management across cloud and on-premises applications.
- Intune: device compliance and access enforcement.
- Sentinel: centralized security monitoring and threat detection through ingestion of critical logs about authentication events, administrator actions, and risk-based alerts, providing real-time visibility across the identity stack.

Rubrik's data security platform delivers complete cyber resilience across enterprise, cloud, and SaaS alongside Microsoft. Through the Microsoft Sentinel integration, Rubrik's data management capabilities are combined with Sentinel's security analytics to accelerate issue resolution, enabling unified visibility and streamlined responses. Furthermore, Rubrik empowers organizations to reduce identity risk and ensure operational continuity with real-time protection, unified visibility, and rapid recovery across Microsoft Active Directory and Entra ID infrastructure.

The Road Ahead

This is just the beginning.
Microsoft Security Store will continue to make it even easier for customers to improve their security outcomes by tapping into the innovation and expertise of our growing partner ecosystem. The momentum we're seeing is clear: customers are already gaining real efficiencies and stronger outcomes by adopting AI-powered agents. As we work together with partners, we'll unlock even more automation, deeper integrations, and new capabilities that help security teams move faster and respond smarter. Explore the Security Store today to see what's possible. For a more detailed walk-through of the capabilities, read our previous public preview Tech Community post. If you're a partner, now is the time to list your solutions and join us in shaping the future of security.


Cybersecurity: What Every Business Leader Needs to Know Now
As a Senior Cybersecurity Solution Architect, I've had the privilege of supporting organisations across the United Kingdom, Europe, and the United States, spanning sectors from finance to healthcare, in strengthening their security posture. One thing has become abundantly clear: cybersecurity is no longer the sole domain of IT departments. It is a strategic imperative that demands attention at board level. This guide distils five key lessons drawn from real-world engagements to help executive leaders navigate today's evolving threat landscape. These insights are not merely technical; they are cultural, operational, and strategic. If you're a C-level executive, this article is a call to action: reassess how your organisation approaches cybersecurity before the next breach forces the conversation. In this article, I share five lessons (and quotes) from the field that help demystify how to enhance an organisation's security posture.

1. Shift the Mindset

"This has always been our approach, and we've never experienced a breach - so why should we change it?"

A significant barrier to effective cybersecurity lies not in the sophistication of attackers, but in the predictability of human behaviour. If you've never experienced a breach, it's tempting to maintain the status quo. However, as threats evolve, so too must your defences. Many cyber threats exploit well-known vulnerabilities that remain unpatched or rely on individuals performing routine tasks in familiar ways. Human nature tends to favour comfort and habit - traits that adversaries are adept at exploiting. Unlike many organisations, attackers readily adopt new technologies to advance their objectives, including AI-powered ransomware to execute increasingly sophisticated attacks. It is therefore imperative to recognise, without delay, that the advent of AI has dramatically reduced both the effort and time required to compromise systems.

As the UK's National Cyber Security Centre (NCSC) has stated: "AI lowers the barrier for novice cyber criminals, hackers-for-hire and hacktivists to carry out effective access and information gathering operations. This enhanced access will likely contribute to the global ransomware threat over the next two years."

Similarly, McKinsey & Company observed: "As AI quickly advances cyber threats, organisations seem to be taking a more cautious approach, balancing the benefits and risks of the new technology while trying to keep pace with attackers' increasing sophistication."

To counter this evolving threat landscape, organisations must proactively leverage AI in their cyber defence strategies. Examples include:

Identity and Access Management (IAM): AI enhances IAM by analysing real-time signals across systems to detect risky sign-ins and enforce adaptive access controls. Example: Microsoft Entra Agents for Conditional Access use AI to automate policy recommendations, streamlining access decisions with minimal manual input.
Figure 1: Microsoft Entra Agents

Threat Detection: AI accelerates detection, response, and recovery, helping organisations stay ahead of sophisticated threats. Example: Microsoft Defender for Cloud's AI threat protection identifies prompt injection, data poisoning, and wallet attacks in real time.

Incident Response: AI facilitates real-time decision-making, removing emotional bias and accelerating containment and recovery during security incidents. Example: Automatic Attack Disruption in Defender XDR, which can automatically contain a breach in progress.
AI Security Posture Management: AI workloads require continuous discovery, classification, and protection across multi-cloud environments. Example: Microsoft Defender for Cloud's AI Security Posture Management secures custom AI apps across Azure, AWS, and GCP by detecting misconfigurations, vulnerabilities, and compliance gaps.

Data Security Posture Management (DSPM) for AI: AI interactions must be governed to ensure privacy, compliance, and insider risk mitigation. Example: Microsoft Purview DSPM for AI enables prompt auditing, applies Data Loss Prevention (DLP) policies to third-party AI apps like ChatGPT, and supports eDiscovery and lifecycle management.

AI Threat Protection: Organisations must address emerging AI threat vectors, including prompt injection, data leakage, and model exploitation. Example: Defender for AI (private preview) provides model-level security, including governance, anomaly detection, and lifecycle protection.

Embracing innovation, automation, and intelligent defence is the secret sauce for cyber resilience in 2026.

2. Avoid One-Off Purchases - Invest with a Strategy

"One MDE and one Sentinel to go, please."

Organisations often approach me intending to purchase a specific cybersecurity product, such as Microsoft Defender for Endpoint (MDE), without a clearly articulated strategic rationale. My immediate question is: what is the broader objective behind this purchase? Is it driven by perceived value or popularity, or does it form part of a well-considered strategy to enhance endpoint security?

Cybersecurity investments should be guided by a long-term, holistic strategy that spans multiple years and is periodically reassessed to reflect evolving threats. Strengthening endpoint protection must be integrated into a wider effort to improve the organisation's overall security posture. This includes ensuring seamless integration between security solutions and avoiding operational silos. For example, deploying robust endpoint protection is of limited value if identities are not safeguarded with multi-factor authentication (MFA), or if storage accounts remain publicly accessible. A cohesive and forward-looking approach ensures that all components of the security architecture work in concert to mitigate risk effectively.

Security Adoption Journey (based on the Zero Trust framework):
- Assess: evaluate the threat landscape, attack surface, vulnerabilities, compliance obligations, and critical assets.
- Align: link security objectives to broader business goals to ensure strategic coherence.
- Architect: design integrated and scalable security solutions, addressing gaps and eliminating operational silos.
- Activate: implement tools with robust governance and automation to ensure consistent policy enforcement.
- Advance: continuously monitor, test, and refine the security posture to stay ahead of evolving threats.

Security tools are not fast food - they work best as part of a long-term plan, not a one-off order. A piecemeal approach runs counter to the modern Zero Trust security model, which assumes no single tool will prevent every breach and instead implements layered defences and integration.
3. Legacy Systems Are Holding You Back

"Unfortunately, we are unable to implement phishing-resistant MFA, as our legacy app does not support integration with the required protocols."

A common challenge faced by many organisations I have worked with is the constraint on innovation within their cybersecurity architecture, primarily due to continued reliance on legacy applications, often driven by budgetary or operational necessity. These outdated systems frequently lack compatibility with modern security technologies and may introduce significant vulnerabilities. A notable example is the deployment of phishing-resistant multi-factor authentication (MFA), such as FIDO2 security keys or certificate-based authentication, which requires advanced identity protocols and Conditional Access policies. These capabilities are available exclusively through Microsoft Entra ID.

To address this issue effectively, it is essential to design security frameworks based on the organisation's future aspirations rather than its current limitations. By adopting a forward-thinking approach, organisations can remain receptive to emerging technologies that align with their strategic cybersecurity objectives. Moreover, this perspective encourages investment in acquiring the necessary talent, thereby reducing reliance on extensive change management and staff retraining. I advise designing for where you want to be in the next 1-3 years - ideally cloud-first and identity-driven, essentially adopting a Zero Trust architecture - rather than being constrained by the limitations of legacy systems.
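To make the phishing-resistant MFA point concrete, the sketch below shows one way such a requirement can be expressed as a Conditional Access policy with the Microsoft Graph PowerShell SDK. It is a minimal illustration rather than a rollout plan: the display name and all-users scoping are placeholder assumptions, the policy is created in report-only mode, and the built-in "Phishing-resistant MFA" authentication strength ID should be verified in your tenant before use.

# Minimal sketch: a report-only Conditional Access policy requiring phishing-resistant MFA.
# Display name, scoping and the authentication strength ID are illustrative assumptions.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    DisplayName = "Require phishing-resistant MFA (report-only sketch)"
    State       = "enabledForReportingButNotEnforced"   # evaluate impact before enforcing
    Conditions  = @{
        Users        = @{ IncludeUsers = @("All") }
        Applications = @{ IncludeApplications = @("All") }
    }
    GrantControls = @{
        Operator               = "AND"
        AuthenticationStrength = @{ Id = "00000000-0000-0000-0000-000000000004" }  # built-in "Phishing-resistant MFA" strength
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy

Starting in report-only mode is deliberate: it surfaces exactly which legacy applications would fail the requirement, which is useful evidence when making the case to modernise them.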
4. Collaboration Is a Security Imperative

"This item will need to be added to the dev team's backlog. Given their current workload, they will do their best to implement GitHub Security in Q3, subject to capacity."

Cybersecurity threats may originate from various parts of an organisation, and one of the principal challenges many face is the fragmented nature of their defence strategies. To effectively mitigate such risks, cybersecurity must be embedded across all departments and functions, rather than being confined to a single team or role. In many organisations, the Chief Information Security Officer (CISO) operates in isolation from other C-level executives, which can limit their influence and complicate the implementation of security measures across the enterprise. Furthermore, some teams may lack the requisite expertise to execute essential security practices. For instance, an R&D lead responsible for managing developers may not possess the necessary skills in DevSecOps. To address these challenges, it is vital to ensure that the CISO is empowered to act without political or organisational barriers and is supported in implementing security measures across all business units. When the CISO has backing from the COO and HR, initiatives such as MFA rollout happen faster and more thoroughly.

Cross-Functional Security Responsibilities

R&D: adopt DevSecOps practices; identify vulnerabilities early; manage code dependencies; detect exposed secrets; embed security in CI/CD pipelines.
CIO: ensure visibility over organisational data; implement Data Loss Prevention (DLP); safeguard the sensitive data lifecycle; ensure regulatory compliance.
CTO: secure cloud environments (CSPM); manage SaaS security posture (SSPM); ensure hardware and endpoint protection.
COO: protect digital assets; secure domain management; mitigate impersonation threats; safeguard digital marketing channels and customer PII.
Support & Vendors: deliver targeted training; prevent social engineering attacks; improve awareness of threat vectors.
HR: train employees on AI-related threats; manage insider risks; secure employee data; oversee cybersecurity across the employee lifecycle.

Empowering the CISO to act across departments helps organisations shift towards a security-first culture, embedding cybersecurity into every function, not just IT.

5. Compliance Is Not Security

"We're compliant, so we must be secure."

Many organisations mistakenly equate passing audits, such as ISO 27001 or SOC 2, with being secure. While compliance frameworks help establish a baseline for security, they are not a guarantee of protection. Determined attackers are not deterred by audit checklists; they exploit gaps, misconfigurations, and human error regardless of whether an organisation is certified. Moreover, due to the rapidly evolving nature of the cyber threat landscape, compliance frameworks often struggle to keep pace. By the time a standard is updated, attackers may already be exploiting new techniques that fall outside its scope. This lag creates a false sense of security for organisations that rely solely on regulatory checkboxes.

Security is a continuous risk management process, not a one-time certification. It must be embedded into every layer of the enterprise and treated with the same urgency as other core business priorities. Compliance is the starting line, not the finish line. Effective security goes beyond meeting regulatory requirements; it demands ongoing vigilance, adaptability, and a proactive mindset.

Conclusion: Cybersecurity Is a Continuous Discipline

Cybersecurity is not a destination; it is a continuous journey. By embracing strategic thinking, cross-functional collaboration, and emerging technologies, organisations can build resilience against today's threats and tomorrow's unknowns. The lessons shared throughout this article are not merely technical; they are cultural, operational, and strategic. If there is one key takeaway, it is this: avoid piecemeal fixes and instead adopt an integrated, future-ready security strategy. Because the cyber threat landscape evolves so rapidly, compliance frameworks alone cannot keep pace. Security must be treated as a dynamic, ongoing process, embedded into every layer of the enterprise and reviewed regularly. Organisations should conduct periodic security posture reviews, leveraging tools such as Microsoft Secure Score or monthly risk reports, and stay informed about emerging threats through threat intelligence feeds and resources like the Microsoft Digital Defence Report, CISA (Cybersecurity and Infrastructure Security Agency), NCSC (UK National Cyber Security Centre), and other open-source intelligence platforms.
As Ann Johnson aptly stated in her blog: "The most prepared organisations are those that keep asking the right questions and refining their approach together."

Cyber resilience demands ongoing investment - in people (through training and simulation drills), in processes (via playbooks and frameworks), and in technology (through updates and adoption of AI-driven defences). To reduce cybersecurity risk over time, resilient organisations must continually refine their approach and treat cybersecurity as an ongoing discipline. The time to act is now.

Resources:
- Impact of AI on the cyber threat (NCSC): https://www.ncsc.gov.uk/report/impact-of-ai-on-cyber-threat
- Defend against cyber threats with AI solutions from Microsoft - Microsoft Industry Blogs
- Generative AI Cybersecurity Solutions | Microsoft Security
- Require phishing-resistant multifactor authentication for Microsoft Entra administrator roles - Microsoft Entra ID | Microsoft Learn
- AI is the greatest threat - and defense - in cybersecurity today. Here's why.
- Microsoft Entra Agents - Microsoft Entra | Microsoft Learn
- Smarter identity security starts with AI
- Cyber resilience begins before the crisis: https://www.microsoft.com/en-us/security/blog/2025/06/12/cyber-resilience-begins-before-the-crisis/
- Microsoft Digital Defense Report - critical cybersecurity challenges: https://www.microsoft.com/en-us/security/security-insider/threat-landscape/microsoft-digital-defense-report-2023-critical-cybersecurity-challenges


Sensitivity Auto-labelling via Document Property
Why is this needed?

Sensitivity labels are generally relevant within an organisation only. If a file is labelled within one environment and then moved to another environment, sensitivity label content markings may be visible, but by default, the applied sensitivity label will not be understood. This can lead to scenarios where information that has been generated externally is not adequately protected.

My favourite analogy for these scenarios is to consider the parallels between receiving sensitive information and unpacking groceries. When unpacking groceries, you might sit your grocery bag on a counter or on the floor next to the pantry. You'll likely then unpack each item, take a look at it and then decide where to place it. Without looking at an item to determine its correct location, you might place it in the wrong location. Porridge might be safe from the kids on the bottom shelf. If you place items that need to be protected, such as chocolate, on the bottom shelf, it's not likely to last very long. So, I affectionately refer to information that hasn't been evaluated as "porridge", as until it has been checked, it will end up on the bottom shelf of the pantry where it is quite accessible. Label-based security controls, such as Data Loss Prevention (DLP) policies using conditions of "content contains sensitivity label", will not apply to these items.

To ensure the security of any contained sensitive information, we should look for potential clues to its sensitivity and then utilise these clues to ensure that the contained information is adequately protected - we take a closer look at the "porridge", determine whether it's an item that needs protection and, if so, move it to a higher shelf in the pantry so that it's out of reach for the kids.

Effective use of Purview revolves around the use of "know your data" strategies. We should be using as many methods as possible to try to determine the sensitivity of items. This can include the use of Sensitive Information Types (SITs) containing keyword or pattern-based classifiers, trainable classifiers, Exact Data Match, document fingerprinting, etc. Matching items via SITs present in an item's content can be problematic due to false positives. Keywords like "Sensitive" or "Protected" may be mentioned out of context, such as when referring to a classification or an environment. When classifications have been stamped via a property, it allows us to match via context rather than content. We don't need to guess at an item's sensitivity if another system has already established what the item's classification is. These methods are much less prone to false positives.

Why isn't everyone doing this?

Document properties are often not considered in Purview deployments. SharePoint metadata management seems to be a dying artform, and most compliance or security resources completing Purview configurations don't have this skill set. There's also a lack of understanding of the relevance of checking for item properties. Microsoft haven't helped, as the documentation in this space is somewhat lacking and needs to be unpicked via some aligning DLP guidance (Create a DLP policy to protect documents with FCI or other properties). Many of these configurations will also be tied to regional requirements. Document properties being used by systems where I'm from, in Australia, will likely be very different to those used in other parts of the world.

In the following sections, we'll take a look at applicable use cases and walk through how to enable these configurations.
Scenarios for use

Labelling via document property isn't for everyone. If your organisation is new to classification or you don't have external partners that you collaborate with at higher sensitivity levels, then this likely isn't for you. For those that collaborate heavily and have a shared classification framework, as is often seen across government, this is a must! This approach will also be highly relevant to multi-tenant organisations or conglomerates where information is regularly shared between environments. The following scenarios are examples of where this configuration will be relevant:

1. Migrating from third-party classification tools

If an item has been previously stamped by a third-party classification tool, then evaluating its applied document properties will provide a clear picture of its security classification. These properties can then be used in service-based auto-labelling policies to effectively transition items from third-party tools to Microsoft Purview sensitivity labels. As labels are applied to items, they will be brought into scope of label-based controls.

2. Detecting data spill

Data spill is a term used to describe situations where information of a higher than permitted security classification lands in an environment. Consider a Microsoft 365 tenant that is approved for the storage of Official information but Top Secret files are uploaded to it. Document properties that align with higher than permitted classifications provide us with an almost guaranteed method of identifying spilled items. Pairing this document property with an auto-labelling policy allows for the application of encryption to lock unauthorised users out of the items. Tools like Content Explorer and eDiscovery can then be used to easily perform cleanup activities. If using document properties and auto-labelling for this purpose, keep in mind that you'll need to create sensitivity labels for higher than permitted classifications in order to catch spilled items. These labels won't impact usability as you won't publish them to users. You will, however, need to publish them to a single user or break-glass account so that they're not ignored by auto-labelling.

3. Blocking access by AI tools

If your organisation is concerned about items with certain properties applied being accessed by generative AI tools, such as Copilot, you could use auto-labelling to apply a sensitivity label that restricts EXTRACT permissions. You can find some information on this at Microsoft 365 Copilot data protection architecture | Microsoft Learn. This should be relevant for spilled data, but might also be useful in situations where there are certain records that have been marked via properties and which should not be Copilot accessible.

4. External Microsoft Purview configurations

Sensitivity labels are relevant internally only. A label, in its raw form, is essentially a piece of metadata with an ID (or GUID) that we stamp on pieces of information. These GUIDs are understood by your tenant only. If an item marked with a GUID shows up in another Microsoft 365 tenant, the GUID won't correspond with any of that tenant's labels or label-based controls. The art in Microsoft Purview lies in interpreting the sensitivity of items based on content markings and other identifiers, so that data security can be maintained. Document properties applied by Purview, such as ClassificationContentMarkingHeaderText, are not relevant to a specific tenant, which makes them portable.
We can use these properties to help maintain classifications as items move between environments.

5. Utilising metadata applied by records management solutions

Some EDRMS, records or content management solutions will apply properties to items. If an item has been previously managed and then stamped with properties, potentially including a security classification, via one of these systems, we could use this information to inform sensitivity label application.

6. Third-party classification tools used externally

Even if your organisation hasn't been using third-party classification tools, you should consider that partner organisations, such as other government departments, might be. Evaluating the properties applied by external organisations to items that you receive will allow you to extend protections to these items. If classification tools like Janus or Titus are used in your geography or industry, then you may want to consider checking for their properties.

Regarding the use of auto-classification tools

Some organisations, particularly those in government, will have organisational policies that prevent the use of automatic classification capabilities. These policies are intended to ensure that each item is assessed by an actual person for risk of disclosure rather than via an automated service that could be prone to error. However, when auto-labelling is used to interpret and honour existing classifications, we are lowering rather than raising the risk profile. If the item's existing classification (applied via property) is ignored, the item will be treated as porridge and is likely to be at risk. If auto-labelling is able to identify a high-risk item and apply the relevant label, it will then be within scope of Purview's data security controls, including label-based DLP, groups and sites data out of place alerting, and potentially even item encryption. The outcome is that, through the use of auto-labelling, we are able to significantly reduce the risk of inappropriate or unintended disclosure.

Configuration Process

Setting up document property-based auto-labelling is fairly straightforward. We need to set up a managed property and then utilise it in an auto-labelling policy. Below, I've split this process into six steps.

Step 1 - Prepare your files

In order to make use of document properties, an item with the properties applied will first need to be indexed by SharePoint. SharePoint will record the properties as "crawled properties", which we'll then need to convert into "managed properties" to make them useful. If you already have items with the relevant properties stored in SharePoint, then they are likely already indexed. If not, you'll need to upload or create an item or items with the properties applied. For testing, you'll want to create a file with each property/value combination so that you can confirm that your auto-labelling policies are all working correctly. This could require quite a few files depending on the number of properties you're looking for. To kick off your crawled property generation, though, you could create or upload a single file with the correct properties applied. For example, I've created properties for ClassificationContentMarkingHeaderText and ClassificationContentMarkingFooterText, which you'll often see applied by Purview when an item has a sensitivity label content marking applied to it. I've also included properties to help identify items classified via JanusSeal, Titus and Objective.
Step 2 - Index the files

After creating or uploading your file, we then need SharePoint to index it. This should happen fairly quickly depending on the size of your environment; expect to wait somewhere between 10 minutes and 24 hours. If you're not in a hurry, I'd recommend just checking back the next day. You'll know when this has been completed when you head into SharePoint Admin > Search > Managed Search Schema > Crawled Properties and can find your newly indexed properties.

Step 3 - Configure managed properties

Next, the properties need to be configured as managed properties. To do this, go to SharePoint Admin > More features > Search > Managed Search Schema > Managed Properties. Create a new managed property and give it a name. Note that there are some character restrictions in naming, but you should be able to get it close to your document property name. Set the property's type to text, and select queryable and retrievable. Under "mappings to crawled properties", choose add mapping, then search for and select the crawled property indexed from the file property. The crawled property will have the same name as your document property, so there's no need to browse through all of them. Repeat this so that you have a managed property for each document property that you want to look for.

Step 4 - Configure auto-labelling policies

Next up, create some auto-labelling policies. You'll need one for each label that you want to apply, not one per property, as you can check multiple properties within the one auto-labelling policy.

- From within Purview, head to Information Protection > Policies > Auto-labelling policies.
- Create a new policy using the custom policy template.
- Give your policy an appropriate name (e.g. Label PROTECTED via property).
- Select the label that you want to apply (e.g. PROTECTED).
- Select SharePoint-based services (SharePoint and OneDrive).
- Name your auto-labelling rules appropriately (e.g. SPO - Contains PROTECTED property).
- Enter your conditions as a long string with property and value separated by a colon and multiple entries separated by commas. For example: ClassificationContentMarkingHeaderText:PROTECTED,ClassificationContentMarkingFooterText:PROTECTED,Objective-Classification:PROTECTED,PMDisplay:PROTECTED,TitusSEC:PROTECTED

Note that the properties you are referencing are the managed properties rather than the document properties. This will be relevant if your managed property ended up having a different name due to character restrictions. After pasting your string into the UI, the resultant rule should show each of these property and value pairs as conditions.

When done, you can either leave your policy in simulation mode or save it and then turn it on from the auto-labelling policies screen. Just be aware of any potential impacts, such as accidentally locking users out by automatically deploying a label with an encryption configuration. You can reduce any potential impact by targeting your auto-labelling policy at a site or set of sites initially and then expanding its scope after testing.
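If you prefer to script this step, the policy and rule can broadly be created with Security & Compliance PowerShell as well. The sketch below is a minimal example under a few assumptions: the label name, policy name and property:value pairs are the placeholders from the walkthrough above, and the -ContentPropertyContainsWords parameter on New-AutoSensitivityLabelRule is assumed to mirror the property condition entered in the UI - confirm it with Get-Help in your version of the ExchangeOnlineManagement module before relying on it.

# Minimal sketch: create a document-property-based auto-labelling policy in simulation mode.
# Assumes the ExchangeOnlineManagement module and a Security & Compliance PowerShell session.
Connect-IPPSSession

New-AutoSensitivityLabelPolicy -Name "Label PROTECTED via property" `
    -ApplySensitivityLabel "PROTECTED" `
    -SharePointLocation All -OneDriveLocation All `
    -Mode TestWithoutNotifications   # simulation mode; switch to Enable after testing

# Property:value pairs reference managed property names, as in the UI example above.
$protectedProps = @(
    "ClassificationContentMarkingHeaderText:PROTECTED",
    "ClassificationContentMarkingFooterText:PROTECTED",
    "Objective-Classification:PROTECTED"
)

New-AutoSensitivityLabelRule -Name "SPO - Contains PROTECTED property" `
    -Policy "Label PROTECTED via property" `
    -ContentPropertyContainsWords $protectedProps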
Step 5 - Test
Testing your configuration is as easy as uploading or creating a set of files with the relevant document properties in place. Once uploaded, you'll need to give SharePoint some time to index the items, and then the auto-labelling policy some time to apply sensitivity labels to them. To confirm label application, you can head to the document library where your test files are located and enable the sensitivity column. Files that have been auto-labelled will have their label listed:

You could also check for auto-labelling activity in Purview via Activity explorer:

Step 6 - Expand into DLP
If you've spent the time setting up managed properties, then you really should consider capitalizing on them in your DLP configurations. DLP policy conditions can be configured in the same manner as the auto-labelling conditions in Step 4 above. The document property also gives us an anchor for DLP conditions that is independent of an item's sensitivity label. You may wish to consider the following:
- DLP policies blocking external sharing of items with certain properties applied. This might be handy for situations where auto-labelling hasn't yet labelled an item.
- DLP policies blocking the external sharing of items where the applied sensitivity label doesn't match the applied document property. This could provide an indication of a risky label downgrade.
You could extend such policies into Insider Risk Management (IRM) by creating IRM policies that are aligned with the above DLP policies. This allows document properties to be considered in user risk calculation, which can inform controls like Adaptive Protection. Here's an example of a policy from the DLP rule summary screen that shows conditions of "item contains a label" or one of our configured document properties:
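As a sketch of the first idea above, blocking external sharing of items that carry a classification property whether or not they have been labelled yet, the following uses the same Security & Compliance PowerShell surface as the earlier example. Again, verify the cmdlet and parameter names (New-DlpCompliancePolicy, New-DlpComplianceRule, ContentPropertyContainsWords, AccessScope, BlockAccess) against your module version; the policy names and property list are illustrative.

# Properties that indicate a PROTECTED item, as Property:Value pairs
$propertyConditions = @(
    "ClassificationContentMarkingHeaderText:PROTECTED",
    "Objective-Classification:PROTECTED"
)

# DLP policy scoped to SharePoint and OneDrive
New-DlpCompliancePolicy -Name "PROTECTED property - external sharing" `
    -SharePointLocation "All" -OneDriveLocation "All" -Mode Enable

# Rule: if a configured property is present, block access for people outside the organisation
New-DlpComplianceRule -Name "Block external sharing - PROTECTED property" `
    -Policy "PROTECTED property - external sharing" `
    -ContentPropertyContainsWords $propertyConditions `
    -AccessScope NotInOrganization `
    -BlockAccess $true

The label/property mismatch scenario can be built in a similar way, pairing a property condition with a sensitivity label condition (or exception) so that the rule only fires when the two disagree.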
Thanks for reading and I hope this article has been of use. If you have any questions or feedback, please feel free to reach out.

Getting started with the eDiscovery APIs

The Microsoft Purview APIs for eDiscovery in Microsoft Graph enable organizations to automate repetitive tasks and integrate with their existing eDiscovery tools to build repeatable workflows that industry regulations might require. Before you can make any calls to the Microsoft Purview APIs for eDiscovery, you must first register an app in Microsoft's identity platform, Entra ID. An app can access data in two ways:
- Delegated access: an app acting on behalf of a signed-in user
- App-only access: an app acting with its own identity
For more information on access scenarios, see Authentication and authorization basics. This article will demonstrate how to configure the required pre-requisites to enable access to the Microsoft Purview APIs for eDiscovery. This will be based on using app-only access to the APIs, using either a client secret or a self-signed certificate to authenticate the requests.

The Microsoft Purview APIs for eDiscovery comprise two separate APIs:
- Microsoft Graph: part of the Microsoft.Graph.Security namespace and used for working with Microsoft Purview eDiscovery cases.
- MicrosoftPurviewEDiscovery: used exclusively to programmatically download the export package created by a Microsoft Purview eDiscovery export job.
Currently, the eDiscovery APIs in Microsoft Graph only work with eDiscovery (Premium) cases. For a list of supported Microsoft Graph API calls, see Use the Microsoft Purview eDiscovery API.

Microsoft Graph API Pre-requisites
Implementing app-only access involves registering an app in the Azure portal, creating client secrets/certificates, assigning API permissions, setting up a service principal, and then using app-only access to call Microsoft Graph APIs. To register an app, create client secrets/certificates and assign API permissions, the account must be at least a Cloud Application Administrator. For more information on registering an app in the Azure portal, see Register an application with the Microsoft identity platform. Granting tenant-wide admin consent for Microsoft Purview eDiscovery API application permissions requires you to sign in as a user that is authorized to consent on behalf of the organization, see Grant tenant-wide admin consent to an application.

Setting up a service principal requires the following pre-requisites:
- A machine with the ExchangeOnlineManagement module installed
- An account that has the Role Management role assigned in Microsoft Purview, see Roles and role groups in Microsoft Defender for Office 365 and Microsoft Purview

Configuration steps
For detailed steps on implementing app-only access for Microsoft Purview eDiscovery, see Set up app-only access for Microsoft Purview eDiscovery.

Connecting to Microsoft Graph API using app-only access
Use the Connect-MgGraph cmdlet in PowerShell to authenticate and connect to Microsoft Graph using the app-only access method. This cmdlet enables your app to interact with Microsoft Graph securely and enables you to explore the Microsoft Purview eDiscovery APIs.

Connecting via client secret
To connect using a client secret, update and run the following example PowerShell code.
$clientSecret = "<client secret>" ## Update with client secret added to the registered app $appID = "<APP ID>" ## Update with Application ID of registered/Enterprise app $tenantId = "<Tenant ID>" ## Update with tenant ID $ClientSecretPW = ConvertTo-SecureString "$clientSecret" -AsPlainText -Force $clientSecretCred = New-Object System.Management.Automation.PSCredential -ArgumentList ("$appID", $clientSecretPW) Connect-MgGraph -TenantId "$tenantId" -ClientSecretCredential $clientSecretCred Connecting via certificate To connect using a certificate, update and run the following example PowerShell code. $certPath = "Cert:\currentuser\my\<xxxxxxxxxx>" ## Update with the cert thumbnail $appID = "<APP ID>" ## Update with Application ID of registered/Enterprise app $tenantId = "<Tenant ID>" ## Update with tenant ID $ClientCert = Get-ChildItem $certPath Connect-MgGraph -TenantId $TenantId -ClientId $appId -Certificate $ClientCert Invoke Microsoft Graph API calls Once connected you can start making calls to the Microsoft Graph API. For example, lets look at listing the eDiscovery cases within the tenant, see List ediscoveryCases. Within the documentation, for each operation it will list the following information: Permissions required to make the API call HTTP request and method Request header and body information Response Examples (HTTP, C#, CLI, Go, Java, PHP, PowerShell, Python) As we are connected via the Microsoft Graph PowerShell module we can either use the HTTP or the eDiscovery specific cmdlets within the Microsoft Graph PowerShell module. First letâs look at the PowerShell cmdlet example. As you can see it returns a list of all the cases within the tenant. When delving deeper into a case it is important to record the Case ID as you will use this in future calls. Then we can look at the HTTP example, we will use the Invoke-MgGraphRequest cmdlet to make the call via PowerShell. First we need to store the URL in a variable as below. $uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases" Then we will use the Invoke-MgGraphRequest cmdlet to make the API call. Invoke-MgGraphRequest -Method Get -Uri $uri As you can see from the output below, we need to extract the values from the returned response. This can be done by saving the Value elements of the response to a new variable using the following command. $cases = (Invoke-MgGraphRequest -Method Get -Uri $uri).value This returns a collection of Hashtables; optionally you can run a small bit of PowerShell code to convert the hash tables into PS Objects for easier use with cmdlets such as format-table and format-list. $CasesAsObjects = @() foreach($i in $cases) {$CasesAsObjects += [pscustomobject]$i} MicrosoftPurviewEDiscovery API You can also configure the MicrosoftPurviewEDiscovery API to enable the programmatic download of export packages and the item report from an export job in a Microsoft Purview eDiscovery case. Pre-requisites Prior to executing the configuration steps in this section it is assumed that you have completed and validated the configuration detailed in the Microsoft Graph API section. The previously registered app in Entra ID will be extended to include the required permissions to achieve programmatic download of the export package. 
This already provides the following pre-requisites: Registered App in Azure portal configured with the appropriate client secret/certificate Service principal in Microsoft Purview assigned the relevant eDiscovery roles Microsoft eDiscovery API permissions configured for the Microsoft Graph To extend the existing registered apps API permissions to enable programmatic download, the following steps must be completed Registering a new Microsoft Application and service principal in the tenant Assign additional API permissions to the previously registered app in the Azure Portal Granting tenant-wide admin consent for Microsoft Purview eDiscovery APIs application permissions requires you to sign in as a user that is authorized to consent on behalf of the organization, see Grant tenant-wide admin consent to an application. Configuration steps Step 1 â Register the MicrosoftPurviewEDiscovery app in Entra ID First validate that the MicrosoftPurviewEDiscovery app is not already registered by logging into the Azure Portal and browsing to Microsoft Entra ID > Enterprise Applications. Change the application type filter to show Microsoft Applications and in the search box enter MicrosoftPurviewEDiscovery. If this returns a result as below, move to step 2. If the search returns no results as per the example below, proceed with registering the app in Entra ID. The Microsoft.Graph PowerShell Module can be used to register the MicrosoftPurviewEDiscovery App in Entra ID, see Install the Microsoft Graph PowerShell SDK. Once installed on a machine, run the following cmdlet to connect to the Microsoft Graph via PowerShell. Connect-MgGraph -scopes "Application.ReadWrite.All" If this is the first time using the Microsoft.Graph PowerShell cmdlets you may be prompted to consent to the following permissions. To register the MicrosoftPurviewEDiscovery app, run the following PowerShell commands. $spId = @{"AppId" = "b26e684c-5068-4120-a679-64a5d2c909d9" } New-MgServicePrincipal -BodyParameter $spId; Step 2 â Assign additional MicrosoftPurviewEDiscovery permissions to the registered app Now that the Service Principal has been added you can update the permissions on your previously registered app created in the Microsoft Graph API section of this document. Log into the Azure Portal and browse to Microsoft Entra ID > App Registrations. Find and select the app you created in the Microsoft Graph API section of this document. Select API Permissions from the navigation menu. Select Add a permission and then APIs my organization uses. Search for MicrosoftPurviewEDiscovery and select it. Then select Application Permissions and select the tick box for eDiscovery.Download.Read before selecting Add Permissions. You will be returned to the API permissions screen, now you must select Grant Admin Consent.. to approve the newly added permissions. User.Read Microsoft Graph API permissions have been added and admin consent granted. It also shows that the eDiscovery.Download.Read MicrosoftPurviewEDiscovery API application permissions have been added but admin consent has not yet been granted. Once admin consent is granted you will see the Status of the newly added permissions update to Granted for... Downloading the export packages and reports Retrieving the case ID and export Job ID To successfully download the export packages and reports of an export job in an eDiscovery case, you must first retrieve the case ID and the operation/job ID for the export job. 
To gather this information via the Purview Portal, you can open the eDiscovery case, locate the export job and select Copy support information, then paste this information into Notepad. The support information includes details such as the case ID, job ID, job state, created by, created timestamp, completed timestamp and support information generation time.

To access this information programmatically, you can make the following Graph API calls to locate the case ID and the job ID you wish to export. First, connect to the Microsoft Graph using the steps detailed in the previous section titled "Connecting to Microsoft Graph API using app-only access".

Using the eDiscovery Graph PowerShell cmdlets, you can use the following command if you know the case name.

Get-MgSecurityCaseEdiscoveryCase | where {$_.displayname -eq "<Name of case>"}

Once you have the case ID, you can look up the operations in the case to identify the job ID for the export using the following command.

Get-MgSecurityCaseEdiscoveryCaseOperation -EdiscoveryCaseId "<case ID>"

Export jobs will be logged under an action of either exportResult (direct export) or ContentExport (export from a review set). The names of the export jobs are not returned by this API call; to find the name of an export job you must query the specific operation ID. This can be achieved using the following command.

Get-MgSecurityCaseEdiscoveryCaseOperation -EdiscoveryCaseId "<case ID>" -CaseOperationId "<operation ID>"

The name of the export operation is contained within the property AdditionalProperties.

If you wish to make the HTTP API calls directly to list cases in the tenant, see List ediscoveryCases - Microsoft Graph v1.0 | Microsoft Learn. If you wish to make the HTTP API calls directly to list the operations for a case, see List caseOperations - Microsoft Graph v1.0 | Microsoft Learn. You will need to use the case ID in the API call to indicate which case you wish to list the operations from. For example:

https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/<CaseID>/operations/

The names of the export jobs are not returned with this API call; to find the name of an export job you must query the specific job ID. For example:

https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/<CaseID>/operations/<OperationID>

Downloading the Export Package
Retrieving the download URLs for export packages
The URLs required to download the export packages and reports are contained within a property called exportFileMetaData. To retrieve this information we need to know the case ID of the eDiscovery case that the export job was run in, as well as the operation ID for the export job. Using the eDiscovery Graph PowerShell cmdlets, you can retrieve this property using the following commands.

$Operation = Get-MgSecurityCaseEdiscoveryCaseOperation -EdiscoveryCaseId "<case ID>" -CaseOperationId "<operation ID>"
$Operation.AdditionalProperties.exportFileMetadata

If you wish to make the HTTP API calls directly to return the exportFileMetaData for an operation, see List caseOperations - Microsoft Graph v1.0 | Microsoft Learn. For each export package visible in the Microsoft Purview Portal there will be an entry in the exportFileMetaData property. Each entry will list the following:
- The export package file name
- The downloadUrl to retrieve the export package
- The size of the export package

Example scripts to download the Export Package
As the MicrosoftPurviewEDiscovery API is separate to the Microsoft Graph API, it requires a separate authentication token to authorise the download request.
As a result, you must use the MSAL.PS PowerShell module and the Get-MsalToken cmdlet to acquire a separate token, in addition to connecting to the Microsoft Graph APIs via the Connect-MgGraph cmdlet. The following example scripts can be used as a reference when developing your own scripts to enable the programmatic download of the export packages.

Connecting with a client secret
If you have configured your app to use a client secret, then you can use the following example script for reference to download the export package and reports programmatically. Copy the contents into Notepad and save it as DownloadExportUsingApp.ps1.

[CmdletBinding()]
param (
    [Parameter(Mandatory = $true)] [string]$tenantId,
    [Parameter(Mandatory = $true)] [string]$appId,
    [Parameter(Mandatory = $true)] [string]$appSecret,
    [Parameter(Mandatory = $true)] [string]$caseId,
    [Parameter(Mandatory = $true)] [string]$exportId,
    [Parameter(Mandatory = $true)] [string]$path = "D:\Temp",
    [ValidateSet($null, 'USGov', 'USGovDoD')] [string]$environment = $null
)
if (-not(Get-Module -Name Microsoft.Graph -ListAvailable)) {
    Write-Host "Installing Microsoft.Graph module"
    Install-Module Microsoft.Graph -Scope CurrentUser
}
if (-not(Get-Module -Name MSAL.PS -ListAvailable)) {
    Write-Host "Installing MSAL.PS module"
    Install-Module MSAL.PS -Scope CurrentUser
}
$password = ConvertTo-SecureString $appSecret -AsPlainText -Force
$clientSecretCred = New-Object System.Management.Automation.PSCredential -ArgumentList ($appId, $password)
if (-not(Get-MgContext)) {
    Write-Host "Connect with credentials of an eDiscovery admin (token for graph)"
    if (-not($environment)) {
        Connect-MgGraph -TenantId $TenantId -ClientSecretCredential $clientSecretCred
    }
    else {
        Connect-MgGraph -TenantId $TenantId -ClientSecretCredential $clientSecretCred -Environment $environment
    }
}
Write-Host "Connect with credentials of an eDiscovery admin (token for export)"
$exportToken = Get-MsalToken -ClientId $appId -Scopes "b26e684c-5068-4120-a679-64a5d2c909d9/.default" -TenantId $tenantId -RedirectUri "http://localhost" -ClientSecret $password
$uri = "/v1.0/security/cases/ediscoveryCases/$($caseId)/operations/$($exportId)"
$export = Invoke-MgGraphRequest -Uri $uri;
if (-not($export)){
    Write-Host "Export not found"
    exit
}
else{
    $export.exportFileMetadata | % {
        Write-Host "Downloading $($_.fileName)"
        Invoke-WebRequest -Uri $_.downloadUrl -OutFile "$($path)\$($_.fileName)" -Headers @{"Authorization" = "Bearer $($exportToken.AccessToken)"; "X-AllowWithAADToken" = "true" }
    }
}

Once saved, open a new PowerShell window which has the following PowerShell modules installed:
Microsoft.Graph
MSAL.PS
Browse to the directory where you have saved the script and issue the following command.

.\DownloadExportUsingApp.ps1 -tenantId "<tenant ID>" -appId "<App ID>" -appSecret "<Client Secret>" -caseId "<CaseID>" -exportId "<ExportID>" -path "<Output Path>"

Review the folder which you have specified as the Path to view the downloaded files.

Connecting with a certificate
If you have configured your app to use a certificate, then you can use the following example script for reference to download the export package and reports programmatically. Copy the contents into Notepad and save it as DownloadExportUsingAppCert.ps1.
[CmdletBinding()]
param (
    [Parameter(Mandatory = $true)] [string]$tenantId,
    [Parameter(Mandatory = $true)] [string]$appId,
    [Parameter(Mandatory = $true)] [String]$certPath,
    [Parameter(Mandatory = $true)] [string]$caseId,
    [Parameter(Mandatory = $true)] [string]$exportId,
    [Parameter(Mandatory = $true)] [string]$path = "D:\Temp",
    [ValidateSet($null, 'USGov', 'USGovDoD')] [string]$environment = $null
)
if (-not(Get-Module -Name Microsoft.Graph -ListAvailable)) {
    Write-Host "Installing Microsoft.Graph module"
    Install-Module Microsoft.Graph -Scope CurrentUser
}
if (-not(Get-Module -Name MSAL.PS -ListAvailable)) {
    Write-Host "Installing MSAL.PS module"
    Install-Module MSAL.PS -Scope CurrentUser
}
$ClientCert = Get-ChildItem $certPath
if (-not(Get-MgContext)) {
    Write-Host "Connect with credentials of an eDiscovery admin (token for graph)"
    if (-not($environment)) {
        Connect-MgGraph -TenantId $TenantId -ClientId $appId -Certificate $ClientCert
    }
    else {
        Connect-MgGraph -TenantId $TenantId -ClientId $appId -Certificate $ClientCert -Environment $environment
    }
}
Write-Host "Connect with credentials of an eDiscovery admin (token for export)"
$connectionDetails = @{
    'TenantId'          = $tenantId
    'ClientId'          = $appID
    'ClientCertificate' = $ClientCert
    'Scope'             = "b26e684c-5068-4120-a679-64a5d2c909d9/.default"
}
$exportToken = Get-MsalToken @connectionDetails
$uri = "/v1.0/security/cases/ediscoveryCases/$($caseId)/operations/$($exportId)"
$export = Invoke-MgGraphRequest -Uri $uri;
if (-not($export)){
    Write-Host "Export not found"
    exit
}
else{
    $export.exportFileMetadata | % {
        Write-Host "Downloading $($_.fileName)"
        Invoke-WebRequest -Uri $_.downloadUrl -OutFile "$($path)\$($_.fileName)" -Headers @{"Authorization" = "Bearer $($exportToken.AccessToken)"; "X-AllowWithAADToken" = "true" }
    }
}

Once saved, open a new PowerShell window which has the following PowerShell modules installed:
Microsoft.Graph
MSAL.PS
Browse to the directory where you have saved the script and issue the following command.

.\DownloadExportUsingAppCert.ps1 -tenantId "<tenant ID>" -appId "<App ID>" -certPath "<Certificate Path>" -caseId "<CaseID>" -exportId "<ExportID>" -path "<Output Path>"

Review the folder which you have specified as the Path to view the downloaded files.

Conclusion
Congratulations, you have now configured your environment to enable access to the eDiscovery APIs! It is a great opportunity to further explore the available Microsoft Purview eDiscovery REST API calls using the Microsoft.Graph PowerShell module. For a full list of API calls available, see Use the Microsoft Purview eDiscovery API. Stay tuned for future blog posts covering other aspects of the eDiscovery APIs and examples of how they can be used to automate existing eDiscovery workflows.

Automating Active Directory Domain Join in Azure
The journey to Azure is an exciting milestone for any organization, and our customer is no exception. With Microsoft assisting in migrating all application servers to the Azure IaaS platform, the goal was clear: make the migration seamless, error-free, efficient, and fast. While the customer had already laid some groundwork with Bicep scripts, we took it a step furtherârefactoring and enhancing these scripts to not only streamline the current process but also create a robust, scalable framework for their future in Azure. In this blog, weâll dive into one of the critical pieces of this automation puzzle: Active Directory Domain Join. We'll explore how we crafted a PowerShell script to automate this essential step, ensuring that every migrated server is seamlessly integrated into the Azure environment. Letâs get started! Step 1: List all the tasks or functionalities we want to achieve AD domain Join process in this script: Verify Local Administrative Rights: Ensure the current user has local admin rights required for installation and configuration. Check for Active Directory PowerShell Module: Confirm if the module is already installed. If not, install the module. Check Domain Join Status: Determine the current domain join status of the server. Validate Active Directory Ports Availability: Ensure necessary AD ports are open and accessible. Verify Domain Controller (DC) Availability: Confirm the availability of a domain controller. Test Network Connectivity: Check connectivity between the server and the domain controller. Retrieve Domain Admin Credentials: Securely prompt and retrieve credentials for a domain administrator account. Perform AD Join: Execute the Active Directory domain join operation. Create Log Files: Capture progress and errors in detailed log files for troubleshooting. Update Event Logs: Record key milestones in the Windows Event Log for auditing and monitoring. Step 2: In PowerShell scripting, functions play a crucial role in creating efficient, modular, and reusable code. By making scripts flexible and customizable, functions help streamline processes within a global scope. To simplify the AD domain-join process, I grouped related tasks into functions that achieve specific functionalities. For instance, tasks like checking the server's domain join status (point 3) and validating AD ports (point 4) can be combined into a single function, VM-Checks, as they both focus on verifying the local server's readiness. Similarly, we can define other functions such as AD-RSAT-Module, DC-Discovery, Check-DC-Connectivity, and Application-Log. For better organization, weâll divide all functions into two categories: Operation Functions: Functions responsible for executing the domain join process. Logging Functions: Functions dedicated to robust logging for progress tracking and error handling. Letâs start by building the operation functions. Step 1: Define the variables that we will be using in this script, like: $DomainName $SrvUsernameSecretName, $SrvPasswordSecretName $Creds . . . Step 2: We need a function to validate if the current user has local administrative rights, ensuring the script can perform privileged operations seamlessly. 
function Check-AdminRights { # Check if the current user is a member of the local Administrators group $isAdmin = [Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent() $isAdminRole = [Security.Principal.WindowsBuiltInRole]::Administrator if ($isAdmin.IsInRole($isAdminRole)) { Add-Content $progressLogFile "The current user has administrative privileges." return $true } else { $errorMessage = "Exiting script due to lack of administrative privileges." Add-Content $progressLogFile $errorMessage Write-ErrorLog $errorMessage Log-Failure -functionName "Check-AdminRights" -message $errorMessage exit } } Step 3: Validate the status of Active Directory Module. If it's not installed already, install using the below logic: . . if (Get-Module -ListAvailable -Name ActiveDirectory) { Add-Content $progressLogFile "Active Directory Module is already installed." Log-Success -functionName "InstallADModule" -message "Active Directory Module is already installed." } else { Add-Content $progressLogFile "Active Directory Module not found. Initializing installation." Add-WindowsFeature RSAT-AD-PowerShell -ErrorAction Stop Install-WindowsFeature RSAT-AD-PowerShell -ErrorAction Stop Import-Module ActiveDirectory -ErrorAction Stop Add-Content $progressLogFile "Active Directory Module imported successfully." Log-Success -functionName "InstallADModule" -message "Active Directory Module imported successfully." } . . Step 4: Next, we need to perform multiple checks on the local server, and if desired can be clubbed into a function. Check the current domain-join status: If the server is already joined to a domain, there's no need to join. So, use the below logic to exit the script . . $computerSystem = Get-WmiObject Win32_ComputerSystem if ($computerSystem.PartOfDomain) { Add-Content $progressLogFile "This machine is already joined to : $($computerSystem.Domain)." Log-Success -functionName "VM-Checks" -message "Machine is already joined to : $($computerSystem.Domain)." exit 0 } else { Add-Content $progressLogFile "This machine is part of the workgroup: $($computerSystem.Workgroup)." } . . Check the Active directory ports availability: Define parameters with the list of all ports that needs to be available for domain-join : param ( $ports = @( @{Port = 88; Protocol = "TCP"}, @{Port = 389; Protocol = "TCP"}, @{Port = 445; Protocol = "TCP"} ) ) Once you have the parameters defined, check the status of each port using the below sample code. . . foreach ($port in $ports) { try { $checkPort = Test-NetConnection -ComputerName $DomainController -Port $port.Port if ($checkPort.TcpTestSucceeded) { Add-Content $progressLogFile "Port $($port.Port) ($($port.Protocol)) is open." } else { throw "Port $($port.Port) ($($port.Protocol)) is closed." } } catch { $errorMessage = "$($_.Exception.Message) Please check firewall settings." Write-ErrorLog $errorMessage Log-Failure -functionName "VM-Checks" -message $errorMessage exit } } . . Step 5: Now, we need to find an available domain controller in the domain, to process the domain join request. . . try { $domainController = (Get-ADDomainController -DomainName $DomainName -Discover -ErrorAction Stop).HostName Add-Content $progressLogFile "Discovered domain controller: $domainController" Log-Success -functionName "Dc-Discovery" -message "Discovered domain controller $domainController." } catch { $errorMessage = "Failed to discover domain controller for $DomainName." 
Write-ErrorLog $errorMessage Log-Failure -functionName "Dc-Discovery" -message $errorMessage exit } . . Step 6: We need to perform connectivity and name resolution checks between the local server and the previously identified domain controller. For Network connectivity check, you can use this logic: if (Test-Connection -ComputerName $DomainController -Count 2 -Quiet) { Write-Host "Domain Controller $DomainController is reachable." -ForegroundColor Green Add-Content $progressLogFile "Domain Controller $DomainController is reachable." } else { $errorMessage = "Domain Controller $DomainController is not reachable." Write-ErrorLog $errorMessage exit } For DNS check, you can use the below logic: try { Resolve-DnsName -Name $DomainController -ErrorAction Stop Write-Host "DNS resolution for $DomainController successful." -ForegroundColor Green Add-Content $progressLogFile "DNS resolution for $DomainController successful." } catch { $errorMessage = "DNS resolution for $DomainController failed." Write-Host $errorMessage -ForegroundColor Red Write-ErrorLog $errorMessage Log-Failure -functionName "Dc-ConnectivityCheck" -message $errorMessage exit } To fully automate the domain-join process, itâs essential to retrieve and pass service account credentials within the script without any manual intervention. However, this comes with a critical responsibilityâensuring the security of the service account, as it holds privileged rights. Any compromise here could have serious repercussions for the entire environment. To address this, we leverage Azure Key Vault for secure storage and retrieval of credentials. By using Key Vault, we ensure that sensitive information remains protected while enabling seamless automation. P.S : In this blog, weâll focus on utilizing Azure Key Vault for this purpose. In the next post, weâll explore how to retrieve domain credentials from the CyberArk Password Vault using the same level of security and automation. Stay tuned! Step 7: We need to declare the variable to provide the "key vault name" where the service account credentials are stored. This should be done in Step 1: $KeyVaultName = "MTest-KV" The below code ensures that the Azure Key Vault PowerShell module is installed on the local server and if not present, then installs it: # Check if the Az.KeyVault module is installed if (-not (Get-Module -ListAvailable -Name Az.KeyVault)) { Add-Content $progressLogFile "Az.KeyVault module not found. Installing..." # Install the Az.KeyVault module if not found Install-Module -Name Az.KeyVault -Force -AllowClobber -Scope CurrentUser } else { Add-Content $progressLogFile "Az.KeyVault module is already installed." } Now, we'll create a function to retrieve the service account credentials from Azure Key Vault, assuming the logged-in user already has the necessary permissions to access the secrets stored in the Key Vault. function Get-ServiceAccount-Creds { param ( [string]$KeyVaultName, [string]$SrvUsernameSecretName, [string]$SrvPasswordSecretName ) Add-Content $progressLogFile "Initiating retrieval of credentials from vault." try { Add-Content $progressLogFile "Retrieving service account credentials from Azure Key Vault." 
# Authenticate to Azure using the VM's managed identity so we can access the Key Vault
Connect-AzAccount -Identity

# Retrieve the service account's username and password from Azure Key Vault
# (-AsPlainText returns the username as a string; the password is kept as a SecureString)
$SrvUsername = Get-AzKeyVaultSecret -VaultName $KeyVaultName -Name $SrvUsernameSecretName -AsPlainText
$SrvPassword = (Get-AzKeyVaultSecret -VaultName $KeyVaultName -Name $SrvPasswordSecretName).SecretValue

# Create a PSCredential object from the retrieved values
$Creds = New-Object System.Management.Automation.PSCredential($SrvUsername, $SrvPassword)

Add-Content $progressLogFile "Successfully retrieved service account credentials."
Log-Success -functionName "AD-DomainJoin" -message "Successfully retrieved credentials."
return $Creds
}
catch {
$errorMessage = "Error retrieving credentials from Azure Key Vault: $($_.Exception.Message)"
Add-Content $errorLogFile $errorMessage
Log-Failure -functionName "AD-DomainJoin" -message $errorMessage
exit
}
}

Step 8: We will now use the retrieved service account credentials to send a domain join request to the identified domain controller.

function Join-Domain {
    param (
        [string]$DomainName,
        [PSCredential]$Creds,
        [string]$DomainController
    )
    try {
        Add-Content $progressLogFile "Joining machine to domain: $DomainName via domain controller: $DomainController."
        # Perform the domain join against the specified domain controller
        Add-Computer -DomainName $DomainName -Credential $Creds -Server $DomainController -ErrorAction Stop
        # Log the result before restarting, as nothing after Restart-Computer will run
        Add-Content $progressLogFile "Successfully joined the machine to the domain via domain controller: $DomainController."
        Log-Success -functionName "AD-DomainJoin" -message "$env:COMPUTERNAME successfully joined $DomainName."
        Restart-Computer -Force -ErrorAction Stop
    }
    catch {
        # ${DomainController} is used so the following ':' is not parsed as part of the variable name
        $errorMessage = "Error joining machine to domain via domain controller ${DomainController}: $($_.Exception.Message)"
        Write-ErrorLog $errorMessage
        Log-Failure -functionName "AD-DomainJoin" -message $errorMessage
        Add-Content $progressLogFile "Domain join to $DomainName for $env:COMPUTERNAME failed. Check error log."
        exit
    }
}

Now that we've done all the heavy lifting with operational functions, let's talk about logging functions. During technical activities, especially complex ones, real-time progress monitoring and quick issue identification are essential. Robust logging improves visibility, simplifies troubleshooting, and ensures efficiency when something goes wrong. To achieve this, we'll implement two types of logs: a detailed progress log to track each step and an error log to capture issues separately. This approach provides a clear audit trail and makes post-execution analysis much easier. Let's see how we can implement this.

Step 1: Create log files, including the current timestamp in the variable declaration holding the log file path:

# Global Log Files
$progressLogFile = "C:\Logs\ProgressLog" + (Get-Date -Format yyyy-MM-dd_HH-m) + ".log"
$errorLogFile = "C:\Logs\ErrorLog" + (Get-Date -Format yyyy-MM-dd_HH-m) + ".log"

Note: I have included the timestamp in the file name; this allows us to capture the logs in separate files in case of multiple attempts. If you do not want multiple files and would rather overwrite the existing file, you can remove "+ (Get-Date -Format yyyy-MM-dd_HH-m) +" and it will create a single file named ProgressLog.log.
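One small addition worth making alongside these declarations: Add-Content will create a missing log file, but not a missing folder, so it is worth ensuring the log directory exists before the first write. A minimal guard, assuming the paths above:

# Ensure the log folder exists before the first Add-Content call
$logFolder = Split-Path $progressLogFile -Parent
if (-not (Test-Path -Path $logFolder)) {
    New-Item -Path $logFolder -ItemType Directory -Force | Out-Null
}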
Step 2: How to write events in the log files: To capture the occurrence of any event in the log file while building the PowerShell script, you can use the following code: For capturing progress in ProgressLog.log file, use: Add-Content $progressLogFile "This machine is part of the workgroup: $($computerSystem.Workgroup)." For capturing error occurrence in ErrorLog.log file we need to create a function: # Function to Write Error Log function Write-ErrorLog { param ( [string]$message ) Add-Content $errorLogFile "$message at $(Get-Date -Format 'HH:mm, dd-MMM-yyyy')." } We will call this function to capture the failure occurrence in the log file: $errorMessage = "Error while checking the domain: $($_.Exception.Message)" Write-ErrorLog $errorMessage Step 3: As we want to capture the milestones in Application event logs locally on the server as well, we create another function: # Function to Write to the Application Event Log function Write-ApplicationLog { param ( [string]$functionName, [string]$message, [int]$eventID, [string]$entryType ) # Ensure the event source exists if (-not (Get-EventLog -LogName Application -Source "BuildDomainJoin" -ErrorAction SilentlyContinue)) { New-EventLog -LogName Application -Source "BuildDomainJoin" -ErrorAction Stop } $formattedMessage = "$functionName : $message at $(Get-Date -Format 'HH:mm, dd-MMM-yyyy')." Write-EventLog -LogName Application -Source "BuildDomainJoin" -EventID $eventID -EntryType $entryType -Message $formattedMessage } To capture the success and failure events in Application event logs, we can create separate functions for each case. These functions can be called from other function(s) to capture the results. Step 4: Function for the success events: We will use Event Id 3011 to capture the success, by creating separate function. You can use any event Id of your choice but do due diligence to ensure that it does not conflict with any of the existing event ID functions. # Function to Log Success function Log-Success { param ( [string]$functionName, [string]$message ) Write-ApplicationLog -functionName $functionName -message "Success: $message" -eventID 3011 -entryType "Information" } Step 5: To capture failure events, weâll create a separate function that uses Event ID 3010. Ensure the chosen Event ID does not conflict with any existing Event ID functions. # Function to Log Failure function Log-Failure { param ( [string]$functionName, [string]$message ) Write-ApplicationLog -functionName $functionName -message "Failed: $message" -eventID 3010 -entryType "Error" } Step 6: How to call and use Log-Success function in script: In case of successful completion of any task, call the function to write success event in the application log. For example, I used the below code in "AD-RSAT-Module" function to report the successful completion of the module installation: Log-Success -functionName "AD-RSAT-Module" -message "RSAT-AD-PowerShell feature and Active Directory Module imported successfully." Step 7: How to call and use Log-Failure function in script: In case of a failure in any task, call the function to write failure event in the application log. For example, I used the below code in "AD-RSAT-Module" function to report the failure along with the error it failed with. 
It also stops further processing of the PowerShell script: $errorMessage = "Error during RSAT-AD-PowerShell feature installation or AD module import: $($_.Exception.Message)" Log-Failure -functionName "RSAT-ADModule-Installation" -message $errorMessage Exit With the implementation of an automated domain join solution, the process of integrating servers into Azure becomes more efficient and error-free. By leveraging PowerShell and Azure services, weâve laid the foundation for future-proof scalability, reducing manual intervention and increasing reliability. This approach sets the stage for further automation in the migration process, providing a seamless experience as the organization continues to grow in the cloud.
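To round this out, here is one way the pieces could be wired together at the bottom of the script. This is only a sketch: the function and variable names follow the examples in this post, while the intermediate checks (module installation, VM checks, domain controller discovery and connectivity tests) are shown as a placeholder comment because their exact names and signatures will depend on how you grouped them.

# Example "main" flow at the end of the script (illustrative only)
Check-AdminRights
# ...install/validate the AD module, run the local VM checks, discover a DC and test connectivity here...
$Creds = Get-ServiceAccount-Creds -KeyVaultName $KeyVaultName -SrvUsernameSecretName $SrvUsernameSecretName -SrvPasswordSecretName $SrvPasswordSecretName
Join-Domain -DomainName $DomainName -Creds $Creds -DomainController $DomainController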