Getting started with the eDiscovery APIs
The Microsoft Purview APIs for eDiscovery in Microsoft Graph enable organizations to automate repetitive tasks and integrate with their existing eDiscovery tools to build the repeatable workflows that industry regulations might require. Before you can make any calls to the Microsoft Purview APIs for eDiscovery, you must first register an app in the Microsoft identity platform (Microsoft Entra ID). An app can access data in two ways:

- Delegated access: an app acting on behalf of a signed-in user.
- App-only access: an app acting with its own identity.

For more information on access scenarios, see Authentication and authorization basics.

This article demonstrates how to configure the prerequisites required to enable access to the Microsoft Purview APIs for eDiscovery. It is based on app-only access to the APIs, using either a client secret or a self-signed certificate to authenticate requests.

The Microsoft Purview APIs for eDiscovery consist of two separate APIs:

- Microsoft Graph: part of the Microsoft.Graph.Security namespace and used for working with Microsoft Purview eDiscovery cases.
- MicrosoftPurviewEDiscovery: used exclusively to programmatically download the export package created by a Microsoft Purview eDiscovery export job.

Currently, the eDiscovery APIs in Microsoft Graph only work with eDiscovery (Premium) cases. For a list of the supported Microsoft Graph API calls, see Use the Microsoft Purview eDiscovery API.

Microsoft Graph API

Pre-requisites

Implementing app-only access involves registering an app in the Azure portal, creating a client secret or certificate, assigning API permissions, setting up a service principal, and then using app-only access to call the Microsoft Graph APIs. To register an app, create the client secret or certificate, and assign API permissions, the account must be at least a Cloud Application Administrator. For more information on registering an app in the Azure portal, see Register an application with the Microsoft identity platform.

Granting tenant-wide admin consent for Microsoft Purview eDiscovery API application permissions requires you to sign in as a user that is authorized to consent on behalf of the organization; see Grant tenant-wide admin consent to an application.

Setting up a service principal requires the following pre-requisites:

- A machine with the ExchangeOnlineManagement module installed
- An account that has the Role Management role assigned in Microsoft Purview; see Roles and role groups in Microsoft Defender for Office 365 and Microsoft Purview

Configuration steps

For detailed steps on implementing app-only access for Microsoft Purview eDiscovery, see Set up app-only access for Microsoft Purview eDiscovery.

Connecting to Microsoft Graph API using app-only access

Use the Connect-MgGraph cmdlet in PowerShell to authenticate and connect to Microsoft Graph using the app-only access method. This cmdlet enables your app to interact with Microsoft Graph securely and lets you explore the Microsoft Purview eDiscovery APIs.

Connecting via client secret

To connect using a client secret, update and run the following example PowerShell code.
```powershell
$clientSecret = "<client secret>"   ## Update with the client secret added to the registered app
$appID = "<APP ID>"                 ## Update with the Application ID of the registered/Enterprise app
$tenantId = "<Tenant ID>"           ## Update with the tenant ID

$clientSecretPW = ConvertTo-SecureString "$clientSecret" -AsPlainText -Force
$clientSecretCred = New-Object System.Management.Automation.PSCredential -ArgumentList ("$appID", $clientSecretPW)

Connect-MgGraph -TenantId "$tenantId" -ClientSecretCredential $clientSecretCred
```

Connecting via certificate

To connect using a certificate, update and run the following example PowerShell code.

```powershell
$certPath = "Cert:\CurrentUser\My\<xxxxxxxxxx>"   ## Update with the certificate thumbprint
$appID = "<APP ID>"                               ## Update with the Application ID of the registered/Enterprise app
$tenantId = "<Tenant ID>"                         ## Update with the tenant ID

$clientCert = Get-ChildItem $certPath

Connect-MgGraph -TenantId $tenantId -ClientId $appID -Certificate $clientCert
```

Invoke Microsoft Graph API calls

Once connected, you can start making calls to the Microsoft Graph API. For example, let's look at listing the eDiscovery cases within the tenant; see List ediscoveryCases. Within the documentation, each operation lists the following information:

- Permissions required to make the API call
- HTTP request and method
- Request header and body information
- Response
- Examples (HTTP, C#, CLI, Go, Java, PHP, PowerShell, Python)

As we are connected via the Microsoft Graph PowerShell module, we can use either the HTTP requests or the eDiscovery-specific cmdlets within the module.

First, let's look at the PowerShell cmdlet example, Get-MgSecurityCaseEdiscoveryCase, which returns a list of all the cases within the tenant. When delving deeper into a case, it is important to record the case ID, as you will use it in future calls.

Then we can look at the HTTP example, using the Invoke-MgGraphRequest cmdlet to make the call via PowerShell. First, store the URL in a variable as below.

```powershell
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases"
```

Then use the Invoke-MgGraphRequest cmdlet to make the API call.

```powershell
Invoke-MgGraphRequest -Method Get -Uri $uri
```

From the returned response, we need to extract the values. This can be done by saving the value elements of the response to a new variable using the following command.

```powershell
$cases = (Invoke-MgGraphRequest -Method Get -Uri $uri).value
```

This returns a collection of hashtables; optionally, you can run a small bit of PowerShell code to convert the hashtables into PSObjects for easier use with cmdlets such as Format-Table and Format-List.

```powershell
$CasesAsObjects = @()
foreach ($i in $cases) { $CasesAsObjects += [pscustomobject]$i }
```
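For example, assuming the standard ediscoveryCase properties returned by the API (such as displayName, status, and createdDateTime), you could then display a quick summary of the converted objects:

```powershell
# Display a summary of the cases; property names assume the ediscoveryCase resource type
$CasesAsObjects | Format-Table displayName, status, createdDateTime
```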
MicrosoftPurviewEDiscovery API

You can also configure the MicrosoftPurviewEDiscovery API to enable the programmatic download of export packages and the item report from an export job in a Microsoft Purview eDiscovery case.

Pre-requisites

Prior to executing the configuration steps in this section, it is assumed that you have completed and validated the configuration detailed in the Microsoft Graph API section. The previously registered app in Entra ID will be extended to include the required permissions to achieve programmatic download of the export package. This already provides the following pre-requisites:

- Registered app in the Azure portal configured with the appropriate client secret/certificate
- Service principal in Microsoft Purview assigned the relevant eDiscovery roles
- Microsoft eDiscovery API permissions configured for Microsoft Graph

To extend the existing registered app's API permissions to enable programmatic download, the following steps must be completed:

- Register a new Microsoft application and service principal in the tenant
- Assign additional API permissions to the previously registered app in the Azure portal

Granting tenant-wide admin consent for MicrosoftPurviewEDiscovery API application permissions requires you to sign in as a user that is authorized to consent on behalf of the organization; see Grant tenant-wide admin consent to an application.

Configuration steps

Step 1 – Register the MicrosoftPurviewEDiscovery app in Entra ID

First, validate that the MicrosoftPurviewEDiscovery app is not already registered by logging into the Azure portal and browsing to Microsoft Entra ID > Enterprise Applications. Change the application type filter to show Microsoft Applications and, in the search box, enter MicrosoftPurviewEDiscovery. If this returns a result, the app is already registered and you can move to step 2. If the search returns no results, proceed with registering the app in Entra ID.

The Microsoft.Graph PowerShell module can be used to register the MicrosoftPurviewEDiscovery app in Entra ID; see Install the Microsoft Graph PowerShell SDK. Once installed on a machine, run the following cmdlet to connect to Microsoft Graph via PowerShell.

```powershell
Connect-MgGraph -Scopes "Application.ReadWrite.All"
```

If this is the first time using the Microsoft.Graph PowerShell cmdlets, you may be prompted to consent to the requested permissions.

To register the MicrosoftPurviewEDiscovery app, run the following PowerShell commands.

```powershell
$spId = @{ "AppId" = "b26e684c-5068-4120-a679-64a5d2c909d9" }
New-MgServicePrincipal -BodyParameter $spId
```

Step 2 – Assign additional MicrosoftPurviewEDiscovery permissions to the registered app

Now that the service principal has been added, you can update the permissions on the app you registered in the Microsoft Graph API section of this document. Log into the Azure portal and browse to Microsoft Entra ID > App Registrations. Find and select the app you created in the Microsoft Graph API section. Select API permissions from the navigation menu. Select Add a permission and then APIs my organization uses. Search for MicrosoftPurviewEDiscovery and select it. Then select Application permissions, select the check box for eDiscovery.Download.Read, and select Add permissions. You will be returned to the API permissions screen, where you must select Grant admin consent to approve the newly added permissions.

At this point, the API permissions screen shows that the User.Read Microsoft Graph permission has been added with admin consent granted, and that the eDiscovery.Download.Read MicrosoftPurviewEDiscovery application permission has been added but admin consent has not yet been granted. Once admin consent is granted, the status of the newly added permission updates to "Granted for...".

Downloading the export packages and reports

Retrieving the case ID and export job ID

To successfully download the export packages and reports of an export job in an eDiscovery case, you must first retrieve the case ID and the operation/job ID for the export job.
To gather this information via the Microsoft Purview portal, open the eDiscovery case, locate the export job, and select Copy support information, then paste this information into Notepad. The support information includes the case ID, job ID, job state, created by, created timestamp, completed timestamp, and support information generation time.

To access this information programmatically, you can make the following Graph API calls to locate the case ID and the job ID you wish to export. First, connect to Microsoft Graph using the steps detailed in the previous section, "Connecting to Microsoft Graph API using app-only access".

Using the eDiscovery Graph PowerShell cmdlets, you can use the following command if you know the case name.

```powershell
Get-MgSecurityCaseEdiscoveryCase | Where-Object { $_.DisplayName -eq "<Name of case>" }
```

Once you have the case ID, you can look up the operations in the case to identify the job ID for the export using the following command.

```powershell
Get-MgSecurityCaseEdiscoveryCaseOperation -EdiscoveryCaseId "<case ID>"
```

Export jobs are logged under an action of either exportResult (direct export) or contentExport (export from a review set). The names of the export jobs are not returned by this API call; to find the name of an export job, you must query the specific operation ID. This can be achieved using the following command.

```powershell
Get-MgSecurityCaseEdiscoveryCaseOperation -EdiscoveryCaseId "<case ID>" -CaseOperationId "<operation ID>"
```

The name of the export operation is contained within the AdditionalProperties property.

If you wish to make the HTTP API calls directly to list cases in the tenant, see List ediscoveryCases - Microsoft Graph v1.0 | Microsoft Learn. If you wish to make the HTTP API calls directly to list the operations for a case, see List caseOperations - Microsoft Graph v1.0 | Microsoft Learn. You will need to use the case ID in the API call to indicate which case you wish to list the operations from. For example:

https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/<CaseID>/operations/

The names of the export jobs are not returned by this API call; to find the name of an export job, you must query the specific job ID. For example:

https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/<CaseID>/operations/<OperationID>

Downloading the export package

Retrieving the download URLs for export packages

The URLs required to download the export packages and reports are contained within a property called exportFileMetadata. To retrieve this information, we need to know the case ID of the eDiscovery case that the export job was run in, as well as the operation ID for the export job. Using the eDiscovery Graph PowerShell cmdlets, you can retrieve this property using the following commands.

```powershell
$Operation = Get-MgSecurityCaseEdiscoveryCaseOperation -EdiscoveryCaseId "<case ID>" -CaseOperationId "<operation ID>"
$Operation.AdditionalProperties.exportFileMetadata
```

If you wish to make the HTTP API calls directly to return the exportFileMetadata for an operation, see List caseOperations - Microsoft Graph v1.0 | Microsoft Learn.

For each export package visible in the Microsoft Purview portal there will be an entry in the exportFileMetadata property. Each entry lists the following:

- The export package file name
- The downloadUrl to retrieve the export package
- The size of the export package
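For example, once $Operation has been retrieved as above, a quick way to review what is available for download is to list each file name and size; fileName, size, and downloadUrl are the entry property names also used by the example download scripts later in this article:

```powershell
# List each export package entry with its size; downloadUrl is used later for the actual download
$Operation.AdditionalProperties.exportFileMetadata | ForEach-Object {
    "{0} - {1} bytes" -f $_.fileName, $_.size
}
```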
Example scripts to download the export package

Because the MicrosoftPurviewEDiscovery API is separate from the Microsoft Graph API, it requires a separate authentication token to authorize the download request. As a result, you must use the MSAL.PS PowerShell module and the Get-MsalToken cmdlet to acquire a separate token in addition to connecting to the Microsoft Graph APIs via the Connect-MgGraph cmdlet. The following example scripts can be used as a reference when developing your own scripts to enable the programmatic download of the export packages.

Connecting with a client secret

If you have configured your app to use a client secret, you can use the following example script for reference to download the export package and reports programmatically. Copy the contents into Notepad and save it as DownloadExportUsingApp.ps1.

```powershell
[CmdletBinding()]
param (
    [Parameter(Mandatory = $true)]
    [string]$tenantId,
    [Parameter(Mandatory = $true)]
    [string]$appId,
    [Parameter(Mandatory = $true)]
    [string]$appSecret,
    [Parameter(Mandatory = $true)]
    [string]$caseId,
    [Parameter(Mandatory = $true)]
    [string]$exportId,
    [Parameter(Mandatory = $true)]
    [string]$path,
    [ValidateSet($null, 'USGov', 'USGovDoD')]
    [string]$environment = $null
)

# Ensure the required modules are available
if (-not (Get-Module -Name Microsoft.Graph -ListAvailable)) {
    Write-Host "Installing Microsoft.Graph module"
    Install-Module Microsoft.Graph -Scope CurrentUser
}
if (-not (Get-Module -Name MSAL.PS -ListAvailable)) {
    Write-Host "Installing MSAL.PS module"
    Install-Module MSAL.PS -Scope CurrentUser
}

$password = ConvertTo-SecureString $appSecret -AsPlainText -Force
$clientSecretCred = New-Object System.Management.Automation.PSCredential -ArgumentList ($appId, $password)

if (-not (Get-MgContext)) {
    Write-Host "Connect with credentials of an eDiscovery admin (token for Graph)"
    if (-not ($environment)) {
        Connect-MgGraph -TenantId $tenantId -ClientSecretCredential $clientSecretCred
    }
    else {
        Connect-MgGraph -TenantId $tenantId -ClientSecretCredential $clientSecretCred -Environment $environment
    }
}

Write-Host "Connect with credentials of an eDiscovery admin (token for export)"
$exportToken = Get-MsalToken -ClientId $appId -Scopes "b26e684c-5068-4120-a679-64a5d2c909d9/.default" -TenantId $tenantId -RedirectUri "http://localhost" -ClientSecret $password

$uri = "/v1.0/security/cases/ediscoveryCases/$($caseId)/operations/$($exportId)"
$export = Invoke-MgGraphRequest -Uri $uri

if (-not ($export)) {
    Write-Host "Export not found"
    exit
}
else {
    $export.exportFileMetadata | ForEach-Object {
        Write-Host "Downloading $($_.fileName)"
        Invoke-WebRequest -Uri $_.downloadUrl -OutFile "$($path)\$($_.fileName)" -Headers @{ "Authorization" = "Bearer $($exportToken.AccessToken)"; "X-AllowWithAADToken" = "true" }
    }
}
```

Once saved, open a new PowerShell window that has the following PowerShell modules installed:

- Microsoft.Graph
- MSAL.PS

Browse to the directory where you saved the script and issue the following command.

```powershell
.\DownloadExportUsingApp.ps1 -tenantId "<tenant ID>" -appId "<App ID>" -appSecret "<Client Secret>" -caseId "<CaseID>" -exportId "<ExportID>" -path "<Output Path>"
```

Review the folder you specified as the path to view the downloaded files.

Connecting with a certificate

If you have configured your app to use a certificate, you can use the following example script for reference to download the export package and reports programmatically. Copy the contents into Notepad and save it as DownloadExportUsingAppCert.ps1.
```powershell
[CmdletBinding()]
param (
    [Parameter(Mandatory = $true)]
    [string]$tenantId,
    [Parameter(Mandatory = $true)]
    [string]$appId,
    [Parameter(Mandatory = $true)]
    [string]$certPath,
    [Parameter(Mandatory = $true)]
    [string]$caseId,
    [Parameter(Mandatory = $true)]
    [string]$exportId,
    [Parameter(Mandatory = $true)]
    [string]$path,
    [ValidateSet($null, 'USGov', 'USGovDoD')]
    [string]$environment = $null
)

# Ensure the required modules are available
if (-not (Get-Module -Name Microsoft.Graph -ListAvailable)) {
    Write-Host "Installing Microsoft.Graph module"
    Install-Module Microsoft.Graph -Scope CurrentUser
}
if (-not (Get-Module -Name MSAL.PS -ListAvailable)) {
    Write-Host "Installing MSAL.PS module"
    Install-Module MSAL.PS -Scope CurrentUser
}

# Load the certificate used to authenticate the app
$clientCert = Get-ChildItem $certPath

if (-not (Get-MgContext)) {
    Write-Host "Connect with credentials of an eDiscovery admin (token for Graph)"
    if (-not ($environment)) {
        Connect-MgGraph -TenantId $tenantId -ClientId $appId -Certificate $clientCert
    }
    else {
        Connect-MgGraph -TenantId $tenantId -ClientId $appId -Certificate $clientCert -Environment $environment
    }
}

Write-Host "Connect with credentials of an eDiscovery admin (token for export)"
$connectionDetails = @{
    'TenantId'          = $tenantId
    'ClientId'          = $appId
    'ClientCertificate' = $clientCert
    'Scope'             = "b26e684c-5068-4120-a679-64a5d2c909d9/.default"
}
$exportToken = Get-MsalToken @connectionDetails

$uri = "/v1.0/security/cases/ediscoveryCases/$($caseId)/operations/$($exportId)"
$export = Invoke-MgGraphRequest -Uri $uri

if (-not ($export)) {
    Write-Host "Export not found"
    exit
}
else {
    $export.exportFileMetadata | ForEach-Object {
        Write-Host "Downloading $($_.fileName)"
        Invoke-WebRequest -Uri $_.downloadUrl -OutFile "$($path)\$($_.fileName)" -Headers @{ "Authorization" = "Bearer $($exportToken.AccessToken)"; "X-AllowWithAADToken" = "true" }
    }
}
```

Once saved, open a new PowerShell window that has the following PowerShell modules installed:

- Microsoft.Graph
- MSAL.PS

Browse to the directory where you saved the script and issue the following command.

```powershell
.\DownloadExportUsingAppCert.ps1 -tenantId "<tenant ID>" -appId "<App ID>" -certPath "<Certificate Path>" -caseId "<CaseID>" -exportId "<ExportID>" -path "<Output Path>"
```

Review the folder you specified as the path to view the downloaded files.

Conclusion

Congratulations, you have now configured your environment to enable access to the eDiscovery APIs! This is a great opportunity to further explore the available Microsoft Purview eDiscovery REST API calls using the Microsoft.Graph PowerShell module. For a full list of the API calls available, see Use the Microsoft Purview eDiscovery API. Stay tuned for future blog posts covering other aspects of the eDiscovery APIs and examples of how they can be used to automate existing eDiscovery workflows.

Automating Active Directory Domain Join in Azure
The journey to Azure is an exciting milestone for any organization, and our customer is no exception. With Microsoft assisting in migrating all application servers to the Azure IaaS platform, the goal was clear: make the migration seamless, error-free, efficient, and fast. While the customer had already laid some groundwork with Bicep scripts, we took it a step further, refactoring and enhancing these scripts to not only streamline the current process but also create a robust, scalable framework for their future in Azure.

In this blog, we'll dive into one of the critical pieces of this automation puzzle: Active Directory domain join. We'll explore how we crafted a PowerShell script to automate this essential step, ensuring that every migrated server is seamlessly integrated into the Azure environment. Let's get started!

Step 1: List all the tasks or functionalities we want the AD domain join process in this script to achieve:

1. Verify local administrative rights: Ensure the current user has the local admin rights required for installation and configuration.
2. Check for the Active Directory PowerShell module: Confirm whether the module is already installed. If not, install the module.
3. Check domain join status: Determine the current domain join status of the server.
4. Validate Active Directory port availability: Ensure the necessary AD ports are open and accessible.
5. Verify domain controller (DC) availability: Confirm the availability of a domain controller.
6. Test network connectivity: Check connectivity between the server and the domain controller.
7. Retrieve domain admin credentials: Securely retrieve credentials for a domain administrator account.
8. Perform AD join: Execute the Active Directory domain join operation.
9. Create log files: Capture progress and errors in detailed log files for troubleshooting.
10. Update event logs: Record key milestones in the Windows Event Log for auditing and monitoring.

Step 2: In PowerShell scripting, functions play a crucial role in creating efficient, modular, and reusable code. By making scripts flexible and customizable, functions help streamline the overall flow. To simplify the AD domain-join process, I grouped related tasks into functions that each achieve a specific piece of functionality. For instance, tasks like checking the server's domain join status (point 3) and validating AD ports (point 4) can be combined into a single function, VM-Checks, as they both focus on verifying the local server's readiness. Similarly, we can define other functions such as AD-RSAT-Module, DC-Discovery, Check-DC-Connectivity, and Application-Log. For better organization, we'll divide all functions into two categories:

- Operation functions: functions responsible for executing the domain join process.
- Logging functions: functions dedicated to robust logging for progress tracking and error handling.

Let's start by building the operation functions.

Step 1: Define the variables that we will be using in this script, like:

$DomainName
$SrvUsernameSecretName, $SrvPasswordSecretName
$Creds
. . .
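For illustration only, a minimal sketch of what these declarations might look like; the variable names come from this article, but the sample values and secret names are placeholders rather than the customer's actual configuration:

```powershell
# Hypothetical values for illustration; replace with your own environment details
$DomainName            = "contoso.local"        # Target AD domain to join (placeholder)
$KeyVaultName          = "MTest-KV"              # Azure Key Vault holding the service account secrets
$SrvUsernameSecretName = "DomainJoinUser"        # Secret name for the service account username (placeholder)
$SrvPasswordSecretName = "DomainJoinPassword"    # Secret name for the service account password (placeholder)
$Creds                 = $null                   # Populated later by Get-ServiceAccount-Creds
```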
Step 2: We need a function to validate that the current user has local administrative rights, ensuring the script can perform privileged operations.

```powershell
function Check-AdminRights {
    # Check if the current user is a member of the local Administrators group
    $isAdmin = [Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()
    $isAdminRole = [Security.Principal.WindowsBuiltInRole]::Administrator

    if ($isAdmin.IsInRole($isAdminRole)) {
        Add-Content $progressLogFile "The current user has administrative privileges."
        return $true
    }
    else {
        $errorMessage = "Exiting script due to lack of administrative privileges."
        Add-Content $progressLogFile $errorMessage
        Write-ErrorLog $errorMessage
        Log-Failure -functionName "Check-AdminRights" -message $errorMessage
        exit
    }
}
```

Step 3: Validate the status of the Active Directory module. If it's not installed already, install it using the below logic:

```powershell
# ...
if (Get-Module -ListAvailable -Name ActiveDirectory) {
    Add-Content $progressLogFile "Active Directory Module is already installed."
    Log-Success -functionName "InstallADModule" -message "Active Directory Module is already installed."
}
else {
    Add-Content $progressLogFile "Active Directory Module not found. Initializing installation."
    # Install the RSAT AD PowerShell feature and import the module
    Install-WindowsFeature RSAT-AD-PowerShell -ErrorAction Stop
    Import-Module ActiveDirectory -ErrorAction Stop
    Add-Content $progressLogFile "Active Directory Module imported successfully."
    Log-Success -functionName "InstallADModule" -message "Active Directory Module imported successfully."
}
# ...
```

Step 4: Next, we need to perform multiple checks on the local server, which, if desired, can be combined into a single function.

Check the current domain join status: if the server is already joined to a domain, there's no need to join again, so use the below logic to exit the script.

```powershell
# ...
$computerSystem = Get-WmiObject Win32_ComputerSystem
if ($computerSystem.PartOfDomain) {
    Add-Content $progressLogFile "This machine is already joined to: $($computerSystem.Domain)."
    Log-Success -functionName "VM-Checks" -message "Machine is already joined to: $($computerSystem.Domain)."
    exit 0
}
else {
    Add-Content $progressLogFile "This machine is part of the workgroup: $($computerSystem.Workgroup)."
}
# ...
```

Check Active Directory port availability: define a parameter with the list of all ports that need to be reachable for domain join.

```powershell
param (
    $ports = @(
        @{ Port = 88;  Protocol = "TCP" },
        @{ Port = 389; Protocol = "TCP" },
        @{ Port = 445; Protocol = "TCP" }
    )
)
```

Once you have the parameters defined, check the status of each port using the below sample code.

```powershell
# ...
foreach ($port in $ports) {
    try {
        $checkPort = Test-NetConnection -ComputerName $DomainController -Port $port.Port
        if ($checkPort.TcpTestSucceeded) {
            Add-Content $progressLogFile "Port $($port.Port) ($($port.Protocol)) is open."
        }
        else {
            throw "Port $($port.Port) ($($port.Protocol)) is closed."
        }
    }
    catch {
        $errorMessage = "$($_.Exception.Message) Please check firewall settings."
        Write-ErrorLog $errorMessage
        Log-Failure -functionName "VM-Checks" -message $errorMessage
        exit
    }
}
# ...
```

Step 5: Now we need to find an available domain controller in the domain to process the domain join request.

```powershell
# ...
try {
    $domainController = (Get-ADDomainController -DomainName $DomainName -Discover -ErrorAction Stop).HostName
    Add-Content $progressLogFile "Discovered domain controller: $domainController"
    Log-Success -functionName "Dc-Discovery" -message "Discovered domain controller $domainController."
}
catch {
    $errorMessage = "Failed to discover domain controller for $DomainName."
    Write-ErrorLog $errorMessage
    Log-Failure -functionName "Dc-Discovery" -message $errorMessage
    exit
}
# ...
```
Step 6: We need to perform connectivity and name resolution checks between the local server and the previously identified domain controller.

For the network connectivity check, you can use this logic:

```powershell
if (Test-Connection -ComputerName $DomainController -Count 2 -Quiet) {
    Write-Host "Domain Controller $DomainController is reachable." -ForegroundColor Green
    Add-Content $progressLogFile "Domain Controller $DomainController is reachable."
}
else {
    $errorMessage = "Domain Controller $DomainController is not reachable."
    Write-ErrorLog $errorMessage
    exit
}
```

For the DNS check, you can use the below logic:

```powershell
try {
    Resolve-DnsName -Name $DomainController -ErrorAction Stop
    Write-Host "DNS resolution for $DomainController successful." -ForegroundColor Green
    Add-Content $progressLogFile "DNS resolution for $DomainController successful."
}
catch {
    $errorMessage = "DNS resolution for $DomainController failed."
    Write-Host $errorMessage -ForegroundColor Red
    Write-ErrorLog $errorMessage
    Log-Failure -functionName "Dc-ConnectivityCheck" -message $errorMessage
    exit
}
```

To fully automate the domain-join process, it's essential to retrieve and pass service account credentials within the script without any manual intervention. However, this comes with a critical responsibility: ensuring the security of the service account, as it holds privileged rights. Any compromise here could have serious repercussions for the entire environment. To address this, we leverage Azure Key Vault for secure storage and retrieval of credentials. By using Key Vault, we ensure that sensitive information remains protected while enabling seamless automation.

P.S.: In this blog, we'll focus on utilizing Azure Key Vault for this purpose. In the next post, we'll explore how to retrieve domain credentials from the CyberArk Password Vault with the same level of security and automation. Stay tuned!

Step 7: We need to declare a variable to provide the Key Vault name where the service account credentials are stored. This should be done in Step 1:

```powershell
$KeyVaultName = "MTest-KV"
```

The below code ensures that the Azure Key Vault PowerShell module is installed on the local server and, if not present, installs it:

```powershell
# Check if the Az.KeyVault module is installed
if (-not (Get-Module -ListAvailable -Name Az.KeyVault)) {
    Add-Content $progressLogFile "Az.KeyVault module not found. Installing..."
    # Install the Az.KeyVault module if not found
    Install-Module -Name Az.KeyVault -Force -AllowClobber -Scope CurrentUser
}
else {
    Add-Content $progressLogFile "Az.KeyVault module is already installed."
}
```
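The credential retrieval below connects with Connect-AzAccount -Identity, so it is the VM's managed identity, rather than an interactive user, that needs access to the secrets. As a hedged illustration, if your vault uses access policies and the VM has a system-assigned managed identity, granting secret read access might look like the following (the resource group and VM names are placeholders; an RBAC-enabled vault would use an Azure role assignment instead):

```powershell
# Placeholder names; run from an account that is allowed to manage the Key Vault
$vmIdentity = (Get-AzVM -ResourceGroupName "MyResourceGroup" -Name "MyVM").Identity.PrincipalId

# Allow the VM's managed identity to read secrets from the vault
Set-AzKeyVaultAccessPolicy -VaultName $KeyVaultName -ObjectId $vmIdentity -PermissionsToSecrets Get,List
```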
Now we'll create a function to retrieve the service account credentials from Azure Key Vault, assuming the identity the script runs under already has the necessary permissions to access the secrets stored in the Key Vault.

```powershell
function Get-ServiceAccount-Creds {
    param (
        [string]$KeyVaultName,
        [string]$SrvUsernameSecretName,
        [string]$SrvPasswordSecretName
    )
    Add-Content $progressLogFile "Initiating retrieval of credentials from vault."
    try {
        Add-Content $progressLogFile "Retrieving service account credentials from Azure Key Vault."

        # Authenticate to Azure Key Vault using the VM's managed identity
        Connect-AzAccount -Identity

        # Retrieve the service account's username and password from Azure Key Vault
        # (-AsPlainText avoids the deprecated SecretValueText property)
        $SrvUsername = Get-AzKeyVaultSecret -VaultName $KeyVaultName -Name $SrvUsernameSecretName -AsPlainText
        $SrvPassword = Get-AzKeyVaultSecret -VaultName $KeyVaultName -Name $SrvPasswordSecretName -AsPlainText

        # Create a PSCredential object
        $SecurePassword = ConvertTo-SecureString $SrvPassword -AsPlainText -Force
        $Creds = New-Object System.Management.Automation.PSCredential($SrvUsername, $SecurePassword)

        Add-Content $progressLogFile "Successfully retrieved service account credentials."
        Log-Success -functionName "AD-DomainJoin" -message "Successfully retrieved credentials."
        return $Creds
    }
    catch {
        $errorMessage = "Error retrieving credentials from Azure Key Vault: $($_.Exception.Message)"
        Add-Content $errorLogFile $errorMessage
        Log-Failure -functionName "AD-DomainJoin" -message $errorMessage
        exit
    }
}
```

Step 8: We will now use the retrieved service account credentials to send a domain join request to the identified domain controller.

```powershell
function Join-Domain {
    param (
        [string]$DomainName,
        [PSCredential]$Creds,
        [string]$DomainController
    )
    try {
        Add-Content $progressLogFile "Joining machine to domain: $DomainName via domain controller: $DomainController."

        # Perform the domain join, specifying the domain controller
        Add-Computer -DomainName $DomainName -Credential $Creds -Server $DomainController -ErrorAction Stop

        Add-Content $progressLogFile "Successfully joined the machine to the domain via domain controller: $DomainController."
        Log-Success -functionName "AD-DomainJoin" -message "$ComputerName successfully joined $DomainName."

        # Restart after logging so the success entries are written before the reboot
        Restart-Computer -Force -ErrorAction Stop
    }
    catch {
        $errorMessage = "Error joining machine to domain via domain controller $($DomainController): $($_.Exception.Message)"
        Write-ErrorLog $errorMessage
        Log-Failure -functionName "AD-DomainJoin" -message $errorMessage
        Add-Content $progressLogFile "Domain join to $DomainName for $ComputerName failed. Check error log."
        exit
    }
}
```

Now that we've done all the heavy lifting with the operation functions, let's talk about logging functions. During technical activities, especially complex ones, real-time progress monitoring and quick issue identification are essential. Robust logging improves visibility, simplifies troubleshooting, and ensures efficiency when something goes wrong. To achieve this, we'll implement two types of logs: a detailed progress log to track each step and an error log to capture issues separately. This approach provides a clear audit trail and makes post-execution analysis much easier. Let's see how we can implement this.

Step 1: Create the log files, including the current timestamp in the variable declarations holding the log file paths:

```powershell
# Global Log Files
$progressLogFile = "C:\Logs\ProgressLog" + (Get-Date -Format yyyy-MM-dd_HH-mm) + ".log"
$errorLogFile = "C:\Logs\ErrorLog" + (Get-Date -Format yyyy-MM-dd_HH-mm) + ".log"
```

Note: I have included the timestamp in the file name; this allows us to capture the logs in separate files in case of multiple attempts. If you do not want multiple files and would rather overwrite the existing file, you can remove "+ (Get-Date -Format yyyy-MM-dd_HH-mm) +" and it will create a single file named ProgressLog.log.
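One assumption worth calling out: Add-Content can create the log file itself, but it will fail if the C:\Logs folder does not exist. A small, optional guard you could place alongside the declarations above:

```powershell
# Create the log folder if it does not already exist (Add-Content will not create missing directories)
$logFolder = "C:\Logs"
if (-not (Test-Path -Path $logFolder)) {
    New-Item -Path $logFolder -ItemType Directory -Force | Out-Null
}
```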
Step 2: How to write events to the log files.

To capture the occurrence of any event in the log files while building the PowerShell script, you can use the following code.

For capturing progress in the ProgressLog.log file, use:

```powershell
Add-Content $progressLogFile "This machine is part of the workgroup: $($computerSystem.Workgroup)."
```

For capturing errors in the ErrorLog.log file, we need to create a function:

```powershell
# Function to Write Error Log
function Write-ErrorLog {
    param (
        [string]$message
    )
    Add-Content $errorLogFile "$message at $(Get-Date -Format 'HH:mm, dd-MMM-yyyy')."
}
```

We will call this function to capture a failure in the log file:

```powershell
$errorMessage = "Error while checking the domain: $($_.Exception.Message)"
Write-ErrorLog $errorMessage
```

Step 3: As we also want to capture the milestones in the Application event log locally on the server, we create another function:

```powershell
# Function to Write to the Application Event Log
function Write-ApplicationLog {
    param (
        [string]$functionName,
        [string]$message,
        [int]$eventID,
        [string]$entryType
    )
    # Ensure the event source exists
    if (-not (Get-EventLog -LogName Application -Source "BuildDomainJoin" -ErrorAction SilentlyContinue)) {
        New-EventLog -LogName Application -Source "BuildDomainJoin" -ErrorAction Stop
    }
    $formattedMessage = "$functionName : $message at $(Get-Date -Format 'HH:mm, dd-MMM-yyyy')."
    Write-EventLog -LogName Application -Source "BuildDomainJoin" -EventID $eventID -EntryType $entryType -Message $formattedMessage
}
```

To capture the success and failure events in the Application event log, we can create separate functions for each case. These functions can be called from the other functions to capture the results.

Step 4: Function for the success events. We will use event ID 3011 to capture success by creating a separate function. You can use any event ID of your choice, but do your due diligence to ensure that it does not conflict with any existing event IDs.

```powershell
# Function to Log Success
function Log-Success {
    param (
        [string]$functionName,
        [string]$message
    )
    Write-ApplicationLog -functionName $functionName -message "Success: $message" -eventID 3011 -entryType "Information"
}
```

Step 5: To capture failure events, we'll create a separate function that uses event ID 3010. Again, ensure the chosen event ID does not conflict with any existing event IDs.

```powershell
# Function to Log Failure
function Log-Failure {
    param (
        [string]$functionName,
        [string]$message
    )
    Write-ApplicationLog -functionName $functionName -message "Failed: $message" -eventID 3010 -entryType "Error"
}
```

Step 6: How to call and use the Log-Success function in the script. On successful completion of any task, call the function to write a success event to the Application log. For example, I used the below code in the "AD-RSAT-Module" function to report the successful completion of the module installation:

```powershell
Log-Success -functionName "AD-RSAT-Module" -message "RSAT-AD-PowerShell feature and Active Directory Module imported successfully."
```

Step 7: How to call and use the Log-Failure function in the script. In case of a failure in any task, call the function to write a failure event to the Application log. For example, I used the below code in the "AD-RSAT-Module" function to report the failure along with the error it failed with. It also stops further processing of the PowerShell script:

```powershell
$errorMessage = "Error during RSAT-AD-PowerShell feature installation or AD module import: $($_.Exception.Message)"
Log-Failure -functionName "RSAT-ADModule-Installation" -message $errorMessage
Exit
```
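Putting it all together: this post does not show the main body of the script, so purely as a hedged sketch, the functions described above might be called in an order like the following. The function and parameter names are the ones used in this article, but the exact wiring, return values, and ordering in the customer's script may differ.

```powershell
# Hypothetical main flow; assumes each function exits on failure and sets or returns the values it discovers
Check-AdminRights                        # 1. Verify local admin rights
AD-RSAT-Module                           # 2. Ensure the Active Directory PowerShell module is installed
$DomainController = DC-Discovery         # 3. Discover an available domain controller (assumed to return its hostname)
VM-Checks                                # 4. Domain join status and AD port checks against $DomainController
Check-DC-Connectivity                    # 5. Ping and DNS resolution checks against $DomainController
$Creds = Get-ServiceAccount-Creds -KeyVaultName $KeyVaultName -SrvUsernameSecretName $SrvUsernameSecretName -SrvPasswordSecretName $SrvPasswordSecretName
Join-Domain -DomainName $DomainName -Creds $Creds -DomainController $DomainController
```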
With the implementation of an automated domain join solution, the process of integrating servers into Azure becomes more efficient and far less error-prone. By leveraging PowerShell and Azure services, we've laid the foundation for future-proof scalability, reducing manual intervention and increasing reliability. This approach sets the stage for further automation in the migration process, providing a seamless experience as the organization continues to grow in the cloud.