Sensitivity Auto-labelling via Document Property
Why is this needed?

Sensitivity labels are generally relevant within an organisation only. If a file is labelled within one environment and then moved to another, sensitivity label content markings may be visible, but by default the applied sensitivity label will not be understood. This can lead to scenarios where information that has been generated externally is not adequately protected.

My favourite analogy for these scenarios is to consider the parallels between receiving sensitive information and unpacking groceries. When unpacking groceries, you might sit your grocery bag on a counter or on the floor next to the pantry. You'll likely then unpack each item, take a look at it and decide where to place it. Without looking at an item to determine its correct location, you might place it in the wrong spot. Porridge might be safe from the kids on the bottom shelf, but if you place items that need to be protected, such as chocolate, on the bottom shelf, they're not likely to last very long. So, I affectionately refer to information that hasn't been evaluated as 'porridge', as until it has been checked, it will end up on the bottom shelf of the pantry where it is quite accessible. Label-based security controls, such as Data Loss Prevention (DLP) policies using conditions of 'content contains sensitivity label', will not apply to these items.

To ensure the security of any contained sensitive information, we should look for potential clues to its sensitivity and then utilise these clues to ensure that the contained information is adequately protected. We take a closer look at the 'porridge', determine whether it's an item that needs protection and, if so, move it to a higher shelf in the pantry so that it's out of reach of the kids.

Effective use of Purview revolves around the use of 'know your data' strategies. We should be using as many methods as possible to try to determine the sensitivity of items. This can include the use of Sensitive Information Types (SITs) containing keyword or pattern-based classifiers, trainable classifiers, Exact Data Match, document fingerprinting, etc. Matching items via SITs present in an item's content can be problematic due to false positives. Keywords like 'Sensitive' or 'Protected' may be mentioned out of context, such as when referring to a classification or an environment. When classifications have been stamped via a property, it allows us to match via context rather than content. We don't need to guess at an item's sensitivity if another system has already established what the item's classification is. These methods are much less prone to false positives.

Why isn't everyone doing this?

Document properties are often not considered in Purview deployments. SharePoint metadata management seems to be a dying artform and most compliance or security resources completing Purview configurations don't have this skill set. There's also a lack of understanding of the relevance of checking for item properties. Microsoft haven't helped, as the documentation in this space is somewhat lacking and needs to be unpicked via some aligning DLP guidance (Create a DLP policy to protect documents with FCI or other properties). Many of these configurations will also be tied to regional requirements. Document properties being used by systems where I'm from, in Australia, will likely be very different to those used in other parts of the world.

In the following sections, we'll take a look at applicable use cases and walk through how to enable these configurations.
Scenarios for use

Labelling via document property isn't for everyone. If your organisation is new to classification or you don't have external partners that you collaborate with at higher sensitivity levels, then this likely isn't for you. For those that collaborate heavily and have a shared classification framework, as is often seen across government, this is a must! This approach will also be highly relevant to multi-tenant organisations or conglomerates where information is regularly shared between environments. The following scenarios are examples of where this configuration will be relevant:

1. Migrating from 3rd party classification tools
If an item has been previously stamped by a 3rd party classification tool, then evaluating its applied document properties will provide a clear picture of its security classification. These properties can then be used in service-based auto-labelling policies to effectively transition items from 3rd party tools to Microsoft Purview sensitivity labels. As labels are applied to items, they will be brought into scope of label-based controls.

2. Detecting data spill
Data spill is a term used to describe situations where information of a higher than permitted security classification lands in an environment. Consider a Microsoft 365 tenant that is approved for the storage of Official information but Top Secret files are uploaded to it. Document properties that align with higher than permitted classifications provide us with an almost guaranteed method of identifying spilled items. Pairing this document property with an auto-labelling policy allows for the application of encryption to lock unauthorized users out of the items. Tools like Content Explorer and eDiscovery can then be used to easily perform cleanup activities. If using document properties and auto-labelling for this purpose, keep in mind that you'll need to create sensitivity labels for higher than permitted classifications in order to catch spilled items. These labels won't impact usability as you won't publish them to users. You will, however, need to publish them to a single user or break glass account so that they're not ignored by auto-labelling.

3. Blocking access by AI tools
If your organization is concerned about items with certain properties applied being accessed by generative AI tools, such as Copilot, you could use auto-labelling to apply a sensitivity label that restricts EXTRACT permissions. You can find some information on this at Microsoft 365 Copilot data protection architecture | Microsoft Learn. This should be relevant for spilled data, but might also be useful in situations where there are certain records that have been marked via properties and which should not be Copilot accessible.

4. External Microsoft Purview configurations
Sensitivity labels are relevant internally only. A label, in its raw form, is essentially a piece of metadata with an ID (or GUID) that we stamp on pieces of information. These GUIDs are understood by your tenant only. If an item marked with a GUID shows up in another Microsoft 365 tenant, the GUID won't correspond with any of that tenant's labels or label-based controls. The art in Microsoft Purview lies in interpreting the sensitivity of items based on content markings and other identifiers, so that data security can be maintained. Document properties applied by Purview, such as ClassificationContentMarkingHeaderText, are not tied to a specific tenant, which makes them portable.
We can use these properties to help maintain classifications as items move between environments.

5. Utilizing metadata applied by Records Management solutions
Some EDRMS, Records or Content Management solutions will apply properties to items. If an item has been previously managed and then stamped with properties, potentially including a security classification, via one of these systems, we could use this information to inform sensitivity label application.

6. 3rd party classification tools used externally
Even if your organisation hasn't been using 3rd party classification tools, you should consider that partner organisations, such as other Government departments, might be. Evaluating the properties applied by external organisations to items that you receive will allow you to extend protections to these items. If classification tools like Janus or Titus are used in your geography/industry, then you may want to consider checking for their properties.

Regarding the use of auto-classification tools

Some organisations, particularly those in Government, will have organisational policies that prevent the use of automatic classification capabilities. These policies are intended to ensure that each item is assessed by an actual person for risk of disclosure rather than via an automated service that could be prone to error. However, when auto-labelling is used to interpret and honour existing classifications, we are lowering rather than raising the risk profile. If the item's existing classification (applied via property) is ignored, the item will be treated as porridge and is likely to be at risk. If auto-labelling is able to identify a high-risk item and apply the relevant label, it will then be within scope of Purview's data security controls, including label-based DLP, groups and sites data out of place alerting, and potentially even item encryption. The outcome is that, through the use of auto-labelling, we are able to significantly reduce the risk of inappropriate or unintended disclosure.

Configuration Process

Setting up document property-based auto-labelling is fairly straightforward. We need to set up a managed property and then utilise it in an auto-labelling policy. Below, I've split this process into 6 steps:

Step 1 – Prepare your files

In order to make use of document properties, an item with the properties applied will first need to be indexed by SharePoint. SharePoint will record the properties as 'crawled properties', which we'll then need to convert into 'managed properties' to make them useful. If you already have items with the relevant properties stored in SharePoint, then they are likely already indexed. If not, you'll need to upload or create an item or items with the properties applied. For testing, you'll want to create a file with each property/value combination so that you can confirm that your auto-labelling policies are all working correctly. This could require quite a few files depending on the number of properties you're looking for. To kick off your crawled property generation though, you could create or upload a single file with the correct properties applied. For example, I've created properties for ClassificationContentMarkingHeaderText and ClassificationContentMarkingFooterText, which you'll often see applied by Purview when an item has a sensitivity label content marking applied to it. I've also included properties to help identify items classified via JanusSeal, Titus and Objective.
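If you want to double-check exactly which custom properties a test file is carrying before you upload it, remember that a .docx is just a zip package and its custom properties live in docProps/custom.xml. The following is a minimal PowerShell sketch, not part of the original configuration steps, and the file path is a hypothetical example; it simply lists the custom property names and values found in the file:

# Minimal sketch: list the custom document properties inside a .docx test file
Add-Type -AssemblyName System.IO.Compression.FileSystem

$docPath = "C:\Temp\PROTECTED-test.docx"   # hypothetical test file
$zip = [System.IO.Compression.ZipFile]::OpenRead($docPath)
try {
    $entry = $zip.GetEntry("docProps/custom.xml")
    if ($null -eq $entry) {
        Write-Host "No custom document properties found in $docPath"
    }
    else {
        $reader = New-Object System.IO.StreamReader($entry.Open())
        [xml]$customXml = $reader.ReadToEnd()
        $reader.Dispose()
        # Each property element carries a name attribute and a typed value child (e.g. vt:lpwstr)
        $customXml.Properties.property | ForEach-Object {
            [pscustomobject]@{ Name = $_.name; Value = $_.FirstChild.'#text' }
        }
    }
}
finally {
    $zip.Dispose()
}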
Step 2 – Index the files

After creating or uploading your file, we then need SharePoint to index it. This should happen fairly quickly depending on the size of your environment. I'd expect to wait some time between 10 minutes and 24 hours. If you're not in a hurry, then I'd recommend just checking back the next day. You'll know when this has been completed when you head into SharePoint Admin > More features > Search > Manage Search Schema > Crawled Properties and can find your newly indexed properties.

Step 3 – Configure managed properties

Next, the properties need to be configured as managed properties. To do this, go to SharePoint Admin > More features > Search > Manage Search Schema > Managed Properties. Create a new managed property and give it a name. Note that there are some character restrictions in naming, but you should be able to get it close to your document property name. Set the property's type to text and select queryable and retrievable. Under 'mappings to crawled properties', choose add mapping, then search for and select the property indexed from the file property. Note that the crawled property will have the same name as your document property, so there's no need to browse through all of them. Repeat this so that you have a managed property for each document property that you want to look for.

Step 4 – Configure auto-labelling policies

Next up, create some auto-labelling policies. You'll need one for each label that you want to apply, not one per property, as you can check multiple properties within the one auto-labelling policy.

- From within Purview, head to Information Protection > Policies > Auto-labelling policies.
- Create a new policy using the custom policy template.
- Give your policy an appropriate name (e.g. Label PROTECTED via property).
- Select the label that you want to apply (e.g. PROTECTED).
- Select SharePoint based services (SharePoint and OneDrive).
- Name your auto-labelling rules appropriately (e.g. SPO – Contains PROTECTED property).
- Enter your conditions as a long string with property and value separated via a colon and multiple entries separated with a comma. For example: ClassificationContentMarkingHeaderText:PROTECTED,ClassificationContentMarkingFooterText:PROTECTED,Objective-Classification:PROTECTED,PMDisplay:PROTECTED,TitusSEC:PROTECTED

Note that the properties you are referencing are the managed properties rather than the document properties. This will be relevant if your managed property ended up having a different name due to character restrictions. After pasting your string into the UI, review the resultant rule to confirm that each property:value pair has been captured as a condition.

When done, you can either leave your policy in simulation mode or save it and then turn it on from the auto-labelling policies screen. Just be aware of any potential impacts, such as accidentally locking users out by automatically deploying a label with an encryption configuration. You can reduce any potential impact by targeting your auto-labelling policy at a site or set of sites initially and then expanding its scope after testing.

Step 5 – Test

Testing your configuration will be as easy as uploading or creating a set of files with the relevant document properties in place. Once uploaded, you'll need to give SharePoint some time to index the items and then the auto-labelling policy some time to apply sensitivity labels to them. To confirm label application, you can head to the document library where your test files are located and enable the sensitivity column. Files that have been auto-labelled will have their label listed there. You could also check for auto-labelling activity in Purview via Activity explorer.
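If you'd like to confirm that your managed property mappings are working before waiting on auto-labelling, you can also query SharePoint search for them. Here's a minimal sketch using the PnP.PowerShell module; the site URL is a hypothetical example, and depending on your PnP.PowerShell version you may also need to supply a ClientId when connecting interactively. If the query returns your test files, the managed property is queryable and ready for the auto-labelling policy to use:

# Minimal sketch: confirm the managed property is searchable (assumes PnP.PowerShell is installed)
Connect-PnPOnline -Url "https://contoso.sharepoint.com/sites/LabelTesting" -Interactive

# Query for test files stamped with the PROTECTED header property
$results = Submit-PnPSearchQuery -Query "ClassificationContentMarkingHeaderText:PROTECTED" -SelectProperties "Title,Path"

$results.ResultRows | ForEach-Object {
    [pscustomobject]@{ Title = $_["Title"]; Path = $_["Path"] }
}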
Step 6 – Expand into DLP

If you've spent the time setting up managed properties, then you really should consider capitalizing on them in your DLP configurations. DLP policy conditions can be configured in the same manner that we configured auto-labelling in Step 4 above. The document property also gives us an anchor for DLP conditions that is independent of an item's sensitivity label. You may wish to consider the following:

- DLP policies blocking external sharing of items with certain properties applied. This might be handy for situations where auto-labelling hasn't yet labelled an item.
- DLP policies blocking the external sharing of items where the applied sensitivity label doesn't match the applied document property. This could provide an indication of risky label downgrade.

You could extend such policies into Insider Risk Management (IRM) by creating IRM policies that are aligned with the above DLP policies. This will allow for document properties to be considered in user risk calculation, which can inform controls like Adaptive Protection. As an example, a policy viewed from the DLP rule summary screen might show conditions of 'item contains a label' or one of our configured document properties; a PowerShell sketch of the same idea follows below.
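If you prefer to build these rules in Security & Compliance PowerShell, document property conditions can be expressed there as well. The following is a minimal sketch rather than a production policy: the policy and rule names are hypothetical, it assumes you have connected with Connect-IPPSSession, and it assumes the ContentPropertyContainsWords condition described in the FCI/document property DLP guidance is available in your tenant:

# Minimal sketch (hypothetical names): block external sharing of items carrying a PROTECTED document property
Connect-IPPSSession

New-DlpCompliancePolicy -Name "Protect PROTECTED-marked items" -SharePointLocation All -OneDriveLocation All -Mode TestWithoutNotifications

New-DlpComplianceRule -Name "Block external sharing - PROTECTED property" `
    -Policy "Protect PROTECTED-marked items" `
    -ContentPropertyContainsWords "ClassificationContentMarkingHeaderText:PROTECTED","Objective-Classification:PROTECTED" `
    -AccessScope NotInOrganization `
    -BlockAccess $true

Starting the policy in a test mode, as above, lets you review matches in simulation before enforcing the block.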
Thanks for reading and I hope this article has been of use. If you have any questions or feedback, please feel free to reach out.

Getting started with the eDiscovery APIs

The Microsoft Purview APIs for eDiscovery in Microsoft Graph enable organizations to automate repetitive tasks and integrate with their existing eDiscovery tools to build repeatable workflows that industry regulations might require. Before you can make any calls to the Microsoft Purview APIs for eDiscovery, you must first register an app in Microsoft's identity platform, Entra ID. An app can access data in two ways:

- Delegated access: an app acting on behalf of a signed-in user
- App-only access: an app acting with its own identity

For more information on access scenarios, see Authentication and authorization basics. This article will demonstrate how to configure the required pre-requisites to enable access to the Microsoft Purview APIs for eDiscovery. This will be based on using app-only access to the APIs, using either a client secret or a self-signed certificate to authenticate the requests.

The Microsoft Purview APIs for eDiscovery are made up of two separate APIs:

- Microsoft Graph: Part of the Microsoft.Graph.Security namespace and used for working with Microsoft Purview eDiscovery cases.
- MicrosoftPurviewEDiscovery: Used exclusively to programmatically download the export package created by a Microsoft Purview eDiscovery export job.

Currently, the eDiscovery APIs in Microsoft Graph only work with eDiscovery (Premium) cases. For a list of supported API calls within Microsoft Graph, see Use the Microsoft Purview eDiscovery API.

Microsoft Graph API

Pre-requisites

Implementing app-only access involves registering an app in the Azure portal, creating client secrets/certificates, assigning API permissions, setting up a service principal, and then using app-only access to call the Microsoft Graph APIs. To register an app, create client secrets/certificates and assign API permissions, the account must be at least a Cloud Application Administrator. For more information on registering an app in the Azure portal, see Register an application with the Microsoft identity platform. Granting tenant-wide admin consent for Microsoft Purview eDiscovery API application permissions requires you to sign in as a user that is authorized to consent on behalf of the organization, see Grant tenant-wide admin consent to an application.

Setting up a service principal requires the following pre-requisites:

- A machine with the ExchangeOnlineManagement module installed
- An account that has the Role Management role assigned in Microsoft Purview, see Roles and role groups in Microsoft Defender for Office 365 and Microsoft Purview

Configuration steps

For detailed steps on implementing app-only access for Microsoft Purview eDiscovery, see Set up app-only access for Microsoft Purview eDiscovery.

Connecting to Microsoft Graph API using app-only access

Use the Connect-MgGraph cmdlet in PowerShell to authenticate and connect to Microsoft Graph using the app-only access method. This cmdlet enables your app to interact with Microsoft Graph securely and enables you to explore the Microsoft Purview eDiscovery APIs.

Connecting via client secret

To connect using a client secret, update and run the following example PowerShell code.
$clientSecret = "<client secret>" ## Update with client secret added to the registered app
$appID = "<APP ID>" ## Update with Application ID of registered/Enterprise app
$tenantId = "<Tenant ID>" ## Update with tenant ID
$ClientSecretPW = ConvertTo-SecureString "$clientSecret" -AsPlainText -Force
$clientSecretCred = New-Object System.Management.Automation.PSCredential -ArgumentList ("$appID", $ClientSecretPW)
Connect-MgGraph -TenantId "$tenantId" -ClientSecretCredential $clientSecretCred

Connecting via certificate

To connect using a certificate, update and run the following example PowerShell code.

$certPath = "Cert:\currentuser\my\<xxxxxxxxxx>" ## Update with the cert thumbprint
$appID = "<APP ID>" ## Update with Application ID of registered/Enterprise app
$tenantId = "<Tenant ID>" ## Update with tenant ID
$ClientCert = Get-ChildItem $certPath
Connect-MgGraph -TenantId $TenantId -ClientId $appId -Certificate $ClientCert

Invoke Microsoft Graph API calls

Once connected, you can start making calls to the Microsoft Graph API. For example, let's look at listing the eDiscovery cases within the tenant, see List ediscoveryCases. Within the documentation, each operation lists the following information:

- Permissions required to make the API call
- HTTP request and method
- Request header and body information
- Response
- Examples (HTTP, C#, CLI, Go, Java, PHP, PowerShell, Python)

As we are connected via the Microsoft Graph PowerShell module, we can use either the HTTP calls or the eDiscovery-specific cmdlets within the Microsoft Graph PowerShell module. First, let's look at the PowerShell cmdlet example. It returns a list of all the cases within the tenant. When delving deeper into a case, it is important to record the case ID as you will use this in future calls.

Then we can look at the HTTP example; we will use the Invoke-MgGraphRequest cmdlet to make the call via PowerShell. First we need to store the URL in a variable as below.

$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases"

Then we will use the Invoke-MgGraphRequest cmdlet to make the API call.

Invoke-MgGraphRequest -Method Get -Uri $uri

We then need to extract the values from the returned response. This can be done by saving the value elements of the response to a new variable using the following command.

$cases = (Invoke-MgGraphRequest -Method Get -Uri $uri).value

This returns a collection of hashtables; optionally you can run a small bit of PowerShell code to convert the hashtables into PS objects for easier use with cmdlets such as Format-Table and Format-List.

$CasesAsObjects = @()
foreach($i in $cases) {$CasesAsObjects += [pscustomobject]$i}
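To illustrate, once the hashtables have been converted, the cases can be handled like any other PowerShell objects. A small example follows; the property names come from the Graph ediscoveryCase resource and the case name used in the filter is a hypothetical example:

# Display the cases and capture the ID of one of them for later calls
$CasesAsObjects | Format-Table displayName, status, createdDateTime, id

# Keep the case ID of a case selected by its display name
$caseId = ($CasesAsObjects | Where-Object { $_.displayName -eq "Contoso Litigation 001" }).id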
MicrosoftPurviewEDiscovery API

You can also configure the MicrosoftPurviewEDiscovery API to enable the programmatic download of export packages and the item report from an export job in a Microsoft Purview eDiscovery case.

Pre-requisites

Prior to executing the configuration steps in this section, it is assumed that you have completed and validated the configuration detailed in the Microsoft Graph API section. The previously registered app in Entra ID will be extended to include the required permissions to achieve programmatic download of the export package. This already provides the following pre-requisites:

- Registered app in the Azure portal configured with the appropriate client secret/certificate
- Service principal in Microsoft Purview assigned the relevant eDiscovery roles
- Microsoft eDiscovery API permissions configured for Microsoft Graph

To extend the existing registered app's API permissions to enable programmatic download, the following steps must be completed:

- Registering a new Microsoft application and service principal in the tenant
- Assigning additional API permissions to the previously registered app in the Azure portal

Granting tenant-wide admin consent for Microsoft Purview eDiscovery API application permissions requires you to sign in as a user that is authorized to consent on behalf of the organization, see Grant tenant-wide admin consent to an application.

Configuration steps

Step 1 – Register the MicrosoftPurviewEDiscovery app in Entra ID

First, validate that the MicrosoftPurviewEDiscovery app is not already registered by logging into the Azure portal and browsing to Microsoft Entra ID > Enterprise Applications. Change the application type filter to show Microsoft Applications and in the search box enter MicrosoftPurviewEDiscovery. If this returns a result, move to step 2. If the search returns no results, proceed with registering the app in Entra ID.

The Microsoft.Graph PowerShell module can be used to register the MicrosoftPurviewEDiscovery app in Entra ID, see Install the Microsoft Graph PowerShell SDK. Once installed on a machine, run the following cmdlet to connect to Microsoft Graph via PowerShell.

Connect-MgGraph -scopes "Application.ReadWrite.All"

If this is the first time using the Microsoft.Graph PowerShell cmdlets, you may be prompted to consent to the requested permissions. To register the MicrosoftPurviewEDiscovery app, run the following PowerShell commands.

$spId = @{"AppId" = "b26e684c-5068-4120-a679-64a5d2c909d9" }
New-MgServicePrincipal -BodyParameter $spId;

Step 2 – Assign additional MicrosoftPurviewEDiscovery permissions to the registered app

Now that the service principal has been added, you can update the permissions on the app you registered in the Microsoft Graph API section of this document. Log into the Azure portal and browse to Microsoft Entra ID > App Registrations. Find and select the app you created in the Microsoft Graph API section of this document. Select API Permissions from the navigation menu. Select Add a permission and then APIs my organization uses. Search for MicrosoftPurviewEDiscovery and select it. Then select Application Permissions and select the tick box for eDiscovery.Download.Read before selecting Add Permissions. You will be returned to the API permissions screen, where you must select Grant Admin Consent... to approve the newly added permissions. At this point, the User.Read Microsoft Graph API permission has been added with admin consent granted, while the eDiscovery.Download.Read MicrosoftPurviewEDiscovery API application permission has been added but admin consent has not yet been granted. Once admin consent is granted, you will see the Status of the newly added permission update to Granted for...
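As a quick alternative to the portal check in Step 1, you can also confirm from PowerShell that the MicrosoftPurviewEDiscovery service principal now exists in the tenant. This is a minimal sketch and assumes you are connected to Microsoft Graph with a scope that can read applications (for example Application.Read.All):

# Look up the MicrosoftPurviewEDiscovery service principal by its well-known application ID
Get-MgServicePrincipal -Filter "appId eq 'b26e684c-5068-4120-a679-64a5d2c909d9'" |
    Select-Object DisplayName, AppId, Id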
Downloading the export packages and reports

Retrieving the case ID and export job ID

To successfully download the export packages and reports of an export job in an eDiscovery case, you must first retrieve the case ID and the operation/job ID for the export job. To gather this information via the Purview portal, you can open the eDiscovery case, locate the export job and select Copy support information before pasting this information into Notepad. The support information includes the case ID, job ID, job state, created by, created timestamp, completed timestamp and support information generation time.

To access this information programmatically, you can make the following Graph API calls to locate the case ID and the job ID you wish to export. First, connect to Microsoft Graph using the steps detailed in the previous section titled "Connecting to Microsoft Graph API using app-only access". Using the eDiscovery Graph PowerShell cmdlets, you can use the following command if you know the case name.

Get-MgSecurityCaseEdiscoveryCase | where {$_.displayname -eq "<Name of case>"}

Once you have the case ID, you can look up the operations in the case to identify the job ID for the export using the following command.

Get-MgSecurityCaseEdiscoveryCaseOperation -EdiscoveryCaseId "<case ID>"

Export jobs will be logged under an action of either exportResult (direct export) or ContentExport (export from review set). The names of the export jobs are not returned by this API call; to find the name of an export job you must query the specific operation ID. This can be achieved using the following command.

Get-MgSecurityCaseEdiscoveryCaseOperation -EdiscoveryCaseId "<case ID>" -CaseOperationId "<operation ID>"

The name of the export operation is contained within the property AdditionalProperties. If you wish to make the HTTP API calls directly to list cases in the tenant, see List ediscoveryCases - Microsoft Graph v1.0 | Microsoft Learn. If you wish to make the HTTP API calls directly to list the operations for a case, see List caseOperations - Microsoft Graph v1.0 | Microsoft Learn. You will need to use the case ID in the API call to indicate which case you wish to list the operations from. For example:

https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/<CaseID>/operations/

The names of the export jobs are not returned with this API call; to find the name of the export job you must query the specific job ID. For example:

https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/<CaseID>/operations/<OperationID>

Downloading the Export Package

Retrieving the download URLs for export packages

The URLs required to download the export packages and reports are contained within a property called exportFileMetaData. To retrieve this information, we need to know the case ID of the eDiscovery case that the export job was run in, as well as the operation ID for the export job. Using the eDiscovery Graph PowerShell cmdlets, you can retrieve this property using the following commands.

$Operation = Get-MgSecurityCaseEdiscoveryCaseOperation -EdiscoveryCaseId "<case ID>" -CaseOperationId "<operation ID>"
$Operation.AdditionalProperties.exportFileMetadata

If you wish to make the HTTP API calls directly to return the exportFileMetaData for an operation, see List caseOperations - Microsoft Graph v1.0 | Microsoft Learn. For each export package visible in the Microsoft Purview portal, there will be an entry in the exportFileMetaData property. Each entry will list the following:

- The export package file name
- The downloadUrl to retrieve the export package
- The size of the export package
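To get a quick view of what will be downloaded before running the export script, the metadata entries can be expanded into objects. A small example, using the fileName and size values listed above:

# List each export package entry with its file name and size in MB
$Operation.AdditionalProperties.exportFileMetadata | ForEach-Object {
    [pscustomobject]@{
        FileName = $_.fileName
        SizeMB   = [math]::Round($_.size / 1MB, 2)
    }
}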
Example scripts to download the export package

As the MicrosoftPurviewEDiscovery API is separate to the Microsoft Graph API, it requires a separate authentication token to authorise the download request. As a result, you must use the MSAL.PS PowerShell module and the Get-MsalToken cmdlet to acquire a separate token, in addition to connecting to the Microsoft Graph APIs via the Connect-MgGraph cmdlet. The following example scripts can be used as a reference when developing your own scripts to enable the programmatic download of the export packages.

Connecting with a client secret

If you have configured your app to use a client secret, then you can use the following example script as a reference to download the export package and reports programmatically. Copy the contents into Notepad and save it as DownloadExportUsingApp.ps1.

[CmdletBinding()]
param (
    [Parameter(Mandatory = $true)]
    [string]$tenantId,
    [Parameter(Mandatory = $true)]
    [string]$appId,
    [Parameter(Mandatory = $true)]
    [string]$appSecret,
    [Parameter(Mandatory = $true)]
    [string]$caseId,
    [Parameter(Mandatory = $true)]
    [string]$exportId,
    [Parameter(Mandatory = $true)]
    [string]$path = "D:\Temp",
    [ValidateSet($null, 'USGov', 'USGovDoD')]
    [string]$environment = $null
)

if (-not(Get-Module -Name Microsoft.Graph -ListAvailable)) {
    Write-Host "Installing Microsoft.Graph module"
    Install-Module Microsoft.Graph -Scope CurrentUser
}
if (-not(Get-Module -Name MSAL.PS -ListAvailable)) {
    Write-Host "Installing MSAL.PS module"
    Install-Module MSAL.PS -Scope CurrentUser
}

$password = ConvertTo-SecureString $appSecret -AsPlainText -Force
$clientSecretCred = New-Object System.Management.Automation.PSCredential -ArgumentList ($appId, $password)

if (-not(Get-MgContext)) {
    Write-Host "Connect with credentials of an eDiscovery admin (token for graph)"
    if (-not($environment)) {
        Connect-MgGraph -TenantId $TenantId -ClientSecretCredential $clientSecretCred
    }
    else {
        Connect-MgGraph -TenantId $TenantId -ClientSecretCredential $clientSecretCred -Environment $environment
    }
}

Write-Host "Connect with credentials of an eDiscovery admin (token for export)"
$exportToken = Get-MsalToken -ClientId $appId -Scopes "b26e684c-5068-4120-a679-64a5d2c909d9/.default" -TenantId $tenantId -RedirectUri "http://localhost" -ClientSecret $password

$uri = "/v1.0/security/cases/ediscoveryCases/$($caseId)/operations/$($exportId)"
$export = Invoke-MgGraphRequest -Uri $uri;

if (-not($export)){
    Write-Host "Export not found"
    exit
}
else{
    $export.exportFileMetadata | % {
        Write-Host "Downloading $($_.fileName)"
        Invoke-WebRequest -Uri $_.downloadUrl -OutFile "$($path)\$($_.fileName)" -Headers @{"Authorization" = "Bearer $($exportToken.AccessToken)"; "X-AllowWithAADToken" = "true" }
    }
}

Once saved, open a new PowerShell window which has the following PowerShell modules installed:

- Microsoft.Graph
- MSAL.PS

Browse to the directory where you have saved the script and issue the following command.

.\DownloadExportUsingApp.ps1 -tenantId "<tenant ID>" -appId "<App ID>" -appSecret "<Client Secret>" -caseId "<CaseID>" -exportId "<ExportID>" -path "<Output Path>"

Review the folder which you specified as the path to view the downloaded files.

Connecting with a certificate

If you have configured your app to use a certificate, then you can use the following example script as a reference to download the export package and reports programmatically. Copy the contents into Notepad and save it as DownloadExportUsingAppCert.ps1.
[CmdletBinding()]
param (
    [Parameter(Mandatory = $true)]
    [string]$tenantId,
    [Parameter(Mandatory = $true)]
    [string]$appId,
    [Parameter(Mandatory = $true)]
    [String]$certPath,
    [Parameter(Mandatory = $true)]
    [string]$caseId,
    [Parameter(Mandatory = $true)]
    [string]$exportId,
    [Parameter(Mandatory = $true)]
    [string]$path = "D:\Temp",
    [ValidateSet($null, 'USGov', 'USGovDoD')]
    [string]$environment = $null
)

if (-not(Get-Module -Name Microsoft.Graph -ListAvailable)) {
    Write-Host "Installing Microsoft.Graph module"
    Install-Module Microsoft.Graph -Scope CurrentUser
}
if (-not(Get-Module -Name MSAL.PS -ListAvailable)) {
    Write-Host "Installing MSAL.PS module"
    Install-Module MSAL.PS -Scope CurrentUser
}

$ClientCert = Get-ChildItem $certPath

if (-not(Get-MgContext)) {
    Write-Host "Connect with credentials of an eDiscovery admin (token for graph)"
    if (-not($environment)) {
        Connect-MgGraph -TenantId $TenantId -ClientId $appId -Certificate $ClientCert
    }
    else {
        Connect-MgGraph -TenantId $TenantId -ClientId $appId -Certificate $ClientCert -Environment $environment
    }
}

Write-Host "Connect with credentials of an eDiscovery admin (token for export)"
$connectionDetails = @{
    'TenantId'          = $tenantId
    'ClientId'          = $appID
    'ClientCertificate' = $ClientCert
    'Scope'             = "b26e684c-5068-4120-a679-64a5d2c909d9/.default"
}
$exportToken = Get-MsalToken @connectionDetails

$uri = "/v1.0/security/cases/ediscoveryCases/$($caseId)/operations/$($exportId)"
$export = Invoke-MgGraphRequest -Uri $uri;

if (-not($export)){
    Write-Host "Export not found"
    exit
}
else{
    $export.exportFileMetadata | % {
        Write-Host "Downloading $($_.fileName)"
        Invoke-WebRequest -Uri $_.downloadUrl -OutFile "$($path)\$($_.fileName)" -Headers @{"Authorization" = "Bearer $($exportToken.AccessToken)"; "X-AllowWithAADToken" = "true" }
    }
}

Once saved, open a new PowerShell window which has the following PowerShell modules installed:

- Microsoft.Graph
- MSAL.PS

Browse to the directory where you have saved the script and issue the following command.

.\DownloadExportUsingAppCert.ps1 -tenantId "<tenant ID>" -appId "<App ID>" -certPath "<Certificate Path>" -caseId "<CaseID>" -exportId "<ExportID>" -path "<Output Path>"

Review the folder which you specified as the path to view the downloaded files.

Conclusion

Congratulations, you have now configured your environment to enable access to the eDiscovery APIs! This is a great opportunity to further explore the available Microsoft Purview eDiscovery REST API calls using the Microsoft.Graph PowerShell module. For a full list of API calls available, see Use the Microsoft Purview eDiscovery API. Stay tuned for future blog posts covering other aspects of the eDiscovery APIs and examples of how they can be used to automate existing eDiscovery workflows.

Automating Active Directory Domain Join in Azure
The journey to Azure is an exciting milestone for any organization, and our customer is no exception. With Microsoft assisting in migrating all application servers to the Azure IaaS platform, the goal was clear: make the migration seamless, error-free, efficient, and fast. While the customer had already laid some groundwork with Bicep scripts, we took it a step further, refactoring and enhancing these scripts to not only streamline the current process but also create a robust, scalable framework for their future in Azure. In this blog, we'll dive into one of the critical pieces of this automation puzzle: Active Directory domain join. We'll explore how we crafted a PowerShell script to automate this essential step, ensuring that every migrated server is seamlessly integrated into the Azure environment. Let's get started!

Step 1: List all the tasks or functionalities we want to achieve in the AD domain join process in this script:

1. Verify Local Administrative Rights: Ensure the current user has the local admin rights required for installation and configuration.
2. Check for Active Directory PowerShell Module: Confirm if the module is already installed. If not, install the module.
3. Check Domain Join Status: Determine the current domain join status of the server.
4. Validate Active Directory Ports Availability: Ensure necessary AD ports are open and accessible.
5. Verify Domain Controller (DC) Availability: Confirm the availability of a domain controller.
6. Test Network Connectivity: Check connectivity between the server and the domain controller.
7. Retrieve Domain Admin Credentials: Securely prompt and retrieve credentials for a domain administrator account.
8. Perform AD Join: Execute the Active Directory domain join operation.
9. Create Log Files: Capture progress and errors in detailed log files for troubleshooting.
10. Update Event Logs: Record key milestones in the Windows Event Log for auditing and monitoring.

Step 2: In PowerShell scripting, functions play a crucial role in creating efficient, modular, and reusable code. By making scripts flexible and customizable, functions help streamline processes within a global scope. To simplify the AD domain-join process, I grouped related tasks into functions that achieve specific functionalities. For instance, tasks like checking the server's domain join status (point 3) and validating AD ports (point 4) can be combined into a single function, VM-Checks, as they both focus on verifying the local server's readiness. Similarly, we can define other functions such as AD-RSAT-Module, DC-Discovery, Check-DC-Connectivity, and Application-Log. For better organization, we'll divide all functions into two categories:

- Operation functions: Functions responsible for executing the domain join process.
- Logging functions: Functions dedicated to robust logging for progress tracking and error handling.

Let's start by building the operation functions.

Step 1: Define the variables that we will be using in this script, such as:

$DomainName
$SrvUsernameSecretName
$SrvPasswordSecretName
$Creds
. . .

Step 2: We need a function to validate whether the current user has local administrative rights, ensuring the script can perform privileged operations seamlessly.
function Check-AdminRights {
    # Check if the current user is a member of the local Administrators group
    $isAdmin = [Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()
    $isAdminRole = [Security.Principal.WindowsBuiltInRole]::Administrator
    if ($isAdmin.IsInRole($isAdminRole)) {
        Add-Content $progressLogFile "The current user has administrative privileges."
        return $true
    }
    else {
        $errorMessage = "Exiting script due to lack of administrative privileges."
        Add-Content $progressLogFile $errorMessage
        Write-ErrorLog $errorMessage
        Log-Failure -functionName "Check-AdminRights" -message $errorMessage
        exit
    }
}

Step 3: Validate the status of the Active Directory module. If it's not installed already, install it using the below logic:

. .
if (Get-Module -ListAvailable -Name ActiveDirectory) {
    Add-Content $progressLogFile "Active Directory Module is already installed."
    Log-Success -functionName "InstallADModule" -message "Active Directory Module is already installed."
}
else {
    Add-Content $progressLogFile "Active Directory Module not found. Initializing installation."
    Install-WindowsFeature RSAT-AD-PowerShell -ErrorAction Stop
    Import-Module ActiveDirectory -ErrorAction Stop
    Add-Content $progressLogFile "Active Directory Module imported successfully."
    Log-Success -functionName "InstallADModule" -message "Active Directory Module imported successfully."
}
. .

Step 4: Next, we need to perform multiple checks on the local server, which can be clubbed into a single function if desired.

Check the current domain-join status: If the server is already joined to a domain, there's no need to join, so use the below logic to exit the script.

. .
$computerSystem = Get-WmiObject Win32_ComputerSystem
if ($computerSystem.PartOfDomain) {
    Add-Content $progressLogFile "This machine is already joined to : $($computerSystem.Domain)."
    Log-Success -functionName "VM-Checks" -message "Machine is already joined to : $($computerSystem.Domain)."
    exit 0
}
else {
    Add-Content $progressLogFile "This machine is part of the workgroup: $($computerSystem.Workgroup)."
}
. .

Check the Active Directory ports availability: Define parameters with the list of all ports that need to be available for domain join:

param (
    $ports = @(
        @{Port = 88; Protocol = "TCP"},
        @{Port = 389; Protocol = "TCP"},
        @{Port = 445; Protocol = "TCP"}
    )
)

Once you have the parameters defined, check the status of each port using the below sample code.

. .
foreach ($port in $ports) {
    try {
        $checkPort = Test-NetConnection -ComputerName $DomainController -Port $port.Port
        if ($checkPort.TcpTestSucceeded) {
            Add-Content $progressLogFile "Port $($port.Port) ($($port.Protocol)) is open."
        }
        else {
            throw "Port $($port.Port) ($($port.Protocol)) is closed."
        }
    }
    catch {
        $errorMessage = "$($_.Exception.Message) Please check firewall settings."
        Write-ErrorLog $errorMessage
        Log-Failure -functionName "VM-Checks" -message $errorMessage
        exit
    }
}
. .

Step 5: Now, we need to find an available domain controller in the domain to process the domain join request.

. .
try {
    $domainController = (Get-ADDomainController -DomainName $DomainName -Discover -ErrorAction Stop).HostName
    Add-Content $progressLogFile "Discovered domain controller: $domainController"
    Log-Success -functionName "Dc-Discovery" -message "Discovered domain controller $domainController."
}
catch {
    $errorMessage = "Failed to discover domain controller for $DomainName."
    Write-ErrorLog $errorMessage
    Log-Failure -functionName "Dc-Discovery" -message $errorMessage
    exit
}
. .

Step 6: We need to perform connectivity and name resolution checks between the local server and the previously identified domain controller.

For the network connectivity check, you can use this logic:

if (Test-Connection -ComputerName $DomainController -Count 2 -Quiet) {
    Write-Host "Domain Controller $DomainController is reachable." -ForegroundColor Green
    Add-Content $progressLogFile "Domain Controller $DomainController is reachable."
}
else {
    $errorMessage = "Domain Controller $DomainController is not reachable."
    Write-ErrorLog $errorMessage
    exit
}

For the DNS check, you can use the below logic:

try {
    Resolve-DnsName -Name $DomainController -ErrorAction Stop
    Write-Host "DNS resolution for $DomainController successful." -ForegroundColor Green
    Add-Content $progressLogFile "DNS resolution for $DomainController successful."
}
catch {
    $errorMessage = "DNS resolution for $DomainController failed."
    Write-Host $errorMessage -ForegroundColor Red
    Write-ErrorLog $errorMessage
    Log-Failure -functionName "Dc-ConnectivityCheck" -message $errorMessage
    exit
}

To fully automate the domain-join process, it's essential to retrieve and pass service account credentials within the script without any manual intervention. However, this comes with a critical responsibility—ensuring the security of the service account, as it holds privileged rights. Any compromise here could have serious repercussions for the entire environment. To address this, we leverage Azure Key Vault for secure storage and retrieval of credentials. By using Key Vault, we ensure that sensitive information remains protected while enabling seamless automation.

P.S.: In this blog, we'll focus on utilizing Azure Key Vault for this purpose. In the next post, we'll explore how to retrieve domain credentials from the CyberArk Password Vault using the same level of security and automation. Stay tuned!

Step 7: We need to declare the variable to provide the Key Vault name where the service account credentials are stored. This should be done in Step 1:

$KeyVaultName = "MTest-KV"

The below code ensures that the Azure Key Vault PowerShell module is installed on the local server and, if not present, installs it:

# Check if the Az.KeyVault module is installed
if (-not (Get-Module -ListAvailable -Name Az.KeyVault)) {
    Add-Content $progressLogFile "Az.KeyVault module not found. Installing..."
    # Install the Az.KeyVault module if not found
    Install-Module -Name Az.KeyVault -Force -AllowClobber -Scope CurrentUser
}
else {
    Add-Content $progressLogFile "Az.KeyVault module is already installed."
}

Now, we'll create a function to retrieve the service account credentials from Azure Key Vault, assuming the logged-in user already has the necessary permissions to access the secrets stored in the Key Vault.

function Get-ServiceAccount-Creds {
    param (
        [string]$KeyVaultName,
        [string]$SrvUsernameSecretName,
        [string]$SrvPasswordSecretName
    )
    Add-Content $progressLogFile "Initiating retrieval of credentials from vault."
    try {
        Add-Content $progressLogFile "Retrieving service account credentials from Azure Key Vault."
        # Authenticate to access Azure Key Vault using the current account
        Connect-AzAccount -Identity

        # Retrieve the service account's username and password from Azure Key Vault
        # (-AsPlainText is used here as SecretValueText is not available in current Az.KeyVault versions)
        $SrvUsername = Get-AzKeyVaultSecret -VaultName $KeyVaultName -Name $SrvUsernameSecretName -AsPlainText
        $SrvPassword = Get-AzKeyVaultSecret -VaultName $KeyVaultName -Name $SrvPasswordSecretName -AsPlainText

        # Create a PSCredential object
        $SecurePassword = ConvertTo-SecureString $SrvPassword -AsPlainText -Force
        $Creds = New-Object System.Management.Automation.PSCredential($SrvUsername, $SecurePassword)

        Add-Content $progressLogFile "Successfully retrieved service account credentials."
        Log-Success -functionName "AD-DomainJoin" -message "Successfully retrieved credentials."
        return $Creds
    }
    catch {
        $errorMessage = "Error retrieving credentials from Azure Key Vault: $($_.Exception.Message)"
        Add-Content $errorLogFile $errorMessage
        Log-Failure -functionName "AD-DomainJoin" -message $errorMessage
        exit
    }
}

Step 8: We will now use the retrieved service account credentials to send a domain join request to the identified domain controller.

function Join-Domain {
    param (
        [string]$DomainName,
        [PSCredential]$Creds,
        [string]$DomainController
    )
    try {
        Add-Content $progressLogFile "Joining machine to domain: $DomainName via domain controller: $DomainController."
        # Perform the domain join specifying the domain controller
        Add-Computer -DomainName $DomainName -Credential $Creds -Server $DomainController -ErrorAction Stop
        Add-Content $progressLogFile "Successfully joined the machine to the domain via domain controller: $DomainController."
        Log-Success -functionName "AD-DomainJoin" -message "$ComputerName successfully joined $DomainName."
        # Restart once the result has been logged
        Restart-Computer -Force -ErrorAction Stop
    }
    catch {
        $errorMessage = "Error joining machine to domain via domain controller ${DomainController}: $($_.Exception.Message)"
        Write-ErrorLog $errorMessage
        Log-Failure -functionName "AD-DomainJoin" -message $errorMessage
        Add-Content $progressLogFile "Domain join to $DomainName for $ComputerName failed. Check error log."
        exit
    }
}

Now that we've done all the heavy lifting with operational functions, let's talk about logging functions. During technical activities, especially complex ones, real-time progress monitoring and quick issue identification are essential. Robust logging improves visibility, simplifies troubleshooting, and ensures efficiency when something goes wrong. To achieve this, we'll implement two types of logs: a detailed progress log to track each step and an error log to capture issues separately. This approach provides a clear audit trail and makes post-execution analysis much easier. Let's see how we can implement this.

Step 1: Create log files, including the current timestamp, in the variable declaration holding the log file path:

# Global Log Files
$progressLogFile = "C:\Logs\ProgressLog" + (Get-Date -Format yyyy-MM-dd_HH-m) + ".log"
$errorLogFile = "C:\Logs\ErrorLog" + (Get-Date -Format yyyy-MM-dd_HH-m) + ".log"

Note: I have included the timestamp in the file name; this allows us to capture the logs in separate files in case of multiple attempts. If you do not want multiple files and want to overwrite the existing file, you can remove "+ (Get-Date -Format yyyy-MM-dd_HH-m) +" and it will create a single file named ProgressLog.log.
Step 2: How to write events in the log files.

To capture the occurrence of any event in the log file while building the PowerShell script, you can use the following code. For capturing progress in the ProgressLog.log file, use:

Add-Content $progressLogFile "This machine is part of the workgroup: $($computerSystem.Workgroup)."

For capturing error occurrences in the ErrorLog.log file, we need to create a function:

# Function to Write Error Log
function Write-ErrorLog {
    param (
        [string]$message
    )
    Add-Content $errorLogFile "$message at $(Get-Date -Format 'HH:mm, dd-MMM-yyyy')."
}

We will call this function to capture the failure occurrence in the log file:

$errorMessage = "Error while checking the domain: $($_.Exception.Message)"
Write-ErrorLog $errorMessage

Step 3: As we also want to capture the milestones in the Application event log locally on the server, we create another function:

# Function to Write to the Application Event Log
function Write-ApplicationLog {
    param (
        [string]$functionName,
        [string]$message,
        [int]$eventID,
        [string]$entryType
    )
    # Ensure the event source exists
    if (-not (Get-EventLog -LogName Application -Source "BuildDomainJoin" -ErrorAction SilentlyContinue)) {
        New-EventLog -LogName Application -Source "BuildDomainJoin" -ErrorAction Stop
    }
    $formattedMessage = "$functionName : $message at $(Get-Date -Format 'HH:mm, dd-MMM-yyyy')."
    Write-EventLog -LogName Application -Source "BuildDomainJoin" -EventID $eventID -EntryType $entryType -Message $formattedMessage
}

To capture the success and failure events in the Application event log, we can create separate functions for each case. These functions can be called from other functions to capture the results.

Step 4: Function for the success events. We will use event ID 3011 to capture success, by creating a separate function. You can use any event ID of your choice, but do your due diligence to ensure that it does not conflict with any existing event IDs in use.

# Function to Log Success
function Log-Success {
    param (
        [string]$functionName,
        [string]$message
    )
    Write-ApplicationLog -functionName $functionName -message "Success: $message" -eventID 3011 -entryType "Information"
}

Step 5: To capture failure events, we'll create a separate function that uses event ID 3010. Ensure the chosen event ID does not conflict with any existing event IDs in use.

# Function to Log Failure
function Log-Failure {
    param (
        [string]$functionName,
        [string]$message
    )
    Write-ApplicationLog -functionName $functionName -message "Failed: $message" -eventID 3010 -entryType "Error"
}

Step 6: How to call and use the Log-Success function in the script. In case of successful completion of any task, call the function to write a success event in the Application log. For example, I used the below code in the "AD-RSAT-Module" function to report the successful completion of the module installation:

Log-Success -functionName "AD-RSAT-Module" -message "RSAT-AD-PowerShell feature and Active Directory Module imported successfully."

Step 7: How to call and use the Log-Failure function in the script. In case of a failure in any task, call the function to write a failure event in the Application log. For example, I used the below code in the "AD-RSAT-Module" function to report the failure along with the error it failed with. It also stops further processing of the PowerShell script:

$errorMessage = "Error during RSAT-AD-PowerShell feature installation or AD module import: $($_.Exception.Message)"
Log-Failure -functionName "RSAT-ADModule-Installation" -message $errorMessage
Exit
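To tie the pieces together, here is a minimal sketch of how the main body of the script might call these functions in sequence. The exact function names and parameters are assumptions based on the snippets above (only Check-AdminRights, Get-ServiceAccount-Creds and Join-Domain are defined verbatim in this post), so adjust them to match your own implementation:

# Hypothetical orchestration: run the checks, retrieve credentials, then join the domain
Check-AdminRights                                           # exits if not running elevated
AD-RSAT-Module                                              # assumed wrapper around the module install logic in Step 3
VM-Checks                                                   # assumed wrapper around the domain/port checks in Step 4
$domainController = Dc-Discovery                            # assumed to return the discovered DC from Step 5
Dc-ConnectivityCheck -DomainController $domainController    # assumed wrapper around the Step 6 checks

$creds = Get-ServiceAccount-Creds -KeyVaultName $KeyVaultName `
    -SrvUsernameSecretName $SrvUsernameSecretName `
    -SrvPasswordSecretName $SrvPasswordSecretName

Join-Domain -DomainName $DomainName -Creds $creds -DomainController $domainController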
With the implementation of an automated domain join solution, the process of integrating servers into Azure becomes more efficient and error-free. By leveraging PowerShell and Azure services, we've laid the foundation for future-proof scalability, reducing manual intervention and increasing reliability. This approach sets the stage for further automation in the migration process, providing a seamless experience as the organization continues to grow in the cloud.