Hi there - @Heinrich_Gantenbein and @Anthony_W here to talk to you about a new method for implementing Azure Policy as code!
Important Update: Enterprise Policy as Code has had some major improvements to performance and functionality, including improved speed, simpler settings management, more concise output, and new brownfield capabilities. These improvements introduce some breaking changes, which are covered in this document. If you have already deployed the solution, please follow that document to update your existing deployment. The contents of this article remain unchanged because the underlying concepts are the same.
We work closely with customers using Azure Policy and have seen many different methods of deploying and maintaining it, from manual processes to over-complicated automation; everyone has a unique way of doing it. This code was developed to make policy deployment and management simpler while providing full flexibility for complex environments. The driver was seeing the over-engineered methods and incomplete solutions in use and trying to produce something that could easily be implemented and managed by people with little knowledge of infrastructure as code, while still being scalable and maintainable.
The solution allows you to deploy policies, initiatives (set definitions), assignments and policy exemptions at scale with an easy-to-understand deployment and management structure. It is based on our enterprise-azure-policy-as-code solution, maintained by a team in the GitHub repository here. Some complexity is unavoidable in an enterprise scenario, and this article is the first in a series to help you deal with that complexity.
To get started, create an empty repository of your own, then clone or fork it locally on your laptop. Use the included Sync-Repo.ps1 script to populate your empty repository. The solution has a StarterKit folder containing definitions and pipelines; copy the relevant ones to your Definitions and Pipeline folders and modify the files as needed for your environment.
Alternatively, you can generate the definition folders by importing the definitions from Azure Landing Zone artifacts. We will cover this in the second article in this series. You will need the pipeline from the StarterKit.
Deployments are broken into three scripts: one builds a plan of the required changes, one deploys the Policies, Initiatives and Assignments from that plan, and one applies the role assignments the plan requires.
The system is built to deploy Policy and Initiative definitions at a single scope. Assignments must be made at that level of the hierarchy or below. Following the landing zone recommendations, definitions would be deployed at the "Contoso" Management Group.
In most environments you will differentiate the Policy Assignments by solution environment type (e.g., Sandbox, Dev, Test, Prod). This requires an additional Management Group level so that the differentiated Assignments can be applied efficiently.
Note: The CAF discourages this pattern, and the authors prefer not to differentiate security requirements between environments. However, it is common to have additional Policy requirements for Production environments which dictate higher-end SKUs (e.g., Key Vault HSM, App Service Environment, Confidential Compute, etc.). In that case, expand the above hierarchy contrary to the CAF guidance: duplicate the MGs in the blue box (under Landing Zones), creating Corp-NonProd, Corp-Prod, Online-NonProd and Online-Prod.
Finally, if you need further guidance in deciding what your management group structure should look like, and why Microsoft discourages deviation from the recommendations, please review the articles below:
Management group design and recommendations are out of scope for the rest of this article.
The starter kit contains a fully defined pipeline for Azure DevOps. If you adopt the solution’s EPAC environment names, folder structure and service connection naming, and have a single tenant, no changes are needed. The pipeline implements GitHub flow; however, it does not trigger a build when a PR is created from the feature branch. This is on purpose: planning the EPAC Production environment can be time consuming and, in most cases, Azure Policy is managed by a small team of 2 or 3. The PR approver can therefore use the Prod plan generated from the feature branch during the development CI for decision making.
The Definitions folder has one subfolder per type of definition: Policies, Initiatives, Assignments and Exemptions. A further Documentation subfolder drives the output from the Build-PolicyAssignmentDocumentation.ps1 script.
The global-settings.jsonc file in the Definitions folder is the most important file in the solution. Like any other software or X-as-Code solution, EPAC needs areas for developing and testing new Policies, Initiatives and Assignments before any deployment to the EPAC prod environment. In most cases you will need one subscription each for development and testing. EPAC's prod environment will govern all other IaC environments (e.g., sandbox, development, integration, test/qa, pre-prod, prod, ...). This can be confusing, so we will use "EPAC environment(s)" and "IaC environments" to disambiguate the term environment.
In a centralized single-tenant scenario, you will define three EPAC environments: epac-dev, epac-test and tenant1. For each EPAC environment, you must specify a pacSelector (the name of the EPAC environment), the cloud, the tenantId, the defaultSubscriptionId, and the rootScope (a subscription for the dev and test environments, or a management group for the tenant-wide environment).
Example (from starter kit):
{
"managedIdentityLocations": {
"*": "eastus2"
},
"globalNotScopes": {
"*": [
"/resourceGroupPatterns/excluded-rg*"
]
},
"pacEnvironments": [
{
"pacSelector": "epac-dev",
"cloud": "AzureCloud",
"tenantId": "77777777-8888-9999-1111-222222222222",
"defaultSubscriptionId": "11111111-2222-3333-4444-555555555555",
"rootScope": {
"SubscriptionId": "11111111-2222-3333-4444-555555555555"
}
},
{
"pacSelector": "epac-test",
"cloud": "AzureCloud",
"tenantId": "77777777-8888-9999-1111-222222222222",
"defaultSubscriptionId": "99999999-8888-7777-4444-333333333333",
"rootScope": {
"SubscriptionId": "99999999-8888-7777-4444-333333333333"
}
},
{
"pacSelector": "tenant1",
"cloud": "AzureCloud",
"tenantId": "77777777-8888-9999-1111-222222222222",
"defaultSubscriptionId": "99999999-8888-7777-4444-333333333333",
"rootScope": {
"ManagementGroupName": "Contoso-Root"
}
}
]
}
Policy definitions use Azure’s normal JSON format. You can organize them in folders (recommended).
Initiative (set definition) files are like Azure’s normal JSON format, except for two additions shown in the snippets below: importPolicyDefinitionGroups lets you import the policy definition groups of a built-in Initiative, and each policy reference uses a policyDefinitionName (the name of a built-in or custom Policy) instead of a full policyDefinitionId.
{
"importPolicyDefinitionGroups": [
"1f3afdf9-d0c9-4c3d-847f-89da613e70a8" // built-in Initiative definition (ASB v3)
]
}
{
"policyDefinitionReferenceId": "Enable Azure Defender for Resource Type AppServices",
"policyDefinitionName": "Enable Azure Defender for Resource Type",
"parameters": {
"effect": {
"value": "[parameters('effect')]"
},
"resourceProvider": {
"value": "AppServices"
}
},
"groupNames": [
"Azure_Security_Benchmark_v3.0_LT-1"
]
}
EPAC Assignment files are structured for minimal copy/paste and maximal efficiency; this approach does create some complexity. Let’s start with the simple assignments.
{
"nodeName": "/Loc/",
"assignment": {
"Name": "Allowed Locations",
"displayName": "Allowed Locations",
"description": "Sets the allowed locations - force update"
},
"definitionEntry": {
"initiativeName": "Allowed Locations",
"friendlyNameToDocumentIfGuid": ""
},
"parameters": {
"AllowedLocations": [
"eastus2"
]
},
"scope": {
"epac-dev": [
"/providers/Microsoft.Management/managementGroups/PAC-Demo-Dev"
],
"epac-test": [
"/providers/Microsoft.Management/managementGroups/PAC-Demo-Test"
],
"tenant1": [
"/providers/Microsoft.Management/managementGroups/Contoso-Demo-Root"
]
}
}
The node name is used in more complex scenarios to indicate the location of an error in the error messages.
Here we see a more complex assignment assigning two built-in Initiatives with different parameters for Prod and NonProd environments:
{
"nodeName": "/Security/",
"children": [
{
"nodeName": "Prod/",
"scope": {
"epac-test": [
"/providers/Microsoft.Management/managementGroups/PAC-Demo-Test"
],
"tenant1": [
"/providers/Microsoft.Management/managementGroups/Contoso-Demo-Prod"
]
},
"parameters": {
// ASB
"publicNetworkAccessOnAzureSQLDatabaseShouldBeDisabledMonitoringEffect": "Deny",
"disallowPublicBlobAccessEffect": "deny",
"secureTransferToStorageAccountMonitoringEffect": "Deny",
"publicNetworkAccessShouldBeDisabledForCognitiveServicesAccountsMonitoringEffect": "Deny",
"keyVaultsShouldHaveSoftDeleteEnabledMonitoringEffect": "Deny",
"keyVaultsShouldHavePurgeProtectionEnabledMonitoringEffect": "Deny",
"webApplicationFirewallShouldBeEnabledForAzureFrontDoorServiceServiceMonitoringEffect": "Deny",
"azureSpringCloudShouldUseNetworkInjectionMonitoringEffect": "Deny"
},
"children": [
{
"nodeName": "azure-security-benchmark",
"assignment": {
"name": "pr-asb",
"displayName": "Prod Azure Security Benchmark",
"description": "ASB Initiative parameterized for … "
},
"definitionEntry": {
"initiativeName": "1f3afdf9-d0c9-4c3d-847f-89da613e70a8",
"friendlyNameToDocumentIfGuid": "Azure Security Benchmark"
}
},
{
"nodeName": "nist-800-53-r5",
"assignment": {
"name": "pr-nist-800-53-r5",
"displayName": "Prod NIST SP 800-53 Rev. 5",
"description": "NIST SP 800-53 Rev. 5 Initiative … "
},
"definitionEntry": {
"initiativeName": "179d1daa-458f-4e47-8086-2a68d0d6c38f",
"friendlyNameToDocumentIfGuid": "NIST SP 800-53 Rev. 5"
}
}
]
},
{
"nodeName": "NonProd/",
"scope": {
"epac-dev": [
"/providers/Microsoft.Management/managementGroups/PAC-Demo-Dev"
],
"tenant1": [
"/providers/Microsoft.Management/managementGroups/Contoso-Demo-NonProd"
]
},
"children": [
{
"nodeName": "azure-security-benchmark",
"assignment": {
"name": "np-asb",
"displayName": "NonProd Azure Security Benchmark",
"description": "ASB Initiative parameterized for controlling … "
},
"definitionEntry": {
"initiativeName": "1f3afdf9-d0c9-4c3d-847f-89da613e70a8",
"friendlyNameToDocumentIfGuid": "Azure Security Benchmark"
}
},
{
"nodeName": "nist-800-53-r5",
"assignment": {
"name": "np-nist-800-53-r5",
"displayName": "NonProd NIST SP 800-53 Rev. 5",
"description": "NIST SP 800-53 Rev. 5 Initiative … "
},
"definitionEntry": {
"initiativeName": "179d1daa-458f-4e47-8086-2a68d0d6c38f",
"friendlyNameToDocumentIfGuid": "NIST SP 800-53 Rev. 5"
}
}
]
}
]
}
The tree structure is used for efficiency. The example above generates four assignments for the EPAC prod environment (2 environments x 2 Initiatives). The assignment fields are concatenated as strings down the hierarchy. scope and definitionEntry are each allowed exactly once in every branch of the tree. parameters are the union of all the parameters encountered in a branch; if the same parameter is specified again lower in the tree, the lower value takes precedence. notScopes are cumulative but cannot be added below the sole scope definition. While no depth limit is enforced, it is unusual to see more than five levels, since a file of that complexity would be hard to understand.
We will cover these in a future post. They are fully documented in the GitHub repository.
This is the simplest case possible – there is a built-in policy definition you want to assign at a certain scope. Because the policy only has an audit (or deny) effect, no role assignments (RBAC) are needed.
For this example, I’ve chosen to deploy the policy “Secure transfer to storage accounts should be enabled”. You can follow the steps below to see how to do this.
2. Because I’m deploying a built-in policy, I don’t need to add any files to the Initiatives or Policies folders.
3. To complete the assignment, I create a file in the Assignments folder – the structure of these files is quite flexible and there are multiple ways to organize the assignment files (e.g., by type or location), but for this example I’ve made it as simple as possible. The important parts are highlighted below – the scope section relates each EPAC environment to a scope in my Azure management group structure, and the policyName is the GUID of the built-in policy (from the Azure Policy blade). I derived the parameters from the policy definition.
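A minimal sketch of such an assignment file is shown here; the node name, assignment name, management group names and effect value are illustrative, and you should confirm the built-in policy GUID and its parameters against the Azure Policy blade in your own tenant:
{
    "nodeName": "/Storage/",
    "assignment": {
        "name": "secure-transfer",
        "displayName": "Secure transfer to storage accounts should be enabled",
        "description": "Audits storage accounts that allow insecure (HTTP) transfer"
    },
    "definitionEntry": {
        // GUID of the built-in policy, copied from the Azure Policy blade
        "policyName": "404c3081-a854-4457-ae30-26a93ef643f9",
        "friendlyNameToDocumentIfGuid": "Secure transfer to storage accounts should be enabled"
    },
    "parameters": {
        "effect": "Audit"
    },
    "scope": {
        "epac-dev": [
            "/providers/Microsoft.Management/managementGroups/PAC-Demo-Dev"
        ],
        "tenant1": [
            "/providers/Microsoft.Management/managementGroups/Contoso-Demo-Root"
        ]
    }
}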
4. While I could have a pipeline deploy this, to show how it works a bit better I ran the script manually as below, specifying the environment to plan for and the SuppressDeletes switch; without that switch the plan would also include removing any existing policy assignments not defined in the code.
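For reference, the command I ran looks roughly like the following; the parameter names are taken from the scripts in the repository, so verify them against your copy:
# Build the deployment plan for the epac-dev EPAC environment without planning any deletions
.\Build-AzPoliciesInitiativesAssignmentsPlan.ps1 -PacEnvironmentSelector "epac-dev" -SuppressDeletes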
5. The output looks like below, and in my Output folder it will generate a plan of the actions the deployment is going to make.
6. To run the plan, I call the Deploy-AzPoliciesInitiativesAssignmentsFromPlan.ps1 script and provide the environment. The script reads the plan file from the Output folder and makes the necessary changes. For a policy assignment that needs roles assigned, it also generates a plan for role assignments, which can be run separately. I can verify in the portal that the policy has been deployed.
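The deploy step looks roughly like this; again, verify the parameter name against the script in your copy of the repository:
# Apply the plan from the Output folder to the selected EPAC environment
.\Deploy-AzPoliciesInitiativesAssignmentsFromPlan.ps1 -PacEnvironmentSelector "epac-dev"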
Moving on now to some other deployment cases....
This time we have a custom policy – there is no initiative and no RBAC. We must add the definition and then create an assignment.
For the example I have a simple policy to deny the creation of a storage account that does not have the correct minimum TLS version.
1. Put the custom definition in a file in the Policies folder. An example is given below; the file contains just the properties of the policy definition in JSON format.
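A sketch of what such a properties file might contain is shown here; the display name, parameter and policy rule are illustrative of the pattern rather than the exact policy used in this walkthrough:
{
    "displayName": "Deny storage accounts without the minimum TLS version",
    "description": "Denies creation of storage accounts that do not require the specified minimum TLS version.",
    "mode": "Indexed",
    "parameters": {
        "minimumTlsVersion": {
            "type": "String",
            "defaultValue": "TLS1_2",
            "metadata": {
                "displayName": "Minimum TLS version",
                "description": "The minimum TLS version storage accounts must require."
            }
        }
    },
    "policyRule": {
        "if": {
            "allOf": [
                {
                    "field": "type",
                    "equals": "Microsoft.Storage/storageAccounts"
                },
                {
                    "field": "Microsoft.Storage/storageAccounts/minimumTlsVersion",
                    "notEquals": "[parameters('minimumTlsVersion')]"
                }
            ]
        },
        "then": {
            "effect": "deny"
        }
    }
}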
2. As before I can run the Build-AzPoliciesInitiativesAssignmentsPlan.ps1 script and the output will show me that a new policy is to be added.
3. I could run the Deploy script and push that policy out – but I can just create the assignment and rerun to create a new plan. I’ll just add a new assignment object to the previous assignment file to keep it simple. You’ll notice I changed the structure of that assignment file so that I can deploy multiple assignments. The different node names are used to organize the policies and assist with troubleshooting.
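Restructured to hold multiple assignments, the file now looks roughly like this; the node names, assignment names and the name of the custom policy are illustrative (the custom policy is referenced by the name it was given when it was defined):
{
    "nodeName": "/Storage/",
    "scope": {
        "epac-dev": [
            "/providers/Microsoft.Management/managementGroups/PAC-Demo-Dev"
        ],
        "tenant1": [
            "/providers/Microsoft.Management/managementGroups/Contoso-Demo-Root"
        ]
    },
    "children": [
        {
            "nodeName": "secure-transfer",
            "assignment": {
                "name": "secure-transfer",
                "displayName": "Secure transfer to storage accounts should be enabled",
                "description": "Audits storage accounts that allow insecure (HTTP) transfer"
            },
            "definitionEntry": {
                "policyName": "404c3081-a854-4457-ae30-26a93ef643f9",
                "friendlyNameToDocumentIfGuid": "Secure transfer to storage accounts should be enabled"
            },
            "parameters": {
                "effect": "Audit"
            }
        },
        {
            "nodeName": "minimum-tls",
            "assignment": {
                "name": "deny-storage-min-tls",
                "displayName": "Deny storage accounts without the minimum TLS version",
                "description": "Denies storage accounts that do not require TLS 1.2"
            },
            "definitionEntry": {
                "policyName": "deny-storage-minimum-tls",
                "friendlyNameToDocumentIfGuid": ""
            }
        }
    ]
}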
4. Again, running the build script shows me I now have one new policy to deploy and one new assignment – generating this plan makes it easier to see what is going to be changed in the environment and can help prevent deployment mistakes.
5. All I do now is run the Deploy script and EPAC takes care of the rest – I now have my custom policy deployed and assigned.
On to the next case...
This time I’m going to take both the custom policy and the built-in policy and combine them into an initiative and assign it at a scope.
The custom initiative is a JSON file with some changes as described in the Initiatives section above.
1. Create the custom initiative and place it in the Initiatives folder. An example of what the file should look like is shown below.
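A sketch of the key content of such an initiative file, assuming the custom initiative is named "storage-security" and combines the built-in secure transfer policy with the custom minimum TLS policy (the display name, parameter and reference IDs are illustrative; see the StarterKit for complete examples):
{
    "displayName": "Storage Security",
    "description": "Combines the built-in secure transfer policy with the custom minimum TLS policy.",
    "parameters": {
        "secureTransferEffect": {
            "type": "String",
            "defaultValue": "Audit",
            "allowedValues": [
                "Audit",
                "Deny",
                "Disabled"
            ]
        }
    },
    "policyDefinitions": [
        {
            "policyDefinitionReferenceId": "Secure transfer to storage accounts should be enabled",
            "policyDefinitionName": "404c3081-a854-4457-ae30-26a93ef643f9",
            "parameters": {
                "effect": {
                    "value": "[parameters('secureTransferEffect')]"
                }
            }
        },
        {
            "policyDefinitionReferenceId": "Deny storage accounts without the minimum TLS version",
            "policyDefinitionName": "deny-storage-minimum-tls",
            "parameters": {}
        }
    ]
}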
2. For the assignment we start to see the power of how this solution works – this time I can modify my assignment file to deploy just the initiative.
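The reworked assignment file then references the custom initiative by its name, roughly as follows (names, scopes and the parameter value are illustrative):
{
    "nodeName": "/Storage/",
    "assignment": {
        "name": "storage-security",
        "displayName": "Storage Security",
        "description": "Assigns the combined storage security initiative"
    },
    "definitionEntry": {
        // the name given to the custom initiative in its definition file
        "initiativeName": "storage-security",
        "friendlyNameToDocumentIfGuid": ""
    },
    "parameters": {
        "secureTransferEffect": "Deny"
    },
    "scope": {
        "epac-dev": [
            "/providers/Microsoft.Management/managementGroups/PAC-Demo-Dev"
        ],
        "tenant1": [
            "/providers/Microsoft.Management/managementGroups/Contoso-Demo-Root"
        ]
    }
}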
3. Run the Build script and examine the results – because the EPAC solution checks the state in Azure before deploying and creates a plan, I can see it is going to remove the two assignments I created previously and add a new assignment. This means that the code becomes the source of truth for what policies are deployed in Azure.
4. Run the Deploy script and EPAC takes care of the rest – I can verify that the previous assignments were removed and that my new initiative and assignment exist.
The policy initiative is now deployed successfully.
The final case deals with Deploy If Not Exists and Modify effect policies – these have the extra requirement to deal with Azure Role Based Access Control for the policy managed identity.
These policies need RBAC permissions so the policy's managed identity can make changes to an object when the policy effect runs. EPAC handles this scenario by generating a separate plan for role assignments; let's look at how it works.
1. I’m going to deploy the built-in policy "Inherit a tag from the subscription if missing", which has a Modify effect. I’ll just create a simple assignment file to deploy the policy, as below.
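A sketch of such an assignment file; the node and assignment names and the tag name are illustrative, and the policyName placeholder should be replaced with the GUID of the built-in policy copied from the Azure Policy blade:
{
    "nodeName": "/Tags/",
    "assignment": {
        "name": "inherit-tag-sub",
        "displayName": "Inherit a tag from the subscription if missing",
        "description": "Adds the specified tag and its value from the subscription when a resource missing the tag is created or updated"
    },
    "definitionEntry": {
        // replace with the GUID of the built-in policy from the Azure Policy blade
        "policyName": "00000000-0000-0000-0000-000000000000",
        "friendlyNameToDocumentIfGuid": "Inherit a tag from the subscription if missing"
    },
    "parameters": {
        "tagName": "CostCenter"
    },
    "scope": {
        "epac-dev": [
            "/providers/Microsoft.Management/managementGroups/PAC-Demo-Dev"
        ],
        "tenant1": [
            "/providers/Microsoft.Management/managementGroups/Contoso-Demo-Root"
        ]
    }
}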
2. When I run the Deploy script the output will be a little bit different.
The script will generate a plan for role assignments to be added as well – it passes through each policy being deployed and finds all the applicable role assignments.
3. I can deploy the new role assignments by running the script below.
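In the copy of the repository I used, the role assignment script is Set-AzPolicyRolesFromPlan.ps1; check the Scripts folder of your copy for the exact name and parameters:
# Apply the role assignments from the roles plan generated in the Output folder
.\Set-AzPolicyRolesFromPlan.ps1 -PacEnvironmentSelector "epac-dev"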
And when I check the role assignments on the management group I can see it has added the managed identity.
This covers most of the cases for policy deployment; however, you will likely produce different combinations of them in your own environment.
Remember to thoroughly test the code and policies in a safe environment before deploying to production and if there are any issues with the code, please feel free to raise them on the GitHub project.
The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.