Announcing MSGraph Provider Public Preview and the Microsoft Terraform VSCode Extension
We are thrilled to announce two exciting developments in the Microsoft ecosystem for Terraform infrastructure-as-code (IaC) practitioners: the public preview of the Terraform Microsoft Graph (MSGraph) provider and the release of the Microsoft Terraform Visual Studio Code (VSCode) extension. These innovations are designed to streamline your workflow, empower your automation, and make managing Microsoft cloud resources easier than ever. Public Preview: Terraform Microsoft Graph (MSGraph) Provider The Terraform MSGraph provider empowers you to manage Entra APIs like privileged identity management as well as M365 Graph APIs like SharePoint sites from day 0 by leveraging the power and flexibility of HashiCorp Configuration Language (HCL) in Terraform. resource "msgraph_resource" "application" { url = "applications" body = { displayName = "My Application" } response_export_values = { all = "@" app_id = "appId" } } output "app_id" { value = msgraph_resource.application.output.app_id } output "all" { // it will output the whole response value = msgraph_resource.application.output.all } Historically, Terraform users could utilize the `azuread` provider to manage Entra features like users, groups, service principals, and applications. The new `msgraph` provider also supports these features and extends functionality to all beta and v1 Microsoft Graph endpoints. Querying role assignments for a service principal The below example shows how to use the `msgraph` provider to grant app permissions to a service principal: locals { MicrosoftGraphAppId = "00000003-0000-0000-c000-000000000000" # AppRoleAssignment userReadAllAppRoleId = one([for role in data.msgraph_resource.servicePrincipal_msgraph.output.all.value[0].appRoles : role.id if role.value == "User.Read.All"]) userReadWriteRoleId = one([for role in data.msgraph_resource.servicePrincipal_msgraph.output.all.value[0].oauth2PermissionScopes : role.id if role.value == "User.ReadWrite"]) # ServicePrincipal MSGraphServicePrincipalId = data.msgraph_resource.servicePrincipal_msgraph.output.all.value[0].id TestApplicationServicePrincipalId = msgraph_resource.servicePrincipal_application.output.all.id } data "msgraph_resource" "servicePrincipal_msgraph" { url = "servicePrincipals" query_parameters = { "$filter" = ["appId eq '${local.MicrosoftGraphAppId}'"] } response_export_values = { all = "@" } } resource "msgraph_resource" "application" { url = "applications" body = { displayName = "My Application" requiredResourceAccess = [ { resourceAppId = local.MicrosoftGraphAppId resourceAccess = [ { id = local.userReadAllAppRoleId type = "Scope" }, { id = local.userReadWriteRoleId type = "Scope" } ] } ] } response_export_values = { appId = "appId" } } resource "msgraph_resource" "servicePrincipal_application" { url = "servicePrincipals" body = { appId = msgraph_resource.application.output.appId } response_export_values = { all = "@" } } resource "msgraph_resource" "appRoleAssignment" { url = "servicePrincipals/${local.MSGraphServicePrincipalId}/appRoleAssignments" body = { appRoleId = local.userReadAllAppRoleId principalId = local.TestApplicationServicePrincipalId resourceId = local.MSGraphServicePrincipalId } } SharePoint & Outlook Notifications With your service principals properly configured, you can set up M365 endpoint workflows such an outlook notification template list as shown below. 
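Before that example, note that all of the snippets in this post assume the msgraph provider itself is configured in your Terraform configuration. A minimal sketch might look like the following; the registry address, version constraint, and authentication behavior are assumptions and should be confirmed against the provider's registry page:

terraform {
  required_providers {
    msgraph = {
      source  = "Microsoft/msgraph" # assumed registry address - confirm on the provider's registry page
      version = "~> 0.1"            # assumed constraint - pin to the latest published preview version
    }
  }
}

# Assumes you are signed in with the Azure CLI (az login) or have
# equivalent environment credentials available to the provider.
provider "msgraph" {}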
The actual service principal setup has been omitted from this code sample for the sake of brevity, but you will need Sites.Manage.All, Sites.ReadWrite.All, User.Read, and User.Read.All permissions for this example to work: data "msgraph_resource" "sharepoint_site_by_path" { url = "sites/microsoft.sharepoint.com:/sites/msgraphtest:" response_export_values = { full_response = "@" site_id = "id || ''" } } resource "msgraph_resource" "notification_templates_list" { url = "sites/${msgraph_resource.sharepoint_site_by_path.output.site_id}/lists" body = { displayName = "DevOps Notification Templates" description = "Centrally managed email templates for DevOps automation" template = "genericList" columns = [ { name = "TemplateName" text = { allowMultipleLines = false appendChangesToExistingText = false linesForEditing = 1 maxLength = 255 } }, { name = "Subject" text = { allowMultipleLines = false appendChangesToExistingText = false linesForEditing = 1 maxLength = 500 } }, { name = "HtmlBody" text = { allowMultipleLines = true appendChangesToExistingText = false linesForEditing = 10 maxLength = 10000 } }, { name = "Recipients" text = { allowMultipleLines = true appendChangesToExistingText = false linesForEditing = 3 maxLength = 1000 } }, { name = "TriggerConditions" text = { allowMultipleLines = true appendChangesToExistingText = false linesForEditing = 5 maxLength = 2000 } } ] } response_export_values = { list_id = "id" list_name = "displayName" web_url = "webUrl" } } The MSGraph provider is to AzureAD as the AzAPI provider is to AzureRM. Since support for resource types is automatic, you can access the latest features and functionality as soon as they're released via the provider. AzureAD will continue to serve as the convenience layer implementation of a subset of Entra APIs. We invite you to try the new provider today: - Deploy your first msgraph resources - Check out the registry page - Visit the provider GitHub Introducing the Microsoft Terraform VSCode Extension The new official Microsoft Terraform extension for Visual Studio Code consolidates AzureRM, AzAPI, and MSGraph VSCode support into a single powerful extension. The extension supports exporting Azure resources as Terraform code, as well as IntelliSense, syntax highlighting, and code sample generation. It replaces the Azure Terraform and AzAPI VSCode extensions and adds some new features. Installation & Migration New users can install the extension by searching “Microsoft Terraform” within Visual Studio Marketplace or their “Extensions” tab. Users can also click this link to the Visual Studio marketplace. Users of the “Azure Terraform” extension can navigate to “Extensions” tab and selecting the old extension. Select the “Migrate” button to move to the new extension. Users of the “Terraform AzAPI Provider” extension will be directed to the new extension: New Features Export Azure Resources As Terraform This feature allows you to export existing Azure resources as Terraform configuration blocks using Azure Export for Terraform. This helps you migrate existing Azure resources to Terraform-managed infrastructure. Open the Command Palette (Command+Shift+P on macOS and Ctrl+Shift+P on Windows/Linux). Search for and select the command Microsoft Terraform: Export Azure Resource as Terraform. Follow the prompts to select the Azure subscription and resource group containing the resources you want to export. Select the azurerm provider or the azapi provider to export the resources. 
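After the final prompt, the extension produces configuration roughly in the shape of the sketch below (a hypothetical azurerm example; resource names such as res-0 and all property values depend entirely on what you export):

resource "azurerm_resource_group" "res-0" {
  # Generated from an existing resource group; values reflect the deployed resource.
  name     = "rg-example"
  location = "eastus"
}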
The extension will generate the Terraform configuration blocks for the selected resources and display them in a new editor tab.

Support for MSGraph
The new extension comes fully equipped with IntelliSense, code completion, and code samples for the MSGraph provider, just like the AzAPI provider. See the next section for recorded examples of these features with the AzureRM and AzAPI providers.

Preexisting Features
- Intelligent Code Completion: Benefit from context-aware suggestions, such as property names or resource types.
- Code Samples: Quickly insert code samples for your resources.
- Paste as AzAPI: Copy your existing resource JSON or ARM templates into VSCode with the Microsoft Terraform extension, and it will automatically convert your code into AzAPI. The referenced example takes a resource JSON from the Azure Portal and pastes it into VSCode as AzAPI.
- Migrate AzureRM to AzAPI: Move existing AzureRM code to the AzAPI provider whenever you choose. Read more in the Guide to migrate AzureRM resources to AzAPI.

Feedback
We value your feedback! You can share your experience with the Microsoft Terraform extension by running the command Microsoft Terraform: Show Survey from the Command Palette. Your input helps us improve the extension and better serve your needs.

Conclusion
Whether you are managing traditional Azure resources, modern Microsoft Graph environments, or a combination of both, the new MSGraph provider and Microsoft Terraform VSCode extension are designed to help you deliver robust, reliable infrastructure, faster and with greater confidence. Stay tuned for further updates, workshops, and community events as we continue to evolve these offerings. Your feedback and participation are invaluable as we build the next generation of infrastructure automation together.

Terraform Azure Verified Modules for Platform Landing Zone (ALZ) Migration Guidance and Tooling
We are very pleased to announce that migration guidance and tooling to aid Terraform import is now moving from public preview to general availability.

Where to find it
Head over to aka.ms/alz/tf/migrate to read our guidance and find our tooling.

What does it do
The migration guidance walks you through the procedure to migrate Terraform state from the classic CAF Enterprise Scale module to the Terraform Azure Verified Modules for Platform Landing Zone (ALZ) modules. The guidance and tooling help you generate a set of Terraform import blocks to import the state of your existing platform landing zones into the Azure Verified Modules (AVM). Once those blocks have been generated, you can raise a pull request, test, merge, and then apply with your continuous delivery tool to import the state. From there forward, you will be managing the platform landing zone with the AVM modules. A sketch of what a generated import block can look like is shown after the process description below.

How does it work
The migration tool helps map your deployed Microsoft Azure resources against the Azure Verified Modules, matching on name or other available attributes. The tool follows a three-stage process:
- Setup
- Resource Mapping
- Resource Attribute Mapping

Setup
This stage involves configuring the target Terraform module by using the ALZ IaC Accelerator, or composing your own module for advanced use cases. You will also need to identify your existing management group hierarchy and platform subscriptions in this stage.

Resource Mapping
During this stage you will run the migration tool. The tool will attempt to match all resources in the target subscriptions and/or management groups against the Azure Verified Module planned resources. For anything it can't match, it will provide details in a file called issues.csv. You'll review issues.csv and correct any resource names in the target module to ensure they match your existing resource names. You'll then run the tool again and repeat until you have matched everything you can. We provide example tfvars files and a lib folder to make this easier; they are commented with the things you'll likely need to change. Once you have updated all the names you can, if you still have any issues left in issues.csv, you'll need to specify what you want to do with them. You can either:
- Ignore them
- Destroy and recreate them
- Destroy them
You'll add the action against each row in the CSV and then save the file as resolved-issues.csv, ready for the next stage.

Resource Attribute Mapping
Now that you've mapped the resources themselves, you'll need to check that the attributes of the resources also match your existing configuration where they need to. To help with this you'll run the tool again, this time supplying your resolved-issues.csv as an input. This prompts the tool to generate the Terraform import blocks and run a Terraform plan. The tool outputs a simplified plan file that only includes the changes you need to care about, namely updates and destroy-and-recreates. You'll review the simplified plan file and determine if anything in there requires an update to your target module. If it does, you can update it and re-run the tool, repeating until all unwanted changes are handled. You'll run the tool one last time to generate the final set of imports, and then you are ready to apply the Terraform via your standard CI/CD process.
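To make the output concrete, a generated import block is standard Terraform (1.5+) import syntax pairing a resource address in the target module with an Azure resource ID. The sketch below is illustrative only; the exact addresses depend on the AVM module internals and your configuration, and the tool generates them for you:

# Hypothetical example of a generated import block - the address and ID
# will differ based on your module composition and environment.
import {
  to = module.management_groups.azurerm_management_group.this["alz-platform"]
  id = "/providers/Microsoft.Management/managementGroups/alz-platform"
}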
Limitations
At this time our guidance only supports resources that can be deployed by the classic CAF Enterprise Scale module. The tooling can technically support importing any resources, but we don't provide support or guidance for that scenario. Documentation for advanced scenarios is currently limited, and we assume usage for this use case only at this time.

Tooling
This migration guidance uses a generic tool called Terraform State Importer. This tool can be used to migrate state for any Azure Resource Manager Terraform resources to a new Terraform module. We provide specific configuration files and settings for this use case, but you could modify them for more advanced scenarios. The tool does not look at any existing module or Terraform state file; it directly queries Azure using KQL queries to identify your deployed resources, so it could also be used to import resources deployed via ClickOps, ARM, or Bicep.

Thanks
Thanks to the following people for making this happen:
- Matt White and Jack Tracey for technical guidance and validation
- Paul Grimley and Charlie Grabiaud for keeping it on track
- Haflidi Fridthjofsson (Microsoft) and Aidan Hughes (Servent) for the comprehensive and very valuable testing and feedback

Azure Verified Modules: Support Statement & Target Response Times Update
We are announcing an update to the Azure Verified Modules (AVM) support statement. This change reflects our commitment to providing clarity alongside timely and effective support for our community and AVM module consumers. These changes prepare us to enable AVM modules to be published as V1.X.X modules (a future announcement on this is coming soon 🥳; sign up to the next AVM Community Call on July 1st 2025 to learn more).

What is the new support statement?
You can find the support statement on the AVM website here: https://azure.github.io/Azure-Verified-Modules/help-support/module-support/#support-statements

For bugs/security issues:
- 5 business days for a triage, meaningful response, and ETA to be provided for fix/resolution by the module owner (which could be past the 5 days).
- For issues that breach the 5 business days, the AVM core team will be notified and will attempt to respond to the issue within an additional 5 business days to assist in triage.
- For security issues, the Bicep or Terraform Product Groups may step in to resolve security issues, if unresolved, after a further additional 5 business days.

For feature requests:
- 15 business days for a meaningful response and initial triage to understand the feature request. An ETA may be provided by the module owner if possible.

Key changes from the previous support statement
In short, there are two changes:
- Increasing target response times from 3 to 5 business days for issues, and from 3 to 5 business days for AVM core team escalation.
- Handling bugs/security issues separately from feature requests; feature requests now have a 15 business day target response time.

The previous support statement outlined a more rigid structure for issue triage and resolution. It required module owners/contributors to respond within 3 business days, with the AVM core team stepping in if there was no response within a further 24 hours. In the event of a security issue being unaddressed after 5 business days, escalation to the product group (Bicep/Terraform) would occur to assist the AVM core team. There was also no differentiation between bugs/security issues and feature requests, which there now is. You can view the git diff of the support statement here.

Why the changes?
Being honest, we weren't meeting the previous support statement 100% of the time, which is what we strive for across all the AVM modules. We heard from you that this wasn't ideal, and we agree wholeheartedly. Therefore, we took a step back, reflected, looked at the data available, and huddled together to redefine what the new AVM support statement and targets should be.

"Yeah, but why can't you just meet the previous support statement and targets?"
This is a very valid question that you may have or be wondering about. We want to be honest with you, so here are the reasons why this isn't possible today:
- Module owners are not 100% dedicated to only supporting their AVM modules; they also have other daily roles and responsibilities in their jobs at Microsoft. Sometimes this also means conflicting priorities for module owners, and they have to make a priority call.
- We underestimated the impact of holidays, annual leave, public holidays, etc.
- The AVM core team's responsibility is not to resolve all module issues/requests, as they are a smaller team driving the AVM framework, specs, tooling, and tests.
They will of course step in when needed, as they have done so far today 👍
- We don't get as many contributions from the open-source community as we expected and would still love to see 😉 For clarity, we always love to see a Pull Request to help us add new features or resolve bugs and issues, even for simple things like typos. It really does help us go faster 🏃➡️

"How are you going to try and avoid changing (increasing) the support statement and targets in the future?"
Again, another very valid ask, and we reflected on this when making the changes to the support statement we are announcing here. To avoid this potential risk we are also taking the following actions today:
- Building new internal tooling and dashboards for module owners to discover, track, and monitor their issues and pull requests across the various modules they may own, across multiple languages (already complete and published 👍). This tooling will also help the AVM core team track issues and report on them more easily to help module owners avoid non-compliance with the targets.
- Continuing to push for, promote, and encourage open-source community contributions.
- Preventing AVM modules from being published as V1.X.X if they are unable to prove compliance with the new support statement and targets (a sneak peek into the V1.X.X requirements).

Looking further into the future, we are also investigating the following:
- Building a dedicated AVM team, separate from the AVM core team, that will triage, work on, and fix/resolve issues that are nearing or breaching the support statement and targets. They will also look into feature requests as and where time allows, or where requests are heavily upvoted and module owners are unable to prioritize them in the near future due to other priorities.
- Seeing where AI and other automation tooling can assist with issue triage and resolution to reduce module owner workload.

Summary
We hope this provides you with a clear understanding of the changes to the AVM support statement and targets and why we are making them. We also hope you appreciate our honesty about the situation and can see that we are taking action to make things better, while also amending our support statements to be more realistic based on the past two years of launching and running AVM. Finally, we want to reassure everyone that we remain committed to AVM and have big plans for the rest of the calendar year and beyond! 😎 With this in mind, we want to remind you to sign up to the next AVM Community Call on July 1st 2025 to learn more and ask any questions on this topic or anything else AVM related with the rest of the community 👍

Thanks
The AVM Core Team

April 2025 Recap: Azure Database for PostgreSQL Flexible Server
Hello Azure Community, April has brought powerful capabilities to Azure Database for PostgreSQL flexible server, On-Demand backups are now Generally Available, a new Terraform version for our latest REST API has been released, the Public Preview of the MCP Server is now live, and there are also a few other updates that we are excited to share in this blog. Stay tuned as we dive into the details of these new features and how they can benefit you! Feature Highlights General Availability of On-Demand Backups Public Preview of Model Context Protocol (MCP) Server Additional Tuning Parameters in PG 17 Terraform resource released for latest REST API version General Availability of pg_cron extension in PG 17 General Availability of On-Demand Backups We are excited to announce General Availability of On-Demand backups for Azure Database for PostgreSQL flexible server. With this it becomes easier to streamline the process of backup management, including automated, scheduled storage volume snapshots encompassing the entire database instance and all associated transaction logs. On-demand backups provide you with the flexibility to initiate backups at any time, supplementing the existing scheduled backups. This capability is useful for scenarios such as application upgrades, schema modifications, or major version upgrades. For instance, before making schema changes, you can take a database backup, in an unlikely case, if you run into any issues, you can quickly restore (PITR) database back to a point before the schema changes were initiated. Similarly, during major version upgrades, on-demand backups provide a safety net, allowing you to revert to a previous state if anything goes wrong. In the absence of on-demand backup, the PITR could take much longer as it would need to take the last snapshot which could be 24 hours earlier and then replay the WAL. Azure Database for PostgreSQL flexible server already does on-demand backup behind the scenes for you and then deletes it when the upgrade is successful. Key Benefits: Immediate Backup Creation: Trigger full backups instantly. Cost Control: Delete on-demand backups when no longer needed. Improved Safety: Safeguard data before major changes or refreshes. Easy Access: Use via Azure Portal, CLI, ARM templates, or REST APIs. For more details and on how to get started, check out this announcement blog post. Create your first on-demand backup using the Azure portal or Azure CLI. Public Preview of Model Context Protocol (MCP) Server Model Context Protocol (MCP) is a new and emerging open protocol designed to integrate AI models with the environments where your data and tools reside in a scalable, standardized, and secure manner. We are excited to introduce the Public Preview of MCP Server for Azure Database for PostgreSQL flexible server which enables your AI applications and models to talk to your data hosted in Azure Database for PostgreSQL flexible servers according to the MCP standard. The MCP Server exposes a suite of tools including listing databases, tables, and schema information, reading and writing data, creating and dropping tables, listing Azure Database for PostgreSQL configurations, retrieving server parameter values, and more. You can either build custom AI apps and agents with MCP clients to invoke these capabilities or use AI tools like Claude Desktop and GitHub Copilot in Visual Studio Code to interact with your Azure PostgreSQL data simply by asking questions in plain English. 
For more details and demos on how to get started, check out this announcement blog post. Additional Tuning Parameters in PG17 We have now provided an expanded set of configuration parameters in Azure Database for PostgreSQL flexible server (V17) that allows you to modify and have greater control to optimize your database performance for unique workloads. You can now tune internal buffer settings like commit timestamp, multixact member and offset, notify, serializable, subtransaction, and transaction buffers, allowing you to better manage memory and concurrency in high-throughput environments. Additionally, you can also configure parallel append, plan cache mode, and event triggers that opens powerful optimization and automation opportunities for analytical workloads and custom logic execution. This gives you more control for memory intensive and high-concurrency applications, increased control over execution plans and allowing parallel execution of queries. To get started, all newly modifiable parameters are available now through the Azure portal, Azure CLI, and ARM templates, just like any other server configuration setting. To learn more, visit our Server Parameter Documentation. Terraform resource released for latest REST API version A new version of the Terraform resource for Azure Databases for PostgreSQL flexible server is now available, this brings several key improvements including the ability to easily revive dropped databases with geo-redundancy and customer-managed keys (Geo + CMK - Revive Dropped), seamless switchover of read replicas to a new site (Read Replicas - Switchover), improved connectivity through virtual endpoints for read replicas, and using on-demand backups for your servers. To get started with Terraform support, please follow this link: Deploy Azure Database for PostgreSQL flexible server with Terraform General Availability of pg_cron extension in PG 17 We’re excited to announce that the pg_cron extension is now supported in Azure Database for PostgreSQL flexible server major versions including PostgreSQL 17. This extension enables simple, time-based job scheduling directly within your database, making maintenance and automation tasks easier than ever. You can get started today by enabling the extension through the Azure portal or CLI. To learn more, please refer Azure Database for PostgreSQL flexible server list of extensions. Azure Postgres Learning Bytes 🎓 Setting up alerts for Azure Database PostgreSQL flexible server using Terraform Monitoring metrics and setting up alerts for your Azure Database for PostgreSQL flexible server instance is crucial for maintaining optimal performance and troubleshooting workload issues. By configuring alerts, you can track key metrics like CPU usage and storage etc. and receive notifications by creating an action group for your alert metrics. This guide will walk you through the process of setting up alerts using Terraform. First, create an instance of Azure Database for PostgreSQL flexible server (if not already created) Next, create a Terraform File and add these resources 'azurerm_monitor_action_group', 'azurerm_monitor_metric_alert' as shown below. 
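Note that the alert example below references an existing server through data.azurerm_postgresql_flexible_server.demo, which isn't defined in the snippet itself. A minimal sketch of that data source might look like this (the names are placeholders for your own server and resource group):

# Looks up the existing Azure Database for PostgreSQL flexible server
# so its ID can be used as the alert scope.
data "azurerm_postgresql_flexible_server" "demo" {
  name                = "<server-name>"
  resource_group_name = "<rg-name>"
}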
resource "azurerm_monitor_action_group" "example" { name = "<action-group-name>" resource_group_name = "<rg-name>" short_name = "<short-name>" email_receiver { name = "sendalerts" email_address = "<youremail>" use_common_alert_schema = true } } resource "azurerm_monitor_metric_alert" "example" { name = "<alert-name>" resource_group_name = "<rg-name>" scopes = [data.azurerm_postgresql_flexible_server.demo.id] description = "Alert when CPU usage is high" severity = 3 frequency = "PT5M" window_size = "PT5M" enabled = true criteria { metric_namespace = "Microsoft.DBforPostgreSQL/flexibleServers" metric_name = "cpu_percent" aggregation = "Average" operator = "GreaterThan" threshold = 80 } action { action_group_id = azurerm_monitor_action_group.example.id } } 3. Run the terraform initialize, plan and apply commands to create an action group and attach a metric to the Azure Database for PostgreSQL flexible server instance. terraform init -upgrade terraform plan -out <file-name> terraform apply <file-name>.tfplan Note: This script assumes you have already created an Azure Database for PostgreSQL flexible server instance. To verify your alert, check the Azure portal under Monitoring -> Alerts -> Alert Rules tab. Conclusion That's a wrap for the April 2025 feature updates! Stay tuned for our Build announcements, as we have a lot of exciting updates and enhancements for Azure Database for PostgreSQL flexible server coming up this month. We’ve also published our Yearly Recap Blog, highlighting many improvements and announcements we’ve delivered over the past year. Take a look at our yearly recap blog here: What's new with Postgres at Microsoft, 2025 edition We are always dedicated to improving our service with new array of features, if you have any feedback or suggestions we would love to hear from you. 📢 Share your thoughts here: aka.ms/pgfeedback Thanks for being part of our growing Azure Postgres community.Announcing Public Preview of Terraform Export from the Azure Portal
Scenario
Imagine you have an existing networking configuration you would like to bring to Terraform. Whether you're just learning Terraform or are an expert, understanding how Azure resources are reflected in the azurerm and azapi providers is critical to your team. With Terraform export, you can quickly see how your resources are represented in either provider, whether it's one resource from the configuration or the entire resource group.

Benefits
Azure Export for Terraform is a tool designed to provide a seamless and efficient way to generate Terraform configuration files that accurately represent your Azure resources. With the new Portal experience, you can easily understand your infrastructure's representation in either the AzureRM or AzAPI providers within Terraform.

Usage
Prerequisite: Subscriptions will need to register the Microsoft.AzureTerraform resource provider (a CLI sketch for this is included at the end of this post).
Portal Usage: Find the experience in the Automation tab under the "Export template" blade. This experience is supported for individual resources as well as resource groups.

Next Steps
We invite you to try out the Azure Portal Export for Terraform feature and share your feedback with us via the Feedback button. Your input is valuable as we continue to improve and expand our offerings to better meet your needs. For scripting or exporting many resource groups or resource types, we encourage you to check out the Azure Export for Terraform tool, which comes with customization features. If you wish to utilize the underlying APIs directly or via CLI/PS, visit the new Azure Terraform resource provider documentation. As always, thank you for being a part of our Azure Terraform community and for your continued support.
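As a quick reference for the prerequisite above, resource provider registration can be done with the Azure CLI. This is a minimal sketch; the provider namespace is taken from the article, and registration can take a few minutes to complete:

# Register the resource provider on the current subscription
az provider register --namespace Microsoft.AzureTerraform

# Check the registration state until it reports "Registered"
az provider show --namespace Microsoft.AzureTerraform --query registrationState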
An Update on Bicep Azure Verified Modules for Platform Landing Zone (ALZ)

But first some history and context
As you may have heard in one of our Azure Landing Zone (ALZ) community calls over the past year, across ALZ we have been working hard to refactor both our Terraform and Bicep implementation options to be built upon Azure Verified Modules (AVM). Earlier this year we announced that the work for Terraform, which we started on first, was complete; you can read more about that in the announcement blog post we posted here. Whilst this work was going on, the ALZ Bicep team were already busy planning how they would go about doing the same and rebuilding ALZ Bicep from AVM modules. You can see the original plans, and where we also asked for feedback, in the GitHub issue (#791).

Enough history, what's the latest?
Now to answer the question everyone has, and rightly so 😁 Well, it's good news! We have been busy getting a number of the AVM Bicep Resource Modules updated with missing bits and pieces that we need from an ALZ perspective. These were fairly minor in most cases, but some required bigger updates than others, and some modules didn't exist at all, so we have had to propose, create, and publish those, which we are pretty much done with 👍 We are still working towards an end of Q4 (June/July) target for a preview release of all the modules, accelerator, and guidance on how to use the new version of ALZ Bicep, which will be called "Bicep Azure Verified Modules for Platform Landing Zone (ALZ)"; this aligns with Terraform and also provides a clear distinction between ALZ Bicep and the new AVM-based version. Please note that the timeline shared above is an ETA and may move.

Announcing the preview release of the `avm/ptn/alz/empty` AVM Pattern Module
Before we get to a more complete release of all the required resources and modules to build the entire ALZ architecture with the new Bicep Azure Verified Modules for Platform Landing Zone (ALZ), we wanted to share an early look at the module that will be at the heart of all of your ALZ deployments. That module is called `avm/ptn/alz/empty` and is available in the Public Bicep Registry for you to try out today (currently version `0.1.0`)!

Tip: Check out the "max" test in the tests directory for advanced usage examples!

module testMg 'br/public:avm/ptn/alz/empty:0.1.0' = {
  params: {
    managementGroupName: 'test-mg'
    // Other parameters here...
  }
}

This module is 1 of 11 modules that will all be based on the same code. The module optionally creates all of the below:
- The Management Group itself (it can also target an existing Management Group)
- Management Group Subscription Associations
- RBAC Custom Role Definitions
- RBAC Role Assignments
- Policy Assignments
- Custom Policy Definitions
- Custom Policy Set Definitions (Initiatives)

There will also be one Bicep Azure Verified Modules for Platform Landing Zone (ALZ) pattern module for each of the ALZ architecture's Management Groups, plus this empty one for custom and advanced scenarios. A reminder of those Management Groups and the associated modules that will be created for each of them:
- `avm/ptn/alz/int-root`
- `avm/ptn/alz/platform`
- `avm/ptn/alz/platform-management`
- `avm/ptn/alz/platform-identity`
- `avm/ptn/alz/platform-connectivity`
- `avm/ptn/alz/landing-zones`
- `avm/ptn/alz/landing-zones-corp`
- `avm/ptn/alz/landing-zones-online`
- `avm/ptn/alz/decommissioned`
- `avm/ptn/alz/sandbox`

These Management Group aligned pattern modules will create the same resources as above, but will have the latest release of the ALZ Library baked into each of the modules.
Meaning that for the `avm/ptn/alz/int-root` pattern module, you won't have to declare all of the ALZ RBAC Custom Role Definitions, Custom Policy Definitions, Policy Assignments, etc. via the input parameters, as they'll be hardcoded in the module based on the latest release from the ALZ Library at the point the version of the module was released. This means that to build the ALZ Management Group hierarchy and make all of the default ALZ policy assignments, as documented here, you'd need a Bicep file that would look something like this as a starting point:

Important: None of the modules shown below exist today!

module intRootMg 'br/public:avm/ptn/alz/int-root:0.1.0' = {
  params: {
    managementGroupName: 'int-root-mg'
  }
}

module platformMg 'br/public:avm/ptn/alz/platform:0.1.0' = {
  params: {
    managementGroupName: 'platform-mg'
    managementGroupParentId: intRootMg.outputs.managementGroupId
  }
}

module platformConnectivityMg 'br/public:avm/ptn/alz/platform-connectivity:0.1.0' = {
  params: {
    managementGroupName: 'platform-connectivity-mg'
    managementGroupParentId: platformMg.outputs.managementGroupId
  }
}

This will make getting the ALZ architecture out of the box really fast, and also really easy to upgrade to the latest updates by simply bumping the version number when you are ready. Coupling this with the `avm/ptn/alz/empty` module to add your own additional Policy Definitions and assignments at the same Management Group scopes also helps you decouple the constant updates to the ALZ architecture and policies from your own additional requirements. This helps you keep your code cleaner and our modules simpler to maintain, as we won't have to cater for additional custom definitions and assignments alongside the defaults from ALZ that are baked into the modules.

Note: We are looking at suggesting that all of these are deployed via Deployment Stacks to help with lifecycle management of resources, e.g. to help clean up resources as well as deploy new ones; think policy assignments and definitions etc. We need to complete a lot more testing on this, but would love your feedback if you have any experience using Deployment Stacks to manage these kinds of resources today. Open an issue/discussion on the ALZ Bicep GitHub repo 👍

Our asks to you 🫵
Please go and test the new `avm/ptn/alz/empty` module for all the scenarios you can think of relating to Management Groups, RBAC, Policies, etc. We want to make sure it's "match fit/ready" before we build the Management Group aligned modules and bake the ALZ defaults into them. So please go and put the module through its paces.

Tip: Check out the "max" test in the tests directory for advanced usage examples!

If you find any issues, bugs, or feature requests, or just have a question on how to use it, please raise them as GitHub issues here (make sure to select the `avm/ptn/alz/empty` module from the drop-down 👍). Thanks in advance for all your efforts and assistance, and we look forward to getting your feedback on the module 👏

Technical Walkthrough: Deploying a SQL DB like it's Terraform
Introduction This post will be a union of multiple topics. It is part of the SQL CI/CD series and as such will build upon Deploying .dacpacs to Multiple Environments via ADO Pipelines | Microsoft Community Hub and Managed SQL Deployments Like Terraform | Microsoft Community Hub while also crossing over with the YAML Pipeline series This is an advanced topic in regard to both Azure DevOps YAML and SQL CI/CD. If both of these concepts are newer to you, please refer to the links above as this is not designed to be a beginner's approach in either one of these domains. Assumptions To get the most out of this and follow along we are going to assume that you are 1.) On board with templating your Azure DevOps YAML Pipelines. By doing this we will see the benefit of quickly onboarding new pipelines, standardizing our deployment steps, and increasing our security. We also are going to assume you are on board with Managed SQL Deployments Like Terraform | Microsoft Community Hub for deploying your SQL Projects. By adopting this we can increase our data security, confidence in source control, and speed our time to deployment. For this post we will continue to leverage the example cicd-adventureWorks repository for the source of our SQL Project and where the pipeline definition will live. Road mapping the Templates Just like my other YAML posts let's outline the pieces required in this stage and we will then break down each job Build Stage Build .dacpac job run `dotnet build` and pass in appropriate arguments execute a Deploy Report from the dacpac produced by the build and the target environment copy the Deploy Report to the build output directory publish the pipeline artifact Deploy Stage Deploy .dacpac job run Deploy Report from dacpac artifact (optional) deploy dacpac, including pre/postscripts Build Stage For the purposes of this stage, we should think of building our .dacpac similar to a terraform or single page application build. What I am referring to is we will produce an artifact per environment, and this will be generated from the same codebase. Additionally, we will run a 'plan' which will be the proposed result of deploying our dacpac file. Build Job We will have one instance of the build job for each environment. Each instance will produce a different artifact as they will be passing different build configurations which in turn will result in a different .dacpac per environment. If you are familiar with YAML templating, then feel free to jump to the finish job template. One of the key differences with this job structure, as opposed to the one outlined in Deploying .dacpacs to Multiple Environments via ADO Pipelines is the need for a Deploy Report. This is the key to unlocking the CI/CD approach which aligns with Terraform. This Deploy Report detects our changes on build, similar to running a terraform plan. Creating a Deploy Report is achieved by setting the DeployAction attribute in the SQLAzureDacpacDeployment@1 action to 'DeployReport' Now there is one minor "bug" in the Microsoft SQLAzureDacpacDeployment task, which I have raised with the ADO task. It appears the output path for the Deploy Report as well as the Drift Report are hardcoded to the same location. To get around this I had to find out where the Deploy Report was being published and, for our purposes, have a task to copy the Deploy Report to the same location as the .dacpac and then publish them both as a single folder. 
Here is the code for the for a single environment to build the associated .dacpac and produce the Deploy Repo stages: - stage: adventureworksentra_build variables: - name: solutionPath value: $(Build.SourcesDirectory)// jobs: - job: build_publish_sql_sqlmoveme_dev_dev steps: - task: UseDotNet@2 displayName: Use .NET SDK vlatest inputs: packageType: 'sdk' version: '' includePreviewVersions: true - task: NuGetAuthenticate@1 displayName: 'NuGet Authenticate' - task: DotNetCoreCLI@2 displayName: dotnet build inputs: command: build projects: $(Build.SourcesDirectory)/src/sqlmoveme/*.sqlproj arguments: --configuration dev /p:NetCoreBuild=true /p:DacVersion=1.0.1 - task: SqlAzureDacpacDeployment@1 displayName: DeployReport sqlmoveme on sql-adventureworksentra-dev-cus.database.windows.net inputs: DeploymentAction: DeployReport azureSubscription: AzureDevServiceConnection AuthenticationType: servicePrincipal ServerName: sql-adventureworksentra-dev-cus.database.windows.net DatabaseName: sqlmoveme deployType: DacpacTask DacpacFile: $(Agent.BuildDirectory)\s/src/sqlmoveme/bin/dev/sqlmoveme.dacpac AdditionalArguments: '' DeleteFirewallRule: True - task: CopyFiles@2 inputs: SourceFolder: GeneratedOutputFiles Contents: '**' TargetFolder: $(Build.SourcesDirectory)/src/sqlmoveme/bin/dev/cus - task: PublishPipelineArtifact@1 displayName: 'Publish Pipeline Artifact sqlmoveme_dev_dev ' inputs: targetPath: $(Build.SourcesDirectory)/src/sqlmoveme/bin/dev artifact: sqlmoveme_dev_dev properties: '' The end result will be similar to: (I have two environments in the screenshot below) One can see I have configured this to run a Deploy Report across each regional instance, thus the `cus` folder, of a SQL DB I do this is to identify and catch any potential schema and data issues. The Deploy Reports are the keys to tie this to the thought of deploying and managing SQL Databases like Terraform. These reports will execute when a pull request is created as part of the Build and again at Deployment to ensure changes from PR to deployment that may have occurred. For the purposes of this blog here is a deployment report indicating a schema change: This is an important artifact for organizations whose auditing policy requires documentation around deployments. This information is also available in the ADO job logs: This experience should feel similar to Terraform CI/CD...THAT'S A GOOD THING! It means we are working on developing and refining practices and principals across our tech stacks when it comes to SDLC. If this feels new to you then please read Terraform, CI/CD, Azure DevOps, and YAML Templates - John Folberth Deploy Stage We will have a deploy stage for each environment and within that stage will be a job for each region and/or database we are deploying our dacpac to. This job can be a template because, in theory, our deploying process across environments is identical. We will run a deployment report and deploy the .dacpac which was built for the specific environment and will include any and all associated pre/post scripts. Again this process has already been walked through in Deploying .dacpacs to Multiple Environments via ADO Pipelines | Microsoft Community Hub Deploy Job The deploy job will take what we built in the deployment process in Deploying .dacpacs to Multiple Environments via ADO Pipelines | Microsoft Community Hub and we will add a perquisite job to create a second Deployment Report. 
This process is to ensure we are aware of any changes in the deployed SQL Database that may have occurred after the original dacpac and Deployment Report were created at the time of the Pull Request. By doing this we now have a tight log identifying any changes that were being made right before we deployed the code. Next, we need to make a few changes to override the default arguments of the .dacpac publish command in order to automatically deploy changes that may result in data loss. Here is a complete list of all the available properties SqlPackage Publish - SQL Server | Microsoft Learn. The ones we are most interested in are DropObjectsNotInSource and BlockOnPossibleDataLoss. DropObjectsNotInSource is defined as: Specifies whether objects that do not exist in the database snapshot (.dacpac) file will be dropped from the target database when you publish to a database. This value takes precedence over DropExtendedProperties. This is important as it will drop and delete objects that are not defined in our source code. As I've written about previously this will drop all those instances of "Shadow Data" or copies of tables we were storing. This value, by default, is set to false as a safeguard from a destructive data action. Our intention though is to ensure our deployed database objects match our definitions in source control, as such we want to enable this. BlockOnPossibleDataLoss is defined as: Specifies that the operation will be terminated during the schema validation step if the resulting schema changes could incur a loss of data, including due to data precision reduction or a data type change that requires a cast operation. The default (True) value causes the operation to terminate regardless if the target database contains data. An execution with a False value for BlockOnPossibleDataLoss can still fail during deployment plan execution if data is present on the target that cannot be converted to the new column type. This is another safeguard that has been put in place to ensure data isn't lost in the situation of type conversion or schema changes such as dropping a column. We want this set to `true` so that our deployment will actually deploy in an automated fashion. If this is set to `false` and we are wanting to update schemas/columns then we would be creating an anti-pattern of a manual deployment to accommodate. When possible, we want to automate our deployments and in this specific case we have already taken the steps of mitigating unintentional data loss through our implementation of a Deploy Report. Again, we should have confidence in our deployment and if we have this then we should be able to automate it. 
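For reference, the same two properties can be exercised outside the pipeline with the SqlPackage CLI. This is a rough sketch rather than the exact command the pipeline runs; the server, database, and authentication details are placeholders, your authentication method may differ, and the backticks are PowerShell-style line continuations:

# Publish a .dacpac while allowing object drops and potential data loss,
# mirroring the AdditionalArguments used later in the pipeline.
SqlPackage /Action:Publish `
  /SourceFile:"sqlmoveme.dacpac" `
  /TargetServerName:"<server>.database.windows.net" `
  /TargetDatabaseName:"<database>" `
  /p:DropObjectsNotInSource=true `
  /p:BlockOnPossibleDataLoss=false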
Here is that same deployment process, including now the Deploy Report steps: - stage: adventureworksentra_dev_cus_dacpac_deploy jobs: - deployment: adventureworksentra_app_dev_cus environment: name: dev dependsOn: [] strategy: runOnce: deploy: steps: - task: SqlAzureDacpacDeployment@1 displayName: DeployReport sqlmoveme on sql-adventureworksentra-dev-cus.database.windows.net inputs: DeploymentAction: DeployReport azureSubscription: AzureDevServiceConnection AuthenticationType: servicePrincipal ServerName: sql-adventureworksentra-dev-cus.database.windows.net DatabaseName: sqlmoveme deployType: DacpacTask DacpacFile: $(Agent.BuildDirectory)\sqlmoveme_dev_dev\**\*.dacpac AdditionalArguments: '' DeleteFirewallRule: False - task: CopyFiles@2 inputs: SourceFolder: GeneratedOutputFiles Contents: '**' TargetFolder: postDeploy/sql-adventureworksentra-dev-cus.database.windows.net/sqlmoveme - task: SqlAzureDacpacDeployment@1 displayName: Publish sqlmoveme on sql-adventureworksentra-dev-cus.database.windows.net inputs: DeploymentAction: Publish azureSubscription: AzureDevServiceConnection AuthenticationType: servicePrincipal ServerName: sql-adventureworksentra-dev-cus.database.windows.net DatabaseName: sqlmoveme deployType: DacpacTask DacpacFile: $(Agent.BuildDirectory)\sqlmoveme_dev_dev\**\*.dacpac AdditionalArguments: /p:DropObjectsNotInSource=true /p:BlockOnPossibleDataLoss=false DeleteFirewallRule: True Putting it Together Let's put together all these pieces. This example will show an expanded pipeline that has the following stages and jobs Build a stage Build Dev job Build Tst job Deploy Dev stage Deploy Dev Job Deploy tst stage Deploy tst Job And here is the code: resources: repositories: - repository: templates type: github name: JFolberth/TheYAMLPipelineOne endpoint: JFolberth trigger: branches: include: - none pool: vmImage: 'windows-latest' parameters: - name: projectNamesConfigurations type: object default: - projectName: 'sqlmoveme' environmentName: 'dev' regionAbrvs: - 'cus' projectExtension: '.sqlproj' buildArguments: '/p:NetCoreBuild=true /p:DacVersion=1.0.1' sqlServerName: 'adventureworksentra' sqlDatabaseName: 'moveme' resourceGroupName: adventureworksentra ipDetectionMethod: 'AutoDetect' deployType: 'DacpacTask' authenticationType: 'servicePrincipal' buildConfiguration: 'dev' dacpacAdditionalArguments: '/p:DropObjectsNotInSource=true /p:BlockOnPossibleDataLoss=false' - projectName: 'sqlmoveme' environmentName: 'tst' regionAbrvs: - 'cus' projectExtension: '.sqlproj' buildArguments: '/p:NetCoreBuild=true /p:DacVersion=1.0' sqlServerName: 'adventureworksentra' sqlDatabaseName: 'moveme' resourceGroupName: adventureworksentra ipDetectionMethod: 'AutoDetect' deployType: 'DacpacTask' authenticationType: 'servicePrincipal' buildConfiguration: 'tst' dacpacAdditionalArguments: '/p:DropObjectsNotInSource=true /p:BlockOnPossibleDataLoss=false' - name: serviceName type: string default: 'adventureworksentra' stages: - stage: adventureworksentra_build variables: - name: solutionPath value: $(Build.SourcesDirectory)// jobs: - job: build_publish_sql_sqlmoveme_dev_dev steps: - task: UseDotNet@2 displayName: Use .NET SDK vlatest inputs: packageType: 'sdk' version: '' includePreviewVersions: true - task: NuGetAuthenticate@1 displayName: 'NuGet Authenticate' - task: DotNetCoreCLI@2 displayName: dotnet build inputs: command: build projects: $(Build.SourcesDirectory)/src/sqlmoveme/*.sqlproj arguments: --configuration dev /p:NetCoreBuild=true /p:DacVersion=1.0.1 - task: SqlAzureDacpacDeployment@1 
displayName: DeployReport sqlmoveme on sql-adventureworksentra-dev-cus.database.windows.net inputs: DeploymentAction: DeployReport azureSubscription: AzureDevServiceConnection AuthenticationType: servicePrincipal ServerName: sql-adventureworksentra-dev-cus.database.windows.net DatabaseName: sqlmoveme deployType: DacpacTask DacpacFile: $(Agent.BuildDirectory)\s/src/sqlmoveme/bin/dev/sqlmoveme.dacpac AdditionalArguments: '' DeleteFirewallRule: True - task: CopyFiles@2 inputs: SourceFolder: GeneratedOutputFiles Contents: '**' TargetFolder: $(Build.SourcesDirectory)/src/sqlmoveme/bin/dev/cus - task: PublishPipelineArtifact@1 displayName: 'Publish Pipeline Artifact sqlmoveme_dev_dev ' inputs: targetPath: $(Build.SourcesDirectory)/src/sqlmoveme/bin/dev artifact: sqlmoveme_dev_dev properties: '' - job: build_publish_sql_sqlmoveme_tst_tst steps: - task: UseDotNet@2 displayName: Use .NET SDK vlatest inputs: packageType: 'sdk' version: '' includePreviewVersions: true - task: NuGetAuthenticate@1 displayName: 'NuGet Authenticate' - task: DotNetCoreCLI@2 displayName: dotnet build inputs: command: build projects: $(Build.SourcesDirectory)/src/sqlmoveme/*.sqlproj arguments: --configuration tst /p:NetCoreBuild=true /p:DacVersion=1.0 - task: SqlAzureDacpacDeployment@1 displayName: DeployReport sqlmoveme on sql-adventureworksentra-tst-cus.database.windows.net inputs: DeploymentAction: DeployReport azureSubscription: AzureTstServiceConnection AuthenticationType: servicePrincipal ServerName: sql-adventureworksentra-tst-cus.database.windows.net DatabaseName: sqlmoveme deployType: DacpacTask DacpacFile: $(Agent.BuildDirectory)\s/src/sqlmoveme/bin/tst/sqlmoveme.dacpac AdditionalArguments: '' DeleteFirewallRule: True - task: CopyFiles@2 inputs: SourceFolder: GeneratedOutputFiles Contents: '**' TargetFolder: $(Build.SourcesDirectory)/src/sqlmoveme/bin/tst/cus - task: PublishPipelineArtifact@1 displayName: 'Publish Pipeline Artifact sqlmoveme_tst_tst ' inputs: targetPath: $(Build.SourcesDirectory)/src/sqlmoveme/bin/tst artifact: sqlmoveme_tst_tst properties: '' - stage: adventureworksentra_dev_cus_dacpac_deploy jobs: - deployment: adventureworksentra_app_dev_cus environment: name: dev dependsOn: [] strategy: runOnce: deploy: steps: - task: SqlAzureDacpacDeployment@1 displayName: DeployReport sqlmoveme on sql-adventureworksentra-dev-cus.database.windows.net inputs: DeploymentAction: DeployReport azureSubscription: AzureDevServiceConnection AuthenticationType: servicePrincipal ServerName: sql-adventureworksentra-dev-cus.database.windows.net DatabaseName: sqlmoveme deployType: DacpacTask DacpacFile: $(Agent.BuildDirectory)\sqlmoveme_dev_dev\**\*.dacpac AdditionalArguments: '' DeleteFirewallRule: False - task: CopyFiles@2 inputs: SourceFolder: GeneratedOutputFiles Contents: '**' TargetFolder: postDeploy/sql-adventureworksentra-dev-cus.database.windows.net/sqlmoveme - task: SqlAzureDacpacDeployment@1 displayName: Publish sqlmoveme on sql-adventureworksentra-dev-cus.database.windows.net inputs: DeploymentAction: Publish azureSubscription: AzureDevServiceConnection AuthenticationType: servicePrincipal ServerName: sql-adventureworksentra-dev-cus.database.windows.net DatabaseName: sqlmoveme deployType: DacpacTask DacpacFile: $(Agent.BuildDirectory)\sqlmoveme_dev_dev\**\*.dacpac AdditionalArguments: /p:DropObjectsNotInSource=true /p:BlockOnPossibleDataLoss=false DeleteFirewallRule: True - stage: adventureworksentra_tst_cus_dacpac_deploy jobs: - deployment: adventureworksentra_app_tst_cus environment: name: tst dependsOn: [] 
    strategy:
      runOnce:
        deploy:
          steps:
          - task: SqlAzureDacpacDeployment@1
            displayName: DeployReport sqlmoveme on sql-adventureworksentra-tst-cus.database.windows.net
            inputs:
              DeploymentAction: DeployReport
              azureSubscription: AzureTstServiceConnection
              AuthenticationType: servicePrincipal
              ServerName: sql-adventureworksentra-tst-cus.database.windows.net
              DatabaseName: sqlmoveme
              deployType: DacpacTask
              DacpacFile: $(Agent.BuildDirectory)\sqlmoveme_tst_tst\**\*.dacpac
              AdditionalArguments: ''
              DeleteFirewallRule: False
          - task: CopyFiles@2
            inputs:
              SourceFolder: GeneratedOutputFiles
              Contents: '**'
              TargetFolder: postDeploy/sql-adventureworksentra-tst-cus.database.windows.net/sqlmoveme
          - task: SqlAzureDacpacDeployment@1
            displayName: Publish sqlmoveme on sql-adventureworksentra-tst-cus.database.windows.net
            inputs:
              DeploymentAction: Publish
              azureSubscription: AzureTstServiceConnection
              AuthenticationType: servicePrincipal
              ServerName: sql-adventureworksentra-tst-cus.database.windows.net
              DatabaseName: sqlmoveme
              deployType: DacpacTask
              DacpacFile: $(Agent.BuildDirectory)\sqlmoveme_tst_tst\**\*.dacpac
              AdditionalArguments: /p:DropObjectsNotInSource=true /p:BlockOnPossibleDataLoss=false
              DeleteFirewallRule: True

In ADO it will look like this: we can see the important Deploy Report being created and can confirm that there are Deploy Reports for each environment/region combination.

Conclusion
With the inclusion of deploy reports we now have the ability to create Azure SQL deployments that adhere to modern DevOps approaches. We can ensure our environments will be in sync with how we have defined them in source control. By doing this we achieve a higher level of security, confidence in our code, and a reduction in shadow data. To learn more on these approaches with SQL deployments, be sure to check out my other blog articles on the topic in the "SQL Database Series" in "Healthcare and Life Sciences Blog" | Microsoft Community Hub, and be sure to follow me on LinkedIn.

Managed SQL Deployments Like Terraform
Introduction This is the next post in our series on CI/CD for SQL projects. In this post we will challenge some long held beliefs on how we should manage SQL Deployments. Traditionally we've always had this notion that we should never drop data in any environment. Deployments should almost extensively be done via SQL scripts and manually ran to ensure completion and to prevent any type of data loss. We will challenge this and propose a solution that falls more in line with other modern DevOps tooling and practices. If this sounds appealing to you then let's dive into it. Why We've always approached the data behind our applications as the differentiating factor when it comes to Intellectual Property (IP). No one wants to hear the words that we've lost data or that the data is unrecoverable. Let me be clear and throw a disclaimer on what I am going to propose, this is not a substitute for proper data management techniques to prevent data loss. Rather we are going to look at a way to thread the needle on keeping the data that we need while removing the data that we don't. Shadow Data We've all heard about "shadow IT", well what about "shadow data"? I think every developer has been there. For example, taking a backup of a table/database to ensure we don't inadvertently drop it during a deployment. Heck sometimes we may even go a step further and backup this up into a lower environment. The caveat is that we very rarely ever go back and clean up that backup. We've effectively created a snapshot of data which we kept for our own comfort. This copy is now in an ungoverned, unmanaged, and potentially in an insecure state. This issue then gets compounded if we have automated backups or restore to QA operations. Now we keep amplifying and spreading our shadow data. Shouldn't we focus on improving the Software Delivery Lifecycle (SDLC), ensuring confidence in our data deployments? Let's take it a step further and shouldn't we invest in our data protection practice? Why should we be doing this when we have technology that backs up our SQL schema and databases? Another consideration, what about those quick "hot fixes" that we applied in production. The ones where we changed a varchar() column length to accommodate the size of a field in production. I am not advocating for making these changes in production...but when your CIO or VP is escalating since this is holding up your businesses Data Warehouse and you so happen to have the SQL admin login credentials...stuff happens. Wouldn't it be nice if SQL had a way to report back that this change needs to be accommodated for in the source schema? Again, the answer is in our SDLC process. So, where is the book of record for our SQL schemas? Well, if this is your first read in this series or if you are unfamiliar with source control I'd encourage you to read Leveraging DotNet for SQL Builds via YAML | Microsoft Community Hub where I talk about the importance of placing your database projects under source control. The TL/DR...your database schema definitions should be defined under source control, ideally as a .sqlproj. Where Terraform Comes In At this point I've already pointed out a few instances on how our production database instance can differ from what we have defined in our source project. This certainly isn't anything new in software development. So how does other software development tooling and technologies account for this? 
Generally, application code simply gets overwritten, and we have backup versions either via release branches, git tags, or other artifacts. Cloud infrastructure can be defined as Infrastructure as Code (IaC) and as such still follows something similar to our application code workflow.

There are two main flavors of IaC for Azure: Bicep/ARM and Terraform. Bicep/ARM adheres to an incremental deployment model, which has its pros and cons. The quick version is that Azure Resource Manager (ARM) deployments will not delete resources that are not defined in the template. Part of this has led to Azure Deployment Stacks, which can help enforce resource deletion when a resource has been removed from a template. If you are interested in understanding a Terraform workflow, I will point you to one of my other posts on the topic. At a high level, Terraform evaluates your IaC definition and determines what properties need to be updated and, more importantly, what resources need to be removed.

Now how does Terraform do this and, more importantly, how can we tell what properties will be updated and/or removed? Terraform has a concept known as a plan. The plan runs your deployment against what is known as the state file (in Bicep/ARM this is the Deployment Stack) and produces a summary of changes that will occur. This includes new resources to be created, modification of existing resources, and deletion of resources previously deployed to the same state file. Typically, I recommend running a Terraform plan across all environments at CI. This ensures one can evaluate changes being proposed across all potential environments and summarize these changes at the time of the Pull Request (PR). I then advise re-executing this plan prior to deployment as a way to confirm/re-evaluate whether anything has been updated since the original plan ran. Some will argue the previous plan can be "approved" to deploy to the next environment; however, there is little overhead in running a second plan, and I prefer this option. Here's the thing...SQL actually has this same functionality.

Deploy Reports

Via SqlPackage there is additional functionality we can leverage with our .dacpacs. We are going to dive a little deeper into Deploy Reports. If you have followed this series, you may know we use the SqlPackage Publish command wrapped behind the SqlAzureDacpacDeployment@1 task. More information on this can be found at Deploying .dacpacs to Azure SQL via Azure DevOps Pipelines | Microsoft Community Hub.

So, what is a Deploy Report? A Deploy Report is the XML representation of the changes your .dacpac will make to a database. Here is an example of one denoting that there is a risk of potential data loss:

This report is the key to our whole argument for modeling a SQL Continuous Integration/Continuous Delivery process after the one Terraform uses. We will already have a separate .dacpac file, built from the same .sqlproj, for each environment when leveraging pre/post scripts, as we saw in Deploying .dacpacs to Multiple Environments via ADO Pipelines | Microsoft Community Hub. So now we need to take each one of those and run a Deploy Report against the appropriate target. This is effectively the same as running a `tf plan` with a different variable file against each environment to determine what actions a Terraform `apply` will execute; a minimal sketch of that analogy follows below. These Deploy Reports are then what we will include in our PR approval to validate and approve any changes we will make to our SQL database.
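To make the analogy concrete, here is a minimal, hypothetical Terraform sketch; the resource and variable names are assumptions for illustration only and are not part of the SQL pipeline itself. The same configuration is evaluated per environment with its own variable file, and the plan, like a Deploy Report, only summarizes what an apply, like a Publish, would change.

# Minimal sketch of the analogy; names and values are illustrative assumptions.
# Authentication and subscription selection come from the environment
# (for example ARM_* variables or az login).
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
  }
}

provider "azurerm" {
  features {}
}

variable "environment" {
  type        = string
  description = "Environment suffix, for example tst or prd"
}

variable "sql_server_id" {
  type        = string
  description = "Resource ID of the target Azure SQL logical server"
}

resource "azurerm_mssql_database" "example" {
  name      = "sqlmoveme-${var.environment}"
  server_id = var.sql_server_id
  sku_name  = "S0"
}

# terraform plan  -var-file="tst.tfvars"   # summarize changes per environment (akin to DeployReport)
# terraform apply -var-file="tst.tfvars"   # execute those changes (akin to Publish)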
Dropping What's Not in Source Control

This is the controversial part and the biggest sell in our adoption of a Terraform-like approach to SQL deployments. It has long been considered a best practice to have whatever is deployed match what is under source control. This provides a consistent experience when developing and then deploying across multiple environments. Within IaC, we have our cloud infrastructure defined in source control and deployed across environments. Typically, it is seen as good practice to delete resources which have been removed from source control. This helps simplify the environment, reduces cost, and reduces the potential security surface area.

So why not do the same for databases? Typically, it is because we fear losing data. To prevent this, we should have proper data protection and recovery processes in place. Again, I am not addressing that aspect. If we have those accounted for, then by all means our source control version of our databases should match our deployed environments. What about security and indexing? Again, this can be accounted for in Deploying .dacpacs to Multiple Environments via ADO Pipelines | Microsoft Community Hub, where we have two different post-deployment security scripts, and these scripts are under source control! How can we see if data loss will occur? Refer back to the Deploy Reports for this!

There is potentially some natural hesitation, as the default method for deploying a .dacpac has safeguards to prevent deployments in the event of potential data loss. This is not a bad thing, as it prevents a destructive activity from occurring automatically; however, we by no means need to accept the default behavior. We will need to refer to SqlPackage Publish - SQL Server | Microsoft Learn. From this list we will be able to identify and explicitly set the value for various parameters. These will enable our package to deploy even in the event of potential data loss.

Conclusion

This post hopefully challenges the mindset we have when it comes to database deployments. By taking an approach that more closely relates to modern DevOps practices, we gain confidence that our source control and database match, increase the reliability and speed of our deployments, and close potential security gaps in our database deployment lifecycle. This content was not designed to be technical. In our next post we will demo, provide examples, and talk through how to leverage YAML Pipelines to accomplish what we have outlined here. Be sure to follow me on LinkedIn for the latest publications. For those who are technically sound and want to skip ahead, feel free to check out my code on my GitHub: https://github.com/JFolberth/cicd-adventureWorks and https://github.com/JFolberth/TheYAMLPipelineOne
Announcing General Availability of Terraform Azure Verified Modules for Platform Landing Zone (ALZ)

Azure Verified Modules

ALZ ❤️ AVM. We are moving to a more modular approach to deploying your platform landing zones. In line with consistent feedback from you, we have now released a set of modules that together will deploy your platform landing zone architecture (ALZ). Azure Verified Modules for Platform Landing Zones (ALZ) is a collection of Azure Verified Modules that are composed together to create your Platform Landing Zone. This replaces the existing CAF Enterprise Scale module that you may already be familiar with. The core Azure Verified Modules that are composed together are:

- Management Groups and Policy Pattern Module: avm-ptn-alz
- Management Resources Pattern Module: avm-ptn-management-alz
- Hub Virtual Networking Pattern Module: avm-ptn-hubnetworking
- Virtual Network Gateway Pattern Module: avm-ptn-vnetgateway
- Virtual WAN Networking Pattern Module: avm-ptn-virtualwan
- Private DNS Zone for Private Link Pattern Module: avm-ptn-network-private-link-private-dns-zones

This means that you can now choose your own adventure by selecting only the modules that you need; a sketch of what such a composition can look like follows below. It also means we can add new features faster and allows us the opportunity to do more rigorous testing of each module. To improve deployment reliability, we now use our own Terraform provider. The provider generates data for use by the module and does not directly deploy any resources. The move to a provider allows us to add many more features and checks to improve your deployment reliability.
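As a rough illustration of the choose-your-own-adventure approach, the hypothetical sketch below composes just two of the modules listed above: management groups and policy, plus hub networking. The module sources come from the list above, but the input names and values shown are assumptions for illustration only; consult each module's Terraform Registry page for its actual inputs, required providers (including the ALZ provider mentioned above), and a version to pin.

# Hypothetical composition sketch; inputs are assumptions, not the modules'
# documented interfaces. Root provider configuration (azurerm and the ALZ
# provider) is omitted for brevity.
module "management_groups_and_policy" {
  source = "Azure/avm-ptn-alz/azurerm"
  # version = "..." # pin to a released version from the registry

  # Assumed inputs for illustration only
  architecture_name  = "alz"
  parent_resource_id = "00000000-0000-0000-0000-000000000000" # tenant root management group
  location           = "eastus2"
}

module "hub_networking" {
  source = "Azure/avm-ptn-hubnetworking/azurerm"
  # version = "..." # pin to a released version from the registry

  # Assumed inputs for illustration only
  hub_virtual_networks = {
    primary = {
      name                = "vnet-hub-eastus2"
      resource_group_name = "rg-connectivity-eastus2"
      location            = "eastus2"
      address_space       = ["10.0.0.0/16"]
    }
  }
}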
ALZ IaC Accelerator updates for Terraform

The Azure Landing Zones IaC Accelerator is our recommended approach for deploying the Terraform Azure Verified Modules for Platform Landing Zone (ALZ). The Azure Verified Modules for Platform Landing Zone is now our default selection for the Terraform ALZ IaC Accelerator. This module will be the focus of our development and improvement efforts moving forward. The module implements best practices by default, including multi-region and availability zones for resiliency. The ALZ IaC Accelerator bootstrap continues to implement best practices, such as version control and Workload identity federation security. Along with supporting the Azure Verified Modules for Platform Landing Zone (ALZ) approach, we have also made many enhancements to the ALZ IaC Accelerator process. A summary of the improvements:

- We now support an HCL (HashiCorp Configuration Language) tfvars file as the platform landing zone configuration file format
- We have introduced a Phase 0 to help you plan for your ALZ IaC Accelerator deployment
- We have introduced the concepts of Scenarios and Options to simplify the decisions you need to make

Platform landing zone configuration file

Before the introduction of the Azure Verified Modules for Platform Landing Zone (ALZ) starter module, the platform landing zone configuration file was supplied in YAML format. Due to the lack of support for YAML in Terraform, we then had to convert this to JSON. Once converted to JSON, the configuration file lost all its ordering, formatting and comments. This made day 2 updates to the configuration very cumbersome. With support for the tfvars file (in HashiCorp Configuration Language format), we are now able to pass the configuration file in its original format to the version control system repository. This makes for a much easier day 2 update process, as the file retains its ordering, comments and formatting as defined by you.

Phase 0

Phase 0 is a new planning phase we have added to the documentation. This phase takes you through 3 sets of decisions you need to make about the ALZ IaC Accelerator deployment:

- Bootstrap decisions
- Platform Landing Zone Scenarios
- Platform Landing Zone Options

To assist with this, we also provide a downloadable Excel checklist, which lists all the decisions so you can consider them up front prior to completing any configuration file updates. Phase 0 guides you through this process and provides explanations of the decisions. The Bootstrap decisions relate to the resources deployed to Azure and the configuration of your Version Control System required for the Continuous Delivery pipeline. These decisions are not new to the ALZ IaC Accelerator, but we now provide more structured guidance.

Platform Landing Zone Scenarios

The Scenarios are a new concept introduced for the Azure Verified Modules for Platform Landing Zone (ALZ) starter module. We aim to cover the most common Platform landing zone use cases we hear requested from partners and customers with the ALZ IaC Accelerator. These include:

- Multi-Region Hub and Spoke Virtual Network with Azure Firewall
- Multi-Region Virtual WAN with Azure Firewall
- Multi-Region Hub and Spoke Virtual Network with Network Virtual Appliance (NVA)
- Multi-Region Virtual WAN with Network Virtual Appliance (NVA)
- Management Groups, Policy and Management Resources Only
- Single-Region Hub and Spoke Virtual Network with Azure Firewall
- Single-Region Virtual WAN with Azure Firewall

For each scenario we provide an example Platform landing zone configuration file that is ready to deploy immediately. We know that customers will want to modify some of the settings, and that is where Options come in.

NOTE: At the time this blog post was published, we support the 7 Scenarios listed above. We may update or add to these Scenarios based on feedback we hear from you, so keep an eye on our documentation.

Platform Landing Zone Options

The Options build on the Scenarios. For each Scenario, you can choose to customise it with one or more Options. Each Option includes detailed instructions on how to update the Platform landing zone configuration file or introduce library files to implement the option; a hypothetical example of such an update is sketched after the list below. The Options are:

- Customise Resource Names
- Customise Management Group Names and IDs
- Turn off DDOS protection plan
- Turn off Bastion host
- Turn off Private DNS zones and Private DNS resolver
- Turn off Virtual Network Gateways
- Additional Regions
- IP Address Ranges
- Change a policy assignment enforcement mode
- Remove a policy assignment
- Turn off Azure Monitoring Agent
- Deploy Azure Monitoring Baseline Alerts (AMBA)
- Turn off Defender Plans
- Implement Zero Trust Networking

NOTE: At the time this blog post was published, we support the 14 Options listed above. We may update or add to these Options based on feedback we hear from you, so keep an eye on our documentation.
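To give a feel for what an Option edit can look like, here is a purely hypothetical fragment of a platform landing zone tfvars configuration file with two Options applied: custom management group names and IDs, and turning off the DDoS protection plan. The setting names are invented for illustration and will not match the accelerator's actual configuration file; always follow the documented instructions for each Option.

# Hypothetical tfvars fragment; setting names are invented for illustration
# and do not reflect the accelerator's real configuration schema.

# Option: Customise Management Group Names and IDs (assumed setting names)
management_group_id_prefix   = "contoso"
management_group_name_prefix = "Contoso"

# Option: Turn off DDOS protection plan (assumed setting name)
deploy_ddos_protection_plan = false

# Scenario defaults retained from the example configuration file (assumed names)
starter_locations = ["eastus2", "westus3"]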
Azure Landing Zones Library

Another new offering is the Azure Landing Zones Library. This is an evolution of the library concept in the caf-enterprise-scale module. Principally, the Library allows us to decouple the update cycle of the ALZ architecture from the module and provider. We are separating the data from the deployment logic. This allows you to update the module to take advantage of a bug fix without having to change the policies that are deployed, something that wasn't easily possible before. Conversely, you are able to update to the latest policy refresh of Azure Landing Zones without updating the module itself. The Library has its own documentation site, which introduces the concepts. We plan to make the library the single source of truth for all Azure Landing Zones implementation options (e.g. Portal, Terraform and Bicep) in the future.

Azure Landing Zones Documentation Site

Furthermore, we have a new place to go for all technical documentation for Azure Verified Modules for Platform Landing Zones (ALZ). With the move to multiple modules, and the new accelerator, all having multiple GitHub repositories, we felt the need to centralize the documentation to make it the one place to go to get technical details. We currently have documentation for the Accelerator and Terraform, with Bicep coming soon. The new vanity URL is: https://aka.ms/alz/tech-docs. Please let us know what you think!

What about ALZ-Bicep?

Finally, some of you may be wondering what the future of our Bicep implementation option (ALZ Bicep) for Azure Verified Modules for Platform Landing Zones (ALZ) looks like with this evolution on the Terraform side. And we have good news to share! Work is underway to also build the next version of ALZ in Bicep, which will be known as “Bicep Azure Verified Modules for Platform Landing Zone (ALZ)”. This will also use the new Azure Landing Zones Library and be built from Azure Verified Modules (where appropriate). We are currently looking to complete this work before August 2025, if not a lot sooner, as we are making good progress as we speak! But for now, you do not need to do anything for Bicep: continue to use ALZ Bicep via the ALZ IaC Accelerator, and we will provide more updates on the next version of Bicep ALZ in the coming months!

Staying up-to-date

We highly recommend joining, or watching back, our quarterly Azure Landing Zones Community Calls to get all the latest and greatest from the ALZ team. Our next one is on the 29th of January 2025, and you can find the link to sign up to attend, or to watch back previous ones, at aka.ms/ALZ/Community. We look forward to seeing you all there soon!