Cost Optimization
Azure Database for MySQL - Service update
As our CEO Satya Nadella puts it, “We have seen two years of digital transformation in two months.” The Azure Database for MySQL service is at the heart of this transformation, empowering online education, video streaming, digital payments, e-commerce, gaming, news portals, and government and healthcare websites to support unprecedented growth at optimized cost. Over the last six months, we’ve focused on enhancing security and governance, simplifying performance tuning, and reducing cost for our customers. A complete list of all the features we’ve released is available via Azure updates, but I’ll summarize a few important updates below.

Optimize Azure Log Costs: Split Tables and Use the Auxiliary Tier with DCR
This blog is a continuation of my previous post, where I discussed saving ingestion costs by splitting logs into multiple tables and opting for the Basic tier! Now that the transformation feature for Auxiliary logs has entered Public Preview, I’ll take a deeper dive and show how to implement transformations that split logs across tables and route some of them to the Auxiliary tier.

A quick refresher: Azure Monitor offers several log plans that customers can choose from depending on their use cases:

Analytics Logs – Designed for frequent, concurrent access and interactive usage by multiple users. This plan drives the features in Azure Monitor Insights and powers Microsoft Sentinel. It is designed to manage critical and frequently accessed logs optimized for dashboards, alerts, and advanced business queries.
Basic Logs – Improved to support even richer troubleshooting and incident response with fast queries while saving costs. Now available with a longer retention period and additional KQL operators for aggregation and lookup.
Auxiliary Logs – Our new, inexpensive log plan for ingesting and managing verbose logs needed for auditing and compliance scenarios. These can be queried with KQL on an infrequent basis and used to generate summaries.

The following diagram provides detailed information about the log plans and their use cases. More details about Azure Monitor Logs can be found here: Azure Monitor Logs - Azure Monitor | Microsoft Learn

Note: This blog focuses on switching to Auxiliary logs only. I recommend going through the public documentation for a feature-by-feature comparison of the log plans, which should help you choose the right plan.

At this stage, I assume you’re aware of the different log tiers Azure Monitor offers and have decided to move high-volume, low-fidelity logs to the Auxiliary tier. The high-level approach is:

1. Review the relevant tables and figure out which portion of the log can be moved to the Auxiliary tier.
2. Create a DCR-based custom table with the same schema as the original table. For example, if you wish to split the Syslog table and ingest a portion of it into the Auxiliary tier, create a DCR-based custom table with the same schema as Syslog. Switching a table plan via the UI is not possible at this point, so I recommend using a PowerShell script to create the DCR-based custom table.
3. Once the DCR-based custom table is created, implement a DCR transformation to split the table.
4. Configure the total retention period of the Auxiliary table (this is done while creating the table).

Let’s get started.

Use case: In this demo, I’ll split the Syslog table and route “Informational” logs to the Auxiliary table.

Creating a DCR-based custom table: Previously a complex task, creating custom tables is now easy, thanks to a PowerShell script by MarkoLauren. Simply input the name of an existing table, and the script creates a DCR-based custom table with the same schema. Let’s see it in action:

1. Download the script locally.
2. Update the resourceID details in the script and save it.
3. Upload the updated script to Azure Cloud Shell.
4. Load the file and enter the table name whose schema you wish to copy. In my case, it's the "Syslog" table.
5. Enter the new table name, table type, and total retention period, as shown below.

Note: We highly recommend that you review the PowerShell script thoroughly and test it properly before executing it in production. We don't take any responsibility for the script.

As you can see, the Aux_Syslog_CL table has been created. Let’s validate it in the Log Analytics workspace > Tables section.

Now that the Auxiliary table has been created, the next step is to update the Data Collection Rule template to implement the transformation logic that splits the logs. Since we already created the custom table, we need a transformation that splits the Syslog stream and routes the logs with SeverityLevel “info” to the Auxiliary table. Let’s see how it works: browse to the Data Collection Rules blade, open the DCR for the Syslog table, and click Export template > Deploy > Edit template, as shown below. In the dataFlows section, I’ve created two streams to split the logs (a sketch follows below):

1st stream: drops the Syslog messages where SeverityLevel is “info” and sends the rest of the logs to the Syslog table.
2nd stream: captures all Syslog messages where SeverityLevel is “info” and sends them to the Aux_Syslog_CL table.
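For illustration, here is a minimal sketch of what that dataFlows section can look like in the exported template. The destination name "la-workspace-dest" is hypothetical; use whatever name the destinations section of your DCR already defines, and adjust the filter values to your own severity levels:

    "dataFlows": [
        {
            "streams": [ "Microsoft-Syslog" ],
            "destinations": [ "la-workspace-dest" ],
            "transformKql": "source | where SeverityLevel != 'info'",
            "outputStream": "Microsoft-Syslog"
        },
        {
            "streams": [ "Microsoft-Syslog" ],
            "destinations": [ "la-workspace-dest" ],
            "transformKql": "source | where SeverityLevel == 'info'",
            "outputStream": "Custom-Aux_Syslog_CL"
        }
    ]

The first flow keeps everything except "info" in the built-in Syslog table; the second routes only the "info" messages to the custom Auxiliary table via the Custom-<table name> output stream.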
Save and deploy the updated template. Let’s see if it works as expected: browse to Azure > Microsoft Sentinel > Logs and query the Auxiliary table to confirm that data is being ingested into it. As we can see, the logs where SeverityLevel is “info” are being ingested into the Aux_Syslog_CL table, and the rest of the logs are flowing into the Syslog table. Some nice cost savings are coming your way; hope this helps!

Enhance Cost Optimization in Azure Cosmos DB Without Compromising Service
Optimizing Azure Cosmos DB involves using strategies and best practices to reduce overall spending on the service while maintaining or improving performance and availability. In this blog, I’ll explore actionable strategies and tools for reducing your Azure Cosmos DB costs without sacrificing the speed, scalability, or reliability of your database. Whether you're managing large-scale applications or developing on a budget, these insights will help you make the most of Azure Cosmos DB. The main sub-topics we’ll dive into are the pricing model, free development options, and planning for optimization.
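As one small, hedged illustration of the pricing-model lever (not part of the excerpt above): for containers with spiky traffic, migrating from standard (manual) provisioned throughput to autoscale often lowers RU/s spend, because you pay for the highest throughput actually used each hour instead of a fixed provisioned amount. A sketch with hypothetical resource names, assuming the API for NoSQL (Core/SQL):

    # Check the container's current throughput configuration
    az cosmosdb sql container throughput show \
        --resource-group my-rg --account-name my-cosmos-account \
        --database-name my-db --name my-container

    # Migrate the container from manual (standard) throughput to autoscale
    az cosmosdb sql container throughput migrate \
        --resource-group my-rg --account-name my-cosmos-account \
        --database-name my-db --name my-container \
        --throughput-type autoscale

After migrating, review the maximum autoscale RU/s and trim it to the smallest value your peak traffic actually needs.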
On-Demand Backups in Azure Database for PostgreSQL – Flexible Server Now Generally Available

We’re excited to announce the General Availability of On-Demand Backups in Azure Database for PostgreSQL – Flexible Server! In today’s dynamic data management landscape, ensuring the protection and recoverability of your databases is essential. Azure Database for PostgreSQL – Flexible Server streamlines this responsibility through comprehensive backup management, including automated, scheduled storage volume snapshots encompassing the entire database instance and all associated transaction logs. With the introduction of on-demand backups, you now have the flexibility to initiate backups at any time, supplementing the existing scheduled backups. This capability is particularly valuable in scenarios involving high-risk operations, such as system upgrades or schema modifications, or when performing periodic data refreshes that do not align with the standard backup schedule.

Benefits
- Instant backup creation: Trigger a full backup of your server on demand; no more waiting for the automated backup schedule.
- Cost optimization: While Azure manages automated backups that cannot be deleted until the retention window is met, on-demand backups provide greater control over storage costs. Delete these backups once their purpose is served to avoid unnecessary storage expense.
- Enhanced control and safety: Take backups before schema changes, major deployments, or periodic refresh activities to meet your business requirements.
- Seamless integration: Accessible via the Azure portal, Azure CLI, ARM templates, and REST APIs.

Azure Database for PostgreSQL Flexible Server provides a comprehensive, user-friendly backup solution, giving you the confidence to manage your data effectively and securely. Let us explore how on-demand backups can elevate your database management strategy and provide peace of mind during high-stakes operations.

Automated Backups vs On-Demand Backups
- Creation: automated backups are scheduled by Azure; on-demand backups are manually initiated by the user.
- Retention: both follow the backup policy.
- Deletion: automated backups are managed by Azure; on-demand backups are user-controlled.
- Use cases: automated backups provide regular data protection; on-demand backups cover high-risk operations and ad-hoc needs.

How to take on-demand backups using the portal
1. In the Azure portal, choose your Azure Database for PostgreSQL flexible server.
2. Click Settings from the left panel and choose Backup and Restore.
3. Click Backup and provide your backup name.
4. Click Backup. A notification is shown that an on-demand backup trigger has been initiated.
For more information: How to perform On-demand backups using Portal

How to take on-demand backups using the CLI
You can run the following command to perform an on-demand backup of a server:

    az postgres flexible-server backup create --resource-group <resource_group> --name <server> --backup-name <backup>

For more information: How to perform On-demand backups using CLI

How to list all on-demand backups using the CLI
You can list the currently available on-demand backups of a server via the az postgres flexible-server backup list command:

    az postgres flexible-server backup list --resource-group <resource_group> --name <server> --query "[?backupType=='Customer On-Demand']" --output table

For more information: How to list all backups using Portal

What's Next
Once you have taken an on-demand backup based on your business needs, you can retain it until your high-risk operation is complete or use it to refresh your reporting or non-production environments. You can delete the backups to optimize storage costs when they are no longer needed (a hedged CLI sketch follows below).
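For that cleanup step, here is a hedged CLI sketch. The show command is the documented way to inspect a single backup; the delete subcommand is an assumption inferred from the create and list commands above, so verify it with az postgres flexible-server backup --help (or use the portal) before relying on it:

    # Inspect a specific on-demand backup (hypothetical names)
    az postgres flexible-server backup show \
        --resource-group my-rg --name my-pg-server --backup-name pre-upgrade-backup

    # Assumed delete subcommand for user-controlled cleanup of on-demand backups
    az postgres flexible-server backup delete \
        --resource-group my-rg --name my-pg-server --backup-name pre-upgrade-backup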
To restore or delete on-demand backups, you can use the Azure portal, CLI, or API for seamless management.

Limitations & Considerations:
- SKU support: On-demand backups are available for General Purpose and Memory Optimized SKUs. Burstable SKUs are not supported.
- Storage tier compatibility: Currently, only the SSDv1 storage tier is supported. Support for SSDv2 is on our roadmap and will be introduced in a future update.
- You can take up to 7 on-demand backups per flexible server. This limit is intentional to help manage backup costs, as on-demand backups are meant for occasional use. The managed service already supports up to 35 backups in total, excluding on-demand backups.

Take Control of Your Database Protection Today! The ability to create on-demand backups is critical for managing and safeguarding your data. Whether you're preparing for high-risk operations or refreshing non-production environments, this feature puts flexibility and control in your hands. Get started now:
- Create your first on-demand backup using the Azure portal or CLI.
- Optimize your storage costs by deleting backups when they are no longer needed.
- Restore with ease to keep your database resilient and ready for any challenge.

Protect your data effectively and ensure your database is always prepared for the unexpected. Learn more about Azure Database for PostgreSQL Flexible Server and explore the possibilities with on-demand backups today! You can always find the latest features added to Flexible Server on the release notes page. We are eager to hear about all the great scenarios this new feature helps you optimize, and we look forward to receiving your feedback at https://aka.ms/PGfeedback.

Azure VMware (AVS) Cost Optimization Using Azure Migrate Tool
What is AVS?
Azure VMware Solution provides private clouds that contain VMware vSphere clusters built from dedicated bare-metal Azure infrastructure. Azure VMware Solution is available in Azure Commercial and Azure Government. The minimum initial deployment is three hosts, with the option to add more hosts, up to a maximum of 16 hosts per cluster. All provisioned private clouds have VMware vCenter Server, VMware vSAN, VMware vSphere, and VMware NSX. As a result, you can migrate workloads from your on-premises environments, deploy new virtual machines (VMs), and consume Azure services from your private clouds.
Learn more: https://learn.microsoft.com/en-us/azure/azure-vmware/introduction

What is the Azure Migrate tool?
Azure Migrate is a comprehensive service designed to help you plan and execute your migration to Azure. It provides a unified platform to discover, assess, and migrate your on-premises resources, including servers, databases, web apps, and virtual desktops, to Azure. The tool offers features like dependency analysis, cost estimation, and readiness assessments to ensure a smooth and efficient migration process.
Learn more: https://learn.microsoft.com/en-us/azure/migrate/migrate-services-overview

How can Azure Migrate be used to discover and assess AVS?
Azure Migrate enables the discovery and assessment of Azure VMware Solution (AVS) environments by collecting inventory and performance data from on-premises VMware environments, either through direct integration with vCenter (via the appliance) or by importing data from tools like RVTools. Using Azure Migrate, organizations can analyze the compatibility of their VMware workloads for migration to AVS, assess costs, and evaluate performance requirements. The process involves creating an Azure Migrate project, discovering VMware VMs, and generating assessments that provide insights into resource utilization, right-sizing recommendations, and estimated costs in AVS. This streamlined approach helps plan and execute migrations effectively while ensuring workloads are optimized for the target AVS environment.
Note: We will be using the RVTools import method in this article.

What is RVTools?
RVTools is a lightweight, free utility designed for VMware administrators to collect, analyze, and export detailed inventory and performance data from VMware vSphere environments. Developed by Rob de Veij, RVTools connects to vCenter or ESXi hosts using VMware's vSphere Management SDK to retrieve comprehensive information about the virtual infrastructure.

Key features of RVTools:
- Inventory management: Provides detailed information about virtual machines (VMs), hosts, clusters, datastores, networks, and snapshots, including VM names, operating systems, IP addresses, resource allocations (CPU, memory, storage), and more.
- Performance insights: Offers visibility into resource utilization, including CPU and memory usage, disk space, and VM states (e.g., powered on/off).
- Snapshot analysis: Identifies unused or orphaned snapshots, helping to optimize storage and reduce overhead.
- Export to Excel: Allows users to export all collected data into an Excel spreadsheet (.xlsx) for analysis, reporting, and integration with tools like Azure Migrate.
- Health checks: Identifies configuration issues, such as disconnected hosts, orphaned VMs, or outdated VMware Tools versions.
- User-friendly interface: Displays information in tabular form across multiple tabs, making it easy to navigate and analyze specific components of the VMware environment.
Hands-on Lab

Disclaimer: The data used for this lab has no relationship with real-world scenarios. The sample data is self-created by the author and is purely for understanding the concept.

To discover and assess your Azure VMware Solution (AVS) environment using an RVTools extract report in the Azure Migrate tool, follow these steps.

Prerequisites
- RVTools setup: Download and install RVTools from the official website, ensure connectivity to your vCenter server, and extract the data by running RVTools and saving the output as an Excel (.xlsx) file.
- Permissions: You need at least the Contributor role on the Azure Migrate project, and appropriate permissions in your vCenter environment to collect inventory and performance data.
- File requirements: The RVTools file must be saved in .xlsx format without renaming or modifying the tabs or column headers.

Note: A sample sheet is attached to this article. It is not the complete format; some tabs and columns have been removed for simplicity. During the actual discovery and assessment process, please do not modify the tabs or columns.

Procedure

Step 1: Export data from RVTools
Follow the steps provided on the official website to get the RVTools extract (see the sample sheet attached to this article).

Step 2: Discover
1. Log in to the Azure portal.
2. Navigate to Azure Migrate and select your project, or create a new one.
3. Under Migration goals, select Servers, databases and web apps.
4. On the Azure Migrate | Servers, databases and web apps page, under Assessment tools, select Discover and then select Using import.
5. On the Discover page, in File type, select VMware inventory (RVTools XLSX).
6. In the Step 1: Import the file section, select the RVTools XLSX file and then select Import.
7. Wait for the import to complete. Check for error messages, rectify any issues, and re-upload if needed; otherwise, wait 10-15 minutes for the imported VMs to appear in the discovery.

Reference link: https://learn.microsoft.com/en-us/azure/migrate/vmware/tutorial-import-vmware-using-rvtools-xlsx?context=%2Fazure%2Fmigrate%2Fcontext%2Fvmware-context

Step 3: Assess
1. After the upload is complete, navigate to the Servers tab.
2. Click Assess > Azure VMware Solution to assess the discovered machines.
3. Edit the assessment settings based on your requirements and save them:
   - Target region: Select the Azure region for the migration.
   - Node type: Specify the Azure VMware Solution series (e.g., AV36, AV36P).
   - Pricing model: Select pay-as-you-go or reserved instance pricing.
   - Discount: Specify any available discounts.
   Note: All of these parameters are explained in the Optimize step. For now, just review them and leave them as they are.
4. In Assess Servers, select Next.
5. In Select servers to assess > Assessment name, specify a name for the assessment.
6. In Select or create a group, select Create New and specify a group name.
7. Select the appliance and select the servers you want to add to the group. Then select Next.
8. In Review + create assessment, review the assessment details, and select Create Assessment to create the group and run the assessment.

Step 4: Review the assessment
To view an assessment, in Windows, Linux and SQL Server > Azure Migrate: Discovery and assessment, select the number next to Azure VMware Solution.
In Assessments, select an assessment to open it. (The estimations and costs shown here are examples only.) Review the assessment summary. You can select Sizing assumptions to understand the assumptions that went into node sizing and resource utilization calculations. You can also edit the assessment properties or recalculate the assessment.

Step 5: Optimize
The report we have so far was generated without any optimization. We can now follow the steps below to optimize the cost and node count even further. High-level steps:
- Find the limiting factor.
- Find which component in the assessment settings is mapped for optimization, depending on the limiting factor.
- Adjust the mapped component according to your scenario and comfort level.

Find the limiting factor: First, understand which component (CPU, memory, or storage) is deciding your ESXi node count. This is highlighted in the report. The limiting factor shown in assessments can be CPU, memory, or storage, based on the utilization on nodes. It is the resource that limits, or determines, the number of hosts/nodes required to accommodate the workloads. For example, if an assessment finds that after migrating 8 VMware VMs to Azure VMware Solution, 50% of CPU, 14% of memory, and 18% of storage will be utilized on the 3 AV36 nodes, then CPU is the limiting factor.

Find which setting can be used to optimize: This depends on the limiting factor. For example, if the limiting factor is CPU, you have a high CPU requirement and CPU oversubscription can be used to reduce the ESXi node count. Likewise, if storage is the limiting factor, editing FTT/RAID or introducing external storage such as Azure NetApp Files (ANF) will help you reduce the node count. Reducing even one node makes a big difference in dollar value.

Let's understand how overcommitment (oversubscription) works with a simple example. Suppose I have two VMs with the following specifications:
- VM1: 9 vCPU, 200 GB memory, 500 GB storage
- VM2: 4 vCPU, 200 GB memory, 500 GB storage
- Total: 13 vCPU, 400 GB memory, 1000 GB storage

And an ESXi node with the following capacity: 10 vCPU, 500 GB memory, 1024 GB storage.

Without optimization, I need two ESXi nodes to accommodate the 13 vCPU of total requirement. But suppose VM1 and VM2 don't consume their entire capacity all the time, and their combined usage at any moment never exceeds 10 vCPU. Then I can accommodate both VMs on the same ESXi node and reduce my node count and cost; in other words, the two VMs can share resources. Without optimization: 2 nodes. With optimization: 1 node. (A quick back-of-the-envelope formula follows below.)
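To make that concrete, here is a simplified formula for the CPU-limited case (my own simplification; it ignores memory, storage, vSAN overheads, and the comfort factor, all of which the real assessment accounts for):

    nodes needed (CPU) = ceil( total required vCPU / (physical cores per node x oversubscription ratio) )

In the toy example above, 13 vCPU against a 10-vCPU node gives ceil(13 / 10) = 2 nodes at 1:1, but allowing roughly 2:1 sharing gives ceil(13 / 20) = 1 node. The same arithmetic is why raising the oversubscription ratio in the assessment settings (for example, from the default 4:1 to 8:1) can drop whole nodes from a CPU-limited estimate, as the optimization result below shows.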
Parameters affecting sizing and pricing

CPU oversubscription: Specifies the ratio of virtual cores tied to one physical core in the Azure VMware Solution node. The default value in the calculations is 4 vCPU : 1 physical core. API users can set this value as an integer. Note that vCPU oversubscription greater than 4:1 may impact workloads depending on their CPU usage.

Memory overcommit factor: Specifies the ratio of memory overcommit on the cluster. A value of 1 represents 100% memory use, 0.5 is 50%, and 2 would be using 200% of available memory. You can only add values from 0.5 to 10, up to one decimal place.

Deduplication and compression factor: Specifies the anticipated deduplication and compression factor for your workloads. The actual value can be obtained from on-premises vSAN or storage configurations, and it varies by workload. A value of 3 means 3x, so a 300 GB disk would use only 100 GB of storage. A value of 1 means no deduplication or compression. You can only add values from 1 to 10, up to one decimal place.

FTT: How many device failures can be tolerated for a VM.

RAID: RAID stands for Redundant Array of Independent Disks and describes how data is stored for redundancy.
- Mirroring: Data is duplicated as-is to another disk. For example, to protect a 100 GB VM object by using RAID-1 (mirroring) with an FTT of 1, you consume 200 GB.
- Erasure coding: Erasure coding divides data into chunks and calculates parity information (redundant data) across multiple storage devices. This allows data reconstruction even if some chunks are lost, similar to RAID but typically more space-efficient. For example, to protect a 100 GB VM object by using RAID-5 (erasure coding) with an FTT of 1, you consume 133.33 GB.

Comfort factor: Azure Migrate applies a buffer (comfort factor) during assessment on top of the server utilization data for VMs (CPU, memory, and disk). The comfort factor accounts for issues such as seasonal usage, short performance history, and likely increases in future usage. For example, a 10-core VM with 20% utilization normally results in a 2-core VM; with a comfort factor of 2.0x, the result is a 4-core VM instead.

AVS SKU sizes

Optimization result: In this example, CPU was the limiting factor, so I adjusted the CPU oversubscription value from 4:1 to 8:1. This reduced the node count from 6 (3 AV36P + 3 AV64) to 5 AV36P and reduced the cost by 31%.

Note: Over-provisioning or over-committing can put your VMs at risk. However, in Azure you can create alerts to warn you of unexpected demand increases and add new ESXi nodes on demand. This is the beauty of the cloud: if your resources are under-provisioned, you can scale up or down at any time. Running your resources in an optimized environment not only saves your budget but also lets you allocate funds to more innovative ideas.