Azure
Azure VMware (AVS) Cost Optimization Using the Azure Migrate Tool
What is AVS?

Azure VMware Solution provides private clouds that contain VMware vSphere clusters built from dedicated bare-metal Azure infrastructure. Azure VMware Solution is available in Azure Commercial and Azure Government. The minimum initial deployment is three hosts, with the option to add more, up to a maximum of 16 hosts per cluster. All provisioned private clouds include VMware vCenter Server, VMware vSAN, VMware vSphere, and VMware NSX. As a result, you can migrate workloads from your on-premises environments, deploy new virtual machines (VMs), and consume Azure services from your private clouds.

Learn more: https://learn.microsoft.com/en-us/azure/azure-vmware/introduction

What is the Azure Migrate tool?

Azure Migrate is a comprehensive service designed to help you plan and execute your migration to Azure. It provides a unified platform to discover, assess, and migrate your on-premises resources, including servers, databases, web apps, and virtual desktops. The tool offers features like dependency analysis, cost estimation, and readiness assessments to ensure a smooth and efficient migration process.

Learn more: https://learn.microsoft.com/en-us/azure/migrate/migrate-services-overview

How can Azure Migrate be used to discover and assess for AVS?

Azure Migrate enables the discovery and assessment of Azure VMware Solution (AVS) environments by collecting inventory and performance data from on-premises VMware environments, either through direct integration with vCenter (via the appliance) or by importing data from tools like RVTools. Using Azure Migrate, organizations can analyze the compatibility of their VMware workloads for migration to AVS, assess costs, and evaluate performance requirements. The process involves creating an Azure Migrate project, discovering VMware VMs, and generating assessments that provide insights into resource utilization, right-sizing recommendations, and estimated costs in AVS. This streamlined approach helps plan and execute migrations effectively while ensuring workloads are optimized for the target AVS environment.

Note: This article walks through the RVTools import method.

What is RVTools?

RVTools is a lightweight, free utility designed for VMware administrators to collect, analyze, and export detailed inventory and performance data from VMware vSphere environments. Developed by Rob de Veij, RVTools connects to vCenter or ESXi hosts using VMware's vSphere Management SDK to retrieve comprehensive information about the virtual infrastructure.

Key features of RVTools:
- Inventory management: Detailed information about virtual machines (VMs), hosts, clusters, datastores, networks, and snapshots, including VM names, operating systems, IP addresses, resource allocations (CPU, memory, storage), and more.
- Performance insights: Visibility into resource utilization, including CPU and memory usage, disk space, and VM states (e.g., powered on/off).
- Snapshot analysis: Identifies unused or orphaned snapshots, helping to optimize storage and reduce overhead.
- Export to Excel: Exports all collected data into an Excel spreadsheet (.xlsx) for analysis, reporting, and integration with tools like Azure Migrate.
- Health checks: Identifies configuration issues, such as disconnected hosts, orphaned VMs, or outdated VMware Tools versions.
- User-friendly interface: Displays information in tabular form across multiple tabs, making it easy to navigate and analyze specific components of the VMware environment.
Hands-on Lab

Disclaimer: The data used for this lab has no relationship to real-world scenarios. This sample data was created by the author purely for understanding the concept.

To discover and assess your Azure VMware Solution (AVS) environment using an RVTools export in the Azure Migrate tool, follow these steps.

Prerequisites

- RVTools setup: Download and install RVTools from the official RVTools download page, ensure connectivity to your vCenter Server, then run RVTools and save the output as an Excel (.xlsx) file.
- Permissions: You need at least the Contributor role on the Azure Migrate project, and appropriate permissions in your vCenter environment to collect inventory and performance data.
- File requirements: The RVTools file must be saved in .xlsx format without renaming or modifying the tabs or column headers.

Step 1: Export data from RVTools

Follow the steps on the official website to produce the RVTools export. Sample sheet: see the attachment included with this article. Note that it is not the complete format; some tabs and columns were removed for simplicity. During the actual discovery and assessment process, do not modify the tabs or columns.

Step 2: Discover

1. Sign in to the Azure portal.
2. Navigate to Azure Migrate and select your project, or create a new one.
3. Under Migration goals, select Servers, databases and web apps.
4. On the Azure Migrate | Servers, databases and web apps page, under Assessment tools, select Discover, and then select Using import.
5. On the Discover page, under File type, select VMware inventory (RVTools XLSX).
6. In the Step 1: Import the file section, select the RVTools XLSX file and then select Import. Wait for the import to complete.
7. Once the import completes, check for any error messages, rectify them, and re-upload; otherwise, wait 10-15 minutes for the imported VMs to appear in the discovery.

Reference: https://learn.microsoft.com/en-us/azure/migrate/vmware/tutorial-import-vmware-using-rvtools-xlsx?context=%2Fazure%2Fmigrate%2Fcontext%2Fvmware-context

Step 3: Assess

1. After the upload completes, navigate to the Servers tab and select Assess > Azure VMware Solution to assess the discovered machines.
2. Edit the assessment settings based on your requirements and save:
   - Target region: the Azure region for the migration.
   - Node type: the Azure VMware Solution series (e.g., AV36, AV36P).
   - Pricing model: pay-as-you-go or reserved-instance pricing.
   - Discount: any available discounts.
   Note: All of these parameters are explained in the Optimize step below. For now, review them and leave the defaults as they are.
3. In Assess Servers, select Next.
4. In Select servers to assess > Assessment name, specify a name for the assessment.
5. In Select or create a group, select Create New and specify a group name. Select the appliance and the servers you want to add to the group, then select Next.
6. In Review + create assessment, review the assessment details and select Create Assessment to create the group and run the assessment.

Step 4: Review the assessment

In Windows, Linux and SQL Server > Azure Migrate: Discovery and assessment, select the number next to Azure VMware Solution.
In Assessments, select an assessment to open it. (The estimations and costs shown are examples only.) Review the assessment summary. You can select Sizing assumptions to understand the assumptions that went into node sizing and resource utilization calculations. You can also edit the assessment properties or recalculate the assessment.

Step 5: Optimize

The previous steps produced a report without any optimization. The following steps optimize cost and node count even further. At a high level:

1. Find the limiting factor.
2. Find which setting is mapped to that limiting factor for optimization.
3. Adjust the mapped setting according to your scenario and comfort level.

Find the limiting factor

First, understand which component (CPU, memory, or storage) determines your ESXi node count; this is highlighted in the report. The limiting factor shown in an assessment can be CPU, memory, or storage, based on utilization on the nodes. It is the resource that limits, or determines, the number of hosts/nodes required to accommodate the workloads. For example, if an assessment finds that after migrating 8 VMware VMs to Azure VMware Solution, 50% of CPU, 14% of memory, and 18% of storage will be utilized on the 3 AV36 nodes, then CPU is the limiting factor.

Find which setting can be used to optimize

This depends on the limiting factor. For example, if the limiting factor is CPU, you have a high CPU requirement, and CPU oversubscription can be used to optimize the ESXi node count. Likewise, if storage is the limiting factor, editing FTT or RAID settings, or introducing external storage like Azure NetApp Files (ANF), will help you reduce the node count. Even reducing the node count by one has a huge impact in dollar value.

Let's understand how overcommitment (oversubscription) works with a simple example. Suppose I have two VMs with the following specifications:

Name  | CPU     | Memory | Storage
VM1   | 9 vCPU  | 200 GB | 500 GB
VM2   | 4 vCPU  | 200 GB | 500 GB
Total | 13 vCPU | 400 GB | 1000 GB

And an ESXi node with the following capacity:

vCPU: 10
Memory: 500 GB
Storage: 1024 GB

Without optimization, I need two ESXi nodes to accommodate the total requirement of 13 vCPUs. But suppose VM1 and VM2 don't consume their entire capacity all the time, and total usage at any given moment never exceeds 10 vCPUs. Then I can accommodate both VMs on the same ESXi node, reducing my node count and cost. In other words, it is possible to share resources between both VMs, as the sketch below illustrates.

[Figure: VM placement without optimization vs. with optimization]
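To make the arithmetic concrete, here is a minimal PowerShell sketch of the node-count calculation, using the figures from the example above (the node capacity is illustrative, not a real AVS SKU):

```powershell
# Figures from the example above; the node capacity is illustrative, not a real AVS SKU.
$vms = @(
    [pscustomobject]@{ Name = 'VM1'; vCPU = 9; MemoryGB = 200; StorageGB = 500 }
    [pscustomobject]@{ Name = 'VM2'; vCPU = 4; MemoryGB = 200; StorageGB = 500 }
)

$nodeCores     = 10     # physical cores per ESXi node
$nodeMemoryGB  = 500
$nodeStorageGB = 1024

function Get-NodeCount {
    param([double]$CpuOversubscription)  # vCPU : physical-core ratio

    $totalVCpu = ($vms | Measure-Object vCPU      -Sum).Sum
    $totalMem  = ($vms | Measure-Object MemoryGB  -Sum).Sum
    $totalDisk = ($vms | Measure-Object StorageGB -Sum).Sum

    # Nodes required per resource; the limiting factor is whichever needs the most nodes.
    $byCpu  = [math]::Ceiling($totalVCpu / ($nodeCores * $CpuOversubscription))
    $byMem  = [math]::Ceiling($totalMem  / $nodeMemoryGB)
    $byDisk = [math]::Ceiling($totalDisk / $nodeStorageGB)

    [math]::Max($byCpu, [math]::Max($byMem, $byDisk))
}

Get-NodeCount -CpuOversubscription 1   # 13 vCPU / 10 cores       -> 2 nodes (CPU is limiting)
Get-NodeCount -CpuOversubscription 2   # 13 vCPU / 20 usable vCPU -> 1 node
```

Note that raising the oversubscription ratio shrinks only the CPU term; if memory or storage were the limiting factor, the same lever would change nothing, which is why identifying the limiting factor comes first.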
Parameters affecting sizing and pricing

CPU oversubscription: Specifies the ratio of virtual cores tied to one physical core in the Azure VMware Solution node. The default value in the calculations is 4 vCPU : 1 physical core. API users can set this value as an integer. Note that vCPU oversubscription greater than 4:1 may impact workloads depending on their CPU usage.

Memory overcommit factor: Specifies the ratio of memory overcommit on the cluster. A value of 1 represents 100% memory use, 0.5 is 50%, and 2 would be using 200% of available memory. You can only add values from 0.5 to 10, up to one decimal place.

Deduplication and compression factor: Specifies the anticipated deduplication and compression factor for your workloads. The actual value can be obtained from on-premises vSAN or storage configurations, and it varies by workload. A value of 3 means 3x, so a 300 GB disk would consume only 100 GB of storage. A value of 1 means no deduplication or compression. You can only add values from 1 to 10, up to one decimal place.

FTT (failures to tolerate): How many device failures can be tolerated for a VM.

RAID (Redundant Array of Independent Disks): Defines how data is stored for redundancy.
- Mirroring: Data is duplicated as-is onto another disk. For example, to protect a 100 GB VM object using RAID-1 (mirroring) with an FTT of 1, you consume 200 GB.
- Erasure coding: Divides data into chunks and calculates parity information (redundant data) across multiple storage devices. This allows data reconstruction even if some chunks are lost, similar to mirroring but typically more space-efficient. For example, to protect a 100 GB VM object using RAID-5 (erasure coding) with an FTT of 1, you consume 133.33 GB.

Comfort factor: Azure Migrate applies a buffer (comfort factor) during assessment, on top of the server utilization data for VMs (CPU, memory, and disk). The comfort factor accounts for issues such as seasonal usage, short performance history, and likely increases in future usage. For example, a 10-core VM with 20% utilization normally results in a 2-core VM; with a comfort factor of 2.0x, the result is a 4-core VM instead.
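A simplified PowerShell sketch of how these storage and sizing factors combine. This is an approximation: in particular, applying deduplication to the redundancy-inflated footprint is a simplification of real vSAN accounting.

```powershell
# Raw-capacity multipliers from the examples above. Dedup is applied to the
# redundancy-inflated footprint here, a simplification of real vSAN accounting.
$raidOverhead = @{
    'None'         = 1.0
    'RAID-1 FTT-1' = 2.0     # mirroring: a 100 GB object consumes 200 GB
    'RAID-5 FTT-1' = 4 / 3   # erasure coding: a 100 GB object consumes ~133.33 GB
}

function Get-EffectiveStorageGB {
    param(
        [double]$UsedGB,
        [string]$Policy = 'None',
        [double]$DedupFactor = 1.0   # 1 = no dedup/compression, 3 = 3x reduction
    )
    $UsedGB * $raidOverhead[$Policy] / $DedupFactor
}

Get-EffectiveStorageGB -UsedGB 100 -Policy 'RAID-1 FTT-1'   # 200
Get-EffectiveStorageGB -UsedGB 100 -Policy 'RAID-5 FTT-1'   # 133.33
Get-EffectiveStorageGB -UsedGB 300 -DedupFactor 3           # 100, as in the text

# Comfort factor applied to CPU sizing: a 10-core VM at 20% utilization needs
# 2 cores; with a 2.0x comfort factor it is sized at 4 cores.
$cores = 10; $utilization = 0.2; $comfortFactor = 2.0
[math]::Ceiling($cores * $utilization * $comfortFactor)     # 4
```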
AVS SKU sizes

[Figure: available AVS node SKU sizes]

Optimization result

In this example, CPU was the limiting factor, so the CPU oversubscription value was adjusted from 4:1 to 8:1:
- Reduced node count from 6 (3 AV36P + 3 AV64) to 5 AV36P.
- Reduced cost by 31%.

Note: Over-provisioning or over-committing can put your VMs at risk. However, in Azure you can create alerts to warn you of unexpected demand increases and add new ESXi nodes on demand. This is the beauty of the cloud: if your resources are under-provisioned, you can scale up or down at any time. Running your resources in an optimized environment not only saves your budget but also allows you to allocate funds for more innovative ideas.

How Azure AI is Revolutionizing Supply Chain Forecasting and Inventory

In today’s fast-paced global marketplace, supply chain efficiency can make or break a business. Companies face constant challenges such as demand fluctuations, supplier disruptions, and shifting customer expectations. Traditional forecasting methods—often reliant on historical data and rigid models—are no longer enough. This is where Azure AI is stepping in, transforming supply chain forecasting and inventory management with intelligent, adaptive, and real-time solutions.

https://dellenny.com/how-azure-ai-is-revolutionizing-supply-chain-forecasting-and-inventory/
End-to-End Confidence in the Cloud: A Walkthrough of Azure Playwright Testing (Preview)

If you’ve been using Playwright for your end-to-end testing, you know how powerful it is for browser automation. But running large test suites locally or in CI can be slow, flaky, and resource-hungry. That’s where Azure Playwright Testing (Preview) — also called Microsoft Playwright Testing — comes in. This walkthrough will show you how to go from a plain Playwright project to running tests at scale in the Azure cloud, complete with reporting, debugging, and parallel execution.

https://dellenny.com/end-to-end-confidence-in-the-cloud-a-walkthrough-of-azure-playwright-testing-preview/
How to update the proxyAddresses of a Cloud-only Entra ID user

I currently have a client with an Entra ID user (not migrated from on-premises) that is cloud-only but has proxyAddresses values assigned. I want to update the proxyAddresses through Graph Explorer and have used this link as a guide: https://learn.microsoft.com/en-us/answers/questions/2280046/entra-connect-sync-blocking-user-creation-due-to-h. The guide suggests you can use the beta model and this URL format:

https://graph.microsoft.com/beta/users/%USERGUID%

It states you can use that URL for both 'GET' and 'PATCH' queries, the PATCH query being the one that changes the settings. In the PATCH query, you provide a body for the proxyAddresses property representing all of the addresses you want the user to use as proxy addresses. The GET query works; the PATCH query does not. Screenshot provided:

Regarding the error message, I have applied ALL possible permissions in the 'Modify Permissions' tab, and it is still erroring. I cannot use Exchange Online PowerShell, as the user does not have a mailbox. Aside from assigning an Exchange Online license or provisioning a mailbox for the user and making the changes there, would the only other option be to delete and recreate the user?
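For reference, the PATCH described above can also be issued from the Microsoft Graph PowerShell SDK. This is a sketch, not a guaranteed fix: the question reports that the beta PATCH is rejected even with consented scopes, which suggests the operation may additionally require an administrative directory role. The GUID and addresses below are placeholders.

```powershell
# Hedged sketch of the PATCH described above, via the Microsoft Graph PowerShell SDK.
# Assumes the beta endpoint accepts proxyAddresses updates for this user and that the
# signed-in account holds sufficient directory privileges (consented Graph Explorer
# scopes alone may not be enough, as the error above shows).
Connect-MgGraph -Scopes 'User.ReadWrite.All'

$userId = '00000000-0000-0000-0000-000000000000'   # placeholder GUID
$body = @{
    proxyAddresses = @(
        'SMTP:primary@contoso.com'   # uppercase SMTP: = primary address
        'smtp:alias@contoso.com'     # lowercase smtp: = secondary alias
    )
} | ConvertTo-Json

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/users/$userId" `
    -Body $body -ContentType 'application/json'
```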
Azure automation feature, improvements and bugs

This is by no means meant as criticism, as I love the Azure Automation account product and its current features, but these are things I would love to see offered or fixed in the future.

Source control (I can only speak for GitHub, as that is what I use):

Bugs:
- Tags being overwritten/removed by source control, both on full syncs and on incremental syncs (already reported in case #2508010040002105).

Features:
- Runbooks deleted in source control are not deleted in the Automation account.
- Support for sync types other than PowerShell 5.1. (Personally, we will not consider upgrading to a newer version before source control supports it.)
- Support for syncing the full repository instead of only a specific folder, i.e. recursive source control for easier organisation in repositories. I know we can set up multiple source control integrations in Azure Automation, but that seems a bit redundant and means more maintenance, as the source control integration expires after one year regardless of whether your PAT token is set to never expire.
- Support for syncing the synopsis/description, at least for PowerShell scripts, grabbing it directly from the given script and putting it into the description field: just the output of get-help .\ScriptName.ps1.

Logging:

Bugs:
- From time to time, logs are displayed twice in a row. Say you get the first 10 entries on the All logs page and scroll down further; the same 10 entries are repeated again and again, which can be verified by the timestamps of the log entries. (No new network requests for logs are made, so I believe this might be a bug in a JavaScript, without being 100% certain.) We most often see this bug while a runbook is still running, so it might be the log output stream that messes this up. Just to provide a picture for reference without exposing anything sensitive, the bug can be seen in the timestamps here: [screenshot]
- PowerShell 7+ log outputs seem to contain some non-escaped ASCII characters, which makes the logs harder to read and also splits a single log object into multiple log entries. This seems to have been fixed since I last tested.

Features:
- Searching for a specific job ID in the general job list. Currently there is a workaround: go into a specific runbook, go to Jobs, press "Find job", and you can look up a job ID globally, but the UI is not updated correctly, as displayed here: [screenshot] Would love to see a button here, or to be able to search for a job ID.
- Formatting log outputs to allow multi-line output in a single log entry, e.g. Write-Output "New`r`nLine", so the output entry contains multiple lines for easier human-readable logs.

Runbook page:

Bugs:
- Searching for runbook names seems a bit buggy; as far as I have seen, there are three different results for the end user (starting from the base view of all runbooks): one is that it is not able to find a runbook with that name (I have not been able to replicate this to get a picture of it); another is that it displays a list of runbooks, none of which match what you searched for; the third is that when you remove your search, it does not return to the original view.

Features:
- Ability to go to a previous job and re-run/restart it with the same parameters.
  Think a bit like the way you can restart a GitHub Actions run.

Scheduling:

Features:
- More of a feature request, but adding the schedule for a runbook directly in the code would be awesome. (This is something we currently do by adding a parameter that contains the scheduling information; a runbook then goes over all our runbooks every hour looking for this parameter, constructs the schedule if it does not exist, links the runbook to it, and finally adds a tag noting whether the schedule is enabled or not. Back to the issue of source control removing the tag. See the sketch at the end of this post.)

Hybrid workers:

Features:
- I would personally love the ability to pause a hybrid worker in a hybrid worker group. Why? We currently have four hybrid workers, all running Windows, with monthly patch windows; if a job hits a hybrid worker that is being patched, the job goes into a suspended state and is not picked up again. We could remove the hybrid worker from the group, but that would also remove the extension, which would be reinstalled when re-added, and then we would hit this: https://learn.microsoft.com/en-us/azure/automation/troubleshoot/extension-based-hybrid-runbook-worker#scenario-runbooks-go-into-a-suspended-state-on-a-hybrid-runbook-worker-when-using-a-custom-account-on-a-server-with-user-account-control-uac-enabled. This is an issue we originally started experiencing when we migrated from agent-based hybrid workers to extension-based, due to the discontinuation of agent-based workers. Another great reason is troubleshooting something on a specific hybrid worker, or updating modules on a specific hybrid worker, which cannot be done while the hybrid worker is still running jobs unless you use force, hit a time when it is not running, or manually stop the service, and then again end up with suspended jobs that are not picked up.

Additional features that I would personally love to see offered:
- A front end for Azure Automation for end users (think self-service portal) as some kind of add-on feature, allowing a specific group of people to start a given runbook via a more user-friendly front end, with more limitations for end-user groupings. I know there are already third-party solutions for this, and to be honest I almost created one myself on my last maternity leave, but my company chose not to pursue it further, as the statement is that we have one self-service platform, ServiceNow. It can be viewed at https://github.com/Mynster9361/Self-Service-Frontend-Azure-Automation, just to give some inspiration if needed.
- RBAC permissions for individual runbooks (as far as I remember, this can already be done through the CLI).
- A general overview management blade for managing webhooks and the associated runbooks. Currently there is no way to know which runbooks have an active/inactive webhook assigned to them; the only way to see this is to go to a runbook, open the Webhooks blade, and look whether there is one or not.
Personally, I would love to see a blade on the general overview called "Webhooks" that looks similar to this table:

Runbook                                       | Name                         | Expiration       | Last triggered | Status
Runbook1 (clickable, leading to the runbook)  | Custom_name_for_this webhook | 02/01/2022 16:00 |                | Enabled
Runbook2                                      | webhook2                     | 11/11/2026 16:00 | Today          | Disabled
Runbook3                                      | webhook3                     | 11/11/2027 16:00 | Today          | Enabled

Instead of webhooks being a gentleman's agreement on when you can and shouldn't enable them, and on naming and such, you would have one general overview of all webhooks, which would add value in terms of security and easier webhook management.

The two things I see as most critical, or highest on my wish list:
1. Source control definitely needs to be updated/revamped so that it supports other languages/versions and does not remove tags. Another nice-to-have would be to force it to follow source control, so that if I delete something that is in source control, it is also deleted in Azure Automation.
2. Hybrid workers in maintenance mode, so a worker completes its running jobs and you can then work on it, whether for bugs or just regular updates.
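As an illustration of the schedule-as-code workaround described under Scheduling above, here is a rough sketch of that convention. The parameter name and JSON shape are invented for this example, not an Azure Automation feature:

```powershell
# Hypothetical convention: a runbook carries its desired schedule as a default
# parameter value; a housekeeping runbook reads this default and creates/links
# the schedule if it is missing.
param(
    # Not used at runtime; parsed by the scheduling runbook.
    [string]$ScheduleDefinition = '{"Frequency":"Hour","Interval":1,"Enabled":true}'
)

# Housekeeping side (sketch): read the definition, ensure the schedule exists.
$def = $ScheduleDefinition | ConvertFrom-Json
if ($def.Enabled) {
    Write-Output "Would ensure a schedule running every $($def.Interval) $($def.Frequency)(s)."
    # The housekeeping runbook would call New-AzAutomationSchedule and
    # Register-AzAutomationScheduledRunbook (Az.Automation) when the schedule is missing.
}
```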
Azure AI Studio / Azure AI Foundry: A Powerful Platform for Generative AI

In recent years, generative AI has moved rapidly from research labs to real-world applications. Microsoft’s offering in this space has evolved to meet demand: Azure AI Studio (also known under the broader banner Azure AI Foundry) is Microsoft’s integrated environment for creating, customizing, deploying, and managing AI models, agents, and applications. This blog explores what Azure AI Studio is, why it matters, what features it offers, its advantages and constraints, and how you might leverage it in your own projects.

https://dellenny.com/azure-ai-studio-azure-ai-foundry-a-powerful-platform-for-generative-ai/
Generative AI in Azure: A Practical Guide to Getting Started

Generative AI has quickly become one of the most transformative technologies in the cloud era, enabling businesses to create content, enhance productivity, and unlock entirely new use cases. With Microsoft Azure’s AI services, developers and organizations can harness powerful generative AI capabilities without the need to build everything from scratch. In this blog, we’ll explore what generative AI in Azure looks like, the key services available, and how you can get started using them in your applications.

https://dellenny.com/generative-ai-in-azure-a-practical-guide-to-getting-started/
Burst / B-series VMs

Hi everyone, in Cost Management reports I see a line item called “Windows Server Burst” associated with B-series VMs. I’ve been trying to understand this for a while, but I haven’t been able to find any official Microsoft documentation that clearly explains what this charge covers. My assumption is that it refers to the Windows Server license cost for burstable VMs, and not an additional variable charge when the CPU actually goes into burst mode, but I’d like to confirm. Has anyone found an official Microsoft reference, or can anyone provide clarification? Thanks a lot!
Service Trust Portal no longer support Microsoft Account (MSA) access

Dear all, we need to access certain documents (i.e., SOC 2 or ISO 27xxx) on https://servicetrust.microsoft.com/DocumentPage/d013b518-c1fe-462c-8124-de901f3b68dc. To download documents, you need to be signed in first. However, when I click on "Sign in" (using the same email/account as for our Azure account), I get the error message "Service Trust Portal no longer support Microsoft Account (MSA) access." (see screenshot below). It seems I am not the only one; other users have had similar issues, but they also could not find a solution (or at least it was not mentioned in their post): https://techcommunity.microsoft.com/t5/security-compliance-and-identity/cannot-login-to-service-trust-portal/m-p/3632978. I have been trying this for more than a week now and have also created a support ticket (which has not yet been assigned to a support agent). It is quite cumbersome, and I hope some of you might have an idea, since getting these documents is quite crucial for us.