Cloud Services
33 Topics

Boosting Performance with the Latest Generations of Virtual Machines in Azure
Microsoft Azure recently announced the availability of the new generation of VMs (v6), including the Dl/Dv6 (general purpose) and El/Ev6 (memory-optimized) series. These VMs are powered by the latest Intel Xeon processors and are engineered to deliver:

Up to 30% higher per-core performance compared to previous generations.
Greater scalability, with options of up to 128 vCPUs (Dv6) and 192 vCPUs (Ev6).
Significant enhancements in CPU cache (up to 5x larger), memory bandwidth, and NVMe-enabled storage.
Improved security with features like Intel Total Memory Encryption (TME) and enhanced networking via the new Microsoft Azure Network Adapter (MANA).

Evaluated Virtual Machines and Geekbench Results

The table below summarizes the configuration and Geekbench results for the two VMs we tested. VM1 represents a previous-generation machine with more memory, while VM2 is from the new Dldsv6 series, showing superior performance despite having half the RAM.

VM1: D16s v5 (16 vCPUs, 64 GB RAM)
VM2: D16ls v6 (16 vCPUs, 32 GB RAM)

Key Observations:

Single-Core Performance: VM2 scores 2013 compared to VM1's 1570, a 28.2% improvement. This demonstrates that even with half the memory, the new Dldsv6 series provides significantly better performance per core.
Multi-Core Performance: VM2 achieves a multi-core score of 12,566 versus 9,454 for VM1, a 32.9% increase in performance.
Enhanced Throughput in Specific Workloads:
File Compression: 1909 MB/s (VM2) vs. 1654 MB/s (VM1), a 15.4% improvement.
Object Detection: 2851 images/s (VM2) vs. 1592 images/s (VM1), a remarkable 79.1% improvement.
Ray Tracing: 1798 Kpixels/s (VM2) vs. 1512 Kpixels/s (VM1), an 18.9% boost.

These results reflect the significant advancements enabled by the new generation of Intel processors.
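The percentage gains quoted above follow directly from the raw Geekbench scores; a quick Python check reproduces them:

```python
# Geekbench 6 results from the two test VMs (VM1 = D16s v5, VM2 = D16ls v6)
results = {
    "single-core score": (1570, 2013),
    "multi-core score": (9454, 12566),
    "file compression (MB/s)": (1654, 1909),
    "object detection (images/s)": (1592, 2851),
    "ray tracing (Kpixels/s)": (1512, 1798),
}

def improvement(vm1, vm2):
    """Relative improvement of VM2 over VM1, in percent."""
    return (vm2 - vm1) / vm1 * 100

for metric, (vm1, vm2) in results.items():
    print(f"{metric}: {improvement(vm1, vm2):.1f}% improvement")
```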
Evolution of Hardware in Azure: From Ice Lake-SP to Emerald Rapids

Technical Specifications of the Processors Evaluated

Understanding the dramatic performance improvements begins with a look at the processor specifications:

Intel Xeon Platinum 8370C (Ice Lake-SP), used by VM1:
Architecture: Ice Lake-SP
Base Frequency: 2.79 GHz
Max Frequency: 3.5 GHz
L3 Cache: 48 MB
Supported Instructions: AVX-512, VNNI, DL Boost

Intel Xeon Platinum 8573C (Emerald Rapids), used by VM2:
Architecture: Emerald Rapids
Base Frequency: 2.3 GHz
Max Frequency: 4.2 GHz
L3 Cache: 260 MB
Supported Instructions: AVX-512, AMX, VNNI, DL Boost

Impact on Performance

Cache Size Increase: The jump from 48 MB to 260 MB of L3 cache is a key factor. A larger cache reduces dependency on RAM accesses, thereby lowering latency and significantly boosting performance in memory-intensive workloads such as AI, big data, and scientific simulations.
Enhanced Frequency Dynamics: While the base frequency of the Emerald Rapids processor is slightly lower, its higher maximum frequency (4.2 GHz vs. 3.5 GHz) means that under load, performance-critical tasks can benefit from this burst capability.
Advanced Instruction Support: The introduction of AMX (Advanced Matrix Extensions) in Emerald Rapids, along with robust AVX-512 support, optimizes the execution of complex mathematical and AI workloads.
Efficiency Gains: These processors also offer improved energy efficiency, reducing the energy consumed per compute unit. This efficiency translates into lower operational costs and a more sustainable cloud environment.

Beyond Our Tests: Overview of the New v6 Series

While our tests focused on the Dldsv6 series, Azure's new v6 generation includes several families designed for different workloads:
1. Dlsv6 and Dldsv6-series
Segment: General purpose with NVMe local storage (where applicable)
vCPUs Range: 2 - 128
Memory: 4 - 256 GiB
Local Disk: Up to 7,040 GiB (Dldsv6)
Highlights: 5x larger CPU cache (up to 300 MB) and higher network bandwidth (up to 54 Gbps)

2. Dsv6 and Ddsv6-series
Segment: General purpose
vCPUs Range: 2 - 128
Memory: Up to 512 GiB
Local Disk: Up to 7,040 GiB in Ddsv6
Highlights: Up to 30% improved performance over the previous Dv5 generation and Azure Boost for enhanced IOPS and network performance

3. Esv6 and Edsv6-series
Segment: Memory-optimized
vCPUs Range: 2 - 192* (with larger sizes available in Q2)
Memory: Up to 1.8 TiB (1832 GiB)
Local Disk: Up to 10,560 GiB in Edsv6
Highlights: Ideal for in-memory analytics, relational databases, and enterprise applications requiring vast amounts of RAM
*Note: Sizes with higher vCPUs and memory (e.g., E128/E192) will be generally available in Q2 of this year.

Key Innovations in the v6 Generation

Increased CPU Cache: Up to 5x more cache (from 60 MB to 300 MB) dramatically improves data access speeds.
NVMe Storage: Enhanced local and remote storage performance, with up to 3x more IOPS locally and the capability to reach 400k IOPS remotely via Azure Boost.
Azure Boost: Delivers higher throughput (up to 12 GB/s remote disk throughput) and improved network bandwidth (up to 200 Gbps for larger sizes).
Microsoft Azure Network Adapter (MANA): Provides improved network stability and performance for both Windows and Linux environments.
Intel Total Memory Encryption (TME): Enhances data security by encrypting system memory.
Scalability: Options ranging from 128 vCPUs/512 GiB RAM in the Dv6 family to 192 vCPUs/1.8 TiB RAM in the Ev6 family.
Performance Gains: Benchmarks and internal tests (such as SPEC CPU Integer) indicate improvements of 15%-30% across various workloads including web applications, databases, analytics, and generative AI tasks.
My personal perspective

The new Azure v6 VMs mark a significant advancement in cloud computing performance, scalability, and security. Our Geekbench tests clearly show that the Dldsv6 series, powered by the latest Intel Xeon Platinum 8573C (Emerald Rapids), delivers up to 30% better performance than previous-generation machines with more resources. Coupled with the hardware evolution from Ice Lake-SP to Emerald Rapids, which brings a dramatic increase in cache size, improved frequency dynamics, and advanced instruction support, the new v6 generation sets a new standard for high-performance workloads. Whether you're running critical enterprise applications, data-intensive analytics, or next-generation AI models, the enhanced capabilities of these VMs offer significant benefits in performance, efficiency, and cost-effectiveness.

References and Further Reading:
Microsoft's official announcement: Azure Dldsv6 VMs
Internal tests performed with Geekbench 6.4.0 (AVX2) in the Germany West Central Azure region.

Backup Vaults vs Recovery Services Vault
Hello Team,

Microsoft has introduced multiple vault types, each serving different backup and disaster recovery needs. Below is a high-level differentiation:

Recovery Services Vault (RSV)
Supports Azure Backup (VMs, SQL, SAP HANA, Files) and Azure Site Recovery (disaster recovery).
Offers backup policies, recovery points, replication, and failover management.

Backup Vault
A newer, streamlined vault designed for Azure Backup only.
Supports Backup Short-Term Retention (Instant Restore) and Cross-Region Restore.
Primarily used with Azure Policy and Backup Center for better management at scale.

Microsoft Continuity Center (MCC)
A centralized disaster recovery hub in Azure.
Integrates Azure Site Recovery (ASR) and backup services into a single pane of glass.
Allows for failover, backup monitoring, and business continuity planning.

Is there any documentation that goes a little deeper into the above topics?

Move Azure resources from Tenant X to Tenant Y
Hi community,

I have one subscription (Subscription-A) in Azure under Tenant-A with a few resources, such as Staging/UAT VMs, a storage account holding about 3 TB of data overall, a key vault, and a web app. I now have a new subscription (Subscription-B) in Azure under Tenant-B, which is provided by a CSP.

Tenant A (Subscription-A) is pay-as-you-go.
Tenant B (Subscription-B) is under the CSP pay-as-you-go model.

What is the process/procedure to move/migrate resources from Tenant A (Subscription-A) to Tenant B (Subscription-B)?

Manage serverless APIs with Apache APISIX
Serverless computing enables developers to build applications faster by eliminating the need for them to manage infrastructure. With serverless APIs in the cloud, the cloud service provider automatically provisions, scales, and manages the infrastructure required to run the code. This article shows a simple example of how to manage Java-based serverless APIs built with https://azure.microsoft.com/en-in/products/functions/. It uses the https://apisix.apache.org/docs/apisix/plugins/azure-functions/ plugin to integrate https://apisix.apache.org/ with Azure Functions: the plugin invokes the HTTP trigger functions and returns the response from https://azure.microsoft.com/en-us. Apache APISIX also offers https://apisix.apache.org/docs/apisix/plugins/serverless/ that can be used with other serverless solutions like https://aws.amazon.com/lambda/.

Learning objectives

You will learn the following throughout the article:
What serverless APIs are.
The role of an API gateway in managing complex serverless API traffic.
How to set up the Apache APISIX Gateway.
How to build serverless APIs with Azure Functions.
How to expose serverless APIs as upstream services.
How to secure serverless APIs with APISIX authentication plugins.
How to apply rate-limiting policies.

Before we get started with the practical side of the tutorial, let's go through some concepts.

What are serverless APIs?

https://www.koombea.com/blog/serverless-apis/ are the same as traditional APIs, except they utilize a serverless backend. For businesses and developers, serverless computing means they no longer have to worry about server maintenance or scaling server resources to meet user demands. Serverless APIs also sidestep scaling issues, because server resources are created every time a request is made, and they can reduce latency because the code is not tied to a single origin server.
Last but not least, serverless computing is far more cost-efficient than the traditional alternative of building an entire https://www.ideamotive.co/blog/serverless-vs-microservices-architecture#:~:text=Serverless%20vs%20Microservices%20%E2%80%93%20Main%20Differences,can%20host%20microservices%20on%20serverless..

Serverless APIs using Azure Functions

An https://learn.microsoft.com/en-us/azure/azure-functions/functions-overview is a simple way of running small pieces of code in the cloud. You don't have to worry about the infrastructure required to host that code. You can write the function in C#, Java, JavaScript, PowerShell, Python, or any of the languages listed in https://learn.microsoft.com/en-us/azure/azure-functions/supported-languages. With Azure Functions, you can rapidly build HTTP APIs for your web apps without the headache of web frameworks. Azure Functions is serverless, so you're only charged when an HTTP endpoint is called; when the endpoints aren't being used, you aren't being charged. These two things combined make serverless platforms like Azure Functions an ideal choice for APIs that experience unexpected spikes in traffic.

An API gateway for serverless API traffic management

An https://apisix.apache.org/docs/apisix/terminology/api-gateway/ is a fundamental part of a serverless API because it is responsible for the connection between a defined API and the function handling requests to that API. An API gateway brings many benefits to a serverless API architecture. In addition to its primary edge functionalities, such as authentication, rate throttling, observability, and caching, it is capable of invoking serverless APIs, subscribing to events and processing them using callbacks, and forwarding authentication requests to external authorization services with completely custom serverless function logic.
Manage serverless APIs with Apache APISIX: demo

With enough theoretical knowledge in mind, we can now jump into a practical session. We use an example project repo, https://github.com/Boburmirzo/apisix-manage-serverless-apis, hosted on GitHub. There you can find the source code and the sample curl commands we use in this tutorial. For our mini-project, we'll work with two simple Azure Functions written in Java that simulate our serverless APIs for the https://github.com/Boburmirzo/apisix-manage-serverless-apis/tree/main/upstream/src/main/java/com/function/products and https://github.com/Boburmirzo/apisix-manage-serverless-apis/tree/main/upstream/src/main/java/com/function/reviews services.

Prerequisites
Familiarity with fundamental API concepts.
A working knowledge of Azure Functions; for example, https://learn.microsoft.com/en-us/training/modules/build-api-azure-functions/1-introduction shows how to build an HTTP API using the Azure Functions extension for Visual Studio Code.
https://www.docker.com/products/docker-desktop/
https://azure.microsoft.com/en-us/free/
https://docs.microsoft.com/cli/azure/install-azure-cli
https://aka.ms/azure-jdks, at least version 8
https://maven.apache.org/
https://www.npmjs.com/package/azure-functions-core-tools
https://code.visualstudio.com/download
https://dev.to/apisix/manage-serverless-apis-with-apache-apisix-55a8
https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions

Set up the project

The first thing to do is clone the project repo from GitHub:

git clone https://github.com/Boburmirzo/apisix-manage-serverless-apis.git

Open the project folder in your favorite code editor. This tutorial uses https://code.visualstudio.com/.

Run Apache APISIX

To run Apache APISIX locally, open a new terminal window and run the following command from the root folder of the project:

docker compose up -d

The above command runs Apache APISIX and etcd together with Docker.
For example, if Docker Desktop is installed on your machine, you can see the running containers there. We installed APISIX in our local environment for this demo, but you can also deploy it to Azure and run it on https://learn.microsoft.com/en-us/azure/container-instances/. See https://dev.to/apisix/run-apache-apisix-on-microsoft-azure-container-instance-1gdk.

Run the Azure Functions

Then, navigate to the /upstream folder and run:

mvn clean install
mvn azure-functions:run

The two functions will start in a terminal window, and you can request both serverless APIs in your browser.

Deploy the Azure Functions

Next, we deploy the function code to an Azure Function App by running the command below:

mvn azure-functions:deploy

Or you can simply follow this tutorial: https://learn.microsoft.com/en-us/azure/azure-functions/create-first-function-cli-java?tabs=bash%2Cazure-cli%2Cbrowser#deploy-the-function-project-to-azure

Note that the function app name is randomly generated based on your artifactId, appended with a randomly generated number. In this tutorial's commands, the function app name serverless-apis is used. To make sure the functions work, we can test an invocation call by requesting each URL directly in the browser:

https://serverless-apis.azurewebsites.net/api/products
https://serverless-apis.azurewebsites.net/api/reviews

Exposing serverless APIs in APISIX

Once the setup is complete, we will expose the serverless Azure Function APIs as upstream services in APISIX. To do so, we need to create a new https://apisix.apache.org/docs/apisix/terminology/route/ with the azure-functions plugin enabled for both the products and reviews serverless backend APIs. If the azure-functions plugin is enabled on a route, APISIX listens for requests on that route's path and invokes the remote Azure Function code with the parameters from that request.
Create a Route for Products

To create a route for the Products function, run the following command:

curl http://127.0.0.1:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "name": "Create a route with Azure function plugin",
  "plugins": {
    "azure-functions": {
      "function_uri": "https://serverless-apis.azurewebsites.net/api/products",
      "ssl_verify": false
    }
  },
  "uri": "/products"
}'

Note that we set the ssl_verify attribute of the azure-functions plugin to false to disable SSL verification for demo purposes only. You can enable it to perform more secure requests from APISIX to Azure Functions. Learn about the other https://apisix.apache.org/docs/apisix/plugins/azure-functions/#attributes.

Test With a Curl Request

We can use curl to send a request to check whether APISIX listens on the path correctly and forwards the request to the upstream service successfully:

curl -i -XGET http://127.0.0.1:9080/products

HTTP/1.1 200 OK

[
  { "id": 1, "name": "Product1", "description": "Description1" },
  { "id": 2, "name": "Product2", "description": "Description2" }
]

Great! We got a response from the actual serverless API on Azure Functions. Next, we make a similar configuration for the reviews function.

Create a Route for Reviews and test

Create the second route with the azure-functions plugin enabled:

curl http://127.0.0.1:9180/apisix/admin/routes/2 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "plugins": {
    "azure-functions": {
      "function_uri": "https://serverless-apis.azurewebsites.net/api/reviews",
      "ssl_verify": false
    }
  },
  "uri": "/reviews"
}'

Test the serverless API response:

curl -i -XGET http://127.0.0.1:9080/reviews

In this section, we introduced new routes and added the azure-functions plugin for our serverless APIs so that APISIX can invoke the remote Azure Functions and manage the traffic. In the following sections, we will learn how to authenticate API consumers and apply runtime policies like rate limiting.
Secure serverless APIs with APISIX authentication plugins

Up to now, our serverless APIs have been public and accessible to unauthorized users. In this section, we will enable authentication to disallow unauthorized requests to the serverless APIs. Apache APISIX can verify the identity associated with API requests through credential and token validation. It can also determine which traffic is authorized to pass through the API to backend services. You can check all the available https://apisix.apache.org/docs/apisix/plugins/key-auth/. Let's create a new consumer for our serverless APIs and add the https://apisix.apache.org/docs/apisix/plugins/basic-auth/ plugin to the existing routes so that only allowed users can access them.

Create a new consumer for serverless APIs

The command below creates our new consumer with its credentials (username and password):

curl http://127.0.0.1:9180/apisix/admin/consumers -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "username": "consumer1",
  "plugins": {
    "basic-auth": {
      "username": "username1",
      "password": "password1"
    }
  }
}'

Add the basic-auth plugin to the existing Products and Reviews routes.
Now we configure the basic-auth plugin on the routes so that APISIX checks the request header against the API consumer credentials each time the APIs are called:

curl http://127.0.0.1:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "name": "Create a route with Azure function plugin",
  "plugins": {
    "azure-functions": {
      "function_uri": "https://serverless-apis.azurewebsites.net/api/products",
      "ssl_verify": false
    },
    "basic-auth": {}
  },
  "uri": "/products"
}'

curl http://127.0.0.1:9180/apisix/admin/routes/2 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "plugins": {
    "azure-functions": {
      "function_uri": "https://serverless-apis.azurewebsites.net/api/reviews",
      "ssl_verify": false
    },
    "basic-auth": {}
  },
  "uri": "/reviews"
}'

Test the basic-auth plugin

Now if we request the serverless APIs without user credentials in the header, we get an unauthorized error:

curl -i http://127.0.0.1:9080/products

HTTP/1.1 401 Unauthorized
{"message":"Missing authorization in request"}

The result is as we expected. If we provide the correct user credentials in the request and access the same endpoint, it works:

curl -i -u username1:password1 http://127.0.0.1:9080/products

HTTP/1.1 200 OK

With the help of Apache APISIX's basic authentication plugin, we have validated the identity of clients attempting to request the serverless APIs.

Apply rate-limiting policies for serverless APIs

In this section, we will protect the serverless APIs from abuse by applying a throttling policy. In the Apache APISIX Gateway, we can apply rate limiting to restrict the number of incoming calls.

Apply and test the rate-limit policy

With the existing route configurations for the Products and Reviews functions, we can apply a rate-limit policy with the https://apisix.apache.org/docs/apisix/plugins/limit-count/ plugin to protect our APIs from abnormal usage. We will limit the number of API calls to 2 per 60 seconds per API consumer.
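To see what such a policy does conceptually, here is a minimal fixed-window counter sketched in Python. This is an illustrative model of the behavior, not APISIX's actual implementation (APISIX tracks its counters inside the gateway, optionally shared across nodes via Redis):

```python
import time

class FixedWindowLimiter:
    """Allow at most `count` requests per `time_window` seconds, per key."""

    def __init__(self, count, time_window):
        self.count = count
        self.time_window = time_window
        self.windows = {}  # key -> (window_start, requests_seen)

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        start, seen = self.windows.get(key, (now, 0))
        if now - start >= self.time_window:
            start, seen = now, 0  # window expired: reset the counter
        if seen >= self.count:
            self.windows[key] = (start, seen)
            return False  # over the limit: the gateway would answer 403 here
        self.windows[key] = (start, seen + 1)
        return True

# Mirror the policy above: 2 calls per 60 seconds per consumer
limiter = FixedWindowLimiter(count=2, time_window=60)
for i in range(3):
    print(f"request {i + 1}:", "allowed" if limiter.allow("consumer1") else "rejected")
# requests 1 and 2 are allowed; request 3 is rejected
```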
To enable the limit-count plugin for the existing Products route, we add the plugin to the plugins attribute in our JSON route configuration:

curl http://127.0.0.1:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "name": "Create a route with Azure function plugin",
  "plugins": {
    "azure-functions": {
      "function_uri": "https://serverless-apis.azurewebsites.net/api/products",
      "ssl_verify": false
    },
    "basic-auth": {},
    "limit-count": {
      "count": 2,
      "time_window": 60,
      "rejected_code": 403,
      "rejected_msg": "Requests are too frequent, please try again later."
    }
  },
  "uri": "/products"
}'

Apache APISIX handles the first two requests as usual; however, a third request in the same period returns a 403 HTTP Forbidden code with our custom error message:

HTTP/1.1 403 Forbidden
{"error_msg":"Requests are too frequent, please try again later."}

Next steps

In this article, we learned step by step how to create Java-based serverless APIs with Azure Functions and how to manage them throughout their full lifecycle with the Apache APISIX Gateway, from exposing the serverless APIs as upstream services in APISIX to securing them and applying rate limiting to restrict the number of requests. This opens the door to other use cases for API gateway and serverless API integration. You can explore other capabilities of the APISIX Gateway by chaining various https://apisix.apache.org/plugins/ to transform requests, https://dev.to/apisix/apis-observability-with-apache-apisix-plugins-1bnm, measure the performance and usage of your serverless APIs, https://apisix.apache.org/blog/2022/12/14/web-caching-server/, and further evolve them by https://blog.frankel.ch/evolve-apis/#:~:text=Joe%0AHello%20Joe-,Version%20the%20API,-Evolving%20an%20API, all of which help you reduce development time, increase scalability, and save costs. Apache APISIX is a fully open-source API gateway solution.
If you require more advanced API management features for your serverless APIs, you can use https://api7.ai/apisix-vs-api7 or https://api7.ai/cloud, which are powered by APISIX.

Related resources
https://azure.microsoft.com/en-in/products/functions/
https://learn.microsoft.com/en-us/training/modules/build-api-azure-functions/
https://learn.microsoft.com/en-us/azure/azure-functions/create-first-function-vs-code-java
https://dev.to/apisix/run-apache-apisix-on-microsoft-azure-container-instance-1gdk

Recommended content
https://iambobur.com/2022/11/22/how-to-choose-the-right-api-gateway/
https://api7.ai/blog/why-is-apache-apisix-the-best-api-gateway
https://api7.ai/blog/api-gateway-policies

Community
https://apisix.apache.org/docs/general/join/
https://twitter.com/ApacheAPISIX
https://join.slack.com/t/the-asf/shared_invite/zt-vlfbf7ch-HkbNHiU_uDlcH_RvaHv9gQ

How pricing will be calculated for data stored from a Dataverse table using Azure Synapse Link
Hi folks,

I am storing stale data from a Dataverse table to Azure Data Lake Storage using Azure Synapse Link, which stores data day by day for the whole year. For example, each day it transfers 10-50 records to Azure Data Lake, so my file size in Azure Data Lake Storage grows day by day. I want to know how the pricing is calculated for storing and updating the file in Azure Data Lake. Will it also charge for the write operations that execute each day?

Capturing image of VM
What is the difference between the generalized state and the specialized state when capturing a VM image? Is sysprep really needed before capturing an image, irrespective of which OS state we capture later? Also, how can we use the existing VM from which we captured the image as it was before, since we know that it becomes unusable? Please, I need answers for this.

Updates in Azure Firewall
Hello Folks,

Today I will discuss various features that have been updated in Azure Firewall (those I have used in my work). Obviously, in this dynamic space everything changes in a second, but here I am referring to the updates I have come across in recent times. So let's start.

1) IDPS signature lookup - Perhaps this is the most interesting feature I found in Azure Firewall, and I have used it in my projects and labs. You can go to the IDPS option in Azure Firewall, enable individual signatures, and set their mode to Alert or Deny. If you find a false positive where your request is blocked by a faulty signature, you can take the signature ID and set its IDPS mode to off.

2) TLS certificate auto-generation - The second feature I have worked with is the TLS certificate generator. For non-production environments you can use this mechanism, which automatically creates a managed identity, a key vault, and a self-signed CA certificate.

3) Web Categories lookup - Web Categories is a filtering feature that allows administrators to allow or deny web traffic based on categories, such as gambling, social media, and more. Microsoft added tools that help manage these web categories: Category Check and Mis-Categorization Request.

4) IDPS private IP ranges - In Azure Firewall Premium IDPS, private IP address ranges are used to identify whether traffic is inbound or outbound. By default, only ranges defined by the Internet Assigned Numbers Authority (IANA) in RFC 1918 are considered private IP addresses. To modify your private IP addresses, you can now easily edit, remove, or add ranges as needed.

With PowerShell, collect details about all Azure VMs in a subscription!
Hi Microsoft Azure Friends,

I used the PowerShell ISE for this configuration. But you are also very welcome to use Visual Studio Code, just as you wish. Please start with the following steps to begin the deployment (the hashtags are comments):

#The first two lines have nothing to do with the configuration, but make some space below in the blue part of the ISE
Set-Location C:\Temp
Clear-Host

#So that you can carry out the configuration, you need the necessary cmdlets. These are contained in the module Az (the higher-level module comprising a number of submodules)
Install-Module -Name Az -Force -AllowClobber -Verbose

#Log into Azure
Connect-AzAccount

#Select the correct subscription
Get-AzContext
Get-AzSubscription
Get-AzSubscription -SubscriptionName "your subscription name" | Select-AzSubscription

#Provide the subscription Id where the VMs reside
$subscriptionId = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

#Provide the name of the csv file to be exported
$reportName = "myReport.csv"

#If you didn't select the subscription in the step above, you can do so now (or just skip it)
Select-AzSubscription $subscriptionId

#Some variables
$report = @()
$vms = Get-AzVM
$publicIps = Get-AzPublicIpAddress
$nics = Get-AzNetworkInterface | ?{ $_.VirtualMachine -ne $null }

#Now start the loop
foreach ($nic in $nics) {
    $info = "" | Select VmName, ResourceGroupName, Region, VirturalNetwork, Subnet, PrivateIpAddress, OsType, PublicIPAddress
    $vm = $vms | ? -Property Id -eq $nic.VirtualMachine.Id
    foreach ($publicIp in $publicIps) {
        if ($nic.IpConfigurations.id -eq $publicIp.ipconfiguration.Id) {
            $info.PublicIPAddress = $publicIp.ipaddress
        }
    }
    $info.OsType = $vm.StorageProfile.OsDisk.OsType
    $info.VMName = $vm.Name
    $info.ResourceGroupName = $vm.ResourceGroupName
    $info.Region = $vm.Location
    $info.VirturalNetwork = $nic.IpConfigurations.subnet.Id.Split("/")[-3]
    $info.Subnet = $nic.IpConfigurations.subnet.Id.Split("/")[-1]
    $info.PrivateIpAddress = $nic.IpConfigurations.PrivateIpAddress
    $report += $info
}

#Now let's look at the result
$report | ft VmName, ResourceGroupName, Region, VirturalNetwork, Subnet, PrivateIpAddress, OsType, PublicIPAddress

#We save the file in our home folder
$report | Export-CSV "$home/$reportName"

Now you have used PowerShell to create a report with the details about the VMs in a subscription! Congratulations!

I hope this article was useful. Best regards, Tom Wechsler

P.S. All scripts (#PowerShell, Azure CLI, #Terraform, #ARM) that I use can be found on GitHub! https://github.com/tomwechsler

Azure Recovery Services Vault Pricing for specific VMs
Hi,

We have recently set up a few VMs for our security department, and they will be paying for them out of their budget. We can easily filter the costs by the resource group for the VMs, storage, etc., but we are struggling with re-charging them for the cost of their backups. We are using an Azure Recovery Services vault and performing a selective disk backup on two of their VMs. I can manually cost this using the pricing calculator, but I don't want to have to work this out every time based on how much data they have used. Is there a way to see the cost per VM for backups? I can only see a cost per Recovery Services vault and nothing more granular.

Thanks, J
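In the meantime, a rough per-VM estimate can be scripted from Azure Backup's published pricing model: a monthly protected-instance fee tiered by the size of the protected data, plus the backup storage consumed. The rates below are illustrative placeholders, not current prices; check the Azure pricing page for your region and redundancy option before relying on the numbers.

```python
# Sketch of a per-VM Azure Backup cost estimate. All rates are ASSUMED
# placeholder values for illustration; real prices vary by region and SKU.
INSTANCE_FEE_SMALL = 5.0       # assumed monthly fee for instances <= 50 GB
INSTANCE_FEE_PER_500GB = 10.0  # assumed monthly fee per started 500 GB block above 50 GB
STORAGE_RATE_LRS = 0.0224      # assumed $/GB-month for LRS backup storage

def monthly_backup_cost(protected_gb, backup_storage_gb):
    """Estimate one VM's monthly backup cost: tiered instance fee + storage."""
    if protected_gb <= 50:
        instance_fee = INSTANCE_FEE_SMALL
    else:
        # one fee unit per started 500 GB block of protected data
        blocks = -(-protected_gb // 500)  # ceiling division
        instance_fee = INSTANCE_FEE_PER_500GB * blocks
    return instance_fee + backup_storage_gb * STORAGE_RATE_LRS

# Example: a VM with 200 GB of protected data and 300 GB of backup storage
print(f"${monthly_backup_cost(200, 300):.2f} per month")
```

Feeding each VM's protected size into a helper like this at least turns the pricing-calculator exercise into a one-liner, even though it is an estimate rather than the billed figure.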