Latest Discussions
Azure IMDS (Instance Metadata Service) calls to 168.63.129.16 blocked after July 1st, 2025
[ACTION REQUIRED] After 1 July 2025, it will no longer be possible to query Azure IMDS endpoints at the IP address 168.63.129.16. Please begin using 169.254.169.254 to communicate with Azure IMDS as soon as possible.

Officially, IMDS APIs can only be queried at 169.254.169.254. However, due to the internal design of Azure, IMDS endpoints can also be queried at the IP address 168.63.129.16 from within a virtual machine, and some customers are using this unofficial pathway to communicate with IMDS. An upcoming change in Azure will permanently block IMDS requests on 168.63.129.16. After 1 July 2025, you won't be able to access Azure IMDS endpoints with that IP. You can continue to use 168.63.129.16 to call IMDS APIs until that date, but we recommend you begin your transition now.

HOW TO CHECK IF YOU ARE IMPACTED

Perform a code analysis of your application. IMDS has a reserved IP address of 169.254.169.254; the VM's private communication channel has a reserved IP address of 168.63.129.16. Use code search to verify that your client is not using the IP address 168.63.129.16 for metadata requests. All IMDS REST requests start with "/metadata", and all endpoints can be found at IMDS Public endpoints.

REQUIRED ACTION

Fix all URLs using 168.63.129.16 to prepare for its decoupling from IMDS. For example, this IMDS token endpoint URL will soon be blocked:

curl -s -H Metadata:true --noproxy "*" "http://168.63.129.16/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/"

To avoid service disruptions, fix URLs to use 169.254.169.254, as in this example:

curl -s -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/"
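If your code queries IMDS from PowerShell rather than curl, the same change applies. Below is a minimal sketch, not part of the original announcement, that queries the supported address from inside an Azure VM (it assumes PowerShell 7+ for the -NoProxy switch):

# Minimal illustrative sketch: query IMDS at the supported address from inside a VM.
# Assumes PowerShell 7+ (for -NoProxy); the Metadata header is mandatory for IMDS.
$instance = Invoke-RestMethod -Method GET -NoProxy `
    -Headers @{ Metadata = "true" } `
    -Uri "http://169.254.169.254/metadata/instance?api-version=2021-02-01"

# Quick sanity check that the supported endpoint answered.
$instance.compute.name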
MinnieLahoti (Microsoft), Dec 13, 2024

Automating Azure VM Snapshot Creation Across Subscriptions

Introduction

Managing virtual machines in Azure can be time-consuming, especially when creating snapshots across multiple subscriptions. Typically, this involves logging into the Azure portal, manually locating the VM, and creating snapshots for both the OS disk and attached data disks, which is an inefficient and tedious process. To simplify this, I developed a PowerShell script that automates snapshot creation, allowing me to create snapshots by simply entering the VM name.

This script is part of my toolkit for automating repetitive Azure tasks. It iterates through all subscriptions linked to my Azure account, identifies the specified VM, and generates snapshots for both the OS and data disks within the VM's resource group, adhering to a consistent naming convention. This article describes the script, the rationale behind its design, and how it improves the efficiency of managing Azure resources.

Design Considerations

When designing this script, several key considerations were prioritized to enhance efficiency and user experience:

1. Subscription Handling
All-Subscription Search: The script loops through all Azure subscriptions associated with the account. This ensures the script can locate the VM in any subscription without manually switching between them, which is particularly useful in environments with multiple subscriptions.

2. Dynamic VM Search
Automatic VM Discovery: Instead of requiring users to manually supply resource group and subscription details, the script dynamically searches for the VM by name across all subscriptions. This simplifies the process and reduces the likelihood of errors.

3. Snapshot Naming Convention
Consistent Naming Format: Snapshots are named using the format VMname_diskname_dd-MM-yyyy_HH_mm. This keeps snapshots well-organized and easily identifiable. The script also removes random characters, such as GUIDs, often appended to disk names, resulting in clean and consistent snapshot names.

4. OS and Data Disk Snapshots
Comprehensive Backup: The script separately handles snapshots for the OS disk and the data disks, ensuring that every disk attached to the VM is included in the backup.

5. Time Efficiency
Streamlined Process: The script eliminates repeated manual input and navigation in the Azure portal. By providing only the VM name, users can automate the entire process, from VM identification to snapshot creation. This saves considerable time and effort, particularly in environments with many VMs and subscriptions.

By focusing on these design considerations, the script offers a robust and user-friendly solution for automating VM snapshot creation across Azure subscriptions.

Prerequisites

To use this script, you need:
- The Azure PowerShell module (Az) installed.
- An active Azure account with sufficient permissions to access VMs and create snapshots across subscriptions.
- A VM name as input.

Why Automate Snapshot Creation?

In many organizations, virtual machines (VMs) are critical for running services, and regularly creating snapshots of these VMs is essential for disaster recovery and version control. Traditionally, creating snapshots for Azure VMs involves several manual steps:

Log in to the Azure Portal: Access the Azure portal to start the snapshot creation process.
Navigate Through Subscriptions: Switch between different Azure subscriptions to find the correct VM.
Locate the Correct VM: Search for and select the specific VM for which you want to create snapshots.
Create Snapshots: Manually create snapshots for both the OS disk and any attached data disks.
Repeat the Process: Perform these steps for each disk across multiple VMs or subscriptions.

This manual process is not only time-consuming but also prone to errors. Automating snapshot creation simplifies and streamlines the process:

Reduces Manual Effort: The entire process can be accomplished with a few clicks.
Saves Time: Automation eliminates the need to repeat steps across multiple VMs and subscriptions.
Minimizes Errors: By automating the process, you reduce the risk of human error.

With the automation script, you only need to provide the VM name, and the script handles the rest, making snapshot management more efficient and reliable.

Script Overview

Below is the PowerShell script that automates the process of creating snapshots for a VM across multiple subscriptions in Azure:

<#
.SYNOPSIS
This script automates the process of creating snapshots for a virtual machine (VM) in Azure across multiple subscriptions. The script will locate the VM by its name, determine the resource group where it exists, and create snapshots for both the OS disk and any attached data disks. It ensures that the snapshot names follow a specific naming convention while removing any random characters appended to the disk names.

.DESCRIPTION
- Loops through all Azure subscriptions attached to the account.
- Searches for a specified VM by name across all subscriptions.
- Identifies the resource group of the VM.
- Creates snapshots for the OS disk and all data disks in the same resource group as the VM.
- Follows the snapshot naming convention: computername_diskname_dd-mm-yyyy_hh_mm.
- Removes random characters (e.g., GUIDs) after the disk name in snapshot naming.
.NOTES
Author: Vivek Chandran
Date Created: 11-09-2023
#>

# Login to Azure (if not already logged in)
Connect-AzAccount

# Prompt the user to enter the VM name
$computerName = Read-Host -Prompt "Please enter the name of the VM you want to snapshot"

# Get all subscriptions available to the account
$subscriptions = Get-AzSubscription

# Loop through each subscription to find the specified VM
foreach ($subscription in $subscriptions) {
    # Set the subscription context so that all subsequent commands target this subscription
    Set-AzContext -SubscriptionId $subscription.Id

    # Retrieve all VMs in the current subscription
    $vms = Get-AzVM

    # Check if a VM with the specified name exists in this subscription
    $vm = $vms | Where-Object { $_.Name -eq $computerName }

    if ($vm) {
        # Output message indicating the VM was found
        Write-Host "VM '$computerName' found in subscription '$($subscription.Name)'"

        # Retrieve the resource group where the VM resides
        $resourceGroup = $vm.ResourceGroupName

        # Loop through each data disk attached to the VM and create a snapshot
        foreach ($disk in $vm.StorageProfile.DataDisks) {
            # Get the name of the data disk
            $diskName = $disk.Name

            # Remove any random characters from the disk name after the first underscore (if present)
            $cleanedDiskName = ($diskName -split '_')[0..1] -join '_'

            # Get the current date and time in the format 'dd-MM-yyyy_HH_mm' for use in the snapshot name
            $currentDateTime = Get-Date -Format 'dd-MM-yyyy_HH_mm'

            # Construct the snapshot name using the cleaned disk name and the date/time
            $snapshotNameWithDataDisk = "$computerName-$cleanedDiskName-$currentDateTime"

            # Define the snapshot configuration using the disk's managed disk ID
            $snapshotConfig = New-AzSnapshotConfig -SourceUri $disk.ManagedDisk.Id -Location $vm.Location -CreateOption Copy -AccountType Standard_LRS

            # Create the snapshot in the same resource group as the VM
            New-AzSnapshot -Snapshot $snapshotConfig -ResourceGroupName $resourceGroup -SnapshotName $snapshotNameWithDataDisk

            # Output message indicating that the snapshot was successfully created for the data disk
            Write-Host "Snapshot created for data disk: $snapshotNameWithDataDisk"
        }

        # Create a snapshot for the OS disk of the VM
        $osDisk = $vm.StorageProfile.OsDisk

        # Get the name of the OS disk
        $osDiskName = $osDisk.Name

        # Remove any random characters from the OS disk name after the first underscore (if present)
        $cleanedOsDiskName = ($osDiskName -split '_')[0..1] -join '_'

        # Get the current date and time in the format 'dd-MM-yyyy_HH_mm' for use in the snapshot name
        $currentDateTime = Get-Date -Format 'dd-MM-yyyy_HH_mm'

        # Construct the snapshot name using the cleaned OS disk name and the date/time
        $snapshotNameWithOSDisk = "$computerName-$cleanedOsDiskName-$currentDateTime"

        # Define the snapshot configuration using the OS disk's managed disk ID
        $snapshotConfig = New-AzSnapshotConfig -SourceUri $osDisk.ManagedDisk.Id -Location $vm.Location -CreateOption Copy -AccountType Standard_LRS

        # Create the snapshot in the same resource group as the VM
        New-AzSnapshot -Snapshot $snapshotConfig -ResourceGroupName $resourceGroup -SnapshotName $snapshotNameWithOSDisk

        # Output message indicating that the snapshot was successfully created for the OS disk
        Write-Host "Snapshot created for OS disk: $snapshotNameWithOSDisk"

        # Exit the loop since the VM has been found and processed
        break
    }
    else {
        # Output message indicating that the VM was not found in this subscription
        Write-Host "VM '$computerName' not found in subscription '$($subscription.Name)'"
    }
}

# Output a final message indicating that the snapshot process has completed
Write-Host "Snapshots process completed!"

How the Script Works

1. Azure Authentication
Connect to Azure: The script starts by authenticating the user to Azure with the Connect-AzAccount command. If the user is already logged in, this step is skipped.

2. Input the VM Name
Prompt for VM Name: After successful authentication, the script prompts you to enter the name of the virtual machine (VM) you want to create snapshots for.

3. Subscription Looping
Retrieve Subscriptions: The script retrieves all Azure subscriptions associated with the account using Get-AzSubscription.
Check Each Subscription: It iterates through each subscription, setting the context with Set-AzContext, and checks whether the specified VM exists there.

4. Snapshot Creation
Data Disk Snapshots: For each data disk attached to the VM, the script creates a snapshot. It follows a consistent naming convention that includes the VM name, disk name, and timestamp to ensure clarity and organization.
OS Disk Snapshot: After handling the data disks, the script creates a snapshot for the OS disk, using the same naming convention.

5. Completion
Confirmation Message: Once all snapshots (for both the OS and data disks) are created, the script outputs a message confirming the successful completion of the snapshot creation process.

Conclusion

This PowerShell script has greatly improved my workflow for managing Azure VMs. By automating the snapshot creation process, it eliminates the need to manually log into the Azure portal, locate the VM, and create snapshots for each disk individually. Instead, I can simply run the script, provide the VM name, and let it handle the entire process.

For anyone managing multiple Azure subscriptions and seeking a reliable method to automate snapshot creation, this script offers a quick and effective solution. It ensures that backups are created consistently and stored properly, enhancing overall backup management and efficiency.
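As a quick usage sketch (not part of the original article): save the script to a file, run it, and then confirm the new snapshots with Get-AzSnapshot. The file name, resource group, and VM name below are placeholders:

# Run the script saved above (the file name is just an illustrative choice).
.\New-VmSnapshots.ps1

# Afterwards, list today's snapshots in the VM's resource group.
# Replace "my-rg" and "myvm" with your own resource group and VM name.
Get-AzSnapshot -ResourceGroupName "my-rg" |
    Where-Object { $_.Name -like "myvm-*" -and $_.TimeCreated.Date -eq (Get-Date).Date } |
    Select-Object Name, TimeCreated, DiskSizeGB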
VivekChandran (Copper Contributor), Sep 11, 2024

Experience with Automanage Machine Configuration

Hi, I am experimenting with Automanage Machine Configuration and find the overall experience and documentation very poor. I am wondering if there are others who have experience with Automanage and can point me towards additional documentation. I have built an example implementation which I am using as my lab setup: https://github.com/JelleBroekhuijsen/azure-automanage-example

What I am experiencing is that the compliance-state reporting seems to be very inconsistent. Sometimes these sample configurations report everything as compliant, while maybe 30 minutes later the same configuration is marked as non-compliant (while the VM is observably in a compliant state). Additionally, there seems to be no option to get support or report issues with the GuestConfiguration extension (for Windows) or the GuestConfiguration PowerShell module. For instance, while experimenting with the module I found that calling the PackageManagement DSC resource from within a configuration leads to a conflict with the embedded PackageManagement module in the Windows extension. I found a workaround for this, but I have no way to report the issue.
Azure Update Management Centre (Preview) - Security Intelligence Update for Microsoft Defender AV

Afternoon all,

We are exploring Update Management Centre and scheduling updates for our fleet of IaaS servers. Monthly patching is fairly easy to understand and deploy, and our dev machines are patching successfully. However, what is the implementation pattern for updating Defender AV signatures? In a WSUS deployment, this would likely be a separate job that runs hourly, but UMC doesn't allow that level of frequency, nor does the maintenance window accept anything shorter than 1 hour.

Many thanks,
Paul
Paul Bendall (Iron Contributor), Aug 08, 2023

Azure function Start-Stop-V2 ignores ResourceGroups

Hello,

Microsoft pushes customers to use Start-Stop-V2 for scheduled starting and stopping of machines. This new functionality is based on Azure Functions, not on Automation accounts and runbooks. Unfortunately, the Azure Function provided by Microsoft (deployed from the marketplace) does not work properly: it ignores the ResourceGroups property and starts all VMs within a subscription instead of starting only the machines within the given resource groups. An exact description of the problem can be found here: https://github.com/microsoft/startstopv2-deployments/issues/90

Updating or redeploying the function via GitHub is also broken. Microsoft pushes customers to a broken function without support; the function is supposed to be supported through GitHub, but isn't. Did anybody resolve this issue (e.g., by updating the function, and if so, how)? Or has anybody found another solution, other than filtering machine names?
AzAdmin89 (Copper Contributor), Jun 15, 2023

Unable to load large delta table in azure ml studio

I am writing to report an issue that I am currently experiencing while trying to read a Delta table from Azure ML. I have already created data assets to register the Delta table, which is located in an ADLS location. However, when attempting to load the data, I have noticed that for large data sizes it takes an exceedingly long time to load. I have confirmed that for small data sizes the data is returned within a few seconds, which leads me to believe that there may be an issue with the scalability of the data loading process.

I would greatly appreciate it if you could investigate this issue and provide me with any recommendations or solutions. I can provide additional details such as the size of the data, the steps I am taking to load the data, and any error messages if required.

I'm following this document: https://learn.microsoft.com/en-us/python/api/mltable/mltable.mltable?view=azure-ml-py#mltable-mltable-from-delta-lake

I am using this command to read the Delta table using the data asset URI:

from mltable import from_delta_lake

mltable_ts = from_delta_lake(
    delta_table_uri=<DATA ASSET URI>,
    timestamp_as_of="2999-08-26T00:00:00Z",
    include_path_column=True,
)
MustafaAliMir (Copper Contributor), Jun 07, 2023

Manage serverless APIs with Apache APISIX

Serverless computing enables developers to build applications faster by eliminating the need to manage infrastructure. With serverless APIs in the cloud, the cloud service provider automatically provisions, scales, and manages the infrastructure required to run the code. This article shows a simple example of how to manage Java-based serverless APIs built with Azure Functions (https://azure.microsoft.com/en-in/products/functions/). It uses the azure-functions plugin (https://apisix.apache.org/docs/apisix/plugins/azure-functions/) to integrate Apache APISIX (https://apisix.apache.org/) with Azure Functions: APISIX invokes the HTTP trigger functions and returns the response from Azure (https://azure.microsoft.com/en-us). Apache APISIX also offers serverless plugins (https://apisix.apache.org/docs/apisix/plugins/serverless/) that can be used with other serverless solutions such as AWS Lambda (https://aws.amazon.com/lambda/).

Learning objectives

You will learn the following throughout the article:
- What serverless APIs are.
- The role of an API gateway in managing complex serverless API traffic.
- How to set up Apache APISIX Gateway.
- How to build serverless APIs with Azure Functions.
- How to expose serverless APIs as upstream services.
- How to secure serverless APIs with APISIX authentication plugins.
- How to apply rate-limiting policies.

Before we get started with the practical side of the tutorial, let's go through some concepts.

What are serverless APIs?

Serverless APIs (https://www.koombea.com/blog/serverless-apis/) are the same as traditional APIs, except that they use a serverless backend. For businesses and developers, serverless computing means they no longer have to worry about server maintenance or scaling server resources to meet user demand. Serverless APIs also avoid scaling issues because server resources are created every time a request is made, and they reduce latency because they are hosted on an origin server. Last but not least, serverless computing is far more cost-efficient than traditional alternatives such as building entire microservices (https://www.ideamotive.co/blog/serverless-vs-microservices-architecture).

Serverless APIs using Azure Functions

An Azure Function (https://learn.microsoft.com/en-us/azure/azure-functions/functions-overview) is a simple way of running small pieces of code in the cloud. You don't have to worry about the infrastructure required to host that code. You can write the function in C#, Java, JavaScript, PowerShell, Python, or any of the languages listed in the supported languages article (https://learn.microsoft.com/en-us/azure/azure-functions/supported-languages). With Azure Functions, you can rapidly build HTTP APIs for your web apps without the headache of web frameworks. Azure Functions is serverless, so you're only charged when an HTTP endpoint is called; when the endpoints aren't being used, you aren't charged. These two things combined make serverless platforms like Azure Functions an ideal choice for APIs that experience unexpected spikes in traffic.

API Gateway for serverless API traffic management

An API gateway (https://apisix.apache.org/docs/apisix/terminology/api-gateway/) is a fundamental part of a serverless API because it is responsible for the connection between a defined API and the function handling requests to that API. There are many benefits of an API gateway in a serverless API architecture.
In addition to the API gateway's primary edge functionalities, such as authentication, rate throttling, observability, and caching, it can invoke serverless APIs, subscribe to events and process them using callbacks, and forward authentication requests to external authorization services with completely custom serverless function logic.

Manage serverless APIs with Apache APISIX demo

With enough theoretical knowledge in mind, we can now jump into a practical session. We use an example project hosted on GitHub: https://github.com/Boburmirzo/apisix-manage-serverless-apis. There you can find the source code and the sample curl commands we use in this tutorial. For our mini-project, we'll work with two simple Azure Functions written in Java that simulate our serverless APIs for products (https://github.com/Boburmirzo/apisix-manage-serverless-apis/tree/main/upstream/src/main/java/com/function/products) and reviews (https://github.com/Boburmirzo/apisix-manage-serverless-apis/tree/main/upstream/src/main/java/com/function/reviews) services.

Prerequisites

- Familiarity with fundamental API concepts.
- Working knowledge of Azure Functions; for example, https://learn.microsoft.com/en-us/training/modules/build-api-azure-functions/1-introduction shows how to build an HTTP API using the Azure Functions extension for Visual Studio Code.
- Docker Desktop: https://www.docker.com/products/docker-desktop/
- An Azure account: https://azure.microsoft.com/en-us/free/
- Azure CLI: https://docs.microsoft.com/cli/azure/install-azure-cli
- Java JDK, at least version 8: https://aka.ms/azure-jdks
- Apache Maven: https://maven.apache.org/
- Azure Functions Core Tools: https://www.npmjs.com/package/azure-functions-core-tools
- Visual Studio Code: https://code.visualstudio.com/download
- https://dev.to/apisix/manage-serverless-apis-with-apache-apisix-55a8
- Azure Functions extension for VS Code: https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions

Set up the project

First, clone the project repo from GitHub:

git clone https://github.com/Boburmirzo/apisix-manage-serverless-apis.git

Open the project folder in your favorite code editor. This tutorial uses https://code.visualstudio.com/.

Run Apache APISIX

To run Apache APISIX locally, open a new terminal window and run docker compose from the root folder of the project:

docker compose up -d

The command above runs Apache APISIX and etcd together with Docker. If Docker Desktop is installed on your machine, you can see the running containers there. We installed APISIX in our local environment for this demo, but you can also deploy it to Azure and run it on Azure Container Instances (https://learn.microsoft.com/en-us/azure/container-instances/). See https://dev.to/apisix/run-apache-apisix-on-microsoft-azure-container-instance-1gdk.

Run Azure functions

Then, navigate to the /upstream folder and run:

mvn clean install
mvn azure-functions:run

The two functions will start in a terminal window, and you can request both serverless APIs in your browser.

Deploy Azure functions

Next, deploy the function code to an Azure Function App by running the command below:

mvn azure-functions:deploy

Or you can simply follow the tutorial at https://learn.microsoft.com/en-us/azure/azure-functions/create-first-function-cli-java?tabs=bash%2Cazure-cli%2Cbrowser#deploy-the-function-project-to-azure. Note that the function app name is randomly generated based on your artifactId, appended with a randomly generated number. The commands in this tutorial use the function app name serverless-apis.
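If you prefer the command line to the browser, a short PowerShell sketch such as the one below can serve as a smoke test of the deployed endpoints (serverless-apis is the example app name used in this tutorial; substitute your own):

# Smoke test of the deployed HTTP-trigger functions.
# Replace "serverless-apis" with your own function app name.
$baseUri = "https://serverless-apis.azurewebsites.net/api"

# Invoke-RestMethod parses the JSON responses into PowerShell objects.
Invoke-RestMethod -Method GET -Uri "$baseUri/products"
Invoke-RestMethod -Method GET -Uri "$baseUri/reviews"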
To make sure the functions work, we can also test an invocation by requesting their URLs directly in the browser:

https://serverless-apis.azurewebsites.net/api/products
https://serverless-apis.azurewebsites.net/api/reviews

Exposing serverless APIs in APISIX

Once the setup is complete, we will expose the serverless Azure Function APIs as upstream services in APISIX. To do so, we need to create a new route (https://apisix.apache.org/docs/apisix/terminology/route/) with the azure-functions plugin enabled for both the products and reviews serverless backend APIs. If the azure-functions plugin is enabled on a route, APISIX listens for requests on that route's path and invokes the remote Azure Function code with the parameters from that request.

Create a Route for Products

To create a route for the Products function, run the following command:

curl http://127.0.0.1:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "name": "Create a route with Azure function plugin",
  "plugins": {
    "azure-functions": {
      "function_uri": "https://serverless-apis.azurewebsites.net/api/products",
      "ssl_verify": false
    }
  },
  "uri": "/products"
}'

Note that we set the ssl_verify attribute of the azure-functions plugin to false to disable SSL verification for demo purposes only. You can enable it to make the requests from APISIX to Azure Functions more secure. See the other attributes at https://apisix.apache.org/docs/apisix/plugins/azure-functions/#attributes.

Test With a Curl Request

We can use curl to send a request and see whether APISIX listens on the path correctly and forwards the request to the upstream service successfully:

curl -i -XGET http://127.0.0.1:9080/products

HTTP/1.1 200 OK
[
  {
    "id": 1,
    "name": "Product1",
    "description": "Description1"
  },
  {
    "id": 2,
    "name": "Product2",
    "description": "Description2"
  }
]

Great! We got a response from the actual serverless API on Azure Functions. Next, we will make a similar configuration for the reviews function.

Create a Route for Reviews and test

Create the second route with the azure-functions plugin enabled:

curl http://127.0.0.1:9180/apisix/admin/routes/2 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "plugins": {
    "azure-functions": {
      "function_uri": "https://serverless-apis.azurewebsites.net/api/reviews",
      "ssl_verify": false
    }
  },
  "uri": "/reviews"
}'

Test the serverless API response:

curl -i -XGET http://127.0.0.1:9080/reviews

In this section, we introduced new routes and added the azure-functions plugin to our serverless APIs so that APISIX can invoke the remote Azure Functions and manage the traffic. In the following sections, we will learn how to authenticate API consumers and apply runtime policies such as rate limiting.

Secure serverless APIs with APISIX authentication plugins

Up to now, our serverless APIs are public and accessible by unauthorized users. In this section, we will enable authentication to disallow unauthorized requests to the serverless APIs. Apache APISIX can verify the identity associated with API requests through credential and token validation, and it can determine which traffic is authorized to pass through the API to the backend services. You can check the available authentication plugins, such as key-auth (https://apisix.apache.org/docs/apisix/plugins/key-auth/). Let's create a new consumer for our serverless APIs and add the basic-auth plugin (https://apisix.apache.org/docs/apisix/plugins/basic-auth/) to the existing routes so that only allowed users can access them.
Create a new consumer for serverless APIs

The command below creates our new consumer with its credentials (username and password):

curl http://127.0.0.1:9180/apisix/admin/consumers -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "username": "consumer1",
  "plugins": {
    "basic-auth": {
      "username": "username1",
      "password": "password1"
    }
  }
}'

Add the basic-auth plugin to the existing Products and Reviews routes

Now we configure the basic-auth plugin on the routes so that APISIX checks the request header for the API consumer credentials each time the APIs are called:

curl http://127.0.0.1:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "name": "Create a route with Azure function plugin",
  "plugins": {
    "azure-functions": {
      "function_uri": "https://serverless-apis.azurewebsites.net/api/products",
      "ssl_verify": false
    },
    "basic-auth": {}
  },
  "uri": "/products"
}'

curl http://127.0.0.1:9180/apisix/admin/routes/2 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "plugins": {
    "azure-functions": {
      "function_uri": "https://serverless-apis.azurewebsites.net/api/reviews",
      "ssl_verify": false
    },
    "basic-auth": {}
  },
  "uri": "/reviews"
}'

Test basic auth plugin

Now, if we request the serverless APIs without user credentials in the header, we get an unauthorized error:

curl -i http://127.0.0.1:9080/products

HTTP/1.1 401 Unauthorized
{"message":"Missing authorization in request"}

The result is as expected. If we provide the correct user credentials in the request and access the same endpoint, it works as intended:

curl -i -u username1:password1 http://127.0.0.1:9080/products

HTTP/1.1 200 OK

We have validated the identity of clients attempting to call the serverless APIs by using the basic authentication plugin in Apache APISIX.

Apply rate limiting policies for serverless APIs

In this section, we will protect the serverless APIs from abuse by applying a throttling policy. In Apache APISIX Gateway we can apply rate limiting to restrict the number of incoming calls.

Apply and test the rate-limit policy

With the existing route configurations for the Products and Reviews functions, we can apply a rate-limit policy with the limit-count plugin (https://apisix.apache.org/docs/apisix/plugins/limit-count/) to protect our API from abnormal usage. We will limit the number of API calls to 2 per 60 seconds per API consumer. To enable the limit-count plugin for the existing Products route, we add the plugin to the plugins attribute in our JSON route configuration:

curl http://127.0.0.1:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "name": "Create a route with Azure function plugin",
  "plugins": {
    "azure-functions": {
      "function_uri": "https://serverless-apis.azurewebsites.net/api/products",
      "ssl_verify": false
    },
    "basic-auth": {},
    "limit-count": {
      "count": 2,
      "time_window": 60,
      "rejected_code": 403,
      "rejected_msg": "Requests are too frequent, please try again later."
    }
  },
  "uri": "/products"
}'

Apache APISIX will handle the first two requests as usual.
However, a third request in the same period will return a 403 HTTP Forbidden code with our custom error message:

HTTP/1.1 403 Forbidden
{"error_msg":"Requests are too frequent, please try again later."}

Next steps

In this article, we learned step by step how to create Java-based serverless APIs with Azure Functions and how to manage them throughout their full lifecycle with Apache APISIX Gateway, from exposing the serverless APIs as upstream services in APISIX to securing them and applying rate limiting to cap the number of requests. This opens the door to other use cases for API gateway and serverless API integration. You can explore other capabilities of APISIX Gateway by chaining various plugins (https://apisix.apache.org/plugins/) to transform requests, add observability (https://dev.to/apisix/apis-observability-with-apache-apisix-plugins-1bnm) around the performance and usage of your serverless APIs, add caching (https://apisix.apache.org/blog/2022/12/14/web-caching-server/), and further evolve them through API versioning (https://blog.frankel.ch/evolve-apis/), which helps you reduce development time, increase scalability, and save costs. Apache APISIX is a fully open-source API gateway solution. If you need more advanced API management features for serverless APIs, you can look at API7 (https://api7.ai/apisix-vs-api7) or API7 Cloud (https://api7.ai/cloud), which are powered by APISIX.

Related resources
- https://azure.microsoft.com/en-in/products/functions/
- https://learn.microsoft.com/en-us/training/modules/build-api-azure-functions/
- https://learn.microsoft.com/en-us/azure/azure-functions/create-first-function-vs-code-java
- https://dev.to/apisix/run-apache-apisix-on-microsoft-azure-container-instance-1gdk

Recommended content
- https://iambobur.com/2022/11/22/how-to-choose-the-right-api-gateway/
- https://api7.ai/blog/why-is-apache-apisix-the-best-api-gateway
- https://api7.ai/blog/api-gateway-policies

Community
- Join: https://apisix.apache.org/docs/general/join/
- Twitter: https://twitter.com/ApacheAPISIX
- Slack: https://join.slack.com/t/the-asf/shared_invite/zt-vlfbf7ch-HkbNHiU_uDlcH_RvaHv9gQ
bumurzokov (Copper Contributor), Jan 18, 2023

Azure AKS Security Hardening

Hello folks! I am back with a new blog. This time I will give a brief overview of Azure AKS security and its security baseline. Let's go!

What is Azure AKS?

Azure Kubernetes Service (AKS) offers the quickest way to start developing and deploying cloud-native apps in Azure, in datacenters, or at the edge, with built-in code-to-cloud pipelines and guardrails. It is widely used as a scalable platform. Current application requirements include scaling, performance, and, most importantly, zero downtime, all of which the AKS service in Azure covers. Containerizing an application on AKS is a good way to reduce downtime and optimize the cost of your infrastructure.

AKS features and benefits

The primary benefits of AKS are flexibility, automation, and reduced management overhead for administrators and developers. For example, AKS automatically configures all of the Kubernetes nodes that control and manage the worker nodes during the deployment process and handles a range of other tasks, including Azure Active Directory (AD) integration, connections to monitoring services, and configuration of advanced networking features such as HTTP application routing. Users can monitor a cluster directly or view all clusters with Azure Monitor.

Now that we have a brief overview of Azure AKS, let's move on to the security features, or the security baseline, that Azure offers for AKS.

Security related to AKS

1) Networking

By default, a network security group and a route table are automatically created with the creation of a Microsoft Azure Kubernetes Service (AKS) cluster. AKS automatically modifies network security groups for appropriate traffic flow as services are created with load balancers, port mappings, or ingress routes. Use AKS network policies to limit network traffic by defining rules for ingress and egress traffic between Linux pods in a cluster, based on namespaces and label selectors. Network policies allow filtering of traffic not only entering AKS but also within your existing infrastructure. Since namespaces were mentioned: a namespace in AKS is a virtual environment separated within the Kubernetes cluster, and alert-based networking rules can also be configured for a particular namespace.

2) Using the traditional method (i.e., authentication from Azure AD and role creation) for AKS

Kubernetes includes security components such as pod and node security, while Azure includes components like Active Directory, Azure Policy, Azure Key Vault, and orchestrated cluster upgrades. AKS combines these security components to provide a complete authentication and authorization story and to leverage the built-in Azure Policy for AKS to secure your applications. Authenticate developers with passwords and keys stored in Azure Key Vault, and set up Azure policies such as conditional access policies for better security around Azure updates.

3) Using Azure Application Gateway and WAF

Use a Web Application Firewall (WAF) enabled Azure Application Gateway in front of an AKS cluster to provide an additional layer of security by filtering the incoming traffic to your web applications. The Web Application Firewall uses a set of rules to filter out traffic before it is injected into your cluster or nodes. The Application Gateway also acts as a proxy for all the traffic, and you can configure a route table to route the traffic once it enters the Application Gateway.
The Application Gateway also provides an external IP, which helps avoid exposing the IP on which your application or pods are running. Also consider using an API gateway for authentication, authorization, and monitoring of the APIs used in your AKS environment. It acts as a front door to the microservices and decreases their complexity by removing the burden of handling cross-cutting concerns.

4) Configure central security log management

Enable audit logs from the Azure Kubernetes Service (AKS) master components, kube-apiserver and kube-controller-manager, which are provided as a managed service. In the kube-audit log:
aksService: the display name in the audit log for control plane operations.
masterclient: the display name in the audit log for MasterClientCertificate, the certificate that you get from aks get-credentials.
nodeclient: the display name for the client certificate, which is used by agent nodes.
You can also export these logs to Log Analytics and use Log Analytics workspaces to query and perform analytics. Use Azure Blob Storage for storing and archiving the logs, with the various tier options in Azure.

5) Approving locations in Azure

Use Conditional Access named locations to allow access to Azure Kubernetes Service (AKS) clusters only from specific logical groupings of IP address ranges or countries/regions. This requires integrated authentication for AKS with Azure Active Directory (Azure AD). Limit access to the AKS API server to a restricted set of IP address ranges, as it receives the requests to perform actions in the cluster, such as creating resources or scaling the number of nodes. To learn how to configure named locations, see https://learn.microsoft.com/en-us/azure/active-directory/reports-monitoring/quickstart-configure-named-locations

6) Isolate the systems that store data

Logically isolate teams and workloads in the same cluster with Azure Kubernetes Service (AKS) to provide the least number of privileges, scoped to the resources required by each team. Use namespaces in Kubernetes to create a logical isolation boundary. You can also implement separate subscriptions or a separate working directory for AKS clusters containing pods with sensitive information or any type of database that is prone to attack.

7) Encrypt all sensitive information

It is always good to encrypt data exposed to the internet with HTTPS. You can create an HTTPS ingress and use your own TLS certificates for your Azure Kubernetes Service (AKS) deployments. Kubernetes egress traffic is encrypted over HTTPS/TLS by default. You can review any potentially unencrypted egress traffic from your AKS instances; this may include NTP traffic, DNS traffic, and, in some cases, HTTP traffic for retrieving updates.

These are some of the methods for hardening and maintaining the security of your AKS cluster. There are also many third-party applications that you can integrate with your AKS cluster, but I recommend using them wisely: go through their files and the changes they will make to your cluster.

Thanks!
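As one concrete illustration of point 4 above (central security log management), the kube-audit category can be routed to a Log Analytics workspace with a diagnostic setting. The following Az PowerShell sketch uses placeholder resource names and assumes an Az.Monitor version that still ships Set-AzDiagnosticSetting (newer versions use New-AzDiagnosticSetting instead):

# Placeholder names: replace with your own resource group, cluster, and workspace.
$aks = Get-AzAksCluster -ResourceGroupName "my-rg" -Name "my-aks"
$law = Get-AzOperationalInsightsWorkspace -ResourceGroupName "my-rg" -Name "my-law"

# Send the kube-audit and kube-apiserver log categories to Log Analytics.
Set-AzDiagnosticSetting -Name "aks-audit-to-law" `
    -ResourceId $aks.Id `
    -WorkspaceId $law.ResourceId `
    -Category "kube-audit", "kube-apiserver" `
    -Enabled $true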
Shashwat3105 (Brass Contributor), Oct 16, 2022

How to get data directory path from class DatasetConsumptionConfig?

I am trying to read my data files from an Azure ML dataset. My code is as follows:

from azureml.core import Dataset

dataset = Dataset.get_by_name(aml_workspace, "mydatasetname")
dataset_mount = dataset.as_named_input("mydatasetname").as_mount(path_on_compute="dataset")

The type of dataset_mount is the class DatasetConsumptionConfig. How do I get the actual directory path from that class? I can do it in a very complicated manner by passing dataset_mount into a script as follows:

PythonScriptStep(script_name="myscript.py", arguments=["--dataset_mount", dataset_mount], ...)

Then, when that script step is run, "myscript.py" mysteriously gets the real directory path of the data in the argument "dataset_mount", instead of it being a DatasetConsumptionConfig. However, that's an overcomplicated and strange approach. Is there any direct way to get the data path from DatasetConsumptionConfig? Or have I misunderstood something here?
jarmniku (Copper Contributor), Sep 21, 2022

Microsoft at SC22

Microsoft will be attending Supercomputing 2022 at booth 2433, from November 13-18, 2022. Will you be attending? What advancements are you looking forward to in the world of high-performance computing? https://techcommunity.microsoft.com/t5/azure-compute-blog/microsoft-at-sc22/ba-p/3613923#M157

EricStarker (Former Employee), Sep 07, 2022