Collaborative Function App Development Using Repo Branches
In this example, I demonstrate a Windows-based Function App using PowerShell, with deployment via Azure DevOps (ADO) and a Bicep template. Local development is done in VSCode.

Scenario: Your Function App project resides in a shared repository maintained by a team. Each developer works on a separate branch. Whenever a branch is updated, the Function App is deployed to a slot named after that branch. If the slot doesn't exist, it will be automatically created.

How to use it:

1. Create a Function App. You can create a Function App using any method of your choice.
2. Prepare a corresponding repo in Azure DevOps. Set up your repo structure for the Function App source code.
3. Create Function App code using the VSCode wizard. In this example, we use PowerShell and create an anonymous HTTP trigger. Then, we manually add three additional files: deploy.yml, deploy-master.bicep, and deploy.bicep, described below.

deploy.yml:

```yaml
trigger:
  branches:
    include:
      - '*'

pool:
  vmImage: 'ubuntu-latest'

variables:
  azureSubscription: '<YOUR_SERVICE_CONNECTION_NAME_FROM_ADO>'
  functionAppName: '<YOUR_FUNCTION_APP_NAME>'
  resourceGroup: '<YOUR_RG_NAME>'
  location: '<YOUR_LOCATION_NAME>'

steps:
  - checkout: self

  - task: AzureCLI@2
    name: DeploySlotInfra
    inputs:
      azureSubscription: $(azureSubscription)
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        BRANCH_NAME=$(Build.SourceBranchName)
        if [ "$BRANCH_NAME" = "master" ]; then
          echo "##[command]Deploying production infrastructure"
          az deployment group create \
            --resource-group $(resourceGroup) \
            --template-file deploy-master.bicep \
            --parameters functionAppName=$(functionAppName) location=$(location)
        else
          SLOT_NAME="$BRANCH_NAME"
          echo "##[command]Deploying slot: $SLOT_NAME"
          az deployment group create \
            --resource-group $(resourceGroup) \
            --template-file deploy.bicep \
            --parameters functionAppName=$(functionAppName) slotName=$SLOT_NAME location=$(location)
        fi

  - task: ArchiveFiles@2
    displayName: 'Package Function App as ZIP'
    inputs:
      rootFolderOrFile: '$(System.DefaultWorkingDirectory)/'
      includeRootFolder: false
      archiveType: zip
      archiveFile: '$(Build.ArtifactStagingDirectory)/functionapp.zip'
      replaceExistingArchive: true

  - task: AzureCLI@2
    name: ZipDeploy
    inputs:
      azureSubscription: $(azureSubscription)
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        BRANCH_NAME=$(Build.SourceBranchName)
        if [ "$BRANCH_NAME" = "master" ]; then
          echo "##[command]Deploying code to production"
          az functionapp deployment source config-zip \
            --name $(functionAppName) \
            --resource-group $(resourceGroup) \
            --src "$(Build.ArtifactStagingDirectory)/functionapp.zip"
        else
          SLOT_NAME="$BRANCH_NAME"
          echo "##[command]Deploying code to slot: $SLOT_NAME"
          az functionapp deployment source config-zip \
            --name $(functionAppName) \
            --resource-group $(resourceGroup) \
            --slot $SLOT_NAME \
            --src "$(Build.ArtifactStagingDirectory)/functionapp.zip"
        fi
```

Please replace all <YOUR_XXX> placeholders with values relevant to your environment. Additionally, update the two instances of "master" to match your repo's default branch name (e.g., main), as updates from this branch will always deploy to the production slot.
deploy-master.bicep:

```bicep
@description('Function App Name')
param functionAppName string

@description('Function App location')
param location string

resource functionApp 'Microsoft.Web/sites@2022-09-01' existing = {
  name: functionAppName
}

resource appSettings 'Microsoft.Web/sites/config@2022-09-01' = {
  name: 'appsettings'
  parent: functionApp
  properties: {
    FUNCTIONS_EXTENSION_VERSION: '~4'
  }
}
```

deploy.bicep:

```bicep
@description('Function App Name')
param functionAppName string

@description('Slot Name (e.g., dev, test, feature-xxx)')
param slotName string

@description('Function App location')
param location string

resource functionApp 'Microsoft.Web/sites@2022-09-01' existing = {
  name: functionAppName
}

resource functionSlot 'Microsoft.Web/sites/slots@2022-09-01' = {
  name: slotName
  parent: functionApp
  location: location
  properties: {
    serverFarmId: functionApp.properties.serverFarmId
  }
}

resource slotAppSettings 'Microsoft.Web/sites/slots/config@2022-09-01' = {
  name: 'appsettings'
  parent: functionSlot
  properties: {
    FUNCTIONS_EXTENSION_VERSION: '~4'
  }
}
```

4. Deploy from the master branch. Once deployed, the HTTP trigger becomes active in the production slot and can be accessed via: https://<FUNCTION_APP_NAME>.azurewebsites.net/api/<TRIGGER_NAME>

5. Switch to a custom branch like member1 and create a test HTTP trigger. After publishing, a new deployment slot named member1 will be created (if it doesn't already exist). You can open it in the Azure Portal and view its dedicated interface. The branch-specific HTTP trigger will now work at the following URL: https://<FUNCTION_APP_NAME>-<BRANCH_NAME>.azurewebsites.net/api/<TRIGGER_NAME>

Notice: Using deployment slots for collaborative development is subject to slot count and SKU limits. For example, the Premium SKU supports up to 20 slots. See Azure subscription and service limits, quotas, and constraints - Azure Resource Manager | Microsoft Learn for details. If you need to delete a slot after use, you can do so using PowerShell with the Remove-AzWebAppSlot command: Remove-AzWebAppSlot (Az.Websites) | Microsoft Learn
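For instance, a branch slot can be removed once its feature branch is merged. A minimal sketch, where the resource group, app, and slot names are placeholders for your own values:

```powershell
# Remove the slot that was created for a finished feature branch.
# The resource group, app, and slot names are placeholders.
Remove-AzWebAppSlot -ResourceGroupName '<YOUR_RG_NAME>' `
                    -Name '<YOUR_FUNCTION_APP_NAME>' `
                    -Slot 'member1'
```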
Bulk Start/Stop of Azure Virtual Desktop Session Hosts in a Host Pool via Single Trigger

Hi Community,

We manage an Azure Virtual Desktop (AVD) host pool with a large number of session hosts (around 100), and we're looking for a way to start or stop all session hosts in bulk using a single trigger, preferably via PowerShell or an API. Currently, we use a scheduled script that loops through each VM individually to start or stop it, but this approach doesn't scale well. We've noticed that the Azure Portal provides a one-click option to start or stop all session hosts in a host pool, and we're trying to replicate that behavior programmatically.

What we're looking for:
- A PowerShell command or script that can start/stop all session hosts in a host pool without iterating through each VM.
- If PowerShell doesn't support this directly, is there an ARM template, Azure CLI command, REST API, or any other method that can be triggered from PowerShell to perform this bulk action?
- Any official documentation, community guidance, or examples from someone who has achieved this would be greatly appreciated.

Goal: To simplify and optimize our automation by using a single command or API call to manage all session hosts in a host pool, rather than looping through each machine individually.

Thanks in advance for your help and suggestions!
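Editor's note: no single documented cmdlet for this is confirmed here, but one interim approach is to fan the per-VM calls out in parallel rather than looping serially. A minimal sketch, assuming PowerShell 7+ with the Az.DesktopVirtualization and Az.Compute modules; the resource group and host pool names are placeholders:

```powershell
# Sketch: start every session host in a pool in parallel (sign in with
# Connect-AzAccount first; context sharing across parallel runspaces may
# require Enable-AzContextAutosave).
$sessionHosts = Get-AzWvdSessionHost -ResourceGroupName '<RG_NAME>' -HostPoolName '<HOSTPOOL_NAME>'

$sessionHosts | ForEach-Object -Parallel {
    # Session host names look like 'hostpool/vmname.domain'; extract the VM name.
    $vmName = (($_.Name -split '/')[-1] -split '\.')[0]
    Start-AzVM -ResourceGroupName '<RG_NAME>' -Name $vmName -NoWait
} -ThrottleLimit 20
```

Swap Start-AzVM for Stop-AzVM (with -Force) to do the reverse; this is still N calls, but issued concurrently instead of one at a time.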
Azure CLI and Azure PowerShell Build 2025 Announcement

The key investment areas for Azure CLI and Azure PowerShell in 2025 are quality and security. We've also made meaningful efforts to improve the overall user experience. In parallel, we've enhanced the quality and performance of Azure CLI and Azure PowerShell responses in Copilot, ensuring a more reliable user experience. We encourage you to try out the improved Azure CLI and Azure PowerShell in the Copilot experience and see how it can help streamline your Azure workflows.

At Microsoft Build 2025, we're excited to announce several new capabilities aligned with these priorities:
- Improvements in quality and security
- Enhancements to user experience
- Ongoing improvements to Copilot's response quality and performance

Improvements in quality and security

Azure CLI and Azure PowerShell Long Term Support (LTS) releases

In November 2024, Azure PowerShell became the first to introduce both Standard Term Support (STS) and Long-Term Support (LTS) versions, providing users with more flexibility in managing their tools. At Microsoft Build 2025, we are excited to announce that Azure CLI now also supports both STS and LTS release models. This allows users to choose the version that best fits their project needs, whether they prefer the stability of LTS releases or want to stay up to date with the latest features in STS releases. Users can continue using an LTS version until the next LTS becomes available, or choose to upgrade more frequently with STS versions.

To learn more about the definitions and support timelines for Azure CLI and Azure PowerShell STS and LTS versions, please refer to the following documentation:
- Azure CLI lifecycle and support | Microsoft Learn
- Azure PowerShell support lifecycle | Microsoft Learn

Users can choose between the LTS and STS versions of Azure CLI based on their specific needs. It is important to understand the trade-offs: LTS versions provide a stable and predictable environment with a support cycle of up to 12 months, making them ideal for scenarios where stability and minimal maintenance are priorities. STS versions, on the other hand, offer access to the latest features and more frequent bug fixes; however, this comes with the potential need for more frequent script updates as changes are introduced with each release. It is also worth noting that platforms such as Azure DevOps and GitHub Actions typically default to using newer CLI versions. That said, users still have the option to pin to a specific version if greater consistency is required in their CI/CD pipelines. When using Azure CLI to deploy services like Azure Functions within CI/CD workflows, the actual CLI version in use depends on the version selected by the pipeline environment (e.g., GitHub Actions or Azure DevOps), and it is recommended to verify or explicitly set the version to align with your deployment requirements; a quick check you can drop into a pipeline step is sketched below.
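A minimal sketch of such a guard, where the expected version number is a placeholder for whatever your scripts were validated against:

```powershell
# Warn (or fail) early if the agent's Azure CLI version drifts from the
# version the scripts were validated against. '2.72.0' is a placeholder.
$expected  = '2.72.0'
$installed = (az version --output json | ConvertFrom-Json).'azure-cli'
if ($installed -ne $expected) {
    Write-Warning "Azure CLI $installed found; scripts were validated against $expected."
}
```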
SecureString update for Azure PowerShell

Our team is gradually transitioning to using SecureString for tokens, account keys, and secrets, replacing the traditional string types. In November 2024, we offered an opt-in method for the Get-AzAccessToken cmdlet. At the 2025 Build event, we've made this option mandatory, which is a breaking change:

```powershell
Get-AzAccessToken

Token     : System.Security.SecureString
ExpiresOn : 5/13/2025 1:09:15 AM +00:00
TenantId  : 00000000-0000-0000-0000-000000000000
UserId    : user@mail.com
Type      : Bearer
```

In 2026, we plan to implement this secure method in more commands, converting all keys, tokens, and similar data from string types to SecureString. Please continue to pay attention to our upcoming breaking changes documentation.
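If your scripts previously passed the token string straight into REST calls, they need a small adjustment for the SecureString return type. A minimal sketch, assuming PowerShell 7+, whose Invoke-RestMethod accepts a SecureString token directly:

```powershell
# The token now comes back as a SecureString; pass it to Invoke-RestMethod
# via -Authentication/-Token, so no plain-text conversion is needed.
$token = (Get-AzAccessToken -ResourceUrl 'https://management.azure.com').Token

Invoke-RestMethod -Uri 'https://management.azure.com/subscriptions?api-version=2022-12-01' `
                  -Authentication Bearer -Token $token
```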
Install Azure PowerShell from Microsoft Artifact Registry (MAR)

Installing Azure PowerShell from Microsoft Artifact Registry (MAR) brings several key advantages for enterprise users, particularly in terms of security, performance, and simplified artifact management.

- Stronger security and supply chain integrity: MAR enhances security by ensuring only Microsoft can publish official packages, eliminating risks like name squatting. It also improves software supply chain integrity by offering greater transparency and control over artifact provenance.
- Faster and more reliable delivery: by caching Az modules in your own ACR instances with MAR as an upstream source, customers benefit from faster downloads and higher reliability, especially within the Azure network.

You can try installing Azure PowerShell from MAR using the following PowerShell commands:

```powershell
$acrUrl = 'https://mcr.microsoft.com'
Register-PSResourceRepository -Name MAR -Uri $acrUrl -ApiVersion ContainerRegistry
Install-PSResource -Name Az -Repository MAR
```

For detailed installation instructions and prerequisites, refer to the official documentation: Optimize the installation of Azure PowerShell | Microsoft Learn

Enhancements to user experience

Azure PowerShell enhancements at Microsoft Build 2025

As part of the Microsoft Build 2025 announcements, Azure PowerShell has introduced several significant improvements to enhance usability, automation flexibility, and overall user experience.

- Real-time progress bar for long-running operations: cmdlets that perform long-running operations now display a real-time progress bar, offering users clear visual feedback during execution.
- Smarter output formatting based on result count: a detailed list view is shown when a single result is returned, helping users quickly understand the full details; a table view is presented when multiple results are returned, providing a concise summary that's easier to scan.
- JSON-based resource creation for improved automation: Azure PowerShell now supports creating resources using raw JSON input, making it easier to integrate with infrastructure-as-code (IaC) pipelines. When this feature is enabled (by default in Azure environments), applicable cmdlets accept JSON strings directly via *ViaJsonString and external JSON files via *ViaJsonFilePath. This capability streamlines scripting and automation, especially for users managing complex configurations.

We're always looking for feedback, so try the new features and let us know what you think.

Improved for custom and disconnected clouds: Azure CLI now reads extended ARM metadata

In disconnected environments like national clouds, air-gapped setups, or Azure Stack, customers often define their own cloud configurations, including custom data plane endpoints. However, older versions of Azure CLI and its extensions relied heavily on hardcoded endpoint values based only on the cloud name, limiting functionality in these isolated environments. To address this, Azure CLI now supports reading richer cloud metadata from Azure Resource Manager (ARM) using API version 2022-09-01. This metadata includes extended data plane endpoints, such as those for Arc-enabled services and private registries, which were previously unavailable in older API versions. When running az cloud register with the --endpoint-resource-manager flag, Azure CLI automatically parses and loads these custom endpoints into its runtime context. All extensions, like connectedk8s, k8s-configuration, and others, can now dynamically use accurate, environment-specific endpoints without needing hardcoded logic.

Key benefits:
- Improved support for custom clouds: enables more reliable automation and compatibility with Azure Local.
- Increased security and maintainability: removes the need for manually hardcoding endpoints.
- Unified extension behavior: ensures consistent behavior across the CLI and its extensions using centrally managed metadata.

Try it out:

```powershell
# Register the cloud
az cloud register -n myCloud --endpoint-resource-manager https://management.azure.com/

# Check the cloud
az cloud show -n myCloud
```

For the original implementation, please refer to https://github.com/Azure/azure-cli/pull/30682.

Azure PowerShell WAM authentication update

Since version 12.0.0, Azure PowerShell supports Web Account Manager (WAM) as the default authentication mechanism. Using WAM for authentication in Azure enhances security through its built-in identity broker and default system browser integration. It also delivers a faster and more seamless sign-in experience. All major blockers have been resolved, and we are actively working on the pending issues. For detailed announcements on specific issues, please refer to the WAM Issues and Workarounds issue. To ensure security on Windows operating systems, we encourage users to keep WAM enabled using the command:

```powershell
Update-AzConfig -EnableLoginByWam $true
```

If you encounter issues, please report them in Issues · Azure/azure-powershell.

Improve Copilot's response quality and performance

Azure CLI/PS enhancement with Copilot in Azure

In the first half of 2025, we improved the knowledge of Azure CLI and Azure PowerShell commands for Azure Copilot end-to-end scenarios, based on best practices, to answer questions related to commands and scripts. In the past six months, we have optimized the following scenarios:
- Introduced Azure concept documents to RAG to provide more accurate and comprehensive answers.
- Improved the accuracy and relevance of knowledge retrieval queries and chunking strategies.
- Added support for more accurate rejection of out-of-scope questions.

AI Shell brings AI to the command line, enabling natural conversations with language models and customizable workflows. AI Shell is in public preview and allows you to access Copilot in Azure. All the optimizations above apply to AI Shell. For more information about AI Shell releases, see: AI Shell. To learn more about Microsoft Copilot for Azure and how it can help you, visit: Microsoft Copilot for Azure

Breaking changes

You can find the latest breaking change guidance documents at the links below. To learn more about the breaking changes and ensure your environment is ready to install the newest version of Azure CLI and Azure PowerShell, see the release notes and migration guides.
- Azure CLI: Release notes & updates – Azure CLI | Microsoft Learn
- Azure PowerShell: Migration guide for Az 14.0.0 | Microsoft Learn

Milestone timelines:
- Azure CLI Milestones
- Azure PowerShell Milestones

Thank you for using the Azure command-line tools. We look forward to continuing to improve your experience. We hope you enjoy Microsoft Build and all the great work released this week. We'd love to hear your feedback, so feel free to reach out anytime.

GitHub:
- https://github.com/Azure/azure-cli
- https://github.com/Azure/azure-powershell

Let's stay in touch on X (Twitter): @azureposh, @AzureCli
Steps to Manually Add PowerShell Modules in Function App

When using Azure Function Apps on a Consumption plan, you may encounter dependency management issues due to the 500 MB temp storage limit, which can cause module installation failures. To avoid upgrading to a more expensive Premium plan, you can manually add PowerShell modules instead; a sketch of the approach is shown below.
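A minimal sketch of the manual approach, assuming a PowerShell Function App: modules placed in a Modules folder at the app root are loaded by the PowerShell worker, so the managed dependency download (and its temp-storage cost) can be skipped. The module name here is only an example:

```powershell
# 1. Save the module into a local staging folder (Az.Storage is just an example).
Save-Module -Name Az.Storage -Path .\Modules

# 2. In host.json, turn off managed dependencies so nothing is downloaded at runtime:
#      "managedDependency": { "enabled": false }
#    and remove the corresponding entry from requirements.psd1.

# 3. Deploy so the app root (wwwroot) contains:
#      host.json
#      requirements.psd1
#      Modules\Az.Storage\<version>\...
```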
An Update on Bicep Azure Verified Modules for Platform Landing Zone (ALZ)

But first, some history and context

As you may have heard in one of our Azure Landing Zone (ALZ) community calls over the past year, across ALZ we have been working hard to refactor both our Terraform and Bicep implementation options to be built upon Azure Verified Modules (AVM). Earlier this year we announced that the work for Terraform, which we started on first, was complete; you can read more about that in the announcement blog post we posted here. While this work was going on, the ALZ Bicep team were already busy planning how they would go about doing the same and rebuilding ALZ Bicep from AVM modules. You can see the original plans, and where we also asked for feedback, in the GitHub issue (#791).

Enough history, what's the latest?

Now to answer the question everyone has, and rightly so 😁 Well, it's good news! We have been busy getting a number of the AVM Bicep resource modules updated with missing bits and pieces that we need from an ALZ perspective. All fairly minor in most cases, but some required bigger updates than others, and some modules didn't exist at all, so we have had to propose, create, and publish those, which we are pretty much done with 👍

We are still working towards an end of Q4 (June/July) target for a preview release of all the modules, the accelerator, and guidance on how to use the new version of ALZ Bicep, which will be called "Bicep Azure Verified Modules for Platform Landing Zone (ALZ)"; this is to align with Terraform and also to provide a clear distinction between ALZ Bicep and the new AVM-based version. Please note that the timeline shared above is an ETA and may move.

Announcing the preview release of the `avm/ptn/alz/empty` AVM pattern module

Before we get to a more complete release of all the required resources and modules to build the entire ALZ architecture with the new Bicep Azure Verified Modules for Platform Landing Zone (ALZ), we wanted to share an early look at the module that will be at the heart of all of your ALZ deployments. That module is called `avm/ptn/alz/empty` and is available in the Public Bicep Registry for you to try out today (currently version `0.1.0`)!

Tip: Check out the "max" test in the tests directory for advanced usage examples!

```bicep
module testMg 'br/public:avm/ptn/alz/empty:0.1.0' = {
  params: {
    managementGroupName: 'test-mg'
    // Other parameters here...
  }
}
```

This module is 1 of 11 modules that will all be based on the same code. The module optionally creates all of the below:
- The Management Group itself (it can also target an existing Management Group)
- Management Group Subscription Associations
- RBAC Custom Role Definitions
- RBAC Role Assignments
- Policy Assignments
- Custom Policy Definitions
- Custom Policy Set Definitions (Initiatives)

There will also be one Bicep Azure Verified Modules for Platform Landing Zone (ALZ) pattern module for each of the ALZ architecture's Management Groups, plus this empty one for custom and advanced scenarios. A reminder of those Management Groups and the associated modules that will be created for each of them:
- `avm/ptn/alz/int-root`
- `avm/ptn/alz/platform`
- `avm/ptn/alz/platform-management`
- `avm/ptn/alz/platform-identity`
- `avm/ptn/alz/platform-connectivity`
- `avm/ptn/alz/landing-zones`
- `avm/ptn/alz/landing-zones-corp`
- `avm/ptn/alz/landing-zones-online`
- `avm/ptn/alz/decommissioned`
- `avm/ptn/alz/sandbox`

These Management Group aligned pattern modules will create the same resources as above, but will have the latest release of the ALZ Library baked into each of the modules.
Meaning that for the `avm/ptn/alz/int-root` pattern module, you won't have to declare all of the ALZ RBAC Custom Role Definitions, Custom Policy Definitions, Policy Assignments, etc. via the input parameters, as they'll be hardcoded in the module based on the latest release from the ALZ Library at the point the version of the module was released. This means that to build the ALZ Management Group hierarchy and make all of the default ALZ policy assignments, as documented here, you'd need a Bicep file that would look something like this as a starting point.

Important: None of the modules below exist today!

```bicep
module intRootMg 'br/public:avm/ptn/alz/int-root:0.1.0' = {
  params: {
    managementGroupName: 'int-root-mg'
  }
}

module platformMg 'br/public:avm/ptn/alz/platform:0.1.0' = {
  params: {
    managementGroupName: 'platform-mg'
    managementGroupParentId: intRootMg.outputs.managementGroupId
  }
}

module platformConnectivityMg 'br/public:avm/ptn/alz/platform-connectivity:0.1.0' = {
  params: {
    managementGroupName: 'platform-connectivity-mg'
    managementGroupParentId: platformMg.outputs.managementGroupId
  }
}
```

This will make getting the ALZ architecture out of the box really fast, and also really easy to upgrade to get the latest updates, by just bumping the version number as you desire when you are ready. Coupled with the `avm/ptn/alz/empty` module to add your own additional Policy Definitions, Assignments, etc. at the same Management Group scopes, this also helps you decouple the constant updates to the ALZ architecture and policies from your own additional requirements, helping you keep your code cleaner and our modules simple to maintain, as we won't have to cater for handling additional custom definitions and assignments alongside the defaults from ALZ that are baked into the modules.

Note: We are looking at suggesting that all of these are deployed via Deployment Stacks to help with lifecycle management of resources, e.g. to help clean up resources as well as deploy new ones; think policy assignments and definitions, etc. We need to complete a lot more testing on this, but would love your feedback on experiences if you have any using Deployment Stacks to manage these kinds of resources today (a sketch is included at the end of this post). Open an issue/discussion on the ALZ Bicep GitHub repo 👍

Our asks to you 🫵

Please go try out the new `avm/ptn/alz/empty` module and test it for all the scenarios you can think of relating to Management Groups, RBAC, Policies, etc. We want to make sure it's "match fit/ready" before we build the Management Group aligned modules and bake the ALZ defaults into them. So please go and put the module through its paces.

Tip: Check out the "max" test in the tests directory for advanced usage examples!

If you find any issues, bugs, or feature requests, or just have a question on how to use it, please raise them as GitHub issues here (make sure to select the `avm/ptn/alz/empty` module from the drop-down 👍).

Thanks in advance for all your efforts and assistance, and we look forward to hearing and getting your feedback on the module 👏
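For anyone who wants to experiment with the Deployment Stacks idea mentioned above, here is a minimal, non-authoritative sketch using the Azure CLI stack commands at management group scope; the module path, management group ID, and location are placeholders, and the deny/unmanage settings shown are just one possible choice:

```powershell
# Sketch: deploy an ALZ pattern module as a deployment stack at management
# group scope, so resources dropped from the template are cleaned up on redeploy.
az stack mg create `
  --name alz-int-root `
  --management-group-id '<MG_ID>' `
  --location '<LOCATION>' `
  --template-file ./int-root.bicep `
  --deny-settings-mode none `
  --action-on-unmanage deleteResources
```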
How to delete pipeline tags with special characters?

I want to delete specific tags attached to Azure Pipelines builds, for example "hello: world". I've come to the conclusion that the ADO REST API endpoint for handling tag deletions cannot parse special characters in the URL's slug, i.e. colons and whitespace. According to the docs here, the tag should be specified in the URL slug, followed by query string parameters if applicable. I tried the following:

1. If I insert the tag directly into the URL, it looks like this:
https://dev.azure.com/organisation/project/_apis/build/builds/1234567/tags/hello: world?api-version=7.1
This returns: "Response status code does not indicate success: 400 (Bad Request)."

2. If I encode the slug using `[System.Web.HttpUtility]::UrlEncode($tag)`, the URL looks like this:
https://dev.azure.com/organisation/project/_apis/build/builds/1234567/tags/hello%3a+world?api-version=7.1
This returns: "Response status code does not indicate success: 404 (Not Found)."

So it seems the encoding might have worked, although it appears to be searching for a tag without decoding the URL first? Does anyone know if there is a way to delete tags with special characters? I have over 1600 tags that need to be deleted, so doing this manually through the UI is not a viable option.

EDIT: I just realised the documentation has a small note saying: "This API will not work for tags with special characters. To remove tags with special characters, use the PATCH method instead (in 6.0+)." I tried the PATCH method instead of DELETE and it is still not working, and there are no examples provided in the docs.
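Editor's note: two things may be worth checking here, offered as unverified suggestions rather than a confirmed fix. First, `UrlEncode` targets query strings and encodes spaces as `+`, which is not decoded in a URL path segment; `[System.Uri]::EscapeDataString($tag)` produces `%20` instead. Second, the PATCH variant of the tags API expects the tags in the request body rather than the slug. A minimal sketch, with the PAT and build details as placeholders:

```powershell
# Remove a tag with special characters via the PATCH body, so the tag never
# appears in the URL slug. <YOUR_PAT> is a placeholder personal access token.
$pat = '<YOUR_PAT>'
$headers = @{ Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }
$body = @{ tagsToAdd = @(); tagsToRemove = @('hello: world') } | ConvertTo-Json

Invoke-RestMethod -Method Patch `
    -Uri 'https://dev.azure.com/organisation/project/_apis/build/builds/1234567/tags?api-version=7.1' `
    -Headers $headers -ContentType 'application/json' -Body $body
```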
Keep Your Azure Functions Up to Date: Identify Apps Running on Retired Versions

Running Azure Functions on retired language versions can lead to security risks, performance issues, and potential service disruptions. While the Azure Functions team notifies users about upcoming retirements through the portal, emails, and warnings, identifying affected Function Apps across multiple subscriptions can be challenging. To simplify this, we've provided Azure CLI scripts to help you:
✅ Identify all Function Apps using a specific runtime version
✅ Find apps running on unsupported or soon-to-be-retired versions
✅ Take proactive steps to upgrade and maintain a secure, supported environment

Read on for the full set of Azure CLI scripts and instructions on how to upgrade your apps today!

Why Upgrading Your Azure Functions Matters

Azure Functions supports six different programming languages, with new stack versions being introduced and older ones retired regularly. Staying on a supported language version is critical to ensure:
- Continued access to support and security updates
- Avoidance of performance degradation and unexpected failures
- Compliance with best practices for cloud reliability

Failure to upgrade can lead to security vulnerabilities, performance issues, and unsupported workloads that may eventually break. Azure's language support policy follows a structured deprecation timeline, which you can review here.

How Will You Know When a Version Is Nearing Its End of Life?

The Azure Functions team communicates retirements well in advance through multiple channels:
- Azure Portal notifications
- Emails to subscription owners
- Warnings in client tools and the Azure Portal UI when an app is running on a version that is either retired or about to be retired in the next 6 months
- The official Azure Functions Supported Languages document here

To help you track these changes, we recommend reviewing the language version support timelines in the Azure Functions Supported Languages document. However, identifying all affected apps across multiple subscriptions can be challenging. To simplify this process, I've built some Azure CLI scripts below that can help you list all impacted Function Apps in your environment.
Linux* Function Apps with their language stack versions:

```powershell
az functionapp list --query "[?siteConfig.linuxFxVersion!=null && siteConfig.linuxFxVersion!=''].{Name:name, ResourceGroup:resourceGroup, OS:'Linux', LinuxFxVersion:siteConfig.linuxFxVersion}" --output table
```

*Running on Elastic Premium and App Service plans

Linux* Function Apps on a specific language stack version (e.g., Node.js 18):

```powershell
az functionapp list --query "[?siteConfig.linuxFxVersion=='Node|18'].{Name:name, ResourceGroup:resourceGroup, OS: 'Linux', LinuxFxVersion:siteConfig.linuxFxVersion}" --output table
```

*Running on Elastic Premium and App Service plans

Windows Function Apps only:

```powershell
az functionapp list --query "[?!contains(kind, 'linux')].{Name:name, ResourceGroup:resourceGroup, OS:'Windows'}" --output table
```

Windows Function Apps with their language stack versions:

```powershell
az functionapp list --query "[?!contains(kind, 'linux')].{name: name, resourceGroup: resourceGroup}" -o json | ConvertFrom-Json | ForEach-Object {
    $appSettings = az functionapp config appsettings list -n $_.name -g $_.resourceGroup --query "[?name=='FUNCTIONS_WORKER_RUNTIME' || name=='WEBSITE_NODE_DEFAULT_VERSION']" -o json | ConvertFrom-Json
    $siteConfig = az functionapp config show -n $_.name -g $_.resourceGroup --query "{powerShellVersion: powerShellVersion, netFrameworkVersion: netFrameworkVersion, javaVersion: javaVersion}" -o json | ConvertFrom-Json
    $runtime = ($appSettings | Where-Object { $_.name -eq 'FUNCTIONS_WORKER_RUNTIME' }).value
    $version = switch ($runtime) {
        'node'       { ($appSettings | Where-Object { $_.name -eq 'WEBSITE_NODE_DEFAULT_VERSION' }).value }
        'powershell' { $siteConfig.powerShellVersion }
        'dotnet'     { $siteConfig.netFrameworkVersion }
        'java'       { $siteConfig.javaVersion }
        default      { 'Unknown' }
    }
    [PSCustomObject]@{
        Name          = $_.name
        ResourceGroup = $_.resourceGroup
        OS            = 'Windows'
        Runtime       = $runtime
        Version       = $version
    }
} | Format-Table -AutoSize
```

Windows Function Apps running on the Node.js runtime:

```powershell
az functionapp list --query "[?!contains(kind, 'linux')].{name: name, resourceGroup: resourceGroup}" -o json | ConvertFrom-Json | ForEach-Object {
    $appSettings = az functionapp config appsettings list -n $_.name -g $_.resourceGroup --query "[?name=='FUNCTIONS_WORKER_RUNTIME' || name=='WEBSITE_NODE_DEFAULT_VERSION']" -o json | ConvertFrom-Json
    $runtime = ($appSettings | Where-Object { $_.name -eq 'FUNCTIONS_WORKER_RUNTIME' }).value
    if ($runtime -eq 'node') {
        $version = ($appSettings | Where-Object { $_.name -eq 'WEBSITE_NODE_DEFAULT_VERSION' }).value
        [PSCustomObject]@{
            Name          = $_.name
            ResourceGroup = $_.resourceGroup
            OS            = 'Windows'
            Runtime       = $runtime
            Version       = $version
        }
    }
} | Format-Table -AutoSize
```

Windows Function Apps running on a specific language version (e.g., Node.js 18):

```powershell
az functionapp list --query "[?!contains(kind, 'linux')].{name: name, resourceGroup: resourceGroup}" -o json | ConvertFrom-Json | ForEach-Object {
    $appSettings = az functionapp config appsettings list -n $_.name -g $_.resourceGroup --query "[?name=='FUNCTIONS_WORKER_RUNTIME' || name=='WEBSITE_NODE_DEFAULT_VERSION']" -o json | ConvertFrom-Json
    $runtime = ($appSettings | Where-Object { $_.name -eq 'FUNCTIONS_WORKER_RUNTIME' }).value
    $nodeVersion = ($appSettings | Where-Object { $_.name -eq 'WEBSITE_NODE_DEFAULT_VERSION' }).value
    if ($runtime -eq 'node' -and $nodeVersion -eq '~18') {
        [PSCustomObject]@{
            Name          = $_.name
            ResourceGroup = $_.resourceGroup
            OS            = 'Windows'
            Runtime       = $runtime
            Version       = $nodeVersion
        }
    }
} | Format-Table -AutoSize
```
All Windows Function Apps running on unsupported language runtimes (as of March 2025):

```powershell
az functionapp list --query "[?!contains(kind, 'linux')].{name: name, resourceGroup: resourceGroup}" -o json | ConvertFrom-Json | ForEach-Object {
    $appSettings = az functionapp config appsettings list -n $_.name -g $_.resourceGroup --query "[?name=='FUNCTIONS_WORKER_RUNTIME' || name=='WEBSITE_NODE_DEFAULT_VERSION']" -o json | ConvertFrom-Json
    $siteConfig = az functionapp config show -n $_.name -g $_.resourceGroup --query "{powerShellVersion: powerShellVersion, netFrameworkVersion: netFrameworkVersion}" -o json | ConvertFrom-Json
    $runtime = ($appSettings | Where-Object { $_.name -eq 'FUNCTIONS_WORKER_RUNTIME' }).value
    $version = switch ($runtime) {
        'node' {
            $nodeVer = ($appSettings | Where-Object { $_.name -eq 'WEBSITE_NODE_DEFAULT_VERSION' }).value
            if ([string]::IsNullOrEmpty($nodeVer)) { 'Unknown' } else { $nodeVer }
        }
        'powershell' { $siteConfig.powerShellVersion }
        'dotnet'     { $siteConfig.netFrameworkVersion }
        default      { 'Unknown' }
    }
    # Check if the runtime version is unsupported
    $isUnsupported = switch ($runtime) {
        'node' {
            $ver = $version -replace '~', ''
            [double]$ver -le 16
        }
        'powershell' {
            $ver = $version -replace '~', ''
            [double]$ver -le 7.2
        }
        'dotnet' {
            $ver = $siteConfig.netFrameworkVersion
            $ver -notlike 'v7*' -and $ver -notlike 'v8*'
        }
        default { $false }
    }
    if ($isUnsupported) {
        [PSCustomObject]@{
            Name          = $_.name
            ResourceGroup = $_.resourceGroup
            OS            = 'Windows'
            Runtime       = $runtime
            Version       = $version
        }
    }
} | Format-Table -AutoSize
```

Take Action Now

By using these scripts, you can proactively identify and update Function Apps before they reach end-of-support status. Stay ahead of runtime retirements and ensure the reliability of your Function Apps. For step-by-step instructions to upgrade your Function Apps, check out the Azure Functions language version upgrade guide (a one-line example is sketched below). For more details on Azure Functions' language support lifecycle, visit the official documentation. Have any questions? Let us know in the comments below!
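As referenced above, a minimal sketch of such an upgrade for a Linux app moving to Node.js 20; the app and resource group names are placeholders, and you should verify the target stack string against the upgrade guide for your runtime (Windows apps update the corresponding app settings instead):

```powershell
# Point a Linux Function App's stack at Node.js 20.
az functionapp config set --name '<APP_NAME>' --resource-group '<RG_NAME>' --linux-fx-version 'Node|20'
```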
Superfast Installing Code Push Server in a Windows Web App

TOC
1. Introduction
2. Setup
3. Debugging
4. References

1. Introduction

CodePush Server is a self-hosted backend for Microsoft CodePush, allowing you to manage and deploy over-the-air updates for React Native and Cordova apps. It provides update versioning, deployment history, and authentication controls. It is typically designed to run in Linux-based Node environments. If you want to deploy it on an Azure Windows Web App, you can follow this tutorial to apply the necessary modifications.

2. Setup

1. Create a Windows Node.js Web App. In this example, we use Node.js 20 LTS.

2. After the Web App is created, go to the Overview tab and copy its FQDN. You'll need this in later steps.

3. Create a standard Storage Account.

4. Once created, go to Access keys and copy the Storage Account's name and key for later use.

5. Return to the Web App's Environment Variables and add the following configuration values:

| Variable Name | Variable Value |
| --- | --- |
| AZURE_STORAGE_ACCOUNT | The Storage Account name you copied in step 4 |
| AZURE_STORAGE_ACCESS_KEY | The Storage Account key you copied in step 4 |
| SERVER_URL | https:// + the Web App FQDN you copied in step 2, e.g. https://az-7135-app.azurewebsites.net |
| CORS_ORIGIN | https:// + the Web App FQDN you copied in step 2, e.g. https://az-7135-app.azurewebsites.net |
| LOGGING | false |

6. On your local machine, open a terminal and clone the CodePush Server source code. Then create your own project folder, for example az-7135-app:

```powershell
# Change to your working dir and your project name (e.g., az-7135-app)
git clone https://github.com/microsoft/code-push-server.git
mkdir az-7135-app
cp -R code-push-server/api/* az-7135-app
cp az-7135-app/.env.example az-7135-app/.env
```

7. Open the project folder in VSCode, create server.js and web.config, and modify the relevant files as described below.

.env: set AZURE_STORAGE_ACCOUNT, AZURE_STORAGE_ACCESS_KEY, and SERVER_URL to the values from steps 2 and 4 above. (Reason: from the official tutorial.)

web.config:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <add name="iisnode" path="server.js" verb="*" modules="iisnode" />
    </handlers>
    <rewrite>
      <rules>
        <rule name="NodeJsApp" stopProcessing="true">
          <match url=".*" />
          <action type="Rewrite" url="server.js" />
        </rule>
      </rules>
    </rewrite>
    <iisnode loggingEnabled="false" debuggingEnabled="true" devErrorsEnabled="true" />
  </system.webServer>
</configuration>
```

Reason: when using a Windows Web App, the HTTP server is IIS. As a reverse proxy, IIS needs to forward incoming requests to another web server running on the same machine, in this case node.exe. Due to several path limitations in web.config (such as not supporting nested directories for the entry point), the server.js file must reside in the same directory as web.config. Since the original project uses bin/script/server.js as the entry point, which cannot be directly referenced here, you need to create a new server.js file in the root directory as a wrapper to forward execution. The iisnode section in web.config is useful for debugging purposes; however, it requires the debugging settings described below to work correctly. Once debugging is complete, this section can be safely removed.
server.js:

```javascript
// Wrapper to launch the actual entry point
require("./bin/script/server.js");
```

Reason: same as above.

script/server.ts:

```typescript
// Before:
// const port: number = Number(process.env.API_PORT) || Number(process.env.PORT) || defaultPort;
// After:
const port: string | number = process.env.API_PORT || process.env.PORT || defaultPort;
```

Reason: all traffic in Windows Web Apps is routed through IIS to node.exe, so the actual listening "port" is not a traditional number but a system-generated internal named pipe (e.g., \\.\pipe\dff22378-aeb3-4ede-8d1e-7c1e1bdc0c46). The code therefore needs to accept the raw environment value instead of coercing it to a number.

package.json:

- "main": before `"./script/server.js"`, after `"server.js"`
- "start": before `"node ./bin/script/server.js"`, after `"node server.js"`
- "start:env": `"node -r dotenv/config server.js dotenv_config_path=.env dotenv_config_silent=true"` (pointing at the root-level server.js)
- "build": before `"tsc && shx cp -r ./script/views ./bin/script"`, after `"npm install typescript --save-dev && tsc && shx cp -r ./script/views ./bin/script"`

Reason: like web.config, we change most entry points to use a root-level server.js instead of a path with nested folders. Additionally, note that the Oryx build process differs between Windows and Linux Web Apps. On Windows, the build step only runs npm install and npm run start, not npm run build. Also, the underlying OS doesn't come with TypeScript pre-installed, which causes npm run build to fail unless adjusted. To resolve this, modify the build script to include the TypeScript installation. After deployment, you must also manually run npm run build once using the Kudu interface.

8. Use VSCode to publish the project. Then, in the Azure Portal's Web App Deployment Center, wait for the deployment to complete. This may take around 10 minutes, and this step alone doesn't mean the app is ready to run.

9. Open the Kudu interface. Here, you need to perform the task mentioned in step 7: manually run npm run build once. This generates the bin folder containing the compiled runtime code. The process takes about 5 minutes.

10. With the build complete, the deployment process is finished. You can now visit the Web App's homepage. The first load may take up to 30 seconds due to cold start; subsequent requests will be faster.

3. Debugging

If you need to debug, enable App Service Logs and ensure that your web.config from step 7 has the appropriate debug settings. Once enabled, go to the Kudu interface, navigate to the LogFiles/Application folder, and review the stdout and stderr logs generated by node.exe.

4. References

code-push-server/api at main · microsoft/code-push-server
Troubleshooting Common iisnode Issues
Using OpenAI on Azure Web App

TOC
- Introduction to OpenAI
- System Architecture
  - Architecture
  - Focus of This Tutorial
- Setup Azure Resources
  - File and Directory Structure
  - ARM Template
  - ARM Template From Azure Portal
- Running Locally
  - Training Models and Training Data
  - Predicting with the Model
- Publishing the Project to Azure
- Running on Azure Web App
  - Training the Model
  - Using the Model for Prediction
- Troubleshooting
  - Startup Command Issue
  - App Becomes Unresponsive After a Period
  - az cli command for Linux webjobs fail
  - Others
- Conclusion
- References

1. Introduction to OpenAI

OpenAI is a leading artificial intelligence research and deployment company founded in December 2015. Its mission is to ensure that artificial general intelligence (AGI), that is, highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity. OpenAI focuses on developing safe and scalable AI technologies and ensuring equitable access to these innovations. Known for its groundbreaking advancements in natural language processing, OpenAI has developed models like GPT (Generative Pre-trained Transformer), which powers applications for text generation, summarization, translation, and more. GPT models have revolutionized fields like conversational AI, creative writing, and programming assistance. OpenAI has also released models like Codex, designed to understand and generate computer code, and DALL·E, which creates images from textual descriptions.

OpenAI operates with a unique hybrid structure: a for-profit company governed by a nonprofit entity to balance the development of AI technology with ethical considerations. The organization emphasizes safety, research transparency, and alignment to human values. By providing access to its models through APIs and fostering partnerships, OpenAI empowers developers, businesses, and researchers to leverage AI for innovative solutions across diverse industries. Its long-term goal is to ensure AI advances benefit humanity as a whole.

2. System Architecture

Architecture

Development environment:
- OS: Ubuntu 18.04 Bionic Beaver
- Python version: 3.7.3

Azure resources:
- App Service Plan: SKU - Premium Plan 0 V3
- App Service: Platform - Linux (Python 3.9, Version 3.9.19)
- Storage Account: SKU - General Purpose V2
- File Share: No backup plan

Focus of This Tutorial

This tutorial walks you through the following stages:
1. Setting up Azure resources
2. Running the project locally
3. Publishing the project to Azure
4. Running the application on Azure
5. Troubleshooting common issues

Each of the mentioned aspects has numerous corresponding tools and solutions. The choices relevant to this session are:
- Local OS: Windows, Linux, or Mac (commands are provided for all of them below)
- How to set up Azure resources: Portal (i.e., REST API), ARM, Bicep, or Terraform (this tutorial uses an ARM template, deployed via the CLI or the Portal)
- How to deploy the project to Azure: VSCode, CLI, Azure DevOps, or GitHub Actions (this tutorial uses the CLI)

3. Setup Azure Resources

File and Directory Structure

Please open a bash terminal and enter the following commands:

```bash
git clone https://github.com/theringe/azure-appservice-ai.git
cd azure-appservice-ai
bash ./openai/tools/add-venv.sh
```

If you are using a Windows platform, use the following alternative PowerShell commands instead:

```powershell
git clone https://github.com/theringe/azure-appservice-ai.git
cd azure-appservice-ai
.\openai\tools\add-venv.cmd
```

After completing the execution, you should see the following directory structure:

| File and Path | Purpose |
| --- | --- |
| openai/tools/add-venv.* | The script executed in the previous step (cmd for Windows, sh for Linux/Mac) to create all Python virtual environments required for this tutorial. |
| .venv/openai-webjob/ | A virtual environment specifically used for training models (i.e., calculating embedding vectors). |
| openai/webjob/requirements.txt | The list of packages (with exact versions) required for the openai-webjob virtual environment. |
| .venv/openai/ | A virtual environment specifically used for the Flask application, enabling API endpoint access for querying predictions (i.e., suggestions). |
| openai/requirements.txt | The list of packages (with exact versions) required for the openai virtual environment. |
| openai/ | The main folder for this tutorial. |
| openai/tools/arm-template.json | The ARM template to set up all the Azure resources related to this tutorial, including an App Service Plan, a Web App, and a Storage Account. |
| openai/tools/create-folder.* | A script to create all directories required for this tutorial in the File Share, including train, model, and test. |
| openai/tools/download-sample-training-set.* | A script to download a sample training set from News-Headlines-Dataset-For-Sarcasm-Detection, containing headlines data from TheOnion and HuffPost, into the train directory of the File Share. |
| openai/webjob/cal_embeddings.py | A script for calculating embedding vectors from headlines. It loads the training set, applies the transformation via the OpenAI API, and saves the embedding vectors in the model directory of the File Share. |
| openai/App_Data/jobs/triggered/cal-embeddings/cal_embeddings.sh | A shell script for Azure App Service WebJobs. It activates the openai-webjob virtual environment and starts the cal_embeddings.py script. |
| openai/api/app.py | Code for the Flask application, including routes, port configuration, input parsing, vector loading, predictions, and output generation. |
| openai/start.sh | A script executed after deployment (as specified in the ARM template startup command, introduced later). It sets up the virtual environment and starts the Flask application to handle web requests. |

ARM Template

We need to create the following resources or services:

| Name | Manual Creation Required | Type |
| --- | --- | --- |
| App Service Plan | No | Resource (plan) |
| App Service | Yes | Resource (app) |
| Storage Account | Yes | Resource (storageAccount) |
| File Share | Yes | Service |

Let's take a look at the openai/tools/arm-template.json file. Refer to the configuration section for all the resources. Since most of the configuration values don't require changes, I've placed them in the variables section of the ARM template rather than the parameters section. This helps keep the configuration simpler. However, I'd still like to briefly explain some of the more critical settings. As you can see, I've adopted a camelCase naming convention, which combines the [Resource Type] with [Setting Name and Hierarchy]. This makes it easier to understand where each setting will be used. The configurations in the diagram are sorted by resource name, but the following list is categorized by functionality for better clarity.
Purpose 1: Link the File Share to the Web App
- storageAccountFileShareName = data-and-model: use this fixed name for the File Share.
- storageAccountFileShareShareQuota = 5120: the value is in GB.
- storageAccountFileShareEnabledProtocols = SMB
- appSiteConfigAzureStorageAccountsType = AzureFiles
- appSiteConfigAzureStorageAccountsProtocol = Smb

Purpose 2: Specify the platform and stack runtime
- planKind = linux: select Linux (the default if the Python stack is chosen).
- planSkuTier = Premium0V3: choose at least a Premium plan to ensure enough memory for your AI workloads.
- planSkuName = P0v3: same as above.
- appKind = app,linux: same as above.
- appSiteConfigLinuxFxVersion = PYTHON|3.9: select Python 3.9 to avoid dependency issues.

Purpose 3: Deploying
- appSiteConfigAppSettingsWEBSITES_CONTAINER_START_TIME_LIMIT = 600: the value is in seconds, ensuring the Startup Command can continue execution beyond the default timeout of 230 seconds. This tutorial's Startup Command typically takes around 300 seconds, so setting it to 600 seconds provides a safety margin and accommodates future project expansion (e.g., adding more packages).
- appSiteConfigAppCommandLine = [ -f /home/site/wwwroot/start.sh ] && bash /home/site/wwwroot/start.sh || GUNICORN_CMD_ARGS=\"--timeout 600 --access-logfile '-' --error-logfile '-' -c /opt/startup/gunicorn.conf.py --chdir=/opt/defaultsite\" gunicorn application:app
  This is the Startup Command, which can be broken down into three parts:
  1. [ -f /home/site/wwwroot/start.sh ]: checks whether start.sh exists. This determines whether the app is in its initial state (just created) or has already been deployed.
  2. bash /home/site/wwwroot/start.sh: if the file exists, the app has already been deployed. The start.sh script is executed, which installs the necessary packages and starts the Flask application.
  3. GUNICORN_CMD_ARGS=\"...\" gunicorn application:app: if the file does not exist, the command falls back to the default HTTP server (gunicorn) to start the web app. Since the command is enclosed in double quotes within the ARM template, during actual execution replace \" with ".
- appSiteConfigAppSettingsSCM_DO_BUILD_DURING_DEPLOYMENT = false: since start.sh already handles the different virtual environments, we do not need to initiate the Web App's default build process.

Purpose 4: WebJobs
- appSiteConfigAppSettingsWEBSITES_ENABLE_APP_SERVICE_STORAGE = true: required to enable the App Service storage feature, which is necessary for using WebJobs (e.g., for model training).

Purpose 5: Troubleshooting
- storageAccountPropertiesAllowSharedKeyAccess = true: enabled by default. The reason for highlighting it is that certain enterprise IT policies may enforce changes to this configuration after a period, potentially causing a series of issues. For more details, please refer to the Troubleshooting section below.

Return to the bash terminal and execute the following commands (their purpose has been described earlier).
```bash
# Please change <ResourceGroupName> to your preferred name, for example: azure-appservice-ai
# Please change <RegionName> to your preferred region, for example: eastus2
# Please change <ResourcesPrefixName> to your preferred naming pattern, for example: openai-arm
# (it will create openai-arm-asp as the App Service Plan, openai-arm-app as the Web App, and openaiarmsa as the Storage Account)
az group create --name <ResourceGroupName> --location <RegionName>
az deployment group create --resource-group <ResourceGroupName> --template-file ./openai/tools/arm-template.json --parameters resourcePrefix=<ResourcesPrefixName>
```

If you are using a Windows platform, use the following alternative PowerShell commands instead:

```powershell
# Please change <ResourceGroupName> to your preferred name, for example: azure-appservice-ai
# Please change <RegionName> to your preferred region, for example: eastus2
# Please change <ResourcesPrefixName> to your preferred naming pattern, for example: openai-arm
# (it will create openai-arm-asp as the App Service Plan, openai-arm-app as the Web App, and openaiarmsa as the Storage Account)
az group create --name <ResourceGroupName> --location <RegionName>
az deployment group create --resource-group <ResourceGroupName> --template-file .\openai\tools\arm-template.json --parameters resourcePrefix=<ResourcesPrefixName>
```

After execution, please copy the output section containing three key-value pairs from the result. Return to the bash terminal and execute the following commands:

```bash
# Please set up the 3 variables you got from the previous step
OUTPUT_STORAGE_NAME="<outputStorageName>"
OUTPUT_STORAGE_KEY="<outputStorageKey>"
OUTPUT_SHARE_NAME="<outputShareName>"

sudo mkdir -p /mnt/$OUTPUT_SHARE_NAME
if [ ! -d "/etc/smbcredentials" ]; then
  sudo mkdir /etc/smbcredentials
fi
CREDENTIALS_FILE="/etc/smbcredentials/$OUTPUT_STORAGE_NAME.cred"
if [ ! -f "$CREDENTIALS_FILE" ]; then
  sudo bash -c "echo \"username=$OUTPUT_STORAGE_NAME\" >> $CREDENTIALS_FILE"
  sudo bash -c "echo \"password=$OUTPUT_STORAGE_KEY\" >> $CREDENTIALS_FILE"
fi
sudo chmod 600 $CREDENTIALS_FILE
sudo bash -c "echo \"//$OUTPUT_STORAGE_NAME.file.core.windows.net/$OUTPUT_SHARE_NAME /mnt/$OUTPUT_SHARE_NAME cifs nofail,credentials=$CREDENTIALS_FILE,dir_mode=0777,file_mode=0777,serverino,nosharesock,actimeo=30\" >> /etc/fstab"
sudo mount -t cifs //$OUTPUT_STORAGE_NAME.file.core.windows.net/$OUTPUT_SHARE_NAME /mnt/$OUTPUT_SHARE_NAME -o credentials=$CREDENTIALS_FILE,dir_mode=0777,file_mode=0777,serverino,nosharesock,actimeo=30
```

Or you could simply go to the Azure Portal, navigate to the File Share you just created, and refer to the diagram below to copy the required command. You can choose Windows or Mac if you are using such an OS in your dev environment. After executing the command, the network drive will be successfully mounted. You can use df to verify, as illustrated in the diagram.

ARM Template From Azure Portal

In addition to using az cli to invoke ARM templates, if the JSON file is hosted on a public network URL, you can also load its configuration directly into the Azure Portal by following the method described in the article [Deploy to Azure button - Azure Resource Manager]. This is my example: Click Me. After filling in all the required information, click Create. Once the creation process is complete, click Outputs on the left menu to retrieve the connection information for the File Share.

4. Running Locally

Training Models and Training Data

In the next steps, you will need to use OpenAI services.
Please ensure that you have registered as a member and added credits to your account (Billing overview - OpenAI API). For this example, adding $10 USD will be sufficient. Additionally, you will need to generate a new API key (API keys - OpenAI API); you may also choose to create a project for future organization, depending on your needs (Projects - OpenAI API). After getting the API key, create a text file named apikey.txt in the openai/tools/ folder. Paste the key you just copied into the file and save it.

Return to the bash terminal and execute the following commands (their purpose has been described earlier):

```bash
source .venv/openai-webjob/bin/activate
bash ./openai/tools/create-folder.sh
bash ./openai/tools/download-sample-training-set.sh
python ./openai/webjob/cal_embeddings.py --sampling_ratio 0.002
```

If you are using a Windows platform, use the following alternative PowerShell commands instead:

```powershell
.\.venv\openai-webjob\Scripts\Activate.ps1
.\openai\tools\create-folder.cmd
.\openai\tools\download-sample-training-set.cmd
python .\openai\webjob\cal_embeddings.py --sampling_ratio 0.002
```

After execution, the File Share will include the following directories and files.

Let's take a brief detour to examine the structure of the training data downloaded from GitHub. The right side of the image explains each field of the data. This dataset was originally used to detect whether news headlines contain sarcasm; however, I am repurposing it for another application. In this example, I will use the "headline" field to create embeddings. The left side displays the raw data, where each line is a standalone JSON string containing the necessary fields. In the code, I first extract the "headline" field from each record and send it to OpenAI to compute the embedding vector for the text. This embedding represents the position of the text in a semantic space (akin to coordinates in a multi-dimensional space). After the computation, I obtain an embedding vector for each headline. Moving forward, I will refer to these simply as embeddings.

By the way, the sampling_ratio parameter in the command is something I configured to speed up the training process. The original dataset contains nearly 30,000 records, which would result in a training time of around 8 hours. To simplify the tutorial, you can specify a relatively low sampling_ratio value (ranging from 0 to 1, representing 0% to 100% sampling of the original records). For example, a value of 0.01 corresponds to a 1% sample, allowing you to accelerate the experiment.

In this semantic space, vectors that are closer to each other often have similar values, which corresponds to similar meanings. In this context, the distance between vectors will serve as our metric to evaluate the semantic similarity between pieces of text. For this, we will use a method called cosine similarity, sketched below. In the subsequent tutorial, we will construct some test texts. These test texts will also be converted into embeddings using the same method. Each test embedding will then be compared against the previously computed headline embeddings. The comparison will identify the nearest headline embeddings in the multi-dimensional vector space, and their original text will be returned. Additionally, we will leverage OpenAI's well-known generative AI capabilities to provide a textual explanation. This explanation will describe why the constructed test text is related to the recommended headline.
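For reference, cosine similarity scores two vectors by the angle between them rather than their magnitudes: dot(A, B) / (|A| * |B|). A minimal PowerShell sketch of the idea (the repository's actual implementation is in Python and may differ in detail):

```powershell
# Cosine similarity: dot(A, B) / (|A| * |B|), ranging from -1 to 1.
function Get-CosineSimilarity {
    param([double[]]$A, [double[]]$B)
    $dot = 0.0; $normA = 0.0; $normB = 0.0
    for ($i = 0; $i -lt $A.Length; $i++) {
        $dot   += $A[$i] * $B[$i]
        $normA += $A[$i] * $A[$i]
        $normB += $B[$i] * $B[$i]
    }
    return $dot / ([math]::Sqrt($normA) * [math]::Sqrt($normB))
}

# Vectors pointing in the same direction score 1; orthogonal vectors score 0.
Get-CosineSimilarity -A 1, 2, 3 -B 2, 4, 6   # => 1
```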
Predicting with the Model

Return to the terminal and execute the following commands. First, deactivate the virtual environment used for calculating the embeddings, then activate the virtual environment for the Flask application, and finally, start the Flask app.

Commands for Linux or Mac:

deactivate
source .venv/openai/bin/activate
python ./openai/api/app.py

Commands for Windows:

deactivate
.\.venv\openai\Scripts\Activate.ps1
python .\openai\api\app.py

When you see a screen similar to the following, the server has started successfully. Press Ctrl+C to stop the server if needed.

Before conducting the actual test, let's construct some sample query data:

education

Next, open a terminal and use the following curl command to send a request to the app:

curl -X GET http://127.0.0.1:8000/api/detect?text=education

You should see the calculation results, confirming that the embeddings and the generative AI are working as expected.

PS: Your results may differ from mine due to variations in the sampling of your training dataset compared to mine. Additionally, OpenAI's generative content can produce different outputs depending on the timing and context. Please keep this in mind.

5. Publishing the Project to Azure

Return to the terminal and execute the following commands.

Commands for Linux or Mac:

# Please change <resourcegroup_name> and <webapp_name> to your own
# Create the Zip file from the project
zip -r openai/app.zip openai/*
# Deploy the App
az webapp deploy --resource-group <resourcegroup_name> --name <webapp_name> --src-path openai/app.zip --type zip
# Delete the Zip file
rm openai/app.zip

Commands for Windows:

# Please change <resourcegroup_name> and <webapp_name> to your own
# Create the Zip file from the project
Compress-Archive -Path openai\* -DestinationPath openai\app.zip
# Deploy the App
az webapp deploy --resource-group <resourcegroup_name> --name <webapp_name> --src-path openai\app.zip --type zip
# Delete the Zip file
del openai\app.zip

PS: WebJobs follow the directory structure App_Data/jobs/triggered/<webjob_name>/. As a result, once the Web App is deployed, the WebJob is automatically deployed along with it, requiring no additional configuration.

6. Running on Azure Web App

Training the Model

Return to the terminal and execute the following commands to invoke the WebJob.

Commands for Linux or Mac:

# Please change <subscription_id>, <resourcegroup_name>, and <webapp_name> to your own
token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; curl -X POST -H "Authorization: Bearer $token" -H "Content-Type: application/json" -d '{}' "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/run?api-version=2024-04-01"

Commands for Windows:

# Please change <subscription_id>, <resourcegroup_name>, and <webapp_name> to your own
$token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/run?api-version=2024-04-01" -Headers @{Authorization = "Bearer $token"; "Content-type" = "application/json"} -Method POST -Body '{}'
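If you prefer scripting this trigger from Python instead of curl or Invoke-RestMethod, the following is a minimal sketch. It assumes the azure-identity and requests packages are installed and that you are already signed in (for example via az login), so DefaultAzureCredential can pick up the CLI credential; the placeholder values are yours to fill in.

# Minimal sketch: trigger the cal-embeddings WebJob through the ARM REST API.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription_id>"
resource_group = "<resourcegroup_name>"
webapp_name = "<webapp_name>"

# Acquire an ARM access token from the signed-in credential chain.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.Web/sites/{webapp_name}"
    "/triggeredwebjobs/cal-embeddings/run?api-version=2024-04-01"
)
resp = requests.post(url, headers={"Authorization": f"Bearer {token}",
                                   "Content-Type": "application/json"}, json={})
resp.raise_for_status()  # a 2xx response means the WebJob run was accepted
print(resp.status_code)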
You can check the training status by executing the following commands.

Commands for Linux or Mac:

# Please change <subscription_id>, <resourcegroup_name>, and <webapp_name> to your own
token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; response=$(curl -s -H "Authorization: Bearer $token" "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/webjobs?api-version=2024-04-01") ; echo "$response" | jq

Commands for Windows:

# Please change <subscription_id>, <resourcegroup_name>, and <webapp_name> to your own
$token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv); $response = Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/webjobs?api-version=2024-04-01" -Headers @{Authorization = "Bearer $token"} -Method GET ; $response | ConvertTo-Json -Depth 10

Processing Complete

You can retrieve the latest detailed log by executing the following commands.

Commands for Linux or Mac:

# Please change <subscription_id>, <resourcegroup_name>, and <webapp_name> to your own
token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; history_id=$(az webapp webjob triggered log --resource-group <resourcegroup_name> --name <webapp_name> --webjob-name cal-embeddings --query "[0].id" -o tsv | sed 's|.*/history/||') ; response=$(curl -X GET -H "Authorization: Bearer $token" -H "Content-Type: application/json" "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/history/$history_id/?api-version=2024-04-01") ; log_url=$(echo "$response" | jq -r '.properties.output_url') ; curl -X GET -H "Authorization: Bearer $token" "$log_url"

Commands for Windows:

# Please change <subscription_id>, <resourcegroup_name>, and <webapp_name> to your own
$token = az account get-access-token --resource https://management.azure.com --query accessToken -o tsv ; $history_id = az webapp webjob triggered log --resource-group <resourcegroup_name> --name <webapp_name> --webjob-name cal-embeddings --query "[0].id" -o tsv | ForEach-Object { ($_ -split "/history/")[-1] } ; $response = Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/history/$history_id/?api-version=2024-04-01" -Headers @{ Authorization = "Bearer $token" } -Method GET ; $log_url = $response.properties.output_url ; Invoke-RestMethod -Uri $log_url -Headers @{ Authorization = "Bearer $token" } -Method GET

Once you see the report in the logs, the embeddings calculation is complete and the Flask app is ready for predictions. You can also find the newly calculated embeddings in the File Share mounted in your local environment.

Using the Model for Prediction

Just like in local testing, open a bash terminal and use the following curl command to send a request to the app:

# Please change <webapp_name> to your own
curl -X GET https://<webapp_name>.azurewebsites.net/api/detect?text=education

As with the local environment, you should see the expected results.
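For context, here is a hedged, minimal sketch of what a detect endpoint like this could look like internally: embed the query, find the nearest precomputed headline embedding, and ask the chat model for an explanation. This is not the article's actual app.py; the embeddings file path and format, the field names, and both model names are assumptions for illustration only.

# Hypothetical sketch of a /api/detect endpoint.
# Assumed storage format: JSON lines of {"headline": ..., "embedding": [...]}.
import json

import numpy as np
from flask import Flask, jsonify, request
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()

# Load the embeddings computed by the WebJob (the path is an assumption).
with open("embeddings.jsonl", encoding="utf-8") as f:
    INDEX = [json.loads(line) for line in f if line.strip()]

def cosine_similarity(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

@app.route("/api/detect")
def detect():
    text = request.args.get("text", "")
    # Embed the incoming query with the same model used for training.
    query = client.embeddings.create(model="text-embedding-3-small",
                                     input=text).data[0].embedding
    best = max(INDEX, key=lambda r: cosine_similarity(query, r["embedding"]))
    # Ask the generative model to explain why the headline relates to the query.
    explanation = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Explain briefly why the headline "
                              f"'{best['headline']}' relates to '{text}'."}],
    ).choices[0].message.content
    return jsonify({"query": text, "headline": best["headline"],
                    "explanation": explanation})

if __name__ == "__main__":
    app.run(port=8000)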
7. Troubleshooting

Startup Command Issue

Symptom: Without any code changes, and although the app was previously functioning, updating the Startup Command causes the app to stop working. The related default_docker.log shows multiple attempts to run the container without errors in a short time, but docker.log shows the container does not respond on port 8000.

Cause: Since Linux Web Apps actually run in containers, the final command in the Startup Command must behave like the CMD instruction in a Dockerfile:

CMD ["/usr/sbin/sshd", "-D", "-o", "ListenAddress=0.0.0.0"]

This command must run in the foreground (i.e., not in daemon mode) and must not exit the process unless manually interrupted.

Resolution: Check the final command in the Startup Command to ensure it does not run in daemon mode. Alternatively, use the Web SSH interface to execute and verify the commands directly.

App Becomes Unresponsive After a Period

Symptom: An app that runs normally becomes unresponsive after some time. Both the front-end webpage and the Kudu page display an "Application Error," and the deployment log shows "Too many requests." Additionally, the local environment cannot connect to the associated File Share.

Cause: Clicking "diagnostic resources" on the initial error screen provides more detailed error information. In this example, the issue is caused by internal enterprise policies or automations (e.g., enterprise applications) that periodically or randomly scan storage account settings created by employees. If the settings are deemed non-compliant with security standards, they are automatically adjusted. For instance, the allowSharedKeyAccess parameter may be forcibly set to false, preventing both the Web App and the local development environment from connecting to the File Share under the Storage Account. The modification history for such settings can be checked via the Activity Log of the Storage Account (note that only the last 90 days of data are retained).

Resolution: The proper approach is to work offline with the enterprise IT team to coordinate and request the necessary permissions. As a temporary workaround, set the affected settings to Enabled during testing periods and revert them to Disabled afterward. You can find the allowSharedKeyAccess setting here.

Note: Azure Storage Mount currently does not support access via Managed Identity.

az cli commands for Linux WebJobs fail

Symptom: An "Operation returned an invalid status 'Unauthorized'" message appears on different platforms, even in Azure Cloud Shell with the latest az version.

Cause: After adding "--debug --verbose" to the command, I can see which REST API the actual error occurs on. For example, I am using this command (az webapp webjob triggered):

az webapp webjob triggered list --resource-group azure-appservice-ai --name openai-arm-app --debug --verbose

This shows that the operation is invoked through this API: /Microsoft.Web/sites/{app_name}/triggeredwebjobs (Web Apps - List Triggered Web Jobs). After testing that API directly from the official doc, I still get the same error, which means this preview feature is still under construction and cannot be used for now.

Resolution: I found a related API endpoint via the Azure Portal: /Microsoft.Web/sites/{app_name}/webjobs (Web Apps - List Web Jobs). After testing that API directly from the official doc, I can now get the trigger list. So I modified the original command:

az webapp webjob triggered list --resource-group azure-appservice-ai --name openai-arm-app

to the following command (please note the differences between the Linux/Mac and Windows versions).
Make sure to replace <subscription_id>, <resourcegroup_name>, and <webapp_name> with your specific values.

Commands for Linux or Mac:

token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; response=$(curl -s -H "Authorization: Bearer $token" "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/webjobs?api-version=2024-04-01") ; echo "$response" | jq

Commands for Windows:

$token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv); $response = Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/webjobs?api-version=2024-04-01" -Headers @{Authorization = "Bearer $token"} -Method GET ; $response | ConvertTo-Json -Depth 10

For the "run" command, the same problematic API is invoked, so I modified that operation as well.

Commands for Linux or Mac:

token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; curl -X POST -H "Authorization: Bearer $token" -H "Content-Type: application/json" -d '{}' "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/run?api-version=2024-04-01"

Commands for Windows:

$token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/run?api-version=2024-04-01" -Headers @{Authorization = "Bearer $token"; "Content-type" = "application/json"} -Method POST -Body '{}'

Others

Using Scikit-learn on Azure Web App

8. Conclusion

Beyond simple embedding vector calculations, OpenAI's most notable strength is generative AI. You can provide instructions to the GPT model through natural language (as a prompt), clearly specifying the format you need, and then easily parse the returned content. While PaaS products are not ideal for heavy vector calculations, they are well suited to acting as intermediaries that forward commands to generative AI. Such capabilities can be used for various applications, such as patent infringement detection, plagiarism detection in research papers, or trending news analysis. I believe that in the future, we will see more similar applications on Azure Web Apps.

9. References

Overview - OpenAI API
News-Headlines-Dataset-For-Sarcasm-Detection
Quickstart: Deploy a Python (Django, Flask, or FastAPI) web app to Azure - Azure App Service
Configure a custom startup file for Python apps on Azure App Service on Linux - Python on Azure
Mount Azure Storage as a local share - Azure App Service
Deploy to Azure button - Azure Resource Manager
Using Scikit-learn on Azure Web App
PowerShell script to delete all Containers from a Storage Account

After moving the BootDiag settings out of the custom Storage Account, the original Storage Accounts are still consuming space for nothing. Cleaning them up is part of the standard clean-up stream that should be considered in your FinOps plan. This script will help you clean these Storage Accounts quickly and avoid paying for unused storage.

Connect-AzAccount

# Your Subscription
$MySubscriptionToClean = "MyGuid-MyGuid-MyGuid-MyGuid-MyGuid"
$MyStorageAccountName = "MyStorageAccountForbootdiags"
$MyStorageAccountKey = "MySAKeyWithAllCodeProvidedByYourStorageAccountSetting+MZ3cUvdQ=="
$ContainerStartName = "bootdiag*"

# Set the subscription context
Set-AzContext -Subscription $MySubscriptionToClean
Get-AzContext

$ctx = New-AzStorageContext -StorageAccountName $MyStorageAccountName -StorageAccountKey $MyStorageAccountKey

# Get-AzStorageContainer returns at most -MaxCount containers per call,
# so loop until no matching containers remain.
do {
    $myContainers = Get-AzStorageContainer -Name $ContainerStartName -Context $ctx -MaxCount 1000
    foreach ($myContainer in $myContainers) {
        Remove-AzStorageContainer -Name $myContainer.Name -Force -Context $ctx
    }
} while ($myContainers.Count -gt 0)

I used this script to remove millions of BootDiag containers from several Storage Accounts. You can also use it for any other cleanup use case if you need it.

Fab