Gaining Confidence with Az CLI and Az PowerShell: Introducing What if & Export Bicep
Ever hesitated before hitting Enter on a command, wondering what changes it might make? You're not alone. Whether you're deploying resources or updating configurations, the fear of unintended consequences can slow you down. That's why we're introducing powerful new features in Azure CLI and Azure PowerShell that preview the changes a command may make: the What if and Export Bicep features. These capabilities let you preview the impact of your commands and export them as Bicep templates, all before making any changes to your Azure environment. Think of them as your safety net: you can validate actions, confirm resource changes, and even generate reusable infrastructure-as-code templates with confidence. Currently, these features are in private preview, and we're excited to share how you can get early access.

Why This Matters

- Reduce risk: Avoid accidental resource deletions or costly misconfigurations.
- Build confidence: Understand exactly what your command will do before execution.
- Accelerate adoption of IaC: Convert CLI commands into Bicep templates automatically.
- Improve productivity: Validate scripts quickly without trial-and-error deployments.

How It Works

What if preview of commands

All you have to do is add the `--what-if` parameter to Azure CLI commands, or the `-DryRun` parameter to Azure PowerShell commands, as shown below.

Azure CLI:

```
az storage account create --name "mystorageaccount" --resource-group "myResourceGroup" --location "eastus" --what-if
```

Azure PowerShell:

```
New-AzVirtualNetwork -Name MyVNET -ResourceGroupName MyResourceGroup -Location eastus -AddressPrefix "10.0.0.0/16" -DryRun
```

Exporting commands to Bicep

To generate Bicep from a command, add the `--export-bicep` parameter alongside the `--what-if` parameter. The Bicep code is saved under the `~/.azure/whatif` directory on your machine, and the command output will state exactly where the file was written. Behind the scenes, AI translates your CLI command into Bicep code, creating a reusable template for future deployments. After generating the Bicep file, the CLI automatically runs a What-If analysis on the Bicep template to show you the expected changes before applying them. Here is a video of it in action! Here is another example where delete, modify, and create actions all happen together.

Private Preview Access

These features are available in private preview. To sign up:

1. Visit aka.ms/PreviewSignupPSCLI
2. Submit your request for access.
3. Once approved, you'll receive instructions to download the preview package.

Supported Commands (Private Preview)

Because these features are in preview, we have only added support for a small set of commands for the time being. Here is the list of commands that will support these features during the private preview:

Azure CLI

- az vm create
- az vm update
- az storage account create
- az storage container create
- az storage share create
- az network vnet create
- az network vnet update
- az storage account network-rule add
- az vm disk attach
- az vm disk detach
- az vm nic remove

Azure PowerShell

- New-AzVM
- Update-AzVM
- New-AzStorageAccount
- New-AzRmStorageShare
- New-AzRmStorageContainer
- New-AzVirtualNetwork
- Set-AzVirtualNetwork
- Add-AzStorageAccountNetworkRule

Next Steps

- Sign up for the private preview.
- Install the packages using the upcoming script.
- Start using --what-if, -DryRun, and --export-bicep to make safer, smarter decisions and accelerate your IaC journey; a combined sketch follows below.
- Give us feedback on what you think of the feature!
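Putting the pieces together, here is a minimal sketch of the preview workflow, assuming your account has been approved for the private preview and the preview package is installed; the flags and the `~/.azure/whatif` output directory are as described above:

```
# Preview the change without creating anything (private preview flag)
az storage account create --name "mystorageaccount" --resource-group "myResourceGroup" --location "eastus" --what-if

# Preview and also export the equivalent Bicep template; the command output
# reports the exact file path under ~/.azure/whatif
az storage account create --name "mystorageaccount" --resource-group "myResourceGroup" --location "eastus" --what-if --export-bicep

# Inspect the generated template(s)
Get-ChildItem "$HOME/.azure/whatif"
```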
Feedback can be submitted at https://aka.ms/PreviewFeedbackWhatIf. Thanks so much!

Steven Bucher, PM for Azure Client Tools

Azure CLI and Azure PowerShell Ignite 2025 Announcement
In 2025, the key investment areas for Azure CLI and Azure PowerShell are quality and security. We have also made significant efforts to improve the overall user experience. Meanwhile, AI remains a central theme. At Microsoft Ignite 2025, we are pleased to announce several new features related to these priorities:

- Security: MFA enforcement
- Azure CLI upgrade and Python 3.13 compatibility explanation
- New feature: Azure CLI and Azure PowerShell What-If and export-Bicep parameters
- Extended coverage

Extending our coverage

We've rolled out significant updates across Azure CLI and Azure PowerShell to enhance functionality:

Azure CLI and Azure PowerShell Upgrades. Services updated: ACR, ACS, AKS, App Config, App Service, ARM, ARO, Backup, Batch, Cloud, Compute, Consumption, Container, Container App, Core, Cosmos DB, Cognitive Services, DMS, Event Hub, HDInsight, Identity, IoT, Key Vault, MySQL, NetAppFiles, Network, Packaging, Profile, RDBMS, Service Fabric, SQL, Storage.

New Extensions for Azure CLI and Azure PowerShell. Extensions added: arize-ai, connectedmachine, containerapp, lambda-test, migrate, neon, pscloud, sftp, site, storage-blob-preview.

New GA Modules for Azure CLI and Azure PowerShell. Modules now generally available: DeviceRegistry, DataMigration, FirmwareAnalysis, LoadTesting, StorageDiscovery, DataTransfer, ArizeAI, Fabric, StorageAction, Oracle.

For detailed release notes:

- Azure CLI: https://learn.microsoft.com/cli/azure/release-notes-azure-cli
- Azure PowerShell: https://learn.microsoft.com/powershell/azure/release-notes-azureps

Azure CLI Upgrade and Python 3.13 Compatibility Notes

Azure CLI has been upgraded from version 2.76 to 2.77 primarily to address several security vulnerabilities (CVEs), including remote code execution risks and certificate validation flaws in underlying dependencies, ensuring compliance with the latest security standards. This upgrade requires Python to move from 3.12 to 3.13, which introduces a significant change: Python 3.13 enforces stricter SSL verification rules, causing failures for users running behind proxies that intercept HTTPS traffic. Solution: update your proxy certificate to comply with strict mode. For instance, Mitmproxy fixed this in version v10.1.2 (reference: https://github.com/Azure/azure-cli/issues/32083#issuecomment-3274196488). For more Python 3.13 details, see What's New In Python 3.13.

Handling Claims Challenges for MFA in Azure CLI and Azure PowerShell

Claims challenges appear when ARM begins enforcing MFA requirements. If a user performs create, update, or delete operations without the necessary MFA claims, ARM rejects the request and returns a claims challenge, indicating that higher-level authentication is required before the API call can proceed. This mechanism is designed to ensure sensitive operations are performed only by users who have completed MFA. The challenge arises because Azure CLI and Azure PowerShell can only acquire MFA claims during the login phase, and only if the user's account is configured to require MFA. Changing this setting affects all services associated with the account, and many customers are reluctant to enable MFA at the account level. As a result, when a claims challenge occurs, Azure CLI and Azure PowerShell cannot automatically trigger MFA the way the Azure Portal does; instead, the claims challenge returned by ARM must be passed back explicitly at sign-in, as shown below.
Azure CLI example:

```
az login --tenant "aaaabbbb-0000-cccc-1111-dddd2222eeee" --scope "https://management.core.windows.net//.default" --claims-challenge "<claims-challenge-token>"
```

For more details, see Troubleshooting Azure CLI | Microsoft Learn.

Azure PowerShell example:

```
Connect-AzAccount -Tenant yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyy -Subscription zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzz -ClaimsChallenge <claims-challenge-token>
```

For more details, see Troubleshooting the Az PowerShell module | Microsoft Learn.

Improved endpoint discovery for Azure cloud environments in Azure CLI

With this update, Azure CLI now uses the latest ARM API version (2022-09-01) for endpoint discovery during cloud registration and updates, replacing the older API versions previously used. This ensures more accurate and up-to-date service endpoints, simplifies the configuration of custom Azure clouds, and improves reliability when retrieving required endpoints. By adopting the new API, Azure CLI stays aligned with the latest Azure platform capabilities, improving both compatibility and forward-compatibility. As a result, users benefit from more accurate endpoint discovery and better support for new Azure features and service endpoints as they become available. For more details about managing cloud environments in Azure CLI, please refer to the official documentation: Azure cloud management with the Azure CLI | Microsoft Learn.

Azure PowerShell: pagination support for Invoke-AzRestMethod via a -Paginate parameter

Invoke-AzRestMethod is a flexible fallback for calling Azure Management APIs, returning raw HTTP responses from the underlying endpoints, but it currently lacks built-in pagination, forcing users to implement custom logic when working with large datasets. Since pagination was not part of the original design, changing the default behavior could break existing scripts that depend on the current response format and nextLink handling. To address this without disruption, we plan to introduce pagination as an optional opt-in feature, enabling users to retrieve complete datasets through server-driven pagination without writing custom code, while preserving the current behavior by default for full backward compatibility. For more details, see the official documentation: Invoke-AzRestMethod (Az.Accounts) | Microsoft Learn.
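For context, here is a minimal sketch of the manual nextLink loop that scripts use today, and that the opt-in -Paginate parameter described above is designed to replace. The request path is illustrative; substitute your subscription ID and the ARM endpoint you need:

```
# First page: call the endpoint by relative path
$response = Invoke-AzRestMethod -Path "/subscriptions/<sub-id>/resources?api-version=2021-04-01" -Method GET
$page = $response.Content | ConvertFrom-Json
$results = @($page.value)

# ARM returns an absolute nextLink URL while more pages remain
while ($page.nextLink) {
    $response = Invoke-AzRestMethod -Uri $page.nextLink -Method GET
    $page = $response.Content | ConvertFrom-Json
    $results += $page.value
}

$results.Count
```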
Introducing the Azure CLI and Azure PowerShell What-If and export-Bicep parameters

We're introducing two new features in both Azure CLI and Azure PowerShell: the What-If and Export Bicep parameters. The What-If parameter gives you an intelligent preview of which resources will be created, updated, or deleted before a command runs, helping you catch issues early and avoid unexpected changes. The Export Bicep parameter generates the corresponding Bicep templates to streamline your infrastructure-as-code workflows. Both features leverage AI to assist with command interpretation and template generation. If you'd like to try these capabilities in Azure CLI and Azure PowerShell, you can sign up through our form. Please stay tuned for more updates.

Breaking Changes

The latest breaking change guidance documents can be found at the links below. To learn more about the breaking changes and migration guide, ensure your environment is ready to install the newest version of Azure CLI and Azure PowerShell.

- Azure CLI: Release notes & updates – Azure CLI | Microsoft Learn
- Azure PowerShell: Migration guide for Az 15.0.0 | Microsoft Learn

Milestone timelines:

- Azure CLI Milestones
- Azure PowerShell Milestones

Thank you for using the Azure command-line tools. We look forward to continuing to improve your experience. We hope you enjoy Ignite and all the great work released this week. We'd love to hear your feedback, so feel free to reach out anytime.

GitHub:

- https://github.com/Azure/azure-cli
- https://github.com/Azure/azure-powershell

Let's be in touch on X (Twitter): @azureposh @AzureCli
Agentic Power for AKS: Introducing the Agentic CLI in Public Preview

We are excited to announce the agentic CLI for AKS, available now in public preview directly through the Azure CLI. A huge thank you to all our private preview customers who took the time to try out our beta releases and provide feedback to our team. The agentic CLI is now available for everyone to try; continue reading to learn how you can get started.

Why we built the agentic CLI for AKS

The way we build software is changing with the democratization of coding agents. We believe the same should happen for how users manage their Kubernetes environments. With this feature, we want to simplify the management and troubleshooting of AKS clusters, while reducing the barrier to entry for startups and developers by bridging the knowledge gap. The agentic CLI for AKS brings agentic capabilities to your cluster operations and observability, translating natural language into actionable guidance and analysis. Whether you need to right-size your infrastructure, troubleshoot complex networking issues like DNS or outbound connectivity, or ensure smooth Kubernetes upgrades, the agentic CLI helps you make informed decisions quickly and confidently. Our goal: streamline cluster operations and empower teams to ask questions like "Why is my pod restarting?" or "How can I optimize my cluster for cost?" and get instant, actionable answers. The agentic CLI for AKS is built on the open-source HolmesGPT project, which has recently been accepted as a CNCF Sandbox project. With a pluggable LLM endpoint structure and open-source backing, the agentic CLI is purpose-built for customizability and data privacy.

From private to public preview: what's new?

Earlier this year, we launched the agentic CLI in private beta for a small group of AKS customers. Their feedback has shaped what's new in our public preview release, which we are excited to share with the broader AKS community. Let's dig in:

- Simplified setup: One-time initialization of LLM parameters with `az aks agent-init`. Configure your LLM parameters, such as API key and model, through a simple, guided user interface.
- AKS MCP integration: Enable the agent to install and run the AKS MCP server locally (directly in your CLI client) for advanced context-aware operations. The AKS MCP server includes tools for AKS clusters and associated Azure resources. Try it out: az aks agent "list all my unhealthy nodepools" --aks-mcp -n <cluster-name> -g <resource-group>
- Deeper investigations: A new "Task List" feature helps the agent plan and execute complex investigations, with a checklist-style tracker that keeps you updated on the agent's progress and planned tool calls.
- In-line feedback: Share insights about the agent's performance directly from the CLI using /feedback. Provide a rating of the agent's analysis and optional written feedback directly to the agentic CLI team. Your feedback is highly appreciated and will help us improve the agentic CLI's capabilities.
- Performance and security improvements: Minor improvements for faster load times and reduced latency, as well as hardened initialization and token handling.

Getting Started

1. Install the extension: az extension add --name aks-agent
2. Set up your LLM endpoint: az aks agent-init
3. Start asking questions. Some recommended scenarios to try out:
   - Troubleshoot cluster health: az aks agent "Give me an overview of my cluster's health"
   - Right-size your cluster: az aks agent "How can I optimize my node pool for cost?"
   - Try out the AKS MCP integration: az aks agent "Show me CPU and memory usage trends" --aks-mcp -n <cluster-name> -g <resource-group>
   - Get upgrade guidance: az aks agent "What should I check before upgrading my AKS cluster?"
4. Update the agentic CLI extension: az extension update --name aks-agent

Join the Conversation

We'd love your feedback! Use the built-in /feedback command or visit our GitHub repository to share ideas and issues. A consolidated quick-start sketch follows below.

- Learn more: https://aka.ms/aks/agentic-cli
- Share feedback: https://aka.ms/aks/agentic-cli/issues
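For convenience, here are the getting-started commands above collected into a single session. This is a sketch using only the commands documented in this post; the cluster and resource group names are placeholders:

```
# Install the agentic CLI extension (public preview)
az extension add --name aks-agent

# One-time guided setup of LLM parameters (API key, model, endpoint)
az aks agent-init

# Ask a question; --aks-mcp runs the AKS MCP server locally for richer context
az aks agent "Give me an overview of my cluster's health" --aks-mcp -n myCluster -g myResourceGroup

# Keep the extension up to date
az extension update --name aks-agent
```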
Using Scikit-learn on Azure Web App

TOC

1. Introduction to Scikit-learn
2. System Architecture (Architecture, Focus of This Tutorial)
3. Setup Azure Resources (Web App, Storage)
4. Running Locally (File and Directory Structure, Training Models and Training Data, Predicting with the Model)
5. Publishing the Project to Azure (Deployment, Configuration)
6. Running on Azure Web App (Training the Model, Using the Model for Prediction)
7. Troubleshooting (Missing Environment Variables After Deployment, Virtual Environment Resource Lock Issues, Package Version Dependency Issues, Default Binding, Missing System Commands in Restricted Environments)
8. Conclusion
9. References

1. Introduction to Scikit-learn

Scikit-learn is a popular open-source Python library for machine learning, built on NumPy, SciPy, and matplotlib. It offers an efficient and easy-to-use toolkit for data analysis, data mining, and predictive modeling. Scikit-learn supports a variety of machine learning algorithms, including classification, regression, clustering, and dimensionality reduction (e.g., SVM, Random Forest, K-means). Its preprocessing utilities handle tasks like scaling, encoding, and missing data imputation. It also provides tools for model evaluation (e.g., accuracy, precision, recall) and pipeline creation, enabling users to chain preprocessing and model training into seamless workflows.

2. System Architecture

Architecture

Development environment:

- OS: Windows 11, Version 24H2
- Python Version: 3.7.3

Azure resources:

- App Service Plan: SKU - Premium Plan 0 V3
- App Service: Platform - Linux (Python 3.9, Version 3.9.19)
- Storage Account: SKU - General Purpose V2
- File Share: No backup plan

Focus of This Tutorial

This tutorial walks you through the following stages:

1. Setting up Azure resources
2. Running the project locally
3. Publishing the project to Azure
4. Running the application on Azure
5. Troubleshooting common issues

Each of these aspects has numerous corresponding tools and solutions. The choices relevant to this session are listed in the table below.

| Topic | Used in this tutorial | Other options |
| --- | --- | --- |
| Local OS | Windows | Linux, Mac |
| How to set up Azure resources | Portal (i.e., REST API) | ARM, Bicep, Terraform |
| How to deploy the project to Azure | VSCode | CLI, Azure DevOps, GitHub Actions |

3. Setup Azure Resources

Web App

We need to create the following resources or services:

| Item | Manual creation required | Resource/Service |
| --- | --- | --- |
| App Service Plan | No | Resource |
| App Service | Yes | Resource |
| Storage Account | Yes | Resource |
| File Share | Yes | Service |

Go to the Azure Portal and create an App Service. Important configuration:

- OS: Select Linux (default if the Python stack is chosen).
- Stack: Select Python 3.9 to avoid dependency issues.
- SKU: Choose at least a Premium Plan to ensure enough memory for your AI workloads.

Storage

1. Create a Storage Account in the Azure Portal.
2. Create a file share named data-and-model in the Storage Account.
3. Mount the File Share to the App Service, using the name data-and-model for consistency with the tutorial paths.

At this point, all Azure resources and services have been successfully created. Let's take a slight detour and mount the recently created File Share to your Windows development environment. Navigate to the File Share you just created, and refer to the diagram below to copy the required command. Before copying, please ensure that the drive letter remains set to the default "Z", as the sample code in this tutorial relies on it. Return to your development environment. Open a PowerShell terminal (do not run it as Administrator) and input the command copied in the previous step, as shown in the diagram.
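For reference, the script the portal generates generally follows this well-known pattern; this is a sketch, and the storage account name and account key are placeholders you must replace with your own values:

```
# Persist the storage account credentials, then map the file share to Z:
# (placeholders: replace mystorageacct and the account key with your own)
$account = "mystorageacct"
$key = "<storage-account-key>"
cmdkey /add:"$account.file.core.windows.net" /user:"localhost\$account" /pass:$key

New-PSDrive -Name Z -PSProvider FileSystem -Persist `
    -Root "\\$account.file.core.windows.net\data-and-model"
```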
After executing the command, the network drive will be successfully mounted. You can open File Explorer to verify, as illustrated in the diagram.

4. Running Locally

File and Directory Structure

Please use VSCode to open a PowerShell terminal and enter the following commands:

```
git clone https://github.com/theringe/azure-appservice-ai.git
cd azure-appservice-ai
.\scikit-learn\tools\add-venv.cmd
```

If you are using a Linux or Mac platform, use the following alternative commands instead:

```
git clone https://github.com/theringe/azure-appservice-ai.git
cd azure-appservice-ai
bash ./scikit-learn/tools/add-venv.sh
```

After completing the execution, you should see the following directory structure:

| File and Path | Purpose |
| --- | --- |
| scikit-learn/tools/add-venv.* | The script executed in the previous step (cmd for Windows, sh for Linux/Mac) to create all Python virtual environments required for this tutorial. |
| .venv/scikit-learn-webjob/ | A virtual environment specifically used for training models. |
| scikit-learn/webjob/requirements.txt | The list of packages (with exact versions) required for the scikit-learn-webjob virtual environment. |
| .venv/scikit-learn/ | A virtual environment specifically used for the Flask application, enabling API endpoint access for querying predictions. |
| scikit-learn/requirements.txt | The list of packages (with exact versions) required for the scikit-learn virtual environment. |
| scikit-learn/ | The main folder for this tutorial. |
| scikit-learn/tools/create-folder.* | A script to create all directories required for this tutorial in the File Share, including train, model, and test. |
| scikit-learn/tools/download-sample-training-set.* | A script to download a sample training set from the UCI Machine Learning Repository, containing heart disease data, into the train directory of the File Share. |
| scikit-learn/webjob/train_heart_disease_model.py | A script for training the model. It loads the training set, applies a machine learning algorithm (Logistic Regression), and saves the trained model in the model directory of the File Share. |
| scikit-learn/webjob/train_heart_disease_model.sh | A shell script for Azure App Service WebJobs. It activates the scikit-learn-webjob virtual environment and starts the train_heart_disease_model.py script. |
| scikit-learn/webjob/train_heart_disease_model.zip | A ZIP file containing the shell script for Azure WebJobs. It must be recreated manually whenever train_heart_disease_model.sh is modified. Ensure it does not include any directory structure. |
| scikit-learn/api/app.py | Code for the Flask application, including routes, port configuration, input parsing, model loading, predictions, and output generation. |
| scikit-learn/.deployment | A configuration file for deploying the project to Azure using VSCode. It disables the default Oryx build process in favor of custom scripts. |
| scikit-learn/start.sh | A script executed after deployment (as specified in the Portal's startup command). It sets up the virtual environment and starts the Flask application to handle web requests. |

Training Models and Training Data

Return to VSCode and execute the following commands (their purpose has been described earlier).
```
.\.venv\scikit-learn-webjob\Scripts\Activate.ps1
.\scikit-learn\tools\create-folder.cmd
.\scikit-learn\tools\download-sample-training-set.cmd
python .\scikit-learn\webjob\train_heart_disease_model.py
```

If you are using a Linux or Mac platform, use the following alternative commands instead:

```
source .venv/scikit-learn-webjob/bin/activate
bash ./scikit-learn/tools/create-folder.sh
bash ./scikit-learn/tools/download-sample-training-set.sh
python ./scikit-learn/webjob/train_heart_disease_model.py
```

After execution, the File Share will include the corresponding directories and files. Let's take a brief detour to examine the structure of the training data downloaded from the public dataset website. The right side of the figure describes the meaning of each column in the dataset, while the left side shows the actual training data (after preprocessing). This is a predictive model that uses an individual's physiological characteristics to determine the likelihood of having heart disease. Columns 1-13 represent various physiological features and background information of the patients, while Column 14 (originally Column 58) is the label indicating whether the individual has heart disease. The supervised learning process involves using a large dataset containing both features and labels. Machine learning algorithms (such as neural networks, SVMs, or in this case, logistic regression) identify the key features and their ranges that differentiate between labels. The trained model is then saved and can be used in services to predict outcomes in real time by simply providing the necessary features.

Predicting with the Model

Return to VSCode and execute the following commands. First, deactivate the virtual environment used for training the model, then activate the virtual environment for the Flask application, and finally, start the Flask app.

Commands for Windows:

```
deactivate
.\.venv\scikit-learn\Scripts\Activate.ps1
python .\scikit-learn\api\app.py
```

Commands for Linux or Mac:

```
deactivate
source .venv/scikit-learn/bin/activate
python ./scikit-learn/api/app.py
```

When you see a screen similar to the following, it means the server has started successfully. Press Ctrl+C to stop the server if needed. Before conducting the actual test, let's construct some sample human feature data:

```
[63, 1, 3, 145, 233, 1, 0, 150, 0, 2.3, 0, 0, 1]
[63, 1, 3, 305, 233, 1, 0, 150, 0, 2.3, 0, 0, 1]
```

Referring to the feature description table from earlier, we can see that the only modified field is Column 4 ("Resting Blood Pressure"), with the second sample having an abnormally high value. (Note: Normal resting blood pressure ranges are typically 90–139 mmHg.) Next, open a PowerShell terminal and use the following curl commands to send requests to the app:

```
curl -X GET http://127.0.0.1:8000/api/detect -H "Content-Type: application/json" -d '{"info": [63, 1, 3, 145, 233, 1, 0, 150, 0, 2.3, 0, 0, 1]}'
curl -X GET http://127.0.0.1:8000/api/detect -H "Content-Type: application/json" -d '{"info": [63, 1, 3, 305, 233, 1, 0, 150, 0, 2.3, 0, 0, 1]}'
```

You should see the prediction results, confirming that the trained model is working as expected.

5. Publishing the Project to Azure

Deployment

In the VSCode interface, right-click on the target App Service where you plan to deploy your project. Manually select the local project folder named scikit-learn as the deployment source, as shown in the image below.

Configuration

After deployment, the App Service will not be functional yet and will still display the default welcome page.
This is because the App Service has not been configured to build the virtual environment and start the Flask application. To complete the setup, go to the Azure Portal and navigate to the App Service. The following steps are critical, and their execution order must be correct. To avoid delays, it's recommended to open two browser tabs beforehand, complete the settings in each, and apply them in sequence. Refer to the following two images for guidance. You need to do the following:

Set the startup command to the script you deployed:

```
bash /home/site/wwwroot/start.sh
```

Set two app settings:

- WEBSITES_CONTAINER_START_TIME_LIMIT=600: The value is in seconds, ensuring the startup command can continue execution beyond the default timeout of 230 seconds. This tutorial's startup command typically takes around 300 seconds, so setting it to 600 seconds provides a safety margin and accommodates future project expansion (e.g., adding more packages).
- WEBSITES_ENABLE_APP_SERVICE_STORAGE=1: Required to enable the App Service storage feature, which is necessary for using WebJobs (e.g., for model training).

Step-by-step process:

1. Before clicking Continue, switch to the second browser tab and set up all the app settings.
2. In the second tab, apply all app settings, then switch back to the first tab.
3. Click Continue in the first tab and wait several seconds for the operation to complete.
4. Once completed, switch to the second tab and click Continue within 5 seconds.
5. Ensure you click Continue promptly within 5 seconds of the previous step to finish all settings.

After completing the configuration, wait about 10 minutes for the settings to take effect. Then navigate to the WebJobs section in the Azure Portal and upload the ZIP file mentioned in the earlier sections. Set its trigger type to Manual. At this point, the entire deployment process is complete. For future code updates, you only need to redeploy from VSCode; there is no need to reconfigure settings in the Azure Portal.

6. Running on Azure Web App

Training the Model

Go to the Azure Portal, locate your App Service, and navigate to the WebJobs section. Click Start to initiate the job and wait for the results. During this process, you may need to manually refresh the page to check the status of the job execution. Refer to the image below for guidance. Once you see the model report in the logs, model training is complete and the Flask app is ready for predictions. You can also find the newly trained model in the File Share mounted in your local environment.

Using the Model for Prediction

Just like in local testing, open a PowerShell terminal and use the following curl commands to send requests to the app:

```
# Note: Replace both instances of scikit-learn-portal-app with the name of your web app.
curl -X GET https://scikit-learn-portal-app.azurewebsites.net/api/detect -H "Content-Type: application/json" -d '{"info": [63, 1, 3, 145, 233, 1, 0, 150, 0, 2.3, 0, 0, 1]}'
curl -X GET https://scikit-learn-portal-app.azurewebsites.net/api/detect -H "Content-Type: application/json" -d '{"info": [63, 1, 3, 305, 233, 1, 0, 150, 0, 2.3, 0, 0, 1]}'
```

As with the local environment, you should see the expected results.

7. Troubleshooting

Missing Environment Variables After Deployment

Symptom: Even after setting values in App Settings (e.g., WEBSITES_CONTAINER_START_TIME_LIMIT), they do not take effect.
Cause: App Settings (e.g., WEBSITES_CONTAINER_START_TIME_LIMIT, WEBSITES_ENABLE_APP_SERVICE_STORAGE) are reset after updating the startup command.
Resolution: Use Azure CLI or the Azure Portal to reapply the App Settings after deployment, or set the startup command first and then apply the app settings. A scripted sketch follows at the end of this section.

Virtual Environment Resource Lock Issues

Symptom: The app fails to redeploy, even though no configuration or code changes were made.
Cause: The virtual environment folder cannot be deleted due to active resource locks from the previous process; files or processes from the previous virtual environment session remain locked.
Resolution: Deactivate processes before deletion and use unique epoch-based folder names to avoid conflicts. Refer to scikit-learn/start.sh in this tutorial for the implementation.

Package Version Dependency Issues

Symptom: Conflicts occur between package versions specified in requirements.txt and the versions required by the Python environment, resulting in errors during installation or runtime.
Cause: Azure deployment environments enforce specific versions of Python and pre-installed packages, leading to mismatches when older or newer versions are explicitly defined. Additionally, the read-only file system in Azure App Service prevents modifying global packages like typing-extensions.
Resolution: Pin compatible dependency versions. For example, follow the instructions for installing scikit-learn from the scikit-learn 1.5.2 documentation, and refer to scikit-learn/requirements.txt in this tutorial.

Default Binding

Symptom: Despite setting the WEBSITES_PORT parameter in App Settings to match the port Flask listens on (e.g., Flask's default 5000), the deployment still fails.
Cause: The Flask framework's default settings are not overridden to bind to 0.0.0.0 or the required port.
Resolution: Explicitly bind Flask to 0.0.0.0:8000 in app.py. To avoid additional issues, it's recommended to use the Azure Python Linux Web App's default port (8000), as this minimizes the need for extra configuration.

Missing System Commands in Restricted Environments

Symptom: In the WebJobs log, an error is logged stating that the ls command is missing.
Cause: This typically occurs in minimal environments, such as Azure App Services, containers, or highly restricted shells.
Resolution: Use predefined paths or variables in the script instead of relying on system commands. Refer to scikit-learn/webjob/train_heart_disease_model.sh in this tutorial for an example of handling such cases.
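Relating back to the first troubleshooting item, the Portal configuration from section 5 can also be reapplied from the command line. A sketch using standard az commands; the app and resource group names are placeholders:

```
# Placeholders: replace the app and resource group names with your own
$app = "scikit-learn-portal-app"
$rg = "myResourceGroup"

# Reapply the startup command
az webapp config set --name $app --resource-group $rg --startup-file "bash /home/site/wwwroot/start.sh"

# Reapply the two required app settings (do this after changing the startup command)
az webapp config appsettings set --name $app --resource-group $rg --settings WEBSITES_CONTAINER_START_TIME_LIMIT=600 WEBSITES_ENABLE_APP_SERVICE_STORAGE=1
```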
8. Conclusion

Azure App Service, while being a PaaS product with less flexibility than a VM, still offers several powerful features that let us fully leverage the benefits of AI frameworks. For example, the resource-intensive model-training phase can be offloaded to a high-performance local machine, letting the App Service focus solely on loading models and serving predictions. Additionally, if the training dataset is frequently updated, we can configure WebJobs with scheduled triggers to retrain the model periodically, ensuring the prediction service always uses the latest version. These capabilities make Azure App Service well suited to most business scenarios.

9. References

- Scikit-learn Documentation
- UCI Machine Learning Repository
- Azure App Service Documentation

Search Less, Build More: Inner Sourcing with GitHub CoPilot and ADO MCP Server

Developers burn cycles context-switching: opening five repos to find a logging example, searching a wiki for a data masking rule, scrolling chat history for the latest pipeline pattern. Organisations that I speak to are often on the path of transformational platform engineering projects, but always have the fear or doubt of "what if my engineers don't use these resources?". While projects like Backstage still play a pivotal role in inner sourcing and discoverability, I also empathise with developers who would argue, "How would I even know in the first place which modules have or haven't been created for reuse?". In this blog we explore how we can ensure organisational standards and developer satisfaction without any heavy lifting on either side: no custom model training, no rewriting or relocating of repositories, and no stagnant local data. Using GitHub CoPilot plus the Azure DevOps MCP server (with the free `code_search` extension), we turn the IDE into an organisational knowledge interface. Instead of guessing or re-implementing, engineers can start scaffolding projects or solving issues as they would normally (hopefully using CoPilot) and without extra prompting. GitHub CoPilot can lean into organisational standards and ensure recommendations are made with code snippets generated directly from existing examples.

What Is the Azure DevOps MCP Server + code_search Extension?

MCP (Model Context Protocol) is an open standard that lets agents (like GitHub Copilot) pull in structured, on-demand context from external systems. MCP servers contain natural language explanations of the tools that the agent can utilise, allowing dynamic decision making about when to use certain toolsets over others. The Azure DevOps MCP Server is the ADO product team's implementation of that standard. It exposes your ADO environment in a way CoPilot can consume. Out of the box it gives you access to:

- Projects: list and navigate across projects in your organization.
- Repositories: browse repos, branches, and files.
- Work items: surface user stories, bugs, or acceptance criteria.
- Wikis: pull policies, standards, and documentation.

This means CoPilot can ground its answers in live ADO content, instead of hallucinating or relying only on what's in the current editor window. The ADO server runs locally on your own machine, ensuring that all sensitive project information remains within your secure network boundary. This also means that existing permissions on ADO objects such as projects or repositories are respected. The wiki search tooling available out of the box with the ADO MCP server is very useful; however, if I am honest, I have seen these wikis go unused, with documentation stored elsewhere, either inside the repository or in a project management tool. This means any tool that needs to implement code requires the ability to accurately search the code stored in the repositories themselves. That is where enabling the code_search extension in ADO is so important. Most organisations have this enabled already, but it is worth noting that this prerequisite is the real unlock of cross-repo search. It allows CoPilot to:

- Query for symbols, snippets, or keywords across all repos.
- Retrieve usage examples from code, not just docs.
- Locate standards (like logging wrappers or retry policies) wherever they live.
- Back every recommendation with specific source lines.

In short: MCP connects CoPilot to Azure DevOps. code_search makes that connection powerful by turning it into a discovery engine.
What is the relevance of CoPilot Instructions?

One of the less obvious but most powerful features of GitHub CoPilot is its ability to follow instructions files. CoPilot automatically looks for these files and uses them as a "playbook" for how it should behave. There are different types of instructions you can provide:

- Organisational instructions: apply across your entire workspace, regardless of which repo you're in.
- Repo-specific instructions: scoped to a particular repository, useful when one project has unique standards or patterns.
- Personal instructions: smaller overrides layered on top of global rules when a local exception applies (stored in .github/copilot-instructions.md).

In this solution, I'm using a single personal instructions file. It tells CoPilot:

- When to search (e.g., always query repos and wikis before answering a standards question).
- Where to look (Azure DevOps repos, wikis, and, with code_search, the code itself).
- How to answer (responses must cite the repo/file/line or wiki page; if no source is found, say so).
- How to resolve conflicts (prefer dated wiki entries over older README fragments).

As a small example, a section of a CoPilot instructions file could look like this:

````
# GitHub Copilot Instructions for Azure DevOps MCP Integration

This project uses Azure DevOps with MCP server integration to provide organizational context awareness. Always check to see if the Azure DevOps MCP server has a tool relevant to the user's request.

## Core Principles

### 1. Azure DevOps Integration
- **Always prioritize Azure DevOps MCP tools** when users ask about:
  - Work items, stories, bugs, tasks
  - Pull requests and code reviews
  - Build pipelines and deployments
  - Repository operations and branch management
  - Wiki pages and documentation
  - Test plans and test cases
  - Project and team information

### 2. Organisational Context Awareness
- Before suggesting solutions, **check existing organizational patterns** by:
  - Searching code across repositories for similar implementations
  - Referencing established coding standards and frameworks
  - Looking for existing shared libraries and utilities
  - Checking architectural decision records (ADRs) in wikis

### 3. Cross-Repository Intelligence
- When providing code suggestions:
  - **Search for existing patterns** in other repositories first
  - **Reference shared libraries** and common utilities
  - **Maintain consistency** with organizational standards
  - **Suggest reusable components** when appropriate

## Tool Usage Guidelines

### Work Items and Project Management
When users mention bugs, features, tasks, or project planning:
```
✅ Use: wit_my_work_items, wit_create_work_item, wit_update_work_item
✅ Use: wit_list_backlogs, wit_get_work_items_for_iteration
✅ Use: work_list_team_iterations, core_list_projects
```
````

The result...

To test this I created three ADO projects, each with one or two repositories. The repositories were light, with only ReadMe's inside containing descriptions of the "repo" and some example code snippets for usage. I then created a brand-new workspace with no context apart from a CoPilot instructions document (which could be part of a repo scaffold or organisation-wide), which tells CoPilot to search the code and the wikis across all ADO projects in my demo environment. It returns guidance and standards from all available repos and starts to use them to formulate its response. In the screenshot I have highlighted some key parts with red boxes.
The first red box marks a section of the readme that CoPilot identified in its response, with the corresponding part also highlighted in the CoPilot chat response; at the bottom of that window, I have highlighted the deliberately generic prompt I used to get this response. Above it, I have highlighted CoPilot using the MCP server tooling, searching through projects, repos, and code. Finally, the largest box highlights the instructions given to CoPilot on how to search, and how easily these could be optimised or changed depending on the requirements and organisational coding standards.

How did I implement this?

Implementation is actually incredibly simple. As mentioned, I created multiple projects and repositories within my ADO organisation in order to test cross-project and cross-repo discovery. I then did the following:

1. Enable code_search in your Azure DevOps organization (Marketplace → install extension).
2. Login to Azure: use the Azure CLI to authenticate to Azure with "az login".
3. Create a .vscode/mcp.json file: the snippet is provided below; the organisation name should be changed to your organisation's name.
4. Start and enable your MCP server: in the mcp.json file you should see a "Start" button. Using the snippet below you will be prompted to add your organisation name. Ensure your CoPilot agent has access to the server under "tools" too. View this setup guide for full setup instructions (azure-devops-mcp/docs/GETTINGSTARTED.md at main · microsoft/azure-devops-mcp).
5. Create a CoPilot instructions file with a search-first directive. I have inserted the full version used in this demo at the bottom of the article.
6. Experiment with prompts: start generic ("How do we secure APIs?"), review the output and the tools used, and then tailor your instructions.

Considerations

While this is a great approach, I do still have some considerations when going to production:

- Latency: using MCP tooling on every request adds some latency to developer requests. We can optimise usage through CoPilot instructions to better identify when CoPilot should or shouldn't use the ADO MCP server.
- Complex projects and repositories: while I have demonstrated cross-project and cross-repository retrieval, my demo environment does not truly simulate an enterprise ADO environment. Performance should be tested and closely monitored as organisational complexity increases.
- Public preview: the ADO MCP server is moving quickly but is currently still in public preview.

We have demonstrated in this article how quickly we can make our Azure DevOps content discoverable. While there are considerations moving forward, this showcases a direction towards agentic inner sourcing. Feel free to comment below how you think this approach could be extended or augmented for other use cases!

Resources

MCP Server Config (/.vscode/mcp.json)

```
{
  "inputs": [
    {
      "id": "ado_org",
      "type": "promptString",
      "description": "Azure DevOps organization name (e.g. 'contoso')"
    }
  ],
  "servers": {
    "ado": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@azure-devops/mcp", "${input:ado_org}"]
    }
  }
}
```

CoPilot Instructions (/.github/copilot-instructions.md)

````
# GitHub Copilot Instructions for Azure DevOps MCP Integration

This project uses Azure DevOps with MCP server integration to provide organizational context awareness. Always check to see if the Azure DevOps MCP server has a tool relevant to the user's request.

## Core Principles
### 1. Azure DevOps Integration
- **Always prioritize Azure DevOps MCP tools** when users ask about:
  - Work items, stories, bugs, tasks
  - Pull requests and code reviews
  - Build pipelines and deployments
  - Repository operations and branch management
  - Wiki pages and documentation
  - Test plans and test cases
  - Project and team information

### 2. Organizational Context Awareness
- Before suggesting solutions, **check existing organizational patterns** by:
  - Searching code across repositories for similar implementations
  - Referencing established coding standards and frameworks
  - Looking for existing shared libraries and utilities
  - Checking architectural decision records (ADRs) in wikis

### 3. Cross-Repository Intelligence
- When providing code suggestions:
  - **Search for existing patterns** in other repositories first
  - **Reference shared libraries** and common utilities
  - **Maintain consistency** with organizational standards
  - **Suggest reusable components** when appropriate

## Tool Usage Guidelines

### Work Items and Project Management
When users mention bugs, features, tasks, or project planning:
```
✅ Use: wit_my_work_items, wit_create_work_item, wit_update_work_item
✅ Use: wit_list_backlogs, wit_get_work_items_for_iteration
✅ Use: work_list_team_iterations, core_list_projects
```

### Code and Repository Operations
When users ask about code, branches, or pull requests:
```
✅ Use: repo_list_repos_by_project, repo_list_pull_requests_by_repo
✅ Use: repo_list_branches_by_repo, repo_search_commits
✅ Use: search_code for finding patterns across repositories
```

### Documentation and Knowledge Sharing
When users need documentation or want to create/update docs:
```
✅ Use: wiki_list_wikis, wiki_get_page_content, wiki_create_or_update_page
✅ Use: search_wiki for finding existing documentation
```

### Build and Deployment
When users ask about builds, deployments, or CI/CD:
```
✅ Use: pipelines_get_builds, pipelines_get_build_definitions
✅ Use: pipelines_run_pipeline, pipelines_get_build_status
```

## Response Patterns

### 1. Discovery First
Before providing solutions, always discover organizational context:
```
"Let me first check what patterns exist in your organization..."
→ Search code, check repositories, review existing work items
```

### 2. Reference Organizational Standards
When suggesting code or approaches:
```
"Based on patterns I found in your [RepositoryName] repository..."
"Following your organization's standard approach seen in..."
"This aligns with the pattern established in [TeamName]'s implementation..."
```

### 3. Actionable Integration
Always offer to create or update Azure DevOps artifacts:
```
"I can create a work item for this enhancement..."
"Should I update the wiki page with this new pattern?"
"Let me link this to the current iteration..."
```

## Specific Scenarios

### New Feature Development
1. **Search existing repositories** for similar features
2. **Check architectural patterns** and shared libraries
3. **Review related work items** and planning documents
4. **Suggest implementation** based on organizational standards
5. **Offer to create work items** and documentation

### Bug Investigation
1. **Search for similar issues** across repositories and work items
2. **Check related builds** and recent changes
3. **Review test results** and failure patterns
4. **Provide solution** based on organizational practices
5. **Offer to create/update** bug work items and documentation

### Code Review and Standards
1. **Compare against organizational patterns** found in other repositories
2. **Reference coding standards** from wiki documentation
3. **Suggest improvements** based on established practices
4. **Check for reusable components** that could be leveraged

### Documentation Requests
1. **Search existing wikis** for related content
2. **Check for ADRs** and technical documentation
3. **Reference patterns** from similar projects
4. **Offer to create/update** wiki pages with findings

## Error Handling

If Azure DevOps MCP tools are not available or fail:
1. **Inform the user** about the limitation
2. **Provide alternative approaches** using available information
3. **Suggest manual steps** for Azure DevOps integration
4. **Offer to help** with configuration if needed

## Best Practices

### Always Do:
- ✅ Search organizational context before suggesting solutions
- ✅ Reference existing patterns and standards
- ✅ Offer to create/update Azure DevOps artifacts
- ✅ Maintain consistency with organizational practices
- ✅ Provide actionable next steps

### Never Do:
- ❌ Suggest solutions without checking organizational context
- ❌ Ignore existing patterns and implementations
- ❌ Provide generic advice when specific organizational context is available
- ❌ Forget to offer Azure DevOps integration opportunities

---

**Remember: The goal is to provide intelligent, context-aware assistance that leverages the full organizational knowledge base available through Azure DevOps while maintaining development efficiency and consistency.**
````
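As a final note, the code_search prerequisite from the implementation steps can also be checked from the command line. A sketch, assuming the azure-devops CLI extension is installed and you are signed in; the organization URL is a placeholder, and the publisher/extension IDs shown are my best understanding of the marketplace identifiers for Code Search, so verify them against your marketplace listing:

```
# Requires the Azure DevOps CLI extension:
az extension add --name azure-devops

# Report whether the Code Search extension is installed in the organization
az devops extension show --publisher-id "ms" --extension-id "vss-code-search" --organization "https://dev.azure.com/<your-org>"
```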
Collaborative Function App Development Using Repo Branches

In this example, I demonstrate a Windows-based Function App using PowerShell, with deployment via Azure DevOps (ADO) and a Bicep template. Local development is done in VSCode.

Scenario: Your Function App project resides in a shared repository maintained by a team. Each developer works on a separate branch. Whenever a branch is updated, the Function App is deployed to a slot named after that branch. If the slot doesn't exist, it will be created automatically.

How to use it:

1. Create a Function App, using any method of your choice.
2. Prepare a corresponding repo in Azure DevOps and set up your repo structure for the Function App source code.
3. Create the Function App code using the VSCode wizard. In this example, we use PowerShell and create an anonymous HTTP trigger. Then we manually add three additional files. The resulting directory structure looks like this:

deploy.yml

```
trigger:
  branches:
    include:
      - '*'

pool:
  vmImage: 'ubuntu-latest'

variables:
  azureSubscription: '<YOUR_CONNECTION_STRING_FROM_ADO>'
  functionAppName: '<YOUR_FUNCTION_APP_NAME>'
  resourceGroup: '<YOUR_RG_NAME>'
  location: '<YOUR_LOCATION_NAME>'

steps:
  - checkout: self

  - task: AzureCLI@2
    name: DeploySlotInfra
    inputs:
      azureSubscription: $(azureSubscription)
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        BRANCH_NAME=$(Build.SourceBranchName)
        if [ "$BRANCH_NAME" = "master" ]; then
          echo "##[command]Deploying production infrastructure"
          az deployment group create \
            --resource-group $(resourceGroup) \
            --template-file deploy-master.bicep \
            --parameters functionAppName=$(functionAppName) location=$(location)
        else
          SLOT_NAME="$BRANCH_NAME"
          echo "##[command]Deploying slot: $SLOT_NAME"
          az deployment group create \
            --resource-group $(resourceGroup) \
            --template-file deploy.bicep \
            --parameters functionAppName=$(functionAppName) slotName=$SLOT_NAME location=$(location)
        fi

  - task: ArchiveFiles@2
    displayName: 'Package Function App as ZIP'
    inputs:
      rootFolderOrFile: '$(System.DefaultWorkingDirectory)/'
      includeRootFolder: false
      archiveType: zip
      archiveFile: '$(Build.ArtifactStagingDirectory)/functionapp.zip'
      replaceExistingArchive: true

  - task: AzureCLI@2
    name: ZipDeploy
    inputs:
      azureSubscription: $(azureSubscription)
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        BRANCH_NAME=$(Build.SourceBranchName)
        if [ "$BRANCH_NAME" = "master" ]; then
          echo "##[command]Deploying code to production"
          az functionapp deployment source config-zip \
            --name $(functionAppName) \
            --resource-group $(resourceGroup) \
            --src "$(Build.ArtifactStagingDirectory)/functionapp.zip"
        else
          SLOT_NAME="$BRANCH_NAME"
          echo "##[command]Deploying code to slot: $SLOT_NAME"
          az functionapp deployment source config-zip \
            --name $(functionAppName) \
            --resource-group $(resourceGroup) \
            --slot $SLOT_NAME \
            --src "$(Build.ArtifactStagingDirectory)/functionapp.zip"
        fi
```

Please replace all <YOUR_XXX> placeholders with values relevant to your environment. Additionally, update the two instances of "master" to match your repo's default branch name (e.g., main), as updates from this branch will always deploy to the production slot.
deploy-master.bicep

```
@description('Function App Name')
param functionAppName string

@description('Function App location')
param location string

resource functionApp 'Microsoft.Web/sites@2022-09-01' existing = {
  name: functionAppName
}

resource appSettings 'Microsoft.Web/sites/config@2022-09-01' = {
  name: 'appsettings'
  parent: functionApp
  properties: {
    FUNCTIONS_EXTENSION_VERSION: '~4'
  }
}
```

deploy.bicep

```
@description('Function App Name')
param functionAppName string

@description('Slot Name (e.g., dev, test, feature-xxx)')
param slotName string

@description('Function App location')
param location string

resource functionApp 'Microsoft.Web/sites@2022-09-01' existing = {
  name: functionAppName
}

resource functionSlot 'Microsoft.Web/sites/slots@2022-09-01' = {
  name: slotName
  parent: functionApp
  location: location
  properties: {
    serverFarmId: functionApp.properties.serverFarmId
  }
}

resource slotAppSettings 'Microsoft.Web/sites/slots/config@2022-09-01' = {
  name: 'appsettings'
  parent: functionSlot
  properties: {
    FUNCTIONS_EXTENSION_VERSION: '~4'
  }
}
```

4. Deploy from the master branch. Once deployed, the HTTP trigger becomes active in the production slot and can be accessed via:

```
https://<FUNCTION_APP_NAME>.azurewebsites.net/api/<TRIGGER_NAME>
```

5. Switch to a custom branch like member1 and create a test HTTP trigger. After publishing, a new deployment slot named member1 will be created (if it does not already exist). You can open it in the Azure Portal and view its dedicated interface. The branch-specific HTTP trigger will now work at the following URL:

```
https://<FUNCTION_APP_NAME>-<BRANCH_NAME>.azurewebsites.net/api/<TRIGGER_NAME>
```

Notice: Using deployment slots for collaborative development is subject to slot count and SKU limits. For example, the Premium SKU supports up to 20 slots. See Azure subscription and service limits, quotas, and constraints - Azure Resource Manager | Microsoft Learn for details. If you need to delete a slot after use, you can do so using PowerShell with the Remove-AzWebAppSlot command: Remove-AzWebAppSlot (Az.Websites) | Microsoft Learn. A minimal sketch follows below.
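For example, to remove a branch slot once its branch has been merged and deleted, a minimal sketch; the resource group, app name, and slot name are placeholders:

```
# Placeholders: use your resource group, Function App name, and the branch/slot to remove
Remove-AzWebAppSlot -ResourceGroupName "MyResourceGroup" -Name "<YOUR_FUNCTION_APP_NAME>" -Slot "member1"
```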
Bulk Start/Stop of Azure Virtual Desktop Session Hosts in a Host Pool via Single Trigger

Hi Community,

We manage an Azure Virtual Desktop (AVD) host pool with a large number of session hosts (around 100), and we're looking for a way to start or stop all session hosts in bulk using a single trigger, preferably via PowerShell or an API. Currently, we use a scheduled script that loops through each VM individually to start or stop them, but this approach doesn't scale well. We've noticed that the Azure Portal provides a one-click option to start or stop all session hosts in a host pool, and we're trying to replicate that behavior programmatically.

What we're looking for:

- A PowerShell command or script that can start/stop all session hosts in a host pool without iterating through each VM.
- If PowerShell doesn't support this directly, is there an ARM template, Azure CLI command, REST API, or any other method that can be triggered from PowerShell to perform this bulk action?
- Any official documentation, community guidance, or examples from someone who has achieved this would be greatly appreciated.

Goal: To simplify and optimize our automation by using a single command or API call to manage all session hosts in a host pool, rather than looping through each machine individually.

Thanks in advance for your help and suggestions!
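A hedged sketch of one common direction: Start-AzVM and Stop-AzVM support -NoWait, so even a per-VM loop can fire every operation from a single script run without blocking on each machine. This assumes the session hosts live in one resource group and share a name prefix; adjust the filter to your environment:

```
# Fire off start operations for every session host without waiting on each one.
# Placeholders: replace the resource group and name prefix with your own.
$rg = "rg-avd-hosts"
Get-AzVM -ResourceGroupName $rg |
    Where-Object Name -like "avdhost*" |
    ForEach-Object { Start-AzVM -ResourceGroupName $rg -Name $_.Name -NoWait }
```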
Azure CLI and Azure PowerShell Build 2025 Announcement

The key investment areas for Azure CLI and Azure PowerShell in 2025 are quality and security. We've also made meaningful efforts to improve the overall user experience. In parallel, we've enhanced the quality and performance of Azure CLI and Azure PowerShell responses in Copilot, ensuring a more reliable user experience. We encourage you to try the improved Azure CLI and Azure PowerShell Copilot experience and see how it can help streamline your Azure workflows. At Microsoft Build 2025, we're excited to announce several new capabilities aligned with these priorities:

- Improvements in quality and security
- Enhancements to user experience
- Ongoing improvements to Copilot's response quality and performance

Improvements in quality and security

Azure CLI and Azure PowerShell Long Term Support (LTS) releases support

In November 2024, Azure PowerShell became the first to introduce both Standard Term Support (STS) and Long-Term Support (LTS) versions, providing users with more flexibility in managing their tools. At Microsoft Build 2025, we are excited to announce that Azure CLI now also supports both STS and LTS release models. This allows users to choose the version that best fits their project needs, whether they prefer the stability of LTS releases or want to stay up to date with the latest features in STS releases. Users can continue using an LTS version until the next LTS becomes available, or choose to upgrade more frequently with STS versions. To learn more about the definitions and support timelines for Azure CLI and Azure PowerShell STS and LTS versions, please refer to the following documentation:

- Azure CLI lifecycle and support | Microsoft Learn
- Azure PowerShell support lifecycle | Microsoft Learn

Users can choose between the LTS and STS versions of Azure CLI based on their specific needs. It is important to understand the trade-offs: LTS versions provide a stable and predictable environment with a support cycle of up to 12 months, making them ideal when stability and minimal maintenance are priorities. STS versions, on the other hand, offer access to the latest features and more frequent bug fixes, though this comes with the potential need for more frequent script updates as changes are introduced with each release. It is also worth noting that platforms such as Azure DevOps and GitHub Actions typically default to newer CLI versions; that said, users still have the option to pin a specific version if greater consistency is required in their CI/CD pipelines. When using Azure CLI to deploy services like Azure Functions within CI/CD workflows, the actual CLI version in use depends on the version selected by the pipeline environment (e.g., GitHub Actions or Azure DevOps), and it is recommended to verify or explicitly set the version to align with your deployment requirements.

SecureString update for Azure PowerShell

Our team is gradually transitioning to SecureString for tokens, account keys, and secrets, replacing the traditional string types. In November 2024, we offered an opt-in method for the Get-AzAccessToken cmdlet. At the 2025 Build event, we've made this option mandatory, which is a breaking change:

```
Get-AzAccessToken

Token     : System.Security.SecureString
ExpiresOn : 5/13/2025 1:09:15 AM +00:00
TenantId  : 00000000-0000-0000-0000-000000000000
UserId    : user@mail.com
Type      : Bearer
```
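Scripts that previously consumed the token as plain text need a small update. A sketch of one way to adapt, assuming PowerShell 7+, where ConvertFrom-SecureString -AsPlainText is available; the request URI is illustrative:

```
# Retrieve the token (now a SecureString)
$token = Get-AzAccessToken -ResourceUrl "https://management.azure.com/"

# Materialize the plain-text token only at the point of use
$plainToken = $token.Token | ConvertFrom-SecureString -AsPlainText

$headers = @{ Authorization = "Bearer $plainToken" }
Invoke-RestMethod -Uri "https://management.azure.com/subscriptions?api-version=2021-04-01" -Headers $headers
```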
In 2026, we plan to implement this secure method in more commands, converting all keys, tokens, and similar data from string types to SecureString. Please continue to pay attention to our upcoming breaking changes documentation.

Install Azure PowerShell from Microsoft Artifact Registry (MAR)

Installing Azure PowerShell from Microsoft Artifact Registry (MAR) brings several key advantages for enterprise users, particularly in terms of security, performance, and simplified artifact management.

Stronger Security and Supply Chain Integrity
Microsoft Artifact Registry (MAR) enhances security by ensuring only Microsoft can publish official packages, eliminating risks like name squatting. It also improves software supply chain integrity by offering greater transparency and control over artifact provenance.

Faster and More Reliable Delivery
By caching Az modules in your own ACR instances with MAR as an upstream source, customers benefit from faster downloads and higher reliability, especially within the Azure network.

You can try installing Azure PowerShell from MAR using the following PowerShell commands:

$acrUrl = 'https://mcr.microsoft.com'
Register-PSResourceRepository -Name MAR -Uri $acrUrl -ApiVersion ContainerRegistry
Install-PSResource -Name Az -Repository MAR

For detailed installation instructions and prerequisites, refer to the official documentation: Optimize the installation of Azure PowerShell | Microsoft Learn

Enhancements to user experience

Azure PowerShell Enhancements at Microsoft Build 2025

As part of the Microsoft Build 2025 announcements, Azure PowerShell has introduced several significant improvements to enhance usability, automation flexibility, and overall user experience.

Real-Time Progress Bar for Long-Running Operations
Cmdlets that perform long-running operations now display a real-time progress bar, offering users clear visual feedback during execution.

Smarter Output Formatting Based on Result Count
Output formatting is now dynamically adjusted based on the number of results returned:
A detailed list view is shown when a single result is returned, helping users quickly understand the full details.
A table view is presented when multiple results are returned, providing a concise summary that's easier to scan.

JSON-Based Resource Creation for Improved Automation
Azure PowerShell now supports creating resources using raw JSON input, making it easier to integrate with infrastructure-as-code (IaC) pipelines. When this feature is enabled (it is on by default in Azure environments), applicable cmdlets accept:
JSON strings directly via *ViaJsonString
External JSON files via *ViaJsonFilePath
This capability streamlines scripting and automation, especially for users managing complex configurations; a rough sketch follows below. We're always looking for feedback, so try the new features and let us know what you think.
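As an illustration of the JSON-based pattern, here is a hypothetical sketch. New-AzContosoWidget is a placeholder cmdlet name, and the -JsonFilePath and -JsonString parameters are assumptions inferred from the *ViaJsonFilePath / *ViaJsonString parameter-set naming above; check each cmdlet's documentation for its actual parameter sets:

# Hypothetical: create a resource from an external JSON definition file.
New-AzContosoWidget -ResourceGroupName 'myResourceGroup' -Name 'myWidget' -JsonFilePath './widget.json'

# Hypothetical: pass the JSON definition inline as a string instead.
$json = Get-Content -Raw -Path ./widget.json
New-AzContosoWidget -ResourceGroupName 'myResourceGroup' -Name 'myWidget' -JsonString $json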
Improved for custom and disconnected clouds: Azure CLI now reads extended ARM metadata

In disconnected environments like national clouds, air-gapped setups, or Azure Stack, customers often define their own cloud configurations, including custom dataplane endpoints. However, older versions of Azure CLI and its extensions relied heavily on hardcoded endpoint values based only on the cloud name, limiting functionality in these isolated environments.

To address this, Azure CLI now supports reading richer cloud metadata from Azure Resource Manager (ARM) using API version 2022-09-01. This metadata includes extended data plane endpoints, such as those for Arc-enabled services and private registries, which were previously unavailable in older API versions. When running az cloud register with the --endpoint-resource-manager flag, Azure CLI automatically parses and loads these custom endpoints into its runtime context. All extensions, like connectedk8s, k8s-configuration, and others, can now dynamically use accurate, environment-specific endpoints without needing hardcoded logic.

Key Benefits:
Improved Support for Custom Clouds: Enables more reliable automation and compatibility with Azure Local.
Increased Security and Maintainability: Removes the need for manually hardcoding endpoints.
Unified Extension Behavior: Ensures consistent behavior across the CLI and its extensions using centrally managed metadata.

Try it out:

Register cloud
az cloud register -n myCloud --endpoint-resource-manager https://management.azure.com/

Check cloud
az cloud show -n myCloud

For the original implementation, please refer to https://github.com/Azure/azure-cli/pull/30682.
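To see what the CLI is consuming, you can query the ARM metadata endpoint directly. A minimal sketch against the public cloud endpoint (substitute your own Resource Manager URL in a custom cloud); the call is unauthenticated, so the authorization header can be skipped:

# Inspect the extended cloud metadata (API version 2022-09-01).
az rest --method get --url "https://management.azure.com/metadata/endpoints?api-version=2022-09-01" --skip-authorization-header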
Azure PowerShell WAM authentication update

Since Azure PowerShell 12.0.0, Azure PowerShell has supported Web Account Manager (WAM) as the default authentication mechanism. Using Web Account Manager (WAM) for authentication in Azure enhances security through its built-in identity broker and default system browser integration. It also delivers a faster and more seamless sign-in experience. All major blockers have been resolved, and we are actively working on the pending issues. For detailed announcements on specific issues, please refer to the WAM issues and Workarounds issue.

We encourage users on Windows operating systems to enable WAM functionality, for its security benefits, using the command:

Update-AzConfig -EnableLoginByWam $true

If you encounter issues, please report them in Issues · Azure/azure-powershell.

Improve Copilot's response quality and performance

Azure CLI/PS enhancement with Copilot in Azure

In the first half of 2025, we improved the knowledge of Azure CLI and Azure PowerShell commands for Azure Copilot end-to-end scenarios, based on best practices, to better answer questions related to commands and scripts. In the past six months, we have optimized the following scenarios:
Introduced Azure concept documents to RAG to provide more accurate and comprehensive answers.
Improved the accuracy and relevance of knowledge retrieval queries and chunking strategies.
Added support for more accurate rejection of out-of-scope questions.

AI Shell brings AI to the command line, enabling natural conversations with language models and customizable workflows. AI Shell is in public preview and allows you to access Copilot in Azure. All of these optimizations apply to AI Shell. For more information about AI Shell releases, see: AI Shell

To learn more about Microsoft Copilot for Azure and how it can help you, visit: Microsoft Copilot for Azure

Breaking Changes

You can find the latest breaking change guidance documents at the links below. To learn more about the breaking changes and ensure your environment is ready for the newest versions of Azure CLI and Azure PowerShell, see the release notes and migration guides.

Azure CLI: Release notes & updates – Azure CLI | Microsoft Learn
Azure PowerShell: Migration guide for Az 14.0.0 | Microsoft Learn

Milestone timelines:
Azure CLI Milestones
Azure PowerShell Milestones

Thank you for using the Azure command-line tools. We look forward to continuing to improve your experience. We hope you enjoy Microsoft Build and all the great work released this week. We'd love to hear your feedback, so feel free to reach out anytime.

GitHub:
https://github.com/Azure/azure-cli
https://github.com/Azure/azure-powershell

Let's stay in touch on X (Twitter): @azureposh @AzureCli
Steps to Manually Add PowerShell Modules in Function App

When using Azure Function Apps on a Consumption plan, you may encounter issues with dependency management due to the 500 MB temp storage limit, which can cause module installation failures. To avoid upgrading to a more expensive Premium plan, you can instead manually add the PowerShell modules your app needs, as sketched below.
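As a rough illustration of the manual approach, here is a sketch based on the documented convention that PowerShell function apps automatically load modules placed in a Modules folder at the app root; paths and module names are placeholders:

# Download the module into the function app's Modules folder locally,
# then deploy the app content as usual.
Save-Module -Name Az.Storage -Path ./MyFunctionApp/Modules

# Resulting layout (illustrative):
# MyFunctionApp/
#   host.json            # consider disabling managed dependencies here
#   requirements.psd1
#   Modules/
#     Az.Storage/<version>/...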
An Update on Bicep Azure Verified Modules for Platform Landing Zone (ALZ)

But first some history and context

As you may have heard in one of our Azure Landing Zone (ALZ) community calls over the past year, across ALZ we have been working hard to refactor both our Terraform and Bicep implementation options to be built upon Azure Verified Modules (AVM). Earlier this year we announced that the work for Terraform, which we started on first, was complete; you can read more about that in the announcement blog post we posted here. But whilst this work was going on, the ALZ Bicep team were already busy planning how they would go about doing the same and rebuilding ALZ Bicep from AVM modules. You can see the original plans, and where we also asked for feedback, in the GitHub issue (#791).

Enough history, what's the latest?

Now to answer the question everyone has, and rightly so 😁 Well, it's good news! We have been busy getting a number of the AVM Bicep Resource Modules updated with missing bits and pieces that we need from an ALZ perspective. These changes were fairly minor in most cases, but some required bigger updates than others, and some modules didn't exist at all, so we have had to propose, create, and publish those; that work is pretty much done 👍

We are still working towards an end of Q4 (June/July) target for a preview release of all the modules, accelerator and guidance on how to use the new version of ALZ Bicep, which will be called "Bicep Azure Verified Modules for Platform Landing Zone (ALZ)"; this is to align with Terraform and also to provide a clear distinction between ALZ Bicep and the new AVM based version.

Please note that the timeline shared above is an ETA and may move.

Announcing the preview release of `avm/ptn/alz/empty` AVM Pattern Module

Before we get to a more complete release of all the required resources and modules to build the entire ALZ architecture with the new Bicep Azure Verified Modules for Platform Landing Zone (ALZ), we wanted to share an early look at the module that will be at the heart of all of your ALZ deployments. That module is called `avm/ptn/alz/empty` and is available in the Public Bicep Registry for you to try out today (currently version `0.1.0`)!

Tip: Checkout the "max" test in the tests directory for advanced usage examples!

module testMg 'br/public:avm/ptn/alz/empty:0.1.0' = {
  params: {
    managementGroupName: 'test-mg'
    // Other parameters here...
  }
}

This module is 1 of 11 modules that will all be based off the same code. The module optionally creates all of the below (a fuller, hypothetical usage sketch follows the list):

The Management Group itself
Can also target an existing Management Group
Management Group Subscription Associations
RBAC Custom Role Definitions
RBAC Role Assignments
Policy Assignments
Custom Policy Definitions
Custom Policy Set Definitions (Initiatives)
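To make that concrete, here is a hypothetical sketch of the empty module carrying a policy assignment alongside the management group. Only managementGroupName is confirmed above; the policyAssignments parameter name and shape are illustrative assumptions, so check the module's "max" test for the real interface:

// Hypothetical usage sketch of avm/ptn/alz/empty.
module customMg 'br/public:avm/ptn/alz/empty:0.1.0' = {
  params: {
    managementGroupName: 'custom-mg'
    // 'policyAssignments' and its shape are assumptions, not the confirmed interface.
    policyAssignments: [
      {
        name: 'deny-public-ip'
        policyDefinitionId: '<built-in-or-custom-policy-definition-resource-id>'
      }
    ]
  }
}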
There will also be one Bicep Azure Verified Modules for Platform Landing Zone (ALZ) pattern module for each of the ALZ architecture's Management Groups, plus this empty one for custom and advanced scenarios. A reminder of those Management Groups and the associated modules that will be created for each of them:

`avm/ptn/alz/int-root`
`avm/ptn/alz/platform`
`avm/ptn/alz/platform-management`
`avm/ptn/alz/platform-identity`
`avm/ptn/alz/platform-connectivity`
`avm/ptn/alz/landing-zones`
`avm/ptn/alz/landing-zones-corp`
`avm/ptn/alz/landing-zones-online`
`avm/ptn/alz/decommissioned`
`avm/ptn/alz/sandbox`

These Management Group aligned pattern modules will create the same resources as above, but will have the latest release of the ALZ Library baked into each of them. This means that, for the `avm/ptn/alz/int-root` pattern module, you won't have to declare all of the ALZ RBAC Custom Role Definitions, Custom Policy Definitions, Policy Assignments, etc. via the input parameters, as they'll be hardcoded in the module based on the latest release of the ALZ Library at the point that version of the module was released.

So, to build the ALZ Management Group hierarchy and make all of the default ALZ policy assignments, as documented here, you'd need a Bicep file that would look something like this as a starting point:

Important: None of these modules exist below today!

module intRootMg 'br/public:avm/ptn/alz/int-root:0.1.0' = {
  params: {
    managementGroupName: 'int-root-mg'
  }
}

module platformMg 'br/public:avm/ptn/alz/platform:0.1.0' = {
  params: {
    managementGroupName: 'platform-mg'
    managementGroupParentId: intRootMg.outputs.managementGroupId
  }
}

module platformConnectivityMg 'br/public:avm/ptn/alz/platform-connectivity:0.1.0' = {
  params: {
    managementGroupName: 'platform-connectivity-mg'
    managementGroupParentId: platformMg.outputs.managementGroupId
  }
}

This will make getting the ALZ architecture out of the box really fast, and also really easy to pick up the latest updates, by just bumping the version number as you desire when you are ready. Coupled with the `avm/ptn/alz/empty` module to add your own additional Policy Definitions and assignments, etc. at the same Management Group scopes, this also helps you decouple the constant updates to the ALZ architecture and policies from your own additional requirements. It keeps your code cleaner and our modules simpler to maintain, as we won't have to cater for additional custom definitions and assignments alongside the ALZ defaults that are baked into the modules.

Note: We are looking at suggesting that all of these are deployed via Deployment Stacks to help with lifecycle management of resources, e.g. to help clean up resources as well as deploy new ones; think policy assignments and definitions etc. (a rough CLI sketch is included at the end of this post). We need to complete a lot more testing on this, but would love your feedback on experiences if you have any using Deployment Stacks to manage these kinds of resources today. Open an issue/discussion on the ALZ Bicep GitHub repo 👍

Our asks to you 🫵

Please go try out and test the new `avm/ptn/alz/empty` module for all the scenarios you can think of relating to Management Groups, RBAC, Policies, etc. We want to make sure it's "match fit/ready" before we build the Management Group aligned modules and bake the ALZ defaults into them. So please go and put the module through its paces.

Tip: Checkout the "max" test in the tests directory for advanced usage examples!

If you find any issues, bugs, feature requests, or just have a question on how to use it, please raise them as GitHub issues here (make sure to select the `avm/ptn/alz/empty` module from the drop-down 👍).

Thanks in advance for all your efforts and assistance, and we look forward to hearing your feedback on the module 👏
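Following up on the Deployment Stacks note above, here is a minimal sketch of deploying a management-group-scoped Bicep file as a deployment stack with Azure CLI. The names are placeholders, and the right --action-on-unmanage and --deny-settings-mode values depend on your environment, so treat this as a starting point rather than a recommendation:

# Deploy the Bicep file as a deployment stack at management group scope, so
# resources later removed from the template (e.g. retired policy assignments)
# can be cleaned up on subsequent deployments.
az stack mg create \
  --name alz-platform-stack \
  --management-group-id int-root-mg \
  --location eastus \
  --template-file main.bicep \
  --action-on-unmanage deleteResources \
  --deny-settings-mode none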