Automating Microsoft 365 with PowerShell Second Edition
The Office 365 for IT Pros team is thrilled to announce the availability of Automating Microsoft 365 with PowerShell (2nd edition). This completely revised 350-page book delivers the most comprehensive coverage of how to use the Microsoft Graph APIs and the Microsoft Graph PowerShell SDK with Microsoft 365 workloads (Entra ID, Exchange Online, SharePoint Online, Teams, Planner, and more). Existing subscribers can download the second edition now free of charge. https://office365itpros.com/2025/06/30/automating-microsoft-365-with-powershell2/

Add-PublicFolderClientPermission: Object reference not set to an instance of an object.
Running into an issue with adding public folder permissions in Exchange Online. I've used this PowerShell script for a few years without any issues, but suddenly I'm getting this error no matter what I try. I do have Owner permissions, and there are Default and Anonymous permissions on the public folder. I also tried completely removing and reinstalling the ExchangeOnlineManagement module. Anyone else having this problem?

$PF = Get-MailPublicFolder -Identity "\pf1"
$User = Get-User -Anr "User1"
$AccessRights = @( "ReadItems", "CreateItems", "EditOwnedItems", "EditAllItems", "FolderVisible" )
Add-PublicFolderClientPermission -Identity "\$($PF.Id)" -User $User.UserPrincipalName -AccessRights $AccessRights -Verbose

VERBOSE: Returning precomputed version info: 3.9.2
VERBOSE: Requested HTTP/1.1 POST with 227-byte payload
VERBOSE: Received HTTP/1.1 response of content type application/json of unknown size
VERBOSE: Query 1 failed.
Add-PublicFolderClientPermission: Object reference not set to an instance of an object.

Thank you

GPU Partitioning in Windows Server 2025 Hyper-V
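One thing worth checking, offered as an assumption rather than a confirmed fix: Add-PublicFolderClientPermission takes a public folder identity (a path such as "\pf1"), while Get-MailPublicFolder returns the mail-enabled object, whose Id may not resolve to a folder path. Resolving the folder with Get-PublicFolder and passing its Identity through directly can rule out an identity-resolution problem. The user address below is a placeholder:

```powershell
# Diagnostic sketch (assumptions noted above; folder path and user are placeholders)
Connect-ExchangeOnline

$folder = Get-PublicFolder -Identity "\pf1"
if ($null -eq $folder) { throw "Public folder '\pf1' not found" }

# Review the existing permission entries before changing anything
Get-PublicFolderClientPermission -Identity $folder.Identity

Add-PublicFolderClientPermission -Identity $folder.Identity `
    -User "User1@contoso.com" `
    -AccessRights ReadItems, CreateItems, EditOwnedItems, EditAllItems, FolderVisible
```

If Get-PublicFolder also fails here, the problem is more likely with the folder hierarchy or module version than with the permission call itself.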
GPU Partitioning (GPU-P) is a feature in Windows Server 2025 Hyper-V that allows multiple virtual machines to share a single physical GPU by dividing it into isolated fractions. Each VM is allocated a dedicated portion of the GPU’s resources (memory, compute, encoders, etc.) instead of using the entire GPU. This is achieved via Single-Root I/O Virtualization (SR-IOV), which provides hardware-enforced isolation between GPU partitions, ensuring each VM can access only its assigned GPU fraction with predictable performance and security.

In contrast, GPU Passthrough (also known as Discrete Device Assignment, DDA) assigns a whole physical GPU exclusively to one VM. With DDA, the VM gets full control of the GPU, but no other VMs can use that GPU simultaneously. GPU-P’s ability to time-slice or partition the GPU allows higher utilization and VM density for graphics or compute workloads, whereas DDA offers maximum performance for a single VM at the cost of flexibility. GPU-P is ideal when you want to share a GPU among multiple VMs, such as for VDI desktops or AI inference tasks that only need a portion of a GPU’s power. DDA (passthrough) is preferred when a workload needs the full GPU (e.g. large model training) or when the GPU doesn’t support partitioning.

Another major difference is mobility: GPU-P supports live VM mobility and failover clustering, meaning a VM using a GPU partition can move or restart on another host with minimal downtime. DDA-backed VMs cannot live-migrate. If you need to move a DDA VM, it must be powered off and then started on a target host (in clustering, a DDA VM will be restarted on a node with an available GPU upon failover, since live migration isn’t supported). Additionally, you cannot mix modes on the same device: a physical GPU can be either partitioned for GPU-P or passed through via DDA, but not both simultaneously.
Supported GPU Hardware and Driver Requirements

GPU Partitioning in Windows Server 2025 is supported on select GPU hardware that provides SR-IOV or similar virtualization capabilities, along with appropriate drivers. Only specific GPUs support GPU-P; you won’t be able to configure it on a consumer gaming GPU like an RTX 5090. In addition to the GPU itself, certain platform features are required:

Modern CPU with IOMMU: The host processors must support Intel VT-d or AMD-Vi with DMA remapping (IOMMU). This is crucial for mapping device memory securely between host and VMs. Older processors lacking these enhancements may not fully support live migration of GPU partitions.

BIOS Settings: Ensure that in each host’s UEFI/BIOS, Intel VT-d/AMD-Vi and SR-IOV are enabled. These options may be under virtualization or PCIe settings. Without SR-IOV enabled at the firmware level, the OS will not recognize the GPU as partitionable (in Windows Admin Center it might show the status “Paravirtualization,” indicating the driver is capable but the platform isn’t).

Host GPU Drivers: Use vendor-provided drivers that support GPU virtualization. For NVIDIA, this means installing the NVIDIA virtual GPU (vGPU) driver on the Windows Server 2025 host (the driver package that supports GPU-P). Check the GPU vendor’s documentation for installation specifics. After installing, you can verify the GPU’s status via PowerShell or WAC.

Guest VM Drivers: The guest VMs also need appropriate GPU drivers installed (within the VM’s OS) to make use of the virtual GPU. For instance, if using Windows 11 or Windows Server 2025 as a guest, install the GPU driver inside the VM (often the same data-center driver or a guest-compatible subset from the vGPU package) so that the GPU is usable for DirectX/OpenGL or CUDA in that VM. Linux guests (Ubuntu 18.04/20.04/22.04 are supported) likewise need the Linux driver installed.
Guest OS support for GPU-P in WS2025 covers Windows 10/11, Windows Server 2019+, and certain Ubuntu LTS versions.

After hardware setup and driver installation, it’s important to verify that the host recognizes the GPU as “partitionable.” You can use Windows Admin Center or PowerShell for this. In WAC’s GPU tab, check the “Assigned status” of the GPU: it should show “Partitioned” if everything is configured correctly (if it shows “Ready for DDA assignment,” the partitioning driver isn’t active, and if “Not assignable,” the GPU/driver doesn’t support either method). In PowerShell, you can run:

Get-VMHostPartitionableGpu | FL Name, ValidPartitionCounts, PartitionCount

This will list each GPU device’s identifier and the partition counts it supports. For example, an NVIDIA A40 might return ValidPartitionCounts : {16, 8, 4, 2 …}, indicating the GPU can be split into 2, 4, 8, or 16 partitions, and also show the current PartitionCount setting (by default it may equal the max or the currently configured value). If no GPUs are listed, or the list is empty, the GPU is not recognized as partitionable (check drivers/BIOS). If the GPU is listed but ValidPartitionCounts is blank or shows only “1,” it may not support SR-IOV and can only be used via DDA.

Enabling and Configuring GPU Partitioning

Once the hardware and drivers are ready, enabling GPU Partitioning involves configuring how the GPU will be divided and ensuring all Hyper-V hosts (especially in a cluster) have a consistent setup. Each physical GPU must be configured with a partition count (how many partitions to create on that GPU). You cannot define an arbitrary number; it must be one of the supported counts reported by the hardware/driver. The default might be the maximum supported (e.g., 16). To set a specific partition count, use PowerShell on each host. Decide on a partition count that suits your workloads.
Fewer partitions means each VM gets more GPU resources (more VRAM and compute per partition), whereas more partitions means you can assign the GPU to more VMs concurrently (each getting a smaller slice). For AI/ML, you might choose a moderate number – e.g., split a 24 GB GPU into 4 partitions of ~6 GB each for inference tasks.

Run the Set-VMHostPartitionableGpu cmdlet. Provide the GPU’s device ID (from the Name field of the earlier Get-VMHostPartitionableGpu output) and the desired -PartitionCount. For example:

Set-VMHostPartitionableGpu -Name "<GPU-device-ID>" -PartitionCount 4

This would configure the GPU to be divided into 4 partitions. Repeat this for each GPU device if the host has multiple GPUs (or specify -Name accordingly for each). Verify the setting by running:

Get-VMHostPartitionableGpu | FL Name, PartitionCount

It should now show the PartitionCount set to your chosen value (e.g., PartitionCount : 4 for each listed GPU). If you are in a clustered environment, apply the same partition count on every host in the cluster for all identical GPUs. Consistency is critical: a VM using a “quarter GPU” partition can only fail over to another host that also has its GPU split into quarters. Windows Admin Center will actually enforce this by warning you if you try to set mismatched counts on different nodes.

You can also configure the partition count via the WAC GUI. In WAC’s GPU partitions tool, select the GPU (or a set of homogeneous GPUs across hosts) and choose Configure partition count. WAC will present a dropdown of valid partition counts (as reported by the GPU). Selecting a number will show a tooltip of how much VRAM each partition would have (e.g., selecting 8 partitions on a 16 GB card might show ~2 GB per partition). WAC helps ensure you apply the change to all similar GPUs in the cluster together. After applying, it will update the partition count on each host automatically.
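The per-host PowerShell steps above can also be scripted across a cluster. A minimal sketch, assuming every node exposes the same partitionable GPU, PowerShell remoting is available, and the target count of 4 is purely illustrative:

```powershell
# Apply the same partition count on every node of a Hyper-V cluster.
# Assumes homogeneous GPUs on all nodes; the count (4) is an example.
$partitionCount = 4
$nodes = (Get-ClusterNode).Name

Invoke-Command -ComputerName $nodes -ScriptBlock {
    param($count)
    foreach ($gpu in Get-VMHostPartitionableGpu) {
        if ($gpu.ValidPartitionCounts -contains $count) {
            Set-VMHostPartitionableGpu -Name $gpu.Name -PartitionCount $count
        } else {
            Write-Warning "$($gpu.Name) does not support a partition count of $count"
        }
    }
    # Report the result for this node
    Get-VMHostPartitionableGpu | Select-Object Name, PartitionCount
} -ArgumentList $partitionCount
```

Checking ValidPartitionCounts before setting the value avoids the error you would otherwise get on a node whose GPU or driver does not support the chosen count.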
After this step, the physical GPUs on the host (or cluster) are partitioned into the configured number of virtual GPUs. They are now ready to be assigned to VMs. The host’s perspective will show each partition as a shareable resource. (Note: you cannot assign more partitions to VMs than the number configured.)

Assigning GPU Partitions to Virtual Machines

With the GPU partitioned at the host level, the next step is to attach a GPU partition to a VM. This is analogous to plugging a virtual GPU device into the VM. Each VM can have at most one GPU partition device attached, so choose the VM that needs GPU acceleration and assign one partition to it. There are two main ways to do this: using PowerShell commands or using the Windows Admin Center UI. Below are the instructions for each method.

To add the GPU partition to the VM, use the Add-VMGpuPartitionAdapter cmdlet to attach a partitioned GPU to the VM. For example:

Add-VMGpuPartitionAdapter -VMName "<VMName>"

This will allocate one of the available GPU partitions on the host to the specified VM. (There is no parameter to specify which partition or GPU; Hyper-V will auto-select an available partition from a compatible GPU. If no partition is free or the host GPUs aren’t partitioned, this cmdlet will return an error.) You can check that the VM has a GPU partition attached by running:

Get-VMGpuPartitionAdapter -VMName "<VMName>" | FL InstancePath, PartitionId

This will show details like the GPU device instance path and a PartitionId for the VM’s GPU device. If you see an entry with an instance path (matching the GPU’s PCI ID) and a PartitionId, the partition is successfully attached.

Power on the VM. On boot, the VM’s OS will detect a new display adapter. In Windows guests, you should see a GPU in Device Manager (it may appear as a GPU with a specific model, or a virtual GPU device name).
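Putting the PowerShell path together, a short sketch of the assign-and-verify sequence (the VM name is a placeholder; assumes the host GPU is already partitioned as described above):

```powershell
# Attach a GPU partition to a VM and verify it. VM name is illustrative.
$vmName = "AI-VM01"

# The VM must be off while the partition adapter is added
Stop-VM -Name $vmName -Force

Add-VMGpuPartitionAdapter -VMName $vmName

# Confirm the adapter is present before starting the VM
Get-VMGpuPartitionAdapter -VMName $vmName |
    Format-List InstancePath, PartitionId

Start-VM -Name $vmName
```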
Install the appropriate GPU driver inside the VM if not already installed, so that the VM can fully utilize the GPU (for example, install NVIDIA drivers in the guest to get CUDA, DirectX, etc. working). Once the driver is active in the guest, the VM will be able to leverage the GPU partition for AI/ML computations or graphics rendering.

Using Windows Admin Center: Open Windows Admin Center and navigate to your Hyper-V cluster or host, then go to the GPUs extension. Ensure you have added the GPUs extension v2.8.0 or later to WAC. In the GPU Partitions tab, you’ll see a list of the physical GPUs and any existing partitions. Click “+ Assign partition” to open the assignment wizard.

Select the VM: First choose the host server where the target VM currently resides (WAC will list all servers in the cluster). Then select the VM from that host to assign a partition to. (If a VM is greyed out in the list, it likely already has a GPU partition assigned or is incompatible.)

Select Partition Size (VRAM): Choose the partition size from the dropdown. WAC will list options that correspond to the partition counts you configured. For example, if the GPU is split into 4, you might see an option like “25% of GPU (≈4 GB)” or similar. Ensure this matches the partition count you set. You cannot assign more memory than a partition contains.

Offline Action (HA option): If the VM is clustered and you want it to be highly available, check the option for “Configure offline action to force shutdown” (if presented in the UI).

Proceed to assign. WAC will automatically shut down the VM (if it was running), attach a GPU partition to it, and then power the VM back on. After a brief moment, the VM should come online with the GPU partition attached. In the WAC GPU partitions list, you will now see an entry showing the VM name under the GPU partition it’s using. At this point, the VM is running with a virtual GPU.
You can repeat the process for other VMs, up to the number of partitions available. Each physical GPU can only support a fixed number of active partitions equal to the PartitionCount set. If you attempt to assign more VMs than partitions, the additional VMs will not get a GPU (or the Add command will fail). Also note that a given VM can only occupy one partition on one GPU – you cannot span a single VM across multiple GPU partitions or across multiple GPUs with GPU-P.

GPU Partitioning in Clustered Environments (Failover Clustering)

One of the major benefits introduced with Windows Server 2025 is that GPU partitions can be used in failover clustering scenarios for high availability. This means you can have a Hyper-V cluster where VMs with virtual GPUs are clustered roles, capable of moving between hosts either through live migration (planned) or failover (unplanned). To use GPU-P in a cluster, you must pay special attention to configuration consistency and understand the current limitations:

Use Windows Server 2025 Datacenter: As mentioned, clustering features (like failover) for GPU partitions are supported only on Datacenter edition.

Homogeneous GPU Configuration: All hosts in the cluster should have identical GPU hardware and partitioning setup. Failover/live migration with GPU-P does not support mixing GPU models or partition sizes in a GPU-P cluster. Each host should have the same GPU model, and the partition count configured (e.g., 4 or 8) must be the same on every host. This uniformity ensures that a VM expecting a certain size partition will find an equivalent on any other node.

Windows Server 2025 introduces support for live migrating VMs that have a GPU partition attached. However, there are important caveats:

Hardware support: Live migration with GPU-P requires that the hosts’ CPUs and chipsets fully support isolating DMA and device state.
In practice, as noted, you need Intel VT-d or AMD-Vi enabled, with CPUs that ideally support “DMA bit tracking.” If this is in place, Hyper-V will attempt to live migrate the VM normally. During such a migration, the GPU’s state is not seamlessly copied like regular memory; instead, Windows will fall back to a slower migration process to preserve integrity. Specifically, when migrating a VM using GPU-P, Hyper-V automatically uses TCP/IP with compression (even if you have faster methods like RDMA configured), because device state transfer is more complex. The migration will still succeed, but you may notice higher CPU usage on the host and a longer migration time than usual.

Cross-node compatibility: Ensure that the GPU driver versions on all hosts are the same, and that each host has an available partition for the VM. If a VM is running and you trigger a live migration, Hyper-V will find a target where the VM can get an identical partition. If none are free, the migration will not proceed (or the VM may have to be restarted elsewhere as a failover).

Failover (unplanned moves): If a host crashes or goes down, a clustered VM with a GPU partition will be automatically restarted on another node, much like any HA VM. The key difference is that the VM cannot save its state, so it will be a cold start on the new node, attaching to a new GPU partition there. When the VM comes up on the new node, it will request a GPU partition, and Hyper-V will allocate one if available. If the new node has no free partition (say all were assigned to other VMs), the VM might start without a GPU (and Windows would likely log an error that the virtual GPU could not start). Administrators should monitor for this and possibly leverage anti-affinity rules to avoid packing too many GPU VMs on one host if full automatic failover is required.
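Anti-affinity for GPU-backed VMs can be expressed through the cluster group's AntiAffinityClassNames property, so the cluster prefers spreading those roles across nodes. A sketch of one way to set it (the VM role names and class name are placeholders):

```powershell
# Give GPU-backed VM roles a shared anti-affinity class so the cluster
# prefers placing them on different nodes. Role names are illustrative.
Import-Module FailoverClusters

$class = New-Object System.Collections.Specialized.StringCollection
$class.Add("GpuPartitionVMs") | Out-Null

foreach ($roleName in "AI-VM01", "AI-VM02", "AI-VM03") {
    (Get-ClusterGroup -Name $roleName).AntiAffinityClassNames = $class
}

# Review the assignments
Get-ClusterGroup | Select-Object Name, AntiAffinityClassNames
```

Anti-affinity in failover clustering is a soft preference, not a guarantee: under pressure the cluster may still co-locate the roles, which is why monitoring partition availability remains worthwhile.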
To learn more about GPU-P on Windows Server 2025, consult the documentation on Learn: https://learn.microsoft.com/en-us/windows-server/virtualization/hyper-v/gpu-partitioning

File Type Version Limits
Hi all, in trying to solve an old issue I stumbled across this new feature, currently in preview, and am wondering if the file type arrays will be editable or if new arrays could or will be added. I have a handful of file types which do not need 100 versions, let alone a version every 2-5 minutes, requiring frequent culling... Ling. https://learn.microsoft.com/en-us/sharepoint/file-type-version-limits

I built a free, open-source M365 security assessment tool - looking for feedback
I work as an IT consultant, and a good chunk of my time is spent assessing Microsoft 365 environments for small and mid-sized businesses. Every engagement started the same way: connect to five different PowerShell modules, run dozens of commands across Entra ID, Exchange Online, Defender, SharePoint, and Teams, manually compare each setting against CIS benchmarks, then spend hours assembling everything into a report the client could actually read. The tools that automate this either cost thousands per year, require standing up Azure infrastructure just to run, or only cover one service area. I wanted something simpler: one command that connects, assesses, and produces a client-ready deliverable. So I built it.

What M365 Assess does

https://github.com/Daren9m/M365-Assess is a PowerShell-based security assessment tool that runs against a Microsoft 365 tenant and produces a comprehensive set of reports. Here is what you get from a single run:

57 automated security checks aligned to the CIS Microsoft 365 Foundations Benchmark v6.0.1, covering Entra ID, Exchange Online, Defender for Office 365, SharePoint Online, and Teams

12 compliance frameworks mapped simultaneously -- every finding is cross-referenced against NIST 800-53, NIST CSF 2.0, ISO 27001:2022, SOC 2, HIPAA, PCI DSS v4.0.1, CMMC 2.0, CISA SCuBA, and DISA STIG (plus CIS profiles for E3 L1/L2 and E5 L1/L2)

20+ CSV exports covering users, mailboxes, MFA status, admin roles, conditional access policies, mail flow rules, device compliance, and more

A self-contained HTML report with an executive summary, severity badges, sortable tables, and a compliance overview dashboard -- no external dependencies, fully base64-encoded, just open it in any browser or email it directly

The entire assessment is read-only. It never modifies tenant settings. Only Get-* cmdlets are used.

A few things I'm proud of

Real-time progress in the console.
As the assessment runs, you see each check complete with live status indicators and timing. No staring at a blank terminal wondering if it hung.

The HTML report is a single file. Logos, backgrounds, fonts -- everything is embedded. You can email the report as an attachment and it renders perfectly. It supports dark mode (auto-detects system preference), and all tables are sortable by clicking column headers.

Compliance framework mapping. This was the feature that took the most work. The compliance overview shows coverage percentages across all 12 frameworks, with drill-down to individual controls. Each finding links back to its CIS control ID and maps to every applicable framework control.

Pass/Fail detail tables. Each security check shows the CIS control reference, what was checked, what the expected value is, what the actual value is, and a clear Pass/Fail/Warning status. Findings include remediation descriptions to help prioritize fixes.

Quick start

If you want to try it out, it takes about 5 minutes to get running:

# Install prerequisites (if you don't have them already)
Install-Module Microsoft.Graph, ExchangeOnlineManagement -Scope CurrentUser

# Clone and run
git clone https://github.com/Daren9m/M365-Assess.git
cd M365-Assess
.\Invoke-M365Assessment.ps1

The interactive wizard walks you through selecting assessment sections, entering your tenant ID, and choosing an authentication method (interactive browser login, certificate-based, or pre-existing connections). Results land in a timestamped folder with all CSVs and the HTML report. Requires PowerShell 7.x and runs on Windows (macOS and Linux are experimental -- I would love help testing those platforms).

Cloud support

M365 Assess works with commercial (global) tenants as well as GCC, GCC High, and DoD environments. If you work in government cloud, the tool handles the different endpoint URIs automatically.
What is next

This is actively maintained and I have a roadmap of improvements:

More automated checks -- 140 CIS v6.0.1 controls are tracked in the registry, with 57 automated today. Expanding coverage is the top priority.

Remediation commands -- PowerShell snippets and portal steps for each finding, so you can fix issues directly from the report.

XLSX compliance matrix -- a spreadsheet export for audit teams who need to work in Excel.

Standalone report regeneration -- re-run the report from existing CSV data without re-assessing the tenant.

I would love your feedback

I have been building this for my own consulting work, but I think it could be useful to the broader community. If you try it, I would genuinely appreciate hearing:

What checks should I prioritize next? Which security controls matter most in your environment?

What compliance frameworks are most requested by your clients or auditors?

How does the report land with non-technical stakeholders? Is the executive summary useful, or does it need work?

macOS/Linux users -- does it run? What breaks? I have tested it on macOS, but not extensively.

Bug reports, feature requests, and contributions are all welcome on GitHub.

Repository: https://github.com/Daren9m/M365-Assess
License: MIT (free for commercial and personal use)
Runtime: PowerShell 7.x

Thanks for reading. Happy to answer any questions in the comments.

How to Remove Sensitivity Labels from SharePoint Files at Scale
It’s easy to remove sensitivity labels from SharePoint Online files when only a few files are involved. Doing the same task at scale requires automation. In this article, we explain how to use the Microsoft Graph PowerShell SDK to find and remove sensitivity labels from files stored in SharePoint Online and OneDrive for Business. https://office365itpros.com/2026/03/10/remove-sensitivity-labels-from-file/

PS script for moving clustered VMs to another node
Windows Server 2022, Hyper-V, failover cluster. We have a Hyper-V cluster where the hosts reboot once a month. If the host being rebooted has any number of VMs running on it, the reboot can take hours. I've proven this by manually moving VM roles off of the host prior to reboot: the host then reboots in less than an hour, usually around 15 minutes. Does anyone know of a PowerShell script that will detect clustered VMs running on the host and move them to another host within the cluster? I'd rather not reinvent this if someone's already done it.
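For this use case, the built-in node drain functionality may do most of the work without a custom script. A minimal sketch, assuming the Failover Clustering module is available and live migration is configured between nodes:

```powershell
# Drain the local node before its planned reboot: pause it and live-migrate
# all clustered roles (including VMs) to other nodes.
Import-Module FailoverClusters

$node = $env:COMPUTERNAME
Suspend-ClusterNode -Name $node -Drain -Wait

# Confirm nothing is still owned by this node before rebooting
Get-ClusterGroup | Where-Object { $_.OwnerNode -eq $node }

# Reboot the host here. After it rejoins the cluster, allow roles back:
# Resume-ClusterNode -Name $node -Failback Immediate
```

Suspend-ClusterNode -Drain is the same mechanism Cluster-Aware Updating uses before patching a node, so it handles the "detect and move" part automatically.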
Azure CLI and Azure PowerShell Build 2025 Announcement

The key investment areas for Azure CLI and Azure PowerShell in 2025 are quality and security. We’ve also made meaningful efforts to improve the overall user experience. In parallel, we've enhanced the quality and performance of Azure CLI and Azure PowerShell responses in Copilot, ensuring a more reliable user experience. We encourage you to try out the improved Azure CLI and Azure PowerShell in the Copilot experience and see how it can help streamline your Azure workflows. At Microsoft Build 2025, we're excited to announce several new capabilities aligned with these priorities:

Improvements in quality and security.
Enhancements to user experience.
Ongoing improvements to Copilot's response quality and performance.

Improvements in quality and security

Azure CLI and Azure PowerShell Long-Term Support (LTS) releases

In November 2024, Azure PowerShell became the first to introduce both Standard Term Support (STS) and Long-Term Support (LTS) versions, providing users with more flexibility in managing their tools. At Microsoft Build 2025, we are excited to announce that Azure CLI now also supports both STS and LTS release models. This allows users to choose the version that best fits their project needs, whether they prefer the stability of LTS releases or want to stay up to date with the latest features in STS releases. Users can continue using an LTS version until the next LTS becomes available or choose to upgrade more frequently with STS versions. To learn more about the definitions and support timelines for Azure CLI and Azure PowerShell STS and LTS versions, please refer to the following documentation:

Azure CLI lifecycle and support | Microsoft Learn
Azure PowerShell support lifecycle | Microsoft Learn

Users can choose between the Long-Term Support (LTS) and Short-Term Support (STS) versions of Azure CLI based on their specific needs.
It is important to understand the trade-offs: LTS versions provide a stable and predictable environment with a support cycle of up to 12 months, making them ideal for scenarios where stability and minimal maintenance are priorities. STS versions, on the other hand, offer access to the latest features and more frequent bug fixes. However, this comes with the potential need for more frequent script updates as changes are introduced with each release.

It is also worth noting that platforms such as Azure DevOps and GitHub Actions typically default to using newer CLI versions. That said, users still have the option to pin to a specific version if greater consistency is required in their CI/CD pipelines. When using Azure CLI to deploy services like Azure Functions within CI/CD workflows, the actual CLI version in use will depend on the version selected by the pipeline environment (e.g., GitHub Actions or Azure DevOps), and it is recommended to verify or explicitly set the version to align with your deployment requirements.

SecureString update for Azure PowerShell

Our team is gradually transitioning to using SecureString for tokens, account keys, and secrets, replacing the traditional string types. In November 2024, we offered an opt-in method for the Get-AzAccessToken cmdlet. At the 2025 Build event, we’ve made this option mandatory, which is a breaking change:

Get-AzAccessToken

Token     : System.Security.SecureString
ExpiresOn : 5/13/2025 1:09:15 AM +00:00
TenantId  : 00000000-0000-0000-0000-000000000000
UserId    : user@mail.com
Type      : Bearer

In 2026, we plan to implement this secure method in more commands, converting all keys, tokens, and similar data from string types to SecureString. Please continue to pay attention to our upcoming breaking changes documentation.
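Scripts that previously consumed the plain-text token need a small adjustment after this change. One way to adapt, sketched under the assumption of PowerShell 7 (where ConvertFrom-SecureString supports -AsPlainText) and a signed-in Az session:

```powershell
# Acquire the token as SecureString (the new default) and convert it only
# at the point of use, e.g. to build an Authorization header.
$token = (Get-AzAccessToken).Token   # System.Security.SecureString

$plain = ConvertFrom-SecureString -SecureString $token -AsPlainText
$headers = @{ Authorization = "Bearer $plain" }

Invoke-RestMethod -Uri "https://management.azure.com/subscriptions?api-version=2022-12-01" -Headers $headers
```

Converting as late as possible, and avoiding writing the plain value to logs or variables that outlive the call, keeps most of the benefit of the SecureString change.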
Install Azure PowerShell from Microsoft Artifact Registry (MAR)

Installing Azure PowerShell from Microsoft Artifact Registry (MAR) brings several key advantages for enterprise users, particularly in terms of security, performance, and simplified artifact management.

Stronger security and supply chain integrity: MAR enhances security by ensuring only Microsoft can publish official packages, eliminating risks like name squatting. It also improves software supply chain integrity by offering greater transparency and control over artifact provenance.

Faster and more reliable delivery: By caching Az modules in your own ACR instances with MAR as an upstream source, customers benefit from faster downloads and higher reliability, especially within the Azure network.

You can try installing Azure PowerShell from MAR using the following PowerShell commands:

$acrUrl = 'https://mcr.microsoft.com'
Register-PSResourceRepository -Name MAR -Uri $acrUrl -ApiVersion ContainerRegistry
Install-PSResource -Name Az -Repository MAR

For detailed installation instructions and prerequisites, refer to the official documentation: Optimize the installation of Azure PowerShell | Microsoft Learn

Enhancements to user experience

Azure PowerShell enhancements at Microsoft Build 2025

As part of the Microsoft Build 2025 announcements, Azure PowerShell has introduced several significant improvements to enhance usability, automation flexibility, and overall user experience.

Real-time progress bar for long-running operations: Cmdlets that perform long-running operations now display a real-time progress bar, offering users clear visual feedback during execution.

Smarter output formatting based on result count: Output formatting is now dynamically adjusted based on the number of results returned. A detailed list view is shown when a single result is returned, helping users quickly understand the full details.
A table view is presented when multiple results are returned, providing a concise summary that's easier to scan.

JSON-based resource creation for improved automation: Azure PowerShell now supports creating resources using raw JSON input, making it easier to integrate with infrastructure-as-code (IaC) pipelines. When this feature is enabled (by default in Azure environments), applicable cmdlets accept JSON strings directly via *ViaJsonString and external JSON files via *ViaJsonFilePath. This capability streamlines scripting and automation, especially for users managing complex configurations. We're always looking for feedback, so try the new features and let us know what you think.

Improved for custom and disconnected clouds: Azure CLI now reads extended ARM metadata

In disconnected environments like national clouds, air-gapped setups, or Azure Stack, customers often define their own cloud configurations, including custom data-plane endpoints. However, older versions of Azure CLI and its extensions relied heavily on hardcoded endpoint values based only on the cloud name, limiting functionality in these isolated environments. To address this, Azure CLI now supports reading richer cloud metadata from Azure Resource Manager (ARM) using API version 2022-09-01. This metadata includes extended data-plane endpoints, such as those for Arc-enabled services and private registries, previously unavailable in older API versions. When running az cloud register with the --endpoint-resource-manager flag, Azure CLI automatically parses and loads these custom endpoints into its runtime context. All extensions, like connectedk8s, k8s-configuration, and others, can now dynamically use accurate, environment-specific endpoints without needing hardcoded logic.

Key benefits:

Improved support for custom clouds: enables more reliable automation and compatibility with Azure Local.

Increased security and maintainability: removes the need for manually hardcoding endpoints.
Unified extension behavior: ensures consistent behavior across the CLI and its extensions using centrally managed metadata.

Try it out:

# Register the cloud
az cloud register -n myCloud --endpoint-resource-manager https://management.azure.com/

# Check the cloud
az cloud show -n myCloud

For the original implementation, please refer to https://github.com/Azure/azure-cli/pull/30682.

Azure PowerShell WAM authentication update

Since Azure PowerShell 12.0.0, Azure PowerShell supports Web Account Manager (WAM) as the default authentication mechanism. Using WAM for authentication in Azure enhances security through its built-in identity broker and default system browser integration. It also delivers a faster and more seamless sign-in experience. All major blockers have been resolved, and we are actively working on the remaining issues. For detailed announcements on specific issues, please refer to the WAM Issues and Workarounds issue. On Windows operating systems, we encourage users to enable WAM functionality to ensure security, using the command:

Update-AzConfig -EnableLoginByWam $true

If you encounter issues, please report them in Issues · Azure/azure-powershell.

Improve Copilot's response quality and performance

Azure CLI/PS enhancement with Copilot in Azure

In the first half of 2025, we improved the knowledge of Azure CLI and Azure PowerShell commands for Azure Copilot end-to-end scenarios based on best practices to answer questions related to commands and scripts. In the past six months, we have optimized the following scenarios:

Introduced Azure concept documents to RAG to provide more accurate and comprehensive answers.

Improved the accuracy and relevance of knowledge-retrieval queries and chunking strategies.

Added more accurate rejection of out-of-scope questions.

AI Shell brings AI to the command line, enabling natural conversations with language models and customizable workflows.
AI Shell is in public preview and allows you to access Copilot in Azure. All the optimizations apply to AI Shell. For more information about AI Shell releases, see: AI Shell. To learn more about Microsoft Copilot for Azure and how it can help you, visit: Microsoft Copilot for Azure.

Breaking Changes

You can find the latest breaking-change guidance documents at the links below. To learn more about the breaking changes and ensure your environment is ready to install the newest version of Azure CLI and Azure PowerShell, see the release notes and migration guides.

Azure CLI: Release notes & updates – Azure CLI | Microsoft Learn
Azure PowerShell: Migration guide for Az 14.0.0 | Microsoft Learn

Milestone timelines:

Azure CLI Milestones
Azure PowerShell Milestones

Thank you for using the Azure command-line tools. We look forward to continuing to improve your experience. We hope you enjoy Microsoft Build and all the great work released this week. We'd love to hear your feedback, so feel free to reach out anytime.

GitHub:
https://github.com/Azure/azure-cli
https://github.com/Azure/azure-powershell

Let's stay in touch on X (Twitter): @azureposh @AzureCli

How to Use Scoped Graph Permissions with SharePoint Lists
This article explains how to use scoped Graph permissions to restrict app access to lists and list items in SharePoint Online and OneDrive for Business sites. It's a follow-up to other articles covering how to restrict app access to SharePoint Online sites and files. Scoping app access to specific objects is important because otherwise apps can access everything in SharePoint Online, and that isn't good. https://office365itpros.com/2026/02/25/scoped-graph-permission-lists/

Using Dev Proxy with the Microsoft Graph PowerShell SDK
Dev Proxy is a Microsoft tool built to help developers figure out the most effective way of using Microsoft Graph API requests. On the surface, Dev Proxy doesn’t seem like a tool that would interest people who use the Microsoft Graph PowerShell SDK to write scripts for Microsoft 365. But all tools have some use, and Dev Proxy can help. https://office365itpros.com/2026/02/19/dev-proxy-graph-sdk/