powershell
Powershell Entra and General Forum Layout Questions
Hello, I am returning to PowerShell, and it seems a lot has changed. I need to create some Security Groups in Microsoft Entra and would like to know the best way to do so. I have a .csv file for the groups. Also, what is the best way to display topic titles as a list in this forum? At the moment I have to scroll through pages of posts, and it's not easy. I used to like the old format that let you see all the thread titles. Thanks
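For the Entra part of the question, one common approach is the Microsoft Graph PowerShell SDK. A minimal sketch, assuming a groups.csv with DisplayName, MailNickname, and Description columns (the file layout and column names are illustrative; adjust to your file):

# Connect with a scope that can create groups
Connect-MgGraph -Scopes "Group.ReadWrite.All"

# Create one security group per CSV row
Import-Csv -Path .\groups.csv | ForEach-Object {
    New-MgGroup -DisplayName $_.DisplayName `
        -MailNickname $_.MailNickname `
        -Description $_.Description `
        -MailEnabled:$false `
        -SecurityEnabled
}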
Extracting and Auditing Azure DevOps Permissions at Scale with PowerShell
Managing access in Azure DevOps is easy at small scale — and increasingly opaque as organizations grow. This post introduces ADO Permissions Output, an open-source PowerShell toolset that queries Azure DevOps REST APIs across 30+ security namespaces, decodes bitmask permissions, resolves cryptic GUIDs and tokens into readable names, and produces structured JSON/CSV output ready for Power BI. It also surfaces "ghost" members — users who appear in ADO through nested Entra groups but hold no active entitlement — which the standard Graph API alone cannot detect. Whether you're preparing for a compliance review or just want to know who actually has access to what, this tool closes the gap between the ADO portal and a complete audit picture.
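As background on what "decoding bitmask permissions" means here: ADO ACL entries return allow/deny integers that must be tested bit by bit against the namespace's action table. A minimal, self-contained sketch of the idea (the action names and bit values are illustrative; the real ones come from the Security Namespaces REST API, not from this tool's actual code):

# Hypothetical action table for one namespace (bit value → action name)
$Actions = @{ 1 = 'GenericRead'; 2 = 'GenericWrite'; 4 = 'Administer' }
$Allow   = 5   # example allow bitmask from an ACL entry

# Every set bit maps to a granted action
$Actions.Keys | Where-Object { $Allow -band $_ } |
    ForEach-Object { $Actions[$_] }   # → GenericRead, Administer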
The Microsoft Graph PowerShell SDK and the additionalProperties Property
The additionalProperties property is available for many Microsoft Graph PowerShell SDK cmdlets. In this article, we explain what the additionalProperties property does and how it holds output for Microsoft Graph PowerShell SDK cmdlets. It’s all because of the lack of strongly-typed properties, or so the AutoRest process would have us believe. https://office365itpros.com/2026/04/21/additionalproperties-property/
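As a quick illustration of the pattern the article discusses: members returned by Get-MgGroupMember come back as generic directory objects where only Id is strongly typed, so other values must be read from AdditionalProperties. A minimal sketch, assuming a connected Graph session and a resolved $GroupId:

Connect-MgGraph -Scopes "GroupMember.Read.All"
$Members = Get-MgGroupMember -GroupId $GroupId -All

# displayName is not a typed property on directory objects;
# it lives in the AdditionalProperties hashtable
$Members | ForEach-Object { $_.AdditionalProperties['displayName'] }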
SharePoint List Migration to new Tenant
Hi All, I am preparing for a tenant-to-tenant migration of 60+ SharePoint lists that function as the back end for various PowerApps. Since we are doing a staggered cutover, I need to perform an initial migration now and then run 'delta' syncs over the next few weeks to catch new records, updates, and deletes. My primary challenge is that SharePoint's native ID column is not preserved during manual migrations (PowerShell/CSV), which will break our app logic and lookups. How have others handled cross-tenant list synchronization at this scale? Specifically: How do you maintain record relationships and deep links when the system IDs change? What is the most efficient way to handle deltas across 60 lists without buying expensive 3rd-party migration tools? Thanks, Jake
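One pattern often used for the ID problem (a starting point, not a complete answer): stamp the source item ID into a custom column on the target list, then key delta syncs and lookup rewiring off that column instead of the system ID. A rough sketch with PnP PowerShell; the site URLs, list name, and LegacyID column are hypothetical, and the column must exist on the target first:

# Read everything from the source list
Connect-PnPOnline -Url "https://source.sharepoint.com/sites/Apps" -Interactive
$SourceItems = Get-PnPListItem -List "Projects" -PageSize 500

# Write to the target, preserving the original ID in LegacyID
Connect-PnPOnline -Url "https://target.sharepoint.com/sites/Apps" -Interactive
foreach ($Item in $SourceItems) {
    Add-PnPListItem -List "Projects" -Values @{
        Title    = $Item["Title"]
        LegacyID = $Item.Id    # source ID, used later to re-map lookups and deep links
    }
}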
Automating Microsoft 365 with PowerShell Second Edition
The Office 365 for IT Pros team are thrilled to announce the availability of Automating Microsoft 365 with PowerShell (2nd edition). This completely revised 350-page book delivers the most comprehensive coverage of how to use Microsoft Graph APIs and the Microsoft Graph PowerShell SDK with Microsoft 365 workloads (Entra ID, Exchange Online, SharePoint Online, Teams, Planner, and more). Existing subscribers can download the second edition now free of charge. https://office365itpros.com/2025/06/30/automating-microsoft-365-with-powershell2/
BLOG: Determine and modernize Filesystem Deduplication
Version history
- 1.6 Added references / links
- 1.5 Added insights from Steven Ekren. Many thanks! / Added ReFS Docs link and added clarification about drawbacks.
- 1.4 Revised script so ReFS volumes with classic dedup will be identified, added more eligibility checks and error handling
- 1.3 Added point #4 in migration guidance
- 1.2 Revised script
- 1.1 Formatting

This blog explains the two Windows deduplication modes: classic Windows Data Deduplication (NTFS or ReFS) and ReFS Deduplication (ReFS). It covers how they differ, why you should consider upgrading to Windows Server 2025 to leverage the new ReFS dedup engine, and gives clear warnings about scenarios where ReFS is not recommended. Practical migration guidance and detection commands are included.

Differences between classic dedup and ReFS dedup
- File system: Classic dedup runs on NTFS or ReFS; ReFS dedup runs on ReFS, and on Windows Server 2025 or later only.
- Implementation: They are separate engines with different metadata formats and management cmdlets.
- Management: Classic dedup uses the Dedup PowerShell module (Get-DedupVolume, Start-DedupJob, Disable-DedupVolume). ReFS dedup uses its own cmdlets (Get-ReFSDedupStatus, Enable-ReFSDedup).
- Conversion: There is no in-place conversion between the two; metadata and chunk formats are incompatible.
- Improvements: The new in-line ReFS Deduplication leverages the advantages of the ReFS file system, which makes deduplication more efficient and less CPU intensive. The new ReFS Deduplication can also compress data in-line using the LZ4 algorithm. This makes it up to par with enterprise solutions often found in SAN storage or Linux appliances. Compression is optional and is set per volume.

Edit: Steven Ekren, a former Senior Product Manager for Hyper-V, shared valuable insights on how both engines operate in a comment on LinkedIn: [...] the basic conceptual difference between WS Deduplication and ReFS deduplication is that the Windows Server [dedup] version takes the duplicate file data and moves it to a repository and puts a reparse point in the file system from each point that references the data. This involves data movement and is therefore not recommended for workloads that change their data often, but best for more static data like documents and pictures/videos. ReFS is a file system that uses links natively for all the objects, so leaving the data in place and managing the links is much more efficient and doesn't involve the data copy and managing a repository. Effectively it's built into the file system. As the blog notes, there are some situations not recommended for this version of dedupe, but generally it's lower performance and storage I/O impact.

Why upgrade to Windows Server 2025
- Improved version of the ReFS file system.
- Improved ReFS in-line deduplication + optional LZ4 compression: Server 2025 includes enhancements to ReFS dedup performance, scalability, and integration with modern storage features.
- Support and fixes: Windows Server 2016 and 2019 are past mainstream support, increasing the likelihood of costly support cases and delayed fixes; upgrading reduces operational risk and ensures access to ongoing improvements.
- Future compatibility: Newer OS releases receive optimizations and bug fixes for ReFS and dedup scenarios that older releases will not.
- SMB compression: noticeably faster data transfer at minimal CPU cost when moving data across the network.

For feature- and security-related improvements, refer to the available Microsoft Windows Server 2025 Summit content on techcommunity.microsoft.com.
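If you are evaluating the new engine after an upgrade, here is a minimal sketch contrasting the two management surfaces; the drive letters and the -UsageType/-Type values are illustrative assumptions:

# Classic Data Deduplication (Dedup module, NTFS or ReFS)
Enable-DedupVolume -Volume "D:" -UsageType Default
Get-DedupVolume -Volume "D:"

# ReFS-native deduplication (Windows Server 2025, ReFS only)
Enable-ReFSDedup -Volume "E:" -Type DedupAndCompress
Get-ReFSDedupStatus -Volume "E:"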
Scenarios where ReFS is not recommended
- ReFS on SAN in clustered CSV environments: Avoid placing ReFS dedup on top of SAN-backed Cluster Shared Volumes (CSVFS) in production clusters; clustered SAN/CSV scenarios have caused severe performance issues in practice. Please refer to the ReFS documentation.
- Many small, fast-changing files (personal opinion and experience, not endorsed by Microsoft): Workloads with frequent small writes, such as user profiles, folder redirection of AppData folders, or applications that churn small config files (for example, Lotus Notes config files), can cause locks, performance degradation, or unexpected behavior on ReFS. Exclude these disks from dedup or keep them on NTFS. Note: Restrictions under high churn rates, like lockups, high RAM consumption, deadlocks, or BSODs, might have been addressed in Windows Server 2025 and ReFS dedup; see the comment from Steven Ekren above. Improving reliability and performance is a top goal for ReFS, to improve adoption and feature parity with NTFS. For information about feature parity, please refer to the ReFS documentation.

Migration guidance
The following instructions describe a high-level, supported migration path from Windows deduplication on the NTFS file system to native ReFS Deduplication.
Note: Step 3, data migration, is not required when already using ReFS with classic Data Deduplication. In that case it's enough to execute steps 1 and 2.
Note: Validate on non-production data first. Plan for rehydration time and network/storage throughput. Ensure backups are current before starting, and make sure to have a full backup before upgrading the Server OS or making changes.
1. Disable classic dedup on the NTFS source: Disable-DedupVolume -Volume YourDriveLetter:
2. Rehydrate (un-deduplicate) the data: Start-DedupJob -Volume YourDriveLetter: -Type Unoptimization
3. Copy or move data to a ReFS volume (new target): For straightforward NTFS→ReFS copies, robocopy is recommended (see the sketch after this section). A GUI- and job-based alternative is the File Server Migration feature (which uses robocopy) in Windows Admin Center. For complex scenarios, such as open files, long path names, very large datasets (> 5 TB), many small files, restructuring, GUI-based automation (including on Windows Server Core), improved logging, or cloud/hybrid migrations, I recommend GS RichCopy Enterprise by GuruSquad for higher speed (up to 40%) and reliability compared to robocopy.
4. Optionally remove the Windows Server feature: When the old deduplication is no longer in use, consider removing the feature. The advantages of doing so: it removes an unnecessary service; it removes the file system filter driver for dedup, which impacts performance even when not in use; and it removes the PowerShell cmdlets for the old dedup, so they cannot be mistakenly used by existing scripts, unaware admins, etc.

When migrating files over the network:
- SMB compression: ensure both source and target run Windows Server 2025 and leverage SMB compression. SMB compression is available in Microsoft xcopy, Microsoft robocopy, and GuruSquad GSCopy Enterprise.
- Balancing and teaming with SMB: SMB does not require LBFO or SET teaming. It automagically detects network links and actively balances on its own on Windows Server 2016 and later. Using teaming, depending on the configuration, can negatively affect transfer speed.
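For step 3 of the migration guidance above, a minimal robocopy sketch; the paths are placeholders, tune /MT to your hardware, and add /COMPRESS only for network copies between hosts that support SMB compression:

robocopy D:\Data E:\Data /MIR /COPYALL /DCOPY:DAT /R:1 /W:1 /MT:32 /LOG:C:\Logs\ntfs-to-refs.log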
Quick detection and diagnostic commands
Check file systems:
Get-Volume | Select DriveLetter, FileSystem
Check classic dedup feature:
Get-WindowsFeature -Name FS-Data-Deduplication
Get-DedupVolume
Get-DedupStatus
Check ReFS dedup:
Get-Command -Module Microsoft.ReFsDedup.Commands
Get-ReFSDedupStatus -Volume YourDriveLetter:

Diagnostic script to detect both:

<#
.SYNOPSIS
Detects classic NTFS Data Deduplication and ReFS Deduplication across local volumes.
.DESCRIPTION
- Reports NTFS volumes with classic Data Dedup enabled.
- Lists ReFS volumes present on the host.
- If the ReFS dedup cmdlet exists AND OS build >= 26100, checks ReFS dedup status per ReFS volume.
- Color coding:
  * Classic dedup enabled → Yellow
  * Classic dedup not enabled → Cyan
  * ReFS dedup enabled → Green
  * ReFS dedup not enabled → Cyan
.NOTES
Version: 1.7
Author: Karl Wester-Ebbinghaus + Copilot
Requirements: Elevated PowerShell session, PowerShell 5.1 or newer
Supported OS: Windows Server 2025, Azure Stack HCI 24H2 or newer
Unsupported OS: Windows 10, Windows 11 (script terminates)
#>

#region Initialization
Write-Verbose "Initializing variables and environment..."
$Volumes = $null
$Volume = $null
$DedupVolumesList = $null
$DedupReFSVolumesList = $null
$DedupReFSVolumesListLetters = $null
$DedupReFSStatus = $null
$refsCmd = $null
$OSBuild = $null
$runReFSDedupChecks = $null
#endregion Initialization

#region Volume Discovery
Clear-Host
Write-Verbose "Querying NTFS and ReFS volumes..."
$Volumes = Get-Volume | Where-Object FileSystem -in 'NTFS','ReFS'

# List ReFS volumes once for the whole host (instead of repeating this per volume in the loop)
Write-Verbose "Listing ReFS volumes on host..."
$DedupReFSVolumesList = $Volumes | Where-Object FileSystem -eq 'ReFS'
if ($DedupReFSVolumesList) {
    $DedupReFSVolumesListLetters = ($DedupReFSVolumesList | ForEach-Object { $_.DriveLetter }) -join ','
    Write-Host "ReFS volumes present on host: $DedupReFSVolumesListLetters"
} else {
    Write-Host "No ReFS volumes detected on host"
} # end if ReFS volumes present
Write-Host ""
#endregion Volume Discovery

#region ReFS Dedup Cmdlet, OS Build and OS SKU Detection
Write-Verbose "Checking for ReFS deduplication cmdlet..."
$refsCmd = Get-Command -Name Get-ReFSDedupStatus -ErrorAction SilentlyContinue

Write-Verbose "Reading OS build number..."
try {
    $OSBuild = [int](Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion' -Name CurrentBuildNumber).CurrentBuildNumber
}
catch {
    Write-Verbose "Registry read for OS build failed. Falling back to Environment OSVersion."
    $OSBuild = [int][Environment]::OSVersion.Version.Build
} # end try/catch for OS build detection

Write-Verbose "Checking OS InstallationType and EditionID..."
$CurrentVersionKey = Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion'
$InstallationType = $CurrentVersionKey.InstallationType   # "Client" or "Server"
$EditionID = $CurrentVersionKey.EditionID                 # e.g. "AzureStackHCI", "ServerStandard", etc.

Write-Verbose "Detected InstallationType: $InstallationType"
Write-Verbose "Detected EditionID: $EditionID"
Write-Verbose "Detected OSBuild: $OSBuild"

# Block Windows 10/11 (Client OS)
if ($InstallationType -eq 'Client') {
    Write-Error "Unsupported OS detected: Windows Client (Windows 10/11). Only Windows Server or Azure Stack HCI are supported. Script will terminate."
    exit
}

# Allow Azure Stack HCI explicitly
if ($EditionID -eq 'AzureStackHCI') {
    Write-Verbose "Azure Stack HCI detected. Supported platform."
}
else {
    # Must be Windows Server
    if ($InstallationType -ne 'Server') {
        Write-Error "Unsupported OS detected. Only Windows Server or Azure Stack HCI are supported. Script will terminate."
        exit
    }
    Write-Verbose "Windows Server detected (EditionID: $EditionID). Supported platform."
}

Write-Verbose "Evaluating ReFS dedup eligibility based on cmdlet presence and build >= 26100..."
$runReFSDedupChecks = $false
if ($refsCmd -and ($OSBuild -ge 26100)) {
    $runReFSDedupChecks = $true
    Write-Verbose "ReFS dedup checks ENABLED (cmdlet present and OS build >= 26100)."
}
else {
    Write-Verbose "ReFS dedup checks DISABLED (cmdlet missing or OS build < 26100)."
}
#endregion ReFS Dedup Cmdlet, OS Build and OS SKU Detection

#region Main Loop
foreach ($Volume in $Volumes) { # begin foreach volume loop
    Write-Host "Volume $($Volume.DriveLetter): ($($Volume.FileSystem))"
    Write-Verbose "Processing volume $($Volume.DriveLetter)..."

    #region Classic Dedup Check
    Write-Verbose "Checking classic deduplication status for volume $($Volume.DriveLetter)..."
    $DedupVolumesList = Get-DedupVolume -Volume $Volume.DriveLetter -ErrorAction SilentlyContinue
    if ($DedupVolumesList) {
        Write-Host " → Classic Data Dedup ENABLED on $($Volume.DriveLetter), $($Volume.FileSystem)" -ForegroundColor Yellow
    } else {
        Write-Host " → Classic Data Dedup NOT enabled on $($Volume.DriveLetter), $($Volume.FileSystem)" -ForegroundColor Cyan
    } # end if classic dedup enabled
    #endregion Classic Dedup Check

    #region ReFS Dedup Status
    if ($Volume.FileSystem -eq 'ReFS') {
        if ($runReFSDedupChecks) {
            Write-Verbose "Checking ReFS deduplication status for volume $($Volume.DriveLetter)..."
            $DedupReFSStatus = Get-ReFSDedupStatus -Volume $Volume.DriveLetter -ErrorAction SilentlyContinue
            if ($DedupReFSStatus) {
                Write-Host " → ReFS Dedup ENABLED on $($Volume.DriveLetter), $($Volume.FileSystem)" -ForegroundColor Green
            } else {
                Write-Host " → ReFS Dedup NOT enabled on $($Volume.DriveLetter), $($Volume.FileSystem)" -ForegroundColor Cyan
            } # end if ReFS dedup enabled
        } else {
            # Write-Host is used here because Write-Error does not support -ForegroundColor
            if (-not $refsCmd) {
                Write-Host " → Skipping ReFS dedup check: Get-ReFSDedupStatus cmdlet not present" -ForegroundColor Cyan
            } else {
                Write-Host " → Skipping ReFS dedup check: OS build $OSBuild < required 26100" -ForegroundColor Cyan
            } # end reason for skipping ReFS dedup check
        } # end if runReFSDedupChecks
    } # end if ReFS filesystem block
    #endregion ReFS Dedup Status

    Write-Host ""
} # end foreach volume loop
#endregion Main Loop

#region End
Write-Verbose "Script completed."
#endregion End

Recommendations and next steps
- Inventory: Identify volumes using NTFS dedup and ReFS dedup, and map workloads that create many small or rapidly changing files.
- Plan: Schedule rehydration and migration windows; test ReFS dedup on representative datasets.
- Upgrade: Prioritize upgrading servers still on 2016/2019 (past mainstream support) to reduce support risk and gain the latest ReFS dedup improvements. Kindly consider reading my Windows Server Installation Guidance and Windows Server Upgrade Guidance.
- Exclude: Keep user profiles, AppData, and other high-churn small-file paths off ReFS dedup or on NTFS.
- Consider ReFS dedup with compression: Enable compression optionally. Mind that ReFS dedup compression is not the same as the file compression integration in File Explorer; it is transparent to the application.
- Make smart decisions: Avoid using dedup when the dataset is changing fast or your dedup + compression rate is below 20%.
Usually you can expect 40% or more savings, and up to 80% in specific use cases like VDI VHDX with ReFS Dedup + Compression.
- Plan your dedup jobs: Make use of the scheduling features for dedup jobs through PowerShell or Windows Admin Center (WAC) when using ReFS dedup on more than one volume per server; otherwise they might all run at the same time and impact your storage performance (especially spinning rust) and consume RAM and CPU. (See the scheduling sketch at the end of this post.)
- Share and educate: Inform your infrastructure team about the changes so they avoid using the traditional dedup on ReFS.

Related blogposts:
- https://splitbrain.com/windows-data-deduplication-vs-refs-deduplication/ (thanks, Darryl van der Peijl and team)
- https://www.veeam.com/kb2023 (Veeam best practices for Windows Deduplication and ReFS Deduplication)
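A sketch for the job-planning recommendation above: stagger per-volume ReFS dedup runs with Task Scheduler so they never overlap. Start-ReFSDedupJob is assumed from the ReFS dedup module; the volumes and start times are placeholders:

# One scheduled task per volume, offset by a few hours
$Jobs = @{ 'D' = '22:00'; 'E' = '02:00' }
foreach ($Job in $Jobs.GetEnumerator()) {
    $Action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
        -Argument "-NoProfile -Command Start-ReFSDedupJob -Volume $($Job.Key):"
    $Trigger = New-ScheduledTaskTrigger -Daily -At $Job.Value
    Register-ScheduledTask -TaskName "ReFSDedup_$($Job.Key)" -Action $Action -Trigger $Trigger
}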
I need some simple layman explanation
Hi, I am involved with an implementation of an EPM system that is integrated into SharePoint in M365, and I started reading its manual on setting it up for the first time. I know the steps, but I wish to get some simple understanding of why the steps are needed, since I am not a very technical person. The tool involves the deployment of an add-in in Microsoft Word (both the web and desktop app). The manual said the add-in app can be installed by the user directly from the app store or deployed by the M365 administrator to a group of users... but in the section for the M365 administrator to deploy this add-in app, it said that permissions need to be granted to the app. The permissions are: openid, profile, sites.selected, user.read. So why is it OK to let a user install directly (without any instruction to set permissions), but when the M365 administrator does it, it suddenly needs the given permissions? In addition, the manual said to run a PowerShell script in order to grant permission to the SharePoint site created for the EPM system integration. It wrote that the SharePoint admin must have the Microsoft Graph PowerShell SDK installed and run the script while signed in as the site owner. What is this PowerShell script that it needs a special installation to run? Then something mentioned that when deploying the add-in to a group of users, there is a step to run a manifest script. This step might need to be re-executed if there are changes in the add-in development. What is this manifest meant for in SharePoint? What does it do? Thank you in advance.
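Regarding the sites.selected question: that permission only makes the app eligible to be given access; an admin must still grant it to specific sites, and that grant is typically what the vendor's Graph PowerShell script performs. A rough illustration of the usual pattern (simplified and hypothetical; the vendor's actual script, app ID, and site ID will differ):

Connect-MgGraph -Scopes "Sites.FullControl.All"

# Grant the add-in's app registration write access to the EPM site (IDs are placeholders)
New-MgSitePermission -SiteId $SiteId -Roles @("write") -GrantedToIdentities @(
    @{ application = @{ id = "<app-client-id>"; displayName = "EPM Add-in" } }
)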
Understanding action_id discrepancies in Azure SQL Database Audit Logs (BCM vs AL / CR / DR)

Overview
While working with Azure SQL Database auditing in enterprise environments, you may encounter an inconsistency in how the action_id field is captured across different PaaS SQL servers. In one such scenario, a customer observed:

PaaS Server | action_id observed for similar DDL statements
Server A | AL, CR, DR
Server B | BCM only

This inconsistency impacted downstream compliance pipelines, as the audit data was expected to be captured and interpreted uniformly across all servers. This article explains:
- How Azure SQL Auditing works by default
- What causes BCM to appear instead of AL/CR/DR
- How to standardize audit logs across PaaS servers

How Azure SQL Database Auditing Works
Azure SQL Database auditing uses a managed, fixed audit policy at the service level. When auditing is enabled at the server level, the default auditing policy includes the following action groups:
- BATCH_COMPLETED_GROUP
- SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP
- FAILED_DATABASE_AUTHENTICATION_GROUP

These groups audit:
- All query execution activity
- Successful authentication attempts
- Failed authentication attempts

As a result, SQL batches (including DDL statements like CREATE, ALTER, or DROP on database objects) are captured under BATCH_COMPLETED_GROUP and appear with action_id = BCM.

Why AL, CR, and DR Are Not Captured by Default
Audit action IDs such as AL, CR, and DR are considered security/DDL-level audit events. These events are not included in the default Azure SQL auditing policy. Instead, they are generated only when the corresponding security-related AuditActionGroups are explicitly enabled. For example:

AuditActionGroup | Captures
DATABASE_OBJECT_CHANGE_GROUP | CREATE / ALTER / DROP on database objects
DATABASE_PRINCIPAL_CHANGE_GROUP | User / role changes
DATABASE_ROLE_MEMBER_CHANGE_GROUP | Role membership updates

DDL operations such as CREATE / ALTER / DROP on database objects are captured under action groups like DATABASE_OBJECT_CHANGE_GROUP.

Observed Behavior on a Newly Created Test Server
Running the following PowerShell command on a newly provisioned logical server showed only the default audit action groups enabled:
(Get-AzSqlServerAudit -ResourceGroupName "RGName" -ServerName "ServerName").AuditActionGroup
Therefore, DDL statements were audited but recorded as action_id = BCM.

Enabling AL / CR / DR Action IDs
To capture DDL operations under their respective audit action IDs, configure the required security audit action groups at the SQL server level. In this customer scenario, we executed the following command:
Set-AzSqlServerAudit -ResourceGroupName "RGName" -ServerName "ServerName" -AuditActionGroup "DATABASE_PRINCIPAL_CHANGE_GROUP", "DATABASE_ROLE_MEMBER_CHANGE_GROUP", "DATABASE_OBJECT_CHANGE_GROUP"
After applying this configuration, DDL operations were captured in the audit logs as action_id = CR, AL, and DR instead of BCM.

Ensuring Consistent Compliance Across PaaS Servers
To standardize audit logging behavior across environments:
Step 1: Compare AuditActionGroups. Run the following command on all servers:
(Get-AzSqlServerAudit -ResourceGroupName "<RG>" -ServerName "<ServerName>").AuditActionGroup
Step 2: Align AuditActionGroups. Configure all servers with the same AuditActionGroup values.
In this case, the values used were:
Set-AzSqlServerAudit -ResourceGroupName "<RG>" -ServerName "<ServerName>" -AuditActionGroup `
    "DATABASE_PRINCIPAL_CHANGE_GROUP", "DATABASE_ROLE_MEMBER_CHANGE_GROUP", "DATABASE_OBJECT_CHANGE_GROUP"
Step 3: Validate. Once aligned, similar SQL statements across all PaaS servers should generate consistent action_id values in audit logs.

Accepted Values for AuditActionGroups
Ensure appropriate groups are enabled based on your organization’s compliance needs. Accepted values: BATCH_STARTED_GROUP, BATCH_COMPLETED_GROUP, APPLICATION_ROLE_CHANGE_PASSWORD_GROUP, BACKUP_RESTORE_GROUP, DATABASE_LOGOUT_GROUP, DATABASE_OBJECT_CHANGE_GROUP, DATABASE_OBJECT_OWNERSHIP_CHANGE_GROUP, DATABASE_OBJECT_PERMISSION_CHANGE_GROUP, DATABASE_OPERATION_GROUP, DATABASE_PERMISSION_CHANGE_GROUP, DATABASE_PRINCIPAL_CHANGE_GROUP, DATABASE_PRINCIPAL_IMPERSONATION_GROUP, DATABASE_ROLE_MEMBER_CHANGE_GROUP, FAILED_DATABASE_AUTHENTICATION_GROUP, SCHEMA_OBJECT_ACCESS_GROUP, SCHEMA_OBJECT_CHANGE_GROUP, SCHEMA_OBJECT_OWNERSHIP_CHANGE_GROUP, SCHEMA_OBJECT_PERMISSION_CHANGE_GROUP, SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP, USER_CHANGE_PASSWORD_GROUP, LEDGER_OPERATION_GROUP, DBCC_GROUP, DATABASE_OWNERSHIP_CHANGE_GROUP, DATABASE_CHANGE_GROUP

Links:
Get-AzSqlServerAudit (Az.Sql) | Microsoft Learn
Set-AzSqlServerAudit (Az.Sql) | Microsoft Learn
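To run Step 1 across a whole fleet rather than server by server, a small comparison sketch (the resource groups and server names are placeholders):

$Servers = @(
    @{ Rg = 'RG-A'; Name = 'paas-sql-a' },
    @{ Rg = 'RG-B'; Name = 'paas-sql-b' }
)
foreach ($S in $Servers) {
    $Groups = (Get-AzSqlServerAudit -ResourceGroupName $S.Rg -ServerName $S.Name).AuditActionGroup
    # One row per server makes configuration drift easy to spot
    [pscustomobject]@{ Server = $S.Name; AuditActionGroups = (($Groups | Sort-Object) -join ', ') }
}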
Creating a Planner Weekly Notification Email for Incomplete Tasks
A reader wanted a weekly incomplete task report to send details of Planner tasks to people with outstanding work to do. We used PowerShell to scan for incomplete tasks assigned to members of a group, perform some analysis on the data, and create and send the email. Despite some deficiencies in the Planner Graph API, the code is pretty straightforward. https://office365itpros.com/2026/04/15/weekly-incomplete-task-report/
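This is not the article's actual code, but the core query it describes looks roughly like this with the Graph SDK (the plan ID and scopes are assumptions):

Connect-MgGraph -Scopes "Tasks.Read", "Group.Read.All"

# Incomplete tasks are those below 100 percent complete
Get-MgPlannerPlanTask -PlannerPlanId $PlanId |
    Where-Object { $_.PercentComplete -lt 100 } |
    Select-Object Title, DueDateTime, PercentComplete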
Writing PowerShell for the Eventually Consistent Entra ID Database
Entra ID uses an eventually consistent, multi-region database architecture. PowerShell code that fetches and updates Entra ID objects needs to interact with the database in the most efficient manner. This article illustrates some guidance from Microsoft engineering with examples from the Microsoft Graph PowerShell SDK. I’m sure your scripts already use these techniques, but if not, we have some helpful pointers. https://office365itpros.com/2026/04/13/eventually-consistent-entra-id/
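One technique this kind of guidance usually covers, shown as a minimal sketch: advanced directory queries must opt in to eventual consistency with the ConsistencyLevel header plus a count request:

# endsWith filters are "advanced queries" and require both extra parameters
Get-MgUser -Filter "endsWith(mail,'@contoso.com')" `
    -ConsistencyLevel eventual -CountVariable Count -All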