PowerShell
BLOG: Determine and modernize Filesystem Deduplication
Version history: 1.6 added references / links; 1.5 added insights from Steven Ekren (many thanks!), added ReFS docs link and clarified drawbacks; 1.4 revised the script so ReFS volumes with classic dedup are identified, added more eligibility checks and error handling; 1.3 added point #4 in the migration guidance; 1.2 revised the script; 1.1 formatting.

This blog explains the two Windows deduplication modes: classic Windows Data Deduplication (NTFS or ReFS) and ReFS Deduplication (ReFS). It covers how they differ, why you should consider upgrading to Windows Server 2025 to use the new ReFS dedup engine, and clear warnings about scenarios where ReFS is not recommended. Practical migration guidance and detection commands are included.

Differences between classic dedup and ReFS dedup

- File system: Classic dedup runs on NTFS or ReFS; ReFS dedup runs on ReFS only and requires Windows Server 2025 or later.
- Implementation: They are separate engines with different metadata formats and management cmdlets.
- Management: Classic dedup uses the Dedup PowerShell module (Get-DedupVolume, Start-DedupJob, Disable-DedupVolume). ReFS dedup uses its own cmdlets (Get-ReFSDedupStatus, Enable-ReFSDedup).
- Conversion: There is no in-place conversion between the two; metadata and chunk formats are incompatible.
- Improvements: The new in-line ReFS deduplication leverages the strengths of the ReFS file system, making deduplication more efficient and less CPU intensive. It can also compress data in-line using the LZ4 algorithm, bringing it up to par with enterprise solutions often found in SAN storage or Linux appliances. Compression is optional and is configured per volume.

Edit: Steven Ekren, a former Senior Product Manager for Hyper-V, shared valuable insights on how both engines operate in a comment on LinkedIn: "[...] the basic conceptual difference between WS Deduplication and ReFS deduplication is that the Windows Server [dedup] version takes the duplicate file data and moves it to a repository and puts a reparse point in the file system from each point that references the data. This involves data movement and therefore not recommended for workloads that are changing its data often, but best for more static data like documents and picture/videos. ReFS is a file system that uses links natively for all the objects so leaving the data in place and managing the links is much more efficient and doesn't involve the data copy and managing a repository. Effectively it's built into the file system. As the blog notes, there are some situations not recommended for this version of dedupe, but generally it's lower performance and storage I/O impact."

Why upgrade to Windows Server 2025

- Improved version of the ReFS file system.
- Improved ReFS in-line deduplication plus optional LZ4 compression: Server 2025 includes enhancements to ReFS dedup performance, scalability, and integration with modern storage features (a short enablement sketch follows this section).
- Support and fixes: Windows Server 2016 and 2019 are past mainstream support, increasing the likelihood of costly support cases and delayed fixes; upgrading reduces operational risk and ensures access to ongoing improvements.
- Future compatibility: Newer OS releases receive optimizations and bug fixes for ReFS and dedup scenarios that older releases will not.
- SMB compression: noticeably faster data transfers at minimal CPU cost when moving data across the network.
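To make the enablement concrete, here is a minimal, hedged sketch. It assumes a Windows Server 2025 host where the ReFS dedup cmdlets are available; the drive letter is a placeholder and the -Type values are the ones I recall from the documentation for Enable-ReFSDedup, so verify them with Get-Help on your build.

```powershell
# Minimal sketch: enable the new ReFS deduplication engine on an existing ReFS volume.
# Assumes Windows Server 2025 and the ReFS dedup module; verify parameter names and
# values with Get-Help Enable-ReFSDedup on your build. "E:" is a placeholder.

# Confirm the target volume is ReFS before enabling anything
Get-Volume -DriveLetter E | Select-Object DriveLetter, FileSystem, SizeRemaining, Size

# Enable dedup only, or dedup plus in-line compression (the -Type value is an assumption from the docs)
Enable-ReFSDedup -Volume "E:" -Type DedupAndCompress

# Check the resulting state and savings
Get-ReFSDedupStatus -Volume "E:"
```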
For feature and security related improvements, refer to the available Microsoft Windows Server 2025 Summit content on techcommunity.microsoft.com.

Scenarios where ReFS is not recommended

- ReFS on SAN in clustered CSV environments: Avoid placing ReFS dedup on top of SAN-backed Cluster Shared Volumes (CSVFS) in production clusters; clustered SAN/CSV scenarios have caused severe performance issues in practice. Please refer to the ReFS documentation.
- Many small, fast-changing files (personal opinion and experience, not endorsed by Microsoft): Workloads with frequent small writes, such as user profiles, folder redirection of AppData folders, or applications that churn small config files (for example, Lotus Notes config files), can cause locks, performance degradation, or unexpected behavior on ReFS. Exclude these disks from dedup or keep them on NTFS. Note: Restrictions around high churn rates, such as lockups, high RAM consumption, deadlocks, or BSODs, may have been addressed in Windows Server 2025 and ReFS dedup; see the comment from Steven Ekren above. Improving reliability and performance is a top goal for ReFS, to drive adoption and feature parity with NTFS. For information about feature parity, please refer to the ReFS documentation.

Migration guidance

The following instructions describe a high-level, supported migration path from Windows deduplication on the NTFS file system to native ReFS deduplication.

Note: Step #3, the data migration, is not required when you are already using ReFS with classic Data Deduplication. In that case it is enough to execute steps #1 and #2.

Note: Validate on non-production data first. Plan for rehydration time and network/storage throughput. Ensure backups are current before starting, and take a full backup before upgrading the server OS or making changes.

1. Disable classic dedup on the NTFS source:
   Disable-DedupVolume -Volume YourDriveLetter:
2. Rehydrate (un-deduplicate) the data:
   Start-DedupJob -Volume YourDriveLetter: -Type Unoptimization
3. Copy or move the data to a ReFS volume (new target): For straightforward NTFS→ReFS copies, robocopy is recommended (a sketch follows this section). A GUI- and job-based alternative is the File Server Migration feature in Windows Admin Center (which uses robocopy). For complex scenarios, such as open files, long path names, very large datasets (> 5 TB) or many small files, restructuring, a GUI (including on Windows Server Core), automation, improved logging, or cloud/hybrid migrations, I recommend GS RichCopy Enterprise by GuruSquad for higher speed (up to 40%) and reliability compared to robocopy.
4. Optionally remove the Windows Server feature: When the old deduplication is no longer in use, consider removing the feature. The advantages of doing so: it removes an unnecessary service; it removes the dedup file system filter driver, which impacts performance even when not in use; and it removes the PowerShell cmdlets for the old dedup, so they cannot mistakenly be used by existing scripts or unaware admins.

When migrating files over the network:

- SMB compression: Ensure both source and target run Windows Server 2025 and leverage SMB compression. SMB compression is available with Microsoft xcopy, Microsoft robocopy, and GuruSquad GSCopy Enterprise.
- Balancing and teaming with SMB: SMB does not require LBFO or SET teaming. It automatically detects network links and actively balances on its own on Windows Server 2016 and later. Using teaming, depending on the configuration, can negatively affect transfer speed.
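As a concrete starting point for step 3, the sketch below shows a typical robocopy invocation. The source and target paths, thread count, and log path are placeholder assumptions, and the /COMPRESS switch (which requests SMB compression) is only honored on OS versions that support it.

```powershell
# Hedged sketch for step 3: copy rehydrated data from the NTFS source to the new ReFS target.
# D:\Data (source), R:\Data (target), the thread count, and the log path are placeholders.
robocopy "D:\Data" "R:\Data" /MIR /COPYALL /DCOPY:DAT /MT:32 /R:1 /W:1 /XJ /COMPRESS /LOG:"C:\Temp\ntfs-to-refs.log"

# Review the log and exit code before switching workloads to the new volume;
# robocopy exit codes below 8 indicate the copy completed without failures.
```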
Quick detection and diagnostic commands

Check file systems:
Get-Volume | Select-Object DriveLetter, FileSystem

Check the classic dedup feature:
Get-WindowsFeature -Name FS-Data-Deduplication
Get-DedupVolume
Get-DedupStatus

Check ReFS dedup:
Get-Command -Module Microsoft.ReFsDedup.Commands
Get-ReFSDedupStatus -Volume YourDriveLetter:

Diagnostic script to detect both:

```powershell
<#
.SYNOPSIS
Detects classic NTFS Data Deduplication and ReFS Deduplication across local volumes.
.DESCRIPTION
- Reports NTFS volumes with classic Data Dedup enabled.
- Lists ReFS volumes present on the host.
- If the ReFS dedup cmdlet exists AND OS build >= 26100, checks ReFS dedup status per ReFS volume.
- Color coding:
  * Classic dedup enabled → Yellow
  * Classic dedup not enabled → Cyan
  * ReFS dedup enabled → Green
  * ReFS dedup not enabled → Cyan
.NOTES
Version: 1.7
Author: Karl Wester-Ebbinghaus + Copilot
Requirements: Elevated PowerShell session, PowerShell 5.1 or newer
Supported OS: Windows Server 2025, Azure Stack HCI 24H2 or newer
Unsupported OS: Windows 10, Windows 11 (script terminates)
#>

#region Initialization
Write-Verbose "Initializing variables and environment..."
$Volumes = $null
$Volume = $null
$DedupVolumesList = $null
$DedupReFSVolumesList = $null
$DedupReFSVolumesListLetters = $null
$DedupReFSStatus = $null
$refsCmd = $null
$OSBuild = $null
$runReFSDedupChecks = $null
#endregion Initialization

#region Volume Discovery
Clear-Host
Write-Verbose "Querying NTFS and ReFS volumes..."
$Volumes = Get-Volume | Where-Object FileSystem -in 'NTFS','ReFS'
#endregion Volume Discovery

#region ReFS Dedup Cmdlet, OS Build and OS SKU Detection
Write-Verbose "Checking for ReFS deduplication cmdlet..."
$refsCmd = Get-Command -Name Get-ReFSDedupStatus -ErrorAction SilentlyContinue

Write-Verbose "Reading OS build number..."
try {
    $OSBuild = [int](Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion' -Name CurrentBuildNumber).CurrentBuildNumber
}
catch {
    Write-Verbose "Registry read for OS build failed. Falling back to Environment OSVersion."
    $OSBuild = [int][Environment]::OSVersion.Version.Build
} # end try/catch for OS build detection

Write-Verbose "Checking OS InstallationType and EditionID..."
$CurrentVersionKey = Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion'
$InstallationType = $CurrentVersionKey.InstallationType  # "Client" or "Server"
$EditionID = $CurrentVersionKey.EditionID                # e.g. "AzureStackHCI", "ServerStandard", etc.

Write-Verbose "Detected InstallationType: $InstallationType"
Write-Verbose "Detected EditionID: $EditionID"
Write-Verbose "Detected OSBuild: $OSBuild"

# Block Windows 10/11 (Client OS)
if ($InstallationType -eq 'Client') {
    Write-Error "Unsupported OS detected: Windows Client (Windows 10/11). Only Windows Server or Azure Stack HCI are supported. Script will terminate."
    exit
}

# Allow Azure Stack HCI explicitly
if ($EditionID -eq 'AzureStackHCI') {
    Write-Verbose "Azure Stack HCI detected. Supported platform."
}
else {
    # Must be Windows Server
    if ($InstallationType -ne 'Server') {
        Write-Error "Unsupported OS detected. Only Windows Server or Azure Stack HCI are supported. Script will terminate."
        exit
    }
    Write-Verbose "Windows Server detected (EditionID: $EditionID). Supported platform."
}

Write-Verbose "Evaluating ReFS dedup eligibility based on cmdlet presence and build >= 26100..."
$runReFSDedupChecks = $false
if ($refsCmd -and ($OSBuild -ge 26100)) {
    $runReFSDedupChecks = $true
    Write-Verbose "ReFS dedup checks ENABLED (cmdlet present and OS build >= 26100)."
}
else {
    Write-Verbose "ReFS dedup checks DISABLED (cmdlet missing or OS build < 26100)."
}
#endregion ReFS Dedup Cmdlet, OS Build and OS SKU Detection

#region Main Loop
foreach ($Volume in $Volumes) { # begin foreach volume loop
    Write-Host "Volume $($Volume.DriveLetter): ($($Volume.FileSystem))"
    Write-Verbose "Processing volume $($Volume.DriveLetter)..."

    #region Classic Dedup + ReFS Volume Listing
    if ($Volume.FileSystem -eq 'NTFS' -or $Volume.FileSystem -eq 'ReFS') {
        Write-Verbose "Checking classic deduplication status for volume $($Volume.DriveLetter)..."
        $DedupVolumesList = Get-DedupVolume -Volume $Volume.DriveLetter -ErrorAction SilentlyContinue
        if ($DedupVolumesList) {
            Write-Host " → Classic Data Dedup ENABLED on $($Volume.DriveLetter), $($Volume.FileSystem)" -ForegroundColor Yellow
        }
        else {
            Write-Host " → Classic Data Dedup NOT enabled on $($Volume.DriveLetter), $($Volume.FileSystem)" -ForegroundColor Cyan
        } # end if classic dedup enabled

        Write-Verbose "Listing ReFS volumes on host..."
        $DedupReFSVolumesList = Get-Volume | Where-Object FileSystem -eq 'ReFS'
        if ($DedupReFSVolumesList) {
            $DedupReFSVolumesListLetters = ($DedupReFSVolumesList | ForEach-Object { $_.DriveLetter }) -join ','
            Write-Host " → ReFS volumes present on host: $DedupReFSVolumesListLetters"
        }
        else {
            Write-Host " → No ReFS volumes detected on host"
        } # end if ReFS volumes present
    } # end NTFS/ReFS block
    #endregion Classic Dedup + ReFS Volume Listing

    #region ReFS Dedup Status
    if ($Volume.FileSystem -eq 'ReFS') {
        if ($runReFSDedupChecks) {
            Write-Verbose "Checking ReFS deduplication status for volume $($Volume.DriveLetter)..."
            $DedupReFSStatus = Get-ReFSDedupStatus -Volume $Volume.DriveLetter -ErrorAction SilentlyContinue
            if ($DedupReFSStatus) {
                Write-Host " → ReFS Dedup ENABLED on $($Volume.DriveLetter), $($Volume.FileSystem)" -ForegroundColor Green
            }
            else {
                Write-Host " → ReFS Dedup NOT enabled on $($Volume.DriveLetter), $($Volume.FileSystem)" -ForegroundColor Cyan
            } # end if ReFS dedup enabled
        }
        else {
            # Write-Error does not accept -ForegroundColor, so the skip messages use Write-Host instead
            if (-not $refsCmd) {
                Write-Host " → Skipping ReFS dedup check: Get-ReFSDedupStatus cmdlet not present" -ForegroundColor Cyan
            }
            else {
                Write-Host " → Skipping ReFS dedup check: OS build $OSBuild < required 26100" -ForegroundColor Cyan
            } # end reason for skipping ReFS dedup check
        } # end if runReFSDedupChecks
    } # end if ReFS filesystem block
    #endregion ReFS Dedup Status

    Write-Host ""
} # end foreach volume loop
#endregion Main Loop

#region End
Write-Verbose "Script completed."
#endregion End
```

Recommendations and next steps

- Inventory: Identify volumes using NTFS dedup and ReFS dedup, and map workloads that create many small or rapidly changing files.
- Plan: Schedule rehydration and migration windows; test ReFS dedup on representative datasets.
- Upgrade: Prioritize upgrading servers still on 2016/2019 (past mainstream support) to reduce support risk and gain the latest ReFS dedup improvements. Kindly consider reading my Windows Server Installation Guidance and Windows Server Upgrade Guidance.
- Exclude: Keep user profiles, AppData, and other high-churn small-file paths off ReFS dedup or on NTFS.
- Consider ReFS dedup with compression: Enable compression optionally. Mind that ReFS dedup compression is not the same as the classic "compress contents" option in File Explorer file properties; it is transparent to the application.
- Make smart decisions: Avoid dedup when the dataset changes fast or your dedup + compression rate is below 20% (a quick savings check is sketched at the end of this post).
  Usually you can expect 40% or more savings, and up to 80% in specific use cases such as VDI VHDX with ReFS dedup + compression.
- Plan your dedup jobs: Make use of the scheduling features for dedup jobs through PowerShell or Windows Admin Center (WAC) when using ReFS dedup on more than one volume per server. Otherwise the jobs might all run at the same time and impact your storage performance (especially spinning disks) as well as RAM and CPU consumption.
- Share and educate: Inform your infrastructure team about the changes so they avoid using the traditional dedup on ReFS.

Related blogposts:

- https://splitbrain.com/windows-data-deduplication-vs-refs-deduplication/ — thanks to Darryl van der Peijl and team.
- https://www.veeam.com/kb2023 — Veeam best practices for Windows Deduplication and ReFS Deduplication.
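To support the "make smart decisions" recommendation above, here is a minimal, hedged sketch for checking current savings on volumes that use classic dedup. The property names are the ones exposed by Get-DedupVolume; for ReFS dedup on Windows Server 2025 the savings would instead come from Get-ReFSDedupStatus.

```powershell
# Minimal sketch: list classic dedup savings per volume to judge whether dedup is worthwhile.
# Assumes the FS-Data-Deduplication feature is installed; for the new ReFS dedup engine,
# query Get-ReFSDedupStatus -Volume <letter>: per volume instead.
Get-DedupVolume |
    Select-Object Volume, Capacity, SavedSpace, SavingsRate |
    Sort-Object SavingsRate -Descending |
    Format-Table -AutoSize
```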
Build PowerShell as "framework-dependent"

Before I start a time-expensive and possibly unsuccessful attempt: is it reasonably easy to compile PowerShell from source and make it "framework-dependent" instead of "self-contained"? That way, PowerShell could be decoupled from the .NET runtime (which is most probably already installed), and perhaps an even newer runtime version could be used (currently, PowerShell does not ship with .NET 10, and I have to wait for a release that supports it)...
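For context, the general .NET mechanism the question is aiming at looks roughly like the sketch below. This is not the PowerShell repository's official build procedure (which, as far as I recall, goes through its own Start-PSBuild tooling); the project path and runtime identifier are assumptions, and the sketch only illustrates the --self-contained switch in question.

```powershell
# Hedged sketch: what a framework-dependent (non self-contained) publish looks like for a .NET app.
# The project path and runtime identifier below are placeholders, not the official PowerShell build steps.
dotnet publish .\src\powershell-win-core `
    --configuration Release `
    --runtime win-x64 `
    --self-contained false `
    --output .\publish-fdd

# A framework-dependent build then relies on a matching .NET runtime already installed on the host.
```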
WINGET is not recognized as a commandlet on win 2k19 server fresh setup

I have set up a new Windows Server 2019 machine and followed these instructions:

Install-PackageProvider -Name NuGet -Force | Out-Null
Install-Module -Name Microsoft.WinGet.Client -Force -Repository PSGallery | Out-Null
Repair-WinGetPackageManager

When I try any winget command, I get "winget is not recognized as a commandlet".
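For anyone hitting the same wall, a hedged way to check what the module actually set up looks like this. The switches shown are the ones documented for Repair-WinGetPackageManager in the Microsoft.WinGet.Client module; whether winget.exe is fully supported on Windows Server 2019 is a separate question to verify.

```powershell
# Hedged sketch: verify what Microsoft.WinGet.Client actually installed before troubleshooting further.
Import-Module Microsoft.WinGet.Client
Repair-WinGetPackageManager -AllUsers -Latest -Force

# Ask the module for the winget version it can see
Get-WinGetVersion

# Check whether winget.exe is resolvable in this session; a new session may be needed
# before PATH changes become visible.
Get-Command winget -ErrorAction SilentlyContinue
```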
Step-by-Step: How to work with Group Managed Service Accounts (gMSA)

Service accounts are recommended when installing applications or services in an infrastructure. A service account is a dedicated account with specific privileges used to run services, batch jobs, and management tasks. In most infrastructures, service accounts are typical user accounts with the "Password never expires" option set. Since these service accounts are not used regularly, administrators have to keep track of the accounts and their credentials. I have seen many occasions where engineers run into issues due to outdated or misplaced service account credential details. The painful part is that if you reset the password of a service account, you then need to update services, databases, and application settings to get the application or service up and running again. On top of that, engineers also have to manage service principal names (SPNs), which help to identify a service instance uniquely. A hedged example of creating a gMSA follows below.
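As a companion to the step-by-step article, here is a minimal sketch of the usual gMSA workflow with the ActiveDirectory module. The account name, DNS host name, and computer group are placeholder assumptions.

```powershell
# Minimal gMSA sketch using the ActiveDirectory RSAT module.
# Names below (svcWebApp, WebServers group, contoso.com) are placeholders.

# One-time per forest: create the KDS root key; allow ~10 hours for AD replication
# before the key can be used (labs often backdate it with -EffectiveTime instead).
Add-KdsRootKey -EffectiveImmediately

# Create the gMSA and allow a computer group to retrieve its managed password
New-ADServiceAccount -Name "svcWebApp" `
    -DNSHostName "svcWebApp.contoso.com" `
    -PrincipalsAllowedToRetrieveManagedPassword "WebServers"

# On a member server that belongs to the WebServers group: install and test the account
Install-ADServiceAccount -Identity "svcWebApp"
Test-ADServiceAccount -Identity "svcWebApp"
```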
How to Report DLP Alerts

MC1169572 announces that administrators can add classifications to DLP alerts to help with reporting. But how do you report DLP alerts? As it turns out, it’s relatively easy to retrieve DLP alerts via the Microsoft Graph Security API, and using the Get-MgSecurityAlertV2 cmdlet from the Microsoft Graph PowerShell SDK makes it even easier to find and report the data. https://office365itpros.com/2025/12/29/dlp-alerts-report/
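For readers who want to try this before reading the full post, a minimal sketch might look like the following. The SecurityAlert.Read.All scope and the serviceSource filter value are taken from the Graph security alerts documentation as I recall it, so treat them as assumptions to verify.

```powershell
# Hedged sketch: pull DLP alerts with the Microsoft Graph PowerShell SDK.
# 'microsoftDataLossPrevention' is assumed to be the serviceSource value for DLP alerts;
# verify it against the Graph security API documentation.
Connect-MgGraph -Scopes "SecurityAlert.Read.All"

$DlpAlerts = Get-MgSecurityAlertV2 -All -Filter "serviceSource eq 'microsoftDataLossPrevention'"

$DlpAlerts |
    Select-Object Id, Title, Severity, Status, CreatedDateTime |
    Sort-Object CreatedDateTime -Descending |
    Format-Table -AutoSize
```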
Microsoft Graph PowerShell SDK V2.34 Makes WAM the Default

The Web Account Manager (WAM) authentication broker becomes the default method for handling interactive Microsoft Graph PowerShell SDK connections from V2.34 onwards. The rapid release of a new version (V2.33 appeared 12 days beforehand) is usually a sign of a big problem, but in this case the reason is more likely to be a security vulnerability that’s just come to light. We’ll find out after the holidays. We've released V19.1 of the Automating Microsoft 365 with PowerShell eBook (the advantage of ePublishing) to keep pace with developments. https://office365itpros.com/2025/12/23/web-account-manager-graph-sdk/
Automating Microsoft 365 with PowerShell Second Edition

The Office 365 for IT Pros team are thrilled to announce the availability of Automating Microsoft 365 with PowerShell (2nd edition). This completely revised 350-page book delivers the most comprehensive coverage of how to use Microsoft Graph APIs and the Microsoft Graph PowerShell SDK with Microsoft 365 workloads (Entra ID, Exchange Online, SharePoint Online, Teams, Planner, and more). Existing subscribers can download the second edition now free of charge. https://office365itpros.com/2025/06/30/automating-microsoft-365-with-powershell2/
PowerShell Basics: Don't Fear Hitting Enter with -WhatIf

Chances are you've run into this situation: you've built a script, or a one-liner, to perform a specific task, but you don't have a way to thoroughly test it without hitting Enter. That moment before hitting Enter can be difficult. To address this need, many PowerShell commands support a switch called -WhatIf.
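To make the idea concrete, here is a minimal sketch: -WhatIf on a built-in cmdlet, plus how your own function can honor it by declaring SupportsShouldProcess. The function name and paths are illustrative only.

```powershell
# Preview a destructive command without executing it
Remove-Item -Path "C:\Temp\*.log" -WhatIf

# A custom function can honor -WhatIf (and -Confirm) by declaring SupportsShouldProcess
function Remove-OldLog {
    [CmdletBinding(SupportsShouldProcess)]
    param(
        [Parameter(Mandatory)]
        [string]$Path
    )

    if ($PSCmdlet.ShouldProcess($Path, "Delete log file")) {
        Remove-Item -Path $Path
    }
}

# Dry run: prints what would happen, deletes nothing
Remove-OldLog -Path "C:\Temp\app.log" -WhatIf
```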
Search Less, Build More: Inner Sourcing with GitHub Copilot and ADO MCP Server

Developers burn cycles context-switching: opening five repos to find a logging example, searching a wiki for a data masking rule, scrolling chat history for the latest pipeline pattern. Organisations I speak to are often on the path of transformational platform engineering projects but always have the fear or doubt of "what if my engineers don't use these resources?". While projects like Backstage still play a pivotal role in inner sourcing and discoverability, I also empathise with developers who would ask, "How would I even know in the first place which modules have or haven't been created for reuse?" In this blog we explore how we can ensure organisational standards and developer satisfaction without any heavy lifting on either side: no custom model training, no rewriting or relocating of repositories, and no stagnant local data. Using GitHub Copilot + the Azure DevOps MCP server (with the free `code_search` extension) we turn the IDE into an organizational knowledge interface. Instead of guessing or re-implementing, engineers can start scaffolding projects or solving issues as they normally would (hopefully using Copilot) and without extra prompting. GitHub Copilot can lean into organisational standards and ensure recommendations come with code snippets generated directly from existing examples.

What Is the Azure DevOps MCP Server + code_search Extension?

MCP (Model Context Protocol) is an open standard that lets agents (like GitHub Copilot) pull in structured, on-demand context from external systems. MCP servers contain natural-language explanations of the tools the agent can utilise, allowing dynamic decisions about when to use certain toolsets over others. The Azure DevOps MCP Server is the ADO product team's implementation of that standard. It exposes your ADO environment in a way Copilot can consume. Out of the box it gives you access to:

- Projects – list and navigate across projects in your organization.
- Repositories – browse repos, branches, and files.
- Work items – surface user stories, bugs, or acceptance criteria.
- Wikis – pull policies, standards, and documentation.

This means Copilot can ground its answers in live ADO content, instead of hallucinating or relying only on what's in the current editor window. The ADO server runs locally from your own machine to ensure that all sensitive project information remains within your secure network boundary. This also means that existing permissions on ADO objects such as projects or repositories are respected.

Wiki search tooling is available out of the box with the ADO MCP server and is very useful; however, if I am honest, I have seen these wikis go unused, with documentation stored elsewhere, either inside the repository or in a project management tool. This means any tool that needs to implement code requires the ability to accurately search the code stored in the repositories themselves. That is where enabling the code_search extension in ADO is so important. Most organisations have this enabled already, but it is worth noting that this prerequisite is the real unlock of cross-repo search. It allows Copilot to:

- Query for symbols, snippets, or keywords across all repos.
- Retrieve usage examples from code, not just docs.
- Locate standards (like logging wrappers or retry policies) wherever they live.
- Back every recommendation with specific source lines.

In short: MCP connects Copilot to Azure DevOps. code_search makes that connection powerful by turning it into a discovery engine.
What is the relevance of Copilot Instructions?

One of the less obvious but most powerful features of GitHub Copilot is its ability to follow instructions files. Copilot automatically looks for these files and uses them as a "playbook" for how it should behave. There are different types of instructions you can provide:

- Organisational instructions – apply across your entire workspace, regardless of which repo you're in.
- Repo-specific instructions – scoped to a particular repository, useful when one project has unique standards or patterns.
- Personal instructions – smaller overrides layered on top of global rules when a local exception applies (stored in .github/copilot-instructions.md).

In this solution, I'm using a single personal instructions file. It tells Copilot:

- When to search (e.g., always query repos and wikis before answering a standards question).
- Where to look (Azure DevOps repos, wikis, and with code_search, the code itself).
- How to answer (responses must cite the repo/file/line or wiki page; if no source is found, say so).
- How to resolve conflicts (prefer dated wiki entries over older README fragments).

As a small example, a section of a Copilot instruction file could look like this:

````markdown
# GitHub Copilot Instructions for Azure DevOps MCP Integration

This project uses Azure DevOps with MCP server integration to provide organizational context awareness. Always check to see if the Azure DevOps MCP server has a tool relevant to the user's request.

## Core Principles

### 1. Azure DevOps Integration
- **Always prioritize Azure DevOps MCP tools** when users ask about:
  - Work items, stories, bugs, tasks
  - Pull requests and code reviews
  - Build pipelines and deployments
  - Repository operations and branch management
  - Wiki pages and documentation
  - Test plans and test cases
  - Project and team information

### 2. Organizational Context Awareness
- Before suggesting solutions, **check existing organizational patterns** by:
  - Searching code across repositories for similar implementations
  - Referencing established coding standards and frameworks
  - Looking for existing shared libraries and utilities
  - Checking architectural decision records (ADRs) in wikis

### 3. Cross-Repository Intelligence
- When providing code suggestions:
  - **Search for existing patterns** in other repositories first
  - **Reference shared libraries** and common utilities
  - **Maintain consistency** with organizational standards
  - **Suggest reusable components** when appropriate

## Tool Usage Guidelines

### Work Items and Project Management
When users mention bugs, features, tasks, or project planning:
```
✅ Use: wit_my_work_items, wit_create_work_item, wit_update_work_item
✅ Use: wit_list_backlogs, wit_get_work_items_for_iteration
✅ Use: work_list_team_iterations, core_list_projects
```
````

The result...

To test this I created 3 ADO projects, each with one or two repositories. The repositories were light, with only READMEs inside containing descriptions of the "repo" and some code snippet examples for usage. I then created a brand-new workspace with no context apart from a Copilot instructions document (which could be part of a repo scaffold or organisation-wide) that tells Copilot to search code and the wikis across all ADO projects in my demo environment. It returns guidance and standards from all available repos and starts to use them to formulate its response. In the screenshot I have highlighted some key parts with red boxes.
The first is a section of the readme that Copilot has identified in its response, with that part also highlighted within the Copilot chat response. I have highlighted the rather generic prompt I used to get this response at the bottom of that window too. Above it, I have highlighted Copilot using the MCP server tooling, searching through projects, repos, and code. Finally, the largest box highlights the instructions given to Copilot on how to search and how easily these could be optimised or changed depending on the requirements and organisational coding standards.

How did I implement this?

Implementation is actually incredibly simple. As mentioned, I created multiple projects and repositories within my ADO organisation in order to test cross-project and cross-repo discovery. I then did the following:

1. Enable code_search in your Azure DevOps organization (Marketplace → install extension).
2. Login to Azure – use the AZ CLI to authenticate to Azure with "az login".
3. Create the .vscode/mcp.json file – a snippet is provided below; the organisation name should be changed to your organisation's name.
4. Start and enable your MCP server – in the mcp.json file you should see a "Start" button. Using the snippet below you will be prompted to add your organisation name. Ensure your Copilot agent has access to the server under "tools" too. View the setup guide for full setup instructions (azure-devops-mcp/docs/GETTINGSTARTED.md at main · microsoft/azure-devops-mcp).
5. Create a Copilot Instructions file with a search-first directive. I have inserted the full version used in this demo at the bottom of the article.
6. Experiment with prompts – start generic ("How do we secure APIs?"), review the output and the tools used, and then tailor your instructions.

Considerations

While this is a great approach, I still have some considerations when going to production:

- Latency – Using MCP tooling on every request will add some latency to developer requests. We can look at optimizing usage through Copilot instructions to better identify when Copilot should or shouldn't use the ADO MCP server.
- Complex projects and repositories – While I have demonstrated cross-project and cross-repository retrieval, my demo environment does not truly simulate an enterprise ADO environment. Performance should be tested and closely monitored as organisational complexity increases.
- Public preview – The ADO MCP server is moving quickly but is currently still in public preview.

We have demonstrated in this article how quickly we can make our Azure DevOps content discoverable. While there are considerations moving forward, this showcases a direction towards agentic inner sourcing. Feel free to comment below on how you think this approach could be extended or augmented for other use cases!

Resources

MCP Server Config (/.vscode/mcp.json)

```json
{
  "inputs": [
    {
      "id": "ado_org",
      "type": "promptString",
      "description": "Azure DevOps organization name (e.g. 'contoso')"
    }
  ],
  "servers": {
    "ado": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@azure-devops/mcp", "${input:ado_org}"]
    }
  }
}
```

Copilot Instructions (/.github/copilot-instructions.md)

````markdown
# GitHub Copilot Instructions for Azure DevOps MCP Integration

This project uses Azure DevOps with MCP server integration to provide organizational context awareness. Always check to see if the Azure DevOps MCP server has a tool relevant to the user's request.

## Core Principles
### 1. Azure DevOps Integration
- **Always prioritize Azure DevOps MCP tools** when users ask about:
  - Work items, stories, bugs, tasks
  - Pull requests and code reviews
  - Build pipelines and deployments
  - Repository operations and branch management
  - Wiki pages and documentation
  - Test plans and test cases
  - Project and team information

### 2. Organizational Context Awareness
- Before suggesting solutions, **check existing organizational patterns** by:
  - Searching code across repositories for similar implementations
  - Referencing established coding standards and frameworks
  - Looking for existing shared libraries and utilities
  - Checking architectural decision records (ADRs) in wikis

### 3. Cross-Repository Intelligence
- When providing code suggestions:
  - **Search for existing patterns** in other repositories first
  - **Reference shared libraries** and common utilities
  - **Maintain consistency** with organizational standards
  - **Suggest reusable components** when appropriate

## Tool Usage Guidelines

### Work Items and Project Management
When users mention bugs, features, tasks, or project planning:
```
✅ Use: wit_my_work_items, wit_create_work_item, wit_update_work_item
✅ Use: wit_list_backlogs, wit_get_work_items_for_iteration
✅ Use: work_list_team_iterations, core_list_projects
```

### Code and Repository Operations
When users ask about code, branches, or pull requests:
```
✅ Use: repo_list_repos_by_project, repo_list_pull_requests_by_repo
✅ Use: repo_list_branches_by_repo, repo_search_commits
✅ Use: search_code for finding patterns across repositories
```

### Documentation and Knowledge Sharing
When users need documentation or want to create/update docs:
```
✅ Use: wiki_list_wikis, wiki_get_page_content, wiki_create_or_update_page
✅ Use: search_wiki for finding existing documentation
```

### Build and Deployment
When users ask about builds, deployments, or CI/CD:
```
✅ Use: pipelines_get_builds, pipelines_get_build_definitions
✅ Use: pipelines_run_pipeline, pipelines_get_build_status
```

## Response Patterns

### 1. Discovery First
Before providing solutions, always discover organizational context:
```
"Let me first check what patterns exist in your organization..."
→ Search code, check repositories, review existing work items
```

### 2. Reference Organizational Standards
When suggesting code or approaches:
```
"Based on patterns I found in your [RepositoryName] repository..."
"Following your organization's standard approach seen in..."
"This aligns with the pattern established in [TeamName]'s implementation..."
```

### 3. Actionable Integration
Always offer to create or update Azure DevOps artifacts:
```
"I can create a work item for this enhancement..."
"Should I update the wiki page with this new pattern?"
"Let me link this to the current iteration..."
```

## Specific Scenarios

### New Feature Development
1. **Search existing repositories** for similar features
2. **Check architectural patterns** and shared libraries
3. **Review related work items** and planning documents
4. **Suggest implementation** based on organizational standards
5. **Offer to create work items** and documentation

### Bug Investigation
1. **Search for similar issues** across repositories and work items
2. **Check related builds** and recent changes
3. **Review test results** and failure patterns
4. **Provide solution** based on organizational practices
5. **Offer to create/update** bug work items and documentation

### Code Review and Standards
1. **Compare against organizational patterns** found in other repositories
2. **Reference coding standards** from wiki documentation
3. **Suggest improvements** based on established practices
4. **Check for reusable components** that could be leveraged

### Documentation Requests
1. **Search existing wikis** for related content
2. **Check for ADRs** and technical documentation
3. **Reference patterns** from similar projects
4. **Offer to create/update** wiki pages with findings

## Error Handling
If Azure DevOps MCP tools are not available or fail:
1. **Inform the user** about the limitation
2. **Provide alternative approaches** using available information
3. **Suggest manual steps** for Azure DevOps integration
4. **Offer to help** with configuration if needed

## Best Practices

### Always Do:
- ✅ Search organizational context before suggesting solutions
- ✅ Reference existing patterns and standards
- ✅ Offer to create/update Azure DevOps artifacts
- ✅ Maintain consistency with organizational practices
- ✅ Provide actionable next steps

### Never Do:
- ❌ Suggest solutions without checking organizational context
- ❌ Ignore existing patterns and implementations
- ❌ Provide generic advice when specific organizational context is available
- ❌ Forget to offer Azure DevOps integration opportunities

---

**Remember: The goal is to provide intelligent, context-aware assistance that leverages the full organizational knowledge base available through Azure DevOps while maintaining development efficiency and consistency.**
````
add user to SharePoint group via PowerShell error

Hi, I'm trying to use PowerShell to add a user to an existing SharePoint group. I ran the following to connect to the SharePoint site:

Connect-SPOService -Url https://site1-admin.sharepoint.com

This prompts me to log in with my password and MFA code. Afterwards I type in the following to add a user:

Add-SPOUser -Site https://site1-admin.sharepoint.com/sites/company -Group "Company Info Members" -LoginName [email address removed for privacy reasons]

There is a SharePoint group with the name "Company Info Members", which I want to add [email address removed for privacy reasons] to. But I get the following error:

Add-SPOUser : Unknown Error
At line:1 char:1
+ Add-SPOUser -Site https://site1-admin.sharepoint.com/s ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Add-SPOUser], ServerException
    + FullyQualifiedErrorId : Microsoft.SharePoint.Client.ServerException,Microsoft.Online.SharePoint.PowerShell.AddSPOUser

I was wondering what could be causing this error. I searched for the error with Copilot and it suggested I use "PnP.PowerShell", but I'm having some issues installing that module. Are there any suggestions? Thanks! Jason
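For readers following this thread, here is a hedged, unverified sketch of the PnP.PowerShell route that was suggested. The site URL, tenant name, and user address are placeholders (the real address is redacted above), the cmdlet and parameter names are from the PnP documentation as I recall them, and one assumption worth double-checking is that the site URL passed to these cmdlets (and to Add-SPOUser's -Site parameter) should be the regular tenant URL rather than the -admin URL.

```powershell
# Hedged sketch (not a verified answer): adding a user to a SharePoint group with PnP.PowerShell.
# URLs, group name, and the user address below are placeholders.
Install-Module -Name PnP.PowerShell -Scope CurrentUser

# Newer PnP.PowerShell releases require an Entra app registration and a -ClientId for interactive login
Connect-PnPOnline -Url "https://site1.sharepoint.com/sites/company" -Interactive

# Add the user to the existing SharePoint group
Add-PnPGroupMember -Group "Company Info Members" -LoginName "user@yourtenant.com"
```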