# Search Less, Build More: Inner Sourcing with GitHub Copilot and ADO MCP Server
Developers burn cycles context-switching: opening five repos to find a logging example, searching a wiki for a data-masking rule, scrolling chat history for the latest pipeline pattern. Organisations I speak to are often on the path of transformational platform engineering projects, but always carry the fear or doubt of "what if my engineers don't use these resources?". While projects like Backstage still play a pivotal role in inner sourcing and discoverability, I also empathise with developers who would argue, "How would I even know, in the first place, which modules have or haven't been created for reuse?"

In this blog we explore how to uphold organisational standards and developer satisfaction without heavy lifting on either side: no custom model training, no rewriting or relocating of repositories, and no stagnant local data. Using GitHub Copilot + the Azure DevOps MCP server (with the free `code_search` extension), we turn the IDE into an organisational knowledge interface. Instead of guessing or re-implementing, engineers can start scaffolding projects or solving issues as they normally would (hopefully using Copilot) and, without extra prompting, Copilot can lean on organisational standards and ensure recommendations are made with code snippets generated directly from existing examples.

## What Is the Azure DevOps MCP Server + code_search Extension?

MCP (Model Context Protocol) is an open standard that lets agents (like GitHub Copilot) pull in structured, on-demand context from external systems. MCP servers describe their tools in natural language, which lets the agent decide dynamically when to use one toolset over another. The Azure DevOps MCP Server is the ADO product team's implementation of that standard. It exposes your ADO environment in a way Copilot can consume. Out of the box it gives you access to:

- **Projects** – list and navigate across projects in your organisation.
- **Repositories** – browse repos, branches, and files.
- **Work items** – surface user stories, bugs, or acceptance criteria.
- **Wikis** – pull policies, standards, and documentation.

This means Copilot can ground its answers in live ADO content, instead of hallucinating or relying only on what's in the current editor window. The ADO server runs locally on your own machine, so all sensitive project information remains within your secure network boundary. This also means that existing permissions on ADO objects such as projects or repositories are respected.

The wiki search tooling available out of the box with the ADO MCP server is very useful; however, if I am honest, I have seen these wikis go unused, with documentation stored elsewhere — either inside the repository or in a project management tool. Any tool that needs to implement code therefore requires the ability to accurately search the code stored in the repositories themselves. That is where enabling the `code_search` extension in ADO is so important. Most organisations have this enabled already, but it is worth noting that this prerequisite is the real unlock of cross-repo search. It allows Copilot to:

- Query for symbols, snippets, or keywords across all repos.
- Retrieve usage examples from code, not just docs.
- Locate standards (like logging wrappers or retry policies) wherever they live.
- Back every recommendation with specific source lines.

In short: MCP connects Copilot to Azure DevOps. `code_search` makes that connection powerful by turning it into a discovery engine.
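As a side note, if you prefer scripting over the Marketplace UI, the extension can also be installed with the Azure DevOps CLI. This is a hedged sketch: the publisher and extension IDs below are assumed from the Code Search Marketplace listing (`ms.vss-code-search`), so verify them against your tenant before relying on this.

```powershell
# Assumes the Azure CLI is installed and you are signed in via 'az login'.
# Add the Azure DevOps extension for the az CLI (one-time setup).
az extension add --name azure-devops

# Install the Code Search extension organisation-wide.
# Publisher/extension IDs assumed from the Marketplace listing.
az devops extension install `
    --publisher-id ms `
    --extension-id vss-code-search `
    --organization https://dev.azure.com/contoso
```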
## What is the relevance of Copilot Instructions?

One of the less obvious but most powerful features of GitHub Copilot is its ability to follow instructions files. Copilot automatically looks for these files and uses them as a "playbook" for how it should behave. There are different types of instructions you can provide:

- **Organisational instructions** – apply across your entire workspace, regardless of which repo you're in.
- **Repo-specific instructions** – scoped to a particular repository, useful when one project has unique standards or patterns.
- **Personal instructions** – smaller overrides layered on top of global rules when a local exception applies (stored in `.github/copilot-instructions.md`).

In this solution, I'm using a single personal instructions file. It tells Copilot:

- **When to search** (e.g., always query repos and wikis before answering a standards question).
- **Where to look** (Azure DevOps repos, wikis, and, with `code_search`, the code itself).
- **How to answer** (responses must cite the repo/file/line or wiki page; if no source is found, say so).
- **How to resolve conflicts** (prefer dated wiki entries over older README fragments).

As a small example, a section of a Copilot instructions file could look like this:

````markdown
# GitHub Copilot Instructions for Azure DevOps MCP Integration

This project uses Azure DevOps with MCP server integration to provide
organizational context awareness. Always check to see if the Azure DevOps
MCP server has a tool relevant to the user's request.

## Core Principles

### 1. Azure DevOps Integration
- **Always prioritize Azure DevOps MCP tools** when users ask about:
  - Work items, stories, bugs, tasks
  - Pull requests and code reviews
  - Build pipelines and deployments
  - Repository operations and branch management
  - Wiki pages and documentation
  - Test plans and test cases
  - Project and team information

### 2. Organizational Context Awareness
- Before suggesting solutions, **check existing organizational patterns** by:
  - Searching code across repositories for similar implementations
  - Referencing established coding standards and frameworks
  - Looking for existing shared libraries and utilities
  - Checking architectural decision records (ADRs) in wikis

### 3. Cross-Repository Intelligence
- When providing code suggestions:
  - **Search for existing patterns** in other repositories first
  - **Reference shared libraries** and common utilities
  - **Maintain consistency** with organizational standards
  - **Suggest reusable components** when appropriate

## Tool Usage Guidelines

### Work Items and Project Management
When users mention bugs, features, tasks, or project planning:
```
✅ Use: wit_my_work_items, wit_create_work_item, wit_update_work_item
✅ Use: wit_list_backlogs, wit_get_work_items_for_iteration
✅ Use: work_list_team_iterations, core_list_projects
```
````

## The result...

To test this I created three ADO projects, each with one or two repositories. The repositories were light, with only READMEs inside containing descriptions of the "repo" and some code snippet examples for usage. I then created a brand-new workspace with no context apart from a Copilot instructions document (which could be part of a repo scaffold or organisation-wide) telling Copilot to search the code and wikis across all ADO projects in my demo environment. It returns guidance and standards from all available repos and starts using them to formulate its response. In the screenshot I have highlighted some key parts with red boxes.
The first is a section of the README that Copilot has identified in its response, also highlighted within the Copilot chat response. I have highlighted the rather generic prompt I used to get this response at the bottom of that window too. Above it, I have highlighted Copilot using the MCP server tooling to search through projects, repos, and code. Finally, the largest box highlights the instructions given to Copilot on how to search, and how easily these could be optimised or changed depending on the requirements and organisational coding standards.

## How did I implement this?

Implementation is actually incredibly simple. As mentioned, I created multiple projects and repositories within my ADO organisation in order to test cross-project and cross-repo discovery. I then did the following:

1. **Enable code_search** – in your Azure DevOps organisation (Marketplace → install extension).
2. **Log in to Azure** – use the Azure CLI to authenticate to Azure with `az login`.
3. **Create a `.vscode/mcp.json` file** – the snippet is provided below; the organisation name should be changed to your organisation's name.
4. **Start and enable your MCP server** – in the `mcp.json` file you should see a "Start" button. Using the snippet below, you will be prompted to add your organisation name. Ensure your Copilot agent has access to the server under "tools" too. See this guide for full setup instructions: [azure-devops-mcp/docs/GETTINGSTARTED.md at main · microsoft/azure-devops-mcp](https://github.com/microsoft/azure-devops-mcp/blob/main/docs/GETTINGSTARTED.md).
5. **Create a Copilot instructions file** – with a search-first directive. I have inserted the full version used in this demo at the bottom of the article.
6. **Experiment with prompts** – start generic ("How do we secure APIs?"), review the output and the tools used, and then tailor your instructions.

## Considerations

While this is a great approach, I do still have some considerations when going to production:

- **Latency** – using MCP tooling on every request will add some latency to developer requests. We can optimise usage through Copilot instructions to better identify when Copilot should or shouldn't use the ADO MCP server.
- **Complex projects and repositories** – while I have demonstrated cross-project and cross-repository retrieval, my demo environment does not truly simulate an enterprise ADO environment. Performance should be tested and closely monitored as organisational complexity increases.
- **Public preview** – the ADO MCP server is moving quickly but is currently still in public preview.

We have demonstrated in this article how quickly we can make our Azure DevOps content discoverable. While there are considerations moving forward, this showcases a direction towards agentic inner sourcing. Feel free to comment below on how you think this approach could be extended or augmented for other use cases!

## Resources

### MCP Server Config (/.vscode/mcp.json)

```json
{
  "inputs": [
    {
      "id": "ado_org",
      "type": "promptString",
      "description": "Azure DevOps organization name (e.g. 'contoso')"
    }
  ],
  "servers": {
    "ado": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@azure-devops/mcp", "${input:ado_org}"]
    }
  }
}
```

### Copilot Instructions (/.github/copilot-instructions.md)

````markdown
# GitHub Copilot Instructions for Azure DevOps MCP Integration

This project uses Azure DevOps with MCP server integration to provide
organizational context awareness. Always check to see if the Azure DevOps
MCP server has a tool relevant to the user's request.

## Core Principles

### 1. Azure DevOps Integration
- **Always prioritize Azure DevOps MCP tools** when users ask about:
  - Work items, stories, bugs, tasks
  - Pull requests and code reviews
  - Build pipelines and deployments
  - Repository operations and branch management
  - Wiki pages and documentation
  - Test plans and test cases
  - Project and team information

### 2. Organizational Context Awareness
- Before suggesting solutions, **check existing organizational patterns** by:
  - Searching code across repositories for similar implementations
  - Referencing established coding standards and frameworks
  - Looking for existing shared libraries and utilities
  - Checking architectural decision records (ADRs) in wikis

### 3. Cross-Repository Intelligence
- When providing code suggestions:
  - **Search for existing patterns** in other repositories first
  - **Reference shared libraries** and common utilities
  - **Maintain consistency** with organizational standards
  - **Suggest reusable components** when appropriate

## Tool Usage Guidelines

### Work Items and Project Management
When users mention bugs, features, tasks, or project planning:
```
✅ Use: wit_my_work_items, wit_create_work_item, wit_update_work_item
✅ Use: wit_list_backlogs, wit_get_work_items_for_iteration
✅ Use: work_list_team_iterations, core_list_projects
```

### Code and Repository Operations
When users ask about code, branches, or pull requests:
```
✅ Use: repo_list_repos_by_project, repo_list_pull_requests_by_repo
✅ Use: repo_list_branches_by_repo, repo_search_commits
✅ Use: search_code for finding patterns across repositories
```

### Documentation and Knowledge Sharing
When users need documentation or want to create/update docs:
```
✅ Use: wiki_list_wikis, wiki_get_page_content, wiki_create_or_update_page
✅ Use: search_wiki for finding existing documentation
```

### Build and Deployment
When users ask about builds, deployments, or CI/CD:
```
✅ Use: pipelines_get_builds, pipelines_get_build_definitions
✅ Use: pipelines_run_pipeline, pipelines_get_build_status
```

## Response Patterns

### 1. Discovery First
Before providing solutions, always discover organizational context:
```
"Let me first check what patterns exist in your organization..."
→ Search code, check repositories, review existing work items
```

### 2. Reference Organizational Standards
When suggesting code or approaches:
```
"Based on patterns I found in your [RepositoryName] repository..."
"Following your organization's standard approach seen in..."
"This aligns with the pattern established in [TeamName]'s implementation..."
```

### 3. Actionable Integration
Always offer to create or update Azure DevOps artifacts:
```
"I can create a work item for this enhancement..."
"Should I update the wiki page with this new pattern?"
"Let me link this to the current iteration..."
```

## Specific Scenarios

### New Feature Development
1. **Search existing repositories** for similar features
2. **Check architectural patterns** and shared libraries
3. **Review related work items** and planning documents
4. **Suggest implementation** based on organizational standards
5. **Offer to create work items** and documentation

### Bug Investigation
1. **Search for similar issues** across repositories and work items
2. **Check related builds** and recent changes
3. **Review test results** and failure patterns
4. **Provide solution** based on organizational practices
5. **Offer to create/update** bug work items and documentation

### Code Review and Standards
1. **Compare against organizational patterns** found in other repositories
2. **Reference coding standards** from wiki documentation
3. **Suggest improvements** based on established practices
4. **Check for reusable components** that could be leveraged

### Documentation Requests
1. **Search existing wikis** for related content
2. **Check for ADRs** and technical documentation
3. **Reference patterns** from similar projects
4. **Offer to create/update** wiki pages with findings

## Error Handling

If Azure DevOps MCP tools are not available or fail:
1. **Inform the user** about the limitation
2. **Provide alternative approaches** using available information
3. **Suggest manual steps** for Azure DevOps integration
4. **Offer to help** with configuration if needed

## Best Practices

### Always Do:
- ✅ Search organizational context before suggesting solutions
- ✅ Reference existing patterns and standards
- ✅ Offer to create/update Azure DevOps artifacts
- ✅ Maintain consistency with organizational practices
- ✅ Provide actionable next steps

### Never Do:
- ❌ Suggest solutions without checking organizational context
- ❌ Ignore existing patterns and implementations
- ❌ Provide generic advice when specific organizational context is available
- ❌ Forget to offer Azure DevOps integration opportunities

---

**Remember: The goal is to provide intelligent, context-aware assistance that leverages the full organizational knowledge base available through Azure DevOps while maintaining development efficiency and consistency.**
````
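As a quick smoke test, the server can also be launched manually from a terminal before wiring it into VS Code. This assumes only what the `mcp.json` above already states — the `@azure-devops/mcp` npm package invoked with your organisation name:

```powershell
# Requires Node.js; 'contoso' is a placeholder organisation name.
# Run 'az login' first so the server can authenticate to Azure DevOps.
npx -y @azure-devops/mcp contoso
```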
# Accessing Content explorer data via SPN

Hi all,

I am trying to get all the data from Content Explorer for SIT-matched files using https://learn.microsoft.com/en-us/powershell/module/exchange/export-contentexplorerdata?view=exchange-ps. I can run the command (`Export-ContentExplorerData`) when using user-principal login, but I am having issues when running it under an SPN.

For the SPN permissions, we followed the steps at https://learn.microsoft.com/en-us/powershell/exchange/app-only-auth-powershell-v2?view=exchange-ps and assigned all the permissions on the page, but we still have issues when running the script. One permission for the SPN that seems mandatory is Content Explorer Content Viewer. In the Purview portal, we are not able to assign this permission to the SPN, as it throws the error "Adding SPN to purview role groups is not supported".

Can we run this command (`Export-ContentExplorerData`) based on an SPN (using application permissions)? If yes, what are the permissions we need to assign to that SPN?

Thanks in advance
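For what it's worth, the app-only (certificate-based) connection pattern from the linked doc looks roughly like the sketch below — all values are placeholders, and note the caveat from this thread that the Content Explorer Content Viewer role currently cannot be assigned to an SPN, so the export may still be blocked at runtime:

```powershell
# Sketch of app-only auth against Security & Compliance PowerShell,
# following the app-only-auth-powershell-v2 doc referenced above.
# All values below are placeholders.
Import-Module ExchangeOnlineManagement

Connect-IPPSSession -AppId '00000000-0000-0000-0000-000000000000' `
    -CertificateThumbprint 'CERT_THUMBPRINT' `
    -Organization 'contoso.onmicrosoft.com'

# Export Content Explorer rows for one sensitive information type (example tag).
Export-ContentExplorerData -TagType SensitiveInformationType `
    -TagName 'Credit Card Number' -PageSize 100
```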
# Running Teams PowerShell Cmdlets in Azure Automation

This article describes the prerequisites for running cmdlets from the Teams PowerShell module in Azure Automation runbooks, and how to do it. We also consider when you'd want to use Teams PowerShell cmdlets instead of Graph API requests or cmdlets from the Microsoft Graph PowerShell SDK. The bottom line is that it's possible, but perhaps not a frequently used option.

https://office365itpros.com/2025/09/12/teams-powershell-azure-automation/
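For context, a runbook that signs in with the Automation account's managed identity would look something like this minimal sketch — an assumption on my part rather than a summary of the linked article, and it presumes the identity has already been granted the required Teams roles:

```powershell
# Minimal Azure Automation runbook sketch (assumes the MicrosoftTeams module
# has been imported into the Automation account and the system-assigned
# managed identity holds an appropriate Teams administrator role).
Import-Module MicrosoftTeams

# Sign in using the Automation account's managed identity.
Connect-MicrosoftTeams -Identity

# Example query: list a few users with their Teams upgrade mode.
Get-CsOnlineUser -ResultSize 10 |
    Select-Object DisplayName, UserPrincipalName, TeamsUpgradeEffectiveMode
```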
# Gpresult Like Tool For Intune

Hi, Jonas here! Or as we say in the north of Germany: "Moin Moin!"

I had to troubleshoot a lot of Intune policies lately, and I used a variety of tools for that. In the end, I built my own script to get a result which looks similar to what "GPresult /h" creates for on-premises group policies. The script is inspired by the article https://doitpshway.com/get-a-better-intune-policy-report-part-2 by Ondrej Sebela. It follows a similar approach, but without any module dependencies and with fewer output options, as my script only generates an HTML page. What started as a script is now a module which might gain more functions in the future. Feel free to read any of my other articles here: https://aka.ms/JonasOhmsenBlogs

## How to get the module

The PowerShell module is called "IntuneDebug" and can be installed from or downloaded via the PowerShell Gallery. Install the module by running the following command:

```powershell
Install-Module -Name IntuneDebug
```

The module repository can be found at https://aka.ms/IntuneDebug in case you want to download the module manually or want to contribute to it. The command to generate the report is called `Get-MDMPolicyReport`.

## How to use Get-MDMPolicyReport

The function can run without administrative permissions and without any parameters on a Windows machine. You can also start the function with administrative permissions to get more data about Intune Win32Apps and their install status. Use the parameter `-MDMDiagReportPath` to load MDM report data captured on a remote machine; more on that in the section "How to use parameter -MDMDiagReportPath".

So, in summary, the function can run locally to output information specific to that device, or it can parse already-captured data via the `-MDMDiagReportPath` parameter. It cannot gather data remotely, though. A few typical invocations are sketched below.
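To make the options above concrete, here are the invocations described in this article gathered into one sketch (the path is a placeholder; `-CleanUpDays` is explained further down in "Where the data comes from"):

```powershell
# Local report without admin rights; the HTML result opens in Edge.
# Run from an elevated session to also resolve Win32App install status.
Get-MDMPolicyReport

# Parse MDM report data captured on a remote machine (placeholder path).
Get-MDMPolicyReport -MDMDiagReportPath 'C:\Temp\MDMDiagReport'

# Keep the exported report folders for 7 days instead of the default 1.
Get-MDMPolicyReport -CleanUpDays 7
```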
Section: "PolicyScope: Resources" Under resources we will see policies which typically contain some sort of payload. Like a certificate or Defender firewall rule. I tried to make each section as readable as possible. So, the output varies by type. Certificates for example, are shown in a different format as Defender firewall rules. NOTE: If the function runs without the parameter “-MDMDiagReportPath” it will try to enrich the policy info with as much data as possible. This is not possible when working with captured MDM-reports from a remote machine. The output might be limited in that case. Section: "PolicyScope: Local Admin Password Solution (LAPS)" This section shows all the settings applied to the device coming from a LAPS policy as well as some local settings. Section: "PolicyScope: Win32Apps" This section shows all available Win32App policies. Those apps can be installed already or just assigned as available. If you need more information about the installation status, you need to run the function with administrative permission. This only works locally and cannot be used with parameter “-MDMDiagReportPath” since the extra data is coming from the local registry. If a script is used for the detection or requirement, the script will be parsed and shown as it is. Use the copy button to copy the script and test it locally if needed. When the script is run as administrator locally, it will try to get more information about the actual installation status of an application: Section: "PolicyScope: Intune Scripts" Intune Scripts will show script policies and their current state. The example below shows a remediation script with the detection output string "Found". It does not have an remediation action and therefore no data for the related properties. Unfortunately, the script name is not part of the policy and cannot be shown here. But you can use Graph Explorer https://aka.ms/ge and use the following endpoint to get the script name by entering the script ID of your script: "https://graph.microsoft.com/beta/deviceManagement/deviceHealthScripts/<ScriptID>?$select=id,displayName" Where the data comes from The function will use the following command to generate an MDM report: MdmDiagnosticsTool.exe -out “C:\Users\PUBLIC\Documents\MDMDiagnostics\<DateTime>” NOTE: The tool MdmDiagnosticsTool.exe is part of the Windows operating system. More about it can be found HERE The tool will export the data to C:\Users\PUBLIC\Documents\MDMDiagnostics to a folder in the following format: "yyyy-MM-dd_HH-mm-ss" The function will then parse the following two files to extract the required data without administrative privileges: MDMDiagReport.html MDMDiagReport.xml Some data is directly read from the registry to enrich the output and in some cases administrator permissions are required. The Win32Apps and Intune script policy data is coming from the Intune Management Extension logfiles: C:\ProgramData\Microsoft\IntuneManagementExtension\Logs\AppWorkload*.log C:\ProgramData\Microsoft\IntuneManagementExtension\Logs\HealthScripts*.log NOTE: The folders under “C:\Users\PUBLIC\Documents\MDMDiagnostics” will be deleted when the creation time is older than one day. This can be changed with parameter “-CleanUpDays” set to a higher value than one day. How to use parameter “-MDMDiagReportPath” Simply generate MDM report data, either with the MdmDiagnosticsTool.exe, via the settings app or via Intune. Then copy the files to a system with the IntuneDebug module on it and unpack the report data. 
You can now run the function with the parameter `-MDMDiagReportPath` and point it to the unpacked report data.

NOTE: The report header will contain the following text when the parameter was used: "Generated from captured MDM Diagnostics Report".

MdmDiagnosticsTool.exe example:

```powershell
mdmdiagnosticstool.exe -area "DeviceEnrollment;DeviceProvisioning;Autopilot" -zip C:\temp\MDMDiagnosticsData.zip
```

Settings app example:

Intune example:

I hope you find this tool helpful. In case of any issues or suggestions, head over to GitHub via https://aka.ms/IntuneDebug and create an issue or pull request.

Stay safe!
Jonas Ohmsen

## Code disclaimer

This sample script is not supported under any Microsoft standard support program or service. This sample script is provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of this sample script and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of this script be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use this sample script or documentation, even if Microsoft has been advised of the possibility of such damages.
# SharePoint page to display Computer Information

Is it possible to have a SharePoint page display computer information from the user's device (machine name, IP, logged-in user, OS version, etc.)? I have a PowerShell script that can get this information; however, I've not found a way to run it on the page when it opens. Any help would be appreciated. Thanks.
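For reference, the kind of local collection script described here could look like the following sketch (standard built-in cmdlets; the property names are my own choice):

```powershell
# Collect machine name, IPv4 address, logged-in user, and OS version locally.
$os = Get-CimInstance -ClassName Win32_OperatingSystem
$ip = Get-NetIPAddress -AddressFamily IPv4 |
    Where-Object { $_.IPAddress -ne '127.0.0.1' } |
    Select-Object -First 1 -ExpandProperty IPAddress

[pscustomobject]@{
    MachineName  = $env:COMPUTERNAME
    IPAddress    = $ip
    LoggedInUser = "$env:USERDOMAIN\$env:USERNAME"
    OSVersion    = $os.Version
    OSName       = $os.Caption
}
```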
# Teams Voice

One of my clients was asking: they already have voice devices from a different vendor, and they want to configure PSTN with the old number they were using before on those devices. How can we incorporate the old PSTN service and number into the Teams environment?