How to embed large text blocks in a chat message
I am trying to include a large-ish code snippet in a Teams message. I understand that the message itself can only be so many characters, but is there a way to create an ad-hoc object to place a larger text chunk in, rather than save my content to a file and attach the file?

AMA: Azure AI Foundry Voice Live API: Build Smarter, Faster Voice Agents
Join us LIVE in the Azure AI Foundry Discord on October 14, 2025, at 10am PT to learn more about the Voice Live API.

Voice is no longer a novelty; it's the next-gen interface between humans and machines. From automotive assistants to educational tutors, voice-driven agents are reshaping how we interact with technology. But building seamless, real-time voice experiences has often meant stitching together a patchwork of services: STT, GenAI, TTS, avatars, and more. Until now.

Introducing Azure AI Foundry Voice Live API
Launched into general availability on October 1, 2025, the Azure AI Foundry Voice Live API is a game-changer for developers building voice-enabled agents. It unifies the entire voice stack (speech-to-text, generative AI, text-to-speech, avatars, and conversational enhancements) into a single, streamlined interface. That means: ⚡ Lower latency 🧠 Smarter interactions 🛠️ Simplified development 📈 Scalable deployment
Whether you're prototyping a voice bot for customer support or deploying a full-stack assistant in production, the Voice Live API accelerates your journey from idea to impact.

Ask Me Anything: Deep Dive with the CoreAI Speech Team
Join us for a live AMA session where you can engage directly with the engineers behind the API:
🗓️ Date: October 14, 2025
🕒 Time: 10am PT
📍 Location: https://aka.ms/foundry/discord (see the EVENTS)
🎤 Speakers: Qinying Liao, Principal Program Manager, CoreAI Speech; Jan Gorgen, Senior Program Manager, CoreAI Speech
They'll walk through real-world use cases, demo the API in action, and answer your toughest questions, from latency optimization to avatar integration.

Who Should Attend?
This AMA is designed for:
- AI engineers building multimodal agents
- Developers integrating voice into enterprise workflows
- Researchers exploring conversational UX
- Foundry users looking to scale voice prototypes

Why It Matters
The Voice Live API isn't just another endpoint; it's a foundation for building natural, responsive, and production-ready voice agents.
With Azure AI Foundry's orchestration and deployment tools, you can:
- Skip the glue code
- Focus on experience design
- Deploy with confidence across platforms

Bring Your Questions
Curious about latency benchmarks? Want to know how avatars sync with TTS? Wondering how to integrate with your existing Foundry workflows? This is your chance to ask the team directly.

From Cloud to Chip: Building Smarter AI at the Edge with Windows AI PCs
As AI engineers, we've spent years optimizing models for the cloud: scaling inference, wrangling latency, and chasing compute across clusters. But the frontier is shifting. With the rise of Windows AI PCs and powerful local accelerators, the edge is no longer a constraint; it's now a canvas. Whether you're deploying vision models to industrial cameras, optimizing speech interfaces for offline assistants, or building privacy-preserving apps for healthcare, Edge AI is where real-world intelligence meets real-time performance.

Why Edge AI, Why Now?
Edge AI isn't just about running models locally; it's about rethinking the entire lifecycle:
- Latency: Decisions in milliseconds, not round-trips to the cloud.
- Privacy: Sensitive data stays on-device, enabling HIPAA/GDPR compliance.
- Resilience: Offline-first apps that don't break when the network does.
- Cost: Reduced cloud compute and bandwidth overhead.

With Windows AI PCs powered by Intel and Qualcomm NPUs, and tools like ONNX Runtime, DirectML, and Olive, developers can now optimize and deploy models with unprecedented efficiency.

What You'll Learn in Edge AI for Beginners
The Edge AI for Beginners curriculum is a hands-on, open-source guide designed for engineers ready to move from theory to deployment.

Multi-Language Support
This content is available in over 48 languages, so you can read and study in your native language.
What You'll Master
This course takes you from fundamental concepts to production-ready implementations, covering:
- Small Language Models (SLMs) optimized for edge deployment
- Hardware-aware optimization across diverse platforms
- Real-time inference with privacy-preserving capabilities
- Production deployment strategies for enterprise applications

Why EdgeAI Matters
Edge AI represents a paradigm shift that addresses critical modern challenges:
- Privacy & Security: Process sensitive data locally without cloud exposure
- Real-time Performance: Eliminate network latency for time-critical applications
- Cost Efficiency: Reduce bandwidth and cloud computing expenses
- Resilient Operations: Maintain functionality during network outages
- Regulatory Compliance: Meet data sovereignty requirements

Edge AI
Edge AI refers to running AI algorithms and language models locally on hardware, close to where data is generated, without relying on cloud resources for inference. It reduces latency, enhances privacy, and enables real-time decision-making.
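To make the hardware constraints concrete, here is a quick back-of-the-envelope estimate (my own illustration, not taken from the curriculum) of how quantization shrinks the weight memory a model needs:

```python
def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate memory for model weights alone (ignores KV cache and activations)."""
    total_bytes = params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9  # decimal gigabytes

# A 7B-parameter model at different precisions:
print(weight_memory_gb(7, 16))  # 14.0 GB -> fp16: needs a high-end GPU
print(weight_memory_gb(7, 4))   # 3.5 GB  -> int4: fits on many laptops and NPUs
```

This is why toolchains like Olive, Llama.cpp, and OpenVINO lean so heavily on quantization: cutting bits-per-weight by 4x cuts the memory and bandwidth bill by roughly 4x before any other optimization is applied.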
Core Principles:
- On-device inference: AI models run on edge devices (phones, routers, microcontrollers, industrial PCs)
- Offline capability: Functions without persistent internet connectivity
- Low latency: Immediate responses suited for real-time systems
- Data sovereignty: Keeps sensitive data local, improving security and compliance

Small Language Models (SLMs)
SLMs like Phi-4, Mistral-7B, Qwen, and Gemma are optimized versions of larger LLMs, trained or distilled for:
- Reduced memory footprint: Efficient use of limited edge device memory
- Lower compute demand: Optimized for CPU and edge GPU performance
- Faster startup times: Quick initialization for responsive applications

They unlock powerful NLP capabilities while meeting the constraints of:
- Embedded systems: IoT devices and industrial controllers
- Mobile devices: Smartphones and tablets with offline capabilities
- IoT devices: Sensors and smart devices with limited resources
- Edge servers: Local processing units with limited GPU resources
- Personal computers: Desktop and laptop deployment scenarios

Course Modules & Navigation
Course duration: 10 hours of content

| Module | Topic | Focus Area | Key Content | Level | Duration |
|---|---|---|---|---|---|
| 📖 00 | Introduction to EdgeAI | Foundation & Context | EdgeAI Overview • Industry Applications • SLM Introduction • Learning Objectives | Beginner | 1-2 hrs |
| 📚 01 | EdgeAI Fundamentals | Cloud vs Edge AI comparison | EdgeAI Fundamentals • Real World Case Studies • Implementation Guide • Edge Deployment | Beginner | 3-4 hrs |
| 🧠 02 | SLM Model Foundations | Model families & architecture | Phi Family • Qwen Family • Gemma Family • BitNET • μModel • Phi-Silica | Beginner | 4-5 hrs |
| 🚀 03 | SLM Deployment Practice | Local & cloud deployment | Advanced Learning • Local Environment • Cloud Deployment | Intermediate | 4-5 hrs |
| ⚙️ 04 | Model Optimization Toolkit | Cross-platform optimization | Introduction • Llama.cpp • Microsoft Olive • OpenVINO • Apple MLX • Workflow Synthesis | Intermediate | 5-6 hrs |
| 🔧 05 | SLMOps Production | Production operations | SLMOps Introduction • Model Distillation • Fine-tuning • Production Deployment | Advanced | 5-6 hrs |
| 🤖 06 | AI Agents & Function Calling | Agent frameworks & MCP | Agent Introduction • Function Calling • Model Context Protocol | Advanced | 4-5 hrs |
| 💻 07 | Platform Implementation | Cross-platform samples | AI Toolkit • Foundry Local • Windows Development | Advanced | 3-4 hrs |
| 🏭 08 | Foundry Local Toolkit | Production-ready samples | Sample applications (see details below) | Expert | 8-10 hrs |

Each module includes Jupyter notebooks, code samples, and deployment walkthroughs, perfect for engineers who learn by doing.

Developer Highlights
- 🔧 Olive: Microsoft's optimization toolchain for quantization, pruning, and acceleration.
- 🧩 ONNX Runtime: Cross-platform inference engine with support for CPU, GPU, and NPU.
- 🎮 DirectML: GPU-accelerated ML API for Windows, ideal for gaming and real-time apps.
- 🖥️ Windows AI PCs: Devices with built-in NPUs for low-power, high-performance inference.

Local AI: Beyond the Edge
Local AI isn't just about inference; it's about autonomy.
Imagine agents that:
- Learn from local context
- Adapt to user behavior
- Respect privacy by design

With tools like Agent Framework, Azure AI Foundry, Windows Copilot Studio, and Foundry Local, developers can orchestrate local agents that blend LLMs, sensors, and user preferences, all without cloud dependency.

Try It Yourself
Ready to get started? Clone the Edge AI for Beginners GitHub repo, run the notebooks, and deploy your first model to a Windows AI PC or an IoT device. Whether you're building smart kiosks, offline assistants, or industrial monitors, this curriculum gives you the scaffolding to go from prototype to production.

Multimedia Redirection and WebRTC Redirector plug-in updates for Windows 365 & Azure Virtual Desktop
Automating plug-in maintenance with GitHub Scripts Keeping your Windows 365 and Azure Virtual Desktop environment up to date is crucial for optimal performance and security. Two essential plug-ins, Multimedia Redirection service and WebRTC (Web Real-Time Communication) redirector service, require periodic updates. However, these plug-ins do not update automatically, which can lead to compatibility or performance issues if left unattended. Understanding the Challenge Unlike most of the components of Windows 365 and Azure Virtual Desktop, both the Multimedia Redirection and WebRTC plug-ins must be updated manually. This manual process can be time-consuming for IT administrators and disruptive if not managed properly, especially in enterprise environments where user experience and uptime are top priorities. Cloud PCs that are provisioned or reprovisioned with Gallery images do have the latest plug-ins installed and Azure Virtual Desktop session hosts deployed with Azure Marketplace images have the latest WebRTC plug-in installed. But as these Cloud PCs and Session Hosts age, these plug-ins will become outdated over time. Note: Windows 365 Gallery images that include the latest Multimedia Redirection and WebRTC plug-ins are only the Windows Enterprise + Microsoft 365 Apps images. For Azure Marketplace images, only the Windows multi-session + Microsoft 365 Apps images include the latest WebRTC plug-in. 
Features dependent on WebRTC and Multimedia Redirection
WebRTC: Microsoft Teams media optimizations
- Users connect from non-Windows physical endpoints
- Users connect from Windows endpoints and SlimCore fails
Multimedia Redirection: Video playback and call redirection on Edge or Chrome browsers
- Users who visit websites with embedded videos
- Users who use Contact Center as a Service (CCaaS) solutions

Manually Updating
Administrators can update these binaries by deploying their respective MSI installers to users' Cloud PCs and personal Session Hosts, either through Intune or their management engine of choice. This may be the simplest way of upgrading endpoints, but there is a chance that end users will be disrupted during their work: the WebRTC installer forcefully stops Teams processes during installation, and Multimedia Redirection could break video streams or calls while the binaries are upgraded. If choosing this method, administrators should leverage maintenance windows to minimize disruptions. The MSI installers can be manually downloaded from the links below:
- WebRTC redirector installer MSI
- Multimedia redirector installer MSI

Automating Updates with GitHub Scripts
To address this challenge, our team has developed a series of PowerShell scripts available in our GitHub repository. These scripts automate the update process for both Multimedia Redirection and WebRTC plug-ins, ensuring that the latest versions are installed without the need for direct user intervention. The benefits of these scripts are:
- No End User Impact: The scripts are designed to run silently in the background, so end users experience no downtime or interruptions.
- Consistent Plugin Versions: Automated updates help maintain consistency across all Windows 365 instances, reducing troubleshooting time and compatibility issues.
- Easy Integration: The scripts can be deployed with Intune via Remediations, or as a standalone script.
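The published scripts are PowerShell, but the check-then-install flow they automate is easy to sketch in a few lines. The snippet below is an illustration only (the version numbers and the `needs_update` helper are hypothetical, not taken from the real scripts); the key detail is that version strings must be compared numerically, not lexicographically:

```python
def needs_update(installed: str, latest: str) -> bool:
    """Compare dotted version strings numerically; a plain string compare
    would wrongly rank '1.9' above '1.10'."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) < as_tuple(latest)

# Hypothetical versions for illustration only:
if needs_update("1.4.2309.15001", "1.52.2404.1"):
    # The real scripts would now fetch the MSI from the static aka.ms URL
    # and install it silently (msiexec /i <file> /qn), ideally inside a
    # maintenance window so Teams sessions are not interrupted.
    print("update required")
```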
Remediations provides the best admin experience, as it will report back on compliance and any errors encountered during deployment.

Getting Started
To begin using the automated update scripts:
1. Visit our GitHub repository and download the latest versions of the update scripts: WebRTC Updater, Multimedia Redirector Updater.
2. Review the step-by-step setup and configuration instructions.
3. Deploy the scripts to your Windows 365 or Azure Virtual Desktop environment, either in standalone mode or with Remediations.
IMPORTANT: The scripts currently do not support Windows multi-session hosts.

Updating Azure Virtual Desktop Multi-Session Hosts
Updating these plug-ins should be done during the build process of the golden image or the Session Hosts. This can be achieved by using an automated image-building solution, like Azure Image Builder, to install the latest versions. The URLs for these plug-ins are static, meaning administrators can reuse the same URL without having to be concerned about which version is being downloaded. For reference, the URLs are below:
- WebRTC - https://aka.ms/msrdcwebrtcsvc/msi
- Multimedia Redirection - https://aka.ms/avdmmr/msi

More information on WebRTC and Multimedia Redirection plug-ins
Microsoft updates these binaries periodically for functional and security enhancements. To stay current on the latest releases and what they contain, please visit the following links:
- What's new for WebRTC Redirector Service
- What's new for Multimedia redirection service

Conclusion
Regularly updating the Multimedia Redirection and WebRTC plug-ins is essential for a secure and efficient Windows 365 environment. By leveraging the automation scripts from our GitHub repository, IT administrators can ensure plug-ins remain current, all while eliminating manual effort and minimizing any impact on end users. For more details and to access the scripts, check out our GitHub page.

How to organise Posts within channels
Hi, hoping that someone could share some thoughts and/or advice on a scenario we're trying to find a solution for. We have a channel in which we use Posts to share information, which can be grouped into two or three themes, with channel members. For clarity, please refer to the below pic for what I mean when referring to Posts (as distinct/different functionality from chats). What we'd like to do is mark the posts with the respective theme, and be able to filter on the theme so that users don't need to scroll through the whole list to manually find posts related to the theme they'd like to see. Some other considerations:
- we like the idea of using a consolidated channel for multiple themes, so that there are fewer channels that users need to navigate/keep across
- all channel members need to have access to all posts (not any specific subset of members/users)
- if there was a way to have multiple "Posts" tabs, this would work for us to be able to share the themed posts to a centralised posts space, as then users could access the separate Posts tab for specific themes as/if required
- we're trying to find the least "clunky" solution/approach to achieve an ease of grouping/finding posts related to a specific theme
Thanks in advance for your time and thoughts :-) Cam.

Who Created This Azure Resource? Here's How to Find Out
One of the most common questions Azure customers and administrators ask is: "How do I know who created this resource?" If you've ever been in charge of managing a large subscription with dozens (or even thousands) of resources, you know how important it is to answer this question quickly. Whether it's for troubleshooting, governance, or compliance, tracking the origin of a resource can save time and reduce confusion. The good news: Azure makes this information available. You just need to know where to look.

Step 1: Open the Resource Overview
Navigate to the Overview page of the resource in question. This gives you the usual metadata like resource group, subscription, location, login server, and provisioning state. At first glance, however, you won't see who created the resource. That information isn't shown in the overview fields.

Step 2: Switch to JSON View
On the Overview page, look for the link labeled "JSON View" in the top right corner. Clicking this opens the full resource definition in JSON format.

Step 3: Scroll to the systemData Section
Within the JSON, scroll until you find the systemData object. This is where Azure tracks metadata about the resource lifecycle. Here's what you'll find:

"systemData": {
  "createdBy": "someuser@domain.com",
  "createdByType": "User",
  "createdAt": "2025-05-20T19:50:33.1511397Z",
  "lastModifiedBy": "someuser@domain.com",
  "lastModifiedByType": "User",
  "lastModifiedAt": "2025-05-20T19:50:33.1511397Z"
}

What This Tells You
- createdBy → The user or service principal that created the resource.
- createdByType → Whether it was created by a human user, managed identity, or another Azure service.
- createdAt → The exact timestamp of creation (UTC).
- lastModifiedBy, lastModifiedByType, and lastModifiedAt → Useful if the resource was updated after creation.
This metadata gives you clear visibility into who provisioned the resource and when.

Why It Matters
- Governance — Understand ownership and responsibility.
- Troubleshooting — Track down configuration changes.
- Compliance & Auditing — Satisfy requirements for accountability in your cloud environment.

By making the systemData object part of your standard investigation checklist, you'll save yourself the guesswork the next time you're wondering, "Who created this resource?"

Essential Microsoft Resources for MVPs & the Tech Community from the AI Tour
Unlock the power of Microsoft AI with redeliverable technical presentations, hands-on workshops, and open-source curriculum from the Microsoft AI Tour! Whether you're a Microsoft MVP, Developer, or IT Professional, these expertly crafted resources empower you to teach, train, and lead AI adoption in your community. Explore top breakout sessions covering GitHub Copilot, Azure AI, Generative AI, and security best practices—designed to simplify AI integration and accelerate digital transformation. Dive into interactive workshops that provide real-world applications of AI technologies. Take it a step further with Microsoft's Open-Source AI Curriculum, offering beginner-friendly courses on AI, Machine Learning, Data Science, Cybersecurity, and GitHub Copilot—perfect for upskilling teams and fostering innovation. Don't just learn—lead. Access these resources, host impactful training sessions, and drive AI adoption in your organization. Start sharing today! Explore now: Microsoft AI Tour Resources.

Transforming Emergency Response: How AI is reshaping public safety
Newly released Smart City Trend Report: Discover how AI is transforming emergency response and public safety in cities worldwide.

In an era of escalating climate events, urban complexity, and rising public expectations, emergency response systems are under pressure like never before. From wildfires and floods to public health crises and infrastructure failures, cities must respond faster, smarter, and more collaboratively. The newly released Transform Emergency Response Trend Report offers a compelling roadmap for how artificial intelligence (AI) is helping cities meet these challenges head-on by modernizing operations, improving situational awareness, and building resilient, resident-centered safety ecosystems.

As Dave Williams, Director of Global Public Safety and Justice at Microsoft, puts it: "AI models are increasingly embedded in public safety workflows to enhance both anticipation and real-time awareness. Predictive analytics are used to forecast crime hotspots, traffic incidents, and natural disasters by analyzing historical and real-time data, enabling proactive resource deployment and faster response times."

This transformation is not theoretical; it's happening now. And at the upcoming Smart City Expo World Congress in Barcelona, November 4–6, Microsoft and leading technology innovators will showcase how AI is driving real-world impact across emergency services, law enforcement, and city operations.

Government AI Transformation in Action

Oklahoma City Fire Department: Digitizing Operations for Faster Response
Serving over 700,000 residents, the Oklahoma City Fire Department (OKCFD) faced mounting challenges due to outdated, paper-based workflows. From rig inspections to fuel logging, manual processes slowed response times and increased risk. Partnering with AgreeYa Solutions and leveraging Microsoft Power Platform, OKCFD built 15+ custom mobile-first apps to digitize core operations.
The results were transformative:
- Helped drive a 40% reduction in manual tasks
- Real-time dashboards for leadership visibility
- Improved data accuracy and faster emergency response

This modernization not only boosted internal efficiency but also strengthened community trust by ensuring timely, reliable service delivery.

North Wales Fire and Rescue Service: Empowering Remote Teams with Secure Access
With 44 stations and a mix of full-time and on-call firefighters, North Wales Fire and Rescue Service (NWFRS) needed a better way to support staff across a wide geographic area. Their legacy on-premises systems limited remote access to critical data. By deploying a SharePoint-based intranet integrated with Microsoft 365 tools, NWFRS enabled secure, mobile access to documents, forms, and departmental updates.
- Improved communication and workflow efficiency
- Reduced travel time for on-call staff
- Enhanced compliance and data security

This shift empowered firefighters to stay informed and prepared—no matter where they were.

San Francisco Police Department: Real-Time Vehicle Recovery Reporting
Managing thousands of stolen vehicle cases annually, the San Francisco Police Department (SFPD) struggled with a slow, manual reporting process that delayed updates and eroded public trust. Using Microsoft Power Apps, SFPD built RESTVOS (Returning Stolen Vehicle to Owner System), allowing officers to update vehicle status in real time from the field.
- Helped reduce reporting time from 2 hours to 2 minutes
- Supported 500 officer hours saved per month
- Improved resident experience and reduced mistaken stops

This digital leap not only streamlined operations but also reinforced transparency and accountability.

Join Us in Barcelona: See Emergency Response in Action
At Smart City Expo World Congress 2025, Microsoft and our AI transformation partners will showcase emergency response AI transformation with immersive demos, theater sessions, and roundtable discussions.
Transform Emergency Response will be a central focus, showcasing how AI, cloud platforms, and agentic solutions are enabling cities to:
- Modernize emergency operation centers
- Enable real-time situational awareness
- Foster community engagement and trust

Featured AI demos from innovative partners: 3AM Innovations, Disaster Tech PRATUS, Sentient Hubs, Tomorrow.io, and Unified Emergency Response with Microsoft Fabric and Copilot.

These solutions are not just about technology; they're about outcomes. They help cities cut response times, improve coordination, and build public trust.

Why This Matters Now
As Dave Williams emphasizes, the future of emergency response is not just faster; it's smarter and more resilient: "Modern emergency response increasingly relies on unified data platforms that integrate inputs from IoT sensors, satellite imagery, social media, and agency databases. AI-powered analytics systems synthesize this data to support real-time decision-making and resource allocation across agencies. Cities must also invest in governance frameworks, ethical AI policies, and inclusive design to ensure these technologies serve all residents fairly."

Let's Connect
Whether you're a city CIO, emergency services leader, or public safety innovator, we invite you to join us at Smart City Expo World Congress in Barcelona, November 4–6. Explore how Microsoft and its partners are helping cities transform emergency response and build safer, more resilient communities. Visit our booth at Hall 3, Stand #3D51, attend our theater sessions, and see demos from AI transformation partners on Transform Emergency Response. Together, we can reimagine public safety for the challenges of today and the possibilities of tomorrow.

Search Less, Build More: Inner Sourcing with GitHub CoPilot and ADO MCP Server
Developers burn cycles context‑switching: opening five repos to find a logging example, searching a wiki for a data masking rule, scrolling chat history for the latest pipeline pattern. Organisations that I speak to are often on the path of transformational platform engineering projects but always have the fear or doubt of "what if my engineers don't use these resources". While projects like Backstage still play a pivotal role in inner sourcing and discoverability I also empathise with developers who would argue "How would I even know in the first place, which modules have or haven't been created for reuse". In this blog we explore how we can ensure organisational standards and developer satisfaction without any heavy lifting on either side, no custom model training, no rewriting or relocating of repositories and no stagnant local data. Using GitHub CoPilot + Azure DevOps MCP server (with the free `code_search` extension) we turn the IDE into an organizational knowledge interface. Instead of guessing or re‑implementing, engineers can start scaffolding projects or solving issues as they would normally (hopefully using CoPilot) and without extra prompting. GitHub CoPilot can lean into organisational standards and ensure recommendations are made with code snippets directly generated from existing examples. What Is the Azure DevOps MCP Server + code_search Extension? MCP (Model Context Protocol) is an open standard that lets agents (like GitHub Copilot) pull in structured, on-demand context from external systems. MCP servers contain natural language explanations of the tools that the agent can utilise allowing dynamic decision making of when to implement certain toolsets over others. The Azure DevOps MCP Server is the ADO Product Team's implementation of that standard. It exposes your ADO environment in a way CoPilot can consume. Out of the box it gives you access to: Projects – list and navigate across projects in your organization. 
Repositories – browse repos, branches, and files.
Work items – surface user stories, bugs, or acceptance criteria.
Wikis – pull policies, standards, and documentation.

This means CoPilot can ground its answers in live ADO content, instead of hallucinating or relying only on what's in the current editor window. The ADO server runs locally from your own machine to ensure that all sensitive project information remains within your secure network boundary. This also means that existing permissions on ADO objects such as Projects or Repositories are respected.

The wiki search tooling available out of the box with the ADO MCP server is very useful; however, in my experience these wikis often go unused, with documentation stored elsewhere, either inside the repository or in a project management tool. This means any tool that needs to implement code requires the ability to accurately search the code stored in the repositories themselves. That is where enabling the code_search extension in ADO is so important. Most organisations have this enabled already, but it is worth noting that this prerequisite is the real unlock of cross-repo search. It allows CoPilot to:
- Query for symbols, snippets, or keywords across all repos.
- Retrieve usage examples from code, not just docs.
- Locate standards (like logging wrappers or retry policies) wherever they live.
- Back every recommendation with specific source lines.

In short: MCP connects CoPilot to Azure DevOps. code_search makes that connection powerful by turning it into a discovery engine.

What is the relevance of CoPilot Instructions?
One of the less obvious but most powerful features of GitHub CoPilot is its ability to follow instructions files. CoPilot automatically looks for these files and uses them as a "playbook" for how it should behave. There are different types of instructions you can provide:
- Organisational instructions – apply across your entire workspace, regardless of which repo you're in.
- Repo-specific instructions – scoped to a particular repository, useful when one project has unique standards or patterns.
- Personal instructions – smaller overrides layered on top of global rules when a local exception applies. (Stored in .github/copilot-instructions.md)

In this solution, I'm using a single personal instructions file. It tells CoPilot:
- When to search (e.g., always query repos and wikis before answering a standards question).
- Where to look (Azure DevOps repos, wikis, and with code_search, the code itself).
- How to answer (responses must cite the repo/file/line or wiki page; if no source is found, say so).
- How to resolve conflicts (prefer dated wiki entries over older README fragments).

As a small example, a section of a CoPilot instruction file could look like this:

# GitHub Copilot Instructions for Azure DevOps MCP Integration

This project uses Azure DevOps with MCP server integration to provide organizational context awareness. Always check to see if the Azure DevOps MCP server has a tool relevant to the user's request.

## Core Principles

### 1. Azure DevOps Integration
- **Always prioritize Azure DevOps MCP tools** when users ask about:
  - Work items, stories, bugs, tasks
  - Pull requests and code reviews
  - Build pipelines and deployments
  - Repository operations and branch management
  - Wiki pages and documentation
  - Test plans and test cases
  - Project and team information

### 2. Organizational Context Awareness
- Before suggesting solutions, **check existing organizational patterns** by:
  - Searching code across repositories for similar implementations
  - Referencing established coding standards and frameworks
  - Looking for existing shared libraries and utilities
  - Checking architectural decision records (ADRs) in wikis

### 3. Cross-Repository Intelligence
- When providing code suggestions:
  - **Search for existing patterns** in other repositories first
  - **Reference shared libraries** and common utilities
  - **Maintain consistency** with organizational standards
  - **Suggest reusable components** when appropriate

## Tool Usage Guidelines

### Work Items and Project Management
When users mention bugs, features, tasks, or project planning:
```
✅ Use: wit_my_work_items, wit_create_work_item, wit_update_work_item
✅ Use: wit_list_backlogs, wit_get_work_items_for_iteration
✅ Use: work_list_team_iterations, core_list_projects
```

The result...
To test this I created 3 ADO projects, each with 1-2 repositories. The repositories were light, with only ReadMes inside containing descriptions of the "repo" and some code snippet examples for usage. I then created a brand-new workspace with no context apart from a CoPilot instructions document (which could be part of a repo scaffold or organisation-wide) which tells CoPilot to search the code and wikis across all ADO projects in my demo environment. It returns guidance and standards from all available repos and starts to use them to formulate its response. In the screenshot I have highlighted some key parts with red boxes. The first is a section of the ReadMe that CoPilot has identified in its response, with that part also highlighted within the CoPilot chat response. I have highlighted the rather generic prompt I used to get this response at the bottom of that window too. Above, I have highlighted CoPilot using the MCP server tooling, searching through projects, repos, and code. Finally, the largest box highlights the instructions given to CoPilot on how to search, and how easily these could be optimised or changed depending on the requirements and organisational coding standards.

How did I implement this?
Implementation is actually incredibly simple.
As mentioned, I created multiple projects and repositories within my ADO organisation in order to test cross-project and cross-repo discovery. I then did the following:

1. **Enable code_search** in your Azure DevOps organization (Marketplace → install extension).
2. **Log in to Azure** – use the Azure CLI to authenticate with `az login`.
3. **Create a `.vscode/mcp.json` file** – a snippet is provided below; the organisation name should be changed to your organisation's name.
4. **Start and enable your MCP server** – in the `mcp.json` file you should see a "Start" button. Using the snippet below, you will be prompted to add your organisation name. Ensure your Copilot agent has access to the server under "tools" too. View this setup guide for full setup instructions (azure-devops-mcp/docs/GETTINGSTARTED.md at main · microsoft/azure-devops-mcp).
5. **Create a Copilot instructions file** with a search-first directive. I have inserted the full version used in this demo at the bottom of the article.
6. **Experiment with prompts** – start generic ("How do we secure APIs?"), review the output and the tools used, then tailor your instructions.

**Considerations**

While this is a great approach, I do still have some considerations when going to production:

- **Latency** – using MCP tooling on every request will add some latency to developer requests. We can look at optimising usage through Copilot instructions to better identify when Copilot should or shouldn't use the ADO MCP server.
- **Complex projects and repositories** – while I have demonstrated cross-project and cross-repository retrieval, my demo environment does not truly simulate an enterprise ADO environment. Performance should be tested and closely monitored as organisational complexity increases.
- **Public preview** – the ADO MCP server is moving quickly but is currently still in public preview.

We have demonstrated in this article how quickly we can make our Azure DevOps content discoverable.
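To make the setup steps above concrete, a minimal bootstrap could look like the following shell session. This is a sketch, not the official setup guide: it assumes the Azure CLI and Node.js/npx are installed, uses `contoso` as a placeholder organisation name, and simply writes out the same `mcp.json` shown in the resources below before sanity-checking it.

```shell
# Sketch: bootstrap the Azure DevOps MCP server config for VS Code.
# Assumes the Azure CLI and Node.js/npx are installed.

# 1. Authenticate to Azure (interactive; run this yourself first):
#    az login

# 2. Create .vscode/mcp.json pointing at the @azure-devops/mcp package.
#    The heredoc is quoted so ${input:ado_org} is written literally for VS Code.
mkdir -p .vscode
cat > .vscode/mcp.json <<'EOF'
{
  "inputs": [
    {
      "id": "ado_org",
      "type": "promptString",
      "description": "Azure DevOps organization name (e.g. 'contoso')"
    }
  ],
  "servers": {
    "ado": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@azure-devops/mcp", "${input:ado_org}"]
    }
  }
}
EOF

# 3. Sanity-check the file parses as JSON before starting the server in VS Code
python3 -c "import json; json.load(open('.vscode/mcp.json')); print('mcp.json OK')"
```

After this, VS Code should show the "Start" button inside `mcp.json` and prompt for your organisation name on first run.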
While there are considerations moving forward, this showcases a direction towards agentic inner sourcing. Feel free to comment below on how you think this approach could be extended or augmented for other use cases!

**Resources**

MCP Server Config (/.vscode/mcp.json):

```json
{
  "inputs": [
    {
      "id": "ado_org",
      "type": "promptString",
      "description": "Azure DevOps organization name (e.g. 'contoso')"
    }
  ],
  "servers": {
    "ado": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@azure-devops/mcp", "${input:ado_org}"]
    }
  }
}
```

Copilot Instructions (/.github/copilot-instructions.md):

````markdown
# GitHub Copilot Instructions for Azure DevOps MCP Integration

This project uses Azure DevOps with MCP server integration to provide organizational context awareness. Always check to see if the Azure DevOps MCP server has a tool relevant to the user's request.

## Core Principles

### 1. Azure DevOps Integration
- **Always prioritize Azure DevOps MCP tools** when users ask about:
  - Work items, stories, bugs, tasks
  - Pull requests and code reviews
  - Build pipelines and deployments
  - Repository operations and branch management
  - Wiki pages and documentation
  - Test plans and test cases
  - Project and team information

### 2. Organizational Context Awareness
- Before suggesting solutions, **check existing organizational patterns** by:
  - Searching code across repositories for similar implementations
  - Referencing established coding standards and frameworks
  - Looking for existing shared libraries and utilities
  - Checking architectural decision records (ADRs) in wikis

### 3. Cross-Repository Intelligence
- When providing code suggestions:
  - **Search for existing patterns** in other repositories first
  - **Reference shared libraries** and common utilities
  - **Maintain consistency** with organizational standards
  - **Suggest reusable components** when appropriate

## Tool Usage Guidelines

### Work Items and Project Management
When users mention bugs, features, tasks, or project planning:
```
✅ Use: wit_my_work_items, wit_create_work_item, wit_update_work_item
✅ Use: wit_list_backlogs, wit_get_work_items_for_iteration
✅ Use: work_list_team_iterations, core_list_projects
```

### Code and Repository Operations
When users ask about code, branches, or pull requests:
```
✅ Use: repo_list_repos_by_project, repo_list_pull_requests_by_repo
✅ Use: repo_list_branches_by_repo, repo_search_commits
✅ Use: search_code for finding patterns across repositories
```

### Documentation and Knowledge Sharing
When users need documentation or want to create/update docs:
```
✅ Use: wiki_list_wikis, wiki_get_page_content, wiki_create_or_update_page
✅ Use: search_wiki for finding existing documentation
```

### Build and Deployment
When users ask about builds, deployments, or CI/CD:
```
✅ Use: pipelines_get_builds, pipelines_get_build_definitions
✅ Use: pipelines_run_pipeline, pipelines_get_build_status
```

## Response Patterns

### 1. Discovery First
Before providing solutions, always discover organizational context:
```
"Let me first check what patterns exist in your organization..."
→ Search code, check repositories, review existing work items
```

### 2. Reference Organizational Standards
When suggesting code or approaches:
```
"Based on patterns I found in your [RepositoryName] repository..."
"Following your organization's standard approach seen in..."
"This aligns with the pattern established in [TeamName]'s implementation..."
```

### 3. Actionable Integration
Always offer to create or update Azure DevOps artifacts:
```
"I can create a work item for this enhancement..."
"Should I update the wiki page with this new pattern?"
"Let me link this to the current iteration..."
```

## Specific Scenarios

### New Feature Development
1. **Search existing repositories** for similar features
2. **Check architectural patterns** and shared libraries
3. **Review related work items** and planning documents
4. **Suggest implementation** based on organizational standards
5. **Offer to create work items** and documentation

### Bug Investigation
1. **Search for similar issues** across repositories and work items
2. **Check related builds** and recent changes
3. **Review test results** and failure patterns
4. **Provide solution** based on organizational practices
5. **Offer to create/update** bug work items and documentation

### Code Review and Standards
1. **Compare against organizational patterns** found in other repositories
2. **Reference coding standards** from wiki documentation
3. **Suggest improvements** based on established practices
4. **Check for reusable components** that could be leveraged

### Documentation Requests
1. **Search existing wikis** for related content
2. **Check for ADRs** and technical documentation
3. **Reference patterns** from similar projects
4. **Offer to create/update** wiki pages with findings

## Error Handling
If Azure DevOps MCP tools are not available or fail:
1. **Inform the user** about the limitation
2. **Provide alternative approaches** using available information
3. **Suggest manual steps** for Azure DevOps integration
4. **Offer to help** with configuration if needed

## Best Practices

### Always Do:
- ✅ Search organizational context before suggesting solutions
- ✅ Reference existing patterns and standards
- ✅ Offer to create/update Azure DevOps artifacts
- ✅ Maintain consistency with organizational practices
- ✅ Provide actionable next steps

### Never Do:
- ❌ Suggest solutions without checking organizational context
- ❌ Ignore existing patterns and implementations
- ❌ Provide generic advice when specific organizational context is available
- ❌ Forget to offer Azure DevOps integration opportunities

---

**Remember: The goal is to provide intelligent, context-aware assistance that leverages the full organizational knowledge base available through Azure DevOps while maintaining development efficiency and consistency.**
````