Beyond the Desktop: The Future of Development with Microsoft Dev Box and GitHub Codespaces
The modern developer platform has already moved past the desktop. We’re no longer defined by what’s installed on our laptops; instead, we look at what tooling we can use to move from idea to production. An organisation's developer platform strategy is no longer a nice-to-have: it sets the ceiling for what’s possible, and an organisation can’t iterate its way to developer nirvana if the foundation itself is brittle. A great developer platform shrinks TTFC (time to first commit), accelerates release velocity, and, maybe most importantly, helps alleviate the everyday frictions that lead to developer burnout.

Very few platforms deliver everything an organization needs from a developer platform in one product. Modern development spans multiple dimensions: local tooling, cloud infrastructure, compliance, security, cross-platform builds, collaboration, and rapid onboarding. The options organizations then face are to either compromise on one or more of these areas or force developers into rigid environments that slow productivity and innovation. This is where Microsoft Dev Box and GitHub Codespaces come into play. On their own, each addresses critical parts of the modern developer platform.

Microsoft Dev Box provides a full, managed cloud workstation. DevBox gives developers a consistent, high-performance environment while letting central IT apply strict governance and control. Internally at Microsoft, we estimate that usage of DevBox by our development teams delivers savings of 156 hours per year per developer purely on local environment setup and upkeep. We have also seen significant gains in other key SPACE metrics, reducing context-switching friction and improving build/test cycles. Although the benefits of DevBox are clear in the results demonstrated by our customers, it is not without its challenges. The biggest challenge often faced by DevBox customers is its lack of native Linux support. At the time of writing, and for the foreseeable future, DevBox does not support native Linux developer workstations. While WSL2 provides partial parity, I know from my own engineering projects that it still does not deliver the full experience.

This is where GitHub Codespaces comes into the story. GitHub Codespaces delivers instant, Linux-native environments spun up directly from your repository. It’s lightweight, reproducible, and ephemeral: ideal for rapid iteration, PR testing, and cross-platform development where you need Linux parity or containerized workflows. Unlike Dev Box, Codespaces can run fully in Linux, giving developers access to native tools, scripts, and runtimes without workarounds. It also removes much of the friction around onboarding: a new developer can open a repository and be coding in minutes, with the exact environment defined by the project’s devcontainer.json. That said, Codespaces isn’t a complete replacement for a full workstation. While it’s perfect for isolated project work or ephemeral testing, it doesn’t provide the persistent, policy-controlled environment that enterprise teams often require for heavier workloads or complex toolchains.

Used together, they fill the gaps that neither can cover alone: Dev Box gives the enterprise-grade foundation, while Codespaces provides the agile, cross-platform sandbox. For organizations, this pairing sets a higher ceiling for developer productivity, delivering a truly hybrid, agile and well governed developer platform.
Better Together: DevBox and GitHub Codespaces in action

Together, Microsoft Dev Box and GitHub Codespaces deliver a hybrid developer platform that combines consistency, speed, and flexibility. Teams can spin up full, policy-compliant Dev Box workstations preloaded with enterprise tooling, IDEs, and local testing infrastructure, while Codespaces provides ephemeral, Linux-native environments tailored to each project. One of my favourite use cases is having local testing setups, like a Docker Swarm cluster, ready to go in either Dev Box or Codespaces. New developers can jump in and start running services or testing microservices immediately, without spending hours on environment setup. Anecdotally, my time to first commit and time to delivering “impact” have been significantly faster on projects where one or both technologies provide local development services out of the box.

Switching between Dev Boxes and Codespaces is seamless: every environment keeps its own libraries, extensions, and settings intact, so developers can jump between projects without reconfiguring or breaking dependencies. The result is a turnkey, ready-to-code experience that maximizes productivity, reduces friction, and lets teams focus entirely on building, testing, and shipping software.

To showcase this value, I thought I would walk through an example scenario. In this scenario I want to simulate a typical modern developer workflow. Let's look at a day in the life of a developer on this hybrid platform building an IoT project using Python and React:

- Spin up a ready-to-go workstation (Dev Box) for Windows development and heavy builds.
- Launch a Linux-native Codespace for cross-platform services, ephemeral testing, and PR work.
- Run "local" testing like a Docker Swarm cluster, database, and message queue ready to go out-of-the-box.
- Switch seamlessly between environments without losing project-specific configurations, libraries, or extensions.

9:00 AM – Morning Kickoff on DevBox

I start my day on my Microsoft DevBox, which gives me a fully-configured Windows environment with VS Code, design tools, and Azure integrations. I select my team's project, and the environment is pre-configured for me through the DevBox catalogue. Fortunately for me, it's already provisioned. I could always self-service another one using the "New DevBox" button if I wanted to. I'll connect through the browser, but I could use the desktop app too if I wanted to.

My tasks are:

- Prototype a new dashboard widget for monitoring IoT device temperature.
- Use GUI-based tools to tweak the UI and preview changes live.
- Review my Visio architecture.
- Join my morning stand-up.
- Write documentation notes and plan API interactions for the backend.

In a flash, I have access to my modern work tooling like Teams, this project's files are already preloaded, and all my peripherals are working without additional setup. The only downside was that I did seem to be the only person on my stand-up this morning?

Why DevBox first:

- GUI-heavy tasks are fast and responsive. DevBox’s environment allows me to use a full desktop.
- Great for early-stage design, planning, and visual work.
- Enterprise apps are ready for me to use out of the box (P.S. It also supports my multi-monitor setup).

I use my DevBox to make a very complicated change to my IoT dashboard: changing the title from "IoT Dashboard" to "Owain's IoT Dashboard". I preview this change live in a browser. (Time for a coffee after this hard work.) The rest of the dashboard isn't loading as my backend isn't running... yet.
10:30 AM – Switching to Linux Codespaces

Once the UI is ready, I push the code to GitHub and spin up a Linux-native GitHub Codespace for backend development.

Tasks:

- Implement FastAPI endpoints to support the new IoT feature.
- Run the service on my Codespace and debug any errors.

Why Codespaces now:

- Linux-native tools ensure compatibility with the production server.
- Docker and containerized testing run natively, avoiding WSL translation overhead.
- The environment is fully reproducible across any device I log in from.

12:30 PM – Midday Testing & Sync

I toggle between DevBox and Codespaces to test and validate the integration. I do this in my DevBox Edge browser viewing my Codespace (this could be a second VS Code Desktop), and I use the same browser to view my frontend preview. I update the environment variable for my frontend that is running locally in my DevBox and point it at the port running my API locally on my Codespace. In this case it was a WebSocket connection and HTTPS calls to port 8000. I can make this public by changing the port visibility in my Codespace.

https://fluffy-invention-5x5wp656g4xcp6x9-8000.app.github.dev/api/devices
wss://fluffy-invention-5x5wp656g4xcp6x9-8000.app.github.dev/ws

This allows me to:

- Preview the frontend widget on DevBox, connecting to the backend running in Codespaces.
- Make small frontend adjustments in DevBox while monitoring backend logs in Codespaces.
- Commit changes to GitHub, keeping both environments in sync and leveraging my CI/CD for deployment to the next environment.

We can see the DevBox running the local frontend and the Codespace running the API connected to each other, making requests and displaying the data in the frontend!

Hybrid advantage:

- DevBox handles GUI previews comfortably and allows me to live test frontend changes.
- Codespaces handles production-aligned backend testing and Linux-native tools.
- DevBox allows me to view all of my files on one screen, with potentially multiple Codespaces running in the browser or VS Code Desktop.

Due to all of those platform efficiencies I have completed my day's goals within an hour or two, and now I can spend the rest of my day learning about how to enable my developers to inner source using GitHub CoPilot and MCP (shameless plug).
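To make the backend half of this walkthrough a little more concrete, here is a minimal, hypothetical sketch of what the FastAPI service running in the Codespace could look like. It is not the actual project code: the device data is made up, and the /api/devices and /ws routes simply mirror the endpoints referenced above (the HTTPS calls and WebSocket connection on port 8000).

```python
# Minimal FastAPI sketch of the IoT backend described above.
# Illustrative only: device data is hard-coded, and the routes simply
# mirror the /api/devices and /ws endpoints mentioned in the walkthrough.
import asyncio
import random

from fastapi import FastAPI, WebSocket
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI(title="IoT Dashboard API")

# Allow the frontend running on the Dev Box to call this API from another origin.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # tighten this for anything beyond a local demo
    allow_methods=["*"],
    allow_headers=["*"],
)

DEVICES = [  # sample data for the demo dashboard
    {"id": "sensor-01", "location": "warehouse", "temperature": 21.4},
    {"id": "sensor-02", "location": "office", "temperature": 19.8},
]


@app.get("/api/devices")
async def list_devices():
    """Return the current device readings consumed by the React dashboard."""
    return DEVICES


@app.websocket("/ws")
async def stream_readings(websocket: WebSocket):
    """Push simulated temperature updates to the dashboard every few seconds."""
    await websocket.accept()
    while True:
        for device in DEVICES:
            device["temperature"] = round(device["temperature"] + random.uniform(-0.5, 0.5), 1)
        await websocket.send_json(DEVICES)
        await asyncio.sleep(2)


# Run inside the Codespace with: uvicorn main:app --host 0.0.0.0 --port 8000
```

With the port made public in the Codespace, the Dev Box frontend only needs its API base URL and WebSocket URL pointed at the forwarded app.github.dev address shown above.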
The bottom line

There are some additional considerations when architecting a developer platform for an enterprise, such as private networking and security, that are not covered in this post, but these are implementation details to deliver the described developer experience. Architecting such a platform is a valuable investment to deliver the developer platform foundations we discussed at the top of the article. While in this quickly built demo I was working in a mono repository, in real engineering teams it is likely (I hope) that an application is built of many different repositories. The great thing about DevBox and Codespaces is that this wouldn’t slow down the rapid development I can achieve when using both. My DevBox would be specific to the project or development team, preloaded with all the tools I need and potentially some repos too! When I need to, I can quickly switch over to Codespaces, work in a clean isolated environment, and push my changes. In both cases any changes I want to deliver locally are pushed into GitHub (or ADO), merged, and my CI/CD ensures that my next step, potentially a staging environment or who knows, perhaps *whispering* straight into production, is taken care of.

Once I’m finished I delete my Codespace, and potentially my DevBox if I am done with the project, knowing I can self-service either one of these anytime and be up and running again! Now, is there overlap in terms of what can be developed in a Codespace vs what can be developed in a DevBox? Of course, but as organisations prioritise developer experience to ensure release velocity while maintaining organisational standards and governance, providing developers a Windows-native and a Linux-native service, both of which are primarily charged on the consumption of the compute*, is a no-brainer. There are also gaps that neither fills at the moment: for example, Microsoft DevBox only provides Windows compute, while GitHub Codespaces only supports VS Code as your chosen IDE. It's not a question of which service do I choose for my developers; these two services are better together!

* Changes have been announced to DevBox pricing. A W365 license is already required today and DevBoxes will continue to be managed through Azure. For more information please see: Microsoft Dev Box capabilities are coming to Windows 365 - Microsoft Dev Box | Microsoft Learn

Operational Excellence In AI Infrastructure Fleets: Standardized Node Lifecycle Management
Co-authors: Choudary Maddukuri and Bhushan Mehendale

AI infrastructure is scaling at an unprecedented pace, and the complexity of managing it is growing just as quickly. Onboarding new hardware into hyperscale fleets can take months, slowed by fragmented tools, vendor-specific firmware, and inconsistent diagnostics. As hyperscalers expand with diverse accelerators and CPU architectures, operational friction has become a critical bottleneck. Microsoft, in collaboration with the Open Compute Project (OCP) and leading silicon partners, is addressing this challenge. By standardizing lifecycle management across heterogeneous fleets, we’ve dramatically reduced onboarding effort, improved reliability, and achieved >95% Nodes-in-Service on incredibly large fleet sizes. This blog explores how we are contributing to and leveraging open standards to transform fragmented infrastructure into scalable, vendor-neutral AI platforms.

Industry Context & Problem

The rapid growth of generative AI has accelerated the adoption of GPUs and accelerators from multiple vendors, alongside diverse CPU architectures such as Arm and x86. Each new hardware SKU introduces its own ecosystem of proprietary tools, firmware update processes, management interfaces, reliability mechanisms, and diagnostic workflows. This hardware diversity leads to engineering toil, delayed deployments, and inconsistent customer experiences. Without a unified approach to lifecycle management, hyperscalers face escalating operational costs, slower innovation, and reduced efficiency.

Node Lifecycle Standardization: Enabling Scalable, Reliable AI Infrastructure

Microsoft, through the Open Compute Project (OCP) in collaboration with AMD, Arm, Google, Intel, Meta, and NVIDIA, is leading an industry-wide initiative to standardize AI infrastructure lifecycle management across GPU and CPU hardware management workstreams. Historically, onboarding each new SKU was a highly resource-intensive effort due to custom implementations and vendor-specific behaviors that required extensive Azure integration. This slowed scalability, increased engineering overhead, and limited innovation. With standardized node lifecycle processes and compliance tooling, hyperscalers can now onboard new SKUs much faster, achieving over 70% reduction in effort while enhancing overall fleet operational excellence. These efforts also enable silicon vendors to ensure interoperability across multiple cloud providers.

Figure: How Standardization benefits both Hyperscalers & Suppliers.

Key Benefits and Capabilities

- Firmware Updates: Firmware update mechanisms aligned with DMTF standards minimize downtime and streamline fleet-wide secure deployments.
- Unified Manageability Interfaces: Standardized Redfish APIs and PLDM protocols create a consistent framework for out-of-band management, reducing integration overhead and ensuring predictable behavior across hardware vendors (a short example follows this list).
- RAS (Reliability, Availability and Serviceability) Features: Standardization enforces minimum RAS requirements across all IP blocks, including CPER (Common Platform Error Record) based error logging, crash dumps, and error recovery flows to enhance system uptime.
- Debug & Diagnostics: Unified APIs and standardized crash & debug dump formats reduce issue resolution time from months to days. Streamlined diagnostic workflows enable precise FRU isolation and clear service actions.
- Compliance Tooling: Tool contributions such as CTAM (Compliance Tool for Accelerator Manageability) and CPACT (Cloud Processor Accessibility Compliance Tool) automate compliance and acceptance testing—ensuring suppliers meet hyperscaler requirements for seamless onboarding.
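To illustrate what a standardized out-of-band interface buys an operator in practice, the sketch below reads a node's firmware inventory over the DMTF Redfish API referenced above. It is a generic illustration rather than part of the OCP specifications: the BMC address and credentials are placeholders, and the /redfish/v1/UpdateService/FirmwareInventory path is the standard Redfish location for firmware inventory, which is what lets the same code work against any compliant vendor.

```python
# Illustrative sketch: list firmware inventory from a Redfish-compliant BMC.
# The BMC address and credentials are placeholders; the resource paths follow
# the standard DMTF Redfish schema rather than any vendor-specific extension.
import requests
from requests.auth import HTTPBasicAuth

BMC = "https://10.0.0.42"  # placeholder BMC address
AUTH = HTTPBasicAuth("operator", "placeholder-password")

session = requests.Session()
session.auth = AUTH
session.verify = False  # lab-only shortcut; use the BMC's CA certificate in production

# FirmwareInventory is a standard Redfish collection under the UpdateService.
inventory = session.get(
    f"{BMC}/redfish/v1/UpdateService/FirmwareInventory", timeout=10
).json()

for member in inventory.get("Members", []):
    item = session.get(f"{BMC}{member['@odata.id']}", timeout=10).json()
    # Each member exposes a consistent schema (Id, Name, Version) regardless of vendor.
    print(f"{item.get('Id')}: {item.get('Name')} -> version {item.get('Version')}")
```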
Technical Specifications & Contributions

Through deep collaboration within the Open Compute Project (OCP) community, Microsoft and its partners have published multiple specifications that streamline SKU development, validation, and fleet operations.

Summary of Key Contributions

Specification | Focus Area | Benefit
GPU Firmware Update requirements | Firmware Updates | Enables consistent firmware update processes across vendors
GPU Management Interfaces | Manageability | Standardizes telemetry and control via Redfish/PLDM
GPU RAS Requirements | Reliability and Availability | Reduces AI job interruptions caused by hardware errors
CPU Debug and RAS requirements | Debug and Diagnostics | Achieves >95% node serviceability through unified diagnostics and debug
CPU Impactless Updates requirements | Impactless Updates | Enables impactless firmware updates to address security and quality issues without workload interruptions
Compliance Tools | Validation | Automates specification compliance testing for faster hardware onboarding

Embracing Open Standards: A Collaborative Shift in AI Infrastructure Management

This standardized approach to lifecycle management represents a foundational shift in how AI infrastructure is maintained. By embracing open standards and collaborative innovation, the industry can scale AI deployments faster, with greater reliability and lower operational cost. Microsoft’s leadership within the OCP community—and its deep partnerships with other hyperscalers and silicon vendors—is paving the way for scalable, interoperable, and vendor-neutral AI infrastructure across the global cloud ecosystem. To learn more about Microsoft’s datacenter innovations, check out the virtual datacenter tour at datacenters.microsoft.com.

Announcing resource-scope query for Azure Monitor Workspaces
We’re excited to announce the public preview of resource-scope query for Azure Monitor Workspaces (AMWs)—a major step forward in simplifying observability, improving access control, and aligning with Azure-native experiences. This new capability builds on the successful implementation of resource-scope query in Log Analytics Workspaces (LAWs), which transformed how users access logs by aligning them with Azure resource scopes. We’re now bringing the same power and flexibility to metrics in AMWs.

What is resource-scope query?

Resource-scope query has been a frequently requested capability that allows users to query metrics scoped to a specific resource, resource group, or subscription—rather than needing to know which AMW the metrics are stored in. This means:

- Simpler querying: users can scope to the context of one or more resources directly, without knowledge of where metrics are stored.
- Granular Azure RBAC control: if the AMW is configured in resource-centric access mode, user permissions are checked against the resources they are querying, rather than access to the workspace itself, just like LAW works today. This supports security best practices for least-privileged access requirements.

Why use resource-centric query?

Traditional AMW querying required users to:

- Know the exact AMW storing their metrics.
- Have access to the AMW.
- Navigate away from the resource context to query metrics.

This created friction for DevOps teams and on-call engineers who do not necessarily know which AMW to query when responding to an alert. With resource-centric querying:

- Users can query metrics directly from the resource’s Metrics blade.
- Least privilege access is respected—users only need access to the resource(s) they are querying about.
- Central teams can maintain control of AMWs while empowering app teams to self-monitor.

How does it work?

All metrics ingested via Azure Monitor Agent are automatically stamped with dimensions like Microsoft.resourceid, Microsoft.subscriptionid, and Microsoft.resourcegroupname to enable this experience. The addition of these dimensions does not have any cost implications for end users.

Resource-centric queries use a new endpoint: https://query.<region>.prometheus.monitor.azure.com

We will re-route queries as needed from any region, but we recommend choosing the one nearest to your AMWs for the best performance. Users can query via:

- Azure Portal PromQL Editor
- Grafana dashboards (with data source configuration)
- Query-based metric alerts
- Azure Monitor solutions like Container Insights and App Insights (when using OTel metrics with AMW as data source)
- Prometheus HTTP APIs

When querying programmatically, users pass an HTTP header: x-ms-azure-scoping: <ARM Resource ID>

Scoping supports a single:

- Individual resource
- Resource group
- Subscription

At this time, scoping is only supported for a single scope at a time, but comma-separated multi-resource scoping will be added by the end of 2025.
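For teams calling the Prometheus HTTP APIs directly, the sketch below shows roughly what a resource-scoped request could look like. Treat it as an assumption-laden illustration: the region, resource ID, and PromQL expression are placeholders, the /api/v1/query path is the standard Prometheus HTTP API route, and the token audience shown is an assumption, so confirm both against the official documentation.

```python
# Rough sketch of a resource-scoped PromQL query against the new endpoint.
# Placeholders/assumptions: region, resource ID, query, and the token audience
# are illustrative; verify the exact values in the official documentation.
import requests
from azure.identity import DefaultAzureCredential

REGION = "eastus"  # pick the region closest to your AMWs
SCOPE_RESOURCE_ID = (
    "/subscriptions/00000000-0000-0000-0000-000000000000"
    "/resourceGroups/my-rg/providers/Microsoft.ContainerService/managedClusters/my-aks"
)

credential = DefaultAzureCredential()
token = credential.get_token("https://prometheus.monitor.azure.com/.default")  # assumed audience

response = requests.get(
    f"https://query.{REGION}.prometheus.monitor.azure.com/api/v1/query",
    params={"query": "up"},  # any PromQL expression
    headers={
        "Authorization": f"Bearer {token.token}",
        # Scope the query to a resource, resource group, or subscription.
        "x-ms-azure-scoping": SCOPE_RESOURCE_ID,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["data"]["result"])
```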
Who Can Benefit?

- Application Teams: Query metrics for their own resources without needing AMW access.
- Central Monitoring Teams: Maintain control of AMWs while enabling secure, scoped access for app teams.
- DevOps Engineers: Respond to alerts and troubleshoot specific resources without needing to locate the AMW(s) storing the metrics they need.
- Grafana Users: Configure dashboards scoped to subscriptions or resource groups with dynamic variables, without needing to identify the AMW(s) storing their metrics.

When Is This Available?

Microsoft.* dimension stamping is already complete and ongoing for all AMWs. Public preview of the resource-centric query endpoint begins October 10th, 2025. Starting on that date, all newly created AMWs will default to resource-context access mode.

What is the AMW “access control mode”?

The access control mode is a setting on each workspace that defines how permissions are determined for the workspace.

- Require workspace permissions. This control mode does NOT allow granular resource-level Azure RBAC. To access the workspace, the user must be granted permissions to the workspace. When a user scopes their query to a workspace, workspace permissions apply. When a user scopes their query to a resource, both workspace permissions AND resource permissions are verified. This setting is the default for all workspaces created before October 2025.
- Use resource or workspace permissions. This control mode allows granular Azure RBAC. Users can be granted access to only data associated with resources they can view by assigning Azure read permission. When a user scopes their query to a workspace, workspace permissions apply. When a user scopes their query to a resource, only resource permissions are verified, and workspace permissions are ignored. This setting is the default for all workspaces created after October 2025.

Read about how to change the control mode for your workspaces here.

Final Thoughts

Resource-centric query brings AMWs in line with Azure-native experiences, enabling secure, scalable, and intuitive observability. Whether you’re managing thousands of VMs, deploying AKS clusters, or building custom apps with OpenTelemetry, this feature empowers you to monitor in the context of your workloads or resources rather than needing to first query the AMW(s) and then filter down on what you’re looking for. To get started, simply navigate to your resource’s Metrics blade after October 10th, 2025, or configure your Grafana data source to use the new query endpoint.

GA: DCasv6 and ECasv6 confidential VMs based on 4th Generation AMD EPYC™ processors
Today, Azure has expanded its confidential computing offerings with the general availability of the DCasv6 and ECasv6 confidential VM series in regions Korea Central, South Africa North, Switzerland North, UAE North, UK South, West Central US. These VMs are powered by 4th generation AMD EPYC™ processors and feature advanced Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP) technology. These confidential VMs offer: Hardware-rooted attestation Memory encryption in multi-tenant environments Enhanced data confidentiality Protection against cloud operators, administrators, and insider threats You can get started today by creating confidential VMs in the Azure portal as explained here. Highlights: 4th generation AMD EPYC processors with SEV-SNP 25% performance improvement over previous generation Ability to rotate keys online AES-256 memory encryption enabled by default Up to 96 vCPUs and 672 GiB RAM for demanding workloads Streamlined Security Organizations in certain regulated industries and sovereign customers migrating to Microsoft Azure need strict security and compliance across all layers of the stack. With Azure Confidential VMs, organizations can ensure the integrity of the boot sequence and the OS kernel while helping administrators safeguard sensitive data against advanced and persistent threats. The DCasv6 and ECasv6 family of confidential VMs support online key rotation to give organizations the ability to dynamically adapt their defenses to rapidly evolving threats. Additionally, these new VMs include AES-256 memory encryption as a default feature. Customers have the option to use Virtualization-Based Security (VBS) in Windows, which is currently in preview to protect private keys from exfiltration via the Guest OS or applications. With VBS enabled, keys are isolated within a secure process, allowing key operations to be carried out without exposing them outside this environment. Faster Performance In addition to the newly announced security upgrades, the new DCasv6 and ECasv6 family of confidential VMs have demonstrated up to 25% improvement in various benchmarks compared to our previous generation of confidential VMs powered by AMD. Organizations that need to run complex workflows like combining multiple private data sets to perform joint analysis, medical research or Confidential AI services can use these new VMs to accelerate their sensitive workload faster than ever before. "While we began our journey with v5 confidential VMs, now we’re seeing noticeable performance improvements with the new v6 confidential VMs based on 4th Gen AMD EPYC “Genoa” processors. These latest confidential VMs are being rolled out across many Azure regions worldwide, including the UAE. So as v6 becomes available in more regions, we can deploy AMD based confidential computing wherever we need, with the same consistency and higher performance." — Mohammed Retmi, Vice President - Sovereign Public Cloud, at Core42, a G42 company. "KT is leveraging Azure confidential computing to secure sensitive and regulated data from its telco business in the cloud. With new V6 CVM offerings in Korea Central Region, KT extends its use to help Korean customers with enhanced security requirements, including regulated industries, benefit from the highest data protection as well as the fastest performance by the latest AMD SEV-SNP technology through its Secure Public Cloud built with Azure confidential computing." 
— Woojin Jung, EVP, KT Corporation

Kubernetes support

Deploy resilient, globally available applications on confidential VMs with our managed Kubernetes experience - Azure Kubernetes Service (AKS). AKS now supports the new DCasv6 and ECasv6 family of confidential VMs, enabling organizations to easily deploy, scale and manage confidential Kubernetes clusters on Azure, streamlining developer workflows and reducing manual tasks with integrated continuous integration and continuous delivery (CI/CD) pipelines. AKS brings integrated monitoring and logging to confidential VM node pools, with in-depth performance and health insights into the clusters and containerized applications.

Azure Linux 3.0 and Ubuntu 24.04 support are now in preview. AKS integration in this generation of confidential VMs also brings support for Azure Linux 3.0, which contains the most essential packages to be resource efficient and a secure, hardened Linux kernel specifically tuned for Azure cloud deployments. Ubuntu 24.04 clusters are also supported in addition to Azure Linux 3.0. Organizations wanting to ease the orchestration issues associated with deploying, scaling and managing hundreds of confidential VM node pools can now choose either of these two for their node pools.

General purpose & Memory-intensive workloads

Featuring general-purpose-optimized memory-to-vCPU ratios and support for up to 96 vCPUs and 384 GiB RAM, the DCasv6-series delivers enterprise-grade performance. The DCasv6-series enables organizations to run sensitive workloads with hardware-based security guarantees, making them ideal for applications processing regulated or confidential data. For more memory-demanding workloads that exceed even the capabilities of the DCasv6 series, the new ECasv6-series offers high memory-to-vCPU ratios with increased scalability up to 96 vCPUs and 672 GiB of RAM, nearly doubling the memory capacity of DCasv6.

You can get started today by creating confidential VMs in the Azure portal as explained here.

Additional Resources:

- Quickstart: Create confidential VM with Azure portal
- Quickstart: Create confidential VM with ARM template
- Azure confidential virtual machines FAQ

Search Less, Build More: Inner Sourcing with GitHub CoPilot and ADO MCP Server
Developers burn cycles context‑switching: opening five repos to find a logging example, searching a wiki for a data masking rule, scrolling chat history for the latest pipeline pattern. Organisations that I speak to are often on the path of transformational platform engineering projects but always have the fear or doubt of "what if my engineers don't use these resources". While projects like Backstage still play a pivotal role in inner sourcing and discoverability, I also empathise with developers who would argue "How would I even know in the first place which modules have or haven't been created for reuse?".

In this blog we explore how we can ensure organisational standards and developer satisfaction without any heavy lifting on either side: no custom model training, no rewriting or relocating of repositories, and no stagnant local data. Using GitHub CoPilot + the Azure DevOps MCP server (with the free `code_search` extension) we turn the IDE into an organizational knowledge interface. Instead of guessing or re‑implementing, engineers can start scaffolding projects or solving issues as they would normally (hopefully using CoPilot) and without extra prompting. GitHub CoPilot can lean into organisational standards and ensure recommendations are made with code snippets directly generated from existing examples.

What Is the Azure DevOps MCP Server + code_search Extension?

MCP (Model Context Protocol) is an open standard that lets agents (like GitHub Copilot) pull in structured, on-demand context from external systems. MCP servers contain natural language explanations of the tools that the agent can utilise, allowing dynamic decisions about when to use certain toolsets over others. The Azure DevOps MCP Server is the ADO product team's implementation of that standard. It exposes your ADO environment in a way CoPilot can consume. Out of the box it gives you access to:

- Projects – list and navigate across projects in your organization.
- Repositories – browse repos, branches, and files.
- Work items – surface user stories, bugs, or acceptance criteria.
- Wikis – pull policies, standards, and documentation.

This means CoPilot can ground its answers in live ADO content, instead of hallucinating or relying only on what’s in the current editor window. The ADO server runs locally from your own machine to ensure that all sensitive project information remains within your secure network boundary. This also means that existing permissions on ADO objects such as Projects or Repositories are respected.

The wiki search tooling available out of the box with the ADO MCP server is very useful; however, if I am honest, I have seen these wikis go unused, with documentation stored elsewhere, either inside the repository or in a project management tool. This means any tool that needs to implement code requires the ability to accurately search the code stored in the repositories themselves. That is where the code_search extension enablement in ADO is so important. Most organisations have this enabled already, however it is worth noting that this prerequisite is the real unlock of cross-repo search. This allows CoPilot to:

- Query for symbols, snippets, or keywords across all repos.
- Retrieve usage examples from code, not just docs.
- Locate standards (like logging wrappers or retry policies) wherever they live.
- Back every recommendation with specific source lines.

In short: MCP connects CoPilot to Azure DevOps. code_search makes that connection powerful by turning it into a discovery engine.
What is the relevance of CoPilot Instructions?

One of the less obvious but most powerful features of GitHub CoPilot is its ability to follow instructions files. CoPilot automatically looks for these files and uses them as a “playbook” for how it should behave. There are different types of instructions you can provide:

- Organisational instructions – apply across your entire workspace, regardless of which repo you’re in.
- Repo-specific instructions – scoped to a particular repository, useful when one project has unique standards or patterns.
- Personal instructions – smaller overrides layered on top of global rules when a local exception applies. (Stored in .github/copilot-instructions.md)

In this solution, I’m using a single personal instructions file. It tells CoPilot:

- When to search (e.g., always query repos and wikis before answering a standards question).
- Where to look (Azure DevOps repos, wikis, and with code_search, the code itself).
- How to answer (responses must cite the repo/file/line or wiki page; if no source is found, say so).
- How to resolve conflicts (prefer dated wiki entries over older README fragments).

As a small example, a section of a CoPilot instructions file could look like this:

# GitHub Copilot Instructions for Azure DevOps MCP Integration

This project uses Azure DevOps with MCP server integration to provide organizational context awareness. Always check to see if the Azure DevOps MCP server has a tool relevant to the user's request.

## Core Principles

### 1. Azure DevOps Integration
- **Always prioritize Azure DevOps MCP tools** when users ask about:
  - Work items, stories, bugs, tasks
  - Pull requests and code reviews
  - Build pipelines and deployments
  - Repository operations and branch management
  - Wiki pages and documentation
  - Test plans and test cases
  - Project and team information

### 2. Organizational Context Awareness
- Before suggesting solutions, **check existing organizational patterns** by:
  - Searching code across repositories for similar implementations
  - Referencing established coding standards and frameworks
  - Looking for existing shared libraries and utilities
  - Checking architectural decision records (ADRs) in wikis

### 3. Cross-Repository Intelligence
- When providing code suggestions:
  - **Search for existing patterns** in other repositories first
  - **Reference shared libraries** and common utilities
  - **Maintain consistency** with organizational standards
  - **Suggest reusable components** when appropriate

## Tool Usage Guidelines

### Work Items and Project Management
When users mention bugs, features, tasks, or project planning:
```
✅ Use: wit_my_work_items, wit_create_work_item, wit_update_work_item
✅ Use: wit_list_backlogs, wit_get_work_items_for_iteration
✅ Use: work_list_team_iterations, core_list_projects
```

The result...

To test this I created 3 ADO Projects, each with between 1-2 repositories. The repositories were light, with only ReadMes inside containing descriptions of the "repo" and some example code snippets for usage. I then created a brand-new workspace with no context apart from a CoPilot instructions document (which could be part of a repo scaffold or organisation wide) which tells CoPilot to search code and the wikis across all ADO projects in my demo environment. It returns guidance and standards from all available repos and starts to use them to formulate its response. In the screenshot I have highlighted some key parts with red boxes.
The first is a section of the readme that CoPilot has identified in its response; that part is also highlighted within the CoPilot chat response. I have highlighted the rather generic prompt I used to get this response at the bottom of that window too. Above, I have highlighted CoPilot using the MCP server tooling, searching through projects, repos and code. Finally, the largest box highlights the instructions given to CoPilot on how to search and how easily these could be optimised or changed depending on the requirements and organisational coding standards.

How did I implement this?

Implementation is actually incredibly simple. As mentioned, I created multiple projects and repositories within my ADO organisation in order to test cross-project & cross-repo discovery. I then did the following:

- Enable code_search - in your Azure DevOps organization (Marketplace → install extension).
- Login to Azure - Use the AZ CLI to authenticate to Azure with "az login".
- Create the .vscode/mcp.json file - Snippet is provided below; the organisation name should be changed to your organisation's name.
- Start and enable your MCP server - In the mcp.json file you should see a "Start" button. Using the snippet below you will be prompted to add your organisation name. Ensure your CoPilot agent has access to the server under "tools" too. View this setup guide for full setup instructions (azure-devops-mcp/docs/GETTINGSTARTED.md at main · microsoft/azure-devops-mcp).
- Create a CoPilot Instructions file - with a search-first directive. I have inserted the full version used in this demo at the bottom of the article.
- Experiment with Prompts – Start generic (“How do we secure APIs?”). Review the output and tools used and then tailor your instructions.

Considerations

While this is a great approach, I do still have some considerations when going to production:

- Latency - Using MCP tooling on every request will add some latency to developer requests. We can look at optimizing usage through CoPilot instructions to better identify when CoPilot should or shouldn't use the ADO MCP server.
- Complex Projects and Repositories - While I have demonstrated cross-project and cross-repository retrieval, my demo environment does not truly simulate an enterprise ADO environment. Performance should be tested and closely monitored as organisational complexity increases.
- Public Preview - The ADO MCP server is moving quickly but is currently still in public preview.

We have demonstrated in this article how quickly we can make our Azure DevOps content discoverable. While there are considerations moving forward, this showcases a direction towards agentic inner sourcing. Feel free to comment below how you think this approach could be extended or augmented for other use cases!

Resources

MCP Server Config (/.vscode/mcp.json)

{
  "inputs": [
    {
      "id": "ado_org",
      "type": "promptString",
      "description": "Azure DevOps organization name (e.g. 'contoso')"
    }
  ],
  "servers": {
    "ado": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@azure-devops/mcp", "${input:ado_org}"]
    }
  }
}

CoPilot Instructions (/.github/copilot-instructions.md)

# GitHub Copilot Instructions for Azure DevOps MCP Integration

This project uses Azure DevOps with MCP server integration to provide organizational context awareness. Always check to see if the Azure DevOps MCP server has a tool relevant to the user's request.

## Core Principles
### 1. Azure DevOps Integration
- **Always prioritize Azure DevOps MCP tools** when users ask about:
  - Work items, stories, bugs, tasks
  - Pull requests and code reviews
  - Build pipelines and deployments
  - Repository operations and branch management
  - Wiki pages and documentation
  - Test plans and test cases
  - Project and team information

### 2. Organizational Context Awareness
- Before suggesting solutions, **check existing organizational patterns** by:
  - Searching code across repositories for similar implementations
  - Referencing established coding standards and frameworks
  - Looking for existing shared libraries and utilities
  - Checking architectural decision records (ADRs) in wikis

### 3. Cross-Repository Intelligence
- When providing code suggestions:
  - **Search for existing patterns** in other repositories first
  - **Reference shared libraries** and common utilities
  - **Maintain consistency** with organizational standards
  - **Suggest reusable components** when appropriate

## Tool Usage Guidelines

### Work Items and Project Management
When users mention bugs, features, tasks, or project planning:
```
✅ Use: wit_my_work_items, wit_create_work_item, wit_update_work_item
✅ Use: wit_list_backlogs, wit_get_work_items_for_iteration
✅ Use: work_list_team_iterations, core_list_projects
```

### Code and Repository Operations
When users ask about code, branches, or pull requests:
```
✅ Use: repo_list_repos_by_project, repo_list_pull_requests_by_repo
✅ Use: repo_list_branches_by_repo, repo_search_commits
✅ Use: search_code for finding patterns across repositories
```

### Documentation and Knowledge Sharing
When users need documentation or want to create/update docs:
```
✅ Use: wiki_list_wikis, wiki_get_page_content, wiki_create_or_update_page
✅ Use: search_wiki for finding existing documentation
```

### Build and Deployment
When users ask about builds, deployments, or CI/CD:
```
✅ Use: pipelines_get_builds, pipelines_get_build_definitions
✅ Use: pipelines_run_pipeline, pipelines_get_build_status
```

## Response Patterns

### 1. Discovery First
Before providing solutions, always discover organizational context:
```
"Let me first check what patterns exist in your organization..."
→ Search code, check repositories, review existing work items
```

### 2. Reference Organizational Standards
When suggesting code or approaches:
```
"Based on patterns I found in your [RepositoryName] repository..."
"Following your organization's standard approach seen in..."
"This aligns with the pattern established in [TeamName]'s implementation..."
```

### 3. Actionable Integration
Always offer to create or update Azure DevOps artifacts:
```
"I can create a work item for this enhancement..."
"Should I update the wiki page with this new pattern?"
"Let me link this to the current iteration..."
```

## Specific Scenarios

### New Feature Development
1. **Search existing repositories** for similar features
2. **Check architectural patterns** and shared libraries
3. **Review related work items** and planning documents
4. **Suggest implementation** based on organizational standards
5. **Offer to create work items** and documentation

### Bug Investigation
1. **Search for similar issues** across repositories and work items
2. **Check related builds** and recent changes
3. **Review test results** and failure patterns
4. **Provide solution** based on organizational practices
5. **Offer to create/update** bug work items and documentation

### Code Review and Standards
1. **Compare against organizational patterns** found in other repositories
2. **Reference coding standards** from wiki documentation
3. **Suggest improvements** based on established practices
4. **Check for reusable components** that could be leveraged

### Documentation Requests
1. **Search existing wikis** for related content
2. **Check for ADRs** and technical documentation
3. **Reference patterns** from similar projects
4. **Offer to create/update** wiki pages with findings

## Error Handling
If Azure DevOps MCP tools are not available or fail:
1. **Inform the user** about the limitation
2. **Provide alternative approaches** using available information
3. **Suggest manual steps** for Azure DevOps integration
4. **Offer to help** with configuration if needed

## Best Practices

### Always Do:
- ✅ Search organizational context before suggesting solutions
- ✅ Reference existing patterns and standards
- ✅ Offer to create/update Azure DevOps artifacts
- ✅ Maintain consistency with organizational practices
- ✅ Provide actionable next steps

### Never Do:
- ❌ Suggest solutions without checking organizational context
- ❌ Ignore existing patterns and implementations
- ❌ Provide generic advice when specific organizational context is available
- ❌ Forget to offer Azure DevOps integration opportunities

---

**Remember: The goal is to provide intelligent, context-aware assistance that leverages the full organizational knowledge base available through Azure DevOps while maintaining development efficiency and consistency.**

Using Application Gateway to secure access to the Azure OpenAI Service: Customer success story
Introduction

A large enterprise customer set out to build a generative AI application using Azure OpenAI. While the app would be hosted on-premises, the customer wanted to leverage the latest large language models (LLMs) available through Azure OpenAI. However, they faced a critical challenge: how to securely access Azure OpenAI from an on-prem environment without private network connectivity or a full Azure landing zone. This blog post walks through how the customer overcame these limitations using Application Gateway as a reverse proxy in front of Azure OpenAI, along with other Azure services, to meet their security and governance requirements.

Customer landscape and challenges

The customer’s environment lacked:

- Private network connectivity (no Site-to-Site VPN or ExpressRoute), because they were using a new Azure Government environment and did not yet have a cloud operations team set up
- Common network topology such as Virtual WAN and Hub-Spoke network design
- A full Enterprise Scale Landing Zone (ESLZ) of common infrastructure
- Security components like private DNS zones, DNS resolvers, API Management, and firewalls

This meant they couldn’t use private endpoints or other standard security controls typically available in mature Azure environments. Security was non-negotiable; public access to Azure OpenAI was unacceptable. The customer needed to:

- Restrict access to specific IP CIDR ranges from on-prem user machines and data centers
- Limit ports communicating with Azure OpenAI
- Implement a reverse proxy with SSL termination and Web Application Firewall (WAF)
- Use a customer-provided SSL certificate to secure traffic

Proposed solution

To address these challenges, the customer designed a secure architecture using the following Azure components.

Key Azure services

- Application Gateway – Layer 7 reverse proxy, SSL termination & Web Application Firewall (WAF)
- Public IP – Allows communication over the public internet between the customer’s IP addresses & Azure IP addresses
- Virtual Network – Allows control of network traffic in Azure
- Network Security Group (NSG) – Layer 4 network controls such as port numbers and service tags using five-tuple information (source, source port, destination, destination port, protocol)
- Azure OpenAI – Large Language Model (LLM)

NSG configuration

- Inbound rules: Allow traffic only from specific IP CIDR ranges and HTTP(S) ports
- Outbound rules: Target AzureCloud.<region> with HTTP(S) ports (no service tag for Azure OpenAI yet)

Application Gateway setup

- SSL certificate: Issued by the customer’s on-prem Certificate Authority
- HTTPS listener: Uses the on-prem certificate to terminate SSL
- Traffic flow: Decrypt incoming traffic, scan with WAF, re-encrypt using a well-known Azure CA, and override the backend hostname
- Custom health probe: Configured to detect a 404 response from Azure OpenAI (since no health check endpoint exists)

Azure OpenAI configuration

- IP firewall restrictions: Only allow traffic from the Application Gateway subnet

Outcome

By combining Application Gateway, NSGs, and custom SSL configurations, the customer successfully secured their Azure OpenAI deployment—without needing a full ESLZ or private connectivity. This approach enabled them to move forward with their generative AI app while maintaining enterprise-grade security and governance.
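As a rough sketch of what the application side looks like once the gateway is in place, the snippet below sends a chat completion request to the gateway's custom hostname instead of the default *.openai.azure.com endpoint. The hostname, deployment name, API version, and CA bundle path are illustrative placeholders, not details from the customer's environment.

```python
# Hypothetical client call from the on-prem app, routed through Application Gateway.
# The gateway hostname, deployment name, API version, and CA bundle are placeholders.
import requests

GATEWAY_HOST = "https://openai-gw.contoso.internal"  # custom domain on the App Gateway listener
DEPLOYMENT = "gpt-4o"                                # your Azure OpenAI deployment name
API_VERSION = "2024-02-01"                           # confirm the currently supported version

response = requests.post(
    f"{GATEWAY_HOST}/openai/deployments/{DEPLOYMENT}/chat/completions",
    params={"api-version": API_VERSION},
    headers={"api-key": "<azure-openai-api-key>"},
    json={
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarize our incident report template."},
        ]
    },
    # Trust the certificate issued by the on-prem CA that the gateway presents.
    verify="/etc/ssl/certs/contoso-internal-ca.pem",
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Because the gateway overrides the backend hostname and re-encrypts traffic to Azure OpenAI, the application only ever talks to the customer-controlled endpoint, while the NSG rules and the Azure OpenAI IP firewall keep any other path closed.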
Unlock visibility, flexibility, and cost efficiency with Application Gateway logging enhancements

Introduction

In today’s cloud-native landscape, organizations are accelerating the deployment of web applications at unprecedented speed. But with rapid scale comes increased complexity—and a growing need for deep, actionable visibility into the underlying infrastructure. As businesses embrace modern architectures, the demand for scalable, secure, and observable web applications continues to rise. Azure Application Gateway is evolving to meet these needs, offering enhanced logging capabilities that empower teams to gain richer insights, optimize costs, and simplify operations.

This article highlights three powerful enhancements that are transforming how teams use logging in Azure Application Gateway:

- Resource-specific tables
- Data collection rule (DCR) transformations
- Basic log plan

Resource-specific tables improve organization and query performance. DCR transformations give teams fine-grained control over the structure and content of their log data. And the basic log plan makes comprehensive logging more accessible and cost-effective. Together, these capabilities deliver a smarter, more structured, and cost-aware approach to observability.

Resource-specific tables: Structured and efficient logging

Azure Monitor stores logs in a Log Analytics workspace powered by Azure Data Explorer. Previously, when you configured Log Analytics, all diagnostic data for Application Gateway was stored in a single, generic table called AzureDiagnostics. This approach often led to slower queries and increased complexity, especially when working with large datasets. With resource-specific logging, Application Gateway logs are now organised into dedicated tables, each optimised for a specific log type:

- AGWAccessLogs – Contains access log information
- AGWPerformanceLogs – Contains performance metrics and data
- AGWFirewallLogs – Contains Web Application Firewall (WAF) log data

This structured approach delivers several key benefits:

- Simplified queries – Reduces the need for complex filtering and data manipulation
- Improved schema discovery – Makes it easier to understand log structure and fields
- Enhanced performance – Speeds up both ingestion and query execution
- Granular access control – Allows you to grant Azure role-based access control (RBAC) permissions on specific tables

Example: Azure diagnostics vs. resource-specific table approach

Traditional AzureDiagnostics query:

AzureDiagnostics
| where ResourceType == "APPLICATIONGATEWAYS" and Category == "ApplicationGatewayAccessLog"
| extend clientIp_s = todynamic(properties_s).clientIP
| where clientIp_s == "203.0.113.1"

New resource-specific table query:

AGWAccessLogs
| where ClientIP == "203.0.113.1"

The resource-specific approach is cleaner, faster, and easier to maintain, as it eliminates complex filtering and data manipulation.

Data collection rules (DCR) log transformations: Take control of your log pipeline

DCR transformations offer a flexible way to shape log data before it reaches your Log Analytics workspace. Instead of ingesting raw logs and filtering them post-ingestion, you can now filter, enrich, and transform logs at the source, giving you greater control and efficiency.

Why DCR transformations matter:

- Optimize costs: Reduce ingestion volume by excluding non-essential data
- Support compliance: Strip out personally identifiable information (PII) before logs are stored, helping meet GDPR and CCPA requirements
- Manage volume: Ideal for high-throughput environments where only actionable data is needed

Real-world use cases

Whether you're handling high-traffic e-commerce workloads, processing sensitive healthcare data, or managing development environments with cost constraints, DCR transformations help tailor your logging strategy to meet specific business and regulatory needs. For implementation guidance and best practices, refer to Transformations Azure Monitor - Azure Monitor.
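To make the idea concrete, here is a small, hypothetical sketch of the dataFlows fragment of a data collection rule that masks part of the client IP before ingestion. The stream and destination names are placeholders, and the KQL operates on the virtual source table used by DCR transformations; adapt the column names and logic to your own schema and compliance requirements.

```python
# Hypothetical sketch of a DCR dataFlows fragment whose transformation masks the
# last two octets of ClientIP before Application Gateway access logs are ingested.
# Stream and destination names are placeholders; adjust them to your environment.
import json

transform_kql = (
    "source "
    "| extend ClientIP = strcat("
    "tostring(split(ClientIP, '.')[0]), '.', "
    "tostring(split(ClientIP, '.')[1]), '.x.x')"
)

data_flow = {
    "streams": ["Microsoft-Table-AGWAccessLogs"],  # placeholder stream name
    "destinations": ["myLogAnalyticsWorkspace"],   # placeholder destination name
    "transformKql": transform_kql,
}

print(json.dumps(data_flow, indent=2))
```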
Basic log plan - Cost-effective logging for low-priority data

Not all logs require real-time analysis. Some are used for occasional debugging or compliance audits. The Basic log plan in Log Analytics provides a cost-effective way to retain high-volume, low-priority diagnostic data—without paying for premium features you may not need.

When to use the Basic log plan

- Save on costs: Pay-as-you-go pricing with lower ingestion rates
- Debugging and forensics: Retain data for troubleshooting and incident analysis, without paying premium costs for features you don't use regularly

Understanding the trade-offs

While the Basic plan offers significant savings, it comes with limitations:

- No real-time alerts: Not suitable for monitoring critical health metrics
- Query constraints: Limited KQL functionality and additional query costs

Choose the Basic plan when deep analytics and alerting aren’t required, and focus premium resources on critical logs.

Building a smart logging strategy with Azure Application Gateway

To get the most out of Azure Application Gateway logging, combine the strengths of all three capabilities:

- Assess your needs: Identify which logs require real-time monitoring versus those used for compliance or debugging
- Design for efficiency: Use the Basic log plan for low-priority data, and reserve standard tiers for critical logs
- Transform at the source: Apply DCR transformations to reduce costs and meet compliance before ingestion
- Query with precision: Use resource-specific tables to simplify queries and improve performance

This integrated approach helps teams achieve deep visibility, maintain compliance, and manage costs.

Introducing the Azure Maps Geocode Autocomplete API
We’re thrilled to unveil the public preview of the Azure Maps Geocode Autocomplete API, a powerful REST service designed to modernize and elevate autocomplete capabilities across Microsoft’s mapping platforms. If you’ve ever started typing an address into a search bar and immediately seen a list of relevant suggestions—whether it’s for a landmark or your own home—you’ve already experienced the convenience of autocomplete. What’s less obvious is just how complex it is to deliver those suggestions quickly, accurately, and in a format that modern applications can use. That’s exactly the challenge this new API is designed to solve.

Why Autocomplete Matters More Than Ever

The Azure Maps Geocode Autocomplete API is the natural successor to the Bing Maps Autosuggest REST API, designed to meet the growing demand for intelligent, real-time location suggestions across a wide range of applications. It’s an ideal solution for developers who need reliable and scalable autocomplete functionality—whether for small business websites or large-scale enterprise systems. Key use cases include:

- Store locators: When a customer starts typing “New Yo…” into a store locator, autocomplete instantly suggests “New York, N.Y.” With just a click, the map centers on the right location—making it fast and effortless to find the nearest branch.
- Rideshare or dispatching platforms: A rideshare driver needs to pick up a passenger at “One Microsoft Way.” Instead of typing out the full address, the driver starts entering “One Micro…” and the app instantly offers the correct road segment in Redmond, Washington.
- Delivery services: A delivery app can limit suggestions to postal codes within a specific region, ensuring the addresses customers choose are deliverable and reducing the risk of failed shipments.
- Any web UIs requiring location input: From real estate search to form autofill, autocomplete enhances the user experience wherever accurate location entry is needed.

What the API Can Do

The Geocode Autocomplete API is designed to deliver fast, relevant, and structured suggestions as users type. Key capabilities include:

- Entity suggestions: Supports both Place (e.g., administrative districts, populated places, landmarks, postal codes) and Address (e.g., roads, point addresses) entities.
- Ranking: Results can be ranked based on entity popularity, user location (coordinates), and bounding box (bbox).
- Structured output: Returns suggestions with structured address formats, making integration seamless.
- Multilingual support: Set up query language preferences via the Accept-Language parameter.
- Flexible filtering: You can filter suggestions by specifying a country or region using countryRegion, or by targeting a specific entity subtype using resultType. This allows you to extract entities with precise categorization—for example, you can filter results to return only postal codes to match the needs of a location-based selection input in your web application.

How It Works

The Geocode Autocomplete API is accessed via the following endpoint:

https://atlas.microsoft.com/search/geocode:autocomplete?api-version=2025-06-01-preview

This endpoint provides autocomplete-style suggestions for addresses and places. With just a few parameters, like your Azure Maps subscription key, a query string, and optionally user coordinates or a bounding box, you can start returning structured suggestions instantly.
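Before walking through the raw REST examples, here is a small illustrative sketch of calling the endpoint from Python. The subscription key, query text, and coordinates are placeholders, and the coordinate format shown is an assumption, so check the API reference for the expected order and parameters.

```python
# Illustrative client sketch for the Geocode Autocomplete endpoint described above.
# The subscription key, query text, and coordinates are placeholders; confirm the
# expected coordinate format and parameter values against the API reference.
import requests

AZURE_MAPS_KEY = "<your-azure-maps-subscription-key>"

response = requests.get(
    "https://atlas.microsoft.com/search/geocode:autocomplete",
    params={
        "api-version": "2025-06-01-preview",
        "subscription-key": AZURE_MAPS_KEY,
        "query": "new yo",
        "coordinates": "-73.98,40.76",  # assumed lon,lat near the user; verify in the docs
        "top": 3,
    },
    timeout=10,
)
response.raise_for_status()

for feature in response.json().get("features", []):
    address = feature["properties"]["address"]
    print(feature["properties"]["type"], "->", address["formattedAddress"])
```

The REST examples below show the same requests in raw form, along with the shape of the responses.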
Developers can further issue geocode service with the selected/ideal entity as query to locate the entity on map, which is a common scenario for producing interactive mapping experiences. Let’s look at below examples: Example 1: Place Entity Autocomplete GET https://atlas.microsoft.com/search/geocode:autocomplete?api-version=2025-06-01-preview &subscription-key={YourAzureMapsKey} &coordinates={coordinates} &query=new yo &top=3 A user starts typing “new yo.” The API quickly returns results like “New York City” and “New York State,” each complete with structured metadata you can plug directly into your app. { "type": "FeatureCollection", "features": [ { "type": "Feature", "properties": { "typeGroup": "Place", "type": "PopulatedPlace", "geometry": null, "address": { "locality": "New York", "adminDistricts": [ { "name": "New York", "shortName": "N.Y." } ], "countryRegions": { "ISO": "US", "name": "United States" }, "formattedAddress": "New York, N.Y." } } }, { "type": "Feature", "properties": { "typeGroup": "Place", "type": "AdminDivision1", "geometry": null, "address": { "locality": "", "adminDistricts": [ { "name": "New York", "shortName": "N.Y." } ], "countryRegions": { "ISO": "US", "name": "United States" }, "formattedAddress": "New York" } } }, { "type": "Feature", "properties": { "typeGroup": "Place", "type": "AdminDivision2", "geometry": null, "address": { "locality": "", "adminDistricts": [ { "name": "New York", "shortName": "N.Y." }, { "name": "New York County" } ], "countryRegions": { "ISO": "US", "name": "United States" }, "formattedAddress": "New York County" } } } ] } Example 2: Address Entity Autocomplete GET https://atlas.microsoft.com/search/geocode:autocomplete?api-version=2025-06-01-preview &subscription-key={YourAzureMapsKey} &bbox={bbox} &query=One Micro &top=3 &countryRegion=US A query for “One Micro” scoped to the U.S. yields “NE One Microsoft Way, Redmond, WA 98052, United States.” That’s a complete, structured address ready to be mapped, dispatched, or stored. { "type": "FeatureCollection", "features": [ { "type": "Feature", "properties": { "typeGroup": "Address", "type": "RoadBlock", "geometry": null, "address": { "locality": "Redmond", "adminDistricts": [ { "name": "Washington", "shortName": "WA" }, { "name": "King County" } ], "countryRegions": { "ISO": "US", "name": "United States" }, "postalCode": "98052", "streetName": "NE One Microsoft Way", "addressLine": "", "formattedAddress": "NE One Microsoft Way, Redmond, WA 98052, United States" } } } ] } Example 3: Integration with Web Application Below sample shows user enter query and autocomplete service provide a series of suggestions based on user query and location. Pricing and Billing The Geocode Autocomplete API uses the same metering model as the Azure Maps Search service. For billing purposes, every 10 Geocode Autocomplete API requests are counted as one billable transaction. This approach keeps usage and costs consistent with what developers are already familiar with in Azure Maps. Ready to Build Smarter Location Experiences? Whether you're powering a store locator, enhancing address entry, or building a dynamic dispatch system, the new Geocode Autocomplete API gives you the precision, flexibility, and performance needed to deliver seamless location intelligence. With real-world use cases already proving its value, now is the perfect time to integrate this service into your applications and unlock richer, more interactive mapping experiences. Let’s build what’s next—faster, smarter, and more intuitive. 
Resources to Get Started

- Geocode Autocomplete REST API Documentation
- Geocode Autocomplete Samples
- Migrate from Bing Maps to Azure Maps
- How to use Azure Maps APIs