# Search Less, Build More: Inner Sourcing with GitHub Copilot and ADO MCP Server
Developers burn cycles context-switching: opening five repos to find a logging example, searching a wiki for a data-masking rule, scrolling chat history for the latest pipeline pattern. Organisations I speak to are often partway through transformational platform engineering projects, but always carry the fear or doubt of "what if my engineers don't use these resources?". While projects like Backstage still play a pivotal role in inner sourcing and discoverability, I also empathise with developers who would argue, "How would I even know in the first place which modules have or haven't been created for reuse?"

In this blog we explore how to uphold organisational standards and developer satisfaction without heavy lifting on either side: no custom model training, no rewriting or relocating of repositories, and no stagnant local data. Using GitHub Copilot + the Azure DevOps MCP server (with the free `code_search` extension), we turn the IDE into an organisational knowledge interface. Instead of guessing or re-implementing, engineers can start scaffolding projects or solving issues as they normally would (hopefully using Copilot) and, without extra prompting, Copilot can lean on organisational standards and ensure recommendations are backed by code snippets generated directly from existing examples.

## What Is the Azure DevOps MCP Server + code_search Extension?

MCP (Model Context Protocol) is an open standard that lets agents (like GitHub Copilot) pull in structured, on-demand context from external systems. MCP servers describe their tools in natural language, allowing the agent to decide dynamically when to use one toolset over another. The Azure DevOps MCP Server is the ADO product team's implementation of that standard. It exposes your ADO environment in a way Copilot can consume. Out of the box it gives you access to:

- Projects – list and navigate across projects in your organisation.
- Repositories – browse repos, branches, and files.
- Work items – surface user stories, bugs, or acceptance criteria.
- Wikis – pull policies, standards, and documentation.

This means Copilot can ground its answers in live ADO content, instead of hallucinating or relying only on what's in the current editor window. The ADO server runs locally on your own machine, so all sensitive project information remains within your secure network boundary. This also means existing permissions on ADO objects such as projects or repositories are respected.

The wiki search tooling available out of the box with the ADO MCP server is very useful; however, in my honest experience these wikis often go unused, with documentation stored elsewhere – either inside the repository or in a project management tool. Any tool that needs to implement code therefore requires the ability to accurately search the code stored in the repositories themselves. That is where enabling the `code_search` extension in ADO is so important. Most organisations have it enabled already, but it is worth noting that this prerequisite is the real unlock of cross-repo search. It allows Copilot to:

- Query for symbols, snippets, or keywords across all repos.
- Retrieve usage examples from code, not just docs.
- Locate standards (like logging wrappers or retry policies) wherever they live.
- Back every recommendation with specific source lines.

In short: MCP connects Copilot to Azure DevOps. `code_search` makes that connection powerful by turning it into a discovery engine.
## What is the relevance of Copilot instructions?

One of the less obvious but most powerful features of GitHub Copilot is its ability to follow instructions files. Copilot automatically looks for these files and uses them as a "playbook" for how it should behave. There are different types of instructions you can provide:

- Organisational instructions – apply across your entire workspace, regardless of which repo you're in.
- Repo-specific instructions – scoped to a particular repository, useful when one project has unique standards or patterns.
- Personal instructions – smaller overrides layered on top of global rules when a local exception applies (stored in `.github/copilot-instructions.md`).

In this solution, I'm using a single personal instructions file. It tells Copilot:

- When to search (e.g., always query repos and wikis before answering a standards question).
- Where to look (Azure DevOps repos, wikis, and – with `code_search` – the code itself).
- How to answer (responses must cite the repo/file/line or wiki page; if no source is found, say so).
- How to resolve conflicts (prefer dated wiki entries over older README fragments).

As a small example, a section of a Copilot instructions file could look like this:

````markdown
# GitHub Copilot Instructions for Azure DevOps MCP Integration

This project uses Azure DevOps with MCP server integration to provide organizational context awareness. Always check to see if the Azure DevOps MCP server has a tool relevant to the user's request.

## Core Principles

### 1. Azure DevOps Integration
- **Always prioritize Azure DevOps MCP tools** when users ask about:
  - Work items, stories, bugs, tasks
  - Pull requests and code reviews
  - Build pipelines and deployments
  - Repository operations and branch management
  - Wiki pages and documentation
  - Test plans and test cases
  - Project and team information

### 2. Organizational Context Awareness
- Before suggesting solutions, **check existing organizational patterns** by:
  - Searching code across repositories for similar implementations
  - Referencing established coding standards and frameworks
  - Looking for existing shared libraries and utilities
  - Checking architectural decision records (ADRs) in wikis

### 3. Cross-Repository Intelligence
- When providing code suggestions:
  - **Search for existing patterns** in other repositories first
  - **Reference shared libraries** and common utilities
  - **Maintain consistency** with organizational standards
  - **Suggest reusable components** when appropriate

## Tool Usage Guidelines

### Work Items and Project Management
When users mention bugs, features, tasks, or project planning:
```
✅ Use: wit_my_work_items, wit_create_work_item, wit_update_work_item
✅ Use: wit_list_backlogs, wit_get_work_items_for_iteration
✅ Use: work_list_team_iterations, core_list_projects
```
````

## The result

To test this I created three ADO projects, each with one or two repositories. The repositories were light, containing only READMEs with descriptions of the "repo" and some example code snippets for usage. I then created a brand-new workspace with no context apart from a Copilot instructions document (which could be part of a repo scaffold or organisation-wide) telling Copilot to search the code and wikis across all ADO projects in my demo environment. It returns guidance and standards from all available repos and starts using them to formulate its response. In the screenshot I have highlighted some key parts with red boxes.
The first is a section of the README that Copilot identified, which is also highlighted within the Copilot chat response; the rather generic prompt I used to get this response is highlighted at the bottom of that window. Above that, I have highlighted Copilot using the MCP server tooling to search through projects, repos, and code. Finally, the largest box highlights the instructions given to Copilot on how to search, and how easily these could be optimised or changed depending on the requirements and organisational coding standards.

## How did I implement this?

Implementation is actually incredibly simple. As mentioned, I created multiple projects and repositories within my ADO organisation to test cross-project and cross-repo discovery. I then did the following:

1. Enable code_search in your Azure DevOps organisation (Marketplace → install extension).
2. Log in to Azure using the Azure CLI: `az login`.
3. Create a `.vscode/mcp.json` file – the snippet is provided below; the organisation name should be changed to your organisation's name.
4. Start and enable your MCP server – in the `mcp.json` file you should see a "Start" button. Using the snippet below you will be prompted to add your organisation name. Ensure your Copilot agent has access to the server under "tools" too. See the full setup guide for complete instructions (azure-devops-mcp/docs/GETTINGSTARTED.md at main · microsoft/azure-devops-mcp).
5. Create a Copilot instructions file with a search-first directive. The full version used in this demo is at the bottom of the article.
6. Experiment with prompts – start generic ("How do we secure APIs?"), review the output and tools used, and then tailor your instructions.

## Considerations

While this is a great approach, I still have some considerations when going to production:

- Latency – using MCP tooling on every request adds some latency to developer requests. Usage can be optimised through Copilot instructions that better identify when Copilot should or shouldn't use the ADO MCP server.
- Complex projects and repositories – while I have demonstrated cross-project and cross-repository retrieval, my demo environment does not truly simulate an enterprise ADO environment. Performance should be tested and closely monitored as organisational complexity increases.
- Public preview – the ADO MCP server is moving quickly but is currently still in public preview.

We have demonstrated in this article how quickly we can make our Azure DevOps content discoverable. While there are considerations moving forward, this showcases a direction towards agentic inner sourcing. Feel free to comment below how you think this approach could be extended or augmented for other use cases!

## Resources

MCP Server Config (`/.vscode/mcp.json`):

```json
{
  "inputs": [
    {
      "id": "ado_org",
      "type": "promptString",
      "description": "Azure DevOps organization name (e.g. 'contoso')"
    }
  ],
  "servers": {
    "ado": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@azure-devops/mcp", "${input:ado_org}"]
    }
  }
}
```

Copilot Instructions (`/.github/copilot-instructions.md`):

````markdown
# GitHub Copilot Instructions for Azure DevOps MCP Integration

This project uses Azure DevOps with MCP server integration to provide organizational context awareness. Always check to see if the Azure DevOps MCP server has a tool relevant to the user's request.

## Core Principles
### 1. Azure DevOps Integration
- **Always prioritize Azure DevOps MCP tools** when users ask about:
  - Work items, stories, bugs, tasks
  - Pull requests and code reviews
  - Build pipelines and deployments
  - Repository operations and branch management
  - Wiki pages and documentation
  - Test plans and test cases
  - Project and team information

### 2. Organizational Context Awareness
- Before suggesting solutions, **check existing organizational patterns** by:
  - Searching code across repositories for similar implementations
  - Referencing established coding standards and frameworks
  - Looking for existing shared libraries and utilities
  - Checking architectural decision records (ADRs) in wikis

### 3. Cross-Repository Intelligence
- When providing code suggestions:
  - **Search for existing patterns** in other repositories first
  - **Reference shared libraries** and common utilities
  - **Maintain consistency** with organizational standards
  - **Suggest reusable components** when appropriate

## Tool Usage Guidelines

### Work Items and Project Management
When users mention bugs, features, tasks, or project planning:
```
✅ Use: wit_my_work_items, wit_create_work_item, wit_update_work_item
✅ Use: wit_list_backlogs, wit_get_work_items_for_iteration
✅ Use: work_list_team_iterations, core_list_projects
```

### Code and Repository Operations
When users ask about code, branches, or pull requests:
```
✅ Use: repo_list_repos_by_project, repo_list_pull_requests_by_repo
✅ Use: repo_list_branches_by_repo, repo_search_commits
✅ Use: search_code for finding patterns across repositories
```

### Documentation and Knowledge Sharing
When users need documentation or want to create/update docs:
```
✅ Use: wiki_list_wikis, wiki_get_page_content, wiki_create_or_update_page
✅ Use: search_wiki for finding existing documentation
```

### Build and Deployment
When users ask about builds, deployments, or CI/CD:
```
✅ Use: pipelines_get_builds, pipelines_get_build_definitions
✅ Use: pipelines_run_pipeline, pipelines_get_build_status
```

## Response Patterns

### 1. Discovery First
Before providing solutions, always discover organizational context:
```
"Let me first check what patterns exist in your organization..."
→ Search code, check repositories, review existing work items
```

### 2. Reference Organizational Standards
When suggesting code or approaches:
```
"Based on patterns I found in your [RepositoryName] repository..."
"Following your organization's standard approach seen in..."
"This aligns with the pattern established in [TeamName]'s implementation..."
```

### 3. Actionable Integration
Always offer to create or update Azure DevOps artifacts:
```
"I can create a work item for this enhancement..."
"Should I update the wiki page with this new pattern?"
"Let me link this to the current iteration..."
```

## Specific Scenarios

### New Feature Development
1. **Search existing repositories** for similar features
2. **Check architectural patterns** and shared libraries
3. **Review related work items** and planning documents
4. **Suggest implementation** based on organizational standards
5. **Offer to create work items** and documentation

### Bug Investigation
1. **Search for similar issues** across repositories and work items
2. **Check related builds** and recent changes
3. **Review test results** and failure patterns
4. **Provide solution** based on organizational practices
5. **Offer to create/update** bug work items and documentation

### Code Review and Standards
1. **Compare against organizational patterns** found in other repositories
2. **Reference coding standards** from wiki documentation
3. **Suggest improvements** based on established practices
4. **Check for reusable components** that could be leveraged

### Documentation Requests
1. **Search existing wikis** for related content
2. **Check for ADRs** and technical documentation
3. **Reference patterns** from similar projects
4. **Offer to create/update** wiki pages with findings

## Error Handling

If Azure DevOps MCP tools are not available or fail:
1. **Inform the user** about the limitation
2. **Provide alternative approaches** using available information
3. **Suggest manual steps** for Azure DevOps integration
4. **Offer to help** with configuration if needed

## Best Practices

### Always Do:
- ✅ Search organizational context before suggesting solutions
- ✅ Reference existing patterns and standards
- ✅ Offer to create/update Azure DevOps artifacts
- ✅ Maintain consistency with organizational practices
- ✅ Provide actionable next steps

### Never Do:
- ❌ Suggest solutions without checking organizational context
- ❌ Ignore existing patterns and implementations
- ❌ Provide generic advice when specific organizational context is available
- ❌ Forget to offer Azure DevOps integration opportunities

---

**Remember: The goal is to provide intelligent, context-aware assistance that leverages the full organizational knowledge base available through Azure DevOps while maintaining development efficiency and consistency.**
````

# How to connect Azure SQL database from Python Function App using managed identity or access token
This blog will demonstrate how to connect to an Azure SQL database from a Python Function App using managed identity or an access token. If you are looking for how to implement this in a Windows App Service, you may refer to this post: https://techcommunity.microsoft.com/t5/apps-on-azure-blog/how-to-connect-azure-sql-database-from-azure-app-service-windows/ba-p/2873397.

Note that the Azure Active Directory managed identity authentication method was added in ODBC Driver version 17.3.1.1 for both system-assigned and user-assigned identities. In the Azure blessed image for Python Functions, the ODBC Driver version is 17.8, which makes it possible to leverage this feature in Linux App Service. Briefly, this post provides step-by-step guidance with sample code and an introduction to the authentication workflow.

Steps:

1. Create a Linux Python Function App from the portal.
2. Set up the managed identity in the new Function App by enabling Identity and saving from the portal. It will generate an Object (principal) ID for you automatically.
3. Assign a role in the Azure SQL database. Search for your own account and save it as admin. Note: alternatively, you can search for the function app's name and set it as admin; that function app would then own admin permission on the database and you can skip steps 4 and 5 as well.
4. Go to the Query editor in the database and be sure to log in using your account set in the previous step rather than a username and password, or step 5 will fail with the exception below.

   "Failed to execute query. Error: Principal 'xxxx' could not be created. Only connections established with Active Directory accounts can create other Active Directory users."

5. Run the queries below to create a user for the function app and alter roles. You can choose to alter a subset of these roles per your needs.

   ```sql
   CREATE USER "yourfunctionappname" FROM EXTERNAL PROVIDER;
   ALTER ROLE db_datareader ADD MEMBER "yourfunctionappname";
   ALTER ROLE db_datawriter ADD MEMBER "yourfunctionappname";
   ALTER ROLE db_ddladmin ADD MEMBER "yourfunctionappname";
   ```

6. Leverage the sample code below to build your own project and deploy it to the function app.

Sample Code:

Below is sample code that uses an Azure access token when run locally and managed identity when run in the Function App. The token part needs to be replaced with your own. Basically, it uses `pyodbc.connect(connection_string + ';Authentication=ActiveDirectoryMsi')` to authenticate with managed identity. Also, `MSI_SECRET` is used to tell whether we are running locally or in the Function App; it is created automatically as an environment variable when the function app has Managed Identity enabled.
The complete demo project can be found at: https://github.com/kevin808/azure-function-pyodbc-MI

```python
import logging
import os
import struct

import azure.functions as func
import pyodbc


def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    server = "your-sqlserver.database.windows.net"
    database = "your_db"
    driver = "{ODBC Driver 17 for SQL Server}"
    query = "SELECT * FROM dbo.users"

    # Optional: use username and password for authentication
    # username = 'name'
    # password = 'pass'

    db_token = ''
    connection_string = 'DRIVER=' + driver + ';SERVER=' + server + ';DATABASE=' + database

    # When MSI is enabled
    if os.getenv("MSI_SECRET"):
        conn = pyodbc.connect(connection_string + ';Authentication=ActiveDirectoryMsi')
    # Used when run from local
    else:
        SQL_COPT_SS_ACCESS_TOKEN = 1256
        # Expand each token byte with a zero byte (UTF-16-LE for an ASCII token)
        exptoken = b''
        for i in bytes(db_token, "UTF-8"):
            exptoken += bytes({i})
            exptoken += bytes(1)
        tokenstruct = struct.pack("=i", len(exptoken)) + exptoken
        conn = pyodbc.connect(connection_string, attrs_before={SQL_COPT_SS_ACCESS_TOKEN: tokenstruct})

    # Uncomment the line below to use username and password for authentication
    # conn = pyodbc.connect('DRIVER=' + driver + ';SERVER=' + server + ';DATABASE=' + database + ';UID=' + username + ';PWD=' + password)

    cursor = conn.cursor()
    cursor.execute(query)
    row = cursor.fetchone()
    while row:
        print(row[0])
        row = cursor.fetchone()

    return func.HttpResponse(
        'Success',
        status_code=200
    )
```

Workflow:

Below are the workflows for these two authentication methods; with them in mind, we can understand what happens under the hood.

Managed identity: when we enable managed identity for the function app, a service principal is generated automatically for it, and authentication against the database then follows these steps:

Function App with managed identity → sends request to database with service principal → database checks the corresponding database user and its permissions → authentication passes.

Access token: the access token can be generated by executing `az account get-access-token --resource=https://database.windows.net/ --query accessToken` locally; we then hold this token to authenticate. Please note that the default lifetime of the token is one hour, which means we need to retrieve it again when it expires.

az login → az account get-access-token → local function uses the token to authenticate to the SQL database → DB checks whether the database user exists and whether the permissions are granted → authentication passes.

Thanks for reading. I hope you enjoy it.
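As a side note, the local-testing half of this flow can also be done without shelling out to the Azure CLI. The sketch below (not part of the original sample) uses the `azure-identity` package to acquire the database token programmatically and packs it in the length-prefixed UTF-16-LE format the ODBC driver expects; it assumes `azure-identity` and `pyodbc` are installed and that your environment (e.g. a prior `az login`) can access the database:

```python
import struct

import pyodbc
from azure.identity import DefaultAzureCredential

SQL_COPT_SS_ACCESS_TOKEN = 1256  # pyodbc pre-connect attribute for access tokens

server = "your-sqlserver.database.windows.net"
database = "your_db"
driver = "{ODBC Driver 17 for SQL Server}"

# DefaultAzureCredential tries managed identity, Azure CLI, VS Code, etc. in order
credential = DefaultAzureCredential()
token = credential.get_token("https://database.windows.net/.default")

# The ODBC driver expects the token as a length-prefixed UTF-16-LE byte string
token_bytes = token.token.encode("utf-16-le")
token_struct = struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)

conn = pyodbc.connect(
    f"DRIVER={driver};SERVER={server};DATABASE={database}",
    attrs_before={SQL_COPT_SS_ACCESS_TOKEN: token_struct},
)
```

Because `DefaultAzureCredential` also resolves managed identity when running in Azure, this variant can replace the `MSI_SECRET` branch logic entirely if you prefer a single code path.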
# Unlocking Application Modernisation with GitHub Copilot

AI-driven modernisation is unlocking new opportunities you may not have considered yet. It is also allowing organisations to re-evaluate previously discarded modernisation attempts that were deemed too hard or too complex, or for which they simply didn't have the skills or time. During Microsoft Build 2025, we were introduced to the concept of agentic AI modernisation, and this post from Ikenna Okeke does a great job of summarising the topic: Reimagining App Modernisation for the Era of AI | Microsoft Community Hub.

This blog post, however, explores the modernisation opportunities you may not even have thought of yet, the business benefits, how to start preparing your organisation, empowering your teams, and identifying where GitHub Copilot can help. I've spent the last eight months working with customers exploring usage of GitHub Copilot, and I want to share what my team members and I have discovered in terms of new opportunities to modernise and transform your applications – bringing some fun back into those migrations! Let's delve into how GitHub Copilot is helping teams update old systems, move processes to the cloud, and achieve results faster than ever before.

## Background: The Modernisation Challenge (Then vs Now)

Modernising legacy software has always been hard. In the past, teams faced steep challenges: brittle codebases full of technical debt, outdated languages (think decades-old COBOL or VB6), sparse documentation, and original developers long gone. Integrating old systems with modern cloud services often required specialised skills that were in short supply – for example, check out this fantastic post from Arvi LiVigni (@arilivigni) about migrating from COBOL: "the number of developers who can read and write COBOL isn't what it used to be," which makes those systems much harder to update. Common pain points included compatibility issues, data migrations, high costs, security vulnerabilities, and the constant risk that any change could break critical business functions. It's no wonder many modernisation projects stalled or were put off due to their complexity and risk.

So, what's different now (circa 2025) compared to two years ago? In a word: intelligent AI assistance. Tools like GitHub Copilot have emerged as AI pair programmers that dramatically lower the barriers to modernisation. Arvi's post describes how, only a couple of years ago, developers had to comb through documentation and Stack Overflow for clues when deciphering old code or upgrading frameworks. Today, GitHub Copilot can act like an expert co-developer inside your IDE, ready to explain mysterious code, suggest updates, and even rewrite legacy code in modern languages. This means less time fighting old code and more time implementing improvements. As Arvi says, "nine times out of 10 it gives me the right answer… That speed – and not having to break out of my flow – is really what's so impactful." In short, AI coding assistants have evolved from novel experiments to indispensable tools, reimagining how we approach software updates and cloud adoption. I'd also add from my own experience: the models we were using 12 months ago have already been superseded by far superior models, able to ingest larger context and tackle even greater complexity.
It's easier to experiment – and to fail – bringing more robust outcomes. With such speed in creating proofs of concept, experimenting, and failing fast, teams can now test multiple hypotheses and reach the most confident outcome in a much shorter space of time.

Modernisation is easier now because AI reduces the heavy lifting. Instead of reading a 10,000-line legacy program alone, a developer can ask Copilot to explain what the code does or even propose a refactored version. Rather than manually researching how to replace an outdated library, they can get instant recommendations for modern equivalents. These advancements mean that tasks which once took weeks or months can now be done in days or hours – with more confidence, less drudgery, and more fun! The following sections dive into specific opportunities unlocked by GitHub Copilot across the modernisation journey which you may not even have thought of.

## Modernisation Opportunities Unlocked by Copilot

Modernising an application isn't just about updating code – it involves bringing everyone and everything up to speed with cloud-era practices. Below are several scenarios and how GitHub Copilot adds value, with the specific benefits highlighted.

### 1. AI-Assisted Legacy Code Refactoring and Upgrades

- Instant code comprehension: GitHub Copilot can explain complex legacy code in plain English, helping developers quickly understand decades-old logic without scouring scarce documentation. For example, you can highlight a cryptic COBOL or C++ function and ask Copilot to describe what it does – an invaluable first step before making any changes. This saves hours and reduces errors when starting a modernisation effort.
- Automated refactoring suggestions: the AI suggests modern replacements for outdated patterns and APIs, and can even translate code between languages. For instance, Copilot can help convert a COBOL program into JavaScript or C# by recognising equivalent constructs. It also uses transformation tools (like OpenRewrite for Java/.NET) to systematically apply code updates – e.g. replacing all legacy HTTP calls with a modern library in one sweep. Developers remain in control, but GitHub Copilot handles the tedious bulk edits.
- Bulk code upgrades with AI: GitHub Copilot's app modernisation capabilities can analyse an entire codebase, generate a detailed upgrade plan, and then execute many of the code changes automatically. It can upgrade framework versions (say, .NET Framework 4.x to .NET 6, or Java 8 to Java 17) by applying known fix patterns and even fixing compilation errors after the upgrade. Teams can finally tackle those hundred-thousand-line enterprise applications – a task that might otherwise take years – with GitHub Copilot handling the repetitive changes.
- Technical debt reduction: by cleaning up old code and enforcing modern best practices, GitHub Copilot helps chip away at years of technical debt. The modernised codebase is more maintainable and stable, which lowers the long-term risk hanging over critical business systems. Notably, the tool can even scan for known security vulnerabilities as it updates your code. In short, each legacy component refreshed with GitHub Copilot comes out safer and easier to work on, instead of remaining a brittle black box.

### 2. Accelerating Cloud Migration and Azure Modernisation

- Guided Azure migration planning: GitHub Copilot can assess a legacy application's cloud readiness and recommend target Azure services for each component.
  For instance, it might suggest migrating an on-premises database to Azure SQL, moving file storage to Azure Blob Storage, and converting background jobs to Azure Functions. This provides a clear blueprint for confidently moving an app from servers to Azure PaaS.
- One-click cloud transformations: GitHub Copilot comes with predefined migration tasks that automate the code changes required for cloud adoption. With one click, you can have the AI apply dozens of modifications across your codebase. For example:
  - File storage: replace local file reads/writes with Azure Blob Storage SDK calls.
  - Email/comms: swap out SMTP email code for Azure Communication Services or SendGrid.
  - Identity: migrate authentication from Windows AD to Azure AD (Entra ID) libraries.
  - Configuration: remove hard-coded configuration and use Azure App Configuration or Key Vault for secrets.

  GitHub Copilot performs these transformations consistently, following best practices (like reading connection strings from Azure settings). After applying the changes, it even fixes any compile errors automatically, so you're not left with broken builds. What used to require reading countless Azure migration guides is now handled in minutes.
- Automated validation & deployment: modernisation doesn't stop at code changes. GitHub Copilot can also generate unit tests to validate that the application still behaves correctly after the migration, helping ensure your modernised, cloud-ready app passes all its checks before going live. When you're ready to deploy, GitHub Copilot can produce the necessary infrastructure-as-code templates (e.g. Azure Resource Manager Bicep files or Terraform configs) and even set up CI/CD pipeline scripts for you. In other words, the AI can configure the Azure environment and deployment process end to end. This dramatically reduces manual effort and error, getting your app to the cloud faster and with greater confidence.
- Integrations: GitHub Copilot also helps tackle larger migration scenarios that were previously considered too complex. For example, many enterprises want to retire expensive proprietary integration platforms like MuleSoft or Apigee and use Azure-native services instead, but rewriting hundreds of integration workflows was daunting. Now, GitHub Copilot can assist in translating those workflows: for instance, converting an Apigee API proxy into an Azure API Management policy, or a MuleSoft integration into an Azure Logic App.
- Multi-cloud migrations: if you plan to consolidate from other clouds into Azure, GitHub Copilot can suggest equivalent Azure services and SDK calls to replace AWS- or GCP-specific code. These AI-assisted conversions significantly cut down the time needed to reimplement functionality on Azure. The business impact can be substantial: by lowering the effort of such migrations, GitHub Copilot makes it feasible to pursue opportunities that deliver big cost savings and simplification.

### 3. Boosting Developer Productivity and Quality

- Instant unit tests (TDD made easy): writing tests for old code can be tedious, but GitHub Copilot can generate unit test cases on the fly. Developers can highlight an existing function and ask Copilot to create tests; it will produce meaningful test methods covering typical and edge scenarios. This makes it practical to apply test-driven development practices even to legacy systems – you can quickly build a safety net of tests before refactoring. By catching bugs early through these AI-generated tests, teams gain confidence to modernise code without breaking things.
  It essentially injects quality into the process from the start, which is crucial for successful modernisation.
- DevOps automation: GitHub Copilot helps modernise your build and deployment process as well. It can draft CI/CD pipeline configurations, Dockerfiles, Kubernetes manifests, and other DevOps scripts by leveraging its knowledge of common patterns. For example, when setting up a GitHub Actions workflow to deploy your app, GitHub Copilot will autocomplete significant parts (build steps, test runs, deployment jobs) based on the project structure. This not only saves time but also ensures best practices (proper caching, dependency installation, etc.) are followed by default. Microsoft even provides an extension where you can describe your Azure infrastructure needs in plain language and have GitHub Copilot generate the corresponding templates and pipeline YAML. By automating these pieces, teams can move to cloud-based, automated deployments much faster.
- Behaviour-driven development support: teams practising BDD write human-readable scenarios (e.g. using Gherkin syntax) describing application behaviour. GitHub Copilot's AI is adept at interpreting such descriptions and suggesting step-definition code or test implementations to match. For instance, given a scenario "When a user with no items checks out, then an error message is shown," GitHub Copilot can draft the code for that condition or the test steps required. This helps bridge the gap between non-technical specifications and actual code, and makes BDD more efficient and accessible: even if team members aren't strong coders, the AI can translate their intent into working code that developers can refine.
- Quality and consistency: by using AI to handle boilerplate and repetitive tasks, developers can focus on high-value improvements. GitHub Copilot's suggestions are based on a vast corpus of code, which means it often surfaces well-structured, idiomatic patterns. Starting from these suggestions, developers are less likely to introduce errors or reinvent the wheel, which leads to more consistent code quality across the project. The AI also often reminds you of edge cases (for example, suggesting input validation or error handling that might otherwise be missed), contributing to a more robust application. In practice, many teams find that adopting GitHub Copilot results in fewer bugs and quicker code reviews, as the code is cleaner on the first pass. It's like having an extra set of eyes on every pull request, ensuring standards are met.

## Business Benefits of AI-Powered Modernisation

Bringing together the technical advantages above, what's the payoff for the business and stakeholders? Modernising with GitHub Copilot can yield multiple tangible and intangible benefits:

- Accelerated time-to-market: modernisation projects that might have taken a year can potentially be completed in a few months, and an upgrade that took weeks can be done in days. This speed means you can deliver new features to customers sooner and respond faster to market changes. It also reduces downtime and disruption, since migrations happen more swiftly.
- Cost savings: by automating repetitive work and reducing the effort required from highly paid senior engineers, GitHub Copilot can trim development costs. Faster project completion also means lower overall project cost. Additionally, running modernised apps on cloud infrastructure (with updated code) often lowers operational costs due to more efficient resource usage and easier maintenance.
  There's also an opportunity-cost benefit: developers freed up by Copilot can work on other value-adding projects in parallel.
- Improved quality & reliability: GitHub Copilot's contributions to testing, bug-fixing, and even security (like patching known vulnerabilities during upgrades) result in more robust applications. Modernised systems have fewer outages and security incidents than shaky legacy ones. Stakeholders will appreciate that with GitHub Copilot, modernisation doesn't mean "trading one set of bugs for another" – instead, you can increase quality as you modernise (GitHub's research noted higher code quality when using Copilot, as developers are less likely to introduce errors or skip tests).
- Business agility: a modernised application (especially one refactored for the cloud) is typically more scalable and adaptable. New integrations and features can be added much faster once the platform is up to date. GitHub Copilot helps clear the modernisation hurdle, after which the business can innovate on a solid, flexible foundation (for example, once a monolith is broken into microservices or moved to Azure PaaS, you can iterate on it much faster in the future). AI-assisted modernisation thus unlocks future opportunities (easier expansion, integrations, AI features, and more) that were impractical on the legacy stack.
- Employee satisfaction and innovation: developer happiness is a subtle but important benefit. When tedious work is handled by AI, developers can spend more time on creative tasks – designing new features, improving user experience, exploring new technologies. This can foster a culture of innovation. Moreover, being seen as a company that leverages modern tools (like AI copilots) helps attract and retain top tech talent. Teams that successfully modernise critical systems with Copilot will gain confidence to tackle other ambitious projects, creating a positive feedback loop of improvement.

To sum up, GitHub Copilot acts as a force multiplier for application modernisation. It enables organisations to do more with less: convert legacy "boat anchors" into modern, cloud-enabled assets rapidly, while improving quality and developer morale. This aligns IT goals with business goals – faster delivery, greater efficiency, and readiness for the future.

## Call to Action: Embrace the Future of Modernisation

GitHub Copilot has proven to be a catalyst for transforming how we approach legacy systems and cloud adoption. If you're excited about the possibilities, here are next steps and what to watch for:

- Start experimenting: if you haven't already, try GitHub Copilot on a sample of your code. Use Copilot or Copilot Chat to explain a piece of old code or generate a unit test. Seeing it in action on your own project can build confidence and spark ideas for where to apply it.
- Identify a pilot project: look at your application portfolio for a candidate that's ripe for modernisation – maybe a small legacy service that could be moved to Azure, or a module that needs a refactor. Use GitHub Copilot to assess and estimate the effort. Often, you'll find tasks once deemed "too hard" are now feasible. Early successes will help win support for larger initiatives.
- Stay tuned for our upcoming blog series: this post is just the beginning. In forthcoming posts, we'll dive deeper into:
  - Setting up your organisation for Copilot adoption: practical tips on preparing your enterprise environment, from licensing and security considerations to training programs.
    We'll discuss best practices (like running internal awareness campaigns, defining success metrics, and creating Copilot champions in your teams) to ensure a smooth rollout.
  - Empowering your colleagues: how to foster a culture that embraces AI assistance. This includes enabling continuous learning, sharing prompt techniques and knowledge bases, and addressing any scepticism. We'll cover strategies to support developers in using Copilot effectively, so that everyone from new hires to veteran engineers can amplify their productivity.
  - Identifying high-impact modernisation areas: guidance on spotting where GitHub Copilot can add the most value. We'll look at different domains – code, cloud, tests, data – and how to evaluate opportunities (for example, using telemetry or feedback to find repetitive tasks suited to AI, or legacy components with high ROI if modernised).
- Engage and share: as you start leveraging Copilot for modernisation, share your experiences and results. Success stories (even small wins like "GitHub Copilot helped reduce our code review times" or "we migrated a component to Azure in one sprint") can build momentum within your organisation and the broader community. We invite you to discuss and ask questions in the comments or in our tech community forums.

Take a look at the new App Modernisation Guidance – a comprehensive, step-by-step playbook designed to help organisations:

- Understand what to modernise and why
- Migrate and rebuild apps with AI-first design
- Continuously optimise with built-in governance and observability

Modernisation is a journey, and AI is the new compass and co-pilot to guide the way. By embracing tools like GitHub Copilot, you position your organisation to break through modernisation barriers that once seemed insurmountable. The result is not just updated software, but a more agile, cloud-ready business and a happier, more productive development team. Now is the time to take that step. Empower your team with Copilot, and unlock the full potential of your applications and your developers. Stay tuned for more insights in our next posts, and let's modernise what's possible together!

# Build an AI Image-Caption Generator on Azure App Service with Streamlit and GPT-4o-mini
This tiny app does just one thing: upload an image → get a natural one-line caption. Under the hood:

- Azure AI Vision extracts high-confidence tags from the image.
- Azure OpenAI (GPT-4o-mini) turns those tags into a fluent caption.
- Streamlit provides a lightweight, Python-native UI so you can ship fast.

All code + infra templates: image_caption_app in the App Service AI Samples repo: https://github.com/Azure-Samples/appservice-ai-samples/tree/main/image_caption_app

## What are these components?

What is Streamlit? An open-source Python framework for building interactive data/AI apps with just a few lines of code – perfect for quick, clean UIs.

What is Azure AI Vision (Vision API)? A cloud service that analyses images and returns rich signals like tags with confidence scores, which we use as grounded inputs for captioning.

## How it works (at a glance)

1. A user uploads a photo in Streamlit.
2. The app calls Azure AI Vision → gets a list of tags (keeps only high-confidence ones).
3. The app sends those tags to GPT-4o-mini → generates a one-line caption.
4. The caption is shown instantly in the browser.

## Prerequisites

- Azure subscription – https://azure.microsoft.com/en-us/pricing/purchase-options/azure-account
- Azure CLI – https://learn.microsoft.com/azure/cli/azure/install-azure-cli-linux
- Azure Developer CLI (azd) – https://learn.microsoft.com/azure/developer/azure-developer-cli/install-azd
- Python 3.10+ – https://www.python.org/downloads/
- Visual Studio Code (optional) – https://code.visualstudio.com/download
- Streamlit (optional for local runs) – https://docs.streamlit.io/get-started/installation
- Managed Identity on App Service (recommended) – https://learn.microsoft.com/azure/app-service/overview-managed-identity

## Resources you'll deploy

You can create everything manually or with the provided azd template. What you need:

- Azure App Service (Linux) to host the Streamlit app.
- Azure AI Foundry/OpenAI with a gpt-4o-mini deployment for caption generation.
- Azure AI Vision (Computer Vision) for image tagging.
- Managed Identity enabled on the Web App, with RBAC grants so the app can call Vision and OpenAI without secrets.

## One-command deploy with azd (recommended)

The sample includes infra under image_caption_app/infra, so azd up can provision and deploy in one go:

```bash
# 1) Clone and move into the sample
git clone https://github.com/Azure-Samples/appservice-ai-samples
cd appservice-ai-samples/image_caption_app

# 2) Log in and provision + deploy
azd auth login
azd up
```

## Manual path (if you prefer doing it yourself)

1. Create Azure AI Vision and note the endpoint (custom subdomain).
2. Create Azure AI Foundry/OpenAI and deploy gpt-4o-mini.
3. Create an App Service (Linux, Python) and enable System-Assigned Managed Identity.
4. Assign roles to the Web App's Managed Identity:
   - Cognitive Services OpenAI User on your OpenAI resource.
   - Cognitive Services User on your Vision resource.
5. Add app settings for endpoints and deployment names (see repo), deploy the code, and run.

Startup command (manual setting): if you're configuring the Web App yourself (instead of using the Bicep), set the Startup Command to:

```
streamlit run app.py --server.port 8000 --server.address 0.0.0.0
```

Portal path: App Service → Configuration → General settings → Startup Command.

CLI example:

```bash
az webapp config set \
  --name <your-webapp-name> \
  --resource-group <your-rg> \
  --startup-file "streamlit run app.py --server.port 8000 --server.address 0.0.0.0"
```

(The provided Bicep template already sets this for you.)
## Code tour (the important bits)

Top-level flow (app.py): first we get tags from Vision, then ask GPT-4o-mini for a one-liner:

```python
tags = extract_tags(image_bytes)
caption = generate_caption(tags)
```

Vision call (utils/vision.py): call the Vision REST API, parse the JSON, and keep high-confidence tags (> 0.6):

```python
response = requests.post(
    VISION_API_URL,
    headers=headers,
    params=PARAMS,
    data=image_bytes,
    timeout=30,
)
response.raise_for_status()
analysis = response.json()

tags = [
    t.get('name')
    for t in analysis.get('tags', [])
    if t.get('name') and t.get('confidence', 0) > 0.6
]
```

Caption generation (utils/openai_caption.py): join the tags and ask GPT-4o-mini for a natural caption:

```python
tag_text = ", ".join(tags)
prompt = f"""
You are an assistant that generates vivid, natural-sounding captions for images.
Create a one-line caption for an image that contains the following: {tag_text}.
"""

response = client.chat.completions.create(
    model=DEPLOYMENT_NAME,
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": prompt.strip()}
    ],
    max_tokens=60,
    temperature=0.7
)
return response.choices[0].message.content.strip()
```

## Security & auth: Managed Identity by default (recommended)

This sample ships configured to use Managed Identity on App Service – no keys in config. The Web App's Managed Identity authenticates to Vision and Azure OpenAI via Microsoft Entra ID. Prefer Managed Identity in production; if you need to test locally, you can switch to key-based auth by supplying the service keys in your environment.

## Run it locally (optional)

```bash
# From the sample folder
python -m venv .venv && source .venv/bin/activate   # Windows: .venv\Scripts\activate
pip install -r requirements.txt

# Set env vars for endpoints + deployment (and keys if not using MI locally)
streamlit run app.py
```

## Repo map

- App + Streamlit UI + helpers: image_caption_app/
- Bicep infrastructure (used by azd up): image_caption_app/infra/

## What's next – ways to extend this sample

- Richer vision signals: add object detection, OCR, or brand detection; blend those into the prompt for sharper captions.
- Persistence & gallery: save images to Blob Storage and captions/metadata to Cosmos DB or SQLite; add a Streamlit gallery.
- Performance & cost: cache tags by image hash (see the sketch below); cap image size; track tokens/latency.
- Observability: wire up Application Insights with custom events (e.g., caption_generated).

Looking for more Python samples? Check out the repo: https://github.com/Azure-Samples/appservice-ai-samples/tree/main

For more Azure App Service AI samples and best practices, check out the Azure App Service AI integration documentation.
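For the caching idea in the list above, here is a minimal sketch of "cache tags by image hash". It is illustrative only: `extract_tags` is the helper from utils/vision.py, and the process-local dict is an assumption (swap in Redis or another shared cache for real multi-instance workloads):

```python
import hashlib

from utils.vision import extract_tags  # helper from the sample (assumed import path)

_tag_cache: dict[str, list[str]] = {}  # image hash -> tags; process-local, illustrative


def extract_tags_cached(image_bytes: bytes) -> list[str]:
    # Identical bytes hash to the same key, so repeat uploads skip the Vision call
    key = hashlib.sha256(image_bytes).hexdigest()
    if key not in _tag_cache:
        _tag_cache[key] = extract_tags(image_bytes)  # only hit Vision on a cache miss
    return _tag_cache[key]
```

Because Vision tags are deterministic for a given image, this trades a tiny amount of memory for one fewer billable API call per repeated upload.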
# Build lightweight AI Apps on Azure App Service with gpt-oss-20b

OpenAI recently introduced gpt-oss as an open-weight language model that delivers strong real-world performance at low cost. Available under the flexible Apache 2.0 license, these models outperform similarly sized open models on reasoning tasks, demonstrate strong tool-use capabilities, and are optimized for efficient deployment on consumer hardware; see the announcement: https://openai.com/index/introducing-gpt-oss/. It's an excellent choice for scenarios where you want the security and efficiency of a smaller model running on your application instance – while still getting impressive reasoning capabilities.

By hosting it on Azure App Service, you can take advantage of enterprise-grade features without worrying about managing infrastructure:

- Built-in autoscaling
- Integration with VNet
- Enterprise-grade security and compliance
- Easy CI/CD integration
- Choice of deployment methods

In this post, we'll walk through a complete sample that uses gpt-oss-20b as a sidecar container running alongside a Python Flask app on Azure App Service. All the source code and Bicep templates are available here:
📂 Azure-Samples/appservice-ai-samples/gpt-oss-20b-sample

## Architecture of our sample at a glance

- The web app (Flask) runs as a code-based App Service.
- The model runs in a sidecar container (Ollama) in the same App Service.
- The Flask app calls the model over localhost:11434.
- Bicep provisions the Web App and an Azure Container Registry (ACR).
- You push your model image to ACR and attach it as a sidecar in the Portal.

## 1. Wrapping gpt-oss-20b in a Container

Code location: /gpt-oss-20b-sample/ollama-image in the sample repo: https://github.com/Azure-Samples/appservice-ai-samples/tree/main/gpt-oss-20b-sample/ollama-image

What this image does (at a glance):

- Starts the Ollama server
- Pulls the gpt-oss:20b model on first run
- Exposes port 11434 for the Flask app to call locally

Dockerfile:

```dockerfile
FROM ollama/ollama
EXPOSE 11434
COPY startup.sh /
RUN chmod +x /startup.sh
ENTRYPOINT ["./startup.sh"]
```

startup.sh:

```bash
# Start Ollama in the background
ollama serve &
sleep 5

# Pull and run gpt-oss:20b
ollama pull gpt-oss:20b

# Restart ollama and run it in the foreground
pkill -f "ollama"
ollama serve
```

### Build the image

Choose one of the two common paths.

A. Build locally with Docker. From the ollama-image folder:

```bash
# 1) (optional) pick a registry/image name up-front
ACR_NAME=<your-acr-name>    # e.g., myacr123
IMAGE=ollama-gpt-oss:20b

# 2) build locally
docker build -t $IMAGE .
```

If you're new to building images, see Docker's build docs for options and examples.

B. Build in Azure (no local Docker required) with ACR Tasks. Run a cloud build directly from the repo or your working directory:

```bash
ACR_NAME=<your-acr-name>
az acr build \
  --registry $ACR_NAME \
  --image ollama-gpt-oss:20b \
  ./gpt-oss-20b-sample/ollama-image
```

ACR Tasks build the image in Azure and push it straight into your registry.

### Push the image to Azure Container Registry (ACR)

If you built locally, tag and push to your ACR:

```bash
# login (CLI recommended)
az acr login --name $ACR_NAME

# tag and push (note: all-lowercase FQDN)
docker tag ollama-gpt-oss:20b $ACR_NAME.azurecr.io/ollama-gpt-oss:20b
docker push $ACR_NAME.azurecr.io/ollama-gpt-oss:20b
```

A full "push/pull with Docker CLI" quickstart is available if you need it.

## 2. The Flask Application

Our main app is a simple Python Flask service that connects to the model running in the sidecar. Since the sidecar shares the same network namespace as the main app, we can call it at http://localhost:11434.
```python
OLLAMA_HOST = "http://localhost:11434"
MODEL_NAME = "gpt-oss:20b"

@app.route("/chat", methods=["POST"])
def chat():
    data = request.get_json()
    prompt = data.get("prompt", "")
    payload = {
        "model": MODEL_NAME,
        "messages": [{"role": "user", "content": prompt}],
        "stream": True
    }

    def generate():
        with requests.post(f"{OLLAMA_HOST}/api/chat", json=payload, stream=True) as r:
            for line in r.iter_lines(decode_unicode=True):
                if line:
                    event = json.loads(line)
                    if "message" in event:
                        yield event["message"]["content"]

    return Response(generate(), mimetype="text/plain")
```

This allows your app to stream responses back to the browser in real time, giving a chat-like experience.

## 3. Deploying to Azure App Service

Code location: /gpt-oss-20b-sample/flask-app in the sample repo: https://github.com/Azure-Samples/appservice-ai-samples/tree/main/gpt-oss-20b-sample/flask-app

You can deploy the Flask app using your preferred method – VS Code, GitHub Actions, az webapp up, or via Bicep. We've included a Bicep template that sets up:

- An Azure Container Registry for your sidecar image
- An Azure Web App running on Premium V4 for best performance and cost efficiency
  🔗 Azure App Service Premium V4 now in Public Preview

If you want to use the azd template, pull down the repo and run these commands from the folder:

```bash
azd init
azd up
```

Then open the Web App in the Azure Portal and add a sidecar:

- How-to: https://learn.microsoft.com/azure/app-service/configure-sidecar
- Choose your ACR image (the one you created in Step 1) and set the port to 11434.

First-startup note: the sidecar downloads the gpt-oss-20b model on first run, so cold start will take time. Subsequent restarts will be faster because the model layers will not need to be pulled again.

Try it, then open your site – it's a chat UI backed by gpt-oss-20b running locally as a sidecar on Azure App Service.

## Conclusion

With gpt-oss-20b running as a sidecar on Azure App Service, you get the best of both worlds: the flexibility of open-source models and the reliability, scalability, and security of a fully managed platform. This setup makes it easy to integrate AI capabilities into your applications without having to provision or manage custom infrastructure. Whether you're building a lightweight chat experience, prototyping a new AI-powered feature, or experimenting with domain-specific fine-tuning, this approach provides a robust foundation. You can scale your application based on demand, swap out models as needed, and take advantage of the full Azure ecosystem for networking, observability, and deployment automation.

## Next Steps & Resources

Here are some useful resources to help you go further:

- 📂 Sample Code & Templates – gpt-oss-20b Sample Repository
- 📖 About GPT-OSS – Introducing gpt-oss (OpenAI blog)
- 🛠 Deploying Sidecars – Configure Sidecars in Azure App Service
- 🚀 Premium V4 Plan – Azure App Service Premium V4 announcement
- 📦 Pushing Images to ACR – Push and pull container images in Azure Container Registry
- 💡 Advanced AI Patterns – Build RAG solutions with Azure AI Search
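Once the app is up, you can smoke-test the /chat route from any machine with a short script like the one below. It is a sketch, not part of the sample: the hostname is a placeholder, and it simply consumes the plain-text stream the Flask route emits:

```python
import requests

APP_URL = "https://<your-webapp-name>.azurewebsites.net"  # placeholder - your site URL

# The Flask route streams plain text, so read the response incrementally
with requests.post(
    f"{APP_URL}/chat",
    json={"prompt": "Say hello in one sentence."},
    stream=True,
    timeout=300,  # generous timeout: the first call may hit a cold model
) as resp:
    resp.raise_for_status()
    for chunk in resp.iter_content(chunk_size=None, decode_unicode=True):
        print(chunk, end="", flush=True)
```

Seeing tokens print incrementally confirms both the sidecar and the streaming path are working end to end.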
# Announcing Early Preview: BYO Remote MCP Server on Azure Functions

If you've already built Model Context Protocol (MCP) servers with the MCP SDKs and wished you could turn them into world-class remote MCP servers using a hyperscale, serverless platform, then this one's for you! We've published samples showing how to host bring-your-own (BYO) remote MCP servers on Azure Functions, so you can run the servers you've already built with the MCP SDKs – Python, Node, and .NET – with minimal changes and full serverless goodness.

## Why this is exciting

- Keep your code. If you've already implemented servers with the MCP SDKs (Python, Node, .NET), deploy them to Azure Functions as remote MCP servers with just one line of code change.
- Serverless scale when you need it. Functions on the Flex Consumption plan handles bursty traffic, scales out and back to zero automatically, and gives you serverless billing.
- Secure by default. Your remote server endpoint is protected with function keys out of the box, with the option to layer on Azure API Management for an added authorization flow.

## BYO vs. Functions Remote MCP extension – pick the path that fits

The BYO option complements the existing Azure Functions MCP extension:

- Build and host with the Functions MCP extension: you can build stateful MCP servers with the MCP tool trigger and binding and host them on Functions. Support for SSE is available today, with streamable HTTP coming soon.
- Host a BYO remote MCP server (this announcement): if you already have a server built with the MCP SDKs, or you prefer those SDKs' ergonomics, host it as-is on Functions and keep your current codebase.

Either way, you benefit from Functions' serverless platform: secure access & auth, burst scale, event-driven scale from 0 to N, and pay-for-what-you-use billing.

## What's supported in this early preview

- Servers built with the Python, Node, and .NET SDKs
- Debug locally with func start in Visual Studio or Visual Studio Code; deploy with the Azure Developer CLI (azd up) to get your remote MCP server quickly deployed to Azure Functions
- Stateless servers using the streamable HTTP transport, with guidance coming soon for stateful servers
- Hosting on the Flex Consumption plan

## Try it now!

- Python: https://github.com/Azure-Samples/mcp-sdk-functions-hosting-python
- Node: https://github.com/Azure-Samples/mcp-sdk-functions-hosting-node
- .NET: https://github.com/Azure-Samples/mcp-sdk-functions-hosting-dotnet

Each repo includes the sample weather MCP server implemented with the MCP SDK for that language. You'll find instructions on how to run the server locally with Azure Functions Core Tools and deploy with azd up in minutes. Once deployed, you can connect to the remote server from an MCP client. The samples use Visual Studio Code, but other clients like Claude can also be used.

## Provide feedback to shape the feature

Tell us what you need next – identity flows, diagnostics, more languages, or any other features. Your feedback will shape how we take this early preview to the next level!
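For orientation, here is roughly what a minimal stateless Python MCP server of the kind these samples host might look like. This is a hedged sketch based on the public `mcp` Python SDK (the `FastMCP` helper and `stateless_http`/`streamable-http` options are assumptions about the current SDK surface; the weather servers in the sample repos above are the authoritative versions):

```python
from mcp.server.fastmcp import FastMCP

# stateless_http matches the "stateless servers using streamable HTTP" scenario
mcp = FastMCP("weather", stateless_http=True)


@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a toy forecast for a city (stand-in for a real data source)."""
    return f"Forecast for {city}: sunny, 22 degrees"


if __name__ == "__main__":
    # Streamable HTTP is the transport this preview supports
    mcp.run(transport="streamable-http")
```

The point of the BYO model is that a server like this needs essentially no rework: you keep the SDK ergonomics and let Functions supply the hosting, keys, and scale.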
# Host Remote MCP Servers on App Service: Updated samples now with new languages and auth support

If you haven't seen my previous blog post introducing MCP on Azure App Service, check that out here for a quick overview and getting started. In this blog post, I'm excited to share some updates to our App Service MCP samples: new language samples, updated functionality to replace deprecated methods, and built-in authentication and authorization – all designed to make it easier for developers to host MCP servers on Azure App Service.

## 🔄 Migrating from SSE to Streamable HTTP

The original .NET sample I shared used Server-Sent Events (SSE) for streaming responses. However, SSE has since been deprecated in favor of streamable HTTP, which offers better compatibility and performance across platforms. To align with the latest MCP specification, I've updated the .NET sample to use streamable HTTP:

- ✅ Updated .NET sample: remote-mcp-webapp-dotnet

This update ensures your MCP server is compliant with the latest protocol guidance. All additional samples in this post also use streamable HTTP.

## 🌐 New Language Samples: Python and Node.js

To support a broader range of developers, I've created new MCP server samples in Python and Node.js. These samples are lightweight, easy to deploy, and follow the same architectural principles as the .NET version.

- 🐍 Python sample: remote-mcp-webapp-python
- 🟢 Node.js sample: remote-mcp-webapp-node

Each sample is designed to run seamlessly on Azure App Service, with minimal configuration required.

## 🔐 Secure Your MCP Server with Auth Support

Security is a critical aspect of any remote server. To help developers implement secure MCP servers, I've added new samples that demonstrate how to use authentication and authorization mechanisms aligned with the MCP authorization specification:

- 🔐 Python + Basic Auth: remote-mcp-webapp-python-auth
- 🔐 Python + OAuth: remote-mcp-webapp-python-auth-oauth

These samples show how to validate incoming requests using industry-standard methods, making it easier to integrate with identity providers and enforce access control. Use Basic Auth for a quick and easy way to add authentication to your MCP server, or use OAuth configured with Microsoft Entra ID for an even more secure server.

## 🚀 Get Started Today

Each sample includes detailed instructions for deployment, configuration, and testing. Whether you're building in .NET, Python, or Node.js, you can now host a secure, standards-compliant MCP server on Azure App Service with ease. Follow the guidance in the respective README.md files to get started today. All samples include Azure Developer CLI (azd) templates to get you up and running within minutes.

## 💬 Join the Conversation

I'd love to hear how you're using these samples or what features you'd like to see next. Feel free to open issues or contribute to the repositories on GitHub, or leave comments on this post if you have any questions, feedback, or requests.

# Superfast using Web App and Managed Identity to invoke Function App triggers
Superfast using Web App and Managed Identity to invoke Function App triggers

TOC

1. Introduction
2. Setup
3. References

1. Introduction

Many enterprises prefer not to use App Keys to invoke Function App triggers, as they are concerned that these fixed strings might be exposed. This method allows you to invoke Function App triggers using Managed Identity for enhanced security. I will provide examples in both Bash and Node.js.

2. Setup

1. Create a Linux Python 3.11 Function App

1.1. Configure Authentication to block unauthenticated callers while allowing the Web App’s Managed Identity to authenticate.

| Setting | Value |
| --- | --- |
| Identity Provider | Microsoft |
| Choose a tenant for your application and its users | Workforce |
| App registration type | Create |
| Name | [automatically generated] |
| Client Secret expiration | [fit your business purpose] |
| Supported Account Type | Any Microsoft Entra Directory - Multi-Tenant |
| Client application requirement | Allow requests from any application |
| Identity requirement | Allow requests from any identity |
| Tenant requirement | Use default restrictions based on issuer |
| Token store | [checked] |

1.2. Create an anonymous trigger. Since your app is already protected by the App Registration, additional Function App-level protection is unnecessary; otherwise, you would also need a Function Key to trigger it.

1.3. Once the Function App is configured, try accessing the endpoint directly—you should receive a 401 Unauthorized error, confirming that triggers cannot be accessed without proper Managed Identity authorization.

1.4. After making these changes, wait 10 minutes for the settings to take effect.

2. Create a Linux Node.js 20 Web App, Obtain an Access Token, and Invoke the Function App Trigger Using the Web App (Bash Example)

2.1. Enable System Assigned Managed Identity in the Web App settings.

2.2. Open the Kudu SSH console for the Web App.

2.3. Run the following commands, making the necessary modifications:

- subscriptionsID → Replace with your Subscription ID.
- resourceGroupsID → Replace with your Resource Group ID.
- application_id_uri → Replace with the Application ID URI from your Function App’s App Registration.
- https://az-9640-faapp.azurewebsites.net/api/test_trigger → Replace with the corresponding Function App trigger URL.

```bash
# Please set up the target resource to yours
subscriptionsID="01d39075-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
resourceGroupsID="XXXX"

# Variable setting (no need to change)
identityEndpoint="$IDENTITY_ENDPOINT"
identityHeader="$IDENTITY_HEADER"
application_id_uri="api://9c0012ad-XXXX-XXXX-XXXX-XXXXXXXXXXXX"

# Install necessary tool
apt install -y jq

# Get access token
tokenUri="${identityEndpoint}?resource=${application_id_uri}&api-version=2019-08-01"
accessToken=$(curl -s -H "Metadata: true" -H "X-IDENTITY-HEADER: $identityHeader" "$tokenUri" | jq -r '.access_token')
echo "Access Token: $accessToken"

# Run trigger
response=$(curl -s -o response.json -w "%{http_code}" -X GET "https://az-9640-myfa.azurewebsites.net/api/my_test_trigger" -H "Authorization: Bearer $accessToken")
echo "HTTP Status Code: $response"
echo "Response Body:"
cat response.json
```

2.4. If everything is set up correctly, you should see a successful invocation result.
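As an aside (not part of the original walkthrough), the same token-acquisition flow can be expressed in Python with the azure-identity library, which picks up the App Service managed identity automatically. This is a hedged sketch; the App ID URI and trigger URL are the same placeholders used above.

```python
# A minimal Python equivalent of the Bash flow above (illustrative sketch).
# Assumes `azure-identity` and `requests` are installed and that the code runs
# on a Web App whose managed identity is allowed to call the Function App.
import requests
from azure.identity import DefaultAzureCredential

APP_ID_URI = "api://9c0012ad-XXXX-XXXX-XXXX-XXXXXXXXXXXX"  # your Application ID URI
TRIGGER_URL = "https://az-9640-myfa.azurewebsites.net/api/my_test_trigger"  # your trigger URL

credential = DefaultAzureCredential()
# azure-identity expects a scope, so the resource URI takes the "/.default" form
token = credential.get_token(f"{APP_ID_URI}/.default")

resp = requests.get(TRIGGER_URL, headers={"Authorization": f"Bearer {token.token}"})
print("HTTP Status Code:", resp.status_code)
print("Response Body:", resp.text)
```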
3. Invoke the Function App Trigger Using the Web App (Node.js Example)

I also provide an example below, which you can modify accordingly, save to /home/site/wwwroot/callFunctionApp.js, and run:

```bash
cd /home/site/wwwroot/
vi callFunctionApp.js
npm init -y
npm install @azure/identity axios
node callFunctionApp.js
```

```javascript
// callFunctionApp.js
const { DefaultAzureCredential } = require("@azure/identity");
const axios = require("axios");

async function callFunctionApp() {
  try {
    const applicationIdUri = "api://9c0012ad-XXXX-XXXX-XXXX-XXXXXXXXXXXX"; // Change here

    const credential = new DefaultAzureCredential();
    console.log("Requesting token...");
    const tokenResponse = await credential.getToken(applicationIdUri);

    if (!tokenResponse || !tokenResponse.token) {
      throw new Error("Failed to acquire access token");
    }
    const accessToken = tokenResponse.token;
    console.log("Token acquired:", accessToken);

    const apiUrl = "https://az-9640-myfa.azurewebsites.net/api/my_test_trigger"; // Change here
    console.log("Calling the API now...");

    const response = await axios.get(apiUrl, {
      headers: {
        Authorization: `Bearer ${accessToken}`,
      },
    });

    console.log("HTTP Status Code:", response.status);
    console.log("Response Body:", response.data);
  } catch (error) {
    console.error("Failed to call the function", error.response ? error.response.data : error.message);
  }
}

callFunctionApp();
```

Below is my execution result:

3. References

- Tutorial: Managed Identity to Invoke Azure Functions | Microsoft Learn
- How to Invoke Azure Function App with Managed Identity | by Krizzia 🤖 | Medium
- Configure Microsoft Entra authentication - Azure App Service | Microsoft Learn
Highlights from Microsoft Build 2025

Microsoft just held its annual Microsoft Build event for developers. The live event might be over, but we have highlights and other content that will keep the excitement going. Explore on-demand sessions, learn about recent product announcements, watch deep technical demos, and discover fresh resources for learning cutting-edge developer skills.

Microsoft Build opening keynote
The world of development—its tools and its possibilities—is rapidly evolving. In the Microsoft Build keynote, Satya Nadella discusses the agentic web, current dev tools, the dev landscape right now, and where it’s headed.

GitHub Copilot: Meet the new coding agent
Check out the exciting new coding agent for GitHub Copilot. Just assign a task or issue to Copilot and it will run in the background, pushing commits to a draft pull request as it works. Read the blog for details.

Scott and Mark Learn to…
In this session from Microsoft Build, Mark Russinovich and Scott Hanselman combine tools and topics into one epic demo of AI-driven robotics. No pre-recorded videos. Just live code, dev tools, a robot, and a can of cola.

"Another Highly Technical Talk" with Hanselman and Toub
Level up your debugging, performance, and optimization skills. In this highly technical session from Microsoft Build, Scott Hanselman and Stephen Toub discuss the internals of .NET as they look for performance issues and fix them live on stage.

Building agents for Microsoft 365 Copilot
From Copilot Studio to Visual Studio and Azure AI Foundry, explore your options for building agents for Microsoft 365. This Microsoft Build session looks at what's new with tools for creating powerful agents.

Unleash developer potential with AI and Dev Box
Microsoft is transforming next-gen dev environments. See how Microsoft Dev Box accelerates AI development with a customizable, project-centric platform and integration with various dev tools.

Introducing Microsoft 365 Copilot Tuning, multi-agent orchestration, and more
Tune AI models using your company’s data, workflows, and processes. Microsoft 365 Copilot Tuning is a new, low-code solution in Microsoft Copilot Studio.

Advancing Windows for AI development: New platform capabilities and tools
What’s new for Windows? Read an overview of the latest advancements that make Windows an even better platform for developers in the era of AI. Learn about Windows AI Foundry, Windows ML, App Actions, and more.

Announcing General Availability of Azure AI Foundry Agent Service
At Microsoft Build, Microsoft announced the general availability of Azure AI Foundry Agent Service. Find out how this empowers developers to create multi-agent systems for mission-critical workloads.

Start learning: .NET Workshops and Presentations on GitHub
Get hands-on experience with .NET workshops and labs, including new labs from Microsoft Build. Head over to the .NET Workshops and Presentations repo on GitHub.

Unlock developer potential with Microsoft Dev Box
Find out how Microsoft Dev Box can accelerate AI development. Get AI-powered, ready-to-code environments with the tools your team needs—for fast, flexible, and secure experiences. Learn about new features, like serverless GPU access and the Dev Box MCP server.

Use VS Code to build AI apps and agents
Want to bring your AI-powered solutions to life faster? Find out how to streamline your dev workflow by exploring models, iterating on prompts, running evaluations, and deploying agents—all within Visual Studio Code.

Join the Azure AI Foundry Developer Community
Need quick answers?
Looking for all the latest news and changes? The Azure AI Foundry Developer Community is here to support you in building your next great project.

VS Code: Open Source AI Editor
The GitHub Copilot Chat extension is being open sourced under the MIT license, and key components are being refactored into Visual Studio Code core. Read the blog and find out why we believe the future of code editors should be open and AI-powered.
Throughput Testing at Scale for Azure Functions

Introduction

Ensuring reliable, high-performance serverless applications is central to our work on Azure Functions. With new plans like Flex Consumption expanding the platform’s capabilities, it's critical to continuously validate that our infrastructure can scale—reliably and efficiently—under real-world load. To meet that need, we built PerfBench (Performance Benchmarker), a comprehensive benchmarking system designed to measure, monitor, and maintain our performance baselines—catching regressions before they impact customers.

This infrastructure now runs close to 5,000 test executions every month, spanning multiple SKUs, regions, runtimes, and workloads—with Flex Consumption accounting for more than half of the total volume. This scale of testing helps us not only identify regressions early, but also understand system behavior over time across an increasingly diverse set of scenarios.

[Chart of all Python Function apps across regions (SKU: Flex Consumption, Instance Size: 2048 – 1000 VUs over 5 mins, HTML Parsing test)]

Motivation: Why We Built PerfBench

The Need for Scale

Azure Functions supports a range of triggers, from HTTP requests to event-driven flows like Service Bus or Storage Queue messages. With an ever-growing set of runtimes (e.g., .NET, Node.js, Python, Java, PowerShell) and versions (like Python 3.11 or .NET 8.0), multiple SKUs, and multiple regions, the possible test combinations explode quickly. Manual testing or single-scenario benchmarks no longer cut it. The table below shows the current scope of coverage.

| Plan | PricingTier | DistinctTestName |
| --- | --- | --- |
| FlexConsumption | FLEX2048 | 110 |
| FlexConsumption | FLEX512 | 20 |
| Consumption | CNS | 36 |
| App Service Plan | P1V3 | 32 |
| Functions Premium | EP1 | 46 |

Table 1: Different test combinations per plan based on stack, pricing tier, scenario, etc. This doesn’t include the Service Bus tests.

The Flex Consumption Plan

There have been many iterations of this infrastructure within the team, and we’ve been continuously monitoring Functions performance for more than 4 years now, with more than a million runs to date. But with the introduction of the Flex Consumption plan (in preview at the time PerfBench was built), we had to redesign the testing from the ground up: Flex Consumption unlocks new scaling behaviors and needed thorough testing—millions of messages or tens of thousands of requests per second—to ensure confidence in performance goals and to prevent regressions.

[Chart: (SKU: Flex Consumption, Instance Size: 2048)]

PerfBench: High-Level Architecture Overview

PerfBench is composed of several key pieces:

1. Resource Creator – Uses meta files and Bicep templates to deploy receiver function apps (test targets) at scale.
2. Test Infra Generator – Deploys and configures the system that actually does the load generation (e.g., the SBLoadGen function app, the Scheduler function app, and the ALT webhook function).
3. Test Infra – The “brain” of testing, including the Scheduler, Azure Load Testing integration, and SBLoadGen.
4. Receiver Function Apps – Deployed once per combination of runtime, version, region, OS, SKU, and scenario.
5. Data Aggregation & Dashboards – Gathers test metrics from Azure Load Testing (ALT) or SBLoadGen, stores them in Azure Data Explorer (ADX), and displays trends in ADX dashboards.

Below is a simplified architecture diagram illustrating these components:

Components

Resource Creator

The Resource Creator uses meta files and Jinja templates to generate Bicep templates for creating resources.

- Meta Files: We define test scenarios in simple text-based files (e.g., os.txt, runtime_version.txt, sku.txt, scenario.txt). Each file lists possible values (like python|3.11 or dotnet|8.0) and short codes for resource naming.
- Template Generation: A script reads these meta files and uses them to produce Bicep templates—one template per valid combination—deploying receiver function apps into dedicated resource groups.
- Filters: Regex-like patterns in a filter.txt file exclude unwanted combos, keeping the matrix manageable. (A sketch of this expand-and-filter step follows this list.)
- CI/CD Flow: Whenever we add a new runtime or region, a pull request updates the relevant meta file. Once merged, our pipeline regenerates Bicep and redeploys resources (these are idempotent updates).
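Conceptually, the expansion-and-filter step is simple. The following is a hedged, self-contained sketch of the idea; the file names follow the meta files mentioned above, but the `value|short-code` format handling and the downstream Jinja-to-Bicep rendering are assumptions based on the description, not PerfBench's actual code.

```python
# Illustrative sketch of expanding meta files into a filtered test matrix.
import itertools
import re
from pathlib import Path

def read_meta(name: str) -> list[str]:
    # Each line is assumed to be "value|short_code"; keep the short code,
    # since it is what feeds resource naming.
    return [line.split("|")[-1].strip()
            for line in Path(name).read_text().splitlines() if line.strip()]

def generate_matrix() -> list[str]:
    dimensions = [read_meta(f) for f in
                  ("os.txt", "runtime_version.txt", "sku.txt", "scenario.txt")]
    filters = [re.compile(p.strip())
               for p in Path("filter.txt").read_text().splitlines() if p.strip()]
    names = []
    for combo in itertools.product(*dimensions):
        name = "-".join(combo)  # e.g. "linux-py311-flex2048-htmlparser" (hypothetical)
        if any(f.search(name) for f in filters):
            continue  # excluded by a filter.txt pattern
        names.append(name)
    return names

if __name__ == "__main__":
    for name in generate_matrix():
        print(name)  # each surviving combination feeds a Jinja -> Bicep rendering
```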
Test Infra Generator

- Deploys and configures the Scheduler Function App, the SBLoadGen Durable Functions app, and the ALT webhook function.
- A similar CI/CD approach applies: merging changes triggers the creation (or update) of these infrastructure components.

Test Infra: Load Generation, Scheduling, and Reporting

Scheduler

The conductor of the whole operation. It runs every 5 minutes and loads test configurations (test_configs.json) from Blob Storage. The configuration includes details on which tests to run, at what time (e.g., “run at 13:45 daily”), and references to either ALT for HTTP tests or SBLoadGen for non-HTTP tests, so that each can be scheduled through the right system. Some tests run multiple times daily, others once a day; a scheduled downtime is built in for maintenance.

HTTP Load Generator - Azure Load Testing (ALT)

We utilize Azure Functions to trigger Azure Load Testing (ALT) for HTTP-based scenarios. ALT is a production-grade load generation tool that provides an easy-to-configure way to send load to different server endpoints using JMeter and Locust. We worked closely with the ALT team to optimize the JMeter scripts for different scenarios, and the service recently completed its second year. We created an abstraction on top of ALT to provide a webhook approach for starting tests and getting notified when tests finish. This was done using a custom function app that does the following:

1. Initiates a test run using a predefined JMX file.
2. Continuously polls until the test execution is complete.
3. Retrieves the test results and transforms them into the required format.
4. Transmits the formatted results to the data aggregation system.

Sample ALT test run: 8.8 million requests in under 6 minutes, with a 90th percentile response time of 80 ms and zero errors. The system maintained a throughput of 28K+ RPS.

Some more details on our ALT setup:

- 25 runtime controllers manage the test logic and concurrency.
- 40 engines handle the actual load execution, distributing test plans.
- 1,000 clients in total for 5-minute runs to measure throughput, error rates, and latency.

Test types:

- HelloWorld (GET request, to understand the baseline of the system).
- HtmlParser (POST request sending HTML for parsing, to simulate moderate CPU usage).

Service Bus Load Generator - SBLoadGen (Durable Functions)

For event-driven scenarios (e.g., Service Bus–based triggers), we built SBLoadGen. It’s a Durable Function that uses the fan-out pattern to distribute work across multiple workers, each responsible for sending a portion of the total load. In a typical run, we aim to generate around one million messages in under a minute to stress-test the system. We intentionally avoid a fan-in step: once messages are in-flight, the system defers to the receiver function apps to process and emit the relevant telemetry. A sketch of this fan-out pattern follows below.

Highlights:

- Generates ~1 million messages in under a minute.
- Durable Function apps are deployed regionally and are triggered via webhook.
- Implemented as a Python Function App using the v2 programming model.

Note: This will be open sourced in the coming days.
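To illustrate the fan-out pattern described above, here is a hedged sketch using the Python Durable Functions v2 programming model. The function names, worker counts, chunk sizes, and the queue/connection placeholders are illustrative assumptions, not taken from SBLoadGen itself.

```python
# Illustrative fan-out load generator (not SBLoadGen's actual code).
# Assumes azure-functions, azure-functions-durable, and azure-servicebus.
import azure.functions as func
import azure.durable_functions as df
from azure.servicebus import ServiceBusClient, ServiceBusMessage

app = df.DFApp(http_auth_level=func.AuthLevel.FUNCTION)

WORKERS = 100                  # hypothetical sizing
MESSAGES_PER_WORKER = 10_000   # ~1M messages total

@app.route(route="start-load")
@app.durable_client_input(client_name="client")
async def start_load(req: func.HttpRequest, client) -> func.HttpResponse:
    # Webhook entry point that kicks off one orchestration per run.
    instance_id = await client.start_new("orchestrate_load")
    return client.create_check_status_response(req, instance_id)

@app.orchestration_trigger(context_name="context")
def orchestrate_load(context: df.DurableOrchestrationContext):
    # Fan out: one activity per worker. There is deliberately no fan-in
    # aggregation step, matching the design described above.
    tasks = [context.call_activity("send_chunk", MESSAGES_PER_WORKER)
             for _ in range(WORKERS)]
    yield context.task_all(tasks)

@app.activity_trigger(input_name="count")
def send_chunk(count: int) -> int:
    # Each worker pushes its share of messages to the target queue.
    sb = ServiceBusClient.from_connection_string("<connection-string>")  # placeholder
    with sb, sb.get_queue_sender("<queue-name>") as sender:              # placeholder
        for start in range(0, count, 100):
            msgs = [ServiceBusMessage(f"payload-{i}")
                    for i in range(start, min(start + 100, count))]
            sender.send_messages(msgs)
    return count
```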
Receiver Function Apps (Test Apps)

These are the actual apps receiving all the generated load. They are deployed in different combinations and updated rarely. Each valid combination (region + OS + runtime + SKU + scenario) gets its own function app, receiving load from ALT or SBLoadGen.

HTTP scenarios:

- HelloWorld: No-op test to measure system overhead and establish a baseline.
- HTML Parser: POST with an HTML document for parsing (simulating a small CPU load).

Non-HTTP (Service Bus) scenario:

- CSV-to-JSON conversion plus blob storage operations, blending compute and I/O overhead.

Collected metrics:

- RPS: Requests per second, success/error rates, and latency distributions for HTTP workloads.
- MPPS: Messages processed per second and success/error rates for non-HTTP (e.g., Service Bus) workloads.

Data Aggregation & Dashboards

Capturing results at scale is just as important as generating load. PerfBenchV2 uses a modular data pipeline to reliably ingest and visualize metrics from both HTTP and Service Bus–based tests. All test results flow through Event Hubs, which act as an intermediary between the test infrastructure and our analytics platform. The webhook function (used with ALT) and the SBLoadGen app both emit structured logs that are routed through Event Hub streams and ingested into dedicated Azure Data Explorer (ADX) tables.

We use three main tables in ADX:

- HTTPTestResults for test runs executed via Azure Load Testing.
- SBLoadGenRuns for recording message counts and timing data from Service Bus scenarios.
- SchedulerRuns to log when and how each test was initiated.

On top of this telemetry, we’ve built custom ADX dashboards that allow us to monitor trends in latency, throughput, and error rates over time. These dashboards provide clear, actionable views into system behavior across dozens of runtimes, regions, and SKUs. Because our focus is on long-term trend analysis rather than real-time anomaly detection, this batch-oriented approach works well and reduces operational complexity.
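To make the ingestion path concrete, here is a hedged sketch of emitting one structured result event to Event Hubs, from which events are ingested into the ADX tables named above. The event schema, field names, and connection placeholders are illustrative assumptions, not PerfBench's actual payload.

```python
# Illustrative result emission to Event Hubs (schema is hypothetical).
# Assumes the azure-eventhub package is installed.
import json
from datetime import datetime, timezone

from azure.eventhub import EventData, EventHubProducerClient

producer = EventHubProducerClient.from_connection_string(
    "<event-hub-connection-string>",  # placeholder
    eventhub_name="<hub-name>",       # placeholder
)

result = {
    "testName": "python311-flex2048-htmlparser",  # illustrative name
    "rps": 28000,
    "p90LatencyMs": 80,
    "errorRate": 0.0,
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

with producer:
    batch = producer.create_batch()
    batch.add(EventData(json.dumps(result)))
    producer.send_batch(batch)  # downstream, ADX ingests from this stream
```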
CI/CD Pipeline Integration

- Continuous Updates: Once a new language version or scenario is added to the runtime_version.txt or scenario.txt meta files, the pipeline regenerates Bicep and deploys new receiver apps. The Test Infra Generator also updates or redeploys the needed function apps (Scheduler, SBLoadGen, or the ALT webhook) whenever logic changes.
- Release Confidence: We run throughput tests on these new apps early and often, catching performance regressions before they ship to customers.

Challenges & Lessons Learned

Designing and running this infrastructure hasn't been easy, and we've learned a lot of valuable lessons on the way. Here are a few:

- Exploding Matrix: Handling every runtime, OS, SKU, region, and scenario can lead to thousands of permutations. Meta files and a robust filter system help keep this under control, but it remains an ongoing effort.
- Cloud Transience: With ephemeral infrastructure, tests sometimes fail due to network hiccups or short-lived capacity constraints. We built in retries and redundancy to mitigate transient failures.
- Early Adoption: PerfBench was among the first heavy “customers” of the new Flex Consumption plan. At times, we had to wait for Bicep features or platform fixes—but it gave us great insight into the plan’s real-world performance.
- Maintenance & Cleanup: When certain stacks or SKUs near end-of-life, we have to decommission their resources, which also means regular grooming of meta files and filter rules.

Success Stories

- Proactive Regression Detection: PerfBench surfaced critical performance regressions early—often before they could impact customers. These insights enabled timely fixes and gave us confidence to move forward with the General Availability of Flex Consumption.
- Production-Level Confidence: By continuously running tests across live production regions, PerfBench provided a realistic view of system behavior under load. This allowed the team to fine-tune performance, eliminate bottlenecks, and achieve improvements measured in single-digit milliseconds.
- Influencing Product Evolution: As one of the first large-scale internal adopters of the Flex Consumption plan, PerfBench served as a rigorous validation tool. The feedback it generated played a direct role in shaping feature priorities and improving platform reliability—well before broader customer adoption.

Future Directions

- Open sourcing: We are in the process of open sourcing the relevant parts of PerfBench: SBLoadGen, the Bicep template generator, and more.
- Production Synthetic Validation and Alerting: Adapting PerfBench’s resource generation approach for ongoing synthetic tests in production, ensuring real environments consistently meet performance SLOs. This will also open up alerting and monitoring scenarios across the production fleet.
- Expanding Trigger Coverage and Variations: Exploring additional triggers, like Storage queue or Event Hub triggers, to broaden test coverage, and testing different settings within the same scenario (e.g., larger payloads, concurrency changes).

Conclusion

PerfBench underscores our commitment to high-performance Azure Functions. By automating test app creation (via meta files and Bicep), orchestrating load (via ALT and SBLoadGen), and collecting data in ADX, we maintain a continuous pulse on throughput. This approach has already proven invaluable for Flex Consumption, and we’re excited to expand scenarios and triggers in the future. For more details on Flex Consumption and other hosting plans, check out the Azure Functions documentation. We hope the insights shared here spark ideas for your own large-scale performance testing needs, whether on Azure Functions or any other distributed cloud service.

Acknowledgements

We’d like to acknowledge the entire Functions Platform and Tooling teams for their foundational work in enabling this testing infrastructure. Special thanks to the Azure Load Testing (ALT) team for their continued support and collaboration. And finally, sincere appreciation to our leadership for making performance a first-class engineering priority across the stack.

Further Reading

- Azure Functions
- Azure Functions Flex Consumption Plan
- Azure Durable Functions
- Azure Functions Python Developer Reference Guide
- Azure Functions Performance Optimizer
- Example case study: GitHub and Azure Functions
- Azure Load Testing Overview
- Azure Data Explorer Dashboards

If you have any questions or want to share your own performance testing experiences, feel free to reach out in the comments!