cloud native
Simplify Image Signing and Verification with Notary Project and Trusted Signing (Public Preview)
Supply chain security has become one of the most pressing challenges for modern cloud-native applications. Every container image, Helm chart, SBOM, or AI model that flows through your CI/CD pipeline carries risk if its integrity or authenticity cannot be guaranteed. Attackers may attempt to tamper with artifacts, replace trusted images with malicious ones, or inject unverified base images into builds. Today, we’re excited to highlight how Notary Project and Trusted Signing (Public Preview) make it easier than ever to secure your container image supply chain with strong, standards-based signing and verification. Why image signing matters Image signing addresses two fundamental questions in the software supply chain: Integrity: Is this artifact exactly the same one that was originally published? Authenticity: Did this artifact really come from the expected publisher? Without clear answers, organizations risk deploying compromised images into production environments. With signing and verification in place, you can block untrusted artifacts at build time or deployment, ensuring only approved content runs in your clusters. Notary Project: A standard-based solution Notary Project is a CNCF open-source initiative that defines standards for signing and verifying OCI artifacts—including container images, SBOMs, Helm charts, and AI models. It provides a consistent, interoperable framework for ensuring artifact integrity and authenticity across different registries, platforms, and tools. Notary Project includes two key sub-projects that address different stages of the supply chain: Notation – a CLI tool designed for developers and CI/CD pipelines. It enables publishers to sign artifacts after they are built and consumers to verify signatures before artifacts are used in builds. Ratify – a verification engine that integrates with Azure policy and Azure Kubernetes Service (AKS). It enforces signature verification at deployment time, ensuring only trusted artifacts are admitted to run in the cluster. Together, Notation and Ratify extend supply chain security from the build pipeline all the way to runtime, closing critical gaps and reducing the risk of running unverified content. Trusted Signing: Simplifying certificate management Traditionally, signing workflows required managing certificates: issuing, rotating, and renewing them through services like Azure Key Vault. While this provides control, it also adds operational overhead. Trusted Signing changes the game. It offers: Zero-touch certificate lifecycle management: no manual issuance or rotation. Short-lived certificate: reducing the attack surface. Built-in timestamping support: ensuring signatures remain valid even after certificates expire. With Trusted Signing, developers focus on delivering software, not managing certificates. End-to-end scenarios Here’s how organizations can use Notary Project and Trusted Signing together: Sign in CI/CD: An image publisher signs images as part of a GitHub Actions or Azure DevOps pipeline, ensuring every artifact carries a verifiable signature. Verify in AKS: An image consumer configures Ratify and Azure Policy on an AKS cluster to enforce that only signed images can be deployed. Verify in build pipelines: Developers ensure base images and dependencies are verified before they’re used in application builds, blocking untrusted upstream components. Extend to all OCI artifacts: Beyond container images, SBOMs, Helm charts, and even AI models can be signed and verified with the same workflow. 
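To make the signing workflow concrete, here is a minimal sketch using the Notation CLI. It assumes you have already installed Notation, pushed an image to your registry, and configured a signing key (for example via the Trusted Signing plugin covered in the tutorials below); the registry, repository, and key name are placeholders.

```bash
# Reference the image by digest so the signature is bound to immutable content (placeholder values).
IMAGE="myregistry.azurecr.io/store/webapp@sha256:<digest>"

# Sign the artifact. "trusted-signing-key" is a placeholder for a key previously
# registered with 'notation key add' (for example, one backed by the Trusted Signing plugin).
notation sign "$IMAGE" --key trusted-signing-key

# List the signatures attached to the artifact.
notation ls "$IMAGE"

# Verify the artifact against the trust policy configured with 'notation policy import'.
notation verify "$IMAGE"
```

The same commands apply to any OCI artifact - SBOMs, Helm charts, or AI models - because signatures are attached to the artifact's digest in the registry.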
Get started To help you get started, we’ve published new documentation and step-by-step tutorials: Overview: Ensuring integrity and authenticity of container images and OCI artifacts Sign and verify images with Notation CLI and Trusted Signing Sign container images in GitHub Actions with Trusted Signing Verify signatures in GitHub Actions Verify signatures on AKS with Ratify Try it now Supply chain security is no longer optional. By combining Notary Project with the streamlined certificate management experience of Trusted Signing, you can strengthen the integrity and authenticity of every artifact in your pipeline without slowing down your teams. Start signing today and take the next step toward a trusted software supply chain.
Search Less, Build More: Inner Sourcing with GitHub CoPilot and ADO MCP Server
Developers burn cycles context‑switching: opening five repos to find a logging example, searching a wiki for a data masking rule, scrolling chat history for the latest pipeline pattern. Organisations that I speak to are often on the path of transformational platform engineering projects but always have the fear or doubt of "what if my engineers don't use these resources". While projects like Backstage still play a pivotal role in inner sourcing and discoverability, I also empathise with developers who would argue "How would I even know in the first place, which modules have or haven't been created for reuse". In this blog we explore how we can ensure organisational standards and developer satisfaction without any heavy lifting on either side: no custom model training, no rewriting or relocating of repositories and no stagnant local data. Using GitHub CoPilot + Azure DevOps MCP server (with the free `code_search` extension) we turn the IDE into an organizational knowledge interface. Instead of guessing or re‑implementing, engineers can start scaffolding projects or solving issues as they would normally (hopefully using CoPilot) and without extra prompting. GitHub CoPilot can lean into organisational standards and ensure recommendations are made with code snippets directly generated from existing examples. What Is the Azure DevOps MCP Server + code_search Extension? MCP (Model Context Protocol) is an open standard that lets agents (like GitHub Copilot) pull in structured, on-demand context from external systems. MCP servers contain natural language explanations of the tools that the agent can utilise, allowing dynamic decision making about when to use certain toolsets over others. The Azure DevOps MCP Server is the ADO Product Team's implementation of that standard. It exposes your ADO environment in a way CoPilot can consume. Out of the box it gives you access to: Projects – list and navigate across projects in your organization. Repositories – browse repos, branches, and files. Work items – surface user stories, bugs, or acceptance criteria. Wikis – pull policies, standards, and documentation. This means CoPilot can ground its answers in live ADO content, instead of hallucinating or relying only on what’s in the current editor window. The ADO server runs locally from your own machine to ensure that all sensitive project information remains within your secure network boundary. This also means that existing permissions on ADO objects such as Projects or Repositories are respected. The wiki search tooling available out of the box with the ADO MCP server is very useful; however, if I am honest, I have seen these wikis go unused, with documentation being stored elsewhere, either inside the repository or in a project management tool. This means any tool that needs to implement code requires the ability to accurately search the code stored in the repositories themselves. That is where the code_search extension enablement in ADO is so important. Most organisations have this enabled already, however it is worth noting that this prerequisite is the real unlock of cross-repo search. This allows CoPilot to: Query for symbols, snippets, or keywords across all repos. Retrieve usage examples from code, not just docs. Locate standards (like logging wrappers or retry policies) wherever they live. Back every recommendation with specific source lines. In short: MCP connects CoPilot to Azure DevOps. code_search makes that connection powerful by turning it into a discovery engine.
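If code_search is not yet enabled in your organisation, it can be installed from the Visual Studio Marketplace or scripted with the Azure DevOps CLI. The sketch below is illustrative only - the publisher and extension IDs are my assumption of the marketplace listing (ms.vss-code-search) and should be confirmed before running.

```bash
# Add the Azure DevOps CLI extension if it is not already installed.
az extension add --name azure-devops

# Install the Code Search extension for the organisation.
# Publisher/extension IDs below are assumed from the marketplace listing - verify them first.
az devops extension install \
  --publisher-id ms \
  --extension-id vss-code-search \
  --organization https://dev.azure.com/<your-org>

# Confirm the extension is installed and enabled.
az devops extension show \
  --publisher-id ms \
  --extension-id vss-code-search \
  --organization https://dev.azure.com/<your-org>
```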
What is the relevance of CoPilot Instructions? One of the less obvious but most powerful features of GitHub CoPilot is its ability to follow instructions files. CoPilot automatically looks for these files and uses them as a “playbook” for how it should behave. There are different types of instructions you can provide: Organisational instructions – apply across your entire workspace, regardless of which repo you’re in. Repo-specific instructions – scoped to a particular repository, useful when one project has unique standards or patterns. Personal instructions – smaller overrides layered on top of global rules when a local exception applies. (Stored in .github/copilot-instructions.md) In this solution, I’m using a single personal instructions file. It tells CoPilot: When to search (e.g., always query repos and wikis before answering a standards question). Where to look (Azure DevOps repos, wikis, and with code_search, the code itself). How to answer (responses must cite the repo/file/line or wiki page; if no source is found, say so). How to resolve conflicts (prefer dated wiki entries over older README fragments). As a small example, a section of a CoPilot instruction file could look like this: # GitHub Copilot Instructions for Azure DevOps MCP Integration This project uses Azure DevOps with MCP server integration to provide organizational context awareness. Always check to see if the Azure DevOps MCP server has a tool relevant to the user's request. ## Core Principles ### 1. Azure DevOps Integration - **Always prioritize Azure DevOps MCP tools** when users ask about: - Work items, stories, bugs, tasks - Pull requests and code reviews - Build pipelines and deployments - Repository operations and branch management - Wiki pages and documentation - Test plans and test cases - Project and team information ### 2. Organizational Context Awareness - Before suggesting solutions, **check existing organizational patterns** by: - Searching code across repositories for similar implementations - Referencing established coding standards and frameworks - Looking for existing shared libraries and utilities - Checking architectural decision records (ADRs) in wikis ### 3. Cross-Repository Intelligence - When providing code suggestions: - **Search for existing patterns** in other repositories first - **Reference shared libraries** and common utilities - **Maintain consistency** with organizational standards - **Suggest reusable components** when appropriate ## Tool Usage Guidelines ### Work Items and Project Management When users mention bugs, features, tasks, or project planning: ``` ✅ Use: wit_my_work_items, wit_create_work_item, wit_update_work_item ✅ Use: wit_list_backlogs, wit_get_work_items_for_iteration ✅ Use: work_list_team_iterations, core_list_projects The result... To test this I created 3 ADO Projects each with between 1-2 repositories. The repositories were light with only ReadMe's inside containing descriptions of the "repo" and some code snippets examples for usage. I have then created a brand-new workspace with no context apart from a CoPilot instructions document (which could be part of a repo scaffold or organisation wide) which tells CoPilot to search code and the wikis across all ADO projects in my demo environment. It returns guidance and standards from all available repo's and starts to use it to formulate its response. In the screenshot I have highlighted some key parts with red boxes. 
The first is a section of the readme that CoPilot has identified in its response, with that part also highlighted within the CoPilot chat response. I have highlighted the rather generic prompt I used to get this response at the bottom of that window too. Above I have highlighted CoPilot using the MCP server tooling, searching through projects, repos and code. Finally, the largest box highlights the instructions given to CoPilot on how to search and how easily these could be optimised or changed depending on the requirements and organisational coding standards. How did I implement this? Implementation is actually incredibly simple. As mentioned, I created multiple projects and repositories within my ADO Organisation in order to test cross-project & cross-repo discovery. I then did the following: Enable code_search - in your Azure DevOps organization (Marketplace → install extension). Login to Azure - Use the AZ CLI to authenticate to Azure with "az login". Create a .vscode/mcp.json file - Snippet is provided below; the organisation name should be changed to your organisation's name. Start and enable your MCP server - In the mcp.json file you should see a "Start" button. Using the snippet below you will be prompted to add your organisation name. Ensure your CoPilot agent has access to the server under "tools" too. View this setup guide for full setup instructions (azure-devops-mcp/docs/GETTINGSTARTED.md at main · microsoft/azure-devops-mcp) Create a CoPilot Instructions file - with a search-first directive. I have inserted the full version used in this demo at the bottom of the article. Experiment with Prompts – Start generic (“How do we secure APIs?”). Review the output and tools used and then tailor your instructions. Considerations While this is a great approach I do still have some considerations when going to production. Latency - Using MCP tooling on every request will add some latency to developer requests. We can look at optimising usage through CoPilot instructions to better identify when CoPilot should or shouldn't use the ADO MCP server. Complex Projects and Repositories - While I have demonstrated cross-project and cross-repository retrieval, my demo environment does not truly simulate an enterprise ADO environment. Performance should be tested and closely monitored as organisational complexity increases. Public Preview - The ADO MCP server is moving quickly but is currently still in public preview. We have demonstrated in this article how quickly we can make our Azure DevOps content discoverable. While there are considerations moving forward, this showcases a direction towards agentic inner sourcing. Feel free to comment below on how you think this approach could be extended or augmented for other use cases! Resources MCP Server Config (/.vscode/mcp.json) { "inputs": [ { "id": "ado_org", "type": "promptString", "description": "Azure DevOps organization name (e.g. 'contoso')" } ], "servers": { "ado": { "type": "stdio", "command": "npx", "args": ["-y", "@azure-devops/mcp", "${input:ado_org}"] } } } CoPilot Instructions (/.github/copilot-instructions.md) # GitHub Copilot Instructions for Azure DevOps MCP Integration This project uses Azure DevOps with MCP server integration to provide organizational context awareness. Always check to see if the Azure DevOps MCP server has a tool relevant to the user's request. ## Core Principles ### 1.
Azure DevOps Integration - **Always prioritize Azure DevOps MCP tools** when users ask about: - Work items, stories, bugs, tasks - Pull requests and code reviews - Build pipelines and deployments - Repository operations and branch management - Wiki pages and documentation - Test plans and test cases - Project and team information ### 2. Organizational Context Awareness - Before suggesting solutions, **check existing organizational patterns** by: - Searching code across repositories for similar implementations - Referencing established coding standards and frameworks - Looking for existing shared libraries and utilities - Checking architectural decision records (ADRs) in wikis ### 3. Cross-Repository Intelligence - When providing code suggestions: - **Search for existing patterns** in other repositories first - **Reference shared libraries** and common utilities - **Maintain consistency** with organizational standards - **Suggest reusable components** when appropriate ## Tool Usage Guidelines ### Work Items and Project Management When users mention bugs, features, tasks, or project planning: ``` ✅ Use: wit_my_work_items, wit_create_work_item, wit_update_work_item ✅ Use: wit_list_backlogs, wit_get_work_items_for_iteration ✅ Use: work_list_team_iterations, core_list_projects ``` ### Code and Repository Operations When users ask about code, branches, or pull requests: ``` ✅ Use: repo_list_repos_by_project, repo_list_pull_requests_by_repo ✅ Use: repo_list_branches_by_repo, repo_search_commits ✅ Use: search_code for finding patterns across repositories ``` ### Documentation and Knowledge Sharing When users need documentation or want to create/update docs: ``` ✅ Use: wiki_list_wikis, wiki_get_page_content, wiki_create_or_update_page ✅ Use: search_wiki for finding existing documentation ``` ### Build and Deployment When users ask about builds, deployments, or CI/CD: ``` ✅ Use: pipelines_get_builds, pipelines_get_build_definitions ✅ Use: pipelines_run_pipeline, pipelines_get_build_status ``` ## Response Patterns ### 1. Discovery First Before providing solutions, always discover organizational context: ``` "Let me first check what patterns exist in your organization..." → Search code, check repositories, review existing work items ``` ### 2. Reference Organizational Standards When suggesting code or approaches: ``` "Based on patterns I found in your [RepositoryName] repository..." "Following your organization's standard approach seen in..." "This aligns with the pattern established in [TeamName]'s implementation..." ``` ### 3. Actionable Integration Always offer to create or update Azure DevOps artifacts: ``` "I can create a work item for this enhancement..." "Should I update the wiki page with this new pattern?" "Let me link this to the current iteration..." ``` ## Specific Scenarios ### New Feature Development 1. **Search existing repositories** for similar features 2. **Check architectural patterns** and shared libraries 3. **Review related work items** and planning documents 4. **Suggest implementation** based on organizational standards 5. **Offer to create work items** and documentation ### Bug Investigation 1. **Search for similar issues** across repositories and work items 2. **Check related builds** and recent changes 3. **Review test results** and failure patterns 4. **Provide solution** based on organizational practices 5. **Offer to create/update** bug work items and documentation ### Code Review and Standards 1. **Compare against organizational patterns** found in other repositories 2. 
**Reference coding standards** from wiki documentation 3. **Suggest improvements** based on established practices 4. **Check for reusable components** that could be leveraged ### Documentation Requests 1. **Search existing wikis** for related content 2. **Check for ADRs** and technical documentation 3. **Reference patterns** from similar projects 4. **Offer to create/update** wiki pages with findings ## Error Handling If Azure DevOps MCP tools are not available or fail: 1. **Inform the user** about the limitation 2. **Provide alternative approaches** using available information 3. **Suggest manual steps** for Azure DevOps integration 4. **Offer to help** with configuration if needed ## Best Practices ### Always Do: - ✅ Search organizational context before suggesting solutions - ✅ Reference existing patterns and standards - ✅ Offer to create/update Azure DevOps artifacts - ✅ Maintain consistency with organizational practices - ✅ Provide actionable next steps ### Never Do: - ❌ Suggest solutions without checking organizational context - ❌ Ignore existing patterns and implementations - ❌ Provide generic advice when specific organizational context is available - ❌ Forget to offer Azure DevOps integration opportunities --- **Remember: The goal is to provide intelligent, context-aware assistance that leverages the full organizational knowledge base available through Azure DevOps while maintaining development efficiency and consistency.**
Unlocking Application Modernisation with GitHub Copilot
AI-driven modernisation is unlocking new opportunities you may not have even considered yet. It's also allowing organisations to re-evaluate previously discarded modernisation attempts that were considered too hard, complex or simply didn't have the skills or time to do. During Microsoft Build 2025, we were introduced to the concept of Agentic AI modernisation and this post from Ikenna Okeke does a great job of summarising the topic - Reimagining App Modernisation for the Era of AI | Microsoft Community Hub. This blog post however, explores the modernisation opportunities that you may not even have thought of yet, the business benefits, how to start preparing your organisation, empowering your teams, and identifying where GitHub Copilot can help. I’ve spent the last 8 months working with customers exploring usage of GitHub Copilot, and want to share what my team members and I have discovered in terms of new opportunities to modernise, transform your applications, bringing some fun back into those migrations! Let’s delve into how GitHub Copilot is helping teams update old systems, move processes to the cloud, and achieve results faster than ever before. Background: The Modernisation Challenge (Then vs Now) Modernising legacy software has always been hard. In the past, teams faced steep challenges: brittle codebases full of technical debt, outdated languages (think decades-old COBOL or VB6), sparse documentation, and original developers long gone. Integrating old systems with modern cloud services often requiring specialised skills that were in short supply – for example, check out this fantastic post from Arvi LiVigni (@arilivigni ) which talks about migrating from COBOL “the number of developers who can read and write COBOL isn’t what it used to be,” making those systems much harder to update". Common pain points included compatibility issues, data migrations, high costs, security vulnerabilities, and the constant risk that any change could break critical business functions. It’s no wonder many modernisation projects stalled or were “put off” due to their complexity and risk. So, what’s different now (circa 2025) compared to two years ago? In a word: Intelligent AI assistance. Tools like GitHub Copilot have emerged as AI pair programmers that dramatically lower the barriers to modernisation. Arvi’s post talks about how only a couple of years ago, developers had to comb through documentation and Stack Overflow for clues when deciphering old code or upgrading frameworks. Today, GitHub Copilot can act like an expert co-developer inside your IDE, ready to explain mysterious code, suggest updates, and even rewrite legacy code in modern languages. This means less time fighting old code and more time implementing improvements. As Arvi says “nine times out of 10 it gives me the right answer… That speed – and not having to break out of my flow – is really what’s so impactful.” In short, AI coding assistants have evolved from novel experiments to indispensable tools, reimagining how we approach software updates and cloud adoption. I’d also add from my own experience – the models we were using 12 months ago have already been superseded by far superior models with ability to ingest larger context and tackle even further complexity. 
It's easier to experiment, and fail, bringing more robust outcomes – with such speed to create those proof of concepts, experimentation and failing faster, this has also unlocked the ability to test out multiple hypothesis’ and get you to the most confident outcome in a much shorter space of time. Modernisation is easier now because AI reduces the heavy lifting. Instead of reading the 10,000-line legacy program alone, a developer can ask Copilot to explain what the code does or even propose a refactored version. Rather than manually researching how to replace an outdated library, they can get instant recommendations for modern equivalents. These advancements mean that tasks which once took weeks or months can now be done in days or hours – with more confidence and less drudgery - more fun! The following sections will dive into specific opportunities unlocked by GitHub Copilot across the modernisation journey which you may not even have thought of. Modernisation Opportunities Unlocked by Copilot Modernising an application isn’t just about updating code – it involves bringing everyone and everything up to speed with cloud-era practices. Below are several scenarios and how GitHub Copilot adds value, with the specific benefits highlighted: 1. AI-Assisted Legacy Code Refactoring and Upgrades Instant Code Comprehension: GitHub Copilot can explain complex legacy code in plain English, helping developers quickly understand decades-old logic without scouring scarce documentation. For example, you can highlight a cryptic COBOL or C++ function and ask Copilot to describe what it does – an invaluable first step before making any changes. This saves hours and reduces errors when starting a modernisation effort. Automated Refactoring Suggestions: The AI suggests modern replacements for outdated patterns and APIs, and can even translate code between languages. For instance, Copilot can help convert a COBOL program into JavaScript or C# by recognising equivalent constructs. It also uses transformation tools (like OpenRewrite for Java/.NET) to systematically apply code updates – e.g. replacing all legacy HTTP calls with a modern library in one sweep. Developers remain in control, but GitHub Copilot handles the tedious bulk edits. Bulk Code Upgrades with AI: GitHub Copilot’s App Modernisation capabilities can analyse an entire codebase and generate a detailed upgrade plan, then execute many of the code changes automatically. It can upgrade framework versions (say from .NET Framework 4.x to .NET 6, or Java 8 to Java 17) by applying known fix patterns and even fixing compilation errors after the upgrade. Teams can finally tackle those hundreds of thousand-line enterprise applications – a task that could take multiple years with GitHub Copilot handling the repetitive changes. Technical Debt Reduction: By cleaning up old code and enforcing modern best practices, GitHub Copilot helps chip away at years of technical debt. The modernised codebase is more maintainable and stable, which lowers the long-term risk hanging over critical business systems. Notably, the tool can even scan for known security vulnerabilities during refactoring as it updates your code. In short, each legacy component refreshed with GitHub Copilot comes out safer and easier to work on, instead of remaining a brittle black box. 2. Accelerating Cloud Migration and Azure Modernisation Guided Azure Migration Planning: GitHub Copilot can assess a legacy application’s cloud readiness and recommend target Azure services for each component. 
For instance, it might suggest migrating an on-premises database to Azure SQL, moving file storage to Azure Blob Storage, and converting background jobs to Azure Functions. This provides a clear blueprint to confidently move an app from servers to Azure PaaS. One-Click Cloud Transformations: GitHub Copilot comes with predefined migration tasks that automate the code changes required for cloud adoption. With one click, you can have the AI apply dozens of modifications across your codebase. For example: File storage: Replace local file read/writes with Azure Blob Storage SDK calls. Email/Comms: Swap out SMTP email code for Azure Communication Services or SendGrid. Identity: Migrate authentication from Windows AD to Azure AD (Entra ID) libraries. Configuration: Remove hard-coded configurations and use Azure App Configuration or Key Vault for secrets. GitHub Copilot performs these transformations consistently, following best practices (like using connection strings from Azure settings). After applying the changes, it even fixes any compile errors automatically, so you’re not left with broken builds. What used to require reading countless Azure migration guides is now handled in minutes. Automated Validation & Deployment: Modernisation doesn’t stop at code changes. GitHub Copilot can also generate unit tests to validate that the application still behaves correctly after the migration. It helps ensure that your modernised, cloud-ready app passes all its checks before going live. When you’re ready to deploy, GitHub Copilot can produce the necessary Infrastructure-as-Code templates (e.g. Azure Resource Manager Bicep files or Terraform configs) and even set up CI/CD pipeline scripts for you. In other words, the AI can configure the Azure environment and deployment process end-to-end. This dramatically reduces manual effort and error, getting your app to the cloud faster and with greater confidence. Integrations: GitHub Copilot also helps tackle larger migration scenarios that were previously considered too complex. For example, many enterprises want to retire expensive proprietary integration platforms like MuleSoft or Apigee and use Azure-native services instead, but rewriting hundreds of integration workflows was daunting. Now, GitHub Copilot can assist in translating those workflows: for instance, converting an Apigee API proxy into an Azure API Management policy, or a MuleSoft integration into an Azure Logic App. Multi-Cloud Migrations: If you plan to consolidate from other clouds into Azure, GitHub Copilot can suggest equivalent Azure services and SDK calls to replace AWS or GCP-specific code. These AI-assisted conversions significantly cut down the time needed to reimplement functionality on Azure. The business impact can be substantial. By lowering the effort of such migrations, GitHub Copilot makes it feasible to pursue opportunities that deliver big cost savings and simplification. 3. Boosting Developer Productivity and Quality Instant Unit Tests (TDD Made Easy): Writing tests for old code can be tedious, but GitHub Copilot can generate unit test cases on the fly. Developers can highlight an existing function and ask Copilot to create tests; it will produce meaningful test methods covering typical and edge scenarios. This makes it practical to apply test-driven development practices even to legacy systems – you can quickly build a safety net of tests before refactoring. By catching bugs early through these AI-generated tests, teams gain confidence to modernise code without breaking things.
It essentially injects quality into the process from the start, which is crucial for successful modernisation. DevOps Automation: GitHub Copilot helps modernise your build and deployment process as well. It can draft CI/CD pipeline configurations, Dockerfiles, Kubernetes manifests, and other DevOps scripts by leveraging its knowledge of common patterns. For example, when setting up a GitHub Actions workflow to deploy your app, GitHub Copilot will autocomplete significant parts (like build steps, test runs, deployment jobs) based on the project structure. This not only saves time but also ensures best practices (proper caching, dependency installation, etc.) are followed by default. Microsoft even provides an extension where you can describe your Azure infrastructure needs in plain language and have GitHub Copilot generate the corresponding templates and pipeline YAML. By automating these pieces, teams can move to cloud-based, automated deployments much faster. Behaviour-Driven Development Support: Teams practicing BDD write human-readable scenarios (e.g. using Gherkin syntax) describing application behaviour. GitHub Copilot’s AI is adept at interpreting such descriptions and suggesting step definition code or test implementations to match. For instance, given a scenario “When a user with no items checks out, then an error message is shown,” GitHub Copilot can draft the code for that condition or the test steps required. This helps bridge the gap between non-technical specifications and actual code. It makes BDD more efficient and accessible, because even if team members aren’t strong coders, the AI can translate their intent into working code that developers can refine. Quality and Consistency: By using AI to handle boilerplate and repetitive tasks, developers can focus more on high-value improvements. GitHub Copilot’s suggestions are based on a vast corpus of code, which often means it surfaces well-structured, idiomatic patterns. Starting from these suggestions, developers are less likely to introduce errors or reinvent the wheel, which leads to more consistent code quality across the project. The AI also often reminds you of edge cases (for example, suggesting input validation or error handling code that might be missed), contributing to a more robust application. In practice, many teams find that adopting GitHub Copilot results in fewer bugs and quicker code reviews, as the code is cleaner on the first pass. It’s like having an extra set of eyes on every pull request, ensuring standards are met. Business Benefits of AI-Powered Modernisation Bringing together the technical advantages above, what’s the payoff for the business and stakeholders? Modernising with GitHub Copilot can yield multiple tangible and intangible benefits: Accelerated Time-to-Market: Modernisation projects that might have taken a year can potentially be completed in a few months, or an upgrade that took weeks can be done in days. This speed means you can deliver new features to customers sooner and respond faster to market changes. It also reduces downtime or disruption since migrations happen more swiftly. Cost Savings: By automating repetitive work and reducing the effort required from highly paid senior engineers, GitHub Copilot can trim development costs. Faster project completion also means lower overall project cost. Additionally, running modernised apps on cloud infrastructure (with updated code) often lowers operational costs due to more efficient resource usage and easier maintenance. 
There’s also an opportunity cost benefit: developers freed up by Copilot can work on other value-adding projects in parallel. Improved Quality & Reliability: GitHub Copilot’s contributions to testing, bug-fixing, and even security (like patching known vulnerabilities during upgrades) result in more robust applications. Modernised systems have fewer outages and security incidents than shaky legacy ones. Stakeholders will appreciate that with GitHub Copilot, modernisation doesn’t mean “trading one set of bugs for another” – instead, you can increase quality as you modernise (GitHub’s research noted higher code quality when using Copilot, as developers are less likely to introduce errors or skip tests). Business Agility: A modernised application (especially one refactored for cloud) is typically more scalable and adaptable. New integrations or features can be added much faster once the platform is up-to-date. GitHub Copilot helps clear the modernisation hurdle, after which the business can innovate on a solid, flexible foundation (for example, once a monolith is broken into microservices or moved to Azure PaaS, you can iterate on it much faster in the future). AI-assisted modernisation thus unlocks future opportunities (like easier expansion, integrations, AI features, etc.) that were impractical on the legacy stack. Employee Satisfaction and Innovation: Developer happiness is a subtle but important benefit. When tedious work is handled by AI, developers can spend more time on creative tasks – designing new features, improving user experience, exploring new technologies. This can foster a culture of innovation. Moreover, being seen as a company that leverages modern tools (like AI Co-pilots) helps attract and retain top tech talent. Teams that successfully modernise critical systems with Copilot will gain confidence to tackle other ambitious projects, creating a positive feedback loop of improvement. To sum up, GitHub Copilot acts as a force-multiplier for application modernisation. It enables organisations to do more with less: convert legacy “boat anchors” into modern, cloud-enabled assets rapidly, while improving quality and developer morale. This aligns IT goals with business goals – faster delivery, greater efficiency, and readiness for the future. Call to Action: Embrace the Future of Modernisation GitHub Copilot has proven to be a catalyst for transforming how we approach legacy systems and cloud adoption. If you’re excited about the possibilities, here are next steps and what to watch for: Start Experimenting: If you haven’t already, try GitHub Copilot on a sample of your code. Use Copilot or Copilot Chat to explain a piece of old code or generate a unit test. Seeing it in action on your own project can build confidence and spark ideas for where to apply it. Identify a Pilot Project: Look at your application portfolio for a candidate that’s ripe for modernisation – maybe a small legacy service that could be moved to Azure, or a module that needs a refactor. Use GitHub Copilot to assess and estimate the effort. Often, you’ll find tasks once deemed “too hard” might now be feasible. Early successes will help win support for larger initiatives. Stay Tuned for Our Upcoming Blog Series: This post is just the beginning. In forthcoming posts, we’ll dive deeper into: Setting Up Your Organisation for Copilot Adoption: Practical tips on preparing your enterprise environment – from licensing and security considerations to training programs. 
We’ll discuss best practices (like running internal awareness campaigns, defining success metrics, and creating Copilot champions in your teams) to ensure a smooth rollout. Empowering Your Colleagues: How to foster a culture that embraces AI assistance. This includes enabling continuous learning, sharing prompt techniques and knowledge bases, and addressing any scepticism. We’ll cover strategies to support developers in using Copilot effectively, so that everyone from new hires to veteran engineers can amplify their productivity. Identifying High-Impact Modernisation Areas: Guidance on spotting where GitHub Copilot can add the most value. We’ll look at different domains – code, cloud, tests, data – and how to evaluate opportunities (for example, using telemetry or feedback to find repetitive tasks suited for AI, or legacy components with high ROI if modernised). Engage and Share: As you start leveraging Copilot for modernisation, share your experiences and results. Success stories (even small wins like “GitHub Copilot helped reduce our code review times” or “we migrated a component to Azure in 1 sprint”) can build momentum within your organisation and the broader community. We invite you to discuss and ask questions in the comments or in our tech community forums. Take a look at the new App Modernisation Guidance—a comprehensive, step-by-step playbook designed to help organisations: Understand what to modernise and why Migrate and rebuild apps with AI-first design Continuously optimise with built-in governance and observability Modernisation is a journey, and AI is the new compass and co-pilot to guide the way. By embracing tools like GitHub Copilot, you position your organisation to break through modernisation barriers that once seemed insurmountable. The result is not just updated software, but a more agile, cloud-ready business and a happier, more productive development team. Now is the time to take that step. Empower your team with Copilot, and unlock the full potential of your applications and your developers. Stay tuned for more insights in our next posts, and let’s modernise what’s possible together!
Build Multi-Agent AI Systems on Azure App Service
Introduction: The Evolution of AI-Powered App Service Applications Over the past few months, we've been exploring how to supercharge existing Azure App Service applications with AI capabilities. If you've been following along with this series, you've seen how we can quickly integrate AI Foundry agents with MCP servers and host remote MCP servers directly on App Service. Today, we're taking the next leap forward by demonstrating how to build sophisticated multi-agent systems that leverage connected agents, Model Context Protocol (MCP), and OpenAPI tools - all running on Azure App Service's Premium v4 tier with .NET Aspire for enhanced observability and cloud-native development experience. 💡 Want the full technical details? This blog provides an overview of the key concepts and capabilities. For comprehensive setup instructions, architecture deep-dives, performance considerations, debugging guidance, and detailed technical documentation, check out the complete README on GitHub. What Makes This Sample Special? This fashion e-commerce demo showcases several cutting-edge technologies working together: 🤖 Multi-Agent Architecture with Connected Agents Unlike single-agent systems, this sample implements an orchestration pattern where specialized agents work together: Main Orchestrator: Coordinates workflow and handles inventory queries via MCP tools Cart Manager: Specialized in shopping cart operations via OpenAPI tools Fashion Advisor: Provides expert styling recommendations Content Moderator: Ensures safe, professional interactions 🔧 Advanced Tool Integration MCP Tools: Real-time connection to external inventory systems using the Model Context Protocol OpenAPI Tools: Direct agent integration with your existing App Service APIs Connected Agent Tools: Seamless agent-to-agent communication with automatic orchestration ⚡ .NET Aspire Integration Enhanced development experience with built-in observability Simplified cloud-native application patterns Real-time monitoring and telemetry (when developing locally) 🚀 Premium v4 App Service Tier Latest App Service performance capabilities Optimized for modern cloud-native workloads Enhanced scalability for AI-powered applications Key Technical Innovations Connected Agent Orchestration Your application communicates with a single main agent, which automatically coordinates with specialist agents as needed. No changes to your existing App Service code required. Dual Tool Integration This sample demonstrates both MCP tools for external system connectivity and OpenAPI tools for direct API integration. Zero-Infrastructure Overhead Agents work directly with your existing App Service APIs and external endpoints - no additional infrastructure deployment needed. Why These Technologies Matter for Real Applications The combination of these technologies isn't just about showcasing the latest features - it's about solving real business challenges. Let's explore how each component contributes to building production-ready AI applications. .NET Aspire: Enhancing the Development Experience This sample leverages .NET Aspire to provide enhanced observability and simplified cloud-native development patterns. While .NET Aspire is still in preview on App Service, we encourage you to start exploring its capabilities and keep an eye out for future updates planned for later this year. What's particularly exciting about Aspire is how it maintains the core principle we've emphasized throughout this series: making AI integration as simple as possible. 
You don't need to completely restructure your application to benefit from enhanced observability and modern development patterns. Premium v4 App Service: Built for Modern AI Workloads This sample is designed to run on Azure App Service Premium v4, which we recently announced is Generally Available. Premium v4 is the latest offering in the Azure App Service family, delivering enhanced performance, scalability, and cost efficiency. From Concept to Implementation: Staying True to Our Core Promise Throughout this blog series, we've consistently demonstrated that adding AI capabilities to existing applications doesn't require massive rewrites or complex architectural changes. This multi-agent sample continues that tradition - what might seem like a complex system is actually built using the same principles we've established: ✅ Incremental Enhancement: Build on your existing App Service infrastructure ✅ Simple Integration: Use familiar tools like azd up for deployment ✅ Production-Ready: Leverage mature Azure services you already trust ✅ Future-Proof: Easy to extend as new capabilities become available Looking Forward: What's Coming Next This sample represents just the beginning of what's possible with AI-powered App Service applications. Here's what we're working on next: 🔐 MCP Authentication Integration Enhanced security patterns for production MCP server deployments, including Azure Entra ID integration. 🚀 New Azure AI Foundry Features As Azure AI Foundry continues to evolve, we'll be updating this sample to showcase: New agent capabilities Enhanced tool integrations Performance optimizations Additional model support 📊 Advanced Analytics and Monitoring Deeper integration with Azure Monitor for: Agent performance analytics Business intelligence from agent interactions 🔧 Additional Programming Language Support Following our multi-language MCP server samples, we'll be adding support for other languages in samples that will be added to the App Service documentation. Getting Started Today Ready to add multi-agent capabilities to your existing App Service application? The process follows the same streamlined approach we've used throughout this series. Quick Overview Clone and Deploy: Use azd up for one-command infrastructure deployment Create Your Agents: Run a Python setup script to configure the multi-agent system Connect Everything: Add one environment variable to link your agents Test and Explore: Try the sample conversations and see agent interactions 📚 For detailed step-by-step instructions, including prerequisites, troubleshooting tips, environment setup, and comprehensive configuration guidance, see the complete setup guide in the README. Learning Resources If you're new to this ecosystem, we recommend starting with these foundational resources: Integrate AI into your Azure App Service applications - Comprehensive guide with language-specific tutorials for building intelligent applications on App Service Supercharge Your App Service Apps with AI Foundry Agents Connected to MCP Servers - Learn the basics of integrating AI Foundry agents with MCP servers Host Remote MCP Servers on App Service - Deploy and manage MCP servers on Azure App Service Conclusion: The Future of AI-Powered Applications This multi-agent sample represents the natural evolution of our App Service AI integration journey. 
We started with basic agent integration, progressed through MCP server hosting, and now we're showcasing sophisticated multi-agent orchestration - all while maintaining our core principle that AI integration should enhance, not complicate, your existing applications. Whether you're just getting started with AI agents or ready to implement complex multi-agent workflows, the path forward is clear and incremental. As Azure AI Foundry adds new capabilities and App Service continues to evolve, we'll keep updating these samples and sharing new patterns. Stay tuned - the future of AI-powered applications is being built today, one agent at a time. Additional Resources 🚀 Start Building GitHub repository for this sample - Comprehensive setup guide, architecture details, troubleshooting, and technical deep-dives 📚 Learn More Azure AI Foundry Documentation: Connected Agents Guide MCP Tools Setup: Model Context Protocol Integration .NET Aspire on App Service: Deployment Guide Premium v4 App Service: General Availability Announcement Have questions or want to share how you're using multi-agent systems in your applications? Join the conversation in the comments below. We'd love to hear about your AI-powered App Service success stories!
Securing Cloud Shell Access to AKS
Azure Cloud Shell is an online shell hosted by Microsoft that provides instant access to a command-line interface, enabling users to manage Azure resources without needing local installations. Cloud Shell comes equipped with popular tools and programming languages, including Azure CLI, PowerShell, and the Kubernetes command-line tool (kubectl). Using Cloud Shell can provide several benefits for administrators who need to work with AKS, especially if they need quick access from anywhere, or are in locked down environments: Immediate Access: There’s no need for local setup; you can start managing Azure resources directly from your web browser. Persistent Storage: Cloud Shell offers a file share in Azure, keeping your scripts and files accessible across multiple sessions. Pre-Configured Environment: It includes built-in tools, saving time on installation and configuration. The Challenge of Connecting to AKS By default, Cloud Shell traffic to AKS originates from a random Microsoft-managed IP address, rather than from within your network. As a result, the AKS API server must be publicly accessible with no IP restrictions, which poses a security risk as anyone on the internet can attempt to reach it. While credentials are still required, restricting access to the API server significantly enhances security. Fortunately, there are ways to lock down the API server while still enabling access via Cloud Shell, which we’ll explore in the rest of this article Options for Securing Cloud Shell Access to AKS Several approaches can be taken to secure the access to your AKS cluster while using Cloud Shell: IP Allow Listing On AKS clusters with a public API server, it is possible to lock down access to the API server with an IP allow list. Each Cloud Shell instance has a randomly selected outbound IP coming from the Azure address space whenever a new session is deployed. This means we cannot allow access to these IPs in advance, but we apply them once our session is running and this will work for the duration of our session. Below is an example script that you could run from Cloud Shell to check the current outbound IP address and allow it on your AKS clusters authorised IP list. #!/usr/bin/env bash set -euo pipefail RG="$1"; AKS="$2" IP="$(curl -fsS https://api.ipify.org)" echo "Adding ${IP} to allow list" CUR="$(az aks show -g "$RG" -n "$AKS" --query "apiServerAccessProfile.authorizedIpRanges" -o tsv | tr '\t' '\n' | awk 'NF')" NEW="$(printf "%s\n%s/32\n" "$CUR" "$IP" | sort -u | paste -sd, -)" if az aks update -g "$RG" -n "$AKS" --api-server-authorized-ip-ranges "$NEW" >/dev/null; then echo "IP ${IP} applied successfully"; else echo "Failed to apply IP ${IP}" >&2; exit 1; fi This method comes with some caveats: The users running the script would need to be granted permissions to update the authorised IP ranges in AKS - this permission could be used to add any IP address This script will need to be run each time a Cloud Shell session is created, and can take a few minutes to run The script only deals with adding IPs to the allow list, you would also need to implement a process to remove these IPs on a regular basis to avoid building up a long list of IPs that are no longer needed. Adding Cloud Shell IPs in bulk, through Service Tags or similar will result in your API server being accessible to a much larger range of IP addresses, and should be avoided. 
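To address the clean-up caveat above, the allow list can periodically be reset to a known baseline so that temporary Cloud Shell IPs do not accumulate. Below is a minimal sketch, assuming the ranges that should always remain allowed are known in advance; the baseline values shown are placeholders.

```bash
#!/usr/bin/env bash
set -euo pipefail
RG="$1"; AKS="$2"

# Placeholder: the ranges that should always be allowed (e.g. corporate egress IPs).
BASELINE="203.0.113.0/24,198.51.100.10/32"

# Overwrite the authorised ranges, discarding any temporary Cloud Shell entries.
az aks update -g "$RG" -n "$AKS" --api-server-authorized-ip-ranges "$BASELINE"
echo "Authorised IP ranges reset to baseline"
```

This could be run on a schedule, for example from an Azure DevOps pipeline or GitHub Actions workflow outside of working hours.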
Command Invoke Azure provides a feature known as Command Invoke that allows you to send commands to be run in AKS, without the need for direct network connectivity. This method executes a container within AKS to run your command and then return the result, and works well from within Cloud Shell. This is probably the simplest approach that works with a locked down API server and the quickest to implement. However, there are some downsides: Commands take longer to run - when you execute the command, it needs to run a container in AKS, execute the command and then return the result. You only get exitCode and text output, and you lose API level details. All commands must be run within the context of the az aks command invoke CLI command, making commands much longer and complex to execute, rather than direct access with Kubectl Command Invoke can be a practical solution for occasional access to AKS, especially when the cost or complexity of alternative methods isn't justified. However, its user experience may fall short if relied upon as a daily tool. Further Details: Access a private Azure Kubernetes Service (AKS) cluster using the command invoke or Run command feature - Azure Kubernetes Service | Microsoft Learn Cloud Shell vNet Integration It is possible to deploy Cloud Shell into a virtual network (vNet), allowing it to route traffic via the vNet, and so access resources using private network, Private Endpoints, or even public resources, but using a NAT Gateway or Firewall for consistent outbound IP address. This approach uses Azure Relay to provide secure access to the vNet from Cloud Shell, without the need to open additional ports. When using Cloud Shell in this way, it does introduce additional cost for the Azure Relay service. Using this solution will require two different approaches, depending on whether you are using a private or public API server. When using a Private API server, which is either directly connected to the vNet, or configured with Private Endpoints, Cloud Shell will be able to connect directly to the private IP of this service over the vNet When using a Public API server, with a public IP, traffic for this will still leave the vNet and go to the internet. The benefit is that we can control the public IP used for the outbound traffic using a Nat Gateway or Azure Firewall. Once this is configured, we can then allow-list this fixed IP on the AKS API server authorised IP ranges. Further Details: Use Cloud Shell in an Azure virtual network | Microsoft Learn Azure Bastion Azure Bastion provides secure and seamless RDP and SSH connectivity to your virtual machines (VMs) directly from the Azure portal, without exposing them to the public internet. Recently, Bastion has also added support for direct connection to AKS with SSH, rather than needing to connect to a jump box and then use Kubectl from there. This greatly simplifies connecting to AKS, and also reduces the cost. Using this approach, we can deploy a Bastion into the vNet hosting AKS. From Cloud Shell we can then use the following command to create a tunnel to AKS. az aks bastion --name <aks name> --resource-group <resource group name> --bastion <bastion resource ID> Once this tunnel is connected, we can run Kubectl commands without any need for further configuration. 
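As an illustration of that flow, the sketch below looks up the Bastion resource ID and opens the tunnel before running kubectl. The resource names are placeholders, and because direct AKS connectivity through Bastion is a preview feature, the exact behaviour may change.

```bash
# Look up the Bastion resource ID to pass to the AKS tunnel command (placeholder names).
BASTION_ID=$(az network bastion show \
  --name my-bastion \
  --resource-group my-network-rg \
  --query id -o tsv)

# Open the tunnel to the AKS API server via Bastion.
az aks bastion --name my-aks --resource-group my-aks-rg --bastion "$BASTION_ID"

# With the tunnel connected, kubectl commands work as normal, for example:
kubectl get nodes
kubectl get pods -A
```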
As with Cloud Shell network integration, we take two slightly different approaches depending on whether the API server is public or private: When using a Private API server, which is either directly connected to the vNet, or configured with Private Endpoints, Cloud Shells connected via Bastion will be able to connect directly to the private IP of this service over the vNet. When using a Public API server, with a public IP, traffic for this will still leave the vNet and go to the internet. As with Cloud Shell vNet integration, we can configure this to use a static outbound IP and allow list this on the API server. Using Bastion, we can still use NAT Gateway or Azure Firewall to achieve this, however you can also allow list the public IP assigned to the Bastion, removing the cost for NAT Gateway or Azure Firewall if these are not required for anything else. Connecting to AKS directly from Bastion requires the use of the Standard or Premium SKU of Bastion, which does have additional cost over the Developer or Basic SKU. This feature also requires that you enable native client support. Further details: Connect to AKS Private Cluster Using Azure Bastion (Preview) - Azure Bastion | Microsoft Learn Summary of Options IP Allow Listing The outbound IP addresses for Cloud Shell instances can be added to the Authorised IP list for your API server. As these IPs are dynamically assigned to sessions they would need to be added at runtime, to avoid adding a large list of IPs and reducing security. This can be achieved with a script. While easy to implement, this requires additional time to run the script with every new session, and increases the overhead for managing the Authorised IP list to remove unused IPs. Command Invoke Command Invoke allows you to run commands against AKS without requiring direct network access or any setup. This is a convenient option for occasional tasks or troubleshooting, but it’s not designed for regular use due to its limited user experience and flexibility. Cloud Shell vNet Integration This approach connects Cloud Shell directly to your virtual network, enabling secure access to AKS resources. It’s well-suited for environments where Cloud Shell is the primary access method and offers a more secure and consistent experience than default configurations. It does involve additional cost for Azure Relay. Azure Bastion Azure Bastion provides a secure tunnel to AKS that can be used from Cloud Shell or by users running the CLI locally. It offers strong security by eliminating public exposure of the API server and supports flexible access for different user scenarios, though it does require setup and may incur additional cost. Cloud Shell is a great tool for providing pre-configured, easily accessible CLI instances, but in the default configuration it can require some security compromises. With a little work, it is possible to make Cloud Shell work with a more secure configuration that limits how much exposure is needed for your AKS API server.
Simplifying Outbound Connectivity Troubleshooting in AKS with Connectivity Analysis (Preview)
Simplifying Outbound Connectivity Troubleshooting in AKS with Connectivity Analysis (Preview)

We are announcing the Connectivity Analysis feature for AKS, now in Public Preview and available through the AKS Portal. You can use the Connectivity Analysis (Preview) feature to quickly verify whether outbound traffic from your AKS nodes is being blocked by Azure network resources such as Azure Firewall, Network Security Groups (NSGs), route tables, and more.

Azure at KubeCon India 2025 | Hyderabad, India – 6-7 August 2025
Welcome to KubeCon + CloudNativeCon India 2025! We’re thrilled to join this year’s event in Hyderabad as a Gold sponsor, where we’ll be highlighting the newest innovations in Azure and Azure Kubernetes Service (AKS) while connecting with India’s dynamic cloud-native community. We’re excited to share some powerful new AKS capabilities that bring AI innovation to the forefront, strengthen security and networking, and make it easier than ever to scale and streamline operations.

Innovate with AI

AI is increasingly central to modern applications and competitive innovation, and AKS is evolving to support intelligent agents more natively. The AKS Model Context Protocol (MCP) server, now in public preview, introduces a unified interface that abstracts Kubernetes and Azure APIs, allowing AI agents to manage clusters more easily across environments. This simplifies diagnostics and operations, even across multiple clusters, and is fully open-source, making it easier to integrate AI-driven tools into Kubernetes workflows.

Enhance networking capabilities

Networking is foundational to application performance and security. This wave of AKS improvements delivers more control, simplicity, and scalability in networking:

Traffic between AKS services can now be filtered by HTTP methods, paths, and hostnames using Layer-7 network policies, enabling precise control and stronger zero-trust security.
Built-in HTTP proxy management simplifies cluster-wide proxy configuration and allows easy disabling of proxies, reducing misconfigurations while preserving future settings.
Private AKS clusters can be accessed securely through Azure Bastion integration, eliminating the need for VPNs or public endpoints by tunneling directly with kubectl.
DNS performance and resilience are improved with LocalDNS for AKS, which enables pods to resolve names even during upstream DNS outages, with no changes to workloads.
Outbound traffic from AKS can now use static egress IP prefixes, ensuring predictable IPs for compliance and smoother integration with external systems.
Cluster scalability is enhanced by supporting multiple Standard Load Balancers, allowing traffic isolation and avoiding rule limits by assigning SLBs to specific node pools or services.
Network troubleshooting is streamlined with Azure Virtual Network Verifier, which runs connectivity tests from AKS to external endpoints and identifies misconfigured firewalls or routes.

Strengthen security posture

Security remains a foundational priority for Kubernetes environments, especially as workloads scale and diversify. The following enhancements strengthen protection for data, infrastructure, and applications running in AKS, addressing key concerns around isolation, encryption, and visibility.

Confidential VMs for Azure Linux enable containers to run on hardware-encrypted, isolated VMs using AMD SEV-SNP, providing data-in-use protection for sensitive workloads without requiring code changes.
Confidential VMs for Ubuntu 24.04 combine AKS’s managed Kubernetes with memory encryption and VM-level isolation, offering enhanced security for Linux containers in Ubuntu-based clusters.
Encryption in transit for NFS secures data between AKS pods and Azure Files NFS volumes using TLS 1.3, protecting sensitive information without modifying applications.
Web Application Firewall for Containers adds OWASP rule-based protection to containerized web apps via Azure Application Gateway, blocking common exploits without separate WAF appliances.
The AKS Security Dashboard in the Azure portal centralizes visibility into vulnerabilities, misconfigurations, compliance gaps, and runtime threats, simplifying cluster security management through Defender for Cloud.

Simplify and scale operations

To streamline operations at scale, AKS is introducing new capabilities that automate resource provisioning, enforce deployment best practices, and simplify multi-tenant management, making it easier to maintain performance and consistency across complex environments.

Node Auto-Provisioning improves resource efficiency by automatically adding and removing standalone nodes based on pod demand, eliminating the need for pre-created node pools during traffic spikes (a minimal CLI sketch appears at the end of this post).
Deployment Safeguards help prevent misconfigurations by validating Kubernetes manifests against best practices and optionally enforcing corrections to reduce instability and security risks.
Managed Namespaces streamline multi-tenant cluster operations by providing a unified view of accessible namespaces across AKS clusters, along with quick access credentials via CLI, API, or Portal.

Maximize performance and visibility

To enhance performance and observability in large-scale environments, AKS is also rolling out infrastructure-level upgrades that improve monitoring capacity and control plane efficiency.

Prometheus quotas in Azure Monitor can now be raised to 20 million samples per minute or active time series, ensuring full metric coverage for massive AKS deployments.
Control plane performance has been improved with a backported Kubernetes enhancement (KEP-5116), reducing API server memory usage by roughly 10x during large listings and enabling faster kubectl responses with lower risk of OOM issues in AKS versions 1.31.9 and above.

Microsoft is at KubeCon India 2025 - come say hi!

Connect with us in Hyderabad! Microsoft has a strong on-site presence at KubeCon + CloudNativeCon India 2025. Here are some highlights of how you can connect with us at the event:

August 6-7: Visit Microsoft at Booth G4 for live demos and expert Q&A throughout the conference. Microsoft engineers are also delivering several breakout sessions on AKS and cloud-native technologies.
Microsoft Sessions: Throughout the conference, Microsoft engineers are speaking in various sessions, including:
Keynote: The Last Mile Problem: Why AI Won’t Replace You (Yet)
Lightning Talk: Optimizing SNAT Port and IP Address Management in Kubernetes
Smart Capacity-Aware Volume Provisioning for LVM Local Storage Across Multi-Cluster Kubernetes Fleet
Minimal OS, Maximum Impact: Journey To a Flatcar Maintainer

We’re thrilled to connect with you at KubeCon + CloudNativeCon India 2025. Whether you attend sessions, drop by our booth, or watch the keynote, we look forward to discussing these announcements and hearing your thoughts. Thank you for being part of the community, and happy KubeCon! 👋
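As a small illustration of the Node Auto-Provisioning capability listed above, the sketch below shows how it might be enabled on an existing cluster. This assumes the aks-preview CLI extension and the preview --node-provisioning-mode flag as documented at the time of writing, uses placeholder names, and skips the networking prerequisites (such as Azure CNI Overlay) described in the AKS documentation.

# Add or upgrade the preview CLI extension (assumed to be required while the feature is in preview)
az extension add --name aks-preview --upgrade

# Enable Node Auto-Provisioning on an existing cluster (placeholder names)
az aks update --name myAKSCluster --resource-group myResourceGroup --node-provisioning-mode Auto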
Enhancing Performance in Azure Container Apps

Azure Container Apps is a fully managed serverless container service that enables you to deploy and run applications without having to manage the infrastructure. The Azure Container Apps team has recently made improvements to the load-balancing algorithm and scaling behavior to better align with customer expectations and meet their performance needs.