Search Less, Build More: Inner Sourcing with GitHub CoPilot and ADO MCP Server
Developers burn cycles context-switching: opening five repos to find a logging example, searching a wiki for a data masking rule, scrolling chat history for the latest pipeline pattern. Organisations I speak to are often on the path of transformational platform engineering projects but always carry the fear or doubt of "what if my engineers don't use these resources?" While projects like Backstage still play a pivotal role in inner sourcing and discoverability, I also empathise with developers who would argue, "How would I even know in the first place which modules have or haven't been created for reuse?"

In this blog we explore how to ensure organisational standards and developer satisfaction without heavy lifting on either side: no custom model training, no rewriting or relocating of repositories, and no stagnant local data. Using GitHub CoPilot + the Azure DevOps MCP server (with the free code_search extension), we turn the IDE into an organisational knowledge interface. Instead of guessing or re-implementing, engineers can start scaffolding projects or solving issues as they normally would (hopefully using CoPilot) and, without extra prompting, CoPilot can lean into organisational standards and make recommendations with code snippets generated directly from existing examples.

What Is the Azure DevOps MCP Server + code_search Extension?

MCP (Model Context Protocol) is an open standard that lets agents (like GitHub Copilot) pull in structured, on-demand context from external systems. MCP servers describe their tools in natural language, allowing the agent to decide dynamically when to use one toolset over another. The Azure DevOps MCP Server is the ADO product team's implementation of that standard. It exposes your ADO environment in a way CoPilot can consume. Out of the box it gives you access to:

- Projects – list and navigate across projects in your organization.
- Repositories – browse repos, branches, and files.
- Work items – surface user stories, bugs, or acceptance criteria.
- Wikis – pull policies, standards, and documentation.

This means CoPilot can ground its answers in live ADO content, instead of hallucinating or relying only on what's in the current editor window. The ADO server runs locally on your own machine, so all sensitive project information remains within your secure network boundary. It also means that existing permissions on ADO objects such as projects or repositories are respected.

The wiki search tooling available out of the box with the ADO MCP server is very useful; however, if I am honest, I have seen these wikis go unused, with documentation stored elsewhere – either inside the repository or in a project management tool. This means any tool that needs to implement code requires the ability to accurately search the code stored in the repositories themselves. That is where enabling the code_search extension in ADO is so important. Most organisations have it enabled already, but it is worth noting that this prerequisite is the real unlock of cross-repo search. It allows CoPilot to:

- Query for symbols, snippets, or keywords across all repos.
- Retrieve usage examples from code, not just docs.
- Locate standards (like logging wrappers or retry policies) wherever they live.
- Back every recommendation with specific source lines.

In short: MCP connects CoPilot to Azure DevOps. code_search makes that connection powerful by turning it into a discovery engine.
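To make the code_search piece concrete, the snippet below is a minimal sketch (not part of the MCP server itself) of the kind of cross-repo query the extension enables through the Azure DevOps Search REST API – the same capability the MCP server's code search tooling builds on. The organisation name and PAT are placeholders, and the api-version and response field names are assumptions; check the Search REST API reference for the exact parameters your organisation supports.

```python
# Minimal sketch: query Azure DevOps code search across every repo in an organisation.
# Assumes the code_search extension is installed and a PAT with Code (Read) scope.
# The api-version and result field names may differ slightly in your environment.
import base64
import requests

ORG = "contoso"                    # placeholder organisation name
PAT = "<personal-access-token>"    # load from a secret store in real use

url = f"https://almsearch.dev.azure.com/{ORG}/_apis/search/codesearchresults?api-version=7.1"
auth = base64.b64encode(f":{PAT}".encode()).decode()

payload = {"searchText": "retry policy", "$top": 5}  # keyword search across all repos
response = requests.post(
    url,
    headers={"Authorization": f"Basic {auth}", "Content-Type": "application/json"},
    json=payload,
)
response.raise_for_status()

for match in response.json().get("results", []):
    # Each result points back to a specific repo and file, which is what lets
    # CoPilot cite the exact source behind a recommendation.
    print(match["repository"]["name"], match["path"])
```

This is the kind of lookup that happens behind the scenes when CoPilot decides a standards question needs organisational context rather than a generic answer.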
What is the relevance of CoPilot Instructions?

One of the less obvious but most powerful features of GitHub CoPilot is its ability to follow instructions files. CoPilot automatically looks for these files and uses them as a "playbook" for how it should behave. There are different types of instructions you can provide:

- Organisational instructions – apply across your entire workspace, regardless of which repo you're in.
- Repo-specific instructions – scoped to a particular repository, useful when one project has unique standards or patterns.
- Personal instructions – smaller overrides layered on top of global rules when a local exception applies (stored in .github/copilot-instructions.md).

In this solution, I'm using a single personal instructions file. It tells CoPilot:

- When to search (e.g., always query repos and wikis before answering a standards question).
- Where to look (Azure DevOps repos, wikis, and – with code_search – the code itself).
- How to answer (responses must cite the repo/file/line or wiki page; if no source is found, say so).
- How to resolve conflicts (prefer dated wiki entries over older README fragments).

As a small example, a section of a CoPilot instructions file could look like this:

# GitHub Copilot Instructions for Azure DevOps MCP Integration

This project uses Azure DevOps with MCP server integration to provide organizational context awareness. Always check to see if the Azure DevOps MCP server has a tool relevant to the user's request.

## Core Principles

### 1. Azure DevOps Integration
- **Always prioritize Azure DevOps MCP tools** when users ask about:
  - Work items, stories, bugs, tasks
  - Pull requests and code reviews
  - Build pipelines and deployments
  - Repository operations and branch management
  - Wiki pages and documentation
  - Test plans and test cases
  - Project and team information

### 2. Organizational Context Awareness
- Before suggesting solutions, **check existing organizational patterns** by:
  - Searching code across repositories for similar implementations
  - Referencing established coding standards and frameworks
  - Looking for existing shared libraries and utilities
  - Checking architectural decision records (ADRs) in wikis

### 3. Cross-Repository Intelligence
- When providing code suggestions:
  - **Search for existing patterns** in other repositories first
  - **Reference shared libraries** and common utilities
  - **Maintain consistency** with organizational standards
  - **Suggest reusable components** when appropriate

## Tool Usage Guidelines

### Work Items and Project Management
When users mention bugs, features, tasks, or project planning:
```
✅ Use: wit_my_work_items, wit_create_work_item, wit_update_work_item
✅ Use: wit_list_backlogs, wit_get_work_items_for_iteration
✅ Use: work_list_team_iterations, core_list_projects
```

The result...

To test this I created three ADO projects, each with one or two repositories. The repositories were light, containing only READMEs with descriptions of the "repo" and some example code snippets for usage. I then created a brand-new workspace with no context apart from a CoPilot instructions document (which could be part of a repo scaffold or organisation-wide) telling CoPilot to search the code and wikis across all ADO projects in my demo environment. It returns guidance and standards from all available repos and starts using them to formulate its response. In the screenshot I have highlighted some key parts with red boxes.
The first is a section of the README that CoPilot has identified in its response, also highlighted within the CoPilot chat response. I have highlighted the rather generic prompt I used to get this response at the bottom of that window too. Above that, I have highlighted CoPilot using the MCP server tooling to search through projects, repos, and code. Finally, the largest box highlights the instructions given to CoPilot on how to search, and how easily these could be optimised or changed depending on the requirements and organisational coding standards.

How did I implement this?

Implementation is actually incredibly simple. As mentioned, I created multiple projects and repositories within my ADO organisation in order to test cross-project and cross-repo discovery. I then did the following:

1. Enable code_search – in your Azure DevOps organization (Marketplace → install extension).
2. Log in to Azure – use the Azure CLI to authenticate to Azure with "az login".
3. Create the .vscode/mcp.json file – the snippet is provided below; the organisation name should be changed to your organisation's name.
4. Start and enable your MCP server – in the mcp.json file you should see a "Start" button. Using the snippet below you will be prompted to add your organisation name. Ensure your CoPilot agent has access to the server under "tools" too. View this setup guide for full setup instructions (azure-devops-mcp/docs/GETTINGSTARTED.md at main · microsoft/azure-devops-mcp).
5. Create a CoPilot instructions file – with a search-first directive. I have inserted the full version used in this demo at the bottom of the article.
6. Experiment with prompts – start generic ("How do we secure APIs?"), review the output and the tools used, and then tailor your instructions.

Considerations

While this is a great approach, I do still have some considerations when going to production:

- Latency – using MCP tooling on every request will add some latency to developer requests. We can look at optimising usage through CoPilot instructions to better identify when CoPilot should or shouldn't use the ADO MCP server.
- Complex projects and repositories – while I have demonstrated cross-project and cross-repository retrieval, my demo environment does not truly simulate an enterprise ADO environment. Performance should be tested and closely monitored as organisational complexity increases.
- Public preview – the ADO MCP server is moving quickly but is currently still in public preview.

We have demonstrated in this article how quickly we can make our Azure DevOps content discoverable. While there are considerations moving forward, this showcases a direction towards agentic inner sourcing. Feel free to comment below on how you think this approach could be extended or augmented for other use cases!

Resources

MCP Server Config (/.vscode/mcp.json)

```json
{
  "inputs": [
    {
      "id": "ado_org",
      "type": "promptString",
      "description": "Azure DevOps organization name (e.g. 'contoso')"
    }
  ],
  "servers": {
    "ado": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@azure-devops/mcp", "${input:ado_org}"]
    }
  }
}
```

CoPilot Instructions (/.github/copilot-instructions.md)

# GitHub Copilot Instructions for Azure DevOps MCP Integration

This project uses Azure DevOps with MCP server integration to provide organizational context awareness. Always check to see if the Azure DevOps MCP server has a tool relevant to the user's request.

## Core Principles
### 1. Azure DevOps Integration
- **Always prioritize Azure DevOps MCP tools** when users ask about:
  - Work items, stories, bugs, tasks
  - Pull requests and code reviews
  - Build pipelines and deployments
  - Repository operations and branch management
  - Wiki pages and documentation
  - Test plans and test cases
  - Project and team information

### 2. Organizational Context Awareness
- Before suggesting solutions, **check existing organizational patterns** by:
  - Searching code across repositories for similar implementations
  - Referencing established coding standards and frameworks
  - Looking for existing shared libraries and utilities
  - Checking architectural decision records (ADRs) in wikis

### 3. Cross-Repository Intelligence
- When providing code suggestions:
  - **Search for existing patterns** in other repositories first
  - **Reference shared libraries** and common utilities
  - **Maintain consistency** with organizational standards
  - **Suggest reusable components** when appropriate

## Tool Usage Guidelines

### Work Items and Project Management
When users mention bugs, features, tasks, or project planning:
```
✅ Use: wit_my_work_items, wit_create_work_item, wit_update_work_item
✅ Use: wit_list_backlogs, wit_get_work_items_for_iteration
✅ Use: work_list_team_iterations, core_list_projects
```

### Code and Repository Operations
When users ask about code, branches, or pull requests:
```
✅ Use: repo_list_repos_by_project, repo_list_pull_requests_by_repo
✅ Use: repo_list_branches_by_repo, repo_search_commits
✅ Use: search_code for finding patterns across repositories
```

### Documentation and Knowledge Sharing
When users need documentation or want to create/update docs:
```
✅ Use: wiki_list_wikis, wiki_get_page_content, wiki_create_or_update_page
✅ Use: search_wiki for finding existing documentation
```

### Build and Deployment
When users ask about builds, deployments, or CI/CD:
```
✅ Use: pipelines_get_builds, pipelines_get_build_definitions
✅ Use: pipelines_run_pipeline, pipelines_get_build_status
```

## Response Patterns

### 1. Discovery First
Before providing solutions, always discover organizational context:
```
"Let me first check what patterns exist in your organization..."
→ Search code, check repositories, review existing work items
```

### 2. Reference Organizational Standards
When suggesting code or approaches:
```
"Based on patterns I found in your [RepositoryName] repository..."
"Following your organization's standard approach seen in..."
"This aligns with the pattern established in [TeamName]'s implementation..."
```

### 3. Actionable Integration
Always offer to create or update Azure DevOps artifacts:
```
"I can create a work item for this enhancement..."
"Should I update the wiki page with this new pattern?"
"Let me link this to the current iteration..."
```

## Specific Scenarios

### New Feature Development
1. **Search existing repositories** for similar features
2. **Check architectural patterns** and shared libraries
3. **Review related work items** and planning documents
4. **Suggest implementation** based on organizational standards
5. **Offer to create work items** and documentation

### Bug Investigation
1. **Search for similar issues** across repositories and work items
2. **Check related builds** and recent changes
3. **Review test results** and failure patterns
4. **Provide solution** based on organizational practices
5. **Offer to create/update** bug work items and documentation

### Code Review and Standards
1. **Compare against organizational patterns** found in other repositories
2. **Reference coding standards** from wiki documentation
3. **Suggest improvements** based on established practices
4. **Check for reusable components** that could be leveraged

### Documentation Requests
1. **Search existing wikis** for related content
2. **Check for ADRs** and technical documentation
3. **Reference patterns** from similar projects
4. **Offer to create/update** wiki pages with findings

## Error Handling

If Azure DevOps MCP tools are not available or fail:
1. **Inform the user** about the limitation
2. **Provide alternative approaches** using available information
3. **Suggest manual steps** for Azure DevOps integration
4. **Offer to help** with configuration if needed

## Best Practices

### Always Do:
- ✅ Search organizational context before suggesting solutions
- ✅ Reference existing patterns and standards
- ✅ Offer to create/update Azure DevOps artifacts
- ✅ Maintain consistency with organizational practices
- ✅ Provide actionable next steps

### Never Do:
- ❌ Suggest solutions without checking organizational context
- ❌ Ignore existing patterns and implementations
- ❌ Provide generic advice when specific organizational context is available
- ❌ Forget to offer Azure DevOps integration opportunities

---

**Remember: The goal is to provide intelligent, context-aware assistance that leverages the full organizational knowledge base available through Azure DevOps while maintaining development efficiency and consistency.**

Unlocking Application Modernisation with GitHub Copilot
AI-driven modernisation is unlocking new opportunities you may not have even considered yet. It is also allowing organisations to re-evaluate previously discarded modernisation attempts that were considered too hard or too complex, or for which they simply didn't have the skills or time. During Microsoft Build 2025 we were introduced to the concept of agentic AI modernisation, and this post from Ikenna Okeke does a great job of summarising the topic: Reimagining App Modernisation for the Era of AI | Microsoft Community Hub. This blog post, however, explores the modernisation opportunities you may not even have thought of yet, the business benefits, how to start preparing your organisation, empowering your teams, and identifying where GitHub Copilot can help.

I've spent the last 8 months working with customers exploring usage of GitHub Copilot, and I want to share what my team members and I have discovered in terms of new opportunities to modernise and transform your applications – bringing some fun back into those migrations! Let's delve into how GitHub Copilot is helping teams update old systems, move processes to the cloud, and achieve results faster than ever before.

Background: The Modernisation Challenge (Then vs Now)

Modernising legacy software has always been hard. In the past, teams faced steep challenges: brittle codebases full of technical debt, outdated languages (think decades-old COBOL or VB6), sparse documentation, and original developers long gone. Integrating old systems with modern cloud services often required specialised skills that were in short supply – for example, check out this fantastic post from Arvi LiVigni (@arilivigni), which talks about migrating from COBOL: "the number of developers who can read and write COBOL isn't what it used to be," making those systems much harder to update. Common pain points included compatibility issues, data migrations, high costs, security vulnerabilities, and the constant risk that any change could break critical business functions. It's no wonder many modernisation projects stalled or were "put off" due to their complexity and risk.

So, what's different now (circa 2025) compared to two years ago? In a word: intelligent AI assistance. Tools like GitHub Copilot have emerged as AI pair programmers that dramatically lower the barriers to modernisation. Arvi's post talks about how, only a couple of years ago, developers had to comb through documentation and Stack Overflow for clues when deciphering old code or upgrading frameworks. Today, GitHub Copilot can act like an expert co-developer inside your IDE, ready to explain mysterious code, suggest updates, and even rewrite legacy code in modern languages. This means less time fighting old code and more time implementing improvements. As Arvi says, "nine times out of 10 it gives me the right answer… That speed – and not having to break out of my flow – is really what's so impactful." In short, AI coding assistants have evolved from novel experiments to indispensable tools, reimagining how we approach software updates and cloud adoption. I'd also add from my own experience: the models we were using 12 months ago have already been superseded by far superior models able to ingest larger context and tackle even greater complexity.
It is also easier to experiment and fail, which brings more robust outcomes: with the speed at which you can create proof of concepts, experiment, and fail fast, you can test multiple hypotheses and get to the most confident outcome in a much shorter space of time. Modernisation is easier now because AI reduces the heavy lifting. Instead of reading a 10,000-line legacy program alone, a developer can ask Copilot to explain what the code does or even propose a refactored version. Rather than manually researching how to replace an outdated library, they can get instant recommendations for modern equivalents. These advancements mean that tasks which once took weeks or months can now be done in days or hours – with more confidence, less drudgery, and more fun! The following sections dive into specific opportunities unlocked by GitHub Copilot across the modernisation journey which you may not even have thought of.

Modernisation Opportunities Unlocked by Copilot

Modernising an application isn't just about updating code – it involves bringing everyone and everything up to speed with cloud-era practices. Below are several scenarios and how GitHub Copilot adds value, with the specific benefits highlighted:

1. AI-Assisted Legacy Code Refactoring and Upgrades

- Instant Code Comprehension: GitHub Copilot can explain complex legacy code in plain English, helping developers quickly understand decades-old logic without scouring scarce documentation. For example, you can highlight a cryptic COBOL or C++ function and ask Copilot to describe what it does – an invaluable first step before making any changes. This saves hours and reduces errors when starting a modernisation effort.
- Automated Refactoring Suggestions: The AI suggests modern replacements for outdated patterns and APIs, and can even translate code between languages. For instance, Copilot can help convert a COBOL program into JavaScript or C# by recognising equivalent constructs. It also uses transformation tools (like OpenRewrite for Java/.NET) to systematically apply code updates – e.g. replacing all legacy HTTP calls with a modern library in one sweep. Developers remain in control, but GitHub Copilot handles the tedious bulk edits.
- Bulk Code Upgrades with AI: GitHub Copilot's App Modernisation capabilities can analyse an entire codebase, generate a detailed upgrade plan, and then execute many of the code changes automatically. It can upgrade framework versions (say from .NET Framework 4.x to .NET 6, or Java 8 to Java 17) by applying known fix patterns and even fixing compilation errors after the upgrade. Teams can finally tackle those hundred-thousand-line enterprise applications – a task that could otherwise take multiple years – with GitHub Copilot handling the repetitive changes.
- Technical Debt Reduction: By cleaning up old code and enforcing modern best practices, GitHub Copilot helps chip away at years of technical debt. The modernised codebase is more maintainable and stable, which lowers the long-term risk hanging over critical business systems. Notably, the tool can even scan for known security vulnerabilities during refactoring as it updates your code. In short, each legacy component refreshed with GitHub Copilot comes out safer and easier to work on, instead of remaining a brittle black box.

2. Accelerating Cloud Migration and Azure Modernisation

- Guided Azure Migration Planning: GitHub Copilot can assess a legacy application's cloud readiness and recommend target Azure services for each component.
For instance, it might suggest migrating an on-premises database to Azure SQL, moving file storage to Azure Blob Storage, and converting background jobs to Azure Functions. This provides a clear blueprint to confidently move an app from servers to Azure PaaS.
- One-Click Cloud Transformations: GitHub Copilot comes with predefined migration tasks that automate the code changes required for cloud adoption. With one click, you can have the AI apply dozens of modifications across your codebase. For example:
  - File storage: replace local file reads/writes with Azure Blob Storage SDK calls.
  - Email/comms: swap out SMTP email code for Azure Communication Services or SendGrid.
  - Identity: migrate authentication from Windows AD to Azure AD (Entra ID) libraries.
  - Configuration: remove hard-coded configuration and use Azure App Configuration or Key Vault for secrets.
  GitHub Copilot performs these transformations consistently, following best practices (like using connection strings from Azure settings). After applying the changes, it even fixes any compile errors automatically, so you're not left with broken builds. What used to require reading countless Azure migration guides is now handled in minutes.
- Automated Validation & Deployment: Modernisation doesn't stop at code changes. GitHub Copilot can also generate unit tests to validate that the application still behaves correctly after the migration. It helps ensure that your modernised, cloud-ready app passes all its checks before going live. When you're ready to deploy, GitHub Copilot can produce the necessary Infrastructure-as-Code templates (e.g. Azure Resource Manager Bicep files or Terraform configs) and even set up CI/CD pipeline scripts for you. In other words, the AI can configure the Azure environment and deployment process end-to-end. This dramatically reduces manual effort and error, getting your app to the cloud faster and with greater confidence.
- Integrations: GitHub Copilot also helps tackle larger migration scenarios that were previously considered too complex. For example, many enterprises want to retire expensive proprietary integration platforms like MuleSoft or Apigee and use Azure-native services instead, but rewriting hundreds of integration workflows was daunting. Now, GitHub Copilot can assist in translating those workflows: for instance, converting an Apigee API proxy into an Azure API Management policy, or a MuleSoft integration into an Azure Logic App.
- Multi-Cloud Migrations: If you plan to consolidate from other clouds into Azure, GitHub Copilot can suggest equivalent Azure services and SDK calls to replace AWS- or GCP-specific code. These AI-assisted conversions significantly cut down the time needed to reimplement functionality on Azure. The business impact can be substantial: by lowering the effort of such migrations, GitHub Copilot makes it feasible to pursue opportunities that deliver big cost savings and simplification.

3. Boosting Developer Productivity and Quality

- Instant Unit Tests (TDD Made Easy): Writing tests for old code can be tedious, but GitHub Copilot can generate unit test cases on the fly. Developers can highlight an existing function and ask Copilot to create tests; it will produce meaningful test methods covering typical and edge scenarios. This makes it practical to apply test-driven development practices even to legacy systems – you can quickly build a safety net of tests before refactoring. By catching bugs early through these AI-generated tests, teams gain confidence to modernise code without breaking things.
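To make that concrete, here is a small illustrative sketch – written by hand in Python for brevity, not actual Copilot output – of the kind of characterisation ("safety net") test you might ask Copilot to generate before touching a legacy function. The legacy.pricing module and its expected totals are hypothetical placeholders.

```python
# Illustrative only: a characterisation test that pins down the current behaviour
# of a legacy function before any refactoring begins.
# `legacy.pricing` and the expected totals are hypothetical placeholders.
import pytest

from legacy.pricing import calculate_invoice_total  # hypothetical legacy module


@pytest.mark.parametrize(
    ("line_items", "customer_tier", "expected"),
    [
        ([(2, 10.00)], "standard", 20.00),          # typical order
        ([(1, 99.99), (3, 5.00)], "gold", 103.49),  # whatever the tier discount currently produces
        ([], "standard", 0.00),                     # edge case: empty basket
    ],
)
def test_invoice_total_preserves_current_behaviour(line_items, customer_tier, expected):
    # The point is not that these numbers are "right" – it is that any refactor
    # must keep producing them until the business decides otherwise.
    assert calculate_invoice_total(line_items, customer_tier) == pytest.approx(expected)
```

A handful of tests like this turns a risky rewrite into a guarded refactor.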
Tests like these essentially inject quality into the process from the start, which is crucial for successful modernisation.
- DevOps Automation: GitHub Copilot helps modernise your build and deployment process as well. It can draft CI/CD pipeline configurations, Dockerfiles, Kubernetes manifests, and other DevOps scripts by leveraging its knowledge of common patterns. For example, when setting up a GitHub Actions workflow to deploy your app, GitHub Copilot will autocomplete significant parts (like build steps, test runs, deployment jobs) based on the project structure. This not only saves time but also ensures best practices (proper caching, dependency installation, etc.) are followed by default. Microsoft even provides an extension where you can describe your Azure infrastructure needs in plain language and have GitHub Copilot generate the corresponding templates and pipeline YAML. By automating these pieces, teams can move to cloud-based, automated deployments much faster.
- Behaviour-Driven Development Support: Teams practicing BDD write human-readable scenarios (e.g. using Gherkin syntax) describing application behaviour. GitHub Copilot's AI is adept at interpreting such descriptions and suggesting step definition code or test implementations to match. For instance, given a scenario "When a user with no items checks out, then an error message is shown," GitHub Copilot can draft the code for that condition or the test steps required. This helps bridge the gap between non-technical specifications and actual code. It makes BDD more efficient and accessible, because even if team members aren't strong coders, the AI can translate their intent into working code that developers can refine.
- Quality and Consistency: By using AI to handle boilerplate and repetitive tasks, developers can focus more on high-value improvements. GitHub Copilot's suggestions are based on a vast corpus of code, which often means it surfaces well-structured, idiomatic patterns. Starting from these suggestions, developers are less likely to introduce errors or reinvent the wheel, which leads to more consistent code quality across the project. The AI also often reminds you of edge cases (for example, suggesting input validation or error handling code that might be missed), contributing to a more robust application. In practice, many teams find that adopting GitHub Copilot results in fewer bugs and quicker code reviews, as the code is cleaner on the first pass. It's like having an extra set of eyes on every pull request, ensuring standards are met.

Business Benefits of AI-Powered Modernisation

Bringing together the technical advantages above, what's the payoff for the business and stakeholders? Modernising with GitHub Copilot can yield multiple tangible and intangible benefits:

- Accelerated Time-to-Market: Modernisation projects that might have taken a year can potentially be completed in a few months, or an upgrade that took weeks can be done in days. This speed means you can deliver new features to customers sooner and respond faster to market changes. It also reduces downtime or disruption since migrations happen more swiftly.
- Cost Savings: By automating repetitive work and reducing the effort required from highly paid senior engineers, GitHub Copilot can trim development costs. Faster project completion also means lower overall project cost. Additionally, running modernised apps on cloud infrastructure (with updated code) often lowers operational costs due to more efficient resource usage and easier maintenance.
There's also an opportunity cost benefit: developers freed up by Copilot can work on other value-adding projects in parallel.
- Improved Quality & Reliability: GitHub Copilot's contributions to testing, bug-fixing, and even security (like patching known vulnerabilities during upgrades) result in more robust applications. Modernised systems have fewer outages and security incidents than shaky legacy ones. Stakeholders will appreciate that with GitHub Copilot, modernisation doesn't mean "trading one set of bugs for another" – instead, you can increase quality as you modernise (GitHub's research noted higher code quality when using Copilot, as developers are less likely to introduce errors or skip tests).
- Business Agility: A modernised application (especially one refactored for cloud) is typically more scalable and adaptable. New integrations or features can be added much faster once the platform is up-to-date. GitHub Copilot helps clear the modernisation hurdle, after which the business can innovate on a solid, flexible foundation (for example, once a monolith is broken into microservices or moved to Azure PaaS, you can iterate on it much faster in the future). AI-assisted modernisation thus unlocks future opportunities (like easier expansion, integrations, AI features, etc.) that were impractical on the legacy stack.
- Employee Satisfaction and Innovation: Developer happiness is a subtle but important benefit. When tedious work is handled by AI, developers can spend more time on creative tasks – designing new features, improving user experience, exploring new technologies. This can foster a culture of innovation. Moreover, being seen as a company that leverages modern tools (like AI Co-pilots) helps attract and retain top tech talent. Teams that successfully modernise critical systems with Copilot will gain confidence to tackle other ambitious projects, creating a positive feedback loop of improvement.

To sum up, GitHub Copilot acts as a force-multiplier for application modernisation. It enables organisations to do more with less: convert legacy "boat anchors" into modern, cloud-enabled assets rapidly, while improving quality and developer morale. This aligns IT goals with business goals – faster delivery, greater efficiency, and readiness for the future.

Call to Action: Embrace the Future of Modernisation

GitHub Copilot has proven to be a catalyst for transforming how we approach legacy systems and cloud adoption. If you're excited about the possibilities, here are next steps and what to watch for:

- Start Experimenting: If you haven't already, try GitHub Copilot on a sample of your code. Use Copilot or Copilot Chat to explain a piece of old code or generate a unit test. Seeing it in action on your own project can build confidence and spark ideas for where to apply it.
- Identify a Pilot Project: Look at your application portfolio for a candidate that's ripe for modernisation – maybe a small legacy service that could be moved to Azure, or a module that needs a refactor. Use GitHub Copilot to assess and estimate the effort. Often, you'll find tasks once deemed "too hard" might now be feasible. Early successes will help win support for larger initiatives.
- Stay Tuned for Our Upcoming Blog Series: This post is just the beginning. In forthcoming posts, we'll dive deeper into:
  - Setting Up Your Organisation for Copilot Adoption: Practical tips on preparing your enterprise environment – from licensing and security considerations to training programs.
We'll discuss best practices (like running internal awareness campaigns, defining success metrics, and creating Copilot champions in your teams) to ensure a smooth rollout.
  - Empowering Your Colleagues: How to foster a culture that embraces AI assistance. This includes enabling continuous learning, sharing prompt techniques and knowledge bases, and addressing any scepticism. We'll cover strategies to support developers in using Copilot effectively, so that everyone from new hires to veteran engineers can amplify their productivity.
  - Identifying High-Impact Modernisation Areas: Guidance on spotting where GitHub Copilot can add the most value. We'll look at different domains – code, cloud, tests, data – and how to evaluate opportunities (for example, using telemetry or feedback to find repetitive tasks suited for AI, or legacy components with high ROI if modernised).
- Engage and Share: As you start leveraging Copilot for modernisation, share your experiences and results. Success stories (even small wins like "GitHub Copilot helped reduce our code review times" or "we migrated a component to Azure in 1 sprint") can build momentum within your organisation and the broader community. We invite you to discuss and ask questions in the comments or in our tech community forums.

Take a look at the new App Modernisation Guidance – a comprehensive, step-by-step playbook designed to help organisations:
- Understand what to modernise and why
- Migrate and rebuild apps with AI-first design
- Continuously optimise with built-in governance and observability

Modernisation is a journey, and AI is the new compass and co-pilot to guide the way. By embracing tools like GitHub Copilot, you position your organisation to break through modernisation barriers that once seemed insurmountable. The result is not just updated software, but a more agile, cloud-ready business and a happier, more productive development team. Now is the time to take that step. Empower your team with Copilot, and unlock the full potential of your applications and your developers. Stay tuned for more insights in our next posts, and let's modernise what's possible together!

What's New in Azure App Service at #MSBuild 2025
New App Service Premium v4 plan

The new App Service Premium v4 (Pv4) plan has entered public preview at Microsoft Build 2025 for both Windows and Linux! This new plan is designed to support today's highly demanding application performance, scale, and budgets. Built on the latest "v6" general-purpose virtual machines and memory-optimized x64 Azure hardware with faster processors and NVMe temporary storage, it provides a noticeable performance uplift over prior generations of App Service Premium plans (over 25% in early testing). The Premium v4 offering includes nine new sizes ranging from P0v4, with a single virtual CPU and 4GB RAM, all the way up through P5mv4, with 32 virtual CPUs and 256GB RAM, providing CPU and memory options to meet any business need.

App Service Premium v4 plans provide attractive price-performance across the entire performance curve for both Windows and Linux customers. Premium v4 customers using pay-as-you-go (PAYG) on Azure App Service for Windows can expect to save up to 24% compared with prior Premium plans. We plan to provide deeper commitment-based discounts such as reserved instances and savings plan at GA. For more detailed pricing on the various CPU and memory options, see the pricing pages for Windows and Linux as well as the Azure Pricing Calculator. App Service currently has Pv4 deployed in a few regions with more regions being regularly added. For more details on how to configure App Service plans with Premium v4, as well as a regularly updated list of regional availability, see the product documentation and start taking advantage of faster performance today!

2-zone Availability Zone support is now generally available

With a recently completed platform update in May, customers now enjoy the 99.99% Availability Zone (AZ) SLA when running on only two instances (instead of three)! As part of this update more parts of the App Service footprint have enabled AZ support "in place", which means many existing App Service plans can now also use Availability Zones. Availability Zone configuration for App Service plans is also now mutable. This means that if an App Service plan is running on an AZ-enabled part of the App Service footprint, customers can choose to enable and disable Availability Zone support at any time. Read more about the new Availability Zone options in the announcement article!

ARM/CLI surface area for Availability Zone support has also been updated to provide increased visibility into AZ configuration details. The same enhanced visibility is also coming to the Azure Portal in June. With these changes customers can determine if an App Service plan is on an AZ-enabled scale unit, as well as how many zones are available for zone spanning. This allows customers to deploy with either two zones, or three zones (where available), of zone spanning for their App Service plans. For App Service plans that are AZ-enabled, customers will also be able to see the physical zone placement of each AZ-enabled App Service plan. Availability Zone support is available on the new Premium v4 plan, and also supported with Premium v2, Premium v3, and the dedicated App Service Environment v3 (Isolated V2 plan). Check out the Availability Zone options for your App Service plans and start getting the benefits of zone resiliency today!

.NET Aspire on Azure App Service

.NET Aspire support is now available in public preview for App Service on Linux! .NET Aspire developers creating applications have an additional deployment option with App Service as a deployment target.
Developers can create multi-app/multi-service .NET Aspire applications locally and deploy them into Azure using the new App Service deployment provider. The App Service and .NET Aspire teams worked together to create an App Service "provider" using .NET Aspire's new "provider model". The build provider translates the code-centric view of a .NET Aspire application topology into an Azure deployment mapped onto App Service constructs. The App Service provider supports securely deploying multiple .NET Aspire applications, with observability via the familiar .NET Aspire dashboard coming in the near future. The Getting Started with .NET Aspire on Azure App Service blog has instructions on how to create a .NET Aspire project for deployment onto App Service, as well as a link for providing feedback. If you happen to be at Build 2025, drop by our booth or the theatre session "DEM548: How .NET Aspire on App Service enhances modern app development" to see live demonstrations of the App Service support for .NET Aspire!

Using App Service to build agentic AI apps

The last few months of intelligent app development have seen a frenetic pace of change, with the rapid evolution of agents on Azure AI Foundry Agent Service and new agent extensibility options like Model Context Protocol (MCP) opening avenues for integrating existing data sources and APIs into agentic architectures. Here's a quick run-down of useful resources published recently:

- This article demonstrates hosting a remote MCP server on Azure App Service. The sample is an adaptation of the weather service example from the MCP site. The App Service variation also includes an azd template for easy experimentation via a CLI deployment to App Service!
- This article walks through integrating a .NET Core implementation of a "To-Do" list API running on App Service with an agent created on Azure AI Foundry Agent Service. It's a straightforward example demonstrating how developers can bring together the power of AI agents with existing web API investments.
- Quick start guides for using App Service with Azure OpenAI in your language of choice – Python, Node, .NET, and Java (a short illustrative Python sketch appears at the end of this section).
- Using Microsoft Research's latest 1-bit "super-small" language model, BitNet, on App Service.
- Enhance search queries on text data stored in Azure SQL DB using natural language vector functions and Azure App Service. Includes an accompanying azd example.
- How to use Azure AI Search hybrid search capabilities from App Service with .NET (Blazor), Java (Spring Boot), Node (Express), or Python (FastAPI).
- Use GitHub Copilot to compare your application's bicep against a representative "best practices" bicep definition and then generate the necessary bicep diff.

In addition, using Sidecar for App Service on Linux, developers can easily connect Phi SLMs to their applications. Examples using the chat completion endpoint in the SLM sidecar extensions are available in this GitHub repo with code examples for .NET, Node, Python and Java. There are also accompanying docs for .NET, Node, Python (FastAPI) and Java (Spring Boot) which go into more detail on using the SLM sidecar extensions. The sidecar extensions capability is also now enabled in the Azure Portal.

AI Labs at Microsoft Build

For those of you attending Microsoft Build in person, we will have labs for additional hands-on experience using AI with Azure App Service.
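To give a flavour of what the Python quick start mentioned above wires up, here is a minimal, illustrative sketch of an Azure OpenAI chat call as it might run inside an App Service app. The environment variable names, deployment name, and API version are placeholders, and key-based auth is used purely for brevity – refer to the linked quick starts for the full App Service configuration.

```python
# Minimal sketch of an Azure OpenAI chat completion call from a web app.
# Endpoint, key, deployment name, and api_version below are placeholders;
# in App Service these would typically come from app settings.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",  # adjust to the API version you target
)

response = client.chat.completions.create(
    model=os.environ.get("AZURE_OPENAI_DEPLOYMENT", "gpt-4o-mini"),  # your deployment name
    messages=[
        {"role": "system", "content": "You are a helpful assistant for an e-commerce site."},
        {"role": "user", "content": "Which laptops under $800 are currently in stock?"},
    ],
)

print(response.choices[0].message.content)
```

The labs below extend this basic pattern with Azure AI Foundry agents and sidecar-hosted SLMs.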
LAB347: Add AI experiences to existing .NET apps using Sidecar in App Service

This lab (first lab occurrence and second lab occurrence – see Exercise 4) covers an e-commerce inventory API (written in .NET) integrated with an agent running on Azure AI Foundry Agent Service. When a customer interacts with the AI agent, it automatically invokes the appropriate web APIs to fetch real-time inventory information, add/remove products in a shopping cart, and increment/decrement product inventory. This is a great example of an AI-powered agent grounded in a company's ever-changing transactional data. As a fun sidenote, GitHub Copilot was used extensively to build >95% of the sample application, as well as to generate the OpenAPI specification that integrates the inventory web API with the AI agent!

The same AI-on-App Service lab (Exercise 1) walks developers through integrating a basic Azure OpenAI chat interface into a web application. The lab also demonstrates using a background WebJob on Linux with Azure OpenAI (Exercise 2) to categorize user sentiment for product reviews. The lab also shows (Exercise 3) how to use a small language model (SLM) like Microsoft's Phi-4 model in a WebJob to perform similar categorization, without the need to call out to an LLM. Although SLMs are not as powerful as LLMs, SLMs are an interesting alternative for integrating AI functionality where either cost, or control over AI data flows, are considerations.

Azure SRE Agent for App Service

One of the big announcements at Build this year was the Agentic DevOps announcement, which includes the new Azure SRE Agent. Designed to empower Site Reliability Engineers (SREs), the SRE Agent is a new agentic service that can manage Azure application platform services, including App Service, Functions, and Azure Container Apps to name just a few. It provides automatic incident response and mitigation, faster root cause analysis (RCA) of production issues, and continuous monitoring of application health and performance. With SRE Agent, you can use a natural language interface for managing your web applications on Azure App Service. To be an early adopter of the Agentic DevOps revolution, check out the announcement blog and sign up to join the SRE Agent preview as it starts rolling out!

WebJobs for App Service on Linux (GA)

WebJobs for App Service on Linux reached general availability earlier this month. With this functionality, developers can implement the same "infra-glue" style of background jobs that they have enjoyed with App Service on Windows. Take a look at the documentation demonstrating WebJobs support for shell scripts, Python, Java, .NET and Node on Linux! As mentioned earlier, the AI-on-App Service lab at this year's Build conference has two code examples (see Exercise 2 and Exercise 3) demonstrating Linux WebJobs with Azure OpenAI, as well as a locally connected Phi-4 Small Language Model (SLM) sidecar, to categorize user sentiment for submitted product reviews. These are great examples of creatively using WebJobs to perform background batch-style work with your AI resources. Also keep an eye out for the upcoming WebJobs for Windows Containers GA, which is planned "soon" this summer!

Language and Framework Updates

In addition to the release of .NET Aspire support for App Service, the App Service team has kept busy updating myriad Node, Python, Java/JBoss, .NET and PHP versions.
To give an idea of the scope of effort in keeping language and framework versions up to date across both Windows and Linux, App Service released more than two dozen language/framework-specific updates in the last few weeks prior to Build. That represents the ongoing platform commitment to keeping languages regularly updated without the need for developers to explicitly invest time and effort doing so themselves.

Just last month, Strapi support was introduced for App Service on Linux! Strapi is an open-source, headless, JavaScript-based content management system that provides developers a robust platform for developing and delivering content across a variety of formats. The Azure Marketplace Strapi offering provides customization control, global availability and pre-built integration to essential Azure services like Azure Database for MySQL or PostgreSQL and Azure Email Communication Services. Deep dive on the details of hosting Strapi on App Service in this article.

The custom error pages feature for App Service has also been updated just prior to Build. Custom error pages enable developers to customize the response rendered for common HTTP errors (403, 502 and 503) which are returned by the platform. This release includes a new option to always render custom errors regardless of whether the HTTP error was platform generated or application generated. There will also be an Azure Portal update coming in June with support for the new custom error page features! Looking ahead to summer, stay tuned for the impending arrival of .NET 10 preview bits on App Service across both Windows and Linux!

Networking and ASE Updates

App Service support for public inbound IPv6 traffic is available in most regions in public preview, with the service working towards a planned GA of inbound IPv6 support during the summer. Inbound IPv6 is supported for both IPv6-only upstream clients, as well as dual-stack scenarios where a web application is reachable over either an IPv4 address or an IPv6 address. As part of an upcoming summer release, App Service will be delivering a public preview of *outbound* IPv6 traffic. For details on using IPv6 on App Service, as well as to track all of the upcoming updates, consult this article: Announcing inbound IPv6 support in public preview - Azure App Service.

For App Service Environment (ASE) customers, App Service will soon be releasing new support for adding custom Certificate Authorities (CAs) to an ASE. This new support will enable securing inbound TLS traffic using certificates issued by a custom Certificate Authority.

Hybrid Connections customers will be happy to see that a new version of the App Service Hybrid Connection Manager (HCM) was released just a few weeks ago. The new HCM delivers updated UX support for both Linux and Windows customers, enhanced logging and connection testing, and a brand-new CLI for scripting and command-line management of Hybrid Connections!

You might have missed it, but there was a recent addition to the troubleshooting options on App Service with the new Network Troubleshooter! The Network Troubleshooter offers comprehensive analysis and actionable insights to resolve connectivity failures for both Linux and Windows web apps. It tests connectivity to Azure resources like Storage, Redis, SQL Server, MySQL server, and other apps running on App Service. It diagnoses connectivity problems with Private endpoints, Service endpoints, and Internet-based endpoints, detects NAT gateways, and investigates DNS failures with custom DNS servers.
Additionally, it provides actionable recommendations and surfaces any network rules it finds that are blocking connectivity. If you regularly wrestle with connectivity challenges, give the Network Troubleshooter a try!

Next Steps

Developers can learn more about Azure App Service at Getting Started with Azure App Service. Stay up to date on new features and innovations on Azure App Service via Azure Updates as well as the Azure App Service (@AzAppService) X feed. There is always a steady stream of great deep-dive technical articles about App Service, as well as the breadth of developer-focused Azure services, over on the Apps on Azure blog. And lastly, take a look at the Azure App Service Community Standups hosted on the Microsoft Azure Developers YouTube channel. The Azure App Service Community Standup series regularly features walkthroughs of new and upcoming features from folks that work directly on the product!

Build 2025 Session Reference

(Note: all times below are listed in Seattle time – Pacific Daylight Time.)
(Note: some labs have more than one timeslot spanning multiple days.)

Innovate, deploy, & optimize your apps without infrastructure hassles
https://build.microsoft.com/en-US/sessions/BRK201
Monday, May 19th, 11:15 AM – 12:15 PM Pacific Daylight Time
Arch, 705 Pike, Level 6, Room 606
Breakout, Streaming Online and Recorded Session (BRK201)

Quickly build, deploy, and scale web apps and APIs globally with App Service
https://build.microsoft.com/en-US/sessions/BRK200
Tuesday, May 20th, 11:45 AM – 12:45 PM Pacific Daylight Time
Arch, 705 Pike, Level 6, Room 608
Breakout, Streaming Online and Recorded Session (BRK200)

Simplifying .NET upgrades with GitHub Copilot
https://build.microsoft.com/en-US/sessions/DEM549
Monday, May 19th, 5:05 PM – 5:20 PM Pacific Daylight Time
Arch, 705 Pike, Level 4, Hub, Theater B
Demo Session – Also Recorded (DEM549)

Use Azure SRE Agent to automate tasks and increase site reliability
https://build.microsoft.com/en-US/sessions/DEM550
Tuesday, May 20th, 5:10 PM – 5:25 PM Pacific Daylight Time
Arch, 705 Pike, Level 4, Hub, Theater A
Demo Session – Also Recorded (DEM550)

How .NET Aspire on App Service enhances modern app development
https://build.microsoft.com/en-US/sessions/DEM548
Wednesday, May 21st, 2:00 PM – 2:15 PM Pacific Daylight Time
Arch, 705 Pike, Level 4, Hub, Theater B
Demo Session – Also Recorded (DEM548)

Add AI experiences to existing .NET apps using Sidecars in App Service
[Note: Lab participants will be able to try Phi-4 and Azure AI Foundry Agent service scenarios in this lab.]
https://build.microsoft.com/en-US/sessions/LAB347
Monday, May 19th, 4:45 PM – 6:00 PM Pacific Daylight Time
Arch, 800 Pike, Level 1, Yakima 1
Hands on Lab – In-Person Only (LAB347)
You can also work through the lab with your own Azure subscription! Code is available at https://github.com/Azure-Samples/Build2025-LAB347. Deploy the lab resources using the included resource provisioning template (https://github.com/Azure-Samples/Build2025-LAB347/blob/main/resources/lab347.json). You can deploy the template by searching on "Deploy a custom template" in the Azure Portal, and copying and pasting the template into the "Build your own template in the editor option"!

Add AI experiences to existing .NET apps using Sidecars in App Service
[Note: Lab participants will be able to try Phi-4 and Azure AI Foundry Agent service scenarios in this lab.]
https://build.microsoft.com/en-US/sessions/LAB347-R1
Wednesday, May 21st, 4:30 PM – 5:45 PM Pacific Daylight Time
Arch, 800 Pike, Lower Level, Skagit 5
Hands on Lab – In-Person Only (LAB347-R1)
You can also work through the lab with your own Azure subscription! Code is available at https://github.com/Azure-Samples/Build2025-LAB347. Deploy the lab resources using the included resource provisioning template (https://github.com/Azure-Samples/Build2025-LAB347/blob/main/resources/lab347.json). You can deploy the template by searching on "Deploy a custom template" in the Azure Portal, and copying and pasting the template into the "Build your own template in the editor option"!

Modernizing .NET Applications using Azure Migrate and GitHub Copilot
https://build.microsoft.com/en-US/sessions/LAB343
Tuesday, May 20th, 5:15 PM – 6:30 PM Pacific Daylight Time
Arch, 800 Pike, Level 1, Yakima 1
Hands on Lab – In-Person Only (LAB343)

Modernizing .NET Applications using Azure Migrate and GitHub Copilot
https://build.microsoft.com/en-US/sessions/LAB343-R1
Thursday, May 22nd, 10:15 AM – 11:30 AM Pacific Daylight Time
Arch, 800 Pike, Level 2, Chelan 2
Hands on Lab – In-Person Only (LAB343-R1)

How to Choose the Right Hosting Plan – WordPress on App Service
Choosing the right hosting plan for your WordPress site on Azure App Service can feel overwhelming—but it doesn't have to be. Whether you're just exploring WordPress or launching a high-traffic production site, we've created four tailored hosting plans to help you get started quickly and confidently. Let's walk through how to pick the right plan for your needs.

Which Hosting Plan Should You Choose?

We've simplified the decision-making process with a clear recommendation based on your use case:

| Use Case | Recommended Plan |
| --- | --- |
| Hobby or exploratory site | Free or Basic |
| Small production website | Standard |
| High-load production website | Premium |

💡 Important: Only the Premium plan supports High Availability (HA). This is the only setting that cannot be changed after deployment. If HA is a requirement, start with Premium. Everything else—scaling, storage, CDN, networking, identity, and email—can be added or modified after deployment.

Hosting Plan Pricing Breakdown

You don't pay for the hosting plan itself. Instead, you pay for the underlying Azure resources like App Service, MySQL, CDN, Blob Storage, and more. Here's a breakdown of what each plan includes and the estimated monthly cost (based on US East region):

| Plan | Azure App Service | Azure DB for MySQL | Total Est. Cost/Month |
| --- | --- | --- | --- |
| Free | F1 Free Tier (60 CPU mins/day, 1 GB RAM) | B1ms Free Trial (1 vCore, 2 GB RAM, 32 GB) | Free (for eligible subscriptions) |
| Basic | B1 (1 vCore, 1.75 GB RAM) – $12.41 | B1s (1 vCore, 1 GB RAM) – $6.21 | $18.62 |
| Standard | P1V2 (1 vCore, 3.5 GB RAM) – $73.73 | B2s (2 vCores, 4 GB RAM) – $49.64 | $123.37 |
| Premium | P1V3 (2 vCores, 8 GB RAM) – $113.15 | D2ds_v4 (2 vCores, 16 GB RAM) – $124.83 | $237.98 |

📝 Note: Prices vary by region and subscription type. Reserved instances can offer up to 60% savings. Always check the Azure Pricing Calculator for the most accurate estimates.

Learn more: How to estimate pricing for WordPress on App Service | Microsoft Community Hub

What Can You Customize After Deployment?

Almost everything! Here's what you can scale or configure post-deployment:

- Compute & Database: Scale up/down App Service and MySQL
- Networking: Configure VNET integration
- Storage: Add Azure Blob Storage
- Performance: Add Azure CDN or Front Door
- Security & Identity: Enable Entra ID managed identity
- Email: Integrate Azure Communication Services Email

📚 Explore the official documentation for step-by-step guides: https://learn.microsoft.com/en-us/azure/app-service/overview-wordpress

Final Thoughts

Choosing the right plan depends on your goals:

- Just exploring? Start with Free or Basic.
- Running a small business site? Go with Standard.
- Need high availability and performance? Choose Premium from the start.

Still unsure? Start small—you can always scale up later (except for High Availability).

Support and Feedback

We're here to help! If you need any assistance, feel free to open a support request through the Microsoft Azure portal: New support request - Microsoft Azure. For more details about our offering, check out the announcement on the General Availability of WordPress on Azure App Service in the Microsoft Tech Community: Announcing the General Availability of WordPress on Azure App Service - Microsoft Tech Community. We value your feedback and ideas on how we can improve WordPress on Azure App Service. Share your thoughts and suggestions on our Community page (Post idea · Community (azure.com)) or report any issues on our GitHub repository (Issues · Azure/wordpress-linux-appservice (github.com)).
Alternatively, you can start a conversation with us by emailing wordpressonazure@microsoft.com.
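As a postscript to the customization options listed above, here is a minimal sketch of scaling the App Service plan and database from the Azure CLI after deployment. The resource group, plan, app, and server names are placeholders; the SKU values mirror the Premium row in the pricing table above.

```bash
# Assumed resource names - replace with the ones created by your WordPress deployment.
RG=my-wordpress-rg

# Scale the App Service plan up to Premium V3 (P1V3).
az appservice plan update --resource-group $RG --name my-wordpress-plan --sku P1V3

# Scale the Azure Database for MySQL flexible server to a larger compute size.
az mysql flexible-server update --resource-group $RG --name my-wordpress-mysql \
  --tier GeneralPurpose --sku-name Standard_D2ds_v4
```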
Add-ins and more – WordPress on App Service

The WordPress on App Service create flow offers a streamlined process to set up your site along with all the necessary Azure resources. Let's learn more about the add-ins that can enhance your WordPress experience and help you decide which ones to opt for. Deploying WordPress on App Service is a breeze thanks to the ARM template approach, which ties together Azure applications to ensure a seamless experience for developers. Whether you're a seasoned pro or new to the create flow, this guide will demystify these additional settings and help you make informed choices.

Add-ins tab

Managed Identity: Say goodbye to managing credentials! Managed identities provide secure access to Azure resources without storing sensitive credentials. Enabling this option creates a user-assigned managed identity, configured with App Service to access Azure DB for MySQL and storage accounts. You can also configure this manually if you prefer (see the CLI sketch at the end of this post). Learn more

Email with Azure Communication Services: Emails are crucial for WordPress functionality, from password resets to admin notifications. Since SMTP is blocked in Azure App Service, Azure Communication Services handles email delivery seamlessly. You can configure this manually if needed. Learn more

Azure CDN: Improve performance and security with Azure Content Delivery Network (CDN). It uses a distributed network of servers to store cached content close to end users, enhancing speed and reliability. Manual configuration is also an option. Learn more

Azure Front Door: Like Azure CDN, Azure Front Door accelerates your web application by reducing response times and caching content at edge servers. While CDN is simpler to use, Azure Front Door offers advanced features like WAF, and it will replace Azure CDN by 2027. You can choose an existing profile or configure it manually. Learn more

Azure Blob Storage: Store and access images, videos, and other files with Azure Blob Storage, reducing the load on your web server and improving performance. Learn more

Networking tab

Virtual Network: Configure IP address ranges, subnets, route tables, gateways, and security settings with Virtual Networks. You can select an existing VNET or create a new one, ensuring enough address space for the subnets.

Deployment tab

Staging Slot: Test your changes safely before deploying them to production with a staging site. This reduces the risk of disruptions and is easy to set up during deployment. Learn more

High Availability: Available with Premium hosting plans, High Availability ensures redundancy across availability zones, protecting your service against zone-level failures and ensuring business continuity. This cannot be enabled post-deployment. Learn more

Ready to Deploy?

The WordPress on App Service create experience simplifies the deployment of the Azure resources required for WordPress. For advanced options, consider using the ARM template. Create your WordPress site today!

Support and Feedback

We’re here to help! If you need any assistance, feel free to open a support request through the Microsoft Azure portal: New support request - Microsoft Azure

For more details about our offering, check out the announcement of the General Availability of WordPress on Azure App Service in the Microsoft Tech Community: Announcing the General Availability of WordPress on Azure App Service - Microsoft Tech Community.

We value your feedback and ideas on how we can improve WordPress on Azure App Service.
Share your thoughts and suggestions on our Community page (Post idea · Community (azure.com)) or report any issues on our GitHub repository (Issues · Azure/wordpress-linux-appservice (github.com)). Alternatively, you can start a conversation with us by emailing wordpressonazure@microsoft.com.
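For readers who prefer the manual route for the Managed Identity add-in described above, here is a minimal sketch using the Azure CLI. The resource group, identity, app, and storage account names are placeholders, and the role assignment shown is only one example of the permissions the identity might need.

```bash
# Assumed resource names - replace with your own.
RG=my-wordpress-rg

# Create a user-assigned managed identity.
az identity create --resource-group $RG --name wp-identity

# Attach the identity to the WordPress App Service app.
IDENTITY_ID=$(az identity show --resource-group $RG --name wp-identity --query id -o tsv)
az webapp identity assign --resource-group $RG --name my-wordpress-app --identities $IDENTITY_ID

# Example: let the identity read/write blobs in the storage account used for media uploads.
PRINCIPAL_ID=$(az identity show --resource-group $RG --name wp-identity --query principalId -o tsv)
STORAGE_ID=$(az storage account show --resource-group $RG --name mywpstorage --query id -o tsv)
az role assignment create --assignee $PRINCIPAL_ID --role "Storage Blob Data Contributor" --scope $STORAGE_ID
```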
Outlook IMAP in PHP

I'm trying to retrieve inbox mail using IMAP, but I'm encountering the following error. I've attached the code and error details below. Please advise.

```php
$imapServer = "{outlook.office365.com:993/imap/ssl/novalidate-cert}INBOX";
$username = '*************';
$password = '*************'; // Use an App Password for security

$imapStream = imap_open($imapServer, $username, $password, OP_READONLY, 0, ['DISABLE_AUTHENTICATOR' => 'GSSAPI']);
```

I'm getting this error:

```
A PHP Error was encountered
Severity: Warning
Message: imap_open(): Couldn't open stream {outlook.office365.com:993/imap/ssl/novalidate-cert}INBOX
Filename: controllers/Cron_Controller.php
Line Number: 39

Backtrace:
File: C:\xampp73\htdocs\WPS2.0_2025_03_04_1847\application\controllers\Cron_Controller.php
Line: 39
Function: imap_open
File: C:\xampp73\htdocs\WPS2.0_2025_03_04_1847\index.php
Line: 348
Function: require_once

A PHP Error was encountered
Severity: Notice
Message: Unknown: LOGIN failed. (errflg=1)
Filename: Unknown
Line Number: 0
Backtrace
```

Thanks & Regards
Kumaresan
How can I hide the Server information in the response headers in PHP?

In certain scenarios, you might want to remove the server information from your response headers. In Azure App Service for PHP, the platform uses Nginx, and we can modify its configuration files if necessary.

First, locate the Nginx configuration file on the Kudu site at the path /etc/nginx/nginx.conf. Then copy it under /home so the change is retained across restarts:

cp /etc/nginx/nginx.conf /home/site/nginx.conf

Open the copied configuration file and uncomment the server_tokens off; directive in the http section. Then configure the startup command in the Azure Portal under Configuration -> General Settings as below:

cp /home/site/nginx.conf /etc/nginx/nginx.conf && service nginx reload

Checking again, we can see that the Nginx version is hidden. But what if we want to hide all the server information? To do this, follow these steps:

(1) Copy the Nginx configuration file to the /home directory as mentioned earlier. This is necessary because any files outside of /home are not preserved after a restart. Use the following command:

cp /etc/nginx/nginx.conf /home/site/nginx.conf

(2) Open the Nginx configuration file located under /home, add the following line in the http section, and save the file:

more_clear_headers 'server';

(3) Update the custom startup command in the Azure Portal under Configuration -> General Settings as follows (the more_clear_headers directive requires the headers-more module shipped in the nginx-extras package, which is why it is installed here):

apt update && apt install -y nginx-extras && cp /home/site/nginx.conf /etc/nginx/nginx.conf && service nginx reload

(4) Once done, the response headers should no longer display the Server information.

Reference: How to set Nginx headers -
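If you would rather script the startup command than set it in the portal, here is a minimal sketch using the Azure CLI; the resource group and app name are placeholders.

```bash
# Assumed resource group and app name - replace with your own.
az webapp config set \
  --resource-group my-php-rg \
  --name my-php-app \
  --startup-file "apt update && apt install -y nginx-extras && cp /home/site/nginx.conf /etc/nginx/nginx.conf && service nginx reload"
```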
Getting Started with Linux WebJobs on App Service – PHP

WebJobs Intro

WebJobs is a feature of Azure App Service that enables you to run a program or script in the same instance as a web app. All App Service plans support WebJobs, and there's no extra cost to use them. This sample uses a Triggered (scheduled) WebJob to output the system time once every 15 minutes.

Create Web App

Before creating our WebJob, we need to create an App Service web app. If you already have an App Service web app, skip to the next step. Otherwise, in the portal, select App Services > Create > Web App. Follow the create instructions, selecting the PHP 8.4 runtime stack, and create your App Service web app. The stack must be PHP, since we plan on writing our WebJob using PHP and a bash startup script. For this example, we’ll use PHP 8.4. Next, we’ll add a basic WebJob to our app.

Create WebJob

Before we do anything else, let’s write our WebJob. This will execute every time our WebJob is triggered. WebJobs on App Service can run on a Triggered (scheduled) basis or Continuously; this example uses a Triggered WebJob. For this example, we’ll need to compress two files into a zip archive that we’ll upload to our App Service web app: a startup script and a PHP file. The full code for this sample is available at: Azure-Samples/App-Service-PHP-WebJobs-QuickStart

Digging into the code, our PHP file, webjob.php, is relatively simple. It just outputs the current time to the console.

```php
<?php
// Get the current time
$current_time = date("Y-m-d H:i:s");

// Display the current time
echo "The current time is: " . $current_time;
?>
```

The run.sh script is located at the root of our git repo. This script runs when our WebJob is triggered, and its job is to kick off our PHP file.

```bash
#!/bin/bash
php -f webjob.php
```

Now we have everything we need to assemble our zip file. Again, there are multiple ways to do this, but for this demo we’ll use the zip CLI utility to create a zip file called webjob.zip. Run the following command to create the zip:

```bash
zip webjob.zip run.sh webjob.php
```

Now it’s time to create our WebJob and upload our zip file.

Create WebJob in Portal

Start by entering your web app's overview page. Then, under Settings, select WebJobs. Here we can create and manage the App Service WebJobs for this web app. Click Add to create a new WebJob. Now we can name our WebJob, upload the zip from the previous step, and choose the execution type. Under Type, select Triggered. Under CRON Expression, enter the following to trigger our WebJob once every 15 minutes:

0 */15 * * * *

Note: These are NCRONTAB expressions, not standard Linux CRONTAB expressions. An important distinction. Now click Create WebJob to finish making the new WebJob. Let’s test it out now.

Run Manually or Scheduled

To manually test our WebJob, we can click the play button under Run. A status of Completed means that the WebJob is finished.

Confirm Results in Logs

We can check the logs to confirm that the console print statements from our PHP file were output to the console. There may be some warnings at startup, but these are typically safe to ignore for PHP WebJobs. While this is a basic example, WebJobs are a powerful and easy-to-use feature with incredible utility for running scheduled (or continuous) actions in conjunction with your App Service web apps at no additional cost. Learn more about WebJobs on App Service
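For teams that want to automate the upload instead of clicking through the portal, the Kudu WebJobs REST API can deploy and run the same zip. This is a minimal sketch under assumptions: the app name, WebJob name, and publishing credentials below are placeholders. The schedule can alternatively be packaged inside the zip as a settings.job file containing the NCRONTAB expression rather than entered in the portal.

```bash
# Assumed app name, WebJob name, and publishing credentials - replace with your own.
APP=my-php-app
JOB=print-time

# Upload webjob.zip as a triggered WebJob via the Kudu REST API.
curl -X PUT \
  -u '$my-php-app:<publishing-password>' \
  -H "Content-Type: application/zip" \
  --data-binary @webjob.zip \
  "https://$APP.scm.azurewebsites.net/api/triggeredwebjobs/$JOB"

# Kick off a manual run, equivalent to pressing the play button in the portal.
curl -X POST \
  -u '$my-php-app:<publishing-password>' \
  "https://$APP.scm.azurewebsites.net/api/triggeredwebjobs/$JOB/run"
```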
What's New in Azure App Service at Ignite 2024

Learn about the GA of sidecar extensibility on Linux and see team members demonstrating the latest tools for AI-assisted web application migration and modernization, as well as the latest updates to Java JBoss EAP on Azure App Service. Team members will also demonstrate integrating the Phi-3 small language model with a web application via the new sidecar extensibility, using existing App Service hardware!

Also new for this year’s Ignite, many topics that attendees see in App Service related sessions are also available for hands-on learning across multiple hands-on labs (HoLs). Don’t just watch team members demonstrating concepts on-stage; drop by one of the many HoL sessions and test drive the functionality yourself! Azure App Service team members will also be in attendance at the Expert Meetup area on the third floor in the Hub – drop by and chat if you are attending in person! Additional demos, presentations and hands-on labs covering App Service are listed at the end of this blog post for easy reference.

Sidecar Extensibility GA for Azure App Service on Linux

Sidecar extensibility for Azure App Service on Linux is now GA! Linux applications deployed from source code as well as applications deployed using custom containers can take advantage of sidecar extensibility. Sidecars enable developers to attach additional capabilities like third-party application monitoring providers, in-memory caches, or even local SLM (small language model) support to their applications without having to bake that functionality directly into their applications. Developers can configure up to four sidecar containers per application, with each sidecar being associated with its own container registry and (optional) startup command.

Examples of configuring an OpenTelemetry collector sidecar are available in the documentation for both container-based applications and source-code based applications. There are also several recent blog posts demonstrating additional sidecar scenarios. One example walks through using a Redis cache sidecar as an in-memory cache to accelerate data retrieval in a web application (sample code here). Another example demonstrates adding a sidecar containing the Phi-3 SLM to a custom container web application (sample code here). Once the web app is running with the SLM sidecar, Phi-3 processes text prompts directly on the web server without the need to call remote LLMs or host models on scarce GPU hardware.

Similar examples for source-deployed applications are available in the Ignite 2024 hands-on lab demonstrating sidecars. Exercise three walks through attaching an OTel sidecar to a source-code based application, and exercise four shows how to attach a Phi-3 sidecar to a source-code based application.

Looking ahead to the future, App Service will be adding “curated sidecars” to the platform to make it easier for developers to integrate common sidecar scenarios. Development is already underway to include options for popular third-party application monitoring providers, Redis cache support, as well as a curated sidecar encapsulating the Phi-3 SLM example mentioned earlier. Stay tuned for these enhancements in the future!

If you are attending Microsoft Ignite 2024 in person, drop by the theater session “Modernize your apps with AI without completely rewriting your code” (session code: THR614), which demonstrates using sidecar extensibility to add OpenTelemetry monitoring as well as Phi-3 SLM support to applications on App Service for Linux!

.NET 9 GA, JBoss EAP and More Language Updates!
With the recent GA of .NET 9 last week, developers can deploy applications running .NET 9 GA on both Windows and Linux variants of App Service! Visual Studio, Visual Studio Code, Azure DevOps and GitHub Actions all support building and deploying .NET 9 applications onto App Service. Start a new project using .NET 9 or upgrade your existing .NET applications in place and take advantage of .NET 9!

For JBoss EAP on App Service for Linux, customers will soon be able to bring their existing JBoss licenses with them when moving JBoss EAP workloads onto App Service for Linux. This change will make it easier and more cost effective than ever for JBoss EAP customers to migrate existing workloads to App Service, including JBoss versions 7.3, 7.4 and 8.0! As a quick reminder, last month App Service also announced reduced pricing for JBoss EAP licenses (for net-new workloads) as well as expanded hardware support (both memory-optimized and Free tier are now supported for JBoss EAP applications).

App Service is planning to release both Node 22 and Python 3.13 onto App Service for Linux with expected availability in December! Python 3.13 is the latest stable Python release, which means developers will be able to leverage this version with confidence given long-term support runs into 2029. Node 22 is the latest active LTS release of Node and is a great version for developers to adopt, with its long-term support lasting into 2026.

A special note for Linux Python developers: App Service now supports “auto-instrumentation” in public preview for Python versions 3.8 through 3.12. This makes it trivial for source-code based Python applications to enable Application Insights monitoring by simply turning the feature “on” in the Azure Portal. If you ever thought to yourself that it can be a hassle setting up application monitoring and hence find yourself procrastinating, this is the monitoring feature for you!

Looking ahead just a few short weeks until December, App Service also plans to release PHP 8.4 for developers on App Service for Linux. This will enable PHP developers to leverage the latest fully supported PHP release, with an expected support cycle stretching into 2028.

For WordPress customers, Azure App Service has added support for managed identities when connecting to MySQL databases as well as storage accounts. The platform has also transitioned WordPress from Alpine Linux to Debian, aligning with App Service for Linux to offer a more secure platform. Looking ahead, App Service is excited to introduce some new features by the end of the year, including an App Service plugin for WordPress! This plugin will enable users to manage WordPress integration with Azure Communication Services email, set up Single Sign-On using Microsoft Entra ID, and diagnose performance bottlenecks. Stay tuned for upcoming WordPress announcements!

End-to-End TLS & Min TLS Cipher Suite are now GA

End-to-end TLS encryption for the public multi-tenant App Service is now GA! When E2E TLS is configured, traffic between the App Service frontends and individual workers is secured using a platform-supplied TLS certificate. This additional level of security is available for both Windows and Linux sites using the Standard SKU and above, as well as Isolatedv2 SKUs. You can enable this feature easily in the Azure Portal by going to your resource, clicking the “Configuration” blade and turning the feature “On” as shown below:

Configuration of the minimum TLS cipher suite for a web application is also GA!
With this feature, developers can choose from a pre-determined list of cipher suites. When a minimum cipher suite is selected, the App Service frontends will reject any incoming requests that use a cipher suite weaker than the selected minimum. This feature is supported for both Windows and Linux applications using the Basic SKU and higher, as well as Isolatedv2 SKUs. You configure a minimum TLS cipher suite in the Azure Portal by going to the “Configuration” blade for a website and selecting “Change” for the Minimum Inbound TLS Cipher Suite setting. In the resulting blade (shown below) you can select the minimum cipher suite for your application:

To learn more about these and other TLS features on App Service, please refer to the App Service TLS overview.

AI-Powered Conversational Diagnostics

Building on the Conversational Diagnostics AI-powered tool and the guided decision-making path introduced in Diagnostic Workflows, the team has created a new AI-driven, natural language-based diagnostics solution for App Service on Linux. The new solution brings together previous functionality to create an experience that comprehends user intent, selects the appropriate Diagnostic Workflow, and keeps users engaged by providing real-time updates and actionable insights through chat. Conversational Diagnostics also provides the grounding data that the generative AI back-end uses to produce recommendations, thus empowering users to check the conclusions. The integration of Conversational Diagnostics and Diagnostic Workflows marks a significant advancement in the platform’s diagnostic capabilities. Stay tuned for more updates and experience the transformative power of generative AI-driven diagnostics firsthand!

App Service Migration and Modernization

The team recently introduced new architectural guidance around evolving and modernizing web applications with the Modern Web Application pattern for .NET and Java! This guidance builds on the Reliable Web App pattern for .NET and Java as well as the Azure Migrate application and code assessment tool. With the newly released Modern Web Application guidance, there is a well-documented path for migrating web applications from on-premises/VM deployments using the application and code assessment tool, iterating and evolving web applications with best practices using guidance from the Reliable Web App pattern, and subsequently going deeper on modernization and re-factoring following guidance from the Modern Web Application pattern. Best of all, customers can choose to “enter” this journey at any point and progress as far down the modernization path as needed based on their unique business and technical requirements!

As a quick recap, the code assessment tool is a guided experience inside of Visual Studio, with GitHub Copilot providing actionable guidance and feedback on recommended changes needed to migrate applications to a variety of Azure services, including Azure App Service. Combined with AI-powered Conversational Diagnostics (mentioned earlier), developers now have AI-guided journeys supporting them from migration all the way through deployment and runtime operation on App Service!

Networking and ASE Updates

As of November 1, 2024, we are excited to announce that App Service multi-plan subnet join is generally available across all public Azure regions! Multi-plan subnet join eases network management by reducing subnet sprawl, enabling developers to connect multiple app service plans to a single subnet.
There is no limit to the number of app service plans that can connect to a single subnet. However, developers should keep in mind the number of available IPs, since tasks such as changing the SKU for an app service plan will temporarily double the number of IP addresses used in a connected subnet. For more information as well as examples on using multi-plan subnet join, see the documentation!

App Service also recently announced GA of memory-optimized options for Isolatedv2 on App Service Environment v3. The new memory-optimized options range from two virtual cores with 16 GB RAM in I1mv2 (compared to two virtual cores, 8 GB RAM in I1v2) all the way up to 32 virtual cores with 256 GB RAM in I5mv2. The new plans are available in most regions; check back regularly to see if your preferred region is supported. For more details on the technical specifications of these plans, as well as information on the complete range of tiers and plans for Microsoft Azure App Service, visit our pricing page.

Using services such as Application Gateway and Azure Front Door with App Service as entry points for client traffic is a common scenario that many of our customers implement. However, when using these services together, there are integration challenges around the default cookie domain for HTTP cookies, including the ARRAffinity cookie used for session affinity. App Service collaborated with the Application Gateway team to introduce a simple solution that addresses the session affinity problem. App Service introduced a new session affinity proxy configuration setting in October which tells App Service to always set the hostname for outbound cookies based on the upstream hostname seen by Application Gateway or Azure Front Door. This simplifies integration with a single-click experience for App Service developers who front their websites with one of Azure’s reverse proxies, and it solves the challenge of round-tripping the ARRAffinity cookie when upstream proxies are involved.

Looking ahead to early 2025, App Service will shortly be expanding support for IPv6 to include both inbound and outbound connections (currently only inbound connections are supported). The current public preview includes dual-stack support for both IPv4 and IPv6, allowing for a smooth transition and compatibility with existing systems. Read more about the latest status of the IPv6 public preview on App Service here!

Lastly, the new application naming and hostname convention that was rolled out a few months earlier is now GA for App Service. The platform has also extended this new naming convention to Azure Functions, where it is now available in public preview for newly created functions. To learn more about the new naming convention and the protection it provides against subdomain takeover, take a look at the introductory blog post about the unique default hostname feature.

Upcoming Availability Zone Improvements

New Availability Zone features are currently rolling out that will make zone-redundant App Service deployments more cost efficient and simpler to manage in early 2025! The platform will be changing the minimum requirement for enabling Availability Zones to two instances instead of three, while still maintaining a 99.99% SLA. Many existing app service plans with two or more instances will also automatically become capable of supporting Availability Zones without requiring additional setup. Additionally, the zone redundant setting will be mutable throughout the life of an app service plan.
This upcoming improvement will allow customers on Premium V2, Premium V3, or Isolated V2 plans to toggle zone redundancy on or off as needed. Customers will also gain enhanced visibility into Availability Zone information, including physical zone placement and counts. As a sneak peek into the future, the screenshot below shows what the new experience will look like in the Azure Portal:

Stay tuned for Availability Zone updates coming to App Service in early 2025!

Next Steps

Developers can learn more about Azure App Service at Getting Started with Azure App Service. Stay up to date on new features and innovations on Azure App Service via Azure Updates as well as the Azure App Service (@AzAppService) X feed. There is always a steady stream of great deep-dive technical articles about App Service, as well as the breadth of developer-focused Azure services, over on the Apps on Azure blog.

Azure App Service (virtually!) attended the recently completed .NET Conf 2024 in November. App Service functionality was featured in a .NET 9.0 app using Azure SQL’s recently released native vector data type support, which enables developers to perform hybrid text searches on Azure SQL data using vectors generated via Azure OpenAI embeddings!

And lastly, take a look at the Azure App Service Community Standups hosted on the Microsoft Azure Developers YouTube channel. The Azure App Service Community Standup series regularly features walkthroughs of new and upcoming features from folks that work directly on the product!

Ignite 2024 Session Reference

(Note: some sessions/labs have more than one timeslot spanning multiple days.)
(Note: all times below are listed in Chicago time - Central Standard Time.)

Modernize your apps with AI without completely rewriting your code [Note: this session includes a demonstration of the Phi-3 sidecar scenario]
Wednesday, November 20th 1:00 PM - 1:30 PM Central Standard Time
Theater Session – In-Person Only (THR614)
McCormick Place West Building – Level 3, Hub, Theater C

Unlock AI: Assess your app and data estate for AI-powered innovation
Wednesday, November 20th 1:15 PM – 2:00 PM Central Time
McCormick Place West Building – Level 1, Room W183c
Breakout and Recorded Session (BRK137)

Modernize and scale enterprise Java applications on Azure
Thursday, November 21st 8:30 AM - 9:15 AM Central Time
McCormick Place West Building – Level 1, Room W183c
Breakout and Recorded Session (BRK147)

Assess apps with Azure Migrate and replatform to Azure App Service
Tuesday, November 19th 1:15 PM - 2:30 PM Central Time
McCormick Place West Building – Level 4, Room W475
Hands on Lab – In-Person Only (LAB408)

Integrate GenAI capabilities into your .NET apps with minimal code changes [Note: Lab participants will be able to try out the Phi-3 sidecar scenario in this lab.]
Wednesday, November 20th 8:30 AM - 9:45 AM Central Time
McCormick Place West Building – Level 4, Room W475
Hands on Lab – In-Person Only (LAB411)

Assess apps with Azure Migrate and replatform to Azure App Service
Wednesday, November 20th 6:30 PM - 7:45 PM Central Time
McCormick Place West Building – Level 4, Room W470b
Hands on Lab – In-Person Only (LAB408-R1)

Integrate GenAI capabilities into your .NET apps with minimal code changes [Note: Lab participants will be able to try out the Phi-3 sidecar scenario in this lab.]
Thursday, November 21st 10:15 AM - 11:30 AM Central Time
McCormick Place West Building – Level 1, Room W180
Hands on Lab – In-Person Only (LAB411-R1)

Assess apps with Azure Migrate and replatform to Azure App Service
Friday, November 22nd 9:00 AM – 10:15 AM Central Time
McCormick Place West Building – Level 4, Room W474
Hands on Lab – In-Person Only (LAB408-R2)