Disciplined Guardrail-Based Development in Enterprise Applications with GitHub Copilot
What Is Disciplined Guardrail-Based Development?

In AI-assisted software development, approaches like Vibe Coding—which prioritize momentum and intuition—often fail to ensure code quality and maintainability. To address this, Disciplined Guardrail-Based Development introduces structured rules ("guardrails") that guide AI systems during coding and maintenance tasks, ensuring consistent quality and reliability.

To get AI (LLMs) to generate appropriate code, developers must provide clear and specific instructions. Two key elements are essential:

- What to build – clarifying requirements and breaking down tasks
- How to build it – defining the application architecture

How these two elements are handled depends on the development methodology or process being used; examples are given below.

How to Set Up Disciplined Guardrails in GitHub Copilot

To implement disciplined guardrail-based development with GitHub Copilot, two key configuration features are used:

1. Custom Instructions (.github/copilot-instructions.md): This file allows you to define persistent instructions that GitHub Copilot will always refer to when generating code.
- Purpose: Establish coding standards, architectural rules, naming conventions, and other quality guidelines.
- Best Practice: Instead of placing all instructions in a single file, split them into multiple modular files and reference them accordingly. This improves maintainability and clarity.
- Example Use: You might define rules like using camelCase for variables, enforcing error boundaries in React, or requiring TypeScript for all new code.

https://docs.github.com/en/copilot/how-tos/configure-custom-instructions/add-repository-instructions

2. Chat Modes (.github/chatmodes/*.chatmode.md): These files define specialized chat modes tailored to specific tasks or workflows.
- Purpose: Customize Copilot’s behavior for different development contexts (e.g., debugging, writing tests, refactoring).
- Structure: Each .chatmode.md file includes metadata and instructions that guide Copilot’s responses in that mode.
- Example Use: A debug.chatmode.md might instruct Copilot to focus on identifying and resolving runtime errors, while a test.chatmode.md could prioritize generating unit tests with specific frameworks.

https://code.visualstudio.com/docs/copilot/customization/custom-chat-modes

The files to be created and their relationships are as follows. The next sections introduce how to create each of them.

#1: Custom Instructions

With custom instructions, you can define instructions that are always provided to GitHub Copilot. The prepared files are always referenced during chat sessions and passed to the LLM (this can also be confirmed from the chat history). An important point is to split the content into several files and include links to those files within the .github/copilot-instructions.md file, because a single file can become too long if everything is written in it.

There are mainly two types of content that should be described in custom instructions:

A: Development Process (≒ outcomes + creation method)
- What documents or code will be created: requirements specification, design documents, task breakdown tables, implementation code, etc.
- In what order and by whom they will be created: for example, proceed in the order of requirements definition → design → task breakdown → coding.

B: Application Architecture
- How will the outcomes defined in A be created? What technology stack and component structure will be used?

A concrete example of copilot-instructions.md is shown below.

# Development Rules

## Architecture
- When performing design and coding tasks, always refer to the following architecture documents and strictly follow them as rules.
### Product Overview
- Document the product overview in `.github/architecture/product.md`

### Technology Stack
- Document the technologies used in `.github/architecture/techstack.md`

### Coding Standards
- Document coding standards in `.github/architecture/codingrule.md`

### Project Structure
- Document the project directory structure in `.github/architecture/structure.md`

### Glossary (Japanese-English)
- Document the list of terms used in the project in `.github/architecture/dictionary.md`

## Development Flow
- Follow a disciplined development flow and execute the following four stages in order (proceed to the next stage only after completing the current one):
  1. Requirement Definition
  2. Design
  3. Task Breakdown
  4. Coding

### 1. Requirement Definition
- Document requirements in `docs/[subsystem_name]/[business_name]/requirement.md`
- Use `requirement.chatmode.md` to define requirements
- Focus on clarifying objectives, understanding the current situation, and setting success criteria
- Once requirements are defined, obtain user confirmation before proceeding to the next stage

### 2. Design
- Document design in `docs/[subsystem_name]/[business_name]/design.md`
- Use `design.chatmode.md` to define the design
- Define UI, module structure, and interface design
- Once the design is complete, obtain user confirmation before proceeding to the next stage

### 3. Task Breakdown
- Document tasks in `docs/[subsystem_name]/[business_name]/tasks.md`
- Use `tasks.chatmode.md` to define tasks
- Break down tasks into executable units and set priorities
- Once task breakdown is complete, obtain user confirmation before proceeding to the next stage

### 4. Coding
- Implement code under `src/[subsystem_name]/[business_name]/`
- Perform coding task by task
- Update progress in `docs/[subsystem_name]/[business_name]/tasks.md`
- Report to the user upon completion of each task

Note: The only file that is always sent to the LLM is `copilot-instructions.md`.
Documents linked from there (such as `product.md` or `techstack.md`) are not guaranteed to be read by the LLM. That said, a reasonably capable LLM will usually review these files before proceeding with the work. If the LLM does not properly reference each file, you may explicitly add these architecture documents to the context. Another approach is to instruct the LLM to review these files in the **chat mode settings**, which will be described later.

There are various “schools of thought” regarding application architecture, and it is still an ongoing challenge to determine exactly what should be defined and what documents should be created. The choice of architecture depends on factors such as the business context, development scale, and team structure, so it is difficult to prescribe a one-size-fits-all approach. That said, as a general guideline, it is desirable to summarize the following:

- Product Overview: Overview of the product, service, or business, including its overall characteristics
- Technology Stack: What technologies will be used to develop the application?
- Project Structure: How will folders and directories be organized during development?
- Module Structure: How will the application be divided into modules?
- Coding Rules: Rules for handling exceptions, naming conventions, and other coding practices

Writing all of this from scratch can be challenging. A practical approach is to create template information with the help of Copilot and then refine it. Specifically, you can:

- Use tools like M365 Copilot Researcher to create content based on general principles
- Analyze a prototype application and have the architecture information summarized (using Ask mode or Edit mode, feed the solution files to a capable LLM for analysis)

However, in most cases, the output cannot be used as-is.
- The structure may not be analyzed correctly (hallucinations may occur)
- Project-specific practices and rules may not be captured

Use the generated content as a starting point, and then refine it to create architecture documentation tailored to your own project.

When creating architecture documents for enterprise-scale application development, a useful approach is to distinguish between the foundational parts and the individual application parts. Disciplined guardrail-based development is particularly effective when building multiple applications in a “cookie-cutter” style on top of a common foundation. A clear example of this is Data-Oriented Architecture (DOA). In DOA, individual business applications are built on top of a shared database that serves as the overall common foundation. In this case, the foundational parts (the database layer) should not be modified arbitrarily by individual developers. Instead, focus on how to standardize the development of the individual application parts (the blue-framed sections) while ensuring consistency. Architecture documentation should be organized with this distinction in mind, emphasizing the uniformity of application-level development built upon the stable foundation.

#2: Chat Modes

By default, GitHub Copilot provides three chat modes: Ask, Edit, and Agent. However, by creating files under .github/chatmodes/*.chatmode.md, you can customize the Agent mode to create chat modes tailored for specific tasks. Specifically, you can configure the following three aspects.
Functionally, this allows you to perform a specific task without having to manually change the model or tools, or write detailed instructions each time:

- model: Specify the default LLM to use (Note: The user can still manually switch to another LLM if desired)
- tools: Restrict which tools can be used (Note: The user can still manually select other tools if desired)
- custom instructions: Provide custom instructions specific to this chat mode

A concrete example of .github/chatmodes/*.chatmode.md is shown below.

---
description: This mode is used for requirement definition tasks.
model: Claude Sonnet 4
tools: ['changes', 'codebase', 'editFiles', 'fetch', 'findTestFiles', 'githubRepo', 'new', 'openSimpleBrowser', 'runCommands', 'search', 'searchResults', 'terminalLastCommand', 'terminalSelection', 'usages', 'vscodeAPI', 'mssql_connect', 'mssql_disconnect', 'mssql_list_servers', 'mssql_show_schema']
---

# Requirement Definition Mode

In this mode, requirement definition tasks are performed. Specifically, the project requirements are clarified, and necessary functions and specifications are defined. Based on instructions or interviews with the user, document the requirements according to the format below. If any specifications are ambiguous or unclear, Copilot should ask the user questions to clarify them.
## File Storage Location
Save the requirement definition file in the following location:
- Save as `requirement.md` under the directory `docs/[subsystem_name]/[business_name]/`

## Requirement Definition Format
While interviewing the user, document the following items in the Markdown file:
- **Subsystem Name**: The name of the subsystem to which this business belongs
- **Business Name**: The name of the business
- **Overview**: A summary of the business
- **Use Cases**: Clarify who uses this business, when/under what circumstances, and for what purpose, using the following structure:
  - **Who (Persona)**: User or system roles
  - **When/Under What Circumstances (Scenario)**: Timing when the business is executed
  - **Purpose (Goal)**: Objectives or expected outcomes of the business
- **Importance**: The importance of the business (e.g., High, Medium, Low)
- **Acceptance Criteria**: Conditions that must be satisfied for the requirement to be considered met
- **Status**: Current state of the requirement (e.g., In Progress, Completed)

## After Completion
- Once requirement definition is complete, obtain user confirmation and proceed to the next stage (Design).

Tips for Creating Chat Modes

Here are some tips for creating custom chat modes:

- Align with the development process: Create chat modes based on the workflow and the deliverables.
- Instruct the LLM to ask the user when unsure: Direct the LLM to request clarification from the user if any information is missing.
- Clarify what deliverables to create and where to save them: Make it explicit which outputs are expected and their storage locations.

The second point is particularly important. Many LLMs tend to respond to user prompts in an overly agreeable manner (known as sycophancy). As a result, they may fill in unspecified requirements or perform tasks that were not requested, often with the intention of being helpful.
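This tendency can be countered with a short, explicit directive in the chat mode body. The wording below is an illustrative sketch (not taken from the example above), placed near the top of a `.chatmode.md` file so it stays prominent in the context passed to the LLM:

```markdown
## Interaction Rules

- If any requirement is ambiguous, missing, or contradictory, stop and ask the
  user a clarifying question before producing any deliverable.
- Never invent unspecified requirements; list them as open questions instead.
- Perform only the tasks explicitly requested in this mode, and report anything
  out of scope back to the user.
```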
The key difference between Ask/Edit modes and Agent mode is that Agent mode allows the LLM to proactively ask questions and engage in dialogue with the user. However, unless the user explicitly includes a prompt such as “ask if you don’t know,” the AI rarely initiates questions on its own. By creating a custom chat mode and instructing the LLM to “ask the user when unsure,” you can fully leverage the benefits of Agent mode.

About Tools

You can easily check tool names from the list of available tools in the command palette. Alternatively, as shown in the diagram below, it can be convenient to open the custom chat mode file and specify the tool configuration. You can specify not only MCP server functionality but also built-in tools and Copilot Extensions.

Example of Actual Operation

An example interaction when using this chat mode is as follows:

- The LLM behaves according to the custom instructions defined in the chat mode.
- When you answer questions from GHC, the LLM uses that information to reason and proceed with the task.
- However, the output is not guaranteed to be correct (hallucinations may occur) → a human should review the output and make any necessary corrections before committing.

The basic approach to disciplined guardrail-based development has been covered above. In actual business application development, it is also helpful to understand the following two points:

- Referencing the database schema
- Integrated management of design documents and implementation code (important)

Reading the Database Schema

In business application development, requirements definition and functional design are often based on the schema information of entities. There are two main ways to allow the system to read schema information:

1. Dynamically read the schema from a development/test DB server using MCP or similar tools.
2. Include a file containing schema information within the project and read from it.
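The second option can be as lightweight as committing an ordinary DDL script to the repository. A minimal sketch is shown below; the file path, table names, and columns are hypothetical, not from a real project:

```sql
-- db/schema/orders.sql: checked into the repo so the LLM can read it directly
CREATE TABLE dbo.Customers (
    CustomerId  INT IDENTITY(1,1) PRIMARY KEY,
    Name        NVARCHAR(100) NOT NULL
);

CREATE TABLE dbo.Orders (
    OrderId     INT IDENTITY(1,1) PRIMARY KEY,
    CustomerId  INT           NOT NULL,
    OrderDate   DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
    TotalAmount DECIMAL(18,2) NOT NULL,
    CONSTRAINT FK_Orders_Customers
        FOREIGN KEY (CustomerId) REFERENCES dbo.Customers (CustomerId)
);
```

Referencing a file like this from the architecture documents and chat mode instructions lets the LLM resolve entity names, column types, and relationships without a live database connection.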
For the first option, a development/test database can be prepared, and schema information can be read via an MCP server or Copilot Extensions. For SQL Server or Azure SQL Database, an MCP server is available, but its setup can be cumbersome, so using Copilot Extensions is often easier. Either way, this dynamic approach is often seen online, but it is not recommended, for the following reasons:

- Setting up an MCP server or Copilot Extensions can be cumbersome (installation, connection string management, etc.)
- It is time-consuming (the LLM needs schema information → reads the schema → writes code based on it)

Connecting to a DB server via MCP or similar tools is useful for scenarios such as “querying a database in natural language” for non-engineers performing data analysis. However, if the goal is simply to obtain the schema information of entities needed for business application development, the method described below is much simpler.

Storing Schema Information Within the Project

Place a file containing the schema information inside the project, and write custom instructions so that development work refers to this file. Any of the following formats is recommended:

- DDL (full CREATE DATABASE scripts)
- O/R mapper files (e.g., Entity Framework context files)
- Text files documenting schema information, etc.

DDL files are difficult for humans to read, but AI (LLMs) can read and accurately understand them with ease. In .NET + SQL development, it is recommended to include both the DDL and EF O/R mapper files. Additionally, if you include links to these files in your architecture documents and chat mode instructions, the LLM can generate code while understanding the schema with high accuracy.

Integrated Management of Design Documents and Implementation Code

Disciplined guardrail-based development with LLMs has made it practical to synchronize and manage design documents and implementation code together—something that was traditionally very difficult.
In long-standing systems, it is common for old design documents to become largely useless. During maintenance, code changes are often prioritized; as a result, updating and maintaining design documents tends to be neglected, leading to a significant divergence between design documents and the actual code. For these reasons, the following have been considered best practices (though they are often not followed in reality):

- Limit requirements and external design documents to the minimum necessary.
- Do not create internal design documents; instead, document within the code itself.
- Always update design documents before making changes to the implementation code.

When using LLMs, guardrail-based development makes it easier to enforce a “write the documentation first” workflow. Following the flow of defining specifications, updating the documents, and then writing code also helps the LLM generate appropriate code more reliably. Even if code is written first, LLM-assisted code analysis can significantly reduce the effort required to update the documentation afterward. However, the following points should be noted when doing this:

- Create and manage design documents as text files, not Word, Excel, or PowerPoint.
- Use text-based technologies like Mermaid for diagrams.
- Clearly define how design documents correspond to the code.

The last point is especially important. It is crucial to align the structure of requirements and design documents with the structure of the implementation code. For example:

- Place design documents directly alongside the implementation code.
- Align folder structures, e.g., /docs and /src.
- Explicitly include information about grouping methods and folder mapping in the custom instructions.

Conclusion of Disciplined Guardrail-Based Development with GHC

Formalizing and Applying Guardrails
- Define the development flow and architecture documents in .github/copilot-instructions.md using split references.
- Prepare .github/chatmodes/* for each development phase, enforcing “ask the user if anything is unclear.”

Synchronization of Documents and Implementation Code
- Update docs first → use the diff as the basis for implementation (doc-first).
- Keep docs in text format (Markdown/Mermaid).
- Fix the folder correspondence between /docs and /src.

Handling Schemas
- Store DDL and O/R mapper files (e.g., EF) in the repository and have the LLM reference them.
- Minimize dynamic DB connections, prioritizing speed, reproducibility, and security.

This disciplined guardrail-based development technique is an AI-assisted approach that significantly improves the quality, maintainability, and team efficiency of enterprise business application development. Adapt it appropriately to each project to maximize productivity in application development.

Search Less, Build More: Inner Sourcing with GitHub CoPilot and ADO MCP Server
Developers burn cycles context-switching: opening five repos to find a logging example, searching a wiki for a data masking rule, scrolling chat history for the latest pipeline pattern. Organisations that I speak to are often on the path of transformational platform engineering projects but always have the fear or doubt of "what if my engineers don't use these resources". While projects like Backstage still play a pivotal role in inner sourcing and discoverability, I also empathise with developers who would argue "How would I even know, in the first place, which modules have or haven't been created for reuse?"

In this blog we explore how we can ensure organisational standards and developer satisfaction without any heavy lifting on either side: no custom model training, no rewriting or relocating of repositories, and no stagnant local data. Using GitHub CoPilot + the Azure DevOps MCP server (with the free `code_search` extension), we turn the IDE into an organisational knowledge interface. Instead of guessing or re-implementing, engineers can start scaffolding projects or solving issues as they would normally (hopefully using CoPilot) and without extra prompting. GitHub CoPilot can lean into organisational standards and ensure recommendations are made with code snippets generated directly from existing examples.

What Is the Azure DevOps MCP Server + code_search Extension?

MCP (Model Context Protocol) is an open standard that lets agents (like GitHub CoPilot) pull in structured, on-demand context from external systems. MCP servers contain natural language explanations of the tools that the agent can utilise, allowing dynamic decisions about when to use certain toolsets over others. The Azure DevOps MCP Server is the ADO product team's implementation of that standard. It exposes your ADO environment in a way CoPilot can consume. Out of the box it gives you access to:

- Projects – list and navigate across projects in your organization.
- Repositories – browse repos, branches, and files.
- Work items – surface user stories, bugs, or acceptance criteria.
- Wikis – pull policies, standards, and documentation.

This means CoPilot can ground its answers in live ADO content, instead of hallucinating or relying only on what’s in the current editor window. The ADO server runs locally from your own machine to ensure that all sensitive project information remains within your secure network boundary. This also means that existing permissions on ADO objects such as Projects or Repositories are respected.

The wiki search tooling available out of the box with the ADO MCP server is very useful; however, in my experience these wikis often go unused, with documentation stored elsewhere, either inside the repository or in a project management tool. This means any tool that needs to implement code requires the ability to accurately search the code stored in the repositories themselves. That is where enabling the code_search extension in ADO is so important. Most organisations have this enabled already, but it is worth noting that this prerequisite is the real unlock of cross-repo search. It allows CoPilot to:

- Query for symbols, snippets, or keywords across all repos.
- Retrieve usage examples from code, not just docs.
- Locate standards (like logging wrappers or retry policies) wherever they live.
- Back every recommendation with specific source lines.

In short: MCP connects CoPilot to Azure DevOps; code_search makes that connection powerful by turning it into a discovery engine.

What Is the Relevance of CoPilot Instructions?

One of the less obvious but most powerful features of GitHub CoPilot is its ability to follow instructions files. CoPilot automatically looks for these files and uses them as a “playbook” for how it should behave. There are different types of instructions you can provide:

- Organisational instructions – apply across your entire workspace, regardless of which repo you’re in.
- Repo-specific instructions – scoped to a particular repository, useful when one project has unique standards or patterns.
- Personal instructions – smaller overrides layered on top of global rules when a local exception applies. (Stored in .github/copilot-instructions.md)

In this solution, I’m using a single personal instructions file. It tells CoPilot:

- When to search (e.g., always query repos and wikis before answering a standards question).
- Where to look (Azure DevOps repos, wikis, and with code_search, the code itself).
- How to answer (responses must cite the repo/file/line or wiki page; if no source is found, say so).
- How to resolve conflicts (prefer dated wiki entries over older README fragments).

As a small example, a section of a CoPilot instructions file could look like this:

# GitHub Copilot Instructions for Azure DevOps MCP Integration

This project uses Azure DevOps with MCP server integration to provide organizational context awareness. Always check to see if the Azure DevOps MCP server has a tool relevant to the user's request.

## Core Principles

### 1. Azure DevOps Integration
- **Always prioritize Azure DevOps MCP tools** when users ask about:
  - Work items, stories, bugs, tasks
  - Pull requests and code reviews
  - Build pipelines and deployments
  - Repository operations and branch management
  - Wiki pages and documentation
  - Test plans and test cases
  - Project and team information

### 2. Organizational Context Awareness
- Before suggesting solutions, **check existing organizational patterns** by:
  - Searching code across repositories for similar implementations
  - Referencing established coding standards and frameworks
  - Looking for existing shared libraries and utilities
  - Checking architectural decision records (ADRs) in wikis

### 3. Cross-Repository Intelligence
- When providing code suggestions:
  - **Search for existing patterns** in other repositories first
  - **Reference shared libraries** and common utilities
  - **Maintain consistency** with organizational standards
  - **Suggest reusable components** when appropriate

## Tool Usage Guidelines

### Work Items and Project Management
When users mention bugs, features, tasks, or project planning:
```
✅ Use: wit_my_work_items, wit_create_work_item, wit_update_work_item
✅ Use: wit_list_backlogs, wit_get_work_items_for_iteration
✅ Use: work_list_team_iterations, core_list_projects
```

The result...

To test this I created three ADO projects, each with one or two repositories. The repositories were light, containing only ReadMes with descriptions of the "repo" and some example code snippets for usage. I then created a brand-new workspace with no context apart from a CoPilot instructions document (which could be part of a repo scaffold or organisation-wide) that tells CoPilot to search the code and the wikis across all ADO projects in my demo environment. It returns guidance and standards from all available repos and starts to use them to formulate its response.

In the screenshot I have highlighted some key parts with red boxes. The first is a section of the readme that CoPilot has identified in its response, with that part also highlighted within the CoPilot chat response. I have also highlighted the rather generic prompt I used to get this response at the bottom of that window. Above, I have highlighted CoPilot using the MCP server tooling, searching through projects, repos, and code. Finally, the largest box highlights the instructions given to CoPilot on how to search, and how easily these could be optimised or changed depending on the requirements and organisational coding standards.

How Did I Implement This?

Implementation is actually incredibly simple.
As mentioned, I created multiple projects and repositories within my ADO organisation in order to test cross-project and cross-repo discovery. I then did the following:

1. Enable code_search in your Azure DevOps organization (Marketplace → install extension).
2. Login to Azure – use the Azure CLI to authenticate to Azure with `az login`.
3. Create a .vscode/mcp.json file – a snippet is provided below; the organisation name should be changed to your organisation's name.
4. Start and enable your MCP server – in the mcp.json file you should see a "Start" button. Using the snippet below, you will be prompted to add your organisation name. Ensure your CoPilot agent has access to the server under "tools" too. View this setup guide for full setup instructions (azure-devops-mcp/docs/GETTINGSTARTED.md at main · microsoft/azure-devops-mcp).
5. Create a CoPilot instructions file – with a search-first directive. I have inserted the full version used in this demo at the bottom of the article.
6. Experiment with prompts – start generic (“How do we secure APIs?”), review the output and the tools used, and then tailor your instructions.

Considerations

While this is a great approach, I do still have some considerations when going to production:

- Latency – using MCP tooling on every request will add some latency to developer requests. We can look at optimising usage through CoPilot instructions to better identify when CoPilot should or shouldn't use the ADO MCP server.
- Complex projects and repositories – while I have demonstrated cross-project and cross-repository retrieval, my demo environment does not truly simulate an enterprise ADO environment. Performance should be tested and closely monitored as organisational complexity increases.
- Public preview – the ADO MCP server is moving quickly but is currently still in public preview.

We have demonstrated in this article how quickly we can make our Azure DevOps content discoverable.
While there are considerations moving forward, this showcases a direction towards agentic inner sourcing. Feel free to comment below on how you think this approach could be extended or augmented for other use cases!

Resources

MCP Server Config (/.vscode/mcp.json)

{
  "inputs": [
    {
      "id": "ado_org",
      "type": "promptString",
      "description": "Azure DevOps organization name (e.g. 'contoso')"
    }
  ],
  "servers": {
    "ado": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@azure-devops/mcp", "${input:ado_org}"]
    }
  }
}

CoPilot Instructions (/.github/copilot-instructions.md)

# GitHub Copilot Instructions for Azure DevOps MCP Integration

This project uses Azure DevOps with MCP server integration to provide organizational context awareness. Always check to see if the Azure DevOps MCP server has a tool relevant to the user's request.

## Core Principles

### 1. Azure DevOps Integration
- **Always prioritize Azure DevOps MCP tools** when users ask about:
  - Work items, stories, bugs, tasks
  - Pull requests and code reviews
  - Build pipelines and deployments
  - Repository operations and branch management
  - Wiki pages and documentation
  - Test plans and test cases
  - Project and team information

### 2. Organizational Context Awareness
- Before suggesting solutions, **check existing organizational patterns** by:
  - Searching code across repositories for similar implementations
  - Referencing established coding standards and frameworks
  - Looking for existing shared libraries and utilities
  - Checking architectural decision records (ADRs) in wikis

### 3. Cross-Repository Intelligence
- When providing code suggestions:
  - **Search for existing patterns** in other repositories first
  - **Reference shared libraries** and common utilities
  - **Maintain consistency** with organizational standards
  - **Suggest reusable components** when appropriate

## Tool Usage Guidelines

### Work Items and Project Management
When users mention bugs, features, tasks, or project planning:
```
✅ Use: wit_my_work_items, wit_create_work_item, wit_update_work_item
✅ Use: wit_list_backlogs, wit_get_work_items_for_iteration
✅ Use: work_list_team_iterations, core_list_projects
```

### Code and Repository Operations
When users ask about code, branches, or pull requests:
```
✅ Use: repo_list_repos_by_project, repo_list_pull_requests_by_repo
✅ Use: repo_list_branches_by_repo, repo_search_commits
✅ Use: search_code for finding patterns across repositories
```

### Documentation and Knowledge Sharing
When users need documentation or want to create/update docs:
```
✅ Use: wiki_list_wikis, wiki_get_page_content, wiki_create_or_update_page
✅ Use: search_wiki for finding existing documentation
```

### Build and Deployment
When users ask about builds, deployments, or CI/CD:
```
✅ Use: pipelines_get_builds, pipelines_get_build_definitions
✅ Use: pipelines_run_pipeline, pipelines_get_build_status
```

## Response Patterns

### 1. Discovery First
Before providing solutions, always discover organizational context:
```
"Let me first check what patterns exist in your organization..."
→ Search code, check repositories, review existing work items
```

### 2. Reference Organizational Standards
When suggesting code or approaches:
```
"Based on patterns I found in your [RepositoryName] repository..."
"Following your organization's standard approach seen in..."
"This aligns with the pattern established in [TeamName]'s implementation..."
```

### 3. Actionable Integration
Always offer to create or update Azure DevOps artifacts:
```
"I can create a work item for this enhancement..."
"Should I update the wiki page with this new pattern?"
"Let me link this to the current iteration..."
```

## Specific Scenarios

### New Feature Development
1. **Search existing repositories** for similar features
2. **Check architectural patterns** and shared libraries
3. **Review related work items** and planning documents
4. **Suggest implementation** based on organizational standards
5. **Offer to create work items** and documentation

### Bug Investigation
1. **Search for similar issues** across repositories and work items
2. **Check related builds** and recent changes
3. **Review test results** and failure patterns
4. **Provide solution** based on organizational practices
5. **Offer to create/update** bug work items and documentation

### Code Review and Standards
1. **Compare against organizational patterns** found in other repositories
2. **Reference coding standards** from wiki documentation
3. **Suggest improvements** based on established practices
4. **Check for reusable components** that could be leveraged

### Documentation Requests
1. **Search existing wikis** for related content
2. **Check for ADRs** and technical documentation
3. **Reference patterns** from similar projects
4. **Offer to create/update** wiki pages with findings

## Error Handling

If Azure DevOps MCP tools are not available or fail:
1. **Inform the user** about the limitation
2. **Provide alternative approaches** using available information
3. **Suggest manual steps** for Azure DevOps integration
4. **Offer to help** with configuration if needed

## Best Practices

### Always Do:
- ✅ Search organizational context before suggesting solutions
- ✅ Reference existing patterns and standards
- ✅ Offer to create/update Azure DevOps artifacts
- ✅ Maintain consistency with organizational practices
- ✅ Provide actionable next steps

### Never Do:
- ❌ Suggest solutions without checking organizational context
- ❌ Ignore existing patterns and implementations
- ❌ Provide generic advice when specific organizational context is available
- ❌ Forget to offer Azure DevOps integration opportunities

---

**Remember: The goal is to provide intelligent, context-aware assistance that leverages the full organizational knowledge base available through Azure DevOps while maintaining development efficiency and consistency.**

Unlocking Application Modernisation with GitHub Copilot
AI-driven modernisation is unlocking new opportunities you may not have even considered yet. It also lets organisations re-evaluate previously discarded modernisation attempts that were considered too hard or too complex, or for which they simply didn't have the skills or time. During Microsoft Build 2025, we were introduced to the concept of Agentic AI modernisation, and this post from Ikenna Okeke does a great job of summarising the topic - Reimagining App Modernisation for the Era of AI | Microsoft Community Hub. This blog post, however, explores the modernisation opportunities you may not even have thought of yet, the business benefits, how to start preparing your organisation, empowering your teams, and identifying where GitHub Copilot can help. I've spent the last 8 months working with customers exploring usage of GitHub Copilot, and want to share what my team members and I have discovered in terms of new opportunities to modernise and transform your applications, bringing some fun back into those migrations! Let's delve into how GitHub Copilot is helping teams update old systems, move processes to the cloud, and achieve results faster than ever before. Background: The Modernisation Challenge (Then vs Now) Modernising legacy software has always been hard. In the past, teams faced steep challenges: brittle codebases full of technical debt, outdated languages (think decades-old COBOL or VB6), sparse documentation, and original developers long gone. Integrating old systems with modern cloud services often required specialised skills that were in short supply – for example, check out this fantastic post from Arvi LiVigni (@arilivigni), which talks about migrating from COBOL: "the number of developers who can read and write COBOL isn't what it used to be," making those systems much harder to update.
Common pain points included compatibility issues, data migrations, high costs, security vulnerabilities, and the constant risk that any change could break critical business functions. It's no wonder many modernisation projects stalled or were "put off" due to their complexity and risk. So, what's different now (circa 2025) compared to two years ago? In a word: intelligent AI assistance. Tools like GitHub Copilot have emerged as AI pair programmers that dramatically lower the barriers to modernisation. Arvi's post talks about how, only a couple of years ago, developers had to comb through documentation and Stack Overflow for clues when deciphering old code or upgrading frameworks. Today, GitHub Copilot can act like an expert co-developer inside your IDE, ready to explain mysterious code, suggest updates, and even rewrite legacy code in modern languages. This means less time fighting old code and more time implementing improvements. As Arvi says, "nine times out of 10 it gives me the right answer… That speed – and not having to break out of my flow – is really what's so impactful." In short, AI coding assistants have evolved from novel experiments to indispensable tools, reimagining how we approach software updates and cloud adoption. I'd also add from my own experience – the models we were using 12 months ago have already been superseded by far superior models, able to ingest larger context and tackle even greater complexity. It's easier to experiment – and to fail – which brings more robust outcomes. With such speed in creating proofs of concept, experimenting, and failing faster, you can now test multiple hypotheses and reach the most confident outcome in a much shorter space of time. Modernisation is easier now because AI reduces the heavy lifting. Instead of reading a 10,000-line legacy program alone, a developer can ask Copilot to explain what the code does or even propose a refactored version.
Rather than manually researching how to replace an outdated library, they can get instant recommendations for modern equivalents. These advancements mean that tasks which once took weeks or months can now be done in days or hours – with more confidence and less drudgery - more fun! The following sections will dive into specific opportunities unlocked by GitHub Copilot across the modernisation journey which you may not even have thought of. Modernisation Opportunities Unlocked by Copilot Modernising an application isn’t just about updating code – it involves bringing everyone and everything up to speed with cloud-era practices. Below are several scenarios and how GitHub Copilot adds value, with the specific benefits highlighted: 1. AI-Assisted Legacy Code Refactoring and Upgrades Instant Code Comprehension: GitHub Copilot can explain complex legacy code in plain English, helping developers quickly understand decades-old logic without scouring scarce documentation. For example, you can highlight a cryptic COBOL or C++ function and ask Copilot to describe what it does – an invaluable first step before making any changes. This saves hours and reduces errors when starting a modernisation effort. Automated Refactoring Suggestions: The AI suggests modern replacements for outdated patterns and APIs, and can even translate code between languages. For instance, Copilot can help convert a COBOL program into JavaScript or C# by recognising equivalent constructs. It also uses transformation tools (like OpenRewrite for Java/.NET) to systematically apply code updates – e.g. replacing all legacy HTTP calls with a modern library in one sweep. Developers remain in control, but GitHub Copilot handles the tedious bulk edits. Bulk Code Upgrades with AI: GitHub Copilot’s App Modernisation capabilities can analyse an entire codebase and generate a detailed upgrade plan, then execute many of the code changes automatically. 
It can upgrade framework versions (say from .NET Framework 4.x to .NET 6, or Java 8 to Java 17) by applying known fix patterns and even fixing compilation errors after the upgrade. Teams can finally tackle those enterprise applications with hundreds of thousands of lines of code – a task that could otherwise take multiple years – with GitHub Copilot handling the repetitive changes. Technical Debt Reduction: By cleaning up old code and enforcing modern best practices, GitHub Copilot helps chip away at years of technical debt. The modernised codebase is more maintainable and stable, which lowers the long-term risk hanging over critical business systems. Notably, the tool can even scan for known security vulnerabilities during refactoring as it updates your code. In short, each legacy component refreshed with GitHub Copilot comes out safer and easier to work on, instead of remaining a brittle black box. 2. Accelerating Cloud Migration and Azure Modernisation Guided Azure Migration Planning: GitHub Copilot can assess a legacy application's cloud readiness and recommend target Azure services for each component. For instance, it might suggest migrating an on-premises database to Azure SQL, moving file storage to Azure Blob Storage, and converting background jobs to Azure Functions. This provides a clear blueprint to confidently move an app from servers to Azure PaaS. One-Click Cloud Transformations: GitHub Copilot comes with predefined migration tasks that automate the code changes required for cloud adoption. With one click, you can have the AI apply dozens of modifications across your codebase. For example: File storage: Replace local file read/writes with Azure Blob Storage SDK calls. Email/Comms: Swap out SMTP email code for Azure Communication Services or SendGrid. Identity: Migrate authentication from Windows AD to Azure AD (Entra ID) libraries. Configuration: Remove hard-coded configurations and use Azure App Configuration or Key Vault for secrets.
GitHub Copilot performs these transformations consistently, following best practices (like using connection strings from Azure settings). After applying the changes, it even fixes any compile errors automatically, so you’re not left with broken builds. What used to require reading countless Azure migration guides is now handled in minutes. Automated Validation & Deployment: Modernisation doesn’t stop at code changes. GitHub Copilot can also generate unit tests to validate that the application still behaves correctly after the migration. It helps ensure that your modernised, cloud-ready app passes all its checks before going live. When you’re ready to deploy, GitHub Copilot can produce the necessary Infrastructure-as-Code templates (e.g. Azure Resource Manager Bicep files or Terraform configs) and even set up CI/CD pipeline scripts for you. In other words, the AI can configure the Azure environment and deployment process end-to-end. This dramatically reduces manual effort and error, getting your app to the cloud faster and with greater confidence. Integrations: GitHub Copilot also helps tackle larger migration scenarios that were previously considered too complex. For example, many enterprises want to retire expensive proprietary integration platforms like MuleSoft or Apigee and use Azure-native services instead, but rewriting hundreds of integration workflows was daunting. Now, GitHub Copilot can assist in translating those workflows: for instance, converting an Apigee API proxy into an Azure API Management policy, or a MuleSoft integration into an Azure Logic App. Multi-Cloud Migrations: if you plan to consolidate from other clouds into Azure, GitHub Copilot can suggest equivalent Azure services and SDK calls to replace AWS or GCP-specific code. These AI-assisted conversions significantly cut down the time needed to reimplement functionality on Azure. The business impact can be substantial. 
By lowering the effort of such migrations, GitHub Copilot makes it feasible to pursue opportunities that deliver big cost savings and simplification. 3. Boosting Developer Productivity and Quality Instant Unit Tests (TDD Made Easy): Writing tests for old code can be tedious, but GitHub Copilot can generate unit test cases on the fly. Developers can highlight an existing function and ask Copilot to create tests; it will produce meaningful test methods covering typical and edge scenarios. This makes it practical to apply test-driven development practices even to legacy systems – you can quickly build a safety net of tests before refactoring. By catching bugs early through these AI-generated tests, teams gain confidence to modernise code without breaking things. It essentially injects quality into the process from the start, which is crucial for successful modernisation. DevOps Automation: GitHub Copilot helps modernise your build and deployment process as well. It can draft CI/CD pipeline configurations, Dockerfiles, Kubernetes manifests, and other DevOps scripts by leveraging its knowledge of common patterns. For example, when setting up a GitHub Actions workflow to deploy your app, GitHub Copilot will autocomplete significant parts (like build steps, test runs, deployment jobs) based on the project structure. This not only saves time but also ensures best practices (proper caching, dependency installation, etc.) are followed by default. Microsoft even provides an extension where you can describe your Azure infrastructure needs in plain language and have GitHub Copilot generate the corresponding templates and pipeline YAML. By automating these pieces, teams can move to cloud-based, automated deployments much faster. Behaviour-Driven Development Support: Teams practicing BDD write human-readable scenarios (e.g. using Gherkin syntax) describing application behaviour. 
GitHub Copilot’s AI is adept at interpreting such descriptions and suggesting step definition code or test implementations to match. For instance, given a scenario “When a user with no items checks out, then an error message is shown,” GitHub Copilot can draft the code for that condition or the test steps required. This helps bridge the gap between non-technical specifications and actual code. It makes BDD more efficient and accessible, because even if team members aren’t strong coders, the AI can translate their intent into working code that developers can refine. Quality and Consistency: By using AI to handle boilerplate and repetitive tasks, developers can focus more on high-value improvements. GitHub Copilot’s suggestions are based on a vast corpus of code, which often means it surfaces well-structured, idiomatic patterns. Starting from these suggestions, developers are less likely to introduce errors or reinvent the wheel, which leads to more consistent code quality across the project. The AI also often reminds you of edge cases (for example, suggesting input validation or error handling code that might be missed), contributing to a more robust application. In practice, many teams find that adopting GitHub Copilot results in fewer bugs and quicker code reviews, as the code is cleaner on the first pass. It’s like having an extra set of eyes on every pull request, ensuring standards are met. Business Benefits of AI-Powered Modernisation Bringing together the technical advantages above, what’s the payoff for the business and stakeholders? Modernising with GitHub Copilot can yield multiple tangible and intangible benefits: Accelerated Time-to-Market: Modernisation projects that might have taken a year can potentially be completed in a few months, or an upgrade that took weeks can be done in days. This speed means you can deliver new features to customers sooner and respond faster to market changes. 
It also reduces downtime or disruption since migrations happen more swiftly. Cost Savings: By automating repetitive work and reducing the effort required from highly paid senior engineers, GitHub Copilot can trim development costs. Faster project completion also means lower overall project cost. Additionally, running modernised apps on cloud infrastructure (with updated code) often lowers operational costs due to more efficient resource usage and easier maintenance. There’s also an opportunity cost benefit: developers freed up by Copilot can work on other value-adding projects in parallel. Improved Quality & Reliability: GitHub Copilot’s contributions to testing, bug-fixing, and even security (like patching known vulnerabilities during upgrades) result in more robust applications. Modernised systems have fewer outages and security incidents than shaky legacy ones. Stakeholders will appreciate that with GitHub Copilot, modernisation doesn’t mean “trading one set of bugs for another” – instead, you can increase quality as you modernise (GitHub’s research noted higher code quality when using Copilot, as developers are less likely to introduce errors or skip tests). Business Agility: A modernised application (especially one refactored for cloud) is typically more scalable and adaptable. New integrations or features can be added much faster once the platform is up-to-date. GitHub Copilot helps clear the modernisation hurdle, after which the business can innovate on a solid, flexible foundation (for example, once a monolith is broken into microservices or moved to Azure PaaS, you can iterate on it much faster in the future). AI-assisted modernisation thus unlocks future opportunities (like easier expansion, integrations, AI features, etc.) that were impractical on the legacy stack. Employee Satisfaction and Innovation: Developer happiness is a subtle but important benefit. 
When tedious work is handled by AI, developers can spend more time on creative tasks – designing new features, improving user experience, exploring new technologies. This can foster a culture of innovation. Moreover, being seen as a company that leverages modern tools (like AI Co-pilots) helps attract and retain top tech talent. Teams that successfully modernise critical systems with Copilot will gain confidence to tackle other ambitious projects, creating a positive feedback loop of improvement. To sum up, GitHub Copilot acts as a force-multiplier for application modernisation. It enables organisations to do more with less: convert legacy “boat anchors” into modern, cloud-enabled assets rapidly, while improving quality and developer morale. This aligns IT goals with business goals – faster delivery, greater efficiency, and readiness for the future. Call to Action: Embrace the Future of Modernisation GitHub Copilot has proven to be a catalyst for transforming how we approach legacy systems and cloud adoption. If you’re excited about the possibilities, here are next steps and what to watch for: Start Experimenting: If you haven’t already, try GitHub Copilot on a sample of your code. Use Copilot or Copilot Chat to explain a piece of old code or generate a unit test. Seeing it in action on your own project can build confidence and spark ideas for where to apply it. Identify a Pilot Project: Look at your application portfolio for a candidate that’s ripe for modernisation – maybe a small legacy service that could be moved to Azure, or a module that needs a refactor. Use GitHub Copilot to assess and estimate the effort. Often, you’ll find tasks once deemed “too hard” might now be feasible. Early successes will help win support for larger initiatives. Stay Tuned for Our Upcoming Blog Series: This post is just the beginning. 
In forthcoming posts, we’ll dive deeper into: Setting Up Your Organisation for Copilot Adoption: Practical tips on preparing your enterprise environment – from licensing and security considerations to training programs. We’ll discuss best practices (like running internal awareness campaigns, defining success metrics, and creating Copilot champions in your teams) to ensure a smooth rollout. Empowering Your Colleagues: How to foster a culture that embraces AI assistance. This includes enabling continuous learning, sharing prompt techniques and knowledge bases, and addressing any scepticism. We’ll cover strategies to support developers in using Copilot effectively, so that everyone from new hires to veteran engineers can amplify their productivity. Identifying High-Impact Modernisation Areas: Guidance on spotting where GitHub Copilot can add the most value. We’ll look at different domains – code, cloud, tests, data – and how to evaluate opportunities (for example, using telemetry or feedback to find repetitive tasks suited for AI, or legacy components with high ROI if modernised). Engage and Share: As you start leveraging Copilot for modernisation, share your experiences and results. Success stories (even small wins like “GitHub Copilot helped reduce our code review times” or “we migrated a component to Azure in 1 sprint”) can build momentum within your organisation and the broader community. We invite you to discuss and ask questions in the comments or in our tech community forums. Take a look at the new App Modernisation Guidance—a comprehensive, step-by-step playbook designed to help organisations: Understand what to modernise and why Migrate and rebuild apps with AI-first design Continuously optimise with built-in governance and observability Modernisation is a journey, and AI is the new compass and co-pilot to guide the way. 
By embracing tools like GitHub Copilot, you position your organisation to break through modernisation barriers that once seemed insurmountable. The result is not just updated software, but a more agile, cloud-ready business and a happier, more productive development team. Now is the time to take that step. Empower your team with Copilot, and unlock the full potential of your applications and your developers. Stay tuned for more insights in our next posts, and let's modernise what's possible together!

🚀 Bring Your Own License (BYOL) Support for JBoss EAP on Azure App Service
We’re excited to announce that Azure App Service now supports Bring Your Own License (BYOL) for JBoss Enterprise Application Platform (EAP), enabling enterprise customers to deploy Java workloads with greater flexibility and cost efficiency. If you’ve evaluated Azure App Service in the past, now is the perfect time to take another look. With BYOL support, you can leverage your existing Red Hat subscriptions to optimize costs and align with your enterprise licensing strategy.

Send metrics from Micronaut native image applications to Azure Monitor
The original post (Japanese) was written on 20 July 2025: MicronautからAzure Monitorにmetricを送信したい (Sending metrics from Micronaut to Azure Monitor) – Logico Inside

This entry is related to the following one. Please take a look for background information.

Send signals from Micronaut native image applications to Azure Monitor | Microsoft Community Hub

Prerequisites

- Maven: 3.9.10
- JDK: 21
- Micronaut: 4.9.0 or later

The following tutorials were used as a reference.

- Create a Micronaut Application to Collect Metrics and Monitor Them on Azure Monitor Metrics
- Collect Metrics with Micronaut

Create an archetype

We can create an archetype using Micronaut's CLI (mn) or Micronaut Launch. In this entry, we use application.yml instead of application.properties for application configuration, so we need to specify the "yaml" feature to include the dependencies for YAML support.

Micronaut Launch: https://micronaut.io/launch/

```
$ mn create-app \
  --build=maven \
  --jdk=21 \
  --lang=java \
  --test=junit \
  --features=validation,graalvm,micrometer-azure-monitor,http-client,micrometer-annotation,yaml \
  dev.logicojp.micronaut.azuremonitor-metric
```

When using Micronaut Launch, click [FEATURES] and select the following features:

- validation
- graalvm
- micrometer-azure-monitor
- http-client
- micrometer-annotation
- yaml

After all features are selected, click [GENERATE PROJECT] and choose [Download Zip] to download an archetype as a Zip file.

Implementation

In this section, we're going to use the GDK sample code found in the tutorial. The code is from the Micronaut Guides, but the database access and other parts have been removed. We have made the following changes to the code to fit our needs.

a) Structure of the directory

In the GDK tutorial, folders called azure and lib are created, but this structure isn't used in the standard Micronaut archetype, so the code in both directories has been combined.
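As an illustrative sketch of the combined single-module layout (class names are borrowed from later in this post; your exact structure may differ):

```
azuremonitor-metric/
├── pom.xml
└── src/main/
    ├── java/dev/logicojp/micronaut/
    │   ├── Application.java
    │   ├── BooksController.java
    │   └── MicroserviceBooksNumberService.java
    └── resources/
        ├── application.yml
        └── META-INF/native-image/
```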
b) Instrumentation Key

As the tutorial above and the Micronaut Micrometer documentation explain, we need to specify the Instrumentation Key. When we create an archetype using the Micronaut CLI or Micronaut Launch, configuration assuming the use of the Instrumentation Key is included in application.properties / application.yml.

6.3 Azure Monitor Registry – Micronaut Micrometer

This configuration works, but Application Insights currently does not recommend access using only the Instrumentation Key, so it is better to switch to a connection string that includes the Instrumentation Key. To set it up, open the file application.properties and enter the following:

```
micronaut.metrics.export.azuremonitor.connectionString="InstrumentationKey=...."
```

In the case of application.yml, we need to specify the connection string in YAML format:

```yaml
micronaut:
  metrics:
    enabled: true
    export:
      azuremonitor:
        enabled: true
        connectionString: InstrumentationKey=....
```

We can also supply the value via the environment variable MICRONAUT_METRICS_EXPORT_AZUREMONITOR_CONNECTIONSTRING, but since this name is too long, it is better to map a shorter one. Here's an example using AZURE_MONITOR_CONNECTION_STRING (which is also long, if you think about it).

```
micronaut.metrics.export.azuremonitor.connectionString=${AZURE_MONITOR_CONNECTION_STRING}
```

```yaml
micronaut:
  metrics:
    enabled: true
    export:
      azuremonitor:
        enabled: true
        connectionString: ${AZURE_MONITOR_CONNECTION_STRING}
```

The connection string can be specified because Micrometer, which is used internally, already supports it. We can find the AzureMonitorConfig.java file here:

AzureMonitorConfig.java
micrometer/implementations/micrometer-registry-azure-monitor/src/main/java/io/micrometer/azuremonitor/AzureMonitorConfig.java at main · micrometer-metrics/micrometer

The settings in application.properties / application.yml are as follows. For more information about the specified meter binders, please look at the following documents.
Meter Binder – Micronaut Micrometer

```yaml
micronaut:
  application:
    name: azuremonitor-metric
  metrics:
    enabled: true
    binders:
      files:
        enabled: true
      jdbc:
        enabled: true
      jvm:
        enabled: true
      logback:
        enabled: true
      processor:
        enabled: true
      uptime:
        enabled: true
      web:
        enabled: true
    export:
      azuremonitor:
        enabled: true
        step: PT1M
        connectionString: ${AZURE_MONITOR_CONNECTION_STRING}
```

c) pom.xml

To use the GraalVM Reachability Metadata Repository, we need to add this dependency. The latest version is 0.11.0 as of 20 July 2025.

```xml
<dependency>
  <groupId>org.graalvm.buildtools</groupId>
  <artifactId>graalvm-reachability-metadata</artifactId>
  <version>0.11.0</version>
</dependency>
```

Add the GraalVM Maven plugin and enable the use of the GraalVM Reachability Metadata obtained from the above dependency. This plugin lets us pass build arguments via buildArg elements (in this example, the optimisation level is specified). We can also add such options to native-image.properties, which the native-image tool (and the Maven/Gradle plugin) will read.

```xml
<plugin>
  <groupId>org.graalvm.buildtools</groupId>
  <artifactId>native-maven-plugin</artifactId>
  <configuration>
    <metadataRepository>
      <enabled>true</enabled>
    </metadataRepository>
    <buildArgs combine.children="append">
      <buildArg>-Ob</buildArg>
    </buildArgs>
    <quickBuild>true</quickBuild>
  </configuration>
</plugin>
```

For now, let's build it as a Java application:

```
$ mvn clean package
```

Check if it works as a Java application

At first, verify that the application is running without any problems and that metrics are being sent to Application Insights. Then, run the application using the Tracing Agent to generate the necessary configuration files.
```
# (1) Collect configuration files such as reflect-config.json
$JAVA_HOME/bin/java \
  -agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/{groupId}/{artifactId}/ \
  -jar ./target/{artifactId}-{version}.jar

# (2)-a Generate a trace file
$JAVA_HOME/bin/java \
  -agentlib:native-image-agent=trace-output=/path/to/trace-file.json \
  -jar ./target/{artifactId}-{version}.jar

# (2)-b Generate a reachability metadata file from the collected trace file
native-image-configure generate \
  --trace-input=/path/to/trace-file.json \
  --output-dir=/path/to/config-dir/
```

- Configure Native Image with the Tracing Agent
- Collect Metadata with the Tracing Agent

The following files are stored in the specified directory:

- jni-config.json
- reflect-config.json
- proxy-config.json
- resource-config.json
- reachability-metadata.json

These files can be located at src/main/resources/META-INF/native-image, since the native-image tool picks up configuration files in that directory. However, it is recommended to place the files in subdirectories divided by groupId and artifactId, as shown below:

```
src/main/resources/META-INF/native-image/{groupId}/{artifactId}
```

native-image.properties

When creating a native image, we call the following command:

```
mvn package -Dpackaging=native-image
```

We should specify the timing of class initialization (build time or runtime), the command-line options for the native-image tool (the same options work in the Maven/Gradle plugin), and the JVM arguments in the native-image.properties file. Indeed, these settings can be specified in pom.xml, but it is recommended that they be externalized.

a) Location of configuration files: As described in the documentation, we can specify the location of configuration property files. If we build using the recommended method (placing the files in the directory src/main/resources/META-INF/native-image/{groupId}/{artifactId}), we can specify the directory location using ${.}.
```
-H:DynamicProxyConfigurationResources
-H:JNIConfigurationResources
-H:ReflectionConfigurationResources
-H:ResourceConfigurationResources
-H:SerializationConfigurationResources
```

Native Image Build Configuration

b) HTTP/HTTPS protocol support: We need to use --enable-https / --enable-http when using the HTTP(S) protocols in the application.

URL Protocols in Native Image

c) When classes are loaded and initialized: In the case of AOT compilation, classes are usually loaded at build time and stored in the image heap. However, some classes might be specified to be loaded while the program is running. In these cases, it is necessary to explicitly specify initialization at runtime (and vice versa, of course). There are two kinds of build arguments:

```
# Explicitly specify initialisation at runtime
--initialize-at-run-time=...

# Explicitly specify initialisation at build time
--initialize-at-build-time=...
```

To enable tracing of class initialization, use the following arguments:

```
--trace-class-initialization=...   # Deprecated in GraalVM 21.3
--trace-object-instantiation=...   # Current option
```

- Specify Class Initialization Explicitly
- Class Initialization in Native Image

d) Prevent fallback builds: If the application cannot be optimized during the native image build, the native-image tool will create a fallback image, which requires a JVM to run. To prevent fallback builds, specify the option --no-fallback. For other build options, please look at the following document:

Command-line Options

Build a Native Image application

Building a native image application takes a long time (though it has become quicker over time). When building for testing purposes, we strongly recommend enabling quick build and setting the optimization level with the -Ob option (although builds will still take time). See below for more information.
- Maven plugin for GraalVM Native Image
- Gradle plugin for GraalVM Native Image
- Optimizations and Performance

Test as a native image application

When we start the native image application, we might see the following message:

```
GC notifications will not be available because no GarbageCollectorMXBean of the JVM provides any. GCs=[young generation scavenger, complete scavenger]
```

This message means that GC notifications are not available because no GarbageCollectorMXBean of the JVM provides them.

Let's check if the application works.

1) GET /books and GET /books/{isbn}

This is a normal REST API. Call both of them a few times.

2) GET /metrics

We can check the list of available metrics:

```json
{
  "names": [
    "books.find", "books.index",
    "executor", "executor.active", "executor.completed",
    "executor.pool.core", "executor.pool.max", "executor.pool.size",
    "executor.queue.remaining", "executor.queued",
    "http.server.requests",
    "jvm.classes.loaded", "jvm.classes.unloaded",
    "jvm.memory.committed", "jvm.memory.max", "jvm.memory.used",
    "jvm.threads.daemon", "jvm.threads.live", "jvm.threads.peak",
    "jvm.threads.started", "jvm.threads.states",
    "logback.events",
    "microserviceBooksNumber.checks", "microserviceBooksNumber.latest", "microserviceBooksNumber.time",
    "process.cpu.usage", "process.files.max", "process.files.open",
    "process.start.time", "process.uptime",
    "system.cpu.count", "system.cpu.usage", "system.load.average.1m"
  ]
}
```

The following three metrics are custom ones added in the class MicroserviceBooksNumberService:

- microserviceBooksNumber.checks
- microserviceBooksNumber.time
- microserviceBooksNumber.latest

And the following two metrics are custom ones collected in the class BooksController, which record information such as the time taken and the number of calls. Each metric can be viewed at GET /metrics/{metric name}.

- books.find
- books.index

The following is an example of microserviceBooksNumber.*.
// microserviceBooksNumber.checks { "name": "microserviceBooksNumber.checks", "measurements": [ { "statistic": "COUNT", "value": 12 } ] } // microserviceBooksNumber.time { "name": "microserviceBooksNumber.time", "measurements": [ { "statistic": "COUNT", "value": 12 }, { "statistic": "TOTAL_TIME", "value": 0.212468 }, { "statistic": "MAX", "value": 0.032744 } ], "baseUnit": "seconds" } // microserviceBooksNumber.latest { "name": "microserviceBooksNumber.latest", "measurements": [ { "statistic": "VALUE", "value": 2 } ] } Here is an example of the metric books.* . // books.index { "name": "books.index", "measurements": [ { "statistic": "COUNT", "value": 6 }, { "statistic": "TOTAL_TIME", "value": 3.08425 }, { "statistic": "MAX", "value": 3.02097 } ], "availableTags": [ { "tag": "exception", "values": [ "none" ] } ], "baseUnit": "seconds" } // books.find { "name": "books.find", "measurements": [ { "statistic": "COUNT", "value": 7 } ], "availableTags": [ { "tag": "result", "values": [ "success" ] }, { "tag": "exception", "values": [ "none" ] } ] } Metrics from Azure Monitor (application insights) Here is the grid view of custom metrics in Application Insights ( microserviceBooksNumber.time is the average value). To confirm that the values match those in Application Insights, check the metric http.server.requests , for example. We should see three items on the graph and the value is equal to the number of API responses (3). Send traces from Micronaut native image applications to Azure Monitor
The original post (Japanese) was written on 23 July 2025. MicronautからAzure Monitorにtraceを送信したい – Logico Inside This entry is related to the following one. Please take a look for background information. Send signals from Micronaut native image applications to Azure Monitor | Microsoft Community Hub Prerequisites Maven: 3.9.10 JDK: 21 Micronaut: 4.9.0 or later The following tutorial was used as a reference. OpenTelemetry Tracing with Oracle Cloud and the Micronaut Framework As of 13 August 2025, GDK (Graal Dev Kit) guides are also available. Create and Trace a Micronaut Application Using Azure Monitor Create an archetype We can create an archetype using Micronaut’s CLI ( mn ) or Micronaut Launch. In this entry, we use application.yml instead of application.properties for application configuration. So, we need to specify the feature “yaml” so that we can include the dependencies for using yaml. Micronaut Launch mn create-app \ --build=maven \ --jdk=21 \ --lang=java \ --test=junit \ --features=tracing-opentelemetry-http,validation,graalvm,azure-tracing,http-client,yaml \ dev.logicojp.micronaut.azuremonitor-metric When using Micronaut Launch, click [FEATURES] and select the following features. tracing-opentelemetry-http validation graalvm azure-tracing http-client yaml After all features are selected, click [GENERATE PROJECT] and choose [Download Zip] to download the archetype as a Zip file. Implementation <dependency> <groupId>io.micronaut.tracing</groupId> <artifactId>micronaut-tracing-opentelemetry-http</artifactId> <scope>compile</scope> </dependency> In this section, we’re going to use the tutorial in Micronaut Guides. We can use this code as is, but several points are modified. a) For sending traces to Application Insights Please note that we didn’t include metrics in this article because we discussed them in the last one. Starting with Micronaut 4.9.0, a feature package called micronaut-azure-tracing has been added.
This feature enables sending traces to Application Insights. <dependency> <groupId>io.micronaut.azure</groupId> <artifactId>micronaut-azure-tracing</artifactId> </dependency> Indeed, this dependency is necessary for sending data to Application Insights. However, adding this dependency and specifying the Application Insights connection string is not enough to send traces from applications. micronaut-azure-tracing depends upon the three dependencies listed below. This shows that dependencies for trace collection and creation must also be added. com.azure:azure-monitor-opentelemetry-autoconfigure io.micronaut:micronaut-inject io.micronaut.tracing:micronaut-tracing-opentelemetry In this case, we want to obtain HTTP traces, so we will add dependencies for generating HTTP traces. <dependency> <groupId>io.micronaut.tracing</groupId> <artifactId>micronaut-tracing-opentelemetry-http</artifactId> <scope>compile</scope> </dependency> The connection string for micronaut-azure-tracing is set in a different location ( azure.tracing.connection-string ) from that for micrometer-azure-monitor . If we want to collect not only metrics but also traces, the settings live in different places, which can be confusing. We can also use environment variables to specify the connection string. azure.tracing.connection-string="InstrumentationKey=...." azure: tracing: connection-string: InstrumentationKey=.... b) pom.xml To use the GraalVM Reachability Metadata Repository, we need to add this dependency. The latest version is 0.11.0 as of 23 July, 2025. <dependency> <groupId>org.graalvm.buildtools</groupId> <artifactId>graalvm-reachability-metadata</artifactId> <version>0.11.0</version> </dependency> Add the GraalVM Maven plugin and enable the use of GraalVM Reachability Metadata obtained from the above dependency. This plugin lets us set build arguments using buildArg (in this example, the optimisation level is specified).
We can also add it to native-image.properties , which the native-image tool (and the Maven/Gradle plugin) will read. <plugin> <groupId>org.graalvm.buildtools</groupId> <artifactId>native-maven-plugin</artifactId> <configuration> <metadataRepository> <enabled>true</enabled> </metadataRepository> <buildArgs combine.children="append"> <buildArg>-Ob</buildArg> </buildArgs> <quickBuild>true</quickBuild> </configuration> </plugin> c) To avoid version conflicts with dependencies used in the Azure SDK This often happens when using Netty and/or Jackson. To avoid version conflicts during Native Image generation, Micronaut offers alternative components that we can choose. For example, if we want to avoid Netty version conflicts, we can use Undertow. Dependencies Alternatives Netty Undertow, Jetty, Tomcat Jackson JSON-P / JSON-B, BSON HTTP Client JDK HTTP Client For now, let’s build it as a Java application. mvn clean package Test as a Java application First, verify that the application is running without any problems and that traces are being sent to Application Insights. Then, run the application using the Tracing Agent to generate the necessary configuration files. # (1) Collect configuration files such as reflect-config.json $JAVA_HOME/bin/java \ -agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/{groupId}/{artifactId}/ \ -jar ./target/{artifactId}-{version}.jar # (2)-a Generate a trace file $JAVA_HOME/bin/java \ -agentlib:native-image-agent=trace-output=/path/to/trace-file.json \ -jar ./target/{artifactId}-{version}.jar # (2)-b Generate a reachability metadata file from the collected trace file native-image-configure generate \ --trace-input=/path/to/trace-file.json \ --output-dir=/path/to/config-dir/ Configure Native Image with the Tracing Agent Collect Metadata with the Tracing Agent This generates the following files in the specified folder.
jni-config.json reflect-config.json proxy-config.json resource-config.json reachability-metadata.json These files can be located at src/main/resources/META-INF/native-image . The native-image tool picks up configuration files located in the directory src/main/resources/META-INF/native-image . However, it is recommended that we place the files in subdirectories divided by groupId and artifactId , as shown below. src/main/resources/META-INF/native-image/{groupId}/{artifactId} native-image.properties When creating a native image, we call the following command. mvn package -Dpackaging=native-image We should specify the timing of class initialization (build time or runtime), the command line options for the native-image tool (the same command line options work in the Maven/Gradle plugin), and the JVM arguments in the native-image.properties file. Indeed, these settings can be specified in pom.xml , but it is recommended that they be externalized. This is also explained in the metric entry, so some details will be left out. If needed, please check the metric entry. Send metrics from Micronaut native image applications to Azure Monitor | Microsoft Community Hub Build a Native Image application Building a native image application takes a long time (though it has got quicker over time). If building it for testing purposes, we strongly recommend enabling Quick Build and setting the optimization level with the -Ob option (although this will still take time). See below for more information. Maven plugin for GraalVM Native Image Gradle plugin for GraalVM Native Image Optimizations and Performance Test as a native image application Let’s check if the application works. To check the desktop inventory, execute the following call. curl https://<Container Apps URL and port>/store/inventory/desktop We should receive a response like this.
{"warehouse":7,"item":"desktop","store":2} In Azure Monitor (Application Insights) Application Map, we can observe this interaction visually. Switching to the trace page shows us traces and custom properties on the right of the screen. Press enter or click to view image in full size Then, we add an order. For example, if we place an order for five desktops and then receive 202 Accepted , we need to call inventory check API again. This will show that the number has increased by five and the desktop order has changed to seven (original was 2). $ curl -X "POST" "https://<container apps="" url="" and="" port="">/store/order" \ -H 'Content-Type: application/json; charset=utf-8' \ -d $'{"item":"desktop", "count":5}' $ curl https://<container apps="" url="" and="" port="">/store/inventory/desktop</container></container> Within azuremonitor-trace , an HTTP Client is used internally to execute POST /warehouse/order . Looking at the Application Map in Azure Monitor (Application Insights), we can confirm that a call to azuremonitor-trace itself is occurring. The trace at the time of order placement is as follows. Clicking ‘View all’ in the red frame, we can check the details of each trace.98Views0likes0CommentsSend logs from Micronaut native image applications to Azure Monitor
The original post (Japanese) was written on 29 July 2025. MicronautからAzure Monitorにlogを送信したい – Logico Inside This entry is related to the following one. Please take a look for background information. Send signals from Micronaut native image applications to Azure Monitor | Microsoft Community Hub Where can we post logs? The log destination differs depending upon the managed service (App Service, Container Apps, etc.). We can also send logs to a destination other than the default one. In the case of Azure Container Apps, for instance, we have several options for sending logs. 1) Write console output to a log: the destination is ContainerAppConsoleLogs_CL . If diagnostic settings are configured, the destination table may differ from the above; the output destination can be changed in the diagnostic settings. This is handled by Container Apps, so no user action is required. 2) Use DCE (Data Collection Endpoint) to write logs to a custom table in a Log Analytics Workspace: the destination is a custom table in the Log Analytics Workspace. Follow the tutorials listed below. Publish Application Logs to Azure Monitor Logs Publish Micronaut application logs to Microsoft Azure Monitor Logs 3) Use the Log Appender: the destination is the traces table in Application Insights. When writing logs to the traces table in Application Insights, Log Appender configuration is required. Log storage and monitoring options in Azure Container Apps From now on, we elaborate on the third way: writing logs to the traces table in Application Insights. Prerequisites Maven: 3.9.10 JDK: 21 Micronaut: 4.9.0 or later Regarding logs, logs posted with the following four logging libraries are automatically collected. In this entry, we use Logback. Log4j2 Logback JBoss Logging java.util.logging Create Azure resource (Application Insights) Create a resource group and configure Application Insights. Refer to the following documentation for details. Create and configure Application Insights resources - Azure Monitor That’s it for the Azure setup.
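What you actually take away from the newly created resource is its connection string, a semicolon-separated list of key=value pairs such as InstrumentationKey and IngestionEndpoint. As a rough illustration of that format (the parser below is my own sketch, not part of any Azure SDK, and the endpoint URL is made up):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ConnectionStringDemo {
    // Illustrative parser: an Application Insights connection string is a
    // semicolon-separated list of key=value pairs.
    static Map<String, String> parse(String connectionString) {
        Map<String, String> parts = new LinkedHashMap<>();
        for (String pair : connectionString.split(";")) {
            int eq = pair.indexOf('=');
            if (eq > 0) {
                parts.put(pair.substring(0, eq).trim(), pair.substring(eq + 1).trim());
            }
        }
        return parts;
    }

    public static void main(String[] args) {
        Map<String, String> parts = parse(
            "InstrumentationKey=00000000-0000-0000-0000-000000000000;"
                + "IngestionEndpoint=https://example.in.applicationinsights.azure.com/");
        System.out.println(parts.keySet()); // prints [InstrumentationKey, IngestionEndpoint]
    }
}
```

In the entries below, this whole string is simply passed through as an environment variable; the breakdown is only to show what the value contains.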
Create an archetype We can create an archetype using Micronaut’s CLI ( mn ) or Micronaut Launch. In this entry, we use application.yml instead of application.properties for application configuration. So, we need to specify the feature “yaml” so that we can include the dependencies for using yaml. Micronaut Launch mn create-app \ --build=maven \ --jdk=21 \ --lang=java \ --test=junit \ --features=graalvm,azure-tracing,yaml \ dev.logicojp.micronaut.azuremonitor-log When using Micronaut Launch, click [FEATURES] and select the following features. graalvm azure-tracing yaml After all features are selected, click [GENERATE PROJECT] and choose [Download Zip] to download the archetype as a Zip file. Add dependencies and plugins to pom.xml In order to output logs to Application Insights, the following dependencies must be added. <dependency> <groupId>io.opentelemetry.instrumentation</groupId> <artifactId>opentelemetry-logback-appender-1.0</artifactId> </dependency> <dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-logging-logback</artifactId> </dependency> <dependency> <groupId>io.micronaut.tracing</groupId> <artifactId>micronaut-tracing-opentelemetry-http</artifactId> </dependency> In this entry, we are using Logback for log output, so we are using opentelemetry-logback-appender-1.0. However, should you be using a different library, it will be necessary to specify the appropriate appender for that library. The dependency com.azure:azure-monitor-opentelemetry-autoconfigure is included transitively, since io.micronaut.azure:micronaut-azure-tracing depends upon it. If Azure tracing has not yet been added, the following dependency must be added explicitly. <dependency> <groupId>com.azure</groupId> <artifactId>azure-monitor-opentelemetry-autoconfigure</artifactId> </dependency> Additionally, we need to add this dependency to use the GraalVM Reachability Metadata Repository. The latest version is 0.11.0 as of 29 July, 2025.
<dependency> <groupId>org.graalvm.buildtools</groupId> <artifactId>graalvm-reachability-metadata</artifactId> <version>0.11.0</version> <classifier>repository</classifier> <type>zip</type> </dependency> Add the GraalVM Maven plugin and enable the use of GraalVM Reachability Metadata obtained from the above dependency. This plugin lets us set build arguments using buildArg (in this example, the optimisation level is specified). We can also add it to native-image.properties , which the native-image tool (and the Maven/Gradle plugin) will read. <plugin> <groupId>org.graalvm.buildtools</groupId> <artifactId>native-maven-plugin</artifactId> <configuration> <metadataRepository> <enabled>true</enabled> </metadataRepository> <buildArgs combine.children="append"> <buildArg>-Ob</buildArg> </buildArgs> <quickBuild>true</quickBuild> </configuration> </plugin> Application configuration We need to include both Application Insights-specific settings and azure-tracing settings. For the azure-tracing settings, please refer to the entry below. Send traces from Micronaut native image applications to Azure Monitor | Microsoft Community Hub For Application Insights-specific settings, please refer to the documentation provided. Configuration options - Azure Monitor Application Insights for Java - Azure Monitor According to the documentation, when specifying a connection string, the configuration should be as follows. You can also set the connection string by using the environment variable APPLICATIONINSIGHTS_CONNECTION_STRING . It then takes precedence over the connection string specified in the JSON configuration. Or you can set the connection string by using the Java system property applicationinsights.connection.string . It also takes precedence over the connection string specified in the JSON configuration.
Initially, it may appear that there is no alternative but to use environment variables or Java system properties. However, in the case of Micronaut (and similarly for Spring Boot and Quarkus), the connection string can be configured using the mapping between application settings and environment variables. This allows for defining it in application.properties or application.yml . For instance, if we specify the connection string mentioned above using an environment variable, we would use APPLICATIONINSIGHTS_CONNECTION_STRING . In Micronaut, we can specify it as shown in the following application.yml example (the key matches the one used when setting it as a system property). The configuration of application.yml, including Application Insights-specific settings, is as follows: applicationinsights: connection: string: ${AZURE_MONITOR_CONNECTION_STRING} sampling: percentage: 100 instrumentation: logging: level: "INFO" preview: captureLogbackMarker: true captureControllerSpans: true azure: tracing: connection-string: ${AZURE_MONITOR_CONNECTION_STRING} Codes a) To enable Application Insights We need to explicitly create an OpenTelemetry object to send logs. Please note that while azure-tracing enables Application Insights, the OpenTelemetry object generated during this process is not publicly accessible and cannot be retrieved from outside. AutoConfiguredOpenTelemetrySdkBuilder sdkBuilder = AutoConfiguredOpenTelemetrySdk.builder(); AzureMonitorAutoConfigure.customize(sdkBuilder, "connectionString"); OpenTelemetry openTelemetry = sdkBuilder.build().getOpenTelemetrySdk(); Note that the customize call must be applied to the builder before build() is invoked. b) Log Appender When we create the archetype, src/main/resources/logback.xml should be generated. In this file, add an Appender to associate with the io.opentelemetry.instrumentation.logback.appender.v1_0.OpenTelemetryAppender class object.
<configuration> <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender"> <!-- encoders are assigned the type ch.qos.logback.classic.encoder.PatternLayoutEncoder by default --> <encoder> <pattern>%cyan(%d{HH:mm:ss.SSS}) %gray([%thread]) %highlight(%-5level) %magenta(%logger{36}) - %msg%n </pattern> </encoder> </appender> <appender name="OpenTelemetry" class="io.opentelemetry.instrumentation.logback.appender.v1_0.OpenTelemetryAppender"> <captureExperimentalAttributes>true</captureExperimentalAttributes> <captureCodeAttributes>true</captureCodeAttributes> <captureMarkerAttribute>true</captureMarkerAttribute> <captureKeyValuePairAttributes>true</captureKeyValuePairAttributes> <captureMdcAttributes>*</captureMdcAttributes> </appender> <root level="info"> <appender-ref ref="STDOUT"/> <appender-ref ref="OpenTelemetry"/> </root> </configuration> Then, associate the OpenTelemetry object we created earlier with the Log Appender so that logs can be sent using OpenTelemetry. OpenTelemetryAppender.install(openTelemetry); c) Other implementation The objective of this article is to verify traces and trace logs. To that end, we will develop a rudimentary REST API, akin to a “Hello World” application. However, we will utilize the logger to generate multiple logs. In a real-world application, we would likely refine this process to avoid generating excessive logs. For example, HelloController.java is shown below.
package dev.logicojp.micronaut; import io.micronaut.http.HttpStatus; import io.micronaut.http.MediaType; import io.micronaut.http.annotation.*; import io.micronaut.http.exceptions.HttpStatusException; import io.micronaut.scheduling.TaskExecutors; import io.micronaut.scheduling.annotation.ExecuteOn; import io.opentelemetry.api.OpenTelemetry; import io.opentelemetry.instrumentation.logback.appender.v1_0.OpenTelemetryAppender; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @Controller("/api/hello") @ExecuteOn(TaskExecutors.IO) public class HelloController { private static final Logger logger = LoggerFactory.getLogger(HelloController.class); public HelloController(OpenTelemetry _openTelemetry){ OpenTelemetryAppender.install(_openTelemetry); logger.info("OpenTelemetry is configured and ready to use."); } @Get @Produces(MediaType.APPLICATION_JSON) public GreetingResponse hello(@QueryValue(value = "name", defaultValue = "World") String name) { logger.info("Hello endpoint was called with query parameter: {}", name); // Simulate some processing HelloService helloService = new HelloService(); GreetingResponse greetingResponse = helloService.greet(name); logger.info("Processing complete, returning response"); return greetingResponse; } @Post @Consumes(MediaType.APPLICATION_JSON) @Produces(MediaType.APPLICATION_JSON) @Status(HttpStatus.ACCEPTED) public void setGreetingPrefix(@Body GreetingPrefix greetingPrefix) { String prefix = greetingPrefix.prefix(); if (prefix == null || prefix.isBlank()) { logger.error("Received request to set an empty or null greeting prefix."); throw new HttpStatusException(HttpStatus.BAD_REQUEST, "Prefix cannot be null or empty"); } HelloService helloService = new HelloService(); helloService.setGreetingPrefix(prefix); logger.info("Greeting prefix set to: {}", prefix); } } For now, let’s build it as a Java application. 
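HelloController refers to GreetingResponse, GreetingPrefix, and HelloService, which are not shown in this entry. A minimal sketch of what they might look like; only the type and method names come from the controller, the bodies are my assumptions:

```java
// Hypothetical companion types for HelloController; the bodies are assumed.
record GreetingResponse(String message) {}

record GreetingPrefix(String prefix) {}

class HelloService {
    // Shared prefix so that POST /api/hello affects subsequent GET calls.
    private static volatile String greetingPrefix = "Hello";

    GreetingResponse greet(String name) {
        return new GreetingResponse(greetingPrefix + ", " + name + "!");
    }

    void setGreetingPrefix(String prefix) {
        greetingPrefix = prefix;
    }
}
```

These sketches only exist to make the controller compile in isolation; the actual project has its own definitions.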
mvn clean package Test as a Java application Please verify that the application is running without any issues, that traces are being sent to Application Insights, that logs are being sent to the traces table, and that they can be confirmed on the Trace screen. If the call is GET /api/hello?name=Logico_jp, the traces table will look like this: On the Trace screen, it should resemble this structure, in conjunction with the Request. Then, run the application using the Tracing Agent to generate the necessary configuration files. # (1) Collect configuration files such as reflect-config.json $JAVA_HOME/bin/java \ -agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/{groupId}/{artifactId}/ \ -jar ./target/{artifactId}-{version}.jar # (2)-a Generate a trace file $JAVA_HOME/bin/java \ -agentlib:native-image-agent=trace-output=/path/to/trace-file.json \ -jar ./target/{artifactId}-{version}.jar # (2)-b Generate a reachability metadata file from the collected trace file native-image-configure generate \ --trace-input=/path/to/trace-file.json \ --output-dir=/path/to/config-dir/ Configure Native Image with the Tracing Agent Collect Metadata with the Tracing Agent This generates the following files in the specified folder. jni-config.json reflect-config.json proxy-config.json resource-config.json reachability-metadata.json These files can be located at src/main/resources/META-INF/native-image . The native-image tool picks up configuration files located in the directory src/main/resources/META-INF/native-image . However, it is recommended that we place the files in subdirectories divided by groupId and artifactId , as shown below. src/main/resources/META-INF/native-image/{groupId}/{artifactId} native-image.properties When creating a native image, we call the following command.
mvn package -Dpackaging=native-image We should specify the timing of class initialization (build time or runtime), the command line options for the native-image tool (the same command line options work in the Maven/Gradle plugin), and the JVM arguments in the native-image.properties file. Indeed, these settings can be specified in pom.xml , but it is recommended that they be externalized. This is also explained in the metric entry, so some details will be left out. If needed, please check the metric entry. Send metrics from Micronaut native image applications to Azure Monitor | Microsoft Community Hub Build a Native Image application Building a native image application takes a long time (though it has got quicker over time). If building it for testing purposes, we strongly recommend enabling Quick Build and setting the optimization level with the -Ob option (although this will still take time). See below for more information. Maven plugin for GraalVM Native Image Gradle plugin for GraalVM Native Image Optimizations and Performance Test as a native image application Verify that this application works the same as a normal Java application. For example, call GET /api/hello?name=xxxx, GET /api/hello?name=, GET /api/hello , and POST /api/hello. Check if traces and logs are visible in Azure Monitor (application insights) When reviewing the traces table in Application Insights, it becomes evident that four records were added at 3:14 p.m. When checking traces… As can be seen in the traces table, the logs have indeed been added to the trace. Naturally, the occurrence times remain consistent. Summary I have outlined the process of writing to the traces table in Application Insights. However, it should be noted that some code is necessary to configure the Log Appender. Consequently, strictly speaking, zero-code instrumentation cannot be achieved.
However, the actual configuration is relatively minor, so implementation is not difficult. Send signals from Micronaut applications to Azure Monitor through zero-code instrumentation
The original post (Japanese) was written on 13 August 2025. Zero code instrumentationでMicronautアプリケーションからAzure Monitorにtraceやmetricを送信したい – Logico Inside This entry is part of the series of posts below. Please take a look for background information. Send signals from Micronaut native image applications to Azure Monitor | Microsoft Community Hub I received another question from a customer. I understand that I can get metrics and traces, but is it possible to send them to Azure Monitor (Application Insights) without using code? If you are not familiar with zero-code instrumentation, please check the following URL. Zero-code | OpenTelemetry The customer wondered if, when they specified only the dependencies and destinations, the dependencies would take care of everything else. To confirm this (and to provide a sample), we have prepared the following environment. As I wrote in the previous post, logs are dealt with in a different way depending on how they are used (IaaS, PaaS, etc.), so they are not included in this example. This example is a REST API application that can be used to find, add, change, and delete movie information. It uses PostgreSQL as a data store and sends information about how the system is performing to Azure Monitor, specifically Application Insights. You can find the code below. GitHub - anishi1222/micronaut-telemetry-movie: Zero code instrumentation (Azure Monitor, GraalVM Native Image, and Micronaut) Prerequisites Maven: 3.9.10 JDK: 21 Micronaut: 4.9.0 or later And we need to provision an instance of Azure Monitor (Application Insights) and PostgreSQL Flexible Server. Create an Archetype We can create an archetype using Micronaut’s CLI ( mn ) or Micronaut Launch. Micronaut Launch In this entry, we use application.yml instead of application.properties for application configuration. So, we need to specify the feature “yaml” so that we can include the dependencies for using yaml. The following features are needed when creating an archetype for this app.
graalvm management micrometer-azure-monitor azure-tracing yaml validation postgres jdbc-hikari data-jpa Dependencies The basics of sending traces and metrics are as described in the previous two entries. In this post, we want to obtain traces for HTTP and JDBC connections, so we will add the following two dependencies. <dependency> <groupId>io.micronaut.tracing</groupId> <artifactId>micronaut-tracing-opentelemetry-http</artifactId> </dependency> <dependency> <groupId>io.micronaut.tracing</groupId> <artifactId>micronaut-tracing-opentelemetry-jdbc</artifactId> </dependency> Additionally, we need to add this dependency to use the GraalVM Reachability Metadata Repository. The latest version is 0.11.0 as of 13 August, 2025. <dependency> <groupId>org.graalvm.buildtools</groupId> <artifactId>graalvm-reachability-metadata</artifactId> <version>0.11.0</version> </dependency> Add the GraalVM Maven plugin and enable the use of GraalVM Reachability Metadata obtained from the above dependency. This plugin lets us set build arguments using buildArg (in this example, the optimisation level is specified). We can also add it to native-image.properties , which the native-image tool (and the Maven/Gradle plugin) will read. <plugin> <groupId>org.graalvm.buildtools</groupId> <artifactId>native-maven-plugin</artifactId> <configuration> <metadataRepository> <enabled>true</enabled> </metadataRepository> <buildArgs combine.children="append"> <buildArg>-Ob</buildArg> </buildArgs> <quickBuild>true</quickBuild> </configuration> </plugin> Application configuration This app connects to a database and Azure Monitor, so we need the following information. The database the app connects to. Azure Monitor related information. 1) Database We specify data source information in application.yml . 2) Azure Monitor Set the connection string for Application Insights. Because of dependency issues, it is necessary to set the connection string in different locations for metrics and traces, which is a bit inconvenient.
However, it is recommended to pass it via environment variables to keep the configuration as common as possible. Here is a sample application.yml . micronaut: application: name: micronaut-telemetry-movie metrics: enabled: true binders: files: enabled: true jdbc: enabled: true jvm: enabled: true logback: enabled: true processor: enabled: true uptime: enabled: true web: enabled: true export: azuremonitor: enabled: true step: PT1M connectionString: ${AZURE_MONITOR_CONNECTION_STRING} datasources: default: driver-class-name: org.postgresql.Driver db-type: postgres url: ${JDBC_URL} username: ${JDBC_USERNAME} password: ${JDBC_PASSWORD} dialect: POSTGRES schema-generate: CREATE_DROP hikari: connection-test-query: SELECT 1 connection-init-sql: SELECT 1 connection-timeout: 10000 idle-timeout: 30000 auto-commit: true leak-detection-threshold: 2000 maximum-pool-size: 10 max-lifetime: 60000 transaction-isolation: TRANSACTION_READ_COMMITTED azure: tracing: connection-string: ${AZURE_MONITOR_CONNECTION_STRING} otel: exclusions: /health, /info, /metrics, /actuator/health, /actuator/info, /actuator/metrics For now, let’s build it as a Java application. Test as a Java application Make sure the application is running smoothly, that traces are being sent to Application Insights, and that metrics are being output. Now, run the application using the Tracing Agent and create the necessary configuration files.
# (1) Collect configuration files such as reflect-config.json $JAVA_HOME/bin/java \ -agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/{groupId}/{artifactId}/ \ -jar ./target/{artifactId}-{version}.jar # (2)-a Generate a trace file $JAVA_HOME/bin/java \ -agentlib:native-image-agent=trace-output=/path/to/trace-file.json \ -jar ./target/{artifactId}-{version}.jar # (2)-b Generate a reachability metadata file from the collected trace file native-image-configure generate \ --trace-input=/path/to/trace-file.json \ --output-dir=/path/to/config-dir/ Configure Native Image with the Tracing Agent Collect Metadata with the Tracing Agent This generates the following files in the specified folder. jni-config.json reflect-config.json proxy-config.json resource-config.json reachability-metadata.json These files can be located at src/main/resources/META-INF/native-image . The native-image tool picks up configuration files located in the directory src/main/resources/META-INF/native-image . However, it is recommended that we place the files in subdirectories divided by groupId and artifactId , as shown below. src/main/resources/META-INF/native-image/{groupId}/{artifactId} native-image.properties When creating a native image, we call the following command. mvn package -Dpackaging=native-image We should specify the timing of class initialization (build time or runtime), the command line options for the native-image tool (the same command line options work in the Maven/Gradle plugin), and the JVM arguments in the native-image.properties file. Indeed, these settings can be specified in pom.xml , but it is recommended that they be externalized. This is also explained in the metric entry, so some details will be left out. If needed, please check the metric entry.
Send metrics from Micronaut native image applications to Azure Monitor | Microsoft Community Hub

Build a Native Image application

Building a native image application takes a long time (though it has gotten quicker over time). If you are building for testing purposes, we strongly recommend enabling Quick Build and setting the optimization level with the -Ob option (although the build will still take time). See below for more information:

- Maven plugin for GraalVM Native Image
- Gradle plugin for GraalVM Native Image
- Optimizations and Performance

Test as a native image application

Let's check that the application works. First, populate the initial data with the following command, which adds three records:

```shell
curl -X PUT https://<container-apps-url>/api/movies
```

```json
{ "message": "Database initialized with default movies." }
```

Now let's verify that the three records exist:

```shell
curl https://<container-apps-url>/api/movies
```

```json
[
  {
    "id": 1,
    "title": "Inception",
    "releaseYear": 2010,
    "directors": "Christopher Nolan",
    "actors": "Leonardo DiCaprio, Joseph Gordon-Levitt, Elliot Page"
  },
  {
    "id": 2,
    "title": "The Shawshank Redemption",
    "releaseYear": 1994,
    "directors": "Frank Darabont",
    "actors": "Tim Robbins, Morgan Freeman, Bob Gunton"
  },
  {
    "id": 3,
    "title": "The Godfather",
    "releaseYear": 1972,
    "directors": "Francis Ford Coppola",
    "actors": "Marlon Brando, Al Pacino, James Caan"
  }
]
```

(1) Azure Monitor (Application Insights)

We should see images like the following in Application Insights.

(2) Metrics

We can list the available metrics with the API call GET /metrics.
```json
{
  "names": [
    "executor",
    "executor.active",
    "executor.completed",
    "executor.pool.core",
    "executor.pool.max",
    "executor.pool.size",
    "executor.queue.remaining",
    "executor.queued",
    "hikaricp.connections",
    "hikaricp.connections.acquire",
    "hikaricp.connections.active",
    "hikaricp.connections.creation",
    "hikaricp.connections.idle",
    "hikaricp.connections.max",
    "hikaricp.connections.min",
    "hikaricp.connections.pending",
    "hikaricp.connections.timeout",
    "hikaricp.connections.usage",
    "http.server.requests",
    "jvm.classes.loaded",
    "jvm.classes.unloaded",
    "jvm.memory.committed",
    "jvm.memory.max",
    "jvm.memory.used",
    "jvm.threads.daemon",
    "jvm.threads.live",
    "jvm.threads.peak",
    "jvm.threads.started",
    "jvm.threads.states",
    "logback.events",
    "process.cpu.time",
    "process.cpu.usage",
    "process.files.max",
    "process.files.open",
    "process.start.time",
    "process.uptime",
    "system.cpu.count",
    "system.cpu.usage",
    "system.load.average.1m"
  ]
}
```

However, because this is a native image application, we cannot get accurate information about the JVM. For example, invoking the API with GET /metrics/jvm.memory.max returns the following. What does -2 mean?

```json
{
  "name": "jvm.memory.max",
  "measurements": [
    {
      "statistic": "VALUE",
      "value": -2.0
    }
  ],
  "availableTags": [
    {
      "tag": "area",
      "values": [ "nonheap" ]
    },
    {
      "tag": "id",
      "values": [
        "runtime code cache (native metadata)",
        "runtime code cache (code and data)"
      ]
    }
  ],
  "description": "The maximum amount of memory in bytes that can be used for memory management",
  "baseUnit": "bytes"
}
```

To find out how much CPU is being used, run GET /metrics/process.cpu.usage, which returns this result:

```json
{
  "name": "process.cpu.usage",
  "measurements": [
    {
      "statistic": "VALUE",
      "value": 0.0017692156477295067
    }
  ],
  "description": "The \"recent cpu usage\" for the Java Virtual Machine process"
}
```

To add logs to the Azure Monitor "traces" table

Some of you might want to use the zero-code instrumentation approach described in the following entry, but currently you cannot.
Send logs from Micronaut native image applications to Azure Monitor | Microsoft Community Hub

This is because we cannot obtain the OpenTelemetry object needed to write to the Application Insights traces table, so it must be declared explicitly. The following example explicitly sets up the Appender in the MovieController constructor. How the Appender itself is configured is not repeated here, as it was explained in the earlier entry.

```java
@Inject
AzureTracingConfigurationProperties azureTracingConfigurationProperties;

private static final Logger logger = LoggerFactory.getLogger(MovieController.class);

public MovieController(AzureTracingConfigurationProperties azureTracingConfigurationProperties) {
    this.azureTracingConfigurationProperties = azureTracingConfigurationProperties;

    // Build an auto-configured OpenTelemetry SDK and point it at Azure Monitor
    AutoConfiguredOpenTelemetrySdkBuilder sdkBuilder = AutoConfiguredOpenTelemetrySdk.builder();
    AzureMonitorAutoConfigure.customize(sdkBuilder, azureTracingConfigurationProperties.getConnectionString());

    // Install the OpenTelemetry appender so Logback records reach the traces table
    OpenTelemetryAppender.install(sdkBuilder.build().getOpenTelemetrySdk());
    logger.info("OpenTelemetry configured for MovieController.");
}
```

Although explicit declaration is required, logs will be recorded in the traces table as long as this setting is in place.

Modernizing legacy Java project using GitHub Copilot App Modernization upgrade for Java
In this blog, we explore how the GitHub Copilot App Modernization – Upgrade for Java can streamline the process of modernizing legacy Java applications. To put the tool to the test, we selected the widely known Spring Boot Pet Clinic project, originally built with Java 8 and Spring Boot 2.x. Using GitHub Copilot Upgrade for Java’s modernization capabilities, we successfully upgraded the project to Java 21 and Spring Boot 3.4.x. This post highlights the upgrade journey, showcases the tool’s capabilities, and shares practical tips and lessons learned along the way.