# Modernizing Spring Framework Applications with GitHub Copilot App Modernization
Upgrading Spring Framework applications from version 5 to the latest 6.x line (including 6.2+) enables improved performance, enhanced security, alignment with modern Java releases, and full Jakarta namespace compatibility. The transition often introduces breaking API changes, updated module requirements, and dependency shifts. GitHub Copilot app modernization streamlines this upgrade by analyzing your project, generating targeted changes, and guiding you through the migration.

## Supported Upgrade Path

GitHub Copilot app modernization supports:

- Upgrading Spring Framework to 6.x, including 6.2+
- Migrating from javax to jakarta
- Aligning transitive dependencies and version constraints
- Updating build plugins and configurations
- Identifying deprecated or removed APIs
- Validating dependency updates and surfacing CVE issues

These capabilities align with the Microsoft Learn quickstart for upgrading Java projects with GitHub Copilot app modernization.

## Project Setup

Open your Spring Framework project in Visual Studio Code or IntelliJ IDEA with GitHub Copilot app modernization enabled. The tool works with Maven or Gradle projects and evaluates your existing Spring Framework version, Java version, imports, and build configuration.

## Project Analysis

When you trigger the upgrade, GitHub Copilot app modernization:

- Detects the current Spring Framework version
- Flags javax imports requiring Jakarta migration
- Identifies incompatible modules, libraries, and plugins
- Validates JDK compatibility requirements for Spring Framework 6.x
- Reviews transitive dependencies impacted by the update

This analysis provides the foundation for the upgrade plan generated next.

## Upgrade Plan Generation

GitHub Copilot app modernization produces a structured plan including:

- Updated Spring Framework version (6.x / 6.2+)
- Replacements for deprecated or removed APIs
- jakarta namespace updates
- Updated build plugins and version constraints
- JDK configuration adjustments

You can review the plan, modify version targets, and confirm actions before the tool applies them.

## Automated Transformations

After approval, GitHub Copilot app modernization applies automated changes such as:

- Updating Spring Framework module coordinates
- Rewriting imports from javax.* to jakarta.* (a sketch of the result appears after the next two sections)
- Updating libraries required for Spring Framework 6.x
- Adjusting plugin versions and build logic
- Recommending fixes for API changes

These transformations rely on OpenRewrite-based rules to modernize your codebase efficiently.

## Build Fix Iteration

Once changes are applied, the tool compiles your project and automatically responds to failures:

- Captures compilation errors
- Suggests targeted fixes
- Rebuilds iteratively

This loop continues until the project compiles with Spring Framework 6.x in place.

## Security & Behavior Checks

GitHub Copilot app modernization performs validation steps after the upgrade:

- Checks for CVEs in updated dependencies
- Identifies potential behavior changes introduced during the transition
- Offers optional fixes to address issues

This adds confidence before final verification.
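To make the import rewrite concrete, here is a minimal sketch of what a controller can look like after the javax-to-jakarta transformation. GreetingController is a hypothetical class used only for illustration, not part of any sample project:

```java
// A hypothetical Spring MVC controller after the upgrade. The namespace
// migration changes only the servlet import: javax.servlet.http.HttpServletRequest
// becomes jakarta.servlet.http.HttpServletRequest.
import jakarta.servlet.http.HttpServletRequest; // was: javax.servlet.http.HttpServletRequest

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingController {

    // The method body compiles unchanged; only the import moved namespaces.
    @GetMapping("/greeting")
    public ResponseEntity<String> greet(HttpServletRequest request) {
        return ResponseEntity.ok("Hello from " + request.getRequestURI());
    }
}
```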
## Expected Output

After a Spring Framework 5 → 6.x upgrade, you can expect:

- Updated module coordinates for Spring Framework 6.x / 6.2
- jakarta-aligned imports across the codebase
- Updated dependency versions aligned with the new Spring ecosystem
- Updated plugins and build tool configurations
- Modernized test stack (JUnit 5)
- A summary file detailing versions updated, code edits applied, dependencies changed, and items requiring manual review

## Developer Responsibilities

GitHub Copilot app modernization accelerates framework upgrade mechanics, but developers remain responsible for:

- Running full test suites
- Reviewing custom components, filters, and validation logic
- Revisiting security configurations and reactive vs. servlet designs
- Checking integration points and application semantics post-migration

The tool handles the mechanical modernization work so you can focus on correctness, runtime behavior, and quality assurance.

## Learn More

For prerequisites, setup steps, and the complete Java upgrade workflow, refer to the Microsoft Learn guide: Upgrade a Java Project with GitHub Copilot App Modernization

Install GitHub Copilot app modernization for VS Code and IntelliJ IDEA
# Modernizing Spring Boot Applications with GitHub Copilot App Modernization

Upgrading Spring Boot applications from 2.x to the latest 3.x releases introduces significant changes across the framework, dependencies, and Jakarta namespace. These updates improve long-term support, performance, and compatibility with modern Java platforms, but the migration can surface breaking API changes and dependency mismatches. GitHub Copilot app modernization helps streamline this transition by analyzing your project, generating an upgrade plan, and applying targeted updates.

## Supported Upgrade Path

GitHub Copilot app modernization supports upgrading Spring Boot applications to Spring Boot 3.5, including:

- Updating Spring Framework libraries to 6.x
- Migrating from javax to jakarta
- Aligning dependency versions with Boot 3.x
- Updating plugins and starter configurations
- Adjusting build files for the required JDK level
- Validating dependency updates and surfacing CVE issues

These capabilities complement the Microsoft Learn quickstart for upgrading Java projects using GitHub Copilot app modernization.

## How GitHub Copilot app modernization helps

When you open a Spring Boot 2.x project in Visual Studio Code or IntelliJ IDEA and initiate an upgrade, GitHub Copilot app modernization performs the following stages.

### Project Analysis

- Detects your current Spring Boot version
- Identifies incompatible starters, libraries, and plugins
- Flags javax.* imports requiring Jakarta migration
- Evaluates your build configuration and JDK requirements

### Upgrade Plan Generation

The tool produces an actionable plan that outlines:

- New Spring Boot parent version
- Updated Spring Framework and related modules
- Required namespace changes from javax.* to jakarta.*
- Build plugin updates
- JDK configuration alignment for Boot 3

You can review and adjust the plan before applying changes.

### Automated Transformations

GitHub Copilot app modernization applies targeted changes such as:

- Updating spring-boot-starter-parent to 3.5.x
- Migrating imports to jakarta.*
- Updating dependencies and BOM versions
- Rewriting removed or deprecated APIs
- Aligning test dependencies (e.g., JUnit 5; a sketch follows at the end of this post)

### Build / Fix Iteration

The agent automatically:

- Builds the project
- Captures failures
- Suggests fixes
- Applies updates
- Rebuilds until the project compiles successfully

This loop continues until all actionable issues are addressed.

### Security & Behavior Checks

As part of the upgrade, the tool can:

- Validate CVEs introduced by dependency version changes
- Surface potential behavior changes
- Recommend optional fixes

## Expected Output

After running the upgrade for a Spring Boot 2.x project, you should expect:

- An updated Spring Boot parent in Maven or Gradle
- Spring Framework 6.x and Jakarta-aligned modules
- Updated starter dependencies and plugin versions
- Rewritten imports from javax.* to jakarta.*
- An updated testing stack
- A summary file detailing:
  - Versions updated
  - Code edits applied
  - Dependencies changed
  - CVE results
  - Remaining manual review items

## Developer Responsibilities

GitHub Copilot app modernization accelerates technical migration tasks, but final validation still requires developer review, including:

- Running the full test suite
- Reviewing custom filters, security configuration, and web components
- Re-validating integration points
- Confirming application behavior across runtime environments

The tool handles mechanical upgrade work so you can focus on correctness, quality, and functional validation.
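To illustrate the test-stack alignment referenced above, here is a minimal sketch of a test after a JUnit 4 to JUnit 5 migration. OrderServiceTest and OrderService are hypothetical names used only so the example is self-contained:

```java
// Before the upgrade this test would import org.junit.Test and
// org.junit.Assert; after alignment it uses the JUnit 5 (Jupiter) API.
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class OrderServiceTest {

    @Test
    void totalIsSumOfLineItems() {
        OrderService service = new OrderService();
        assertEquals(30, service.total(10, 20));
    }
}

// A stand-in class so the sketch compiles on its own.
class OrderService {
    int total(int a, int b) {
        return a + b;
    }
}
```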
## Learn more

For setup, prerequisites, and the broader Java upgrade workflow, refer to the official Microsoft Learn guide: Quickstart: Upgrade a Java Project with GitHub Copilot App Modernization

Install GitHub Copilot app modernization for VS Code and IntelliJ IDEA
# Upgrade your Java JDK (8, 11, 17, 21, or 25) with GitHub Copilot App Modernization

Developers modernizing Java applications often need to upgrade the Java Development Kit (JDK), update frameworks, align dependencies, or migrate older stacks such as Java EE. GitHub Copilot app modernization dramatically speeds up this process by analyzing your project, identifying upgrade blockers, and generating targeted changes. This post highlights supported upgrade paths and what you can expect when using GitHub Copilot app modernization—optimized for search discoverability rather than deep tutorial content. For complete, authoritative guidance, refer to the official Microsoft Learn quickstart.

## Supported Upgrade Scenarios

GitHub Copilot app modernization supports upgrading:

- Java Development Kit (JDK) to versions 8, 11, 17, 21, or 25
- Spring Boot up to 3.5
- Spring Framework up to 6.2+
- Java EE → Jakarta EE (up to Jakarta EE 10)
- JUnit
- Third-party dependencies to specified versions
- Ant → Maven build migrations

For the full capabilities list, see the Microsoft Learn quickstart.

## Prerequisites (VS Code or IntelliJ)

To use GitHub Copilot app modernization, you'll need a GitHub account with GitHub Copilot (Free tier, Pro, Pro+, Business, or Enterprise), plus one of the following IDE setups.

Visual Studio Code:

- Version 1.101+
- GitHub Copilot extension
- GitHub Copilot app modernization extension
- Restart after installation

IntelliJ IDEA:

- Version 2023.3+
- GitHub Copilot plugin 1.5.59+
- Restart after installation
- Recommended: Auto-approve MCP Tool Annotations under Tools > GitHub

## Project Requirements

- Java project using Maven or Gradle
- Git-managed
- Maven: access to public Maven Central (if using Maven)
- Gradle: wrapper version 5+; Kotlin DSL supported
- VS Code setting: "Tools enabled" set to true if controlled by your org

## Selecting a Java Project to Upgrade

Open any Java project in Visual Studio Code or IntelliJ IDEA. Optional sample projects:

- Maven: uportal-messaging
- Gradle: docraptor-java

Once open, launch GitHub Copilot app modernization using Agent Mode.

## Running an Upgrade (Example: Java 8 → Java 21)

Open GitHub Copilot Chat, switch to Agent Mode, and run a prompt such as:

```
Upgrade this project to Java 21
```

You'll receive an upgrade plan covering:

- JDK version updates
- Build file changes (Maven/Gradle)
- Dependency version adjustments
- Framework upgrade paths, if relevant

### Automated Transformations

GitHub Copilot app modernization applies changes using OpenRewrite-based transformations.

### Dynamic Build / Fix Loop

The agent repeatedly builds the project, detects failures, applies fixes, and retries until the project builds successfully.

### Security & Behavior Checks

- Detects CVEs in upgraded dependencies
- Flags potential behavior changes
- Offers optional fixes

### Final Upgrade Summary

Generated as a markdown file containing:

- Updated JDK level
- Dependencies changed
- Code edits made
- Any remaining CVEs or warnings

## What You Can Expect in a JDK Upgrade

Typical outcomes from upgrading Java 8 → Java 21:

- Updated build configuration (maven.compiler.release → 21)
- Removal or replacement of deprecated JDK APIs (illustrated in the sketch below)
- Updated library versions for Java 21 compatibility
- Surfaced warnings for manual review
- A successfully building project with modern JDK settings

GitHub Copilot app modernization accelerates these updates while still leaving space for developer review of runtime or architectural changes.
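As a rough illustration of the kind of change such an upgrade produces, the sketch below shows a deprecated-API replacement plus newer language features that become available. It is a generic example assuming a Java 8 to 21 move, not actual output from the tool:

```java
import java.util.List;

public class JdkUpgradeExample {
    public static void main(String[] args) {
        // Before: Integer port = new Integer("8080");
        // The Integer(String) constructor has been deprecated since Java 9
        // (and flagged for removal), so the upgrade suggests the factory method.
        Integer port = Integer.valueOf("8080");

        // After moving to Java 21 the code can also adopt newer APIs,
        // e.g. immutable collection factories (Java 9+) and var (Java 10+).
        List<String> hosts = List.of("localhost", "example.internal");
        for (var host : hosts) {
            System.out.println(host + ":" + port);
        }
    }
}
```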
## Learn More

For the complete, authoritative upgrade workflow—covering setup, capabilities, and the full end-to-end process—visit: ➡ Quickstart: Upgrade a Java project with GitHub Copilot app modernization (Microsoft Learn)

Install GitHub Copilot app modernization for VS Code and IntelliJ IDEA
# Migrating Application Credentials to Azure Key Vault with GitHub Copilot App Modernization

Storing secrets directly in applications or configuration files increases operational risk. Migrating to Azure Key Vault centralizes secret management, supports rotation, and removes embedded credentials from application code. GitHub Copilot app modernization accelerates this process by identifying credential usage areas and generating changes for Key Vault integration.

## What This Migration Covers

GitHub Copilot app modernization helps with:

- Detecting secrets hard-coded in source files, config files, or environment variables
- Recommending retrieval patterns using Azure Key Vault SDKs
- Updating application code to load secrets from Key Vault
- Preparing configuration updates to remove stored credentials
- Surfacing dependency, version, and API adjustments required for Key Vault usage

## Project Analysis

Once the project is opened in Visual Studio Code or IntelliJ IDEA, GitHub Copilot app modernization analyzes:

- Hard-coded credentials: passwords, tokens, client secrets, API keys
- Legacy configuration patterns using .properties, .yaml, or environment variables
- Azure SDK usage and required upgrades for Key Vault integration
- Areas requiring secure retrieval or replacement with a managed identity

## Migration Plan Generation

The tool creates a step-by-step migration plan including:

- Introducing Key Vault client libraries
- Mapping existing credential variables to Key Vault secrets
- Updating configuration loading logic to retrieve secrets at runtime
- Integrating Managed Identity authentication if applicable
- Removing unused credential fields from code and configuration

## Automated Transformations

GitHub Copilot app modernization applies targeted changes:

- Rewrites code retrieving credentials from files or constants
- Generates Key Vault retrieval patterns using SecretClient (a sketch appears at the end of this post)
- Updates build dependencies to current Azure SDK versions
- Removes unused configuration entries and environment variables

## Build & Fix Iteration

The project is rebuilt and validated. The tool:

- Fixes constructor changes related to updated clients
- Resolves missing dependency versions
- Corrects updated method signatures for Key Vault API calls
- Rebuilds until no actionable errors remain

## Security & Behavior Checks

The tool surfaces:

- CVEs introduced by new or updated libraries
- Behavior changes tied to lazy loading of secrets at runtime
- Optional fixes or alternative patterns if Key Vault integration affects existing workflows

## Expected Output

After modernization:

- Credentials removed from source and config files
- Application retrieves secrets from Azure Key Vault
- Updated Azure SDK versions aligned with Key Vault
- A summary file detailing code changes, dependency updates, and review items

## Developer Responsibilities

Developers should:

- Provision Key Vault resources and assign required access policies
- Validate permissions through Managed Identity or service principals
- Test application startup, error handling, and rotation scenarios
- Review semantic impacts on components relying on early secret loading

Refer to the Microsoft Learn guide on upgrading Java projects with GitHub Copilot app modernization for foundational workflow details.

## Learn more

- Predefined tasks for GitHub Copilot app modernization
- Apply a predefined task
- Install GitHub Copilot app modernization for VS Code and IntelliJ IDEA
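To close, here is the SecretClient sketch referenced earlier: a minimal example assuming the azure-security-keyvault-secrets and azure-identity libraries. The vault URL and secret name are placeholders:

```java
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.security.keyvault.secrets.SecretClient;
import com.azure.security.keyvault.secrets.SecretClientBuilder;

public class KeyVaultSecretExample {
    public static void main(String[] args) {
        // Build a client against your vault; DefaultAzureCredential resolves
        // a managed identity on Azure or developer credentials locally.
        SecretClient client = new SecretClientBuilder()
                .vaultUrl("https://<your-vault-name>.vault.azure.net")
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildClient();

        // The secret is fetched at runtime instead of living in a config file.
        String dbPassword = client.getSecret("db-password").getValue();
        System.out.println("Retrieved secret of length " + dbPassword.length());
    }
}
```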
# Modernizing Applications by Migrating Code to Use Managed Identity with Copilot App Modernization

Migrating application code to use Managed Identity removes hard-coded secrets, reduces operational risk, and aligns with modern cloud security practices. Applications authenticate directly with Azure services without storing credentials. GitHub Copilot app modernization streamlines this transition by identifying credential usage patterns, updating code, and aligning dependencies for Managed Identity flows.

## Supported Migration Steps

GitHub Copilot app modernization helps accelerate:

- Replacing credential-based authentication with Managed Identity authentication
- Updating SDK usage to token-based flows
- Refactoring helper classes that build credential objects
- Surfacing libraries or APIs that require alternative authentication approaches
- Preparing build configuration changes needed for Managed Identity integration

## Migration Analysis

Open the project in Visual Studio Code or IntelliJ IDEA. GitHub Copilot app modernization analyzes:

- Locations where secrets, usernames, passwords, or connection strings are referenced
- Service clients using credential constructors or static credential factories
- Environment-variable-based authentication workarounds
- Dependencies and SDK versions required for Managed Identity authentication

The analysis outlines upgrade blockers and the changes required for cloud-native authentication.

## Migration Plan Generation

GitHub Copilot app modernization produces a migration plan containing:

- Replacement of hard-coded credentials with Managed Identity authentication patterns
- Version updates for Azure libraries aligning with Managed Identity support
- Adjustments to application configuration to remove unnecessary secrets

Developers can review and adjust the plan before applying it.

## Automated Transformations

GitHub Copilot app modernization applies changes (a sketch of the resulting pattern appears at the end of this post):

- Rewrites code that initializes clients using username/password or connection strings
- Introduces Managed Identity-friendly constructors and token credential patterns
- Updates imports, method signatures, and helper utilities
- Cleans up configuration files referencing outdated credential flows

## Build & Fix Iteration

The tool rebuilds the project, identifies issues, and applies targeted fixes for:

- Compilation errors from removed credential classes
- Incorrect parameter types or constructors
- Dependencies requiring updates for Managed Identity compatibility

## Security & Behavior Checks

GitHub Copilot app modernization:

- Validates CVEs introduced through dependency updates
- Surfaces behavior changes caused by new authentication flows
- Offers optional fixes for dependency vulnerabilities

## Expected Output

A migrated codebase using Managed Identity, with:

- Updated authentication logic
- Removed credential references
- Updated SDKs and dependencies
- A summary file listing code edits, dependency changes, and items requiring manual review

## Developer Responsibilities

Developers should:

- Validate identity access on Azure resources
- Reconfigure role assignments for system-assigned or user-assigned managed identities
- Test functional behavior across environments
- Review integration points dependent on identity scopes and permissions

Learn about full upgrade workflows in the Microsoft Learn guide for upgrading Java projects with GitHub Copilot app modernization.

## Learn more

- Predefined tasks for GitHub Copilot app modernization
- Apply a predefined task
- Install GitHub Copilot app modernization for VS Code and IntelliJ IDEA
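Here is the token-based sketch referenced earlier: a minimal example that replaces a connection-string-based Storage client with DefaultAzureCredential. The endpoint is a placeholder, and Storage is just one illustrative service among many that support this pattern:

```java
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.storage.blob.BlobServiceClient;
import com.azure.storage.blob.BlobServiceClientBuilder;

public class ManagedIdentityExample {
    public static void main(String[] args) {
        // Before: .connectionString("DefaultEndpointsProtocol=...;AccountKey=...")
        // After: token-based auth; on Azure, DefaultAzureCredential picks up
        // the app's managed identity, so no key or secret is stored anywhere.
        BlobServiceClient client = new BlobServiceClientBuilder()
                .endpoint("https://<your-storage-account>.blob.core.windows.net")
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildClient();

        client.listBlobContainers()
                .forEach(container -> System.out.println(container.getName()));
    }
}
```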
# Modernizing Java EE Applications to Jakarta EE with GitHub Copilot App Modernization

Migrating a Java EE application to Jakarta EE is now a required step, as the ecosystem has fully transitioned to the new jakarta.* namespace. This migration affects servlet APIs, persistence, security, messaging, and frameworks built on top of the Jakarta specifications. The changes are mechanical but widespread, and manual migration is slow, error-prone, and difficult to validate at scale. GitHub Copilot app modernization accelerates this process by analyzing the project, identifying required namespace and dependency updates, and guiding developers through targeted upgrade steps.

## Supported Upgrade Path

GitHub Copilot app modernization supports:

- Migrating Java EE applications to Jakarta EE (up to Jakarta EE 10)
- Updating javax.* imports to jakarta.*
- Aligning dependencies and application servers with Jakarta EE 10 requirements
- Updating build plugins, BOMs, and libraries impacted by namespace migration
- Fixing compilation issues and surfacing API incompatibilities
- Detecting dependency CVEs after migration

These capabilities complement the Microsoft Learn guide for upgrading Java projects with GitHub Copilot app modernization.

## Getting Started

Open your Java EE project in Visual Studio Code or IntelliJ IDEA with GitHub Copilot app modernization enabled. Copilot evaluates the project structure, build files, and frameworks in use, and introduces a migration workflow tailored to Jakarta EE.

## Project Analysis

The migration begins with a full project scan that:

- Identifies Java EE libraries (javax.*)
- Detects frameworks depending on older EE APIs (Servlet, JPA, JMS, Security)
- Flags incompatible versions of application servers and dependencies
- Determines JDK constraints for Jakarta EE 10 compatibility
- Analyzes build configuration (Maven/Gradle) and transitive dependencies

This analysis forms the basis for the generated migration plan.

## Migration Plan Generation

GitHub Copilot app modernization generates a clear, actionable plan outlining:

- Required namespace transitions from javax.* to jakarta.*
- Updated dependency coordinates aligned with Jakarta EE 10
- Plugin version updates
- Adjustments to JDK settings if needed
- Additional changes for frameworks relying on legacy EE APIs

You can review and adjust versions or library targets before applying changes.

## Automated Transformations

After approving the plan, Copilot performs transformation steps (a before/after sketch follows the next two sections):

- Rewrites imports from javax.* to jakarta.*
- Updates dependencies to Jakarta EE 10–compatible coordinates
- Applies required framework-level changes (JPA, Servlet, Bean Validation, JAX-RS, CDI)
- Updates plugin versions aligned with Jakarta EE–based libraries
- Converts removed or relocated APIs with recommended replacements

These transformations rely on OpenRewrite-based rules surfaced through Copilot app modernization.

## Build Fix Iteration

Copilot iterates through a build-and-fix cycle:

- Runs the build
- Captures compilation errors introduced by the migration
- Suggests targeted fixes
- Applies changes
- Rebuilds until the project compiles successfully

This loop eliminates hours or days of manual mechanical migration work.

## Security & Behavior Checks

After a successful build, Copilot performs additional validation:

- Flags CVEs introduced by updated or newly resolved dependencies
- Surfaces potential behavior changes from updated APIs
- Offers optional fixes for dependency vulnerabilities

These checks ensure the migrated application is secure and stable before runtime verification.
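The before/after sketch referenced above: a JPA entity whose code is untouched by the migration apart from its imports. Customer is a hypothetical entity used only for illustration:

```java
// Before the migration these imports were javax.persistence.*;
// the entity body itself is unchanged by the jakarta rewrite.
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;

@Entity
public class Customer {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```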
## Expected Output

A Jakarta EE migration with GitHub Copilot app modernization results in:

- Updated imports from javax.* to jakarta.*
- Dependencies aligned with Jakarta EE 10
- Updated Maven or Gradle plugin and library versions
- Rewritten API usage where needed
- Updated tests and validation logic if required
- A summary file containing:
  - Versions updated
  - Code edits applied
  - Dependency changes
  - CVE results
  - Items requiring manual developer review

## Developer Responsibilities

While GitHub Copilot app modernization accelerates the mechanical upgrade, developers remain responsible for:

- Running the full application test suite
- Reviewing security, validation, and persistence behavior changes
- Updating application server configuration (if applicable)
- Re-verifying integrations with messaging, REST endpoints, and persistence providers
- Confirming semantic correctness post-migration

Copilot focuses on the mechanical modernization tasks so developers can concentrate on validating runtime behavior and business logic.

## Learn More

For setup prerequisites and full upgrade workflow details, refer to the Microsoft Learn guide: Quickstart: Upgrade a Java Project with GitHub Copilot App Modernization | Microsoft Learn
# Disciplined Guardrail Development in Enterprise Applications with GitHub Copilot

## What Is Disciplined Guardrail-Based Development?

In AI-assisted software development, approaches like Vibe Coding—which prioritize momentum and intuition—often fail to ensure code quality and maintainability. To address this, disciplined guardrail-based development introduces structured rules ("guardrails") that guide AI systems during coding and maintenance tasks, ensuring consistent quality and reliability.

To get AI (LLMs) to generate appropriate code, developers must provide clear and specific instructions. Two key elements are essential:

- What to build – clarifying requirements and breaking down tasks
- How to build it – defining the application architecture

How these two elements are handled depends on the development methodology or process being used.

## How to Set Up Disciplined Guardrails in GitHub Copilot

To implement disciplined guardrail-based development with GitHub Copilot, two key configuration features are used.

### 1. Custom Instructions (.github/copilot-instructions.md)

This file allows you to define persistent instructions that GitHub Copilot will always refer to when generating code.

- Purpose: Establish coding standards, architectural rules, naming conventions, and other quality guidelines.
- Best practice: Instead of placing all instructions in a single file, split them into multiple modular files and reference them accordingly. This improves maintainability and clarity.
- Example use: You might define rules like using camelCase for variables, enforcing error boundaries in React, or requiring TypeScript for all new code.

https://docs.github.com/en/copilot/how-tos/configure-custom-instructions/add-repository-instructions

### 2. Chat Modes (.github/chatmodes/*.chatmode.md)

These files define specialized chat modes tailored to specific tasks or workflows.

- Purpose: Customize Copilot's behavior for different development contexts (e.g., debugging, writing tests, refactoring).
- Structure: Each .chatmode.md file includes metadata and instructions that guide Copilot's responses in that mode.
- Example use: A debug.chatmode.md might instruct Copilot to focus on identifying and resolving runtime errors, while a test.chatmode.md could prioritize generating unit tests with specific frameworks.

https://code.visualstudio.com/docs/copilot/customization/custom-chat-modes

The sections below walk through how to create each file.

## #1: Custom Instructions

With custom instructions, you can define commands that are always provided to GitHub Copilot. The prepared files are always referenced during chat sessions and passed to the LLM (this can also be confirmed from the chat history). One important note: split the content into several files and include links to those files within the .github/copilot-instructions.md file, because a single file becomes too long if everything is written in it.

There are mainly two types of content that should be described in custom instructions:

A: Development Process (≒ outcome + creation method)

- What documents or code will be created: requirements specification, design documents, task breakdown tables, implementation code, etc.
- In what order and by whom they will be created: for example, proceed in the order of requirements definition → design → task breakdown → coding.

B: Application Architecture

- How will the outcomes defined in A be created?
- What technology stack and component structure will be used?

A concrete example of copilot-instructions.md is shown below.
```markdown
# Development Rules

## Architecture
- When performing design and coding tasks, always refer to the following architecture documents and strictly follow them as rules.

### Product Overview
- Document the product overview in `.github/architecture/product.md`

### Technology Stack
- Document the technologies used in `.github/architecture/techstack.md`

### Coding Standards
- Document coding standards in `.github/architecture/codingrule.md`

### Project Structure
- Document the project directory structure in `.github/architecture/structure.md`

### Glossary (Japanese-English)
- Document the list of terms used in the project in `.github/architecture/dictionary.md`

## Development Flow
- Follow a disciplined development flow and execute the following four stages in order (proceed to the next stage only after completing the current one):
  1. Requirement Definition
  2. Design
  3. Task Breakdown
  4. Coding

### 1. Requirement Definition
- Document requirements in `docs/[subsystem_name]/[business_name]/requirement.md`
- Use `requirement.chatmode.md` to define requirements
- Focus on clarifying objectives, understanding the current situation, and setting success criteria
- Once requirements are defined, obtain user confirmation before proceeding to the next stage

### 2. Design
- Document design in `docs/[subsystem_name]/[business_name]/design.md`
- Use `design.chatmode.md` to define the design
- Define UI, module structure, and interface design
- Once the design is complete, obtain user confirmation before proceeding to the next stage

### 3. Task Breakdown
- Document tasks in `docs/[subsystem_name]/[business_name]/tasks.md`
- Use `tasks.chatmode.md` to define tasks
- Break down tasks into executable units and set priorities
- Once task breakdown is complete, obtain user confirmation before proceeding to the next stage

### 4. Coding
- Implement code under `src/[subsystem_name]/[business_name]/`
- Perform coding task by task
- Update progress in `docs/[subsystem_name]/[business_name]/tasks.md`
- Report to the user upon completion of each task
```

Note: The only file that is always sent to the LLM is `copilot-instructions.md`. Documents linked from there (such as `product.md` or `techstack.md`) are not guaranteed to be read by the LLM. That said, a reasonably capable LLM will usually review these files before proceeding with the work. If the LLM does not properly reference each file, you may explicitly add these architecture documents to the context. Another approach is to instruct the LLM to review these files in the chat mode settings, which are described later.

There are various "schools of thought" regarding application architecture, and it is still an ongoing challenge to determine exactly what should be defined and what documents should be created. The choice of architecture depends on factors such as the business context, development scale, and team structure, so it is difficult to prescribe a one-size-fits-all approach. That said, as a general guideline, it is desirable to summarize the following:

- Product Overview: Overview of the product, service, or business, including its overall characteristics
- Technology Stack: What technologies will be used to develop the application?
- Project Structure: How will folders and directories be organized during development?
- Module Structure: How will the application be divided into modules?
- Coding Rules: Rules for handling exceptions, naming conventions, and other coding practices

Writing all of this from scratch can be challenging.
A practical approach is to create template information with the help of Copilot and then refine it. Specifically, you can:

- Use tools like M365 Copilot Researcher to create content based on general principles
- Analyze a prototype application and have the architecture information summarized (using Ask mode or Edit mode, feed the solution files to a capable LLM for analysis)

However, in most cases, the output cannot be used as-is:

- The structure may not be analyzed correctly (hallucinations may occur)
- Project-specific practices and rules may not be captured

Use the generated content as a starting point, and then refine it to create architecture documentation tailored to your own project.

When creating architecture documents for enterprise-scale application development, a useful approach is to distinguish between the foundational parts and the individual application parts. Discipline-based guardrail development is particularly effective when building multiple applications in a "cookie-cutter" style on top of a common foundation. A clear example of this is Data-Oriented Architecture (DOA). In DOA, individual business applications are built on top of a shared database that serves as the overall common foundation. In this case, the foundational parts (the database layer) should not be modified arbitrarily by individual developers. Instead, focus on how to standardize the development of the individual application parts while ensuring consistency. Architecture documentation should be organized with this distinction in mind, emphasizing the uniformity of application-level development built upon the stable foundation.

## #2: Chat Modes

By default, GitHub Copilot provides three chat modes: Ask, Edit, and Agent. However, by creating files under .github/chatmodes/*.chatmode.md, you can customize the Agent mode to create chat modes tailored for specific tasks. Specifically, you can configure the following three aspects, which lets you perform a specific task without having to manually change the model or tools, or write detailed instructions each time:

- model: Specify the default LLM to use (note: the user can still manually switch to another LLM if desired)
- tools: Restrict which tools can be used (note: the user can still manually select other tools if desired)
- custom instructions: Provide custom instructions specific to this chat mode

A concrete example of .github/chatmodes/*.chatmode.md is shown below.

```markdown
---
description: This mode is used for requirement definition tasks.
model: Claude Sonnet 4
tools: ['changes', 'codebase', 'editFiles', 'fetch', 'findTestFiles', 'githubRepo', 'new', 'openSimpleBrowser', 'runCommands', 'search', 'searchResults', 'terminalLastCommand', 'terminalSelection', 'usages', 'vscodeAPI', 'mssql_connect', 'mssql_disconnect', 'mssql_list_servers', 'mssql_show_schema']
---

# Requirement Definition Mode

In this mode, requirement definition tasks are performed. Specifically, the project requirements are clarified, and necessary functions and specifications are defined. Based on instructions or interviews with the user, document the requirements according to the format below. If any specifications are ambiguous or unclear, Copilot should ask the user questions to clarify them.
## File Storage Location
Save the requirement definition file in the following location:
- Save as `requirement.md` under the directory `docs/[subsystem_name]/[business_name]/`

## Requirement Definition Format
While interviewing the user, document the following items in the Markdown file:
- **Subsystem Name**: The name of the subsystem to which this business belongs
- **Business Name**: The name of the business
- **Overview**: A summary of the business
- **Use Cases**: Clarify who uses this business, when/under what circumstances, and for what purpose, using the following structure:
  - **Who (Persona)**: User or system roles
  - **When/Under What Circumstances (Scenario)**: Timing when the business is executed
  - **Purpose (Goal)**: Objectives or expected outcomes of the business
- **Importance**: The importance of the business (e.g., High, Medium, Low)
- **Acceptance Criteria**: Conditions that must be satisfied for the requirement to be considered met
- **Status**: Current state of the requirement (e.g., In Progress, Completed)

## After Completion
- Once requirement definition is complete, obtain user confirmation and proceed to the next stage (Design).
```

### Tips for Creating Chat Modes

Here are some tips for creating custom chat modes:

- Align with the development process: Create chat modes based on the workflow and the deliverables.
- Instruct the LLM to ask the user when unsure: Direct the LLM to request clarification from the user if any information is missing.
- Clarify what deliverables to create and where to save them: Make it explicit which outputs are expected and their storage locations.

The second point is particularly important. Many AIs (LLMs) tend to respond to user prompts in a sycophantic manner (known as sycophancy). As a result, they may fill in unspecified requirements or perform tasks that were not requested, often with the intention of being helpful. The key difference between Ask/Edit modes and Agent mode is that Agent mode allows the LLM to proactively ask questions and engage in dialogue with the user. However, unless the user explicitly includes a prompt such as "ask if you don't know," the AI rarely initiates questions on its own. By creating a custom chat mode and instructing the LLM to "ask the user when unsure," you can fully leverage the benefits of Agent mode.

### About Tools

You can easily check tool names from the list of available tools in the command palette. Alternatively, it can be convenient to open the custom chat mode file and specify the tool configuration there. You can specify not only MCP server functionality but also built-in tools and Copilot Extensions.

### Example of Actual Operation

An example interaction when using this chat mode looks like this:

- The LLM behaves according to the custom instructions defined in the chat mode.
- When you answer questions from GitHub Copilot, the LLM uses that information to reason and proceed with the task.
- However, the output is not guaranteed to be correct (hallucinations may occur), so a human should review the output and make any necessary corrections before committing.

The basic approach to disciplined guardrail-based development has been covered above.
In actual business application development, it is also helpful to understand the following two points:

- Referencing the database schema
- Integrated management of design documents and implementation code (important)

## Reading the Database Schema

In business application development, requirements definition and functional design are often based on the schema information of entities. There are two main ways to let the LLM read schema information:

1. Dynamically read the schema from a development/test DB server using MCP or similar tools.
2. Include a file containing schema information within the project and read from it.

For the first option, a development/test database can be prepared, and schema information can be read via an MCP server or Copilot Extensions. For SQL Server or Azure SQL Database, an MCP server is available, but its setup can be cumbersome, so using Copilot Extensions is often easier. Although this approach is often seen online, it is not recommended for the following reasons:

- Setting up an MCP server or Copilot Extensions can be cumbersome (installation, connection string management, etc.)
- It is time-consuming (the LLM needs schema information → reads the schema → writes code based on it)

Connecting to a DB server via MCP or similar tools is useful for scenarios such as "querying a database in natural language" for non-engineers performing data analysis. However, if the goal is simply to obtain the schema information of entities needed for business application development, the method described below is much simpler.

### Storing Schema Information Within the Project

Place a file containing the schema information inside the project, in any of the following formats, and write custom instructions so that development refers to this file:

- DDL (full CREATE DATABASE scripts)
- O/R mapper files (e.g., Entity Framework context files)
- Text files documenting schema information, etc.

DDL files are difficult for humans to read, but AI (LLMs) can read and accurately understand them. In .NET + SQL development, it is recommended to include both the DDL and EF O/R mapper files. Additionally, if you include links to these files in your architecture documents and chat mode instructions, the LLM can generate code while understanding the schema with high accuracy.

## Integrated Management of Design Documents and Implementation Code

Disciplined guardrail-based development with LLMs has made it practical to synchronize and manage design documents and implementation code together—something that was traditionally very difficult. In long-standing systems, it is common for old design documents to become largely useless: during maintenance, code changes are often prioritized, updating and maintaining design documents tends to be neglected, and a significant divergence grows between design documents and the actual code.

For these reasons, the following have been considered best practices (though often not followed in reality):

- Limit requirements and external design documents to the minimum necessary.
- Do not create internal design documents; instead, document within the code itself.
- Always update design documents before making changes to the implementation code.

When using LLMs, guardrail-based development makes it easier to enforce a "write the documentation first" workflow. Following the flow of defining specifications, updating the documents, and then writing code also helps the LLM generate appropriate code more reliably.
Even if code is written first, LLM-assisted code analysis can significantly reduce the effort required to update the documentation afterward. However, the following points should be noted:

- Create and manage design documents as text files, not Word, Excel, or PowerPoint.
- Use text-based technologies like Mermaid for diagrams.
- Clearly define how design documents correspond to the code.

The last point is especially important: align the structure of requirements and design documents with the structure of the implementation code. For example:

- Place design documents directly alongside the implementation code.
- Align folder structures, e.g., /docs and /src.

Information about grouping methods and folder mapping should be explicitly included in the custom instructions.

## Conclusion of Disciplined Guardrail-Based Development with GitHub Copilot

Formalizing and applying guardrails:

- Define the development flow and architecture documents in .github/copilot-instructions.md using split references.
- Prepare .github/chatmodes/* for each development phase, enforcing "ask the AI if anything is unclear."

Synchronization of documents and implementation code:

- Update docs first → use the diff as the basis for implementation (doc-first).
- Keep docs in text format (Markdown/Mermaid).
- Fix the folder correspondence between /docs and /src.

Handling schemas:

- Store DDL/O-R mapper files (e.g., EF) in the repository and have the LLM reference them.
- Minimize dynamic DB connections, prioritizing speed, reproducibility, and security.

This disciplined guardrail-based development technique is an AI-assisted approach that significantly improves the quality, maintainability, and team efficiency of enterprise business application development. Adapt it appropriately to each project to maximize productivity in application development.
# 🚀 Bring Your Own License (BYOL) Support for JBoss EAP on Azure App Service

We're excited to announce that Azure App Service now supports Bring Your Own License (BYOL) for JBoss Enterprise Application Platform (EAP), enabling enterprise customers to deploy Java workloads with greater flexibility and cost efficiency. If you've evaluated Azure App Service in the past, now is the perfect time to take another look. With BYOL support, you can leverage your existing Red Hat subscriptions to optimize costs and align with your enterprise licensing strategy.
# Observe Quarkus Apps with Azure Application Insights using OpenTelemetry

## Overview

This blog shows you how to observe Red Hat Quarkus applications with Azure Application Insights using OpenTelemetry. The application is a "to do list" with a JavaScript front end and a REST endpoint. Azure Database for PostgreSQL Flexible Server provides the persistence layer for the app. The app utilizes OpenTelemetry to instrument, generate, collect, and export telemetry data for observability. The blog guides you to test your app locally, deploy it to Azure Container Apps, and observe its telemetry data with Azure Application Insights.

## Prerequisites

- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin.
- Prepare a local machine with a Unix-like operating system installed—for example, Ubuntu, macOS, or Windows Subsystem for Linux.
- Install a Java SE implementation, version 17—for example, Microsoft Build of OpenJDK.
- Install Maven, version 3.9.8 or higher.
- Install Docker for your OS.
- Install the Azure CLI to run Azure CLI commands.
  - Sign in to the Azure CLI by using the az login command. To finish the authentication process, follow the steps displayed in your terminal. For other sign-in options, see Sign into Azure with Azure CLI.
  - When you're prompted, install the Azure CLI extension on first use. For more information about extensions, see Use and manage extensions with the Azure CLI.
  - Run az version to find the version and dependent libraries that are installed. To upgrade to the latest version, run az upgrade. This blog requires at least version 2.65.0 of Azure CLI.

## Prepare the Quarkus app

Run the following commands to get the sample app app-insights-quarkus from GitHub:

```bash
git clone https://github.com/Azure-Samples/quarkus-azure
cd quarkus-azure
git checkout 2024-11-27
cd app-insights-quarkus
```

Here's the file structure of the application, with important files and directories:

```
├── src/main/
│   ├── java/io/quarkus/sample/
│   │   └── TodoResource.java
│   └── resources/
│       ├── META-INF/resources/
│       ├── application.properties
├── pom.xml
```

The directory src/main/resources/META-INF/resources contains the front-end code for the application. It's a Vue.js front end where you can view, add, update, and delete todo items.

The src/main/java/io/quarkus/sample/TodoResource.java file implements the REST resource for the application. It uses the Jakarta REST API to expose the REST endpoints for the front end. Invocation of the REST endpoints is automatically instrumented by OpenTelemetry tracing. In addition, each REST endpoint uses org.jboss.logging.Logger to log messages, which are collected by OpenTelemetry logging. For example, the GET method for the /api endpoint that returns all todo items is shown in the following code snippet:

```java
package io.quarkus.sample;

import jakarta.inject.Inject;
import jakarta.transaction.Transactional;
import jakarta.validation.Valid;
import jakarta.ws.rs.*;
import jakarta.ws.rs.core.Response;
import jakarta.ws.rs.core.Response.Status;
import java.util.List;
import org.jboss.logging.Logger;

@Path("/api")
public class TodoResource {

    private static final Logger LOG = Logger.getLogger(TodoResource.class);

    @Inject
    TodoRepository todoRepository;

    @GET
    public List<Todo> getAll() {
        List<Todo> todos = todoRepository.findAll();
        LOG.info("Found " + todos.size() + " todos");
        return todos;
    }
}
```

The pom.xml file contains the project configuration, including the dependencies for the Quarkus application.
The application uses the following extensions to support OpenTelemetry:

```xml
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-opentelemetry</artifactId>
</dependency>
<dependency>
    <groupId>io.opentelemetry.instrumentation</groupId>
    <artifactId>opentelemetry-jdbc</artifactId>
</dependency>
<dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-exporter-logging</artifactId>
</dependency>
```

The src/main/resources/application.properties file contains the configuration for the Quarkus application. The configuration includes database connection properties for production, such as the JDBC URL and username. The configuration also includes the OpenTelemetry properties, such as enabling OpenTelemetry (including logs and JDBC instrumentation) at build time, using the logging exporter in development mode, and specifying the endpoint for the OpenTelemetry Protocol (OTLP) exporter in production mode. The following example shows the configuration for OpenTelemetry:

```properties
quarkus.otel.enabled=true
quarkus.otel.logs.enabled=true
quarkus.datasource.jdbc.telemetry=true

%dev.quarkus.otel.logs.exporter=logging
%dev.quarkus.otel.traces.exporter=logging

%prod.quarkus.otel.exporter.otlp.endpoint=${OTEL_EXPORTER_OTLP_ENDPOINT}
```

## Run the Quarkus app locally

Quarkus supports the automatic provisioning of unconfigured services in development mode. For more information, see Dev Services Overview in the Quarkus documentation.

Now, run the following command to enter Quarkus dev mode, which automatically provisions a PostgreSQL database as a Docker container for the app:

```bash
mvn clean package quarkus:dev
```

The output should look like the following example:

```
2025-03-17 11:14:32,880 INFO [io.qua.dat.dep.dev.DevServicesDatasourceProcessor] (build-26) Dev Services for default datasource (postgresql) started - container ID is 56acc7e1cb46
2025-03-17 11:14:32,884 INFO [io.qua.hib.orm.dep.dev.HibernateOrmDevServicesProcessor] (build-4) Setting quarkus.hibernate-orm.database.generation=drop-and-create to initialize Dev Services managed database
__  ____  __  _____   ___  __ ____  ______
 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/
 -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/
2025-03-17 11:14:36,202 INFO [io.ope.exp.log.LoggingSpanExporter] (JPA Startup Thread) 'quarkus' : 80437b598962f82bffd0735bbf00e9f1 aa86f0553056a8c9 CLIENT [tracer: io.opentelemetry.jdbc:2.8.0-alpha] AttributesMap{data={db.name=quarkus, server.port=59406, server.address=localhost, db.connection_string=postgresql://localhost:59406, db.statement=set client_min_messages = WARNING, db.system=postgresql}, capacity=128, totalAddedValues=6}
1970-01-01T00:00:00Z INFO ''quarkus' : 80437b598962f82bffd0735bbf00e9f1 aa86f0553056a8c9 CLIENT [tracer: io.opentelemetry.jdbc:2.8.0-alpha] AttributesMap{data={db.name=quarkus, server.port=59406, server.address=localhost, db.connection_string=postgresql://localhost:59406, db.statement=set client_min_messages = WARNING, db.system=postgresql}, capacity=128, totalAddedValues=6}' : 00000000000000000000000000000000 0000000000000000 [scopeInfo: io.quarkus.opentelemetry:] {code.lineno=-1, log.logger.namespace="org.jboss.logmanager.Logger", thread.id=122, thread.name="JPA Startup Thread"}
server.address=localhost, db.connection_string=postgresql://localhost:59406, db.statement=drop table if exists Todo cascade, db.system=postgresql}, capacity=128, totalAddedValues=7} 1970-01-01T00:00:00Z INFO ''DROP table quarkus' : 6b732661c29a9f0966403d49db9e4cff d86f29284f0d8eac CLIENT [tracer: io.opentelemetry.jdbc:2.8.0-alpha] AttributesMap{data={db.operation=DROP table, db.name=quarkus, server.port=59406, server.address=localhost, db.connection_string=postgresql://localhost:59406, db.statement=drop table if exists Todo cascade, db.system=postgresql}, capacity=128, totalAddedValues=7}' : 00000000000000000000000000000000 0000000000000000 [scopeInfo: io.quarkus.opentelemetry:] {code.lineno=-1, log.logger.namespace="org.jboss.logmanager.Logger", thread.id=122, thread.name="JPA Startup Thread"} 2025-03-17 11:14:36,259 INFO [io.ope.exp.log.LoggingSpanExporter] (JPA Startup Thread) 'CREATE table quarkus' : 54df3edf9f523a71bc85d0106a57016c bb43aa63ec3526ed CLIENT [tracer: io.opentelemetry.jdbc:2.8.0-alpha] AttributesMap{data={db.operation=CREATE table, db.name=quarkus, server.port=59406, server.address=localhost, db.connection_string=postgresql://localhost:59406, db.statement=create table Todo (completed boolean not null, ordering integer, id bigint generated by default as identity, title varchar(?) unique, url varchar(?), primary key (id)), db.system=postgresql}, capacity=128, totalAddedValues=7} 1970-01-01T00:00:00Z INFO ''CREATE table quarkus' : 54df3edf9f523a71bc85d0106a57016c bb43aa63ec3526ed CLIENT [tracer: io.opentelemetry.jdbc:2.8.0-alpha] AttributesMap{data={db.operation=CREATE table, db.name=quarkus, server.port=59406, server.address=localhost, db.connection_string=postgresql://localhost:59406, db.statement=create table Todo (completed boolean not null, ordering integer, id bigint generated by default as identity, title varchar(?) unique, url varchar(?), primary key (id)), db.system=postgresql}, capacity=128, totalAddedValues=7}' : 00000000000000000000000000000000 0000000000000000 [scopeInfo: io.quarkus.opentelemetry:] {code.lineno=-1, log.logger.namespace="org.jboss.logmanager.Logger", thread.id=122, thread.name="JPA Startup Thread"} 2025-03-17 11:14:36,438 INFO [io.quarkus] (Quarkus Main Thread) quarkus-todo-demo-app-insights 1.0.0-SNAPSHOT on JVM (powered by Quarkus 3.16.3) started in 7.409s. Listening on: http://localhost:8080 1970-01-01T00:00:00Z INFO 'quarkus-todo-demo-app-insights 1.0.0-SNAPSHOT on JVM (powered by Quarkus 3.16.3) started in 7.409s. Listening on: http://localhost:8080' : 00000000000000000000000000000000 0000000000000000 [scopeInfo: io.quarkus.opentelemetry:] {code.function="printStartupTime", code.lineno=109, code.namespace="io.quarkus.bootstrap.runner.Timing", log.logger.namespace="org.jboss.logging.Logger", thread.id=112, thread.name="Quarkus Main Thread"} 2025-03-17 11:14:36,441 INFO [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated. 1970-01-01T00:00:00Z INFO 'Profile dev activated. Live Coding activated.' 
1970-01-01T00:00:00Z INFO 'Profile dev activated. Live Coding activated.' : 00000000000000000000000000000000 0000000000000000 [scopeInfo: io.quarkus.opentelemetry:] {code.function="printStartupTime", code.lineno=113, code.namespace="io.quarkus.bootstrap.runner.Timing", log.logger.namespace="org.jboss.logging.Logger", thread.id=112, thread.name="Quarkus Main Thread"}
2025-03-17 11:14:36,443 INFO [io.quarkus] (Quarkus Main Thread) Installed features: [agroal, cdi, hibernate-orm, hibernate-validator, jdbc-postgresql, narayana-jta, opentelemetry, rest, rest-jackson, smallrye-context-propagation, vertx]
1970-01-01T00:00:00Z INFO 'Installed features: [agroal, cdi, hibernate-orm, hibernate-validator, jdbc-postgresql, narayana-jta, opentelemetry, rest, rest-jackson, smallrye-context-propagation, vertx]' : 00000000000000000000000000000000 0000000000000000 [scopeInfo: io.quarkus.opentelemetry:] {code.function="printStartupTime", code.lineno=115, code.namespace="io.quarkus.bootstrap.runner.Timing", log.logger.namespace="org.jboss.logging.Logger", thread.id=112, thread.name="Quarkus Main Thread"}

--
Tests paused
Press [e] to edit command line args (currently ''), [r] to resume testing, [o] Toggle test output, [:] for the terminal, [h] for more options>
```

The output shows that the Quarkus app is running in development mode. The app is listening on http://localhost:8080. The PostgreSQL database is automatically provisioned as a Docker container for the app. The OpenTelemetry instrumentation for Quarkus and JDBC is enabled, and the telemetry data is exported to the console.

Access the application GUI at http://localhost:8080. You should see the Todo app with an empty todo list.

Switch back to the terminal where Quarkus dev mode is running, and you should see more OpenTelemetry data exported to the console.
For example, the following output shows the OpenTelemetry logging and tracing data for the GET method of the /api endpoint:

```
2025-03-17 11:15:13,785 INFO [io.qua.sam.TodoResource] (executor-thread-1) Found 0 todos
1970-01-01T00:00:00Z INFO 'Found 0 todos' : 7cf260232ff81caf90abc354357c16ab c48a4a02e74e1901 [scopeInfo: io.quarkus.opentelemetry:] {code.function="getAll", code.lineno=25, code.namespace="io.quarkus.sample.TodoResource", log.logger.namespace="org.jboss.logging.Logger", parentId="c48a4a02e74e1901", thread.id=116, thread.name="executor-thread-1"}
2025-03-17 11:15:13,802 INFO [io.ope.exp.log.LoggingSpanExporter] (vert.x-eventloop-thread-1) 'GET /api' : 7cf260232ff81caf90abc354357c16ab c48a4a02e74e1901 SERVER [tracer: io.quarkus.opentelemetry:] AttributesMap{data={http.response.status_code=200, url.scheme=http, server.port=8080, server.address=localhost, client.address=0:0:0:0:0:0:0:1, user_agent.original=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36 Edg/134.0.0.0, url.path=/api/, code.namespace=io.quarkus.sample.TodoResource, http.request.method=GET, code.function=getAll, http.response.body.size=2, http.route=/api}, capacity=128, totalAddedValues=12}
1970-01-01T00:00:00Z INFO ''GET /api' : 7cf260232ff81caf90abc354357c16ab c48a4a02e74e1901 SERVER [tracer: io.quarkus.opentelemetry:] AttributesMap{data={http.response.status_code=200, url.scheme=http, server.port=8080, server.address=localhost, client.address=0:0:0:0:0:0:0:1, user_agent.original=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36 Edg/134.0.0.0, url.path=/api/, code.namespace=io.quarkus.sample.TodoResource, http.request.method=GET, code.function=getAll, http.response.body.size=2, http.route=/api}, capacity=128, totalAddedValues=12}' : 00000000000000000000000000000000 0000000000000000 [scopeInfo: io.quarkus.opentelemetry:] {code.function="export", code.lineno=65, code.namespace="io.opentelemetry.exporter.logging.LoggingSpanExporter", log.logger.namespace="org.jboss.logmanager.Logger", thread.id=126, thread.name="vert.x-eventloop-thread-1"}
```

Then return to the web browser and interact with the Todo app. Try adding a new todo item by typing in the text box and pressing ENTER, selecting the checkbox to mark a todo item as completed, or selecting Clear completed to remove all completed todo items. You can also delete a todo item by selecting the x icon that appears when you hover over it. The app should work as expected.

Finally, switch back to the terminal and press q to exit Quarkus dev mode.

## Create the Azure resources

The steps in this section show you how to create the following Azure resources to run the Quarkus sample app on Azure:

- Azure Database for PostgreSQL Flexible Server
- Azure Container Registry
- Azure Container Apps environment
- Azure Application Insights

First, define the following variables in your bash shell by replacing the placeholders with your own values. They are used throughout the example:

```bash
UNIQUE_VALUE=<your unique value>
LOCATION=eastus2
RESOURCE_GROUP_NAME=${UNIQUE_VALUE}rg
DB_SERVER_NAME=${UNIQUE_VALUE}db
DB_NAME=demodb
REGISTRY_NAME=${UNIQUE_VALUE}reg
ACA_ENV=${UNIQUE_VALUE}env
APP_INSIGHTS=${UNIQUE_VALUE}appinsights
ACA_NAME=${UNIQUE_VALUE}aca
```

Next, create the resource group to host the Azure resources:

```bash
az group create \
    --name $RESOURCE_GROUP_NAME \
    --location $LOCATION
```

Then, create the Azure resources in the resource group by following the steps below.
Then, create the Azure resources in the resource group by following the steps below.

Create an Azure Database for PostgreSQL flexible server instance:

```bash
az postgres flexible-server create \
    --name $DB_SERVER_NAME \
    --resource-group $RESOURCE_GROUP_NAME \
    --database-name $DB_NAME \
    --public-access None \
    --sku-name Standard_B1ms \
    --tier Burstable \
    --active-directory-auth Enabled
```

Create the Azure Container Registry and get the login server:

```bash
az acr create \
    --resource-group $RESOURCE_GROUP_NAME \
    --location ${LOCATION} \
    --name $REGISTRY_NAME \
    --sku Basic

LOGIN_SERVER=$(az acr show \
    --name $REGISTRY_NAME \
    --query 'loginServer' \
    --output tsv)
```

Create the Azure Container Apps environment:

```bash
az containerapp env create \
    --resource-group $RESOURCE_GROUP_NAME \
    --location $LOCATION \
    --name $ACA_ENV
```

Create an Azure Application Insights instance:

```bash
logAnalyticsWorkspace=$(az monitor log-analytics workspace list \
    -g $RESOURCE_GROUP_NAME \
    --query "[0].name" -o tsv | tr -d '\r')

az monitor app-insights component create \
    --resource-group $RESOURCE_GROUP_NAME \
    --location $LOCATION \
    --app $APP_INSIGHTS \
    --workspace $logAnalyticsWorkspace
```

Use the created Application Insights instance as the destination service to enable the managed OpenTelemetry agent for the Azure Container Apps environment:

```bash
appInsightsConn=$(az monitor app-insights component show \
    --app $APP_INSIGHTS \
    -g $RESOURCE_GROUP_NAME \
    --query 'connectionString' -o tsv | tr -d '\r')

az containerapp env telemetry app-insights set \
    --name $ACA_ENV \
    --resource-group $RESOURCE_GROUP_NAME \
    --connection-string $appInsightsConn \
    --enable-open-telemetry-logs true \
    --enable-open-telemetry-traces true
```

When you deploy the Quarkus app to the Azure Container Apps environment later in this blog, the OpenTelemetry data is automatically collected by the managed OpenTelemetry agent and exported to the Application Insights instance.

Deploy the Quarkus app to Azure Container Apps

You have set up all the necessary Azure resources to run the Quarkus app on Azure Container Apps. In this section, you containerize the Quarkus app and deploy it to Azure Container Apps.
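The build in the next step relies on the Quarkus Jib container-image extension. The sample app used in this blog should already declare it; if your own project doesn't, you can add it with the Quarkus Maven plugin (a sketch, assuming the standard container-image-jib extension name and a pom.xml that configures the Quarkus Maven plugin):

```bash
# Add the Jib extension so `mvn package` can build a container image
# without writing a Dockerfile.
mvn quarkus:add-extension -Dextensions="container-image-jib"
```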
First, use the following command to build the application. This command uses the Jib extension to build the container image. Quarkus instrumentation works in both JVM and native modes. In this blog, you build the container image for JVM mode so that it works with Microsoft Entra ID authentication for Azure Database for PostgreSQL flexible server.

```bash
TODO_QUARKUS_IMAGE_NAME=todo-quarkus-app-insights
TODO_QUARKUS_IMAGE_TAG=${LOGIN_SERVER}/${TODO_QUARKUS_IMAGE_NAME}:1.0

mvn clean package -Dquarkus.container-image.build=true -Dquarkus.container-image.image=${TODO_QUARKUS_IMAGE_TAG}
```

Next, sign in to the Azure Container Registry and push the Docker image to the registry:

```bash
az acr login --name $REGISTRY_NAME
docker push $TODO_QUARKUS_IMAGE_TAG
```

Then, use the following command to create a Container Apps instance that runs the app after pulling the image from the Container Registry:

```bash
az containerapp create \
    --resource-group $RESOURCE_GROUP_NAME \
    --name $ACA_NAME \
    --image $TODO_QUARKUS_IMAGE_TAG \
    --environment $ACA_ENV \
    --registry-server $LOGIN_SERVER \
    --registry-identity system \
    --target-port 8080 \
    --ingress 'external' \
    --min-replicas 1
```

Finally, connect the Azure Database for PostgreSQL Flexible Server instance to the container app using Service Connector:

```bash
# Install the Service Connector passwordless extension
az extension add --name serviceconnector-passwordless --upgrade --allow-preview true

az containerapp connection create postgres-flexible \
    --resource-group $RESOURCE_GROUP_NAME \
    --name $ACA_NAME \
    --target-resource-group $RESOURCE_GROUP_NAME \
    --server $DB_SERVER_NAME \
    --database $DB_NAME \
    --system-identity \
    --container $ACA_NAME \
    --yes
```

Wait until the application is deployed, started, and running. Then get the application URL and open it in a browser:

```bash
QUARKUS_URL=https://$(az containerapp show \
    --resource-group $RESOURCE_GROUP_NAME \
    --name $ACA_NAME \
    --query properties.configuration.ingress.fqdn -o tsv)
echo $QUARKUS_URL
```

You should see the same Todo app that you ran locally before. Interact with the app by adding, completing, and removing todo items, which generates telemetry data and sends it to Azure Application Insights.
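If you want to generate a burst of telemetry without clicking through the UI, you can also drive the deployed REST endpoint from the shell (a minimal sketch that reuses the QUARKUS_URL variable set above and the /api route seen earlier in the dev mode traces):

```bash
# Send a few requests; the managed OpenTelemetry agent forwards the
# resulting traces and logs to Application Insights.
for i in $(seq 1 5); do
    curl -s -o /dev/null -w "%{http_code}\n" "$QUARKUS_URL/api"
done
```

Note that telemetry can take a few minutes to appear in Application Insights after it is sent.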
Observe the Quarkus app with Azure Application Insights

Open the Azure portal and navigate to the Azure Monitor Application Insights resource you created earlier. You can monitor the application with different views backed by the telemetry data sent from the application. For example:

- Investigate > Application map: shows the application components and their dependencies.
- Investigate > Failures: shows the failures and exceptions in the application.
- Investigate > Performance: shows the performance of the application.
- Monitoring > Logs: shows the logs and traces of the application.

You may notice that no metrics show up in Application Insights in this blog. That's because the Application Insights endpoint used by the managed OpenTelemetry agent doesn't accept metrics yet, which is listed as a known limitation. This is also why Quarkus metrics are not enabled (with quarkus.otel.metrics.enabled=true) in the configuration file above. Alternatively, you can consider using the Quarkus OpenTelemetry Exporter for Microsoft Azure in your Quarkus apps to export the telemetry data directly to Azure Application Insights.

Clean up resources

To avoid Azure charges, you should clean up unneeded resources. When the resources are no longer needed, use the az group delete command to remove the resource group and all Azure resources within it:

```bash
az group delete \
    --name $RESOURCE_GROUP_NAME \
    --yes \
    --no-wait
```

Next steps

In this blog, you observed the Quarkus app with Azure Application Insights using OpenTelemetry. To learn more, explore the following resources:

- OpenTelemetry on Azure
- Collect and read OpenTelemetry data in Azure Container Apps (preview)
- Application Insights overview
- Using OpenTelemetry
- Deploy a Java application with Quarkus on Azure Container Apps
- Secure Quarkus applications with Microsoft Entra ID using OpenID Connect
- Deploy a Java application with Quarkus on an Azure Kubernetes Service cluster
- Deploy serverless Java apps with Quarkus on Azure Functions
- Jakarta EE on Azure