# Azure Container Apps
## Building Agentic Workflows with n8n and Azure Container Apps
When it comes to connecting systems and automating tasks, n8n has built a passionate community around a powerful, extensible workflow engine. With hundreds of integrations and the ability to share workflows across a vibrant user base, n8n is quickly becoming the go-to choice for teams who want to orchestrate data, APIs, and AI. On Azure, combining n8n with Azure OpenAI Service and Azure Container Apps (ACA) makes it easier than ever to run intelligent automations at any scale. Whether you're experimenting with ideas, building small team solutions, or deploying resilient production systems, Azure provides the right infrastructure options.

### Why n8n on Azure?

- Community workflows: Start quickly with pre-built automation templates and shared flows from the n8n community.
- AI built-in: Seamlessly add natural language, summarization, and reasoning with Azure AI Foundry's OpenAI models.
- Managed scale: Use Azure Container Apps to deploy n8n in a fully managed, container-native environment with scaling, networking, and security built in.
- Flexibility: Choose from lightweight testing to production-grade environments with the same deployment template.

### Three ways to deploy n8n with Azure Container Apps

We've created an Azure deployment template that supports three common scenarios. You can move between them as your needs evolve.

- Try: Spin up n8n in minutes. Great for testing integrations with Azure OpenAI before committing to infrastructure.
- Small: Bring persistence and private networking. Designed for small teams who want to keep their workflows and data across sessions.
- Production: Scale securely and reliably. Suitable for production deployments where resilience, security, and multi-instance scaling are key.

### Bringing AI into the Flow

Once n8n is running, you can plug Azure OpenAI models directly into workflows—powering:

- Automated content generation
- Intelligent routing and decision logic
- Summarization of long-form data
- Enhanced customer engagement scenarios

By pairing n8n's integrations with Azure OpenAI's reasoning and text generation, you unlock entirely new categories of automation.

### Get Started Today

You can find the template and detailed instructions here.

### Closing thoughts

n8n brings a thriving automation ecosystem; Azure provides the enterprise-grade foundation. Together, they empower developers and business teams to create intelligent, scalable automations—from quick experiments to mission-critical production workflows.
## Send metrics from Micronaut native image applications to Azure Monitor

The original post (Japanese) was written on 20 July 2025: MicronautからAzure Monitorにmetricを送信したい ("Send metrics from Micronaut to Azure Monitor") – Logico Inside

This entry is related to the following one. Please take a look for background information.

Send signals from Micronaut native image applications to Azure Monitor | Microsoft Community Hub

### Prerequisites

- Maven: 3.9.10
- JDK: 21
- Micronaut: 4.9.0 or later

The following tutorials were used as a reference.

- Create a Micronaut Application to Collect Metrics and Monitor Them on Azure Monitor Metrics
- Collect Metrics with Micronaut

### Create an archetype

We can create an archetype using Micronaut's CLI (`mn`) or Micronaut Launch. In this entry, we use application.yml instead of application.properties for application configuration, so we need to specify the feature "yaml" to include the dependencies for using YAML.

Micronaut Launch: https://micronaut.io/launch/

```bash
$ mn create-app \
  --build=maven \
  --jdk=21 \
  --lang=java \
  --test=junit \
  --features=validation,graalvm,micrometer-azure-monitor,http-client,micrometer-annotation,yaml \
  dev.logicojp.micronaut.azuremonitor-metric
```

When using Micronaut Launch, click [FEATURES] and select the following features.

- validation
- graalvm
- micrometer-azure-monitor
- http-client
- micrometer-annotation
- yaml

After all features are selected, click [GENERATE PROJECT] and choose [Download Zip] to download the archetype as a Zip file.

### Implementation

In this section, we use the GDK sample code found in the tutorial. The code is from the Micronaut Guides, but the database access and other parts have been taken out. We have made the following changes to the code to fit our needs.

a) Structure of the directory

In the GDK tutorial, folders called azure and lib are created, but this structure isn't used in the standard Micronaut archetype, so the code in both directories has been combined.

b) Instrumentation Key

As the tutorial above and the Micronaut Micrometer documentation explain, we need to specify the Instrumentation Key. When we create an archetype using the Micronaut CLI or Micronaut Launch, configuration assuming the use of the Instrumentation Key is included in application.properties / application.yml.

6.3 Azure Monitor Registry – Micronaut Micrometer

This configuration will work, but Application Insights currently does not recommend access using only the Instrumentation Key, so it is better to use a connection string that includes the Instrumentation Key. To set it up, open application.properties and enter the following:

```properties
micronaut.metrics.export.azuremonitor.connectionString="InstrumentationKey=...."
```

In the case of application.yml, we specify the connection string in YAML format.

```yaml
micronaut:
  metrics:
    enabled: true
    export:
      azuremonitor:
        enabled: true
        connectionString: InstrumentationKey=....
```

We can also specify the environment variable MICRONAUT_METRICS_EXPORT_AZUREMONITOR_CONNECTIONSTRING, but since this environment variable name is too long, it is better to use a shorter one. Here's an example using AZURE_MONITOR_CONNECTION_STRING (which is also long, if you think about it).

```properties
micronaut.metrics.export.azuremonitor.connectionString=${AZURE_MONITOR_CONNECTION_STRING}
```

```yaml
micronaut:
  metrics:
    enabled: true
    export:
      azuremonitor:
        enabled: true
        connectionString: ${AZURE_MONITOR_CONNECTION_STRING}
```

The connection string can be specified because Micrometer, which is used internally, already supports it. The AzureMonitorConfig.java file can be found here:
AzureMonitorConfig.java – micrometer/implementations/micrometer-registry-azure-monitor/src/main/java/io/micrometer/azuremonitor/AzureMonitorConfig.java at main · micrometer-metrics/micrometer

The settings in application.properties / application.yml are as follows. For more information about the specified meter binders, please look at the following document.

Meter Binder – Micronaut Micrometer

```yaml
micronaut:
  application:
    name: azuremonitor-metric
  metrics:
    enabled: true
    binders:
      files:
        enabled: true
      jdbc:
        enabled: true
      jvm:
        enabled: true
      logback:
        enabled: true
      processor:
        enabled: true
      uptime:
        enabled: true
      web:
        enabled: true
    export:
      azuremonitor:
        enabled: true
        step: PT1M
        connectionString: ${AZURE_MONITOR_CONNECTION_STRING}
```

c) pom.xml

To use the GraalVM Reachability Metadata Repository, we need to add this dependency. The latest version is 0.11.0 as of 20 July 2025.

```xml
<dependency>
  <groupId>org.graalvm.buildtools</groupId>
  <artifactId>graalvm-reachability-metadata</artifactId>
  <version>0.11.0</version>
</dependency>
```

Add the GraalVM Maven plugin and enable the use of the GraalVM Reachability Metadata obtained from the above dependency. This plugin lets us set optimization levels using buildArg (in this example, the optimization level is specified). We can also add it to native-image.properties; the native-image tool (and the Maven/Gradle plugin) will read it.

```xml
<plugin>
  <groupId>org.graalvm.buildtools</groupId>
  <artifactId>native-maven-plugin</artifactId>
  <configuration>
    <metadataRepository>
      <enabled>true</enabled>
    </metadataRepository>
    <buildArgs combine.children="append">
      <buildArg>-Ob</buildArg>
    </buildArgs>
    <quickBuild>true</quickBuild>
  </configuration>
</plugin>
```

For now, let's build it as a Java application.

```bash
$ mvn clean package
```

### Check if it works as a Java application

At first, verify that the application is running without any problems and that metrics are being sent to Application Insights. Then, run the application using the Tracing Agent to generate the necessary configuration files.

```bash
# (1) Collect configuration files such as reflect-config.json
$JAVA_HOME/bin/java \
  -agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/{groupId}/{artifactId}/ \
  -jar ./target/{artifactId}-{version}.jar

# (2)-a Generate a trace file
$JAVA_HOME/bin/java \
  -agentlib:native-image-agent=trace-output=/path/to/trace-file.json \
  -jar ./target/{artifactId}-{version}.jar

# (2)-b Generate a reachability metadata file from the collected trace file
native-image-configure generate \
  --trace-input=/path/to/trace-file.json \
  --output-dir=/path/to/config-dir/
```

- Configure Native Image with the Tracing Agent
- Collect Metadata with the Tracing Agent

The following files are stored in the specified directory.

- jni-config.json
- reflect-config.json
- proxy-config.json
- resource-config.json
- reachability-metadata.json

These files can be located at src/main/resources/META-INF/native-image; the native-image tool picks up configuration files located in that directory. However, it is recommended to place the files in subdirectories divided by groupId and artifactId, as shown below.

```
src/main/resources/META-INF/native-image/{groupId}/{artifactId}
```

### native-image.properties

When creating a native image, we call the following command.
```bash
mvn package -Dpackaging=native-image
```

We should specify the timing of class initialization (build time or runtime), the command-line options for the native-image tool (the same command-line options work in the Maven/Gradle plugin), and the JVM arguments in the native-image.properties file. These settings can indeed be specified in pom.xml, but it is recommended that they be externalized.

a) Location of configuration files

As described in the documentation, we can specify the location of the configuration files. If we build using the recommended method (placing the files in the directory src/main/resources/META-INF/native-image/{groupId}/{artifactId}), we can specify the directory location using ${.}.

```
-H:DynamicProxyConfigurationResources
-H:JNIConfigurationResources
-H:ReflectionConfigurationResources
-H:ResourceConfigurationResources
-H:SerializationConfigurationResources
```

Native Image Build Configuration

b) HTTP/HTTPS protocol support

We need to use --enable-https / --enable-http when using the HTTP(S) protocol in the application.

URL Protocols in Native Image

c) When classes are loaded and initialized

In the case of AOT compilation, classes are usually loaded at compile time and stored in the image heap (at build time). However, some classes might need to be loaded while the program is running. In these cases, it is necessary to explicitly specify initialization at runtime (and vice versa, of course). There are two types of build arguments.

```
# Explicitly specify initialization at runtime
--initialize-at-run-time=...

# Explicitly specify initialization at build time
--initialize-at-build-time=...
```

To enable tracing of class initialization, use the following arguments.

```
# Enable tracing of class initialization
--trace-class-initialization=...   # Deprecated in GraalVM 21.3
--trace-object-instantiation=...   # Current option
```

- Specify Class Initialization Explicitly
- Class Initialization in Native Image

d) Prevent fallback builds

If the application cannot be optimized during the Native Image build, the native-image tool creates a fallback image, which requires a JVM to run. To prevent fallback builds, we need to specify the option --no-fallback.

For other build options, please look at the following document.

Command-line Options

### Build a Native Image application

Building a native image application takes a long time (though it has got quicker over time). If building it for testing purposes, we strongly recommend enabling Quick Build and setting the optimization level with the -Ob option (although this will still take time). See below for more information.

- Maven plugin for GraalVM Native Image
- Gradle plugin for GraalVM Native Image
- Optimizations and Performance

### Test as a native image application

When we start the native image application, we might see the following message. It means that GC notifications are not available because no GarbageCollectorMXBean of the JVM provides any.

```
GC notifications will not be available because no GarbageCollectorMXBean of the JVM provides any. GCs=[young generation scavenger, complete scavenger]
```

Let's check if the application works.

1) GET /books and GET /books/{isbn}

This is a normal REST API. Call both of them a few times.

2) GET /metrics

We can check the list of available metrics.
```json
{
  "names": [
    "books.find",
    "books.index",
    "executor",
    "executor.active",
    "executor.completed",
    "executor.pool.core",
    "executor.pool.max",
    "executor.pool.size",
    "executor.queue.remaining",
    "executor.queued",
    "http.server.requests",
    "jvm.classes.loaded",
    "jvm.classes.unloaded",
    "jvm.memory.committed",
    "jvm.memory.max",
    "jvm.memory.used",
    "jvm.threads.daemon",
    "jvm.threads.live",
    "jvm.threads.peak",
    "jvm.threads.started",
    "jvm.threads.states",
    "logback.events",
    "microserviceBooksNumber.checks",
    "microserviceBooksNumber.latest",
    "microserviceBooksNumber.time",
    "process.cpu.usage",
    "process.files.max",
    "process.files.open",
    "process.start.time",
    "process.uptime",
    "system.cpu.count",
    "system.cpu.usage",
    "system.load.average.1m"
  ]
}
```

The following three metrics are custom ones added in the class MicroserviceBooksNumberService.

- microserviceBooksNumber.checks
- microserviceBooksNumber.time
- microserviceBooksNumber.latest

The following two metrics are custom ones collected in the class BooksController; they record information such as the time taken and the number of calls. Each metric can be viewed at GET /metrics/{metric name}.

- books.find
- books.index

The following is an example of microserviceBooksNumber.*.

```json
// microserviceBooksNumber.checks
{
  "name": "microserviceBooksNumber.checks",
  "measurements": [
    { "statistic": "COUNT", "value": 12 }
  ]
}

// microserviceBooksNumber.time
{
  "name": "microserviceBooksNumber.time",
  "measurements": [
    { "statistic": "COUNT", "value": 12 },
    { "statistic": "TOTAL_TIME", "value": 0.212468 },
    { "statistic": "MAX", "value": 0.032744 }
  ],
  "baseUnit": "seconds"
}

// microserviceBooksNumber.latest
{
  "name": "microserviceBooksNumber.latest",
  "measurements": [
    { "statistic": "VALUE", "value": 2 }
  ]
}
```

Here is an example of the metric books.*.

```json
// books.index
{
  "name": "books.index",
  "measurements": [
    { "statistic": "COUNT", "value": 6 },
    { "statistic": "TOTAL_TIME", "value": 3.08425 },
    { "statistic": "MAX", "value": 3.02097 }
  ],
  "availableTags": [
    { "tag": "exception", "values": [ "none" ] }
  ],
  "baseUnit": "seconds"
}

// books.find
{
  "name": "books.find",
  "measurements": [
    { "statistic": "COUNT", "value": 7 }
  ],
  "availableTags": [
    { "tag": "result", "values": [ "success" ] },
    { "tag": "exception", "values": [ "none" ] }
  ]
}
```

### Metrics in Azure Monitor (Application Insights)

Here is the grid view of custom metrics in Application Insights (microserviceBooksNumber.time is the average value). To confirm that the values match those in Application Insights, check the metric http.server.requests, for example. We should see three items on the graph, and the value is equal to the number of API responses (3).
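As a side note, the custom meters above come from the sample's service and controller classes. The sketch below is only a rough illustration of how such meters are typically registered in a Micronaut application through an injected MeterRegistry; the class name, wiring, and the fetchNumberOfBooks() helper are assumptions made for the example, not the sample's actual code.

```java
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import jakarta.inject.Singleton;
import java.util.concurrent.atomic.AtomicInteger;

@Singleton
public class BooksNumberMetricsService {

    // Hypothetical meter names, loosely following the sample's naming style
    private final Counter checks;
    private final Timer time;
    private final AtomicInteger latest;

    public BooksNumberMetricsService(MeterRegistry registry) {
        this.checks = registry.counter("microserviceBooksNumber.checks");
        this.time = registry.timer("microserviceBooksNumber.time");
        // Gauge backed by an AtomicInteger that is updated on every check
        this.latest = registry.gauge("microserviceBooksNumber.latest", new AtomicInteger(0));
    }

    // Count every invocation and time the lookup
    public int checkBooksNumber() {
        checks.increment();
        return time.record(() -> {
            int count = fetchNumberOfBooks(); // placeholder for a real lookup
            latest.set(count);
            return count;
        });
    }

    private int fetchNumberOfBooks() {
        return 2; // dummy value for the sketch
    }
}
```

A meter registered this way shows up under GET /metrics/{name} and, with the Azure Monitor registry enabled, is exported to Application Insights as a custom metric on the configured step interval.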
## Send traces from Micronaut native image applications to Azure Monitor

The original post (Japanese) was written on 23 July 2025: MicronautからAzure Monitorにtraceを送信したい ("Send traces from Micronaut to Azure Monitor") – Logico Inside

This entry is related to the following one. Please take a look for background information.

Send signals from Micronaut native image applications to Azure Monitor | Microsoft Community Hub

### Prerequisites

- Maven: 3.9.10
- JDK: 21
- Micronaut: 4.9.0 or later

The following tutorial was used as a reference.

- OpenTelemetry Tracing with Oracle Cloud and the Micronaut Framework

As of 13 August 2025, GDK (Graal Dev Kit) guides are also available.

- Create and Trace a Micronaut Application Using Azure Monitor

### Create an archetype

We can create an archetype using Micronaut's CLI (`mn`) or Micronaut Launch. In this entry, we use application.yml instead of application.properties for application configuration, so we need to specify the feature "yaml" to include the dependencies for using YAML.

Micronaut Launch

```bash
mn create-app \
  --build=maven \
  --jdk=21 \
  --lang=java \
  --test=junit \
  --features=tracing-opentelemetry-http,validation,graalvm,azure-tracing,http-client,yaml \
  dev.logicojp.micronaut.azuremonitor-metric
```

When using Micronaut Launch, click [FEATURES] and select the following features.

- tracing-opentelemetry-http
- validation
- graalvm
- azure-tracing
- http-client
- yaml

After all features are selected, click [GENERATE PROJECT] and choose [Download Zip] to download the archetype as a Zip file.

### Implementation

In this section, we use the tutorial in the Micronaut Guides. We can use this code as it is, but several points are modified.

a) Sending traces to Application Insights

Please note that we didn't include metrics in this article because we discussed them in the last one.

Starting with Micronaut 4.9.0, a feature package called micronaut-azure-tracing has been added. This feature enables sending traces to Application Insights.

```xml
<dependency>
  <groupId>io.micronaut.azure</groupId>
  <artifactId>micronaut-azure-tracing</artifactId>
</dependency>
```

This dependency is indeed necessary for sending data to Application Insights. However, adding it and specifying the Application Insights connection string is not enough to send traces from applications. micronaut-azure-tracing depends upon the three dependencies listed below, which shows that dependencies for collecting and creating traces still need to be added.

- com.azure:azure-monitor-opentelemetry-autoconfigure
- io.micronaut:micronaut-inject
- io.micronaut.tracing:micronaut-tracing-opentelemetry

In this case, we want to obtain HTTP traces, so we add the dependency for generating HTTP traces.

```xml
<dependency>
  <groupId>io.micronaut.tracing</groupId>
  <artifactId>micronaut-tracing-opentelemetry-http</artifactId>
  <scope>compile</scope>
</dependency>
```

The place where the connection string is set for micronaut-azure-tracing (azure.tracing.connection-string) is different from the one used by micrometer-azure-monitor. If we want to retrieve not only metrics but also traces, the setting locations differ, which can be confusing. We can also use environment variables to specify the connection string.

```properties
azure.tracing.connection-string="InstrumentationKey=...."
```

```yaml
azure:
  tracing:
    connection-string: InstrumentationKey=....
```
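With the azure-tracing feature and micronaut-tracing-opentelemetry-http in place, ordinary HTTP code is traced without any tracing-specific calls. As a purely illustrative sketch (the interface and endpoint below are hypothetical, not part of the guide's sample), a declarative client such as the following already produces client spans that are exported to Application Insights:

```java
import io.micronaut.http.annotation.Get;
import io.micronaut.http.annotation.PathVariable;
import io.micronaut.http.client.annotation.Client;

// Calls made through this client are wrapped in OpenTelemetry client spans
// automatically; no tracing-specific code is required in the interface itself.
@Client("/warehouse")
public interface WarehouseClient {

    // Hypothetical endpoint; adjust the path and return type to the real API
    @Get("/count/{item}")
    Integer getItemCount(@PathVariable String item);
}
```

Server-side controllers are wrapped in server spans in the same way, which is why no tracing code appears in the sample application itself.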
b) pom.xml

To use the GraalVM Reachability Metadata Repository, we need to add this dependency. The latest version is 0.11.0 as of 23 July 2025.

```xml
<dependency>
  <groupId>org.graalvm.buildtools</groupId>
  <artifactId>graalvm-reachability-metadata</artifactId>
  <version>0.11.0</version>
</dependency>
```

Add the GraalVM Maven plugin and enable the use of the GraalVM Reachability Metadata obtained from the above dependency. This plugin lets us set optimization levels using buildArg (in this example, the optimization level is specified). We can also add it to native-image.properties; the native-image tool (and the Maven/Gradle plugin) will read it.

```xml
<plugin>
  <groupId>org.graalvm.buildtools</groupId>
  <artifactId>native-maven-plugin</artifactId>
  <configuration>
    <metadataRepository>
      <enabled>true</enabled>
    </metadataRepository>
    <buildArgs combine.children="append">
      <buildArg>-Ob</buildArg>
    </buildArgs>
    <quickBuild>true</quickBuild>
  </configuration>
</plugin>
```

c) Avoiding version conflicts with dependencies used in the Azure SDK

Version conflicts often happen when using Netty and/or Jackson. To avoid them during Native Image generation, Micronaut offers alternative components that we can choose. For example, if we want to avoid Netty version conflicts, we can use Undertow.

| Dependency | Alternatives |
|---|---|
| Netty | Undertow, Jetty, Tomcat |
| Jackson | JSON-P / JSON-B, BSON |
| HTTP Client | JDK HTTP Client |

For now, let's build it as a Java application.

```bash
mvn clean package
```

### Test as a Java application

At first, verify that the application is running without any problems and that traces are being sent to Application Insights. Then, run the application using the Tracing Agent to generate the necessary configuration files.

```bash
# (1) Collect configuration files such as reflect-config.json
$JAVA_HOME/bin/java \
  -agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/{groupId}/{artifactId}/ \
  -jar ./target/{artifactId}-{version}.jar

# (2)-a Generate a trace file
$JAVA_HOME/bin/java \
  -agentlib:native-image-agent=trace-output=/path/to/trace-file.json \
  -jar ./target/{artifactId}-{version}.jar

# (2)-b Generate a reachability metadata file from the collected trace file
native-image-configure generate \
  --trace-input=/path/to/trace-file.json \
  --output-dir=/path/to/config-dir/
```

- Configure Native Image with the Tracing Agent
- Collect Metadata with the Tracing Agent

The agent generates the following files in the specified directory.

- jni-config.json
- reflect-config.json
- proxy-config.json
- resource-config.json
- reachability-metadata.json

These files can be located at src/main/resources/META-INF/native-image; the native-image tool picks up configuration files located in that directory. However, it is recommended to place the files in subdirectories divided by groupId and artifactId, as shown below.

```
src/main/resources/META-INF/native-image/{groupId}/{artifactId}
```

### native-image.properties

When creating a native image, we call the following command.

```bash
mvn package -Dpackaging=native-image
```

We should specify the timing of class initialization (build time or runtime), the command-line options for the native-image tool (the same command-line options work in the Maven/Gradle plugin), and the JVM arguments in the native-image.properties file. These settings can indeed be specified in pom.xml, but it is recommended that they be externalized. This is also explained in the metric entry, so some details are left out here. If needed, please check the metric entry:
Send metrics from Micronaut native image applications to Azure Monitor | Microsoft Community Hub

### Build a Native Image application

Building a native image application takes a long time (though it has got quicker over time). If building it for testing purposes, we strongly recommend enabling Quick Build and setting the optimization level with the -Ob option (although this will still take time). See below for more information.

- Maven plugin for GraalVM Native Image
- Gradle plugin for GraalVM Native Image
- Optimizations and Performance

### Test as a native image application

Let's check if the application works. To check the inventory of desktops, execute the following call.

```bash
curl https://<container apps URL and port>/store/inventory/desktop
```

We should receive a response like this.

```json
{"warehouse":7,"item":"desktop","store":2}
```

In the Azure Monitor (Application Insights) Application Map, we can observe this interaction visually. Switching to the trace page shows us traces and custom properties on the right of the screen.

Then, we add an order. For example, if we place an order for five desktops and receive 202 Accepted, we call the inventory check API again. It will show that the number has increased by five, so the desktop count has changed to seven (it was originally two).

```bash
$ curl -X "POST" "https://<container apps URL and port>/store/order" \
  -H 'Content-Type: application/json; charset=utf-8' \
  -d $'{"item":"desktop", "count":5}'

$ curl https://<container apps URL and port>/store/inventory/desktop
```

Within azuremonitor-trace, an HTTP client is used internally to execute POST /warehouse/order. Looking at the Application Map in Azure Monitor (Application Insights), we can confirm that a call to azuremonitor-trace itself is occurring. The trace at the time of order placement is as follows. Clicking 'View all' in the red frame, we can check the details of each trace.
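Besides the spans created automatically for HTTP calls, Micronaut Tracing also provides annotations for marking our own methods as spans. The sketch below is hypothetical (the service, method, and span names are made up for illustration) and assumes the tracing annotation support that ships with micronaut-tracing-opentelemetry is on the classpath:

```java
import io.micronaut.tracing.annotation.NewSpan;
import io.micronaut.tracing.annotation.SpanTag;
import jakarta.inject.Singleton;

@Singleton
public class OrderService {

    // Each call creates a child span named "place-order" under the current
    // HTTP server span, with the item recorded as a span attribute.
    @NewSpan("place-order")
    public void placeOrder(@SpanTag("order.item") String item, int count) {
        // business logic goes here; exceptions thrown in the method are recorded on the span
    }
}
```

The resulting span is exported through the same azure-tracing configuration, so it appears in the end-to-end transaction view alongside the automatically generated HTTP spans.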
## Send logs from Micronaut native image applications to Azure Monitor

The original post (Japanese) was written on 29 July 2025: MicronautからAzure Monitorにlogを送信したい ("Send logs from Micronaut to Azure Monitor") – Logico Inside

This entry is related to the following one. Please take a look for background information.

Send signals from Micronaut native image applications to Azure Monitor | Microsoft Community Hub

### Where can we post logs?

The log destination differs depending on the managed service (App Service, Container Apps, etc.). We can also send logs to a destination other than the default one. In the case of Azure Container Apps, for instance, we have several options.

| Type | Destination | How to |
|---|---|---|
| Write console output to a log | ContainerAppConsoleLogs_CL (if diagnostic settings are configured, the destination table may differ) | The output destination can be changed in the diagnostic settings. This is handled by Container Apps, so no user action is required. |
| Use a DCE (Data Collection Endpoint) to write logs to a custom table in a Log Analytics Workspace | Custom tables in a Log Analytics Workspace | Follow the tutorials listed below: Publish Application Logs to Azure Monitor Logs; Publish Micronaut application logs to Microsoft Azure Monitor Logs |
| Use the Log Appender | traces table in Application Insights | When writing logs to the traces table in Application Insights, Log Appender configuration is required. |

Log storage and monitoring options in Azure Container Apps

From now on, we elaborate the third way: writing logs to the traces table in Application Insights.

### Prerequisites

- Maven: 3.9.10
- JDK: 21
- Micronaut: 4.9.0 or later

Logs written with the following four logging libraries are collected automatically. In this entry, we use Logback.

- Log4j2
- Logback
- JBoss Logging
- java.util.logging

### Create an Azure resource (Application Insights)

Create a resource group and configure Application Insights. Refer to the following documentation for details.

Create and configure Application Insights resources - Azure Monitor

That's it for the Azure setup.

### Create an archetype

We can create an archetype using Micronaut's CLI (`mn`) or Micronaut Launch. In this entry, we use application.yml instead of application.properties for application configuration, so we need to specify the feature "yaml" to include the dependencies for using YAML.

Micronaut Launch

```bash
mn create-app \
  --build=maven \
  --jdk=21 \
  --lang=java \
  --test=junit \
  --features=graalvm,azure-tracing,yaml \
  dev.logicojp.micronaut.azuremonitor-log
```

When using Micronaut Launch, click [FEATURES] and select the following features.

- graalvm
- azure-tracing
- yaml

After all features are selected, click [GENERATE PROJECT] and choose [Download Zip] to download the archetype as a Zip file.

### Add dependencies and plugins to pom.xml

In order to output logs to Application Insights, the following dependencies must be added.

```xml
<dependency>
  <groupId>io.opentelemetry.instrumentation</groupId>
  <artifactId>opentelemetry-logback-appender-1.0</artifactId>
</dependency>
<dependency>
  <groupId>com.microsoft.azure</groupId>
  <artifactId>applicationinsights-logging-logback</artifactId>
</dependency>
<dependency>
  <groupId>io.micronaut.tracing</groupId>
  <artifactId>micronaut-tracing-opentelemetry-http</artifactId>
</dependency>
```

In this entry we are using Logback for log output, so we use opentelemetry-logback-appender-1.0. Should you be using a different logging library, it will be necessary to specify an appropriate appender for that library.
The dependency com.azure:azure-monitor-opentelemetry-autoconfigure is included transitively, since the azure-tracing feature (io.micronaut.azure:micronaut-azure-tracing) depends upon it. If Azure tracing has not been added, the following dependency must be added explicitly.

```xml
<dependency>
  <groupId>com.azure</groupId>
  <artifactId>azure-monitor-opentelemetry-autoconfigure</artifactId>
</dependency>
```

Additionally, we need to add this dependency to use the GraalVM Reachability Metadata Repository. The latest version is 0.11.0 as of 29 July 2025.

```xml
<dependency>
  <groupId>org.graalvm.buildtools</groupId>
  <artifactId>graalvm-reachability-metadata</artifactId>
  <version>0.11.0</version>
  <classifier>repository</classifier>
  <type>zip</type>
</dependency>
```

Add the GraalVM Maven plugin and enable the use of the GraalVM Reachability Metadata obtained from the above dependency. This plugin lets us set optimization levels using buildArg (in this example, the optimization level is specified). We can also add it to native-image.properties; the native-image tool (and the Maven/Gradle plugin) will read it.

```xml
<plugin>
  <groupId>org.graalvm.buildtools</groupId>
  <artifactId>native-maven-plugin</artifactId>
  <configuration>
    <metadataRepository>
      <enabled>true</enabled>
    </metadataRepository>
    <buildArgs combine.children="append">
      <buildArg>-Ob</buildArg>
    </buildArgs>
    <quickBuild>true</quickBuild>
  </configuration>
</plugin>
```

### Application configuration

In order to proceed, it is necessary to include both the Application Insights-specific settings and the Azure tracing settings. For the Azure tracing settings, please refer to the entry below.

Send traces from Micronaut native image applications to Azure Monitor | Microsoft Community Hub

For the Application Insights-specific settings, please refer to the documentation provided.

Configuration options - Azure Monitor Application Insights for Java - Azure Monitor

According to the documentation, when specifying a connection string:

> You can also set the connection string by using the environment variable APPLICATIONINSIGHTS_CONNECTION_STRING. It then takes precedence over the connection string specified in the JSON configuration.
> Or you can set the connection string by using the Java system property applicationinsights.connection.string. It also takes precedence over the connection string specified in the JSON configuration.

Initially, it may appear that there is no alternative but to use environment variables or Java system properties. However, in the case of Micronaut (and similarly for Spring Boot and Quarkus), the connection string can be configured using the relationship between application settings and environment variables, which allows it to be defined in application.properties or application.yml. For instance, to set the connection string above through an environment variable we would use APPLICATIONINSIGHTS_CONNECTION_STRING; in Micronaut, we can instead specify it under applicationinsights.connection.string in the following application.yml example (the key matches the one used when setting it as a system property).
The configuration of application.yml, including the Application Insights-specific settings, is as follows:

```yaml
applicationinsights:
  connection:
    string: ${AZURE_MONITOR_CONNECTION_STRING}
  sampling:
    percentage: 100
  instrumentation:
    logging:
      level: "INFO"
  preview:
    captureLogbackMarker: true
    captureControllerSpans: true
azure:
  tracing:
    connection-string: ${AZURE_MONITOR_CONNECTION_STRING}
```

### Code

a) Enabling Application Insights

We need to explicitly create an OpenTelemetry object to send logs. Please note that while azure-tracing enables Application Insights, the OpenTelemetry object generated during that process is not publicly accessible and cannot be retrieved from outside.

```java
AutoConfiguredOpenTelemetrySdkBuilder sdkBuilder = AutoConfiguredOpenTelemetrySdk.builder();
AzureMonitorAutoConfigure.customize(sdkBuilder, "connectionString");
OpenTelemetry openTelemetry = sdkBuilder.build().getOpenTelemetrySdk();
```

b) Log Appender

When we create the archetype, src/main/resources/logback.xml should be generated. In this file, add an appender for the io.opentelemetry.instrumentation.logback.appender.v1_0.OpenTelemetryAppender class.

```xml
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <!-- encoders are assigned the type ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
        <encoder>
            <pattern>%cyan(%d{HH:mm:ss.SSS}) %gray([%thread]) %highlight(%-5level) %magenta(%logger{36}) - %msg%n
            </pattern>
        </encoder>
    </appender>
    <appender name="OpenTelemetry"
              class="io.opentelemetry.instrumentation.logback.appender.v1_0.OpenTelemetryAppender">
        <captureExperimentalAttributes>true</captureExperimentalAttributes>
        <captureCodeAttributes>true</captureCodeAttributes>
        <captureMarkerAttribute>true</captureMarkerAttribute>
        <captureKeyValuePairAttributes>true</captureKeyValuePairAttributes>
        <captureMdcAttributes>*</captureMdcAttributes>
    </appender>
    <root level="info">
        <appender-ref ref="STDOUT"/>
        <appender-ref ref="OpenTelemetry"/>
    </root>
</configuration>
```

Then, associate the OpenTelemetry object we created earlier with the Log Appender so that logs can be sent using OpenTelemetry.

```java
OpenTelemetryAppender.install(openTelemetry);
```

c) Other implementation

The objective of this article is to verify traces and trace logs. To that end, we develop a rudimentary REST API, akin to a "Hello World" application, but we use the logger to generate multiple logs. In a real-world application, we would likely refine this to avoid generating excessive logs. For example, HelloController.java is shown below.
```java
package dev.logicojp.micronaut;

import io.micronaut.http.HttpStatus;
import io.micronaut.http.MediaType;
import io.micronaut.http.annotation.*;
import io.micronaut.http.exceptions.HttpStatusException;
import io.micronaut.scheduling.TaskExecutors;
import io.micronaut.scheduling.annotation.ExecuteOn;
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.instrumentation.logback.appender.v1_0.OpenTelemetryAppender;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Controller("/api/hello")
@ExecuteOn(TaskExecutors.IO)
public class HelloController {

    private static final Logger logger = LoggerFactory.getLogger(HelloController.class);

    public HelloController(OpenTelemetry _openTelemetry) {
        OpenTelemetryAppender.install(_openTelemetry);
        logger.info("OpenTelemetry is configured and ready to use.");
    }

    @Get
    @Produces(MediaType.APPLICATION_JSON)
    public GreetingResponse hello(@QueryValue(value = "name", defaultValue = "World") String name) {
        logger.info("Hello endpoint was called with query parameter: {}", name);
        // Simulate some processing
        HelloService helloService = new HelloService();
        GreetingResponse greetingResponse = helloService.greet(name);
        logger.info("Processing complete, returning response");
        return greetingResponse;
    }

    @Post
    @Consumes(MediaType.APPLICATION_JSON)
    @Produces(MediaType.APPLICATION_JSON)
    @Status(HttpStatus.ACCEPTED)
    public void setGreetingPrefix(@Body GreetingPrefix greetingPrefix) {
        String prefix = greetingPrefix.prefix();
        if (prefix == null || prefix.isBlank()) {
            logger.error("Received request to set an empty or null greeting prefix.");
            throw new HttpStatusException(HttpStatus.BAD_REQUEST, "Prefix cannot be null or empty");
        }
        HelloService helloService = new HelloService();
        helloService.setGreetingPrefix(prefix);
        logger.info("Greeting prefix set to: {}", prefix);
    }
}
```

For now, let's build it as a Java application.

```bash
mvn clean package
```

### Test as a Java application

Please verify that the application is running without any issues, and:

- that traces are being sent to Application Insights
- that logs are being sent to the traces table
- that they can be confirmed on the trace screen

If the call is GET /api/hello?name=Logico_jp, the traces table will look like this. In the trace view, it should resemble this structure, in conjunction with the request.

Then, run the application using the Tracing Agent to generate the necessary configuration files.

```bash
# (1) Collect configuration files such as reflect-config.json
$JAVA_HOME/bin/java \
  -agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/{groupId}/{artifactId}/ \
  -jar ./target/{artifactId}-{version}.jar

# (2)-a Generate a trace file
$JAVA_HOME/bin/java \
  -agentlib:native-image-agent=trace-output=/path/to/trace-file.json \
  -jar ./target/{artifactId}-{version}.jar

# (2)-b Generate a reachability metadata file from the collected trace file
native-image-configure generate \
  --trace-input=/path/to/trace-file.json \
  --output-dir=/path/to/config-dir/
```

- Configure Native Image with the Tracing Agent
- Collect Metadata with the Tracing Agent

The agent generates the following files in the specified directory.

- jni-config.json
- reflect-config.json
- proxy-config.json
- resource-config.json
- reachability-metadata.json

These files can be located at src/main/resources/META-INF/native-image; the native-image tool picks up configuration files located in that directory. However, it is recommended to place the files in subdirectories divided by groupId and artifactId, as shown below.
```
src/main/resources/META-INF/native-image/{groupId}/{artifactId}
```

### native-image.properties

When creating a native image, we call the following command.

```bash
mvn package -Dpackaging=native-image
```

We should specify the timing of class initialization (build time or runtime), the command-line options for the native-image tool (the same command-line options work in the Maven/Gradle plugin), and the JVM arguments in the native-image.properties file. These settings can indeed be specified in pom.xml, but it is recommended that they be externalized. This is also explained in the metric entry, so some details are left out here. If needed, please check the metric entry:

Send metrics from Micronaut native image applications to Azure Monitor | Microsoft Community Hub

### Build a Native Image application

Building a native image application takes a long time (though it has got quicker over time). If building it for testing purposes, we strongly recommend enabling Quick Build and setting the optimization level with the -Ob option (although this will still take time). See below for more information.

- Maven plugin for GraalVM Native Image
- Gradle plugin for GraalVM Native Image
- Optimizations and Performance

### Test as a native image application

Verify that this application works the same as a normal Java application. For example, call GET /api/hello?name=xxxx, GET /api/hello?name=, GET /api/hello, and POST /api/hello.

### Check if traces and logs are visible in Azure Monitor (Application Insights)

When reviewing the traces table in Application Insights, it becomes evident that four records were added at 3:14 p.m. When checking traces, we can see that the logs have indeed been added to the trace. Naturally, the occurrence times remain consistent.

### Summary

I have outlined the process of writing to the traces table in Application Insights. It should be noted that some code is necessary to configure the Log Appender, so strictly speaking zero-code instrumentation cannot be achieved. However, the actual configuration is relatively minor, so implementation is not difficult.
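As the summary says, the appender wiring is the only code that is strictly required. One way to keep that wiring out of individual controllers is to install it once at startup, for example from an event listener. The sketch below is an assumed alternative to the constructor-based setup shown earlier, not part of the sample; the import path for AzureMonitorAutoConfigure is inferred from the azure-monitor-opentelemetry-autoconfigure artifact, while the builder and install calls are the same ones used above.

```java
import com.azure.monitor.opentelemetry.autoconfigure.AzureMonitorAutoConfigure;
import io.micronaut.context.annotation.Requires;
import io.micronaut.context.annotation.Value;
import io.micronaut.context.event.ApplicationEventListener;
import io.micronaut.context.event.StartupEvent;
import io.opentelemetry.instrumentation.logback.appender.v1_0.OpenTelemetryAppender;
import io.opentelemetry.sdk.autoconfigure.AutoConfiguredOpenTelemetrySdk;
import io.opentelemetry.sdk.autoconfigure.AutoConfiguredOpenTelemetrySdkBuilder;
import jakarta.inject.Singleton;

@Singleton
@Requires(property = "applicationinsights.connection.string")
public class OpenTelemetryLogInstaller implements ApplicationEventListener<StartupEvent> {

    private final String connectionString;

    public OpenTelemetryLogInstaller(
            @Value("${applicationinsights.connection.string}") String connectionString) {
        this.connectionString = connectionString;
    }

    @Override
    public void onApplicationEvent(StartupEvent event) {
        // Build the SDK, point it at Application Insights, and hand it to the
        // Logback appender exactly once, before request handling starts.
        AutoConfiguredOpenTelemetrySdkBuilder builder = AutoConfiguredOpenTelemetrySdk.builder();
        AzureMonitorAutoConfigure.customize(builder, connectionString);
        OpenTelemetryAppender.install(builder.build().getOpenTelemetrySdk());
    }
}
```

With this in place, controllers only need an ordinary SLF4J logger; nothing OpenTelemetry-specific leaks into them.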
## Send signals from Micronaut applications to Azure Monitor through zero-code instrumentation

The original post (Japanese) was written on 13 August 2025: Zero code instrumentationでMicronautアプリケーションからAzure Monitorにtraceやmetricを送信したい ("Send traces and metrics from Micronaut applications to Azure Monitor with zero-code instrumentation") – Logico Inside

This entry is part of the series below. Please take a look for background information.

Send signals from Micronaut native image applications to Azure Monitor | Microsoft Community Hub

I received another question from a customer:

> I understand that I can get metrics and traces, but is it possible to send them to Azure Monitor (Application Insights) without using code?

If you are not familiar with zero-code instrumentation, please check the following URL.

Zero-code | OpenTelemetry

The customer wondered whether the dependencies would take care of everything else when they only specified the dependencies and destinations. To confirm this (and to provide a sample), we have prepared the following environment. As I wrote in the previous post, logs are dealt with in a different way depending on how the application is hosted (IaaS, PaaS, etc.), so they are not included in this example.

This example is a REST API application that can be used to find, add, change, and delete movie information. It uses PostgreSQL as a data store and sends telemetry to Azure Monitor, specifically Application Insights. You can find the code below.

GitHub - anishi1222/micronaut-telemetry-movie: Zero code instrumentation (Azure Monitor, GraalVM Native Image, and Micronaut)

### Prerequisites

- Maven: 3.9.10
- JDK: 21
- Micronaut: 4.9.0 or later

We also need to provision an instance of Azure Monitor (Application Insights) and PostgreSQL Flexible Server.

### Create an archetype

We can create an archetype using Micronaut's CLI (`mn`) or Micronaut Launch. In this entry, we use application.yml instead of application.properties for application configuration, so we need to specify the feature "yaml" to include the dependencies for using YAML. The following features are needed when creating an archetype for this app.

- graalvm
- management
- micrometer-azure-monitor
- azure-tracing
- yaml
- validation
- postgres
- jdbc-hikari
- data-jpa

### Dependencies

The basics of sending traces and metrics are as described in the previous two entries. In this post, we want to obtain traces for HTTP and JDBC connections, so we add the following two dependencies.

```xml
<dependency>
  <groupId>io.micronaut.tracing</groupId>
  <artifactId>micronaut-tracing-opentelemetry-http</artifactId>
</dependency>
<dependency>
  <groupId>io.micronaut.tracing</groupId>
  <artifactId>micronaut-tracing-opentelemetry-jdbc</artifactId>
</dependency>
```

Additionally, we need to add this dependency to use the GraalVM Reachability Metadata Repository. The latest version is 0.11.0 as of 13 August 2025.

```xml
<dependency>
  <groupId>org.graalvm.buildtools</groupId>
  <artifactId>graalvm-reachability-metadata</artifactId>
  <version>0.11.0</version>
</dependency>
```

Add the GraalVM Maven plugin and enable the use of the GraalVM Reachability Metadata obtained from the above dependency. This plugin lets us set optimization levels using buildArg (in this example, the optimization level is specified). We can also add it to native-image.properties; the native-image tool (and the Maven/Gradle plugin) will read it.
```xml
<plugin>
  <groupId>org.graalvm.buildtools</groupId>
  <artifactId>native-maven-plugin</artifactId>
  <configuration>
    <metadataRepository>
      <enabled>true</enabled>
    </metadataRepository>
    <buildArgs combine.children="append">
      <buildArg>-Ob</buildArg>
    </buildArgs>
    <quickBuild>true</quickBuild>
  </configuration>
</plugin>
```

### Application configuration

This app connects to a database and to Azure Monitor, so we need the following information:

- the database the app connects to
- Azure Monitor related information

1) Database

We specify the data source information in application.yml.

2) Azure Monitor

Set the connection string for Application Insights. Because of dependency issues, the connection string has to be set in different locations for metrics and traces, which is a bit inconvenient. It is recommended to pass it via an environment variable to keep it as common as possible. Here is a sample application.yml.

```yaml
micronaut:
  application:
    name: micronaut-telemetry-movie
  metrics:
    enabled: true
    binders:
      files:
        enabled: true
      jdbc:
        enabled: true
      jvm:
        enabled: true
      logback:
        enabled: true
      processor:
        enabled: true
      uptime:
        enabled: true
      web:
        enabled: true
    export:
      azuremonitor:
        enabled: true
        step: PT1M
        connectionString: ${AZURE_MONITOR_CONNECTION_STRING}
datasources:
  default:
    driver-class-name: org.postgresql.Driver
    db-type: postgres
    url: ${JDBC_URL}
    username: ${JDBC_USERNAME}
    password: ${JDBC_PASSWORD}
    dialect: POSTGRES
    schema-generate: CREATE_DROP
    hikari:
      connection-test-query: SELECT 1
      connection-init-sql: SELECT 1
      connection-timeout: 10000
      idle-timeout: 30000
      auto-commit: true
      leak-detection-threshold: 2000
      maximum-pool-size: 10
      max-lifetime: 60000
      transaction-isolation: TRANSACTION_READ_COMMITTED
azure:
  tracing:
    connection-string: ${AZURE_MONITOR_CONNECTION_STRING}
otel:
  exclusions: /health, /info, /metrics, /actuator/health, /actuator/info, /actuator/metrics
```

For now, let's build it as a Java application.

### Test as a Java application

Make sure the application is running smoothly, that traces are being sent to Application Insights, and that metrics are being output. Now, run the application using the Tracing Agent and create the necessary configuration files.

```bash
# (1) Collect configuration files such as reflect-config.json
$JAVA_HOME/bin/java \
  -agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/{groupId}/{artifactId}/ \
  -jar ./target/{artifactId}-{version}.jar

# (2)-a Generate a trace file
$JAVA_HOME/bin/java \
  -agentlib:native-image-agent=trace-output=/path/to/trace-file.json \
  -jar ./target/{artifactId}-{version}.jar

# (2)-b Generate a reachability metadata file from the collected trace file
native-image-configure generate \
  --trace-input=/path/to/trace-file.json \
  --output-dir=/path/to/config-dir/
```

- Configure Native Image with the Tracing Agent
- Collect Metadata with the Tracing Agent

The agent generates the following files in the specified directory.

- jni-config.json
- reflect-config.json
- proxy-config.json
- resource-config.json
- reachability-metadata.json

These files can be located at src/main/resources/META-INF/native-image; the native-image tool picks up configuration files located in that directory. However, it is recommended to place the files in subdirectories divided by groupId and artifactId, as shown below.

```
src/main/resources/META-INF/native-image/{groupId}/{artifactId}
```

### native-image.properties

When creating a native image, we call the following command.
```bash
mvn package -Dpackaging=native-image
```

We should specify the timing of class initialization (build time or runtime), the command-line options for the native-image tool (the same command-line options work in the Maven/Gradle plugin), and the JVM arguments in the native-image.properties file. These settings can indeed be specified in pom.xml, but it is recommended that they be externalized. This is also explained in the metric entry, so some details are left out here. If needed, please check the metric entry.

Send metrics from Micronaut native image applications to Azure Monitor | Microsoft Community Hub

### Build a Native Image application

Building a native image application takes a long time (though it has got quicker over time). If building it for testing purposes, we strongly recommend enabling Quick Build and setting the optimization level with the -Ob option (although this will still take time). See below for more information.

- Maven plugin for GraalVM Native Image
- Gradle plugin for GraalVM Native Image
- Optimizations and Performance

### Test as a native image application

Let's check if the application works. First, we populate initial data with the following command, which adds three records.

```bash
curl -X PUT https://<container apps URL>/api/movies
```

```json
{ "message": "Database initialized with default movies." }
```

Now let's verify that the three records exist.

```bash
curl https://<container apps URL>/api/movies
```

```json
[
  {
    "id": 1,
    "title": "Inception",
    "releaseYear": 2010,
    "directors": "Christopher Nolan",
    "actors": "Leonardo DiCaprio, Joseph Gordon-Levitt, Elliot Page"
  },
  {
    "id": 2,
    "title": "The Shawshank Redemption",
    "releaseYear": 1994,
    "directors": "Frank Darabont",
    "actors": "Tim Robbins, Morgan Freeman, Bob Gunton"
  },
  {
    "id": 3,
    "title": "The Godfather",
    "releaseYear": 1972,
    "directors": "Francis Ford Coppola",
    "actors": "Marlon Brando, Al Pacino, James Caan"
  }
]
```

(1) Azure Monitor (Application Insights)

We should see images like this.

(2) Metrics

We can see which metrics are available with the API call GET /metrics.

```json
{
  "names": [
    "executor",
    "executor.active",
    "executor.completed",
    "executor.pool.core",
    "executor.pool.max",
    "executor.pool.size",
    "executor.queue.remaining",
    "executor.queued",
    "hikaricp.connections",
    "hikaricp.connections.acquire",
    "hikaricp.connections.active",
    "hikaricp.connections.creation",
    "hikaricp.connections.idle",
    "hikaricp.connections.max",
    "hikaricp.connections.min",
    "hikaricp.connections.pending",
    "hikaricp.connections.timeout",
    "hikaricp.connections.usage",
    "http.server.requests",
    "jvm.classes.loaded",
    "jvm.classes.unloaded",
    "jvm.memory.committed",
    "jvm.memory.max",
    "jvm.memory.used",
    "jvm.threads.daemon",
    "jvm.threads.live",
    "jvm.threads.peak",
    "jvm.threads.started",
    "jvm.threads.states",
    "logback.events",
    "process.cpu.time",
    "process.cpu.usage",
    "process.files.max",
    "process.files.open",
    "process.start.time",
    "process.uptime",
    "system.cpu.count",
    "system.cpu.usage",
    "system.load.average.1m"
  ]
}
```

But because this is a native image application, we can't get the right information about the JVM. For example, if we invoke the API with GET /metrics/jvm.memory.max, we will see the following. What does -2 mean?
```json
{
  "name": "jvm.memory.max",
  "measurements": [
    { "statistic": "VALUE", "value": -2.0 }
  ],
  "availableTags": [
    { "tag": "area", "values": [ "nonheap" ] },
    {
      "tag": "id",
      "values": [
        "runtime code cache (native metadata)",
        "runtime code cache (code and data)"
      ]
    }
  ],
  "description": "The maximum amount of memory in bytes that can be used for memory management",
  "baseUnit": "bytes"
}
```

To find out how much the CPU is being used, run GET /metrics/process.cpu.usage, and we'll get this result.

```json
{
  "name": "process.cpu.usage",
  "measurements": [
    { "statistic": "VALUE", "value": 0.0017692156477295067 }
  ],
  "description": "The \"recent cpu usage\" for the Java Virtual Machine process"
}
```

### To add logs to the Azure Monitor "traces" table...

Some of you might want to combine the approach in the following entry with zero-code instrumentation, but currently you cannot.

Send logs from Micronaut native image applications to Azure Monitor | Microsoft Community Hub

This is because we cannot get the OpenTelemetry object needed to write to the Application Insights traces table, so it must be declared explicitly. The following example explicitly declares and sets up the appender in the MovieController constructor. The way the appender is set up is not covered here, as it was explained before.

```java
@Inject
AzureTracingConfigurationProperties azureTracingConfigurationProperties;

private static final Logger logger = LoggerFactory.getLogger(MovieController.class);

public MovieController(AzureTracingConfigurationProperties azureTracingConfigurationProperties) {
    this.azureTracingConfigurationProperties = azureTracingConfigurationProperties;
    AutoConfiguredOpenTelemetrySdkBuilder sdkBuilder = AutoConfiguredOpenTelemetrySdk.builder();
    AzureMonitorAutoConfigure.customize(sdkBuilder, azureTracingConfigurationProperties.getConnectionString());
    OpenTelemetryAppender.install(sdkBuilder.build().getOpenTelemetrySdk());
    logger.info("OpenTelemetry configured for MovieController.");
}
```

Although explicit declaration is required, logs will be recorded in the traces table as long as this setting is enabled.
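Setting the log appender aside, the core of this entry is that the trace and metric signals themselves need no application code at all. The sketch below is a hypothetical, trimmed-down entity/repository/controller trio (the real classes live in the linked GitHub repository and look different): with the dependencies and configuration above, each request produces an HTTP server span and correlated JDBC client spans in Application Insights without a single tracing statement.

```java
import io.micronaut.data.annotation.Repository;
import io.micronaut.data.repository.CrudRepository;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import java.util.List;

// Illustrative entity, trimmed to two columns; the real project defines more fields.
@Entity
class Film {
    @Id
    @GeneratedValue
    private Long id;
    private String title;

    public Long getId() { return id; }
    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
}

// Micronaut Data generates the JPA queries; micronaut-tracing-opentelemetry-jdbc
// records every SQL statement issued here as a client span.
@Repository
interface FilmRepository extends CrudRepository<Film, Long> {
    @Override
    List<Film> findAll(); // covariant override so callers get a List back
}

// micronaut-tracing-opentelemetry-http wraps each request in a server span, so a
// call to GET /api/films/titles yields a correlated HTTP + JDBC trace in
// Application Insights without any tracing code in the class itself.
@Controller("/api/films")
class FilmController {

    private final FilmRepository films;

    FilmController(FilmRepository films) {
        this.films = films;
    }

    @Get("/titles")
    public List<String> titles() {
        return films.findAll().stream().map(Film::getTitle).toList();
    }
}
```

The same applies to metrics: the hikaricp.* and http.server.requests meters shown earlier come from the configured binders, not from code in these classes.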
## Announcing a flexible, predictable billing model for Azure SRE Agent

Billing for Azure SRE Agent will start on September 1, 2025. Announced at Microsoft Build 2025, Azure SRE Agent is a pre-built AI agent for root cause analysis, uptime improvement, and operational cost reduction. Learn more about the billing model and example scenarios.
## Announcing Native Azure Functions Support in Azure Container Apps

Azure Container Apps is introducing a new, streamlined method for running Azure Functions directly in Azure Container Apps (ACA). This integration allows you to leverage the full features and capabilities of Azure Container Apps while benefiting from the simple auto-scaling provided by Azure Functions. With the new native hosting model, you can deploy Azure Functions directly onto Azure Container Apps using the Microsoft.App resource provider by setting the "kind=functionapp" property on the container app resource. You can deploy Azure Functions using ARM templates, Bicep, the Azure CLI, and the Azure portal. Get started today and explore the complete feature set of Azure Container Apps, including multi-revision management, easy authentication, metrics and alerting, health probes, and more. To learn more, visit: https://aka.ms/fnonacav2