
Azure Observability Blog
5 MIN READ

Azure Monitor Application Insights Auto-Instrumentation for Java and Node Microservices on AKS

abinetabate
Apr 15, 2025

Key Takeaways (TL;DR)

  • Monitor Java and Node applications with zero code changes
  • Fast onboarding: just 2 steps
  • Supports distributed tracing, logs, and metrics
  • Correlates application-level telemetry in Application Insights with infrastructure-level telemetry in Container Insights
  • Available today in public preview

Introduction

Monitoring your applications is now easier than ever with the public preview release of Auto-Instrumentation for Azure Kubernetes Service (AKS). You can now easily monitor your Java and Node deployments without changing your code by leveraging auto-instrumentation that is integrated into the AKS cluster.

This feature is ideal for developers and operators who are:

  • Looking to add monitoring in the easiest way possible, without modifying code and avoiding ongoing SDK update maintenance.
  • Starting out on their monitoring journey and looking to benefit from carefully chosen default configurations with the ability to tweak them over time.
  • Working with someone else’s code and looking to instrument at scale.
  • Considering monitoring for the first time, right at deployment.

Before the introduction of this feature, users needed to manually instrument code, install language-specific SDKs, and manage updates on their own—a process that involved significant effort and numerous opportunities for errors. Now, all you need to do is follow a simple two-step process to instrument your applications and automatically send correlated OpenTelemetry-based application-level logs, metrics, and distributed tracing to your Application Insights resource.

With AKS Auto-Instrumentation, you will be able to assess the performance of your application and identify the cause of any incidents more efficiently using the robust application performance monitoring capabilities of Azure Monitor Application Insights. This streamlined approach not only saves time but also ensures that your monitoring setup is both reliable and scalable.

Feature Enablement and Onboarding

To onboard to this feature, you will need to follow a two-step process:

  1. Prepare your cluster by installing the application monitoring webhook.
  2. Choose between namespace-wide onboarding or per-deployment onboarding by creating Kubernetes custom resources.
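As a rough sketch, step 1 looks something like the following Azure CLI command. The flag name reflects the public preview and may change, so verify it against the current documentation before running:

```shell
# Step 1: prepare the cluster by installing the application
# monitoring webhook. The --enable-azure-monitor-app-monitoring
# flag is based on the public preview docs; confirm the exact
# flag name with `az aks update --help` on a current CLI.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-azure-monitor-app-monitoring
```

Here `myResourceGroup` and `myAKSCluster` are placeholders for your own resource group and cluster names.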

Namespace-wide onboarding is the easiest method. It allows you to instrument all Java or Node deployments in your namespace and direct telemetry to a single Application Insights resource. Per-deployment onboarding allows more control by targeting specific deployments and directing telemetry to different Application Insights resources.
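For illustration, a namespace-wide custom resource might look like the sketch below. The apiVersion, kind, and field names are taken from the public preview and should be treated as assumptions; check the official documentation for the current schema and connection string format:

```shell
# Sketch of a namespace-wide Instrumentation custom resource.
# All schema details here are preview-era assumptions.
kubectl apply -f - <<'EOF'
apiVersion: monitor.azure.com/v1
kind: Instrumentation
metadata:
  name: default
  namespace: demoapp
spec:
  settings:
    autoInstrumentationPlatforms:
      - Java
      - NodeJs
  destination:
    applicationInsightsConnectionString: "<your-connection-string>"
EOF

# Redeploy so the webhook can inject the instrumentation:
kubectl rollout restart deployment -n demoapp
```

The `demoapp` namespace and `<your-connection-string>` placeholder are illustrative; per-deployment onboarding would instead target individual workloads, each potentially pointing at a different Application Insights resource.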

Once the custom resource is created, you will need to deploy or redeploy your application, and telemetry will start flowing to Application Insights.

For step-by-step instructions and to learn more about onboarding, visit our official documentation on Microsoft Learn.

The Application Insights experience

Once telemetry begins flowing, you can take advantage of Application Insights features such as Application Map, Failures/Performance Views, Availability, and more to help you efficiently diagnose and troubleshoot application issues.

Let’s look at an example:

I have an auto-instrumented distributed application running in the demoapp namespace of my AKS cluster. It consists of:

  • One Java microservice
  • Two Node.js microservices
  • MongoDB and Redis as its data layer

Scenario: End users have been complaining about some latency in the application.

As the DRI (Directly Responsible Individual), I can start my troubleshooting journey by going to the Application Map to get a topological view of my distributed application.

  1. I open Application Map and notice MicroserviceA has a red border: 50% of its calls are failing.
  2. The Container Insights card shows healthy pods, with no failures and no high CPU or memory usage, so I can rule out infrastructure issues as the cause of the slowness.
  3. In the Performance card, I spot that the rescuepet operation has an average duration of 10 seconds. That's pretty long.
  4. I drill in to get a distributed trace of the operation and find the root cause: an OutOfMemoryError.

In this scenario, the issue was identified as an out-of-memory error at the application layer. However, when the root cause is not in the code but in the infrastructure:

  1. I get a full set of resource properties with every distributed trace so I can easily identify the infra resources running each span of my trace.
  2. I can click the investigate pods button to transition to Azure Monitor Container Insights and investigate my pods further.

This correlation between application-level and infrastructure-level telemetry makes it much easier to determine whether the issue is caused by the application or the infrastructure.

Pricing

There is no additional cost to use AKS Auto-Instrumentation to send data to Azure Monitor. You are only charged for the telemetry you ingest, according to the current Azure Monitor pricing.

What’s Next

Language Support

This integration supports Java and Node.js workloads by leveraging the Azure Monitor OpenTelemetry distro. We have distros for .NET and Python as well, and we are working to integrate them into this solution. At that point, this integration will support .NET, Python, Java, and Node.js.

For customers who want to instrument workloads in other languages such as Go, Ruby, or PHP, we plan to leverage the open-source instrumentations available in the OpenTelemetry community. In this scenario, customers will instrument their code using open-source OpenTelemetry instrumentations, and we will provide mechanisms that make it easy to channel the telemetry to Application Insights: Application Insights will expose an endpoint that accepts OpenTelemetry Protocol (OTLP) signals, and the instrumented workload will be configured to send its telemetry to that endpoint.
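Workloads instrumented with the community SDKs would point at such an endpoint using the standard OpenTelemetry environment variables defined in the OTel specification. The Application Insights OTLP endpoint itself is not yet available, so the URL below is purely a placeholder:

```shell
# Standard OpenTelemetry exporter configuration (from the OTel
# SDK environment variable spec). The endpoint URL is a
# placeholder; the Application Insights OTLP endpoint is planned
# but not yet released.
export OTEL_EXPORTER_OTLP_ENDPOINT="https://<your-otlp-endpoint>"
export OTEL_EXPORTER_OTLP_PROTOCOL="grpc"
export OTEL_SERVICE_NAME="my-go-service"
```

Because these variables are part of the OpenTelemetry specification, the same configuration pattern works across Go, Ruby, PHP, and every other OTel-supported language.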

Operating Systems and Kubernetes Controllers

Right now, you can only instrument Kubernetes Deployments running on Linux node pools, but we plan to expand support to Linux ARM64 node pools as well as the StatefulSet, Job, CronJob, and ReplicaSet controller types.

Portal Experiences

We are also working on Azure portal experiences to make onboarding easier. When our portal experiences for onboarding are released, users will be able to install the Application Insights extension for AKS using the portal and use a portal user interface to instrument their workloads instead of having to create custom resources. Beyond onboarding, we are working to build Application Insights consumption experiences within the AKS namespace and workloads blade. You will be able to see application-level telemetry right there in the AKS portal without having to navigate away from your cluster to Application Insights.

 

FAQs:

  1. What are the advantages of AKS Auto-Instrumentation?
  • No code changes required
  • No access to source code required
  • No configuration changes required
  • Eliminates instrumentation maintenance

 

  2. What languages are supported by AKS Auto-Instrumentation?

Currently, AKS Auto-Instrumentation supports Java and Node.js applications. Python and .NET support is coming soon. Moreover, we will be adding support for all OTel supported languages like Go soon via native OTLP ingestion.

 

  3. Does AKS Auto-Instrumentation support custom metrics?

For Node.js applications, custom metrics require manual instrumentation with the Azure Monitor OpenTelemetry Distro. Java applications allow custom metrics with auto-instrumentation.

 

Click here for more FAQs.

 

This article was co-authored by Rishab Jolly and Abinet Abate

Updated Apr 16, 2025
Version 6.0