Bring Your Own License (BYOL) Support for JBoss EAP on Azure App Service
We’re excited to announce that Azure App Service now supports Bring Your Own License (BYOL) for JBoss Enterprise Application Platform (EAP), enabling enterprise customers to deploy Java workloads with greater flexibility and cost efficiency. If you’ve evaluated Azure App Service in the past, now is the perfect time to take another look. With BYOL support, you can leverage your existing Red Hat subscriptions to optimize costs and align with your enterprise licensing strategy.

Observe Quarkus Apps with Azure Application Insights using OpenTelemetry
Overview

This blog shows you how to observe Red Hat Quarkus applications with Azure Application Insights using OpenTelemetry. The application is a "to do list" with a JavaScript front end and a REST endpoint. Azure Database for PostgreSQL Flexible Server provides the persistence layer for the app. The app uses OpenTelemetry to instrument, generate, collect, and export telemetry data for observability. The blog guides you through testing the app locally, deploying it to Azure Container Apps, and observing its telemetry data with Azure Application Insights.

Prerequisites

- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin.
- A local machine with a Unix-like operating system installed - for example, Ubuntu, macOS, or Windows Subsystem for Linux.
- A Java SE implementation, version 17 - for example, the Microsoft Build of OpenJDK.
- Maven, version 3.9.8 or higher.
- Docker for your OS.
- The Azure CLI to run Azure CLI commands. Sign in to the Azure CLI by using the az login command; to finish the authentication process, follow the steps displayed in your terminal. For other sign-in options, see Sign into Azure with Azure CLI. When you're prompted, install the Azure CLI extension on first use. For more information about extensions, see Use and manage extensions with the Azure CLI. Run az version to find the version and dependent libraries that are installed, and run az upgrade to upgrade to the latest version. This blog requires at least version 2.65.0 of the Azure CLI.
Prepare the Quarkus app

Run the following commands to get the sample app app-insights-quarkus from GitHub:

```
git clone https://github.com/Azure-Samples/quarkus-azure
cd quarkus-azure
git checkout 2024-11-27
cd app-insights-quarkus
```

Here's the file structure of the application, with important files and directories:

```
├── src/main/
│   ├── java/io/quarkus/sample/
│   │   └── TodoResource.java
│   └── resources/
│       ├── META-INF/resources/
│       └── application.properties
├── pom.xml
```

The directory src/main/resources/META-INF/resources contains the front-end code for the application. It's a Vue.js front end where you can view, add, update, and delete todo items.

The src/main/java/io/quarkus/sample/TodoResource.java file implements the REST resource for the application. It uses the Jakarta REST API to expose the REST endpoints for the front end. Invocations of the REST endpoints are automatically instrumented by OpenTelemetry tracing. In addition, each REST endpoint uses org.jboss.logging.Logger to log messages, which are collected by OpenTelemetry logging. For example, the GET method for the /api endpoint, which returns all todo items, is shown in the following code snippet:

```java
package io.quarkus.sample;

import jakarta.inject.Inject;
import jakarta.transaction.Transactional;
import jakarta.validation.Valid;
import jakarta.ws.rs.*;
import jakarta.ws.rs.core.Response;
import jakarta.ws.rs.core.Response.Status;
import java.util.List;
import org.jboss.logging.Logger;

@Path("/api")
public class TodoResource {

    private static final Logger LOG = Logger.getLogger(TodoResource.class);

    @Inject
    TodoRepository todoRepository;

    @GET
    public List<Todo> getAll() {
        List<Todo> todos = todoRepository.findAll();
        LOG.info("Found " + todos.size() + " todos");
        return todos;
    }
}
```

The pom.xml file contains the project configuration, including the dependencies for the Quarkus application.
The application uses the following extensions to support OpenTelemetry:

```xml
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-opentelemetry</artifactId>
</dependency>
<dependency>
    <groupId>io.opentelemetry.instrumentation</groupId>
    <artifactId>opentelemetry-jdbc</artifactId>
</dependency>
<dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-exporter-logging</artifactId>
</dependency>
```

The src/main/resources/application.properties file contains the configuration for the Quarkus application, including database connection properties for production, such as the JDBC URL and username. It also includes the OpenTelemetry properties: enabling OpenTelemetry (including logs and JDBC instrumentation) at build time, using the logging exporter in development mode, and specifying the endpoint for the OpenTelemetry Protocol (OTLP) exporter in production mode. The following example shows the OpenTelemetry configuration:

```properties
quarkus.otel.enabled=true
quarkus.otel.logs.enabled=true
quarkus.datasource.jdbc.telemetry=true

%dev.quarkus.otel.logs.exporter=logging
%dev.quarkus.otel.traces.exporter=logging

%prod.quarkus.otel.exporter.otlp.endpoint=${OTEL_EXPORTER_OTLP_ENDPOINT}
```

Run the Quarkus app locally

Quarkus supports the automatic provisioning of unconfigured services in development mode. For more information, see Dev Services Overview in the Quarkus documentation.
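Each record that these exporters emit is stamped with a W3C Trace Context identifier pair: a 32-hex-character trace ID and a 16-hex-character span ID. This pair is how a backend such as Application Insights correlates a log line with the span that produced it, and an all-zero ID means no active span. The following standalone sketch illustrates the ID format; it is not part of the sample app, and the IDs in it are made up:

```java
// Illustrative only: shows the W3C Trace Context ID format that appears in
// the exported telemetry. The IDs below are hypothetical examples.
public class TraceContextSketch {

    // A valid trace ID is 32 lowercase hex characters and not all zeros.
    static boolean isValidTraceId(String id) {
        return id.matches("[0-9a-f]{32}") && !id.equals("0".repeat(32));
    }

    // A valid span ID is 16 lowercase hex characters and not all zeros.
    static boolean isValidSpanId(String id) {
        return id.matches("[0-9a-f]{16}") && !id.equals("0".repeat(16));
    }

    // Builds the traceparent header that propagates this context over HTTP.
    static String traceparent(String traceId, String spanId) {
        return "00-" + traceId + "-" + spanId + "-01";
    }

    public static void main(String[] args) {
        String traceId = "7cf260232ff81caf90abc354357c16ab"; // hypothetical
        String spanId = "c48a4a02e74e1901";                  // hypothetical
        System.out.println(isValidTraceId(traceId));         // prints true
        System.out.println(traceparent(traceId, spanId));
    }
}
```

When you read the console output in the next steps, look for this ID pair after each quoted message; records sharing the same trace ID belong to the same request.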
Now, run the following command to enter Quarkus dev mode, which automatically provisions a PostgreSQL database as a Docker container for the app:

```
mvn clean package quarkus:dev
```

The output should look like the following example:

```
2025-03-17 11:14:32,880 INFO  [io.qua.dat.dep.dev.DevServicesDatasourceProcessor] (build-26) Dev Services for default datasource (postgresql) started - container ID is 56acc7e1cb46
2025-03-17 11:14:32,884 INFO  [io.qua.hib.orm.dep.dev.HibernateOrmDevServicesProcessor] (build-4) Setting quarkus.hibernate-orm.database.generation=drop-and-create to initialize Dev Services managed database

__  ____  __  _____   ___  __ ____  ______
 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/
 -/ /_/ / /_/ / __ |/ , _/ ,<  / /_/ /\ \
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/

2025-03-17 11:14:36,202 INFO  [io.ope.exp.log.LoggingSpanExporter] (JPA Startup Thread) 'quarkus' : 80437b598962f82bffd0735bbf00e9f1 aa86f0553056a8c9 CLIENT [tracer: io.opentelemetry.jdbc:2.8.0-alpha] AttributesMap{data={db.name=quarkus, server.port=59406, server.address=localhost, db.connection_string=postgresql://localhost:59406, db.statement=set client_min_messages = WARNING, db.system=postgresql}, capacity=128, totalAddedValues=6}
1970-01-01T00:00:00Z INFO ''quarkus' : 80437b598962f82bffd0735bbf00e9f1 aa86f0553056a8c9 CLIENT [tracer: io.opentelemetry.jdbc:2.8.0-alpha] AttributesMap{data={db.name=quarkus, server.port=59406, server.address=localhost, db.connection_string=postgresql://localhost:59406, db.statement=set client_min_messages = WARNING, db.system=postgresql}, capacity=128, totalAddedValues=6}' : 00000000000000000000000000000000 0000000000000000 [scopeInfo: io.quarkus.opentelemetry:] {code.lineno=-1, log.logger.namespace="org.jboss.logmanager.Logger", thread.id=122, thread.name="JPA Startup Thread"}
2025-03-17 11:14:36,236 INFO  [io.ope.exp.log.LoggingSpanExporter] (JPA Startup Thread) 'DROP table quarkus' : 6b732661c29a9f0966403d49db9e4cff d86f29284f0d8eac CLIENT [tracer: io.opentelemetry.jdbc:2.8.0-alpha] AttributesMap{data={db.operation=DROP table, db.name=quarkus, server.port=59406, server.address=localhost, db.connection_string=postgresql://localhost:59406, db.statement=drop table if exists Todo cascade, db.system=postgresql}, capacity=128, totalAddedValues=7}
1970-01-01T00:00:00Z INFO ''DROP table quarkus' : 6b732661c29a9f0966403d49db9e4cff d86f29284f0d8eac CLIENT [tracer: io.opentelemetry.jdbc:2.8.0-alpha] AttributesMap{data={db.operation=DROP table, db.name=quarkus, server.port=59406, server.address=localhost, db.connection_string=postgresql://localhost:59406, db.statement=drop table if exists Todo cascade, db.system=postgresql}, capacity=128, totalAddedValues=7}' : 00000000000000000000000000000000 0000000000000000 [scopeInfo: io.quarkus.opentelemetry:] {code.lineno=-1, log.logger.namespace="org.jboss.logmanager.Logger", thread.id=122, thread.name="JPA Startup Thread"}
2025-03-17 11:14:36,259 INFO  [io.ope.exp.log.LoggingSpanExporter] (JPA Startup Thread) 'CREATE table quarkus' : 54df3edf9f523a71bc85d0106a57016c bb43aa63ec3526ed CLIENT [tracer: io.opentelemetry.jdbc:2.8.0-alpha] AttributesMap{data={db.operation=CREATE table, db.name=quarkus, server.port=59406, server.address=localhost, db.connection_string=postgresql://localhost:59406, db.statement=create table Todo (completed boolean not null, ordering integer, id bigint generated by default as identity, title varchar(?) unique, url varchar(?), primary key (id)), db.system=postgresql}, capacity=128, totalAddedValues=7}
1970-01-01T00:00:00Z INFO ''CREATE table quarkus' : 54df3edf9f523a71bc85d0106a57016c bb43aa63ec3526ed CLIENT [tracer: io.opentelemetry.jdbc:2.8.0-alpha] AttributesMap{data={db.operation=CREATE table, db.name=quarkus, server.port=59406, server.address=localhost, db.connection_string=postgresql://localhost:59406, db.statement=create table Todo (completed boolean not null, ordering integer, id bigint generated by default as identity, title varchar(?) unique, url varchar(?), primary key (id)), db.system=postgresql}, capacity=128, totalAddedValues=7}' : 00000000000000000000000000000000 0000000000000000 [scopeInfo: io.quarkus.opentelemetry:] {code.lineno=-1, log.logger.namespace="org.jboss.logmanager.Logger", thread.id=122, thread.name="JPA Startup Thread"}
2025-03-17 11:14:36,438 INFO  [io.quarkus] (Quarkus Main Thread) quarkus-todo-demo-app-insights 1.0.0-SNAPSHOT on JVM (powered by Quarkus 3.16.3) started in 7.409s. Listening on: http://localhost:8080
1970-01-01T00:00:00Z INFO 'quarkus-todo-demo-app-insights 1.0.0-SNAPSHOT on JVM (powered by Quarkus 3.16.3) started in 7.409s. Listening on: http://localhost:8080' : 00000000000000000000000000000000 0000000000000000 [scopeInfo: io.quarkus.opentelemetry:] {code.function="printStartupTime", code.lineno=109, code.namespace="io.quarkus.bootstrap.runner.Timing", log.logger.namespace="org.jboss.logging.Logger", thread.id=112, thread.name="Quarkus Main Thread"}
2025-03-17 11:14:36,441 INFO  [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated.
1970-01-01T00:00:00Z INFO 'Profile dev activated. Live Coding activated.' : 00000000000000000000000000000000 0000000000000000 [scopeInfo: io.quarkus.opentelemetry:] {code.function="printStartupTime", code.lineno=113, code.namespace="io.quarkus.bootstrap.runner.Timing", log.logger.namespace="org.jboss.logging.Logger", thread.id=112, thread.name="Quarkus Main Thread"}
2025-03-17 11:14:36,443 INFO  [io.quarkus] (Quarkus Main Thread) Installed features: [agroal, cdi, hibernate-orm, hibernate-validator, jdbc-postgresql, narayana-jta, opentelemetry, rest, rest-jackson, smallrye-context-propagation, vertx]
1970-01-01T00:00:00Z INFO 'Installed features: [agroal, cdi, hibernate-orm, hibernate-validator, jdbc-postgresql, narayana-jta, opentelemetry, rest, rest-jackson, smallrye-context-propagation, vertx]' : 00000000000000000000000000000000 0000000000000000 [scopeInfo: io.quarkus.opentelemetry:] {code.function="printStartupTime", code.lineno=115, code.namespace="io.quarkus.bootstrap.runner.Timing", log.logger.namespace="org.jboss.logging.Logger", thread.id=112, thread.name="Quarkus Main Thread"}
--
Tests paused
Press [e] to edit command line args (currently ''), [r] to resume testing, [o] Toggle test output, [:] for the terminal, [h] for more options>
```

The output shows that the Quarkus app is running in development mode. The app is listening on http://localhost:8080. The PostgreSQL database is automatically provisioned as a Docker container for the app. The OpenTelemetry instrumentation for Quarkus and JDBC is enabled, and the telemetry data is exported to the console.

Access the application GUI at http://localhost:8080. You should see a similar Todo app with an empty todo list, as shown in the following screenshot. Switch back to the terminal where Quarkus dev mode is running; you should see more OpenTelemetry data exported to the console.
For example, the following output shows the OpenTelemetry logging and tracing data for the GET method of the /api endpoint:

```
2025-03-17 11:15:13,785 INFO  [io.qua.sam.TodoResource] (executor-thread-1) Found 0 todos
1970-01-01T00:00:00Z INFO 'Found 0 todos' : 7cf260232ff81caf90abc354357c16ab c48a4a02e74e1901 [scopeInfo: io.quarkus.opentelemetry:] {code.function="getAll", code.lineno=25, code.namespace="io.quarkus.sample.TodoResource", log.logger.namespace="org.jboss.logging.Logger", parentId="c48a4a02e74e1901", thread.id=116, thread.name="executor-thread-1"}
2025-03-17 11:15:13,802 INFO  [io.ope.exp.log.LoggingSpanExporter] (vert.x-eventloop-thread-1) 'GET /api' : 7cf260232ff81caf90abc354357c16ab c48a4a02e74e1901 SERVER [tracer: io.quarkus.opentelemetry:] AttributesMap{data={http.response.status_code=200, url.scheme=http, server.port=8080, server.address=localhost, client.address=0:0:0:0:0:0:0:1, user_agent.original=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36 Edg/134.0.0.0, url.path=/api/, code.namespace=io.quarkus.sample.TodoResource, http.request.method=GET, code.function=getAll, http.response.body.size=2, http.route=/api}, capacity=128, totalAddedValues=12}
1970-01-01T00:00:00Z INFO ''GET /api' : 7cf260232ff81caf90abc354357c16ab c48a4a02e74e1901 SERVER [tracer: io.quarkus.opentelemetry:] AttributesMap{data={http.response.status_code=200, url.scheme=http, server.port=8080, server.address=localhost, client.address=0:0:0:0:0:0:0:1, user_agent.original=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36 Edg/134.0.0.0, url.path=/api/, code.namespace=io.quarkus.sample.TodoResource, http.request.method=GET, code.function=getAll, http.response.body.size=2, http.route=/api}, capacity=128, totalAddedValues=12}' : 00000000000000000000000000000000 0000000000000000 [scopeInfo: io.quarkus.opentelemetry:] {code.function="export", code.lineno=65, code.namespace="io.opentelemetry.exporter.logging.LoggingSpanExporter", log.logger.namespace="org.jboss.logmanager.Logger", thread.id=126, thread.name="vert.x-eventloop-thread-1"}
```

Then return to the web browser and interact with the Todo app: add a new todo item by typing in the text box and pressing ENTER, select the checkbox to mark a todo item as completed, or select Clear completed to remove all completed todo items. You can also delete a todo item by selecting the x icon that appears when you hover over it. The app should work as expected.

Finally, switch back to the terminal and press q to exit Quarkus dev mode.

Create the Azure resources

The steps in this section show you how to create the following Azure resources to run the Quarkus sample app on Azure:

- Azure Database for PostgreSQL Flexible Server
- Azure Container Registry
- Azure Container Apps environment
- Azure Application Insights

First, define the following variables in your bash shell by replacing the placeholders with your own values. They are used throughout the example:

```
UNIQUE_VALUE=<your unique value>
LOCATION=eastus2
RESOURCE_GROUP_NAME=${UNIQUE_VALUE}rg
DB_SERVER_NAME=${UNIQUE_VALUE}db
DB_NAME=demodb
REGISTRY_NAME=${UNIQUE_VALUE}reg
ACA_ENV=${UNIQUE_VALUE}env
APP_INSIGHTS=${UNIQUE_VALUE}appinsights
ACA_NAME=${UNIQUE_VALUE}aca
```

Next, create the resource group to host the Azure resources:

```
az group create \
    --name $RESOURCE_GROUP_NAME \
    --location $LOCATION
```

Then, create the Azure resources in the resource group by following the steps below.
Create an Azure Database for PostgreSQL flexible server instance:

```
az postgres flexible-server create \
    --name $DB_SERVER_NAME \
    --resource-group $RESOURCE_GROUP_NAME \
    --database-name $DB_NAME \
    --public-access None \
    --sku-name Standard_B1ms \
    --tier Burstable \
    --active-directory-auth Enabled
```

Create the Azure Container Registry and get the login server:

```
az acr create \
    --resource-group $RESOURCE_GROUP_NAME \
    --location ${LOCATION} \
    --name $REGISTRY_NAME \
    --sku Basic

LOGIN_SERVER=$(az acr show \
    --name $REGISTRY_NAME \
    --query 'loginServer' \
    --output tsv)
```

Create the Azure Container Apps environment:

```
az containerapp env create \
    --resource-group $RESOURCE_GROUP_NAME \
    --location $LOCATION \
    --name $ACA_ENV
```

Create an Azure Application Insights instance:

```
logAnalyticsWorkspace=$(az monitor log-analytics workspace list \
    -g $RESOURCE_GROUP_NAME \
    --query "[0].name" -o tsv | tr -d '\r')

az monitor app-insights component create \
    --resource-group $RESOURCE_GROUP_NAME \
    --location $LOCATION \
    --app $APP_INSIGHTS \
    --workspace $logAnalyticsWorkspace
```

Use the created Application Insights instance as the destination service to enable the managed OpenTelemetry agent for the Azure Container Apps environment:

```
appInsightsConn=$(az monitor app-insights component show \
    --app $APP_INSIGHTS \
    -g $RESOURCE_GROUP_NAME \
    --query 'connectionString' -o tsv | tr -d '\r')

az containerapp env telemetry app-insights set \
    --name $ACA_ENV \
    --resource-group $RESOURCE_GROUP_NAME \
    --connection-string $appInsightsConn \
    --enable-open-telemetry-logs true \
    --enable-open-telemetry-traces true
```

When you deploy the Quarkus app to the Azure Container Apps environment later in this blog, the OpenTelemetry data is automatically collected by the managed OpenTelemetry agent and exported to the Application Insights instance.

Deploy the Quarkus app to Azure Container Apps

You have set up all the necessary Azure resources to run the Quarkus app on Azure Container Apps.
In this section, you containerize the Quarkus app and deploy it to Azure Container Apps.

First, use the following command to build the application. This command uses the Jib extension to build the container image. Quarkus instrumentation works in both JVM and native modes. In this blog, you build the container image for JVM mode so that it works with Microsoft Entra ID authentication for Azure Database for PostgreSQL flexible server.

```
TODO_QUARKUS_IMAGE_NAME=todo-quarkus-app-insights
TODO_QUARKUS_IMAGE_TAG=${LOGIN_SERVER}/${TODO_QUARKUS_IMAGE_NAME}:1.0

mvn clean package -Dquarkus.container-image.build=true -Dquarkus.container-image.image=${TODO_QUARKUS_IMAGE_TAG}
```

Next, sign in to the Azure Container Registry and push the Docker image to the registry:

```
az acr login --name $REGISTRY_NAME
docker push $TODO_QUARKUS_IMAGE_TAG
```

Then, use the following command to create a Container Apps instance to run the app after pulling the image from the Container Registry:

```
az containerapp create \
    --resource-group $RESOURCE_GROUP_NAME \
    --name $ACA_NAME \
    --image $TODO_QUARKUS_IMAGE_TAG \
    --environment $ACA_ENV \
    --registry-server $LOGIN_SERVER \
    --registry-identity system \
    --target-port 8080 \
    --ingress 'external' \
    --min-replicas 1
```

Finally, connect the Azure Database for PostgreSQL Flexible Server instance to the container app using Service Connector:

```
# Install the Service Connector passwordless extension
az extension add --name serviceconnector-passwordless --upgrade --allow-preview true

az containerapp connection create postgres-flexible \
    --resource-group $RESOURCE_GROUP_NAME \
    --name $ACA_NAME \
    --target-resource-group $RESOURCE_GROUP_NAME \
    --server $DB_SERVER_NAME \
    --database $DB_NAME \
    --system-identity \
    --container $ACA_NAME \
    --yes
```

Wait until the application is deployed, started, and running.
Then get the application URL and open it in a browser:

```
QUARKUS_URL=https://$(az containerapp show \
    --resource-group $RESOURCE_GROUP_NAME \
    --name $ACA_NAME \
    --query properties.configuration.ingress.fqdn -o tsv)

echo $QUARKUS_URL
```

You should see the same Todo app that you ran locally earlier. Interact with the app by adding, completing, and removing todo items; this generates telemetry data and sends it to Azure Application Insights.

Observe the Quarkus app with Azure Application Insights

Open the Azure portal and navigate to the Azure Monitor Application Insights resource you created earlier. You can monitor the application with different views backed by the telemetry data sent from the application. For example:

- Investigate > Application map: Shows the application components and their dependencies.
- Investigate > Failures: Shows the failures and exceptions in the application.
- Investigate > Performance: Shows the performance of the application.
- Monitoring > Logs: Shows the logs and traces of the application.

You may notice that metrics aren't observed in Application Insights in this blog. That's because the Application Insights endpoint used by the managed OpenTelemetry agent doesn't accept metrics yet, which is listed as a known limitation. This is also why Quarkus metrics aren't enabled in the configuration file with quarkus.otel.metrics.enabled=true above. Alternatively, you can consider using the Quarkus OpenTelemetry Exporter for Microsoft Azure in your Quarkus apps to export the telemetry data directly to Azure Application Insights.

Clean up resources

To avoid Azure charges, you should clean up unneeded resources. When the resources are no longer needed, use the az group delete command to remove the resource group and all Azure resources within it:

```
az group delete \
    --name $RESOURCE_GROUP_NAME \
    --yes \
    --no-wait
```

Next steps

In this blog, you observed a Quarkus app with Azure Application Insights using OpenTelemetry.
To learn more, explore the following resources:

- OpenTelemetry on Azure
- Collect and read OpenTelemetry data in Azure Container Apps (preview)
- Application Insights overview
- Using OpenTelemetry
- Deploy a Java application with Quarkus on Azure Container Apps
- Secure Quarkus applications with Microsoft Entra ID using OpenID Connect
- Deploy a Java application with Quarkus on an Azure Kubernetes Service cluster
- Deploy serverless Java apps with Quarkus on Azure Functions
- Jakarta EE on Azure

Building the Agentic Future
As a business built by developers, for developers, Microsoft has spent decades making it faster, easier and more exciting to create great software. And developers everywhere have turned everything from BASIC and the .NET Framework, to Azure, VS Code, GitHub and more into the digital world we all live in today. But nothing compares to what’s on the horizon as agentic AI redefines both how we build and the apps we’re building. In fact, the promise of agentic AI is so strong that market forecasts predict we’re on track to reach 1.3 billion AI Agents by 2028. Our own data, from 1,500 organizations around the world, shows agent capabilities have jumped as a driver for AI applications from near last to a top three priority when comparing deployments earlier this year to applications being defined today. Of those organizations building AI agents, 41% chose Microsoft to build and run their solutions, significantly more than any other vendor. But within software development the opportunity is even greater, with approximately 50% of businesses intending to incorporate agentic AI into software engineering this year alone. Developers face a fascinating yet challenging world of complex agent workflows, a constant pipeline of new models, new security and governance requirements, and the continued pressure to deliver value from AI, fast, all while contending with decades of legacy applications and technical debt. This week at Microsoft Build, you can see how we’re making this future a reality with new AI-native developer practices and experiences, by extending the value of AI across the entire software lifecycle, and by bringing critical AI, data, and toolchain services directly to the hands of developers, in the most popular developer tools in the world. Agentic DevOps AI has already transformed the way we code, with 15 million developers using GitHub Copilot today to build faster. But coding is only a fraction of the developer’s time. 
Extending agents across the entire software lifecycle means developers can move faster from idea to production, boost code quality, and strengthen security, while removing the burden of low-value, routine, time-consuming tasks. We can even address decades of technical debt and keep apps running smoothly in production. This is the foundation of agentic DevOps—the next evolution of DevOps, reimagined for a world where intelligent agents collaborate with developer teams and with each other. Agents introduced today across GitHub Copilot and Azure operate like a member of your development team, automating and optimizing every stage of the software lifecycle, from performing code reviews and writing tests to fixing defects and building entire specs. Copilot can even collaborate with other agents to complete complex tasks like resolving production issues. Developers stay at the center of innovation, orchestrating agents for the mundane while focusing their energy on the work that matters most. Customers like EY are already seeing the impact: "The coding agent in GitHub Copilot is opening up doors for each developer to have their own team, all working in parallel to amplify their work. Now we're able to assign tasks that would typically detract from deeper, more complex work, freeing up several hours for focus time." - James Zabinski, DevEx Lead at EY. You can learn more about agentic DevOps and the new capabilities announced today from Amanda Silver, Corporate Vice President of Product, Microsoft Developer Division, and Mario Rodriguez, Chief Product Officer at GitHub. And be sure to read more from GitHub CEO Thomas Dohmke about the latest with GitHub Copilot.
At Microsoft Build, see agentic DevOps in action in the following sessions, available both in person May 19 - 22 in Seattle and on demand:

- BRK100: Reimagining Software Development and DevOps with Agentic AI
- BRK113: The Agent Awakens: Collaborative Development with GitHub Copilot
- BRK118: Accelerate Azure Development with GitHub Copilot, VS Code & AI
- BRK131: Java App Modernization Simplified with AI
- BRK102: Agent Mode in Action: AI Coding with Vibe and Spec-Driven Flows
- BRK101: The Future of .NET App Modernization Streamlined with AI

New AI Toolchain Integrations

Beyond these new agentic capabilities, we're also releasing new integrations that bring key services directly to the tools developers are already using. From the 150 million GitHub users to the 50 million monthly users of the VS Code family, we're making it easier for developers everywhere to build AI apps. If GitHub Copilot changed how we write code, Azure AI Foundry is changing what we can build. And the combination of the two is incredibly powerful. Now we're bringing leading models from Azure AI Foundry directly into your GitHub experience and workflow, with a new native integration. GitHub Models lets you experiment with leading models from OpenAI, Meta, Cohere, Microsoft, Mistral, and more. Test and compare performance while building models directly into your codebase, all within GitHub. You can easily compare model performance and price side by side and swap models with a simple, unified API. And in keeping with our enterprise commitment, teams can set guardrails so model selection is secure, responsible, and in line with your team's policies. Meanwhile, new Azure Native Integrations give developers seamless access to a curated set of 20 software services from Datadog, New Relic, Pinecone, Pure Storage Cloud, and more, directly through the Azure portal, SDK, and CLI.
With Azure Native Integrations, developers get the flexibility to work with their preferred vendors across the AI toolchain with simplified single sign-on and management, while staying in Azure. Today, we are pleased to announce the addition of even more developer services:

- Arize AI: Arize's platform provides essential tooling for AI and agent evaluation, experimentation, and observability at scale. With Arize, developers can easily optimize AI applications through tools for tracing, prompt engineering, dataset curation, and automated evaluations. Learn more.
- LambdaTest HyperExecute: LambdaTest HyperExecute is an AI-native test execution platform designed to accelerate software testing. It enables developers and testers to run tests up to 70% faster than traditional cloud grids by optimizing test orchestration and observability and streamlining TestOps to expedite release cycles. Learn more.
- Mistral: Mistral and Microsoft announced a partnership today, which includes integrating Mistral La Plateforme as part of Azure Native Integrations. Mistral La Plateforme provides pay-as-you-go API access to Mistral AI's latest large language models for text generation, embeddings, and function calling. Developers can use this AI platform to build AI-powered applications with retrieval-augmented generation (RAG), fine-tune models for domain-specific tasks, and integrate AI agents into enterprise workflows.
- MongoDB (Public Preview): MongoDB Atlas is a fully managed cloud database that provides scalability, security, and multi-cloud support for modern applications. Developers can use it to store and search vector embeddings, implement retrieval-augmented generation (RAG), and build AI-powered search and recommendation systems. Learn more.
- Neon: Neon Serverless Postgres is a fully managed, autoscaling PostgreSQL database designed for instant provisioning, cost efficiency, and AI-native workloads.
Developers can use it to rapidly spin up databases for AI agents, store vector embeddings with pgvector, and scale AI applications seamlessly. Learn more.

Java and .NET App Modernization

Shipping to production isn't the finish line—and maintaining legacy code shouldn't slow you down. Today we're announcing comprehensive resources to help you successfully plan and execute app modernization initiatives, along with new agents in GitHub Copilot to help you modernize at scale, in a fraction of the time. In fact, customers like Ford China are seeing breakthrough results, reducing up to 70% of their Java migration effort by using GitHub Copilot to automate middleware code migration tasks. Microsoft's App Modernization Guidance applies decades of enterprise app experience to help you analyze production apps and prioritize modernization efforts, while applying best practices and technical patterns to ensure success. And now GitHub Copilot transforms the modernization process, handling code assessments, dependency updates, and remediation across your production Java and .NET apps (support for mainframe environments is coming soon!). It generates and executes update plans automatically, while giving you full visibility, control, and a clear summary of changes. You can even raise modernization tasks in GitHub Issues from our proven service Azure Migrate to assign to developer teams. Your apps are more secure, maintainable, and cost-efficient, faster than ever. Learn how we're reimagining app modernization for the era of AI with the new App Modernization Guidance and the modernization agent in GitHub Copilot to help you modernize your complete app estate.

Scaling AI Apps and Agents

Sophisticated apps and agents need an equally powerful runtime. And today we're advancing our complete portfolio, from serverless with Azure Functions and Azure Container Apps, to the control and scale of Azure Kubernetes Service.
At Build, we're simplifying how you deploy, test, and operate open-source and custom models on Kubernetes through the Kubernetes AI Toolchain Operator (KAITO); making it easy to run inference on AI models with the flexibility, autoscaling, pay-per-second pricing, and governance of Azure Container Apps serverless GPUs; helping you create real-time, event-driven workflows for AI agents by integrating Azure Functions with Azure AI Foundry Agent Service; and much, much more. The platform you choose to scale your apps has never been more important. With new integrations with Azure AI Foundry, advanced automation that reduces developer overhead, and simplified operations, security, and governance, Azure's app platform can help you deliver the sophisticated, secure AI apps your business demands. To see the full slate of innovations across the app platform, check out Powering the Next Generation of AI Apps and Agents on the Azure Application Platform.

Tools that keep pace with how you need to build

This week we're also introducing new enhancements to our tooling to help you build as fast as possible and explore what's next with AI, all directly from your editor. GitHub Copilot for Azure brings Azure-specific tools into agent mode in VS Code, keeping you in the flow as you create, manage, and troubleshoot cloud apps. Meanwhile, the Azure Tools for VS Code extension pack brings everything you need to build apps on Azure using GitHub Copilot to VS Code, making it easy to discover and interact with the cloud services that power your applications. Microsoft's gallery of AI App Templates continues to expand, helping you rapidly move from concept to production app, deployed on Azure. Each template includes a fully working application, complete with app code, AI features, infrastructure as code (IaC), configurable CI/CD pipelines with GitHub Actions, and an application architecture, ready to deploy to Azure.
These templates reflect the most common patterns and use cases we see across our AI customers, from getting started with AI agents to building GenAI chat experiences with your enterprise data, and they help you learn best practices such as keyless authentication. Learn more by reading the latest on Build Apps and Agents with Visual Studio Code and Azure.

Building the agentic future

The emergence of agentic DevOps, the new wave of development powered by GitHub Copilot and new services launching across Microsoft Build, will be transformative. But just as we’ve seen over the first 50 years of Microsoft’s history, the real impact will come from the global community of developers. You all have the power to turn these tools and platforms into advanced AI apps and agents that make every business move faster, operate more intelligently, and innovate in ways that were previously impossible. Learn more and get started with GitHub Copilot.

The State of Coding the Future with Java and AI – May 2025
Software development is changing fast, and Java developers are right in the middle of it - especially when it comes to using Artificial Intelligence (AI) in their apps. This report brings together feedback from 647 Java professionals to show where things stand and what is possible as Java and AI come together. One of the biggest takeaways is this: Java developers do not need to be experts in AI, machine learning, or Python. With tools like the Model Context Protocol (MCP) Java SDK, Spring AI, and LangChain4j, they can start adding smart features to their apps using the skills they already have. Whether it is making recommendations, spotting fraud, or supporting natural-language search, AI can be part of everyday Java development. The report walks through real-world approaches that Java developers are already using - things like Retrieval-Augmented Generation (RAG), vector databases, embeddings, and AI agents. These are not just buzzwords - they help teams build apps that work well at scale, stay secure, and are easier to manage over time. For teams figuring out where to start, the report includes guidance and simple workflows to make things easier. In short, Java is well positioned to keep leading in enterprise software. This is an invitation to Java architects, tech leads, decision-makers, and developers to explore what is next and build smarter, more connected apps with AI.

Introduction

The world of software development is changing fast. Over the past two years, we have seen a major shift – not just in tools and frameworks, but in how developers think about building software. Artificial Intelligence is now part of the everyday conversation – helping developers rethink what their applications can do and how quickly they can build them. In the middle of all this change, it helps to pause and look at where we are.
Java developers are especially exploring how to add intelligence to their existing applications or build new ones that can learn, adapt, and scale. But with so many innovative ideas and so much information out there, the real question is – what are developers actually doing? To answer that, we reached out directly to Java professionals across the world. We wanted to understand their thinking, what they are trying, and what they need to move forward with confidence. Our invitation was simple - "Calling all Java pros – share your insights to help simplify AI-powered apps 👉 aka.ms/java-ai." The response was strong. A total of 647 Java professionals took part:

- 587 have experience with AI - representing a wide range of perspectives and levels of AI knowledge.
- 60 have not yet explored AI - but are curious and eager to learn what is possible.

Among all respondents:

- Two-thirds (67%) had 4 to 10 years of Java experience.
- One-third (33%) had more than 10 years of experience.

This report highlights what we learned – and what it means for the future of Java and AI.

The Scenario We Asked Java Pros to Imagine

“Picture yourself adding an AI-driven feature to an existing Java-based app or building a brand-new intelligent application. This feature might improve customer experience – such as personalized recommendations – optimize business processes – like fraud detection – or enhance product searches using natural language. Your goal is to seamlessly integrate this feature, ensuring it is easy to develop, scalable, and maintainable.”

An impressive 97 percent of respondents said they would choose Java for building this type of intelligent application.

A Common Misconception

90 percent of respondents believed that building intelligent Java apps would require deep experience with AI, Machine Learning, or Python.
Developers Can Start to Deliver Production-Grade Intelligent Java Apps without AI, ML, or Python Skills

Myth of AI/ML and Java

Java developers already have what they need – today – to build intelligent applications using modern Java-first frameworks such as the Model Context Protocol (MCP) Java SDK, Spring AI, or LangChain4j. No prior experience in Python or Machine Learning is required for Java developers to begin adding intelligent features to their apps. Connecting a Java application to backend AI systems – including Large Language Models and Vector Databases – is conceptually similar to working with REST APIs or traditional SQL and NoSQL databases. Modern libraries like the MCP Java SDK, Spring AI, and LangChain4j make it easier for developers to build and enhance AI-powered Java applications. These frameworks offer support for:

- Retrieval-Augmented Generation (RAG)
- Conversational memory
- Conversation logging
- Integration with vector stores
- Secure, observable, and safe-by-default interactions
- Streamed outputs and structured reasoning

Java continues to play a leading role in enterprise software. This gives Java developers a natural advantage – and a unique opportunity – to lead the way in delivering intelligent features inside core business applications. It is also important to note that tasks requiring deep AI and Data Science knowledge are best left to specialists. Java developers can focus on app logic, integration, and delivering business value without needing to become AI experts themselves.

In-Process vs HTTP-Based - A Common Misstep in AI Application Design

AI-powered applications can be built in different ways - and one of the patterns is to embed the model directly within the same app that handles business logic and exposes the API. This is known as the in-process approach. In this setup, the model is loaded at runtime, using local weights and often relying on a GPU for inference.
It is a convenient option - especially when working with models you have created from scratch or downloaded for use in your own application.

The Shift to Model-as-a-Service - A Simple History

Before foundation models were made available as services, most AI models were custom-built for specific use cases – like classifying documents, detecting anomalies, or predicting demand. These models were typically developed in Python using frameworks such as TensorFlow or PyTorch. Because development and usage happened in the same environment, it was natural to load the model directly into the application’s memory using local weights, and to rely on a local GPU for inference. This model-in-app pattern made sense when the app and the model were designed together. Many popular Python-based libraries, including PyTorch, TensorFlow, and Hugging Face Transformers, encourage this in-process setup by default. As a result, the model often becomes a local function call - tightly coupled to the application’s logic and runtime. However, that convenience introduces scaling and maintenance challenges. Every application instance must run on a machine with GPU access. You must allocate GPU resources per app, even when the app is idle. As demand grows, this leads to higher infrastructure costs, lower resource efficiency, and architectural rigidity. If you scale the app, you are often forced to scale GPU capacity along with it - even if only the app's business logic needs scaling.

The Rise of HTTP-Based Integration with Model-as-a-Service

The introduction of foundation models like GPT-4, available through services such as OpenAI and Azure OpenAI, brought a shift in how models are used in applications. These models are designed to handle a wide range of tasks and are offered as cloud-hosted APIs - a model-as-a-service approach. Instead of embedding the model into each application, you send a request over HTTP and receive a response.
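That request/response exchange can be sketched with Java's built-in HTTP client, with no AI library involved. The endpoint URL, model name, and JSON shape below follow the common OpenAI-style chat completions API, but treat the specifics as illustrative assumptions rather than an authoritative client:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ChatCompletionRequest {

    // Builds an OpenAI-style chat completion request. Sending it is one call:
    // client.send(request, HttpResponse.BodyHandlers.ofString())
    // Note: a real client must JSON-escape the user message.
    static HttpRequest build(String endpoint, String apiKey, String userMessage) {
        String body = """
                {"model": "gpt-4o",
                 "messages": [{"role": "user", "content": "%s"}]}
                """.formatted(userMessage);
        return HttpRequest.newBuilder()
                .uri(URI.create(endpoint))
                .header("Content-Type", "application/json")
                .header("Authorization", "Bearer " + apiKey)
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = build(
                "https://api.openai.com/v1/chat/completions", // any OpenAI-compatible endpoint
                "<your-api-key>",
                "Summarize today's open to-do items.");
        System.out.println(request.method() + " " + request.uri());
    }
}
```

To the application, the model is just another remote service: a POST, a JSON body, a response to parse.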
This change enables a new design pattern: the application and the model are treated as separate services. The app handles business logic, while the model service handles inference. This pattern brings clear advantages - modularity, cleaner separation of concerns, centralized control over GPU infrastructure, and the ability to reuse models across many applications. The diagram above illustrates this shift. On the left, the in-process setup binds the model tightly to the application, requiring direct GPU access and local weights. On the right, the HTTP-based setup enables applications written in any language stack - such as Java, Python, JavaScript, .NET, or Go - to interact with a shared model endpoint over HTTP. This separation makes it easier to update models, manage infrastructure (including GPU infrastructure), and scale intelligently. It also reflects how most modern AI platforms are now built. HTTP-based integration is scalable, cost-effective, and designed for modern application environments. It reduces operational complexity and gives developers the flexibility to choose the architecture that fits their needs - without being locked into one stack, tool, model, or setup.

Myth about Python

As we listened to Java developers across the community, a familiar pattern emerged. Many shared their experiences - and sometimes frustrations - when working with AI technologies in Java. These were not just passing remarks. They reflected real challenges - especially when it came to building or training machine learning models, where Python has long been the preferred environment. Here is a glimpse into what we heard:

“Java has fewer AI-specific libraries compared to Python. Libraries like TensorFlow and PyTorch are more mature and feature-rich in Python, making it easier to implement complex AI models.”

“Working on AI-powered applications with Java presents challenges, especially when building or training models.
The ecosystem is not as deep as Python’s, which has tools like Scikit-learn and notebooks like Jupyter.”

“Even though Java can be used for AI, the support for GPU acceleration is not as seamless as it is in Python. You need extra setup and tuning.”

“There are fewer Java developers with strong AI backgrounds. It is harder to find or grow a team when Python seems to be the go-to language for most ML engineers.”

These are all honest, valid observations. If the job-to-be-done is building foundation models, training models from scratch, or fine-tuning existing models, then Python is a natural choice. It offers the right tools, libraries, and ecosystem support to do that job well. But here is what really matters today: most AI application developers - including those working in Java - are not training or fine-tuning models. They are not building models from the ground up or optimizing low-level GPU workloads. Instead, they are focused on a different job:

- Connecting to existing foundation models.
- Calling AI services over REST APIs.
- Using AI libraries like Spring AI and LangChain4j to orchestrate intelligent workflows.
- Querying vector databases.
- Embedding AI capabilities into production-grade enterprise applications.

This distinction is clearly reflected in the diagram above. On the right side, you see “Model Training and Development” as a separate, specialized job. It is critical work - best handled by teams with deep expertise in data science and model engineering. On the left side, you see the application architecture most Java developers work with every day: REST APIs, business logic, database integration, and calls to external AI models and vector stores using AI libraries. This is where Java fits. Java developers are not building models - they are building apps on top of foundation models. And with tools like the MCP Java SDK, Spring AI, and LangChain4j, they are not playing catch-up - they are building what matters: integrating AI into existing apps and capabilities.
You do not need to train models. You just need to wire up the right parts, connect to the services that make AI possible, and deliver intelligent functionality where it belongs - inside the applications your organization already depends on.

“You can be an AI application developer - in less than 2 minutes. Minute 1: Sign up for access to an LLM - Azure OpenAI, OpenAI, whatever. Get yourself an API key. Minute 2: Head to https://start.spring.io, specify `OpenAI` (or whatever), hit `Generate`, and then open your new Spring Boot + Spring AI project in your IDE. In your `application.properties`, you’ll need to specify your API key. Then, inject the auto-configured `ChatClient` somewhere in your code. Use the `ChatClient` to make a request to your model. Congratulations - you’re an AI application developer!” -- Josh Long, Spring Developer Advocate, Broadcom

“"Is Java still relevant in this new era of AI?" "How do I, with my years of Java expertise, even begin to work with these Large Language Models today?" These are questions I have heard time and again at community events and industry conferences. Today, Java developers are at a pivotal moment. Our existing skills are not just still relevant - they are the foundation for building the next generation of AI-powered applications. Thanks to frameworks like Quarkus, Langchain4j, and MCP integration, we can bridge the world of traditional enterprise development with the fast-growing world of AI - all without giving up the strengths and familiarity of Java.” – Daniel Oh, Senior Principal Developer Advocate, Red Hat

The future of AI in software development will be defined by those who integrate AI well into applications - both existing and new. Most AI-related development will be connecting models to solve real problems inside real applications. And in that space, Java is already strong.
From financial services to healthcare, logistics to manufacturing - Java powers the business logic and workflows that now need to become intelligent. This is where Java developers shine. And with the right tools, they are more than ready to lead.

Crucial Elements for Java AI Applications

As we looked deeper into the survey results, one thing became clear – Java developers are not just interested in adding AI for the sake of it. They are focused on building practical, enterprise-ready features that are reliable, secure, and easy to maintain. 98% of respondents highlighted a core set of approaches or elements that they see as essential for any AI-powered Java application:

- Retrieval-Augmented Generation (RAG) – Bringing real-time, context-aware answers by grounding responses in trusted data. This is especially useful in enterprise scenarios where accuracy and context matter.
- Embeddings and Vector Databases – Enabling efficient semantic search and advanced knowledge retrieval. Developers recognize this as the key to making applications “understand” the meaning behind user inputs.
- Function Calling or Tool Calling – Allowing AI models to interact with APIs, pull in real-time data, or trigger backend workflows. This is where AI starts to act – not just suggest – making it a true part of the application logic.
- AI Agents – These are not just chatbots. Agents are intelligent programs that can automate or assist with tasks on behalf of users or teams. They combine reasoning, memory, and action – gathering information and triggering responses dynamically. For many developers, agents represent the next step toward intelligent automation inside business-critical workflows.

Fundamentals for Enterprise-Grade AI Applications

When building applications for the enterprise, developers know that intelligence alone is not enough. Trust, safety, and integration matter just as much.
These are the foundational features Java developers called out:

- Security and Access Control – Making sure AI features respect user roles, protect sensitive data, and fit into enterprise identity systems.
- Safety and Compliance – Filtering outputs to align with internal policies, legal regulations, and brand standards. This is especially important for customer-facing features.
- Observability – Tracking how AI decisions are made, logging user and AI interactions, and making sure there is a clear record of what happened – and why.
- Structured Outputs – AI responses need to work within the system, not outside it. Structured formats – like JSON or XML – ensure smooth handoffs between the AI component and the rest of the application.
- Reasoning and Explainability – Developers want AI features that can explain their answers, show their sources, and help users trust the output – especially in domains like finance, healthcare, or compliance.

Representative Scenarios and Business Impact

To make things more concrete, let us look at two sample scenarios. These are not the only ones – just representative examples to help spark ideas. There is a broad and growing range of real-world situations where Java developers can use AI to create business value.

Scenario One – Intelligent Workflow Automation

Imagine a production manager at an auto manufacturer - say, Mercedes-Benz or Ford - who needs to align the assembly line schedule with real-time component availability and constantly shifting order priorities. The manager’s question is urgent and complex: “How can I adjust the production schedule based on today’s parts inventory and current orders?” Answering it means pulling in data from ERP systems, supply chain feeds, vendor dashboards, and manufacturing operations - a level of complexity that can overwhelm even experienced teams.
This is where AI steps in as a true copilot - working alongside the human decision-maker to gather data, flag supply constraints, and highlight scheduling options. Together, they can plan faster, adapt more confidently, and respond to change in real time. For Java developers, this is an opportunity to build intelligent systems that bring together data from ERP, inventory, and order management applications - enabling AI models to interact with information and collaborate with decision-makers. These systems do not rely on AI alone; they depend on strong data integration and reliable workflows - all of which can be designed, secured, and scaled within the Java ecosystem. In this way, AI becomes part of a co-working loop - and Java developers are the ones who make that loop real.

Scenario Two – AI-Powered Process Assistants

Picture a logistics manager at a major shipping company - FedEx, UPS, or DHL - facing cascading delays due to severe weather across multiple regions. The manager is under pressure to reroute packages efficiently while minimizing disruptions to downstream delivery schedules. The question is urgent: “What is the fastest rerouting option for delayed packages due to weather?” Answering it requires combining live weather feeds, traffic data, delivery schedules, hub capacities, and driver availability - all in real time. AI acts as a true copilot at this moment, working alongside the manager to collect relevant signals, flag risk zones, and generate rerouting recommendations. Together, they respond with speed and clarity, keeping shipments moving and customers informed. For Java teams, this is a practical opportunity to build intelligent systems that embed AI into logistics, supply chain, and delivery operations - not by rebuilding everything, but by integrating the right data streams, APIs, and business logic. The real value lies in data orchestration, not just algorithms.
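In code, much of that orchestration reduces to gathering records from existing systems and grounding the model's prompt in them, a RAG-style pattern. A simplified sketch of the prompt-assembly step (the class and field names here are hypothetical, purely to show the shape):

```java
import java.util.List;

public class ReroutingPromptBuilder {

    // Hypothetical signal gathered from weather, traffic, and hub systems.
    record Signal(String source, String detail) {}

    // Grounds the model's prompt in live operational data so its answer is
    // based on the business's own records, not only its training data.
    static String buildPrompt(String question, List<Signal> signals) {
        StringBuilder prompt = new StringBuilder(
                "Use only the operational data below to answer.\n\nData:\n");
        for (Signal s : signals) {
            prompt.append("- [").append(s.source()).append("] ")
                  .append(s.detail()).append('\n');
        }
        prompt.append("\nQuestion: ").append(question);
        return prompt.toString(); // this string is what gets sent to the chat model
    }

    public static void main(String[] args) {
        System.out.println(buildPrompt(
                "What is the fastest rerouting option for delayed packages?",
                List.of(new Signal("weather", "Severe storms in the Midwest until 18:00"),
                        new Signal("hub", "Memphis hub at 92% capacity"))));
    }
}
```

The AI library or HTTP call that delivers the prompt is almost incidental; the engineering value is in collecting and curating the data that goes into it.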
Java developers are key to enabling these AI-powered assistants by securing connections across systems and building workflows that help humans and AI collaborate effectively under pressure.

More Scenarios – World of Possibilities

These two scenarios only scratch the surface. Developers across industries – from healthcare and finance to retail and public services – are finding ways to integrate AI that solve meaningful problems, reduce complexity, and improve how their systems perform.

Java and AI Technology Stack

So far, we have looked at what developers want to build and how AI is changing the way applications are designed. Now, let us look at the platform behind it all – the technology stack that powers intelligent Java applications. To bring these ideas to life, Java developers need a foundation that connects data, apps, and AI services. We call this an AI application platform. It is not a specific product – it is an integrated platform made up of components that most developer teams already use or are familiar with. The goal of this platform is to make developers more productive while building intelligent features into their applications. It gives teams the freedom to choose familiar tools – while making it easier to bring in AI capabilities when and where they are needed. We group this platform into three areas: the app platform, the data platform, and the AI platform. Let us break it down using the numbered diagram:

- Developer Services: These are the core tools that developers use every day – IDEs, coding assistants, build tools, testing frameworks, CI/CD pipelines. They help you write, debug, and manage application code across your team.
- Container Services | Platform-as-a-Service: This is the runtime layer – where your applications are deployed and scaled. Whether using containers or a managed platform, this layer handles traffic, performance, and operational efficiency.
- Data Platform: This is where your application data lives – databases, data lakes, and other storage services. It connects structured data, business logic, and real-time events.
- AI Platform: This is where intelligence is added. It includes access to large language models, embeddings, vector search, and other tools that support natural language interactions, automation, and decision-making.

Together, these four parts form the foundation for building, deploying, and managing AI-powered Java applications.

Technology Stacks for Spring Boot and Quarkus Applications

To make this more relatable, we highlighted two of the most widely adopted Java frameworks – Spring Boot and Quarkus. These stacks represent popular combinations that many Java teams are already using today for building cloud-native applications. That said, these are just representative examples. There are many valid combinations that developers, platform teams, and organizations can choose – based on their existing tools, workloads, and team preferences.

Representative Spring Boot Stack

- App Platform: App hosting service of choice
- AI Library: Spring AI
- AI Platform: OpenAI
- Business Data: PostgreSQL
- Vector Database: PostgreSQL

Representative Quarkus Stack

- App Platform: App hosting service of choice
- AI Library: LangChain4j
- AI Platform: OpenAI
- Business Data: PostgreSQL
- Vector Database: PostgreSQL

Both stacks support the core capabilities needed for building intelligent apps – from secure model access and real-time data integration to observability and system-level debugging. But the opportunity does not stop there.

Traditional App Servers - Tomcat, WebLogic, JBoss EAP, or WebSphere

Many enterprise applications continue to run on Tomcat, WebLogic, JBoss EAP, or WebSphere. These are stable platforms that power core business systems – and they are very much part of the AI journey. If you are running on one of these platforms, you can still bring intelligence into your applications.
By using a Java library of choice (Spring AI or LangChain4j), you can connect these applications to Large Language Models (LLMs) and Model Context Protocol (MCP) servers – without needing to rewrite or migrate them. This means that intelligence can be added, not just rebuilt – a powerful approach for teams with large existing investments in Java EE or Jakarta EE applications. Whether your Java app is built with Spring Boot or Quarkus, or deployed on a traditional app server, the tools are here – and the path to intelligent applications is open. You do not have to start from scratch. You can start from where you are.

Java and MCP – The Bridge to Intelligent Applications

One of the most important parts of the Java and AI story is MCP - the Model Context Protocol. MCP is an open, flexible, and interoperable standard that allows large language models to connect with the outside world - and more importantly, with real applications and real data. At its core, MCP is a bridge - a structured way for models to access enterprise data, invoke tools, and collaborate with AI agents. It gives developers control over how data moves, how decisions are made, and how actions are triggered. The result is safer, more predictable AI behavior inside real-world systems. MCP servers can be implemented in any language stack – such as Java, C#, Python, and Node.js - and integrated into any AI-powered application, regardless of how that application is written. That interoperability makes MCP especially valuable for teams working across systems and languages. If you are building an MCP server using Java, the official MCP Java SDK maintained by Anthropic provides the right starting point. You can also use frameworks like Spring or Quarkus to implement MCP servers with full enterprise capabilities.
For those building applications using Spring AI or LangChain4j, both libraries support connecting to any MCP server - whether running locally or remotely - to orchestrate tools, call functions, and manage agent behavior as part of the runtime flow. In addition, ready-to-use implementations like the Azure MCP Server make it easier to add intelligence to backends, orchestrate workflows, and shape AI agent behavior without starting from scratch.

Authentication and Authorization

Security is a critical part of any enterprise-grade solution - and MCP is no exception. In collaboration with Anthropic, Microsoft proposed a new authorization specification for MCP. This specification has now been finalized and is being implemented across MCP clients and servers to ensure that all interactions are secure, policy-driven, and consistent across environments. This continued investment in standards, tooling, and security is helping MCP mature into a core enabler of intelligent applications - especially for enterprise Java teams looking to move fast without compromising on trust or control.

Preferred Libraries – What Java Developers Are Choosing

As part of our outreach to 647 Java professionals, we asked a key question: “Which AI frameworks or libraries would you consider for building or integrating intelligence into Java applications?” Here is what they told us:

- Spring AI – selected by 43 percent.
- LangChain4j – preferred by 37 percent.

These two libraries clearly lead the way. They reflect the maturity of the Java ecosystem and show strong alignment with the two most active communities – Spring and Jakarta EE. Spring AI and LangChain4j offer higher-level abstractions that simplify how developers connect to AI services, manage context, interact with models, and build intelligent features. For developers already working in Spring Boot or Quarkus, these libraries feel familiar – and that lowers the barrier to adding intelligent capabilities into existing codebases.
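Under the hood, the tool calling that these libraries and MCP servers manage follows a simple contract: the model names a tool and its argument, and the application dispatches to real code, then returns the result to the model. A stripped-down sketch of that dispatch step, with made-up tool names (real libraries add schemas, argument parsing, and security checks around this core):

```java
import java.util.Map;
import java.util.function.Function;

public class ToolDispatcher {

    // Tools the model is allowed to call, keyed by the name advertised to it.
    // In a real app these would wrap services such as order or inventory APIs.
    static final Map<String, Function<String, String>> TOOLS = Map.of(
            "getOrderStatus", orderId -> "Order " + orderId + ": SHIPPED",
            "getInventory", sku -> "SKU " + sku + ": 42 units");

    // When the model responds with a tool call instead of text, look the tool
    // up and execute it; the result is fed back to the model for its answer.
    static String dispatch(String toolName, String argument) {
        Function<String, String> tool = TOOLS.get(toolName);
        if (tool == null) {
            return "Unknown tool: " + toolName;
        }
        return tool.apply(argument);
    }

    public static void main(String[] args) {
        System.out.println(dispatch("getOrderStatus", "A-1001"));
    }
}
```

The registry is also the safety boundary: the model can only reach the functions the application explicitly exposes.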
Other Developer Preferences

At the same time, a considerable number of developers – 37 percent and 29 percent, respectively – said they would prefer to work directly with AI service provider libraries or to call REST APIs. This is not a surprise. In fact, it is a healthy signal. Many teams use these lower-level integrations to gain early access to new features or customize interactions in ways that higher-level libraries may not yet support. It is important that these developers know: you are not wrong - but you are not alone either. While direct API integration offers flexibility, AI libraries like Spring AI and LangChain4j are designed to make those experiences easier. They wrap the complexity, manage context, and offer tested patterns that align with enterprise application needs - like observability, security, and structured outputs – while plugging your code into the Spring and Java ecosystem for deep integration.

Evolving Together

As AI services evolve, day-zero support will almost always appear first in the service provider’s native SDKs and REST APIs. That is expected. But as those capabilities stabilize, AI libraries like Spring AI and LangChain4j catch up – offering developers a smoother, more consistent programming experience. The result: developers get to start fast with APIs – and scale confidently with higher-level libraries and frameworks.

Top Challenges and Areas Needing Improvement

As part of our research, we asked 647 Java professionals to share the biggest challenges they face – and where they believe improvements would help the most. The answers reflected where the community is today – eager to build but still facing some friction points.

Top Challenges Java Developers Encounter

- Lack of clear starting points and step-by-step guidance.
- Feeling overwhelmed by the variety of AI models, libraries, and tools.
- Misconceptions that machine learning expertise is required.
- Complexity in integrating AI features into existing applications - particularly when introduced by suboptimal patterns such as directly calling REST APIs through low-level HTTP libraries, invoking Python-based routines through external OS processes, or loading models into the application’s memory at runtime using local weights and GPU resources.
- Missing features in some of the current libraries and frameworks.
- Uncertainty about scaling applications and safely using private models on the cloud.

Areas Developers Believe Need the Most Improvement

- Clear and practical step-by-step workflows.
- Guidance on how to securely integrate private models.
- Examples that show how to use chat models for function calling and streaming completions.
- Educational content that explains completions, reasoning, and data validation.
- Tools and how-to guides for embedding-based search and question answering.
- Tutorials on how to leverage external data to improve model output.

Java Developers – Familiar with AI vs. New to AI

Among all respondents:

- 87 percent are familiar with AI.
- 13 percent are newer to AI and just getting started.

Java Developers New to AI

These developers are exploring use cases, evaluating models, and building early prototypes. Their top challenges are:

- Lack of clear starting points.
- Too many options in terms of tools and models.
- Need for simple, practical guidance.

The top areas that will benefit them:

- Step-by-step development workflows.
- Examples of using chat models and completions.
- Simple breakdowns of key concepts like reasoning and validation.

Java Developers Familiar with AI

These developers are further along – often in development or production stages – and face a wide range of challenges. A top need for them is secure ways to integrate private models into Java apps. They also benefit from deeper technical content, patterns, and advanced tooling support.

Moving Forward with Confidence

This space is evolving faster than anyone expected – and that can feel overwhelming.
But the top challenges Java developers face are real, and the community is actively addressing them. It is important not to worry about the number of tools, models, or libraries. That diversity is a sign of progress. Models will keep evolving. New ones will arrive. And this is exactly where higher-level Java libraries step in. Spring AI, LangChain4j, and the MCP Java SDK are designed to simplify the path forward. These libraries create a layer of abstraction that shields your application code from constant changes. You can build once – and switch models or providers as needed, without rewriting core logic. And these libraries are alive. You can see them in action on GitHub – through open issues, pull requests, and rapid updates. If you see a missing feature, open an issue. If you want to contribute, send a pull request. These are responsive communities that welcome participation.

“AI, for most people today, effectively means "sending human-language sentences to an HTTP endpoint." 80% of the noise right now is about the artful and scalable connection of these models with your data and business logic - data guarded by services whose business logic is statistically implemented in Spring Boot and Spring AI. We, as JVM developers, are uniquely well-positioned to expand the AI universe. Don’t delay - start (spring.io) today!” -- Josh Long, Spring Developer Advocate, Broadcom

“By combining Quarkus’s speed, Langchain4j’s AI orchestration, and MCP’s unified tool access, Java developers are in a unique position to lead this transformation - building intelligent, resilient applications using the tools and approaches we know well.
Working with AI is no longer a distant specialty - it is becoming a natural part of modern Java development” -- Daniel Oh, Senior Principal Developer Advocate, Red Hat

AI Concepts for App Developers
As you build your first intelligent Java applications, it helps to become familiar with key AI concepts such as foundation models, chat models, embedding models, prompts, inference endpoints, context windows, and vector search (see diagram below for a curated list). These concepts are not just theory - they directly shape how your applications interact with AI systems. To make it easier, we created a simple learning prompt that you can use with your favorite Chat Model like ChatGPT and Claude:

// Prompt Template for Java Developers Learning AI Concepts
// (Replace <TERM> with the topic you want to learn.)
I am a Java enterprise app developer focused on building AI-powered Java applications. I do not fully understand what '<TERM>' means. Please explain it to me so simply that I can think about it like a Java library or service I would naturally use. Use examples from enterprise Java (like APIs, search, summarization, customer service bots) that I would typically build. If possible, give a mental model I can remember and relate to Java patterns. Also, suggest a small sample of how I would use '<TERM>' via an API in a Java app, if that helps understanding.

You can replace <TERM> with any concept you want to learn, such as "Embedding Model" or "Inference Endpoint," and get a focused, practical explanation that fits how you already think as a Java developer. By practicing with this method, you can quickly build second-nature familiarity with the terms and ideas behind AI development - without needing a research background. With the strong foundation you already have in Java, you will be ready to confidently integrate, adapt, and innovate as AI tools continue to evolve.
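To make one of these concepts concrete: "vector search" ultimately reduces to comparing embedding vectors with a similarity measure, most commonly cosine similarity. The sketch below uses tiny made-up three-dimensional vectors purely for illustration; real embedding models produce vectors with hundreds or thousands of dimensions, and the class and values here are hypothetical, not from any particular library.

```java
public class VectorSearchSketch {
    // Cosine similarity: dot(a, b) / (|a| * |b|).
    // Returns 1.0 for identical directions, 0.0 for orthogonal vectors.
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        // Toy "embeddings" for three stored documents and one query.
        double[][] docs = { {0.9, 0.1, 0.0}, {0.0, 1.0, 0.2}, {0.4, 0.4, 0.8} };
        double[] query = {1.0, 0.2, 0.1};

        // Pick the stored vector most similar to the query vector.
        int best = 0;
        for (int i = 1; i < docs.length; i++) {
            if (cosine(docs[i], query) > cosine(docs[best], query)) best = i;
        }
        System.out.println("Most similar document: #" + best); // prints: Most similar document: #0
    }
}
```

A vector store such as PostgreSQL with a vector extension performs essentially this comparison at scale, with indexing so that the nearest vectors can be found without scanning every row.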
AI Is Now a Java Developer’s Game
You do not need to train your own models to start building intelligent applications. You can do it today - using popular, production-ready foundation models - all within the Java ecosystem you already know and trust. These are still Java applications at their core. What sets you apart is not just your Java expertise - it is your deep understanding of how real business processes work and how to bring them to life in code. You know how to build, secure, scale, and operate reliable systems - and now, you can apply those same skills to deliver AI-powered solutions that run in the environments your teams already use, including Microsoft Azure.

AI-Assisted App Development – A Powerful Companion to Building Intelligent Java Apps
No discussion about the future of software development is complete without acknowledging the rise of AI-assisted development. This is a separate - but equally important - path alongside building intelligent applications. And it is transforming how developers write, upgrade, and manage code. At the center of this shift are tools like GitHub Copilot, which is reshaping how developers approach their daily work. Developers using GitHub Copilot report coding up to 55 percent faster, freeing up time to focus on design decisions, solving business problems, and writing less boilerplate code. But the benefits go deeper than speed - 75 percent of developers say that Copilot makes their work more satisfying. Today, 46 percent of all code on GitHub is written with AI assistance, and over 20,000 organizations are already embracing these tools to improve development workflows and accelerate delivery.

Built Into the Tools You Already Use
GitHub Copilot works where Java developers already build – in IDEs like Visual Studio Code, IntelliJ, and Eclipse. It brings contextual, customizable assistance, powered by the latest models such as gpt-4o and Claude 3.5 Sonnet.
Whether it is suggesting code snippets, auto-completing functions, or helping enforce best practices, GitHub Copilot enhances the quality of code and the productivity of developers – all while keeping them in control.

Helping Modernize and Maintain Java Codebases
One of the most exciting capabilities is GitHub Copilot’s growing role in modernizing Java applications. Late last year, the GitHub Copilot App Modernization – Java Upgrade feature entered private preview. This tool is designed to support large, mission-critical tasks like:
- Upgrading Java versions and frameworks
- Refactoring code
- Updating dependencies
- Aligning to cloud-first practices
The process starts with your local Java project. Copilot provides an AI-generated upgrade assistant, and developers stay in the loop to approve changes step-by-step. The goal is to take the heavy lifting out of routine upgrades while ensuring everything remains safe and aligned with your architecture.

Beyond App Code – Towards Cloud-Ready Modernization
Technology providers like Microsoft are investing deeply in this space to bring additional capabilities into developer workflows – including:
- Secure identity handling through passwordless authentication
- Certificate management and secrets handling
- Integration with PaaS services for storage, data, caching, and observability
All of this reduces the time it takes to bring legacy apps forward and prepare them for modern, scalable deployments – so teams can spend more time building intelligent features and less time managing technical debt.

Two Paths – One Goal
AI-assisted development, including upgrading and modernizing apps with AI, and building intelligent apps are not the same – but together, they form a powerful foundation. One helps you write and modernize code faster, the other helps you deliver smarter features inside your apps. For Java developers, this means there is support at every step – from idea to implementation to impact.
Start Today and Move the Java Community Forward
The message from 647 Java professionals is clear: Java developers are ready – and the tools they need to build intelligent applications are already here. If you are a Java developer and have not started your AI journey yet, now is the right time. You do not need to become an AI expert. You do not need to change your language, tools, or working style. Modern Java frameworks and libraries like Spring AI, LangChain4j, and the MCP Java SDK are designed to work the way you already build – while making it easier to add intelligence, automation, and smart experiences to your applications. You can start with what you know – and grow into what is next: aka.ms/spring-ai and aka.ms/langchain4j.

To Java Ecosystem Leaders
We also want to speak directly to those shaping the Java ecosystem – community leaders, experienced developers, and technical influencers. Your role is more important than ever. We invite you to:
- Show what is possible – share real examples of how AI features can be integrated into Java applications with minimal friction.
- Promote best practices – use meetups, blogs, workshops, and developer forums to spread practical guidance and patterns that others can follow.
- Improve the experience – contribute documentation, examples, and even code to help close the gaps developers face when starting their AI journeys.
- Push frameworks forward – help identify and implement missing features that can simplify Java + AI integration and speed up real-world adoption.
This is not just about tools – it is about people helping people move forward. Many of you already helped make this research possible – by spreading the word on LinkedIn, sharing the survey, and encouraging others to contribute. Your support made a difference. And now, these findings belong to the entire Java ecosystem – so we can act on them together.

To the Java Developers Who Participated
Thank you.
Your input – your time, your thoughts, your challenges, your ideas – shaped this entire report. You told us what is working, what is missing, and what you need next. We hope this reflection of your voices is helpful – to you and to the broader Java community. The road ahead is exciting – and Java is ready to lead.

AI Learning Resources for Java App Developers
- Azure AI Services documentation
- Azure AI Services quick starts – like Chat Completions and Use Your Data
- Build Enterprise Agents using Java and Spring
- OpenAI RAG with Java, LangChain4j and Quarkus

Spring AI
- Learn how to build effective agents with Spring AI
- Spring AI reference documentation
- Prompt Engineering Techniques with Spring AI
- Spring AI GitHub repo
- Spring AI examples
- Spring AI updates
- The Seamless Path for Spring Developer to the World of Generative AI

LangChain4j
- Supercharge your Java application with the power of LLMs
- LangChain4j GitHub repo
- LangChain4j Examples
- Quarkus LangChain4j
- Quarkus LangChain4j Workshop
- LangChain4j updates

Meet your hosts for JDConf 2025!
JDConf 2025 is right around the corner and is set to be a global gathering for Java developers passionate about Cloud, AI, and the future of Java. With 22+ sessions and 10+ hours of live content streaming from April 9 - 10, plus additional on-demand sessions, this year’s event dives into app modernization, intelligent apps, frameworks, and AI-powered development with tools like Copilot and more. We are excited to invite you to join us with our three distinguished hosts: Bruno Borges, Sandra Ahlgrimm, and Rory Preddy.

Meet Bruno Borges - Your host for JDConf 2025 Americas
Bruno Borges is a seasoned professional with a rich background in the tech industry. Currently serving as Principal Product Manager at Microsoft, he focuses on Java developers' experience on Azure and beyond. With over two decades of experience, Bruno has been instrumental in bridging the gap between Java communities and Microsoft technologies and services, ensuring seamless integration, optimal developer productivity, and application performance and efficiency. His dedication to enhancing developer experiences has made him a respected figure in the Java ecosystem.

Meet Sandra Ahlgrimm - Your host for JDConf 2025 EMEA
Sandra Ahlgrimm is a Senior Cloud Advocate at Microsoft, specializing in Java and AI. With over fifteen years of experience as a Java developer, she is highly engaged with various communities. She co-leads the local Java User Group (JUG) and Docker Meetup. Her role on the Program Advisory Board at Oracle's GraalVM community highlights her interest in cutting-edge Java and sustainability.

Meet Rory Preddy - Your host for JDConf 2025 Asia-Pacific
Rory Preddy is a Principal Cloud Advocate at Microsoft, where he helps professional cloud developers discover and successfully use Microsoft’s platforms. He is a seasoned speaker whose talks are both meaningful and humorous, and he speaks around the world empowering developers to achieve more.
Rory has over 25 years of experience in software development, working with various technologies such as Java, AI and Azure.

Prepare for learning and some entertainment!
Under the guidance of these accomplished hosts, JDConf 2025 promises to be an enlightening experience. The conference will feature a technical keynote and 22 breakout sessions, showcasing 26 Java influencers and community leaders from organizations such as Microsoft, Broadcom, Oracle, Red Hat, and IBM. Live sessions are scheduled to accommodate attendees worldwide, fostering a global networking opportunity for Java professionals and enthusiasts. With Bruno, Sandra, and Rory as your hosts, you will be entertained throughout the live stream as they invite each speaker to share more insights and learning, with Q&A after each session that attendees find very helpful. Bruno's engaging style, Sandra's enthusiasm, and Rory's humor make learning fun and interactive. Their energy and passion as hosts ensure a delightful experience for everyone. Here are a few memories from last year’s event.

Join the JDConf experience and earn Microsoft Rewards!
We are thrilled to offer Microsoft Rewards points to participants who register or attend the JDConf!
⭐ Registration Rewards: Participants who register for one of the JDConf 2025 – America, Europe or Asia – will receive 100 Microsoft Rewards points.
⭐ Attendance Rewards: The first 300 attendees to check in live for one of the JDConf - America, Europe or Asia - will receive 5,000 Microsoft Rewards points.

Don't miss this chance to learn from industry leaders, connect with peers, and explore the latest developments in Java, Cloud, and AI. Join us at JDConf 2025 and be part of the future of Java development.
👉 RSVP Now at JDConf.com

Learn more about our hosts:
- You can hear from Bruno in his recent keynote at JavaOne 2025.
- You can see Sandra's videos here.
- Get to know Rory's work here.

Code the Future with Java and AI – Join Me at JDConf 2025
JDConf 2025 is just around the corner, and whether you’re a Java developer, architect, team leader, or decision maker, I hope you’ll join me as we explore how Java is evolving with the power of AI and how you can start building the next generation of intelligent applications today.

Why JDConf 2025?
With over 22 expert-led sessions and 10+ hours of live content, JDConf is packed with learning, hands-on demos, and real-world solutions. You’ll hear from Java leaders and engineers on everything from modern application design to bringing AI into your Java stack. It’s free, virtual, and your chance to connect from wherever you are. (On-demand sessions will also be available globally from April 9–10, so you can tune in anytime from anywhere.)

Bring AI into Java Apps
At JDConf 2025, we are going beyond buzzwords. We’ll show you how to bring AI into real Java apps, using patterns and tools that work today.

First, we’ll cover Retrieval-Augmented Generation (RAG), a design pattern where your app retrieves the right business data in real time and combines it with AI models to generate smart, context-aware responses. Whether it is answering support queries, optimizing schedules, or generating insights, RAG enables your app to think in real time.

Second, we’ll introduce AI agents -- software entities that do more than respond. They act. Think about automating production line scheduling at an auto manufacturer or rebooking delayed flights for passengers. These agents interact with APIs, reason over data, and make decisions, all without human intervention.

Third, we’ll explore the complete AI application platform on Azure. It is built to work with the tools Java developers already know - from Spring Boot to Quarkus - and includes OpenAI and many other models, vector search with PostgreSQL, and libraries like Spring AI and LangChain4j.
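The RAG pattern boils down to two steps: retrieve the most relevant business data, then combine it with the user's question into a single prompt for the model. The sketch below illustrates only that flow; the documents, the toy keyword-overlap retrieval, and the prompt wording are all made up for illustration. A production app would retrieve with an embedding model plus a vector store (for example PostgreSQL) and send the assembled prompt to a chat model's inference endpoint.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class RagSketch {
    record Document(String id, String text) {}

    // Toy retrieval: score each document by how many query words it
    // contains. A real RAG app would use embeddings + vector search.
    static Document retrieve(List<Document> store, String query) {
        return store.stream()
                .max(Comparator.comparingLong((Document d) -> score(d.text(), query)))
                .orElseThrow();
    }

    static long score(String text, String query) {
        String lower = text.toLowerCase();
        return Arrays.stream(query.toLowerCase().split("\\s+"))
                .filter(lower::contains)
                .count();
    }

    // Combine retrieved context with the question into one prompt,
    // ready to send to a chat model.
    static String buildPrompt(Document context, String question) {
        return "Answer using only this context:\n" + context.text()
                + "\n\nQuestion: " + question;
    }

    public static void main(String[] args) {
        List<Document> store = List.of(
                new Document("faq-1", "Orders ship within 2 business days."),
                new Document("faq-2", "Returns are accepted within 30 days."));
        String question = "How long do returns stay open?";
        Document hit = retrieve(store, question);
        System.out.println(buildPrompt(hit, question));
    }
}
```

Libraries like Spring AI and LangChain4j provide ready-made building blocks for both steps, so the retrieval and prompt assembly above collapse into a few configuration lines rather than hand-written code.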
Here are just two example stacks:
- Spring Boot AI Stack: any app hosting service like Azure Container Apps or App Service + Spring AI + OpenAI + PostgreSQL for business data and vector data store.
- Quarkus AI Stack: any app hosting service like Azure Container Apps or App Service + LangChain4j + OpenAI + PostgreSQL for business data and vector data store.
This is how you turn existing Java apps into intelligent, interactive systems, without reinventing everything.

Whether you are an experienced developer or just starting out, JDConf offers valuable opportunities to explore the latest advancements in Java, cloud, and AI technologies; gain practical insights; and connect with Java experts from across the globe – including Java 25, Virtual Threads, Spring Boot, Jakarta EE 12, AI developer experiences, Spring AI, LangChain4j, combining data and AI, and automated refactoring for Java app code modernization. We’ll also show you how GitHub Copilot helps you modernize faster. GitHub Copilot's new “upgrade assistant” can help refactor your project, suggest dependency upgrades, and guide you through framework transitions, freeing you up to focus on innovation.

Get the Right Fit for Your Java App
And what if your apps run on JBoss, WebLogic, or Tomcat? We will walk you through how to map those apps to the right Azure service:
- Monoliths (JAR, WAR, EAR) → Deploy to App Service
- Microservices or containers → Use Azure Container Apps or AKS
- WebLogic & WebSphere → Lift and shift to Azure Virtual Machines
- JBoss EAP containers → Run on Azure Red Hat OpenShift
You’ll get clear guidance on where your apps fit and how to move forward, with no guesswork or dead ends.

Let's Code the Future, Together
I’ll be there, along with Josh Long from the Spring AI community and Lize Raes from the LangChain4j community, delivering a technical keynote packed with practical insights. If you haven’t started building intelligent Java apps, you can start with JDConf.
If you’ve already started on the journey, tune in to learn how you can enrich your experiences with the latest in tech. So, mark your calendar. Spread the word. Bring your team. JDConf 2025 is your place to build what is next with Java and AI. 👉 Register now at jdconf.com. Check out the 20+ exclusive sessions brought to you by Java experts from across the globe in all major time zones.

JDConf 2025: Announcing Keynote Speaker and Exciting Sessions on Java, Cloud, and AI
Microsoft JDConf 2025 is rapidly approaching and promises to be the must-attend event for Java developers, particularly those interested in the latest advancements in Java, Cloud and AI. This year, the conference will feature over 22 sessions and more than 10 hours of live streaming content for a global audience, along with additional on-demand sessions available from April 9 to 10. The spotlight this year is on integrating AI into your development workflow with tools like Copilot, showcasing how these advancements are revolutionizing the coding landscape. Whether you are exploring application modernization, leveraging AI for intelligent apps, or optimizing Java deployments, JDConf has sessions for every interest.

Code the future with AI
- Explore AI-driven Java innovation: Uncover the role of AI in enhancing Java application development on the cloud for greater efficiency and innovation.
- Livestream for all time zones: Live sessions scheduled to accommodate attendees from around the globe, ensuring no one misses out.
- Learn from Java experts and innovators: Discover the impact of diversity and open-source innovation in advancing the Java ecosystem.
- Global networking opportunity: Connect with Java professionals and community leaders worldwide to share knowledge and foster community growth.
- Free & accessible content: Enjoy all sessions without cost, available live and on-demand for ultimate flexibility.
- Earn rewards: Join the JDConf experience and earn Microsoft Rewards points.

🌟 RSVP now at JDConf.com! ⭐

This year’s list of sessions
Figure 1: Your quick guide to JDConf 2025: cheat sheet for the keynote and breakout sessions happening across three regions. Do not miss out on planning your perfect conference experience!
Technical keynote: Code the future with Java & AI
Amanda Silver, Microsoft | Josh Long, Broadcom | Lize Raes, Naboo.ai
Join Amanda Silver, CVP and head of product, Microsoft Developer Division, as she takes the stage for the JDConf Opening Keynote, exploring how Java developers can harness the power of AI, cloud, and cutting-edge tools to accelerate development. From Visual Studio Code and GitHub Copilot to Cloud services, Amanda will showcase how Cloud and AI are transforming the developer experience, enabling teams to go from code to production faster than ever. She’ll also dive into the latest advancements in Java technologies, Microsoft's deep investments in the Java ecosystem, and the company's ongoing commitment to open-source innovation. Don't miss this opportunity to discover how Microsoft is empowering Java developers in the AI era!

Session summaries by region

Americas live stream - April 9, 8:30am – 12:30pm PDT
- Spring Boot: Bootiful Spring Boot: A DOGumentary by Josh Long will dive into Spring Boot 3.x and Java 21, exploring AI, modularity, and powerful optimizations like virtual threads, GraalVM, and AppCDS.
- AI Dev Experience: Boosting AI Developer Experience with Quarkus, LangChain4j, and Azure OpenAI by Daniel Oh will demonstrate how this trio streamlines development and powers intelligent apps.
- Spring AI: How to Build Agents with Spring AI by Adib Saikali will showcase building intelligent AI agents, covering key patterns like self-editing memory, task orchestration, and collaborative multi-agent systems.
- Jakarta EE 12: What Comes After Jakarta EE 11? Reza Rahman and Emily Jiang will share the roadmap, contribution pathways, and key updates, including Security, Concurrency, Messaging, and new APIs.
- Deployment: Production Best Practices: Go from Dev to Delivered and Stay There by Mark Heckler will take Java apps from development to production with a focus on CI/CD, containerization, infrastructure as code, and cloud deployment.
- Cloud-native: Java Cloud-Native Shoot-Out: InstantOn vs CRaC vs Native Image by Yee-Kang Chang and Rich Hagarty will compare three emerging Java technologies – Liberty InstantOn, OpenJDK CRaC, and Native Image – to determine which best supports fast start-up times and low resource usage in your cloud-native apps.
- AI-Driven Testing: Test Smarter, Not Harder: AI-Driven Test Development by Loiane Groner will demo how AI-powered tools like GitHub Copilot enhance TDD through automated test generation and improved test coverage, even for legacy code.

Asia-Pacific live stream – April 10, 10:00am-1:30pm SGT
- LLMs integration: Building LLM Apps in Java with LangChain4j and Jakarta EE by Bazlur Rahman and Syed M Shaaf will demonstrate how to integrate large language models (LLMs) into Java apps, including techniques like retrieval-augmented generation (RAG) and embedding databases.
- Java Modernization: Modernize Java Apps Using GitHub Copilot Upgrade Assistant for Java by Nick Zhu will show how this tool can help modernize Java apps by automating refactoring, managing dependencies, and resolving version conflicts.
- Automated Refactoring: The State of AI in Large Scale Automated Refactoring by Jonathan Schneider will show how OpenRewrite’s Lossless Semantic Tree enhances AI-driven refactoring for accurate decision-making.
- Java Modernization: Cloud Migration of Java Applications Using Various Tools and Techniques by Yoshio Terada will demo modernizing legacy apps with tools like VS Code, GitHub Copilot, and Azure Migrate.
- Java & AI: AI for Java Developers by Dan Vega will introduce AI for Java developers, covering machine learning, deep learning, and practical AI implementations such as chatbots, recommendation systems, and sentiment analysis.
- Hyperscale PaaS: Spring, Quarkus, Tomcat, JBoss EAP - Hyperscale PaaS for Any Java App by Haixia Cheng and Edward Burns will demo how to deploy any Java apps on Azure App Service.
- Buildpacks: Paketo Buildpacks: The Best Way to Build Java Container Images? by Anthony Dahanne and David O'Sullivan will explore the benefits of buildpacks for Java containerization, comparing them with traditional Dockerfile-based approaches.

Europe, Middle East and Africa - April 10, 9:00am – 12:30pm GMT
- Java 25: Explore The Hidden Gems of Java 25 with Mohamed Taman as he uncovers key Java SE features, updates, and fixes that will simplify migration to new Java and enhance your daily development workflow.
- GitHub Copilot: Use GitHub Copilot in your favorite Java IDEs by Julia Kordick and Brian Benz will show how to maximize productivity with GitHub Copilot’s latest features in IntelliJ, VS Code, and Eclipse.
- LangChain4j: AI-Powered Development: Hands-On Techniques for Immediate Impact by Lize Raes will explore AI tools like Cursor, Devin, and GitHub Workspace to help developers accelerate workflows and embrace AI-driven coding practices.
- Data and AI: Powering Spring AI with RAG and NoSQL by Theo van Kraay will demo how integrating Cosmos DB as a vector store with Spring AI enables scalable, intelligent and high performing apps.
- Spring Security: Passkeys, One-Time Tokens: Passwordless Spring Security by Daniel Garnier-Moiroux dives into the latest passwordless authentication methods in Spring Security with real-world implementation demos.
- Virtual Threads: Virtual Threads in Action with Jakarta EE Core Profile by Daniel Kec explores Helidon 4, the first Jakarta EE Core Profile runtime built on a pure Virtual Thread-based web server.
- Web apps: Simplifying Web App Development with HTMX and Hypermedia by Frederik Hahne shows how HTMX and modern template engines simplify Java web development by reducing reliance on complex single-page apps.

Register and attend to earn rewards
🚀 Join the JDConf Experience and Earn Microsoft Rewards! 🚀
The first 300 attendees to check in live for one of the JDConf streams - America, Europe or Asia - will receive 5,000 Microsoft Rewards points.
How to Participate:
Attendance Rewards: For your check-in to be counted, you will need to do one of the following on the day of the event:
- Go to the JDConf Event details page on the Reactor website, sign in with your Microsoft account (top right corner), and then check in on the right-hand side, or
- Click the Join live stream link in the confirmation or reminder e-mail you receive at the Microsoft account e-mail address you registered with, or
- Click the link in the calendar reminder email; you will see the option to add the event to your calendar in your Microsoft account confirmation email.
Points Distribution: Microsoft Rewards points will be added to the participants' Microsoft accounts within 60 days following the event. To earn points, you must use an email that is associated with a Microsoft account. You will receive an e-mail from the Microsoft Reactor team if you are eligible and earn the Microsoft Rewards. Points can be used towards many different rewards; check out Microsoft Rewards to see what rewards are available in your region. Terms | Privacy

RSVP now - engage, learn, and code the future with AI!
Do not miss out – RSVP now and be part of the future of Java at JDConf 2025! We are calling all Java enthusiasts and developers around the globe to join us for a two-day event on April 9 and 10. This is more than just a conference. It is a chance to engage with the community, learn from the experts, and help drive Java technology forward. Get ready to dive into deep Java insights, connect with fellow developers, and discover the latest innovations that are shaping the world of Java. Let us gather to celebrate our passion for Java, share knowledge, and explore new possibilities together. Make sure you are there to push the boundaries of what Java can do. RSVP now at JDConf.com and let's make JDConf 2025 a milestone event for Java and its community. See you there!
⭐ RSVP now at JDConf.com 🌟

Meet First Round of Speakers for Microsoft JDConf 2025: Code the future with Java and AI
We are excited to share the initial lineup of speakers and sessions for Microsoft JDConf 2025, taking place on April 9-10. Whether you are an experienced developer or just starting out, JDConf offers valuable opportunities to explore the latest advancements in Java, Cloud and AI technologies, gain practical insights, and connect with Java experts from across the globe. Secure your spot now at jdconf.com. Here are the initial sessions and speakers who will provide valuable insights into Java, Cloud, and AI.

- Java 25. Explore The Hidden Gems of Java 25 with Mohamed Taman as he uncovers key Java SE features, updates, and fixes that will simplify migration to new Java and enhance your daily development workflow.
- Virtual Threads. Virtual Threads in Action with Jakarta EE Core Profile by Daniel Kec will explore Helidon 4, the first Jakarta EE Core Profile runtime built on a pure Virtual Thread-based web server.
- Spring Boot. Bootiful Spring Boot: A DOGumentary by Josh Long will dive into Spring Boot 3.x and Java 21, exploring AI, modularity, and powerful optimizations like virtual threads, GraalVM, and AppCDS.
- Jakarta EE 12. What Comes After Jakarta EE 11? Reza Rahman and Emily Jiang will share the roadmap, contribution pathways, and key updates, including Security, Concurrency, Messaging, and new APIs.
- GitHub Copilot. Use GitHub Copilot in your favorite Java IDEs by Julia Kordick and Brian Benz will show how to maximize productivity with GitHub Copilot’s latest features in IntelliJ, VS Code, and Eclipse.
- AI Dev Experience. Boosting AI Developer Experience with Quarkus, LangChain4j, and Azure OpenAI by Daniel Oh will demonstrate how this trio streamlines development and powers intelligent applications.
- Spring AI. How to Build Agents with Spring AI by Adib Saikali will showcase building intelligent AI agents, covering key patterns like self-editing memory, task orchestration, and collaborative multi-agent systems.
- LangChain4j. AI-Powered Development: Hands-On Techniques for Immediate Impact by Lize Raes will explore AI tools like Cursor, Devin, and GitHub Workspace to help developers accelerate workflows and embrace AI-driven coding practices.
- Data and AI. Powering Spring AI with RAG and NoSQL by Theo van Kraay will demo how integrating Cosmos DB as a vector store with Spring AI enables scalable, intelligent and high performing applications.
- Automated Refactoring. The State of AI in Large Scale Automated Refactoring by Jonathan Schneider will show how OpenRewrite’s Lossless Semantic Tree enhances AI-driven refactoring for accurate decision-making.
- Java Modernization. Cloud Migration of Java Applications Using Various Tools and Techniques by Yoshio Terada will demo modernizing legacy apps with tools like VS Code, GitHub Copilot, and Azure Migrate.
- AI-Driven Testing. Test Smarter, Not Harder: AI-Driven Test Development by Loiane Groner will demo how AI-powered tools like GitHub Copilot enhance TDD through automated test generation and improved test coverage, even for legacy code.

RSVP Now
Join us at Microsoft JDConf 2025 and code the future with Java, Cloud and AI. RSVP today at jdconf.com to secure your spot. Your registration grants access to live streams, on-demand sessions, and a collection of valuable resources. Stay tuned for updates on more engaging sessions and inspiring speakers. Connect with a community shaping tomorrow’s technology and gain practical insights from industry leaders. Follow the conversation using #JDConf, and visit jdconf.com for the latest agenda and schedule. Secure your spot now at jdconf.com!

Open Standard Enterprise Java and our Secure Future Initiative
Microsoft Azure is the best place for enterprise Java workloads. Whether you are using plain Java SE, Spring Boot and its many sub-projects, or a Jakarta EE and MicroProfile runtime, our portfolio of Java support has first-class, framework-native compute offerings and detailed guidance to give you confidence in your choice of Azure for your mission-critical Java workloads. This blog post covers the Jakarta EE and MicroProfile part of our Java on Azure portfolio, and specifically how our Secure Future Initiative (SFI) is supported by Jakarta EE and MicroProfile on Azure.

What is Jakarta EE and MicroProfile on Azure?
Our product offering for Jakarta EE and MicroProfile on Azure is partner driven and Azure native. We recognize that our partners are the experts in the Java frameworks that power many Fortune 500 companies [boss magazine 2024-10-25]. Microsoft has partnered with Oracle, IBM, and Red Hat to build a portfolio based on two pillars: 1. Azure portal deployment experiences and 2. step-by-step guidance. These Azure portal deployment experiences are Azure-native and are maintained and supported by each partner. The step-by-step guidance shows users exactly how to implement advanced use-cases using the partner’s Jakarta EE and MicroProfile products. Direct links to the portal experiences and guidance are included later in this blog post. The landing page for Jakarta EE and MicroProfile on Azure is at https://aka.ms/java/ee.

What is the Secure Future Initiative (SFI) and how does it relate to Jakarta EE and MicroProfile on Azure?
The Secure Future Initiative is our name for Microsoft’s comprehensive, top-to-bottom, vision-backed implementation of security for every aspect of all our products. Microsoft has been doing security at scale for half a century, but in November 2023 we launched SFI to give our users and partners a transparent look at exactly how we are delivering on our promise to be the most secure hyperscale cloud.
Everything you need to learn about, and follow along with, the SFI can be found on the SFI landing page. The remainder of this section breaks down a few of the ways SFI is implemented in our Jakarta EE and MicroProfile on Azure portfolio. Jakarta EE and MicroProfile on Azure is not an Azure service. Instead of being a service, users of the portfolio are empowered to run the software in their own tenancy, or even on their own sites with their own hardware, using Azure Local. As such, it's important to understand how SFI applies to Jakarta EE and MicroProfile on Azure. SFI is explained in terms of principles, foundations, and pillars. The principles are 1. Secure by design, 2. Secure by default, and 3. Secure operations. This table breaks down how we implement these principles for each of the Azure portal deployment experiences: Oracle WebLogic Server, IBM WebSphere Application Server and Liberty, and Red Hat JBoss EAP.

Oracle WebLogic Server
- Secure by design: For AKS, the offer is tightly integrated with Oracle Container Registry, which contains the most secure and up-to-date Critical Patch Update releases. For virtual machines, Oracle maintains the base images and allows easy registration with MyOracleSupport to get Critical Patch Updates.
- Secure by default: For AKS, the deployment experience requires providing your Oracle Container Registry credentials. Users can easily choose to pull from pre-approved, securely patched WebLogic Server images.
- Secure operations: For AKS, easy integration with Microsoft Defender for Cloud ensures continually updated threat monitoring. For complete details, see What is Microsoft Defender for Cloud? For virtual machines, the Azure Virtual Machine marketplace continually monitors all VM images and notifies vendors of vulnerabilities. Vendors, including Oracle, continually update their images based on this guidance.

IBM WebSphere Application Server and Liberty
- Secure by design: For AKS, the offer supports deploying fully secure WebSphere Liberty. For virtual machines, IBM maintains the VM images on a quarterly update schedule.
- Secure by default: For AKS, the deployment experience makes it easy to use the supported version. For virtual machines, in addition to IBM's quarterly update process, the deployment action also updates the deployed VM to the latest secure patches from IBM with OWASP and CIS compliance.
- Secure operations: For AKS, easy integration with Microsoft Defender for Cloud ensures continually updated threat monitoring. For complete details, see What is Microsoft Defender for Cloud? For virtual machines, the Azure Virtual Machine marketplace continually monitors all VM images and notifies vendors of vulnerabilities. Vendors, including IBM, continually update their images based on this guidance.

Red Hat JBoss EAP
- Secure by design: For Azure Red Hat OpenShift, security is built into the service as described in Security for Azure Red Hat OpenShift. The portal experience uses the JBoss EAP Operator, which is designed to deliver the most secure and supported EAP version to run on OpenShift. For virtual machines, Red Hat maintains all the images on a regular update schedule. For more, see Security considerations for Red Hat Enterprise Linux on Azure.
- Secure by default: For virtual machines, in addition to Red Hat's process to continually update the base images, the VM is registered with your Red Hat account at deployment time and the latest patches are applied.
- Secure operations: For virtual machines, the Azure Virtual Machine marketplace continually monitors all VM images and notifies vendors of vulnerabilities. Vendors, including Red Hat, continually update their images based on this guidance.

On top of the principles are the foundations: 1. A security-first culture, 2. The ability to integrate with security governance, 3. Continuous security improvement, and 4. "Paved paths" that optimize productivity, compliance, and security. The Jakarta EE and MicroProfile on Azure team has ongoing engineering meetings with the developers at each of our partners.
These meetings allow us to reinforce our security-first culture from Microsoft and to build on the existing security cultures at each of our partners. Security governance, as described in SFI, is about aligning security efforts with business priorities and technical implementations. One way this is implemented in our portal experiences is by integration with Azure Policy through the use of tags. All of our offers support tags; for more on tags, see Manage tag governance with Azure Policy. The first two foundations enable the third: we are constantly revising and improving our portal experiences and guidance with input from the experts in Azure core. The final foundation, "paved paths," is the most visible manifestation of SFI in the Jakarta EE on Azure portfolio. For more on the paved paths and how they relate to Jakarta EE and MicroProfile on Azure, see the next section. Finally, after the principles and foundations, come the six pillars of SFI. These pillars are very clear and detailed goals and actions that Microsoft uses to secure how it runs Azure and its own mission-critical business operations. Because the Jakarta EE and MicroProfile on Azure offers are not an Azure first-party service, the pillars are not directly applicable. Even so, Microsoft is making the pillars transparent so you can apply them in your own operations, safe in the knowledge that our fifty years of security experience are baked into every one of them. You can read the pillars at Secure Future Initiative pillars.

What are the paved paths and how do they apply to Jakarta EE and MicroProfile on Azure?

In the context of SFI, paved paths are best practices that optimize productivity, compliance, and security. Because step-by-step guidance is a big part of the Jakarta EE and MicroProfile on Azure portfolio, it makes sense to think of the guidance as paved paths. In fact, we recently audited our entire portfolio for SFI compliance.
We focused specifically on usage of the "Resource Owner Password Credential (ROPC)" pattern. Strictly speaking, this pattern comes from the world of OAuth 2.0. We use the term ROPC to include any usage of username and password credentials that could possibly be replaced by a usage of managed identities for Azure resources. Wherever possible, we have replaced the use of ROPC with a more SFI-compliant approach that does the same thing. For more details on ROPC, see Microsoft identity platform and OAuth 2.0 Resource Owner Password Credentials. For more details on managed identities for Azure resources, see What are managed identities for Azure resources? The following tables list, for each supported Jakarta EE and MicroProfile on Azure runtime, the portal deployment experiences, corresponding paved paths, and some specific notes about SFI compliance.

Oracle WebLogic Server on Azure

AKS
- Deploy it now: https://aka.ms/wlsaks
- Paved paths: Deploy WebLogic Server on Azure Kubernetes Service using the Azure portal - Azure Kubernetes Service; WebLogic Server step-by-step guidance; Tutorial: Migrate Oracle WebLogic Server to Azure Kubernetes Service (AKS) with geo-redundancy
- SFI notes: Use passwordless database connection with managed identities. The username/password approach is still shown in a separate embedded tab. Recommendation to use patched images.

Virtual machines
- Deploy it now: https://aka.ms/wls-vm-admin, https://aka.ms/wls-vm-cluster, https://aka.ms/wls-vm-base-images
- Paved paths: Quickstart: Deploy WebLogic Server on Azure Virtual Machines (VMs) - Azure Virtual Machines; Configure Passwordless Database Connections for Java Apps on Oracle WebLogic Server
- SFI notes: Use SSH for VM login.
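The ROPC audit described above comes down to one question per datasource: does the configuration carry a stored password, or does it opt in to managed-identity authentication? As a minimal sketch of that kind of check (a hypothetical helper, not our actual audit tooling; the `password` and `authentication` property names are illustrative rather than tied to any specific JDBC driver):

```java
import java.util.Map;

// Hypothetical audit helper: flags datasource configurations that still use
// the ROPC-style username/password pattern instead of a managed identity.
public final class RopcAudit {

    // A config is ROPC-style when it carries a non-blank stored password
    // and does not opt in to managed-identity authentication.
    public static boolean usesRopc(Map<String, String> dataSourceProps) {
        boolean hasPassword = dataSourceProps.containsKey("password")
                && !dataSourceProps.get("password").isBlank();
        boolean managedIdentity = "ActiveDirectoryManagedIdentity"
                .equals(dataSourceProps.get("authentication"));
        return hasPassword && !managedIdentity;
    }

    public static void main(String[] args) {
        Map<String, String> legacy = Map.of(
                "url", "jdbc:postgresql://example.postgres.database.azure.com/db",
                "user", "appuser",
                "password", "s3cret");
        Map<String, String> passwordless = Map.of(
                "url", "jdbc:postgresql://example.postgres.database.azure.com/db",
                "user", "appuser",
                "authentication", "ActiveDirectoryManagedIdentity");
        System.out.println(usesRopc(legacy));        // true
        System.out.println(usesRopc(passwordless));  // false
    }
}
```

In the passwordless case, the application obtains a short-lived Microsoft Entra token at connection time (for example via the Azure Identity library's DefaultAzureCredential) instead of reading a password from configuration, which is what the "passwordless database connection" paved paths walk through in detail.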
IBM WebSphere Application Server (traditional)

Virtual machines
- Deploy it now: https://aka.ms/twas-cluster-portal, https://aka.ms/twas-single-portal, https://aka.ms/twas-nd-vm-portal
- Paved paths: Deploy WebSphere Application Server Cluster on Azure VMs

IBM WebSphere and Open Liberty

AKS
- Deploy it now: https://aka.ms/liberty-aks
- Paved paths: Deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster - Azure Kubernetes Service; Tutorial: Migrate WebSphere Liberty/Open Liberty to Azure Kubernetes Service (AKS) with high availability and disaster recovery
- SFI notes: Use managed identity for container registry access. Use Service Connector for easy passwordless database connection.

Azure Red Hat OpenShift
- Deploy it now: https://aka.ms/liberty-aro
- Paved paths: WebSphere Liberty and Open Liberty on Azure Red Hat OpenShift

Azure Container Apps
- Deploy it now: Not applicable
- Paved paths: Deploy a Java Application with Open Liberty or WebSphere Liberty on Azure Container Apps
- SFI notes: Use managed identity for container registry access. Use Service Connector for easy passwordless database connection.

Red Hat JBoss EAP

Azure Red Hat OpenShift
- Deploy it now: https://aka.ms/eap-aro-portal
- Paved paths: Quickstart: JBoss EAP on Azure Red Hat OpenShift - Azure Red Hat OpenShift; Manually Deploy a Java Application with JBoss EAP on an Azure Red Hat OpenShift Cluster
- SFI notes: Workload identity not yet supported on Azure Red Hat OpenShift.

Virtual machines
- Deploy it now: https://aka.ms/eap-vm-vmss-portal, https://aka.ms/eap-vm-single-portal, https://aka.ms/eap-vm-cluster-portal, https://aka.ms/eap-vm-base-images
- Paved paths: Tutorial: Install JBoss EAP on Azure Virtual Machines (VMs) manually
- SFI notes: Use passwordless database connection with managed identities.

Red Hat Quarkus

AKS
- Paved paths: Deploy Quarkus on Azure Kubernetes Service - Azure Kubernetes Service
- SFI notes: Use passwordless database connection with managed identities. Use Service Connector for easy passwordless database connection.

Azure Container Apps
- Paved paths: Deploy a Java Application with Quarkus on Azure Container Apps
- SFI notes: Use passwordless database connection with managed identities. Use Service Connector for easy passwordless database connection.

Azure Functions
- Paved paths: Deploy serverless Java apps with Quarkus on Azure Functions

Multiple compute offers
- Paved paths: Quarkus with Microsoft Entra ID
- SFI notes: Use of OIDC is SFI-compliant.

Summary

Microsoft has partnered with the leading enterprise Java vendors to build Azure-native deployment experiences and guidance for the most popular enterprise Java products. Through SFI, Microsoft is committed to continually updating this portfolio to be the most secure way to run Java at scale in the cloud.

Seamlessly Integrating Azure KeyVault with Jarsigner for Enhanced Security
Dive into the world of enhanced security with our step-by-step guide on integrating Azure KeyVault with Jarsigner. Whether you're a beginner or an experienced developer, this guide will walk you through the process of securely signing your Java applications using Azure's robust security features. Learn how to set up, execute, and verify digital signatures with ease, ensuring your applications are protected in an increasingly digital world. Join us to boost your security setup now!