Spring Integrations
Spring AI 1.0 GA is Here - Build Java AI Apps End-to-End on Azure Today
Spring AI 1.0 is now generally available, and it is ready to help Java developers bring the power of AI into their Spring Boot applications. This release is the result of open collaboration and contributions across the Spring and Microsoft Azure engineering teams. Together, they have made it simple for Java developers to integrate LLMs, vector search, memory, and agentic workflows using the patterns they already know.

Why This Matters for Java Developers
Spring AI 1.0, built and maintained by the Spring team at Broadcom with active contributions from Microsoft Azure, delivers an intuitive and powerful foundation for building intelligent apps. You can plug AI into existing Spring Boot apps with minimal friction, using starters and conventions familiar to every Spring developer. Whether you are building new intelligent features or exploring AI use cases, Spring AI and Azure have you covered.

Azure - The Complete AI Stack for Java Developers
Azure offers every essential component needed to build intelligent Java applications. At the core of this offering is Azure AI Foundry, which provides a unified platform for enterprise AI operations, model builders, and application development. With AI Foundry, Java developers do not need to train or fine-tune models. They can deploy foundation models themselves, interact with already deployed foundation models, or connect to models deployed by their AI engineers or data/ML engineers. Developers can connect to these deployed models, test prompt templates, inspect token usage and latency metrics, and embed model interactions directly into their Spring Boot applications. This platform combines production-grade infrastructure with user-friendly tools, helping developers operate AI-powered applications with confidence. Here is how each piece fits into your Spring development workflow:

Model-as-a-Service – Use Azure OpenAI for hosted large language models or choose from models available in Azure AI model inference, including offerings from Meta, Mistral AI, and DeepSeek. These models can be accessed directly using Spring AI starters, allowing your app to summarize, answer, generate, or assist through a simple, declarative API.
Vector Databases – Embeddings and vector similarity are essential for semantic search and RAG. Azure provides multiple options: Cosmos DB with vector search support, Azure AI Search for advanced indexing and reranking, PostgreSQL with pgvector for SQL-based access, or Redis for in-memory speed. Spring AI integrates with these options to retrieve relevant context dynamically.
Relational Databases – Use familiar relational databases like Azure SQL Database, PostgreSQL, and MySQL for structured data and transactional workloads. These remain the backbone for business logic, customer data, and system state that complement AI-generated content.
Chat Memory – Track conversations across multiple turns using memory stores. Cosmos DB and Redis can persist chat memory, enabling your Spring-based agents to retain history, manage context, and respond in a more personalized way.
RAG Workflows – Retrieval-Augmented Generation allows models to respond using external knowledge. Spring AI provides out-of-the-box support for RAG using vector stores and document loaders, enabling Java developers to build grounded, trustworthy interactions with minimal boilerplate.
Orchestration with MCP – Build intelligent agents that call functions, retrieve data, and reason across steps.
With Model Context Protocol (MCP) support, Spring AI apps can integrate seamlessly with external toolchains and invoke capabilities across multiple services. The MCP Java SDK is available through Spring Boot starters, simplifying the creation of both MCP Servers and Clients.
App, AI Agent or MCP Server Deployment – Run your Spring AI applications, AI agents, and MCP servers on your choice of Azure compute: Azure App Service for managed deployments, AKS for containerized microservices, Azure Container Apps for serverless scale-out, or Virtual Machines for full control. Spring Boot simplifies packaging and deployment across these environments. In addition to deploying applications built with Spring AI, you can deploy any MCP Server - regardless of the language stack - to Azure and interoperate with it using MCP Clients built with Spring AI. This ensures that Java developers can connect and collaborate with agents and services written by other teams using different technology stacks.

Whether you are deploying an AI-powered web app, a backend intelligence service, or a full agentic workflow across services, Azure provides the flexibility, scalability, and operational reliability you need.

Fundamentals for Enterprise-Grade AI Applications
In production, AI features must align with enterprise requirements like security, reliability, and explainability. Here is how Spring AI and Azure together deliver on these needs:

Security and Access Control – Ensure AI features respect role-based access policies. Use keyless or passwordless authentication for Spring Boot apps accessing Azure OpenAI, PostgreSQL + pgvector, Cosmos DB, and connections to Azure SQL, PostgreSQL, and MySQL – using managed identities and Entra ID. Integrate identity with Microsoft Entra ID and protect secrets using Azure Key Vault. Spring Security helps you enforce user-scoped interactions.
Safety and Compliance – Use content filtering, prompt injection protection, and policy alignment to control model behavior. Azure OpenAI includes built-in safety filters. Combined with Spring AI’s prompt templates, you can enforce structured, policy-compliant interactions.
Observability – Monitor model usage, token consumption, latency, and errors. Spring Boot Actuator and Micrometer provide metrics out-of-the-box. You can export these metrics to Azure Monitor and Application Insights for full-stack visibility.
Structured Outputs – Use JSON or XML formats when integrating AI responses into downstream systems. Spring AI supports output parsing and schema validation, so generated content can drive actions within your application without post-processing.
Reasoning and Explainability – Let applications show sources, highlight references, or explain decision flows. In domains like healthcare or finance, this transparency builds trust. Spring AI supports tool calling and multi-step workflows that help models reason and communicate clearly.

Tools, Demos and Learning Paths
As always, you can start at start.spring.io, the easiest place to start any Spring Boot project, to quickly generate your app project using Spring AI, Web and Azure starters like OpenAI, AI Search, Cosmos DB, PostgreSQL + pgvector, relational databases, and Redis.
End-to-End Demo – watch a hands-on demo of building enterprise AI agents with Java, Spring, and MCP.

Get Started Today!
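To make the chat and structured-output pieces above concrete, here is a minimal sketch (not taken from the announcement or the linked demo) of what a Spring AI 1.0 call can look like from a Kotlin Spring Boot app. It assumes a Spring AI chat model starter (for example, the Azure OpenAI one) is on the classpath with its endpoint, key, and deployment configured in application properties, and jackson-module-kotlin available for the data-class mapping; the ProductSummary type and endpoint paths are purely illustrative.

```kotlin
import org.springframework.ai.chat.client.ChatClient
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RequestParam
import org.springframework.web.bind.annotation.RestController

// Hypothetical shape used only to demonstrate structured output mapping.
data class ProductSummary(val name: String, val highlights: List<String>)

@RestController
class AssistantController(builder: ChatClient.Builder) {

    // ChatClient.Builder is auto-configured once a Spring AI model starter is present.
    private val chatClient: ChatClient = builder.build()

    // Plain chat completion: send a user prompt, return the model's text.
    @GetMapping("/ask")
    fun ask(@RequestParam question: String): String? =
        chatClient.prompt()
            .user(question)
            .call()
            .content()

    // Structured output: Spring AI asks the model for JSON and maps it onto the data class.
    @GetMapping("/summarize")
    fun summarize(@RequestParam product: String): ProductSummary? =
        chatClient.prompt()
            .user("Summarize the product $product for a store listing.")
            .call()
            .entity(ProductSummary::class.java)
}
```

The same ChatClient can be pointed at a different provider by swapping the starter and its properties, which is the portability argument the announcement makes.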
Resources: Azure QuickStart using Spring AI and Spring AI resources
Azure QuickStart using Spring AI
o Chat
o Use Your Data
o Completions
Spring AI
o Spring AI Project
o Spring AI Documentation
o Spring AI Examples

Collaboration and Contributions
This GA release of Spring AI reflects the combined engineering and open-source commitment of Broadcom and Microsoft. Special thanks to Adib Saikali, Mark Pollack, Christian Tzolov, Dariusz Jędrzejczyk, Josh Long, Dan Vega, DaShaun Carter, Asir Selvasingh, Theo van Kraay, Mark Heckler, Matt Gotteiner, Govind Kamtamneni, Jorge Balderas, Brendan Mitchell, and Karl Erickson for their leadership, code, documentation, guidance, and community focus.

Get Started Today!
Everything you need is ready. Build your next AI-powered Java app with Spring AI and Azure. Begin your journey here: aka.ms/spring-ai

The State of Coding the Future with Java and AI – May 2025
Software development is changing fast, and Java developers are right in the middle of it - especially when it comes to using Artificial Intelligence (AI) in their apps. This report brings together feedback from 647 Java professionals to show where things stand and what is possible as Java and AI come together. One of the biggest takeaways is this: Java developers do not need to be experts in AI, machine learning, or Python. With tools like the Model Context Protocol (MCP) Java SDK, Spring AI, and LangChain4j, they can start adding smart features to their apps using the skills they already have. Whether it is making recommendations, spotting fraud, supporting natural language search or a world of possibilities, AI can be part of everyday Java development. The report walks through real-world approaches that Java developers are already using - things like Retrieval-Augmented Generation (RAG), vector databases, embeddings, and AI agents. These are not just buzzwords - they help teams build apps that work well at scale, stay secure, and are easier to manage over time. For teams figuring out where to start, the report includes guidance and simple workflows to make things easier. In short, Java is well-positioned to keep leading in enterprise software. This is an invitation to Java architects, tech leads, decision-makers, and developers to explore what is next and build smarter, more connected apps with AI.

Introduction
The world of software development is changing fast. Over the past two years, we have seen a major shift – not just in tools and frameworks, but in how developers think about building software. Artificial Intelligence is now part of the everyday conversation – helping developers rethink what their applications can do and how quickly they can build them. In the middle of all this change, it helps to pause and look at where we are. Java developers are especially exploring how to add intelligence to their existing applications or build new ones that can learn, adapt, and scale. But with so many innovative ideas and so much information out there, the real question is – what are developers doing? To answer that, we reached out directly to Java professionals across the world. We wanted to understand their thinking, what they are trying, and what they need to move forward with confidence. Our invitation was simple - "Calling all Java pros – share your insights to help simplify AI-powered apps 👉 aka.ms/java-ai."

The response was strong. A total of 647 Java professionals took part:
587 have experience with AI - representing a wide range of perspectives and levels of AI knowledge.
60 have not yet explored AI - but are curious and eager to learn what is possible.
Among all respondents:
Two-thirds (67%) had 4 to 10 years of Java experience.
One-third (33%) had more than 10 years of experience.
This report highlights what we learned – and what it means for the future of Java and AI.

The Scenario We Asked Java Pros to Imagine
“Picture yourself adding an AI-driven feature to an existing Java-based app or building a brand-new intelligent application. This feature might improve customer experience – such as personalized recommendations – optimize business processes – like fraud detection – or enhance product searches using natural language. Your goal is to seamlessly integrate this feature, ensuring it is easy to develop, scalable, and maintainable.”

An impressive 97 percent of respondents said they would choose Java for building this type of intelligent application.
A Common Misconception
90 percent of respondents believed that building intelligent Java apps would require deep experience with AI, Machine Learning, or Python.

Developers Can Start to Deliver Production-Grade Intelligent Java Apps without AI, ML, or Python Skills

Myth of AI/ML and Java
Java developers already have what they need – today – to build intelligent applications using modern Java-first frameworks such as Model Context Protocol (MCP) Java SDK, Spring AI, or LangChain4j. No prior experience in Python or Machine Learning is required for Java developers to begin adding intelligent features to their apps. Connecting a Java application to backend AI systems – including Large Language Models and Vector Databases – is conceptually like working with REST APIs or traditional SQL and NoSQL databases. Modern libraries like MCP Java SDK, Spring AI, and LangChain4j make it easier for developers to build and enhance AI-powered Java applications. These frameworks offer support for:
Retrieval-Augmented Generation (RAG)
Conversational memory
Conversation logging
Integration with vector stores
Secure, observable, and safe-by-default interactions
Streamed outputs and structured reasoning

Java continues to play a leading role in enterprise software. This gives Java developers a natural advantage – and a unique opportunity – to lead the way in delivering intelligent features inside core business applications. It is also important to note that tasks requiring deep AI and Data Science knowledge are best left to specialists. Java developers can focus on app logic, integration, and delivering business value without needing to become AI experts themselves.

In-Process vs HTTP-Based - A Common Misstep in AI Application Design
AI-powered applications can be built in different ways - and one of the patterns is to embed the model directly within the same app that handles business logic and exposes the API. This is known as the in-process approach. In this setup, the model is loaded at runtime, using local weights and often relying on a GPU for inference. It is a convenient option - especially when working with models you have created from scratch or downloaded for use in your own application.

The Shift to Model-as-a-Service - A Simple History
Before foundation models were made available as services, most AI models were custom-built for specific use cases – like classifying documents, detecting anomalies, or predicting demand. These models were typically developed in Python using frameworks such as TensorFlow or PyTorch. Because development and usage happened in the same environment, it was natural to load the model directly into the application’s memory using local weights, and to rely on a local GPU for inference. This model-in-app pattern made sense when the app and the model were designed together. Many popular Python-based libraries, including PyTorch, TensorFlow, and Hugging Face Transformers, encourage this in-process setup by default. As a result, the model often becomes a local function call - tightly coupled to the application’s logic and runtime. However, that convenience introduces scaling and maintenance challenges. Every application instance must run on a machine with GPU access. You must allocate GPU resources per app, even when the app is idle. As demand grows, this leads to higher infrastructure costs, lower resource efficiency, and architectural rigidity. If you scale the app, you are often forced to scale GPU capacity along with it - even if only the app's business logic needs scaling.
The Rise of HTTP-Based Integration with Model-as-a-Service The introduction of foundation models like GPT-4, available through services such as OpenAI and Azure OpenAI, brought a shift in how models are used in applications. These models are designed to handle a wide range of tasks and are offered as cloud-hosted APIs - a model-as-a-service approach. Instead of embedding the model into each application, you send a request over HTTP and receive a response. This change enables a new design pattern: the application and the model are treated as separate services. The app handles business logic, while the model service handles inference. This pattern brings clear advantages - modularity, cleaner separation of concerns, centralized control over GPU infrastructure, and the ability to reuse models across many applications. The diagram above illustrates this shift. On the left, the in-process setup binds the model tightly to the application, requiring direct GPU access and local weights. On the right, the HTTP-based setup enables applications written in any language stack - such as Java, Python, JavaScript, .NET, or Go - to interact with a shared model endpoint over HTTP. This separation makes it easier to update models, manage infrastructure, including GPU infrastructure, and scale intelligently. It also reflects how most modern AI platforms are now built. HTTP-based integration is scalable, cost-effective, and designed for modern application environments. It reduces operational complexity and gives developers the flexibility to choose the architecture that fits their needs - without being locked into one stack, tools, models, or setup. Myth about Python As we listened to Java developers across the community, a familiar pattern emerged. Many shared their experiences - and sometimes frustrations - when working with AI technologies in Java. These were not just passing remarks. They reflected real challenges - especially when it came to building or training machine learning models, where Python has long been the preferred environment. Here is a glimpse into what we heard: “Java has fewer AI-specific libraries compared to Python. Libraries like TensorFlow and PyTorch are more mature and feature-rich in Python, making it easier to implement complex AI models.” “Working on AI-powered applications with Java presents challenges, especially when building or training models. The ecosystem is not as deep as Python’s, which has tools like Scikit-learn and notebooks like Jupyter.” “Even though Java can be used for AI, the support for GPU acceleration is not as seamless as it is in Python. You need extra setup and tuning.” “There are fewer Java developers with strong AI backgrounds. It is harder to find or grow a team when Python seems to be the go-to language for most ML engineers.” These are all honest, valid observations. If the job-to-be-done is building foundation models, training models from scratch, or fine-tuning existing models, then Python is a natural choice. It offers the right tools, libraries, and ecosystem support to do that job well. But here is what really matters today: Most AI application developers - including those working in Java - are not training or fine-tuning models. They are not building models from the ground up or optimizing low-level GPU workloads. Instead, they are focused on a different job: Connecting to existing foundation models. Calling AI services over REST APIs. Using AI libraries like Spring AI and LangChain4j to orchestrate intelligent workflows. 
Querying vector databases. Embedding AI capabilities into production-grade enterprise applications. This distinction is clearly reflected in the diagram above. On the right side, you see “Model Training and Development” as a separate, specialized job. It is critical work - best handled by teams with deep expertise in data science and model engineering. On the left side, you see the application architecture most Java developers work with every day: REST APIs, business logic, database integration, and calls to external AI models and vector stores using AI libraries. This is where Java fits. Java developers are not building models - they are building apps on top of foundation models. And with tools like MCP Java SDK, Spring AI, and LangChain4j, they are not playing catch-up - they are building what matters, integrate AI into existing apps and capabilities. You do not need to train models. You just need to wire up the right parts, connect to the services that make AI possible, and deliver intelligent functionality where it belongs - inside the applications your organization already depends on. “You can be an AI application developer - in less than 2 minutes. Minute 1: Sign up for access to an LLM - Azure OpenAI, OpenAI, whatever. Get yourself an API key. Minute 2: Head to https://start.spring.io, specify `OpenAI` (or whatever), hit `Generate`, and then open your new Spring Boot + Spring AI project in your IDE. In your `application.properties`, you’ll need to specify your API key. Then, inject the auto-configured `ChatClient` somewhere in your code. Use the `ChatClient` to make a request to your model. Congratulations - you’re an AI application developer!” -- Josh Long, Spring Developer Advocate, Broadcom "Is Java still relevant in this new era of AI?" "How do I, with my years of Java expertise, even begin to work with these Large Language Models today?" These are questions I have heard time and again at community events and industry conferences. Today, Java developers are at a pivotal moment. Our existing skills are not just still relevant - they are the foundation for building the next generation of AI-powered applications. Thanks to frameworks like Quarkus, Langchain4j, and MCP integration, we can bridge the world of traditional enterprise development with the fast-growing world of AI - all without giving up the strengths and familiarity of Java.” – Daniel Oh, Senior Principal Developer Advocate, Red Hat The future of AI in software development will be defined by who integrates AI well into applications - both existing and new apps. Most AI-related development will be connecting models to solve real problems inside real applications. And in that space, Java is already strong. From financial services to healthcare, logistics to manufacturing - Java powers the business logic and workflows that now need to become intelligent. This is where Java developers shine. And with the right tools, they are more than ready to lead. Crucial Elements for Java AI Applications As we looked deeper into the survey results, one thing became clear – Java developers are not just interested in adding AI for the sake of it. They are focused on building practical, enterprise-ready features that are reliable, secure, and easy to maintain. 98% of respondents highlighted a core set of approaches or elements that they see as essential for any AI-powered Java application: Retrieval-Augmented Generation (RAG) – Bringing real-time, context-aware answers by grounding responses in trusted data. 
This is especially useful in enterprise scenarios where accuracy and context matter. Embeddings and Vector Databases – Enabling efficient semantic search and advanced knowledge retrieval. Developers recognize this as the key to making applications “understand” the meaning behind user inputs. Function Calling or Tool Calling – Allowing AI models to interact with APIs, pull in real-time data, or trigger backend workflows. This is where AI starts to act – not just suggest – making it a true part of the application logic. AI Agents – These are not just chatbots. Agents are intelligent programs that can automate or assist with tasks on behalf of users or teams. They combine reasoning, memory, and action – gathering information and triggering responses dynamically. For many developers, agents represent the next step toward intelligent automation inside business-critical workflows. Fundamentals for Enterprise-Grade AI Applications When building applications for the enterprise, developers know that intelligence alone is not enough. Trust, safety, and integration matter just as much. These are the foundational features Java developers called out: Security and Access Control – Making sure AI features respect user roles, protect sensitive data, and fit into enterprise identity systems. Safety and Compliance – Filtering outputs to align with internal policies, legal regulations, and brand standards. This is especially important for customer-facing features. Observability – Tracking how AI decisions are made, logging user and AI interactions, and making sure there is a clear record of what happened – and why. Structured Outputs – AI responses need to work within the system, not outside it. Structured formats – like JSON or XML – ensure smooth handoffs between the AI component and the rest of the application. Reasoning and Explainability – Developers want AI features that can explain their answers, show their sources, and help users trust the output – especially in domains like finance, healthcare, or compliance. Representative Scenarios and Business Impact To make things more concrete, let us look at two sample scenarios. These are not the only ones – just representative examples to help spark ideas. There is a broad and growing range of real-world situations where Java developers can use AI to create business value. Scenario One – Intelligent Workflow Automation Imagine a production manager at an auto manufacturer - say, Mercedes-Benz or Ford - who needs to align the assembly line schedule with real-time component availability and constantly shifting order priorities. The manager’s question is urgent and complex: “How can I adjust the production schedule based on today’s parts inventory and current orders?” Answering means pulling in data from ERP systems, supply chain feeds, vendor dashboards, and manufacturing operations - a level of complexity that can overwhelm even experienced teams. This is where AI steps in as a true copilot - working alongside the human decision-maker to gather data, flag supply constraints, and highlight scheduling options. Together, they can plan faster, adapt more confidently, and respond to change in real time. For Java developers, this is an opportunity to build intelligent systems that bring together data from ERP, inventory, and order management applications - enabling AI models to interact with information and collaborate with decision-makers. 
These systems do not rely on AI alone; they depend on strong data integration and reliable workflows - all of which can be designed, secured, and scaled within the Java ecosystem. In this way, AI becomes part of a co-working loop - and Java developers are the ones who make that loop real. Scenario Two – AI-Powered Process Assistants Picture a logistics manager at a major shipping company - FedEx, UPS, or DHL - facing cascading delays due to severe weather across multiple regions. The manager is under pressure to reroute packages efficiently while minimizing disruptions to downstream delivery schedules. The question is urgent: “What is the fastest rerouting option for delayed packages due to weather?” Answering requires combining live weather feeds, traffic data, delivery schedules, hub capacities, and driver availability - all in real time. AI acts as a true copilot at this moment, working alongside the manager to collect relevant signals, flag risk zones, and generate rerouting recommendations. Together, they respond with speed and clarity, keeping shipments moving and customers informed. For Java teams, this is a practical opportunity to build intelligent systems that embed AI into logistics, supply chain, and delivery operations - not by rebuilding everything, but by integrating the right data streams, APIs, and business logic. The real value lies in data orchestration, not just algorithms. Java developers are key to enabling these AI-powered assistants by securing connections across systems and building workflows that help humans and AI collaborate effectively under pressure. More Scenarios – World of Possibilities These two scenarios only scratch the surface. Developers across industries – from healthcare and finance to retail and public services – are finding ways to integrate AI that solve meaningful problems, reduce complexity, and improve how their systems perform. Java and AI Technology Stack So far, we have looked at what developers want to build and how AI is changing the way applications are designed. Now, let us look at the platform behind it all – the technology stack that powers intelligent Java applications. To bring these ideas to life and impact, Java developers need a foundation that connects data, apps, and AI services. We call this an AI application platform. It is not a specific product – it is an integrated platform made up of components that most developer teams already use or are familiar with. The goal of this platform is to make developers more productive while building intelligent features into their applications. It gives teams the freedom to choose familiar tools – while making it easier to bring in AI capabilities when and where they are needed. We group this platform into three areas: The app platform The data platform The AI platform Let us break it down using the numbered diagram: Developer Services: These are the core tools that developers use every day – IDEs, coding assistants, build tools, testing frameworks, CI/CD pipelines. They help you write, debug, and manage application code across your team. Container Services | Platform-as-a-Service: This is the runtime layer – where your applications are deployed and scaled. Whether using containers or a managed platform, this layer handles traffic, performance, and operational efficiency. Data Platform: This is where your application data lives – databases, data lakes, and other storage services. It connects structured data, business logic, and real-time events. 
AI Platform: This is where intelligence is added. It includes access to large language models, embeddings, vector search, and other tools that support natural language interactions, automation, and decision-making.
Together, these four parts form the foundation for building, deploying, and managing AI-powered Java applications.

Technology Stacks for Spring Boot and Quarkus Applications
To make this more relatable, we highlighted two of the most widely adopted Java frameworks – Spring Boot and Quarkus. These stacks represent popular combinations that many Java teams are already using today for building cloud-native applications. That said, these are just representative examples. There are many valid combinations that developers, platform teams, and organizations can choose – based on their existing tools, workloads, and team preferences.

Representative Spring Boot Stack
App Platform: App hosting service of choice
AI Library: Spring AI
AI Platform: OpenAI
Business Data: PostgreSQL
Vector Database: PostgreSQL

Representative Quarkus Stack
App Platform: App hosting service of choice
AI Library: LangChain4j
AI Platform: OpenAI
Business Data: PostgreSQL
Vector Database: PostgreSQL

Both stacks support the core capabilities needed for building intelligent apps – from secure model access and real-time data integration to observability and system-level debugging. But the opportunity does not stop there.

Traditional App Servers - Tomcat, WebLogic, JBoss EAP or WebSphere
Many enterprise applications continue to run on Tomcat, WebLogic, JBoss EAP, or WebSphere. These are stable platforms that power core business systems – and they are very much part of the AI journey. If you are running on one of these platforms, you can still bring intelligence into your applications. By using a Java library of choice (Spring AI or LangChain4j), you can connect these applications to Large Language Models (LLMs) and Model Context Protocol (MCP) servers – without needing to rewrite or migrate them. This means that intelligence can be added, not just rebuilt – a powerful approach for teams with large existing investments in Java EE or Jakarta EE applications. Whether your Java app is built with Spring Boot, Quarkus, or deployed on a traditional app server, the tools are here – and the path to intelligent applications is open. You do not have to start from scratch. You can start from where you are.

Java and MCP – The Bridge to Intelligent Applications
One of the most important parts of the Java and AI story is MCP - the Model Context Protocol. MCP is an open, flexible, and interoperable standard that allows large language models to connect with the outside world - and more importantly, with real applications and real data. At its core, MCP is a bridge - a structured way for models to access enterprise data, invoke tools, and collaborate with AI agents. It gives developers control over how data moves, how decisions are made, and how actions are triggered. The result is safer, more predictable AI behavior inside real-world systems. MCP servers can be implemented in any language stack – such as Java, C#, Python, and NodeJS - and integrated into any AI-powered application, regardless of how that application is written. That interoperability makes MCP especially valuable for teams working across systems and languages. If you are building an MCP server using Java, the official MCP Java SDK maintained by Anthropic provides the right starting point.
You can also use frameworks like Spring or Quarkus to implement MCP servers with full enterprise capabilities. For those building applications using Spring AI or LangChain4j, both libraries support connecting to any MCP server - whether running locally or remotely - to orchestrate tools, call functions, and manage agent behavior as part of the runtime flow. In addition, ready-to-use implementations like the Azure MCP Server make it easier to add intelligence to backends, orchestrate workflows, and shape AI agent behavior without starting from scratch. Authentication and Authorization Security is a critical part of any enterprise-grade solution - and MCP is no exception. In collaboration with Anthropic, Microsoft proposed a new authorization specification for MCP. This specification has now been finalized and is being implemented across MCP clients and servers to ensure that all interactions are secure, policy-driven, and consistent across environments. This continued investment in standards, tooling, and security is helping MCP mature into a core enabler of intelligent applications - especially for enterprise Java teams looking to move fast without compromising on trust or control. Preferred Libraries – What Java Developers Are Choosing As part of our outreach to 647 Java professionals, we asked a key question: “Which AI frameworks or libraries would you consider for building or integrating intelligence into Java applications?” Here is what they told us: Spring AI - selected by 43 percent. LangChain4j – preferred by 37 percent. These two libraries clearly lead the way. They reflect the maturity of the Java ecosystem and show strong alignment with the two most active communities – Spring and Jakarta EE. Spring AI and LangChain4j offer higher-level abstractions that simplify how developers connect to AI services, manage context, interact with models, and build intelligent features. For developers already working in Spring Boot or Quarkus, these libraries feel familiar – and that lowers the barrier to adding intelligent capabilities into existing codebases. Other Developer Preferences At the same time, a considerable number of developers – 37 percent and 29 percent – also shared that they would prefer to work directly with AI service provider libraries or call REST APIs. This is not a surprise. In fact, it is a healthy signal. Many teams use these lower-level integrations to gain early access to new features or customize interactions in ways that higher-level libraries may not yet support. It is important that these developers know: You are not wrong - but you are not alone either. While direct API integration offers flexibility, AI libraries like Spring AI and LangChain4j are designed to make those experiences easier. They wrap the complexity, manage context, offer tested patterns that align with enterprise application needs - like observability, security, and structured outputs – plug your code into the Spring and Java ecosystem of possibilities for deep integration. Evolving Together As AI services evolve, day-zero support will almost always appear first in the service provider’s native SDKs and REST APIs. That is expected. But as those capabilities stabilize, AI libraries like Spring AI and LangChain4j will latch on – offering developers a smoother, more consistent programming experience. The result: Developers get to start fast with APIs – and scale confidently with higher level libraries and frameworks. 
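As a counterpart to the Spring AI sketch earlier, here is a minimal LangChain4j-style sketch in Kotlin of the "declare an interface, let the library implement it" pattern those respondents are choosing. It assumes the langchain4j and langchain4j-open-ai dependencies on the classpath and an OPENAI_API_KEY environment variable; exact builder and method names can shift between LangChain4j releases, so treat this as an illustration rather than a drop-in.

```kotlin
import dev.langchain4j.model.openai.OpenAiChatModel
import dev.langchain4j.service.AiServices
import dev.langchain4j.service.SystemMessage

// The "AI service" is declared as a plain interface; LangChain4j generates the implementation.
interface SupportAssistant {
    @SystemMessage("You are a concise assistant for an enterprise Java team.")
    fun answer(question: String): String
}

fun main() {
    // Chat model pointed at a hosted provider - the app never loads model weights itself.
    val model = OpenAiChatModel.builder()
        .apiKey(System.getenv("OPENAI_API_KEY"))
        .modelName("gpt-4o-mini")
        .build()

    val assistant = AiServices.create(SupportAssistant::class.java, model)

    println(assistant.answer("In one sentence, what is Retrieval-Augmented Generation?"))
}
```

The same interface-first style scales up to memory, RAG, and tool wiring, which is exactly the abstraction layer the next section argues for.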
Top Challenges and Areas Needing Improvement
As part of our research, we asked 647 Java professionals to share the biggest challenges they face – and where they believe improvements would help the most. The answers reflected where the community is today – eager to build but still facing some friction points.

Top Challenges Java Developers Encounter
Lack of clear starting points and step-by-step guidance.
Feeling overwhelmed by the variety of AI models, libraries, and tools.
Misconceptions that machine learning expertise is required.
Complexity with integrating AI features into existing applications - particularly introduced by suboptimal patterns such as directly calling REST APIs through low-level HTTP libraries, invoking Python-based routines through external OS processes, or loading models into the application’s memory at runtime using local weights and GPU resources.
Missing features in some of the current libraries and frameworks.
Uncertainty about scaling applications and safely using private models on the cloud.

Areas Developers Believe Need the Most Improvement
Clear and practical step-by-step workflows.
Guidance on how to securely integrate private models.
Examples that show how to use chat models for function calling and streaming completions.
Educational content that explains completions, reasoning, and data validation.
Tools and how-to guides for embedding-based search and question answering.
Tutorials on how to leverage external data to improve model output.

Java Developers – Familiar with AI vs. New to AI
Among all respondents:
87 percent are familiar with AI.
13 percent are newer to AI and just getting started.

Java Developers New to AI
These developers are exploring use cases, evaluating models, and building early prototypes. Their top challenges are:
Lack of clear starting points.
Too many options in terms of tools and models.
Need for simple, practical guidance.
The top areas that will benefit them:
Step-by-step development workflows.
Examples of using chat models and completions.
Simple breakdowns of key concepts like reasoning and validation.

Java Developers Familiar with AI
These developers are further along – often in development or production stages – and face a wide range of challenges. A top need for them is: Secure ways to integrate private models into Java apps. They also benefit from deeper technical content, patterns, and advanced tooling support.

Moving Forward with Confidence
This space is evolving faster than anyone expected – and that can feel overwhelming. But the top challenges Java developers face are real, and the community is actively addressing them. It is important not to worry about the number of tools, models, or libraries. That diversity is a sign of progress. Models will keep evolving. New ones will arrive. And this is exactly where higher-level Java libraries step in. Spring AI, LangChain4j, and the MCP Java SDK are designed to simplify the path forward. These libraries create a layer of abstraction that shields your application code from constant changes. You can build once – and switch models or providers as needed, without rewriting core logic. And these libraries are alive. You can see them in action on GitHub – through open issues, pull requests, and rapid updates. If you see a missing feature, open an issue. If you want to contribute, send a pull request. These are responsive communities that welcome participation.

“AI, for most people today, effectively means "sending human-language sentences to an HTTP endpoint."
80% of the noise right now is about the artful and scalable connection of these models with your data and business logic - data guarded by services whose business logic is statistically implemented in Spring Boot and Spring AI. We, as JVM developers, are uniquely well-positioned to expand the AI universe. Don’t delay - start (spring.io) today!” -- Josh Long, Spring Developer Advocate, Broadcom “By combining Quarkus’s speed, Langchain4j’s AI orchestration, and MCP’s unified tool access, Java developers are in a unique position to lead this transformation - building intelligent, resilient applications using the tools and approaches we know well. Working with AI is no longer a distant specialty - it is becoming a natural part of modern Java development” -- Daniel Oh, Senior Principal Developer Advocate, Red Hat AI Concepts for App Developers As you build your first intelligent Java applications, it helps to become familiar with key AI concepts such as foundation models, chat models, embedding models, prompts, inference endpoints, context windows, and vector search (see diagram below for a curated list). These concepts are not just theory - they directly shape how your applications interact with AI systems. To make it easier, we created a simple learning prompt that you can use with your favorite Chat Model like ChatGPT and Claude: // Prompt Template for Java Developers Learning AI Concepts // (Replace <TERM> with the topic you want to learn.) I am a Java enterprise app developer focused on building AI-powered Java applications. I do not fully understand what '<TERM>' means. Please explain it to me so simply that I can think about it like a Java library or service I would naturally use. Use examples from enterprise Java (like APIs, search, summarization, customer service bots) that I would typically build. If possible, give a mental model I can remember and relate to Java patterns. Also, suggest a small sample of how I would use '<TERM>' via an API in a Java app, if that helps understanding. You can replace <TERM> with any concept you want to learn, such as "Embedding Model" or "Inference Endpoint," and get a focused, practical explanation that fits how you already think as a Java developer. By practicing with this method, you can quickly build second-nature familiarity with the terms and ideas behind AI development - without needing a research background. With the strong foundation you already have in Java, you will be ready to confidently integrate, adapt, and innovate as AI tools continue to evolve. AI Is Now a Java Developer’s Game You do not need to train your own models to start building intelligent applications. You can do it today - using popular, production-ready foundation models - all within the Java ecosystem you already know and trust. These are still Java applications at their core. What sets you apart is not just your Java expertise - it is your deep understanding of how real business processes work and how to bring them to life in code. You know how to build, secure, scale, and operate reliable systems - and now, you can apply those same skills to deliver AI-powered solutions that run in the environments your teams already use, including Microsoft Azure. AI-Assisted App Development – A Powerful Companion to Building Intelligent Java Apps No discussion about the future of software development is complete without acknowledging the rise of AI-assisted development. This is a separate - but equally important - path alongside building intelligent applications. 
And it is transforming how developers write, upgrade, and manage code. At the center of this shift are tools like GitHub Copilot, which is reshaping how developers approach their daily work. Developers using GitHub Copilot report coding up to 55 percent faster, freeing up time to focus on design decisions, solving business problems, and writing less boilerplate code. But the benefits go deeper than speed - 75 percent of developers say that Copilot makes their work more satisfying. Today, 46 percent of all code on GitHub is written with AI assistance, and over 20,000 organizations are already embracing these tools to improve development workflows and accelerate delivery.

Built Into the Tools You Already Use
GitHub Copilot works where Java developers already build – in IDEs like Visual Studio Code, IntelliJ, and Eclipse. It brings contextual, customizable assistance, powered by the latest models such as gpt-4o and Claude 3.5 Sonnet. Whether it is suggesting code snippets, auto-completing functions, or helping enforce best practices, GitHub Copilot enhances the quality of code and the productivity of developers – all while keeping them in control.

Helping Modernize and Maintain Java Codebases
One of the most exciting capabilities is GitHub Copilot’s growing role in modernizing Java applications. Late last year, the GitHub Copilot App Modernization – Java Upgrade feature entered private preview. This tool is designed to support large, mission-critical tasks like:
Upgrading Java versions and frameworks
Refactoring code
Updating dependencies
Aligning to cloud-first practices
The process starts with your local Java project. Copilot provides an AI-generated upgrade assistant, and developers stay in the loop to approve changes step-by-step. The goal is to take the heavy lifting out of routine upgrades while ensuring everything remains safe and aligned with your architecture.

Beyond App Code – Towards Cloud-Ready Modernization
Technology providers like Microsoft are investing deeply in this space to bring additional capabilities into developer workflows – including:
Secure identity handling through passwordless authentication
Certificate management and secrets handling
Integration with PaaS services for storage, data, caching, and observability
All of this reduces the time it takes to bring legacy apps forward and prepare them for modern, scalable deployments – so teams can spend more time building intelligent features and less time managing technical debt.

Two Paths – One Goal
AI-assisted development, including upgrading and modernizing apps with AI, and building intelligent apps are not the same – but together, they form a powerful foundation. One helps you write and modernize code faster, the other helps you deliver smarter features inside your apps. For Java developers, this means there is support at every step – from idea to implementation to impact.

Start Today and Move the Java Community Forward
The message from 647 Java professionals is clear: Java developers are ready – and the tools they need to build intelligent applications are already here. If you are a Java developer and have not started your AI journey yet, now is the right time. You do not need to become an AI expert. You do not need to change your language, tools, or working style. Modern Java frameworks and libraries like Spring AI, LangChain4j, and the MCP Java SDK are designed to work the way you already build – while making it easier to add intelligence, automation, and smart experiences to your applications.
You can start with what you know – and grow into what is next: aka.ms/spring-ai and aka.ms/langchain4j.

To Java Ecosystem Leaders
We also want to speak directly to those shaping the Java ecosystem – community leaders, experienced developers, and technical influencers. Your role is more important than ever. We invite you to:
Show what is possible – share real examples of how AI features can be integrated into Java applications with minimal friction.
Promote best practices – use meetups, blogs, workshops, and developer forums to spread practical guidance and patterns that others can follow.
Improve the experience – contribute documentation, examples, and even code to help close the gaps developers face when starting their AI journeys.
Push frameworks forward – help identify and implement missing features that can simplify Java + AI integration and speed up real-world adoption.
This is not just about tools – it is about people helping people move forward. Many of you already helped make this research possible – by spreading the word on LinkedIn, sharing the survey, and encouraging others to contribute. Your support made a difference. And now, these findings belong to the entire Java ecosystem – so we can act on them together.

To the Java Developers Who Participated
Thank you. Your input – your time, your thoughts, your challenges, your ideas – shaped this entire report. You told us what is working, what is missing, and what you need next. We hope this reflection of your voices is helpful – to you and to the broader Java community. The road ahead is exciting – and Java is ready to lead.

AI Learning Resources for Java App Developers
Azure AI Services documentation
Azure AI Services quick starts – like Chat Completions and Use Your Data
Build Enterprise Agents using Java and Spring
OpenAI RAG with Java, LangChain4j and Quarkus
Spring AI
Learn how to build effective agents with Spring AI
Spring AI reference documentation
Prompt Engineering Techniques with Spring AI
Spring AI GitHub repo
Spring AI examples
Spring AI updates
The Seamless Path for Spring Developer to the World of Generative AI
LangChain4j
Supercharge your Java application with the power of LLMs
LangChain4j GitHub repo
LangChain4j Examples
Quarkus LangChain4j
Quarkus LangChain4j Workshop
LangChain4j updates

Modernising Registrar Technology: Implementing EPP with Kotlin, Spring & Azure Container Apps
Introduction
In the domain management industry, technological advancement has often been a slow and cautious process, lagging behind the rapid innovations seen in other tech sectors. This measured pace is understandable given the critical role domain infrastructure plays in the global internet ecosystem. However, as we stand on the cusp of a new era in web technology, it is becoming increasingly clear that modernization should be a priority. This blog post embarks on a journey to demystify one of the most critical yet often misunderstood components of the industry: the Extensible Provisioning Protocol (EPP). Throughout this blog, we will dive deep into the intricacies of EPP, exploring its structure, commands and how it fits into the broader domain management system. We will walk through the process of building a robust EPP client using Kotlin and Spring Boot. Then, we will take our solution to the next level by containerizing with Docker and deploying it to Azure Container Apps, showcasing how modern cloud technologies can improve the reliability and scalability of your domain management system. We will also set up a continuous integration and deployment (CI/CD) pipeline, ensuring that your EPP implementation remains up-to-date and easily maintainable. By the end of this blog, you will be able to provision domains programmatically via an endpoint, and have the code foundation ready to create dozens of other domain management commands (e.g. updating nameservers, updating contact info, renewing and transferring domains).

Who it is for
This guide is tailored primarily for registrars — services who serve as the crucial intermediary between domain registrants (the end user who wishes to claim their piece of internet real estate) and the registry systems that manage those domains. While the concepts we will explore have broad applications across the domain industry, the perspective throughout will be firmly rooted in the registrar's role. The fundamental goal of this blog is to lower the barrier to entry in the domain management space, making this technology more accessible to smaller registrars, startups and individual developers.

What you will need: EPP credentials
The entire tech stack and the development prerequisites are listed below. But before committing to this project, be aware that the cornerstone of this workflow is the registry EPP server. This is non-negotiable and absolutely essential for implementing and testing your EPP client. If you stumbled upon this blog, it is likely you already have accreditation with a registry. In this case, the registry will provide you with EPP credentials (expect a host, port, username and password). Note that some registries enforce an IP whitelist. For those who do not have accreditation with a registry, you will need to go through the relevant accreditation process or use a publicly available sandbox. For this guide, I will be using the Channel Islands registry: Channel Isles: The Islands' Domain Names. They offer the following TLDs: .gg, .je, .co.gg, .net.gg, .org.gg, .co.je, .net.je, .org.je. Among these, we will concentrate on provisioning .gg domains. The .gg TLD has gained significant popularity, particularly in the gaming community. My personal experience in getting accreditation with the Channel Islands registry was an application process and then a fee, and afterwards they provided live EPP details and access to an OTE (Operational Test & Evaluation) environment which I will be using in this blog so as to not incur unnecessary costs. If you do not have access to any EPP server, then this blog will serve as informational only. Otherwise, you can follow along in creating the system.

Understanding EPP
EPP is short for Extensible Provisioning Protocol. It is a protocol designed to streamline and standardise communication between domain name registries and registrars. Developed to replace older, less efficient protocols, EPP has become the industry standard for domain registration and management operations. More technically, EPP is an XML-based protocol that facilitates the provisioning and management of domain names, host objects and contact information. Key features include:
Stateful connections: EPP maintains persistent connections between registrars and registries, reducing overhead and improving performance.
Extensibility: As the name suggests, EPP is designed to be extensible. Registries can add custom extensions to support unique features or requirements.
Standardization: EPP provides a uniform interface across different registries, simplifying integration for registrars and reducing development costs.
For someone new to this field, it is easy to assume that domain provisioning would be done with registries through a REST API. But actually, the modern standard is using this protocol, and that is what this blog will cover.
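To make the protocol tangible, here is a representative EPP check command for a domain, of the kind a registrar client sends over its persistent connection. The shape follows RFC 5730/5731; the domain name and client transaction ID are placeholders, and individual registries may require additional extensions.

```xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
  <command>
    <check>
      <domain:check xmlns:domain="urn:ietf:params:xml:ns:domain-1.0">
        <domain:name>example.gg</domain:name>
      </domain:check>
    </check>
    <!-- Client transaction ID: echoed back by the registry in its response -->
    <clTRID>ABC-12345</clTRID>
  </command>
</epp>
```

The registry replies with a result code (for example, 1000 for a successfully completed command) and an availability flag for the queried name. The client library we use later builds and parses these XML payloads for us, so we rarely need to write them by hand.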
Choosing the tech stack
We need a combination of technologies that will provide performance, scalability and developer productivity. After careful consideration, I settled on using Kotlin as the programming language, Spring for the REST API and Azure Container Apps for deployment.

Kotlin
Kotlin has a unique blend of features which makes it a great choice for our EPP implementation. Its seamless interoperability with Java allows us to leverage existing Java libraries commonly used in other EPP implementations while enjoying Kotlin's modern syntax. The language's conciseness and readability results in cleaner, more maintainable code, which is particularly beneficial when dealing with complex EPP commands and responses.

Spring
The Spring framework plays a pivotal role in our project. After implementing the EPP functions, we will be using Spring to allow us to control these actions from the outside world. We will use Spring to create endpoints that we can use from outside of our deployment on Azure Container Apps, such as through a web backend. This is a common pattern that registrars might use, whereby when a registrant attempts registration of a domain, the web application will process most of the validation and send off the instruction to our Spring REST API which will command the EPP.

Azure Container Apps ('ACA')
One may initially assume that Azure Spring Apps would be perfect for this project. However, it was recently announced that this service is being retired, starting Sep 30th, 2024, and ending March 31st, 2028. The official migration recommendation is to move to Azure Container Apps. Note that there are other migration paths, such as a PaaS solution with Azure App Service or a containerized solution with Azure Kubernetes Service, though we will be using ACA for this blog. Read more on the retirement: https://learn.microsoft.com/en-us/azure/spring-apps/basic-standard/retirement-announcement
Azure Container Apps rounds out our tech stack, providing the ideal platform for deploying and scaling our EPP implementation. This fully managed environment allows us to focus on our application logic rather than getting bogged down in infrastructure management.
One of the key advantages of ACA is its native support for microservices architecture, which makes it the perfect choice for a Spring application. Spring's embedded Tomcat server aligns with ACA's containerised approach, allowing for easy deployment with reduced development time. Moreover, ACA's built-in ingress and SSL termination capabilities complement Spring's security features, providing a robust, secure environment for our EPP operations. The platform's ability to handle multiple revisions of an application also facilitates easy A/B testing and canary deployments, which is particularly useful when rolling out updates to our EPP system.

The architecture
Now we are familiar with the technology, let us look at how this is all going to fit together. The architecture is fairly simple: In this blog, we will be making the EPP API and deploying it to an Azure Container App. The EPP API will, of course, need to connect and communicate with a registry server. While out of scope for this blog, I have included Azure CosmosDB to show where a custom user database could fit into this flow, and an Azure Web App to show a common use case for end users. Once we have put together this EPP API which connects to a registry with Kotlin & Spring, and deployed it on ACA, the hard part is out of the way. From there, you can create any sort of user interface that is relevant to your audience (e.g. an Azure Web App) and connect with a database in any way that is relevant to your platform (e.g. Azure CosmosDB for caching). To put this architecture into a real-world context, imagine that you are purchasing a domain from a popular registrar such as Namecheap or GoDaddy; this is the kind of backend system they may have. The typical user journey, in simplistic steps, as illustrated by the diagram, would be:
1. Registrant (end user) requests to purchase a domain
2. Website backend sends instruction to EPP API (what we are making in this blog)
3. EPP API sends command to the EPP server provided by the registry
4. Response provided by registry and received by registrant (end user) on website

Setting up the development environment

Prerequisites
For this blog, I will be using the following technologies:
Visual Studio Code (VS Code) as the IDE (integrated development environment). I will be installing some extensions and changing some settings to make it work for our technology. Download at Download Visual Studio Code - Mac, Linux, Windows
Docker CLI for containerization and local testing. Download at Get Started | Docker
Azure CLI for deployment to Azure Container Registry & Azure Container Apps (you can use the portal if more comfortable). Download at How to install the Azure CLI | Microsoft Learn
Git for version control and pushing to GitHub to set up the CI/CD pipeline. Download at Git - Downloads (git-scm.com)

VS Code Extensions
These extensions are optional but will significantly improve the development experience. I would highly recommend installing them. Head to the side panel on the left, click Extensions and install the following: Kotlin, Spring Initializr, and Java Support.

Implementing EPP with Kotlin & Spring

Creating the project
First up, let us create a blank Spring project. We will do this with the Spring Initializr plugin we just installed:
Press CTRL + SHIFT + P to open the command palette
Select Spring Initializr: Create a Gradle project...
Select version (I recommend 3.3.4)
Select Kotlin as project language
Type Group Id (I am using com.stephen)
Type Artifact ID (I am using eppapi)
Select jar as packaging type
Select any Java version (The version choice is yours)
Add Spring Web as a dependency
Choose a folder
Open project

Your project should look like this:

We are using the Gradle build tool for this project. Gradle is a powerful, flexible build automation tool that supports multi-language development and offers convenient integration with both Kotlin & Spring. Gradle will handle our dependency management, allowing us to focus on our EPP implementation rather than build configuration intricacies.

Adding the EPP dependency

The Spring Initializr has kindly added the required Spring dependencies for us. Therefore, all that is left is our EPP dependency. When exploring how best to achieve my goal in connecting to a registry through EPP, I discovered the EPP RTK (Registrar Toolkit) library. This library provides a robust implementation of the Extensible Provisioning Protocol, making it an ideal choice for our project. This library is particularly useful because:

It handles the low-level details of EPP communication, allowing us to focus on business logic.
It is a Java-based implementation, which integrates seamlessly with our Kotlin and Spring setup.
It supports all basic EPP commands out of the box, such as domain checks, registrations and transfers.

By using the EPP-RTK, we can significantly reduce the amount of boilerplate code needed to implement EPP functionality. You can download the library from its project page and manually import it into your project, or preferably add the following to your build.gradle in the dependencies section:

implementation 'io.github.mschout:epp-rtk-java:0.9.11'

Also, while we are here, I would recommend setting the Spring framework plugin to version 2.7.18. This version is most compatible with the APIs we are using, and I have tried and tested it. To do this, in the plugins block, change the dependency to this:

id 'org.springframework.boot' version '2.7.18'

P.S. The EPP-RTK documentation that I used while writing this project was entirely from this page, which documents the entire API: https://epp-rtk.sourceforge.net/epp-rtk-java-0.4.1/java/doc/epp-rtk-user-guide.html

Modifying the build settings

With that knowledge, there are some specific things we need to change in our build.gradle to support the proper Java version. The version is entirely up to you, though I would personally recommend the latest to stay up to date with security patches. Copy/replace the following into the build.gradle:

java {
    toolchain {
        languageVersion = JavaLanguageVersion.of(21)
    }
    sourceCompatibility = JavaVersion.VERSION_21
    targetCompatibility = JavaVersion.VERSION_21
}

kotlin {
    jvmToolchain(21)
}

tasks.withType(org.jetbrains.kotlin.gradle.tasks.KotlinCompile) {
    kotlinOptions {
        jvmTarget = "21"
        freeCompilerArgs = ["-Xjsr305=strict"]
    }
}

tasks.named('test') {
    enabled = false
}

At this point, it is good practice to attempt to build the project. It should build comfortably with these new settings, though if not then now is the perfect time to deal with errors before we get into the codebase. To do this, either use the built-in Gradle panel on the sidebar and click through Tasks > build > build, or run this command in the terminal:

.\gradlew clean build

After a few seconds, you should be met with BUILD SUCCESSFUL.
The structure Our intention here is to build a REST API which will take in requests and then use EPP-RTK to beam off commands to the targeted EPP registry. I recommend the following steps for a solid project structure: Rename the main class to EPPAPI.kt (Spring auto generation did not do it justice). Split the code into two folders: epp and api , with our main class remaining at the root. Create a class inside the epp folder named EPP.kt - this is where we will connect to and manage the EPPClient soon. Create a class inside the api folder named API.kt - this is where we will configure and run the Spring API. Your file structure should now look like this: EPPAPI.kt api └── API.kt epp └── EPP.kt Before we can get to coding, there is one final step: adding environment variables. To connect to the targeted EPP server, we need four variables: host, port, username and password. These will be provided by your chosen registry. It is possible that, as in my case, the registry may also grant you access to an OTE (Operational Test & Evaluation) environment, which is essentially a 1:1 of the live EPP server that acts as a sandbox for registrars to test their systems without fear of affecting data on the live registry. I highly recommend hooking up to an OTE during testing if your registry has provided you with one to not incur unnecessary costs. Create a file in the root of your project called .env and populate with the following structure. I have prefilled with the host and port for the registry I am using to show the expected format: HOST=ote.channelisles.net PORT=700 USERNAME=X PASSWORD=X We will use these environment variables while running our project locally in VS Code and then prefill them into Docker when containerizing locally. For container apps, we will have to manually provide them when setting up the environment. The code Now comes the fun part. We have successfully set up our development environment and structured the project, so now let us populate it with some code. Given this project is in Kotlin, I will be writing solid syntax as illustrated in the Kotlin docs: https://kotlinlang.org/docs/home.html Firstly, let us tackle our EPP class. The goal with this class is to provide access to an EPPClient which we can use to connect to the EPP server and authenticate with our details. The class will extend the EPPClient provided by the EPP-RTK API and implement a singleton pattern through its companion object. The class uses the environment variables we set earlier for configuration. The create() function serves as a factory method, handling the process of establishing a secure SSL connection, logging in and initializing the client. It employs Kotlin's apply function for a concise and readable initialization block. The implementation also includes error handling and logging which will help us debug if anything goes wrong. The setupSSLContext() function configures a trust-all certificate strategy, which, while not recommended for production, is useful in development or specific controlled environments. This design will allow for easy extension through Kotlin's extension functions on the companion object. 
The code is as follows: import com.tucows.oxrs.epprtk.rtk.EPPClient import java.net.Socket import java.security.KeyStore import java.security.cert.X509Certificate import javax.net.ssl.KeyManagerFactory import javax.net.ssl.SSLContext import javax.net.ssl.TrustManager import javax.net.ssl.X509TrustManager class EPP private constructor( host: String, port: Int, username: String, password: String, ) : EPPClient(host, port, username, password) { companion object { private val HOST = System.getenv("HOST") private val PORT = System.getenv("PORT").toInt() private val USERNAME = System.getenv("USERNAME") private val PASSWORD = System.getenv("PASSWORD") lateinit var client: EPP fun create(): EPP { println("Creating client with HOST: $HOST, PORT: $PORT, USERNAME: $USERNAME") return EPP(HOST, PORT, USERNAME, PASSWORD).apply { try { println("Creating SSL socket...") val socket = createSSLSocket() println("SSL socket created. Setting socket to EPP server...") setSocketToEPPServer(socket) println("Socket set. Getting greeting...") val greeting = greeting println("Greeting received: $greeting") println("Connecting...") connect() println("Login successful.") client = this } catch (e: Exception) { println("Error during client creation: ${e.message}") e.printStackTrace() throw e } } } private fun createSSLSocket(): Socket { val sslContext = setupSSLContext() return sslContext.socketFactory.createSocket(HOST, PORT) as Socket } private fun setupSSLContext(): SSLContext { val trustAllCerts = arrayOf<TrustManager>(object : X509TrustManager { override fun getAcceptedIssuers(): Array<X509Certificate>? = null override fun checkClientTrusted(certs: Array<X509Certificate>, authType: String) {} override fun checkServerTrusted(certs: Array<X509Certificate>, authType: String) {} }) val keyStore = KeyStore.getInstance(KeyStore.getDefaultType()).apply { load(null, null) } val kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm()).apply { init(keyStore, "".toCharArray()) } return SSLContext.getInstance("TLS").apply { init(kmf.keyManagers, trustAllCerts, java.security.SecureRandom()) } } } } Now that this is configured, let us alter our main class to ensure that we connect and authenticate into the client when our project is run. I have removed the default generated Spring content as we will move this to the dedicated API.kt class shortly. The main class should now look like this: fun main() { EPP.create() } Now your application is able to connect and authenticate with an EPP server! However, that in itself is not very useful, so next we will focus on creating specific functions that will send off EPP messages to the target server and get a response. Before we continue, it is important to understand the three main objects in domain management: domains , contacts and hosts . Domains: These are the web addresses that users type into their browsers. In EPP, a domain object represents the registration of a domain name. Contacts: These are individuals or entities associated with a domain. There are typically four types of contact: Registrant, Admin, Tech & Billing. ICANN (Internet Corporation for Assigned Names and Numbers) mandates that every provisioned domain must have a valid contact attached to it. Hosts: Also known as nameservers, these are the servers that translate domain names into IP addresses. In EPP, host objects can either be internal (subordinate to a domain in the registry) or external. 
Understanding these concepts is crucial because EPP operations involve creating, modifying or querying these objects. For instance, when registering a domain, you need to specify contacts and hosts. With that knowledge, let us create three folders inside our epp folder, named domain , contact and host . And the first EPP command we will make is the simplest: a domain check. Because this relates to domain objects, create a class inside the domain folder named CheckDomain.kt . Your project structure should now look like this: EPPAPI.kt api └── API.kt epp ├── contact ├── domain │ └── CheckDomain.kt ├── host └── EPP.kt Let us go and write our first EPP operation: checking if a domain is available for registration. I am going to create a Kotlin extension function inside our CheckDomain.kt class called checkDomain which can be used on our EPP class. Here's the code: import com.tucows.oxrs.epprtk.rtk.example.DomainUtils.checkDomains import epp.EPP import com.tucows.oxrs.epprtk.rtk.xml.EPPDomainCheck import org.openrtk.idl.epprtk.domain.epp_DomainCheckReq import org.openrtk.idl.epprtk.domain.epp_DomainCheckRsp import org.openrtk.idl.epprtk.epp_Command fun EPP.Companion.checkDomain( domainName: String, ): Boolean { val check = EPPDomainCheck().apply { setRequestData( epp_DomainCheckReq( epp_Command(), arrayOf(domainName) ) ) } val response = processAction(check) as EPPDomainCheck val domainCheck = response.responseData as epp_DomainCheckRsp return domainCheck.results[0].avail } Here is the flow of the function: We create an EPPDomainCheck object, which represents an EPP domain check command. We set the request data using epp_DomainCheckReq . This takes an epp_command (a generic EPP command) and an array of domain names to check. In this case, we are only checking one domain. We process the action using our EPP client's processAction function, which sends the request to the EPP server. We cast the response to EPPDomainCheck and extract the responseData . Finally, we return whether the domain is available or not from the first (and only result) by checking the avail value. From an EPP perspective, this function is sending a domain check command to the EPP server. The server responds with information about whether the specified domain is available for registration. Remember, EPP is an XML-based protocol, meaning that the raw output for a check of, for example, example.gg , returns the following: org.openrtk.idl.epprtk.domain.epp_DomainCheckRsp: { m_rsp [ org.openrtk.idl.epprtk.epp_Response: { m_results [[ org.openrtk.idl.epprtk.epp_Result: { m_code [1000] m_values [null] m_ext_values [null] m_msg [Command completed successfully] m_lang [] } ]] m_message_queue [ org.openrtk.idl.epprtk.epp_MessageQueue: { m_count [4] m_queue_date [null] m_msg [null] m_id [916211] } ] m_extension_strings [null] m_trans_id [ org.openrtk.idl.epprtk.epp_TransID: { m_client_trid [null] m_server_trid [1728106430577] } ] } ] m_results [[ org.openrtk.idl.epprtk.epp_CheckResult: { m_value [example.gg] m_avail [false] m_reason [(00) The domain exists] m_lang [] } ]] } This is why we do the casting and filter through to the Boolean to provide back to the calling function. Otherwise, this would be a mess to deal with. It is important to do the validation and casting in this function so that we do not pass the heavy work back upstream. By implementing this as an extension function on our EPP class, we can call it super easily. 
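As a side note, the same request type accepts more than one name at a time, so a bulk variant can be layered on in exactly the same style. The sketch below is a hedged extension of my own (not part of the original project); it assumes epp_CheckResult exposes its value field the same way the single check exposes avail, and it returns a map of domain name to availability.

import epp.EPP
import com.tucows.oxrs.epprtk.rtk.xml.EPPDomainCheck
import org.openrtk.idl.epprtk.domain.epp_DomainCheckReq
import org.openrtk.idl.epprtk.domain.epp_DomainCheckRsp
import org.openrtk.idl.epprtk.epp_Command

// Hedged sketch: check several domains in one EPP round trip.
fun EPP.Companion.checkDomains(vararg domainNames: String): Map<String, Boolean> {
    val check = EPPDomainCheck().apply {
        setRequestData(
            epp_DomainCheckReq(
                epp_Command(),
                arrayOf(*domainNames)   // the request already takes an array of names
            )
        )
    }

    val response = client.processAction(check) as EPPDomainCheck
    val domainCheck = response.responseData as epp_DomainCheckRsp

    // Assumes each epp_CheckResult exposes value (the domain) and avail, as seen in the raw dump above.
    return domainCheck.results.associate { it.value to it.avail }
}

For the rest of this blog we will stick with the single-domain checkDomain function.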
Let us add it to our main class as a test:

fun main() {
    EPP.create()
    println(EPP.checkDomain("example.gg"))
}

As opposed to a long string of XML, our function has made it so that the console is either printing true or false, in this case false. This pattern of creating extension functions for various EPP operations allows us to build a clean, intuitive API for interacting with the EPP server, while keeping our core EPP class focused on connection and authentication.

Now that the basic check is done, let us look at what is required to provision a domain. Remember that domains, contacts and hosts can all be used with a number of operations, including creating, updating, deleting and querying. In order to register a domain, we will need to create a domain object, which first requires that a contact and host object be created.

Let us start with creating a contact. I have created a CreateContact.kt class under my /epp/contact folder. Here is how it looks:

import com.tucows.oxrs.epprtk.rtk.xml.EPPContactCreate
import epp.EPP
import org.openrtk.idl.epprtk.contact.*
import org.openrtk.idl.epprtk.epp_AuthInfo
import org.openrtk.idl.epprtk.epp_AuthInfoType
import org.openrtk.idl.epprtk.epp_Command

fun EPP.Companion.createContact(
    contactId: String,
    name: String,
    organization: String? = null,
    street: String,
    street2: String? = null,
    street3: String? = null,
    city: String,
    state: String? = null,
    zip: String? = null,
    country: String,
    phone: String,
    fax: String? = null,
    email: String
): Boolean {
    val create = EPPContactCreate().apply {
        setRequestData(
            epp_ContactCreateReq(
                epp_Command(),
                contactId,
                arrayOf(
                    epp_ContactNameAddress(
                        epp_ContactPostalInfoType.INT,
                        name,
                        organization,
                        epp_ContactAddress(street, street2, street3, city, state, zip, country)
                    )
                ),
                phone.let { epp_ContactPhone(null, it) },
                fax?.let { epp_ContactPhone(null, it) },
                email,
                epp_AuthInfo(epp_AuthInfoType.PW, null, "pass")
            )
        )
    }

    val response = client.processAction(create) as EPPContactCreate
    val contactCreate = response.responseData as epp_ContactCreateRsp

    return contactCreate.rsp.results[0].m_code.toInt() == 1000
}

In this command, we are using similar logic to domain checking, where we create an EPPContactCreate class which we populate from the data we took in from the constructor. Some of that data is optional, and I have given default null values to all that are optional according to the EPP specification. I am then checking for the m_code which is, for all intents and purposes, a code that indicates the result of the operation. According to the EPP specification, a result code of 1000 indicates a successful operation.

The last step before we can work on provisioning a domain is creating a host object. In EPP, host objects represent the nameservers that will be associated with our domain. Registries require these for two main reasons: to ensure newly registered domains are immediately operational in the DNS, and to create necessary glue records for internal nameservers (nameservers within the same TLD as the domain). Whether or not this is required depends on your chosen registry. With my case study as the Channel Isles, there is no requirement that a host object must be created on the system before the EPP can provision a domain for external nameservers. However, I will share the code in case your registry's requirements differ.
Following from our previous two commands, I created a CreateHost.kt class in my /epp/host folder with the following code:

import com.tucows.oxrs.epprtk.rtk.xml.EPPHostCreate
import epp.EPP
import org.openrtk.idl.epprtk.epp_Command
import org.openrtk.idl.epprtk.host.epp_HostAddress
import org.openrtk.idl.epprtk.host.epp_HostAddressType
import org.openrtk.idl.epprtk.host.epp_HostCreateReq
import org.openrtk.idl.epprtk.host.epp_HostCreateRsp

fun EPP.Companion.createHost(
    hostName: String,
    ipAddresses: Array<String>?
): Boolean {
    val create = EPPHostCreate().apply {
        setRequestData(
            epp_HostCreateReq(
                epp_Command(),
                hostName,
                ipAddresses?.map { epp_HostAddress(epp_HostAddressType.IPV4, it) }?.toTypedArray()
            )
        )
    }

    val response = client.processAction(create) as EPPHostCreate
    val hostCreate = response.responseData as epp_HostCreateRsp

    return hostCreate.rsp.results[0].code.toInt() == 1000
}

As before, this function creates the EPP host create request, processes the action, checks the result code and returns true if the code is 1000, and false otherwise. The parameters are particularly important here and can lead to confusion for those not too familiar with how DNS works. The hostName parameter is the fully qualified domain name (FQDN) of the host we are creating, for example ns1.example.com. The other parameter is an array of IP addresses associated with the host. This is more crucial for internal nameservers, and for external nameservers (probably your use case) this can often be left null.

Now that the one definite prerequisite, and the other potential one, to provisioning a domain are in our codebase, let us get to the star of the show. The following function is an EPP command that will provision a domain based on the objects we just created. I created the following function in a class called CreateDomain.kt in my /epp/domain folder:

import epp.EPP
import com.tucows.oxrs.epprtk.rtk.xml.EPPDomainCreate
import org.openrtk.idl.epprtk.domain.*
import org.openrtk.idl.epprtk.epp_AuthInfo
import org.openrtk.idl.epprtk.epp_AuthInfoType
import org.openrtk.idl.epprtk.epp_Command

fun EPP.Companion.createDomain(
    domainName: String,
    registrantId: String,
    adminContactId: String,
    techContactId: String,
    billingContactId: String,
    nameservers: Array<String>,
    password: String,
    period: Short = 1
): Boolean {
    val create = EPPDomainCreate().apply {
        setRequestData(
            epp_DomainCreateReq(
                epp_Command(),
                domainName,
                epp_DomainPeriod(epp_DomainPeriodUnitType.YEAR, period),
                nameservers,
                registrantId,
                arrayOf(
                    epp_DomainContact(epp_DomainContactType.ADMIN, adminContactId),
                    epp_DomainContact(epp_DomainContactType.TECH, techContactId),
                    epp_DomainContact(epp_DomainContactType.BILLING, billingContactId)
                ),
                epp_AuthInfo(epp_AuthInfoType.PW, null, password)
            )
        )
    }

    val response = client.processAction(create) as EPPDomainCreate
    val domainCreate = response.responseData as epp_DomainCreateRsp

    return domainCreate.rsp.results[0].code.toInt() == 1000
}

This createDomain function encapsulates the EPP command for provisioning a new domain. The function brings together all the pieces we have prepared: contacts, hosts and domain-specific information. As before, it constructs an EPP domain create request, associating the domain with its contacts and nameservers. It then processes this request and checks the result code to determine if the request was successful. By returning a Boolean, we can easily pass the response upstream and, if connected to a user interface such as a web application, can inform the end user.
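Returning a plain Boolean keeps the API simple, but if you later want to explain why an operation failed, it can help to translate the result code itself before collapsing it. The helper below is a hedged sketch of my own (not part of the EPP-RTK API); it is just a plain function you could call with the code you already extract, and the code-to-message pairs are the standard EPP result codes defined in RFC 5730.

// Hypothetical helper: maps common EPP result codes (RFC 5730) to readable text.
fun describeEppResult(code: Int): String = when (code) {
    1000 -> "Command completed successfully"
    1001 -> "Command completed successfully; action pending"
    2200 -> "Authentication error"
    2201 -> "Authorization error"
    2302 -> "Object exists"
    2303 -> "Object does not exist"
    2400 -> "Command failed"
    else -> "Unrecognised result code: $code"
}

A registrar-facing backend could log or surface this message alongside the Boolean, which makes failed registrations much easier to diagnose.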
With these functions in place, we now have the ability to provision a domain. I will run the following test in my main class:

import epp.EPP
import epp.contact.createContact
import epp.domain.createDomain

fun main() {
    EPP.create()

    val contactResponse = EPP.createContact(
        contactId = "12345",
        name = "Stephen",
        organization = "Test",
        street = "Test Street",
        street2 = "Test Street 2",
        street3 = "Test Street 3",
        city = "Test City",
        state = "Test State",
        zip = "Test Zip",
        country = "GB",
        phone = "1234567890",
        fax = "1234567890",
        email = "test@gg.com"
    )

    if (contactResponse) {
        println("Contact created")
    } else {
        println("Contact creation failed")
        return
    }

    val domainResponse = EPP.createDomain(
        domainName = "randomavailabletestdomain.gg",
        registrantId = "123",
        adminContactId = "123",
        techContactId = "123",
        billingContactId = "123",
        nameservers = arrayOf(
            "ernest.ns.cloudflare.com",
            "adaline.ns.cloudflare.com"
        ),
        password = "XYZXYZ",
        period = 1
    )

    if (domainResponse) {
        println("Domain created")
    } else {
        println("Domain creation failed")
    }
}

In this function, which runs when the application first starts, we are firstly creating a contact using our createContact extension function. I have passed through every single parameter, required or optional, to show how it would look. Then, once confirming the contact has been created, I am creating a domain with our createDomain extension function. I am giving it the required parameters, such as the domain name and the nameservers, but also providing the ID of the contact created just above in the four contact fields. It is required that the contact ID which is provided is a valid contact that has first been created in the system. Therefore, this combination of the functions we have made should provision a domain. After running it, the output in the console should be:

Contact created
Domain created

And for humour, here is the XML response from the EPP server before we did our own filtering in our extension functions:

org.openrtk.idl.epprtk.contact.epp_ContactCreateRsp: { m_rsp [ org.openrtk.idl.epprtk.epp_Response: { m_results [[ org.openrtk.idl.epprtk.epp_Result: { m_code [1000] m_values [null] m_ext_values [null] m_msg [Command completed successfully] m_lang [] } ]] m_message_queue [ org.openrtk.idl.epprtk.epp_MessageQueue: { m_count [4] m_queue_date [null] m_msg [null] m_id [916211] } ] m_extension_strings [null] m_trans_id [ org.openrtk.idl.epprtk.epp_TransID: { m_client_trid [null] m_server_trid [1728110331411] } ] } ] m_id [123456] m_creation_date [2024-10-05T06:38:51.408Z] }

org.openrtk.idl.epprtk.domain.epp_DomainCreateRsp: { m_rsp [ org.openrtk.idl.epprtk.epp_Response: { m_results [[ org.openrtk.idl.epprtk.epp_Result: { m_code [1000] m_values [null] m_ext_values [null] m_msg [Command completed successfully] m_lang [] } ]] m_message_queue [ org.openrtk.idl.epprtk.epp_MessageQueue: { m_count [4] m_queue_date [null] m_msg [null] m_id [916211] } ] m_extension_strings [null] m_trans_id [ org.openrtk.idl.epprtk.epp_TransID: { m_client_trid [null] m_server_trid [1728110331467] } ] } ] m_name [randomavailabletestdomain2.gg] m_creation_date [2024-10-05T06:38:51.464Z] m_expiration_date [2025-10-05T06:38:51.493Z] }

Both of those objects were created using our extension functions on top of the EPP-RTK which is in contact with my target EPP server. If your registry has a user interface, you should see that these objects have now been created and are usable going forward. For example, one contact can be used for multiple domains.
For my case study, you can see that both objects were successfully created on the Channel Isles side through our EPP communication: In simple terms, this means they have received the instruction and successfully provisioned our domain pointing at the nameservers we provided! This now means that the domain is in my (or my registrant's) possession and now I am able to control the website showing at that domain. What about all of the other EPP commands? After all, the EPP-RTK supports the following commands: Domain check Domain info Domain create Domain update Domain delete Domain transfer Contact check Contact info Contact create Contact update Contact delete Contact transfer Host check Host info Host create Host update Host delete We have made four of these in this blog: creating a host, creating a contact, creating a domain and checking a domain. The code for the rest of these commands follows the exact same pattern, and if at any point you get stuck I highly recommend the official documentation of the EPP-RTK API: https://epp-rtk.sourceforge.net/epp-rtk-java-0.4.1/java/doc/epp-rtk-user-guide.html This documentation is where I got all my information from for these commands and for this project as a whole. If you are looking at productionizing this project and intend to implement the remaining commands, you will find that the code is almost identical across the different commands with the only exception being the required parameters for each request. Now that we have our core EPP functionality implemented, it is time to expose these capabilities through a web API. This is where Spring comes into play. Spring will allow us to create a robust, scalable REST API that will serve as an interface between client interactions and our EPP operations. What we will do here is wrap our EPP functions within Spring controllers, meaning we can create endpoints that external applications can easily consume. This abstraction layer not only makes our EPP functionality more accessible but also allows us to add additional business logic, validation and error handling. Because we know that EPP can process commands related to three object types: hosts, contacts and domains, I am going to create three separate controllers. But let us also split that up from our API.kt by putting them in their own controller folder. I am going to name my controllers HostController.kt , ContactController.kt and DomainController.kt . At this point, the file structure should look like this: EPPAPI.kt api ├── controller │ └── ContactController.kt │ └── DomainController.kt │ └── HostController.kt └── API.kt epp ├── contact ├── domain │ └── CheckDomain.kt ├── host └── EPP.kt The job of controllers in Spring is to handle incoming HTTP requests, process them and return appropriate responses. In the context of our EPP API, controllers will act as the bridge between the client interface and our EPP functionality. Therefore, it makes logical sense to split up the three major sections into multiple classes so that the code does not become unmaintainable. The simplest example we could write to link our EPP and our Spring API is checking the availability of a domain. Thankfully, earlier we wrote the EPP implementation to this in our CheckDomain.kt class. Now let us make it so that a user can trigger it via an endpoint. Because it is domain related, I will add the new code into the DomainController.kt class. Firstly, with every controller class, it must be annotated with @RestController . 
And then a mapping is created as below:

import epp.EPP
import epp.domain.checkDomain
import org.springframework.http.ResponseEntity
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RequestParam
import org.springframework.web.bind.annotation.RestController

@RestController
class DomainController {

    @GetMapping("/domain-check")
    fun helloWorld(
        @RequestParam name: String
    ): ResponseEntity<Map<String, Any>> {
        val check = EPP.checkDomain(name)

        return ResponseEntity.ok(
            mapOf(
                "available" to check
            )
        )
    }
}

Let us break down the code and see what is happening:

@GetMapping("/domain-check"): This annotation maps HTTP GET requests to the /domain-check route. When a GET request is made to this URL, Spring will call this function to handle it.
fun helloWorld(@RequestParam name: String): This is the function that will handle the request. The @RequestParam annotation tells Spring to extract the name parameter from the query string of the URL. For example, a request to /domain-check?name=example.gg would set name to example.gg. This allows us to then process the EPP command with the requested domain name.
ResponseEntity<Map<String, Any>>: This is the return type of the function. ResponseEntity allows us to have full control over the HTTP response, including status code, headers and body.
val check = EPP.checkDomain(name): This line calls our EPP function to check if the domain is available (remember, it returns true if available and false if not).
return ResponseEntity.ok(mapOf("available" to check)): This creates a response with HTTP status 200 (OK) and a body containing the JSON object with a single key available whose value is the result of the domain check.

The mapping is crucial because it connects HTTP requests to our application logic. When a client makes a GET request to /domain-check with a domain name as a parameter, Spring routes that request to this method, which then uses our EPP implementation to check the domain's availability and returns the result. This setup allows external applications to easily check domain availability by making a simple HTTP GET request, without needing to know anything about the underlying EPP protocol or implementation. It is a great example of how we are using Spring to create a user-friendly API on top of our more complex EPP operations.

The same principle we have applied to the domain check operation can be extended to all other EPP commands we have created. For instance, creating a domain might use a POST request, updating domain information could use PUT, and deleting a domain would naturally fit with the DELETE HTTP method. For domain creation, we could use @PostMapping("/domain") and accept a request body with all necessary information. Domain updates could use @PutMapping("/domain/{domainName}"), where the domain name is part of the path and the updated information is in the request body. For domain deletion, @DeleteMapping("/domain/{domainName}") would be appropriate. Similar patterns can be applied to contact and host operations. By mapping our EPP commands to these standard HTTP methods, we create an intuitive API that follows RESTful conventions. Each of these endpoints would call the corresponding EPP function we have already implemented, process the result, and return an appropriate HTTP response. This approach provides a clean separation between the HTTP interface and the underlying EPP operations, making our system more modular and easier to maintain or extend in the future, as sketched below.
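To make those conventions concrete, here is a hedged sketch of what a domain-creation endpoint could look like. The DomainCreateRequest data class, its field names and the controller name are assumptions of mine for illustration (they are not part of the original project); the handler simply delegates to the createDomain extension function we wrote earlier. Note that deserialising a Kotlin data class from JSON typically relies on jackson-module-kotlin being on the classpath.

import epp.EPP
import epp.domain.createDomain
import org.springframework.http.HttpStatus
import org.springframework.http.ResponseEntity
import org.springframework.web.bind.annotation.PostMapping
import org.springframework.web.bind.annotation.RequestBody
import org.springframework.web.bind.annotation.RestController

// Hypothetical request body, for illustration only.
data class DomainCreateRequest(
    val domainName: String,
    val registrantId: String,
    val adminContactId: String,
    val techContactId: String,
    val billingContactId: String,
    val nameservers: Array<String>,
    val password: String,
    val period: Short = 1
)

@RestController
class DomainProvisioningController {

    @PostMapping("/domain")
    fun create(@RequestBody request: DomainCreateRequest): ResponseEntity<Map<String, Any>> {
        // Delegate to the EPP extension function from CreateDomain.kt.
        val created = EPP.createDomain(
            domainName = request.domainName,
            registrantId = request.registrantId,
            adminContactId = request.adminContactId,
            techContactId = request.techContactId,
            billingContactId = request.billingContactId,
            nameservers = request.nameservers,
            password = request.password,
            period = request.period
        )

        val status = if (created) HttpStatus.CREATED else HttpStatus.BAD_REQUEST
        return ResponseEntity.status(status).body(mapOf("created" to created))
    }
}

These mappings could just as easily live inside the existing DomainController; I have split them out here only to keep the sketch self-contained.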
The very last step before we can finally run this project is to actually initialise the Spring side of the project like we did for the EPP side. Inside my empty API.kt class, I am going to put the following:

import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication

@SpringBootApplication
class API {
    companion object {
        fun start() {
            runApplication<API>()
        }
    }
}

This code follows the Spring requirements to register our controllers. Our API.kt class serves as the entry point for the Spring application. Inside this class, we have defined a companion object with a start() function. This function calls runApplication<API>() to bootstrap the application, which is a Kotlin-specific way to launch a Spring application.

Behind the scenes, Spring's recognition of controllers happens automatically through a process called component scanning. When the application starts, because we have registered it here, Spring examines the codebase, starting from the package containing the main class and searching through all subpackages. It looks for classes annotated with specific markers, such as the @RestController that we put at the top of our controllers. Spring then inspects these classes, looking for any functions that may be annotated as mappings (e.g. @GetMapping like above), and then uses that information to build a map of URL paths to controller functions. This means that when a request comes in, Spring knows exactly which function in which class should process the request.

It would be fair to say that Spring has an unconventional approach to application structure and dependency management. Spring embraces the philosophy of "convention over configuration" and heavily leverages annotations. However, this has helped us to significantly reduce boilerplate code, making it cleaner and more maintainable for future travelers.

Now that the entry point to our API is ready, all we need to do is call that start() function we just created in our EPPAPI.kt:

import api.API
import epp.EPP

fun main() {
    EPP.create()
    API.start()
}

And that is a wrap for the code. Let us go ahead and run our project. The console output should look something like this:

Creating client with HOST: ote.channelisles.net, PORT: 700, USERNAME: [Redacted]
Creating SSL socket...
SSL socket created. Setting socket to EPP server...
Socket set. Getting greeting...
Greeting received: org.openrtk.idl.epprtk.epp_Greeting: { m_server_id [OTE] m_server_date [2024-10-06T05:47:08.628Z] m_svc_menu [ org.openrtk.idl.epprtk.epp_ServiceMenu: { m_versions [[1.0]] m_langs [[en]] m_services [ [ urn:ietf:params:xml:ns:contact-1.0, urn:ietf:params:xml:ns:domain-1.0, urn:ietf:params:xml:ns:host-1.0 ] ] m_extensions [ [ urn:ietf:params:xml:ns:rgp-1.0, urn:ietf:params:xml:ns:auxcontact-0.1, urn:ietf:params:xml:ns:secDNS-1.1, urn:ietf:params:xml:ns:epp:fee-1.0 ] ] } ] m_dcp [ org.openrtk.idl.epprtk.epp_DataCollectionPolicy: { m_access [all] m_statements [ [ org.openrtk.idl.epprtk.epp_dcpStatement: { m_purposes [[admin, prov]] m_recipients [ [ org.openrtk.idl.epprtk.epp_dcpRecipient: { m_type [ours] m_rec_desc [null] }, org.openrtk.idl.epprtk.epp_dcpRecipient: { m_type [public] m_rec_desc [null] } ] ] m_retention [stated] } ] ] m_expiry [null] } ] }
Connecting...
Connected. Logging in...
Login successful.
____ _ __ _ _ /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ \\/ ___)| |_)| | | | | || (_| | ) ) ) ) ' |____| .__|_| |_|_| |_\__, | / / / / =========|_|==============|___/=/_/_/_/ :: Spring Boot :: (v2.7.18) 2024-10-06 06:47:09.531 INFO 43872 --- [ main] com.stephen.eppapi.EPPAPIKt : Starting EPPAPIKt using Java 1.8.0_382 on STEPHEN with PID 43872 (D:\IntelliJ Projects\epp-api\build\classes\kotlin\main started by [Redacted] in D:\IntelliJ Projects\epp-api) 2024-10-06 06:47:09.534 INFO 43872 --- [ main] com.stephen.eppapi.EPPAPIKt : No active profile set, falling back to 1 default profile: "default" 2024-10-06 06:47:10.403 INFO 43872 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http) 2024-10-06 06:47:10.414 INFO 43872 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat] 2024-10-06 06:47:10.414 INFO 43872 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.83] 2024-10-06 06:47:10.511 INFO 43872 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext 2024-10-06 06:47:10.511 INFO 43872 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 928 ms 2024-10-06 06:47:11.220 INFO 43872 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path '' 2024-10-06 06:47:11.229 INFO 43872 --- [ main] com.stephen.eppapi.EPPAPIKt : Started EPPAPIKt in 2.087 seconds (JVM running for 3.574) It is clear to see that this startup console output is split into two halves. Firstly, the output from our debugging messages when creating and authenticating into the EPPClient . Then the native Spring output which shows that the local server has been started on port 8080 . Now for the exciting part. Heading to localhost:8080 in the browser should resolve, but throw a fallback error page, because we have not set anything to show at that route. We have, however, created a GET route at /domain-check . If you head to just /domain-check you will be met with a 400 (BAD REQUEST) error. This is because you will need to specify the name parameter as enforced in our function. So, let us try this out with a couple domains... /domain-check?name=test.gg - {"available":false} /domain-check?name=thisshouldprobablybeavailable.gg - {"available":true} And that is it! At first it may not seem like a huge technical feat, but one should remember that is sending off a request to our Spring API which then routes it to a specific function, this then runs the code we wrapped over an EPP command which is sent off to the targeted EPP server who processes the domain check and sends the response back upstream to the user. There is a huge amount happening behind the scenes to power this simple domain check. What we have demonstrated here with the domain check functionality is just the tip of the iceberg. We could expand our API to include endpoints for various domain-related operations. For instance, domain registration could be handled by a POST request to /domain , taking contact details, nameservers, and other required information in the request body. Domain information retrieval could be a GET request to /domain/{domainName} , fetching comprehensive information about a specific domain. Updates to domain information, such as changing contacts or nameservers, could be managed through a PUT request to /domain/{domainName} . 
The domain deletion process could be initiated with a DELETE request to /domain/{domainName} . Domain transfer operations, including initiating, approving, or rejecting transfers, could also be incorporated into our API. Each of these operations would follow the same pattern we have established: a Spring controller method that takes in the necessary parameters, calls the appropriate EPP function, and returns the result in a user-friendly format. By expanding our API in this way, we are creating a comprehensive abstraction layer over EPP. This approach simplifies complex EPP operations, making them accessible to developers who may not be familiar with the intricacies of the protocol. It presents a consistent, RESTful interface for all domain-related operations, following web development best practices. Our EPP API can be easily consumed by various client applications, from web frontends to mobile apps or other backend services. Deploying to Azure Container Apps Now that we have our EPP API functioning locally, it is time to think about productionizing our application. Our goal is to run the API as an Azure Container App (ACA), which is a fully managed environment perfect for easy deployment and scaling of our Spring application. However, before deploying to ACA, we will need to containerise our application. This is where Azure Container Registry (ACR) comes into play. ACR will serve as the private Docker registry to store and manage our container images. It provides a centralised repository for our Docker images and integrates seamlessly with ACA, streamlining our CI/CD pipeline. Firstly, let us create a Dockerfile . This step is required to run both locally and in Azure Container Registry. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. It serves as a blueprint for building a Docker container. In our case, our Dockerfile will set up the environment and instructions needed to containerise our Spring application. Create a file named Dockerfile in the root of your project with the following content: # Use OpenJDK 21 as the base image (use your JDK version) FROM openjdk:21-jdk-alpine # Set the working directory in the container WORKDIR /app # Copy the JAR file into the container COPY build/libs/*.jar app.jar # Expose the port your application runs on EXPOSE 8080 # Command to run the application CMD ["java", "-jar", "app.jar"] I have added comments alongside each instruction to explain the flow. This Dockerfile encapsulates our application and its runtime environment, ensuring consistency across different deployment environments. It is a crucial step in our journey from local development to cloud deployment, providing a standardised way to package and run our EPP API. However, before we push to the cloud, it is prudent to test it locally in a Docker container. This approach allows us to catch any containerization-related issues early and save time in the long run. We can verify that all components work correctly in a containerised environment, such as environment variable configurations and network settings. This step will help ensure a smooth transition to ACA, as the local Docker environment closely mimics the container runtime in Azure. Once we are confident that our application runs flawlessly in a local Docker container, we can push the image to ACR and deploy it to ACA, knowing we have minimised the risk of environment-specific issues. 
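One small, optional addition that can make this local verification (and, later, container health probes) easier is a trivial status endpoint that never touches the registry. This is an assumption on my part rather than something the original project includes; a minimal sketch could look like this:

import org.springframework.http.ResponseEntity
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RestController

@RestController
class StatusController {

    // Responds without calling the EPP server, so it is safe to hit from health checks.
    @GetMapping("/status")
    fun status(): ResponseEntity<Map<String, String>> =
        ResponseEntity.ok(mapOf("status" to "UP"))
}

With something like this in place, you can confirm the container itself is serving traffic before debugging anything EPP-related.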
This local testing can be done in a simple three-step process with the following Gradle & Docker CLI commands:

./gradlew build - build our application and package it into a JAR file found under /build/libs/X.jar.
docker build -t epp-api . - tells Docker to create an image named epp-api based on the instructions in our Dockerfile.
docker run -p 8080:8080 --env-file .env epp-api - start a container from the image, mapping port 8080 of the container to port 8080 on the host machine. We use this port because this is the default port on which Spring exposes endpoints. The -p flag ensures that the application can be accessed through localhost:8080 on your machine. We also specify the .env file we created earlier so that Docker is aware of our EPP login details.

If all went well, you should have the exact same console output as above. The key difference is the environment in which our application is running. Previously, we were executing our Spring application directly within our development environment. Now, however, our application is running inside a Docker container. This containerised environment is isolated from our host system, with its own file system, networking, and process space. It is a self-contained unit that includes not just our application, but also its runtime dependencies like the Java Development Kit.

Now that we have proven our project is ready to run in a containerised environment, let us start the cloud deployment process. This process involves two main steps: pushing our Docker image to Azure Container Registry and then deploying it to Azure Container Apps. I will be using the Azure CLI as outlined in the prerequisites. Everything I am doing can be done through the portal, but the CLI drastically reduces development time. Run the following commands in this order:

az login - if not already authenticated, be sure to log in through the CLI.
az group create --name registrar --location uksouth - create a resource group if you have not already. I have named mine registrar and chosen the location as uksouth because that is closest to me.
az acr create --resource-group registrar --name registrarcontainers --sku Basic - create an Azure Container Registry resource within our registrar resource group, with the name of registrarcontainers (note that this has to be globally unique) and SKU Basic.
az acr login --name registrarcontainers - log in to the Azure Container Registry.
docker tag epp-api registrarcontainers.azurecr.io/epp-api:v1 - tag the local Docker image with the ACR login server name.
docker push registrarcontainers.azurecr.io/epp-api:v1 - push the image to the container registry!

If all went well, you should be met with a console output like this:

The push refers to repository [registrarcontainers.azurecr.io/epp-api] 2111bc72193f6: Pushed 1b04c1e1a1955: Pushed ceaf9e13ebef5: Pushed 9b9b7f34d536a0: Pushed f1b59233fe4b5: Pushed v1: digest: sha256:07eba5b2awd5021awd365be927e3awdb2e9db1eb75c072d4bd75d6 size: 1365

That is the first part done. Now that our image is in ACR, we can deploy it to Azure Container Apps. This step is where we truly leverage the power of Azure's managed container services. To deploy our EPP API to ACA, I will continue to use the Azure CLI, though some may find it more comfortable to use the portal for this section as a lot of configuration is required.
Run the following commands in this order:

az containerapp env create --resource-group registrar --name containers --location uksouth - create the Container App environment within our resource group with name containers and location uksouth.
az acr update -n registrarcontainers --admin-enabled true - ensure ACR allows admin access.

az containerapp create \
  --name epp-api \
  --resource-group registrar \
  --environment containers \
  --image registrarcontainers.azurecr.io/epp-api:v1 \
  --target-port 8080 \
  --ingress external \
  --registry-server registrarcontainers.azurecr.io \
  --env-vars \
    "HOST=your_host" \
    "PORT=your_port" \
    "USERNAME=your_username" \
    "PASSWORD=your_password"

This creates a new Container App named epp-api within our resource group and the containers environment. It uses the Docker image stored in the ACR. The application inside the container is configured to listen on port 8080, which is where our Spring endpoints will be accessible. The --ingress external flag makes it accessible from the internet. You must also set your environment variables or the app will crash.

After running this command to create the Azure Container App, you should be met with a long JSON output to confirm the action. Then it should provide the URL to access the app. It should look like:

Container app created. Access your app at https://epp-api.purpledune-772f2e5a.uksouth.azurecontainerapps.io/

Which now means... if we head to that URL, and append /domain-check?name=test.gg as we did when locally testing, we are met with:

{"available":false}

That concludes the deployment process. This means our API is now accessible via the internet!

Setting up GitHub CI/CD

Now that we have our EPP API successfully running in our Azure Container Apps, the next step is to streamline our development and deployment process. This is where CI/CD comes into play: CI/CD, which stands for Continuous Integration and Continuous Deployment, is a set of practices that automate the process of building, testing and deploying our application. In simple terms: we are going to make it so that when we push code changes to our GitHub repository our container gets automatically updated and redeployed. This saves time and allows us to deliver updates and new features to our users more rapidly and reliably.

We will walk through the process of setting up the CI/CD pipeline using GitHub Actions. But first, let us set up our Git repository and send off an initial commit to GitHub. Firstly, head to GitHub and create a repository. You can create it in your personal account or an organization. I have named mine epp-api. Be sure to copy/paste or remember the URL for this repository as we will need it to link Git in a moment. Now that you have an empty cloud repository, open the terminal in your workspace and run the following commands:

git init - Initialise a new Git repository in your current directory. This creates a hidden .git directory that stores the repository's metadata.
git add . - Stages all of the files in the current directory and its subdirectories for commit. This means that these files will be included in the next commit.
git commit -m "Initial commit" - Creates a new commit with the staged files and a common initial commit message.
git remote add origin <URL> - Adds a remote repository named origin to your local repository, connecting it to our remote Git repository hosted on GitHub.
git push origin master - Uploads the local repository's content to the remote repository named origin, specifically to the master branch.
If you refresh your repository on GitHub, you should see the commit! Now that your code is available outside of your local workspace, let us ask Azure to create the deployment workflow. On the Azure Portal, follow this trail:

Head to your Container App
On the sidebar, hit Settings
Hit Deployment

You should find yourself in the Continuous deployment section. There are two headings, let us start with GitHub settings:

Authenticate into GitHub and provide permissions to the repository (if published to a GH organization, give permissions there also)
Select the organization, or your GitHub name if published on a personal account
Select the repository you just created (for me, epp-api)
Select the main branch (likely either master or main)

Then, under Registry settings:

Ensure Azure Container Registry is selected for Repository source
Select the Container Registry you created earlier (for me, registrarcontainers)
Select the image you created earlier (for me, epp-api)

It should look something like this:

Once these settings have been configured, press Start continuous deployment. If all went to plan, Azure will have created a workflow file in your repository under .github/workflows with the commit message Create an auto-deploy file. Based on the content of the workflow, we can see that the trigger is on push to master. This means that, moving forward, every change you commit and push to this repository will trigger this workflow, which will in turn trigger a build and push the new container image to the registry.

However, it is likely that on the first build it will fail. This is because we need to make a couple of modifications to this workflow file before it will work with our technology stack. You will need to add these changes manually, so head into the workflow and begin editing as with any other file (either through GitHub or VSC - do not forget to push if VSC!). Then, add the following steps after Checkout to the branch and before the Azure Login step:

- name: Grant execute permission for gradlew
  run: chmod +x gradlew

- name: Set up JDK 21
  uses: actions/setup-java@v2
  with:
    java-version: '21'
    distribution: 'adopt'

- name: Build with Gradle
  run: ./gradlew build

We added three steps:

Grant execute permission for gradlew - gradlew is a wrapper script that helps manage Gradle installations. This step grants execute permission to the gradlew file, which allows this build process to execute Gradle commands, needed for the next steps.
Set up JDK - This sets up the JDK as the Java environment for the build process. Make sure this matches the Java version you have chosen to use for this tutorial.
Build with Gradle - This executes the Gradle build process, which will compile our code and package it into a JAR file, which will then be used by the last step to push to the Container Registry.
The final workflow file should look like this:

name: Trigger auto deployment

# When this action will be executed
on:
  # Automatically trigger it when detected changes in repo
  push:
    branches:
      - master
    paths:
      - '**'
      - '.github/workflows/AutoDeployTrigger-aec369b2-f21b-47f6-8915-0d087617a092.yml'

  # Allow manual trigger
  workflow_dispatch:

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write # This is required for requesting the OIDC JWT Token
      contents: read # Required when GH token is used to authenticate with private repo

    steps:
      - name: Checkout to the branch
        uses: actions/checkout@v2

      - name: Grant execute permission for gradlew
        run: chmod +x gradlew

      - name: Set up JDK 21
        uses: actions/setup-java@v2
        with:
          java-version: '21'
          distribution: 'adopt'

      - name: Build with Gradle
        run: ./gradlew build

      - name: Azure Login
        uses: azure/login@v1
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

      - name: Build and push container image to registry
        uses: azure/container-apps-deploy-action@v2
        with:
          appSourcePath: ${{ github.workspace }}
          dockerfilePathKey: dockerfilePath
          registryUrl: registrarcontainers.azurecr.io
          registryUsername: ${{ secrets.REGISTRY_USERNAME }}
          registryPassword: ${{ secrets.REGISTRY_PASSWORD }}
          containerAppName: epp-api
          resourceGroup: registrar
          imageToBuild: registrarcontainers.azurecr.io/epp-api:${{ github.sha }}
          buildArgumentsKey: buildArgumentsValues

Once you have pushed your workflow changes, that action itself will trigger the new workflow and hopefully you should be met with a green circle on GitHub at the top of your repository to signify the build was a success. Do not forget that at any point you can click the Actions tab and see the result of all builds, and if any build fails you can explore in detail on which step the error occurred.

Summary

That is it! You have successfully built a robust EPP API using Kotlin and Spring Boot and now containerised it with Docker and deployed it to Azure Container Apps. This journey took us from understanding the intricacies of EPP and domain registration, through implementing core EPP operations, to creating a user-friendly RESTful API. We then containerised our application, ensuring consistency across different environments. Finally, we leveraged Azure's powerful cloud services - Azure Container Registry for storing our Docker image, and Azure Container Apps for deploying and running our application in a scalable, managed environment.

The result is a fully functional, cloud-hosted API that can handle domain checks, registrations and other EPP operations. This accomplishment not only showcases the technical implementation but also opens up possibilities for creating sophisticated domain management tools and services, such as by starting a public registrar or managing a domain portfolio internally. I hope this blog was useful, and I am happy to answer any questions in the replies. Well done on bringing this complex system to life!

Simplify Full-stack Java Development with JHipster Online, Terraform and Bicep
In the previous blog: Build and deploy full-stack Java Web Applications on Azure Container Apps with JHipster, we explored the fundamental features of JHipster Azure Container Apps. Specifically, we demonstrated how to create and deploy a project to Azure Container Apps in just a few steps. In this blog, we will introduce some new features in JHipster Azure Container Apps, which make project creation even simpler and deployment more seamless.

JHipster Online: Quick Prototyping Made Easy

JHipster Online is a quick prototyping website that allows you to generate a full-stack Spring Boot project without requiring any installation! You can start building your Azure project by clicking the Create Azure Application button.

🌟Generate the project

Simply answer a few guided questions, and JHipster Online will generate a project ready for building and deployment. In the final step of the questionnaire, you can choose to generate either a Terraform or Bicep file for deployment.

If you prefer using the CLI version, install it with the following command:

npm install -g generator-jhipster-azure-container-apps

You can then create the project with:

jhipster-azure-container-apps

🚀Deploy the project

💚Terraform

Terraform is an infrastructure-as-code (IaC) tool that allows you to build, modify, and version cloud and on-premises resources securely and efficiently. It supports a wide range of popular cloud providers, including AWS, Azure, Google Cloud Platform (GCP), Oracle Cloud Infrastructure (OCI), and Docker.

To deploy using Terraform, ensure that Terraform is selected during the project generation step. Additionally, you must have Terraform installed and properly configured. After generating the project, navigate to the Terraform folder:

cd terraform

Initialize Terraform by running the following command:

terraform init

Once finished, provision the necessary resources on Azure with:

terraform apply -auto-approve

Now you can deploy the project with:

Linux/macOS: ./deploy.sh
You can run the deployment script by adding the options subId, region and resourceGroupName.
Windows: .\deploy.ps1
You will be prompted to provide subId, region, and resourceGroupName.

❤️Bicep

Bicep is a domain-specific language that uses declarative syntax to deploy Azure resources. In order to deploy with Bicep, make sure you select Bicep in the project generation step. You may also need to have the Azure CLI installed and configured. Once the project has been created, change into the Bicep folder:

cd bicep

Set up Bicep with:

az deployment sub create -f ./main.bicep --location=eastus2 --name jhipster-aca --only-show-errors

Here you can replace the location and the name parameters with your own choices. Now you can deploy the project with:

Linux/macOS: ./deploy.sh
You can run the deployment script by adding the options subId, region and resourceGroupName.
Windows: .\deploy.ps1
You will be prompted to provide subId, region, and resourceGroupName.

💛 Deploy from Source Code, Artifact and more

In addition to the options mentioned, Azure Container Apps provides a wide range of deployment methods designed to suit diverse project needs. Whether you prefer deploying directly from source code, pre-built artifacts, or container images, Azure Container Apps streamlines the entire process with its robust built-in Java support. This enables developers to focus on innovation rather than infrastructure management.
💛 Deploy from Source Code, Artifact and more

In addition to the options mentioned, Azure Container Apps provides a wide range of deployment methods designed to suit diverse project needs. Whether you prefer deploying directly from source code, pre-built artifacts, or container images, Azure Container Apps streamlines the entire process with its robust built-in Java support. This enables developers to focus on innovation rather than infrastructure management. From integrating with popular CI/CD pipelines to leveraging advanced deployment integrations like GitHub Actions, Azure Container Apps offers the flexibility to match your workflow. Discover how to effortlessly deploy and scale your project by visiting: Launch your first Java application in Azure Container Apps.
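As one concrete example of the source-based path described above, the Azure CLI can build and deploy a Spring Boot project straight from its source directory. This is a minimal sketch rather than the only way to do it; the app name, resource group, region, and port shown here are hypothetical placeholders.

# Build from the current directory (Dockerfile or Buildpacks) and deploy in one step
az containerapp up \
  --name my-spring-app \
  --resource-group my-rg \
  --location eastus2 \
  --source . \
  --ingress external \
  --target-port 8080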
Get ready for the ultimate Spring conference in Barcelona from May 30-31! Connect with over 1200 attendees, enjoy 70 expert speakers, and engage in 60 talks and 7 hands-on workshops. Microsoft Azure (as a Gold sponsor) and VMware Tanzu (as a Platinum sponsor) bring you in-depth sessions on Spring and AI development, a dynamic full-day workshop, and an interactive booth experience. Learn, network, and enhance your Java skills with the latest tools and frameworks. Don't miss out on this exciting event!

Azure Spring Apps Enterprise – More Power, Scalability & Extended Spring Boot Support
Today, we are delighted to announce significant enhancements to Azure Spring Apps Enterprise. These improvements bolster security, accelerate development, amplify scalability, and provide greater flexibility and reliability. We are excited to share these developments with you and look forward to seeing how they enhance your experience.

Navigating Common VNET Injection Challenges with Azure Spring Apps
This article outlines the typical challenges that customers may come across while operating an Azure Spring Apps service instance on the Standard or Enterprise plan within a virtual network. Deploying Azure Spring Apps within a customized virtual network introduces a multitude of components into the system. This article will guide you in setting up network components such as Network Security Groups (NSGs), route tables, and custom DNS servers correctly to make sure the service functions as expected. In the following sections, we've compiled a list of common issues our customers encounter and offer recommendations for effectively resolving them.

- VNET Prerequisites not met Issues
- Custom Policy issues
- Custom DNS Server Resolution Issues
- Outbound connection issues
- Route Table Issues
- User Defined Route Issues

VNET Prerequisites not met Issues

Symptoms:
- Failed to create Azure Spring Apps service.

Error Messages:
- "InvalidVNetPermission" error messages reporting "Invalid Subnet xxx. This may be a customer error if 'Grant service permission to virtual network' step is missing before create Azure Spring Apps in vnets."

Common Causes of the issue:
- The Azure Spring Apps platform needs to execute some management operations (e.g., create route tables, add rules into existing route tables) in the customer-provided VNET. Without the above permissions, the platform operations will be blocked.

How to fix:
Grant the Owner permission on your virtual network to "Azure Spring Cloud Resource Provider" (id: e8de9221-a19c-4c81-b814-fd37c6caf9d2). Or you can grant it the User Access Administrator and Network Contributor roles if you can't grant Owner permission.

az role assignment create \
  --role "Owner" \
  --scope ${VIRTUAL_NETWORK_RESOURCE_ID} \
  --assignee e8de9221-a19c-4c81-b814-fd37c6caf9d2

If you associated your own route tables with the given subnets, also make sure to grant Owner or User Access Administrator & Network Contributor permission to "Azure Spring Cloud Resource Provider" on your route tables.

az role assignment create \
  --role "Owner" \
  --scope ${APP_ROUTE_TABLE_RESOURCE_ID} \
  --assignee e8de9221-a19c-4c81-b814-fd37c6caf9d2

Reference docs:
- Virtual network requirements
- Grant service permission to the virtual network

Custom Policy issues

Symptoms:
- Failed to create Azure Spring Apps service.
- Failed to start/stop Azure Spring Apps service.
- Failed to delete Azure Spring Apps service.

Error Messages:
- Resource request failed due to RequestDisallowedByPolicy. Id: /providers/Microsoft.Management/managementGroups/<group-name>/providers/Microsoft.Authorization/policyAssignments/<policy-name>

Common Causes of the issue:
- One of the most common policies we see blocking platform operations is "Deny Public IP Address creation". The Azure Spring Apps platform needs to create a public IP on the load balancer to communicate with the targets mentioned in Customer responsibilities running Azure Spring Apps in a virtual network for management and operational purposes.

How to fix:
Find the policy id provided in the error message, then check whether the policy can be deleted or modified to avoid the issue. For the Deny Public IP Address policy, if your company policy does not allow that, you can also choose to use Customize Azure Spring Apps egress with a user-defined route.
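With the permissions granted and any blocking policies addressed, the instance can then be created into the two dedicated subnets. The following is a minimal sketch using the Azure CLI spring extension; the instance, resource group, virtual network, and subnet names are hypothetical placeholders, and the region and SKU are only examples.

# Create an Azure Spring Apps instance injected into an existing virtual network
az spring create \
  --name my-asa-instance \
  --resource-group my-rg \
  --location eastus \
  --sku Standard \
  --vnet my-vnet \
  --service-runtime-subnet service-runtime-subnet \
  --app-subnet apps-subnet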
Custom DNS Server Resolution Issues

Symptoms:
- Failed to create Azure Spring Apps service.
- Failed to start Azure Spring Apps service.
- App cannot register with the Eureka server.
- App cannot resolve, or resolves the wrong IP address for, a target URL.

Error Messages:
- Could not resolve private dns zone.
- java.net.UnknownHostException
- If the DNS server resolves the wrong IP address for a host, we may also see connection failures in the log, such as java.net.SocketTimeoutException.

Common Causes of the issue:
- The custom DNS server is not correctly configured to forward DNS requests to an upstream public DNS server. The Troubleshooting Azure Spring Apps in virtual network doc mentions that if your virtual network is configured with custom DNS settings, be sure to add the Azure DNS IP 168.63.129.16 as the upstream DNS server in the custom DNS server. But sometimes this is misunderstood as adding both the custom DNS server IP and 168.63.129.16 to the VNET DNS servers setting, which introduces unpredictable name-resolution behavior.
- Some companies' policies do not allow forwarding DNS requests to the Azure DNS server. Customers have the flexibility to direct their DNS requests to any upstream DNS server of their choice, provided that the chosen server can resolve the public targets outlined in Customer responsibilities running Azure Spring Apps in a virtual network. In this scenario, you also need to add an additional DNS A record for *.svc.private.azuremicroservices.io (the Service Runtime cluster ingress domain) pointing to the IP of your application (see Find the IP for your application). While the Azure Spring Apps platform does configure this record, it is only recognizable by Azure DNS servers; if an alternative DNS server is in use, this record needs to be added manually to your own DNS server. Since the Spring Boot Config Server and Eureka server are hosted in the Service Runtime cluster, if your DNS server setting cannot resolve the *.svc.private.azuremicroservices.io domain, your app may fail to load configuration and to register itself with the Eureka server.
- After modifying your DNS configuration file or adjusting the VNET DNS server settings, note that these changes are not instantly propagated across all the cluster nodes that host your application. To apply the modifications, you need to stop and then start the Azure Spring Apps instance, which forces a reboot of the underlying cluster nodes. This is by-design behavior: a cluster node virtual machine only loads the DNS settings from the VNET once, when it is created or restarted, or when the network daemon is restarted manually.
- The network connection to the custom DNS server or the Azure DNS server is blocked by an NSG rule or firewall.

How to investigate:
- Use Diagnose and Solve problems -> DNS Resolution detector.
- Use the App Connect console to test the DNS resolution result. We can connect to the app container and directly run the nslookup command to test how a host name resolves. Please refer to Investigate Azure Spring Apps Networking Issue with the new Connect feature - Microsoft Community Hub.
- If the wrong DNS server setting blocked Azure Spring Apps service creation, then since no resource has been created yet, the above two troubleshooting methods won't be available. You will need to create a jump-box (Windows or Linux VM) in the same VNET subnet as your Azure Spring Apps instance, then use the nslookup or dig command to verify the DNS settings. Make sure all the targets mentioned in Customer responsibilities running Azure Spring Apps in a virtual network can be resolved.

nslookup mcr.microsoft.com
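From the jump-box (or the App Connect console), a short loop makes it easier to check several endpoints in one go instead of testing them one at a time. This is a minimal sketch for a Linux shell with nslookup available; the host list below is only a subset of the targets in the responsibilities doc, and the private record name is a placeholder you would replace with your instance's actual ingress FQDN.

# Resolve a few of the public endpoints Azure Spring Apps depends on
for host in mcr.microsoft.com packages.microsoft.com login.microsoftonline.com global.prod.microsoftmetrics.com; do
  echo "== ${host} =="
  nslookup "${host}" || echo "FAILED to resolve ${host}"
done

# Also confirm the private Service Runtime ingress record resolves from inside the VNET
nslookup <your-instance>.svc.private.azuremicroservices.io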
Outbound connection issues

Symptoms:
- Failed to create Azure Spring Apps service.
- Failed to start Azure Spring Apps service.
- App deployment failed to start.
- App deployment cannot mount the Azure file share.
- App deployment cannot send metrics/logs to Azure Monitor or Application Insights.

Error Messages:
- "code": "InvalidVNetConfiguration", "message": "Invalid VNet configuration: Required traffic is not whitelisted, refer to https://docs.microsoft.com/en-us/azure/spring-cloud/vnet-customer-responsibilities"

Common Causes of the issue:
- If traffic targeting *.azmk8s.io is blocked, the Azure Spring Apps service loses the connection to the underlying Kubernetes cluster for sending management requests. This causes service creation and start to fail.
- If traffic targeting mcr.microsoft.com, *.data.mcr.microsoft.com, or packages.microsoft.com is blocked, the platform cannot pull the images and packages needed to deploy the cluster nodes. This causes service creation and start to fail.
- If traffic targeting *.core.windows.net:443 and *.core.windows.net:445 is blocked, the platform cannot use SMB to mount the remote storage file share. This causes app deployments to fail to start.
- If traffic targeting login.microsoftonline.com is blocked, platform authentication features like managed identity (MSI) are blocked. This impacts service start and the app's use of MSI.
- If traffic targeting global.prod.microsoftmetrics.com and *.livediagnostics.monitor.azure.com is blocked, the platform cannot send data to Azure metrics, so you will lose app metric data.

How to investigate:
- Examine the logs of your NSG and firewall to verify whether any traffic directed towards the targets outlined in Customer responsibilities running Azure Spring Apps in a virtual network is being blocked or restricted.
- Review your route table settings to ensure that they route traffic towards the public network (either directly or via a firewall, gateway, or other appliance) for the specific targets detailed in Customer responsibilities running Azure Spring Apps in a virtual network.
- Use Diagnose and Solve problems -> Required Outbound Traffic.
- Use the App Connect console to test whether an outbound connection is blocked. We can connect to the app container and directly run the following command. Please refer to Investigate Azure Spring Apps Networking Issue with the new Connect feature - Microsoft Community Hub.

nc -zv <ip> <port>

- If the outbound connection issue blocked Azure Spring Apps service creation, then since no resource has been created yet, the above two troubleshooting methods won't be available. You will need to create a jump-box (Windows or Linux VM) in the same VNET subnet as your Azure Spring Apps instance, then use the nc or tcpping command to verify the connection. Make sure none of the targets mentioned in Customer responsibilities running Azure Spring Apps in a virtual network are blocked by your NSG or firewall rules.

How to fix:
Use the methods above to check which outbound traffic is being blocked, then fix the settings in the route table, NSG, and firewall for both the service runtime and app subnets.
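On a jump-box, the same kind of loop used for DNS can be applied to connectivity checks. This is a minimal sketch for a Linux shell with nc installed; the endpoint list is only a subset of the required targets, and the -w 5 timeout is just an example value.

# Test TCP connectivity to a few of the required outbound targets
for target in "mcr.microsoft.com 443" "packages.microsoft.com 443" "login.microsoftonline.com 443" "global.prod.microsoftmetrics.com 443"; do
  set -- ${target}
  nc -zv -w 5 "$1" "$2" || echo "BLOCKED or unreachable: $1:$2"
done

# SMB traffic to Azure Files also needs port 445 (replace the placeholder with your storage account host)
nc -zv -w 5 <storage-account>.file.core.windows.net 445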
Route Table Issues

Symptoms:
- Failed to create Azure Spring Apps service.
- Failed to start Azure Spring Apps service.
- Failed to delete Azure Spring Apps service.

Error Messages:
- Invalid VNet configuration: Invalid RouteTable xxx. This may be a customer error if providing wrong route table information.
- Invalid VNet configuration: Please make sure to grant Azure Spring Apps service permission to the scope /subscriptions/xxx/resourceGroups/xxx/providers/Microsoft.Network/routeTables/xxx. Refer to https://aka.ms/asc/vnet-permission-help for more details.

Common Causes of the issue:
- You did not grant Owner or User Access Administrator & Network Contributor permission to "Azure Spring Cloud Resource Provider" on your route tables. The platform needs to write new route table rules into the given route tables; without sufficient permission, the operation will fail.
- You provided the wrong route table information, so the platform cannot find it.

How to fix:
Unless you have distinct requirements necessitating the routing of outbound traffic through a designated gateway or firewall, there is no need to create your own route tables in the subnets allocated to Azure Spring Apps. The platform itself will generate a new route table in the subnets and configure appropriate route rules for the underlying nodes.

If you need to route outbound traffic to your own gateway or firewall:
- You have the option to use your own route tables. Before creating Azure Spring Apps, make sure to associate your custom route tables with the two specified subnets. If your custom subnets contain route tables, Azure Spring Apps acknowledges the existing route tables during instance operations and adds or updates rules accordingly.
- You can also use the route tables created by the platform and simply add your own routing rules to them. You will see rules named like aks-nodepoolxx-xxxx-vmssxxxxx_xxxx; these are the rules added by Azure Spring Apps and they must not be updated or removed. You can add your own rules to route other outbound traffic, for example using 0.0.0.0/0 to route all other traffic to your gateway or firewall.
- If you created your own routing rule to route internet outbound traffic, make sure the traffic can still reach all the targets mentioned in Customer responsibilities running Azure Spring Apps in a virtual network.
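If you do bring your own route tables, the association can be scripted with the Azure CLI before the instance is created. This is a minimal sketch with hypothetical names (my-rg, my-vnet, the subnet, and the firewall IP 10.0.3.4); you would repeat the subnet association for both the app and service runtime subnets, and you still need to grant the resource provider permission on the tables as described above.

# Create a route table and a default route that sends egress traffic to your firewall
az network route-table create --resource-group my-rg --name asa-app-rt
az network route-table route create \
  --resource-group my-rg \
  --route-table-name asa-app-rt \
  --name default-to-firewall \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.3.4

# Associate the route table with the apps subnet (repeat with a second table for the service runtime subnet)
az network vnet subnet update \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name apps-subnet \
  --route-table asa-app-rt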
User Defined Route Issues

Symptoms:
- Failed to create Azure Spring Apps service.
- Failed to start Azure Spring Apps service.
- Failed to use the Public Endpoint feature.
- Failed to use log streaming or to connect to the app console from the public network.

Error Messages:
- Invalid VNet configuration: Route tables need to be bound on subnets when you select outbound type as UserDefinedRouting.
- Invalid VNet configuration: Both route table need default route(0.0.0.0/0).
- When connecting to the console from the public network: A network error was encountered when connecting to "xxx.private.azuremicroservices.io". Please open the browser devtools or try with "az spring app connect" to check the detailed error message.
- The Azure CLI log stream command may report: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='xxx.private.azuremicroservices.io', port=443): Max retries exceeded with url: /api/logstream/apps/authserver/instances/xxxxx

Common Causes of the issue:
When you choose to use a user defined route, the platform no longer creates any public IP address on the load balancer. So it is the customer's responsibility to make sure that both subnets can still make calls to all the public targets mentioned in Customer responsibilities running Azure Spring Apps in a virtual network. This is why we require both route tables to have a default route (0.0.0.0/0) that sends the traffic to your firewall. The firewall is responsible for Source Network Address Translation (SNAT) and Destination Network Address Translation (DNAT), converting local IP addresses to the corresponding public IP addresses. This configuration enables the platform to establish outbound connections to targets located on the public network.

By design, with the UserDefinedRouting outbound type, Azure Spring Apps cannot use the following features:
- Enable Public Endpoint
- Use the public network to access the log stream
- Use the public network to access the console

The same limitations also apply to Azure Spring Apps instances using the Bring Your Own Route Table feature when customers route egress traffic to a firewall, because this introduces asymmetric routing into the cluster, which is where the problem occurs. Packets arrive on the firewall's public IP address but return to the firewall via the private IP address, so the firewall must block such traffic. Refer to Integrate Azure Firewall with Azure Standard Load Balancer for more details.

How to fix:
When creating Azure Spring Apps with the UserDefinedRouting outbound type, please carefully read the Control egress traffic for an Azure Spring Apps instance doc. It is recommended to follow each step outlined in that document when establishing your own route tables and configuring the associated firewall settings. This approach ensures proper and effective control of outbound traffic for your Azure Spring Apps instance.

Hope this troubleshooting guide is helpful to you! To help you get started, we have monthly FREE grants on all tiers – 50 vCPU hours and 100 memory GB hours per tier.

Additional Resources
- Learn using an MS Learn module or self-paced workshop on GitHub
- Deploy your first Spring app to Azure!
- Deploy the demo Fitness Store Spring Boot app to Azure
- Deploy the demo Animal Rescue Spring Boot app to Azure
- Learn more about implementing solutions on Azure Spring Apps
- Deploy Spring Boot apps by leveraging enterprise best practices – Azure Spring Apps Reference Architecture
- Migrate your Spring Boot, Spring Cloud, and Tomcat applications to Azure Spring Apps
- Wire Spring applications to interact with Azure services

For feedback and questions, please raise your issues on our GitHub. To learn more about Spring Cloud Azure, we invite you to visit the following links:
- Reach out to us on StackOverflow or GitHub.
- Reference Documentation
- Conceptual Documentation
- Code Samples
- Spring Version Mapping