Spring Integrations
Simplify Full-stack Java Development with JHipster Online, Terraform and Bicep
In the previous blog, Build and deploy full-stack Java Web Applications on Azure Container Apps with JHipster, we explored the fundamental features of JHipster Azure Container Apps. Specifically, we demonstrated how to create and deploy a project to Azure Container Apps in just a few steps. In this blog, we will introduce some new features in JHipster Azure Container Apps, which make project creation even simpler and deployment more seamless.

JHipster Online: Quick Prototyping Made Easy

JHipster Online is a quick prototyping website that allows you to generate a full-stack Spring Boot project without requiring any installation! You can start building your Azure project by clicking the Create Azure Application button.

🌟 Generate the project

Simply answer a few guided questions, and JHipster Online will generate a project ready for building and deployment. In the final step of the questionnaire, you can choose to generate either a Terraform or Bicep file for deployment.

If you prefer using the CLI version, install it with the following command:

npm install -g generator-jhipster-azure-container-apps

You can then create the project with:

jhipster-azure-container-apps

🚀 Deploy the project

💚 Terraform

Terraform is an infrastructure-as-code (IaC) tool that allows you to build, modify, and version cloud and on-premises resources securely and efficiently. It supports a wide range of popular cloud providers, including AWS, Azure, Google Cloud Platform (GCP), Oracle Cloud Infrastructure (OCI), and Docker.

To deploy using Terraform, ensure that Terraform is selected during the project generation step. Additionally, you must have Terraform installed and properly configured. After generating the project, navigate to the Terraform folder:

cd terraform

Initialize Terraform by running the following command:

terraform init

Once finished, provision the necessary resources on Azure with:

terraform apply -auto-approve

Now you can deploy the project with:

Linux/macOS:

./deploy.sh

You can run the deployment script with the options subId, region and resourceGroupName.

Windows:

.\deploy.ps1

You will be prompted to provide subId, region, and resourceGroupName.

❤️ Bicep

Bicep is a domain-specific language that uses declarative syntax to deploy Azure resources. In order to deploy with Bicep, make sure you select Bicep in the project generation step. You may also need to have Azure CLI installed and configured.

Once the project has been created, change into the Bicep folder:

cd bicep

Set up Bicep with:

az deployment sub create -f ./main.bicep --location=eastus2 --name jhipster-aca --only-show-errors

Here you can replace the location and name parameters with your own choices. Now you can deploy the project with:

Linux/macOS:

./deploy.sh

You can run the deployment script with the options subId, region and resourceGroupName.

Windows:

.\deploy.ps1

You will be prompted to provide subId, region, and resourceGroupName.

💛 Deploy from Source Code, Artifact and more

In addition to the options mentioned, Azure Container Apps provides a wide range of deployment methods designed to suit diverse project needs. Whether you prefer deploying directly from source code, pre-built artifacts, or container images, Azure Container Apps streamlines the entire process with its robust built-in Java support. This enables developers to focus on innovation rather than infrastructure management.
From integrating with popular CI/CD pipelines to deploying directly from GitHub, Azure Container Apps offers the flexibility to match your workflow. Discover how to effortlessly deploy and scale your project by visiting: Launch your first Java application in Azure Container Apps.

Modernising Registrar Technology: Implementing EPP with Kotlin, Spring & Azure Container Apps
Introduction

In the domain management industry, technological advancement has often been a slow and cautious process, lagging behind the rapid innovations seen in other tech sectors. This measured pace is understandable given the critical role domain infrastructure plays in the global internet ecosystem. However, as we stand on the cusp of a new era in web technology, it is becoming increasingly clear that modernization should be a priority.

This blog post embarks on a journey to demystify one of the most critical yet often misunderstood components of the industry: the Extensible Provisioning Protocol (EPP). Throughout this blog, we will dive deep into the intricacies of EPP, exploring its structure, commands and how it fits into the broader domain management system. We will walk through the process of building a robust EPP client using Kotlin and Spring Boot. Then, we will take our solution to the next level by containerizing it with Docker and deploying it to Azure Container Apps, showcasing how modern cloud technologies can improve the reliability and scalability of your domain management system. We will also set up a continuous integration and deployment (CI/CD) pipeline, ensuring that your EPP implementation remains up-to-date and easily maintainable.

By the end of this blog, you will be able to provision domains programmatically via an endpoint, and have the code foundation ready to create dozens of other domain management commands (e.g. updating nameservers, updating contact info, renewing and transferring domains).

Who it is for

This guide is tailored primarily for registrars: the services that serve as the crucial intermediary between domain registrants (the end users who wish to claim their piece of internet real estate) and the registry systems that manage those domains. While the concepts we will explore have broad applications across the domain industry, the perspective throughout will be firmly rooted in the registrar's role. The fundamental goal of this blog is to lower the barrier to entry in the domain management space, making this technology more accessible to smaller registrars, startups and individual developers.

What you will need: EPP credentials

The entire tech stack and the development prerequisites are listed below. But before committing to this project, be aware that the cornerstone of this workflow is the registry EPP server. This is non-negotiable and absolutely essential for implementing and testing your EPP client. If you stumbled upon this blog, it is likely you already have accreditation with a registry. In this case, the registry will provide you with EPP credentials (expect a host, port, username and password). Note that some registries enforce an IP whitelist. For those who do not have accreditation with a registry, you will need to go through the relevant accreditation process or use a publicly available sandbox.

For this guide, I will be using the Channel Islands registry: Channel Isles: The Islands' Domain Names. They offer the following TLDs: .gg, .je, .co.gg, .net.gg, .org.gg, .co.je, .net.je, .org.je. Among these, we will concentrate on provisioning .gg domains. The .gg TLD has gained significant popularity, particularly in the gaming community. My own accreditation with the Channel Islands registry involved an application process and a fee; afterwards, they provided live EPP details and access to an OTE (Operational Test & Evaluation) environment, which I will be using in this blog so as to not incur unnecessary costs.
If you do not have access to any EPP server, then this blog will serve as informational only. Otherwise, you can follow along in creating the system.

Understanding EPP

EPP is short for Extensible Provisioning Protocol. It is a protocol designed to streamline and standardise communication between domain name registries and registrars. Developed to replace older, less efficient protocols, EPP has become the industry standard for domain registration and management operations. More technically, EPP is an XML-based protocol that facilitates the provisioning and management of domain names, host objects and contact information. Key features include:

Stateful connections: EPP maintains persistent connections between registrars and registries, reducing overhead and improving performance.
Extensibility: As the name suggests, EPP is designed to be extensible. Registries can add custom extensions to support unique features or requirements.
Standardization: EPP provides a uniform interface across different registries, simplifying integration for registrars and reducing development costs.

For someone new to this field, it is easy to assume that domain provisioning would be done with registries through a REST API. But in fact, this protocol is the modern standard, and that is what this blog will cover.

Choosing the tech stack

We need a combination of technologies that will provide performance, scalability and developer productivity. After careful consideration, I settled on using Kotlin as the programming language, Spring for the REST API and Azure Container Apps for deployment.

Kotlin

Kotlin has a unique blend of features which makes it a great choice for our EPP implementation. Its seamless interoperability with Java allows us to leverage existing Java libraries commonly used in other EPP implementations while enjoying Kotlin's modern syntax. The language's conciseness and readability result in cleaner, more maintainable code, which is particularly beneficial when dealing with complex EPP commands and responses.

Spring

The Spring framework plays a pivotal role in our project. After implementing the EPP functions, we will be using Spring to expose these actions to the outside world. We will use Spring to create endpoints that we can call from outside of our deployment on Azure Container Apps, such as from a web backend. This is a common pattern for registrars: when a registrant attempts to register a domain, the web application processes most of the validation and sends the instruction to our Spring REST API, which in turn commands the EPP.

Azure Container Apps ('ACA')

One may initially assume that Azure Spring Apps would be perfect for this project. However, it was recently announced that this service is being retired, starting September 30th, 2024, and ending March 31st, 2028. The official migration recommendation is to move to Azure Container Apps. Note that there are other migration paths, such as a PaaS solution with Azure App Service or a containerized solution with Azure Kubernetes Service, though we will be using ACA for this blog. Read more on the retirement: https://learn.microsoft.com/en-us/azure/spring-apps/basic-standard/retirement-announcement

Azure Container Apps rounds out our tech stack, providing the ideal platform for deploying and scaling our EPP implementation. This fully managed environment allows us to focus on our application logic rather than getting bogged down in infrastructure management.
One of the key advantages of ACA is its native support for microservices architecture, which makes it the perfect choice for a Spring application. Spring's embedded Tomcat server aligns with ACA's containerised approach, allowing for easy deployment with reduced development time. Moreover, ACA's built-in ingress and SSL termination capabilities complement Spring's security features, providing a robust, secure environment for our EPP operations. The platform's ability to handle multiple revisions of an application also facilitates easy A/B testing and canary deployments, which is particularly useful when rolling out updates to our EPP system.

The architecture

Now that we are familiar with the technology, let us look at how this is all going to fit together. The architecture is fairly simple: in this blog, we will be making the EPP API and deploying it to an Azure Container App. The EPP API will, of course, need to connect and communicate with a registry server. While out of scope for this blog, I have included Azure CosmosDB to show where a custom user database could fit into this flow, and an Azure Web App to show a common use case for end users.

Once we have put together this EPP API which connects to a registry with Kotlin & Spring, and deployed it on ACA, the hard part is out of the way. From there, you can create any sort of user interface that is relevant to your audience (e.g. an Azure Web App) and connect with a database in any way that is relevant to your platform (e.g. Azure CosmosDB for caching). To put this architecture into a real-world context: when you purchase a domain from a popular registrar such as Namecheap or GoDaddy, this is the kind of backend system they may have.

The typical user journey, in simplistic steps, as illustrated by the diagram, would be:

Registrant (end user) requests to purchase a domain
Website backend sends instruction to EPP API (what we are making in this blog)
EPP API sends command to the EPP server provided by the registry
Response provided by registry and received by registrant (end user) on website

Setting up the development environment

Prerequisites

For this blog, I will be using the following technologies:

Visual Studio Code (VS Code) as the IDE (integrated development environment). I will be installing some extensions and changing some settings to make it work for our technology. Download at: Download Visual Studio Code - Mac, Linux, Windows
Docker CLI for containerization and local testing. Download at: Get Started | Docker
Azure CLI for deployment to Azure Container Registry & Azure Container Apps (you can use the portal if more comfortable). Download at: How to install the Azure CLI | Microsoft Learn
Git for version control and pushing to GitHub to set up the CI/CD pipeline. Download at: Git - Downloads (git-scm.com)

VS Code Extensions

These extensions are optional but will significantly improve the development experience. I would highly recommend installing them. Head to the side panel on the left, click Extensions and install the following:

Kotlin
Spring Initializr
Java Support

Implementing EPP with Kotlin & Spring

Creating the project

First up, let us create a blank Spring project. We will do this with the Spring Initializr plugin we just installed:

Press CTRL + SHIFT + P to open the command palette
Select Spring Initializr: Create a Gradle project...
Select version (I recommend 3.3.4)
Select Kotlin as project language
Type Group Id (I am using com.stephen)
Type Artifact ID (I am using eppapi)
Select jar as packaging type
Select any Java version (the version choice is yours)
Add Spring Web as a dependency
Choose a folder
Open project

Your project should look like this:

We are using the Gradle build tool for this project. Gradle is a powerful, flexible build automation tool that supports multi-language development and offers convenient integration with both Kotlin & Spring. Gradle will handle our dependency management, allowing us to focus on our EPP implementation rather than build configuration intricacies.

Adding the EPP dependency

The Spring Initializr has kindly added the required Spring dependencies for us. Therefore, all that is left is our EPP dependency. When exploring how best to connect to a registry through EPP, I discovered the EPP RTK (Registrar Toolkit) library. This library provides a robust implementation of the Extensible Provisioning Protocol, making it an ideal choice for our project. This library is particularly useful because:

It handles the low-level details of EPP communication, allowing us to focus on business logic.
It is a Java-based implementation, which integrates seamlessly with our Kotlin and Spring setup.
It supports all basic EPP commands out of the box, such as domain checks, registrations and transfers.

By using the EPP-RTK, we can significantly reduce the amount of boilerplate code needed to implement EPP functionality. You can download the library and manually import it into your project, or preferably add the following to your build.gradle in the dependencies section:

implementation 'io.github.mschout:epp-rtk-java:0.9.11'

Also, while we are here, I would recommend setting the Spring framework plugin to version 2.7.18. This version is most compatible with the APIs we are using, and I have tried and tested it. To do this, in the plugins block, change the dependency to this:

id 'org.springframework.boot' version '2.7.18'

P.S. The EPP-RTK documentation that I used while writing this project was entirely from this page, which documents the entire API: https://epp-rtk.sourceforge.net/epp-rtk-java-0.4.1/java/doc/epp-rtk-user-guide.html

Modifying the build settings

With that knowledge, there are some specific things we need to change in our build.gradle to support the proper Java version. The version is entirely up to you, though I would personally recommend the latest in order to stay up to date with security patches. Copy/replace the following into the build.gradle:

java {
    toolchain {
        languageVersion = JavaLanguageVersion.of(21)
    }
    sourceCompatibility = JavaVersion.VERSION_21
    targetCompatibility = JavaVersion.VERSION_21
}

kotlin {
    jvmToolchain(21)
}

tasks.withType(org.jetbrains.kotlin.gradle.tasks.KotlinCompile) {
    kotlinOptions {
        jvmTarget = "21"
        freeCompilerArgs = ["-Xjsr305=strict"]
    }
}

tasks.named('test') {
    enabled = false
}

At this point, it is good practice to attempt to build the project. It should build comfortably with these new settings, and if not, now is the perfect time to deal with errors before we get into the codebase. To do this, either use the built-in Gradle panel on the sidebar and click through Tasks > build > build, or run this command in the terminal:

.\gradlew clean build

After a few seconds, you should be met with BUILD SUCCESSFUL.
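For reference, here is a sketch of how the full build.gradle might look once these pieces are combined. The Kotlin plugin and dependency-management versions below are assumptions on my part, not values from this walkthrough, so prefer whatever versions Spring Initializr generated for you:

plugins {
    // Assumed versions: any Kotlin release that supports JVM target 21 (1.9.20+) should work
    id 'org.jetbrains.kotlin.jvm' version '1.9.24'
    id 'org.jetbrains.kotlin.plugin.spring' version '1.9.24'
    id 'org.springframework.boot' version '2.7.18'
    id 'io.spring.dependency-management' version '1.0.15.RELEASE'
}

group = 'com.stephen'
version = '0.0.1-SNAPSHOT'

// Keeps the Boot 2.7 BOM's managed Kotlin artifacts in line with the compiler version above
ext['kotlin.version'] = '1.9.24'

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web'
    implementation 'io.github.mschout:epp-rtk-java:0.9.11'
}

// ...followed by the java/kotlin/tasks blocks shown above.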
The structure

Our intention here is to build a REST API which will take in requests and then use EPP-RTK to beam off commands to the targeted EPP registry. I recommend the following steps for a solid project structure:

Rename the main class to EPPAPI.kt (the Spring auto-generated name did not do it justice).
Split the code into two folders: epp and api, with our main class remaining at the root.
Create a class inside the epp folder named EPP.kt - this is where we will connect to and manage the EPPClient soon.
Create a class inside the api folder named API.kt - this is where we will configure and run the Spring API.

Your file structure should now look like this:

EPPAPI.kt
api
└── API.kt
epp
└── EPP.kt

Before we can get to coding, there is one final step: adding environment variables. To connect to the targeted EPP server, we need four variables: host, port, username and password. These will be provided by your chosen registry. It is possible that, as in my case, the registry may also grant you access to an OTE (Operational Test & Evaluation) environment, which is essentially a 1:1 copy of the live EPP server that acts as a sandbox for registrars to test their systems without fear of affecting data on the live registry. I highly recommend hooking up to an OTE during testing, if your registry has provided you with one, so as to not incur unnecessary costs.

Create a file in the root of your project called .env and populate it with the following structure. I have prefilled the host and port for the registry I am using to show the expected format:

HOST=ote.channelisles.net
PORT=700
USERNAME=X
PASSWORD=X

We will use these environment variables while running our project locally in VS Code and then pass them into Docker when containerizing locally. For Container Apps, we will have to provide them manually when setting up the environment.

The code

Now comes the fun part. We have successfully set up our development environment and structured the project, so now let us populate it with some code. Given this project is in Kotlin, I will be writing idiomatic syntax as illustrated in the Kotlin docs: https://kotlinlang.org/docs/home.html

Firstly, let us tackle our EPP class. The goal with this class is to provide access to an EPPClient which we can use to connect to the EPP server and authenticate with our details. The class will extend the EPPClient provided by the EPP-RTK API and implement a singleton pattern through its companion object. The class uses the environment variables we set earlier for configuration. The create() function serves as a factory method, handling the process of establishing a secure SSL connection, logging in and initializing the client. It employs Kotlin's apply function for a concise and readable initialization block. The implementation also includes error handling and logging which will help us debug if anything goes wrong. The setupSSLContext() function configures a trust-all certificate strategy, which, while not recommended for production, is useful in development or specific controlled environments. This design will allow for easy extension through Kotlin's extension functions on the companion object.
The code is as follows:

import com.tucows.oxrs.epprtk.rtk.EPPClient
import java.net.Socket
import java.security.KeyStore
import java.security.cert.X509Certificate
import javax.net.ssl.KeyManagerFactory
import javax.net.ssl.SSLContext
import javax.net.ssl.TrustManager
import javax.net.ssl.X509TrustManager

class EPP private constructor(
    host: String,
    port: Int,
    username: String,
    password: String,
) : EPPClient(host, port, username, password) {

    companion object {
        private val HOST = System.getenv("HOST")
        private val PORT = System.getenv("PORT").toInt()
        private val USERNAME = System.getenv("USERNAME")
        private val PASSWORD = System.getenv("PASSWORD")

        lateinit var client: EPP

        // Factory method: opens the SSL socket, reads the greeting, connects,
        // logs in, and stores the singleton instance in `client`.
        fun create(): EPP {
            println("Creating client with HOST: $HOST, PORT: $PORT, USERNAME: $USERNAME")
            return EPP(HOST, PORT, USERNAME, PASSWORD).apply {
                try {
                    println("Creating SSL socket...")
                    val socket = createSSLSocket()
                    println("SSL socket created. Setting socket to EPP server...")
                    setSocketToEPPServer(socket)
                    println("Socket set. Getting greeting...")
                    val greeting = greeting
                    println("Greeting received: $greeting")
                    println("Connecting...")
                    connect()
                    println("Connected. Logging in...")
                    login(PASSWORD)
                    println("Login successful.")
                    client = this
                } catch (e: Exception) {
                    println("Error during client creation: ${e.message}")
                    e.printStackTrace()
                    throw e
                }
            }
        }

        private fun createSSLSocket(): Socket {
            val sslContext = setupSSLContext()
            return sslContext.socketFactory.createSocket(HOST, PORT) as Socket
        }

        // Trust-all certificate strategy: acceptable for development/OTE use,
        // not recommended for production.
        private fun setupSSLContext(): SSLContext {
            val trustAllCerts = arrayOf<TrustManager>(object : X509TrustManager {
                override fun getAcceptedIssuers(): Array<X509Certificate>? = null
                override fun checkClientTrusted(certs: Array<X509Certificate>, authType: String) {}
                override fun checkServerTrusted(certs: Array<X509Certificate>, authType: String) {}
            })
            val keyStore = KeyStore.getInstance(KeyStore.getDefaultType()).apply { load(null, null) }
            val kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm()).apply {
                init(keyStore, "".toCharArray())
            }
            return SSLContext.getInstance("TLS").apply {
                init(kmf.keyManagers, trustAllCerts, java.security.SecureRandom())
            }
        }
    }
}

Now that this is configured, let us alter our main class to ensure that we connect and authenticate into the client when our project is run. I have removed the default generated Spring content as we will move this to the dedicated API.kt class shortly. The main class should now look like this:

fun main() {
    EPP.create()
}

Now your application is able to connect and authenticate with an EPP server! However, that in itself is not very useful, so next we will focus on creating specific functions that will send off EPP messages to the target server and get a response. Before we continue, it is important to understand the three main objects in domain management: domains, contacts and hosts.

Domains: These are the web addresses that users type into their browsers. In EPP, a domain object represents the registration of a domain name.
Contacts: These are individuals or entities associated with a domain. There are typically four types of contact: Registrant, Admin, Tech & Billing. ICANN (Internet Corporation for Assigned Names and Numbers) mandates that every provisioned domain must have a valid contact attached to it.
Hosts: Also known as nameservers, these are the servers that translate domain names into IP addresses. In EPP, host objects can either be internal (subordinate to a domain in the registry) or external.
Understanding these concepts is crucial because EPP operations involve creating, modifying or querying these objects. For instance, when registering a domain, you need to specify contacts and hosts. With that knowledge, let us create three folders inside our epp folder, named domain, contact and host. The first EPP command we will make is the simplest: a domain check. Because this relates to domain objects, create a class inside the domain folder named CheckDomain.kt. Your project structure should now look like this:

EPPAPI.kt
api
└── API.kt
epp
├── contact
├── domain
│   └── CheckDomain.kt
├── host
└── EPP.kt

Let us go and write our first EPP operation: checking if a domain is available for registration. I am going to create a Kotlin extension function inside our CheckDomain.kt class called checkDomain which can be used on our EPP class. Here's the code:

import epp.EPP
import com.tucows.oxrs.epprtk.rtk.xml.EPPDomainCheck
import org.openrtk.idl.epprtk.domain.epp_DomainCheckReq
import org.openrtk.idl.epprtk.domain.epp_DomainCheckRsp
import org.openrtk.idl.epprtk.epp_Command

fun EPP.Companion.checkDomain(
    domainName: String,
): Boolean {
    val check = EPPDomainCheck().apply {
        setRequestData(
            epp_DomainCheckReq(
                epp_Command(),
                arrayOf(domainName)
            )
        )
    }

    val response = client.processAction(check) as EPPDomainCheck
    val domainCheck = response.responseData as epp_DomainCheckRsp

    return domainCheck.results[0].avail
}

Here is the flow of the function:

We create an EPPDomainCheck object, which represents an EPP domain check command.
We set the request data using epp_DomainCheckReq. This takes an epp_Command (a generic EPP command) and an array of domain names to check. In this case, we are only checking one domain.
We process the action using our EPP client's processAction function, which sends the request to the EPP server.
We cast the response to EPPDomainCheck and extract the responseData.
Finally, we return whether the domain is available or not from the first (and only) result by checking the avail value.

From an EPP perspective, this function is sending a domain check command to the EPP server. The server responds with information about whether the specified domain is available for registration. Remember, EPP is an XML-based protocol, meaning that the raw output for a check of, for example, example.gg, returns the following:

org.openrtk.idl.epprtk.domain.epp_DomainCheckRsp: {
    m_rsp [org.openrtk.idl.epprtk.epp_Response: {
        m_results [[org.openrtk.idl.epprtk.epp_Result: { m_code [1000] m_values [null] m_ext_values [null] m_msg [Command completed successfully] m_lang [] }]]
        m_message_queue [org.openrtk.idl.epprtk.epp_MessageQueue: { m_count [4] m_queue_date [null] m_msg [null] m_id [916211] }]
        m_extension_strings [null]
        m_trans_id [org.openrtk.idl.epprtk.epp_TransID: { m_client_trid [null] m_server_trid [1728106430577] }]
    }]
    m_results [[org.openrtk.idl.epprtk.epp_CheckResult: { m_value [example.gg] m_avail [false] m_reason [(00) The domain exists] m_lang [] }]]
}

This is why we do the casting and filter through to the Boolean to provide back to the calling function. Otherwise, this would be a mess to deal with. It is important to do the validation and casting in this function so that we do not pass the heavy work back upstream. By implementing this as an extension function on our EPP class, we can call it super easily.
Let us add it to our main class as a test:

fun main() {
    EPP.create()
    println(EPP.checkDomain("example.gg"))
}

As opposed to a long string of XML, our function has made it so that the console prints either true or false, in this case false. This pattern of creating extension functions for various EPP operations allows us to build a clean, intuitive API for interacting with the EPP server, while keeping our core EPP class focused on connection and authentication.

Now that the basic check is done, let us look at what is required to provision a domain. Remember that domains, contacts and hosts can all be used with a number of operations, including creating, updating, deleting and querying. In order to register a domain, we will need to create a domain object, which first requires that a contact and host object be created. Let us start with creating a contact. I have created a CreateContact.kt class under my /epp/contact folder. Here is how it looks:

import com.tucows.oxrs.epprtk.rtk.xml.EPPContactCreate
import epp.EPP
import org.openrtk.idl.epprtk.contact.*
import org.openrtk.idl.epprtk.epp_AuthInfo
import org.openrtk.idl.epprtk.epp_AuthInfoType
import org.openrtk.idl.epprtk.epp_Command

fun EPP.Companion.createContact(
    contactId: String,
    name: String,
    organization: String? = null,
    street: String,
    street2: String? = null,
    street3: String? = null,
    city: String,
    state: String? = null,
    zip: String? = null,
    country: String,
    phone: String,
    fax: String? = null,
    email: String
): Boolean {
    val create = EPPContactCreate().apply {
        setRequestData(
            epp_ContactCreateReq(
                epp_Command(),
                contactId,
                arrayOf(
                    epp_ContactNameAddress(
                        epp_ContactPostalInfoType.INT,
                        name,
                        organization,
                        epp_ContactAddress(street, street2, street3, city, state, zip, country)
                    )
                ),
                phone.let { epp_ContactPhone(null, it) },
                fax?.let { epp_ContactPhone(null, it) },
                email,
                epp_AuthInfo(epp_AuthInfoType.PW, null, "pass")
            )
        )
    }

    val response = client.processAction(create) as EPPContactCreate
    val contactCreate = response.responseData as epp_ContactCreateRsp

    return contactCreate.rsp.results[0].m_code.toInt() == 1000
}

In this command, we are using similar logic to domain checking: we create an EPPContactCreate object which we populate with the data we took in from the function parameters. Some of that data is optional, and I have given default null values to everything that is optional according to the EPP specification. I am then checking the m_code, which is, for all intents and purposes, a code that indicates the result of the operation. According to the EPP specification, a result code of 1000 indicates a successful operation.

The last step before we can work on provisioning a domain is creating a host object. In EPP, host objects represent the nameservers that will be associated with our domain. Registries require these for two main reasons: to ensure newly registered domains are immediately operational in the DNS, and to create necessary glue records for internal nameservers (nameservers within the same TLD as the domain). Whether or not this is required depends on your chosen registry. With my case study of the Channel Isles, there is no requirement that a host object must be created on the system before the EPP can provision a domain with external nameservers. However, I will share the code in case your circumstances differ with your registry.
Following on from our previous two commands, I created a CreateHost.kt class in my /epp/host folder with the following code:

import com.tucows.oxrs.epprtk.rtk.xml.EPPHostCreate
import epp.EPP
import org.openrtk.idl.epprtk.epp_Command
import org.openrtk.idl.epprtk.host.epp_HostAddress
import org.openrtk.idl.epprtk.host.epp_HostAddressType
import org.openrtk.idl.epprtk.host.epp_HostCreateReq
import org.openrtk.idl.epprtk.host.epp_HostCreateRsp

fun EPP.Companion.createHost(
    hostName: String,
    ipAddresses: Array<String>?
): Boolean {
    val create = EPPHostCreate().apply {
        setRequestData(
            epp_HostCreateReq(
                epp_Command(),
                hostName,
                ipAddresses?.map { epp_HostAddress(epp_HostAddressType.IPV4, it) }?.toTypedArray()
            )
        )
    }

    val response = client.processAction(create) as EPPHostCreate
    val hostCreate = response.responseData as epp_HostCreateRsp

    return hostCreate.rsp.results[0].code.toInt() == 1000
}

As before, this function creates the EPP host create request, processes the action, checks the result code and returns true if the code is 1000, and false otherwise. The parameters are particularly important here and can lead to confusion for those not too familiar with how DNS works. The hostName parameter is the fully qualified domain name (FQDN) of the host we are creating, for example ns1.example.com. The other parameter is an array of IP addresses associated with the host. This is mainly needed for internal nameservers; for external nameservers (probably your use case) it can often be left null.

Now that the one definite and the other potential prerequisite to provisioning a domain are in our codebase, let us get to the star of the show. The following function is an EPP command that will provision a domain based on the objects we just created. I created the following function in a class called CreateDomain.kt in my /epp/domain folder:

import epp.EPP
import com.tucows.oxrs.epprtk.rtk.xml.EPPDomainCreate
import org.openrtk.idl.epprtk.domain.*
import org.openrtk.idl.epprtk.epp_AuthInfo
import org.openrtk.idl.epprtk.epp_AuthInfoType
import org.openrtk.idl.epprtk.epp_Command

fun EPP.Companion.createDomain(
    domainName: String,
    registrantId: String,
    adminContactId: String,
    techContactId: String,
    billingContactId: String,
    nameservers: Array<String>,
    password: String,
    period: Short = 1
): Boolean {
    val create = EPPDomainCreate().apply {
        setRequestData(
            epp_DomainCreateReq(
                epp_Command(),
                domainName,
                epp_DomainPeriod(epp_DomainPeriodUnitType.YEAR, period),
                nameservers,
                registrantId,
                arrayOf(
                    epp_DomainContact(epp_DomainContactType.ADMIN, adminContactId),
                    epp_DomainContact(epp_DomainContactType.TECH, techContactId),
                    epp_DomainContact(epp_DomainContactType.BILLING, billingContactId)
                ),
                epp_AuthInfo(epp_AuthInfoType.PW, null, password)
            )
        )
    }

    val response = client.processAction(create) as EPPDomainCreate
    val domainCreate = response.responseData as epp_DomainCreateRsp

    return domainCreate.rsp.results[0].code.toInt() == 1000
}

This createDomain function encapsulates the EPP command for provisioning a new domain. The function brings together all the pieces we have prepared: contacts, hosts and domain-specific information. As before, it constructs an EPP domain create request, associating the domain with its contacts and nameservers. It then processes this request and checks the result code to determine if the request was successful. By returning a Boolean, we can easily pass the response upstream and, if connected to a user interface such as a web application, inform the end user.
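Notice that createContact, createHost and createDomain all finish with the same result-code check. If you go on to implement the remaining commands, it may be worth pulling that check into one place. Below is a minimal sketch of such a helper; it is not part of the EPP-RTK API, and it leans on the public m_ fields visible in the response dumps throughout this blog:

import org.openrtk.idl.epprtk.epp_Response

// Hypothetical convenience extension (not from EPP-RTK): true when the first
// result carries EPP code 1000, i.e. "Command completed successfully".
fun epp_Response.isSuccess(): Boolean =
    m_results?.firstOrNull()?.m_code?.toInt() == 1000

With this in place, the closing line of createDomain could simply read return domainCreate.rsp.isSuccess(), and the same one-liner would work for every other command response.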
With these functions in place, we now have the ability to provision a domain. I will run the following test in my main class:

import epp.EPP
import epp.contact.createContact
import epp.domain.createDomain

fun main() {
    EPP.create()

    val contactResponse = EPP.createContact(
        contactId = "12345",
        name = "Stephen",
        organization = "Test",
        street = "Test Street",
        street2 = "Test Street 2",
        street3 = "Test Street 3",
        city = "Test City",
        state = "Test State",
        zip = "Test Zip",
        country = "GB",
        phone = "1234567890",
        fax = "1234567890",
        email = "test@gg.com"
    )

    if (contactResponse) {
        println("Contact created")
    } else {
        println("Contact creation failed")
        return
    }

    val domainResponse = EPP.createDomain(
        domainName = "randomavailabletestdomain.gg",
        registrantId = "12345",
        adminContactId = "12345",
        techContactId = "12345",
        billingContactId = "12345",
        nameservers = arrayOf("ernest.ns.cloudflare.com", "adaline.ns.cloudflare.com"),
        password = "XYZXYZ",
        period = 1
    )

    if (domainResponse) {
        println("Domain created")
    } else {
        println("Domain creation failed")
    }
}

In this function, which runs when the application first starts, we are firstly creating a contact using our createContact extension function. I have passed through every single parameter, required or optional, to show how it would look. Then, once the contact is confirmed as created, we create a domain with our createDomain extension function. I am giving it the required parameters, such as the domain name and the nameservers, but also providing the ID of the contact created just above in the four contact fields. The contact ID provided must be a valid contact that has first been created in the system. Therefore, chaining these two functions together should provision a domain. After running it, the output in the console should be:

Contact created
Domain created

And for humour, here is the XML response from the EPP server before we did our own filtering in our extension functions:

org.openrtk.idl.epprtk.contact.epp_ContactCreateRsp: {
    m_rsp [org.openrtk.idl.epprtk.epp_Response: {
        m_results [[org.openrtk.idl.epprtk.epp_Result: { m_code [1000] m_values [null] m_ext_values [null] m_msg [Command completed successfully] m_lang [] }]]
        m_message_queue [org.openrtk.idl.epprtk.epp_MessageQueue: { m_count [4] m_queue_date [null] m_msg [null] m_id [916211] }]
        m_extension_strings [null]
        m_trans_id [org.openrtk.idl.epprtk.epp_TransID: { m_client_trid [null] m_server_trid [1728110331411] }]
    }]
    m_id [123456]
    m_creation_date [2024-10-05T06:38:51.408Z]
}

org.openrtk.idl.epprtk.domain.epp_DomainCreateRsp: {
    m_rsp [org.openrtk.idl.epprtk.epp_Response: {
        m_results [[org.openrtk.idl.epprtk.epp_Result: { m_code [1000] m_values [null] m_ext_values [null] m_msg [Command completed successfully] m_lang [] }]]
        m_message_queue [org.openrtk.idl.epprtk.epp_MessageQueue: { m_count [4] m_queue_date [null] m_msg [null] m_id [916211] }]
        m_extension_strings [null]
        m_trans_id [org.openrtk.idl.epprtk.epp_TransID: { m_client_trid [null] m_server_trid [1728110331467] }]
    }]
    m_name [randomavailabletestdomain2.gg]
    m_creation_date [2024-10-05T06:38:51.464Z]
    m_expiration_date [2025-10-05T06:38:51.493Z]
}

Both of those objects were created using our extension functions on top of the EPP-RTK, which is in contact with my target EPP server. If your registry has a user interface, you should see that these objects have now been created and are usable going forward. For example, one contact can be used for multiple domains.
For my case study, you can see that both objects were successfully created on the Channel Isles side through our EPP communication. In simple terms, this means they have received the instruction and successfully provisioned our domain pointing at the nameservers we provided! The domain is now in my (or my registrant's) possession, and I am able to control the website showing at that domain.

What about all of the other EPP commands? After all, the EPP-RTK supports the following:

Domain check
Domain info
Domain create
Domain update
Domain delete
Domain transfer
Contact check
Contact info
Contact create
Contact update
Contact delete
Contact transfer
Host check
Host info
Host create
Host update
Host delete

We have made four of these in this blog: creating a host, creating a contact, creating a domain and checking a domain. The code for the rest of these commands follows the exact same pattern, and if at any point you get stuck I highly recommend the official documentation of the EPP-RTK API: https://epp-rtk.sourceforge.net/epp-rtk-java-0.4.1/java/doc/epp-rtk-user-guide.html

This documentation is where I got all my information for these commands and for this project as a whole. If you are looking at productionizing this project and intend to implement the remaining commands, you will find that the code is almost identical across the different commands, with the only exception being the required parameters for each request.

Now that we have our core EPP functionality implemented, it is time to expose these capabilities through a web API. This is where Spring comes into play. Spring will allow us to create a robust, scalable REST API that will serve as an interface between client interactions and our EPP operations. What we will do here is wrap our EPP functions within Spring controllers, meaning we can create endpoints that external applications can easily consume. This abstraction layer not only makes our EPP functionality more accessible but also allows us to add additional business logic, validation and error handling.

Because we know that EPP can process commands related to three object types (hosts, contacts and domains), I am going to create three separate controllers. But let us also keep them apart from our API.kt by putting them in their own controller folder. I am going to name my controllers HostController.kt, ContactController.kt and DomainController.kt. At this point, the file structure should look like this:

EPPAPI.kt
api
├── controller
│   ├── ContactController.kt
│   ├── DomainController.kt
│   └── HostController.kt
└── API.kt
epp
├── contact
│   └── CreateContact.kt
├── domain
│   ├── CheckDomain.kt
│   └── CreateDomain.kt
├── host
│   └── CreateHost.kt
└── EPP.kt

The job of controllers in Spring is to handle incoming HTTP requests, process them and return appropriate responses. In the context of our EPP API, controllers will act as the bridge between the client interface and our EPP functionality. Therefore, it makes logical sense to split the three major sections into multiple classes so that the code does not become unmaintainable.

The simplest example we could write to link our EPP and our Spring API is checking the availability of a domain. Thankfully, earlier we wrote the EPP implementation for this in our CheckDomain.kt class. Now let us make it so that a user can trigger it via an endpoint. Because it is domain related, I will add the new code to the DomainController.kt class. Firstly, every controller class must be annotated with @RestController.
And then a mapping is created as below:

import epp.EPP
import epp.domain.checkDomain
import org.springframework.http.ResponseEntity
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RequestParam
import org.springframework.web.bind.annotation.RestController

@RestController
class DomainController {

    @GetMapping("/domain-check")
    fun helloWorld(@RequestParam name: String): ResponseEntity<Map<String, Any>> {
        val check = EPP.checkDomain(name)
        return ResponseEntity.ok(
            mapOf(
                "available" to check
            )
        )
    }
}

Let us break down the code and see what is happening:

@GetMapping("/domain-check"): This annotation maps HTTP GET requests to the /domain-check route. When a GET request is made to this URL, Spring will call this function to handle it.
fun helloWorld(@RequestParam name: String): This is the function that will handle the request. The @RequestParam annotation tells Spring to extract the name parameter from the query string of the URL. For example, a request to /domain-check?name=example.gg would set name to example.gg. This allows us to then process the EPP command with the requested domain name.
ResponseEntity<Map<String, Any>>: This is the return type of the function. ResponseEntity allows us to have full control over the HTTP response, including status code, headers and body.
val check = EPP.checkDomain(name): This line calls our EPP function to check if the domain is available (remember, it returns true if available and false if not).
return ResponseEntity.ok(mapOf("available" to check)): This creates a response with HTTP status 200 (OK) and a body containing a JSON object with a single key, available, whose value is the result of the domain check.

The mapping is crucial because it connects HTTP requests to our application logic. When a client makes a GET request to /domain-check with a domain name as a parameter, Spring routes that request to this method, which then uses our EPP implementation to check the domain's availability and returns the result. This setup allows external applications to easily check domain availability by making a simple HTTP GET request, without needing to know anything about the underlying EPP protocol or implementation. It is a great example of how we are using Spring to create a user-friendly API on top of our more complex EPP operations.

The same principle we have applied to the domain check operation can be extended to all the other EPP commands we have created. For instance, creating a domain might use a POST request, updating domain information could use PUT, and deleting a domain would naturally fit with the DELETE HTTP method. For domain creation, we could use @PostMapping("/domain") and accept a request body with all necessary information, as sketched below. Domain updates could use @PutMapping("/domain/{domainName}"), where the domain name is part of the path and the updated information is in the request body. For domain deletion, @DeleteMapping("/domain/{domainName}") would be appropriate. Similar patterns can be applied to contact and host operations. By mapping our EPP commands to these standard HTTP methods, we create an intuitive API that follows RESTful conventions. Each of these endpoints would call the corresponding EPP function we have already implemented, process the result, and return an appropriate HTTP response. This approach provides a clean separation between the HTTP interface and the underlying EPP operations, making our system more modular and easier to maintain or extend in the future.
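To make that sketch concrete, here is one possible shape for the domain-creation endpoint. Note that this is illustrative rather than code from the original project: the request body class and controller name are hypothetical, and binding a Kotlin data class with @RequestBody assumes the jackson-module-kotlin dependency is on the classpath:

import epp.EPP
import epp.domain.createDomain
import org.springframework.http.ResponseEntity
import org.springframework.web.bind.annotation.PostMapping
import org.springframework.web.bind.annotation.RequestBody
import org.springframework.web.bind.annotation.RestController

// Hypothetical request body mirroring the parameters of our createDomain extension.
data class CreateDomainRequest(
    val domainName: String,
    val registrantId: String,
    val adminContactId: String,
    val techContactId: String,
    val billingContactId: String,
    val nameservers: List<String>,
    val password: String,
    val period: Short = 1
)

@RestController
class DomainRegistrationController {

    @PostMapping("/domain")
    fun registerDomain(@RequestBody request: CreateDomainRequest): ResponseEntity<Map<String, Any>> {
        // Delegate to the EPP extension function we wrote earlier; it returns
        // true only when the registry answers with result code 1000.
        val created = EPP.createDomain(
            domainName = request.domainName,
            registrantId = request.registrantId,
            adminContactId = request.adminContactId,
            techContactId = request.techContactId,
            billingContactId = request.billingContactId,
            nameservers = request.nameservers.toTypedArray(),
            password = request.password,
            period = request.period
        )
        // 201 Created on success; 502 Bad Gateway when the registry rejects the command.
        return if (created) {
            ResponseEntity.status(201).body(mapOf("created" to true))
        } else {
            ResponseEntity.status(502).body(mapOf("created" to false))
        }
    }
}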
The very last step before we can finally run this project is to initialise the Spring side of the project like we did for the EPP side. Inside my empty API.kt class, I am going to put the following:

import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication

@SpringBootApplication
class API {
    companion object {
        fun start() {
            runApplication<API>()
        }
    }
}

This code follows the Spring requirements to register our controllers. Our API.kt class serves as the entry point for the Spring application. Inside this class, we have defined a companion object with a start() function. This function calls runApplication<API>() to bootstrap the application, which is a Kotlin-specific way to launch a Spring application.

Behind the scenes, Spring's recognition of controllers happens automatically through a process called component scanning. When the application starts, because we have registered it here, Spring examines the codebase, starting from the package containing the main class and searching through all subpackages. It looks for classes annotated with specific markers, such as the @RestController that we put at the top of our controllers. Spring then inspects these classes, looking for any functions that may be annotated as mappings (e.g. @GetMapping like above), and then uses that information to build a map of URL paths to controller functions. This means that when a request comes in, Spring knows exactly which function in which class should process it.

It would be fair to say that Spring has an unconventional approach to application structure and dependency management. Spring embraces the philosophy of "convention over configuration" and heavily leverages annotations. However, this has helped us to significantly reduce boilerplate code, making it cleaner and more maintainable for future travelers.

Now that the entry point to our API is ready, all we need to do is call that start() function we just created in our EPPAPI.kt:

import api.API
import epp.EPP

fun main() {
    EPP.create()
    API.start()
}

And that is a wrap for the code. Let us go ahead and run our project. The console output should look something like this:

Creating client with HOST: ote.channelisles.net, PORT: 700, USERNAME: [Redacted]
Creating SSL socket...
SSL socket created. Setting socket to EPP server...
Socket set. Getting greeting...
Greeting received: org.openrtk.idl.epprtk.epp_Greeting: {
    m_server_id [OTE]
    m_server_date [2024-10-06T05:47:08.628Z]
    m_svc_menu [org.openrtk.idl.epprtk.epp_ServiceMenu: {
        m_versions [[1.0]]
        m_langs [[en]]
        m_services [[urn:ietf:params:xml:ns:contact-1.0, urn:ietf:params:xml:ns:domain-1.0, urn:ietf:params:xml:ns:host-1.0]]
        m_extensions [[urn:ietf:params:xml:ns:rgp-1.0, urn:ietf:params:xml:ns:auxcontact-0.1, urn:ietf:params:xml:ns:secDNS-1.1, urn:ietf:params:xml:ns:epp:fee-1.0]]
    }]
    m_dcp [org.openrtk.idl.epprtk.epp_DataCollectionPolicy: {
        m_access [all]
        m_statements [[org.openrtk.idl.epprtk.epp_dcpStatement: {
            m_purposes [[admin, prov]]
            m_recipients [[org.openrtk.idl.epprtk.epp_dcpRecipient: { m_type [ours] m_rec_desc [null] }, org.openrtk.idl.epprtk.epp_dcpRecipient: { m_type [public] m_rec_desc [null] }]]
            m_retention [stated]
        }]]
        m_expiry [null]
    }]
}
Connecting...
Connected. Logging in...
Login successful.
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::               (v2.7.18)

2024-10-06 06:47:09.531  INFO 43872 --- [           main] com.stephen.eppapi.EPPAPIKt              : Starting EPPAPIKt using Java 1.8.0_382 on STEPHEN with PID 43872 (D:\IntelliJ Projects\epp-api\build\classes\kotlin\main started by [Redacted] in D:\IntelliJ Projects\epp-api)
2024-10-06 06:47:09.534  INFO 43872 --- [           main] com.stephen.eppapi.EPPAPIKt              : No active profile set, falling back to 1 default profile: "default"
2024-10-06 06:47:10.403  INFO 43872 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 8080 (http)
2024-10-06 06:47:10.414  INFO 43872 --- [           main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]
2024-10-06 06:47:10.414  INFO 43872 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet engine: [Apache Tomcat/9.0.83]
2024-10-06 06:47:10.511  INFO 43872 --- [           main] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2024-10-06 06:47:10.511  INFO 43872 --- [           main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 928 ms
2024-10-06 06:47:11.220  INFO 43872 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2024-10-06 06:47:11.229  INFO 43872 --- [           main] com.stephen.eppapi.EPPAPIKt              : Started EPPAPIKt in 2.087 seconds (JVM running for 3.574)

It is clear to see that this startup console output is split into two halves: first, the output from our debugging messages when creating and authenticating into the EPPClient; then the native Spring output, which shows that the local server has been started on port 8080.

Now for the exciting part. Heading to localhost:8080 in the browser should resolve, but throw a fallback error page, because we have not set anything to show at that route. We have, however, created a GET route at /domain-check. If you head to just /domain-check you will be met with a 400 (BAD REQUEST) error. This is because you need to specify the name parameter, as enforced in our function. So, let us try this out with a couple of domains...

/domain-check?name=test.gg - {"available":false}
/domain-check?name=thisshouldprobablybeavailable.gg - {"available":true}

And that is it! At first it may not seem like a huge technical feat, but remember what is happening: a request is sent to our Spring API, which routes it to a specific function; this runs the code we wrapped over an EPP command, which is sent off to the targeted EPP server; the server processes the domain check and sends the response back upstream to the user. There is a huge amount happening behind the scenes to power this simple domain check.

What we have demonstrated here with the domain check functionality is just the tip of the iceberg. We could expand our API to include endpoints for various domain-related operations. For instance, domain registration could be handled by a POST request to /domain, taking contact details, nameservers, and other required information in the request body. Domain information retrieval could be a GET request to /domain/{domainName}, fetching comprehensive information about a specific domain. Updates to domain information, such as changing contacts or nameservers, could be managed through a PUT request to /domain/{domainName}.
The domain deletion process could be initiated with a DELETE request to /domain/{domainName}. Domain transfer operations, including initiating, approving, or rejecting transfers, could also be incorporated into our API. Each of these operations would follow the same pattern we have established: a Spring controller method that takes in the necessary parameters, calls the appropriate EPP function, and returns the result in a user-friendly format.

By expanding our API in this way, we are creating a comprehensive abstraction layer over EPP. This approach simplifies complex EPP operations, making them accessible to developers who may not be familiar with the intricacies of the protocol. It presents a consistent, RESTful interface for all domain-related operations, following web development best practices. Our EPP API can be easily consumed by various client applications, from web frontends to mobile apps or other backend services.

Deploying to Azure Container Apps

Now that we have our EPP API functioning locally, it is time to think about productionizing our application. Our goal is to run the API as an Azure Container App (ACA), which is a fully managed environment perfect for easy deployment and scaling of our Spring application. However, before deploying to ACA, we will need to containerise our application. This is where Azure Container Registry (ACR) comes into play. ACR will serve as the private Docker registry to store and manage our container images. It provides a centralised repository for our Docker images and integrates seamlessly with ACA, streamlining our CI/CD pipeline.

Firstly, let us create a Dockerfile. This step is required to run both locally and in Azure Container Registry. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. It serves as a blueprint for building a Docker container. In our case, our Dockerfile will set up the environment and instructions needed to containerise our Spring application. Create a file named Dockerfile in the root of your project with the following content:

# Use OpenJDK 21 as the base image (use your JDK version)
FROM openjdk:21-jdk-alpine

# Set the working directory in the container
WORKDIR /app

# Copy the JAR file into the container
COPY build/libs/*.jar app.jar

# Expose the port your application runs on
EXPOSE 8080

# Command to run the application
CMD ["java", "-jar", "app.jar"]

I have added comments alongside each instruction to explain the flow. This Dockerfile encapsulates our application and its runtime environment, ensuring consistency across different deployment environments. It is a crucial step in our journey from local development to cloud deployment, providing a standardised way to package and run our EPP API.

However, before we push to the cloud, it is prudent to test it locally in a Docker container. This approach allows us to catch any containerization-related issues early and save time in the long run. We can verify that all components work correctly in a containerised environment, such as environment variable configurations and network settings. This step will help ensure a smooth transition to ACA, as the local Docker environment closely mimics the container runtime in Azure. Once we are confident that our application runs flawlessly in a local Docker container, we can push the image to ACR and deploy it to ACA, knowing we have minimised the risk of environment-specific issues.
This local testing can be done in a simple three-step process with the following Gradle & Docker CLI commands:

./gradlew build - build our application and package it into a JAR file found under /build/libs/X.jar.
docker build -t epp-api . - tells Docker to create an image named epp-api based on the instructions in our Dockerfile.
docker run -p 8080:8080 --env-file .env epp-api - start a container from the image, mapping port 8080 of the container to port 8080 on the host machine. We use this port because it is the default port on which Spring exposes endpoints. The -p flag ensures that the application can be accessed through localhost:8080 on your machine. We also specify the .env file we created earlier so that Docker is aware of our EPP login details.

If all went well, you should have the exact same console output as above. The key difference is the environment in which our application is running. Previously, we were executing our Spring application directly within our development environment. Now, however, our application is running inside a Docker container. This containerised environment is isolated from our host system, with its own file system, networking, and process space. It is a self-contained unit that includes not just our application, but also its runtime dependencies like the Java Development Kit.

Now that we have proven our project is ready to run in a containerised environment, let us start the cloud deployment process. This process involves two main steps: pushing our Docker image to Azure Container Registry and then deploying it to Azure Container Apps. I will be using the Azure CLI as outlined in the prerequisites. Everything I am doing can be done through the portal, but the CLI drastically reduces development time.

Run the following commands in this order:

az login - if not already authenticated, be sure to log in through the CLI.
az group create --name registrar --location uksouth - create a resource group if you have not already. I have named mine registrar and chosen the location uksouth because that is closest to me.
az acr create --resource-group registrar --name registrarcontainers --sku Basic - create an Azure Container Registry resource within our registrar resource group, with the name registrarcontainers (note that this has to be globally unique) and SKU Basic.
az acr login --name registrarcontainers - log in to the Azure Container Registry.
docker tag epp-api registrarcontainers.azurecr.io/epp-api:v1 - tag the local Docker image with the ACR login server name.
docker push registrarcontainers.azurecr.io/epp-api:v1 - push the image to the container registry!

If all went well, you should be met with a console output like this:

The push refers to repository [registrarcontainers.azurecr.io/epp-api]
2111bc7193f6: Pushed
1b04c1ea1955: Pushed
ceaf9e1ebef5: Pushed
9b9b7f3d56a0: Pushed
f1b5933fe4b5: Pushed
v1: digest: sha256:07eba5b555f78502121691b10cd09365be927eff7b2e9db1eb75c072d4bd75d6 size: 1365

That is the first part done. Now that our image is in ACR, we can deploy it to Azure Container Apps. This step is where we truly leverage the power of Azure's managed container services. To deploy our EPP API to ACA, I will continue to use the Azure CLI, though some may find it more comfortable to use the portal for this section as a lot of configuration is required.
Run the following commands in this order:

az containerapp env create --resource-group registrar --name containers --location uksouth - create the Container App environment within our resource group, with the name containers and location uksouth.
az acr update -n registrarcontainers --admin-enabled true - ensure ACR allows admin access.
az containerapp create --name epp-api --resource-group registrar --environment containers --image registrarcontainers.azurecr.io/epp-api:v1 --target-port 8080 --ingress external --registry-server registrarcontainers.azurecr.io --env-vars "HOST=your_host" "PORT=your_port" "USERNAME=your_username" "PASSWORD=your_password" - creates a new Container App named epp-api within our resource group and the containers environment, using the Docker image stored in ACR. The application inside the container is configured to listen on port 8080, which is where our Spring endpoints will be accessible. The --ingress external flag makes it accessible from the internet. You must also set your environment variables, or the app will crash.

After running this command to create the Azure Container App, you should be met with a long JSON output to confirm the action, followed by the URL to access the app. It should look like:

Container app created. Access your app at https://epp-api.purpledune-772f2e5a.uksouth.azurecontainerapps.io/

Which now means... if we head to that URL and append /domain-check?name=test.gg as we did when testing locally, we are met with:

{"available":false}

That concludes the deployment process. Our API is now accessible via the internet!
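If you would rather smoke-test the deployed API from client code than a browser, a sketch like the following works with just the JDK's built-in HTTP client. Substitute your own Container App URL; the one below is taken from the example output above and will differ for your deployment.

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    // Replace with the URL printed by `az containerapp create` for your app.
    val base = "https://epp-api.purpledune-772f2e5a.uksouth.azurecontainerapps.io"
    val client = HttpClient.newHttpClient()
    val request = HttpRequest.newBuilder()
        .uri(URI.create("$base/domain-check?name=test.gg"))
        .GET()
        .build()
    val response = client.send(request, HttpResponse.BodyHandlers.ofString())
    // Expect: 200 {"available":false}
    println("${response.statusCode()} ${response.body()}")
}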
Setting up GitHub CI/CD

Now that we have our EPP API successfully running in Azure Container Apps, the next step is to streamline our development and deployment process. This is where CI/CD comes into play. CI/CD, which stands for Continuous Integration and Continuous Deployment, is a set of practices that automate the process of building, testing and deploying our application. In simple terms: we are going to make it so that when we push code changes to our GitHub repository, our container gets automatically updated and redeployed. This saves time and allows us to deliver updates and new features to our users more rapidly and reliably.

We will walk through the process of setting up the CI/CD pipeline using GitHub Actions. But first, let us set up our Git repository and send off an initial commit to GitHub. Firstly, head to GitHub and create a repository. You can create it in your personal account or an organization. I have named mine epp-api. Be sure to copy or remember the URL for this repository, as we will need it to link Git in a moment. Now that you have an empty cloud repository, open the terminal in your workspace and run the following commands:

git init - initialise a new Git repository in your current directory. This creates a hidden .git directory that stores the repository's metadata.
git add . - stage all of the files in the current directory and its subdirectories for commit. These files will be included in the next commit.
git commit -m "Initial commit" - create a new commit with the staged files and a common initial commit message.
git remote add origin <URL> - add a remote repository named origin to your local repository, connecting it to our remote Git repository hosted on GitHub.
git push origin master - upload the local repository's content to the remote repository named origin, specifically to the master branch.

If you refresh your repository on GitHub, you should see the commit! Now that your code is available outside of your local workspace, let us ask Azure to create the deployment workflow. On the Azure Portal, follow this trail:

Head to your Container App
On the sidebar, hit Settings
Hit Deployment

You should find yourself in the Continuous deployment section. There are two headings; let us start with GitHub settings:

Authenticate into GitHub and grant permissions to the repository (if it is published to a GitHub organization, grant permissions there too)
Select your organization, or your GitHub name if published on a personal account
Select the repository you just created (for me, epp-api)
Select the main branch (likely either master or main)

Then, under Registry settings:

Ensure Azure Container Registry is selected for Repository source
Select the Container Registry you created earlier (for me, registrarcontainers)
Select the image you created earlier (for me, epp-api)

It should look something like this:

Once these settings have been configured, press Start continuous deployment. If all went to plan, Azure will have created a workflow file in your repository under .github/workflows with the commit message Create an auto-deploy file. Based on the content of the workflow, we can see that the trigger is on push to master. This means that, moving forward, every change you commit and push to this repository will trigger this workflow, which will in turn trigger a build and push the new container image to the registry.

However, it is likely that the first build will fail. This is because we need to make a couple of modifications to the workflow file before it will work with our technology stack. You will need to add these changes manually, so head into the workflow and begin editing it as you would any other file (either through GitHub or VS Code - do not forget to push if editing in VS Code!). Then, add the following jobs after the Checkout to the branch step and before the Azure Login job:

- name: Grant execute permission for gradlew
  run: chmod +x gradlew

- name: Set up JDK 21
  uses: actions/setup-java@v2
  with:
    java-version: '21'
    distribution: 'adopt'

- name: Build with Gradle
  run: ./gradlew build

We added three jobs:

Grant execute permission for gradlew - gradlew is a wrapper script that helps manage Gradle installations. This step grants execute permission to the gradlew file, which allows the build process to execute Gradle commands, needed for the next steps.
Set up JDK - this sets up the JDK as the Java environment for the build process. Make sure this matches the Java version you have chosen to use for this tutorial.
Build with Gradle - this executes the Gradle build process, which compiles our code and packages it into a JAR file, which the last job then uses to build and push the image to the Container Registry.
The final workflow file should look like this:

name: Trigger auto deployment

# When this action will be executed
on:
  # Automatically trigger it when detected changes in repo
  push:
    branches: [ master ]
    paths:
      - '**'
      - '.github/workflows/AutoDeployTrigger-aec369b2-f21b-47f6-8915-0d087617a092.yml'

  # Allow manual trigger
  workflow_dispatch:

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write # This is required for requesting the OIDC JWT Token
      contents: read # Required when GH token is used to authenticate with private repo

    steps:
      - name: Checkout to the branch
        uses: actions/checkout@v2

      - name: Grant execute permission for gradlew
        run: chmod +x gradlew

      - name: Set up JDK 21
        uses: actions/setup-java@v2
        with:
          java-version: '21'
          distribution: 'adopt'

      - name: Build with Gradle
        run: ./gradlew build

      - name: Azure Login
        uses: azure/login@v1
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

      - name: Build and push container image to registry
        uses: azure/container-apps-deploy-action@v2
        with:
          appSourcePath: ${{ github.workspace }}
          _dockerfilePathKey_: _dockerfilePath_
          registryUrl: registrarcontainers.azurecr.io
          registryUsername: ${{ secrets.REGISTRY_USERNAME }}
          registryPassword: ${{ secrets.REGISTRY_PASSWORD }}
          containerAppName: epp-api
          resourceGroup: registrar
          imageToBuild: registrarcontainers.azurecr.io/epp-api:${{ github.sha }}
          _buildArgumentsKey_: |
            _buildArgumentsValues_

Once you have pushed your workflow changes, that push will itself trigger the new workflow, and hopefully you will be met with a green circle at the top of your repository on GitHub to signify that the build was a success. Do not forget that at any point you can click the Actions tab to see the result of all builds, and if any build fails you can explore in detail which job the error occurred on.

Conclusion

That is it! You have successfully built a robust EPP API using Kotlin and Spring Boot, containerised it with Docker, and deployed it to Azure Container Apps. This journey took us from understanding the intricacies of EPP and domain registration, through implementing core EPP operations, to creating a user-friendly RESTful API. We then containerised our application, ensuring consistency across different environments. Finally, we leveraged Azure's powerful cloud services - Azure Container Registry for storing our Docker image, and Azure Container Apps for deploying and running our application in a scalable, managed environment.

The result is a fully functional, cloud-hosted API that can handle domain checks, registrations and other EPP operations. This accomplishment not only showcases the technical implementation but also opens up possibilities for creating sophisticated domain management tools and services, such as starting a public registrar or managing a domain portfolio internally. I hope this blog was useful, and I am happy to answer any questions in the replies. Well done on bringing this complex system to life!Spring I/O 2024 - Join Microsoft and Broadcom to Celebrate All Things Spring and Azure!
Get ready for the ultimate Spring conference in Barcelona from May 30-31! Connect with over 1200 attendees, enjoy 70 expert speakers, and engage in 60 talks and 7 hands-on workshops. Microsoft Azure (as a Gold Sponsor) and VMware Tanzu (as a Platinum sponsor) bring you in-depth sessions on Spring and AI development, a dynamic full-day workshop, and an interactive booth experience. Learn, network, and enhance your Java skills with the latest tools and frameworks. Don't miss out on this exciting event!Azure Spring Apps Enterprise – More Power, Scalability & Extended Spring Boot Support
Today, we are delighted to announce significant enhancements to Azure Spring Apps Enterprise. These improvements bolster security, speed up development, improve scalability, and provide greater flexibility and reliability. We are excited to share these developments with you and look forward to seeing how they enhance your experience.Navigating Common VNET Injection Challenges with Azure Spring Apps
This article outlines the typical challenges that customers may come across while operating an Azure Spring Apps service instance using the Standard and Enterprise plans within a virtual network environment. Deploying Azure Spring Apps within a customized Virtual Network introduces a multitude of components into the system. This article will guide you in configuring network components such as Network Security Groups (NSGs), route tables, and custom DNS servers correctly, so the service functions as expected. In the following sections, we have compiled a list of common issues our customers encounter and offer recommendations for effectively resolving them:

VNET Prerequisites not met Issues
Custom Policy issues
Custom DNS Server Resolution Issues
Outbound connection issues
Route Table Issues
User Defined Route Issues

VNET Prerequisites not met Issues

Symptoms:
Failed to create Azure Spring Apps service.

Error Messages:
"InvalidVNetPermission" error messages reporting "Invalid Subnet xxx. This may be a customer error if 'Grant service permission to virtual network' step is missing before create Azure Spring Apps in vnets."

Common Causes of the issue:
The Azure Spring Apps platform needs to execute some management operations (e.g., create route tables, add rules to existing route tables) in the customer-provided VNET. Without the permissions below, these platform operations will be blocked.

How to fix:
Grant the Owner permission on your virtual network to "Azure Spring Cloud Resource Provider" (id: e8de9221-a19c-4c81-b814-fd37c6caf9d2). Alternatively, you can grant it the User Access Administrator and Network Contributor roles if you can't grant Owner permission.

az role assignment create \
  --role "Owner" \
  --scope ${VIRTUAL_NETWORK_RESOURCE_ID} \
  --assignee e8de9221-a19c-4c81-b814-fd37c6caf9d2

If you associated your own route tables with the given subnets, also make sure to grant Owner, or User Access Administrator and Network Contributor, permission to "Azure Spring Cloud Resource Provider" on your route tables.

az role assignment create \
  --role "Owner" \
  --scope ${APP_ROUTE_TABLE_RESOURCE_ID} \
  --assignee e8de9221-a19c-4c81-b814-fd37c6caf9d2

Reference docs:
Virtual network requirements
Grant service permission to the virtual network

Custom Policy issues

Symptoms:
Failed to create Azure Spring Apps service.
Failed to start/stop Azure Spring Apps service.
Failed to delete Azure Spring Apps service.

Error Messages:
Resource request failed due to RequestDisallowedByPolicy. Id: /providers/Microsoft.Management/managementGroups/<group-name>/providers/Microsoft.Authorization/policyAssignments/<policy-name>

Common Causes of the issue:
One of the most common policies we have seen block platform operations is "Deny Public IP Address creation". The Azure Spring Apps platform needs to create a public IP on the load balancer to communicate with the targets mentioned in Customer responsibilities running Azure Spring Apps in a virtual network for management and operational purposes.

How to fix:
Find the policy id provided in the error message, then check whether the policy can be deleted or modified to avoid the issue. For the Deny Public IP Address policy, if your company policy does not allow that, you can instead choose to use Customize Azure Spring Apps egress with a user-defined route.

Custom DNS Server Resolution Issues

Symptom:
Failed to create Azure Spring Apps service.
Failed to start Azure Spring Apps service.
App cannot register to Eureka server.
App cannot resolve a target URL, or resolves the wrong IP address for it.
Error Messages:
Could not resolve private dns zone.
java.net.UnknownHostException
If the DNS server resolved the wrong IP address for a host, we may also see connection failures in the log:
java.net.SocketTimeoutException

Common Causes of the issue:
The custom DNS server is not correctly configured to forward DNS requests to an upstream public DNS server. The Troubleshooting Azure Spring Apps in virtual network doc mentions that if your virtual network is configured with custom DNS settings, you should add the Azure DNS IP 168.63.129.16 as an upstream DNS server in the custom DNS server. This is sometimes misunderstood as adding both the custom DNS server IP and 168.63.129.16 to the VNET DNS servers setting, which introduces unpredictable name resolution behavior.

Some companies' policies do not allow forwarding DNS requests to the Azure DNS server. Customers have the flexibility to direct their DNS requests to any upstream DNS server of their choice, provided that the chosen server is capable of resolving the public targets outlined in the Customer responsibilities running Azure Spring Apps in a virtual network. In this scenario, you will also need to add an additional DNS A record mapping *.svc.private.azuremicroservices.io (the Service Runtime cluster ingress domain) to the IP of your application (see Find the IP for your application). While the Azure Spring Apps platform does configure this record, it is only recognizable by Azure DNS servers; if an alternative DNS server is in use, this record needs to be added to your own DNS server manually. Since the Spring Cloud Config server and Eureka server are hosted in the Service Runtime cluster, if your DNS server setting cannot resolve the *.svc.private.azuremicroservices.io domain, your app may fail to load config and to register itself with the Eureka server.

After modifying your DNS configuration file or adjusting the VNET DNS server settings, note that the changes will not be instantly propagated across all the cluster nodes that host your application. To apply the modifications, you need to stop and then start the Azure Spring Apps instance, which forces a reboot of the underlying cluster nodes. This is by-design behavior: a cluster node virtual machine only loads the DNS settings from the VNET once, when it is created or restarted, or when the network daemon is manually restarted.

The network connection to the custom DNS server or Azure DNS server is blocked by an NSG rule or firewall.

How to investigate:
Use Diagnose and Solve problems -> DNS Resolution detector.
Use the App Connect console to test the DNS resolution result. We can connect to the app container and directly run the nslookup command to test a host's name resolution. Please refer to Investigate Azure Spring Apps Networking Issue with the new Connect feature - Microsoft Community Hub.
If a wrong DNS server setting blocked Azure Spring Apps service creation, then since no resource has been created yet, the above two troubleshooting methods won't be available. You will need to create a jump-box (Windows or Linux VM) in the same VNET subnet as your Azure Spring Apps instance, then use the nslookup or dig command to verify the DNS settings. Make sure all the targets mentioned in Customer responsibilities running Azure Spring Apps in a virtual network can be resolved.

nslookup mcr.microsoft.com
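If you prefer to verify resolution from application code rather than a shell, a small Kotlin sketch like the following exercises the same lookups. The host list here is illustrative; check the customer responsibilities doc for the full set of targets that apply to you.

import java.net.InetAddress
import java.net.UnknownHostException

fun main() {
    // Sample hosts the platform must be able to resolve; extend with
    // your own cluster and storage hosts as needed.
    val hosts = listOf("mcr.microsoft.com", "packages.microsoft.com", "login.microsoftonline.com")
    for (host in hosts) {
        try {
            val addresses = InetAddress.getAllByName(host)
            println("$host -> ${addresses.joinToString { it.hostAddress }}")
        } catch (e: UnknownHostException) {
            println("$host -> FAILED to resolve: ${e.message}")
        }
    }
}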
Outbound connection issues

Symptom:
Failed to create Azure Spring Apps service.
Failed to start Azure Spring Apps service.
App deployment failed to start.
App deployment cannot mount Azure File Share.
App deployment cannot send metrics/logs to Azure Monitor or Application Insights.

Error Messages:
"code": "InvalidVNetConfiguration", "message": "Invalid VNet configuration: Required traffic is not whitelisted, refer to https://docs.microsoft.com/en-us/azure/spring-cloud/vnet-customer-responsibilities

Common Causes of the issue:
If traffic targeting *.azmk8s.io is blocked, the Azure Spring Apps service loses the connection to the underlying Kubernetes cluster for sending management requests. This causes the service to fail to create and start.
If traffic targeting mcr.microsoft.com, *.data.mcr.microsoft.com, or packages.microsoft.com is blocked, the platform won't be able to pull the images and packages needed to deploy the cluster nodes. This causes the service to fail to create and start.
If traffic targeting *.core.windows.net:443 and *.core.windows.net:445 is blocked, the platform won't be able to use SMB to mount the remote storage file share. This causes app deployments to fail to start.
If traffic targeting login.microsoftonline.com is blocked, platform authentication features like MSI are blocked. This impacts service start and the app's use of MSI.
If traffic targeting global.prod.microsoftmetrics.com and *.livediagnostics.monitor.azure.com is blocked, the platform won't be able to send data to Azure Monitor metrics, so you will lose app metric data.

How to investigate:
Examine the logs of your NSG and firewall to verify whether any traffic directed towards the targets outlined in the Customer responsibilities running Azure Spring Apps in a virtual network is being blocked or restricted.
Review your route table settings to ensure that they effectively route traffic towards the public network (either directly or via a firewall, gateway or other appliance) for the specific targets detailed in the Customer responsibilities running Azure Spring Apps in a virtual network.
Use Diagnose and Solve problems -> Required Outbound Traffic.
Use the App Connect console to test outbound connection blocking issues. We can connect to the app container and directly run the following command:

nc -zv <ip> <port>

Please refer to Investigate Azure Spring Apps Networking Issue with the new Connect feature - Microsoft Community Hub.
If an outbound connection issue blocked Azure Spring Apps service creation, then since no resource has been created yet, the above two troubleshooting methods won't be available. You will need to create a jump-box (Windows or Linux VM) in the same VNET subnet as your Azure Spring Apps instance, then use the nc or tcpping command to verify the connections. Make sure all the targets mentioned in Customer responsibilities running Azure Spring Apps in a virtual network are not blocked by your NSG or firewall rules.

How to fix:
Use the methods above to check which outbound traffic is being blocked, then fix the settings in the route tables, NSGs and firewall for both the service runtime and app subnets.
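As a programmatic alternative to the nc checks above, here is a rough Kotlin equivalent that attempts a TCP connection with a timeout. The target list is illustrative; substitute the endpoints relevant to your configuration.

import java.net.InetSocketAddress
import java.net.Socket

// Rough equivalent of `nc -zv <host> <port>`: attempt a TCP connection
// with a timeout and report whether the target is reachable.
fun checkOutbound(host: String, port: Int, timeoutMs: Int = 5000): Boolean =
    try {
        Socket().use { it.connect(InetSocketAddress(host, port), timeoutMs) }
        true
    } catch (e: Exception) {
        false
    }

fun main() {
    // Sample targets from the customer responsibilities doc; adjust as needed.
    val targets = listOf("mcr.microsoft.com" to 443, "login.microsoftonline.com" to 443)
    for ((host, port) in targets) {
        println("$host:$port reachable = ${checkOutbound(host, port)}")
    }
}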
Route Table Issues

Symptom:
Failed to create Azure Spring Apps service.
Failed to start Azure Spring Apps service.
Failed to delete Azure Spring Apps service.

Error Messages:
Invalid VNet configuration: Invalid RouteTable xxx. This may be a customer error if providing wrong route table information
Invalid VNet configuration: Please make sure to grant Azure Spring Apps service permission to the scope /subscriptions/xxx/resourceGroups/xxx/providers/Microsoft.Network/routeTables/xxx. Refer to https://aka.ms/asc/vnet-permission-help for more details

Common Causes of the issue:
You did not grant Owner, or User Access Administrator and Network Contributor, permission to "Azure Spring Cloud Resource Provider" on your route tables. The platform needs to write new route table rules into the given route tables; without sufficient permission, the operation fails.
You provided the wrong route table information, so the platform cannot find the route tables.

How to fix:
Unless you have specific requirements for routing outbound traffic through a designated gateway or firewall, there is no need to create your own route tables in the subnets allocated to Azure Spring Apps. The platform automatically creates a route table in the subnets and configures the appropriate route rules for the underlying nodes.

If you do need to route outbound traffic to your own gateway or firewall:
You can use your own route tables. Before creating the Azure Spring Apps instance, make sure to associate your custom route tables with the two specified subnets. If your custom subnets contain route tables, Azure Spring Apps acknowledges the existing route tables during instance operations and adds or updates rules accordingly.
You can also use the route tables created by the platform; just add your own routing rules to them. You will see rules named like aks-nodepoolxx-xxxx-vmssxxxxx_xxxx - these are the rules added by Azure Spring Apps and they must not be updated or removed. You can add your own rules to route other outbound traffic, for example using 0.0.0.0/0 to route all other traffic to your gateway or firewall.
If you created your own routing rule to route internet outbound traffic, make sure the traffic can still reach all the targets mentioned in Customer responsibilities running Azure Spring Apps in a virtual network.

User Defined Route Issues

Symptom:
Failed to create Azure Spring Apps service.
Failed to start Azure Spring Apps service.
Failed to use the Public Endpoint feature.
Failed to use log stream or connect to the app console from the public network.

Error Messages:
Invalid VNet configuration: Route tables need to be bound on subnets when you select outbound type as UserDefinedRouting.
Invalid VNet configuration: Both route table need default route(0.0.0.0/0).
When connecting to the console from the public network, it reports: A network error was encountered when connecting to "xxx.private.azuremicroservices.io". Please open the browser devtools or try with "az spring app connect" to check the detailed error message.
The Azure CLI log stream command may report: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='xxx.private.azuremicroservices.io', port=443): Max retries exceeded with url: /api/logstream/apps/authserver/instances/xxxxx

Common Causes of the issue:
When you choose to use a User Defined Route, the platform no longer creates any public IP address on the load balancer. It is therefore the customer's responsibility to make sure that both subnets can still make calls to all the public targets mentioned in Customer responsibilities running Azure Spring Apps in a virtual network.
This is the reason why we require both route tables to have a default route (0.0.0.0/0) that routes the traffic to your firewall. This firewall is responsible for Source Network Address Translation (SNAT) and Destination Network Address Translation (DNAT), converting local IP addresses to corresponding public IP addresses. This configuration enables the platform to successfully establish outbound connections to targets located in the public network.

By design, in the UDR type of Azure Spring Apps, we cannot use the following features:
- Enable Public Endpoint
- Use the public network to access the log stream
- Use the public network to access the console

The same limitations also apply to Azure Spring Apps instances using the Bring Your Own Route Table feature when the customer routes egress traffic to a firewall, because they introduce asymmetric routing into the cluster, which is where the problem occurs. Packets arrive on the firewall's public IP address but return to the firewall via the private IP address, so the firewall must block such traffic. We can refer to this doc for more details: Integrate Azure Firewall with Azure Standard Load Balancer.

How to fix:
When creating Azure Spring Apps using the UDR outbound type, please carefully read the Control egress traffic for an Azure Spring Apps instance doc. It is recommended to diligently follow each step outlined in that document while establishing your own route tables and configuring the associated firewall settings. This approach ensures proper and effective control of outbound traffic for your Azure Spring Apps instance.

Hope the troubleshooting guide is helpful to you! To help you get started, we have monthly FREE grants on all tiers - 50 vCPU Hours and 100 memory GB Hours per tier.

Additional Resources
Learn using an MS Learn module or self-paced workshop on GitHub
Deploy your first Spring app to Azure!
Deploy the demo Fitness Store Spring Boot app to Azure
Deploy the demo Animal Rescue Spring Boot app to Azure
Learn more about implementing solutions on Azure Spring Apps
Deploy Spring Boot apps by leveraging enterprise best practices - Azure Spring Apps Reference Architecture
Migrate your Spring Boot, Spring Cloud, and Tomcat applications to Azure Spring Apps
Wire Spring applications to interact with Azure services
For feedback and questions, please raise your issues on our GitHub.
To learn more about Spring Cloud Azure, we invite you to visit the following links:
Reach out to us on StackOverflow or GitHub.
Reference Documentation
Conceptual Documentation
Code Samples
Spring Version MappingTroubleshooting guide for Application Configuration Service on Azure Spring Apps
Application Configuration Service overview

Application Configuration Service for VMware Tanzu (ACS) is one of the commercial VMware Tanzu components. It enables the management of Kubernetes-native ConfigMap resources that are populated from properties defined in one or more Git repositories.

Application Configuration Service is offered in two versions: Gen1 and Gen2. The Gen1 version mainly serves existing customers for backward compatibility purposes and is supported only until April 30, 2024. New service instances should use Gen2. The Gen2 version uses Flux as the backend to communicate with Git repositories and provides better performance compared to Gen1. You can check the generation information via the Azure Portal.

The article below introduces the troubleshooting guide for both generations.

Case 1: Application fails to start due to configuration not available

1. Make sure your Application Configuration Service setting is correct.

There are several checkpoints in the Application Configuration Service settings:
The Git URI and label are correct. For example, we have seen several cases that use the master branch while the default branch on GitHub has been changed to main.
The credentials are correct. If you are using a private Git repo, it is recommended to use SSH auth for security considerations. HTTP basic auth also works, but be aware that the token usually has an expiration date. Please remember to update the token before it expires. Please check the Authentication section in our docs.

To verify the above, you can take a look at Application Configuration Service's logs through Azure Log Analytics. The log will hint at the reason if the service is not able to access your Git repository.

// Works for both Application Configuration Service Gen1 and Gen2
AppPlatformSystemLogs
| where LogType == "ApplicationConfigurationService"
| project TimeGenerated , ServiceName , Log , _ResourceId
| limit 100

If you are using Application Configuration Service Gen2, it is also worthwhile to take a look at the Flux logs:

// Only available in Application Configuration Service Gen2
AppPlatformSystemLogs
| where LogType == "Flux"
| project TimeGenerated , ServiceName , Log , _ResourceId
| limit 100

2. Make sure the app is bound to ACS.

To explicitly use the Application Configuration Service feature in an app, you have to bind the app through the Azure Portal or the Azure command line. It is unbound by default.

# Azure command line to bind the app
az spring application-configuration-service bind --app <app-name>

3. Make sure the deployment is configured with the correct pattern.

A pattern is a combination of {application}/{profile}. To explicitly tell Azure Spring Apps which pattern your deployment wants to use, you can configure it through the Azure Portal or the Azure command line.

// Bind config file pattern to your deployment
az spring app deploy \
  --name <app-name> \
  --artifact-path <path-to-your-JAR-file> \
  --config-file-pattern <config-file-pattern>

4. Restart the app.

You have to restart the application after the bind operation. Note that a restart is not mandatory if you do an app deploy instead.

Case 2: Configuration not refreshed in application

The refresh strategies doc provides code examples covering the end-to-end workflow for refreshing your Java Spring Boot application configuration after you update the configuration file in the Git repository. The refresh frequency is 60 seconds in Azure Spring Apps, but please allow another 60 seconds for the change to be reflected in the ConfigMap.
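As a rough illustration of what a refresh-aware bean can look like on the application side, consider the following Kotlin sketch. It assumes Spring Cloud's context support (spring-cloud-context, typically pulled in by the config client dependency) is on the classpath; app.greeting is a made-up property name used purely for demonstration.

import org.springframework.beans.factory.annotation.Value
import org.springframework.cloud.context.config.annotation.RefreshScope
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RestController

// Beans annotated with @RefreshScope are rebuilt when the Spring context
// is refreshed, so they pick up the new values from the updated ConfigMap.
@RefreshScope
@RestController
class GreetingController(
    @Value("\${app.greeting:Hello}") private val greeting: String
) {
    @GetMapping("/greeting")
    fun greeting(): String = greeting
}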
If you still hit any issues, you can also follow the troubleshooting guide below.

1. Make sure the Application Configuration Service setting still uses the correct credentials.

Credentials may have expired without being updated in the Application Configuration Service settings. You can verify this with the same step as in Case 1, via the logs in Azure Log Analytics.

2. Restart the app.

Another possible reason that the refresh doesn't work in your app is that the Spring context is not refreshed, which could be a code issue in the app. You may restart the app to check the result.

Hope the troubleshooting guide is helpful to you! To help you get started, we have monthly FREE grants on all tiers - 50 vCPU Hours and 100 memory GB Hours per tier.

Additional Resources
Learn using an MS Learn module or self-paced workshop on GitHub
Deploy your first Spring app to Azure!
Deploy the demo Fitness Store Spring Boot app to Azure
Deploy the demo Animal Rescue Spring Boot app to Azure
Learn more about implementing solutions on Azure Spring Apps
Deploy Spring Boot apps by leveraging enterprise best practices - Azure Spring Apps Reference Architecture
Migrate your Spring Boot, Spring Cloud, and Tomcat applications to Azure Spring Apps
Wire Spring applications to interact with Azure services
For feedback and questions, please raise your issues on our GitHub.
To learn more about Spring Cloud Azure, we invite you to visit the following links:
Reach out to us on StackOverflow or GitHub.
Reference Documentation
Conceptual Documentation
Code Samples
Spring Version Mapping