In a previous blog post, Parikshit Savjani explained PgBouncer and the importance of using connection pooling with Azure Database for PostgreSQL. He also demonstrated the performance improvements with a simple pgbench benchmark and shared the steps to install PgBouncer on a VM. This blog post focuses on the steps to install and set up PgBouncer as a sidecar, enabling connection pooling between a containerized application and Azure Database for PostgreSQL Single Server. If you are looking to install and run PgBouncer on an Ubuntu VM, refer to the earlier blog post, Steps to install and setup PgBouncer connection pooling proxy with Azure DB for PostgreSQL.
Azure Database for PostgreSQL Single Server is a fully managed platform service that provides many database capabilities to support your application in a cloud-native design. However, because it is a PaaS offering, you cannot install external components, such as a PgBouncer connection pooler, on the database server itself. If you have a containerized application and want to use PgBouncer, regardless of whether it runs on Azure Kubernetes Service (AKS), Azure Container Instances (ACI), Azure Container Apps (ACA), or Azure Red Hat OpenShift (ARO), you can set up PgBouncer as a sidecar container in the same pod as your application's container. AKS was chosen for didactic purposes, following the architecture below:
Workflow
The proposed architecture consists of the following workflow:
- The application runs on AKS.
- The application uses Azure Database for PostgreSQL Single Server as its database layer.
- The application connects to the database through the PgBouncer connection pooler.
- The PgBouncer image is deployed as a sidecar container in the same pod as the application's container.
- The PgBouncer image requires two configuration files:
- A file named pgbouncer.ini, which holds all of the PgBouncer configuration.
- A file named userlist.txt, which holds the PostgreSQL credentials.
- Azure Key Vault provides a security layer for managing and storing application credentials, connection strings, and other secrets. In this architecture, Azure Key Vault is used to securely store the database credentials.
- The Azure Storage account stores the pgbouncer.ini configuration file so that it can be shared across multiple containers.
Note: In this article, a PSQL client image is deployed rather than a sample web application to establish the connection through PgBouncer.
This article describes best practices for those who are still using the Single Server option based on their own requirements. However, we encourage you to adopt Flexible Server, which has richer capabilities, including built-in PgBouncer. Take a look at the comparison table between Single Server and Flexible Server.
Steps to set up PgBouncer on Azure Kubernetes Service (AKS)
All of the steps below will be executed using Visual Studio Code, the Azure CLI, and kubectl.
Define global variables
##Define global variables
resourceGroup="pgbouncer-aks"
region="eastus"
aksClusterName="pgbouncer-aks"
postgreSvrName="pginst01"
postgreAdminUser="azureuser"
postgreAdmPwd="Bluelephant1"
stgAccountName="stgpgfiles"
stgFileShare="fspgfiles"
keyVaultName="kvpgsidecar"
Create the resource group
An Azure resource group is a logical group in which Azure resources are deployed and managed. Let's create a resource group using the az group create command in the East US region.
az group create --name $resourceGroup --location $region
Deploy the AKS cluster
Deploy an AKS cluster to run the PgBouncer sidecar container and a PSQL client, which will be used to test PgBouncer connectivity to the PostgreSQL server.
az aks create -n $aksClusterName -g $resourceGroup --enable-addons azure-keyvault-secrets-provider --enable-managed-identity --node-count 1 --node-osdisk-type Ephemeral --enable-secret-rotation
Install kubectl
kubectl is the Kubernetes command-line client. To connect to the Kubernetes cluster from Visual Studio Code running on the local computer, we need to install it. If you use the Azure Cloud Shell, kubectl is already installed.
az aks install-cli
It is also necessary to add kubectl to your PATH environment variable.
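A minimal sketch of those two steps on a Linux or WSL shell follows; the install path is an assumption, so adjust it to whatever az aks install-cli reports, and then download the cluster credentials so kubectl can reach the new cluster:
##Add kubectl to the PATH (adjust to the path reported by az aks install-cli)
export PATH=$PATH:~/.azure-kubectl
##Merge the AKS cluster credentials into the local kubeconfig
az aks get-credentials --resource-group $resourceGroup --name $aksClusterName
##Quick sanity check: the single node should show as Ready
kubectl get nodes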
Create the Azure Database for PostgreSQL Single-Server
Azure Database for PostgreSQL is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. Let's use the command below to create the PostgreSQL instance.
az postgres server create --name $postgreSvrName --resource-group $resourceGroup --location $region --admin-user $postgreAdminUser --admin-password $postgreAdmPwd --sku-name "GP_Gen5_2"
##Allow client IP addresses on PostgreSQL server firewall
##Read more here: Configure a server-based firewall rule
az postgres server firewall-rule create --resource-group $resourceGroup --server $postgreSvrName --name AllowIps --start-ip-address <client-ip> --end-ip-address <client-ip>
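The rule above only opens the firewall for your client IP. For the AKS pods to reach the server as well, you typically also need to allow the cluster's outbound IP or, as a broader and less restrictive option, allow access from Azure services. A hedged example of the latter (the rule name AllowAllAzureIps is just an illustration):
##Allow connections from Azure services (the 0.0.0.0 start/end addresses create the special "Azure services" rule)
az postgres server firewall-rule create --resource-group $resourceGroup --server $postgreSvrName --name AllowAllAzureIps --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0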
Now, let's create the application credential that will be used for the database connection. Connect to the server as the admin user and run the SQL statements below (a connection example follows the SQL):
CREATE ROLE app WITH LOGIN NOSUPERUSER INHERIT CREATEDB NOCREATEROLE NOREPLICATION PASSWORD 'app123';
GRANT ALL PRIVILEGES ON DATABASE postgres TO app;
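A minimal connection sketch, assuming the psql client is installed on your workstation and your client IP was allowed in the firewall rule above (note that Single Server expects the username in the user@servername format):
##Connect as the admin user, then run the CREATE ROLE / GRANT statements above
psql "host=$postgreSvrName.postgres.database.azure.com port=5432 dbname=postgres user=$postgreAdminUser@$postgreSvrName sslmode=require"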
Create the Azure Key Vault
Azure Key Vault is a cloud service for securely storing and accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, certificates, or cryptographic keys. In our scenario, we are going to use Azure Key Vault to secure the application's username and password, which the administrator will put in the userlist.txt file:
az keyvault create --name $keyVaultName --resource-group $resourceGroup --location $region
Create the Azure Storage Account and the File Share service
The Azure Storage platform is Microsoft's cloud storage solution for modern data storage scenarios. The Azure Storage platform includes different services; in our scenario, we are going to use the Azure File Share service, which is a fully managed file share in the cloud. Azure Files can simplify new cloud development projects, and we will use it in this containerization approach as a persistent volume for the containers.
## Storage Account
az storage account create --resource-group $resourceGroup --name $stgAccountName --location $region --kind StorageV2 --sku Standard_LRS --enable-large-file-share --output none
##Retrieve the storage account key and store it in a variable
stgAccountKey=$(az storage account keys list -g $resourceGroup -n $stgAccountName --query [0].value -o tsv)
## File Share
az storage share-rm create --resource-group $resourceGroup --storage-account $stgAccountName --name $stgFileShare --quota 1024 --enabled-protocols SMB --output none
Configuring the PgBouncer image
Now, we are going to execute the steps described in the PgBouncer sidecar image reference. To configure and use this image, we need to perform the three actions below:
- In the pgbouncer.ini file, copy and paste the configuration below and change the line host=mypgserver.postgres.database.azure.com port=5432 to reflect the server address of the Azure Database for PostgreSQL instance created earlier.
[databases]
# Please change "host=postgres.default port=5432"
# to reflect your PostgreSQL target:
* = host=pginst01.postgres.database.azure.com port=5432 # CHANGE HERE

[pgbouncer]
# Do not change these settings:
listen_addr = 0.0.0.0
auth_file = /etc/pgbouncer_sec/userlist.txt
auth_type = trust
server_tls_sslmode = verify-ca
server_tls_ca_file = /etc/root.crt.pem

# These are defaults and can be configured
# please leave them as defaults if you are
# uncertain.
listen_port = 5432
unix_socket_dir =
user = postgres
pool_mode = transaction
max_client_conn = 100
ignore_startup_parameters = extra_float_digits
admin_users = postgres

# Please add any additional settings below this line
- Input the username and password created for the application's database connection into the userlist.txt file (an example of the file format is sketched after the upload command below). This file must be uploaded to Azure Key Vault.
- Upload the pgbouncer.ini file to the Azure File Share created in the earlier steps.
az storage file upload --account-name $stgAccountName --account-key $stgAccountKey --share-name $stgFileShare --source "pgbouncer.ini"
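For reference, PgBouncer's auth_file uses one quoted "username" "password" pair per line. The sketch below is an assumption based on the app role created earlier; with Single Server, the username usually carries the @servername suffix:
##Create userlist.txt locally before uploading it to Key Vault (values are illustrative)
cat > userlist.txt <<'EOF'
"app@pginst01" "app123"
EOF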
Configuring the Azure Key Vault
Now, we are going to proceed with the Azure Key Vault configuration.
- Execute the command below to upload the PostgreSQL username and password credentials (the userlist.txt file) to the Key Vault.
Note: the userlist.txt file will be mounted within the pod as a secret volume, as explained in the next step.
az keyvault secret set --name kvpguserslist --vault-name $keyVaultName --file userlist.txt
- Set the Storage Account credential secrets that AKS will use to authenticate with the Storage Account.
az keyvault secret set --name kvazurestorageaccountname --vault-name $keyVaultName --value $stgAccountName
az keyvault secret set --name kvazurestorageaccountkey --vault-name $keyVaultName --value $stgAccountKey
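To confirm that the three secrets were created, you can list them by name:
##List the secret names stored in the vault
az keyvault secret list --vault-name $keyVaultName --query "[].name" -o tsv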
Deploy the application and resources on AKS
Let's create a YAML file to deploy the AKS workloads. In this YAML file, we will configure the following objects:
- PostgreSQL client image: a container image that wraps the PSQL client tool, used to test PgBouncer connectivity to the PostgreSQL server.
- PgBouncer sidecar image: a PgBouncer container image preconfigured for connection pooling against Azure Database for PostgreSQL.
- Secret Provider Class: The Azure Key Vault Provider for Secrets Store CSI Driver allows for the integration of an Azure key vault as a secret store with an Azure Kubernetes Service (AKS) cluster via a CSI volume.
- Persistent Volume & Persistent Volume Claim
Note: it is necessary to retrieve the Azure Key Vault secrets provider identity that is automatically provisioned by the AKS add-on, using the command below:
##Retrieving the managed identity created on AKS deployment when setting the parameter "--enable-managed-identity"
az aks show -n $aksClusterName -g $resourceGroup --query addonProfiles.azureKeyvaultSecretsProvider.identity.clientId -o tsv
Copy the command output and use it to replace the corresponding value in the YAML below, alongside the other configurations. Here is the YAML file that we will use: akspgbouncer.yml (github.com)
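Depending on your Key Vault permission model, this add-on identity may also need read access to the secrets. A hedged example using an access policy follows (skip it if your vault uses Azure RBAC and you grant an equivalent role assignment instead):
##Capture the add-on identity and grant it "get" permission on secrets
clientId=$(az aks show -n $aksClusterName -g $resourceGroup --query addonProfiles.azureKeyvaultSecretsProvider.identity.clientId -o tsv)
az keyvault set-policy --name $keyVaultName --secret-permissions get --spn $clientId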
Once the YAML file is edited, execute the command below to deploy it:
kubectl apply -f c:\temp\[filename].yml
Expected Results
Once PgBouncer is deployed on the AKS cluster, and if you correctly followed all the steps above, you can expect the pods to be in Running status, with outputs like those shown in the commands and screenshots below:
- Run the command below to list and check the pod status:
kubectl get pods -l app=psql
- Run the command below to check both deployed containers (psql & azure-pgbouncer sidecar):
kubectl describe pod -l app=psql
- Run the command below to see the connectivity through PgBouncer (an interactive alternative is sketched after this list).
Note: because the PgBouncer sidecar runs in a container in the same pod as the application (the PSQL client), it appears to the PSQL client as localhost, so the client uses 127.0.0.1 to connect to PgBouncer. The PSQL client image is configured to use PgBouncer on localhost by default, as shown below:
kubectl logs -l app=psql -c psql --follow
- The output below shows a successful connection:
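If you prefer to test interactively instead of reading the logs, you can open a psql session from inside the client container and connect through the pooler on 127.0.0.1; the username and database below are assumptions matching the app role created earlier:
##Grab the pod name by label and open an interactive psql session through the sidecar
podName=$(kubectl get pod -l app=psql -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $podName -c psql -- psql -h 127.0.0.1 -p 5432 -U app@pginst01 -d postgres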