@Parikshit Savjani, in the previous blog post, explained PgBouncer and the importance of using connection pooling with Azure Database for PostgreSQL. He also showed the performance improvements with a simple pgbench benchmark test and shared the steps to install PgBouncer in a VM. This blog post focuses on the steps to install and set up PgBouncer as a sidecar, enabling connection pooling between a containerized application and Azure DB for PostgreSQL Single Server. If you are looking to install and run PgBouncer on an Ubuntu VM, refer to the previous blog post, Steps to install and setup PgBouncer connection pooling proxy with Azure DB for PostgreSQL.
Azure Database for PostgreSQL Single Server is a fully managed platform service that provides many database capabilities to support your application in a cloud-native design. However, it is a PaaS offering, so you cannot install any external component on the database server, such as a PgBouncer connection pooler. If you have a containerized application and want to implement PgBouncer, whether it runs on Azure Kubernetes Service (AKS), Azure Container Instances (ACI), Azure Container Apps (ACA), or Azure Red Hat OpenShift (ARO), you can set up PgBouncer as a sidecar container in the same pod where your application's container runs. AKS has been chosen for didactic purposes, following the architecture below:
The proposed architecture consists of the following workflow:
Note: In this article, a PSQL client image will be deployed, rather than a sample web application, to establish the connection through PgBouncer.
| This article describes best practices for those who are still using the Single Server option based on their own requirements. However, we encourage you to adopt Flexible Server, which has richer capabilities, including a built-in PgBouncer. Take a look at the comparison table between Single Server and Flexible Server. |
All of the steps below will be executed using Visual Studio Code, Azure CLI, and Kubectl.
##Define global variables
resourceGroup="pgbouncer-aks"
region="eastus"
aksClusterName="pgbouncer-aks"
postgreSvrName="pginst01"
postgreAdminUser="azureuser"
postgreAdmPwd="Bluelephant1"
stgAccountName="stgpgfiles"
stgFileShare="fspgfiles"
keyVaultName="kvpgsidecar"
An Azure resource group is a logical group in which Azure resources are deployed and managed. Let's create a resource group using the az group create command in the East US region.
az group create --name $resourceGroup --location $region
Deploy an AKS cluster to run the PgBouncer sidecar container and the PSQL client, in order to test PgBouncer connectivity with the PostgreSQL server.
az aks create -n $aksClusterName -g $resourceGroup --enable-addons azure-keyvault-secrets-provider --enable-managed-identity --node-count 1 --node-osdisk-type Ephemeral --enable-secret-rotation
kubectl is the Kubernetes command-line client. To connect to the Kubernetes cluster from Visual Studio Code running on the local computer, we need to install it. If you use Azure Cloud Shell, kubectl is already installed.
az aks install-cli
It is necessary to add the kubectl install location to your PATH environment variable.
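For example, on Linux or macOS you can append the install directory to PATH for the current session. The directory below is an assumption; use the install location that `az aks install-cli` reports on your machine:

```shell
# Append the kubectl install directory to PATH for the current session.
# $HOME/.azure-kubectl is an assumed location; adjust it to match the
# path printed by `az aks install-cli`.
KUBECTL_DIR="$HOME/.azure-kubectl"
export PATH="$PATH:$KUBECTL_DIR"
echo "$PATH"
```

To make the change permanent, add the `export` line to your shell profile (for example, `~/.bashrc`).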
Azure Database for PostgreSQL is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. Let's use the command below to create the PostgreSQL instance.
az postgres server create --name $postgreSvrName --resource-group $resourceGroup --location $region --admin-user $postgreAdminUser --admin-password $postgreAdmPwd --sku-name "GP_Gen5_2"
##Allow client IP addresses on PostgreSQL server firewall
##Read more here: Configure a server-based firewall rule
az postgres server firewall-rule create --resource-group $resourceGroup --server $postgreSvrName --name AllowIps --start-ip-address <client-ip> --end-ip-address <client-ip>
Now, let's create the application credential to be used in the database connection.
CREATE ROLE app WITH LOGIN NOSUPERUSER INHERIT CREATEDB NOCREATEROLE NOREPLICATION PASSWORD 'app123';
GRANT ALL PRIVILEGES ON DATABASE postgres TO app;
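Note that Azure Database for PostgreSQL Single Server expects logins in the user@servername format, so connecting with psql to run the statements above looks roughly like the sketch below (the server and user names match the variables defined earlier):

```shell
# Build a connection string for Single Server; note the user@servername
# login format required by the Single Server gateway.
postgreSvrName="pginst01"
postgreAdminUser="azureuser"
connStr="host=${postgreSvrName}.postgres.database.azure.com port=5432 dbname=postgres user=${postgreAdminUser}@${postgreSvrName} sslmode=require"
echo "$connStr"
# psql "$connStr"   # then run the CREATE ROLE / GRANT statements above
```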
Azure Key Vault is a cloud service for securely storing and accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, certificates, or cryptographic keys. In our scenario, we are going to use Azure Key Vault to secure the username and password of the application, which the administrator will put in the userlist.txt file:
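For reference, a minimal userlist.txt for the app role created above could look like the following (with auth_type = trust, as in the pgbouncer.ini used later, the file mainly lists which users may connect; the quoted password field can also hold an md5 hash instead of plain text):

```
"app" "app123"
```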
az keyvault create --name $keyVaultName --resource-group $resourceGroup --location $region
The Azure Storage platform is Microsoft's cloud storage solution for modern data storage scenarios. The Azure Storage platform includes different services; in our scenario, we are going to use the Azure Files service, which is a fully managed file share in the cloud. Azure Files can be used to simplify new cloud development projects, and we will use it in the containerization approach as a persistent volume for the containers.
## Storage Account
az storage account create --resource-group $resourceGroup --name $stgAccountName --location $region --kind StorageV2 --sku Standard_LRS --enable-large-file-share --output none
##Retrieve the storage account key and store it in a variable
stgAccountKey=$(az storage account keys list -g $resourceGroup -n $stgAccountName --query [0].value -o tsv)
## File Share
az storage share-rm create --resource-group $resourceGroup --storage-account $stgAccountName --name $stgFileShare --quota 1024 --enabled-protocols SMB --output none
Now, we are going to execute the steps described in the PgBouncer reference. To configure and use this image, we need to execute the three actions below: create the pgbouncer.ini configuration file, upload it to the file share, and store the user list in Key Vault. Here is the pgbouncer.ini file:
[databases]
# Please change "host=postgres.default port=5432"
# to reflect your PostgreSQL target:
* = host=pginst01.postgres.database.azure.com port=5432 # CHANGE HERE
[pgbouncer]
# Do not change these settings:
listen_addr = 0.0.0.0
auth_file = /etc/pgbouncer_sec/userlist.txt
auth_type = trust
server_tls_sslmode = verify-ca
server_tls_ca_file = /etc/root.crt.pem
# These are defaults and can be configured
# please leave them as defaults if you are
# uncertain.
listen_port = 5432
unix_socket_dir =
user = postgres
pool_mode = transaction
max_client_conn = 100
ignore_startup_parameters = extra_float_digits
admin_users = postgres
# Please add any additional settings below this line
az storage file upload --account-name $stgAccountName --account-key $stgAccountKey --share-name $stgFileShare --source "pgbouncer.ini"
Now, we are going to proceed with the Azure Key Vault configuration.
az keyvault secret set --name kvpguserslist --vault-name $keyVaultName --file userlist.txt
Note: the userlist.txt file will be mounted within the pod as a secret volume, as explained in the next step.
az keyvault secret set --name kvazurestorageaccountname --vault-name $keyVaultName --value $stgAccountName
az keyvault secret set --name kvazurestorageaccountkey --vault-name $keyVaultName --value $stgAccountKey
Let's create a YAML file to deploy the AKS workloads. In this YAML file we will configure the structures below:
Note: It is necessary to retrieve the Azure Key Vault Secrets Provider identity automatically provisioned by the AKS add-on, using the command below:
##Retrieve the managed identity created by the azure-keyvault-secrets-provider add-on enabled on the AKS deployment
az aks show -n $aksClusterName -g $resourceGroup --query addonProfiles.azureKeyvaultSecretsProvider.identity.clientId -o tsv
Copy the command output and replace it in the YAML file below, along with the other configurations. Here is the YAML file that we will use: akspgbouncer.yml (github.com)
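As a rough sketch of what that manifest configures (container images, volume names, and the SecretProviderClass name here are illustrative assumptions, not the exact contents of akspgbouncer.yml), the deployment pairs the PSQL client container with the PgBouncer sidecar in one pod, mounting pgbouncer.ini from the Azure file share and userlist.txt through the Secrets Store CSI driver:

```yaml
# Illustrative sketch only; see akspgbouncer.yml for the real manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: psql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: psql
  template:
    metadata:
      labels:
        app: psql
    spec:
      containers:
      - name: psql                       # application container (PSQL client)
        image: <psql-client-image>       # assumed image providing the psql binary
      - name: pgbouncer                  # sidecar, reachable at 127.0.0.1:5432
        image: <pgbouncer-sidecar-image>
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: pgbouncer-ini            # pgbouncer.ini from the Azure file share
          mountPath: /etc/pgbouncer
          readOnly: true
        - name: userlist                 # userlist.txt mounted from Key Vault
          mountPath: /etc/pgbouncer_sec
          readOnly: true
      volumes:
      - name: pgbouncer-ini
        azureFile:
          secretName: azure-storage-secret   # storage account name/key
          shareName: fspgfiles
          readOnly: true
      - name: userlist
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: azure-kv-pgbouncer   # references the add-on identity clientId
```

The key point of the design is that both containers share the pod's network namespace, so the application reaches PgBouncer on 127.0.0.1 without any Service in between.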
Once the YAML file is edited, execute the command below to deploy it:
kubectl apply -f c:\temp\[filename].yml
Once PgBouncer is deployed on the AKS cluster, if you correctly followed all the steps above, you can expect the pods to be in Running status, with outputs like those shown by the commands and screenshots below:
kubectl get pods -l app=psql
kubectl describe pod -l app=psql
kubectl logs -l app=psql -c psql --follow
Note: Because the PgBouncer sidecar image runs in a container in the same pod as the application (the PSQL client), it appears to the PSQL client as localhost, so the client uses 127.0.0.1 to connect to PgBouncer. The PSQL client image is configured with the PgBouncer localhost endpoint as its default.