As cloud engineers and architects embrace containerization, ensuring secure communication becomes paramount. Data transmission and access control are critical aspects of security that need to be considered. SSH tunneling is a technique that can help achieve secure communication between different components of an application or solution. SSH tunneling creates an encrypted channel over an existing SSH connection, allowing secure data transmission between a local machine (SSH client) and a remote server (SSH server).
In this article, we will show how to set up SSH tunneling between containers running in the cloud that need to communicate with downstream resources via an SSH server hosted in a cloud VM.
Before diving into the implementation, let’s have a quick refresher about SSH tunneling. Also known as SSH port forwarding, SSH tunneling allows secure communication between two endpoints by creating an encrypted tunnel over an existing SSH connection. It enables data to be transmitted securely between a local machine (SSH client) and a remote server (SSH server) through an intermediary channel. Here is an overview of different scenarios where SSH tunneling can be used:
1. Secure Remote Access to Internal Services: An organization has internal services (e.g., databases, internal web applications) that are not exposed to the public internet for security reasons. Using SSH tunneling, employees can securely connect to these internal services from remote locations without exposing the services to the internet.
2. Bypassing Firewall Restrictions: Developers need to access specific resources that are behind a corporate firewall, but the firewall restricts direct access. By setting up an SSH tunnel, developers can securely forward traffic through the firewall, allowing them to access the restricted resources.
3. Protecting Sensitive Data in Transit: An application needs to send sensitive data between different components or services, and there's a risk of data interception. SSH tunneling can be used to encrypt the data as it travels between the components, ensuring that it remains secure in transit.
4. Accessing a Remote Database Securely: A developer needs to access a remote database server for maintenance or development purposes, but direct access is not permitted due to security policies. The developer can set up an SSH tunnel to securely connect to the remote database server without exposing it to the public internet.
5. Securely Using Insecure Protocols: An application uses an insecure protocol (e.g., FTP, HTTP) to communicate between different services. By wrapping the insecure protocol within an SSH tunnel, the communication can be secured, protecting the data from being intercepted.
6. Remote Debugging: A developer needs to debug an application running on a remote server, but direct access to the debugging port is restricted. SSH tunneling can be used to forward the debugging port from the remote server to the local machine, allowing the developer to securely debug the application.
7. Protecting IoT Device Communication: IoT devices need to communicate with a central server, but the communication is vulnerable to interception or tampering. By establishing an SSH tunnel between the IoT devices and the central server, the communication can be encrypted and secured, protecting the data in transit.
8. Secure File Transfer: Files need to be transferred securely between different systems or locations. SSH tunneling can be used to securely transfer files over the encrypted tunnel, ensuring that the data remains confidential and integrity is maintained.
9. Accessing Remote Services: A user needs to access services or resources hosted on a remote server securely. By setting up an SSH tunnel, the user can securely access the remote services as if they were running locally, protecting the data in transit.
10. Protecting Web Traffic: Web traffic needs to be secured when accessing websites or web applications over untrusted networks. SSH tunneling can be used to create a secure connection to a remote server, encrypting the web traffic and protecting it from eavesdropping or interception.
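Most of these scenarios build on the same primitive: local port forwarding. As a minimal sketch (the host name, user, and ports here are placeholders), the following command forwards local port 8080 through an SSH server to port 80 on that server:

```shell
# Forward connections made to localhost:8080 on the client
# through the encrypted SSH channel to port 80 on remote-host.
# -N: do not run a remote command; establish the tunnel only.
ssh -L 8080:localhost:80 -N azureuser@remote-host
```

While the tunnel is up, requests to http://localhost:8080 on the client reach the web server on remote-host over the encrypted channel.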
For this article, we will implement the following scenario: a container app running in Azure Container Apps needs to fetch content from an NGINX server hosted on a VM in a separate virtual network, with the traffic carried over an SSH tunnel established by a dedicated SSH client container. We will now script the implementation of this architecture. The script will create the necessary resources, configure the SSH tunnel, and deploy the containerized applications.
Before running the script, ensure that you have the following prerequisites: an active Azure subscription, the Azure CLI installed and authenticated (az login), Docker installed and running, and a bash shell.
Let's start by defining the parameters used in the script: the resource group name, location, virtual network and subnet names, VM settings (name, image, size, SSH key, admin username, and password), the container apps environment name, the container registry name and SKU, the container image and app names, the SSH port, and the NGINX port. A random string is generated and appended to the resource group name, container apps environment name, and container registry name to ensure uniqueness.
random=$(echo $RANDOM | tr '0-9' 'a-j')
echo "Random:" $random
export RESOURCE_GROUP=rg-ssh-$random
echo "RESOURCE_GROUP:" $RESOURCE_GROUP
export LOCATION="australiaeast"
export INFRA_VNET_NAME="myInfraVNet"
export APP_VNET_NAME="myAppVNet"
export INFRA_SUBNET_NAME="mySubnet"
export APP_SUBNET_NAME="acaSubnet"
export VM_NAME="nginxVM"
export VM_IMAGE="Ubuntu2204"
export VM_SIZE="Standard_DS1_v2"
export VM_KEY=mykey$random
export ADMIN_USERNAME="azureuser"
export ADMIN_PASSWORD="Password123$" # Replace with a strong password of your own; avoid committing secrets to source control
export CONTAINER_APPS_ENV=sshacae$random
export REGISTRY_NAME=sshacr$random
export REGISTRY_SKU="Basic"
export CONTAINER_APP_IMAGE="mycontainerapp:latest"
export SSH_CLIENT_CONTAINER_IMAGE="sshclientcontainer:latest"
export CONTAINER_APP_NAME="mycontainerapp"
export SSH_CLIENT_CONTAINER_APP_NAME="sshclientcontainerapp"
export SSH_PORT=22
export NGINX_PORT=80
Create a resource group using the az group create command. The resource group name and location are passed as parameters.
az group create --name $RESOURCE_GROUP --location $LOCATION
Create two virtual networks, myInfraVNet and myAppVNet, using the az network vnet create command. The address prefixes and subnet prefixes are specified for each virtual network. The az network vnet subnet update command is then used to delegate the myAppVNet subnet to Microsoft.App/environments, which is required for hosting a Container Apps environment.
az network vnet create --resource-group $RESOURCE_GROUP --name $INFRA_VNET_NAME --address-prefix 10.0.0.0/16 --subnet-name $INFRA_SUBNET_NAME --subnet-prefix 10.0.0.0/24
az network vnet create --resource-group $RESOURCE_GROUP --name $APP_VNET_NAME --address-prefix 10.1.0.0/16 --subnet-name $APP_SUBNET_NAME --subnet-prefix 10.1.0.0/24
az network vnet subnet update --resource-group $RESOURCE_GROUP --vnet-name $APP_VNET_NAME --name $APP_SUBNET_NAME --delegations Microsoft.App/environments
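To confirm the delegation took effect before continuing, you can query the subnet (this check is optional; it should print Microsoft.App/environments):

```shell
# Show the service delegation configured on the app subnet
az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $APP_VNET_NAME --name $APP_SUBNET_NAME --query "delegations[].serviceName" -o tsv
```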
Create VNET peering between myInfraVNet and myAppVNet using the az network vnet peering create command. Two peering connections are created: one from myInfraVNet to myAppVNet and the other from myAppVNet to myInfraVNet.
az network vnet peering create --name VNet1ToVNet2 --resource-group $RESOURCE_GROUP --vnet-name $INFRA_VNET_NAME --remote-vnet $APP_VNET_NAME --allow-vnet-access
az network vnet peering create --name VNet2ToVNet1 --resource-group $RESOURCE_GROUP --vnet-name $APP_VNET_NAME --remote-vnet $INFRA_VNET_NAME --allow-vnet-access
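As an optional sanity check, you can list the peerings and verify their state (both should report Connected once the reciprocal peering exists):

```shell
# List peerings on the infrastructure VNet and show their connection state
az network vnet peering list --resource-group $RESOURCE_GROUP --vnet-name $INFRA_VNET_NAME --query "[].{Name:name,State:peeringState}" -o table
```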
Create a network security group (NSG) for the nginxVM using the az network nsg create command. Two NSG rules are created to allow SSH traffic on port 22 and HTTP traffic on port 80.
az network nsg create --resource-group $RESOURCE_GROUP --name ${VM_NAME}NSG
az network nsg rule create --resource-group $RESOURCE_GROUP --nsg-name ${VM_NAME}NSG --name AllowSSH --protocol Tcp --direction Inbound --priority 1000 --source-address-prefixes '*' --source-port-ranges '*' --destination-address-prefixes '*' --destination-port-ranges $SSH_PORT --access Allow
az network nsg rule create --resource-group $RESOURCE_GROUP --nsg-name ${VM_NAME}NSG --name AllowHTTP --protocol Tcp --direction Inbound --priority 1001 --source-address-prefixes '*' --source-port-ranges '*' --destination-address-prefixes '*' --destination-port-ranges $NGINX_PORT --access Allow
Create a network interface for the nginxVM using the az network nic create command. The NIC is associated with myInfraVNet, mySubnet, and the NSG created earlier.
az network nic create --resource-group $RESOURCE_GROUP --name ${VM_NAME}NIC --vnet-name $INFRA_VNET_NAME --subnet $INFRA_SUBNET_NAME --network-security-group ${VM_NAME}NSG
Create a virtual machine using the az vm create command. The VM is created with the specified image, size, admin username, and password, and is attached to the NIC created earlier. Ensure that you have provided a value for the password in the ADMIN_PASSWORD variable.
az vm create --resource-group $RESOURCE_GROUP --name $VM_NAME --image $VM_IMAGE --size $VM_SIZE --admin-username $ADMIN_USERNAME --admin-password $ADMIN_PASSWORD --nics ${VM_NAME}NIC
export VM_PRIVATE_IP=$(az vm show -d -g $RESOURCE_GROUP -n $VM_NAME --query privateIps -o tsv)
echo "VM Private IP: $VM_PRIVATE_IP"
Generate an SSH key pair using the ssh-keygen command. The public key is then added to the VM using the az vm user update command.
# Generate an SSH key pair
ssh-keygen -t rsa -b 4096 -f $VM_KEY -N ""
# Add the public key to the VM
az vm user update --resource-group $RESOURCE_GROUP --name $VM_NAME --username $ADMIN_USERNAME --ssh-key-value "$(cat $VM_KEY.pub)"
# Print success message
echo "SSH key pair generated and public key added to VM $VM_NAME"
Install NGINX and the SSH server on the VM using the az vm run-command invoke command. This command runs a shell script on the VM to update the package repository, install NGINX, start the NGINX service, install the SSH server, and start the SSH service.
az vm run-command invoke --command-id RunShellScript --name $VM_NAME --resource-group $RESOURCE_GROUP --scripts "sudo apt-get update && sudo apt-get install -y nginx && sudo systemctl start nginx && sudo apt-get install -y openssh-server && sudo systemctl start ssh"
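To verify that both services came up, you can run a quick check over the same run-command channel; the output should include an HTTP 200 status from NGINX and active for the SSH daemon:

```shell
# Check NGINX responds locally and the SSH service is running on the VM
az vm run-command invoke --command-id RunShellScript --name $VM_NAME --resource-group $RESOURCE_GROUP --scripts "curl -s -o /dev/null -w '%{http_code}\n' http://localhost && systemctl is-active ssh"
```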
Create an Azure Container Registry using the az acr create command to store the container images that will be deployed to the container apps.
az acr create --resource-group $RESOURCE_GROUP --name $REGISTRY_NAME --sku $REGISTRY_SKU --location $LOCATION --admin-enabled true
Login to the Azure Container Registry using the az acr login command.
az acr login --name $REGISTRY_NAME
Create a Dockerfile for mycontainerapp. The Dockerfile specifies the base image, sets the working directory, copies the application files, installs the Python dependencies, exposes port 80, defines an environment variable, and runs the application.
cat <<'EOF' > Dockerfile.mycontainerapp
# Use an official Python runtime as a parent image
FROM python:3.8-slim

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Define environment variable
ENV NAME=World

# Run app.py when the container launches
CMD ["python", "app.py"]
EOF
Create a Dockerfile for the sshclientcontainer. The Dockerfile specifies the base image, installs the SSH client, copies the SSH private key, sets the working directory, copies the application files, exposes port 80, and runs the SSH client, which forwards container port 80 to the NGINX server on the VM. Note that baking a private key into a container image is acceptable for a demo, but in production you should mount the key as a secret instead.
cat <<EOF > Dockerfile.sshclientcontainer
# Use an official Ubuntu as a parent image
FROM ubuntu:20.04

# Install the SSH client and curl
RUN apt-get update && apt-get install -y openssh-client curl

# Copy the SSH private key (demo only; prefer mounting secrets in production)
COPY ${VM_KEY} /root/.ssh/${VM_KEY}
RUN chmod 600 /root/.ssh/${VM_KEY}

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Make port 80 available to the world outside this container
EXPOSE 80

# Open the SSH tunnel when the container launches:
# forward container port 80 to port 80 on the VM
CMD ["bash", "-c", "ssh -i /root/.ssh/${VM_KEY} -o StrictHostKeyChecking=no -L 0.0.0.0:80:localhost:80 ${ADMIN_USERNAME}@${VM_PRIVATE_IP} -N"]
EOF
Create a simple app that will be hosted on mycontainerapp. The app.py file contains a simple Flask application that fetches content from the NGINX server running on the VM (via the SSH client container) and renders it along with other content.
cat <<'EOF' > app.py
import requests
from flask import Flask, render_template_string

app = Flask(__name__)

@app.route('/')
def hello_world():
    # Fetch the NGINX response through the SSH client container's tunnel
    response = requests.get('http://sshclientcontainerapp:80')
    html_content = """
    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
        <title>Response Page</title>
    </head>
    <body>
        <header>
            <h1>Welcome to My Container App</h1>
        </header>
        <main>
            <div>Response Content - The following response has been received from the NGINX server running on the VM via SSH tunnel</div>
            <hr/>
            <div>{}</div>
        </main>
        <footer>
            <hr/>
            <p>&copy; 2024 My Flask App</p>
        </footer>
    </body>
    </html>
    """.format(response.text)
    return render_template_string(html_content)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=80)
EOF
Create a requirements.txt file pinning mutually compatible package versions:
cat <<'EOF' > requirements.txt
Flask==2.2.5
Werkzeug==2.2.3
requests==2.31.0
EOF
Build the Docker images for sshclientcontainer and mycontainerapp using the docker build command. The images are tagged with the Azure Container Registry name and pushed to the registry using the docker push command.
# Build the Docker image for sshclientcontainer
docker build -t $REGISTRY_NAME.azurecr.io/$SSH_CLIENT_CONTAINER_IMAGE -f Dockerfile.sshclientcontainer .
# Push the Docker image for sshclientcontainer to Azure Container Registry
docker push $REGISTRY_NAME.azurecr.io/$SSH_CLIENT_CONTAINER_IMAGE
# Build the Docker image for mycontainerapp
docker build -t $REGISTRY_NAME.azurecr.io/$CONTAINER_APP_IMAGE -f Dockerfile.mycontainerapp .
# Push the Docker image for mycontainerapp to Azure Container Registry
docker push $REGISTRY_NAME.azurecr.io/$CONTAINER_APP_IMAGE
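Before deploying, you can confirm that both images landed in the registry:

```shell
# List the repositories now stored in the Azure Container Registry
az acr repository list --name $REGISTRY_NAME -o table
```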
Create an Azure Container Apps environment using the az containerapp env create command. The environment is associated with the delegated subnet created earlier.
# Get the ID of the delegated subnet in myAppVNet (used as the environment's infrastructure subnet)
export INFRA_SUBNET_ID=$(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $APP_VNET_NAME --name $APP_SUBNET_NAME --query id --output tsv)
echo $INFRA_SUBNET_ID
# Create the Azure Container Apps environment
az containerapp env create --name $CONTAINER_APPS_ENV --resource-group $RESOURCE_GROUP --location $LOCATION --infrastructure-subnet-resource-id $INFRA_SUBNET_ID
Deploy the container apps to the Azure Container Apps environment using the az containerapp create command. The container images are pulled from the Azure Container Registry, and the container apps are configured to use the SSH tunnel for secure communication.
az acr login --name $REGISTRY_NAME
# Deploy sshclientcontainerapp; internal ingress is sufficient because only mycontainerapp needs to reach it
az containerapp create --name $SSH_CLIENT_CONTAINER_APP_NAME --resource-group $RESOURCE_GROUP --environment $CONTAINER_APPS_ENV --image $REGISTRY_NAME.azurecr.io/$SSH_CLIENT_CONTAINER_IMAGE --target-port 80 --ingress 'internal' --registry-server $REGISTRY_NAME.azurecr.io
# Deploy mycontainerapp with external ingress so it is reachable from the internet
az containerapp create --name $CONTAINER_APP_NAME --resource-group $RESOURCE_GROUP --environment $CONTAINER_APPS_ENV --image $REGISTRY_NAME.azurecr.io/$CONTAINER_APP_IMAGE --target-port 80 --ingress 'external' --registry-server $REGISTRY_NAME.azurecr.io
After deploying the container apps, you can test the deployment by accessing the public URL of mycontainerapp. The app should fetch content from the NGINX server running on the VM through the SSH tunnel and render it along with other content. Retrieve the public URL of mycontainerapp:
MY_CONTAINER_APP_URL=$(az containerapp show --name $CONTAINER_APP_NAME --resource-group $RESOURCE_GROUP --query 'properties.configuration.ingress.fqdn' -o tsv)
echo "mycontainerapp URL: http://$MY_CONTAINER_APP_URL"
Open the URL in your web browser by copying and pasting the URL printed in the previous step.
You should see a webpage that includes the response content from the NGINX server running on the VM via the SSH tunnel.
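You can also check the endpoint from the command line; a 200 status code indicates the full path (container app to SSH tunnel to NGINX) is working:

```shell
# Request the app over HTTPS and print only the HTTP status code
curl -s -o /dev/null -w "%{http_code}\n" https://$MY_CONTAINER_APP_URL
```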
After testing the deployment, you can clean up the resources by deleting the resource group. This will remove all the resources created in the script.
az group delete --name $RESOURCE_GROUP --yes --no-wait
In this article, we demonstrated how to secure containerized applications using SSH tunneling. We covered the steps to set up the necessary infrastructure, create and deploy containerized applications, and establish an SSH tunnel for secure communication between the container apps and a VM hosting an NGINX server.
By following these steps, you can ensure that your containerized applications communicate securely with downstream resources, enhancing the overall security of your cloud-native architecture.
For more information on securing containerized applications, refer to the Azure Container Apps documentation.
If you have any questions or need further assistance, feel free to consult the Azure documentation or reach out to Microsoft support.