
Apps on Azure Blog

Use Nginx as a reverse proxy in Azure Container App

wanjing (Microsoft)
Jul 13, 2023

A recent requirement from a customer was to host multiple container apps on Azure Container Apps within the same environment to optimize resource usage. To keep backend traffic within the container app environment, an Nginx container was used as a reverse proxy server and single public endpoint, routing all inbound traffic to the appropriate container app based on the URL path. The traffic flow for this architecture is: Internet → Nginx container app (external ingress) → backend container apps (internal ingress).

To demonstrate how to achieve this, I created a simple proof-of-concept (POC) and would like to share the configuration details in case anyone is interested in creating a similar system. 

 

Prerequisites

As stated in our official documentation for Azure Container Apps, there are two types of ingress that can be enabled: External and Internal. In addition, both HTTP and TCP are supported as ingress transport protocols.

 

Using this knowledge, I developed two basic container apps using Python Flask that simply display a "hello world" message. Next, I created an Nginx container app to act as a reverse proxy server and forward requests to the appropriate container app. All three containers were deployed into the same Azure Container Apps environment. The Nginx container app was configured to accept incoming traffic from "anywhere" (External), while the two basic container apps only accepted traffic "limited to within the Container Apps environment" (Internal).
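As a rough sketch (not the exact configuration used in the POC; the FQDNs are placeholders for each app's internal ingress address), the routing performed by the Nginx container app looks like this:

server {
        listen 80;

        # path-based routing: /app1 and /app2 map to the two internal container apps
        location /app1 {
                proxy_pass https://app1.internal.clustername.region.azurecontainerapps.io/;
                proxy_ssl_server_name on;
                proxy_http_version 1.1;
        }

        location /app2 {
                proxy_pass https://app2.internal.clustername.region.azurecontainerapps.io/;
                proxy_ssl_server_name on;
                proxy_http_version 1.1;
        }
}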

 

Scenario 1: HTTP ingress

My upstream container app only allows HTTP ingress.

To establish an HTTP connection between the upstream container app and my Nginx container app, I created the following default.conf file. This configuration directs all requests made to the /app2 path to a container app hosted at https://app2.internal.clustername.region.azurecontainerapps.io/. Because proxy_pass is given a URI (the trailing slash), the matched /app2 prefix is replaced with / when the request is forwarded upstream. Additionally, it serves a default HTML page to users who access the root path.

 

server {

        listen 80;
        server_name _;

        # forward /app2 traffic to the internal ingress of the app2 container app
        location /app2 {
                proxy_pass https://app2.internal.clustername.region.azurecontainerapps.io/;
                proxy_ssl_server_name on;    # send the server name via SNI for the upstream TLS handshake
                proxy_http_version 1.1;
        }

        # serve a static default page from the Nginx container for the root path
        location / {
                root /usr/share/nginx/html;
                index index.html;
        }
}

 

Result: requests to the /app2 path are forwarded to the upstream container app, while the root path returns the default HTML page.

Reference doc: Nginx HTTP Load Balancing.

 

Scenario 2: TCP ingress

My upstream container app only allows TCP ingress.

 

Per the Nginx documentation - Nginx TCP Load Balancing, configuration for TCP proxying must be placed inside a stream {} block, for example:

 

stream {
  upstream rtmp_servers {
    least_conn;
    server  10.0.0.2:1935;
    server  10.0.0.3:1935;
  }

  server {
    listen     1935;
    proxy_pass rtmp_servers;
  }
}

 

 

However, the stream {} block must be defined at the same level as the http {} block in the nginx.conf file. This means that if you simply add your stream {} configuration to the default.conf file, which the default nginx.conf includes from inside the http {} block, Nginx will raise an error and the container may fail to start with the following message:

"nginx: [emerg] "stream" directive is not allowed here in /etc/nginx/conf.d/default.conf"

 

To avoid this error, check the nginx.conf file and confirm that the default.conf file is included outside the http {} block, at the same level as http {}.

 

nginx.conf is the main configuration file for NGINX. It contains the directives that control the behavior of the entire NGINX process, and it typically holds the top-level configuration blocks for HTTP, mail, and stream servers.

 

default.conf is a supplemental configuration file that is included from nginx.conf to define a specific server. It typically contains settings such as the server name, listen ports, SSL/TLS certificates, and location blocks.

 

Below is my solution:

nginx.conf

 

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

}

# conf.d is included outside the http {} block so that default.conf can contain a stream {} block
include /etc/nginx/conf.d/*.conf;

 

 

default.conf

 

stream {
    upstream mybackend_server {
        # upstream container app with TCP ingress, reachable by its app name within the environment
        server containerapp3:8080;
    }

    server {
        listen 80;                     # port exposed by the Nginx container app
        proxy_pass mybackend_server;   # forward the raw TCP traffic to the upstream
    }
}

 

 

Dockerfile:

 

FROM nginx:alpine

# replace the image's default configuration with the files shown above
COPY ./nginx.conf /etc/nginx/nginx.conf
COPY ./default.conf /etc/nginx/conf.d/default.conf
EXPOSE 80

 

 

I hope this helps! I believe this solution could potentially be applied to other platforms as well.

Updated Jul 12, 2023
Version 1.0
  • General_Tso (Copper Contributor)

    I've just been trying to do something similar to this, but I can't get it working.

     

For some reason, setting the host header in the upstream request causes nginx to return a 403 error:

     

    proxy_set_header Host $host;
     
    If this is removed, it works.  It also works fine on normal Kubernetes, so I think something in Container Apps is causing this.  Any ideas?
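For reference, a minimal sketch of the location block that works in this situation (the FQDN is a placeholder): with no proxy_set_header Host line, Nginx defaults to sending $proxy_host, i.e. the upstream container app's own internal FQDN, as the Host header, which the Container Apps ingress can route correctly; overriding it with $host is what appears to trigger the 403.

location / {
        proxy_pass https://myapp.internal.clustername.region.azurecontainerapps.io/;
        proxy_ssl_server_name on;
        proxy_http_version 1.1;
        # proxy_set_header Host $host;   # removing this line avoids the 403 behind Container Apps ingress
}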
  • angelplayer (Copper Contributor)

    Hello,

     

I have a problem with websockets. I have a configuration like this:

        location /api/ {
          proxy_ssl_server_name on;
          proxy_http_version 1.1;
        }

        location /ws/ {
          proxy_http_version 1.1;
          proxy_ssl_server_name on;
     
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection 'Upgrade';
          proxy_set_header Host $host;
        }

The API part is working, but the websocket part is not. Any ideas how I can make it work for websockets?
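A minimal sketch of a websocket location that typically works behind the Container Apps ingress (the upstream FQDN is a placeholder; the proxy_pass line, the Connection header value, and dropping the Host override are usually the key pieces):

location /ws/ {
        proxy_pass https://mybackend.internal.clustername.region.azurecontainerapps.io/ws/;
        proxy_http_version 1.1;
        proxy_ssl_server_name on;
        # upgrade the connection for websocket traffic
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        # note: proxy_set_header Host $host is omitted here; as mentioned in the other
        # comments, overriding the Host header can cause errors behind Container Apps ingress
}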
  • vienltkidl (Brass Contributor)

Hi wanjing, I've just tried to add a location block in the server block and this error happens:

    [emerg] 1#1: "location" directive is not allowed here in /etc/nginx/conf.d/default.conf:19
    nginx: [emerg] "location" directive is not allowed here in /etc/nginx/conf.d/default.conf:19
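That error is expected if the location block ends up outside the http context: location is only allowed inside a server {} that itself sits inside http {}, and it is not valid inside a stream {} server. In the setup from this post, default.conf is included at the top level of nginx.conf (so that it can hold the stream {} block), so anything in it is outside http {}. One way around this (a sketch, with assumed file names) is to keep the stream and HTTP configuration in separate files and include each at the appropriate level:

# nginx.conf (fragment, sketch only)
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    # HTTP servers, where location blocks are allowed
    include /etc/nginx/conf.d/http/*.conf;
}

# stream configuration stays outside http {}
include /etc/nginx/conf.d/stream/*.conf;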
  • tgram-3d (Copper Contributor)

    This post was immensely helpful, specifically the comments about removing proxy_set_header Host $host. I had the same issue where it worked fine in Docker compose but not in ACA until I removed that header. My setup is similar to angelplayer with websockets and http. 

  • vienltkidl (Brass Contributor)

There are many issues when migrating from App Service with Docker Compose to Azure Container Apps, and the NGINX configuration files have to be modified accordingly. Just for reference, the following default.conf works well for me:

     

    variables_hash_bucket_size      128;
    variables_hash_max_size         2048;
    
    server {
        listen 80;
        server_name nginx.xxx-xxxxxxxx.southeastasia.azurecontainerapps.io;
        large_client_header_buffers 4 128k;
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
        proxy_send_timeout 300;
    
        location / {
            proxy_pass              http://app1.internal.xxx-xxxxxxxx.southeastasia.azurecontainerapps.io;
            proxy_http_version      1.1;
            proxy_set_header        Upgrade $http_upgrade;
            proxy_set_header        Connection keep-alive;
            proxy_set_header        X-Real-IP               $remote_addr;
            proxy_set_header        X-Forwarded-For         $proxy_add_x_forwarded_for;
            proxy_set_header        X-Forwarded-Ssl         on;
            proxy_set_header        X-Forwarded-Proto       https;
            proxy_set_header        X-Forwarded-Host        $host;
            proxy_set_header        X-Forwarded-Server      $host;
            add_header              Front-End-Https         on;
            fastcgi_buffers         32                      32k;
            fastcgi_buffer_size     64k;
            proxy_buffer_size       128k;
            proxy_buffers           4                       256k;
            proxy_busy_buffers_size 256k;
            proxy_read_timeout      300;
            proxy_connect_timeout   300;
            fastcgi_read_timeout    300;
        }
    
        location /api {
            rewrite /api/(.*) /$1 break;
            proxy_pass              http://app2.internal.xxx-xxxxxxxx.southeastasia.azurecontainerapps.io;
            proxy_http_version      1.1;
            proxy_read_timeout      300;
            proxy_connect_timeout   300;
            fastcgi_read_timeout    300;
        }
    }

     

  • Rsimk (Copper Contributor)

I had a problem with websockets too, in a similar setup. My app is based on Streamlit. I solved it by disabling CORS in Streamlit. Maybe there is a better solution, but it solved my problem.
