Apps on Azure Blog

Securing your AKS Deployments - Microservice User Authentication using Azure AD and Oauth 2 Proxy

owaino
May 17, 2023
Following on from my previous blog post covering SSL termination and NGINX, in this post we will expand our deployment to include user authentication for a new web app.

 

As with every article in this series, this post has been driven by customer use cases. In this instance the customer wanted a development web application on a public domain, but ring-fenced so that only users authenticated against Azure AD could access it.

 

Before we dive deeper into the use case and implementation, it's important to understand the various components if they are unfamiliar. I will cover the benefits of certain technologies as I go, but it is worth taking a quick look at these links as a level set if you need it:

 

 


Use Case

 

As mentioned, the customer was looking to add authentication to their development applications. There are many other reasons you may want to add user authentication to your application, for example any application that wants to serve differing content or features to users based on an associated property. More information on how scopes and permissions work in the Microsoft identity platform can be found here.

 

The customer in this instance wanted their production code base and development code base to be as similar as possible, so adding the Microsoft Authentication Library to the development application alone was not an option.

 

This got me thinking about the developer overhead that goes into implementing authentication at the application level. I started speaking to some developers I know and found a common theme, summarized eloquently by one developer who stated, "I just want to spend time developing value-add features, why do I have to care about authentication?"

I agree, in most cases authentication is an infrastructure and security concern.

 

Shifting your authentication outside of your application to middleware has two clear benefits:

 

  1. Developer Overhead - Developers can spend less time concerned about implementing authentication and more time working on features that add value to your business and customers.
  2. Application Workload - Offloading the authentication to dedicated middleware decreases the processing required by the compute that is hosting your application.

 

That being said, this isn't suitable for every application or every business. It could still be the case that developers remain responsible for authentication even once it has been shifted outside of the application, in which case it can actually increase developer workload.

 

DAPR

 

It is worth noting that OAuth2 Proxy was not the first authentication middleware I worked with when creating this demo and blog. Initially I used DAPR's OAuth2 middleware component. This component has a lot of promise and utilizes a sidecar architecture to deploy its authentication component, which is brilliant as it allows you to configure your authentication individually for each microservice. Unfortunately, however, DAPR's authentication component is still in alpha and has some unexpected behavior. I have raised a relevant issue here; hopefully this component will be something to revisit in the future.


Implementation

 

I will be using Azure Key Vault to store some of the sensitive data used in this blog. I will do a quick walkthrough below of how this is configured and set up for AKS. This is not essential for setting up OAuth2 in AKS and will add unnecessary complexity if you are building this as a POC/demo. Feel free to skip this part if it is not relevant for you.

 

I will also be building on the architecture created in my original blog post. This is not essential to complete before this implementation but will provide context.

 

I have changed the application being deployed from my boring API to quite an exciting web app developed by Mark Harrison, a Senior Specialist here at Microsoft UK.

 

We will be deploying a web application with an OAuth2 reverse proxy ensuring that only authenticated users are able to access the web app. We will also be deploying an API for the web app, which won't use the OAuth2 proxy and will only be accessible from inside the cluster. Take a look at the pods in the architecture at the top of the page for more clarity.

 

Prerequisites

 

 

 

Optional

 

 

 

The complete deployment files for this post can be found at: https://github.com/owainow/microservice-authentication-oauth2-aks

 

Azure Active Directory - Register an application

 

As we are going to be authenticating users accessing our application using Azure Active Directory we will need to create an application registration in our directory.

 

These registrations can be considered the definition of the application and are the objects that describe the application to Azure AD. To create our app registration we will go through the portal into Azure Active Directory.

 

When registering the application there is some information we need to provide, all of which can be changed once the app registration is created. The Redirect URI is important to highlight here as it is where users will be redirected once authentication is finished. We will provide a callback URL which will be used later when we configure our OAuth2 Proxy.

 

 

 

 

Creating the app registration.

 

Update the redirect URI to suit your domain and protocol.

 

https://<yourdomain>/oauth2/callback

 

Once your application is created take a note of the Application (client) ID on the overview page. We will need this later.

Within our application registration we will also need to create a client secret to be used to identify our application. We could also use a certificate for higher assurance, however for this example a secret will suffice.

 

 

Client secret.

 

Take a note of your client secret when it is created as it will only be available to view once (you can create a new secret if you lose it).
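If you prefer the command line, the app registration and client secret can also be created with the Azure CLI. This is a minimal sketch assuming a recent CLI version; the display name is just an example:

# Register the application with the OAuth2 Proxy callback as a web redirect URI
az ad app create --display-name "colors-web-oauth2" --web-redirect-uris "https://<yourdomain>/oauth2/callback"

# Create (or reset) a client secret for the registration - the secret value is only shown once
az ad app credential reset --id <Application (client) ID>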

 

This is all the setup required on the application registration for now, however it is worth highlighting some additional features. The app registration allows you to create custom branding for your login page to provide an integrated experience with the rest of your application.

 

API permissions are also important to be aware of. By default a Microsoft Graph permission is added to this application to enable retrieval of basic user data when signed in. Additional permissions can be added if they are required by the application, however the user will need to consent to the application using this data when they first log in.
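For reference, this is roughly how an additional delegated Microsoft Graph permission could be granted from the CLI; User.Read is shown purely as an illustration since it covers the basic sign-in profile:

# 00000003-0000-0000-c000-000000000000 is the well-known Microsoft Graph application ID
# e1fe6dd8-ba31-4d61-89e7-88639da4683d is the User.Read delegated permission
az ad app permission add --id <Application (client) ID> --api 00000003-0000-0000-c000-000000000000 --api-permissions e1fe6dd8-ba31-4d61-89e7-88639da4683d=Scope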

 

Azure Key Vault

 

In this scenario we will use Azure Key Vault to secure our secrets when they are used by our AKS applications.

In this demo we will be using secure cookies, and as a result we will need to create a cookie secret. We can create one with the following OpenSSL command.

 

export cookie_secret="$(openssl rand -hex 16)" # Create local variable 
echo $cookie_secret                            # Check output 
bbf240d482fc7236cd0bf01cec54422d 
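If you do not already have a Key Vault, one can be created with the Azure CLI before setting the secrets; the vault name below matches the one used later in this post, while the resource group and location are placeholders:

az keyvault create --name "aks-zero-trust-kv" --resource-group <resource-group> --location <location>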

 

Now we must set the secrets in Key Vault. To do this we will use the Azure CLI to save some time; this can also be done through the portal. First ensure that you are logged in and in the correct subscription, then run:

 

az keyvault secret set --vault-name "aks-zero-trust-kv" --name "oauth2-proxy-client-id" --value "<Application (Client) ID>"
az keyvault secret set --vault-name "aks-zero-trust-kv" --name "oauth2-proxy-client-secret" --value "<Client Secret>"
az keyvault secret set --vault-name "aks-zero-trust-kv" --name "oauth2-proxy-cookie-secret" --value $cookie_secret

 

If we then check our Key Vault we should see our secrets:

 

 

Key vault secrets

 

Azure Kubernetes Service

 

We now need to deploy our applications and components into our Kubernetes cluster.

 

Azure Key Vault Integration

 

To start with we will need to enable the Azure Key Vault provider for the Secrets Store CSI Driver on our cluster if we did not enable it at creation. We can do so with the following command:

 

az aks enable-addons --addons azure-keyvault-secrets-provider --name myAKSCluster --resource-group myResourceGroup

 

We can then verify the install by running:

 

kubectl get pods -n kube-system -l 'app in (secrets-store-csi-driver,secrets-store-provider-azure)'

 

Now that our driver is running on our cluster we need to decide how we are going to allow our AKS cluster to access our Key Vault. We have a number of options for doing that:

 

 

In this deployment we will use a user-assigned managed identity, although workload identity will very soon be GA. I would encourage you to take some time to look at the difference between the user-assigned managed identity we will use today and workload identity.

 

In the future I will edit these deployment files and blog to also show how to leverage workload identity.
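For reference, enabling workload identity on an existing cluster looks roughly like the following; this is a sketch only and is not used in the rest of this walkthrough:

# Enable the OIDC issuer and workload identity on an existing cluster (not used below)
az aks update -g <resource-group> -n <cluster-name> --enable-oidc-issuer --enable-workload-identity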

 

To update our cluster to use a managed identity (if it was not created with one) we can use the following command:

 

az aks update -g <resource-group> -n <cluster-name> --enable-managed-identity

 

We can then query the client ID of the identity that was created for the Key Vault secrets provider add-on:

 

az aks show -g <resource-group> -n <cluster-name> --query addonProfiles.azureKeyvaultSecretsProvider.identity.clientId -o tsv

 

We then need to add that client ID to our key vault with the appropriate permissions:

 

# set policy to access keys in your key vault
az keyvault set-policy -n <keyvault-name> --key-permissions get --spn <identity-client-id>
# set policy to access secrets in your key vault
az keyvault set-policy -n <keyvault-name> --secret-permissions get --spn <identity-client-id>
# set policy to access certs in your key vault
az keyvault set-policy -n <keyvault-name> --certificate-permissions get --spn <identity-client-id>

 

Feel free to check the permissions have been set in your Key Vault through the portal.

 

We must now create a SecretProviderClass. The SecretProviderClass will access our Key Vault using the managed identity we have just created. As OAuth2 Proxy also requires our secrets to be passed through as environment variables, we will include some secret objects in this file. Two important things to note here:

 

  1. Kubernetes Secrets - These Key Vault secrets are still fundamentally being passed through as Kubernetes secrets in this instance, as we are consuming them as environment variables. This may not be secure enough for some environments. We could alternatively mount these secrets into the pod and reference the mount point, though this still has its risks. We do still benefit from being able to rotate, update and disable secrets from Azure Key Vault.
  2. Secret Syncing - The great thing about the SecretProviderClass is that however you are accessing the secrets, you can continually synchronize them with the version in your Key Vault. The Kubernetes secret is created when the volume is mounted into the pod, at which point the latest version is pulled from the Key Vault. That being said, the SecretProviderClass does not restart application pods that are already running (see the rotation sketch below).
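As an aside, the add-on also supports periodic secret rotation, which keeps the mounted files and synced Kubernetes secrets up to date with Key Vault (running pods still need to pick up the change themselves). A sketch of enabling it, assuming the add-on is already installed:

# Enable periodic rotation for the Key Vault secrets provider add-on (default poll interval is 2 minutes)
az aks addon update -g <resource-group> -n <cluster-name> -a azure-keyvault-secrets-provider --enable-secret-rotation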

 

The SecretProviderClass requires the managed identity client ID and the tenant ID of your Key Vault (GitHub link):

 

apiVersion: v1 
kind: Namespace # We are splitting our app & API across namespaces for later usage.
metadata:
  labels:
    app.kubernetes.io/name: colors
  name: colors-web
---
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-aks-zero-trust-user-msi # needs to be unique per namespace
  namespace: colors-web
spec:
  provider: azure
  secretObjects:                                # secretObjects defines the desired state of synced K8s secret objects
    - secretName: client-id
      type: opaque
      data:
        - objectName: oauth2-proxy-client-id
          key: oauth2_proxy_client_id
    - secretName: client-secret
      type: opaque
      data:
        - objectName: oauth2-proxy-client-secret
          key: oauth2_proxy_client_secret
    - secretName: cookie-secret
      type: opaque
      data:
        - objectName: oauth2-proxy-cookie-secret
          key: oauth2_proxy_cookie_secret
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"          
    userAssignedIdentityID: <Managed Identity Client ID> 

    keyvaultName: aks-zero-trust-kv       # Set to the name of your key vault
    cloudName: ""                         # [OPTIONAL for Azure] if not provided, the Azure environment defaults to AzurePublicCloud
    objects:  |
      array:
        - |
          objectName: oauth2-proxy-client-id
          objectType: secret              # object types: secret, key, or cert
          objectVersion: ""               # [OPTIONAL] object versions, default to latest if empty
        - |
          objectName: oauth2-proxy-client-secret
          objectType: secret              # object types: secret, key, or cert
          objectVersion: ""               # [OPTIONAL] object versions, default to latest if empty
        - |
          objectName: oauth2-proxy-cookie-secret
          objectType: secret              # object types: secret, key, or cert
          objectVersion: ""               # [OPTIONAL] object versions, default to latest if empty


    tenantId: <Your tenant ID>        # The tenant ID of your key vault

 

We then must apply the secret provider class:

 

kubectl apply -f secretproviderclass.yaml

 

It is worth checking the secrets at this point and noting that the secret objects defined above have not yet been created. As mentioned, they are only created when an application pod mounts them.
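A quick check in the web namespace (taken from the SecretProviderClass manifest above) should return nothing yet:

kubectl get secrets -n colors-web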

 

Application Deployment

 

We now need to deploy our application. We will be deploying two applications in this example: one web app and one API. First let's deploy the API with the following manifest (GitHub link):

 

apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/name: colors
  name: colors-api
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: colors-api-depl
  namespace: colors-api


spec:
  replicas: 1
  selector:
    matchLabels:
      app: colors-api-service
  template:
    metadata:
      labels:
        app: colors-api-service
    spec:
      containers:
       - name: colors-api-image
         image: ghcr.io/markharrison/colorsapi:latest
         imagePullPolicy: Always
         resources:
          requests:
            memory: "128Mi"
            cpu: "500m"
          limits:
            memory: "256Mi"
            cpu: "1000m"
---
apiVersion: v1
kind: Service
metadata:
  name: colors-api-srv
  namespace: colors-api
spec:
  type: ClusterIP
  selector:
    app: colors-api-service
  ports:
  - name: colors-api-service-http
    protocol: TCP
    port: 80
    targetPort: 80

 

We now apply this manifest:

 

kubectl apply -f colorswebapi.yaml

 

Next we must deploy the web app with the following manifest (GitHub link):

 

apiVersion: v1 
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/name: colors
  name: colors-web
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: colors-web-depl
  namespace: colors-web


spec:
  replicas: 1
  selector:
    matchLabels:
      app: colors-web-service
  template:
    metadata:
      labels:
        app: colors-web-service
    spec:
      containers:
       - name: colors-web-image
         image: ghcr.io/markharrison/colorsweb:latest
         imagePullPolicy: Always
         resources:
          requests:
            memory: "128Mi"
            cpu: "500m"
          limits:
            memory: "256Mi"
            cpu: "1000m"
---
apiVersion: v1
kind: Service
metadata:
  name: colors-web-clusterip-srv
  namespace: colors-web
spec:
  type: ClusterIP
  selector:
    app: colors-web-service
  ports:
  - name: colors-web-service-http
    protocol: TCP
    port: 80
    targetPort: 80

 

Notice that up to this point there has been no mention of OAuth2. We are not passing any authentication information to the application, and the application has no user authentication built in.
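If you want to convince yourself of that, you can port-forward directly to the web service and see the app respond without any login prompt (a quick sanity check; the service name is taken from the manifest above):

kubectl port-forward -n colors-web svc/colors-web-clusterip-srv 8080:80
# In a second terminal:
curl -I http://localhost:8080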

 

We now need to create two components: our OAuth2 Proxy container to handle the authentication, and our ingress resource. First we will deploy the OAuth2 Proxy application.

 

The deployment looks as follows (GitHub link):

 

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    application: colors-service-oauth2-proxy
  name: colors-service-oauth2-proxy-deployment
  namespace: colors-web


spec:
  replicas: 1
  selector:
    matchLabels:
      application: colors-service-oauth2-proxy
  template:
    metadata:
      labels:
        application: colors-service-oauth2-proxy
    spec:
      containers:
      - args:
        - --provider=oidc
        - --azure-tenant=<Azure tenant ID> # Azure AD OAuth2 Proxy application Tenant ID
        - --pass-access-token=true
        - --cookie-name=_proxycookie 
        - --upstream=<Redirect URL>
        - --cookie-csrf-per-request=true
        - --cookie-csrf-expire=5m           # Avoid unauthorized csrf cookie errors.
        - --email-domain=*                  # Email domains allowed to use the proxy
        - --http-address=0.0.0.0:4180
        - --oidc-issuer-url=https://login.microsoftonline.com/<Tenant ID>/v2.0
        - --user-id-claim=oid



        name: colors-service-oauth2-proxy
        image: quay.io/oauth2-proxy/oauth2-proxy:v7.4.0
        imagePullPolicy: Always
        volumeMounts:
        - name: secrets-store01-inline
          mountPath: "/mnt/secrets-store"
          readOnly: true
        env:
        - name: OAUTH2_PROXY_CLIENT_ID # keep this name - it's required to be defined like this by OAuth2 Proxy
          valueFrom:
            secretKeyRef:
              name: client-id
              key: oauth2_proxy_client_id
        - name: OAUTH2_PROXY_CLIENT_SECRET # keep this name - it's required to be defined like this by OAuth2 Proxy
          valueFrom:
            secretKeyRef:
              name: client-secret
              key: oauth2_proxy_client_secret
        - name: OAUTH2_PROXY_COOKIE_SECRET # keep this name - it's required to be defined like this by OAuth2 Proxy
          valueFrom:
            secretKeyRef:
              name: cookie-secret
              key: oauth2_proxy_cookie_secret
        ports:
        - containerPort: 4180
          protocol: TCP
        resources:
          limits:
            cpu: 100m
            memory: 128Mi
          requests:
            cpu: 100m
            memory: 128Mi
            
      volumes:
        - name: secrets-store01-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "azure-aks-zero-trust-user-msi"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    application: colors-service-oauth2-proxy
  name: colors-service-oauth2-proxy-svc
  namespace: colors-web
spec:
  ports:
  - name: http
    port: 4180
    protocol: TCP
    targetPort: 4180
  selector:
    application: colors-service-oauth2-proxy
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "2000m"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "32k"
  name: colors-service-oauth2-proxy-ingress
  namespace: colors-web
spec:
  ingressClassName: nginx
  rules:
     - http:
        paths:
          - path: /oauth2
            pathType: Prefix
            backend:
              service:
                name: colors-service-oauth2-proxy-svc
                port:
                  number: 4180

 

In this deployment we are referencing the secret objects defined in the SecretProviderClass, which will be created once we apply this manifest. We can also see that we are using the secrets volume: the volumes section specifies the CSI driver and the SecretProviderClass, and the volume mount then mounts the secrets into the pod. It is this mount that triggers the creation of the synced Kubernetes secrets.

 

Once you have added the IDs for your specific deployment we can apply this manifest.

 

kubectl apply -f oauth2proxy.yaml

 

If we look for our secrets we should now see that they have been created:

 

REDMOND+owaino@DESKTOP-9V6KSRB MINGW64 ~/Documents/Azure-Demo-Projects/AKS-Zero-Trust (main)
$ kubectl get secrets
NAME            TYPE                DATA   AGE
client-id       opaque              1      2s
client-secret   opaque              1      2s
cookie-secret   opaque              1      2s
tls-secret      kubernetes.io/tls   2      6d17h

 

Now we will deploy the ingress. If you are following on from the previous post we will replace the existing ingress. If you are not, please install the NGINX Ingress Controller on your cluster.
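A minimal install of the community NGINX Ingress Controller with Helm looks like this if you need it (the namespace and release name are conventions, not requirements):

helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace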

 

The ingress manifest is as follows (GitHub link):

 

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  namespace: colors-web
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "360"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "360"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "360"
    nginx.ingress.kubernetes.io/proxy-body-size: "2000m"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "32k"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/auth-url: "https://www.owain.online/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://www.owain.online/oauth2/start?rd=https://www.owain.online/oauth2/callback"
    nginx.ingress.kubernetes.io/auth-response-headers: "Authorization, X-Auth-Request-Email, X-Auth-Request-User, X-Forwarded-Access-Token"


  labels:
    app: colors-web-service


spec: 
  ingressClassName: nginx
  tls:
   - hosts: 
     - owain.online
     secretName: test-tls


  rules:
    - host: owain.online
      http:
        paths:
          - path: /
            pathType: Prefix
            backend: 
              service: 
                name: colors-web-clusterip-srv
                port: 
                  number: 80
                  

 

There are many annotations on the nginx ingress. The three to highlight are:

 

nginx.ingress.kubernetes.io/auth-url: "https://www.owain.online/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://www.owain.online/oauth2/start?rd=https://www.owain.online/oauth2/callback"
nginx.ingress.kubernetes.io/auth-response-headers: "Authorization, X-Auth-Request-Email, X-Auth-Request-User, X-Forwarded-Access-Token"

 

The auth-url annotation indicates the URL that requests will be forwarded to for authentication when they hit this ingress; here we are using the /oauth2 endpoint that our OAuth2 Proxy stands up. auth-signin points to the starting URL of the authentication flow and passes our callback URL, and finally auth-response-headers allows us to specify which values from the authentication response we would like to forward for use by the application.

 

These are the basic required annotations for NGINX external authentication but I would encourage you to take a look at the broader set, some of which are very powerful.
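As a quick sanity check once the proxy and the ingress below are in place, the auth subrequest endpoint can be called directly; an unauthenticated request should return a 401, while a request carrying a valid session cookie returns 202 (the hostname here is the example domain used in this post):

curl -I https://www.owain.online/oauth2/auth
# Expect a 401 response when no valid _proxycookie session cookie is supplied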

 

Now that we understand the auth-related annotations, we can apply our ingress:

 

kubectl apply -f ingress-srv.yaml

 

We can now check that our pods are deployed and running as expected:

 

REDMOND+owaino@DESKTOP-9V6KSRB MINGW64 ~/Documents/Azure-Demo-Projects/AKS-Zero-Trust (main)
$ kubectl get pods
NAME                                                      READY   STATUS    RESTARTS        AGE
colors-service-oauth2-proxy-deployment-78ffb756f5-sd4v5   1/1     Running   0               16m
colors-web-depl-554b54449c-ntflf                          1/1     Running   2 (5d18h ago)   6d17h


REDMOND+owaino@DESKTOP-9V6KSRB MINGW64 ~/Documents/Azure-Demo-Projects/AKS-Zero-Trust (main)
$ kubectl get pods -n colors-api
NAME                               READY   STATUS    RESTARTS   AGE
colors-api-depl-79c887f867-vhgg9   1/1     Running   0          13m

 

Providing you see no errors, you should now be able to head to the domain or IP address you have been using for this deployment and, on the route you specified, be greeted by an Azure AD login screen.
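If instead you see an NGINX error or a redirect loop, the OAuth2 Proxy logs are usually the quickest place to look; the deployment name below is taken from the manifest above:

kubectl logs -n colors-web deployment/colors-service-oauth2-proxy-deployment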

 

 

Once we authenticate with a user in our Azure AD directory we are greeted by Mark's great colors application.

 

 

Azure AD Login

 

 

We have now managed to ring-fence our web application with Azure AD authentication, without having to make any code changes, and with Azure Key Vault integration!

 

Finally, it's time to configure our application to see what it does, and also to highlight that we can now access our internal API without exposing it publicly.

 

To configure the application we require the FQDN of the API service we deployed earlier. As Kubernetes by default does not restrict traffic between pods or namespaces, we can specify the service name of our API for internal calls. As our API service isn't exposed to the internet, we need to un-tick the box so that our calls are made by the pods running our application rather than by the client.

 

We will also need to include the route to the API which in this case is /colors/random but you can also take a look at the other options available here.

 

To get the FQDN of our service we can execute the following commands:

 

kubectl exec -it -n colors-api deployment/colors-api-depl -- apt-get update -y
kubectl exec -it -n colors-api deployment/colors-api-depl -- apt-get install dnsutils -y
kubectl exec -it -n colors-api deployment/colors-api-depl -- nslookup colors-api-srv

 

The output will contain our FQDN:

 

REDMOND+owaino@DESKTOP-9V6KSRB MINGW64 ~/Documents/Azure-Demo-Projects/AKS-Zero-Trust (main)
$ kubectl exec -it -n colors-api deployment/colors-api-depl -- nslookup colors-api-srv
Server:         10.0.0.10
Address:        10.0.0.10#53


Name:   colors-api-srv.colors-api.svc.cluster.local
Address: 10.0.254.92

 

Now let's configure the application:

 

We need to add the API route and un-check the box for direct calls:

 

http://colors-api-srv.colors-api.svc.cluster.local/colors/random

 

 

App configuration

Conclusion

 

It's worth taking another look at the architecture as a refresher on what has been implemented, including the SSL termination if you are following on from the first blog!

 

 

Deployed Architecture

 

The use of the OAuth2 reverse proxy has enabled us to authenticate at the ingress level. Although this application doesn't do anything with the headers that are forwarded on to it, you could easily now set feature flags or serve unique user content based on the authentication information passed through in the headers.

 

In my next blog post I will take a look at Network Policies and Open Service Mesh to examine how we can leverage different features to restrict network traffic, enable mTLS and manage observability.

 

As mentioned the yaml manifests included in this post are available at: https://github.com/owainow/microservice-authentication-oauth2-aks

Updated Jul 11, 2023
Version 3.0
  • tnabil (Copper Contributor)

    Thanks owaino, for this article. Looking at the OAuth2 Proxy GitHub home, I couldn't find any information about which organisation is behind it or who is supporting its development. With many open source projects getting abandoned after some time, I'm conscious of using something that is not guaranteed to be properly maintained and updated in the future.

    I checked out the Dapr OAuth2 middleware component but there's nothing indicating that it is still in alpha. As a matter of fact, it's documented in the stable 1.11 release. I can see the issue you've raised has been closed as stale. I was wondering if you had a chance to give this another try since the article was published.

  • shawncic (Copper Contributor)

    Is the OAuth2 proxy (or other proxies) the "best" way to host a web app in AKS behind a service/deployment that needs Azure AD authentication for end users? i.e. I was looking for something as simple as what is available in App Services, which manages the proxy/facade and ensures users are authenticated.

     

     

  • Hi shawncic, I haven't evaluated every possible way to say with certainty that OAuth2 Proxy is the best way. For APIs, for example, you can use APIM to handle authentication as opposed to a sidecar. If you want something akin to Easy Auth in App Service, Azure Container Apps offers this functionality out of the box. I am yet to revisit DAPR's OAuth2 component, which may also have become a good option since this was written. Finally, you could look at third-party load balancers that support auth, such as Citrix, to ring-fence your cluster.

  • ConorGriffin (Copper Contributor)

    Hi owaino, I've tried going through this tutorial end-to-end and I cannot access the web application; I'm getting an nginx error when browsing to the application. Any ideas what I could be doing wrong?

     

     

     

    App Registration:

    Pods are all running successfully:

    Secrets:

    Ingresses:

    I have the DNS A record configured and SSL is also set up inside the cluster. The only thing I can think of is maybe some compatibility issue between the version of the Kubernetes NGINX ingress used in this tutorial and what I am using now, which is 1.10.1 (the actual nginx version is nginx/1.25.3)?