# Securing Kubernetes Applications with Ingress Controller and Content Security Policy
In this guide, we'll walk through an end-to-end example showing how to:

- Install the NGINX Ingress Controller as a DaemonSet
- Configure it to automatically inject a Content Security Policy (CSP) header
- Deploy a simple "Hello World" NGINX app (myapp)
- Tie everything together with an Ingress resource that routes traffic and verifies CSP

By the end, you'll have a pattern where:

- Every request to your application carries a strong CSP header
- Your application code remains unchanged (CSP is injected at the gateway layer)
- You can test both inside the cluster and externally to confirm CSP is enforced

## Why CSP Matters at the Ingress Layer

Content Security Policy (CSP) is an HTTP response header that helps mitigate cross-site scripting (XSS) and other injection attacks by specifying which sources of content the browser is allowed to load. Injecting CSP at the Ingress level has three big advantages:

- **Centralized Enforcement**: Instead of configuring CSP in each application pod, you define it once in the Ingress controller. All apps behind that controller automatically inherit the policy.
- **Minimal Application Changes**: Your Docker images and web servers remain untouched; security lives at the edge (the Ingress).
- **Consistency and Auditability**: You can update the CSP policy in one place (the controller's ConfigMap) and immediately protect every Ingress without modifying application deployments.

## The Steps Involved

### a. Install NGINX Ingress Controller as a DaemonSet with CSP Enabled

We want the controller to run one Pod per node (for high availability), and we also want it to inject a CSP header globally. Helm makes this easy.

1. Add the official ingress-nginx repository:

```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
```

2. Install (or upgrade) the chart with the following values:

- `controller.kind=DaemonSet` -> run one controller Pod on each node
- `controller.config.enable-snippets=true` -> allow advanced NGINX snippets
- `controller.config.server-snippet="add_header Content-Security-Policy …"` -> define the CSP header

```bash
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.kind=DaemonSet \
  --set controller.nodeSelector."kubernetes\.io/os"=linux \
  --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
  --set controller.service.externalTrafficPolicy=Local \
  --set defaultBackend.image.image=defaultbackend-amd64:1.5 \
  --set controller.config.enable-snippets=true \
  --set-string controller.config.server-snippet="add_header Content-Security-Policy \"default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self'; object-src 'none'; frame-ancestors 'self'; frame-src 'self'; connect-src 'self'; upgrade-insecure-requests; report-uri /csp-report\" always;"
```

What this does:

- Installs (or upgrades) the ingress-nginx chart into the ingress-nginx namespace.
- Runs the controller as a DaemonSet (one Pod per node) on Linux nodes.
- Sets `enable-snippets: "true"` in the controller's ConfigMap.
- Defines a server-snippet so every NGINX server block will contain:

  ```nginx
  add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self'; object-src 'none'; frame-ancestors 'self'; frame-src 'self'; connect-src 'self'; upgrade-insecure-requests; report-uri /csp-report" always;
  ```

- Exposes the ingress controller via a LoadBalancer (or NodePort) IP, depending on your cluster.
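As an optional spot check (not part of the original steps), you can confirm the snippet actually reached the rendered NGINX configuration inside one of the controller Pods. The label selector below assumes the chart's default labels:

```bash
# Grep the rendered NGINX config in the first controller Pod for the CSP header
kubectl exec -n ingress-nginx \
  "$(kubectl get pods -n ingress-nginx -l app.kubernetes.io/component=controller \
      -o jsonpath='{.items[0].metadata.name}')" \
  -- grep -c 'Content-Security-Policy' /etc/nginx/nginx.conf
```

A non-zero count means the server-snippet is in place.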
The Helm values above add the following CSP entry to the ingress controller's ConfigMap:

```yaml
apiVersion: v1
data:
  enable-snippets: "true"
  server-snippet: add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self'; object-src 'none'; frame-ancestors 'self'; frame-src 'self'; connect-src 'self'; upgrade-insecure-requests; report-uri /csp-report" always;
kind: ConfigMap
# ...
```

You can validate (and edit) this with:

```bash
kubectl edit configmap ingress-nginx-controller -n ingress-nginx
```

Another option is to patch the ConfigMap and restart the controller, which avoids a manual edit:

```bash
kubectl patch configmap ingress-nginx-controller \
  -n ingress-nginx \
  --type=merge \
  -p '{"data":{"enable-snippets":"true","server-snippet":"\n add_header Content-Security-Policy \"default-src '\''self'\''; script-src '\''self'\''; style-src '\''self'\''; img-src '\''self'\''; object-src '\''none'\''; frame-ancestors '\''self'\''; frame-src '\''self'\''; connect-src '\''self'\''; upgrade-insecure-requests; report-uri /csp-report\" always;"}}' \
  && kubectl rollout restart daemonset/ingress-nginx-controller -n ingress-nginx
```

3. Roll out and verify:

```bash
kubectl rollout restart daemonset/ingress-nginx-controller -n ingress-nginx
kubectl rollout status daemonset/ingress-nginx-controller -n ingress-nginx
```

You should see one Pod per node in the "Running" state.

4. (Optional) Inspect the controller's ConfigMap:

```bash
kubectl get configmap ingress-nginx-controller -n ingress-nginx -o yaml
```

Under `data:` you'll find:

```yaml
enable-snippets: "true"
server-snippet: |
  add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self'; object-src 'none'; frame-ancestors 'self'; frame-src 'self'; connect-src 'self'; upgrade-insecure-requests; report-uri /csp-report" always;
```

At this point, your NGINX Ingress Controller is running as a DaemonSet, and it will inject the specified CSP header on every response for any Ingress it serves.

### b. Deploy a Simple 'myapp' NGINX Application

Next, we'll deploy a minimal NGINX app (labeled app=myapp) so we can confirm routing and CSP.

1. Deployment (save as myapp-deployment.yaml):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  namespace: default
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp-container
          image: nginx:latest  # Replace with your custom image as needed
          ports:
            - containerPort: 80
```

2. Service (save as myapp-service.yaml):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  namespace: default
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
```

3. Apply both resources:

```bash
kubectl apply -f myapp-deployment.yaml -f myapp-service.yaml
```

4. Verify your Pods and Service:

```bash
kubectl get deployments,svc -l app=myapp -n default
```

You should see something like:

```
Deployment/myapp-deployment   1/1   1   1   30s
Service/myapp-service   ClusterIP   10.244.x.y   <none>   80/TCP   30s
```

5. Test the Service directly (inside the cluster):

```bash
kubectl run curl-test --image=radial/busyboxplus:curl \
  --restart=Never --rm -it -- sh -c "curl -I http://myapp-service.default.svc.cluster.local/"
```

Expected output:

```
HTTP/1.1 200 OK
Server: nginx/1.23.x
Date: ...
Content-Type: text/html
Content-Length: 612
Connection: keep-alive
```

At this point, your application is up and running, accessible at http://myapp-service.default.svc.cluster.local from within the cluster.
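If that curl test fails, a common cause is a label mismatch between the Service selector and the Pod. A quick check (not in the original walkthrough) is to confirm the Service actually has endpoints:

```bash
# An empty ENDPOINTS column means the selector matched no Pods
kubectl get endpoints myapp-service -n default
```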
### c. Create an Ingress to Route to myapp-service

Now that the Ingress Controller and app Service are in place, let's configure an Ingress resource.

1. Ingress definition (save as myapp-ingress.yaml):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  namespace: default
spec:
  ingressClassName: "nginx"
  rules:
    - host: myapp.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-service
                port:
                  number: 80
```

2. Apply the Ingress:

```bash
kubectl apply -f myapp-ingress.yaml
```

3. Verify that the Ingress is registered (e.g., with `kubectl get ingress -n default`). You should see:

```
NAME            CLASS   HOSTS         ADDRESS     PORTS   AGE
myapp-ingress   nginx   myapp.local   <pending>   80      10s
```

The ADDRESS may show `<pending>` at first; in cloud environments it will eventually become a LoadBalancer IP.

### d. Verify End-to-End (Ingress + CSP): From Inside the Cluster

1. Run a curl pod that sends a request to the Ingress Controller's internal DNS name, setting `Host: myapp.local`:

```bash
kubectl run curl-test --image=radial/busyboxplus:curl \
  --restart=Never --rm -it -- sh -c \
  "curl -I http://ingress-nginx-controller.ingress-nginx.svc.cluster.local/ -H 'Host: myapp.local'"
```

2. Expected output:

```
HTTP/1.1 200 OK
Server: nginx/1.23.x
Date: Wed, 05 Jun 2025 12:05:00 GMT
Content-Type: text/html
Content-Length: 612
Connection: keep-alive
Content-Security-Policy: default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self'; object-src 'none'; frame-ancestors 'self'; frame-src 'self'; connect-src 'self'; upgrade-insecure-requests; report-uri /csp-report
```

Notice the Content-Security-Policy header: this confirms the controller has injected our CSP.

### e. Verify End-to-End (Ingress + CSP): From Your Laptop or Browser (Optional)

Retrieve the external IP (if your Service is a LoadBalancer):

```bash
kubectl get svc ingress-nginx-controller -n ingress-nginx
```

Once you have an external IP (e.g. 52.251.20.187), add this to your /etc/hosts:

```
52.251.20.187 myapp.local
```

Then in your terminal or browser:

```bash
curl -I http://myapp.local/
```

Alternatively, you can port-forward and curl to validate that the app is running:

```bash
kubectl port-forward service/ingress-nginx-controller 8080:80 -n ingress-nginx
```

Add to /etc/hosts:

```
127.0.0.1 myapp.local
```

Then:

```bash
curl -I http://myapp.local:8080/
```

You should again see the 200 OK response along with the Content-Security-Policy header.

## Recap & Best Practices

1. **Run the Ingress Controller as a DaemonSet.** Ensures one Pod per node for true high availability. Achieved by `--set controller.kind=DaemonSet` in Helm.
2. **Enable snippets and inject CSP globally.** `--set controller.config.enable-snippets=true` turns on snippet support, and `--set-string controller.config.server-snippet="add_header Content-Security-Policy …"` inserts a literal block into the controller's ConfigMap. This causes every server block (all Ingresses) to include your CSP header without modifying individual Ingress manifests.
3. **Keep your apps unchanged.** The CSP lives in the Ingress Controller, not in your app pods. You can standardize security across all Ingresses by adjusting the single CSP snippet.
4. **Deploy your application, Service, and Ingress, then test.** We used a minimal NGINX "myapp" Deployment + ClusterIP Service -> Ingress -> Controller -> CSP injection. We verified inside the cluster with `curl -I … -H "Host: myapp.local"` that the CSP appears, and optionally tested from outside via /etc/hosts or the LoadBalancer IP.
5. **Next steps.** Adjust the CSP policy to fit your application's needs; for example, if you load scripts from a CDN, add that domain under script-src. Add additional security headers (HSTS, X-Frame-Options, etc.) by appending more add_header lines to the server-snippet, as sketched below. If you have multiple Ingress Controllers, repeat the same pattern for each.
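A minimal sketch of that last point, assuming you keep managing the snippet through Helm. Note that re-setting server-snippet replaces the previous value, so the full CSP (abbreviated here for readability) must be restated alongside the new headers:

```bash
# Extend the global server-snippet with additional security headers
# (restate your full CSP in place of the abbreviated default-src line)
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --reuse-values \
  --set-string controller.config.server-snippet="add_header Content-Security-Policy \"default-src 'self'\" always; add_header Strict-Transport-Security \"max-age=31536000; includeSubDomains\" always; add_header X-Frame-Options \"SAMEORIGIN\" always; add_header X-Content-Type-Options \"nosniff\" always;"
```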
## Full Commands (for Reference)

```bash
# 1) Install/Upgrade NGINX Ingress Controller (DaemonSet + CSP)
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.kind=DaemonSet \
  --set controller.nodeSelector."kubernetes\.io/os"=linux \
  --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
  --set controller.service.externalTrafficPolicy=Local \
  --set defaultBackend.image.image=defaultbackend-amd64:1.5 \
  --set controller.config.enable-snippets=true \
  --set-string controller.config.server-snippet="add_header Content-Security-Policy \"default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self'; object-src 'none'; frame-ancestors 'self'; frame-src 'self'; connect-src 'self'; upgrade-insecure-requests; report-uri /csp-report\" always;"

# Wait for the DaemonSet
kubectl rollout status daemonset/ingress-nginx-controller -n ingress-nginx
```

```yaml
# 2) myapp-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  namespace: default
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp-container
          image: nginx:latest
          ports:
            - containerPort: 80
---
# 3) myapp-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  namespace: default
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
```

```bash
# Apply Deployment + Service
kubectl apply -f myapp-deployment.yaml -f myapp-service.yaml

# Test the Service internally
kubectl run curl-test --image=radial/busyboxplus:curl --restart=Never --rm -it \
  -- sh -c "curl -I http://myapp-service.default.svc.cluster.local/"
```

```yaml
# 4) myapp-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  namespace: default
spec:
  ingressClassName: "nginx"
  rules:
    - host: myapp.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-service
                port:
                  number: 80
```

```bash
# Apply the Ingress
kubectl apply -f myapp-ingress.yaml

# Verify and test with CSP
kubectl run curl-test --image=radial/busyboxplus:curl --restart=Never --rm -it -- sh -c \
  "curl -I http://ingress-nginx-controller.ingress-nginx.svc.cluster.local/ -H 'Host: myapp.local'"
```

With these steps in place, all traffic to myapp.local is routed through your NGINX Ingress Controller, and the strong CSP header is automatically applied at the gateway. This pattern can be adapted to any Kubernetes-hosted web application by injecting additional security headers, tailoring the CSP to your needs, and keeping your apps running unmodified. Happy "ingressing"!
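If you later want to tear the demo environment down (a cleanup sketch, not part of the original steps), the resources created above can be removed with:

```bash
# Remove the demo app, its Service and Ingress, then uninstall the controller
kubectl delete ingress myapp-ingress -n default
kubectl delete service myapp-service -n default
kubectl delete deployment myapp-deployment -n default
helm uninstall ingress-nginx -n ingress-nginx
```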
# Enhancing Azure DevOps Traceability: Tracking Pipeline-Driven Resource Changes

## Introduction

One of the key challenges in managing cloud infrastructure with Azure DevOps is maintaining clear traceability between pipeline executions and modifications to Azure resources. Unlike some integrated CI/CD platforms, Azure does not natively link resource changes to specific DevOps pipelines, making it difficult to track which deployment introduced a particular modification.

To address this gap, two approaches have been presented that improve traceability and accountability:

1. Proactive tagging of resources with pipeline metadata during deployment.
2. Analyzing Azure Activity Logs to trace resource changes back to specific DevOps pipeline runs.

By implementing these solutions, organizations can enhance governance, streamline debugging, and ensure better auditability of Azure DevOps deployments.

## Challenges in Traceability

Without a built-in integration between Azure DevOps pipelines and Azure resource modifications, teams face difficulties such as:

- Lack of direct ownership tracking for resource updates.
- Manual investigation of deployment logs, which is time-consuming.
- Difficulty in debugging incidents caused by misconfigured or unauthorized changes.

## Approach 1: Proactive Tagging of Azure Resources in Pipelines

This approach embeds metadata directly into Azure resources by adding tags at deployment time. The tags include:

- Pipeline Name (`PipelineName`)
- Pipeline ID (`PipelineId`)
- Run ID (`RunId`)

These tags serve as persistent markers, allowing teams to quickly identify which pipeline and build triggered a specific resource change. A link to the GitHub repository can be found here.

### Implementation

A YAML pipeline file (`PipelineTaggingResources.yml`) is used to automate tagging as part of the deployment process.

Key steps:

1. The pipeline triggers on updates to the main branch.
2. Tags are dynamically generated using environment variables.
3. Tags are applied to resources during deployment.
4. Pipeline logs and artifacts store metadata for verification.

Below is an example of applying tags in an Azure DevOps pipeline:

```yaml
- task: AzureCLI@2
  displayName: "Tag Resources with Pipeline Metadata"
  inputs:
    azureSubscription: "Your-Service-Connection"
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az tag create --resource-id "/subscriptions/xxxxx/resourceGroups/xxxx/providers/Microsoft.Web/sites/MyAppService" \
        --tags PipelineName=$(Build.DefinitionName) PipelineId=$(Build.DefinitionId) RunId=$(Build.BuildId)
```

## Approach 2: Using Azure Activity Logs to Trace Resource Changes

This approach is useful after changes have already occurred. It analyzes Azure Activity Logs to track modifications and correlates them with Azure DevOps build logs.

### Implementation

A PowerShell script (`ActivityLogsTrace.ps1`) extracts relevant details from Azure Activity Logs and compares them with DevOps pipeline execution records.

Key steps (a CLI sketch of the first two follows this list):

1. Retrieve activity logs for a given resource group and timeframe.
2. Filter logs by operation type, e.g., `Microsoft.Web/serverfarms/write`.
3. List Azure DevOps pipelines and their recent runs.
4. Compare pipeline logs with activity log metadata to find a match.
5. Output the responsible pipeline name and run ID.
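For illustration, the first two steps can also be done with the Azure CLI instead of the article's PowerShell script; the resource group name below is a placeholder:

```bash
# List recent write operations on App Service plans in a resource group
# ("my-rg" is a placeholder), with the caller and timestamp needed for
# correlation against pipeline runs
az monitor activity-log list \
  --resource-group my-rg \
  --offset 7d \
  --query "[?operationName.value=='Microsoft.Web/serverfarms/write'].{time:eventTimestamp, caller:caller, resource:resourceId}" \
  -o table
```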
## Comparison of Approaches

| Approach | When to Use | Advantages | Limitations |
|---|---|---|---|
| Tagging Azure Resources in Pipelines | Before deployment (proactive) | Immediate traceability; no need to query logs | Requires modifying deployment scripts |
| Using Azure Activity Logs | After changes have occurred (reactive) | Works for any past modification; no need to modify pipelines | Requires Azure Monitor logs; manual correlation of logs |

## Which Approach Should You Use?

- For new deployments? Tagging resources is the best approach.
- For investigating past changes? Use Azure Activity Logs.
- For a robust solution? Combine both approaches for full traceability.

## Conclusion

Tagging resources in Azure DevOps pipelines provides immediate traceability, while analyzing Azure Activity Logs enables retrospective identification of resource modifications. Tagging requires modifying deployment scripts, whereas log analysis works post-change but needs manual correlation. Combining both ensures robust traceability. For details, check the GitHub Repository.
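As a closing sketch of what combining both approaches can look like in practice (the RunId value here is hypothetical): once a suspicious change is spotted in the activity log and matched to a run, the tags from Approach 1 let you list everything that same pipeline run touched:

```bash
# Find all resources tagged by a specific pipeline run (RunId is hypothetical)
az resource list --tag RunId=12345 \
  --query "[].{name:name, type:type, group:resourceGroup}" -o table
```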