Core Infrastructure and Security Blog

Getting Started with Logging Using EFK on Kubernetes

HoussemDellai
Microsoft
Apr 23, 2020

 

Introduction

 

After creating a Kubernetes cluster and deploying the apps, the question that arises is: how can we handle the logs?

One option for viewing the logs is the command kubectl logs POD_NAME. That is useful for debugging, but for production systems a better-suited option is EFK. The rest of this article introduces EFK, installs it on Kubernetes and configures it to view the logs.
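
For reference, here are a few common variations of that command; POD_NAME and CONTAINER_NAME are placeholders to replace with your own pod and container names:

$ kubectl logs POD_NAME                    # print the logs of the pod's container
$ kubectl logs POD_NAME -c CONTAINER_NAME  # pick one container in a multi-container pod
$ kubectl logs -f POD_NAME                 # stream (follow) the logs
$ kubectl logs --tail=20 POD_NAME          # show only the last 20 lines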

 

What is EFK

 

EFK is a suite of tools combining Elasticsearch, Fluentd and Kibana to manage logs. Fluentd collects the logs and sends them to Elasticsearch, which stores them in its database. Kibana fetches the logs from Elasticsearch and displays them in a nice web app. All three components are available as binaries or as Docker containers.

Info: ELK is an alternative to EFK that replaces Fluentd with Logstash.

For more details on the EFK architecture, follow this video:

https://www.youtube.com/watch?v=mwToMPpDHfg&list=PLpbcUe4chE79sB7Jg7B4z3HytqUUEwcNE&index=4

 

 

Installing EFK on Kubernetes

 

Because the EFK components are available as Docker containers, it is easy to install them on Kubernetes. For that, we’ll need the following (a quick verification sketch follows this list):

  • Kubernetes cluster (Minikube or AKS…)
  • Kubectl CLI
  • Helm CLI
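
Before installing anything, it may help to verify the tooling and make sure the stable chart repository is configured. This is a hedged sketch; the repository URL below is the one the stable charts were served from at the time of writing:

$ kubectl version --short   # confirm the cluster is reachable
$ helm version --short      # Helm 3 is assumed here
$ helm repo add stable https://kubernetes-charts.storage.googleapis.com
$ helm repo update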

 

1. Installing Elasticsearch using Helm

 

We’ll start by deploying Elasticsearch into Kubernetes using the Helm chart available on GitHub. The chart will create all the required objects:

  • Pods to run the master and client nodes and to manage data storage.
  • Services to expose the Elasticsearch client to Fluentd.
  • Persistent Volumes to store data (logs).

 

$ helm install elasticsearch stable/elasticsearch 

 

Let’s wait a few (10-15) minutes for all the required components to be created. After that, we can check the created pods using the command:

 

$ kubectl get pods
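
If the namespace already contains other pods, we can watch them come up or filter by release; the release label in the second command is an assumption about how the stable chart labels its pods and may differ by chart version:

$ kubectl get pods --watch
$ kubectl get pods -l release=elasticsearch   # assumes the chart adds a release=<name> label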

 

 

Then we can check for the created services with the command:

 

$ kubectl get services
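
Before wiring Fluentd to Elasticsearch, one way to confirm the cluster is healthy is to port-forward the client service (elasticsearch-client, the same service name used later in this article) and query the cluster health endpoint:

$ kubectl port-forward service/elasticsearch-client 9200:9200
# in a second terminal:
$ curl "http://localhost:9200/_cluster/health?pretty"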

 

 

2. Installing Fluentd as DaemonSet

 

Fluentd should be installed on each node of the Kubernetes cluster. To achieve that, we use a DaemonSet. The Fluentd development team provides a simple configuration file available here: https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/fluentd-daemonset-elasticsearch.yaml

 

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  # namespace: kube-system
  labels:
    k8s-app: fluentd-logging
    version: v1
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
      version: v1
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
        env:
          - name:  FLUENT_ELASTICSEARCH_HOST
            value: "elasticsearch-client"
          - name:  FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"
          # Option to configure elasticsearch plugin with self signed certs
          # ================================================================
          - name: FLUENT_ELASTICSEARCH_SSL_VERIFY
            value: "false" # changed by me
          # Option to configure elasticsearch plugin with tls
          # ================================================================
          - name: FLUENT_ELASTICSEARCH_SSL_VERSION
            value: "TLSv1_2"
          # X-Pack Authentication
          # =====================
          - name: FLUENT_ELASTICSEARCH_USER
            value: "elastic"
          - name: FLUENT_ELASTICSEARCH_PASSWORD
            value: "changeme"
          # Logz.io Authentication
          # ======================
          - name: LOGZIO_TOKEN
            value: "ThisIsASuperLongToken"
          - name: LOGZIO_LOGTYPE
            value: "kubernetes"
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

 

Fluentd needs to send the logs to Elasticsearch, so it must know the Elasticsearch service name, port number and scheme. These settings are passed to the Fluentd pods through environment variables. Thus, in the YAML file, we notice the following configuration:

 

        env:
          - name:  FLUENT_ELASTICSEARCH_HOST
            value: "elasticsearch-client"
          - name:  FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"

 

The value "elasticsearch-client" is the name of the Elasticsearch service that routes traffic into the client pod.

Note: A DaemonSet is a Kubernetes object used to deploy a Pod on each Node.

Note: Fluentd could also be deployed using the Helm chart available at https://github.com/helm/charts/tree/master/stable/fluentd.

 

Now let’s deploy Fluentd using the command:

 

$ kubectl apply -f .\fluentd-daemonset-elasticsearch.yaml

 

We can verify the installation by checking for the 3 new pods (3 because we have 3 nodes in the cluster):
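
Because the DaemonSet manifest above labels its pods with k8s-app: fluentd-logging, they can be listed with a label selector:

$ kubectl get pods -l k8s-app=fluentd-logging -o wide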

 

 

3. Installing Kibana using Helm

 

The last EFK component to install is Kibana. It is available as a Helm chart that can be found here: https://github.com/helm/charts/tree/master/stable/kibana.

 

The chart will deploy a single Pod, a Service and a ConfigMap. The ConfigMap gets its key values from the values.yaml file, and this configuration is loaded by the Kibana container running inside the Pod. It holds Kibana-specific settings, such as the Elasticsearch host or service name. The default value for the Elasticsearch host is http://elasticsearch:9200, while in our example it should be http://elasticsearch-client:9200, so we need to change that.

 

The Service that routes traffic to the Kibana Pod uses type ClusterIP by default. As we want to access the dashboard easily, we’ll override the type to LoadBalancer, which will create a public IP address.

In Helm, we can override some of the configuration in values.yaml using another YAML file; we’ll call it kibana-values.yaml. Let’s create that file with the following content:

 

files:
  kibana.yml:
    ## Default Kibana configuration from kibana-docker.
    server.name: kibana
    server.host: "0"
    ## For kibana < 6.6, use elasticsearch.url instead
    elasticsearch.hosts: http://elasticsearch-client:9200
service:
  type: LoadBalancer # ClusterIP

 

Now, we are ready to deploy Kibana using Helm with the overridden configuration:

 

$ helm install kibana stable/kibana -f kibana-values.yaml

 

In a few seconds, when the deployment is complete, we can check whether the created Pod is running using the kubectl get pods command.
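
Assuming the chart labels the pod with app: kibana (an assumption on my part; the exact labels depend on the chart version), it can also be listed directly:

$ kubectl get pods -l app=kibana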

 

 

Then we check for the created Service of type LoadBalancer.

 

 

From here, we can copy the external IP address (51.138.9.156 here) and open it in a web browser. We should not forget to add the port number, which is 443.
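
The external IP can also be retrieved from the command line. The service name kibana below is an assumption based on the release name; adjust it if the chart generated a different name:

$ kubectl get service kibana -o jsonpath='{.status.loadBalancer.ingress[0].ip}'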

 

 

Then click on “Discover”, and there we’ll find some logs!

 

 

4. Deploying and viewing application logs

 

In this section, we’ll deploy a sample container that outputs log messages in an infinite loop. Then we’ll try to filter these logs in Kibana.

 

## counter.yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c, 'i=0; while true; do echo "Demo log $i: $(date)"; i=$((i+1)); sleep 1; done']

 

Let’s first deploy this sample Pod:

 

$ kubectl apply -f .\counter.yaml

 

We make sure it is created successfully:
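
Before switching to Kibana, the raw output can be checked directly from the pod (the pod name counter comes from the manifest above):

$ kubectl get pod counter
$ kubectl logs counter --tail=5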

 

 

Now, if we switch to the Kibana dashboard and refresh it, we’ll be able to see the logs collected from the counter Pod:

 

 

We can also filter the logs using queries like: kubernetes.pod_name: counter.

The content of this article is also available as a video on YouTube at this link: https://www.youtube.com/watch?v=9dfNMIZjbWg&list=PLpbcUe4chE79sB7Jg7B4z3HytqUUEwcNE&index=5

 

 

Conclusion

 

It was easy to get started with the EFK stack on Kubernetes. From here, we can create custom dashboards with nice graphs to be used by the developers.

 

Additional notes

Note: We installed the EFK stack in the default namespace for simplicity, but it is recommended to install it in either kube-system or a dedicated namespace.

 

Note: Elasticsearch has its own chart repository built to support v7; it is still in preview as of this writing: https://github.com/elastic/helm-charts/tree/master/elasticsearch
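
As a hedged sketch, installing from that official repository would look roughly like the following; the URL is Elastic's Helm repository, and the chart name elasticsearch under it is an assumption:

$ helm repo add elastic https://helm.elastic.co
$ helm repo update
$ helm install elasticsearch elastic/elasticsearch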

 

Disclaimer
The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.

 

Comments

  • abhski (Copper Contributor)

    Hi

    Firstly, I want to thank you for the amazing article above.

    I have a scenario I'd like some help with. I have installed Elasticsearch, Filebeat and Kibana on an AKS cluster. I am using the Nginx ingress controller to expose the application on a reverse-proxy load balancer, which is hooked to a hostname, say http://xyz.com.

    I am not able to expose Kibana to the outside; I get a 404 error. I have tried adding the SERVER_BASEPATH variable in the Kibana deployment, etc., but I still get a 404 error.

    - name: SERVER_BASEPATH
      value: "/kibana"

    Below is my setup:
    1. Ingress.yaml

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: ingress-dev
      annotations:
        kubernetes.io/ingress.class: nginx
        nginx.ingress.kubernetes.io/use-regex: "true"
        ingress.kubernetes.io/rewrite-target: /
        nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
    spec:
      rules:
      - host: xyz.com
        http:
          paths:
          - backend:
              serviceName: frontend-ui-service
              servicePort: 80
            path: /(.*)
          - backend:
              serviceName: home-micro-service
              servicePort: 3333
            path: /api-dev(/|$)(.*)
          - backend:
              serviceName: kibana-kibana
              servicePort: 5601
            path: /kibana(/|$)(.*)

     

    2. Kibana - values.yaml
    ---
    elasticsearchHosts: "http://elasticsearch-master:9200"

    replicas: 1

    # Extra environment variables to append to this nodeGroup
    # This will be appended to the current 'env:' key. You can use any of the kubernetes env
    # syntax here
    extraEnvs:
    # - name: "NODE_OPTIONS"
    #   value: "--max-old-space-size=1800"
    # - name: MY_ENVIRONMENT_VAR
    #   value: the_value_goes_here

    # Allows you to load environment variables from kubernetes secret or config map
    envFrom: []
    # - secretRef:
    #     name: env-secret
    # - configMapRef:
    #     name: config-map

    # A list of secrets and their paths to mount inside the pod
    # This is useful for mounting certificates for security and for mounting
    # the X-Pack license
    secretMounts: []
    # - name: kibana-keystore
    #   secretName: kibana-keystore
    #   path: /usr/share/kibana/data/kibana.keystore
    #   subPath: kibana.keystore # optional

    image: "docker/docker.elastic.co/kibana/kibana"
    imageTag: "7.9.1"
    imagePullPolicy: "IfNotPresent"

    # additionals labels
    labels: {}

    podAnnotations: {}
    # iam.amazonaws.com/role: es-cluster

    resources:
      requests:
        cpu: "1000m"
        memory: "2Gi"
      limits:
        cpu: "1000m"
        memory: "2Gi"

    protocol: http

    serverHost: "0.0.0.0"

    healthCheckPath: "/app/kibana"

    # Allows you to add any config files in /usr/share/kibana/config/
    # such as kibana.yml
    kibanaConfig: {}
    # kibana.yml: |
    #   key:
    #     nestedkey: value

    # If Pod Security Policy in use it may be required to specify security context as well as service account

    podSecurityContext:
      fsGroup: 1000

    securityContext:
      capabilities:
        drop:
        - ALL
      # readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 1000

    serviceAccount: ""

    # This is the PriorityClass settings as defined in
    # https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
    priorityClassName: ""

    httpPort: 5601

    extraContainers: ""
    # - name: dummy-init
    #   image: busybox
    #   command: ['echo', 'hey']

    extraInitContainers: ""
    # - name: dummy-init
    #   image: busybox
    #   command: ['echo', 'hey']

    updateStrategy:
      type: "Recreate"

    service:
      type: ClusterIP
      loadBalancerIP: ""
      port: 5601
      nodePort: ""
      labels: {}
      annotations: {}
      # cloud.google.com/load-balancer-type: "Internal"
      # service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
      # service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      # service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
      # service.beta.kubernetes.io/cce-load-balancer-internal-vpc: "true"
      loadBalancerSourceRanges: []
      # 0.0.0.0/0

    ingress:
      enabled: false
      annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
      path: /
      hosts:
        - chart-example.local
      tls: []
      # - secretName: chart-example-tls
      #   hosts:
      #     - chart-example.local

    readinessProbe:
      failureThreshold: 3
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 3
      timeoutSeconds: 5

    imagePullSecrets: []
    nodeSelector: {}
    tolerations: []
    affinity: {}

    nameOverride: ""
    fullnameOverride: ""

    lifecycle: {}
    # preStop:
    #   exec:
    #     command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
    # postStart:
    #   exec:
    #     command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]

    # Deprecated - use only with versions < 6.6
    elasticsearchURL: "" # "http://elasticsearch-master:9200"

     

    All services and pods are running. I just need a way to navigate to http://xyz.com/kibana and access the dashboard.

    One more thing: if I set the path in the ingress file to /app/kibana and then navigate to http://xyz.com/app/kibana, I see Elastic showing up in the title bar but a white screen is displayed (meaning something is loading).