Apps on Azure Blog

Getting started with Azure Kubernetes Service and Loki

lastcoolnameleft1
May 25, 2022

Searching through application logs is a critical task for any operations team. And as the Cloud Native ecosystem grows and evolves, more modern approaches to this use case are emerging.

 

The thing about retaining logs is that the storage requirements can get big. REALLY big.

 

One of the most common log search and indexing tools is Elasticsearch. Elasticsearch is exceptionally good at finding a needle in the haystack (e.g. when did the string "Error message #123" occur in any copy of application X on March 17th?). It does this by indexing the contents of every log message, which can significantly increase your storage consumption.

 

The enthusiastic team at Grafana created Loki to address this problem. Instead of indexing the full log message, Loki indexes only the log's metadata (labels such as app and namespace), which significantly reduces your storage needs. You can still search the content of log messages with LogQL, but that content is not indexed.
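
To make the difference concrete, here are two illustrative LogQL queries (the label names are placeholders; yours depend on how your logs are labelled):

# Indexed: select log streams purely by their labels
{namespace="default", app="my-app"}

# Not indexed: the same selector, plus a line filter that scans the
# matching streams for a substring
{namespace="default", app="my-app"} |= "Error message #123"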

 

The UI for Loki is Grafana, which you might already be familiar with if you're using Prometheus. For details on getting started with Grafana and Prometheus, see Aaron Wislang's post on Using Azure Kubernetes Service with Grafana and Prometheus.

 

Getting started with Loki on Azure Kubernetes Service (AKS) is pretty easy. These instructions are inspired by the official Loki getting started steps, with a few modifications streamlined for AKS.
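
Before you start, you'll need the Azure CLI, kubectl, and Helm 3 installed, and you'll need to be logged in to an Azure subscription. A quick way to sanity-check your tooling:

# Verify the CLI tooling and Azure login
az version
kubectl version --client
helm version
az account show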

 

 

# Set some starter env-vars
AKS_RG=loki-rg
AKS_LOCATION=southcentralus
AKS_NAME=loki-aks

# Create the AKS cluster
az group create -n $AKS_RG -l $AKS_LOCATION
az aks create -n $AKS_NAME -g $AKS_RG --generate-ssh-keys
az aks get-credentials -n $AKS_NAME -g $AKS_RG
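
# (Optional) Confirm kubectl is now pointed at the new cluster
kubectl get nodes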


# Helm update and install
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

# Create a Helm release of Loki with Grafana + Prometheus, backed by a PVC for Loki
# NOTE: This diverges from the Loki docs as it uses storageClassName=default (AKS) instead of "standard"
helm upgrade --install loki grafana/loki-stack \
  --namespace grafana --create-namespace \
  --set grafana.enabled=true \
  --set prometheus.enabled=true \
  --set prometheus.alertmanager.persistentVolume.enabled=false \
  --set prometheus.server.persistentVolume.enabled=false \
  --set loki.persistence.enabled=true \
  --set loki.persistence.storageClassName=default \
  --set loki.persistence.size=5Gi
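
# (Optional) Watch the pods start; you should see the Loki, Promtail, Grafana and
# Prometheus pods reach Running before you continue (Ctrl+C to stop watching)
kubectl get pods -n grafana --watch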

# The Helm chart generates a random admin password for Grafana. This command fetches it.
# The output should look something like gtssNbfacGRYZFCa4f3CFmMuendaZzrf9so9VgLh
kubectl get secret loki-grafana -n grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
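
# (Optional) Prefer to pick your own password? The Grafana subchart accepts an
# adminPassword value (check its values.yaml for your chart version), e.g. add
# --set grafana.adminPassword=<your-password> to the helm command above.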

# Port-forward from the Grafana service (port 80) to your desktop (port 3000)
kubectl port-forward -n grafana svc/loki-grafana 3000:80
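# NOTE: port-forward blocks this terminal until you stop it (Ctrl+C), so run it
# in a second terminal if you want to keep using this one.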

# In your browser, go to http://127.0.0.1:3000/
# User: admin
# Password: Output of the "kubectl get secret" command. 

 

Now you're ready to start exploring Loki!

 

We'll start by using Loki to look at Loki's own logs.

  • Hover over the "Explore" icon (it looks like a compass)

  • Select "Loki" from the Data Sources menu

  • Click "Log Browser", which will open up a panel

  • Under "1. Select labels to search in", click "app"

  • Under "2. Find values for the selected labels", click "loki"

  • Under "3. Resulting selector", click "Show logs"

You should now see Loki's own log lines in the Explore view.
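
The Log Browser is just building a LogQL query for you behind the scenes. Once you know the labels attached to your workloads, you can type queries directly into the Explore query field; a few illustrative examples (exact label values depend on what's running in your cluster):

# Roughly what "Show logs" ran for you: every stream labelled app="loki"
{app="loki"}

# The same streams, narrowed to lines that mention "level=error"
{app="loki"} |= "level=error"

# A metric query: error lines per pod over the last 5 minutes
sum by (pod) (count_over_time({app="loki"} |= "level=error" [5m]))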

 

Congrats! You've now created an AKS cluster, deployed Loki and Grafana on it, exposed the Grafana endpoint to your desktop, and used Loki to browse Loki's own logs.

 

When you're ready to clean up the Azure resources, run the following command, which deletes the resource group and everything in it so you avoid ongoing billing for these resources. This removes the AKS cluster and the Loki and Grafana instances running inside it.

 

az group delete -n $AKS_RG --yes --no-wait
 