Logs from K8s with Kibana, Elasticsearch and Fluentd

Run up Kibana and Elasticsearch outside the K8s cluster, connected by Fluentd, to capture logs from all pods

We run up Kibana and Elasticsearch outside the K8s cluster with Docker. We install Fluentd as a DaemonSet to capture logs from all pods and push them to Elasticsearch, with Kibana providing the logging dashboard.

Kibana and ElasticSearch

Using this docker-compose.yml file (taken from Chris Cooney's article, see below):

version: '3'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
    environment:
      - cluster.name=docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.2
    ports:
      - "5601:5601"

docker-compose -f docker-compose.yml up -d
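Elasticsearch can take a little while to come up, so it is worth waiting for it to answer before installing fluentd. A minimal sketch, assuming curl is installed and Elasticsearch is on the default port 9200:

```shell
# Poll the given URL until Elasticsearch responds.
# Returns 0 once it answers, 1 after the given number of attempts.
wait_for_es() {
  url=$1
  attempts=${2:-30}
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -s -o /dev/null "$url"; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# usage: wait_for_es http://localhost:9200 && echo "elasticsearch is up"
```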

Kibana will be up and running on

http://localhost:5601

To stop these services later:

docker-compose -f docker-compose.yml stop

Fluentd DaemonSet

Create fluentd-values.yml:

elasticsearch:
  hosts: ["192.168.0.129:9200"]

192.168.0.129 is the IP address of the host where Elasticsearch is running on my machine; substitute the address of your own Docker host, which must be reachable from the fluentd pods inside the cluster.
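Rather than hard-coding the address, you can generate the values file from the host's IP. A sketch under some assumptions: `hostname -I` is Linux-specific and prints all of the host's addresses, and the first one is usually the LAN address, but verify it is the one your cluster nodes can reach:

```shell
# Take the first address hostname -I reports as the host IP (an assumption
# -- check it matches the interface your cluster can route to).
HOST_IP=$(hostname -I | awk '{print $1}')

# Write the fluentd values file pointing at that host's Elasticsearch.
cat > fluentd-values.yml <<EOF
elasticsearch:
  hosts: ["${HOST_IP}:9200"]
EOF

cat fluentd-values.yml
```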

Add the kiwigrid Helm repo:

helm repo add kiwigrid https://kiwigrid.github.io

Install it (helm v2.15.2 syntax; on Helm v3 the release name is a positional argument, so drop the --name flag):

helm install --name fluentd-logging kiwigrid/fluentd-elasticsearch -f fluentd-values.yml
kubectl get pods | grep fluent
fluentd-logging-fluentd-elasticsearch-wnm4q

kubectl logs -f fluentd-logging-fluentd-elasticsearch-wnm4q

Kibana Logs

http://localhost:5601

Navigate to Management (cog wheel) -> Kibana -> Index Patterns -> Create Index Pattern. For the index pattern enter logstash-*, then select @timestamp as the time filter field and create the index pattern.
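If no logstash-* pattern shows up, you can check directly whether fluentd has created any indices yet. A minimal sketch using Elasticsearch's standard _cat/indices API; adjust the host and port if Elasticsearch is not on localhost:

```shell
# Return 0 if any logstash-* index exists in the Elasticsearch instance
# at the given base URL, 1 otherwise.
has_logstash_index() {
  curl -s "$1/_cat/indices" | grep -q 'logstash-'
}

# usage: has_logstash_index http://localhost:9200 && echo "indices ready"
```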

Then click on Discover (compass). We are going to look at the logs of our vadal-mq services (see previous blog). Search for kubernetes.labels.app and select vadal-mq-receiver. Expand a log entry and, on the JSON tab, select message (toggle column). If the selection is greyed out, go to Management -> Index Patterns, refresh the index and try again.

Conclusion

We have added the ability to gather logs from our k8s cluster, pumped into Elasticsearch by Fluentd and viewed in Kibana. There are many other features around data masking, ML, metrics and security that are possible with this setup.

Further details

More details on architecture, implementation options and security can be found in this article by Chris Cooney:

https://coralogix.com/log-analytics-blog/kubernetes-logging-with-elasticsearch-fluentd-and-kibana/
