How to run Filebeat in a Docker container


Introduction

Hi everyone! In today's blog, we are going to learn how to run Filebeat in a container environment. For a quick understanding –

  • Filebeat is used to forward and centralize log data.
  • It is lightweight, has a small footprint, and uses fewer resources.
  • It is installed as an agent on your servers.
  • It monitors the log files from specified locations.
  • It collects log events and forwards them to Elasticsearch or Logstash for indexing – a minimal configuration is sketched below.
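
To make that concrete, here is a minimal standalone filebeat.yml sketch – the log path and host here are placeholders, not part of this demo's setup – that tails a file and ships each line to Elasticsearch:

filebeat.inputs:
  - type: log                  # tail plain log files
    paths:
      - /var/log/nginx/*.log   # example path; point this at your own logs

output.elasticsearch:
  hosts: ["localhost:9200"]    # where the events get indexed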

Setup

In this setup, I have an Ubuntu host machine running Elasticsearch and Kibana as Docker containers. I will bind the Elasticsearch and Kibana ports to the host machine so that my Filebeat container can reach both of them. I won’t be using Logstash for now.

I’ve also got another Ubuntu virtual machine running, which I’ve provisioned with Vagrant. On this client VM, I will be running Nginx and Filebeat as containers.

The idea is that the Filebeat container should collect all the logs from all the containers running on the client machine and ship them to Elasticsearch running on the host machine.

You can configure Filebeat to collect logs from as many containers as you want. Here, I will be running just a single Nginx container for this demo. Now, let’s start with the demo.

Demo

1. Run Elasticsearch and Kibana as Docker containers on the host machine

To run Elasticsearch and Kibana as Docker containers, I’m using docker-compose as follows –

version: '2.2'

services:

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes=elasticsearch
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200

  kibana:
    image: docker.elastic.co/kibana/kibana:7.9.2
    container_name: kibana
    environment:
      ELASTICSEARCH_HOSTS: "http://elasticsearch:9200"
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch

volumes:
  esdata1:
    driver: local

Copy the above Compose file into docker-compose.yml and run it with the command – sudo docker-compose up -d

This docker-compose file will start the two containers.

You can check the running containers using – sudo docker ps

The container logs can be checked using – sudo docker-compose logs -f

You should now be able to access Elasticsearch and Kibana from your browser.

Just type localhost:9200 in your browser to access Elasticsearch. You should see a JSON response with the cluster details.
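
If you prefer the terminal, the same checks work with curl (ports as mapped in the Compose file; Kibana’s /api/status endpoint reports its health as JSON):

curl localhost:9200              # cluster name, version, and tagline
curl localhost:5601/api/status   # Kibana health status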

Similarly, for Kibana, type localhost:5601 in your browser.

2. Run Nginx and Filebeat as Docker containers on the virtual machine

Now, let’s move to our VM and deploy Nginx first. Type the following command –

sudo docker run -d -p 8080:80 --name nginx nginx


You can check if it’s properly deployed or not by using this command on your terminal –

curl localhost:8080

This should return the HTML of the default Nginx welcome page.

We should also be able to access the Nginx webpage through our browser. For that, we need to know the IP of our virtual machine.
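
A quick way to grab it from the VM’s shell (the output format varies by distro and provider):

hostname -I      # prints the VM's assigned IP addresses
ip addr show     # full per-interface details, if needed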

Now type 192.168.1.14:8080 (the VM’s IP, in my case) in your browser. The default Nginx welcome page should open.


Now, we only have to deploy the Filebeat container. Use the following command to download the image – sudo docker pull docker.elastic.co/beats/filebeat:7.9.2


3. Setting up the Filebeat container

Now, to run the Filebeat container, we need to point it at the Elasticsearch host which is going to receive the shipped logs. This one-off setup command will do that –

sudo docker run \
docker.elastic.co/beats/filebeat:7.9.2 \
setup -E setup.kibana.host=host_ip:5601 \
-E output.elasticsearch.hosts=["host_ip:9200"]

Replace the field host_ip with the IP address of your host machine and run the command.
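
This setup run loads Filebeat’s index template into Elasticsearch and its sample dashboards into Kibana. To confirm the template landed, you can query Elasticsearch from the host machine:

curl 'localhost:9200/_cat/templates/filebeat*?v'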


Now let’s configure Filebeat using the sample configuration file given below (save it as filebeat.docker.yml, since we will mount it into the container later) –

filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

processors:
- add_cloud_metadata: ~

output.elasticsearch:
  hosts: '${ELASTICSEARCH_HOSTS:elasticsearch:9200}'
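
A note on hints.enabled: true – with hints on, each container can steer how its logs are collected through co.elastic.logs/* labels. For example, Nginx could be started with a module hint so Filebeat parses its access and error logs with the nginx module (a sketch only; the plain docker run we used earlier works fine without it):

sudo docker run -d -p 8080:80 --name nginx \
  --label co.elastic.logs/module=nginx \
  nginx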

We just need to replace elasticsearch in the last line of that file with the IP address of our host machine, and then save it so that it looks like this –

filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

processors:
- add_cloud_metadata: ~

output.elasticsearch:
  hosts: '${ELASTICSEARCH_HOSTS:192.168.1.7:9200}'

Finally, use the following command to run the Filebeat container, mounting the configuration file along with the Docker containers’ log directory and the Docker socket –

docker run -d \
  --name=filebeat \
  --user=root \
  --volume="$(pwd)/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro" \
  --volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
  --volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
  docker.elastic.co/beats/filebeat:7.9.2 filebeat -e --strict.perms=false
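
Once the container is up, you can watch Filebeat’s own logs for successful connections, and ask Elasticsearch (from the host machine) whether a filebeat-* index has appeared:

sudo docker logs -f filebeat

curl 'localhost:9200/_cat/indices/filebeat*?v'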

Our setup is now complete. We can go to Kibana and visualize the logs being shipped by Filebeat.
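
If the Discover tab looks empty at first, generate a little traffic on the VM so Nginx writes fresh access-log lines for Filebeat to pick up:

for i in $(seq 1 10); do curl -s localhost:8080 > /dev/null; done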


That’s it for now. I hope this article was useful to you. Please feel free to drop any comments, questions, or suggestions.
