Log file monitoring with the ELK stack

The Elastic Stack, or ELK (Elasticsearch, Logstash, Kibana), is an industry-standard toolchain that helps you consolidate logs from all of your different systems and analyze the data for application and infrastructure monitoring and faster troubleshooting.

This topic describes the Elastic Stack configuration with Deploy. You can also use this forensic data to determine usage and peak load patterns, or to find the root cause of outages.

In this Topic

  1. Enable logback logstash communication
  2. Display logs in Kibana
  3. Fluentd - an alternative to Logstash
  4. Logstash vs Fluentd

Enable Logback Logstash Communication

To configure the Elastic Stack with Deploy, Logback in Deploy must communicate with Logstash.

You can enable this communication by adding a configuration to the logback.xml file.

There are different ways you can do this. See the Logstash Logback Encoder documentation for more information. We recommend that you add the LogstashTcpSocketAppender to the logback.xml file.


  1. Add the LogstashTcpSocketAppender to the logback.xml file as follows (the <destination> value is the host and port of your Logstash TCP input):

<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
   <destination>localhost:5044</destination>
   <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>

  2. Reference the appender from the root logger:

    <root level="info">
    <appender-ref ref="STDOUT" />
    <appender-ref ref="FILE" />
    <appender-ref ref="LOGSTASH" />
    </root>

  3. Add the Logstash Logback Encoder JAR file to your library.
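Putting the steps above together, a minimal logback.xml might look like the following sketch. The destination (localhost:5044) and the console appender pattern are assumptions; adapt them to your own setup:

```xml
<configuration>
  <!-- Console appender; typically already present in your logback.xml -->
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- Ships log events to Logstash as JSON over TCP -->
  <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <!-- Assumed host:port of your Logstash TCP input -->
    <destination>localhost:5044</destination>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>

  <root level="info">
    <appender-ref ref="STDOUT" />
    <appender-ref ref="LOGSTASH" />
  </root>
</configuration>
```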

Display logs in Kibana

Within the Elastic Stack, Logstash collects and forwards the logs, Elasticsearch indexes them for search and analysis, and Kibana visualizes them. Once the Deploy server is started, follow these steps to display the logs in Kibana:
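For Kibana to find the logs, Logstash must accept the TCP events from Logback and forward them to Elasticsearch. A minimal pipeline sketch (the port, Elasticsearch host, and index name here are assumptions, not values prescribed by Deploy):

```
input {
  tcp {
    # Must match the <destination> port configured in logback.xml
    port => 5044
    # LogstashEncoder emits newline-delimited JSON
    codec => json_lines
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "deploy-logs-%{+YYYY.MM.dd}"
  }
}
```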

  1. Create an index pattern from your Kibana dashboard.

  2. Define the index pattern so that it matches the index name.

  3. Finalize the index pattern.

  4. After creating the index pattern, go to Discover to view the Deploy logs.

  5. Click the add filter button on the left to define a filter parameter. For instance, you can filter the logs on a specific task ID to view only the logs for that task.
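As a sketch, such a filter typed into the Kibana search bar might look like the following, assuming the events carry a taskId field (the field name and value here are hypothetical):

```
taskId : "task-12345"
```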


Fluentd - an alternative to Logstash

Fluentd is an alternative to Logstash. Because Fluentd is an independent tool, it can integrate with different visualization, search, and analysis tools. It can work with Elasticsearch and Kibana as well.

To enable Fluentd with Deploy, you need to establish communication between Logback and Fluentd. This communication goes through a library called Fluency.

The basic Fluency configuration in the logback.xml file looks like this:

<appender name="FLUENCY_SYNC" class="ch.qos.logback.more.appenders.FluencyLogbackAppender">
</appender>
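A fuller sketch of this appender follows; it assumes Fluentd's forward input is listening on localhost:24224, and the tag and pattern values are placeholders to adapt:

```xml
<appender name="FLUENCY_SYNC" class="ch.qos.logback.more.appenders.FluencyLogbackAppender">
  <!-- Tag attached to each event; assumed value -->
  <tag>deploy</tag>
  <!-- Host and port of the Fluentd forward input; assumed values -->
  <remoteHost>localhost</remoteHost>
  <port>24224</port>
  <encoder>
    <pattern>%message</pattern>
  </encoder>
</appender>
```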

Important: As with the Logstash appender, do not forget to reference it from the root logger:

<root level="info">
   <appender-ref ref="STDOUT" />
   <appender-ref ref="FILE" />
   <appender-ref ref="FLUENCY_SYNC" />
</root>

Fluentd must be configured so that it knows where and how to communicate with the other tools. The name of the Fluentd configuration file depends on how you installed it. You can find further information here: https://docs.fluentd.org/configuration/config-file

For instance, for Fluentd running in Docker, the file is /fluentd/etc/fluent.conf, and a sample configuration file looks like this:

<source>
  @type forward
  port 24224
</source>

<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
  </store>
  <store>
    @type stdout
  </store>
</match>

Basically, this configuration tells Fluentd which port to listen on and where to send the logs: each event is copied both to Elasticsearch and to stdout.
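For reference, a minimal Docker Compose sketch wiring the three services together might look like the following. The image versions and port mappings are assumptions, and the stock fluentd image does not bundle the Elasticsearch output plugin, so a custom image with fluent-plugin-elasticsearch installed is typically needed:

```yaml
version: "3"
services:
  fluentd:
    image: fluent/fluentd:v1.16-1
    ports:
      - "24224:24224"   # forward input used by the Fluency appender
    volumes:
      - ./fluent.conf:/fluentd/etc/fluent.conf
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.0
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
```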

Once the communication is set, the Kibana configuration is exactly the same as for Logstash, except that the index pattern should match the fluentd-* indices created by the logstash_prefix setting in the configuration above.