Log analysis tools in Deploy

Elastic Stack

For the Elastic Stack (Logstash, Elasticsearch, Kibana) to work with Deploy, Logback in Deploy has to communicate with Logstash. That is done by adding a configuration to logback.xml.

There are different ways to do this; one of the most convenient is to use a TCP socket appender that ships the log events straight to the Logstash TCP input, in this example listening on localhost:5000.
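A minimal sketch of such an appender for logback.xml, assuming the logstash-logback-encoder library (which provides LogstashTcpSocketAppender and LogstashEncoder) is on Deploy's classpath; the appender name STASH is only an example:

<appender name="STASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>localhost:5000</destination>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder" />
</appender>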

And do not forget to reference it:
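For example, by adding it to the root logger next to the appenders Deploy already defines (STASH being the example name used above):

<root level="info">
    <appender-ref ref="STDOUT" />
    <appender-ref ref="FILE" />
    <appender-ref ref="STASH" />
</root>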

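On the Logstash side, a pipeline has to listen on that TCP port and forward the events to Elasticsearch. A minimal sketch, assuming Elasticsearch is reachable on localhost:9200 and the events arrive as newline-delimited JSON (which is what LogstashEncoder produces):

input {
  tcp {
    port => 5000
    codec => json_lines
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
  }
}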
Since Logstash is part of the Elastic Stack, Elasticsearch and Kibana are used for log search and analysis and for visualisation, respectively. Once xl-deploy has started, to display its logs in Kibana, first create an index pattern from the Kibana dashboard.

Define the index pattern so that it matches the indices created by Logstash (named logstash-* by default).

And finalize it by selecting the time field (typically @timestamp) and creating the pattern.
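If you prefer to script this step instead of clicking through the UI, the index pattern can also be created through Kibana's saved objects API. A sketch, assuming Kibana 7.x listening on localhost:5601 and the default logstash-* indices:

curl -X POST "http://localhost:5601/api/saved_objects/index-pattern" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{"attributes": {"title": "logstash-*", "timeFieldName": "@timestamp"}}'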

After creating the index pattern, go to Discover and the xl-deploy logs will be displayed.

You may add filters to narrow the logs down in any way you want. First click the add filter button on the left.

Then define the filter parameters; for instance, let's filter the logs for a specific task ID.

We will then only see the logs for that taskId:

Fluentd

Fluentd is an alternative to Logstash. It is an independent tool, so it can integrate with many different visualisation, search, and analysis tools, and it can work with Elasticsearch and Kibana as well.

The communication between Logback and Fluentd goes through a library called Fluency. The basic Fluency configuration in logback.xml looks like this:

<appender name="FLUENCY_SYNC" class="ch.qos.logback.more.appenders.FluencyLogbackAppender">
    <tag>debug</tag>
    <remoteHost>localhost</remoteHost>
    <port>24224</port>
</appender>

Once again, do not forget to reference it:

<root level="info">
    <appender-ref ref="STDOUT" />
    <appender-ref ref="FILE" />
    <appender-ref ref="FLUENCY_SYNC" />
</root>
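Note that FluencyLogbackAppender is not part of core Logback: it comes from the logback-more-appenders project and uses the Fluency client library under the hood, so those jars (and their transitive dependencies) need to be on Deploy's classpath. A sketch of the coordinates, assuming you fetch them with Maven (versions omitted on purpose, pick current releases from Maven Central):

<dependency>
    <groupId>com.sndyuk</groupId>
    <artifactId>logback-more-appenders</artifactId>
</dependency>
<dependency>
    <groupId>org.komamitsu</groupId>
    <artifactId>fluency-core</artifactId>
</dependency>
<dependency>
    <groupId>org.komamitsu</groupId>
    <artifactId>fluency-fluentd</artifactId>
</dependency>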

Fluentd itself has to be configured so that it knows where and how to communicate with the other tools. The name of the Fluentd conf file depends on how you installed it; you can find further information here: https://docs.fluentd.org/configuration/config-file

For instance, for Fluentd running in Docker it is /fluentd/etc/fluent.conf, and a sample conf file looks like the following:

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match *.**>
  @type copy

  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
  </store>

  <store>
    @type stdout
  </store>
</match>

Basically, this configuration tells Fluentd which port to listen on for incoming logs, which analysis tool to forward them to and on which port, and so on.
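One thing to keep in mind: the @type elasticsearch output used above comes from the fluent-plugin-elasticsearch plugin, which is not bundled with the stock fluent/fluentd Docker image, so a small custom image is usually built for it. A minimal sketch of such a Dockerfile (the image tag is only illustrative):

FROM fluent/fluentd:v1.16-debian-1
USER root
RUN gem install fluent-plugin-elasticsearch --no-document
USER fluent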

Once the communication is set up, the Kibana configuration is exactly the same, except that the index pattern now has to match the indices created by Fluentd, which with the logstash_prefix above is fluentd-*:

Logstash and Fluentd Comparison

Logstash is part of the Elastic Stack, which makes it easier to configure with Elasticsearch and Kibana, but it is limited to those tools. Fluentd, on the other hand, can work with almost any tool on the market, but it is slightly more difficult to configure. For a more detailed comparison, please read through this article: https://logz.io/blog/fluentd-logstash/