Log file monitoring with the ELK stack
The Elastic Stack, or ELK (Elasticsearch, Logstash, Kibana), is an industry-standard toolchain that consolidates the logs from all your systems so you can analyze the data for application and infrastructure monitoring and troubleshoot faster.
This topic walks you through configuring the Elastic Stack with Deploy. You can also use the collected data to determine usage, identify peak load patterns, or zero in on the root cause of outages.
- Enable Logback-to-Logstash communication
- Display logs in Kibana
- Fluentd: an alternative to Logstash
- Logstash vs. Fluentd
To configure the Elastic Stack with Deploy, Logback in Deploy must communicate with Logstash.
You can enable this communication by adding a configuration to the logback.xml file.
There are different ways to do this; see the logstash-logback-encoder documentation for more information.
One of the most convenient ways is to add a LogstashTcpSocketAppender to the logback.xml file:
```xml
<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>localhost:5000</destination>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>
```
Important: Do not forget to reference it as shown below:
```xml
<root level="info">
    <appender-ref ref="STDOUT" />
    <appender-ref ref="FILE" />
    <appender-ref ref="LOGSTASH" />
</root>
```
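On the Logstash side, a pipeline must accept the events that the appender sends. The following is a minimal sketch of a logstash.conf; the Elasticsearch host name and port are assumptions, while port 5000 matches the destination configured in the appender above:

```
input {
  tcp {
    # LogstashEncoder emits one JSON object per line over TCP
    port => 5000
    codec => json_lines
  }
}

output {
  elasticsearch {
    # Assumed host name; adjust to where your Elasticsearch instance runs
    hosts => ["elasticsearch:9200"]
  }
}
```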
Within the Elastic Stack, Logstash collects and processes the logs, while Elasticsearch and Kibana handle search and visualization, respectively. Once the Deploy server is started, follow these steps to display the logs in Kibana:
- From your Kibana dashboard, create an index pattern.
- Define the index pattern so that it matches the name of the index that holds the logs.
- Finish creating the index pattern.
- After creating the index pattern, go to Discover; you can view the Deploy logs there.
- Click the add filter button on the left to define a filter parameter; for instance, you can filter on a specific task ID to view the logs for that task.
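As an alternative to the filter dialog, you can type a query directly into the Kibana search bar. The field and value below are assumptions for illustration, since the available fields depend on what Deploy writes to its logs:

```
message : *task-123*
```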
Fluentd is an alternative to Logstash. Because Fluentd is an independent tool, it can integrate with different visualization, search, and analysis tools; it works with Elasticsearch and Kibana as well.
To enable Fluentd with Deploy, you need to establish communication between Logback and Fluentd. This communication goes through a library called Fluency.
The basic Fluency configuration in the logback.xml file looks like this:
```xml
<appender name="FLUENCY_SYNC" class="ch.qos.logback.more.appenders.FluencyLogbackAppender">
    <tag>debug</tag>
    <remoteHost>localhost</remoteHost>
    <port>24224</port>
</appender>
```
Important: Once again, do not forget to reference it:
```xml
<root level="info">
    <appender-ref ref="STDOUT" />
    <appender-ref ref="FILE" />
    <appender-ref ref="FLUENCY_SYNC" />
</root>
```
Fluentd must be configured so that it knows where and how to communicate with the other tools. The name and location of the Fluentd configuration file depend on how you installed Fluentd; see https://docs.fluentd.org/configuration/config-file for further information.
For instance, for Fluentd running in Docker, the file is /fluentd/etc/fluent.conf, and a sample configuration looks like this:
```
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
  </store>
  <store>
    @type stdout
  </store>
</match>
```
This configuration tells Fluentd which port to listen on and where to forward the logs: each event is stored in Elasticsearch and also copied to stdout.
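The sample configuration resolves the host name elasticsearch, which suggests the services share a network. A minimal docker-compose sketch under that assumption follows; the image tags and the build context are illustrative, not prescribed by Deploy:

```yaml
services:
  fluentd:
    # Custom image with the fluent-plugin-elasticsearch plugin installed
    build: ./fluentd
    ports:
      - "24224:24224"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - discovery.type=single-node
  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.0
    ports:
      - "5601:5601"
```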
Once the communication is set up, the Kibana configuration is exactly the same as for Logstash, except that the index pattern name should match the indices that Fluentd creates, which are named after the logstash_prefix and logstash_dateformat settings above (fluentd-YYYYMMDD in this sample, so the pattern fluentd-* matches them).
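The daily index names come from the logstash_prefix and logstash_dateformat settings in the Fluentd configuration above. The following sketch shows how such a name is derived; the helper function is illustrative, not part of any tool:

```python
from datetime import date


def index_name(prefix: str, dateformat: str, day: date) -> str:
    # Fluentd's elasticsearch output joins the prefix and the formatted
    # date with a hyphen when logstash_format is enabled.
    return f"{prefix}-{day.strftime(dateformat)}"


# With the sample settings (logstash_prefix fluentd, logstash_dateformat %Y%m%d):
print(index_name("fluentd", "%Y%m%d", date(2024, 1, 1)))  # fluentd-20240101
```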