Log analysis tools in Deploy
For the Elastic Stack (Logstash, Elasticsearch, Kibana) to work with Deploy, Logback in Deploy must communicate with Logstash. That is done by adding a configuration to Deploy's logback.xml file.
There are different ways to do this. One of the most convenient is to use the LogstashTcpSocketAppender. Add the following appender to your logback.xml:
<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>localhost:5000</destination>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>
And do not forget to reference it:
<root level="info">
    <appender-ref ref="STDOUT" />
    <appender-ref ref="FILE" />
    <appender-ref ref="LOGSTASH" />
</root>
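On the Logstash side, a pipeline must listen on the same port as the destination configured above. The following is a minimal sketch, assuming Logstash forwards to a local Elasticsearch on port 9200; the index name is illustrative:

```conf
input {
  tcp {
    port  => 5000
    codec => json_lines   # LogstashEncoder emits newline-delimited JSON
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "deploy-logs-%{+YYYY.MM.dd}"   # illustrative index name
  }
}
```

Note that LogstashTcpSocketAppender and LogstashEncoder are provided by the logstash-logback-encoder library, which must be available on Deploy's classpath.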
Since Logstash is part of the Elastic Stack, Elasticsearch and Kibana are used for log search, analysis, and visualization. Once Digital.ai Deploy is started, to display the logs in Kibana:
- First, create an index pattern from the Kibana dashboard.
- Define the index pattern so that it matches the index name, and finalize it.
- After creating the index pattern, go to Discover; the Digital.ai Deploy logs will be displayed.
- You may add filters to narrow down the logs as needed. First, click the add filter button on the left.
- Then define the filter parameters; for instance, let's filter the logs for a specific task ID.
We will then see only the logs for that taskId:
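Alternatively, the same filtering can be done with a query in the Discover search bar. A minimal KQL sketch; the field name taskId and the placeholder value are assumptions based on the example above:

```
taskId : "your-task-id"
```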
Fluentd is an alternative to Logstash. It is an independent tool, so it can integrate with various visualization, search, and analysis tools; it works with Elasticsearch and Kibana as well.
Logback communicates with Fluentd through a library called Fluency. A basic Fluency configuration in logback.xml looks like this:
<appender name="FLUENCY_SYNC" class="ch.qos.logback.more.appenders.FluencyLogbackAppender">
    <tag>debug</tag>
    <remoteHost>localhost</remoteHost>
    <port>24224</port>
</appender>
Once again, do not forget to reference it:
<root level="info">
    <appender-ref ref="STDOUT" />
    <appender-ref ref="FILE" />
    <appender-ref ref="FLUENCY_SYNC" />
</root>
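FluencyLogbackAppender is not part of core Logback; it is provided by the logback-more-appenders library, which uses Fluency under the hood. A sketch of the Maven dependencies, assuming the usual Maven Central coordinates (the versions shown are illustrative; check for the latest):

```xml
<!-- Provides ch.qos.logback.more.appenders.FluencyLogbackAppender -->
<dependency>
    <groupId>com.sndyuk</groupId>
    <artifactId>logback-more-appenders</artifactId>
    <version>1.8.8</version>
</dependency>
<!-- Fluency itself: the client that ships log events to Fluentd -->
<dependency>
    <groupId>org.komamitsu</groupId>
    <artifactId>fluency-core</artifactId>
    <version>2.7.0</version>
</dependency>
<dependency>
    <groupId>org.komamitsu</groupId>
    <artifactId>fluency-fluentd</artifactId>
    <version>2.7.0</version>
</dependency>
```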
Fluentd must be configured so that it knows where and how to communicate with the other tools. The name of the Fluentd configuration file depends on how you installed it. You can find further information here:
For instance, for Fluentd running in Docker, the file is /fluentd/etc/fluent.conf, and a sample configuration file looks like the following:
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
  </store>
  <store>
    @type stdout
  </store>
</match>
Basically, this configuration tells Fluentd which port to listen on, which analysis tool to forward the logs to and on which port, and so on.
Once the communication is set up, the Kibana configuration is exactly the same, except that the index pattern should match the logstash_prefix defined above (for example, fluentd-*):
Logstash is part of the Elastic Stack, which makes it easy to configure with Elasticsearch and Kibana, but it is limited to those tools. Fluentd, on the other hand, can work with almost any tool on the market, but it is slightly more difficult to configure. For a more detailed comparison, please read through this article: