When running multiple instances of an application server, it is quite hard to see correlations between events. One of the best tools to enable that is the ELK stack: Elasticsearch for building a fulltext index of the log entries, Logstash for managing the inflow of events, and Kibana as a user interface on top of that.
Existing solutions for Payara Server use a more easily parseable log format, which Filebeat can then ship to a remote Logstash server for processing.
In our project, we chose a different path: we replaced all logging in the server and our applications with Logback, and made use of the
logback-logstash-appender to push events directly to Logstash over a TCP socket. The appender uses the LMAX Disruptor internally to push the logs, so the process does not block the application flow. This article will show you how to configure this for your project as well.
Log everything via Logstash
In the first step, we replace the backend for Payara Server’s logging with Logstash.
Project Goodees was started by me to collect useful small components for Java EE, and Payara Server in particular. The first released component is payara-logback, which includes all libraries properly packaged to use Logback as the logging backend in Payara Server, along with an installation script.
Follow the instructions there to switch your logging.
Logback and logstash configuration
Our Logback configuration looks similar to the following:
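Here is a minimal sketch of such a configuration. The appender and encoder classes come from logstash-logback-encoder; the environment variable names (`LOG_DIR`, `LOGSTASH_DESTINATIONS`) and the retention period are assumptions to adapt to your setup:

```xml
<configuration>
  <!-- Logging directory outside the domain directory, controlled by an environment variable -->
  <property name="LOG_DIR" value="${LOG_DIR:-/var/log/payara}"/>

  <!-- JSON-formatted logs kept on disk with a long retention period -->
  <appender name="JSON_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${LOG_DIR}/server.json</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>${LOG_DIR}/server.%d{yyyy-MM-dd}.json</fileNamePattern>
      <maxHistory>90</maxHistory>
    </rollingPolicy>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>

  <!-- Asynchronous TCP appender pushing events directly to Logstash -->
  <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <!-- Accepts a comma-separated host:port list -->
    <destination>${LOGSTASH_DESTINATIONS}</destination>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder">
      <!-- instanceName is passed as a JVM argument, e.g. -DinstanceName=payara-1 -->
      <customFields>{"instance":"${instanceName}"}</customFields>
    </encoder>
  </appender>

  <root level="INFO">
    <appender-ref ref="JSON_FILE"/>
    <appender-ref ref="LOGSTASH"/>
  </root>
</configuration>
```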
The following points should be noted:
A bridge between slf4j and `java.util.logging` is needed.
We put the logging directory outside of the domain directory, controlled by an environment variable.
`instanceName` is a JVM argument that is unique for every instance in our system.
We keep the JSON formatted logs on disk with greater retention period than our Elasticsearch does. If we need to, we can re-upload those to analyze past events.
Multiple destinations in the form `host:port[,host:port]` can be defined for the Logstash socket appender. We keep them in an environment variable.
We have a similar configuration for the access log.
Finally, the Logstash configuration.
It is quite a simple configuration; however, access log events tend to have field names that are incompatible with recent versions of Elasticsearch. To work around this problem, they need to be adapted a bit, as shown in the filter below:
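A sketch of such a Logstash pipeline follows. The port, the `type` condition, and the concrete field names are assumptions; recent Elasticsearch versions reject field names containing dots, so the filter renames them:

```
input {
  tcp {
    port => 4560
    codec => json_lines
  }
}

filter {
  # Access log events (assumed to carry a distinguishing type field) may
  # contain dotted field names, which recent Elasticsearch versions reject;
  # rename them to underscore-separated names.
  if [type] == "access" {
    mutate {
      rename => { "request.uri" => "request_uri" }
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```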
Adapt your application
Your application also needs a small tuning now to prevent various class incompatibility errors. Both slf4j and `logback-classic` should now be provided dependencies of your application. In other words, they should not be included in your .war or .ear build.
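In a Maven build, that means marking them with the `provided` scope (versions shown are examples; use the versions matching your server installation):

```xml
<!-- Available on the server classpath via payara-logback, so not bundled in the archive -->
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-api</artifactId>
  <version>1.7.25</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>ch.qos.logback</groupId>
  <artifactId>logback-classic</artifactId>
  <version>1.2.3</version>
  <scope>provided</scope>
</dependency>
```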
It is important to note that `asadmin set-log-levels` no longer works with this configuration, as Logback is now the one filtering the levels. You will need a different way of controlling log levels with this kind of configuration.
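One option (a sketch, not the only approach) is to let Logback rescan its configuration file periodically, so levels can be changed at runtime by editing the file on disk; the logger name below is purely illustrative:

```xml
<!-- Logback re-reads this file every 30 seconds and applies any level changes -->
<configuration scan="true" scanPeriod="30 seconds">
  <logger name="com.example.myapp" level="DEBUG"/>
  <!-- ... appenders and root logger as before ... -->
</configuration>
```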
Let us know how this setup works for you in the comments below. You’re also invited to share your projects' goodees!