2017-01-24

You can pipe system and application logs from the nodes in a DC/OS cluster to your existing ElasticSearch, Logstash, and Kibana (ELK) server. This document describes how to store all unfiltered logs directly in ElasticSearch and then perform filtering and specialized querying there. The Filebeat output from each node is sent directly to a centralized ElasticSearch instance, without using Logstash. If you’re interested in using Logstash for log processing or parsing, consult the Filebeat and Logstash documentation.

Important: This document does not describe how to set up secure TLS communication between the Filebeat instances and ElasticSearch. For details on how to achieve this, please check the Filebeat and ElasticSearch documentation.

Prerequisites

These instructions are based on CentOS 7 and might need to be adjusted substantially for other Linux distributions.

All DC/OS nodes must be able to connect to your ElasticSearch server on the port used for communication between ElasticSearch and Filebeat (9200 by default).

Step 1: All Nodes

For all nodes in your DC/OS cluster:

Install Elastic’s Filebeat. Installers are available for most major platforms.
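
For example, on CentOS 7 you can install the RPM package directly from Elastic. This is only a sketch; the version shown is an example, so substitute the Filebeat release you intend to run:

# Download and install the Filebeat RPM (example version; adjust as needed).
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.2-x86_64.rpm
sudo rpm -vi filebeat-5.1.2-x86_64.rpm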

Create the /var/log/dcos directory:
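
For example:

sudo mkdir -p /var/log/dcos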

Move the default Filebeat configuration file to a backup copy:
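
For example, assuming the RPM package placed the default configuration at /etc/filebeat/filebeat.yml:

sudo mv /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml.BAK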

Populate a new filebeat.yml configuration file, including an additional input entry for the file /var/log/dcos/dcos.log. This additional log file will be used to capture the DC/OS logs in a later step. Remember to substitute the variables $ELK_HOSTNAME and $ELK_PORT below with the actual host and port on which your ElasticSearch instance is listening.
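
A minimal sketch of such a configuration, assuming Filebeat 5.x prospector syntax; the Mesos sandbox and log paths shown are assumptions and may need adjusting for your DC/OS version:

filebeat.prospectors:
- input_type: log
  paths:
    # Task stdout and stderr from the Mesos sandboxes (relevant on agent nodes).
    - /var/lib/mesos/slave/slaves/*/frameworks/*/executors/*/runs/latest/stdout
    - /var/lib/mesos/slave/slaves/*/frameworks/*/executors/*/runs/latest/stderr
    # Mesos logs.
    - /var/log/mesos/*.log
    # Additional DC/OS log file populated by the journalctl parser in a later step.
    - /var/log/dcos/dcos.log
  tail_files: true

output.elasticsearch:
  # Substitute the actual host and port of your ElasticSearch instance.
  hosts: ["$ELK_HOSTNAME:$ELK_PORT"]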

Step 2: Master Nodes

For each Master node in your DC/OS cluster:

Create a script that will parse the output of the DC/OS master journalctl logs and funnel them all to /var/log/dcos/dcos.log.
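
One possible sketch: a systemd unit (saved, for example, as /etc/systemd/system/dcos-journalctl-filebeat.service) that follows the DC/OS master journald units and appends their output to /var/log/dcos/dcos.log for Filebeat to pick up. The unit name and the list of dcos-* units below are assumptions and vary between DC/OS versions, so check systemctl list-units 'dcos-*' on a master and adjust accordingly:

[Unit]
Description=DC/OS journalctl parser for Filebeat (master nodes)
Wants=filebeat.service
After=filebeat.service

[Service]
Restart=always
RestartSec=5
# Follow the master-related DC/OS units and append everything to the
# log file that Filebeat is configured to watch.
ExecStart=/bin/sh -c '/usr/bin/journalctl --no-tail -f \
  -u dcos-mesos-master.service \
  -u dcos-marathon.service \
  -u dcos-exhibitor.service \
  >> /var/log/dcos/dcos.log 2>&1'

[Install]
WantedBy=multi-user.target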

Tip: This script can be used with DC/OS and Enterprise DC/OS. Log entries that do not apply are ignored.

Step 3: Agent Nodes

For each Agent node in your DC/OS cluster:

Create a script that will parse the output of the DC/OS agent journalctl logs and funnel them all to /var/log/dcos/dcos.log.
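
A matching sketch for the agent nodes, under the same assumptions as the master version above; the unit names are examples, and private agents typically run dcos-mesos-slave.service while public agents run dcos-mesos-slave-public.service:

[Unit]
Description=DC/OS journalctl parser for Filebeat (agent nodes)
Wants=filebeat.service
After=filebeat.service

[Service]
Restart=always
RestartSec=5
# Follow the agent-related DC/OS units and append everything to the
# log file that Filebeat is configured to watch.
ExecStart=/bin/sh -c '/usr/bin/journalctl --no-tail -f \
  -u dcos-mesos-slave.service \
  >> /var/log/dcos/dcos.log 2>&1'

[Install]
WantedBy=multi-user.target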

Tip: This script can be used with DC/OS and Enterprise DC/OS. Log entries that do not apply are ignored.

Step 4: All Nodes

For all nodes, start and enable Filebeat and the journalctl parsing service created above:
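
For example, assuming the example unit name dcos-journalctl-filebeat.service from the sketches above:

sudo systemctl daemon-reload
sudo systemctl enable filebeat dcos-journalctl-filebeat.service
sudo systemctl start filebeat dcos-journalctl-filebeat.service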

Step 5: ELK Node Notes

The ELK stack will receive, store, search, and display the logs shipped by the Filebeat instances configured above on all nodes in the cluster.

Important: This document describes how to directly stream from Filebeat into ElasticSearch. Logstash is not used in this architecture. If you’re interested in filtering, parsing and grok’ing the logs with an intermediate Logstash stage, please check the Logstash documentation.

You must modify some default parameter values to prepare ElasticSearch to receive log data from the cluster nodes. For example, edit the ElasticSearch configuration file (typically /etc/elasticsearch/elasticsearch.yml):
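
A minimal sketch of the relevant settings; the bind address below is only a placeholder, and binding to a non-loopback interface is what allows the Filebeat instances on the cluster nodes to reach ElasticSearch:

# Bind ElasticSearch to an interface the DC/OS nodes can reach
# (the default binds only to localhost).
network.host: 0.0.0.0

# Port used by the Filebeat output on the DC/OS nodes (9200 by default).
http.port: 9200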

Other parameters in the file are beyond the scope of this document. For details, please check the ElasticSearch documentation.

Known Issue

The agent node Filebeat configuration expects tasks to write logs to stdout and stderr. Some DC/OS services, including Cassandra and Kafka, do not write logs to stdout and stderr. If you want to log these services, you must customize your agent node Filebeat configuration.
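
A purely hypothetical example of such a customization in /etc/filebeat/filebeat.yml; the actual location where a given service writes its log files depends on that service's own configuration:

filebeat.prospectors:
- input_type: log
  paths:
    # ...existing paths from Step 1...
    # Hypothetical extra pattern: pick up log files that a service such as
    # Cassandra or Kafka writes inside its task sandbox instead of stdout/stderr.
    - /var/lib/mesos/slave/slaves/*/frameworks/*/executors/*/runs/latest/*.log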

What’s Next

For details on how to filter your logs with ELK, see Filtering DC/OS logs with ELK.
