2016-07-11



MongoDB, one of the most popular NoSQL databases today, is designed to process and store massive amounts of data. The tool is used by many well-known, modern IT organizations such as Facebook, eBay, Foursquare, and Expedia. Monitoring is a critical component of all database administration, and tight monitoring of your MongoDB cluster will allow you to assess the state of your database. However, MongoDB's complex architecture, which allows for virtually unlimited scaling, makes monitoring a challenging task.

In this article, we will explain how to collect and analyze some of the MongoDB metrics using the ELK Stack so that you can keep a close eye on your MongoDB performance and growth.

MongoDB Metrics to Track

In this article, we will use the latest version of MongoDB (version 3.2) and focus on metrics that are available with the WiredTiger storage engine, which is the default storage engine as of MongoDB 3.2. We will focus on tracking and analyzing metrics that give an overview of database performance, resource utilization, and saturation. These are accessible using MongoDB commands.

Throughput

MongoDB (with the WiredTiger storage engine) provides several commands that can be used to collect metrics from the mongo shell. The mongo shell is an interactive JavaScript interface for MongoDB that allows you to query data and perform administrative actions.

One of the richest of these commands is serverStatus (i.e., db.serverStatus()), which provides a lot of information about operations, connections, journaling, background flushing, memory, locking, asserts, cursors, and the cache.

These throughput metrics are important since they can be used to avoid many performance issues, such as resource overloading. To get a general overview of your MongoDB cluster's activity, you should first look at the number of read/write clients and the number of database operations they perform. These metrics can be retrieved from the opcounters and globalLock objects in the serverStatus output.

The objects’ output is in JSON, as shown in the example below:

The opcounters part of the serverStatus output
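To give a sense of what to expect, here is an illustrative opcounters snippet; the values are made up for this example, and your numbers will differ:

"opcounters" : {
        "insert" : 103492,
        "query" : 90412,
        "update" : 18741,
        "delete" : 2210,
        "getmore" : 6512,
        "command" : 45897
}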

The opcounters.query and opcounters.getmore values indicate the number of read requests received since the mongod instance (the process that handles data requests and manages data access) last started. On the other hand, opcounters.insert, opcounters.update, and opcounters.delete return the number of write requests received.

By monitoring the number of read and write requests, you can quickly prevent resource saturation as well as spot bottlenecks and the root cause of overloads. In addition, these metrics will allow you to assess when and how you need to scale your cluster.

As shown below, globalLock is a document that reports on the database's lock state and can provide you with information regarding read/write request statuses. These will allow you to check whether requests are accumulating faster than they are being processed. The same applies to activeClients.readers and activeClients.writers, which can help you understand the relationship between the number of currently active clients and your database load.

The globalLock part of the serverStatus output
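Again for illustration only (the values below are invented), the relevant part of the output looks something like this:

"globalLock" : {
        "totalTime" : NumberLong("76468000000"),
        "currentQueue" : {
                "total" : 0,
                "readers" : 0,
                "writers" : 0
        },
        "activeClients" : {
                "total" : 12,
                "readers" : 1,
                "writers" : 0
        }
}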

Performance and Failover

Using a replica set (a master-slave replication setup that facilitates load balancing and failover) is a must to ensure production robustness. The oplog (operations log) is the main component of the MongoDB replication mechanism. Below, you can see the relevant metrics that can be retrieved using the getReplicationInfo and replSetGetStatus commands.

As shown below, each replica set member's status is composed of a few indicators, such as the replica state and the optimeDate field, which contains the date when the last entry from the oplog was applied to that member and is therefore important for calculating the replication lag metric:

The member part of the replSetGetStatus output
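For illustration, a trimmed-down members array from replSetGetStatus might look like this (the host names and timestamps here are hypothetical):

"members" : [
        {
                "_id" : 0,
                "name" : "mongo-node-1:27017",
                "health" : 1,
                "state" : 1,
                "stateStr" : "PRIMARY",
                "optimeDate" : ISODate("2016-07-11T08:21:35Z")
        },
        {
                "_id" : 1,
                "name" : "mongo-node-2:27017",
                "health" : 1,
                "state" : 2,
                "stateStr" : "SECONDARY",
                "optimeDate" : ISODate("2016-07-11T08:21:34Z")
        }
]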

Replication lag shows the difference between the primary and a secondary. Since you want to avoid serving outdated information, it's important to keep this difference as narrow as possible. If there are no load issues, your replication lag will be zero. This is ideal. However, if the number rises for your secondary nodes, the integrity of your data is at risk. To avoid such events, we recommend setting alerts on these metrics so that you can constantly monitor your replica set status. Learn more about replication lag here.
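As a quick sanity check before setting up full monitoring, the mongo shell's built-in replication helper prints how far each secondary is behind the primary; run it while connected to a member of the replica set:

rs.printSlaveReplicationInfo()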

Resource Utilization

One of the most important metrics is the number of client connections. This includes the currently active connected clients as well as the unused connections. These can be reported using serverStatus:

The connections part of the serverStatus output
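An illustrative connections snippet (values invented) looks like this:

"connections" : {
        "current" : 18,
        "available" : 51182,
        "totalCreated" : NumberLong(562)
}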

An unexpected rise in the client connections metric can occur if connections are not being handled well or if there is an issue inside the MongoDB driver that is used for handling the connections. Tracking the behavior of these metrics will allow you to set relevant summary metrics, such as the average number of connections, as alert thresholds.

Another set of very important metrics is related to storage. These can be retrieved using the db.stats() command, which will return statistics for the selected database. Running it from the mongo shell to get statistics on the database test_mongo_db looks like this:
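A straightforward way to do this, switching to the database first and then calling the command, is:

use test_mongo_db
db.stats()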

The next JSON snippet is from the db.stats output:

Example of db.stats output
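Here is an illustrative db.stats output (the values are made up; sizes are in bytes):

{
        "db" : "test_mongo_db",
        "collections" : 4,
        "objects" : 1436452,
        "avgObjSize" : 412.3,
        "dataSize" : 592248527,
        "storageSize" : 213196800,
        "indexes" : 6,
        "indexSize" : 28343808,
        "ok" : 1
}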

If you look inside the output of the db.stats command, you will find (similar to the example above) metrics for the number of collections in the database (the collections property), the number of objects (documents) within all of the collections (objects), the size of all documents (dataSize, in bytes), the size of all indexes (indexSize, in bytes), and the total amount of space allocated to collections in this database for document storage (storageSize, in bytes).

Monitoring the dataSize, indexSize, or storageSize metrics will show you how storage allocation changes over time and will help you keep your cluster healthy, with enough storage to serve your database. On the other hand, a large drop in dataSize can also indicate that there are many requested deletions, which should be investigated to confirm that they are legitimate operations.

The next metrics that should be monitored are the memory metrics, again using serverStatus. The pair of metrics of interest is virtual memory usage, which is located in the mem.virtual property (in MB), and the amount of memory used by the database, which is located in the mem.resident property (in MB). Similar to the storage metrics, memory metrics are important to monitor because overloading RAM on your server(s) is never good. This can lead to the slowing or crashing of your server, which will leave your cluster weakened. Or, even worse, if you have only one dedicated server, MongoDB can dramatically slow down or even crash.

Another important metric is located in the extra_info.page_faults property of the serverStatus output: the number of page faults, which is the number of times MongoDB had to go to disk because the requested data was not available in memory.

The mem and extra_info part of the serverStatus output
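For illustration (values invented, memory figures in MB):

"mem" : {
        "bits" : 64,
        "resident" : 196,
        "virtual" : 2580,
        "supported" : true,
        "mapped" : 0,
        "mappedWithJournal" : 0
},
"extra_info" : {
        "note" : "fields vary by platform",
        "heap_usage_bytes" : 62236272,
        "page_faults" : 254
}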

Collecting and Monitoring Using ELK

In this section, we will describe how to ship, store, and monitor the MongoDB performance metrics detailed above using the Logz.io ELK Stack.

We will use Ubuntu Server 16.04 on the Amazon cloud. You can also read our step-by-step article if you would like to know how to install and configure the ELK Stack on the Amazon cloud.

Extracting the MongoDB Metrics

In the next step, we will demonstrate how to ship metrics to Elasticsearch with Logstash. Using some programming to retrieve metrics will give you better control and allow you to run complex pre-shipping actions.

To ship logs, we will create a Logstash configuration file with the input path, including how to interpret it and where to send it. Learn more about Logstash configuration here.

Before we create the Logstash configuration file, we will describe how to retrieve the MongoDB metrics specifically — using the mongo shell interface from your OS's bash shell.

If we want to execute the serverStatus command via our terminal, without staying in the mongo shell program, we can use the --eval flag of the mongo shell program as follows:
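A minimal form of this command (assuming mongod is running locally on the default port; printjson is used to force pretty-printed JSON output) would be:

mongo --eval 'printjson(db.serverStatus())'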

And the output:

The output format from the serverStatus command
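The beginning of that output looks roughly like this (the shell version, host name, and values shown here are just illustrative):

MongoDB shell version: 3.2.7
connecting to: test
{
        "host" : "ip-172-31-1-20",
        "version" : "3.2.7",
        ...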

As you can see, the first two lines of the output contain information about the MongoDB shell version and the database to which the shell is currently connected. Since this format does not comply with strict JSON rules and would complicate our Logstash configuration file, we will use a pipeline to cut off the first two lines of the output with the tail command.

So, our command will look like this:
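Assuming the shell banner occupies exactly the first two lines, one way to write it is:

mongo --eval 'printjson(db.serverStatus())' | tail -n +3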

Now, the output will only contain the JSON part.

Next, we want to remove the NumberLong(x) and ISODate(x) wrappers from the JSON. Again, sending these to Logstash would trigger a JSON parsing exception, and storing the data in Elasticsearch would fail. To transform the stream of text, we will use the sed command with a regex pattern that finds the NumberLong and ISODate data types and replaces them with the arguments that exist inside these data types:

An example of the serverStatus output with NumberLong and ISODate data types

Now, using the pipeline command and adding the piece for transforming the text, the final command will look as follows:
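One possible way to write this (the sed expressions are a sketch; adjust them to the exact output you see on your MongoDB version) is:

mongo --eval 'printjson(db.serverStatus())' | tail -n +3 | sed -e 's/NumberLong(\([^)]*\))/\1/g' -e 's/ISODate(\([^)]*\))/\1/g'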

Running this command will produce pure JSON output without the MongoDB shell metadata.

In addition to the serverStatus command, we will also use the db.stats() command to gather storage metrics for specific databases. For the purpose of this tutorial, we created two databases, named test_mongo_db_1 and test_mongo_db_2, for which we want to monitor storage allocation.

Again, we will use the commands for gathering storage statistics for these two databases together with pipeline and tail commands to comply with the JSON formatting rules:
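Under the same assumptions as above, the two commands could look like this:

mongo test_mongo_db_1 --eval 'printjson(db.stats())' | tail -n +3
mongo test_mongo_db_2 --eval 'printjson(db.stats())' | tail -n +3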

Configuring Logstash

Next, we will take the commands created above and place them in the Logstash configuration file (logstash.config) using the exec input plugin. To forward the data to Elasticsearch, we will use the Elasticsearch output plugin:

The Logstash configuration for getting MongoDB metrics and sending them to Elasticsearch
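A minimal sketch of such a configuration (the 60-second interval, the type names, and the localhost Elasticsearch address are assumptions you should adapt to your environment) could look like this:

input {
  exec {
    command => "mongo --eval 'printjson(db.serverStatus())' | tail -n +3 | sed -e 's/NumberLong(\\([^)]*\\))/\\1/g' -e 's/ISODate(\\([^)]*\\))/\\1/g'"
    interval => 60
    type => "server_status"
  }
  exec {
    command => "mongo test_mongo_db_1 --eval 'printjson(db.stats())' | tail -n +3"
    interval => 60
    type => "db_stats"
  }
  exec {
    command => "mongo test_mongo_db_2 --eval 'printjson(db.stats())' | tail -n +3"
    interval => 60
    type => "db_stats"
  }
}
filter {
  # parse the JSON document produced by the exec commands
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}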

We’re now going to start Logstash with this configuration using the following command:
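Assuming Logstash was unpacked into its own directory and the configuration file sits next to it (adjust the paths to your installation), that would be something like:

bin/logstash -f logstash.config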

After a short while, you will begin to receive the first MongoDB metrics via Logstash.



The Discover section of Kibana a short while after Logstash starts sending metrics to Elasticsearch

Shipping to Logz.io Using Logstash

Logz.io provides the ELK Stack as an end-to-end service so that the logs that you send to us are indexed and stored in Elasticsearch and available in real-time through Kibana.

While we support a wide range of techniques for shipping the logs (available under the Log Shipping section in the UI), in the next section I will explain how to use our Logstash integration to ship MongoDB logs into Logz.io.

In the Logz.io UI, select the Log Shipping tab located at the top of the page, and under the Platforms menu on the left, select the Logstash item.

On the right, you will see what needs to be added to the current Logstash configuration to send logs to Logz.io. Two additional changes are required: one is adding your account token through the filter plugin, and the second is changing the output, where the elasticsearch output is replaced with a tcp output pointing to the listener.logz.io server, which is in charge of processing incoming logs.



Logstash shipping page

After adding these changes, the Logstash configuration file for shipping logs to Logz.io looks like this:

Logstash configuration file for shipping the logs to Logz.io
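For reference, a sketch of such a configuration is shown below. Only one input is included for brevity, the token value is a placeholder, and the exact host, port, and codec should be taken from the snippet shown in the Logz.io UI:

input {
  exec {
    command => "mongo --eval 'printjson(db.serverStatus())' | tail -n +3 | sed -e 's/NumberLong(\\([^)]*\\))/\\1/g' -e 's/ISODate(\\([^)]*\\))/\\1/g'"
    interval => 60
    type => "server_status"
  }
}
filter {
  json {
    source => "message"
  }
  # the Logz.io account token is added as a field on every event
  mutate {
    add_field => { "token" => "YOUR-LOGZIO-ACCOUNT-TOKEN" }
  }
}
output {
  tcp {
    host => "listener.logz.io"
    port => 5050
    codec => json_lines
  }
}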

After starting Logstash with the new configuration file, you will notice that logs will begin to appear in the Discover section within the Logz.io UI.

The Logz.io Discover section after starting a new Logstash configuration

Shipping to Logz.io Using Amazon S3

Another way to ship logs into Logz.io is with AWS S3. You would first need to create the log files themselves from the MongoDB command output, and then use the AWS CLI to sync with an S3 bucket.

Creating the log files

In the previous section, we used pipelined commands to execute and filter the command output. The next step is to redirect this output to a file.

First, we will create a new log file:
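For example (the directory and file name here are arbitrary choices), appending the cleaned-up serverStatus output to a log file:

mongo --eval 'printjson(db.serverStatus())' | tail -n +3 | sed -e 's/NumberLong(\([^)]*\))/\1/g' -e 's/ISODate(\([^)]*\))/\1/g' >> /var/log/mongodb-metrics/server-status.log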

Next, we will do the same for the command that generates the database stats:
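Similarly for the database statistics (again, the file path is just an example):

mongo test_mongo_db_1 --eval 'printjson(db.stats())' | tail -n +3 >> /var/log/mongodb-metrics/db-stats.log
mongo test_mongo_db_2 --eval 'printjson(db.stats())' | tail -n +3 >> /var/log/mongodb-metrics/db-stats.log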

We can now run these commands from periodic cron jobs that will take care of collecting the logs on a regular basis, as sketched below.
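For example, assuming the commands above are wrapped in a hypothetical script called collect-mongo-metrics.sh, a crontab entry that runs it every five minutes would look like this:

*/5 * * * * /opt/mongo-metrics/collect-mongo-metrics.sh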

Syncing with S3 and shipping to Logz.io

Logz.io supports shipping from S3 natively. In the Logz.io UI, open the Log Shipping section and expand the AWS section. Select the S3 bucket option, and configure Logz.io to be able to read from your S3 bucket.

To find more information on how to configure this type of shipping of the logs and how to use AWS CLI sync command to copy files to an S3 bucket, you can read the section S3 Syncing and Shipping in our article on creating a PCI DSS dashboard.

The MongoDB Performance Dashboard

Now that all of our MongoDB metrics are shipped to Elasticsearch, we are ready to build a monitoring dashboard. We will start with a series of Kibana visualizations for the throughput metrics.

First, as an example, we will create a line chart that visualizes the number of read requests. After clicking on the Visualize section and selecting the Line chart visualization type from the menu, we will set up the metrics fields on the left side of Kibana:

The metrics configuration for query number

A line chart for query number

We will do the same thing for the rest of the throughput metrics. The configuration will only differ in the aggregation fields used (for the query chart, we pointed to opcounters.query in the field dropdown).

After adding and saving these charts in the Kibana dashboard, you will be able to see the throughput metrics visualized:

A dashboard with visualized throughput metrics

In a similar fashion, we can visualize the other metrics described in the MongoDB Metrics section.

The final dashboard for MongoDB metrics

To help you to hit the ground running, we’ve added this dashboard to ELK Apps — our free library of ready-made visualizations and dashboards that can be installed in one click. Simply search for MongoDB in the ELK Apps page, and click to install.

Your job doesn’t necessarily stop there — set up alerts for the metrics that we have added here. Learn how to create alerts for the ELK Stack.

Logz.io is a predictive, cloud-based log management platform that is built on top of the open-source ELK Stack and can be used for log analysis, application monitoring, business intelligence, and more. Start your free trial today!
