Exploring PostgreSQL temporary table explosions with Docker
Augie Schwer
2015-05-05

This post details what I learned about PostgreSQL temporary tables, some preventative measures you can take, how to test your work using Docker, and why I cared at all.

First, some background

I was awoken at 3am by PagerDuty telling me the disk for one of our database nodes was filling up.

After logging in, I saw several large SELECT statements that had been running for hours.

I determined that the temporary table space was filling up the OS disk, and fast.

I decided to kill the queries because they endangered the health of the entire system. Had the OS disk filled up, the whole system would have gone down and we would have risked data corruption.

Later, in a post-mortem meeting, we asked how the temporary table space had begun to fill the disk and what actions we could take to prevent this problem in the future.

What are temporary tables?



There are a few situations where PostgreSQL saves temporary files, ones that are not critical to database operation. Tables created using CREATE TEMPORARY TABLE and their respective indexes are one source. Probably more importantly, when the database is doing a query that involves a sort operation, and the data exceeds work_mem, temporary files are created for that purpose. So in situations where your users will be doing lots of sorting of large tables, like in a data warehouse, there can be quite a bit of activity going to disk for this purpose.
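
For illustration (the table and column names here are made up), the first source is something you create explicitly, while the second happens implicitly whenever a sort outgrows work_mem:

    -- A temporary table lives only for this session and is dropped automatically on disconnect
    CREATE TEMPORARY TABLE session_scratch (id integer, payload text);

    -- A sort whose working set exceeds work_mem spills to temporary files on disk
    SELECT * FROM some_big_table ORDER BY payload;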



One interesting property of temporary files is that they’re prime candidates for storage even on less reliable drives, such as you might have your operating system on. It’s possible to safely, and sometimes quite usefully, put the temporary files onto a directory on your OS disks if they are underutilized. Just be careful because if those disks are lost, you’ll need to recreate that empty tablespace on the OS drives of the replacement, or remove it from the temp_tablespaces list.

What happens when your temporary table space fills up?

Because we can move the location that Postgres uses to write temporary table information, a good preventative measure would be to use a disk or partition separate from our OS or database data storage. That way, the temporary space filling up will not impact other pieces of our system.

But what does happen when that isolated temporary space fills up? Will Postgres keel over? Will the query hang? Are we just moving the problem and not really making the overall system more resilient? To answer these questions I did an experiment.

Experimenting via Docker

To understand more about Postgres’ actual behavior in stressful situations, I set up a local Docker container and installed Postgres in it. The goal of this experiment is to watch Postgres fill its temporary table space and observe what happens.

Establish a testing setup

The first thing I did was create a new space for Postgres to write its temporary table space to. I wanted this space to be small so that the failure would occur more quickly and so I could observe the results. To do this I created a loopback file system that was only 1MB in size and mounted that in the Docker container. This seemed like the easiest way to test my idea of putting temporary table space on a different disk. Then I made a directory owned by postgres inside of it for the tablespace to go into, as Postgres will need to be able to write to this directory.
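
Something along these lines does the job (the loopback image location is an arbitrary choice; the mount point and tablespace directory match what the rest of this post uses):

    # Create a 1MB file, put a filesystem on it, and mount it over loopback
    dd if=/dev/zero of=/root/pgsql_tmp.img bs=1024 count=1024
    mkfs.ext2 -F /root/pgsql_tmp.img
    mkdir -p /mnt/pgsql_tmp
    mount -o loop /root/pgsql_tmp.img /mnt/pgsql_tmp

    # Postgres needs a directory it owns to hold the tablespace
    mkdir /mnt/pgsql_tmp/pg_tblspc
    chown postgres:postgres /mnt/pgsql_tmp/pg_tblspc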

Next, edit postgresql.conf in your favorite editor (on my system the configuration file is located at /etc/postgresql/9.1/main/postgresql.conf).

First I set work_mem to 64kB. This is the smallest value Postgres will allow, and it ensures that my tests will fail quickly, since work_mem constrains how much of a sort Postgres can do in memory before spilling to disk.

Next I set log_temp_files to 0. This handy configuration value tells Postgres to log every temporary file it writes, regardless of size.

We’re finished with the configuration file after setting the temp_tablespaces value.
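
Taken together, the relevant lines in postgresql.conf look roughly like this (the tablespace name is the one we create in Postgres below):

    work_mem = 64kB                           # smallest value Postgres allows
    log_temp_files = 0                        # log every temporary file written, regardless of size
    temp_tablespaces = 'pgsql_temp_tblspc'    # send temporary files to our tiny tablespace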

That table space needs to be created in Postgres, which is where the setup for this experiment is headed next.

But first start or restart the Postgres daemon to load the configuration changes we have made.
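
On a Debian-style package install like this one, that is something along the lines of:

    service postgresql restart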

Then I connected to the running Postgres daemon in my Docker container as the postgres user, using the psql command.
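
From a root shell in the container, something like this works:

    su - postgres -c psql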

After that, I created a Postgres TABLESPACE.
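
Using the directory we created and handed to postgres earlier:

    CREATE TABLESPACE pgsql_temp_tblspc LOCATION '/mnt/pgsql_tmp/pg_tblspc';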

Note that the table space name pgsql_temp_tblspc matches the value we gave the temp_tablespaces option, and that the location /mnt/pgsql_tmp/pg_tblspc matches the directory we created earlier and gave permissions to.

Now that I had a weakened, instrumented Postgres, I created a test database and table and loaded some data.

First I created a test database named test and connected to it.
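
In psql:

    CREATE DATABASE test;
    \c test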

Then I created a table in my test database and named it test.
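
The exact schema is not important; a minimal version only needs the random_text column that the later queries will sort on, for example:

    CREATE TABLE test (id integer, random_text text);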

I then executed a Perl one-liner to generate some data, and used the COPY command from inside Postgres to load that data into the test table.
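
Something along these lines works: generate tab-separated rows of random text from the shell, then load the file from inside psql (the file path, row count, and string length are all arbitrary choices):

    perl -e 'print $_, "\t", join("", map { ("a".."z")[rand 26] } 1..100), "\n" for 1..1000000;' > /tmp/test_data.tsv

Then, inside psql:

    COPY test FROM '/tmp/test_data.tsv';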

If you have a copy of some production data you would like to play with, that would work as well. This data set is tailored to my experiment, and ensures that Postgres will have to do some sorting that will quickly spill over into temporary table space.

Now we test

Now that we have an instrumented, deliberately constrained setup, we can run queries that need increasing amounts of work_mem.
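
Any query that forces a sort over a growing number of rows will do here; for example, sorting 100 and then 1,000 rows of the test table:

    EXPLAIN ANALYZE SELECT * FROM (SELECT * FROM test LIMIT 100) AS sample ORDER BY random_text;
    -- the "Sort Method" line in the plan reports an in-memory sort (Memory: ...kB)

    EXPLAIN ANALYZE SELECT * FROM (SELECT * FROM test LIMIT 1000) AS sample ORDER BY random_text;
    -- the "Sort Method" line now reports a sort that went to disk (Disk: ...kB)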

Note the “Sort Method” used in these two cases. The first (100 rows) can be done entirely in memory; the second (1,000 rows) needed to write some portion to disk. The important bit here is that because our query asks Postgres to order the results by our “random_text” column, Postgres is forced to do a sort that quickly needs more space than the reduced work_mem we set earlier, and so it spills into the temporary table space.

If you’re quick enough you can see the temporary table space filling up as the query runs in /mnt/pgsql_tmp/pg_tblspc/PG_9.1_201105231/pgsql_tmp/.
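
One way to watch from a second shell in the container (assuming watch is available):

    watch -n 1 du -sh /mnt/pgsql_tmp/pg_tblspc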

Finding the answer

Based on my first tests, I ran a query that I knew would take more than 1MB of temporary table space (100,000 rows) and watched it fail gracefully, log an error message, and clean up after itself.
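
Sorting 100,000 of these rows needs far more temporary space than the 1MB we gave the tablespace. Continuing the pattern above:

    EXPLAIN ANALYZE SELECT * FROM (SELECT * FROM test LIMIT 100000) AS sample ORDER BY random_text;
    -- fails part-way through with an out-of-space error along the lines of:
    --   ERROR:  could not write block ... of temporary file: No space left on device
    -- and the files under the temporary tablespace are removed again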

This to me is the best-case scenario: the query used and filled the temporary table space I allocated for it, then Postgres killed it, cleaned up the disk space, and logged what happened to both the console and the log file.

In this scenario no one would need to be woken up at 3am. We could send these log messages to our friends at Logentries, set an alert for that log pattern, and notify a HipChat room. That way, during waking hours, a less sleep-deprived human could track down how that query went off the rails and remedy it.

Afterword

I hope this experiment was as enlightening for you as it was for me, and that you are now armed with more knowledge about Postgres temporary table space to battle those nasty 3am disk usage alerts from PagerDuty on your Postgres database servers.

I used Kitematic for the Mac to test these theories with Docker and it was super easy and fast.

I shamelessly liberated the work_mem testing methodology from: http://www.depesz.com/2011/07/03/understanding-postgresql-conf-work_mem/

If we were using Postgres 9.2 we could use this setting:

http://www.postgresql.org/docs/9.2/static/runtime-config-resource.html

temp_file_limit (integer)

Specifies the maximum amount of disk space that a session can use for temporary files, such as sort and hash temporary files, or the storage file for a held cursor. A transaction attempting to exceed this limit will be cancelled. The value is specified in kilobytes, and -1 (the default) means no limit. Only superusers can change this setting.

This setting constrains the total space used at any instant by all temporary files used by a given PostgreSQL session. It should be noted that disk space used for explicit temporary tables, as opposed to temporary files used behind-the-scenes in query execution, does not count against this limit.
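
Capping each session at, say, 1MB of temporary file space would then be a one-line change in postgresql.conf:

    temp_file_limit = 1024    # in kilobytes; -1 (the default) means no limit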
