2016-09-16

There seem to be quite a few popular Postgres conferences peppering the globe these days. This year, Simon Riggs of 2ndQuadrant gave the sponsored keynote at Postgres Open. I’m not entirely sure it was intentional since it wasn’t the title of his presentation, but he uttered the words “working together to make Postgres better for everyone” at one point. The phrase “Working Together” really stood out, because that’s a significant part of what makes Postgres so great. It resonated acutely with the impetus behind the Unconference track that remains a regular fixture at PGCon.

Then Simon dropped another bomb that shocked everyone in attendance. Everywhere I looked, I saw expressions of absolute disbelief and awe. He suggested that somewhere in the twisting catacombs of 2ndQuadrant was a beta version of WordPress running on Postgres instead of MySQL. Considering its roots in PHP before readily available database abstraction layers existed, and the fact that many extensions call the MySQL functions outright, this is clearly an impossibility. It must be a cruel hoax, and I demand proof that this particular unicorn exists, for reasons completely unrelated to exorcising MySQL from my own website.

Perhaps it's primarily accidental, but I've been a regular fixture at Postgres Open since it began in 2011. Most of this is because it started in Chicago, which made it extremely convenient to attend for the first four years of its existence. Now I’m just operating on pure momentum. So what did I learn this time around? I’ve been doing this too long to learn much regarding Postgres, and short of diving into writing extensions in C, that isn’t going to change. But maybe I can offer a useful perspective on the conference itself and the talks I attended.

PGLogical – the logical replication for PostgreSQL

First on the agenda was a talk on PGLogical by Petr Jelinek, yet another 2ndQuadrant contribution. It works by interpreting the Postgres WAL stream and decoding activity into generic SQL statements that can be replayed on a remote database. Because of this decoupling from the binary contents of the transaction log, there are numerous advantages and newly available replication options:

Multiple data sources can transmit content to a single aggregate table. This is great for data warehouses and centralized reporting systems.

Stripped of version-specific binary details, replication can span Postgres versions, and thus be used to link disparate instances, or facilitate upgrades.

Replication is normally an all-or-nothing exercise. In parsed form, replication can be filtered by intent or target.

Interpreting WAL contents removes trigger overhead common to logical replication solutions.

Unfortunately, PGLogical does retain some of the limitations of being content-driven. DDL is not included, for example, so table modifications must either be applied explicitly on each target, or run through a companion function that forwards the changes to replicas. This particular limitation is something I don’t quite understand. Postgres is ACID compliant, and as such, table modifications must be reflected in the transaction log somewhere. I wonder what makes them resistant to decoding automatically.
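For illustration, here's roughly what that companion-function workflow looks like, assuming pglogical's documented replicate_ddl_command helper; the table and column are hypothetical:

```sql
-- A rough sketch of the DDL workaround: wrap the statement so pglogical
-- queues it for replay on subscribers instead of running it only locally.
SELECT pglogical.replicate_ddl_command(
    'ALTER TABLE public.orders ADD COLUMN shipped_at timestamptz'
);
```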

Either way, extensions like this help deprecate existing solutions like Slony, Londiste, or Bucardo. They had their time, but it’s time to move on.

A look at the Elephant’s trunk – PostgreSQL 9.6

In this talk, Magnus Hagander spent some time ruminating on the upcoming 9.6 features he thought were particularly noteworthy. And of course, I acted as a secondary filter. Many of these deserve much closer attention, but that’s for a future article. So what’s new in 9.6?

Tired of “idle in transaction” connections blocking things up? Well, there’s finally a timeout to kill them. Huzzah!

VACUUM now skips already “frozen” pages. This makes it much faster on primarily read-based tables, because unchanged data isn’t checked again. Data warehouses will rejoice.

Database replicas will now actually show which primary they’re following. This used to only be available by manually checking recovery.conf.

Synchronous replication will now allow multiple synchronous standby servers. This means it’s possible to create a pool of five replicas, and allow any two of them to acknowledge writes, for example. This is not only safer in the long run, but much less likely to disrupt the primary unless several replicas become unavailable. I might actually start using the feature now!

There are a lot of lock-based, date/time, allocation, and memory management performance improvements. This includes tweaks to kernel page flushing that should dramatically reduce the write spikes that can overwhelm storage.

The Postgres foreign data wrapper can now push JOIN and ORDER BY to the remote database. Previously, data would be fetched locally and then joined and sorted. This should make an incredible difference to foreign query performance.

And of course, who could forget parallelism of various operations? It’s still pretty new and somewhat experimental, but now that 9.6 is a release candidate, it should see a lot more field testing. A few of these new knobs are sketched below.
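Here's a quick, hedged sketch of a few of the items above as they appear in 9.6; the standby names and timeout values are made up for illustration:

```sql
-- Kill sessions that sit "idle in transaction" for more than five minutes.
ALTER SYSTEM SET idle_in_transaction_session_timeout = '5min';

-- New 9.6 syntax: any two of these five standbys must confirm a commit.
ALTER SYSTEM SET synchronous_standby_names = '2 (rep1, rep2, rep3, rep4, rep5)';
SELECT pg_reload_conf();

-- On a replica, see which primary it's following (pg_stat_wal_receiver is new in 9.6).
SELECT status, conninfo FROM pg_stat_wal_receiver;

-- Let this session use up to four parallel workers per Gather node.
SET max_parallel_workers_per_gather = 4;
```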

The Elephant Meets the Whale – Bringing PostgreSQL to Production on Docker

Phil Vacca may not be a Docker expert, but he was certainly thorough in examining the “can we” versus “should we” debate regarding deploying Postgres on Docker containers. Docker is one of those things I didn’t know much about aside from its existence and general usage patterns. But once Phil related a Docker container to a glorified chroot environment, everything clicked.

It turns out that Postgres being a data-driven service makes it much harder to run in a container. There’s another session that goes into more depth in this regard, but short of overloading the Postgres container itself, there must be an entire constellation of related containers churning along with it. Each one of these will need a slew of configuration file overlays, environment variables, extension layers, permissions, and volume mappings.

I get really uncomfortable around so many moving parts, or otherwise fragile implementations. It would seem that, short of including a heavy stack management infrastructure, Postgres and Docker are not a good production environment match. This is a quickly evolving area though, so that statement is likely to expire soon.

Perhaps unrelated, but can anyone tell me why everyone is chasing Docker instead of LXC? A versioned, privately directed, arbitrarily controlled, quickly deprecated moving target seems like a bad thing to depend on. Well, unless you’re the developers of Docker. I must be missing something, here.

All the Big Data Sciency stuff you never knew Postgres could do

If ever there was a notorious speaker, it’s Jim Nasby. This talk actually seemed a bit tame compared to his previous forays. Though he did write a data type designed to handle Python NumPy ndarray data structures, so it’s definitely topical! In other news, Postgres now supports a Python NumPy data type.

This talk actually featured a pretty good summary of all the different ways to use Postgres to manage scientific or complex analysis:

JSON & associated functionality (sketched briefly after this list)

FDWs

Extensible types (such as numpy ndarray)

Various replication features, including logical replication and BDR

Parallel queries

Distributed approaches like citusdata and Postgres-XL

Column store approach (cstore_fdw)

GPU-driven support (PGStrom)

A REST API in PostgREST

ToroDB, a MongoDB API on top of Postgres. It basically transforms JSON into multiple Postgres tables. Theoretically this means access is available via JSON or SQL. MongoDB sharding isn’t supported yet, though, so scalability is limited.

MADlib, a machine learning algo system via SQL. This one is especially intriguing and I need to run it past our business intelligence team. If our data can support this kind of analysis, we have several TB of data to dump on it.
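To make the first item above a bit more concrete, here's a minimal JSONB sketch; the table and document contents are my own invention:

```sql
-- A tiny document store inside Postgres: JSONB column plus a GIN index.
CREATE TABLE readings (
    id      bigserial PRIMARY KEY,
    payload jsonb NOT NULL
);

-- The GIN index makes containment (@>) searches fast.
CREATE INDEX readings_payload_idx ON readings USING gin (payload);

INSERT INTO readings (payload)
VALUES ('{"sensor": "t-101", "temp": 21.4, "tags": ["lab", "hvac"]}');

-- Containment search uses the index; ->> extracts a value as text.
SELECT payload->>'temp' AS temp
  FROM readings
 WHERE payload @> '{"sensor": "t-101"}';
```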

Just further evidence that Postgres is becoming more of a data middleware layer than simply a database engine.

Locked Up: Advances in Postgres Data Encryption

Vibhor Kumar brings us pgcrypto, the presentation. Beyond just covering the basics of using the encryption extension, he provided some encouraging performance results of various approaches such as:

Encrypting the data itself with pgcrypto functions.

Using disk-based encryption.

Both of these with and without SSL.

What’s particularly interesting here, and something I never considered, is that column-based encryption completely breaks indexes. In retrospect, this should have been fairly obvious. Once data is transformed, indexes only operate on the resulting encoded information, making them effectively meaningless. Short of intricate and magical mathematical analysis of data leakage through the encryption model, this is unfixable. Which suggests that the real conclusion of this talk is: use disk-based encryption and force SSL when data must be protected. It’s hard to argue with that.
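Here's a minimal pgcrypto sketch of the column-level approach, which also shows where the index problem comes from; key handling is grossly simplified, and a real deployment would never hard-code the key like this:

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;

CREATE TABLE customer_secrets (
    id  bigserial PRIMARY KEY,
    ssn bytea NOT NULL   -- ciphertext, so a btree index on this column is useless
);

INSERT INTO customer_secrets (ssn)
VALUES (pgp_sym_encrypt('123-45-6789', 'not-a-real-key'));

-- Every lookup has to decrypt first, so no index on ssn can help this predicate.
SELECT id
  FROM customer_secrets
 WHERE pgp_sym_decrypt(ssn, 'not-a-real-key') = '123-45-6789';
```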

Bitemporal Data Model: making it happen in Postgres

This is one of the only sessions with two presenters. Chad Slaughter and Henrietta Dombrovskaya’s use of the Postgres interval type to track historical events is compelling and insightful. It turns out that simply having an event log for bookkeeping is problematic, because it makes obtaining the current summary convoluted. So they combined an event log with intervals to track modification history for a rolling summary. It basically boils down to this:

Start with an effective interval to reflect when data should be included in reports.

Add an assertive interval to track when the modifications were asserted, or applied.

This effectively decouples modifications from presentation, yet retains both. Through the use of intervals, there are definite boundaries reports can target, and the assertive interval immediately tells us how long each modification was considered valid from a single data point.

Maintaining this does require a lot of custom enforcement mechanics, though. Postgres constraints can’t incorporate that kind of logic automatically, so it falls to triggers and helper functions to step in. Still, the concept of a composite current and historical relation is an interesting theory, and definitely worthy of further exploration.
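For the record, here's a minimal sketch of how such a table might look using range types and an exclusion constraint. This is my own illustration, not the presenters' actual schema, and it only covers the basic structure, not the trigger machinery:

```sql
-- btree_gist lets the exclusion constraint mix equality with range overlap.
CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE account_balance (
    account_id bigint NOT NULL,
    balance    numeric NOT NULL,
    effective  tstzrange NOT NULL,  -- when the fact applies for reporting
    asserted   tstzrange NOT NULL,  -- when the fact was recorded / believed
    -- No two rows for one account may overlap in both dimensions at once.
    EXCLUDE USING gist (account_id WITH =, effective WITH &&, asserted WITH &&)
);

-- "Current" view: assertions that have not been superseded, effective right now.
SELECT account_id, balance
  FROM account_balance
 WHERE upper_inf(asserted)
   AND effective @> now();
```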

Administering Postgres using Docker and OpenShift

Jeff McCormick over at Crunchy Data gave the second Docker talk I alluded to earlier. A pretty good summary of the presentation boils down to their Docker stack. While Phil presented the why, here we got to see the how. Given the rather unwieldy nature of the suite itself, there really needs to be a management element. Apparently either OpenShift or another Kubernetes-based system can fill that role, but they’re hardly free in themselves.

That’s a ton of overhead to “just” run Postgres. Of course, that kind of setup is more of a dynamic allocation model. Given a reliable backing storage source, Postgres could happily churn along in such an abstraction layer. I like that it removes reliance on administering the underlying OS. Platform as a Service is something I’d like to see more of in practice. It’s just not something I’m going to touch with a ten-foot pole; I’ll let our infrastructure department worry about coordinating such a monstrosity.

Amazon RDS for PostgreSQL: New Features and Lessons Learned

I hate to say Grant McAlister presented an hour-long commercial for Amazon RDS, but I’d be lying if I didn’t. Though as one of the primary architects, he’d probably be remiss if he didn’t.

What I came away with is that RDS has come a long way recently. There are a lot more available metrics and tracking, performance levels, resource credit balancing, and Postgres extensions. Filesystem and snapshot encryption are standard, and encrypted snapshots can even be shared, provided they have an associated custom key. Heck, they’ve even integrated logical replication already.

For organizations that don’t need custom everything and can budget for potentially per-minute pricing, it seems like a pretty good approach.

Elephant Herd as a Service: Managing Hundreds of Postgres Instances

My own talk was on ElepHaaS, a Postgres instance wrangler if there ever was one. I designed it to manage over 100 instances spread across multiple data centers, environments, and servers. At one point, it served a crucial role in testing our disaster recovery readiness. All I had to do was click a checkbox, hit “go”, wait a couple minutes, and about a dozen herds had failed over to the most up-to-date replica. The tool allowed me to repurpose the old primary instances as new replicas, and repeat the process in reverse. All without SSHing to each individual server and carefully invoking the process.

After the presentation, Sean Chittenden (of Groupon fame) brought up Consul as a way to make the ele_tools service companion better at Postgres instance auto-discovery. Or maybe replace it entirely. I won’t know until I spend some time looking at what it does and how it works. Either way, it seems to be a nice way of making ElepHaaS better.

ElepHaaS could scale to thousands of instances, but we don’t have that many. Someone does, though. Pull requests welcome!

Big Data with PostgreSQL

Again with Simon Riggs, eh? Sure! This time around, the focus was on handling very large data sources. During this, he brought up a major shortcoming of sharding data on a primary key: secondary keys. In essence, related data in separate tables is likely to be sharded on different columns, forcing non-local data federation that adversely affects performance due to cross-communication requirements. This is why so many existing solutions require N-N node communication topologies.

There are basically two ways to circumvent this:

Find a way to keep associated data together on the secondary key by following the primary as a principal hash, as sketched after this list. This doesn’t always work, however, as the principal shard element may require further subdivision for larger secondary distributions.

Implement a feedback system that pre-fetches applicable data into a front-end layer for local aggregation. There were a few slides showing pgpredict and 2UDA partially filling these roles.
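Here's a tiny sketch of the first approach: carry the parent's shard key on the child table so related rows hash to the same node. The schema is my own invention, and the actual distribution step depends on the sharding layer in use (Citus, Postgres-XL, and so on):

```sql
CREATE TABLE customers (
    customer_id bigint PRIMARY KEY,
    name        text NOT NULL
);

CREATE TABLE orders (
    order_id    bigint  NOT NULL,
    customer_id bigint  NOT NULL,        -- same shard key as the parent table
    total       numeric NOT NULL,
    PRIMARY KEY (customer_id, order_id)  -- shard key leads the key
);

-- If both tables are distributed by customer_id, this join can be satisfied
-- entirely on one node, with no cross-node federation required.
SELECT c.name, sum(o.total)
  FROM customers c
  JOIN orders o USING (customer_id)
 GROUP BY c.name;
```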

And then of course, there was the requisite hawking of Postgres-XL. We’re still trying to get this one to work, but just recently ran into some kind of bug that corrupted a couple of system catalog tables in our development environment. That’s going to be fun to fix! But I’m still pushing it, because this approach really is the way to go in the long run for extremely large warehouses. Simon emphasized that they want to move as many of the Postgres-XL patches into Postgres core as possible, and then make the remaining functionality available as an extension. Maintaining such a divergence long-term isn’t really viable.

Oh, and he also mentioned that Postgres has a 32TB per table size limit. I immediately ran to our 50TB system and frantically checked the biggest tables before remembering we implemented partitioning specifically to avoid maintenance issues caused by vast monolithic tables. That of course, solved this problem as well. Otherwise, our largest table would have been 8TB. Not bad! These limitations are obviously documented, but it’s easy to forget.
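For anyone who hasn't set it up, here's roughly what that 9.6-era inheritance partitioning looks like. The names are made up, and a real deployment would also add a trigger or rule to route inserts into the right child table:

```sql
CREATE TABLE events (
    event_id   bigserial,
    created_at timestamptz NOT NULL,
    payload    jsonb
);

-- Monthly children, each carrying a CHECK constraint describing its slice.
CREATE TABLE events_2016_09 (
    CHECK (created_at >= '2016-09-01' AND created_at < '2016-10-01')
) INHERITS (events);

CREATE TABLE events_2016_10 (
    CHECK (created_at >= '2016-10-01' AND created_at < '2016-11-01')
) INHERITS (events);

-- With constraint exclusion, queries filtering on created_at only touch the
-- matching children, and no single table ever approaches the 32TB cap.
SET constraint_exclusion = partition;

SELECT count(*)
  FROM events
 WHERE created_at >= '2016-09-01' AND created_at < '2016-10-01';
```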

Non-Relational Postgres

As the last talk for the conference, Bruce Momjian spent some time covering non-relational data types supported by Postgres. Starting with arrays, he continued through all of the greats like ranges, geometric types, XML, JSON and JSONB, composite row types, and even characters, since they can be interpreted any number of ways depending on tokenization contexts.

One particularly interesting point he raised was that range types index better for data that often follows a start/end dating model. This is because Postgres only really uses one index at a time per sub-clause. In addition, independent columns also have separately tabulated statistics. When these statistics are combined, they can lead to underestimations of row counts, which can lead to poor performance due to inefficient query plans.

That’s an important point that applies to many of the composite non-relational types. Keeping related data together in many cases is an optimization, even when a single column contains multiple data points. So it’s not just to reduce query complexity!
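As an illustration of the range-versus-two-columns point, here's a small sketch of my own; a single GiST-indexed range replaces two correlated start/end comparisons:

```sql
CREATE TABLE reservations (
    room_id bigint NOT NULL,
    during  tstzrange NOT NULL   -- start/end as one value, one statistic
);

CREATE INDEX reservations_during_idx
    ON reservations USING gist (during);

-- One indexed overlap test instead of two separate column comparisons,
-- so the planner's row estimate stays sane.
SELECT room_id
  FROM reservations
 WHERE during && tstzrange('2016-09-16', '2016-09-17');
```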

Bruce always keeps his presentations (including this one) on his website. I highly recommend perusing any that look interesting.

Closing Notes

As always, most of the presenters eventually link their slide decks on the Postgres Wiki. Though I personally wish the Postgres Open site itself would eventually publish them on the abstract pages for each talk. In either case, videos of all of these will eventually be on YouTube, so for any talks I didn’t cover, or that sounded good enough to see first-hand, keep an eye out.

Of the big names that normally attend, Robert Haas and Josh Berkus were conspicuously absent. For now, I’ll just blame the newly formed Postgres Vision for scheduling too closely to Postgres Open. It’s not the first conference to cannibalize another for speakers, and it won’t be the last.

In the end, I think this was a pretty good summary of Postgres past, present, and future. As always, I lament the general lack of end-to-end solution demonstrations like some of those we’ve seen in the past. That’s the risk of being a Postgres advocate: everyone wants to talk about some cool new method they’ve developed, but there isn’t a lot of real-world application.

Either way, there was definitely enough material to inspire a bit of intellectual exploration of database techniques. If you couldn’t attend, there are a couple more Postgres conferences this year, and a slew for 2017. Don’t be a stranger!
