Today I gave a presentation on “Read/Write Splitting with PHP” for Percona Webinars. If you missed it, you can still register to view the recording and my slides.
Thanks to everyone who attended, and especially to the folks who asked such great questions. I answered as many as I could during the session, but here are all the questions with my complete answers:
Q: I wasn’t able to start up the webinar until it was 20 minutes in progress. Is it possible to get a recording once it is over?
A: Yes, we will email a link to all webinar attendees with a recording of the webinar and a link to the slides within 24 hours. Folks who did not attend the webinar can also visit the webinar registration page and get access to the recording and slides.
Q: Are there similar plugins or resources for Java-based applications?
A: Recent versions of Connector/J include load-balancing and failover features, but these features don’t seem to measure slave lag; they seem to assume that all slaves are ready to be queried all the time.
Another solution, which could be used independently of the application language, would be MySQL Proxy with an RW Splitting plugin. However, if the splitting is delegated to a middleware layer, we would need another method to choose a given query’s slave lag tolerance. Perhaps embed the application’s choice in an SQL comment?
Q: Can the solution you proposed be used with Percona XtraDB Cluster? Or are there any other issues we should look for?
A: Percona XtraDB Cluster is a synchronous replication technology, so the slaves should never lag behind the master. It’s not necessary to check the “seconds behind master” or the binlog execution position, because all nodes in the cluster are always fully caught up with the original writer. The Doctrine class extensions I’m experimenting with are meant for environments that use traditional MySQL asynchronous replication.
Q: For randomizing the slave requests, should we use some randomizing function or HAProxy?
A: The Doctrine connection wrapper class randomly chooses one of the slaves for you. If the randomly selected slave is not usable because it’s outside the application’s specified tolerance, then the wrapper class chooses another slave; no slave is chosen more than once. If no slaves pass the tolerance test, then the wrapper class executes the query against the master.
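As a minimal illustration of that logic, here is a plain PHP/PDO sketch. The function names are invented, and I’m using Seconds_Behind_Master from SHOW SLAVE STATUS as the lag check only for simplicity; the actual Doctrine wrapper class is organized differently.

```php
<?php
// Sketch only: pick a random slave whose lag is within tolerance,
// trying each slave at most once, and fall back to the master.

/** Return the slave's lag in seconds, or null if replication is broken. */
function slaveLagSeconds(PDO $slave)
{
    $row = $slave->query('SHOW SLAVE STATUS')->fetch(PDO::FETCH_ASSOC);
    if ($row === false || $row['Seconds_Behind_Master'] === null) {
        return null; // not a slave, or replication threads are stopped
    }
    return (int) $row['Seconds_Behind_Master'];
}

/** @param PDO[] $slaves */
function pickReadConnection(array $slaves, PDO $master, $toleranceSeconds)
{
    shuffle($slaves); // random order, each slave tried at most once
    foreach ($slaves as $slave) {
        $lag = slaveLagSeconds($slave);
        if ($lag !== null && $lag <= $toleranceSeconds) {
            return $slave;
        }
    }
    return $master; // no slave passed the tolerance test
}
```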
Q: What are the advantages/disadvantages of filtering out tables from replication on the master side vs. the slave side?
A: You can configure filtering either on the master or on the slave. Master-side filtering records only a subset of changes to the binary log. You could use this to reduce binary log growth if you have some databases that don’t need scale-out or failover capability, for example tables used for logging less-critical information. In this scenario, not all queries could scale out to slaves, because some of the data would exist only on the master.
Alternatively, you can filter on the slaves. Each slave downloads the full binlog containing all changes, but each slave replays changes for only a subset of the binlog, skipping the other binlog entries. In this way, each slave could have an easier time keeping up with changes written by the master, and each slave would “specialize” by serving read queries for only specific databases. In this scenario, you couldn’t pick a slave at random; your application would need to direct queries to the slave(s) that have a copy of the right database.
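For reference, the two approaches correspond roughly to the following my.cnf settings. This is a sketch only: the database names are examples, and you would choose the filter options that match your own schemas and goals.

```ini
# Master-side filtering (my.cnf on the master):
# only these databases are recorded in the binary log
[mysqld]
binlog-do-db     = orders
binlog-ignore-db = logging

# Slave-side filtering (my.cnf on a slave):
# the full binlog is downloaded, but only these databases are replayed
[mysqld]
replicate-do-db     = orders
replicate-ignore-db = logging
```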
Q: Based on the example, I assume that the PoC is based on tolerance-type selection per connection. It would probably make sense to improve it to the query level. As you mentioned, many applications require different tolerance levels per query.
A: Yes, that’s a good point. I wrote the proof of concept code with a separate connection wrapper class for each tolerance mode, but another way to design it would be a single class that can switch between different modes on successive queries.
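To make that concrete, here is a rough sketch of what such a single-class interface could look like. The class name, the fluent setTolerance() method, and the routing comment are all invented for this example; this is not the proof-of-concept code from the webinar.

```php
<?php
// Hypothetical wrapper that switches lag tolerance per query instead of
// using one wrapper class per tolerance mode.
class TolerantConnection
{
    private $conn;          // underlying Doctrine\DBAL\Connection (DBAL 2.x)
    private $tolerance = 0; // seconds of slave lag the next query accepts

    public function __construct($conn)
    {
        $this->conn = $conn;
    }

    public function setTolerance($seconds)
    {
        $this->tolerance = $seconds;
        return $this; // allow fluent chaining
    }

    public function fetchAll($sql, array $params = array())
    {
        // A real implementation would choose a master or slave connection
        // here, based on $this->tolerance, before running the query.
        return $this->conn->fetchAll($sql, $params);
    }
}

// Usage: different tolerance on successive queries, same object.
// $rows = $db->setTolerance(0)->fetchAll('SELECT ...');  // must be fresh
// $rows = $db->setTolerance(60)->fetchAll('SELECT ...'); // a minute old is OK
```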
Q: Is RBR faster at replicating than SBR? RBR is faster than statement-based replication, correct?
A: Often that’s true: Row-Based Replication (RBR) can be faster than Statement-Based Replication (SBR). But there are exceptions too, so we can’t make such a broad generalization. It depends on the types of queries your application runs.
Q: Hello, were there any good experiences with MySQL Proxy (RW splitting)?
A: Most solutions with MySQL Proxy I see have stern disclaimers against using it in production. Even the official manual page for MySQL Proxy 0.8.2 still says: “MySQL Proxy is currently an Alpha release and should not be used within production environments.” The plugin for RW Splitting also makes similar disclaimers.
I assume it’s practically impossible for a middleware product like MySQL Proxy to assure stability, given that its intended use encourages developers to write their own plugins in Lua. They can’t be responsible for bugs in developers’ Lua plugins. :-)
Q: Would it work with the ORM layer of Doctrine, considering it would only work for one unit of work?
A: Yes, if you layer an ORM on top of the DAL, this makes it inconvenient to choose the degree of slave lag tolerance as I have shown. This is probably a good reason to redesign my prototype to use hints embedded in SQL comments, instead of instantiating different wrapper classes.
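As an illustration only, a hint like the one below could be embedded in the query text and extracted by the wrapper before deciding where to route the query. The /* max_lag: N */ comment format and the helper function are invented for this sketch; they are not part of my prototype or of Doctrine.

```php
<?php
// Sketch of reading a lag-tolerance hint embedded in an SQL comment.
function extractLagTolerance($sql, $default = 0)
{
    if (preg_match('#/\*\s*max_lag:\s*(\d+)\s*\*/#', $sql, $m)) {
        return (int) $m[1]; // tolerance in seconds, taken from the hint
    }
    return $default; // no hint: require a fully caught-up server
}

$sql = '/* max_lag: 300 */ SELECT * FROM report_totals';
$tolerance = extractLagTolerance($sql); // 300 seconds
```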
Q: Does storing the position in $_SESSION lead to local storage and limit scalability?
A: It depends on how you store $_SESSION. I have audited sites that stored $_SESSION in the MySQL database for persistence, but when we measured query load with pt-query-digest, we found that 30% of the database response time was solely due to updating the session table. For best scalability, I’d recommend storing $_SESSION in an in-memory data store such as Memcached or APC. Store in $_SESSION only values that are ephemeral and easily recreated; anything that needs to be persistent should be stored more deliberately in a persistent database.
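For example, with the PECL memcached extension you can point the PHP session handler at Memcached instead of MySQL. The server address and the session key below are placeholders.

```php
<?php
// Assumes the PECL memcached extension is installed and loaded;
// it provides the "memcached" session save handler.
ini_set('session.save_handler', 'memcached');
ini_set('session.save_path', '127.0.0.1:11211'); // example server address
session_start();

// Keep only ephemeral, easily recreated values in the session, e.g. the
// binlog position reached by this user's most recent write:
$_SESSION['last_master_pos'] = 'mysql-bin.000123:456789'; // example value
```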
Q: Can you compare / contrast this approach with using a write-through cache?
A: The InnoDB buffer pool is effectively a type of write-through cache. That is, when you change data on a given InnoDB page, the change is persisted immediately, but the page also resides in the buffer pool and it can serve subsequent queries. This allows speedy access to recently changed data on a given database instance, as long as the page remains in the buffer pool.
But the read/write split solution is intended for cases where the query load is too high for a single database instance, and you need to scale out to multiple database instances.
Q: Isn’t opening multiple database connections, to the master and then potentially to a number of “unsafe” slaves, going to carry a high performance cost?
A: Connecting to a MySQL database isn’t a great performance cost, compared to the queries themselves. If it is, then persistent connections can help with that.
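For example, with PDO you can request a persistent connection by setting PDO::ATTR_PERSISTENT; the DSN and credentials below are placeholders.

```php
<?php
// The connection is kept open and reused across requests instead of
// being re-established each time.
$pdo = new PDO(
    'mysql:host=master.example.com;dbname=app', // example DSN
    'app_user',
    'app_password',
    array(PDO::ATTR_PERSISTENT => true)
);
```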
By the way, we’re seeing reports that MySQL 5.6 non-persistent connections are much faster, so we have that to look forward to.
Q: What about write splitting (updates, inserts) on multiple MySQL servers? Or is it possible to split only read queries (SELECT etc.)?
A: You can set up a pair of MySQL instances to replicate from each other, so you can write to either instance. But you don’t gain much scale-out benefit from this, since both instances have to do approximately the same work to apply the changes. In other words, you end up multiplying work instead of multiplying capacity.
Also, some care is necessary to work with this kind of dual-writer architecture, as my colleague Jay Janssen described in his recent webinar, “The Hazards of Multi-writing in a Dual-Master Setup.” I encourage you to watch the recording of Jay’s presentation.
Percona XtraDB Cluster reduces the hazards of multi-writing by keeping all the instances in sync continuously.
Q: If a slave is slow because of bandwidth issues, “seconds behind master” will report zero if the SQL thread is caught up and the I/O thread is waiting.
A: Yes, the “seconds behind master” is really “seconds behind the latest binlog entry downloaded from the master,” which is a subtle distinction. It means that the slave could have an inaccurate idea of how much it’s lagging, if there are more binary logs yet to download. This can happen if the slave is offline for a while, or if the network between master and slave is slow or unreliable.
A solution to measure slave lag more accurately is to use pt-heartbeat, which inserts the system timestamp into a dummy table once per second on the master. As these timestamp values replicate to a slave, you can compare the latest value inserted to the slave’s system clock and get a much more reliable measure of slave lag (assuming the system clocks on master and slave are in sync). But I designed my proof of concept Doctrine classes not to assume use of pt-heartbeat.
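As a sketch of that approach (not part of my proof-of-concept classes), a PHP helper could read the latest pt-heartbeat timestamp from the slave and compare it to the slave’s clock. The percona.heartbeat schema and table name below are assumptions; they depend on the --database and --table options you pass to pt-heartbeat.

```php
<?php
// Sketch: measure replication delay from pt-heartbeat data instead of
// Seconds_Behind_Master. Assumes the master and slave system clocks are
// in sync, and that pt-heartbeat writes into percona.heartbeat.
function heartbeatLagSeconds(PDO $slave)
{
    $sql = 'SELECT TIMESTAMPDIFF(SECOND, MAX(ts), NOW()) AS lag
            FROM percona.heartbeat';
    $row = $slave->query($sql)->fetch(PDO::FETCH_ASSOC);
    return ($row === false || $row['lag'] === null) ? null : (int) $row['lag'];
}
```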
Thanks again for attending my webinar! Here are some more tips:
Check out upcoming Percona Training classes in North America and Europe.
Join Percona and the MySQL community in Santa Clara, April 22-25 2013 for the Percona Live MySQL Conference & Expo.
Watch more webinars from Percona in the future!