SQL Hot Backup
January 15, 2005 1:29 AM

MySQL hot backup: I need to set one up.
One of my clients has started to get worried about their data in their MySQL database, which they have become very dependent on. The database is currently backed up twice daily, once at noon and once at end of business, and the backups are encrypted and stored offsite. However, they want an up-to-the-minute backup in case things go very pear shaped with the server. What's the best way to do this? (Note: MySQL 3.23.x)
posted by SpecialK to Computers & Internet (12 answers total)
I believe that you want MySQL's replication features. I'm not a database guy though, so I don't know how reliable etc it is.
posted by reynaert at 2:07 AM on January 15, 2005

Pear shaped? Does something terrible happen when your server gets a fat ass?
posted by b1tr0t at 2:14 AM on January 15, 2005

b1tr0t: Allow me to remind you that snarky comments aren't really welcome here, and on top of that, this is the context I was using it in.
posted by SpecialK at 2:25 AM on January 15, 2005

SpecialK: If update/delete/insert/create commands are logged somewhere, you could simply save that log and run those commands on the most recently saved database. As the changes and log file would be incremental, it would be much smaller but just as effective as a minute by minute database backup.
posted by zelphi at 2:27 AM on January 15, 2005
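zelphi's log-replay idea is essentially what MySQL's binary log already does. A hedged sketch of how that might look on 3.23 (paths, hostnames, and the log filename pattern are made up; mysqld must be started with the binary log enabled for any of this to work):

```shell
# In my.cnf, turn on the binary log (hypothetical path):
#   [mysqld]
#   log-bin = /var/lib/mysql/binlog

# Closed binlog files are small and incremental, so shipping them
# offsite frequently is cheap compared to a full dump:
rsync -az /var/lib/mysql/binlog.[0-9]* backup@offsite.example.com:/backups/mysql/

# Recovery: restore the latest full backup, then replay the logs.
# mysqlbinlog turns the binary log back into SQL statements:
mysqlbinlog /backups/mysql/binlog.* | mysql -u root -p
```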

reynaert: Yeah, that's one solution.
The reason I wasn't considering it was that the backup needs to be offsite, and replication requires me to build custom RPMs each time they release a new version of MySQL, since the RPMs in the Fedora repository don't support binary logs out of the box.
Unfortunately, this has got to be secure, and as far as I can tell MySQL replication can't be sent over an SSH tunnel. (I don't think so, at least.)
posted by SpecialK at 2:31 AM on January 15, 2005

A quick Google shows replication via SSH or IPsec is definitely possible!
posted by hardcode at 3:29 AM on January 15, 2005
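For what it's worth, a sketch of what that SSH tunnel might look like (hostnames, ports, and credentials are made up). The slave connects to a locally forwarded port instead of talking to the master directly, so the replication traffic rides inside SSH:

```shell
# On the slave, forward local port 3307 to the master's MySQL port:
ssh -f -N -L 3307:127.0.0.1:3306 tunnel@master.example.com

# Then, in the slave's my.cnf, point replication at the tunnel:
#   [mysqld]
#   master-host     = 127.0.0.1
#   master-port     = 3307
#   master-user     = repl
#   master-password = secret
```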

I don't have as much experience with MySQL as MSSQL, so I could be talking out of my arse here. Generally, when you have mission-critical data and need frequent backups, the best thing to do is keep full backups as you do now, but also back up the transaction logs every hour or half hour.

In one place I worked we had two SQL servers side by side. The secondary one not only had the backup files on it, but those files were restored to its database. This meant that if things went pear shaped, we didn't have to wait for the data to be restored; we only had to change the data source to the secondary server.

I get the feeling I may have missed the point though....
posted by dodgygeezer at 3:31 AM on January 15, 2005

Store the transaction logs on a separate scratch disk and/or server. It not only improves your server performance, but it makes it easy to recover from pretty much any catastrophe.
posted by purephase at 6:53 AM on January 15, 2005

Based on the scenario given, I would echo the idea of implementing a solution using two local boxes: the production server and a backup box that gets a full refresh every 15 minutes to 1 hour from production (more often than every 15 minutes might lead to application performance issues). Those two boxes should be hot-swappable for core applications through a simple DNS name change. The backup box need not be a server; a decent workstation would do the trick. Then continue your regular off-site backups.

The advantages to this solution are having a quick-fix for server failure and improving data backup routines while avoiding large increases in offsite network traffic. If you do choose frequent off-site backups, have a plan for off-site network failures.

There are a lot of options for local replication. I haven't tried MySQL 3.23's built-in replication, but I've used MSSQL's DTS to replicate two non-MSSQL DBs pretty routinely. Alternatively, I seem to remember creating a script to copy the MySQL data files without taking the DB off-line and restoring on another machine, but that was long ago.
posted by McGuillicuddy at 7:05 AM on January 15, 2005

More echoing: Yes, replication. Replication also means that you can take your regular archival backups without any downtime or long locks on the master.

If you do go replication, I'd probably look at moving to 4.0.x because of one significant replication improvement in the 4.0 stream: the copying between master and slave and the replaying of the redo logs on the slave are separate processes, so if the replaying fails you're still pulling the logs over from the master.

Don't use Fedora's MySQL rpms at all (not supporting binlogs is prima facie evidence that they were built by someone who did not think things through) -- use MySQL's. Those are the same builds they give to paying customers, AFAIK. They know MySQL better than Fedora or Red Hat does (especially since Red Hat is a Postgres shop).

Even if you don't use replication the solution will need binlogs -- the standard enterprise database backup solution is to back up snapshots regularly but not too frequently, and then to back up the redo logs often. That way not only do you have a to-the-moment backup, but you also have all of the possible backups between the snapshot and the latest, just by stopping the replay of the redo logs earlier than the end. (If you do go that way, InnoDB tables will allow you to take that snapshot with no downtime, but you'll have to buy InnoDB Hot Backup to do it.)
posted by mendel at 9:27 AM on January 15, 2005
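To make mendel's snapshot-plus-logs routine concrete, here's a hedged sketch as cron entries (schedules, paths, addresses, and hostnames are invented; `--flush-logs` makes mysqldump rotate to a fresh binlog, so each dump is a clean starting point for replay):

```shell
# Full encrypted dump at noon and end of business, shipped offsite
# (% must be escaped in crontab entries):
0 12,18 * * * mysqldump --all-databases --flush-logs | gpg -e -r backups@example.com | ssh backup@offsite.example.com 'cat > /backups/mysql/dump-$(date +\%F-\%H).sql.gpg'

# Ship closed binlog files offsite every five minutes, giving
# point-in-time recovery between the full dumps:
*/5 * * * * rsync -az /var/lib/mysql/binlog.[0-9]* backup@offsite.example.com:/backups/mysql/
```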

The other info on this page is pretty helpful, but doesn't mention one thing to watch out for. If you're doing hot backups, it's very possible to have your backup corrupted in exactly the same way as your main server. You need to make sure you a) don't use this as a substitute for regular backups and b) make sure you kill the replication/whatever if you detect problems with the master.
posted by Caviar at 1:17 PM on January 15, 2005
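One hedged way to act on Caviar's warning (database and table names are made up; `SLAVE STOP` is the 3.23-era syntax): run a periodic check against the master and halt the slave's replication if corruption shows up, so the bad state isn't copied over your last good data:

```shell
# On the slave: check a critical table on the master and stop
# replication if the check reports a problem (hypothetical names):
if mysql -h master.example.com -e "CHECK TABLE mydb.orders" | grep -qi error; then
    mysql -e "SLAVE STOP"
fi
```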

Oh, I missed a bit: to confirm, yes, replication is just a tcp connection, and can be encrypted the same way you'd do any other (I do ssh port forwarding, but an stunnel tunnel works fine, as does a VPN between the two endpoints.)
posted by mendel at 2:54 PM on January 15, 2005
