How do I increase partition size in a RAID 1 array with minimal downtime?
May 4, 2012 6:08 AM Subscribe
RAID 1 Partition Filter: Physical capacity increased. How do we increase the partition size to match on a production machine? Details within.
We have a production server running Debian with a RAID 1. Yesterday, the 3ware RAID card showed that one of the 120 gig drives had become degraded and we replaced it with a 250 gig drive. After the array rebuilt, the tech on site decided to pull the other 120 gig drive and replace it with a 500 gig drive and increase the storage on the machine to 250 gigs. The array rebuilt again and all is running fine, however, the partition size remains at 120 gigs (minus swap, etc).
What's the best way to resize the partition with the least amount of downtime for the machine? Ideally, it would be on the fly. A few restarts would be ok. A backup/restore would probably be more trouble than it's worth, but if that's the only option, we'd like to hear about it anyways. The host handles mail and MySQL slave duties.
Snowflakery: Linux 3.1.0-1-amd64, Debian wheezy/testing, 3DM 2 version 2.07.00.003, ext3
Further info upon request. Thanks in advance.
(If you want to resize the non-root partition, you can likely do it online.)
posted by devnull at 6:53 AM on May 4, 2012
A backup/restore is the "industry standard best practice".
However, I run a server at home with a Linux (Fedora) software RAID-5 that has been expanded online from 40 GB component drives all the way up to 1000 GB drives.
What you need to do is this: (and please double check my work before implementing it!)
1- Make sure you have a good backup.
2- Make sure the array is healthy.
3- Use the 3ware utilities to increase the size of the raw volume. This may have already been done automatically.
fdisk -l
(that's a lowercase L) should list the raw size of the /dev/sdX RAID 1 device.
4- If that's successful, use fdisk (or your favorite partition utility) to increase the size of the partition.
5- use
resize2fs /dev/sdX
to grow the filesystem.
Note: I don't use a partition table on mine - the raw drives combine to a raw virtual device, which is then filled completely with an ext3 filesystem. So I don't have to do this step. You'll have to investigate whether this can be done with the filesystem mounted. It *should* be possible, since I think ext3 doesn't really pay any attention to the end of the partition and will just ignore the extra space in the partition until the filesystem is grown. I'd probably do that part offline just to be safe. Warning: this can only work if you aren't moving the beginning of the partition! Don't move the beginning of the partition! That will wreck everything. You can only move the end of it, and anything on the disk after the end of it will be wrecked.
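The steps above can be sketched roughly as follows. The controller (c0), unit (u0), device, and partition names are assumptions for illustration - check yours first, and everything here needs root:

```shell
# 2- confirm the array is healthy before touching anything
#    (the unit status should read OK, not DEGRADED or REBUILDING)
tw_cli /c0/u0 show

# 3- the 3ware firmware may already have grown the unit after the
#    rebuilds; verify the kernel sees the new raw size
fdisk -l /dev/sda

# 4- grow the partition with your preferred tool (fdisk, parted, ...),
#    keeping the SAME start sector - only the end moves

# 5- grow the filesystem to fill the enlarged partition
resize2fs /dev/sda2
```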
It would basically work like this: (using fake numbers)
If your raid disk is /dev/sda, then the current partition table would have looked like this prior to the disk swapping:
disk /dev/sda, size 120,000m
           type  size      start    end
/dev/sda1  boot  100m      0        100
/dev/sda2  ext3  118,900m  101      119,000
/dev/sda3  swap  1000m     119,001  120,000
After swapping disks and changing the raid container size, it probably looks like this:
disk /dev/sda, size 250,000m
           type  size      start    end
/dev/sda1  boot  100m      0        100
/dev/sda2  ext3  118,900m  101      119,000
/dev/sda3  swap  1000m     119,001  120,000
Then you do the swap thing, leaving you with this:
disk /dev/sda, size 250,000m
           type  size      start    end
/dev/sda1  boot  100m      0        100
/dev/sda2  ext3  118,900m  101      119,000
/dev/sda3  swap  1000m     249,001  250,000
Finally, you edit the ext3 partition to fill in the empty space:
disk /dev/sda, size 250,000m
           type  size      start    end
/dev/sda1  boot  100m      0        100
/dev/sda2  ext3  248,900m  101      249,000
/dev/sda3  swap  1000m     249,001  250,000
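One common way to do that "edit" with plain fdisk is to delete the partition and recreate it with the identical start sector and a later end - the table entry changes, the data on disk doesn't. The device names are the made-up ones from the example above; substitute your own, and have that backup ready:

```shell
# Grow a partition by rewriting its table entry. Deleting a partition
# in fdisk touches only the table, not the data, as long as you
# recreate it with the SAME start sector.
fdisk /dev/sda
#   p          <- print the table; note sda2's exact start sector
#   d, 2       <- delete partition 2 (table entry only)
#   n, p, 2    <- recreate it, entering the SAME start sector and a
#                 later end
#   w          <- write the new table

# ask the kernel to re-read the partition table without a reboot
partprobe /dev/sda
```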
I have run resize2fs online with no problems. It may take forever, and slow down the server, if the server is under heavy usage. Your best bet might be to take the machine offline at the end of the day, resize the partition and then let resize2fs run overnight.
Basically, the filesystem is contained within the partition, which is contained within the raw device, which is created from the raid disks.
Caveat: if you have a swap partition rather than a swap file, you'll need to create a swap file, initialize it with mkswap, enable it with swapon, kill swap on the partition with swapoff, create a new swap partition, and turn swap back on there before expanding the ext3 partition. It might be easier to just keep swap in a file. I don't think it affects performance very much. If it does, you might be better off using a separate disk for swap in the first place.
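The swap-file shuffle described in that caveat might look something like this; the path and the 1 GB size are arbitrary examples, and /dev/sda3 is the hypothetical swap partition from the tables above:

```shell
# Move swap from the partition to a file so /dev/sda3 can be moved.
# On ext3, create the file with dd (fallocate-backed swap files are
# not safe on older filesystems/kernels).
dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile     # swap files must not be world-readable
mkswap /swapfile        # write the swap signature
swapon /swapfile        # enable the file as swap
swapoff /dev/sda3       # release the partition for repartitioning
```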
posted by gjc at 7:04 AM on May 4, 2012
You can make an ext3 filesystem bigger without unmounting it using resize2fs, but you need to unmount before shrinking one.
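In other words (device and mount point here are placeholders, not from the original thread):

```shell
# Growing ext3 works while the filesystem is mounted:
resize2fs /dev/sda2          # expands to fill the partition

# Shrinking requires unmounting and a forced check first:
umount /mnt/data
e2fsck -f /dev/sda2
resize2fs /dev/sda2 100G     # explicit, smaller target size
```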
Also, I totally fail to understand putting swap partitions after the filesystem partitions. Especially on modern drives, swap partitions are so small compared to the total drive size that putting them first (on the disk's outermost tracks) has virtually no impact on filesystem performance, yet can speed up swap speed quite significantly.
posted by flabdablet at 10:33 AM on May 4, 2012
Look at resize2fs to do this.
You may need to unmount the partition to run it, depending on your kernel; this may or may not mean downtime, depending on how your server is configured.
Here's a bit more: http://tldp.org/HOWTO/LVM-HOWTO/extendlv.html (that doc is about what to do after extending a LVM partition, which should be a similar enough situation to what you've done). There are other HOWTOs lying around the Interweb. Just search for "resize filesystem linux" or something like that.
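For comparison, if the machine had been set up with LVM as in that HOWTO, the same job is essentially two commands; the volume group and logical volume names below are made up for illustration:

```shell
# Grow a logical volume and the ext3 filesystem inside it.
lvextend -L +120G /dev/vg0/root   # add 120 GB to the logical volume
resize2fs /dev/vg0/root           # grow ext3 into the new space
```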
posted by chengjih at 6:26 AM on May 4, 2012 [1 favorite]