Computer RAID for maximum write speed?
April 17, 2013 5:32 PM Subscribe
I have a somewhat unusual computer requirement: I need to be able to write data to disk at more than 1 gigabyte per second. Can you help me make this happen?
We are trying to run an Andor Zyla camera at its full speed. At full speed, this particular camera puts out 1.1 GB/sec of data and we would like to be able to record to disk at that speed. The vendor recommends four solid state drives in RAID 0, which we have, but the highest continual write speed we've been able to achieve is about 950 MB/sec, which is not quite fast enough.
We have an Intel RMS25PB080 RAID card and four 240 GB Intel 520 series SSDs. On paper it looks like it should be capable of the speeds we need but it's not quite there, and I've been unable to find much documentation at all about how to optimize a RAID for sequential write performance. So far we've found that having write caching on in Windows is critical, and that there are minor effects from changing the RAID from write-back to write-through mode. We're testing the write speed using CrystalDiskMark, writing 4 GB at a time.
Do you have any ideas as to what we should try testing in the RAID configuration options? Or other things we should try? The PC is a dual quad-core Xeon (E5 2643) with 32 GB 1600 MHz RAM.
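For anyone who wants to reproduce this kind of test outside CrystalDiskMark, here is a rough sketch of a sequential-write benchmark in Python; the target path and sizes are placeholders, and it goes through the Windows file cache rather than doing unbuffered I/O, so treat the numbers as approximate.

```python
# Rough sequential-write throughput check (sketch; path and sizes are placeholders).
# Random data is used so that SSD controllers which compress on the fly can't
# inflate the result -- this turns out to matter later in the thread.
# Note: this goes through the Windows file cache; it is not unbuffered I/O.
import os
import time

TARGET = r"E:\bench.tmp"   # hypothetical volume on the RAID 0 array
BLOCK = 8 * 1024 * 1024    # 8 MiB per write, large enough to stay sequential
TOTAL = 4 * 1024 ** 3      # 4 GiB total, matching the CrystalDiskMark test size

block = os.urandom(BLOCK)  # incompressible payload
written = 0
start = time.perf_counter()
with open(TARGET, "wb") as f:
    while written < TOTAL:
        f.write(block)
        written += BLOCK
    f.flush()
    os.fsync(f.fileno())   # make sure the data actually reached the array
elapsed = time.perf_counter() - start

print(f"{written / elapsed / 1e6:.0f} MB/s sustained")
os.remove(TARGET)
```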
How much data do you need to capture at once? Maxing out on RAM and using a RAM disk would be the absolute fastest option, assuming your system supports enough memory to fit all the data you need to capture.
posted by zsazsa at 6:01 PM on April 17, 2013
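As a rough sanity check on the RAM-buffering idea, here is the arithmetic using only the figures already quoted in this thread; the amount of RAM free for buffering is an assumption.

```python
# Back-of-the-envelope check on RAM buffering, using the figures from this thread.
camera_rate = 1.1e9   # bytes/s from the Zyla at full speed
disk_rate   = 0.95e9  # bytes/s the current RAID 0 sustains
buffer_ram  = 24e9    # assume ~24 GB of the 32 GB is free for buffering

# The buffer fills at the difference between the two rates.
print(f"buffer absorbs the shortfall for ~{buffer_ram / (camera_rate - disk_rate):.0f} s")

# A pure RAM disk (no RAID involved) holds this much footage:
print(f"a {buffer_ram / 1e9:.0f} GB RAM disk holds ~{buffer_ram / camera_rate:.0f} s of capture")
```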
I am not particularly experienced in high-throughput I/O, but what happens if you use only two of the drives in RAID 0? If you get about half the speed (roughly 950/2 MB/sec), that points to the individual drives (or possibly the SATA connection from card to drive) as the bottleneck. If you still get 950 MB/sec, then the bottleneck is more likely somewhere between the data in memory and the card (OS, PCI Express, the card itself, etc.).
The above suggestion for trying larger writes and taking the RAID/disks out of the equation is also good.
posted by yaarrgh at 6:01 PM on April 17, 2013 [2 favorites]
You might check this article - they got close to your requirement using those drives and a LSI MegaRAID 9260-8i.
posted by nightwood at 6:33 PM on April 17, 2013
The card is rated for 6Gb/s...6 gigabits per second. Call Intel first, who will probably want you to make sure your motherboard can transfer 1.1 gigabytes/s between the camera interface and the RAID, and if all of that checks out, start tweaking at the drive level. You may need to max out all the ports on the card to get top speed.
posted by rhizome at 6:43 PM on April 17, 2013
rhizome - that's 6 Gb/s per SATA port. The motherboard interface is PCI Express 2 x8, so 500 MB/s * 8 = 4 GB/s.
pombe - your NTFS cluster size should match your RAID controller stripe size. Given your workload, I'd go with the NTFS maximum (64 KB, which the Intel also supports).
posted by djb at 6:47 PM on April 17, 2013
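To put the numbers in rhizome's and djb's comments side by side, here is a quick worked calculation of the theoretical ceilings involved; these are approximations, and real-world throughput lands below them.

```python
# Approximate theoretical ceilings for this setup; real-world throughput lands
# below these once protocol and filesystem overhead are counted.
GB = 1e9

sata_port  = 6e9 / 10        # SATA III: 6 Gb/s with 8b/10b line coding ~= 600 MB/s per port
pcie2_lane = 500e6           # PCIe 2.0: ~500 MB/s per lane
pcie_x8    = 8 * pcie2_lane  # the controller's x8 host link

drives = 4
target = 1.1 * GB            # what the camera produces

print(f"SATA ports, aggregate : {drives * sata_port / GB:.1f} GB/s")
print(f"PCIe 2.0 x8 host link : {pcie_x8 / GB:.1f} GB/s")
print(f"needed per drive      : {target / drives / 1e6:.0f} MB/s")
```

The per-drive requirement of roughly 275 MB/s is worth keeping in mind when the single-drive measurements come up later in the thread.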
The hardware appears to be capable enough. I would make sure that the RAID card is in SATA 6 Gb/s mode and not falling back. Then I'd make sure the adapter card is taking full advantage of all 8 lanes of PCI Express and also not falling back. (According to some googling, x8 PCI Express maxes out at 2 GB/s, so you are real close to the theoretical maximum of the backplane once filesystem overhead, etc. is accounted for.)
Then I would run tests on the stripe sizes of the underlying RAID, as well as the NTFS block size.
There are some filesystem tweaks you can do, but I can't remember them; a Google search for NTFS speed tweaks or something like that will turn them up.
Lastly, I'd run individual tests on each drive to make sure they are all performing up to their specifications. In fact, you might want to do that first. Experiment with the individual drives until you figure out the fastest transfer rate, and then use that to set up your RAID. If the drives are fastest when they get data in 4 KB chunks, that's your stripe size. If it's 64 KB chunks, use that. Etc.
Look at your performance monitor and see if any of the indicators are pegging.
But honestly, 1+ GB/s is a lot to ask. It's possible the controller card just can't go that fast. You'll have to check the specs and see if the card can even do what you ask. (Even though the interconnects are plenty fast enough, the card's processor might not be able to do it.)
This is something I would consider using Serial Attached SCSI (SAS) for, loading up a rack of 15k RPM 2.5" SAS drives. I wouldn't expect SSDs to be cost effective for your application: an SSD's big upside is in random transfers, not raw throughput. Plus, SSDs get slower as they wear out. Even if you get up to the right speed now, what happens in 3 months when all the drives start getting slow?
posted by gjc at 6:59 PM on April 17, 2013
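gjc's suggestion to benchmark a single drive across chunk sizes before choosing a stripe size can be scripted along these lines; this is a rough sketch with placeholder paths and sizes, and it goes through the OS cache, so the absolute numbers are indicative only.

```python
# Sweep write block sizes on a single (non-RAID) SSD; the point where throughput
# flattens out is a reasonable starting candidate for the RAID stripe size.
# Path and sizes are placeholders, and this goes through the OS cache.
import os
import time

TARGET = r"F:\sweep.tmp"   # hypothetical single-drive volume
TOTAL = 1 * 1024 ** 3      # 1 GiB per block size keeps the sweep quick

for kib in (4, 16, 64, 256, 1024):
    block = os.urandom(kib * 1024)
    written = 0
    start = time.perf_counter()
    with open(TARGET, "wb") as f:
        while written < TOTAL:
            f.write(block)
            written += len(block)
        f.flush()
        os.fsync(f.fileno())
    rate = TOTAL / (time.perf_counter() - start) / 1e6
    print(f"{kib:>5} KiB blocks: {rate:.0f} MB/s")

os.remove(TARGET)
```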
Response by poster: Thanks for all the suggestions. To answer some questions:
We do have to use Windows. The microscope this camera is attached to only has drivers for Windows.
We can write to RAM faster than we can write to disk. The image capture software will buffer data to RAM and if we run the camera too fast we see the RAM buffer filling up as the disk can't keep up.
I'll be testing the various suggestions later this week.
posted by pombe at 7:12 PM on April 17, 2013
Have you contacted the vendor and asked them specifically what exact hardware they're using? There may be only a few raid cards that can actually hit the theoretical max, or a few SSDs. Or maybe it's your motherboard, etc.
I know that some SSDs don't play super nice with RAID, and I've also heard of the reverse, with some controller chipsets or even specific models of card having weird bottleneck issues. It might also be something with your specific motherboard or motherboard chipset, etc. Have you tried using onboard video and putting the controller in the primary PCIe x16 slot? Have you tried it in another slot besides that one, if one is available?
The first thing I'd think, though, is that maybe 4 SSDs just isn't enough to really get that throughput. I'd probably try slapping in a spare SSD to make a 5-disk array before I tried almost anything on that list, if possible.
posted by emptythought at 7:16 PM on April 17, 2013 [1 favorite]
Is your data compressible? There are fast compression libraries (famously, LZO, although Google's new Snappy might beat that) which can handle 250 MB/s on a single core, and you have eight sitting around largely unused.
posted by d. z. wang at 7:46 PM on April 17, 2013
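A quick way to sanity-check the compression idea before wiring it into the capture path is to measure throughput and ratio directly; this sketch assumes the python-snappy binding is installed, and uses random bytes to stand in for raw camera frames, which is the worst case for any compressor.

```python
# Measure Snappy compression ratio and single-core throughput on noise-like
# data versus repetitive data. Assumes the python-snappy package is installed.
import os
import time
import snappy

def measure(label, payload):
    start = time.perf_counter()
    compressed = snappy.compress(payload)
    elapsed = time.perf_counter() - start
    ratio = len(compressed) / len(payload)
    print(f"{label:<11} ratio {ratio:.2f}, {len(payload) / elapsed / 1e6:.0f} MB/s on one core")

size = 256 * 1024 * 1024
measure("noise-like", os.urandom(size))  # ~incompressible, like sensor noise
measure("repetitive", b"\x00" * size)    # best case, for comparison
```

As the later follow-ups in this thread suggest, noisy raw sensor data tends not to compress much, so it is worth measuring before building anything around it.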
Is it possible you could get in touch with someone at an affiliated supercomputer center? 7 Gb/s disk write speed is getting into HPC territory.
posted by demiurge at 8:38 PM on April 17, 2013
Response by poster: I think I've figured out what's going on with our RAID speed. If I break up the RAID and just look at a single Intel 520 SSD, I get ~242 MB/sec write speed. This squares well with the ~960 MB/sec we see for all four drives in RAID 0.
The individual drive speed is far short of the 500 MB/sec the drives are benchmarked at, but it turns out the drives are benchmarked with highly compressible data, whereas our data is essentially incompressible (raw camera images with lots of random noise). Some other drive manufacturers use controllers that don't rely on compression to reach their rated 500 MB/sec write speed, so I am going to see if we can swap our Intel 520s for Samsung 840 Pros.
posted by pombe at 11:11 AM on April 19, 2013 [1 favorite]
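This kind of controller behaviour can be exposed directly by writing the same amount of zero-filled and random data to a single drive and comparing; here is a rough sketch with a placeholder path, and since it goes through the Windows cache the absolute numbers should be taken loosely.

```python
# Write the same amount of compressible (zeros) and incompressible (random) data
# to a single SSD and compare. A large gap suggests the controller compresses on
# the fly, as with the SandForce-based Intel 520s.
import os
import time

TARGET = r"F:\compress_test.tmp"   # hypothetical single-drive volume
TOTAL = 2 * 1024 ** 3              # 2 GiB per pass
BLOCK = 8 * 1024 * 1024

def write_pass(label, block):
    written = 0
    start = time.perf_counter()
    with open(TARGET, "wb") as f:
        while written < TOTAL:
            f.write(block)
            written += len(block)
        f.flush()
        os.fsync(f.fileno())
    print(f"{label}: {TOTAL / (time.perf_counter() - start) / 1e6:.0f} MB/s")

write_pass("zeros (compressible)   ", b"\x00" * BLOCK)
write_pass("random (incompressible)", os.urandom(BLOCK))
os.remove(TARGET)
```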
Response by poster: One final followup: We now have four Samsung 840 Pro 256 GB SSDs on the same RAID card as before and are seeing sequential write speeds of ~2.1 GB/sec, testing with random data. So if you need to write incompressible data, beware how your SSD is benchmarked.
posted by pombe at 5:09 PM on May 10, 2013
Does it have to be Windows?
If you configure, say, a 10 GB RAM disk in Windows, what's the highest rate at which you can write, say, 8 GB to it? If the bottleneck is somewhere in the system aside from the RAID setup, this may help identify it.
posted by RustyBrooks at 5:56 PM on April 17, 2013 [1 favorite]
This thread is closed to new comments.