Known-good Linux PCI-e SAS HBAs?
November 17, 2009 3:50 AM

Can you recommend a known-good 4-port PCI-Express SAS adapter for Linux? I need out-of-the-box in-kernel driver support.

I'm using mdadm software RAID-5 and 6 on Linux. I need to find a reasonably fast and solid PCI-Express 4-8 port SAS adapter, to which I will connect several standard SATA drives. I would prefer to pay $100-$200, but can go higher if needed.
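
For context, the existing arrays are plain mdadm ones, created more or less like this (device names and drive counts below are just examples, not my exact layout):

    # Illustrative sketch only; real device names and counts differ on my box.
    mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
    mkfs.ext3 /dev/md0
    cat /proc/mdstat     # watch the initial resync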

I've got a Supermicro AOC-SASLP-MV8 card, which is almost the perfect card, except that when I start the mdadm array on it, the mvsas driver barfs and drops the drives right off the controller. The /dev entries disappear and everything.

I've seen recommendations for LSI controllers, but linux-raid mailing list reports say that these similarly freak out and drop drives when you send SMART commands over them.

I'm kind of surprised by this seeming immaturity of SAS drivers in Linux, and am quite certain that I just haven't yet found some excellent card that's already out there with solid driver support. Can you help? Any hints most welcome.
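
In case it helps anyone answering: my way of checking whether a card has a usable in-kernel driver is nothing fancier than this, using the current mvsas card as the example:

    # Which driver claims the HBA, and is its module actually present?
    lspci -nn | grep -i sas      # find the controller and its PCI ID
    lspci -k                     # shows "Kernel driver in use" for each device
    modinfo mvsas                # module metadata for the driver in question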
posted by krilli to Computers & Internet (6 answers total)
 
It is hinted that Adaptec has vendor support, so that may be worth looking into. The linked page hints that some of the problems with Linux SAS support have to do with arguments about whether the Adaptec-supplied code was too Adaptec-specific.
posted by idiopath at 4:52 AM on November 17, 2009 [1 favorite]


I think you're looking for something that is different than what you have.

mdadm - software raid - is "fake" raid in that the CPU handles the parity calculation.

It's more effective to offload this parity and all of the I/O to a card with a customized CPU and firmware that's designed to handle that sort of thing.

Unfortunately, each card maker has its own CPU, its own firmware, and its own way of initializing and managing disks... none of which match the Linux mdadm way of doing things. That means you're not going to be able to push the CPU-based mdadm software RAID straight off to the card -- you'll need to back up the data on the array first, then build the array using the controller's software, and then restore the data to the newly built array that's being managed by the card. At that point, all you'll see is one SCSI disk device that contains all your data, and you won't use mdadm but instead will use the utilities on the card's controller (usually accessed during boot).
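
Very roughly, that migration looks something like the following; the device names, filesystem and paths are placeholders, not your actual setup:

    # Hypothetical sketch; adjust devices, filesystem and mount points.
    rsync -aHAX /mnt/array/ /mnt/backup/     # 1. back up the mdadm array
    umount /mnt/array
    mdadm --stop /dev/md0                    # 2. tear down the software array
    # 3. build the new array in the controller's boot-time utility; it then
    #    presents one disk (say /dev/sda): partition, format, and restore.
    mkfs.ext3 /dev/sda1
    mount /dev/sda1 /mnt/array
    rsync -aHAX /mnt/backup/ /mnt/array/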
posted by SpecialK at 11:00 AM on November 17, 2009


Thanks for the answers.

Actually, SpecialK, Linux software RAID is The Bomb. (Solaris and ZFS even more so.) In recent years there has been some very cool work done on RAID-5 and -6, on both theory and implementation. Couple this with much faster buses between disks, RAM and CPU, plus new intrinsics for parity calculations being added to the CPU architectures, and software RAID is IMO more efficient these days. I'm running "fake" mdadm RAID-5 and -6 arrays over dumb JBOD controllers, and they're easily capable of pulling data streams that saturate the drives without saturating either the buses or the CPU. No latency to speak of either.

Besides, fault-tolerance is much better. I can recover the array in any machine as it doesn't rely on any controllers or black boxes, only ubiquitous hardware and free software.
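
That recovery really is as simple as it sounds; roughly this on whatever machine the drives end up in, with no controller configuration involved:

    # Plug the drives into any Linux box and reassemble from the md superblocks:
    mdadm --examine --scan      # lists the arrays found on the attached drives
    mdadm --assemble --scan     # assembles and starts them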

Seriously, check it out; any doubts I had about software RAID have been happily dispelled. The ZFS introduction videos are an awesome look at what software RAID can do these days.

Currently I've got a striped array built on top of two RAID-6 arrays, all mdadm. The speed is insane, and it's practically nuke-proof. What I need now is to expand the arrays, so I'm simply looking for more SATA/SAS ports. The only requirement is that they be reasonably fast and stable, hence the search for a good controller that fits in the machine.
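
The layout, very roughly (drive names and counts are illustrative, not my exact config):

    # Two mdadm RAID-6 arrays, striped together with a RAID-0 on top:
    mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
    mdadm --create /dev/md1 --level=6 --raid-devices=6 /dev/sd[h-m]
    mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1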
posted by krilli at 11:53 AM on November 17, 2009


krilli, I like Linux software RAID too, but there are downsides. I/O for parity calculations ends up hitting the system buses rather than being isolated on the RAID card. Eight drives bursting to or from cache are going to eat about 3 GB/s of PCIe capacity (8 x SATA-II's 3 Gbit/s link rate), which will need at least a PCIe x4 slot.

The bigger issue is that, near as I can tell, the md layer doesn't really handle write barriers for anything above RAID 1, which makes it hard to guarantee that multi-disk writes land on disk in a consistent state. To work around this, software RAID devotees end up disabling the disk write caches, which can really hurt write performance, so they use battery-backed caching RAID controllers in JBOD mode to make sure that all writes will land eventually. This is changing, though; it looks like the md maintainer recently submitted a patch to address it.
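
If you do go the conservative route, turning the drive caches off is straightforward; something along these lines, per member drive (device names are just examples):

    # Disable the on-disk write cache on each array member (costs write speed):
    for d in /dev/sd[b-e]; do hdparm -W 0 "$d"; done
    hdparm -W /dev/sdb      # check the current setting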

All that said, I can't recommend a specific controller.
posted by Good Brain at 12:57 PM on November 17, 2009


After some research, the Promise SuperTrak TX8650 looks good. It's a PCIe x8 card, so it won't fit in as many machines as the Supermicro card does. Will be trying one, as it looks like a good fit for us.

Areca ARC-1300 might be promising also.
posted by krilli at 3:25 AM on November 27, 2009


Also, the 3Ware 9690SA is promising.
posted by krilli at 12:01 PM on November 27, 2009

