Which internal SSD drive to buy for fastest seek time?
November 16, 2009 11:25 AM
I use a piece of scientific software which has massive disk-based binary database files (>100GB). The application does random seeks into these files looking for a match. It will do millions of seeks, but few long runs of "reads".
I'm looking to buy an SSD drive to speed up this application.
My requirements:
fast seek time
larger than 128GB
commercially available (i.e. I can get it consistently online)
under $1500US, but under $1000 is best
Will be used under Windows
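For reference, here's a rough sketch of the access pattern I'm trying to speed up. The file name, record size, and seek count are hypothetical stand-ins; the real software's format differs:

    import os
    import random
    import time

    DB_PATH = "database.bin"   # hypothetical; the real file is >100GB
    RECORD_SIZE = 4096         # assumed 4 KB per lookup
    NUM_SEEKS = 100000         # the real workload does millions

    fd = os.open(DB_PATH, os.O_RDONLY | getattr(os, "O_BINARY", 0))
    file_size = os.fstat(fd).st_size

    start = time.time()
    for _ in range(NUM_SEEKS):
        # Jump to a random record boundary and read a single record.
        offset = random.randrange(file_size // RECORD_SIZE) * RECORD_SIZE
        os.lseek(fd, offset, os.SEEK_SET)
        os.read(fd, RECORD_SIZE)
    elapsed = time.time() - start
    os.close(fd)

    print("%.0f reads/sec, %.3f ms average" %
          (NUM_SEEKS / elapsed, elapsed * 1000.0 / NUM_SEEKS))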
They all have low seek times, but Intel SSDs are much faster than most of the competition in benchmarks, especially the "-E" versions. In particular, there were problems with some drives last year where they would lock up after doing lots of small reads and writes.
posted by Chocolate Pickle at 11:31 AM on November 16, 2009
Tom's Hardware did a performance comparison across a bunch of drives, and the Intel came out way ahead.
Here's a chart for actual read access time. Almost all the drives clocked in at 0.10ms, which appears to be the smallest quantum that the benchmarking software can resolve.
On the other hand, check out the Workstation benchmark pattern. Other drives are just starting to catch up with the "-M" Intel drives.
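To put that number in perspective (back-of-the-envelope arithmetic, not from the chart): at 0.1 ms per access, ten million random seeks take 10,000,000 × 0.0001 s = 1,000 s, about 17 minutes; at a typical spinning disk's ~10 ms per seek, the same workload would need roughly 28 hours.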
posted by delmoi at 11:39 AM on November 16, 2009
You should read Anand Lal Shimpi's recent revisitation of his exhaustive, canonical article on SSDs from earlier this year:
The SSD Relapse: Understanding and Choosing the Best SSD.
It's stretched out over a slightly excruciating 27 pages, but I read it last night and it's superlatively informative.
posted by Liver at 12:42 PM on November 16, 2009
I'm not sure if SSD drives do this, but it's possible they have a sequential read mode that would reduce addressing latency; if your database can be organized, that would be helpful. I don't know if you can fiddle with the database itself, but standard DBs can build indexes for faster searching, or reorganize the layout if a lot of rows were deleted. Additionally, newer filesystems will use extents to store metadata, so that your btree will be substantially compressed when linear. But given you're tied to Windows, I'd stick with NTFS.
SSDs don't have zero seek time, they have constant seek time, which should solve your problem for the most part even if you can't reorganize the data. You can get an Intel drive within your budget. Judging by the AnandTech article, that'll get you through 100GB of random reads in around 30 minutes. If you really need read performance, you might buy two and mirror them with a quality RAID controller. Don't get a cheap one or you may find single-disk performance is faster!
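To illustrate the organize-your-accesses idea above, a minimal sketch (hypothetical file name and record size): if the application can batch its lookups, visiting the offsets in sorted order turns scattered seeks into one forward sweep, which helps a lot on spinning disks and still plays nicer with read-ahead on SSDs.

    RECORD_SIZE = 4096  # hypothetical record size

    def batched_lookup(path, offsets):
        # Visit the requested byte offsets in ascending order so the
        # access pattern is a forward sweep rather than random jumps.
        results = {}
        with open(path, "rb") as f:
            for off in sorted(offsets):
                f.seek(off)
                results[off] = f.read(RECORD_SIZE)
        return results

    # e.g. batched_lookup("database.bin", [883 * RECORD_SIZE, 12 * RECORD_SIZE])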
posted by pwnguin at 1:19 PM on November 16, 2009
I purchased a Corsair 256GB SSD about three weeks ago and have been having great luck with it. It also wasn't too pricey at about $670. I can't offer any hard figures on the drive, but if you have an OS X program that runs representative benchmarks of the type of disk I/O you plan on doing, I'd be happy to run it.
posted by cgomez at 1:30 PM on November 16, 2009
Oops, I suck. The X25-E isn't big enough. But it is wicked, wicked fast. If you can afford it, you could RAID a few of them.
posted by procrastination at 1:37 PM on November 16, 2009
Ideally you want an Intel X25-E, but they don't make them big enough for your needs. I'd have a look at the Intel X25-M 160 GB drive.
posted by Ctrl_Alt_ep at 2:20 PM on November 16, 2009
As I understand it (I am by no means an expert on SSDs or RAIDs), you can get about double the read performance by pairing two drives up in a RAID-0. The usual problem with RAID-0 (catastrophic data loss if one drive crashes) is mitigated by the fact that the SSDs don't (normally) fail mechanically.
It looks like Intel-brand 80GB SSDs run about $260 right now, so a pair is well within your budget, plus whatever hardware RAID you want.
posted by neckro23 at 2:39 PM on November 16, 2009
You should read this Slashdot thread, which tends to wander from the main topic but still has information you should see - specifically about the poor reliability of JMicron-based drives and overall love that the majority have for their Intel drives.
posted by exhilaration at 3:04 PM on November 16, 2009
"The usual problem with RAID-0 (catastrophic data loss if one drive crashes) is mitigated by the fact that the SSDs don't (normally) fail mechanically."
You can also get double the read performance with mirroring. Striping just lets you hit disk faster. The problem is, these drives are fast enough that a poor controller will actually slow things down. And floam has a good point about TRIM.
If you wanted to go crazy, you could also build an absolutely massive RAM drive. For bioinformatics, where you have an upper bound on data collected, this can be a total win, even in the days when 4GB+ RAM was crazy expensive. Just don't forget the UPS, backups and other fun tools.
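A lighter-weight cousin of the RAM drive, as a sketch (hypothetical file name, record size, and offset): memory-map the file and let the OS page cache keep the hot pages in RAM, assuming the machine has enough free memory for the working set.

    import mmap

    RECORD_SIZE = 4096            # hypothetical record size
    offset = 12345 * RECORD_SIZE  # hypothetical record to fetch

    with open("database.bin", "rb") as f:
        # Map the whole file read-only. Pages land in the OS page cache
        # on first touch, so repeated lookups are served from RAM.
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        record = mm[offset:offset + RECORD_SIZE]
        mm.close()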
posted by pwnguin at 3:53 PM on November 16, 2009
The Intel SSDs generally have the reputation of being the best. Two Intel 160 GB SSDs in a RAID 0 configuration would be within your budget.
Be sure to get the G2 version and not the older G1. Also, get a proper hardware RAID controller and don't use software-based RAID.
posted by kenliu at 6:52 PM on November 16, 2009
I decided to get the G2 version of the Intel X25-M 160 GB drive. I'll check back when it's delivered and I've put it through its paces. Thank you everyone who replied.
posted by bottlebrushtree at 11:33 PM on November 17, 2009
Congrats. Actually, if floam is right and RAID 0 striping plays poorly with SSDs, JBOD (just a bunch of disks, or spanning) might be a better option for performance. Basically it's just concatenating each volume, so when one disk fills up the next one is used.
posted by delmoi at 1:38 AM on November 19, 2009
(oops, I see you just got one. Shouldn't be an issue then :)
posted by delmoi at 1:46 AM on November 19, 2009
Here are the results of our analysis. We're seeing program run times that are 10% of their previous times after moving to the SSD, e.g. 3 minutes down from 30 minutes, 1 hour down from 10 hours.
All in all a pretty amazing speedup. Your mileage will vary; our system is very disk-lookup heavy, so seek times were dominating our running time.
posted by bottlebrushtree at 2:53 PM on March 2, 2010