The SSD Relapse: Understanding and Choosing the Best SSD
by Anand Lal Shimpi on August 30, 2009 12:00 AM EST - Posted in Storage
Random Read/Write Speed
This test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random writes that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time:
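To put the MB/s figures that follow in perspective, they can be converted to and from 4KB IOPS with simple arithmetic. This is just an illustration of what the units mean, not the benchmark itself:

```python
# Convert an average throughput in MB/s to 4KB IOPS and back.
# Illustrative only -- the real numbers come from a 3-minute run
# with 3 outstanding IOs, not from this arithmetic.

BLOCK_SIZE = 4 * 1024  # 4KB per IO

def mb_s_to_iops(mb_per_s: float) -> float:
    """Average IOs per second implied by a MB/s figure."""
    return mb_per_s * 1024 * 1024 / BLOCK_SIZE

def iops_to_mb_s(iops: float) -> float:
    """Average MB/s implied by a 4KB IOPS figure."""
    return iops * BLOCK_SIZE / (1024 * 1024)

# ~35.8MB/s of 4KB random writes works out to roughly 9165 IOPS
print(round(mb_s_to_iops(35.8)))
# A hard drive averaging ~0.5MB/s manages only ~128 IOPS
print(round(mb_s_to_iops(0.5)))
```

The same conversion explains why the hard drive results later in this page look so dire: sub-1MB/s throughput at a 4KB block size means only a few hundred operations per second.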
As we established in previous articles, the disk operations that feel the slowest are small random reads and writes, both of which an SSD handles easily. A good friend of mine and former AnandTech Editor, Matthew Witheiler, asked me if he'd notice the performance improvement. I showed him the chart above.
He asked again if he'd notice. I said, emphatically, yes.
Now this is super interesting. Intel's X25-M G1 drops from 40.8MB/s when new down to 26.3MB/s in a well-used state. Unfortunately for the G1, it will never get TRIM and will spend more time in the lower performance state over the life of the drive. But look at what happens with the X25-M G2: it drops from 36.1MB/s to 35.8MB/s - virtually no performance is lost. In fact, the G2 is so fast here that it outperforms the super expensive X25-E. Granted, you don't get the lifespan of the X25-E, and the SLC drive should perform better on more strenuous random write tests, but this is a major improvement.
The explanation? It actually boils down to the amount of memory on the drive. The X25-M G1 had 16MB of 166MHz SDRAM on-board; the G2 upped it to 32MB of slower 133MHz DRAM. Remember that Intel doesn't keep any user data in DRAM; it's only used for the remapping, defragmenting and tracking of all of the data being written to the drive. More DRAM means the drive can track more data, which means that even with the heaviest random-write workload you could toss at it on a normal desktop, the drive loses virtually no performance in a used state. And this is the drive Intel has decided to grant TRIM to.
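A rough sketch shows why doubling DRAM matters for tracking. The entry size and tracking granularity below are assumptions for illustration only; Intel does not publish its internal mapping format:

```python
# Rough illustration of why more DRAM lets a controller track more data.
# Entry size and page size are ASSUMPTIONS -- Intel does not publish
# the internal layout of its mapping tables.

PAGE = 4 * 1024   # assumed tracking granularity: 4KB
ENTRY = 4         # assumed bytes per mapping-table entry

def trackable_bytes(dram_bytes: int) -> int:
    """Logical capacity whose mappings fit in DRAM, under the assumptions above."""
    return (dram_bytes // ENTRY) * PAGE

g1 = trackable_bytes(16 * 1024 * 1024)  # X25-M G1: 16MB of DRAM
g2 = trackable_bytes(32 * 1024 * 1024)  # X25-M G2: 32MB of DRAM
print(g1 // 2**30, "GB vs", g2 // 2**30, "GB")
```

Whatever the real entry format, the relationship is linear: twice the DRAM means twice as much in-flight mapping state before the controller has to fall back to slower lookups.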
The G2 is good.
The Indilinx drives do lose performance here. They drop from roughly 13MB/s down to 7MB/s. We're still talking ~5x the speed of a VelociRaptor, so there's no cause for alarm. But it's clear that even Indilinx's SLC drive can't match Intel's random write performance. And from what I hear, Intel's performance is only going to get better.
This is what the X25-M price premium gets you.
Bahahaha, look at the hard drive scores here: 0.7MB/s and 0.3MB/s? That's freakin' terrible! And that's why your system feels so slow when you start it up, there are a ton of concurrent random reads and writes happening all over the place which your hard drive crunches through at roughly 0.5MB/s. Even the Samsung based OCZ Summit manages a significant performance advantage here.
The Indilinx drives all cluster around the 30 - 40MB/s mark for random read performance, nothing to be ashamed of. The Intel drives kick it up a notch and give you roughly 60MB/s of random read performance. It's a noticeable improvement. As our application launch tests will show, however, loading a single app on either an Indilinx or Intel drive will take about the same amount of time. It's only in heavy multitasking and "seat of the pants" feel that you'll have a chance at noticing a difference.
295 Comments
GourdFreeMan - Tuesday, September 1, 2009 - link
Yes, rewriting a cell will refill the floating gate with trapped electrons to the proper voltage level unless the gate has begun to wear out, so backing up your data, secure erasing your drive and copying the data back will preserve the life (within reason) of even drives that use minimalistic wear leveling to safeguard data. Charge retention is only a problem for users if they intend to use the drive for archival storage, or operate the drive at highly elevated temperatures.
It is a bigger problem for flash engineers, however, and one of the reasons why MLC cannot be moved easily to more bits per cell without design changes. To store n-bits in a single cell you need 2^n separate energy levels to represent them, and thus each bit is only has approximately 1/(2^(n-1)) the amount of energy difference between states when compared to SLC using similar designs and materials.
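The 2^n relationship described above can be written out numerically. This is just the arithmetic from the comment, normalized to SLC:

```python
# Voltage-window arithmetic behind "more bits per cell is harder".
# Margins are normalized to SLC (1 bit/cell), assuming similar
# designs and materials, as in the comment above.

def levels(bits_per_cell: int) -> int:
    """Distinct charge states needed to store n bits in one cell: 2^n."""
    return 2 ** bits_per_cell

def relative_margin(bits_per_cell: int) -> float:
    """Energy difference between adjacent states relative to SLC: 1/2^(n-1)."""
    return 1 / (2 ** (bits_per_cell - 1))

for n in (1, 2, 3):
    print(f"{n} bits/cell: {levels(n)} levels, margin x{relative_margin(n):.2f}")
```

So 2-bit MLC must distinguish 4 levels with half the spacing of SLC, and a hypothetical 3-bit cell would need 8 levels at a quarter of the spacing, which is exactly why charge retention gets harder to engineer as density goes up.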
Zheos - Tuesday, September 1, 2009 - link
Man, you seem to know a lot about what you're talking about :)
Yeah, now I understand why an SSD for a database or file storage server would be quite a bad idea.
But for personal Windows & everyday application storage, it seems like a pure win to me if you can afford one :)
I was only worried about its life-span, but thanks to you and your quick replies (and the math and technical detail about how it really works ;) I'm sold on the fact that I will buy one soon.
The G2 from Intel seems like the best choice for now, but I'll just wait and see how things go when TRIM is enabled on almost every SSD, and I'll make my decision in a couple of months =)
GourdFreeMan - Wednesday, September 2, 2009 - link
It isn't so much that SSDs make a bad storage server, but rather that you can't neglect to make periodic backups, as with any type of storage, if your data has great monetary or sentimental value. In addition to backups, RAID (1-6) is also an option if cost is no object and you want to use SSDs for long term storage in a running server. Database servers are a little more complicated, but SSDs can be an intelligent choice there as well if your usage patterns aren't continuous heavy small (i.e. <= 4K) writes.
I plan on getting a G2 myself for my laptop after Intel updates the firmware to support TRIM and Anand reviews the effects in Windows 7, and I have already been using an Indilinx-based SLC drive in my home server.
If you do anything that stresses your hard drive(s), or just like snappy boot times and application load times you will probably be impressed by the speeds of a new SSD. The cost per GB and lack of long term reliability studies are really the only things holding them back from taking the storage market by storm now.
ninevoltz - Thursday, September 17, 2009 - link
GourdFreeMan, could you please continue your explanation? I would like to learn more. You have really dived deeply into the physical properties of these drives.
GourdFreeMan - Tuesday, September 1, 2009 - link
Minor correction to the second paragraph in my post above -- "each bit is only has" should read "each representation only has" in the last sentence.
philosofool - Monday, August 31, 2009 - link
Nice job. This has been a great series.
I'm getting an SSD once I can get one at $1/GB. I want a system/program files drive of at least 80GB and then a conventional HDD (a tenth of the cost/GB) for user data.
Would keeping user data on a conventional HDD affect these results? It would seem like it wouldn't, but I would like to see the evidence.
I would really like to see more benchmarks for these drives that aren't synthetic. Have you tried things like Crysis or The Witcher load times? (Both seemed to me to have pretty slow loads for maps.) I don't know if these would be affected, but as real world applications, I think it makes sense to try them out.
Anand Lal Shimpi - Monday, August 31, 2009 - link
Personally I keep docs on my SSD but I keep pictures/music on a hard drive. Neither gets touched all that often in the grand scheme of things, but one is a lot smaller :)
In The SSD Anthology I looked at Crysis load times. Performance didn't really improve when going to an SSD.
Take care,
Anand
Eeqmcsq - Monday, August 31, 2009 - link
I would have thought that the read speed of an SSD would have helped cut down some of the compile time. Is there any tool that lets you analyze disk usage vs. CPU usage during the compile, to see what percentage of the compile was spent reading/writing to disk vs. CPU processing?
Is there any way you can add a temperature test between an HDD and an SSD? I read a couple of Newegg reviews that say their SSDs got HOT after use, though I think that may have just been 1 particular brand that I don't remember. Also, there was at least one article online that tested an SSD vs. an HDD and the SSD ran a little warmer than the HDD.
Also, garbage collection does have one advantage: It's OS independent. I'm still using Ubuntu 8.04 at work, and I'm stuck on 8.04 because my development environment WORKS, and I won't risk upgrading and destabilizing it. A garbage collecting SSD would certainly be helpful for my system... though your compiling tests are now swaying me against an SSD upgrade. Doh!
And just for fun, have you thought about running some of your benchmarks on a RAM drive? I'd like to see how far SSDs and SATA have to go before matching the speed of RAM.
Finally, any word from JMicron and their supposed update to the much "loved" JMF602 controller? I'd like to see some non-stuttering cheapo SSDs enter the market and really bring the $$$/GB down, like the Kingston V-series. Also, I'd like to see a refresh in the PATA SSD market.
"Am I relieved to be done with this article? You betcha." And I give you a great THANK YOU!!! for spending the time working on it. As usual, it was a great read.
Per Hansson - Monday, August 31, 2009 - link
Photofast have released Indilinx based PATA drives: http://www.photofastuk.com/engine/shop/category/G-...
aggressor - Monday, August 31, 2009 - link
What ever happened to the price drops that OCZ announced when the Intel G2 drives came out? I want 128GB for $280!