OCZ Vertex 3 MAX IOPS & Patriot Wildfire SSDs Reviewed
by Anand Lal Shimpi on June 23, 2011 4:35 AM EST

Let's start with the elephant in the room. There's a percentage of OCZ Vertex 3/Agility 3 customers that have a recurring stuttering/instability issue. The problem primarily manifests itself as regular BSODs under Windows 7, although OCZ tells me that the issue is cross-platform and has been seen on a MacBook Pro running OS X as well.
How many customers are affected? OCZ claims it's less than two thirds of a percent of all Vertex 3/Agility 3 drives sold. OCZ came up with this figure by looking at the total number of tech support enquiries as well as forum posts about the problem and dividing that number by the total number of drives sold through to customers. I tend to believe OCZ's data here given that I've tested eight SF-2281 drives and haven't been able to duplicate the issue on a single drive/configuration thus far.
Most of the drives were from OCZ and I've tested them all on four separate platforms - three Windows 7 and one OS X. The latter is my personal system where I have since deployed a 240GB Vertex 3 in place of Intel's SSD 510 for long term evaluation. If you're curious, the 3 months I had the 510 in the MacBook Pro were mostly problem-free. It's always tough narrowing down the cause of system-wide crashes so it's hard to say whether or not the 510 was responsible for any of the hard-resets I had to do on the MacBook Pro while it was deployed. For the most part the 510 worked well in my system although I do know that there have been reports of issues from other MBP owners.
But I digress. There's a BSOD issue with SF-2281 drives and I haven't been able to duplicate it. OCZ has apparently had a very difficult time tracking down the issue as well. OCZ does a lot of its diagnostic work using a SATA bus analyzer, a device that lets you inspect what's actually going over the SATA bus itself rather than relying on the cryptic error messages your OS gives you. Apparently sticking a SATA bus analyzer in the chain between the host controller and the SSD was alone enough to make the BSOD problem go away, which made diagnosing its source a pain.
OCZ eventually noticed odd behavior involving a particular SATA command. Slowing down timings associated with that command seems to have resolved the problem although it's tough to be completely sure as the issue is apparently very hard to track down.
OCZ's testing also revealed that the problem seems to follow the platform, not the drive itself. If you have a problem, it doesn't matter how many Vertex 3s you go through - you'll likely always have the problem. Note that this doesn't mean your motherboard/SATA controller is at fault, it just means that the interaction between your particular platform and the SF-2281 controller/firmware setup causes this issue. It's likely that either the platform or SSD is operating slightly out of spec or both are operating at opposite ends of the spec, but still technically within it. There's obviously chip to chip variance on both sides and with the right combination you could end up with some unexpected behaviors.
OCZ and SandForce put out a stopgap fix for the problem. For OCZ drives this is firmware revision 2.09 (other vendors haven't released the fix yet as far as I can tell). The firmware update simply slows down the timing of the SATA command OCZ and SF believe to be the cause of these BSOD issues.
In practice the update seems to work. Browsing through OCZ's technical support forums I don't see any reports of users who had the BSOD issue continuing to see it post-update. It is worth mentioning, however, that the problem isn't definitively solved since the true cause is still unknown; it just seems to be addressed given what we know today.
Obviously slowing down the rate of a particular command can impact performance. In practice the impact seems to be minimal, although a small portion of users are reporting huge drops in performance post-update. OCZ mentions that you shouldn't update your drive unless you're impacted by this problem, advice I definitely agree with.
What does this mean? Well, most users are still unaffected by the problem if OCZ's statistics are to be believed. I also don't have reason to believe this is exclusive to OCZ's SF-2281 designs so all SandForce drives could be affected once they start shipping (note that this issue is separate from the Corsair SF-2281 recall that happened earlier this month). If you want the best balance of performance and predictable operation, Intel's SSD 510 is still the right choice from my perspective. If you want the absolute fastest and are willing to deal with the small chance that you could also fall victim to this issue, the SF-2281 drives continue to be very attractive. I've deployed a Vertex 3 in my personal system for long term testing to see what living with one of these drives is like and so far the experience has been good.
With that out of the way, let's get to the next wave of SF-2281 based SSDs: the OCZ Vertex 3 MAX IOPS and the Patriot Wildfire.
The Vertex 3 MAX IOPS Drive
In our first review of the final, shipping Vertex 3, OCZ committed to full disclosure in detailing the NAND configuration of its SSDs to avoid any confusion in the marketplace. Existing Vertex 3 drives use Intel 25nm MLC NAND, as seen below:
A 240GB Vertex 3 using 25nm Intel NAND
Not wanting to be completely married to Intel NAND production, OCZ wanted to introduce a version of the Vertex 3 that used 32nm Toshiba Toggle NAND - similar to what was used in the beta Vertex 3 Pro we previewed a few months ago. Rather than call the new drive a Vertex 3 with a slightly different model number, OCZ opted for a more pronounced suffix: MAX IOPS.
Like the regular Vertex 3, the Vertex 3 MAX IOPS drive is available in 120GB and 240GB configurations. These drives have 128GB and 256GB of NAND, respectively, with just under 13% of the NAND set aside for use as a combination of redundant and spare area.
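The "just under 13%" figure falls out of the mismatch between binary NAND capacity (GiB) and decimal advertised capacity (GB). A quick sketch of the arithmetic (my own derivation, not OCZ's published breakdown; the helper name is mine):

```python
# Sketch: NAND capacity is binary (1 GiB = 2**30 bytes), while advertised
# drive capacity is decimal (1 GB = 10**9 bytes). The gap between the two,
# plus the rounding down to 120/240GB, is what gets reserved as
# redundant/spare area.

def spare_fraction(nand_gib, advertised_gb):
    nand_bytes = nand_gib * 2**30      # raw NAND on board
    user_bytes = advertised_gb * 10**9  # capacity exposed to the user
    return 1 - user_bytes / nand_bytes

for nand, adv in ((128, 120), (256, 240)):
    print(f"{adv}GB drive: {spare_fraction(nand, adv):.1%} of NAND reserved")
```

Both configurations work out to roughly 12.7% reserved, matching the "just under 13%" in the text.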
The largest NAND die you could ship at 32/34nm was 4GB - the move to 25nm brought us 8GB die. What this means is that for a given capacity, the MAX IOPS edition will have twice as many MLC NAND die under the hood. The table below explains it all:
OCZ SF-2281 NAND Configuration

| Drive | Number of NAND Channels | Number of NAND Packages | Number of NAND Die per Package | Total Number of NAND Die | Number of NAND Die per Channel |
|---|---|---|---|---|---|
| OCZ Vertex 3 120GB | 8 | 16 | 1 | 16 | 2 |
| OCZ Vertex 3 240GB | 8 | 16 | 2 | 32 | 4 |
| OCZ Vertex 3 MI 120GB | 8 | 8 | 4 | 32 | 4 |
| OCZ Vertex 3 MI 240GB | 8 | 16 | 4 | 64 | 8 |
The standard 240GB Vertex 3 has 32 die spread across 16 chips. The MAX IOPS version doubles that to 64 die in 16 chips. The 120GB Vertex 3 only has 16 die across 16 chips while the MAX IOPS version has 32 die, but only using 8 chips. The SF-2281 is an 8-channel controller so with 32 die you get a 4-way interleave and 8-way with the 64 die version. There are obviously diminishing returns to how well you can interleave requests to hide command latencies - 4 die per channel seems to be the ideal target for the SF-2281.
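The die counts and interleave depths above can be derived with a few lines of arithmetic (a sketch using the article's figures; the function and variable names are my own):

```python
# Sketch: with 4GB die (32nm Toggle NAND) vs 8GB die (25nm NAND), a given
# raw capacity needs twice as many die, which deepens the per-channel
# interleave on the 8-channel SF-2281 controller.

CHANNELS = 8  # SF-2281 is an 8-channel controller

def nand_config(total_nand_gb, die_gb):
    total_die = total_nand_gb // die_gb
    interleave = total_die // CHANNELS  # die per channel
    return total_die, interleave

for name, nand_gb, die_gb in (
    ("Vertex 3 120GB (25nm, 8GB die)", 128, 8),
    ("Vertex 3 240GB (25nm, 8GB die)", 256, 8),
    ("MAX IOPS 120GB (32nm, 4GB die)", 128, 4),
    ("MAX IOPS 240GB (32nm, 4GB die)", 256, 4),
):
    die, way = nand_config(nand_gb, die_gb)
    print(f"{name}: {die} die total, {way}-way interleave per channel")
```

This reproduces the table's totals: halving the die size doubles the die count and the per-channel interleave, up to the 8-way depth of the 240GB MAX IOPS drive.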
112 Comments
Paazel - Thursday, June 23, 2011 - link
Do you allow your computer to sleep? I had a Vertex 2 die on me, and forum speculation led me to believe that allowing my computer to sleep may have been the culprit.

Anand Lal Shimpi - Thursday, June 23, 2011 - link
My personal machine that it's deployed in is a notebook that is allowed to sleep (and does so) regularly.

I also don't do any of the odd stability optimizations on my testbeds either. Sleep is always enabled and definitely allowed to happen (I don't always catch my testbeds after they've finished a long test so they'll go off to sleep).
While I do believe that earlier issues may have been sleep related, I'm not sure about this one in particular.
Take care,
Anand
Ryan Smith - Thursday, June 23, 2011 - link
Just to throw in my own $0.02, although I put my Vertex 2 in a desktop, my results are the same as what Anand has seen. My desktop hybrid sleeps regularly, and I have not encountered any issues.

JasonInofuentes - Friday, June 24, 2011 - link
+1. On an Agility 2 90GB, a MicroCenter SandForce 64GB drive, and an Agility 2 40GB in desktop, netbook, and HTPC settings, all allowed to sleep. Indeed, I blame many of my PC-related issues on my inability to sleep.

sam. - Saturday, June 25, 2011 - link
I have a 120GB Vertex with the Indilinx controller and mine died after about a year and a half of average use in my laptop. (Mind you, the RMA process was good, and they replaced it with a new identical SSD.) I had nearly 2,700 power-on cycles (putting my laptop to sleep multiple times a day) and 3.7 terabytes written to the SSD before it started corrupting registry files and BSODing.

To be honest, a year and a half as a lifespan seems really bad for what was a high-end product, though from what I hear the SandForce controller is better in terms of reliability. I'm still willing to let my laptop sleep, though I do my best to write less to the SSD.
kahwaji_n - Thursday, June 23, 2011 - link
I don't think so. Maybe if your computer hibernates a lot, that could be the reason: when a computer sleeps, the RAM still holds the data and little has to be written to the disk drive, whereas in hibernation the RAM is powered down and all of its contents are written back to the drive. If you have Windows 7 and SSDs in a RAID setup (where no TRIM command can be passed to the controller) and your computer hibernates periodically, run the Windows 7 performance index and see how severely the performance has degraded.

iwod - Thursday, June 23, 2011 - link
I think the first few graphs/charts pretty much sum up what I have been saying. With double the sequential read and random read numbers, you only get less than a 10% performance difference. The bottleneck for the majority of our workloads has shifted back from SSD storage to CPU processing speed.

Which means the best time to get an SSD is now, if you can afford it and the storage space is enough for a main OS drive.
L. - Thursday, June 23, 2011 - link
Err... it's going to be dirt cheap pretty soon. I wouldn't spend "GFX bucks" on a storage device, tbh. (Seriously, for that price I prefer my 2TB WD Green RAID 10... it makes so much more sense, even though it does not serve the same purpose...)

khan321 - Thursday, June 23, 2011 - link
Why no mention of the increased lifespan of 32nm NAND? This is a massive benefit to me over 25nm.B3an - Thursday, June 23, 2011 - link
Because Anand has pointed this out before. There's absolutely nothing to worry about regarding the lifespan of 25nm NAND with a good controller, as it would last many decades. The NAND flash will lose its charge before then anyway.