I know this is a bit pre-emptive, but I really hope that if the introduction of TRIM goes well we see a double bar graph showing new AND used performance for each drive (i.e. TRIM on, TRIM off). Once TRIM is implemented it may level the playing field for some drives, and that should be easy to show fairly in the charts. I know you showed used vs. new benchmarks during the Anthology, but putting them in different charts, sometimes on different pages, made it tricky to see the exact difference. Again, if TRIM is all it's touted to be, the "new" state may be the long-term state of an SSD.
Here's hoping Intel gets its act together with those seq writes.
PS: I own a 50nm 80GB Intel drive, and I'm semi-ticked that Intel won't release a firmware with TRIM for it. At the same time, it still has the second-fastest random read and write speeds (second only to its newer version), and it feels ridiculously fast to me: it launches every application on my computer at once when I start Windows, and boots OS X in under 20 seconds.
I think the computer market will slow down in the next two years. Industry results in July show profit warnings and losses at many companies: Intel lost US$398 million and AMD US$335 million.
The new Google Chrome operating system (which has lower hardware requirements than Windows), virtualization running on multicore computers (one computer, many users), and the financial crisis could hit PC sales very hard.
Intel with its high IOPS, and SSDs in general with their near-nonexistent response times, seem poised to dominate multi-user and virtualization setups. It does seem like hard times are ahead, but that's where bang for the buck is really going to count, and not too many people actually "need" a 2TB drive, whereas hard drives have been the bottleneck in computers ever since the GHz race spread out into the multi-core arena. SSDs seem ready to stand up in a multi-user server or virtualization setup and take it from "hold on a second, I'm on a shared server login" to "really, this is a shared computer?"
In general, the X25-M lags HEAVILY when doing a sequential write (e.g. a file copy) while also doing random reads (e.g. opening Microsoft Word). Heavy RANDOM writes are not a problem, suggesting possibly poor interleaving of large sequential writes and small reads.
What makes this even harder to understand is that performance on the drive is dynamic - the algorithms gradually accommodate changing workloads under a fragmented condition. This only happens under LOW free space conditions in the used state.
I copied a lot of files sequentially to the SSD over the last few days. Seq write speeds increased from 30-40MB/s to over 80MB/s, but random writes dropped to 15MB/s. These are CrystalDiskMark benches so I don't trust them much, but seq writes definitely became faster (I can't feel the effect of the slower random writes).
The performance profile of these Intel drives is VERY confusing. TRIM will probably help a lot to make performance less confusing.
I ordered one from Puget last week. I can confirm that they are doing exactly as they said in the linked post and contacting people with pending orders directly.
I fell into the category of "not likely to use a BIOS password on the drive, but willing to wait if Intel thought a hardware change was necessary".
Once they confirmed that a simple firmware update would resolve the issue, I gave them the go-ahead to ship mine. I should be receiving it soon.
So far I've been very impressed with the customer service at Puget Systems. They are definitely responsive to their customers.
Seen the same on a couple sites. Apparently setting a drive password and then trying to change or disable it can cause the drive to become inaccessible. They're also no longer listed on Newegg.
I bought one of these drives from Newegg the day they were released. I just received an email today saying I was getting a refund because they were out of stock and the item is discontinued. The price was the listed $449.
I know application performance testing is on its way, but I'd love to see performance measured while actually running applications. The original Intel SSD review focused on copy/launch times, whereas I'd be interested in the run times of apps after they are launched.
For example, I do a lot of work in Photoshop CS4 with big RAW files. It's unclear whether the random read/write speed of the Intel drive would beat the sequential write speed of the Vertex when working in PS CS4, because the temp files can get into the hundreds of megabytes.
These performance tests do not tell you anything.
1. How do they compare when actually running programs in an operating system?
2. How do they compare to a standard 500GB hard drive?
How are these to be used? Are people planning on using them as hard drive replacements in laptops? What about heat and cooling requirements, like you might mention for RAM or a video card?
The reason I am asking these questions is that Microsoft Windows as an operating system is not designed that well to use these devices, and they don't show much advantage when used in cooperation with a hard drive to boot a computer faster. This seems like much ado about nothing when I can purchase a 320GB drive for about $100.
Well, some application benchmarks would be nice. But then, the article is titled "performance preview". If you don't know how to read these numbers and take a guess at what it will mean for you, then don't.
They compare very, very favorably to a top of the line, high performance hard disk, and you're asking how well they compare to a drive that said high performance disk will eat for breakfast? Seriously, the comparison almost isn't worth making. If you're really curious, find reviews of the disk you want compared to, and see what high performance rotating drives they compared it against (or go see what regular drives that high performance drive was compared against in its reviews).
People are considering using these on laptops, on desktops as a primary (OS / apps / heavily used data) drive with rotating media for bulk storage, in silent /fanless computers, etc.
Why do you even need to ask about cooling and heat sinks? It doesn't need them. It draws 150mW of power. Putting heat sinks on it would be ludicrous.
So what if Windows isn't "designed" to use these disks? It will use them without any problems. Perhaps a more carefully designed OS could eke a bit more out of them, but so what? It should be worlds faster than any rotating media you can compare it to.
If you want capacity, go buy a 1TB drive to use as a secondary data drive for $100, and use one of these for things that care about I/O performance rather than storage. Splitting your data across the two disks to get both good performance and lots of space isn't exactly hard for the normal desktop usage case.
I was discussing these drives with someone who works in the storage industry with Flash technology, but not these specific drives. He had an interesting observation: a lot of flash drives will keep a pool of pre-erased pages available for writing. However, they can't erase new pages fast enough to keep up with the peak random write performance indefinitely. Once the pre-erased pool runs dry, random write performance drops dramatically.
Is this the case with the new (or old, for that matter) Intel drives? How long a time period do your random write benchmarks run for? Would you be willing to run a random writes benchmark that runs long enough to overwrite a larger fraction of the disk, and tell us whether the performance drops under sustained load?
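The pool-depletion behavior described above is easy to picture with a toy model. Every number below (pool size, write and erase rates) is made up purely for illustration; this is not based on any vendor's actual firmware:

```python
# Toy model of a pre-erased block pool (hypothetical numbers, not any
# vendor's real firmware behavior). Writes consume pre-erased blocks;
# background erasing refills the pool at a slower, fixed rate. Once the
# pool empties, write throughput can only match the erase rate.

def simulate(pool_size, write_rate, erase_rate, seconds):
    """Return the achieved write rate (blocks/s) for each second."""
    pool = pool_size
    achieved = []
    for _ in range(seconds):
        pool = min(pool_size, pool + erase_rate)  # background erases refill
        done = min(write_rate, pool)              # writes need erased blocks
        pool -= done
        achieved.append(done)
    return achieved

rates = simulate(pool_size=1000, write_rate=100, erase_rate=25, seconds=30)
print(rates[0], rates[-1])  # 100 25 - burst speed first, erase-limited later
```

In this sketch the drive sustains its burst rate only until the pre-erased pool drains, after which throughput settles at the background erase rate, which is exactly the falloff a long-enough random-write benchmark would expose.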
I just read on another review site that the Intel SSD uses SUBSTANTIALLY more CPU than competing SSDs. I hope your future detailed review will test this as well.
Even though the Intel SSD uses much less power than competing SSDs, the extra CPU usage required will discount those savings (and possibly use even more power overall).
It could be another case of Intel pushing you to buy a more powerful CPU, which is rather sad, since this is exactly what happened with USB, which made it slow and unresponsive compared to FireWire.
So you don't wonder too much any more:
RE: How Come Intel Can't Compete The Sequential Performances Of Its Rivals?
by Anand Lal Shimpi, 1 day ago:
I'm not sure it's a question of why Intel can't, I think it's a conscious architectural decision. It's something I've been chasing down for months but I can't seem to put my finger on it just yet.
This is indeed an implementation decision.
The difference between "ordinary" drives and the Intel drives is in the way they write data to the disk:
"Ordinary" drives can have "high" sequential throughput because they employ only coarse levelling algorithms. That is, they keep erasure counts for entire blocks and write data to the block that currently has the lowest erasure count.
First a bit of glossary for the terms I use:
Block - the smallest unit on a flash chip that can be erased by an erase command (typically 256KB).
Sector - the smallest data unit the controller's algorithm recognizes. This is probably not the 256 or 512 bytes (actually a bit more, for CRCs and such) that is a typical sector size on a flash chip, because the reallocation tables would then take far too much space.
Below I only explain the reallocation algorithm, which in reality is complemented by a wear-leveling algorithm (the wear leveling you already know and understand).
Intel's controller (and OCZ's Vertex) does this differently: it converts all writes to sequential ones by looking for the block with the most "free" sectors. Of course some wear leveling is also applied. So each write is preceded by a lookup for such a block; the block is then erased and its non-free sectors rewritten along with the new sector.
This method creates massive fragmentation, and to keep track of it, some quite large lookup tables have to be maintained in the controller.
In order for this method to work as efficiently as possible, it is always good to leave some disk space free for the algorithm itself. I know the disk itself doesn't expose 80GB of capacity; more like 70GB is available for the OS to use. I wouldn't be surprised if the flash chips on the disk amounted to more than 80GB, but I just checked the PCB pics and this doesn't seem to be the case.
The translation/lookup tables take 1MB per 1GB of storage space if the "cluster" size is 4KB, which I strongly suspect it is (the default FS cluster size for many file systems). Additionally you need derived "free sector" tables, which are also quite large and barely fit into that poor 128Mbit SRAM chip on the PCB, if at all. And this is only for the simplest implementation of the sector-reordering algorithm; to make things faster, they probably need even larger or more tables.
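The 1MB-per-1GB figure is easy to sanity-check. This is just the arithmetic from the paragraph above; the 4KB granularity and 4-byte entries are the comment's guesses, not confirmed details of Intel's controller:

```python
# Sanity check of the "1MB of tables per 1GB of flash" claim, assuming a
# 4KB mapping granularity and 4-byte table entries (both are assumptions
# from the comment, not confirmed controller details).
CLUSTER = 4 * 1024   # assumed mapping granularity ("cluster" size)
ENTRY = 4            # bytes per lookup-table entry

def map_size(capacity_bytes):
    """Bytes of translation table needed to map the given capacity."""
    return capacity_bytes // CLUSTER * ENTRY

GB = 1024**3
print(map_size(GB) // 1024**2)       # 1  MB of table per GB mapped
print(map_size(80 * GB) // 1024**2)  # 80 MB for the whole 80GB drive
```

At that rate an 80GB drive needs roughly 80MB of mapping tables, far more than a 128Mbit (16MB) SRAM can hold whole, which fits the idea that only parts of the tables are cached at any time.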
This free space on the disk simulates the announced TRIM support. Since a certain percentage of the drive is always "free" for the algorithm to use, you can always assume (and be right) that you will find a block that has at least some free sectors. Since typical disk writing mostly hits a few clusters, this then translates into "the same" blocks alternating among themselves.
The reason Intel's drives showed a performance decrease in the "used" state (BTW, this state is reached the moment you have written the disk's capacity worth of blocks, which doesn't mean the disk is full at that point; it has simply performed, say, 80GB of writes) is that at that moment the disk has run out of 100% free blocks and has to start looking for blocks with as many free sectors as possible. But naturally, only rarely are those blocks more than 50% free, so writing speed decreases, because after erasing a block the controller must write back the sectors that were already in use.
So, supporting TRIM will improve the situation for these disks because more sectors will be marked as free. They will still suffer some performance degradation, but not nearly as much as the G1 models. Of course this will hugely depend on how much disk space is actually taken (what Explorer reports as used capacity): the more free space on the disk, the more sectors are "TRIMmed" and the more free sectors there are to use when writing.
It will also not be necessary to wipe the drive when TRIM support is installed; it will just take a bit longer for most of the sectors to actually become marked as TRIMmed. So the benefit will be there. But I can't be sure what will happen to the drive's data when the new firmware is flashed onto it. If the flashing process or algorithm differences (there shouldn't be any) wipe the data from the disk, the TRIM benefit will then only come with a wiped drive, won't it?
Also, even generation 1 drives should exhibit a lot less "used state syndrome" if they were "resized" to a smaller capacity. 40GB (half the disk) should be small enough for the degraded performance to never show. Unfortunately I can't test this since I don't have an Intel SSD.
So, finally, to answer the question:
Intel's drives have lower sequential performance (in theory) than "ordinary" SSD drives because they perform sector reallocation. This algorithm takes its processing time, and the controller chip isn't exactly a 253THz organic computer, if you know what I mean. Additionally, the reallocation tables are quite large and need to be written to the disk along with the data sectors. Ordinary disks have only small reallocation tables and don't suffer as much from having to write them.
1. Intel specs 312,581,808 user-addressable sectors = 160GB, excluding a "small portion for NAND flash management". Most of that management overhead is below the visibility of the user, and at worst likely to be ~3-5% even if they subtracted all of the overhead from the user-accessible space.
2. A "page" is the smallest unit recognized by a NAND device, that is, the smallest unit of data that can be read or written. Pages are grouped into blocks. A block is the smallest unit that can be erased. A page must be erased (or more accurately, in the unprogrammed state) before it can be written/programmed.
a) You don't need to "write back sectors" (pages) unless you are overwriting an existing page; doing so would simply degrade the flash prematurely. The algorithm is: Has this *page* been programmed? No: write it, done. Yes: Does the block contain other valid pages? Yes: read the valid pages, erase the block, write back the valid pages plus the new page, done. Obviously you'd prefer to find an unprogrammed page rather than go through the read-erase-write cycle.
b) We don't know the geometry of the Intel flash chips; specifically the page and block size. However, the ONFI 1.0 spec (on which Intel says these are based) states "The number of pages per block shall be a multiple of 32."
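The decision procedure in 2(a) can be sketched in a few lines. This is an illustration of the policy as described, not Intel's actual firmware; a block is modeled as a list of page slots where None means unprogrammed/erased:

```python
# Sketch of the page-write policy from 2(a). Purely illustrative, not
# Intel's implementation: a block is a list of page slots, None meaning
# an unprogrammed (erased) page.

def write_page(block, index, data):
    """Write one page, returning which path the controller had to take."""
    if block[index] is None:              # unprogrammed: program in place
        block[index] = data
        return "programmed"
    # Overwrite: save other valid pages, erase the whole block, write back.
    survivors = {i: p for i, p in enumerate(block)
                 if p is not None and i != index}
    for i in range(len(block)):           # erase resets every page slot
        block[i] = None
    for i, p in survivors.items():        # restore the surviving pages
        block[i] = p
    block[index] = data                   # finally program the new data
    return "read-erase-writeback"

blk = [None] * 4
print(write_page(blk, 0, "a"))  # programmed
print(write_page(blk, 0, "b"))  # read-erase-writeback
```

The first write to an erased slot is cheap; overwriting forces the read-erase-writeback path, which is why you would much rather land on an unprogrammed page.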
3. All reasonable implementations perform remapping as part of wear levelling. Ideally that remapping should be at the page level, as that is the smallest unit of programming (which may be larger than a sector); you also need to keep track of the number of erase cycles for each block.
a) Page mapping requires LOG2(number of pages) bits per remap-table entry. E.g., for 160GB and a page size of 4KB: # pages/entries = ~39M; bits/entry = LOG2(39M) = ~25 bits, rounded up to the next byte = 4 bytes/entry; 4 bytes/entry * 39M entries = ~160MB.
b) Erase cycle counts requires either a counter for each block, a FIFO/LRU that orders blocks based on preference for next write, or more likely a combination of the two (the latter optimized for run-time to answer the question "which page should I write next?").
c) Assuming an erase cycle granularity of 1/2^16 (i.e., a 16-bit counter for each block), a page size of 4KB, a block size of 32 pages (128KB), then for a 160GB drive = 2 bytes/block * ~9.7M blocks = ~20MB. (Again, however, trying to maintain a sorted list of counters is probably not the most efficient way to determine "where should I write next?".)
4. Given the above assumptions, the total in-memory wear-leveling overhead for a very reasonable and high performance algorithm for a 160GB drive is ~180MB, less with a bit of optimization. From that we can posit:
a) The page and block sizes are larger than assumed above (less memory needed).
b) The Intel controller has quite a bit of RAM (above the external 32MB DRAM chip).
c) The Intel controller is partitioning operations (reduce the amount of memory required at any time).
d) Something else.
In short, it is far from apparent why there is such a differential between Intel's and the competition's sequential vs. random IO performance. By all means, if you really do know, please explain, but your previous "explanation" appears dubious at best, and ill-informed at worst.
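The dominant term in that estimate, the page remap table from 3(a), is quick to double-check. Same assumed geometry as above (160GB drive, 4KB pages); none of this is Intel-confirmed:

```python
import math

# Double-check of the remap-table estimate in 3(a), using the same assumed
# geometry (160GB drive, 4KB pages). Nothing here is Intel-confirmed.
CAPACITY = 160 * 10**9          # user-visible capacity in bytes
PAGE = 4 * 1024                 # assumed page/mapping granularity

pages = CAPACITY // PAGE                 # entries the map must hold
bits = math.ceil(math.log2(pages))       # address width per entry
entry_bytes = math.ceil(bits / 8)        # rounded up to whole bytes
table_mb = pages * entry_bytes / 10**6

print(pages, bits, entry_bytes, round(table_mb))  # 39062500 26 4 156
```

~156MB for the map alone already dwarfs any plausible on-controller SRAM, which is why posits (b) through (d) above are worth taking seriously.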
Sorry, had no intention to offend you.
However:
The difference between user-addressable capacity and actual flash size is quite substantial:
On the PCB you have 160GB of flash (160 * 1024^3 = 171,798,691,840 bytes). Of that, you have 160 billion bytes (~160 * 1000^3) user-addressable.
That's almost 11GB of difference. Even if my info about disk free space is incorrect, this difference makes up almost 7% of algorithm-usable sectors. Flash chips do come in power-of-2 capacities, unlike rotating-platter disks.
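Assuming the chips really are binary-sized while the exposed capacity is decimal gigabytes, those two figures fall straight out of the GiB-vs-GB gap:

```python
# The "almost 11GB" and "almost 7%" figures are just the GiB-vs-GB gap,
# assuming binary-sized NAND behind a decimal "160GB" of user capacity.
raw_flash = 160 * 1024**3      # physical NAND on the PCB (binary GB)
addressable = 160 * 1000**3    # user-addressable "160GB" (decimal)
spare = raw_flash - addressable

print(round(spare / 1024**3, 2))          # 10.99 - GiB of slack space
print(round(100 * spare / raw_flash, 1))  # 6.9   - percent of raw flash
```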
Your calculations for the tables taking 180MB therefore still leave almost 11GB for the reallocation sectors.
I have also tried to be as plain as possible in describing the algorithm. I went into no detail, but if you insist:
It's impossible to retain this information in RAM (be it on the controller or in the external RAM module) across power-offs. So no tables live there, only a cache.
Your calculations actually just make specific what I said in general terms. For such specific and detailed calculations you would have to know the actual algorithm employed.
Your assumption (a) is incorrect because there's enough space for the tables, so no compression/optimization is necessary.
Your assumption (b) probably fails; controllers typically don't have massive amounts of RAM. Even if there were massive RAM in there, it would still only be used as a cache.
I don't see how partitioning operations would reduce disk space requirements, but I believe that partitioning is done, a partition being one sector (the smallest storage unit size).
There's probably quite a few "something elses" in there. With only 7% of spare disk space to work with, the algorithm is IMHO a lot more complex than I suggested.
So - why slower seq writes (in a bit more detail):
The controller has to read the relevant parts of the reallocation tables. That may mean more than one sector, probably more like 3 or 4, depending on how deep the reallocation table trees are.
Then the sector's physical position is determined and it is written, and, if it's time, the modified reallocation tables are written too.
Both reading and writing this "excess" information take time, as does calculating the appropriate new address.
I don't know if that's what Intel implemented, but it seems to me a good guess, so don't beat me over it.
Apologies, that last statement in my previous post was unwarranted. No you didn't offend me, and I didn't mean to beat you up, I was just being a cranky a*hole. Again my apologies.
My comments were primarily directed at controller run-time memory overhead, not overhead within the flash device itself. As you said, controllers aren't likely to have that much memory; specifically, sufficient for an in-memory logical-to-physical (LBA-PBA) map for every page of the SSD (never mind for every sector).
Yes, there is going to be some storage overhead; specifically, that needed for garbage collection and long-term wear leveling, where unused pages are coalesced into free blocks (improve write speed), and data in blocks with low erase counts are redistributed (improve longevity).
The importance of the latter is often overlooked, and if not done will lead to premature failure of some blocks, and result in diminishing capacity (which will tend to accelerate). E.g., you load a bunch of data and never (or rarely) modify it; without long-term leveling/redistribution, writes are now distributed across the remaining space, leading to more rapid wear and failure of that remaining space.
I expect they're doing at least garbage collection, and I hope long-term leveling, and they may have opted for more predictable performance by reserving bandwidth/time for those operations (the write algorithm treats random IO as the general case, which would play to SSD strengths). OTOH, it may be a lot simpler and we're just seeing the MLC write bandwidth limits, with housekeeping as noise (relatively speaking); e.g., ~7MB/s for a 2-plane x8 device x 10 channels = ~70MB/s.
I won't lay claim to any answers, and AFAICT the simple/obvious explanations aren't. The only thing clear is that there's a lot of activity under the covers that affects both performance and longevity. Intel has done a pretty good job with their documentation; I'd like to see more from the rest of the vendors.
Also interesting from there was the fact that errors in hard drives usually occur in bursts, i.e. P(error in bit b | error in bit b-1) is high, whereas errors in SSDs are basically randomly distributed.
Thanks for the quick update. I'm on the edge of my seat with these new drives. The suspense reminds me of when Conroe first debuted ;)
Given the apparent quality of the controller, I'd like to assume that the low sequential writes were intentional to some degree. I'm sure Intel's engineers had to make some design decisions, and it appears as if they've chosen to sacrifice sequential writes in most (if not all) cases in favor of random writes.
I think Intel is on the right track with the controllers on these drives. If you look at desktop usage patterns, your random reads / writes reign supreme. Sequential writes are the most infrequent operations.
Anyway, that's based on the assumption that random write performance and sequential write performance are somehow mutually exclusive (supported by the X25-E benches...).
VERY interesting.
I'm also very interested to see how interfaces and controllers try to keep up with the drastic increase in storage bandwidth for the enterprise. The current mass storage architecture is mature and versatile. Going straight PCI-E seems to be a step backwards in architecture in exchange for raw performance. It seems to me to be an immediate stop-gap, and I'm not sure how many serious companies will buy into this fusion-IO thing long-term.
I'd personally rather have 7 SLC's in a RAID 5EE over two redundant PCI-E cards and one hot-spare. It's far more cost efficient, and I think everyone will agree hot-swapping SAS/SATA is a lot easier than hot-swapping an internal card.
PCI-E will be an even shorter-term stopgap than most people realize.
PCI-E 1.0 x1 bandwidth is 250MB/s per direction, not even double regular PCI's 133MB/s.
So the PCI-E x4 link the Fusion-io card uses tops out around 1GB/s, not that far above the 600MB/s SATA 3.0 spec that will be out soon.
You really don't need an x4 slot. A PCI-E 2.0 x2 slot already gives you 1GB/s.
And since PCI-E 3.0 is coming with roughly 1GB/s per lane, I think x2 for compatibility is reasonable enough.
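Nominal per-lane PCI-E bandwidth is just the line rate times the encoding efficiency (8b/10b for gens 1 and 2, 128b/130b for gen 3); a quick calculator for checking the numbers in this sub-thread:

```python
# Nominal per-direction PCI-E bandwidth: line rate x encoding efficiency.
# Gens 1 and 2 use 8b/10b encoding; gen 3 uses 128b/130b.
GENS = {1: (2.5e9, 8 / 10),     # 2.5 GT/s per lane
        2: (5.0e9, 8 / 10),     # 5.0 GT/s per lane
        3: (8.0e9, 128 / 130)}  # 8.0 GT/s per lane

def bandwidth_mb(gen, lanes):
    """Usable bandwidth in MB/s for a link of the given width."""
    rate, eff = GENS[gen]
    return rate * eff / 8 / 1e6 * lanes

print(round(bandwidth_mb(1, 1)))  # 250  - gen1 x1
print(round(bandwidth_mb(2, 2)))  # 1000 - gen2 x2, the ~1GB/s above
print(round(bandwidth_mb(3, 1)))  # 985  - gen3 x1, close to 1GB/s
```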
...this is basically saying the drives are great (assuming the price is much better than the X25-E) as long as you're not moving large files around?
i.e. the relatively infrequent software installations wouldn't be optimal, but otherwise it would be quite a good drive? or like an OS drive basically?
can you throw in an average 7200 rpm hard drive into the mix for a relative comparison?
The 64Gb 8Gx8 MLC chips cost on average $12.50, and of course, since Intel makes the flash themselves (or via a joint venture), they are already making a profit on the flash. 10 of those = $125. I think the controller may be 90nm tech, costing around $15; again, Intel makes a profit on the controller as well. With packaging, DRAM, etc., the 80GB SSD should cost about $160 to make.
I believe NAND pricing is still based on 50nm-40nm parts, so 34nm should cost less. Hopefully in a year's time it will cost 50% less.
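Itemized, the guess above looks like this. The NAND and controller figures are the commenter's estimates; the $20 line for DRAM, PCB and packaging is a filler assumption so the sum lands on the quoted ~$160:

```python
# The bill-of-materials guess above, itemized. NAND and controller figures
# are the commenter's estimates; the $20 line for DRAM/PCB/packaging is a
# filler assumption to reach the quoted ~$160 total.
bom = {
    "NAND: 10 x 64Gb MLC @ $12.50": 10 * 12.50,
    "controller (90nm-class)": 15.00,
    "DRAM, PCB, packaging, etc.": 20.00,
}
total = sum(bom.values())
print(total)  # 160.0
```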
Yeah, Intel has confirmed that TRIM support will be delivered as part of a firmware upgrade, to be released when Windows 7 supports it. Apparently XP and Vista machines will also require supporting software to be installed, as neither has built-in TRIM functionality.
Not such good news for the original G1 drives, as it does not seem that Intel will be releasing similar firmware for them.
Hi, thanks for the great article, although would love to see reviews on the smaller capacity drives (e.g. the 80GB Intel) that most people buy, instead of the expensive ones.
Also I read terrible reports of unreliability on Intel SSDs (blue screens, incompatibilities). It seems way too risky to me to put any SSD in RAID-0
Would love a review on the new Crucial SSDs, they just came out this week, have very good read/write rates, they are available NOW on the Crucial website, are quite CHEAP, but there are no reviews yet, not even what controller they use.
No technology is 100% reliable; even enterprise hard disks fail quite a lot. Intel's SSD is as reliable as any other SSD. I don't think you have to worry on that count.
I wish Intel had bumped the warranty up to 3 years. That would speed up the adoption of SSDs.
I too am disappointed that Intel has failed to improve its sequential write figures when everything else seems to have been improved.
What I would like to know is whether Intel can do anything with the G2 firmware to improve sequential write performance without affecting the other numbers, or whether this is a hardware limitation, in which case we will need to wait for the G3 or 34nm SLC drives.
Also, in the real world, where will this sequential write bottleneck really affect or be noticeable for laptop users? Copying large files to the drive? Unzipping files? Some other task?
This sort of information would really be useful in deciding whether to go for the new Intel drive or the Vertex in time for Windows 7.
Out of interest, is there any news on when the OCZ Vertex Turbo drives will be included with these tests?
Honestly I doubt we'll see significant gains in sequential write performance with this generation. Intel could improve sequential write performance but I believe it would be at the expense of random performance, which for desktop users is a bit more important.
Copying large files to the drive will be the biggest indication of the limited write speeds compared to the Samsung/Indilinx offerings. Even at 70MB/s the X25-M is faster than any hard drive you'd stick in your notebook, but it's definitely slower than the Vertex/Samsung SSDs.
In my opinion it's random read/write performance that ultimately matters, as you tend to not copy huge files to such a small drive on a regular basis once you have everything setup.
I should have the first Vertex Turbo drive by the end of next week.
Personally, my feeling on this is that [medium to large] sequential writes are equally as important as [small] random writes.
I have noticed that especially on SSD reviews, the common random write tests are 4kb sized and deemed most important by many.
I honestly have not seen one validated bit of information anywhere showing that [small] random writes are the highest-percentage file IO that desktop PCs write out; in fact, it is just the opposite.
Most data files written by common productivity software are almost always larger than 4KB, and most software performs either a backup-rename or a write-then-delete, so partial-file small-block update/overwrite is not an issue. With growing web content, browser cache files are getting larger, as are most temp files. And one of the heaviest users of the hard drive is the Windows page file, which is optimized to perform large sequential writes. See this page: http://blogs.msdn.com/e7/archive/2009/05/05/suppor...
"Should the Page File be placed on SSDs? Yes."
"Pagefile.sys reads outnumber pagefile.sys writes by about 40 to 1."
"Pagefile.sys writes are relatively large, with 62% greater than or equal to 128 KB and 45% being exactly 1 MB in size."
Personally I am looking for SSDs with higher sequential write performance and would welcome some "real world" tests in the many reviews on the Internet, since I don't believe that what we are getting is valid information in this regard. For example, 128KB through 1MB writes would give a more valid idea of page-file performance on a drive.
Reviewers need to educate themselves on how the OS works.
The poor sequential write performance is likely due to the controller lacking hardware (or having slow hardware) for the MLC ECC calculations. SLC flash doesn't require such heavy ECC, so the X25-Es don't hit this bottleneck. It's unlikely to be fixed via firmware, though I was personally hoping that they would include it in this new revision of the controller.
Don't know about you, but 70 MB/s is more than enough for me. I'll happily trade sequential write for random write. Especially since the alternatives are so much slower in random write. You'll notice random write performance much more than you will sequential write performance. How many times per day do you copy/move multi-GB files, especially since you're probably only going to use it as a system drive? Maybe a couple times per day? Compare that to the fact that random writes keep happening in the background for as long as you use your computer.
I've read AnandTech's SSD Anthology and decided to go with the Intel X25-M G1 80GB MLC SSD on 2009-06-10 for $314 USD at Newegg. Due to an incompatibility with the nVidia nForce4 chipset I couldn't use the drive, and I had to upgrade my entire system early to an Intel Core i7 on an Asus P6T motherboard with X58 and ICH10R. I only got to use the SSD starting in July 2009, and I've been using it for a month with Windows 7 RC.
The performance is absolutely fantastic, and I think this upgrade and the troubles were worth it. However, now I hear that Intel might not release a firmware upgrade for the Generation 1 drives to enable TRIM support, and this seriously pisses me off. On top of that, the new G2 drives seem to perform faster for random writes. I am contemplating putting my G1 drive up for sale on eBay to recoup almost all of the money I spent on it, so I can purchase a G2 drive with TRIM upgrade potential.
If Intel came out and promised TRIM support for the Generation 1 drives I would most likely keep it, but unless that happens in the next week or two I'm going to dump this G1 like a hot potato.
One thing that I would love to know is the actual sale date of the G2 drives and what the street price will be for the 80GB model.
Jakfrost - both G2 models have been on sale here in Tokyo for 2 days already, albeit in limited quantities. I picked up a G2 160GB one yesterday for JPY 49,000; that's $519 at today's exchange rate. It's humming along nicely in my MacBook Pro now, and I'll have to pick up another one ASAP.
Real power usage: the 150mW stated in the previous article is far too low compared to other SSDs, which draw 2-4W. If what Intel states is true, then it is many times better than its competition.
How much difference would we get with DDR3 RAM, i.e. if the cache were 5 times faster?
And why is Intel only using 32MB of RAM when others are already using 64MB? (Limited by SDRAM?)
Another question I have, although not relevant to SSDs, that I hope someone or Anand could answer here: why SDRAM? Why does it still exist? DDR2, which I believe to be the cheapest RAM out there, is faster, lower power, and higher capacity. Why do we keep producing SDRAM at all?
Did Intel artificially limit the Seq Write performance?
Any pricing for the X18-M? I would prefer my laptop use more of its space for battery.
Most likely cost saving. Since memory (even PC133) is much faster (bandwidth-wise) than flash, and latencies are about the same, it's simply cheaper to go with SDRAM. The same reason they didn't paint the G2...
In a mixed random read/write scenario the X25-E was 3 times faster. This is what other tests suggest as well: the X25-M has a random write rate of under 1000 IOPS while the X25-E is somewhere around 3000-5000 IOPS? But the random 4KB write graphs show the X25-M G2 is faster here. Am I missing something? Is there a test missing that would show the X25-M G2's poor random write IOPS relative to the X25-E? An IOMeter test?
I am considering 4 X25-M G2s in RAID-0 on my Areca 1231ML for working on a 15GB-20GB data warehouse in SQL Server, or would 2-3 X25-E G1s be faster?
1. PC Perspective's recent benchmarks show the X25M-G2 and the X25M-G1 both peaking at ~16K IOPs with IOMeter database pattern at a queue depth of 32.
2. The THG numbers in that article (and elsewhere) appear low compared to many others. They also show falloff starting at queue depth 8 for the X25E and 16 for the X25M--those are odd and suspicious. They don't provide details, but I suspect they used one of the Promise SATA controllers on the test system, and the funny numbers are primarily an artifact of the controller.
3. THG used IOMeter 2003.05.10 for the IO performance tests in that article, which is ancient; 2006.07.27 was the last release. I've noticed significant differences in results in what are otherwise nominally similar benchmark setups depending on IOmeter version. E.g., if you look in the "charts" section at THG, you'll find the X25E tested with IOMeter 2006.07.27; database pattern = 6400 IOPs vs. ~5000 IOPs for IOMeter 2003.05.10 in that article; yet in another article using IOMeter 2006.07.27 they show ~6600 IOPs.
4. We don't know the test parameters... partition with a test file? raw disk? how big? While I'd expect test size to have much less effect with SSD's than HDD's, that information isn't provided.
In short, until sites provide much more information about the details of their test configurations and parameters, comparing benchmarks from different sites--or often from the same site--is a crap shoot. Not to mention that you can be pretty sloppy with HDD benchmarks and still get pretty similar numbers; SSD benchmarking really needs more attention to details that might not matter much in the HDD world.
p.s. The X25M-G2 (both 80GB and 160GB) is spec'd faster than the X25E in all respects except serial write and write latency which is the same for both, which suggests the X25M-G2. However, I haven't seen any reputable comparable benchmarks between the two.
Oops, sorry, clarification: "...except serial write and write latency which is the same for both..." should read "...except serial write which is slower, and write latency which is the same..."
Did you by any chance have the opportunity to compare the various Indilinx Barefoot (MLC)s? I'm curious if there's any variations in the firmwares or even unspotted hardware differences.
Any chance of seeing any of the new ExpressCard SSDs reviewed soon? This would be the way I would go... but I don't know enough about these drives to take a chance on one meeting my needs for my laptop. The Wintec FileMate 48GB would be perfect for OS and programs and would allow me to keep everything else in my system the same. This drive claims read/write speeds of 115/65 MB/s.
I guess what I'm getting at is... do these stutter? Are they fast? Would make a great review...
Could you possibly benchmark these drives with BitLocker or TrueCrypt? I'm interested to see the performance impact of using full hard drive encryption with these SSDs.
I've got an x-25M G1 80GB and I installed truecrypt whole-disk encryption on it.
Windows Experience Index dropped from 7.4 to 5.9 after I encrypted.
ATTO disk benchmarks dropped from around 70MB/sec writes and 220MB/sec reads to ~65MB/sec writes and 130-145MB/sec reads.
"Seek" latencies are still pretty low.
Keep in mind I have a slower CPU (AMD Athlon X2 4800+), so my encrypt/decrypt speed isn't stellar (TrueCrypt benchmarks AES, which is what I'm using, at around 200MB/sec).
Anyone else care to share? :) I'm curious about BitLocker performance too.
I too am extremely interested in this type of testing. Just the numbers on one of the new Intel drives with and without fde would suffice, don't need an all-out comparison.
Thanks for the update! Based on your numbers, a couple items you might look at when you do more testing, assuming you're using IOmeter(?) ...
1. 4KB random write 34.5MBs = ~8.4K IOPs and a latency of ~118us = very close to the Intel spec ("up to" 8.6K IOPs and 115us latency).
2. 4KB Random read 58.5MBs = ~14.3K IOPs and a latency of ~70us = significantly lower IOPs and a bit lower latency than Intel specs ("up to" 35K IOPs and 85us latency), and significantly lower latency than you show (200us).
Intel uses IOmeter with a queue depth of 32; are you using the same, or is there another possible bottleneck (especially on the read side) in your test system?
Or maybe methinks Intel's 35K IOPs 4KB random read claim is bogus? That would imply a latency of ~29us (or ~143MBs), but they also spec 85us latency, which gives ~11.8K IOPs--which more closely matches your test numbers.
Apologies, last paragraph in my previous post has a brain fart... it assumes IOPs are proportional to latency, which is obviously false with multiple channels and pipelining. Duh. Other benchmarks (e.g., PC Perspective) suggest 35K IOPs might be achievable in theory, but I've yet to see anything close to that number in practice.
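For reference, the conversions in the two posts above are plain arithmetic and can be sanity-checked with a short script (assuming 4KB = 4096 bytes and decimal megabytes, as Intel's specs use; the queue-depth-1 latency formula is the naive one the follow-up post cautions about):

```python
def mbps_to_iops(mb_per_s, block_bytes=4096):
    """Convert throughput (decimal MB/s) to IOPS for a given block size."""
    return mb_per_s * 1_000_000 / block_bytes

def naive_latency_us(iops):
    """Implied per-IO latency at queue depth 1, in microseconds.
    (Breaks down with multiple channels/pipelining at higher queue depths.)"""
    return 1_000_000 / iops

# Figures from the post above:
write_iops = mbps_to_iops(34.5)            # 4KB random writes -> ~8.4K IOPS
write_lat = naive_latency_us(write_iops)   # ~118 us (vs Intel's "up to" 115 us)
read_iops = mbps_to_iops(58.5)             # 4KB random reads -> ~14.3K IOPS
read_lat = naive_latency_us(read_iops)     # ~70 us (well short of the 35K IOPS spec)
```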
"While I don't believe that 70MB/s write speeds are terrible, Intel does need to think about improving performance here because the competition is already far better. The more important values are still random read/write performance, but sequential write speed is an area that Intel continues to lag behind in."
Well, maybe with the introduction of the 320GB model with increased random 4KB writes (10.6K IOPS?) and with 166MHz SDRAM instead of 133MHz, they can hit around 95MB/s (85 advertised) in sequential write performance.
Unfortunately as Anand mentioned, Intel doesn't provide detailed durability specs for the M/MLC drives, other than a "Minimum Useful Life: 5 years" (from the datasheet). While the 34nm parts may actually have a lower per-cell durability than the 50nm parts, I would hope and expect they have compensated for that with additional sparing and ECC. (The BER for all Intel's SSD drives is the same: "1 sector per 10^15 bits read".)
Intel provides more data for the E/SLC drives: "1 petabyte of random writes (32 GB)" and "2 petabyte of random writes (64 GB)" (from the datasheet). Assuming "random writes" means evenly distributed, that implies ~31K erase/write cycles. Again, however, I would hope and expect that Intel is being very conservative with those numbers. MLC drives typically can sustain an order of magnitude (or more) fewer cycles than SLC, but then again they're cheaper, so there's more room for spare cells and better ECC.
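Taking the datasheet numbers at face value (and assuming decimal units, i.e. 1 PB = 10^15 bytes), the implied per-cell cycle count works out as follows:

```python
def implied_cycles(total_write_bytes, capacity_bytes):
    """Erase/write cycles per cell if writes are spread perfectly evenly."""
    return total_write_bytes / capacity_bytes

cycles_32gb = implied_cycles(1e15, 32e9)   # 1 PB on the 32GB X25-E -> 31250
cycles_64gb = implied_cycles(2e15, 64e9)   # 2 PB on the 64GB X25-E -> 31250
```

Both models come out to the same ~31K cycles, which supports the point that Intel is likely being conservative for SLC.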
I think (or at least hope) there's little reason to doubt Intel's "5 year" assertion; validation is pretty straightforward, and they seem to have done their homework in all other respects. Not to mention that the typical SSD failure mode (on write, and non-catastrophic) is preferable to HDD's (on read, and catastrophic), assuming of course it's done properly...
Given that Intel is holding their controller very close to the vest, my guess, and only a guess, is that Intel is being very conservative with writes (more power and slower programming = longer life), or might be doing a full write-read-verify, which might explain why their write speeds are lower than the competition. If so, good for them. Last thing the SSD market needs is to have large numbers of drives failing in weird ways in a couple years.
P.S. As far as pagefile activity... I wouldn't worry about it unless you're constantly and seriously overcommitted on memory. There's a lot less pagefile activity than most people think. But that's another subject.
I'd conjecture that Intel's very good random write performance may somehow relate to why the sequential write isn't as good, although this isn't evident in the SLC version of their SSD. The other possibility is that Intel is distinguishing between their SLC and MLC series through lower write speed. I'd have to say that random read and write are way more important for a normal desktop user than anything else, though; I always notice latency and responsiveness far more than a small change in long-term transfer speed. Personally I'm very interested in getting an 80GB/160GB as a main OS drive for both my primary box and a ZFS RAID server/storage box.
With the recent price drops I'm considering getting this drive in its 80GB flavor as an OS drive. Sequential write speed is the least of my concerns, because once you load on your OS and other apps, you barely write in big blocks.
I also pick Intel as number one, but the battle for second place is a little grayer. Samsung is more expensive, which leads many to believe it is faster, which of course is not the case.
Take a close look at the part numbers. A bit hard to read given the resolution of the pic's, but I'd bet the old unit uses the equivalent of Micron MT29F64G08CFxxx 64Gb parts, and the new unit uses the equivalent of Micron MT29F128G08CJxxx 128Gb parts.
Micron production MLC parts for both are available only in TSOP 48. The package dimensions also appear to be the same, and per ONFI 1.0 (on which Intel says they're based), that could be easily verified from the package dimensions. The controller is obviously BGA.
As to why the potting or lack of... thermal, shock, anti-whatever... but I'd guess Intel has just gotten better with the qualification/manufacturing process.
BGA chips typically do not need potting. In fact, the vast, vast majority of BGAs - including some that run very hot - are not potted at all.
If the original Intel SSD used extensive potting - I don't know myself, I've not opened up my 60GB SLC drive - I'd assume it would be as an anti-counterfeiting measure to prevent far-east outfits from screwing with the innards and then selling the drives more expensively as a higher capacity unit.
Bolas - Friday, August 21, 2009 - link
So... does it work yet? What's the current status of the G2 firmware bug? Any idea when we'll be able to buy G2 drives on Newegg?
krazyderek - Sunday, August 16, 2009 - link
I know this is a bit pre-emptive, but I really hope that if the introduction of TRIM goes well we see a double bar graph showing new AND used performance bars for each drive (i.e. TRIM on, TRIM off). Once TRIM is implemented it may level the playing field for some drives, and that should be easily and fairly shown in the charts. I know you had shown used vs. new benchmarks during the anthology, but putting them in different charts, sometimes on different pages, made it tricky to see the exact difference. Again, if TRIM is all it's touted to be, the "new" state may be the long-term state of an SSD. Here's hoping Intel gets its act together with those seq writes.
PS, I own a 50nm 80GB Intel drive. I'm semi-ticked Intel won't release a firmware with TRIM in it; at the same time, it still has the second fastest random read and write (second only to its newer version), and it feels ridiculously fast to me, launching every application on my computer at once when I start Windows, with a sub-20-second boot time in OS X.
Alberto122 - Tuesday, August 4, 2009 - link
I think the computer market will slow down in the next two years. Industry results in July show profit warnings and losses at many companies: Intel lost US$398 million and AMD US$335 million.
The new Google Chrome operating system (which has lower hardware requirements than Windows), virtualization running on multicore computers (one computer, many users), and the financial crisis could hit PC market sales very hard.
We'll see how they can sell new processors.
http://managersmagazine.com/index.php/2009/08/info...
krazyderek - Sunday, August 16, 2009 - link
Intel with its high IOPS, and SSDs in general with their near-nonexistent response times, seem poised to dominate multi-user and virtualization setups. It does seem like hard times ahead, but that's where bang for the buck is really going to count, and not too many people actually "need" a 2TB drive, whereas HDs have been the bottleneck in computers ever since the GHz race spread out into the multi-core arena. SSDs seem ready to stand up in a multi-user server or virtualization setup and take it from "hold on a second, I'm on a shared server login" to... "really, this is a shared computer?"
jimhsu - Sunday, August 2, 2009 - link
Hey Anand, I think I finally found the Achilles' heel of the X25-M: poor random read performance under a heavy sequential write workload.
http://www.hardforum.com/showpost.php?p=1034430137...
Reproduce with:
PerformanceTest 7.0 (x64)
Advanced Disk Test
2 synchronous threads
500MB test files
Uncached Win32 API
Thread 1: 100% Read, 100% Random, 4096 byte block size
Thread 2a: 100% Write, 0% Random, 1 MB block size
Thread 2b: 100% Write, 100% Random, 4096 byte block size
In general, the X25-M lags HEAVILY when doing a seq write (e.g. a file copy) while doing random reads (e.g. opening Microsoft Word). Heavy RANDOM writes are not a problem, suggesting possibly poor interleaving of large sequential writes and small reads.
Worth discussing with Intel?
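For readers without PerformanceTest, the shape of that workload can be sketched in Python. This is only a rough stand-in -- it doesn't reproduce the uncached Win32 API reads (Python has no portable O_DIRECT), so the OS page cache will mask most of the device behavior, and the file size here is scaled way down -- but it shows the two-thread read-vs-write structure:

```python
import os, random, tempfile, threading, time

BLOCK = 4096                       # 4KB random reads, as in thread 1
WRITE_CHUNK = 1024 * 1024          # 1MB sequential writes, as in thread 2a
FILE_SIZE = 8 * 1024 * 1024        # tiny stand-in for the 500MB test file
read_latencies = []

path = os.path.join(tempfile.mkdtemp(), "testfile")
with open(path, "wb") as f:
    f.write(os.urandom(FILE_SIZE))

stop = threading.Event()

def random_reader():
    with open(path, "rb") as f:
        while not stop.is_set():
            f.seek(random.randrange(0, FILE_SIZE - BLOCK))
            t0 = time.perf_counter()
            f.read(BLOCK)
            read_latencies.append(time.perf_counter() - t0)

def seq_writer():
    with open(path, "r+b") as f:
        while not stop.is_set():
            f.seek(0)
            for _ in range(FILE_SIZE // WRITE_CHUNK):
                f.write(os.urandom(WRITE_CHUNK))

threads = [threading.Thread(target=random_reader),
           threading.Thread(target=seq_writer)]
for t in threads:
    t.start()
time.sleep(0.5)                    # run the mixed workload briefly
stop.set()
for t in threads:
    t.join()

avg_ms = 1000 * sum(read_latencies) / len(read_latencies)
print(f"{len(read_latencies)} random reads, avg latency {avg_ms:.3f} ms")
```

Comparing the read-latency distribution with and without the writer thread running is the interesting part; on a drive with the interleaving problem described above, the tail latencies balloon.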
jimhsu - Saturday, August 15, 2009 - link
What makes this even harder to understand is that performance on the drive is dynamic - the algorithms gradually accommodate changing workloads under a fragmented condition. This only happens under LOW free space conditions in the used state. I copied a lot of files sequentially to the SSD over the past few days. Seq write speeds increased from 30-40MB/s to over 80MB/s, but random writes dropped to 15MB/s. These are CrystalDiskMark benches so I don't trust them much at all, but seq writes definitely became faster (I can't feel the effect of the slower random writes).
The performance profile of these Intel drives is VERY confusing. TRIM will probably help a lot to make performance less confusing.
iwodo - Monday, August 10, 2009 - link
Interesting, does this problem exist in other SSD drives from Samsung or Indilinx?
HexiumVII - Wednesday, July 29, 2009 - link
Do these new drives support it?
Alkapwn - Sunday, July 26, 2009 - link
Came across this information, and wonder if there is any way to verify it. Anyone heard of drive problems on these new G2s?
http://www.pugetsystems.com/blog/2009/07/24/minor-...
somedude1234 - Tuesday, July 28, 2009 - link
I ordered one from Puget last week. I can confirm that they are doing exactly as they said in the linked post and contacting people with pending orders directly.
I fell into the category of "not likely to use a BIOS password on the drive, but willing to wait if Intel thought a hardware change was necessary".
Once they confirmed that a simple firmware update would resolve the issue, I gave them the go-ahead to ship mine. I should be receiving it soon.
So far I've been very impressed with the customer service at Puget Systems. They are definitely responsive to their customers.
has407 - Sunday, July 26, 2009 - link
Seen the same on a couple sites. Apparently setting a drive password and then trying to change or disable it can cause the drive to become inaccessible. They're also no longer listed on Newegg.
Jefrach - Monday, July 27, 2009 - link
I bought one of these drives from Newegg the day they were released. I just received an email today saying I was getting a refund because they were out of stock and that the item is discontinued. Price was as listed, $449.
billybob54321 - Saturday, July 25, 2009 - link
Anand - I know application performance is on its way, but I'd love to see individual performance times of actually running applications. The original Intel SSD review focused on copy/launch times, whereas I'd be interested in run times of apps after they are launched.
http://www.anandtech.com/storage/showdoc.aspx?i=34...
For example, I do a lot of work in Photoshop CS4 with big RAW files. It's unclear whether the random read/write speed of the Intel drive would be superior to the sequential write of the Vertex drive when working in PS CS4 because the temp files can get into the 100's of megabytes.
Thanks for your hard work!
erikejw - Friday, July 24, 2009 - link
Is this benchmarked in a used state of the drive? That is how we all use hard drives, so if it is not, it is pretty worthless for users.
Cov - Friday, July 24, 2009 - link
Here you can find results from someone who tested the SSD he just bought with CrystalDiskMark:
http://hardforum.com/showthread.php?t=1436928&...
(last posting on that site)
piasabird - Friday, July 24, 2009 - link
These performance tests do not tell you anything.
1. How do they compare when actually running programs in an operating system?
2. How do they compare to a standard 500GB hard drive?
How are these to be used? Are people planning on using them as hard drive replacements in laptops? What about heat and cooling requirements, like you might mention for RAM or a video card?
The reason I am asking these questions is that Microsoft Windows as an operating system is not designed that well to use these devices, and they don't show much advantage when used in cooperation with a hard drive to boot a computer faster. This seems like much ado about nothing when I can purchase a 320GB drive for about $100.00.
evand - Friday, July 24, 2009 - link
Well, some application benchmarks would be nice. But then, the article is titled "performance preview". If you don't know how to read these numbers and take a guess at what it will mean for you, then don't.
They compare very, very favorably to a top of the line, high performance hard disk, and you're asking how well they compare to a drive that said high performance disk will eat for breakfast? Seriously, the comparison almost isn't worth making. If you're really curious, find reviews of the disk you want compared to, and see what high performance rotating drives they compared it against (or go see what regular drives that high performance drive was compared against in its reviews).
People are considering using these on laptops, on desktops as a primary (OS / apps / heavily used data) drive with rotating media for bulk storage, in silent /fanless computers, etc.
Why do you even need to ask about cooling and heat sinks? It doesn't need them. It draws 150mW of power. Putting heat sinks on it would be ludicrous.
So what if Windows isn't "designed" to use these disks? It will use them without any problems. Perhaps a more carefully designed OS could eke a bit more out of them, but so what? It should be worlds faster than any rotating media you can compare it to.
If you want capacity, go buy a 1TB drive to use as a secondary data drive for $100, and use one of these for things that care about IO performance rather than storage. Splitting your data across the two disks to get both good performance and lots of space isn't exactly hard for the normal desktop usage case.
evand - Friday, July 24, 2009 - link
I was discussing these drives with someone who works in the storage industry with Flash technology, but not these specific drives. He had an interesting observation: a lot of flash drives will keep a pool of pre-erased pages available for writing. However, they can't erase new pages fast enough to keep up with the peak random write performance indefinitely. Once the pre-erased pool runs dry, random write performance drops dramatically.
Is this the case with the new (or old, for that matter) Intel drives? How long a time period do your random write benchmarks run for? Would you be willing to run a random writes benchmark that runs long enough to overwrite a larger fraction of the disk, and tell us whether the performance drops under sustained load?
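The pool-exhaustion behavior described above is easy to model with a toy simulation. All numbers here (pool size, background erase rate, relative costs) are invented for illustration and aren't taken from any real drive:

```python
def simulate_random_writes(n_writes, pool_size, erase_rate,
                           write_cost=1, erase_cost=20):
    """Cost per write op: cheap while pre-erased pages last, expensive once
    the pool drains faster than background erases can refill it.
    erase_rate = pages the drive can background-erase per write op."""
    pool = float(pool_size)
    costs = []
    for _ in range(n_writes):
        pool = min(pool_size, pool + erase_rate)   # background refill
        if pool >= 1:
            pool -= 1
            costs.append(write_cost)               # hit a pre-erased page
        else:
            costs.append(write_cost + erase_cost)  # must erase inline
    return costs

costs = simulate_random_writes(n_writes=1000, pool_size=100, erase_rate=0.2)
early = sum(costs[:100]) / 100   # while the pool lasts: cost 1.0 per op
late = sum(costs[-100:]) / 100   # after exhaustion: much more expensive per op
```

The model shows exactly the question being asked: a short benchmark only ever sees the `early` regime, so you have to keep writing past pool exhaustion to see the `late` one.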
iwodo - Friday, July 24, 2009 - link
I just read on another review site that the Intel SSD incurs SUBSTANTIALLY more CPU usage than other competing SSDs. I hope your future detailed review will test this as well.
Even though the Intel SSD uses much less power than competing SSDs, the CPU usage required will discount those savings. (Possibly using even more power overall.)
It could be another case of Intel pushing you to buy a more powerful CPU, which is rather sad, since this is exactly what happened with USB, which makes it slow and unresponsive compared to FireWire.
pmeinl - Friday, July 24, 2009 - link
Does anybody know of PC cases optimized for SSDs?
I only found the following enclosures for mounting SSDs in cases designed for HDs:
http://vr-zone.com/articles/a-data-ssd-enclosure-t...
http://www.patriotmemory.com/products/detailp.jsp?...
http://www.newegg.com/Product/Product.aspx?Item=N8...
velis - Friday, July 24, 2009 - link
So you don't wonder too much any more:
RE: How Come Intel Can't Compete The Sequential Performances Of Its Rivals?
by Anand Lal Shimpi, 1 day ago:
I'm not sure it's a question of why Intel can't, I think it's a conscious architectural decision. It's something I've been chasing down for months but I can't seem to put my finger on it just yet.
This is indeed an implementation decision.
The difference between "ordinary" drives and the Intel drives is in the way they write data to the disk:
"Ordinary" drives can have "high" sequential throughput because they employ only coarse levelling algorithms. That is, they keep erasure counts for entire blocks and write data to the block that currently has the lowest erasure count.
First a bit of glossary for the terms I use:
Block - the smallest unit on a flash chip that can be erased by erase command (typically 256MB).
Sector - The smallest data unit controller's algorithm recognizes. This is probably not equal to 256 or 512 bytes (actually it's a bit more for CRCs and stuff) which is a typical sector size on a flash chip because the reallocation tables would take way too much space then.
Below I only explain the reallocation algorithm - which is in reality also complemented by a wear leveling algorithm (the wear leveling you already know and understand).
Intel's controller (and the OCZ Vertex's) does this differently: it converts all writes to sequential ones, looking for the block which has the most "free" sectors. Of course some wear leveling is also applied. So each write is preceded by a lookup of such a block; then this block is erased, and the non-free sectors are re-written to it along with the new sector.
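A literal toy sketch of the write path as described in this post (the poster's conjecture, not Intel's documented algorithm; wear leveling and real page/block sizes are omitted):

```python
def conjectured_write(blocks, new_sector):
    """blocks: list of fixed-size lists; None marks a free sector slot.
    Pick the block with the most free slots, 'erase' it, and rewrite
    its live sectors plus the new one. Returns the chosen block index."""
    idx = max(range(len(blocks)), key=lambda i: blocks[i].count(None))
    live = [s for s in blocks[idx] if s is not None]
    size = len(blocks[idx])
    rewritten = live + [new_sector]
    blocks[idx] = rewritten + [None] * (size - len(rewritten))  # erase + rewrite
    return idx

blocks = [["a", "b", None, None],
          ["c", None, None, None],
          ["d", "e", "f", None]]
target = conjectured_write(blocks, "x")   # block 1 has the most free slots
```

Under this scheme every write pays a block erase plus a rewrite of the surviving sectors, which is one way to see why the reallocation bookkeeping could eat into sequential throughput.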
This method creates massive fragmentation issues and to keep track of it, some quite large lookup tables have to be maintained in the controller.
In order for this method to work as efficiently as possible, it is always good to leave some space on the disk free for the algorithm itself. I know the disk itself doesn't have 80GB capacity, more like 70 available for an OS to use. I wouldn't be surprised if the amount of flash chips on the disk itself was more than 80GB, but I just checked out the PCB pics and this doesn't seem to be the case.
The translation / lookup tables take 1MB per 1GB of storage space if "cluster" size is 4KB, which I strongly suspect it is (default FS cluster size for many file systems). Additionally you need derived "free sector" tables which are also quite large and barely fit into that poor 128Mbit SRAM chip that's there on the PCB, if at all. This is only for the most simple implementation of a sector reordering algorithm. In order to make things faster, they probably need even larger / more tables.
This free space on the disk simulates the announced TRIM support. Since you always have a certain percentage of the drive "free" for your algorithm to use, you can always assume (and be right) that you will find a block that has at least some sectors free. Since typical disk writing occurs mostly in a few clusters, this then translates into "the same" blocks alternating among themselves.
The reason intel's drives showed performance decrease when in "used" state (BTW: this state is achieved at the same moment when you have written disk's capacity blocks to the disk, but it doesn't mean that the disk is at that time full - it just performed, say 80GB of writes) is because at that particular moment the disk itself has run out of 100% free blocks and it has to start looking for blocks that have as many as possible free sectors. But, naturally, only rarely are those blocks >50% free so writing speed decreases because after block erasure, the controller must write back the sectors that were already full in the block.
So, supporting TRIM will improve the situation for these disks because more sectors will be marked as free. They will still suffer some performance degradation, but not nearly as much as G1 models. Of course this will hugely depend on the amount of disk space that is actually taken (what explorer reports as used capacity). The more free space on the disk, the more sectors "TRIMmed" on the disk itself and the more free sectors to use when writing.
It will also not be necessary to wipe the drive when the TRIM support is installed. It will only take a bit longer for most of the sectors to actually become marked as TRIMmed. So the benefit will be there. But I can't be sure what will happen to the drive's data when the new firmware is flashed on it. If the flashing process or algorithm differences (there shouldn't be any) should wipe the data from the disk, the TRIM benefit will then only come with a wiped drive, won't it?
Also even generation 1 drives should exhibit a lot less "used state syndrome" if they were "resized" to a smaller size. 40GB (half the disk) should be small enough for the degraded performance to never show. Unfortunately I can't test this since I don't have an Intel SSD.
So, finally, to answer the question:
Intel's drives have lower sequential performance (in theory) than "ordinary" SSD drives because they perform sector reallocation. This algorithm takes its processing time, and the controller chip isn't exactly a 253THz organic computer - if you know what I mean. Additionally, the reallocation tables are quite large and need to be written to the disk along with the data sectors. The ordinary disks have only small reallocation tables and don't suffer so much from having to write them.
has407 - Friday, July 24, 2009 - link
1. Intel spec's 312,581,808 user-addressable sectors = 160GB excluding a "small portion for NAND flash management". Most of that management overhead is below the visibility of the user, and at worst likely to be ~3-5% even if they subtracted all of the overhead from the user-accessible space.
2. A "page" is the smallest unit recognized by a NAND unit--that is, the smallest unit of data that can be read or written. Pages are grouped into blocks. A block is the smallest unit that can be erased. A page must be erased--or more accurately in the unprogrammed state--before it can be written/programmed.
a) You don't need to "write back sectors" (pages) unless you are overwriting an existing page; doing so would simply prematurely degrade the flash. The algorithm is: Has this *page* been programmed? No: write it, end; Yes: Does the block contain other valid pages? Yes: Read valid pages, erase block, write back valid pages and new page, end. Obviously you'd prefer to find a unprogrammed page rather than go through the read-write cycle.
b) We don't know the geometry of the Intel flash chips; specifically the page and block size. However, the ONFI 1.0 spec (on which Intel says these are based) states "The number of pages per block shall be a multiple of 32."
3. All reasonable implementations perform remapping as part of wear levelling. Ideally that remapping should be at the page level, as that is the smallest unit of programming (which may be larger than a sector); you also need to keep track of the number of erase cycles for each block.
a) Page mapping requires LOG2(number of pages) bits for the remap table for each page. E.g., for 160GB and a page size of 4KB: # pages/entries = ~39M; bits/entry = LOG2(39M) = ~25 bits/entry; rounding up to the next byte = 4 bytes/entry * 39M entries = ~160MB.
b) Erase cycle counts requires either a counter for each block, a FIFO/LRU that orders blocks based on preference for next write, or more likely a combination of the two (the latter optimized for run-time to answer the question "which page should I write next?").
c) Assuming an erase cycle granularity of 1/2^16 (i.e., a 16-bit counter for each block), a page size of 4KB, a block size of 32 pages (128KB), then for a 160GB drive = 2 bytes/block * ~9.7M blocks = ~20MB. (Again, however, trying to maintain a sorted list of counters is probably not the most efficient way to determine "where should I write next?".)
4. Given the above assumptions, the total in-memory wear-leveling overhead for a very reasonable and high performance algorithm for a 160GB drive is ~180MB, less with a bit of optimization. From that we can posit:
a) The page and block sizes are larger than assumed above (less memory needed).
b) The Intel controller has quite a bit of RAM (above the external 32MB DRAM chip).
c) The Intel controller is partitioning operations (reduce the amount of memory required at any time).
d) Something else.
In short, it is far from apparent why there is such a differential between Intel's and the competition's serial vs. random IO performance. By all means if you really do know, please explain, but your previous "explanation" appears dubious at best, and ill-informed at worst.
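The arithmetic in points 3-4 can be checked directly under the stated assumptions (160GB decimal capacity, 4KB pages, 32-page/128KB blocks). One caveat: with a 128KB block the block count comes out around 1.2M (~2.4MB of 16-bit counters), not ~9.7M; the larger figure matches counting 32 × 512-byte sectors per block instead:

```python
import math

CAPACITY = 160e9           # 160 GB, decimal, per Intel's spec
PAGE = 4096                # assumed 4KB page
BLOCK = 32 * PAGE          # ONFI 1.0: pages/block is a multiple of 32 -> 128KB

# 3a. page-level remap table
pages = CAPACITY / PAGE                          # ~39.1M pages
bits_per_entry = math.log2(pages)                # ~25.2 bits
bytes_per_entry = math.ceil(bits_per_entry / 8)  # rounds up to 4 bytes
remap_mb = bytes_per_entry * pages / 1e6         # ~156 MB ("~160MB")

# 3c. per-block 16-bit erase counters
blocks = CAPACITY / BLOCK                        # ~1.22M blocks
counters_mb = 2 * blocks / 1e6                   # ~2.4 MB
```

Either way, the conclusion stands: a full page-level in-memory map is far larger than the 32MB external DRAM, so something else must be going on inside the controller.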
velis - Monday, July 27, 2009 - link
Sorry, had no intention to offend you.
However:
The difference between user addressable blocks and actual flash size is quite substantial:
On the PCB you have 160GB of flash (160 * 1024^3 = 171798691840 bytes). Of that, you have 160 billion bytes (~160 * 1000^3) user addressable.
That's almost 11GB difference. Even if my info about disk free space is incorrect, this difference makes up for almost 7% of algorithm useable sectors. Flash chips do come in power of 2 capacities, unlike rotating platter disks.
Your calculations for the tables taking 180MB therefore still leave almost 11GB for the reallocation sectors.
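The spare-space arithmetic here is just the binary-vs-decimal capacity gap and checks out:

```python
raw = 160 * 1024**3            # binary-sized flash on the PCB: 171,798,691,840 bytes
addressable = 160 * 1000**3    # decimal 160GB exposed to the user
spare = raw - addressable      # 11,798,691,840 bytes
spare_gib = spare / 1024**3    # ~11.0 GiB -- the "almost 11GB"
spare_pct = 100 * spare / raw  # ~6.9% of raw flash, the "almost 7%"
```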
I have also tried to be as plain as possible in describing the algorithm. I delved in no details, but if you insist:
It's impossible to retain information in RAM across power cycles, be it on the controller or on the external RAM module. So no tables there, only cache.
Your calculations actually just "specify" what I told in general terms. For such specification and detailed calculations you would have to actually know the actual algorithm employed.
Your assumption a is incorrect because there's enough space for the tables, so no compression / optimization is necessary.
Your assumption b probably fails, controllers typically don't have massive amounts of ram. Even if there was massive ram in there, it would still only be used as cache.
I don't see how partitioning operations would reduce disk space requirements, but I believe that partitioning is done. Partition being one sector (smallest storage unit size).
There's probably quite a few "something elses" in there. With only 7% of spare disk space to work with, the algorithm is IMHO a lot more complex than I suggested.
So - why slower seq writes (in a bit more detail):
The controller has to read the reallocation tables relevant to the write. That may mean more than one sector, probably more like 3 or 4, depending how deep the reallocation table trees are.
Then the sector's physical position is determined and it is written and - if it's time, the modified reallocation tables are written too.
Both reading and writing "excess" information take time. As well as the calculation of the appropriate new address.
I don't know if that's what Intel implemented, but it seems to me a good guess, so don't beat me over it.
has407 - Monday, July 27, 2009 - link
Apologies, that last statement in my previous post was unwarranted. No you didn't offend me, and I didn't mean to beat you up, I was just being a cranky a*hole. Again my apologies.
My comments were primarily directed at controller run-time memory overhead, not overhead within the flash device itself. As you said, controllers aren't likely to have that much memory; specifically, sufficient for an in-memory logical-to-physical (LBA-PBA) map for every page of the SSD (never mind for every sector).
Yes, there is going to be some storage overhead; specifically, that needed for garbage collection and long-term wear leveling, where unused pages are coalesced into free blocks (improve write speed), and data in blocks with low erase counts are redistributed (improve longevity).
The importance of the latter is often overlooked, and if not done will lead to premature failure of some blocks, and result in diminishing capacity (which will tend to accelerate). E.g., you load a bunch of data and never (or rarely) modify it; without long-term leveling/redistribution, writes are now distributed across the remaining space, leading to more rapid wear and failure of that remaining space.
I expect they're doing at least garbage collection, and I hope long-term leveling, and may have opted for more predictable performance by reserving bandwidth/time for those operations (write algorithm treats random IO as the general case, which would play to SSD's strengths). OTOH, it may be a lot simpler and we're just seeing the MLC write bandwidth limits and housekeeping is noise (relatively speaking); e.g., ~7MBs for 2-plane x8 device = X 10 channels = ~70MBs.
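As a rough illustration of the long-term leveling described above, a toy static wear-leveler might pick relocation candidates like this (the threshold and data structures are made up for illustration, not any vendor's algorithm):

```python
# Toy illustration of static wear leveling: cold data sitting in a
# low-erase-count block is moved so that block rejoins the write pool.
# The threshold and structures are hypothetical.

def pick_relocation_candidate(erase_counts, threshold=0.5):
    """Return the block whose erase count lags the average by more
    than the threshold fraction, or None if wear is even enough."""
    avg = sum(erase_counts.values()) / len(erase_counts)
    coldest = min(erase_counts, key=erase_counts.get)
    if erase_counts[coldest] < avg * threshold:
        return coldest
    return None

counts = {"blk0": 5, "blk1": 900, "blk2": 850, "blk3": 880}
print(pick_relocation_candidate(counts))  # "blk0" holds cold data
```

Without this kind of redistribution, the blocks holding never-rewritten data sit out of the rotation, and all wear concentrates on the remaining space, which is exactly the accelerating-failure scenario described above.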
I won't lay claim to any answers, and AFAICT the simple/obvious explanations aren't. The only thing that's clear is that there's a lot of activity under the covers that affects both performance and longevity. Intel has done a pretty good job with their documentation; I'd like to see more from the rest of the vendors.
velis - Friday, July 24, 2009 - link
Oops: Block size is 256 KB, not MB.
rcpratt - Thursday, July 23, 2009 - link
Now on newegg, fyi.
StormyParis - Thursday, July 23, 2009 - link
I'm deeply bothered by the disappearance of the tests that showed (only) 25% better level-loading times, and of my comments about that.
I thought Anand was one of the last intelligent and honest sites around. Goodbye guys, it's been nice while it lasted.
Kougar - Thursday, July 23, 2009 - link
Noticed that on the last page the Random Read and Random Write charts need to be switched to align with the actual text above/between them.
jimhsu - Thursday, July 23, 2009 - link
Pure conjecture, but read this, specifically page 14:http://www.imation.com/PageFiles/83/SSD-Reliabilit...">http://www.imation.com/PageFiles/83/SSD-Reliabilit...
Possibly Intel is using a longer BCH error correction code to get the 10^-15 bit error rate?
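For a back-of-envelope feel of how a longer BCH code changes things, the uncorrectable-page probability can be modeled with a simple binomial tail; the raw BER and codeword size below are illustrative guesses, not Intel's actual parameters:

```python
# Back-of-envelope: probability a codeword is uncorrectable given a
# raw bit error rate and a t-bit-correcting ECC (BCH-style model).
# All numbers are illustrative, not Intel's code parameters.
from math import comb

def p_uncorrectable(raw_ber, bits_per_codeword, t):
    # P(more than t bit errors in the codeword), binomial model
    p_ok = sum(comb(bits_per_codeword, k)
               * raw_ber**k * (1 - raw_ber)**(bits_per_codeword - k)
               for k in range(t + 1))
    return 1 - p_ok

# Stronger codes drive the uncorrectable rate down sharply.
for t in (1, 4, 8):
    print(t, p_uncorrectable(1e-6, 4096 + 128, t))
```

The steep drop with t is why a modestly longer BCH code (a few more correctable bits per codeword) can buy several orders of magnitude on the sector error rate.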
jimhsu - Thursday, July 23, 2009 - link
Also interesting from there was the fact that errors in hard drives usually occur in bursts, i.e. P(error in bit b | error in bit b-1) is high, vs. errors in SSDs which are basically randomly distributed.
Robear - Thursday, July 23, 2009 - link
Thanks for the quick update. I'm on the edge of my seat with these new drives. The suspense reminds me of when Conroe first debuted ;)
Given the apparent quality of the controller, I'd like to assume that the low sequential writes were intentional to some degree. I'm sure Intel's engineers had to make some design decisions, and it appears as if they've chosen to sacrifice sequential writes in most (if not all) cases in favor of random writes.
I think Intel is on the right track with the controllers on these drives. If you look at desktop usage patterns, your random reads / writes reign supreme. Sequential writes are the most infrequent operations.
Anyway, that's based on the assumption that random write performance and sequential write performance are mutually exclusive somehow (supported by the X-25E benches...).
VERY interesting.
I'm also very interested to see how interfaces and controllers try to keep up with the drastic increase in storage bandwidth for the enterprise. The current mass storage architecture is mature and versatile. Going straight PCI-E seems to be a step backwards in architecture in exchange for raw performance. It seems to me to be an immediate stop-gap, and I'm not sure how many serious companies will buy into this fusion-IO thing long-term.
I'd personally rather have 7 SLC's in a RAID 5EE over two redundant PCI-E cards and one hot-spare. It's far more cost efficient, and I think everyone will agree hot-swapping SAS/SATA is a lot easier than hot-swapping an internal card.
All in all, very exciting.
glugglug - Thursday, July 23, 2009 - link
PCI-E will be an even shorter-term stopgap than most people realize.
PCI-E x1 bandwidth is the same as regular PCI: 133MBps.
So PCI-E x4 like the Fusion I/O uses is actually slightly below the SATA 3.0 600MBps spec that will be out soon.
glugglug - Thursday, July 23, 2009 - link
Actually I just looked this up; it's rated higher than I thought, especially since they doubled it with PCI Express 2.0. 2.0 is 500MB/s per lane, so theoretically the PCI-E x4 cards could get up to 2GB/s.
Still with the rate these things are improving I think that is 2 years away.
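The per-lane arithmetic being discussed works out as follows (using the commonly cited ~250MB/s per PCIe 1.x lane and 500MB/s per 2.0 lane):

```python
# Quick check of the per-lane PCIe numbers discussed above
# (approximate usable bandwidth, MB/s per lane).
PER_LANE_MB_S = {"1.0": 250, "2.0": 500}

def link_bandwidth(gen, lanes):
    return PER_LANE_MB_S[gen] * lanes

print(link_bandwidth("2.0", 4))   # 2000 MB/s, the ~2GB/s figure above
print(link_bandwidth("1.0", 1))   # 250 MB/s, already above plain PCI's 133
```

So even a single 1.x lane outruns legacy 32-bit/33MHz PCI, and a 2.0 x4 link comfortably clears the SATA 3.0 (600MB/s) ceiling.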
iwodo - Thursday, July 23, 2009 - link
You really don't need a 4x slot. A PCI-E 2.0 2x slot already gives you 1GB/s. Since PCI-E 3.0 is coming at 1GB/s per lane, I think 2x for compatibility is reasonable enough.
araczynski - Thursday, July 23, 2009 - link
...this is basically saying the drives are great (assuming the price is much better than the X25-E) as long as you're not moving large files around? I.e., the relatively infrequent software installations wouldn't be optimal, but otherwise it would be quite a good drive? Like an OS drive, basically?
can you throw in an average 7200 rpm hard drive into the mix for a relative comparison?
araczynski - Thursday, July 23, 2009 - link
ooops, never mind, forgot about the velociraptor in there :)
bobsmith1492 - Thursday, July 23, 2009 - link
Perhaps a log scale would be appropriate to show the orders of magnitude in difference between the drives!
iwodo - Thursday, July 23, 2009 - link
The 64Gb 8Gx8 MLC costs $12.5 on average. Of course, since Intel makes the flash themselves (or in a joint venture) they are already making a profit on the flash. 10 of those = $125. I think the controller may be 90nm tech costing around $15. Again, Intel makes a profit on the controller as well. With packaging, DRAM, etc., the 80GB SSD should cost about $160 to make.
I believe that NAND pricing is still based on 50nm-40nm production, so 34nm should cost less. Hopefully in a year's time it will cost 50% less.
In 2010, SSD should finally take off.
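The bill-of-materials estimate above adds up as follows; every figure is the commenter's assumption, not a confirmed cost:

```python
# Reproducing the commenter's cost estimate; all figures are
# assumptions from the post above, not a confirmed bill of materials.
flash = 12.5 * 10          # ten 8GB (64Gb) MLC chips -> 80GB, $125
controller = 15.0          # guessed 90nm controller cost
packaging_dram = 20.0      # remainder for DRAM, PCB, packaging (assumed)

print(flash)                                # 125.0 for the flash alone
print(flash + controller + packaging_dram)  # 160.0 estimated build cost
```

Against a launch price in the $200-230 range for the 80GB drive, that estimate would leave Intel a healthy margin on top of the profit already built into the flash and controller.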
KenAF - Thursday, July 23, 2009 - link
Has Intel committed to supporting TRIM with a firmware update on G2?
smjohns - Thursday, July 23, 2009 - link
Yeah, Intel have confirmed that TRIM support will be delivered as part of a firmware upgrade to be released when Windows 7 supports this. Apparently for XP & Vista machines this will also require supporting software to be installed, as neither has inbuilt TRIM functionality.
Not such good news for the original G1 drives, as it does not seem that Intel will be releasing similar firmware for these.
ITexpert77 - Thursday, July 23, 2009 - link
Hi, thanks for the great article, although I would love to see reviews of the smaller capacity drives (e.g. the 80GB Intel) that most people buy, instead of the expensive ones.
Also, I read terrible reports of unreliability on Intel SSDs (blue screens, incompatibilities). It seems way too risky to me to put any SSD in RAID-0.
Would love a review on the new Crucial SSDs, they just came out this week, have very good read/write rates, they are available NOW on the Crucial website, are quite CHEAP, but there are no reviews yet, not even what controller they use.
trivik12 - Thursday, July 23, 2009 - link
No technology is 100% reliable. Even enterprise hard disks fail quite a lot. Intel's SSD is as reliable as any other SSD. I don't think you have to worry on that count.
I wish Intel had bumped up the warranty to 3 years. That would speed up the adoption of SSDs.
smjohns - Thursday, July 23, 2009 - link
I too am disappointed that Intel have failed to improve their sequential write figures when everything else seems to have improved.
What I would like to know is if Intel can do anything with the G2 firmware to improve sequential write performance without affecting the other numbers, or is this a hardware limitation such that we will need to wait for the G3 or 34nm SLC drives?
Also, in the real world, where will this sequential write bottleneck really be noticeable for laptop users? Copying large files to the drive? Unzipping files? Or some other task?
This sort of information would really be useful in deciding whether to go for the new Intel drive or the Vertex in time for Windows 7.
Out of interest, is there any news on when the OCZ Vertex Turbo drives will be included with these tests?
Anand Lal Shimpi - Thursday, July 23, 2009 - link
Honestly I doubt we'll see significant gains in sequential write performance with this generation. Intel could improve sequential write performance but I believe it would be at the expense of random performance, which for desktop users is a bit more important.
Copying large files to the drive will be the biggest indication of the limited write speeds compared to the Samsung/Indilinx offerings. Even at 70MB/s the X25-M is faster than any hard drive you'd stick in your notebook, but it's definitely slower than the Vertex/Samsung SSDs.
In my opinion it's random read/write performance that ultimately matters, as you tend to not copy huge files to such a small drive on a regular basis once you have everything setup.
I should have the first Vertex Turbo drive by the end of next week.
Take care,
Anand
deegee - Wednesday, August 5, 2009 - link
Personally, my feeling is that [medium to large] sequential writes are equally as important as [small] random writes.
I have noticed that, especially in SSD reviews, the common random write tests are 4KB sized and deemed most important by many.
I honestly have not seen any validated information anywhere showing that [small] random writes are the highest-percentage file IO that desktop PCs write out; in fact, it is just the opposite.
Most data files written by common productivity software are almost always larger than 4KB, and most software performs either a backup-rename or a write-then-delete, so partial-file small-block update/overwrite is not an issue. With growing web content, browser cache files are getting larger, as are most temp files; and one of the heaviest users of the hard drive is the Windows page file, which is optimized to perform large sequential writes. See this page: http://blogs.msdn.com/e7/archive/2009/05/05/suppor...">http://blogs.msdn.com/e7/archive/2009/0...nd-q-a-f...
"Should the Page File be placed on SSDs? Yes."
"Pagefile.sys reads outnumber pagefile.sys writes by about 40 to 1."
"Pagefile.sys writes are relatively large, with 62% greater than or equal to 128 KB and 45% being exactly 1 MB in size."
Personally I am looking for SSDs with higher Sequential Write performance and would welcome some "real world" tests in the many reviews on the Internet since I don't believe that what we are getting is valid information in this regard. For example, 128kb through 1MB writes would give a more valid idea of page file performance on a drive.
Reviewers need to educate themselves on how the OS works.
emboss - Thursday, July 23, 2009 - link
The poor random write performance is likely due to the controller lacking hardware (or having slow hardware) for the MLC ECC calculations. SLC flash doesn't require such heavy ECC, and so the X25-Es don't hit this bottleneck. It's unlikely to be fixed via firmware, though I was personally hoping that they would include it in this new revision of the controller.
emboss - Thursday, July 23, 2009 - link
Arrgh, just noticed I put random instead of sequential. Replace "poor random write performance" with "poor sequential write performance".
ssj4Gogeta - Thursday, July 23, 2009 - link
Don't know about you, but 70 MB/s is more than enough for me. I'll happily trade sequential write for random write, especially since the alternatives are so much slower in random write. You'll notice random write performance much more than you will sequential write performance. How many times per day do you copy/move multi-GB files, especially since you're probably only going to use it as a system drive? Maybe a couple times per day? Compare that to the fact that random writes keep happening in the background for as long as you use your computer.
JakFrost - Wednesday, July 22, 2009 - link
I've read AnandTech's SSD Anthology and decided to go with the Intel X25-M G1 80GB MLC SSD drive on 2009-06-10 for $314 USD at Newegg. Due to an incompatibility with the nVidia nForce4 chipset I couldn't use the drive, and I had to upgrade my entire system early to an Intel Core i7 with an Asus P6T motherboard with X58 and ICH10R. I only got to use the SSD starting in July 2009, and I've been using it for a month with Windows 7 RC.
The performance is absolutely fantastic and I think this upgrade and the troubles were worth it. However, now I hear that Intel might not release a firmware upgrade for the Generation 1 drives to enable TRIM support, and this seriously pisses me off. On top of this, the new G2 drives seem to perform faster for random writes. I am contemplating putting my G1 drive up for sale on eBay now to recoup almost all of the money that I spent on it so I can purchase a G2 drive with TRIM support upgrade potential.
If Intel had come out and promised TRIM support for the Generation 1 drives I would most likely keep it, but unless that happens in the next week or two I'm going to dump this G1 like a hot potato.
One thing that I would love to know is what is the actual sale date of the G2 drives and what will the street price be for the 80GB model.
Gary Key - Wednesday, July 22, 2009 - link
Pricing is here - http://www.anandtech.com/cpuchipsets/showdoc.aspx?...">http://www.anandtech.com/cpuchipsets/showdoc.aspx?...
NishiGotanda - Wednesday, July 22, 2009 - link
Jakfrost - both G2 models have been on sale here in Tokyo for 2 days already, albeit in limited quantities. I picked up a G2 160GB one yesterday for JPY 49,000; that's $519 at today's exchange rate. It's humming along nicely in my MacBook Pro now, I'll have to pick up another one ASAP.
DatabaseFrk - Wednesday, July 22, 2009 - link
Compare the X25-Mg1's workstation results against the X25-Eg1's:
http://www.tomshardware.com/reviews/ssd-performanc...">http://www.tomshardware.com/reviews/ssd-performanc...
http://www.tomshardware.com/reviews/ssd-performanc...">http://www.tomshardware.com/reviews/ssd-performanc...
iwodo - Wednesday, July 22, 2009 - link
Real power usage - the 150mW stated in the previous article is far too low compared to other SSDs, which are 2-4W. If what Intel states is true then it is many times better than its competition.
How much difference would we get if we could use DDR3 RAM? I.e., if the cache were 5 times faster.
And why is Intel only using 32MB of RAM when others are already using 64MB? (Limited by SDRAM??)
Another question I have in mind, although not relevant to SSDs, which I hope someone or Anand could answer here - why SDRAM? Why does it still exist? DDR2, which I believe to be the cheapest RAM out there, is faster, lower power, and higher capacity. Why do we keep producing SDRAM anyway?
Did Intel artificially limit the Seq Write performance?
Any pricing for the X18-M? I would prefer my laptop to use more space for battery.
jimhsu - Saturday, August 1, 2009 - link
Re: SDRAM
Most likely cost saving. As memory (even PC133) is much faster (bandwidth-wise) than flash, and latencies are about the same, it's simply cheaper to go with SDRAM. The same reason why they didn't paint the G2...
DatabaseFrk - Wednesday, July 22, 2009 - link
Apart from sequential write performance (which isn't much of an argument for enterprise performance), the X25-Eg1's high price compared to the X25-Mg1 was justified by the fact that it had superior random write IOPS; check Tom's IOmeter file server I/O:
http://www.tomshardware.com/reviews/intel-x25-e-ss...">http://www.tomshardware.com/reviews/intel-x25-e-ss...
In a mixed random read/write scenario the X25-E was 3 times faster. This is what other tests suggest as well: the X25-M has a random write rate of under 1,000 IOPS while the X25-E is somewhere around 3,000-5,000 IOPS? But the random 4KB write graphs show the X25-Mg2 is faster here. Am I missing something? Is there a test missing that would show the X25-Mg2's poor random write IOPS performance versus the X25-E? An IOmeter test?
I am considering 4 X25-Mg2s in a RAID-0 on my Areca 1231ML for working on a 15GB-20GB data warehouse on SQL Server, or would 2-3 X25-Eg1s be faster?
has407 - Thursday, July 23, 2009 - link
Couple things to note about those benchmarks...
1. PC Perspective's recent benchmarks show the X25M-G2 and the X25M-G1 both peaking at ~16K IOPS with the IOMeter database pattern at a queue depth of 32.
2. The THG numbers in that article (and elsewhere) appear low compared to many others. They also show falloff starting at queue depth 8 for the X25E and 16 for the X25M--those are odd and suspicious. They don't provide details, but I suspect they used one of the Promise SATA controllers on the test system, and the funny numbers are primarily an artifact of the controller.
3. THG used IOMeter 2003.05.10 for the IO performance tests in that article, which is ancient; 2006.07.27 was the last release. I've noticed significant differences in results in what are otherwise nominally similar benchmark setups depending on IOmeter version. E.g., if you look in the "charts" section at THG, you'll find the X25E tested with IOMeter 2006.07.27: database pattern = 6400 IOPS vs. ~5000 IOPS for IOMeter 2003.05.10 in that article; yet in another article using IOMeter 2006.07.27 they show ~6600 IOPS.
4. We don't know the test parameters... partition with a test file? raw disk? how big? While I'd expect test size to have much less effect with SSD's than HDD's, that information isn't provided.
In short, until sites provide much more information about the details of their tests configurations and parameters, comparing benchmarks from different sites--or often from the same site--is a crap shoot. Not to mention that you can be pretty sloppy with HDD benchmarks and still get pretty similar numbers; SSD benchmarking really needs more attention to details that might not matter much in the HDD world.
p.s. The X25M-G2 (both 80GB and 160GB) is spec'd faster than the X25E in all respects except serial write and write latency which is the same for both, which suggests the X25M-G2. However, I haven't seen any reputable comparable benchmarks between the two.
has407 - Thursday, July 23, 2009 - link
Oops, sorry, clarification: "...except serial write and write latency which is the same for both..." should read "...except serial write which is slower, and write latency which is the same..."
vol7ron - Wednesday, July 22, 2009 - link
Did you by any chance have the opportunity to compare the various Indilinx Barefoot (MLC) drives? I'm curious if there are any variations in the firmwares or even unspotted hardware differences.
Thanks,
vol7ron
Diesel Donkey - Wednesday, July 22, 2009 - link
Anand, thanks for the great preliminary review. I think, however, that your figure for average latency in random reads is missing.
anactoraaron - Wednesday, July 22, 2009 - link
Any chance of seeing any of the new ExpressCard SSDs reviewed soon? This would be the way I would go... but I don't know enough about these drives to take a chance on one meeting my needs for my laptop. The Wintec FileMate 48GB would be perfect for OS and programs and would allow me to keep everything else in my system the same. This drive claims read/write speeds of 115/65MB/s.
I guess what I'm getting at is... do these stutter? Are they fast? Would make a great review...
Sunday Ironfoot - Wednesday, July 22, 2009 - link
Could you possibly benchmark these drives with BitLocker or TrueCrypt? I'm interested to see the performance impact of using full drive encryption with these SSDs.
swish - Wednesday, January 6, 2010 - link
I've got an X25-M G1 80GB and I installed TrueCrypt whole-disk encryption on it.
Windows Experience Index dropped from 7.4 to 5.9 after I encrypted.
ATTO disk benchmarks dropped from around 70MB/sec writes and 220MB/sec reads to ~65MB/sec writes and 130-145MB/sec reads.
"Seek" latencies are still pretty low.
Keep in mind I have a slower CPU (AMD AthlonX2 4800+) so my encrypt/decrypt speed isn't stellar (TrueCrypt benchmarks AES at around 200MB/sec which is what I'm using)
Anyone else care to share? :) I'm curious about BitLocker performance too.
Cheers,
Swish
Dospac - Thursday, July 23, 2009 - link
I too am extremely interested in this type of testing. Just the numbers for one of the new Intel drives with and without FDE would suffice; no need for an all-out comparison.
has407 - Wednesday, July 22, 2009 - link
Thanks for the update! Based on your numbers, a couple items you might look at when you do more testing, assuming you're using IOmeter(?)...
1. 4KB random write 34.5MBs = ~8.4K IOPs and a latency of ~118us = very close to the Intel spec ("up to" 8.6K IOPs and 115us latency).
2. 4KB Random read 58.5MBs = ~14.3K IOPs and a latency of ~70us = significantly lower IOPs and a bit lower latency than Intel specs ("up to" 35K IOPs and 85us latency), and significantly lower latency than you show (200us).
Intel uses IOmeter with a queue depth of 32; are you using the same, or is there another possible bottleneck (especially on the read side) in your test system?
Or maybe methinks Intel's 35K IOPs 4KB random read claim is bogus? That would imply a latency of ~29us (or ~143MBs), but they also spec 85us latency, which gives ~11.8K IOPs--which more closely matches your test numbers.
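For reference, the MB/s-to-IOPS-to-latency conversions used in the numbers above are straightforward (the queue-depth-1 latency figure is a simplification, as the follow-up comment notes):

```python
# The IOPS/latency arithmetic used in the post above.
def iops_from_mbs(mb_per_s, io_kb=4):
    # decimal MB/s divided by binary KB IO size, as is conventional
    return mb_per_s * 1e6 / (io_kb * 1024)

def latency_us(iops):
    # effective per-IO service time at queue depth 1 (no pipelining)
    return 1e6 / iops

iops = iops_from_mbs(34.5)        # ~8.4K IOPS from 34.5 MB/s of 4KB IOs
print(round(iops))                # 8423
print(round(latency_us(iops)))    # 119 us, close to Intel's 115 us spec
```

With multiple channels and deep queues, IOPS and latency decouple, which is exactly the "brain fart" corrected in the next comment: 35K IOPS does not imply a 29us service time.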
has407 - Thursday, July 23, 2009 - link
Apologies, last paragraph in my previous post has a brain fart... it assumes IOPs are proportional to latency, which is obviously false with multiple channels and pipelining. Duh. Other benchmarks (e.g., PC Perspective) suggest 35K IOPs might be achievable in theory, but I've yet to see anything close to that number in practice.MODEL3 - Wednesday, July 22, 2009 - link
"While I don't believe that 70MB/s write speeds are terrible, Intel does need to think about improving performance here because the competition is already far better. The more important values are still random read/write performance, but sequential write speed is an area that Intel continues to lag behind in."Well maybe with the introduction of the 320GB model with increased Random 4KB Writes (10.6 IOPS?) and with 166MHz SDRAM instead of 133MHz maybe they can hit around 95MB/s (85 advertised) in Sequential write performance
DominionSeraph - Friday, July 24, 2009 - link
PC133 SDRAM has a bandwidth of 1.066GB/s. Somehow I don't think it's the problem.
sotti - Wednesday, July 22, 2009 - link
Any update on the amount of writes the drive will take before it's kaput?
I know MLC lifespan is improving, but an update on how long a drive like this would last as an OS drive with a page file would be nice to know.
has407 - Wednesday, July 22, 2009 - link
Unfortunately, as Anand mentioned, Intel doesn't provide detailed durability specs for the M/MLC drives, other than a "Minimum Useful Life: 5 years" (from the datasheet). While the 34nm parts may actually have a lower per-cell durability than the 50nm parts, I would hope and expect they have compensated for that with additional sparing and ECC. (The BER for all Intel's SSD drives is the same: "1 sector per 10^15 bits read".)
Intel provides more data for the E/SLC drives: "1 petabyte of random writes (32 GB)" and "2 petabyte of random writes (64 GB)" (from the datasheet). Assuming "random writes" means evenly distributed, that implies ~31.5K erase/write cycles. Again, however, I would hope and expect that Intel is being very conservative with those numbers. MLC drives typically can sustain an order of magnitude (or more) fewer cycles than SLC, but then again they're cheaper, so there's more room for spare cells and better ECC.
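The cycle count implied by those datasheet figures is a one-liner; the even-wear assumption is the commenter's, and decimal petabytes give 31,250 while binary units give ~32,768, bracketing the ~31.5K figure quoted:

```python
# The erase/write-cycle arithmetic behind the endurance estimate above,
# assuming perfectly even wear across all cells (decimal units).
def implied_cycles(total_writes_pb, capacity_gb):
    return total_writes_pb * 1e15 / (capacity_gb * 1e9)

print(implied_cycles(1, 32))   # 31250.0 cycles for the 32GB X25-E
print(implied_cycles(2, 64))   # 31250.0, same per-cell figure for the 64GB
```

The fact that both drives imply the same per-cell count is consistent with the rating being a per-cell endurance limit scaled by capacity, rather than a property of the drive as a whole.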
I think (or at least hope) there's little reason to doubt Intel's "5 year" assertion; validation is pretty straightforward, and they seem to have done their homework in all other respects. Not to mention that an SSD's typical failure mode (write and non-catastrophic) is preferable to an HDD's (read and catastrophic), assuming of course it's done properly...
Given that Intel is holding their controller very close to the vest, my guess, and only a guess, is that Intel is being very conservative with writes (more power and slower programming = longer life), or might be doing a full write-read-verify, which might explain why their write speeds are lower than the competition. If so, good for them. Last thing the SSD market needs is to have large numbers of drives failing in weird ways in a couple years.
p.s As far as pagefile activity... I wouldn't worry about it unless you're constantly and seriously overcommitted on memory. There's a lot less pagefile activity than most people think. But that's another subject.
Anand Lal Shimpi - Wednesday, July 22, 2009 - link
I believe Intel is still rating these things as having a minimum useful life of 5 years.Take care,
Anand
InternetGeek - Wednesday, July 22, 2009 - link
Price/capacity wise, I'm just one or two steps away from buying my first SSD.
tomoyo - Wednesday, July 22, 2009 - link
I'd make a conjecture that Intel's very good random write performance may somehow relate to why the sequential write isn't as good, although this isn't evident in the SLC version of their SSD. The other possibility is that Intel is distinguishing between their SLC and MLC series through lower write speed. I'd have to say that random read and write are way more important for a normal desktop user than anything else, though; I always notice latency and responsiveness far more than a small change in long-term transfer speed. Personally I'm very interested in getting an 80GB/160GB as a main OS drive for both my primary box and a ZFS RAID server/storage box.
haze4peace - Wednesday, July 22, 2009 - link
With the recent price drops I'm considering getting this drive in its 80GB flavor as an OS drive. Sequential write speed is the least of my concerns, because once you load on your OS and other apps, you barely write in big blocks.pennyfan87 - Wednesday, July 22, 2009 - link
I think Anand forgot to include the G.Skill Falcon series along with the other Indilinx MLC drives. Just sayin'.
Anand Lal Shimpi - Wednesday, July 22, 2009 - link
woops, you're right, table updated :)
hyc - Friday, July 24, 2009 - link
And isn't the Samsung controller in the OCZ Summit also used in the Corsair P256 and SuperTalent MasterDrive SX?hyc - Friday, July 24, 2009 - link
doh. Corsair P256 is already listed...
deputc26 - Wednesday, July 22, 2009 - link
Given the choice, which controller would you rather have in your drive? Despite Samsung's reputation, I'm going with Indilinx.
Anand Lal Shimpi - Wednesday, July 22, 2009 - link
My pick is still the Intel drive, but I'd take Indilinx over Samsung (assuming there are no compatibility issues with the system I was putting it in).
Take care,
Anand
deputc26 - Wednesday, July 22, 2009 - link
I also pick Intel as number one, but the battle for second place is a little grayer. Samsung is more expensive, which leads many to believe it is faster, which of course is not the case.
pennyfan87 - Wednesday, July 22, 2009 - link
so i heard you're giving the sample away to your readers...
Souka - Thursday, July 23, 2009 - link
I heard there were two samples being given away... ;)
Zelog - Wednesday, July 22, 2009 - link
I'm guessing the new flash chips aren't BGA, so they don't need the potting. That would explain why the new controller still has it.
well hello! Nothing like a little corruption of data, is there?
http://www.dailytech.com/article.aspx?newsid=15827">http://www.dailytech.com/article.aspx?newsid=15827
has407 - Sunday, July 26, 2009 - link
Take a close look at the part numbers. A bit hard to read given the resolution of the pics, but I'd bet the old unit uses the equivalent of Micron MT29F64G08CFxxx 64Gb parts, and the new unit uses the equivalent of Micron MT29F128G08CJxxx 128Gb parts.
Micron production MLC parts for both are available only in TSOP-48. The package dimensions also appear to be the same, and per ONFI 1.0 (on which Intel says they're based), that could be easily verified from the package dimensions. The controller is obviously BGA.
As to why the potting or lack of... thermal, shock, anti-whatever... but I'd guess Intel has just gotten better with the qualification/manufacturing process.
FaaR - Thursday, July 23, 2009 - link
BGA chips typically do not need potting. In fact, the vast, vast majority of BGAs - including some that run very hot - are not potted at all.
If the original Intel SSD used extensive potting - I don't know myself, I've not opened up my 60GB SLC drive - I'd assume it would be an anti-counterfeiting measure to prevent far-east outfits from screwing with the innards and then selling the drives more expensively as higher-capacity units.
Anand Lal Shimpi - Wednesday, July 22, 2009 - link
Very true, although the new controller doesn't have it to the exact same extent.Take care,
Anand