74 Comments
mikeg - Thursday, April 26, 2007 - link
It's been over a month since the article came out and I still don't see any in retail stores or as a non-OEM drive. Where can I get one? Has anyone seen a retail box of these drives at a retailer? I want to get a couple.
Mike
jojo4u - Monday, March 26, 2007 - link
Hello Gary, the Hitachi datasheet refers to three idle modes using APM. The results with AAM enabled could suggest that APM is automatically engaged with AAM. So perhaps one should check the APM level with Hitachi's Feature Tool or the generic tools hdparm (http://hdparm-win32.dyndns.org/hdparm/) or hddscan.
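For anyone who wants to check this on Linux, here is a minimal sketch using hdparm's read-only query flags (-B for APM, -M for AAM); it assumes hdparm is installed, the drive is /dev/sda, and root privileges:

```python
# Minimal sketch: query the current APM (-B) and AAM (-M) levels with hdparm.
# Assumes Linux, hdparm installed, the drive at /dev/sda, and root privileges.
import subprocess

def query_hdparm(flag: str, device: str = "/dev/sda") -> str:
    """Run hdparm with a read-only query flag and return its output."""
    result = subprocess.run(["hdparm", flag, device],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

print(query_hdparm("-B"))  # e.g. "APM_level = 128" (or "not supported")
print(query_hdparm("-M"))  # e.g. "acoustic = 128 (128=quiet, 254=fast)"
```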
Gary Key - Friday, March 30, 2007 - link
We had a lengthy meeting with the Hitachi engineers this week to go over APM and AAM modes along with the firmware that is shipping on the Dell drives. I hope to have some answers next week, as testing APM capabilities on a Dell-based system resulted in slightly different behavior than on our test bench. I have completed the balance of testing with various AAM/NCQ on/off combinations and some additional benchmark tests. I am hoping to update the article next week. Also, I ran the acoustic tests in a different manner and will have those results available. Until then, I did find out that sitting a drive on a foam brick outside of a system and taking measurements from the top will mask some of the drive's acoustic results. The majority of noise emitted from this drive comes from the bottom, not the top. ;)
ddarko - Monday, March 26, 2007 - link
"However, Hitachi has informed us they have the capability to go to 250GB per-platter designs but launched at smaller capacities to ensure their reliability rate targets were met. Considering the absolute importance of data integrity we think this was a wise move."This sounds like an sneaky attempt by Hitachi to raise doubt about the safety of Seagate's forthcoming 1TB drive. Where is the data to support this rather bold statement that 250GB platters designs are not as capable as 200GB designs of meeting these completely unspecified "reliability rate targets"? What does that even mean? Can we infer that 150GB platter designs are even more reliable than 200GB designs? It's disappointing to see the review accept Hitachi's statement without question, going so far as to even applaud Hitachi for its approach without any evidence whatsoever to back it.
Lord Evermore - Thursday, March 22, 2007 - link
While I know memory density in general isn't increasing nearly as fast as hard drive size, 32MB cache seems pretty chintzy for a top-end product. I suppose 16MB on the 750GB drives is even worse.
My first 528MB hard drive with a 512KB cache was a 1/1007 ratio (using binary cache size, and labelled drive size which would be around binary 512MB). Other drives still had as little as 128KB cache, so they could have been as little as a 1/4028 ratio, but better with smaller drives. I think anything larger than 512MB always had 512KB.
A 20GB drive with 2MB cache is 1/9536 ratio.
A 100GB drive with 2MB cache is 1/47683.
Then the jump to 8MB cache makes the ratio much better at 1/11920 for a 100GB drive (I'm ignoring the lower-cost models that had higher capacities, but still 2MB cache). Then it gets progressively worse as you get up to the 500GB size drives. Then we make another cache size jump, and the 160GB to 500GB models have a 16MB option, which is back to 1/9536 on a 160GB, to 1/29802 on a 500GB.
The trend here being that we stick with a particular cache size as drive size increases so the ratio gets worse and worse, then we make a cache size jump which improves the ratio and it gets worse again, then we make another cache size jump again.
Now we go to 750GB drives with 16MB cache. Now we are up to a 1/44703 ratio, only the 2nd worst ever; seems like time for another cache increase. Jumping to 32MB with a 1TB drive only makes it 1/29802. Not a very significant change despite doubling the cache again, since the drive size also increased, and it'll only get worse as they increase the drive size. Even 32MB on a 750GB drive is 1/22351, only slightly better than the 16MB/500GB flagship drives when they came out, and we don't even HAVE a 32MB/750GB drive.
A 512MB cache would be nice. That's still not the best ratio ever, it's still 1/1862, but that's a heck of a lot better than 1/30,000th. At the very least, they need to jump those cache chip densities a lot, or use more than one chip. Even a single 512Mbit-density chip would be 64MB, still not great but better.
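For reference, the ratios above fall out of a simple calculation (decimal drive capacity divided by binary cache size); a quick sketch with a few of the quoted data points:

```python
# Sketch of the cache-to-capacity ratios quoted above: decimal drive
# capacity in bytes divided by binary cache size in bytes.
def cache_ratio(capacity_gb: float, cache_mb: int) -> float:
    return (capacity_gb * 10**9) / (cache_mb * 2**20)

for capacity_gb, cache_mb in [(100, 2), (100, 8), (500, 16), (750, 16), (1000, 32)]:
    print(f"{capacity_gb}GB with {cache_mb}MB cache -> 1/{int(cache_ratio(capacity_gb, cache_mb))}")
# -> 1/47683, 1/11920, 1/29802, 1/44703, 1/29802, matching the figures quoted above
```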
Per Hansson - Sunday, March 25, 2007 - link
Bigger caches would almost make it a necessity that you run the system on a UPS.
Losing 32MB of data that is yet to be written to the platters is a lot, but 512MB?
And a UPS would not help with OS crashes...
I'm not sure how much this would affect performance either, but a review of a SCSI drive with a SCSI controller with 2MB to 1GB of cache would answer that question well...
yehuda - Wednesday, March 21, 2007 - link
Do they plan to launch a single platter variant sometime in the near future?
Gary Key - Wednesday, March 21, 2007 - link
They will be releasing a 750GB variant in May. Our initial reports have the single platter drives along with the 300~500GB models coming later in the summer. I am trying to get that confirmed now.
DeathSniper - Tuesday, March 20, 2007 - link
Last page... "The Achilles heal of the Seagate 750GB drive..." I think it should be heel, not heal ;)
Spacecomber - Tuesday, March 20, 2007 - link
While this drive has enough in the way of other features to make it stand out from the crowd, I was a bit surprised to see that Hitachi hadn't upped the warranty to 5 years for this drive, which is what Seagate offers on most of their drives and WD offers on their Raptors.
goldfish2 - Tuesday, March 20, 2007 - link
Just noticed a problem you may wish to address with your charts; hope this hasn't already been mentioned.
Take a look at the chart 'video application timing - time to transcode DVD'.
Your times are in minutes/seconds, but it seems your chart application has interpreted the numbers as decimals and made the bar lengths on this basis. Take a look at the bar for the WD5000YS 500GB. It says 4.59; I assume this means 4 minutes 59 seconds, making the WD740GD 2 seconds slower at 5 minutes 1 second. But the bar lengths are scaled for decimal, so the bar on the WD740GD is much longer. You'll have to see if you can get your graph package to think in minutes:seconds, or have the bar lengths entered in decimal (i.e. 4:30 becomes 4.5) and put a label on in minutes for readability.
Thanks for the review though.
Gary Key - Tuesday, March 20, 2007 - link
We have a short blurb under the Application Performance section -
"Our application benchmarks are designed to show application performance results with times being reported in minutes / seconds or seconds only, with lower scores being better. Our graph engine does not allow for a time format such as 1:05 (one minute, five seconds) so this time value will be represented as 1.05."
We know this is an issue and hopefully we can address it in our next engine update (coming soon from what I understand). I had used percentage values in a previous article that was also confusing to some degree. Thanks for the comments and they have been passed on to our web team. ;)
PrinceGaz - Tuesday, March 20, 2007 - link
The simplest and most logical solution is just to enter the time in seconds, rather than minutes and seconds; even if graphed correctly, comparing values composed of two units (minutes:seconds) is harder than comparing a single unit (seconds).
If two results were 6:47 and 7:04 for instance, the difference between them is much clearer if you say 407 and 424 seconds. By giving the value in seconds only, you can see at a glance that there is a 17 second difference, which translates to just over 4% (17 divided by 407 is about 4.2%).
Doing the same mental calculation with 6:47 and 7:04 first involves working out the difference with the extra step of dealing with 60 seconds to a minute. Then you have a difference of 17 seconds out of a little under 7 minutes, which isn't very helpful until you convert the 7 minutes to seconds, as it should have been originally.
That's my opinion anyway.
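If it helps, the conversion and percentage comparison are trivial to script; a small sketch using the example times from the comment above:

```python
# Small sketch: convert m:ss times to seconds and compare them,
# using the example values from the comment above.
def to_seconds(mmss: str) -> int:
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

a, b = to_seconds("6:47"), to_seconds("7:04")     # 407 and 424 seconds
print(f"{b - a} s slower, or {(b - a) / a:.1%}")  # 17 s slower, or 4.2%
```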
JarredWalton - Tuesday, March 20, 2007 - link
Hi Gary. I told you so! Damned if you do, damned if you don't. ;) (The rest of you can just ignore me.)
PrinceGaz - Tuesday, March 20, 2007 - link
quote: In fact, it took the industry almost 35 years to reach the 1GB level, another 14 years to reach 500GB, and now with perpendicular recording technology, only two years to reach 1TB.
How can you say only two years?
The 14 years you say it took to increase from 1GB to 500GB represents a doubling of capacity nine times, or roughly 1.56 years (19 months) for the capacity to double. That means the two years (actually 20 months, as Hitachi released a 500GB drive in July 2005) it took to double again, from 500GB to 1TB, is actually marginally longer than average.
It would be more accurate to say that the trend of capacities doubling roughly every 18 months is continuing.
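The doubling-rate arithmetic is easy to check; a quick sketch using the figures from the comment above:

```python
import math

# Quick sketch of the doubling-rate arithmetic from the comment above.
doublings = math.log2(500 / 1)                 # 1GB -> 500GB is ~9 doublings
months_per_doubling = 14 * 12 / doublings      # spread over 14 years
print(f"{doublings:.1f} doublings, ~{months_per_doubling:.0f} months each")
# -> 9.0 doublings, ~19 months each; 500GB -> 1TB in ~20 months fits the trend
```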
patentman - Tuesday, March 20, 2007 - link
The two year remark is two years from the first commercial perpendicular recording drive. Perpendicular recording has been in the works for a long time. In fact, when I used to examine patent applications for a living, there was patent literature related to perpendicular recording all the way back in 1990-1991, albeit for relatively simple aspects of the device.
Gary Key - Tuesday, March 20, 2007 - link
The averaging of the time periods does work out to a doubling of capacities every 18~20 months but the last doubling took about 26 months to go from 250GB to 500GB.
mino - Wednesday, March 21, 2007 - link
Yes, but the first 250GB drives were 4-platter 5400rpm ones (Maxtor?)...
The first 500GB drives were 5-platter 7200rpm ones.
IMO there are little discrepancies in the trend caused by the worry about many-platter drives after the 75GXP. After a few years Hitachi came back with the 7K400 and the curve just returned to the values it had lost before...
scott967 - Tuesday, March 20, 2007 - link
On these big drives, is NTFS performance an issue at all?
scott s.
AbRASiON - Monday, March 19, 2007 - link
Too slow, too much money, too little space.
I've owned 3 and sold them.
When are we going to see a 15krpm Savvio 2.5" review?
When will we see a 180GB-per-platter, 32MB, 10,000rpm new series Raptor?
Maybe WD should also make a 15krpm 2.5" 32MB model.
These incremental speed upgrades on hard disks are terrible :( We need more, much much more.
Justin Case - Monday, March 19, 2007 - link
"Considering the importance of data integrity in today's systems"...? You mean like, in yesterday's (or perhaps tomorrow's) systems, data corruption was considered normal or acceptable?Gary Key - Tuesday, March 20, 2007 - link
It was not meant to infer that data integrity was not or will not be important.Spoelie - Tuesday, March 20, 2007 - link
No, but if you lost a hard drive before, the amount of data that would be gone is nothing compared to the amount of data you lose with current hard drives. It's always a BAD thing to lose data, but it's BAD² to lose data². So it's important² to keep data² safe ;pJustin Case - Wednesday, March 21, 2007 - link
"Data integrity" and "drive failure" are two different things. Most data integrity issues are related to bad sectors and corrupted data (and that is why Hitachi chose to go with more platters and lower areal density - less chance of localized data corruption, but actually a slightly higher chance of "catastrophic" drive failure - namely a head crash or a dead motor). The article's author got _that_ part right.The problem was what came after it. It was just as important to "keep data safe" last year (or the year before that, etc.) as it is this year, so qualifying it as "in today's systems" makes no sense.
Gary Key - Wednesday, March 21, 2007 - link
I changed it back to the original text. ;)
Griswold - Monday, March 19, 2007 - link
Looking at the benchmark charts, one thing that jumps out is that your world at AT, as far as HDDs are concerned, seems to revolve around Seagate and WD only.
But there's quite a few other manufacturers out there that make good drives (that surpass many of the featured drives in one way or another) - this new Hitachi beast proves it.
Go ahead and test more Samsung, Fujitsu, Hitachi and even Excelstor drives.
Gholam - Thursday, March 22, 2007 - link
ExcelStor drives are refurbished IBM/Hitachi.
Gary Key - Tuesday, March 20, 2007 - link
We finally have agreements with Samsung and Hitachi to provide review samples, so expect to see reviews of their drives ramp up quickly. We are discussing a review format for SCSI-based drives at this time, and if we can do it right then expect to see this drive category reviewed later this year. We will also be introducing SSD reviews into our storage mix in the coming weeks. While I am at it, our Actual Application Test Suite will undergo several changes and be introduced in the 500GB roundup. Thanks for the comments. :)
Final Hamlet - Monday, March 19, 2007 - link
Hmm. The only vendor I am interested in seeing added is Samsung. They have quite a market share here in Germany.
JarredWalton - Monday, March 19, 2007 - link
My personal take is that for 99% of users, it doesn't really matter which brand you use. Seagate may win a few benchmarks, WD some others, Samsung, etc. some as well. In reality, I don't notice the difference between any of the HDDs I own and use on a regular basis. I have purchased Samsung, WD, Seagate, Hitachi, and Maxtor. Outside of the Raptors being faster in a few specific instances, without running a low level diagnostic I would never notice a difference between the drives. I suppose I'm just not demanding enough of HDDs?
phusg - Tuesday, March 20, 2007 - link
I think you are right, but don't forget that in this post you are only looking at it from a performance viewpoint. Drive longevity and acoustics are major factors to me, and I think for you too from the article. I think these are the metrics worth looking at, and I tend to agree that subjective performance doesn't really differentiate that much (although I haven't had half as much experience with different vendors/models as you have).
gramboh - Monday, March 19, 2007 - link
I read the review earlier this morning but don't recall seeing anything about retail channel availability. Did Hitachi or Dell comment to AT about this?
I'm actually interested in 500GB 7200.10's and hoping this release will push the price of those down a lot.
Gary Key - Monday, March 19, 2007 - link
We do not have an exact date. Hitachi committed to having product in the retail channel by the end of Q1. We should have an answer from Dell tomorrow on when they will offer it outside of their systems. Hitachi is saying the drive will launch at $399; just waiting to see $550 price tags when the first drives show up... ;)
Jeff7181 - Monday, March 19, 2007 - link
This might find its way into my computer later this year. :)
BUL - Monday, March 19, 2007 - link
Interesting that perpendicular technology was chosen given the R&D costs (with the tiny return of ~5 years before obsolescence--the article mentions that perpendicular can ONLY go 5x denser with existing technology, and figure we'll see 5TB perpendicular drives in 2 years), etc... So why don't manufacturers offer 5 1/4" drives? Not to invoke memories of the MFM drives of long ago (the original XT had a double-size 5 1/4" MFM drive of 10MB), but they have potentially 50-60% more surface area per platter, and with a possibility of 7(?) platters, isn't that a better solution? True, you wouldn't put them in SFFs or notebooks, but how many of us have towers with empty 5 1/4" bays??? And I assume that a 1TB 5 1/4" drive would be more energy-efficient than two 500GB 3 1/2" drives...
Also, has anyone REALLY tested to see if perpendicular is truly a reliable technology? Seems like manufacturers have 50 years of experience with parallel (longitudinal) storage, and only 1-2 years using perpendicular storage...
Spoelie - Tuesday, March 20, 2007 - link
Bulkier, slower, less energy efficient, and more moving parts that reduce reliability, all in the name of increased capacity, is not the way to the future. I think the biggest problem at the moment is NOT storage capacity; they're mostly increasing capacity to keep HDs evolving and not dropping in price, as it's one of the only competitive advantages. If you can increase capacity with the same material cost and some extra R&D, it would be stupid not to do it, and it's a better way than increasing the material cost.
In fact, regular folk have way too much capacity at the moment. A Seagate CEO worded it nicely a while back: "Face it, we're not changing the world. All we do is enable people to store more crap/porn."
The future lies in the direction of flash based hard drives: smaller, less/no moving parts instead of more, faster access times, lower energy consumption/heat. Or other alternative technologies that offer the same advantages. The densities and cost are the only reasons we're not all buying them at the moment, something that should be fixed over time.
If you're worried about unused case slots, buy one of those adapters that let you mount a normal hard drive in a 5 1/4" bay, or convince the case designers to include more 3 1/2" slots and fewer 5 1/4" slots.
misuspita - Monday, March 19, 2007 - link
Have you ever seen slow-motion footage of a CD-ROM disc wobbling? Same thing here! I think the platters got smaller because the vibrations they produce grew with the speed. Since today's r/w heads need to be extremely close to the surface, that would be utterly impossible to control at that speed and diameter. They also changed from aluminum-based platters to glass-based ones because of the roughness of the aluminum surface.
tygrus - Monday, March 19, 2007 - link
It's harder to spin a large diameter platter at high speed. More weight, more wobble, slower access times. I once saw a 5.25" 4x2.5" SATA drive array. Quantum used to do the large-format Bigfoot for cheap, slow, large-capacity storage but it wasn't continued. We have plenty of capacity per platter but limited perf/GB (worse every year), so why decrease performance to increase capacity?
piroroadkill - Monday, March 19, 2007 - link
All of the Seagate Barracuda 7200.10 drives have been perpendicular for some time. It seems like a perfectly reliable tech to me.
Olaf van der Spek - Monday, March 19, 2007 - link
How smart is it to use temperatures from SMART?
Did you verify all HDDs use good quality unbiased temperature sensors?
> Our thermal tests utilize sensor readings via the S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) capability of the drives and are reported by utilizing the Active SMART 2.42 utility.
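For readers who want to spot-check their own drives, here is a hedged sketch using smartctl from smartmontools; it assumes smartctl is installed, the drive is /dev/sda, and the drive exposes the common Temperature_Celsius attribute, which not every drive does:

```python
# Hedged sketch: read a drive's SMART temperature via smartctl (smartmontools).
# Assumes smartctl is installed, the drive is /dev/sda (run as root), and the
# drive reports the common Temperature_Celsius attribute -- not all drives do.
import subprocess

output = subprocess.run(["smartctl", "-A", "/dev/sda"],
                        capture_output=True, text=True).stdout

for line in output.splitlines():
    if "Temperature_Celsius" in line:
        # RAW_VALUE is normally the tenth column, though formats vary by drive.
        print("Reported temperature:", line.split()[9], "C")
```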
Gary Key - Monday, March 19, 2007 - link
It has worked well for us to date. We also took readings with several other programs and a thermal probe. All readings were similar so we trust it at this time. I understand your concern as the sensors have not always been accurate.
mkruer - Monday, March 19, 2007 - link
I hate this decimal byte rating they use. They say the capacity is 1 terabyte, meaning 1,000,000,000,000 bytes; this actually translates into ~930GB or 0.93TB that the OS will see using the more commonly used (base 2) metric. This is the metric people assume you are talking about. When will the drive manufacturers get the picture and list the standard byte capacity?
Spoelie - Tuesday, March 20, 2007 - link
I don't think it matters all that much; once you've heard it, you know it. There's not even a competitive marketing advantage or any scamming going on, since ALL the drive manufacturers use it and in marketing material there's always a note somewhere explaining 1GB = blablabla bytes. So 160GB on one drive = 160GB on another drive. That it's not the formatted capacity has been made clear for years now, so I think most people it matters to already know.
Zoomer - Wednesday, March 21, 2007 - link
IBM used to not do this. Their advertised 120GB drive was actually 123.xxGB, where the GB referred to the decimal giga. This made usable capacity a little over 120GB. :)
JarredWalton - Monday, March 19, 2007 - link
See above, as well as the SI prefix overview (http://en.wikipedia.org/wiki/SI_prefix) and binary prefix overview (http://en.wikipedia.org/wiki/Binary_prefix) for details. It's telling that this came into being in 1998, at which time there was a class action lawsuit occurring, I believe.
Of course, you can blame the computer industry for just "approximating" way back when KB and MB were first introduced to be 1024 and 1048576 bytes. It probably would have been best if they had created new prefixes rather than cloning the SI prefixes and altering their meaning.
It's all academic at this point, and we just try to present the actual result for people so that they understand what is truly meant (i.e. the "Formatted Capacity").
Olaf van der Spek - Monday, March 19, 2007 - link
quote: Hitachi Global Storage Technologies announced right before CES 2007 they would be shipping a new 1TB (1024GB) hard disk drive in Q1 of this year at an extremely competitive price of $399 or just about 40 cents per GB of storage.
The screenshot shows only 1 x 10^12 bytes. :(
And I'm wondering, do you know about any plans for 2.5" desktop drives (meaning, not more expensive than the cheapest 3.5" drives and with better access times)?
crimson117 - Monday, March 19, 2007 - link
How many bytes does this drive actually hold? Is it 1,000,000,000,000 bytes or 1,099,511,627,776 bytes?
It's interesting... it used to not seem like a huge difference, but now that we're approaching such high capacities, it's almost a 100 GB difference - more than most laptop hard disks!
crimson117 - Monday, March 19, 2007 - link
I should learn to read: Operating System Stated Capacity: 931.5 GBJarredWalton - Monday, March 19, 2007 - link
Of course, the standards people decided (AFTER the fact) that we should now use GiB and MiB and TiB for multiples of 1024 (2^10). Most of us grew up thinking 1KB = 1024B, 1MB = 1024KB, etc. I would say the redefinition was in large part to prevent future class action lawsuits (i.e. I could see storage companies lobbying SI to create a "new" definition). Windows of course continues to use the older standard.
Long story short, multiples of 1000 are used for referring to bandwidth and - according to the storage sector - storage capacity. Multiples of 1024 are used for memory capacity and - according to most software companies - storage capacity. SI sides with the storage people on the use of mebibytes, gibibytes, etc.
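To put numbers on it for this particular drive, a quick sketch of the two interpretations:

```python
# Quick sketch: the same "1TB" figure in decimal and binary units.
decimal_bytes = 10**12                    # what the drive label means
print(decimal_bytes / 2**30)              # ~931.3 -> what the OS reports as "GB" (GiB)
print(decimal_bytes / 2**40)              # ~0.909 TiB
print((2**40 - decimal_bytes) / 10**9)    # ~99.5 -> "GB" short of a binary terabyte
```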
mino - Tuesday, March 20, 2007 - link
Ehm, ehm.
GB was ALWAYS spelled giga-byte, and giga- (with the symbol "G") has been a standard prefix for 10^9 since the 19th century (maybe longer).
The ones who screwed up were the software guys, who just ignored the fact that 1024 != 1000 and used the same prefix with a different meaning.
SI ignored this stupidity for a long time.
Lately the SI guys realized the software guys are too careless to accept the reality that 1024 really does not equal 1000.
It is far better to have some standard way to define 1024-multiples and have many people use the old, wrong prefixes than to have no such definition at all.
I remember clearly how confused I was back in 8th grade Informatics class when the teacher tried (and failed back then) to explain why SI prefixes mean 10^x everywhere else, but in computers they mean 2^10, aka 1024.
It took me some 4 years until I was comfortable enough with powers of two that it did not matter whether one said 512 or 2^9 to me.
This prefix issue is a mess SI did not create nor cause. They are just trying to clean it up in the only way possible.
mino - Tuesday, March 20, 2007 - link
Sorry Jarred, didn't see your comment...
Otherwise, thanks for a nice review. Especially the explanation of AAM.
Many guys regularly ask me why I don't buy non-AAM drives...
yacoub - Monday, March 19, 2007 - link
51C is a bit warm for a HD, no? I wonder how that impacts its life expectancy...
yyrkoon - Tuesday, March 20, 2007 - link
To be a bit more specific, I think it was Google who did a bunch of testing on enterprise-type drives; I'm sure Google will turn something up ;)
yyrkoon - Monday, March 19, 2007 - link
According to multiple studies done, HDD life expectancy is not affected by heat. I'm sure there are situations where you literally have parts melting that could be problematic, but there you have it.
Justin Case - Wednesday, March 21, 2007 - link
Yes and no. A temperature around 50-60°C will not slowly "cook" the drive, but if it rises above a certain level (e.g., 120°C), it can kill it instantly. Fast drives with a lot of platters can get hot very quickly, and if they're mounted on plastic rails (poor thermal conductors) with poor air circulation, their life expectancy is probably less than a day. I've seen it happen more than once.
yyrkoon - Thursday, March 22, 2007 - link
The boiling point of water is around 191F-212F (depending on altitude); 120C is 248F. A CPU could not handle this temperature, so what makes you think a HDD could? Most consumer-grade electronics do not take kindly to anything hotter than ~70C-80C. The only exception I can think of in a computer might possibly be a graphics card, and even then I personally would not expect it to last long at these temperatures.
Most computers will not / should not exceed ~40C-50C ambient case temperature, and a lot (mine included) run much cooler. It is not uncommon for my CPU to run sub-100F (winter time) and sub-120F (summer time) under a load. Most of the time, the ambient case temperature of my case is easily under 105F.
Anyhow, the whole point here is: practice common sense with your electronics concerning heat. 120C is obviously WAY too hot for a HDD, as well as most consumer-grade electronics. This also doesn't negate the fact that several studies have been done in enterprise environments proving that heat (again, within reason) is not a factor in HDD failure. The whole point of these studies was to prove (or disprove) the value of buying enterprise-grade hard drives vs. regular HDDs.
I have always wondered why you guys (whoever claims that HDDs fail often) buy new HDDs with your new systems; now I think I know ;)
phusg - Tuesday, March 20, 2007 - link
Please refer us to these multiple studies. AFAIK the only one that corroborates this is the Google one, which you mention in a later post. Also, I'd question that one study's relevance to home use, as not everyone leaves their drives running 24/7 as Google does. My personal feeling is that repeated expansion and contraction damages drives most, and obviously if the drive is running hotter then the expansion will be greater and so will the damage to the longevity of the drive.
yyrkoon - Thursday, March 22, 2007 - link
What you're referring to is known as 'hysteresis'. Excuse the spelling if I misspelled that (it is not a word I use often). Anyhow, this is the effect that rapid cooling/heating has on an object over time, with the object eventually becoming brittle because of it.
As for the referral, use Google. Do not expect everyone to do your homework for you ;) However, I can tell you that I personally have many HDDs, some of which are over 12 years old, have seen a lot of heat in their time, and are fully functional. One of them is an 80MB Maxtor . . .
Spoelie - Tuesday, March 20, 2007 - link
In my own experience that's not really true. Last summer I had trouble with my main OS drive (a Seagate 7200.8 160GB) where Windows would slow to a crawl, there were multiple IO errors in the event log, then DMA would switch off and corrupt data showed up on the disc. I thought it had died, to be honest.
However, before throwing it out I tried upping the cooling. I had 3 Seagate HDs in the HD chamber in front without intake fans, and they were incredibly warm to the touch. Directing a 120mm 800rpm fan over them to test immediately solved all issues, and the drive was as reliable as ever again (no permanent damage even). They're now very cool to the touch. Kinda obvious when I think about it: in a normal case the drive makes metal-to-metal contact and the HD bay itself functions as a large heat sink, while in the Antec there is no contact at all and the drive is "suspended in the air" on rubber grommets.
It was a particularly hot summer period, but still, heat shouldn't be ignored.
Gary Key - Monday, March 19, 2007 - link
It is well within the drive's operating range, and remember, the temp dropped to 43C once we turned the front fan on in the case. I was expecting it to run warmer, actually.
yacoub - Monday, March 19, 2007 - link
I've been curious how much help drives would get from a larger cache. What if smaller drives came with, say, 64MB of cache?
JarredWalton - Monday, March 19, 2007 - link
Looking at the 750GB Seagate with its 16MB cache, there are definitely areas where the 32MB cache helps. Basically, with the larger capacities you need more cache to effectively handle all the data. Realistically, I'd say there's about 0% chance we'll see 64MB cache on smaller drives. When we're running 2TB drives, however....
atomicacid55 - Monday, March 19, 2007 - link
I wish the T7K500 could have been reviewed here as well. After all, that's the current challenger to the 7200.10, and if you read STR, the consensus is that it's an overall faster drive than the 7200.10. Every drive manufacturer tends to set its own trends in performance, and I personally believe it's more useful to have a comparison of the 7K1000 with its sibling rather than only against other brands.
Gary Key - Monday, March 19, 2007 - link
Hitachi is finally sending us a T7K500. We will have a 500GB roundup with the latest drives from WD, Seagate, Samsung, and Hitachi in April.
dm0r - Monday, March 19, 2007 - link
Never thought a 7200 RPM drive could be very competitive with a 10,000 RPM one... thanks to its 32MB cache. Loved the acoustics test; it shows a very quiet drive. With these new drives coming to the market, Western Digital will have to think about its overpriced Raptor drives and lower their prices. Very good review.
A question: how many Hitachi drives will be launched with perpendicular recording like this, and at what capacities?
Justin Case - Wednesday, March 21, 2007 - link
Higher areal density means more data can be read per rotation, so it could even be faster in terms of STR. But 7200 RPM drives will still have higher latency than 10k models, of course. The only way to overcome that would be to add more heads (e.g., 2 per platter).
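The latency gap is easy to quantify; a small sketch of average rotational latency (half a revolution) at common spindle speeds:

```python
# Small sketch: average rotational latency (half a revolution) by spindle speed.
for rpm in (7200, 10000, 15000):
    latency_ms = 60_000 / rpm / 2     # milliseconds per revolution, halved
    print(f"{rpm} RPM -> {latency_ms:.2f} ms average rotational latency")
# 7200 RPM -> 4.17 ms, 10000 RPM -> 3.00 ms, 15000 RPM -> 2.00 ms
```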
bkiserx7 - Monday, March 19, 2007 - link
...wonder what a perpendicular Raptor will do one day?
crimson117 - Monday, March 19, 2007 - link
Attack prey from right angles?
cruzer - Monday, March 19, 2007 - link
On page 10, "As stated in the article, we believe leaving AAM and NCQ turned provides the best experience with this drive."
Do you mean turned on or off?
tuteja1986 - Tuesday, March 20, 2007 - link
I really want 3 of them for a RAID 5 setup :)
Souka - Tuesday, March 20, 2007 - link
Hmmm..... I'd like to see this drive against the Raptors in RAID 1, 0, and 5 setups....
Gary Key - Tuesday, March 20, 2007 - link
As soon as we have another drive. ;)
Souka - Wednesday, March 21, 2007 - link
Right on!
Zoomer - Wednesday, March 21, 2007 - link
Make that 2 for RAID 5!