205 Comments

  • unityole - Tuesday, May 31, 2016 - link

    well time to read
  • Drumsticks - Tuesday, May 31, 2016 - link

    I've gotten through about half of it so far (and the conclusion) but man, those prices... Zen is supposed to be really, really close to Broadwell in IPC. Imagine that 8-core Zen part with 90-95% of BDW-E performance at 30% of the price. That would be really pretty nice.

    Great review so far, and I'm sure the rest is awesome!
  • SunLord - Tuesday, May 31, 2016 - link

    If Zen is 90% of the performance of BDW-E then it will be priced to match that, though I doubt they'll break $999. AMD will try to undercut Intel, but it's not gonna give away a great-performing chip like it had to with the crappy cores it has now.
  • jjj - Tuesday, May 31, 2016 - link

    lol $7 per mm² for the top SKU is beyond outrageous. Anyone buying this deserves a nomination for the Darwin Awards.
    Zen can compete with Intel's 2 and 4 cores + pointless GPU on price, easily.
  • Railgun - Tuesday, May 31, 2016 - link

    Because dying in a stupid way is akin to buying an expensive chip. And since when would price per area be any useful metric?
  • Shadow7037932 - Tuesday, May 31, 2016 - link

    >Zen can compete with Intel's 2 and 4 cores + pointless GPU on price, easily.

    That's what you're hoping for. For all we know, there could be some serious issues with Zen. Remember the TLB issue with the original Phenoms?
  • silverblue - Tuesday, May 31, 2016 - link

    ...and the AVX memory write performance issues with Piledriver. Still, what architecture doesn't have bugs?
  • Ratman6161 - Tuesday, May 31, 2016 - link

    I've been wanting to like AMD for a long time. I still remember the good old days of the Athlon 64 and 64 X2, when they used to beat Intel and at a lower price, but they keep on disappointing me. I'm taking an "I'll believe it when I see it" approach.
  • Azethoth - Tuesday, May 31, 2016 - link

    I think you mean we remember that one time the Athlon 64 beat Intel and we bought it, but everyone else stuck with Intel because Intel, or because AMD is for gamers, or because Intel was being monopolistic.

    That is more than 10 years ago now though, and it seems that by stated policy those days are never coming back for AMD.
  • usernametaken76 - Tuesday, May 31, 2016 - link

    Eh, I remember that prior to the Athlon 64, with just the plain old Athlon XP, the value was tremendous compared to Intel. There was no sense in buying Intel if you were building a gaming rig, something for yourself to do basic computing tasks, etc. Intel was for suckers who bought business-class desktops.
  • chrisso - Friday, June 17, 2016 - link

    The Athlon XP chips, and most of their Pentium 3-era equivalents, beat the snot out of Intel's chips for quite a while actually. One of my mates was gobsmacked when I ran Lost Coast at 56 fps using a 3000+ I bought used from eBay for £28.
    A 3GHz Pentium 4 could manage about 40.
  • lunchbox4k - Wednesday, June 1, 2016 - link

    The Athlon 64 (K8) and part of the Athlon (K7) were designed by Jim Keller. Guess who designed Zen?
  • solomonshv - Wednesday, June 1, 2016 - link

    When AMD was better than Intel, I stuck with Intel because I was in high school and couldn't afford an AMD processor. The cheapest San Diego-class CPU was north of $300 and AMD was charging $1000 for the FX-57. I ended up getting a Pentium 4 630, overclocked it from 3GHz to 4.4GHz, and was happy as can be.
  • hoohoo - Tuesday, May 31, 2016 - link

    Wait and see still seems like the best approach given the price of these CPUs.
  • lunchbox4k - Wednesday, June 1, 2016 - link

    You can always do that, unless you always pay for the top chip. With technology aiming to double in performance every year for the same cost, some SoCs will be pennies in the near future.
  • bronan - Tuesday, November 15, 2016 - link

    I find that really the wrong approach. Yes, Piledriver suffered from the weird decision around the cache, and that made it stumble instead of perform well. But they are still very capable CPUs which cost a fraction of the insanely high prices Intel tends to charge for its endless slightly-higher-clockspeed, new-socket models. Everyone keeps saying they are such a big step forward, while I see only a little step in reality, and yes, the insanely slow built-in GPU is so bad it's not even worth using for anything. The big problem is that Intel locks the non-GPU version and lowers its clocks too, while I am 1000% certain those would be the best and greatest overclockers.
    The silly GPU is forced on everybody, but I bet nobody ever uses that crap.
  • JoeyJoJo123 - Tuesday, May 31, 2016 - link

    Just FYI, a Darwin award is awarded to those who accidentally, involuntarily, and often stupidly remove themselves from the gene pool, permanently. While this (often) involves a lack of forethought which leads to the person's own death, accidents resulting in the person becoming permanently infertile also count.

    Someone could voluntarily and knowingly remove themselves from the gene pool, but because there is forethought to this action, I've never heard of a Darwin award for this.

    I don't believe buying an overpriced processor equates to removing oneself from the gene pool.
  • Azethoth - Tuesday, May 31, 2016 - link

    It is even worse. Being able to afford this because you have so much money the cost does not even register means you are actually up for whatever the inverse Darwin is. Statistically, wealth makes you live longer and healthier. You are not working 24/7, and you can certainly eat better, work out with a hot personal trainer, and take wonderful vacations wherever on the planet you feel like going.
  • cswor - Wednesday, June 8, 2016 - link

    Or mommy and daddy have money.
  • ddferrari - Tuesday, May 31, 2016 - link

    Someone is trying way too hard to sound smart and condescending...
  • michael2k - Tuesday, May 31, 2016 - link

    Zen doesn't exist yet, so it cannot compete at all.
    When Zen does exist, however, AMD would literally win a Darwin Award if they offered more than a 10% discount for parts that perform similarly.

    In other words, if Zen is capable of powering a 10 core part that offers 90% of the 6950X performance, expect it to cost $1,550. If it offers 110% of the performance, expect it to cost $1,725.
  • Spunjji - Wednesday, June 1, 2016 - link

    There's a small fault in your logic, which is that if they priced it like Intel are here they would sell as few of them as Intel clearly expect to, and thus struggle to make the market share gains that they badly need.

    I'd expect a competitive product to cost something more like $1000 (at which price they would still be making PHENOMENAL margins) and force a price-drop from Intel. They're not going to give anything away for free, but they absolutely stand to benefit from being less obscenely liberal with their margins than Intel.

    This is assuming they execute on time and as promised, which is, well, not very AMD of late.
  • cswor - Wednesday, June 8, 2016 - link

    I agree. In their underdog position, they need to undercut and can still probably make a nice profit on a chip priced to sell larger volumes, assuming it performs and they can manufacture it.
  • Azix - Tuesday, May 31, 2016 - link

    AMD might not have the luxury of not going for the jugular. If the yields aren't great maybe their prices will be that high. They won't gain market share/mind share with high prices though.
  • just4U - Tuesday, May 31, 2016 - link

    They won't be that high. AMD has only been able to price their higher-end consumer processor at Intel pricing "once" (to my knowledge) and it didn't stay there long.
  • nandnandnand - Tuesday, May 31, 2016 - link

    How do we know that Zen is that close to Broadwell in IPC (and Skylake, since there is very little difference)? I'd love for it to be true, but AMD's Zen 8 cores need to solidly beat Intel's quad cores and do almost as well in single threaded performance.
  • retrospooty - Tuesday, May 31, 2016 - link

    " Zen is supposed to be really, really close to Broadwell in IPC"
    - What has AMD done in the past decade that makes you believe that? I will believe it when its released and retail units (not engineering samples) are independently tested. Until then I don't believe anything AMD says.
  • Drumsticks - Tuesday, May 31, 2016 - link

    I'm not about to sing the praises of AMD completely yet, but I think there's reason to believe they're more focused than they've been in the last ten years. On top of that, Jim Keller was good way back when, and he's proved he still has great ideas with his work at Apple, so there's hope that Zen could really impress. They still have to execute (something we know isn't a given for AMD), but we'll know it all in a few months' time.

    If the rumors were true about Vega in October (I doubt they are), they could have a pretty nice high publicity 1-2 punch. It's unlikely Vega will show up then, but I'd be pretty happy if it did.
  • retrospooty - Tuesday, May 31, 2016 - link

    I hear you, and I have heard that too... But all the same, AMD's PR is always far more active than their engineering teams leading up to launch. If it comes out and is as fast as they seem to think it will be, and doesn't have any major heat or power issues (that force them to clock it lower than expected), it may be good... All the same, it's best to wait until retail chips are released, prices are set and units are reviewed before deciding.
  • michael2k - Tuesday, May 31, 2016 - link

    I am enthusiastic too, but if Zen really is that powerful I cannot imagine it selling at 30% of Intel prices. The more powerful the part, the higher the price will be, up to 90% of Intel's prices for similar performance.
  • nevcairiel - Tuesday, May 31, 2016 - link

    Even if it is, it's like 2 years and 2 generations late to the party then. By the time Zen is out, we'll have Kaby Lake, and they advertise being on par with not the current generation, not the previous one, but the one even before that?
  • Spunjji - Wednesday, June 1, 2016 - link

    Their claims (if true) would signify rough IPC parity with Broadwell, which Skylake outclasses by a mighty 2.3% according to this site. That was in turn a staggering 3.3% over Haswell, so even matching Broadwell won't leave them far off the mark. We have no reason to suppose the Kaby Lake release will alter that pattern substantially.

    It's all big ifs, though, and of course it'll be compared to whatever's out when it finally arrives.
  • Flunk - Tuesday, May 31, 2016 - link

    We can hope I guess, I gave up hope long ago.
  • maxxcool - Tuesday, May 31, 2016 - link

    If you are buying an 8-core CPU from EITHER vendor specifically to 'game on', you're a proper idiot, tool and dumbass.
  • JoeyJoJo123 - Tuesday, May 31, 2016 - link

    If you are telling people what they can and can't do with their money along with slinging personal attacks, you're a proper idiot, tool, and you need to get a job so you can manage your own money, rather than someone else's.

    Seriously though. This is the internet. You should really stop caring about what other people spend their own money on. People much richer than these kids are spending money on sports cars and getting into a wreck a week later, often involving other vehicle(s) and/or innocent people.

    I really couldn't care less if 10,000 people on this article's comments section thought the new extreme edition processor was a "good value" and bought one (or more). More power to them.
  • hoohoo - Tuesday, May 31, 2016 - link

    AMD is not a charity.

    AMD will charge as much as the market will bear.

    Ninety percent of the performance probably costs about ninety percent the price.
  • Michael Bay - Wednesday, June 1, 2016 - link

    >supposed to be
    >Zen
    At this point in time it's not even remotely funny anymore.
  • Bulat Ziganshin - Saturday, June 4, 2016 - link

    >Zen is supposed to be really, really close to Broadwell in IPC.

    Do you really believe that AMD, who was a long way behind Intel back in 2008 and then lost a few years on Bulldozer development, will miraculously jump ahead? I expect that Zen will be a little better than their last Phenom, and that their first implementation of SMT will be as inefficient as Nehalem's. And the higher core count, as well as AMD's huge lag in heat-dissipation technology, will mean more heat and therefore stricter limits on frequency - the same limits as the 10-core Broadwell has, and probably even stricter. So it may be like an 8-core Nehalem at 4 GHz (with the best overclocking). That's better than an i7-6700K for multi-threaded tasks, but of course slower for tasks with 1-4 threads, including most games. Or you may continue to believe in Santa :)
  • jchambers2586 - Tuesday, May 31, 2016 - link

    You spend $434 on a CPU and it does not perform any better in gaming than a $250 6600K; you would think spending more would get you better gaming performance. I don't think spending $434 on the i7-6800K is worth it for gaming.
  • beginner99 - Tuesday, May 31, 2016 - link

    This is the take-away. Useless for gaming for now. Whether 6 cores will actually benefit from DX12 remains to be seen. If I were a game developer I would focus on making use of the iGPU rather than scaling above 4 cores, because most of my user base has an idling iGPU and very few have more than 4 cores.

    If it at least had eDRAM. For Broadwell it's the 5775C, or else Skylake, for gaming.
  • Flunk - Tuesday, May 31, 2016 - link

    DX12 actually uses the CPU more efficiently, so it should make everything LESS CPU constrained, not more so.
  • willis936 - Tuesday, May 31, 2016 - link

    Well, if you want to toss four channels of memory and 28 PCIe lanes out the window and just talk about gaming, then you should probably keep in mind that anyone buying these processors over Xeons will be overclocking them. You'll likely get identical single-threaded performance on the six-core parts to the four-core parts, just with 2 more cores. If you say more cores don't matter in gaming, well, I don't know why everyone (including AnandTech) is saying this. I play CS:GO and BF4 at 1440p120 on a 770/4790K @ 4.5GHz, and while I'm still GPU limited, I see all 8 threads get over 70% usage regularly. I have no doubt that even this 4-core single-threaded performance king will bottleneck a 1080 in some cases. Intel has slowed down in performance increases and GPUs haven't. The old talk of "you'll always be CPU limited" shouldn't be treated as dogma. Oh, and AnandTech should consider changing their CPU gaming benchmarks. It's not super helpful to see a bunch of data that shows a dozen CPUs in a dozen scenarios that are all GPU limited. It's not hard to choose realistic CPU-limited scenarios.
  • bogda - Tuesday, May 31, 2016 - link

    It is difficult now, and it has always been difficult, to find meaningful, high-end, CPU-limited gaming benchmarks (unless you are working in Intel marketing/sales). Nobody buys a $1000+ processor to run games at 720p.
    Anybody thinking about buying a high-end processor for gaming should think twice after seeing a meaningful gaming benchmark.

    P.S. You probably wanted to say "... you will always be GPU limited should not be treated as dogma".
  • jacklansley97 - Tuesday, May 31, 2016 - link

    These aren't meant for gamers anyway. HEDT has always been aimed at content creation, development, and calculation. I don't know why anyone thinks it's a revelation that these chips don't perform better than a quad core for gaming.
  • Impulses - Tuesday, May 31, 2016 - link

    Even then, you need to study your needs carefully... For basic photo/Lightroom tasks clock speed actually matters a decent amount and not a lot of tasks scale super well beyond 4 cores... Obviously for things like video encoding more cores will make a huge difference.
  • joex4444 - Tuesday, May 31, 2016 - link

    For gaming, no, but that's the thing: PCs can do so much more than just play games.
  • unityole - Tuesday, May 31, 2016 - link

    Wanted to see more core-for-core IPC comparisons, 6800K/6850K vs 4960X and also 5960X vs 6900K. Not just because the technology changed, but also the IPC gain from using TB 3.0, which is probably minimal.
  • Ian Cutress - Tuesday, May 31, 2016 - link

    We covered IPC in both our Broadwell and Skylake mainstream desktop reviews:

    http://www.anandtech.com/show/9483/intel-skylake-r...
  • landerf - Tuesday, May 31, 2016 - link

    That Broadwell has eDRAM. This one doesn't.

    The two things I and a number of others wanted to see from this review were IPC and overclocking for the whole range. As we've already seen from leaks, the 10-core was a bad clocker, but a lot of people had high hopes for the lower-core models.
  • Ian Cutress - Tuesday, May 31, 2016 - link

    That's a fair point. After Computex blows over I'll look into running Broadwell-E with dual channel memory similar to those tests.
  • SAAB340 - Wednesday, June 1, 2016 - link

    If possible, can you have a look into RAM overclocking as well? I believe the memory controller in Haswell-E isn't particularly great. The one in Skylake is way better. I wonder if Broadwell-E has improved there?

    I know RAM speeds in general don't make that much difference, but in certain applications they do.
  • StevoLincolnite - Tuesday, May 31, 2016 - link

    I'm still happily cruising with a 3930K. The 5930K was, for the CPU alone, twice the price of what I paid for my 3930K, but it certainly doesn't offer twice the performance, and the 6850K looks to be more expensive again.

    My 3930K still has a few years of life left in it, hopefully AMD can bring Intel's prices downward.
  • Witek - Thursday, June 16, 2016 - link

    @StevoLincolnite - agreed. I have been on a 3930K for more than 3 years now, and I would be happy to switch to something faster, but the 6800K is essentially the same speed, only faster in specialized workloads, and probably 2 times more costly. Going from 6 to 8 cores only gives me a 30% boost, for almost 4-5 times the price. The 10-core one is a joke.

    The 3930K (and it overclocks easily too - 3.2GHz -> 4.2GHz with water cooling, non-stop in my setup) is still probably the best value out there.
  • prisonerX - Tuesday, May 31, 2016 - link

    It cracks me up that people pay, say, $300 for a mainstream i7 which is 65% graphics they don't use, but repurpose that same silicon for a few more cores and the price is $1000+.

    People belittle AMD for not having the fastest silicon and then touch their toes price-wise for whatever scraps Intel throws them. Particularly funny since mainstream processors were 5% slower in the last generation. It's like people are suffering Stockholm syndrome or something.
  • Alexey291 - Tuesday, May 31, 2016 - link

    Well, it's little wonder that the CPU market is slowing down, since there are no actual products worth buying from a mainstream purchaser's point of view.
  • Eden-K121D - Tuesday, May 31, 2016 - link

    People on Haswell are well and good until something extraordinary comes out of Intel/AMD.
  • Michael Bay - Wednesday, June 1, 2016 - link

    When the most exciting thing about a platform refresh is a goddamn USB 3.something Type-whatever port, the writing is on the wall.
  • beginner99 - Tuesday, May 31, 2016 - link

    Exactly. It's not slowing down because of smartphones or tablets, but because with 5% performance increases it takes 10 years for an upgrade to be worthwhile for the average user.
  • Ratman6161 - Tuesday, May 31, 2016 - link

    To take that one step further, for the average user it isn't even 5%. They aren't doing anything with the machine that isn't entirely adequate on what they have. I don't consider myself to be the average user by any means, but the i7-2600K system I built in the spring of 2011 is still more than fast enough for anything I do, so there's no reason to spend money on a 6700K, let alone any of these.
  • mapesdhs - Thursday, June 9, 2016 - link

    Indeed! This week I need to put a system together for handling SD video. I have at my disposal a whole range of SB/SB-E i7s, but they're overkill, so I'm going to reuse the parts from my brother's old PC instead: a P55 with an i7 870 which, at 4.2GHz, is still rather good (people forget it was a particularly low-latency platform for its time, with boards that really did push what features one could include, and some good innovation with slot spacing and other things). My own general tasks system, a 5GHz 2700K, I can't see becoming obsolete for a long time; it handles everything with ease (scores 880 in CB R15).

    And this is the key problem: it's the very tasks that would benefit the most from real performance and feature improvements where newer products have helped the least, bearing in mind the upgrade costs involved and the lack of feature enhancements over the years (how long was it until Intel finally added native USB3 to the top-end chipset?). Given the cost, the gains of the latest top-end CPUs over what was available in 2011 just aren't worth it, which perhaps explains why I see comments even from X58 6-core owners saying they'll stick with their setups for now. Meanwhile, for anyone on a budget who doesn't want to consider second-hand items, it's hard to ignore the value of AMD's current 4c and 6c offerings (heck, the PC I built for my gf is an old Ph2 X4 965 and it's more than adequate), given that really, for the response and feel of a normal PC, having an SSD is more important than having the higher IPC of a costly Skylake vs. an FX-6300 or something.

    I was shocked at the launch price of the 6700K, and I didn't think Intel would make the same mistake again, but they have. One of the main things I do is offer free upgrade advice for prosumers on a limited budget (typically self-employed artists); atm, the 6950X is so expensive that I'd recommend a 2-socket Xeon setup instead without hesitation. 3 years ago this wasn't the case; back then there was a solid rationale for (for example) an AE user on a limited budget to build an OC'd 3930K. Today though, what Intel is doing will only help shrink the enthusiast market even further, and I was told by a high-street shop owner that the top-end items are the ones which provide the best margins (he said his store couldn't survive on mainstream-level sales). There will be long-term self-reinforcing consequences if Intel doesn't change direction. Perhaps Zen will achieve that; certainly many seem to hope it will.
  • JimmiG - Tuesday, May 31, 2016 - link

    What's worse than the price premium is that you're also paying for the previous generation architecture.

    I really don't see why anyone would want one of those CPUs. For gaming and most typical applications, the mainstream models are actually faster because of their more modern architecture and higher clock speeds. If you're a professional user, you should really be looking at Xeons rather than these server rejects.
  • K_Space - Tuesday, May 31, 2016 - link

    Exactly. I think that's the whole point: Intel realizes that - realistically - little profit will be made from these B-Es given the small incremental increase in performance, so why not use them as an advert for the Xeons (which they have been aggressively marketing for HEDT, not just servers, over the last few months). Anyone considering these will consider the Xeons now.
  • Ratman6161 - Tuesday, May 31, 2016 - link

    There are a few benchmarks where they do make sense, if and only if you are doing that particular task for your job, i.e. an environment where time is money. For the rest of us: if I need to do a video conversion of some kind, it's relatively rare and I can always start it before I go to bed.
  • retrospooty - Tuesday, May 31, 2016 - link

    People belittle AMD because, even though Intel has dramatically slowed down its pursuit of speed, AMD still can't catch up. It's actually worse than that, though. If AMD had been competitive at all in the past decade, Intel would still be pursuing speed and would be further ahead. It's a double-edged sword sort of thing.
  • Flunk - Tuesday, May 31, 2016 - link

    Yes, Intel has slowed down for AMD to catch up before. Cough, Pentium 4.
  • retrospooty - Tuesday, May 31, 2016 - link

    Yup... and back then AMD took advantage of it. I was the happy owner of a Thunderbird, then an Athlon, then an Athlon X2... Then Intel woke up and AMD went to sleep. For the past decade AMD has been too far behind to even matter. In the desktop CPU space there is Intel and then ... no-one.
  • Flunk - Tuesday, May 31, 2016 - link

    You're right, it's totally Intel's fault. They could launch a line of high-end consumer chips that cost the same as the current i5/i7 line but had 2-3X as many cores and no iGPU. They'd cost Intel the same to fabricate. They're the only ones to blame for their slowing sales.
  • khon - Tuesday, May 31, 2016 - link

    I could see people buying the i7-6850K for gaming: 6 cores at decent speeds + 40 PCIe lanes, and $600 is not that bad when you consider that some people have $700 1080s in SLI.

    However, the i7-6900/6950 look like they are for professional users only.
  • RussianSensation - Tuesday, May 31, 2016 - link

    40 PCIe lanes are worthless when the i7 6700K can reliably overclock to 4.7-4.8GHz and has extra PCIe 3.0 lanes off the chipset. The 6850K will be lucky to hit 4.5GHz, and still lose in 99% of gaming scenarios. Z170 PCIe lanes are sufficient for 1080 SLI and PCIe 3.0 x4 in RAID.

    The 6850K is the worst processor in the entire Broadwell-E line.
  • Impulses - Tuesday, May 31, 2016 - link

    Well, if you're about gaming only you might as well compare it with the 6600K... AFAIK HT doesn't do much for gaming, does it? The 6800K isn't much better either when you can just save a few bucks with the 5820K.

    I feel like they could've earned some goodwill despite the high-end price hikes by just putting out a single 68xx SKU for like $500; it'd still be a relative price hike for entry into HEDT, but it could be more easily seen as a good value.

    Are the 6800K bad die harvests or something? Seems dumb to keep that artificial segmentation in place otherwise when HEDT is already pretty far removed from the mainstream platform.

    When I chose the 6700K over the 5820K I thought it'd be the last quad core I'd buy, but at this pace (price hikes, HEDT lagging further behind, lower end SKU still lane limited) I don't know if that'll be true.
  • mapesdhs - Thursday, June 9, 2016 - link

    By definition, professionals wouldn't use this kind of tech at all. Pro users don't OC. Pro users have a budget to afford Xeon.

    The prosumer market though, solo professionals, those on a budget, these are the people for whom previous generations of SB-E/IB-E made some sense, but not anymore.
  • sleekblackroadster - Tuesday, May 31, 2016 - link

    This is the opposite of generating enthusiasm, Intel.
  • jjj - Tuesday, May 31, 2016 - link

    This is what Intel means by more focus on certain segments and it will only get worse as the PC market fades away.
  • damianrobertjones - Tuesday, May 31, 2016 - link

    No 6700k in the tests? :(
  • damianrobertjones - Tuesday, May 31, 2016 - link

    Clicks the next page... DAMMIT!
  • PJ_ - Tuesday, May 31, 2016 - link

    It was in the GTA V benchmarks for example
  • PJ_ - Tuesday, May 31, 2016 - link

    And many more
  • medi03 - Tuesday, May 31, 2016 - link

    AMD's CPUs aren't as bad for gaming as many people think (mostly because of multi-threading becoming a trend in games, thanks to consoles):

    http://wccftech.com/fx-8370-i5-6400-gaming-compari...
  • josetesan - Tuesday, May 31, 2016 - link

    It would be great if, for the next multi-threaded tests, Linux kernel compilation times were added, as they make great use of many threads via make's -j <threads> parameter.
    Some people use their computers to compile, and we benefit from multiple cores a lot (Java, C, whatever).
    The 6-core for $434 is a nice price, given the Haswell i7-4770 has 4 cores and is similarly priced.
    Great review, indeed.
  • Tom Womack - Tuesday, May 31, 2016 - link

    It's not clear that the 6-core Broadwell is very much better than the 6-core Haswell, and it's likely that its existence makes the 6-core Haswell cheaper; so pick up a 5820K in the near term.
  • mapesdhs - Thursday, June 9, 2016 - link

    Or a used 3930K, they cost diddly now. Use the cost saving on better SSDs, faster RAM, etc.
  • rtho782 - Tuesday, May 31, 2016 - link

    Wow.

    Minor performance boosts at best was expected and I could swallow that and still be excited for BDW-E.

    Minor performance boosts and a 70% price boost? I won't bother upgrading then.
  • ShieTar - Tuesday, May 31, 2016 - link

    There is no 70% price boost. The 6900K that replaces the 5960X basically sells at the same price, with a good 10% performance boost.
    The 6950X needs to be compared to last generation's E5-2687W V3, which still costs more than $2k. So Intel is actually handing out a 10% to 15% performance boost with a 20% price drop on that front:

    http://www.anandtech.com/bench/product/1730?vs=135...
  • pencea - Tuesday, May 31, 2016 - link

    It's been 10 days since the embargo on GTX 1080 reviews was lifted and, previews aside, there is still a deafening silence from AnandTech. Yes, the apologists will argue AnandTech does a deeper review, give them time and all that, but seriously, when your review is this late it begins to look like incompetence. Or perhaps you consider your reviews to be elitist, the holy grail among tech websites, and that therefore any delay is acceptable? What pressing projects are the GPU staff working on that could explain this state of affairs?

    GET IT TOGETHER ANANDTECH YOU USED TO BE BETTER!
  • Ryan Smith - Tuesday, May 31, 2016 - link

    "there is still a deafening silence from Anandtech"

    Feedback is always appreciated. I've mentioned a few times now that it's not done yet and is still in the works. But I'm not sure what else you're looking for?
  • D. Lister - Tuesday, May 31, 2016 - link

    In other news, Guru3D has recently put up their 1070 review. Next time guys, use "Review: Part 1" in title, instead of calling it a "Preview". :P
  • retrospooty - Tuesday, May 31, 2016 - link

    He is looking for it to be done, and not "in the works". =)
  • HighTech4US - Tuesday, May 31, 2016 - link

    ^ T H I S
  • HighTech4US - Tuesday, May 31, 2016 - link

    Quote: But I'm not sure what else you're looking for?

    An actual review.
  • artk2219 - Monday, June 6, 2016 - link

    Sigh, you can't please everyone, and thank you for taking the time and effort to do these reviews in the first place. As always, I look forward to it and hope to see it whenever it's ready.

    Thank you!
  • redfirebird15 - Tuesday, May 31, 2016 - link

    There is data in the bench for 1070 and 1080 founders editions. The 1070 is on par with the 980ti, and the 1080 beats it in all categories. Review complete.
  • HOOfan 1 - Tuesday, May 31, 2016 - link

    I imagine they are getting more clicks without it being posted, than they would with it being posted.

    Once it is posted, people will read it and move on. Now people have to stop by every day to ask "are they finally done yet?"
  • Impulses - Tuesday, May 31, 2016 - link

    I get the impatience, even tho there's plenty of other places with decent reviews, but the aggressive and entitled attitude is a little bizarre. You can't even buy the cards right now, and you'd be silly to overpay for a Founders Edition anyway, so why are you stressing so much and spamming every other article about it?
  • SkiBum1207 - Tuesday, May 31, 2016 - link

    Good quality reporting and reviewing takes a TON of time. Doing benches isn't as simple as firing up a few runs of 3DMark and calling it a day. Then on top of that, there's the analysis, conversations with the manufacturer to clear up any issues, and writing it all up in clear and concise prose.

    That's not easy to do. You can get raw benchmarks anywhere online, but getting actual deep analysis by people who actually understand their respective fields? That is hard to find.

    Anandtech has always had a hallmark of taking their time, doing their research, and not shoving out hacked together reviews/articles. If anything, they have gotten better over time, rather than "used to be better".

    @HOOfan1, that's literally the opposite of how journalism online works. The quantity of people who rapid-refresh a page for a single article is a vast minority in comparison to once an article is published. That's one of the reasons why publish-fast, fact check later publishing has become more prevalent.

    @Ryan - I know things have gotten "noisier" the past 5 or so years on the internet and in the comments, but there are still some absolutely loyal readers who absolutely appreciate the work and detail you put into each article. Thank you, and keep up the amazing work.
  • Eden-K121D - Wednesday, June 1, 2016 - link

    If you want a deep dive into Pascal, read this: https://images.nvidia.com/content/pdf/tesla/whitep...
  • HOOfan 1 - Wednesday, June 1, 2016 - link

    Plenty of other sites have given a lot more than benchmarks. Maybe not as much as a typical anandtech article, but certainly much more than just "raw benchmarks".

    Is George RR Martin writing graphics card reviews on the side?

    Anandtech doesn't owe us anything, but I would say the fact that people are frustrated that there is no full review yet shows they are loyal readers. If they weren't loyal readers, they just wouldn't care.
  • HollyDOL - Tuesday, May 31, 2016 - link

    Ouch, this definitely puts the high-performance line out of both my reach and my interest.
  • jabber - Tuesday, May 31, 2016 - link

    Glad I didn't bother waiting when I bought the 5820k last month.
  • getho - Tuesday, May 31, 2016 - link

    Bonkers. I was holding out for this, but was forced to replace my 1366 system this year (and went with a 5820). There is no way I would've contemplated dropping $1700 on a CPU. If it was twice as fast, maybe. Even at $999 I think I'd look at dual Xeons - and probably second hand at that.
  • adamod - Wednesday, June 1, 2016 - link

    I've got an HP Z600 with dual X5660s that consistently run at 3.1GHz on all cores at 100 percent load... I love it... a cheap R9 280X in there and a pair of SSDs and it's pretty damn quick... the graphics card is older and kinda sucks, but it DOES play Crysis at 1080p, which is, well, just OK... point is, I plan to get a 1070 for it and I don't expect I will need to upgrade for another 5 years for the gaming and CAD work that I do.
  • RealLaugh - Tuesday, May 31, 2016 - link

    Why are there no 4k resolution benchmarks, did I miss something?

    Surely the consumer base for this tech are not going to be playing on 1080p?!

    Isn't that where the CPUs would start to get ahead of the i5 and i7 products?

    Call me out if I'm mistaken!
  • dannybates - Tuesday, May 31, 2016 - link

    You are mistaken.
    Lower Res = More CPU Dependent, Less GPU Dependent
    Higher Res = More GPU Dependent, Less CPU Dependent

    Lowering the resolution of a computer game or software program increases the dependency on the CPU. As the resolution decreases, less strain is placed on the graphics card because there are fewer pixels to render, but the strain is then transferred to the CPU. At a lower resolution, the frames per second are limited by the CPU's speed.
  • RealLaugh - Tuesday, May 31, 2016 - link

    ok thanks now I know.
  • adamod - Wednesday, June 1, 2016 - link

    I shall purchase a Xeon E5-2699 v4 and a GT 210... I want to play Crysis at 800x600, but ULTRA!!!
  • Ph0b0s - Tuesday, May 31, 2016 - link

    The only thing for gaming might be scaling with more cores. DirectX 12 makes better use of multi-core CPUs. It would be good for AnandTech to do a story on how DirectX 12 scales with more cores, now that we can have up to 10. I don't know if there are enough DirectX 12 games to do this yet. If you don't get a benefit from having more than 4 cores, then Broadwell-E will not be needed for gaming. If you do get a benefit beyond 4 cores, then that will be the case for needing Broadwell-E for gaming.
  • jabber - Tuesday, May 31, 2016 - link

    Yeah I'm looking forward to playing some DX12 games in 2018 with my DX14 capable GPU. C'mon folks that's how it always works.
  • adamod - Wednesday, June 1, 2016 - link

    Here is some limited data... it shows Ashes and Gears at least, along with some synthetics:
    http://www.pcworld.com/article/3039552/hardware/te...
  • Wardrop - Tuesday, May 31, 2016 - link

    Good banner photo! Liking it.
  • r3loaded - Tuesday, May 31, 2016 - link

    Yay, price gouging!

    AMD pls save us.
  • ochadd - Tuesday, May 31, 2016 - link

    Pricing is just prohibitively high imho. I'm still rocking Sandy Bridge and was hoping the lowest-end part would basically be a 5820K with 40 lanes unlocked. That would have made a great upgrade. With this pricing, I think it's best to wait until the next version of the regular desktop (Kaby Lake?) to pull the trigger.
  • Impulses - Tuesday, May 31, 2016 - link

    Or just get a 5820K if you'd benefit from the extra cores... Or a 6600/6700K if you want the IPC bump for non gaming tasks and platform upgrades (USB 3.1, more lanes for M.2, etc).

    After throwing tick tock out the window it's unlikely the next refresh will be any more tempting. If you don't need any of the aforementioned things (more cores or platform upgrades) then you might as well sit tight tho.
  • rhysiam - Tuesday, May 31, 2016 - link

    I've asked this question before and never got a good answer, so I'm trying again. Can someone explain to me why the boost clocks on the SKUs with more cores are always so much lower than those with fewer cores enabled?

    Base clocks have to be lower of course. I've got no issue with 10 active cores requiring lower clocks than 6 active cores; that makes sense. But what I don't get is why a 10-core SKU with 1 active core and 9 idle is somehow unable to turbo anywhere near as high as a 6-core implementation, which is effectively 1 active core, 5 idle cores and 4 disabled cores on the same silicon. Is the power difference between 4 idle and 4 disabled cores really so significant on a 140W CPU that it necessitates an almost 10% lower clock speed?

    It makes it even harder to justify spending $1.7K on a CPU when it loses so many benchmarks to CPUs costing a fraction of the price (often including the almost 3-year-old 4820K).
  • RealLaugh - Tuesday, May 31, 2016 - link

    I would like to know this too. I looked at the frequencies and was surprised, but I don't understand why it's like that.

    Also, in the past Intel's top-line Xeons have had huge cache + core counts but clock speeds of only 2-3GHz...
  • Ph0b0s - Tuesday, May 31, 2016 - link

    Brief go at an explanation. Others can chime in to add on to my very simplistic explanation.

    The higher-core-count CPUs having lower clocks is down to the thermal envelope (referred to as the TDP, in watts) they are trying to hit on that CPU. Each core, when working, is effectively a heater on the CPU package. On CPUs with more cores, the heaters are more dense, i.e. more heaters per area, as Intel tries to hit the same physical CPU package size whether it's 6 or 10 cores.

    When all the cores are going, a 10-core CPU will generate more heat than a 6- or 4-core CPU at the same clocks. To keep the 10-core CPU from exceeding the thermal envelope limits that Intel puts on it, they decrease the clock speed to offset the extra heat from the extra cores.
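
    A crude back-of-the-envelope sketch of that argument (my own simplification, not anything Intel publishes): if dynamic power scales roughly with cores × frequency × voltage², and voltage has to rise roughly with frequency, then power goes roughly with cores × f³, and holding a fixed TDP gives something like:

    # Crude constant-TDP model: P ~ cores * f * V^2, with V roughly tracking f,
    # so P ~ cores * f^3. Hold power at the same budget and solve for frequency.
    # The 3.6 GHz reference is just the 6850K's base clock; this is a
    # simplification, not Intel's actual binning rules.
    def max_all_core_clock(cores, ref_cores=6, ref_clock_ghz=3.6):
        return ref_clock_ghz * (ref_cores / cores) ** (1 / 3)

    for n in (6, 8, 10):
        print(f"{n:2d} cores -> ~{max_all_core_clock(n):.2f} GHz at the same TDP")
    # Prints roughly 3.60 / 3.27 / 3.03 GHz, in the same ballpark as the
    # 6850K / 6900K / 6950X base clocks (3.6 / 3.2 / 3.0 GHz).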
  • rhysiam - Tuesday, May 31, 2016 - link

    Yes, but the "boost" clocks refer to single (or lightly) threaded workloads. Only one core is working. Your last paragraph refers to "when all the cores are going" - that's a base clock situation. As I said in my post I have no issue at all with 10 active cores requiring a lower frequencies than 6 in the same package. It's the single core taxed scenario I struggle to understand.
  • adamod - Wednesday, June 1, 2016 - link

    Just a side note... my X5660s are rated at 2.8 base and 3.2 boost, and with all 12 cores (dual-socket setup) and 24 threads at 100 percent load, pulling 102W each (while rated at 95W each), mine NEVER drop below 3.1GHz... I somehow got lucky as shit because I have two that can do it and they only get to about 80C with the stock cooling in my HP workstation... sometimes you get lucky... unfortunately I can't overclock though :(
  • ThortonBe - Tuesday, May 31, 2016 - link

    Perhaps it is like this: not every transistor has the same performance. When you increase the number of transistors (more cores), you increase the chance that you will get some slower transistors. For the turbo spec, every core needs to be able to hit it.

    By making the turbo spec lower for higher-core-count parts, Intel will have more parts that can be sold as the more expensive part (e.g. with ten cores they have a higher chance of fabricating a slow core than with six cores, so they lower the max turbo on the ten-core to compensate and keep yields high).

    Also, the floorplans (how the cores are wired up) might differ between the ten- and eight-core parts. At GHz speeds, even the wiring can severely limit how high a frequency can be achieved. The more complex ten-core designs are probably harder to wire up properly.
  • rhysiam - Wednesday, June 1, 2016 - link

    The idea of tolerances with individual cores is an interesting suggestion I hadn't thought of.

    RE your last paragraph though, my understanding is that all 4 of these SKUs are identical chips, with the lower parts simply having cores (and PCIe lanes) disabled. That was certainly the case with Haswell-E CPUs and I'm assuming the same here. So the 10 core designs are exactly the same as the 6 core.

    I suppose it's possible that the 6 core chips undergo testing and have their worst cores disabled, allowing higher turbo frequencies. It just seems, particularly with this generation, that the $1.7K flagship CPU is going to be such a low volume part anyway that they should be able to cherry-pick CPUs which can hit higher boost clocks.

    Your suggestion would certainly explain why they're pursuing and promoting "turbo boost max 3.0." It seems like it's a bit of a mess at the moment, but if they can allocate single threaded workloads to the "best" core, surely they could start to hit much better boost clocks?

    With Haswell the situation was even worse. You can buy a 4790K which can boost a single core to 4.4GHz, but the best single-threaded Haswell-E option (5930K), despite 50W of additional TDP to play with and no iGPU competing for it, has to settle for a boost clock a full 700MHz (16%) lower. I realise there's additional complexity with the "E" parts, with larger cache and a wider memory bus, but that's a massive sacrifice which in many cases makes the cheaper 4790K the faster CPU, often by a wide margin.

    I'd welcome other thoughts/comments/ideas here.
  • SAAB340 - Wednesday, June 1, 2016 - link

    The 6950X chips will all have to be well binned to start with. They will all have to have the whole die working, and be able to do so at a low enough voltage to meet the 140W TDP. A leakier fully-working chip might still be sold as an 8- or 6-core version, given that you just disable 2 or 4 (fully working but leaky) cores to meet the 140W TDP.

    The turbo speed bins being lower in general the more cores you get on a CPU is certainly a function of the fact that every individual core has to be able to hit the highest turbo bin, even though the chip won't be TDP limited at that time. So you're pretty much guaranteed to be able to overclock to the max single-core turbo speed, but you will most likely exceed the TDP.

    It's just the same as it being way harder to find a hexa-, octa- or deca-core chip able to reach an x GHz overclock on all cores compared to finding a quad-core chip able to reach the same x GHz, as 'only' 4 cores have to be good enough overclockers to reach it. The more cores, the less likely, once we start to push the limits.

    Turbo Boost Max 3.0 certainly sounds like an interesting feature where, by the sound of it, they instead try to identify the core that is able to run at the highest frequency. There the opposite would be true: the more cores to choose from, the higher the likelihood of finding one able to reach x GHz.
  • extide - Monday, June 6, 2016 - link

    The floorplan does not differ. All 4 of these SKUs use the exact same 10-core die. The lower-end ones just have cores disabled, but otherwise they are the same exact silicon.
  • jwcalla - Tuesday, May 31, 2016 - link

    No time for a 1070 review but a dozen-page day-one review for a platform nobody is going to buy.
  • rhysiam - Wednesday, June 1, 2016 - link

    Different authors. It's Ryan Smith who tackles the GPU reviews.
  • GreenReaper - Thursday, March 2, 2017 - link

    It is just as important to write reviews for things people should not buy as for those they should. Perhaps more so, so that people avoid making a mistake!
  • ex_User - Tuesday, May 31, 2016 - link

    Haven't you forgotten to change the following line on the overclocking page: "MSI has improved its overclocking options as of late on the Z170 platform(...)"?
  • jardows2 - Tuesday, May 31, 2016 - link

    Little confused. The chart shows the i7-6950X as 10 cores / 20 threads, but you state it is "a full $634 more than the 8-core i7-6900K".

    I thought the i7-6900K was 4 cores / 8 threads. Am I missing something here?
  • GTRagnarok - Tuesday, May 31, 2016 - link

    6900K is 8C/16T. Maybe you're thinking of the mainstream Skylake 6700K?
  • jardows2 - Tuesday, May 31, 2016 - link

    That would be my confusion! Thanks for setting me straight!
  • mapesdhs - Thursday, June 9, 2016 - link

    Don't blame yourself, Intel's product naming is really dumb.
  • zeeBomb - Tuesday, May 31, 2016 - link

    Golly...dreams money CAN'T buy.
  • maxxbot - Tuesday, May 31, 2016 - link

    I've been easy on Intel these past few years, but they deserve nothing but ridicule for this launch; the fact that you still need to spend a full $1000 for any 8-core CPU is a disgrace.
  • RussianSensation - Tuesday, May 31, 2016 - link

    It's a slap in the face when the 6850/6900/6950X are also crap overclockers and will get owned hard in games and everyday tasks by a $310 6700K. The only CPU that even remotely makes sense is the 6800K. For the workstation use case, dual Xeons will smash the 6950X. Heck, it's better to build a 6900K + 6700K in the same case, allowing one to be productive and game at the same time. Phanteks makes such cases now. The 6950X is just a way to show your status, nothing more.
  • mapesdhs - Thursday, June 9, 2016 - link

    Something usually missing from reviews now is an oc'd 4820K, which is annoying because a 4c IB-E on X79 allows for quite a lot of oc headroom given the high rating of the socket and the beefy power delivery available on boards like the R4E, etc. I bet it would give many of the newer CPUs a serious pelting.
  • Drazick - Tuesday, May 31, 2016 - link

    Could we have an Extreme Edition with Iris Pro + 128MB of eDRAM?

    That would be a great addition (even if only for the 6-core part).
  • Eden-K121D - Wednesday, June 1, 2016 - link

    Iris Pro would be useless, but I agree with the eDRAM acting as an L4 cache.
  • barleyguy - Tuesday, May 31, 2016 - link

    Great review.

    One possible omission though: you mentioned that the Xeon E5-2640 is a better deal as far as price/performance, but there are no Xeons on the benchmark charts. Do you plan to review the E5-2640 at some point in the future?

    Thanks.
  • ShieTar - Tuesday, May 31, 2016 - link

    That might indeed be a statement which needs to be proven by tests.
    The 2640 has 10 Broadwell cores at a 2.8GHz all-core turbo; the 6950X should have a 10-core turbo of 3.2GHz, so you might argue you get 87.5% of the performance for ~60% of the CPU cost. But the 2640 does have a slower verified memory speed, which may have a little impact. And its turbo boost settings are defined to hit a 90W TDP, and I don't think you can change that even in a workstation with plenty of cooling available. Add to that the fact that you can overclock to improve the performance of the 6950X, and that the $700 price difference should be considered relative to the overall system cost, and you probably end up with very similar price-to-performance ratios.

    I think the stronger challenger to the 6950X's price-to-performance figures is the 2687W v4, which can be had for just over $2k, and gets you another 2 cores at almost the same clocks. That's ~16% more performance for ~16% more CPU cost, which translates into less than 10% higher system cost.
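
    To make the raw arithmetic above explicit, here is a quick sketch (cores × all-core turbo as a crude throughput proxy, approximate prices as quoted above; it only looks at CPU cost, so it ignores the memory-speed, TDP and whole-system-cost caveats):

    # Rough perf-per-dollar comparison using the approximate figures from this
    # thread (clocks and prices are the ones quoted above, not exact list prices).
    chips = {
        # name: (cores, all_core_turbo_ghz, approx_price_usd)
        "i7-6950X":    (10, 3.2, 1723),
        "E5-2640 v4":  (10, 2.8, 1030),   # ~60% of the 6950X price
        "E5-2687W v4": (12, 3.1, 2000),   # 2 more cores at nearly the same clocks
    }

    base_cores, base_ghz, base_price = chips["i7-6950X"]
    for name, (cores, ghz, price) in chips.items():
        rel_perf = (cores * ghz) / (base_cores * base_ghz)  # crude throughput proxy
        rel_cost = price / base_price
        print(f"{name:12s} perf {rel_perf:.2f}x  CPU cost {rel_cost:.2f}x  "
              f"perf per dollar {rel_perf / rel_cost:.2f}x")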
  • samer1970 - Tuesday, May 31, 2016 - link

    Hello,

    We all know that games don't use more than 8 threads today...

    So to take advantage of an 8-core or 10-core CPU in gaming, you should disable HT (Hyper-Threading) and run the gaming tests again to compare it against the 4-core i7 6700K.

    And test it with SLI as well, to reach the i7 6700K's bottleneck!

    Let me put it more simply:

    The i7 6700K has 4 cores and can OC to 4.4GHz easily. With HT, this CPU gives us 8 virtual cores, comparable to a 2.2GHz clock for each virtual core.

    However, the 8-core i7 6900K, with HT turned OFF, gives us 8 cores at 4.4GHz EACH!

    That's double the speed of the 4-core i7, if the game uses 8 threads.

    EVEN if we don't OC the 8-core, it would be 3.2GHz VS 2.2GHz!!!

    If you ask why disable HT? Simple: because the game will never use 16 virtual cores, and the advantage is LOST.

    Please run the game tests again with HT turned off.

    And to stress the CPU more, TEST SLI as well; we want the i7 6700K to bottleneck!

    THANKS

    Oh, and Intel should release an i5 Broadwell-E CPU, 8 cores without HT, CHEAPER and BETTER for GAMERS.
  • RussianSensation - Tuesday, May 31, 2016 - link

    Nice try, but no cigar. A 6700K @ 4.8GHz + HT is the optimal gaming CPU. No current game even scales linearly across 6 cores + HT. An 8-10 core CPU with a slower architecture would lose badly to an i7 6700K @ 4.8GHz + DDR4-4000.

    There is not a chance a 6950X @ 4.4GHz can keep up with an overclocked 6700K.
    http://www.techspot.com/article/1171-ddr4-4000-mhz...

    By the time games use 8-10 cores, we'll be on the PS5/XB2 generation in 2020-2021 and Ice Lake-E. Broadwell-E's 8-10 cores will be outdated.
  • adamod - Wednesday, June 1, 2016 - link

    http://www.pcworld.com/article/3039552/hardware/te...
    Look at the Ashes bench.
  • mapesdhs - Thursday, June 9, 2016 - link

    Oh grud, not here as well! You've been banging on about this HT thing on Tom's for ages.
  • someonesomewherelse - Thursday, September 1, 2016 - link

    Or fix the CPU scheduler to properly schedule threads for the best throughput, latency, or power efficiency depending on what the thread/program is doing (compressing a lot of data for archival doesn't need low latency; games/multimedia/UIs/... need low latency), the state of the machine (if you are connected to the power grid and not overheating, power efficiency is not as important as when you are on battery power in the middle of nowhere, or when it's summer and you have no AC), and user preferences (I might still be cheap and want lower total power consumption even if it means slightly less performance for things running in the background, or I could be using electric heating so power efficiency doesn't matter since the heat isn't wasted, or maybe I want the things I run to run better than the things my friend is running over the network - I mean, I would rather have my UI and videos smooth than his).
  • bji - Tuesday, May 31, 2016 - link

    One thing I never understand: what does "uncore" mean? It sounds like it's all the stuff that's not part of the cores. And yet, we have "Queue" and "I/O" listed separately. Why aren't those things "uncore"?
  • keeepcool - Tuesday, May 31, 2016 - link

    Wikipedia knows what it is:
    https://en.wikipedia.org/wiki/Uncore

    :)

    Also, the Sparc is buried somewhere in that mess.
  • bji - Tuesday, May 31, 2016 - link

    OK thanks for the pointer. So Uncore is Intel's way of referring to specific parts of the CPU that interface directly with the cores and have to be very high performance, mostly managing inter-core communication functions like cache coherency and memory access, and some high performance interconnect stuff like Thunderbolt. Not sure why they bother to have a specific name for these sections, instead of calling them out directly when they are interesting, but whatever.
  • Morawka - Tuesday, May 31, 2016 - link

    Wow, price increases across the board. Even the 8-core got almost a $100 increase. Lame.

    The 6950X was supposed to be $999, and the 8-core $600, but I see Intel doesn't have any competition, so everyone has to pay.

    I'd wait for Skylake-E this fall/winter.
  • Morawka - Tuesday, May 31, 2016 - link

    Newegg sells all Xeons. Even the 20-core versions. No need to ask a system builder to order one for you.
  • mooninite - Tuesday, May 31, 2016 - link

    O RLY? Find me an E3-1260L v5 on Newegg.
  • James S - Tuesday, May 31, 2016 - link

    I'll give you that Newegg doesn't sell every single CPU made, but they do have the Xeon E3-1270 v5; they don't have the low-power variants, as you already know. One could simply snag one off Dell.com though.
  • legolasyiu - Tuesday, May 31, 2016 - link

    You should be able to overclock much better using a Strix X99 Gaming or Rampage V Extreme / Edition 10. I was clocking 4.3GHz without issues on a 6800K and will push for 4.4GHz soon.
  • ezcameron76 - Tuesday, May 31, 2016 - link

    So I just bought the 6800K instead of the 5820K. After reading this I feel like I made the wrong call and could have saved some money by getting the Haswell-E. Thoughts on this? I don't want to make a mistake and the new one has just shipped. I mainly play games but do some creation as well. I am redoing my PC and don't want to make a bad call if the 6800K can't overclock more than the Haswell 5820K. Thoughts please, everyone share.
  • ShieTar - Tuesday, May 31, 2016 - link

    You could have saved some money by not ordering a CPU on the very first day of availability. Other than that, there is no downside to having a 6800K instead of a 5820K; it's just not vastly faster.
  • ezcameron76 - Tuesday, May 31, 2016 - link

    What would have been the difference in getting it, say, next week, or when would you say would have been better? I have to have the PC built by this Thursday so I didn't have the time, but I wouldn't think the price would change much in a short amount of time.
  • ShieTar - Tuesday, May 31, 2016 - link

    About $50, I assume. I don't know how to find this info for the US, but in Germany prices have dropped by 30€ from yesterday to today:
    http://geizhals.eu/?phist=1394467
  • ezcameron76 - Tuesday, May 31, 2016 - link

    I paid $450 on Newegg.
  • ezcameron76 - Tuesday, May 31, 2016 - link

    My question is: should I return the 6800K for the 5820K, as it will overclock better, or not?
  • ShieTar - Wednesday, June 1, 2016 - link

    Not really; the first 10% of extra OC headroom will be spent compensating for the IPC improvement anyway. And with virtually no CPU-limited games out there, you don't really need to OC anyway.
  • TEAMSWITCHER - Tuesday, May 31, 2016 - link

    I'm not sure. Most of the reviews are overclocking the 10-core 6950X. I'm wondering if there will be some sweet 6-core parts (6800K and 6850K) that overclock great because the four disabled cores are used to separate the six functional cores. I'm speculating that having active cores separated by inactive cores might help to impede thermal accumulation.

    It's a funny thought I had today, but I don't know of any way to find out which cores are disabled.
  • HighTech4US - Tuesday, May 31, 2016 - link

    Where the heck is the GTX 1080 review?

    It's been weeks since the NDA was lifted on it, and now with the NDA lifted on the GTX 1070 there's nothing again.

    Since there was time to do this review, excuses about not having enough time to do a proper review won't hold water.
  • fanofanand - Tuesday, May 31, 2016 - link

    960 *cough*
  • JanSolo242 - Tuesday, May 31, 2016 - link

    For a mere $4,115, why not order a 22 core Xeon? :-D

    http://ark.intel.com/products/91317/Intel-Xeon-Pro...
  • piroroadkill - Tuesday, May 31, 2016 - link

    Shitty pricing, lame increases in performance.
    I really hope Zen lights a fire under their ass.
  • aggrokalle - Tuesday, May 31, 2016 - link

    Hi Ian, is the Intel Thermal Solution TS13A compatible with the thinner package of Broadwell-E? I didn't find any information on the Intel website. Don't wanna break my shiny new toy :p
  • Godofmosquitos - Tuesday, May 31, 2016 - link

    Honestly - as the article itself mentions, the EE line of CPUs has just fallen too far behind to be considered a serious option for enthusiasts. At least IMO. I still have a 980X clocked at 4 GHz, and it runs everything with stellar performance. That was the last time Intel had an EE CPU that was ahead of the curve. Also, as PCIe 4.0 will seemingly require a new platform, due to lacking backwards compatibility of PCIe 4.0 cards with 3.0 slots, I seriously cannot see anything justifying an upgrade before '18, when PCIe 4.0 is out, we're on 10nm, and Intel Optane disks are readily available.
  • Godofmosquitos - Tuesday, May 31, 2016 - link

    Or well, for "the average" enthusiast at least ^^
  • Impulses - Tuesday, May 31, 2016 - link

    What I really don't get is why the 6800K is still saddled with a lower lane count... Aren't the price hikes and the lag to market enough of an HEDT differentiator? Is the lower lane count something that helps yields?

    They've gone backwards, from having an attractively priced 5820K that could lure some Z170/6700K buyers to basically making HEDT as irrelevant as possible unless you absolutely need the extra cores.

    A lot of enthusiasts who don't NEED 6+ cores but COULD benefit from them (photo/video work on the side, etc.) would be all over a more attractive and less ignored HEDT lineup.
  • rhysiam - Wednesday, June 1, 2016 - link

    I totally agree. I'm due for an upgrade and put myself exactly in that category of photo + video work on the side, being "lured" towards a 5820K. But the price hikes, lag to market and practically zero performance gain seem to have pushed the "HEDT" line from enthusiast to niche. Reading this review, I don't want anything to do with it.

    We've waited almost 2 years since the Haswell-E launch and the "update" offers significantly worse price/performance ratios.

    Especially with the Skylake platform having plenty of PCIe lanes: with the right motherboard you're covered for 2 graphics cards (or 1 plus a RAID controller), several PCIe SSDs and a 10Gb/s NIC... plenty for the foreseeable future. Intel is making the cost of these 6+ core CPUs (both in terms of $$s and in the sacrifice you have to make in single-threaded performance) larger and larger.

    My worry is that pushing up HEDT prices will allow them to bump up the prices of high end mainstream CPUs. Let's see how much the overclockable Kaby Lake i7 costs shall we? I sure hope Zen can shake things up.
  • adamod - Wednesday, June 1, 2016 - link

    market segmentation....no other reason
  • mapesdhs - Thursday, June 9, 2016 - link

    Doubly backwards given the 4820K was a 40-lane chip, whereas the 5820K isn't. It means a 4820K/X79 can do things for gaming with SLI/CF (and still have lanes for storage and other stuff) which a 5820K and 6800K can't.
  • rodmunch69 - Tuesday, May 31, 2016 - link

    I had a 980X and then upgraded to a 3930K... 4 years ago. The 2 extra cores were great and useful, but otherwise there wasn't a big difference between the chips. Reading this, it doesn't seem like the base 6800K is really much of an upgrade over a 3930K. I've been wanting to upgrade if there was a reason to do so, but Intel again isn't giving me one. One thing, however: with the 980X it's the motherboards and related chipsets where you'd see a big difference moving to something like a 6800K, and that would be my reason to move off a 980X, but only if I was looking for a reason to move.
  • rodmunch69 - Tuesday, May 31, 2016 - link

    I've had a 3930k for 4 years now and I still don't feel much of a need to upgrade. What is going on with Intel? They seriously need a competitor to kick them in the rear and push them ahead.
  • Witek - Thursday, June 16, 2016 - link

    @rodmunch69 - Yeah. I'm also almost 4 years in on a 3930K (at 3.2 GHz -> 4.2 GHz), which I got for something like $350 at the time. And I still do not see anything better in a similar price bracket. The x86 arch is shit, with limited instruction issue, substandard compilers targeting generic CPUs, and power-hungry circuits trying to work around the arch's limitations.

    The prices on these new CPUs are shit, and Intel does that because it has no competition in the high-performance x86 market right now. I understand making a 14nm chip is more costly than previous generations, but still, it is crazy. There is no point in using 14nm for desktop if it doesn't provide a substantial performance boost or power saving at a similar price. I am seriously looking into ARM64, POWER8, MIPS and other architectures (especially high-core-count ones, or ones still in development, like Out-of-the-Box Computing's Mill CPU) that can break this trend (at least on Linux). SPARC looks like the highest performance at the moment, but the prices are crazy as hell. Xeon Phi is also interesting for parallelized workloads, but the price is still very high (due to the smaller target market).

    Zen will help bring these prices in check. A 16-core Zen (32 threads) at $1000 would be awesome, and would bring all these Intel i7 CPU prices down substantially too.
  • vivs26 - Tuesday, May 31, 2016 - link

    Looking at the single-threaded and multi-threaded performance, I can't help but be reminded of Amdahl's law. The performance you can extract from your system is only as much as your workload allows...
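    As a rough illustration of what that means in numbers (the parallel fractions below are made-up examples, not measurements of any real workload), a minimal C sketch:

        #include <stdio.h>

        /* Amdahl's law: speedup when a fraction p of the work scales across n cores
           and the rest stays serial. Illustrative numbers only. */
        static double amdahl(double p, int n) {
            return 1.0 / ((1.0 - p) + p / (double)n);
        }

        int main(void) {
            double fractions[] = { 0.50, 0.90, 0.95 };
            for (int i = 0; i < 3; i++)
                printf("parallel fraction %.2f: 10 cores -> %.2fx speedup\n",
                       fractions[i], amdahl(fractions[i], 10));
            /* Prints roughly 1.82x, 5.26x and 6.90x: even a 90% parallel workload
               gets nowhere near 10x from 10 cores, so single-thread speed still matters. */
            return 0;
        }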
  • ithehappy - Tuesday, May 31, 2016 - link

    I am still on an i7 950, just with a GTX 970. Will it be worth it if I get the 6800K, or shall I still wait for Skylake-E? Gaming is my main priority, along with low power consumption, because mine is on nearly 24x7.
  • rhysiam - Wednesday, June 1, 2016 - link

    If all you care about is gaming, get an i7 6700K. There's almost no value proposition in these CPUs anymore unless you absolutely need more than 4 cores. Very few games have been shown to benefit from more than 4 cores at all, and the hyperthreading on the i7 6700K will be there to help if (probably when) games finally start to scale better. The single threaded performance of the 6700K is also significantly better. If you have high end graphics and a 120/144hz display, where CPU performance can sometimes start to matter, the 6700K is actually the faster CPU, and would net you higher fps than any of these overpriced Broadwell-E CPUs.

    The only argument you could make is that at some point in the future games might start to benefit from 6+ cores. We've already seen in gaming benchmarks of i5s vs i3s vs Pentiums that hyperthreading does a surprisingly good job at mitigating the impact of a game running more threads than you have CPU cores. There's a very good chance that the 4 Core + HT of an i7 6700K will hold its own in gaming for a long time to come. Even if that turns out to not be the case, you'd be much better off in the long run just upgrading your machine when you need it rather than sinking money into a Broadwell-E system now.
  • mapesdhs - Thursday, June 9, 2016 - link

    Re your power consumption: if that's because you care about long-term cost, then there's a lot of utility in used hardware such as a 3930K. It'll give a very good boost, it's much easier to OC than the later models, it's cheap, the platform supports broad SLI/CF, and it'd take years for the slightly higher power consumption of a 4.8 GHz 3930K to wipe out the huge cost saving (they BIN for 96 UKP on eBay UK atm); see the rough numbers sketched below. It'll also better exploit future improvements in game design that support more cores.

    If you do want something new though, then rhysiam is right: the 6700K is plenty, or indeed a 4790K.

    Or go for something in between, like a used 4930K (costs a bit more, but higher IPC and some other benefits over SB-E).
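    A back-of-the-envelope sketch of that running-cost point; every number here is an assumption chosen for illustration (extra draw, hours of use, electricity price), not a measurement:

        #include <stdio.h>

        int main(void) {
            /* Assumptions: ~30 W extra draw for an overclocked 3930K vs a newer
               6-core, 8 hours of load per day, electricity at ~0.15 GBP/kWh. */
            double extra_watts   = 30.0;
            double hours_per_day = 8.0;
            double gbp_per_kwh   = 0.15;
            double kwh_per_year  = extra_watts * hours_per_day * 365.0 / 1000.0;
            printf("Extra running cost: ~%.0f GBP/year\n", kwh_per_year * gbp_per_kwh);
            /* ~13 GBP/year, so a CPU that costs hundreds less takes many years to catch up. */
            return 0;
        }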
  • asmian - Tuesday, May 31, 2016 - link

    Quite apart from cost/performance, the key question for some is whether this last version of Broadwell has had the SGX extensions, introduced with Skylake, retrofitted. Was this feature left out because it wasn't part of the original Broadwell platform? A (preferable) lack of SGX would mean this is the last secure-from-remote-snooping Intel processor release; otherwise the last will unfortunately be Haswell/Haswell-E.

    Anandtech has been conspicuously silent on SGX and why it is a privacy nightmare for users, who would be unable to monitor or detect exactly what software may be secretly running on their processors due to a by-design inability to snoop on the process's in-use memory. The benign use cases usually put forward hardly outweigh the risk of adoption by viruses, trojans and user-snooping malware of government origin, able to obfuscate their own remote loading, which would potentially be immune from detection by any means (likely including by the AV and anti-malware industry).

    For more on why SGX is of concern read http://theinvisiblethings.blogspot.co.uk/2013_08_0... and http://theinvisiblethings.blogspot.co.uk/2013_09_0...

    Please confirm definitively whether Broadwell-E has SGX or not.
  • Jvboom - Tuesday, May 31, 2016 - link

    This is so disappointing. Every time a new release comes out I come on here hoping to justify buying one. The numbers just aren't there for the $$.
  • Tchamber - Tuesday, May 31, 2016 - link

    Is anyone else disappointed that a new, cutting-edge CPU consumes 10W more than my 2010 i7 970 with the same number of cores? Add to that, prices go up faster than performance does. That makes it nice to see that CPUs don't make much difference in gaming. There are plenty of features I'd like, but I can wait till Zen comes out. In all honesty, I'll probably buy Zen just to support the underdog.
  • krypto1300 - Tuesday, May 31, 2016 - link

    Man, and I'm still getting by with my workhorse 1366 platform from 6 years ago. Running a Xeon X5650 @ 3.66 GHz, 16GB of DDR3-1866 and a GTX 970! Everything still runs great! Doom and Project Cars do 1440p @ 60fps no problem!
  • mapesdhs - Thursday, June 9, 2016 - link

    A good example that shows the continued utility of what IMO was the last really ground-breaking new chipset release. I can remember reading every review I could find at the time about Nehalem and X58. Not done that since.

    Btw, are you by any chance using a Gigabyte board? 8)
  • hoohoo - Tuesday, May 31, 2016 - link

    Single thread performance lower than previous generation, but more cores. Sadly the price is totally out of line.

    I want to upgrade from an i7-3820, these things do offer the bang but the buck is definitely missing.
  • hoohoo - Tuesday, May 31, 2016 - link

    Shouldn't they be using their GTX 1080? The games do not seem to be CPU bound. Am I reading the charts wrong?
  • pavag - Tuesday, May 31, 2016 - link

    "For $1721, [...] can invest in either the 14-core E5-2680 v4 [...] or get double the cores in a 2P system and using the E5-2640 v4 processor: a 10-core 2.4 GHz/3.4 "

    Ok. You said it, you own it.

    Do the benchmarks and compare. I actually need it.
  • Seekmore - Wednesday, June 1, 2016 - link

    Intel i7-6950X: like it already? It is overpriced and the most expensive desktop processor to date, yet an excellent processor with strong performance for the features it supports, ranking near the top of its category.
    http://www.comparecpus.com/en/intel-i7-5960x-vs-am...
    Get the other details of the Intel i7-6950X Extreme Edition to find out why everybody is looking for it.
  • Spunjji - Wednesday, June 1, 2016 - link

    I saw the prices and threw up a little in my mouth. This is market capitalism at its nadir :(
  • Spunjji - Wednesday, June 1, 2016 - link

    ...specifically with respect to the tech industry, obviously - didn't mean to be that hyperbolic.
  • stimudent - Wednesday, June 1, 2016 - link

    I just need to do my banking and watch porn. 56 cores should do the trick.
  • Gothmoth - Thursday, June 2, 2016 - link

    Multitasking is the word.

    I use Lightroom, Photoshop, and Autopano Giga at the same time very often... every core helps.
    Or I render out a video with Premiere while I edit a composition in After Effects.
  • iGigaflop - Thursday, June 2, 2016 - link

    I'll keep my 5820K. I was kicking myself for not waiting 2 months for the 6800K, but while Broadwell might be a little faster per clock, it doesn't overclock as well as Haswell. I run my 5820K at 4.7 GHz at 1.313 V; I think mine is a great overclocker, and it never goes above the mid-70s C. For everyday use I keep it at 4.2 GHz and it stays around 50C. I'm using an H100 v2 and a CM Storm Stryker case. I think pretty much every 5820K should do 4.3-4.5 GHz. I'm running it off an ASUS X99 Deluxe board. I just hope they keep the X99 socket (2011-v3) for Skylake-E.
  • Jackie60 - Friday, June 3, 2016 - link

    Why do you bother with the pointless GPU-limited benchmarks? It's a total waste of time and effort and tells us nothing.
  • sor - Sunday, June 5, 2016 - link

    Actually, without those benchmarks I wouldn't have known for sure that we were GPU limited or that CPU makes almost NO difference (I might have expected a few more FPS). I also found it interesting that one game stood out as being CPU limited.
  • IUU - Sunday, June 5, 2016 - link

    All good and awesome, but Intel's persistence in keeping many-core CPUs as niche, expensive products is a dangerous strategy that may ultimately lead to its downfall.

    If the IT world would like to see renewed growth, high-performance computing should enter the lives of ordinary Joes. Plenty of apps could make their way into our lives via many-core CPUs:
    games with really improved AI, local voice and image recognition, reading comprehension, advanced text editing, advanced 3D printing, etc.

    Looking down on the needs of ordinary people is nonsensical, for it was the needs of those "simple Joes" that led to the many-core CPUs and advanced GPUs the HPC world now uses so heavily. No further progress can come if the same old strategy is abandoned.
  • Motion2082 - Monday, June 6, 2016 - link

    What other Xeons would you recommend for 8 cores?
  • craveable - Tuesday, June 7, 2016 - link

    Seriously, guys. My personal opinion is to skip Broadwell-E unless you absolutely cannot wait until Q1 2017, which is a realistic timeframe for when Skylake-E will be released. A journalist spotted a Gigabyte motherboard at Computex 2016 suited for Skylake-E, sporting a new LGA 3647 socket and a 6-channel memory controller. Memory bandwidth has always been a crucial selling point of HEDT CPUs, so a transition from 4 channels (current) to 6 (Skylake-E) is an important incremental update.
  • craveable - Tuesday, June 7, 2016 - link

    However, further study of the leaks revealed that the next HEDT parts will be released in Q2 2017 and will probably be named Skylake-X. In the leaked roadmap the Skylake-X availability timeframe is strangely the same as for Kaby Lake-X.
  • craveable - Wednesday, June 8, 2016 - link

    However #2: it's likely that only the server part of Skylake (EP) will get LGA 3647 and the 6-channel controller, and that Skylake-X will get 4 channels and the same LGA 2011-3.
  • galta - Wednesday, June 8, 2016 - link

    There is no point in telling someone they should skip upgrading. Budget constraints aside, if your processor is not good enough, you have to upgrade; if it still performs as you need, you do not need to upgrade, even if the new generation has the same price and 50% more performance.
    It is said that Intel is trying to milk us with a USD 1,700+ CPU and that not all software (games) is fully optimized for more than 2-4 cores, but if you still run an i7 9xx series, it is probably time to upgrade, even if Broadwell-E's increase in performance is not huge.
  • mapesdhs - Thursday, June 9, 2016 - link

    That makes no sense. You state that the BW-E performance increase isn't that much, while saying someone with an X58 6-core should still upgrade anyway. What for? Why would someone on a constrained budget spend so much for so little speed gain? Given the platform differences, X58 users would probably benefit more from newer storage tech such as M.2 and newer USB 3.x, in which case a 5930K on a decent board would be more sensible (or a 5820K if they don't need the PCIe lanes). What you've suggested just sounds like upgrading for its own sake. These days there are far more nuanced options available, especially from the used market.
  • fifa17 - Sunday, June 12, 2016 - link

    I'm putting a system together and I'm stuck between Core i7-6850K and Core i7-6800K on a ROG STRIX X99 GAMING motherboard. The only confusing point is the number of PCI Express Lanes which is 28 for i7-6800K and 40 for the other. I've pre ordered the ASUS ROG Strix GeForce® GTX 1080 and don't plan on adding a second one in the future. I'm also not getting an optical drive of any sort and no HDDs. Just a 1TB Samsung 850 PRO SSD. Which CPU should I choose? Any ideas about these components?
  • legolasyiu - Wednesday, June 29, 2016 - link

    It would be best to get the Core i7-6850K with 40 lanes, to run the GTX 1080 at x16 alongside the 850 Pro.
  • h4gfish - Monday, June 13, 2016 - link

    > This combination of colors tends to go down well with whoever loves gold,
    > perhaps indicating that Intel is looking at a new kind of premium customer.

    Whom do you mean by this 'new kind of premium customer'? Indians? Middle easterners? New York Italians? Rap stars? Or is it just that enterprise buyers don't tend to pay much attention to packaging? Maybe it's just me and my stereotypes, but I heard a 'sneer' in that sentence that is possibly unintentional.
  • Witek - Thursday, June 16, 2016 - link

    I still think an overclocked i7-3930K provides awesome value; the 6850K is more expensive, barely faster, and not that much more power efficient. The only good reasons to move are if you care about AVX2, a few additional special instructions (like ADX, FMA3, etc.), improved AES speeds and random number generation, or PCIe 3.0 (the 3930K only supports PCIe 2.0). But in generic applications the speed improvements are 10-15% on average. Not worth 2-3 times the money.
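    As a quick, hypothetical sketch of what FMA3 buys you (a toy example compiled with -mfma, not anything from the article): one fused multiply-add per element instead of a separate multiply and add.

        #include <immintrin.h>
        #include <stdio.h>

        int main(void) {
            __m256 a = _mm256_set1_ps(2.0f);
            __m256 b = _mm256_set1_ps(3.0f);
            __m256 c = _mm256_set1_ps(1.0f);
            __m256 r = _mm256_fmadd_ps(a, b, c);  /* r = a*b + c in a single instruction */
            float out[8];
            _mm256_storeu_ps(out, r);
            printf("%f\n", out[0]);               /* prints 7.000000 */
            return 0;
        }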

    The price of the 6950X is a joke, and it is probably better to get a Xeon (or two) at that price point. I understand making these chips is very expensive, but it is not practical to sell them at these prices.
  • SanX - Tuesday, June 21, 2016 - link

    Did I understand correctly that the article says that if you use AVX instructions the overclocked frequency must be up to 300 MHz lower, which basically means that an overclock is not possible? And did Ian use water cooling, or was it all on air?
  • SlyNine - Wednesday, June 22, 2016 - link

    Pick up chicks easier. They loves the core count.
  • spooh - Thursday, August 4, 2016 - link

    Any clues on whether the Anniversary Update supports Turbo Boost Max?
