477 Comments

  • Zoeff - Wednesday, August 5, 2015 - link

    Thank you for the timely review! :)
  • Whatchagot - Wednesday, August 5, 2015 - link

    As a Sandy Bridge owner I've been really looking forward to this. Sadly, it feels like I've been waiting for the main feature and only the trailer has arrived. Great write-up as always.
  • TelstarTOS - Wednesday, August 5, 2015 - link

    Wonder if I'll make the upgrade or wait for Skylake-E next spring (more likely).
  • Despoiler - Wednesday, August 5, 2015 - link

    I'm waiting to see what AMD's Zen brings to the table next year.
  • darkfalz - Wednesday, August 5, 2015 - link

    Who knows, maybe they'll catch Nehalem?
  • Refuge - Wednesday, August 5, 2015 - link

    Hey, if they catch up to Sandy Bridge they won't be too far off the mark to retake the crown, apparently! ROFL....
  • Michael Bay - Wednesday, August 5, 2015 - link

    In their dreams. The moment that happens, Intel will trot out something actually new and destroy everything again.
  • lilmoe - Wednesday, August 5, 2015 - link

    But you wouldn't want that to happen, since you love how much Intel is milking its customers, right?

    I mean, who the hell cares for the benefits that come from competition...... Silly me.
  • Michael Bay - Thursday, August 6, 2015 - link

    You're not talking to me, but to your own projections.
  • cykodrone - Friday, August 21, 2015 - link

    Agreed, the second AMD goes under, Intel will announce $1000 'consumer' CPUs, enjoy Intel phanboiz. :P
  • taltamir - Friday, October 23, 2015 - link

    Recognizing reality as it is, and being willing to admit that AMD is a joke, does not make someone an intel fanboy nor does it mean they want intel to win and AMD to fold.
  • Samus - Wednesday, August 5, 2015 - link

    This launch and the performance of Skylake over Haswell/Broadwell is entirely unexpected because it is wholly unnecessary. The IPC improvement is upward of 10% in some cases, when normally it has been 4-6% in the past. It's amazing that the IPC improvement over Nehalem is almost 50% while using nearly half the power. They are finally progressing after plodding along since Sandy Bridge.
  • ptmmac - Saturday, August 8, 2015 - link

    Intel has been turning a double-decked supertanker to catch mobile chips on dozens of smaller platforms. Notably, Apple is riding on a trialled super yacht and is leading the pack. The real race going on right now is who will build the first photonics-based chip and actually make money selling it. Intel is in that race, but we don't really have much data as to who will take the lead there. The real race is heading towards where the puck will be in 10 years. This is like watching the America's Cup in the '50s. It is not important to the average person.
  • Jaybus - Monday, August 10, 2015 - link

    Expect hybrid chips first. These will have photonic i/o with electronic cores. This will allow an inter-chip [serial] bus at core-clock speeds, drastically reducing the need for on-chip caching and replacing 64 (or more) traces from CPU to DRAM with a single optical trace. L3 (maybe L2) could likely be eliminated, freeing up real estate and reducing power. Essentially, it allows using DRAM modules, peripheral chips, and even GPUs and other CPUs as if they were all on-chip. Actual photonic cores would come later, perhaps much later.
  • CaedenV - Wednesday, August 5, 2015 - link

    And we would all buy that processor rather than eternally waiting in purgatory. I really hope AMD puts out something amazing, even if I am not going to buy it.
  • TheGladiator2212 - Friday, October 16, 2020 - link

    Yup...
    This comment aged badly
  • Mariosti - Friday, September 10, 2021 - link

    Well, this comment didn't age well.
  • prisonerX - Thursday, August 6, 2015 - link

    I think the funniest thing is how people bag AMD and praise Intel while paying through the nose for CPUs that are marginally faster (or marginally slower) than last generation.

    It's especially funny since Intel is selling its hottest chips (TDP wise, compared to other CPUs it makes) to the "mainstream" while wasting a huge % of the die on a useless integrated GPU that no-one who is willing to pay actually uses.

    I always buy AMD because I support competition, it gives me much better value for my money, provides more balanced and better matched performance and because I'm not a child I have no need for bragging rights about the single-threaded performance of my CPU that I don't need.
  • D. Lister - Thursday, August 6, 2015 - link

    "I always buy AMD because I support competition, it gives me much better value for my money, provides more balanced and batter matched performance and because I'm not a child I have no need for bragging rights about the singe threaded performance of my CPU that I don't need."

    That's right... children brag about single-thread performance (was there anyone in this section actually doing that though?). Adults, on the other hand apparently, brag about several things simultaneously, like the better performance per dollar of their purchase, and having a superior sense of maturity, morality, economics and technology.

    You sir, are duly nominated for the AnandTech comment section's esteemed "Irony of The Month" award for August '15... bravo!
  • Eugene86 - Thursday, August 6, 2015 - link

    Well he's gotta justify that purchase decision to himself somehow...
  • MapRef41N93W - Friday, August 7, 2015 - link

    Intel users don't have to brag about single threaded performance. Intel CPUs destroy AMD in multi-threaded as well.....
  • SIDESIDE - Sunday, August 9, 2015 - link

    Actually, you are a child. As for you throwing gasoline on the fire here in the Intel vs. AMD debate: THERE IS NO DEBATE. Intel is literally twice as efficient and powerful as AMD, and why wouldn't it be? They are twice as old a company and have a lunch budget bigger than AMD's R&D budget. AMD's are a budget line of processors, so you buy budget because money is tight, good for you. I run a video company and will gladly pay an extra $150 for twice as fast rendering all year. I wish AMD the best because competition is ALWAYS a good thing. But you, prisonerX, clearly have your head up your A**
  • medi03 - Thursday, August 6, 2015 - link

    They did that quite a while ago.
  • Artas1984 - Thursday, August 6, 2015 - link

    WELL SAID!!!
  • SkOrPn - Tuesday, December 13, 2016 - link

    Zen appears to be matching the $1050 i7-6900K. I would say that is far better than Nehalem.
  • mmrezaie - Wednesday, August 5, 2015 - link

    Zen needs more than 40% improvement to be competent, but I am hoping as well.
  • mdriftmeyer - Wednesday, August 5, 2015 - link

    The word you're looking for is competitive.
  • Peichen - Wednesday, August 5, 2015 - link

    Competent, competitive. AMD is neither at the moment so both of you are correct.
  • prisonerX - Thursday, August 6, 2015 - link

    Actually AMD is very competent given how much money they have to work with. AMD would be much more competitive too now if it were not for Intel's well documented illegal practices against AMD.

    It's like a thief robbed your home and you're praising the fact that it's great that you can go to the pawn shop and buy what he stole from you.
  • mapesdhs - Thursday, August 6, 2015 - link

    Wow, blaming years of terrible decisions on Intel... that's a new one. It wasn't Intel that made AMD adopt automated design tools, or ignore the much easier, faster and obvious option of releasing a tweaked 8-core Ph2. AMD has made massive losses year after year. Their debts are awful. Blaming all this on Intel is just nuts.
  • medi03 - Thursday, August 6, 2015 - link

    Yeah. Blaming Intel because HP didn't want to use FASTER AMD CPUs FOR FREE, fearing Intel's illegal revenge, is just nuts.

    AMD Athlon 64s beat Intel in all regards; they were faster, cheaper and less power hungry. Yet Intel was selling several times more Prescotts.

    Not being able to profit even in a situation where you have the superior product (despite a much more modest R&D budget)... yeah, why blame Intel.
  • MrBungle123 - Sunday, August 9, 2015 - link

    In the Athlon 64 days, yes, AMD had a better product but the cold hard truth behind the curtain was that AMD didn't have the manufacturing capacity to supply everyone that Intel was feeding chips to.
  • silverblue - Thursday, August 6, 2015 - link

    A "tweaked 8-core Ph2"? Putting aside the fact that significant changes would've been required to the fetch and retire hardware (the integer units themselves were very capable but were underutilised), a better IMC and all the modern instruction sets that K10 didn't support, AMD had already developed its replacement. It probably would've buried them to have to shelve Bulldozer (twice, it turns out) and redevelop what was essentially a 12-year old micro-architecture.

    AMD were under pressure to deliver Bulldozer hence the cutting of corners and the decision to go with GF's poor 32nm process as they simply didn't have any alternative (plus I imagine they were promised far more than GF could deliver). Phenom II was not enough against Nehalem, let alone Sandy Bridge.

    Blaming Intel doesn't help either as AMD couldn't exactly saturate the market with their products even when they were fabbing them themselves, however I think the huge drop in mainstream CPU prices when Core 2 was released along with the huge price paid for ATi did more damage than any bribing of retailers and systems manufacturers.
  • nikaldro - Wednesday, August 5, 2015 - link

    40% over excavator, with 8 cores, good clockspeeds and good pricing doesn't sound that bad. I'll wait till Zen comes out, then decide.
  • Spoelie - Thursday, August 6, 2015 - link

    The IPC difference between Piledriver and Skylake amounts to 80%... Let's hope Excavator's IPC is better than anticipated and 40% is sandbagging it a bit.

    Given AMD's track record of overpromising and underdelivering, I'm afraid Zen will massively disappoint.
  • Asomething - Thursday, August 6, 2015 - link

    Well, it will only be behind by something like 15-25% if the difference between Piledriver and Skylake is 80%, since Piledriver to Excavator is supposed to be a good 20% jump. If AMD can manage to catch up in any meaningful way and make chips that can touch 5GHz, then things might turn out OK.
  • mapesdhs - Thursday, August 6, 2015 - link

    Catching up will not be good enough. They need to be usefully competitive to pull people away from Intel into a platform switch, especially business, which has to think about this sort of thing for the long haul, and AMD's track record has been pretty woeful in this regard. I hope they can bring it to the table with Zen, but I'll believe it when I see it. It's highly unlikely Intel isn't planning to either slash its prices or push up performance, etc., if they need to when Zen comes out, especially for consumer CPUs. We know what's really possible based on how many cores, TDP, clock rates, etc. are used for the Xeons, but that potential just hasn't been put into a consumer chip yet.
    Remember, Intel could have released an 8-core for X79, but they didn't because there was no need; indeed the 3930K *is* an 8-core, just with 2 cores disabled (read the reviews). Ever since then, again and again, Intel has held back what it's perfectly capable of producing if it wanted to. The low clock of the 5960X is yet another example; it could easily be much higher.
  • MapRef41N93W - Friday, August 7, 2015 - link

    You're assuming it's going to be a flat 40% over Excavator and not a best case scenario 40% (like every single AMD future performance projection always is...). It's more than likely a flat 20% IPC increase which puts it even behind Nehalem IPC wise.

    Top that off with the fact that it's AMD's first FinFET part (look at the penalty Intel paid in clockspeed with the transition to FinFET with IB/HW) and a transition to a new scalable uARCH (again, look at the clockspeed hit Intel took when going from Netburst to the scalable Core arch, very similar to what AMD is doing now actually), and I can see Zen parts clocking horribly on top of that. Being on a Samsung node that is designed with low power in mind won't help their case either.

    You may get an 8-core Zen part for $300-$400, but it probably won't clock worth a damn and will end up at 3.5-4GHz on average. So it would be a much worse choice than a 5820K for most people.
  • mapesdhs - Wednesday, August 12, 2015 - link

    Btw, I wasn't assuming anything about Zen, I really haven't a clue how it'll compare to Intel's offerings of the day. I hope it's good, but with all that's happened before, I hope for the best but expect the worst, though I'd like to be wrong.
  • Azix - Friday, August 21, 2015 - link

    You guys are being pretty negative on AMD. AMD tried to do an 8-core chip on 32nm; maybe that was their mistake. The market wasn't even ready, considering how long ago that was and where we are now. I do think Intel got them pretty badly with their cheating.

    The next processors are on a much better process. Based on the process alone we would expect significantly more performance than some seem willing to allow. Not to mention the original architecture was designed on a 32nm process; it's no surprise it would fall that far behind Intel, who is currently on 14nm. As time progresses though, those process jumps will take Intel longer and longer, and AMD will be much closer. Next year will be the first time in a long while these two are on the same (or at least similar) process, and it will last till at least 2017. AMD should be able to pick up some CPU sales next year and hopefully return to profitability. Intel also enjoys DDR4 support.

    Stop pushing old 32nm architectures and crappy motherboards.
  • SkOrPn - Tuesday, December 13, 2016 - link

    Well if you were paying attention to AMD news today, maybe you partially got your answer finally. Jim Keller yet again to the rescue. Ryzen up and take note... AMD is back...
  • CaedenV - Wednesday, August 5, 2015 - link

    Agreed, seems like the only way to get a real performance boost is to up the core count rather than waiting for dramatically more powerful single-core parts to hit the market.
  • kmmatney - Wednesday, August 5, 2015 - link

    If you have an overclocked Sandy Bridge, it seems like a lot of money to spend (new motherboard and memory) for a 30% gain in speed. I personally like to upgrade my GPU and CPU when I can get close to double the performance of the previous hardware. It's a nice improvement here, but nothing earth-shattering - especially considering you need a new motherboard and memory.
  • Midwayman - Wednesday, August 5, 2015 - link

    And right as dx12 is hitting as well. That sandy bridge may live a couple more generations if dx12 lives up to the hype.
  • freaqiedude - Wednesday, August 5, 2015 - link

    Agreed, I really don't see the point of spending money for a 30% speed bump in general (as it's not that much) when the benefit in games is barely a few percent, and my other workloads are fast enough as is.

    If Intel would release a mainstream hexa/octa core I would be all over that, as the things I do that are heavy are all SIMD and thus fully multithreaded, but I can't justify a new PC for 25% extra performance in some areas. With CPU performance becoming less and less relevant for games, that at least is no reason for me to upgrade...
  • Xenonite - Thursday, August 6, 2015 - link

    "If Intel would release a mainstream hexa/octa core I would be all over that, as the things I do that are heavy are all SIMD and thus fully multithreaded, but I can't justify a new pc for 25% extra performance in some area's."

    SIMD actually has absolutely nothing to do with multithreading. SIMD refers to data-level parallelism within a single core, and all that has to be done to make use of it, for a well-coded app, is to recompile with the appropriate compiler flag. If the apps you are interested in have indeed been SIMD optimised, then the new AVX and AVX2 instructions have the potential to DOUBLE your CPU performance. Even if your application has been carefully designed with multi-threading in mind (which very few developers can, let alone are willing to, do) the move from a quad core to a hexa core CPU will yield a best-case performance increase of less than 50%, which is less than half what AVX and AVX2 bring to the table (with AVX-512 having the potential to again provide double the performance of AVX/AVX2).

    Unfortunately it seems that almost all developers simply refuse to support the new AVX instructions, with most apps being compiled for >10 year old SSE or SSE2 processors.

    If someone actually tried, these new processors (actually Haswell and Broadwell too) could easily provide double the performance of Sandy Bridge on integer workloads. When compared to the 900-series Nehalem-based CPUs, the increase would be even greater and applicable to all workloads (integer and floating point).
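
    For what it's worth, here is a minimal sketch of the kind of loop a compiler can auto-vectorise (the function name and exact flags are just illustrative, not something from the review):

    /* saxpy.c - built with auto-vectorisation enabled (e.g. "gcc -O3 -mavx2"),
     * the compiler can emit 256-bit AVX2 instructions that process 8 floats
     * per iteration; built for baseline SSE2 it falls back to 4-wide vectors.
     * No threads are involved either way - the parallelism lives inside each
     * instruction, which is why SIMD and multithreading are separate things. */
    #include <stddef.h>

    void saxpy(float *restrict y, const float *restrict x, float a, size_t n)
    {
        for (size_t i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];   /* simple, dependency-free loop body */
    }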
  • boeush - Thursday, August 6, 2015 - link

    Right, and wrong. SIMD means vector-based calculations. Most code and algorithms do not involve vector math (whether FP or integer), so compiling with or without the appropriate switches will not make much of a difference for the vast majority of programs. That's not to say that certain specialized scenarios can't benefit - but even then you still run into a SIMD version of Amdahl's Law, with speedup being strictly limited to the fraction of the code (and overall CPU time spent) that is vectorizable in the first place.

    Ironically, some of the best vectorizable scenarios are also embarrassingly parallel and suitable for offloading to the GPU (e.g. via OpenCL, or via 3D graphics APIs and programmable shaders) - so with that option now widely available, technologically mature, and performant well beyond any CPU's capability, the practical utility of SSE/AVX is diminished even further. Then there is the fact that a compiler is not really intelligent enough to automatically rewrite your code to take good advantage of AVX; you'd actually have to code/build against hand-optimized AVX-centric libraries in the first place. And lastly, AVX-512 is available only on Xeons (Knights Landing Phi and Skylake), so no developer targeting the consumer base can take advantage of it.
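
    To put rough numbers on that Amdahl-style limit (illustrative figures only, not measurements from the review): with

        overall speedup = 1 / ((1 - p) + p / s)

    a program that spends p = 0.3 of its runtime in vectorisable code gains only about 1 / (0.7 + 0.3/8) ≈ 1.36x from an 8-wide (s = 8) vector unit, no matter how fast the vectorised portion itself runs.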
  • Gonemad - Wednesday, August 5, 2015 - link

    I'm running an i7 920 and was asking myself the same thing, since I'm getting near 60-ish FPS on GTA 5 with everything on at 1080p (more like 1920 x 1200), running with a R9 280. It seems the CPU would be holding the GFX card back, but not on GTA 5.

    Warcraft - who could have guessed - is getting an abysmal 30 FPS just standing still in the Garrison. However, system resources show the GFX card is being pushed, while the CPU barely needs to move.

    I was thinking perhaps the multicore incompatibility on Warcraft would be an issue, but then again the evidence I have shows otherwise. On the other hand, GTA 5, that was created in the multicore era, runs smoothly.

    Either I have an aberrant system, or some i7 920-era benchmarks could help me understand what exactly I need to upgrade. Even specific Warcraft behaviour in benchmarks could help me, but I couldn't find any good, decisive benchmarks on this Blizzard title... not recently.
  • Samus - Wednesday, August 5, 2015 - link

    The problem now with Nehalem and the first-gen i7 in general isn't the CPU, but the X58 chipset and its outdated PCI Express bus and QuickPath creating a bottleneck. The triple-channel memory controller went mostly unsaturated because of the other chipset bottlenecks, which is why it was dropped and (mostly) never reintroduced outside of the enthusiast X99 quad-channel interface.

    For certain applications the i7 920 is, amazingly, still competitive today, but gaming is not one of them. An SLI GTX 570 configuration saturates the bus; I found out first-hand that's about the most you can get out of the platform.
  • D. Lister - Thursday, August 6, 2015 - link

    Well said. The i7 9xx series had a good run, but now, as an enthusiast/gamer in '15, you wouldn't want to go any lower than Sandy Bridge.
  • vdek - Thursday, August 6, 2015 - link

    I'm still running my X58 motherboard. I ended up upgrading to a Xeon X5650 for $75, which is a 6-core 32nm CPU compatible with X58. Overclocked at 4.2GHz on air, the thing has excellent gaming performance; I see absolutely no reason to upgrade to Skylake.
  • bischofs - Thursday, August 6, 2015 - link

    Absolutely agree, My overclocked 920 still runs like a watch after 8 years. Not sure what Intel is doing these days, but lack of competition is really impacting this market.
  • stux - Friday, August 7, 2015 - link

    I upgraded my 920 to a 990X; it runs at about 4.4GHz on air in an XPC chassis and has 6 cores / 12 threads.

    I bought it off ebay cheap, and with an SSD on a SATA3 card I see no reason to upgrade. It works fantastically well, and is pretty much as fast as any modern 4 core machine.
  • Samus - Sunday, October 25, 2015 - link

    If you run a single GPU and don't go ultra-high-end then gaming is still relevant on X58, but it really isn't capable of SLI due to PCIe 2.0 and the lanes being reduced to 8x electrical when more than one 16x-length slot is used. QPI also isn't very efficient by today's standards, and at the time AMD still had a better on-die memory controller. Intel's first attempt was commendable, but it was completely overhauled with Sandy Bridge, which offered virtually the same performance from 2 channels. Anybody who has run dual channel on X58 knows how bad it actually is and why triple channel is needed to keep it competitive with today's platforms.

    I loved X58. It is undoubtedly the most stable platform I'd had since the 440BX. But as I said, by today's standards it makes Sandy Bridge seem groundbreaking, not because of the IPC, but because of the chipset platform. The reduced power consumption, simplicity and overall smaller size and lower cost of 60/70 series chipsets, then the incredibly simplified VRM layout in 80/90 series chipsets (due to the on-die FIVR of Haswell), makes X58 "look" ancient, but as I said, still relevant.

    Just don't load up the PCIe bus. A GPU, sound card and USB 3.0 controller is about as far as you want to go, and for the most part, as far as you need to!
  • vdek - Thursday, August 6, 2015 - link

    Get a Xeon X5650: 6-core CPU, 32nm, will run at 4-4.2GHz all day on air. I upgraded my i7 920 to the X5650 and I couldn't be happier. They go for about $70-80 on Amazon or eBay. I'm planning on keeping my desktop for another 2-3 years; I upgraded the GPU to a GTX 970 and it maxes out most of what I can throw at it. I don't really see my CPU as a bottleneck here.
  • mdw9604 - Tuesday, August 11, 2015 - link

    Can you OC a Xeon 5650?
  • mapesdhs - Wednesday, August 12, 2015 - link

    Of course; back then the main OC'ing method was still BCLK-based, though X58 was a little more involved than P55 (uncore, etc.).
  • LCTR - Saturday, August 15, 2015 - link

    I'd been pondering the 6700K until I saw these posts from 920 users :)
    I use mine for gaming / video editing; it's running non-hyperthreaded at 4.2GHz on air (about 4GHz with HT on).

    I also upgraded my GPU to a 970 and have seen decent gaming performance - if I could jump to a X5650 and stretch things for 1-2 years that'd be great...

    What sort of performance do you see from the X5650? Would it hit 4GHz with HT enabled?
    The Xeon X5650s don't need any special mobo support or anything, do they? I have a Gigabyte GA-EX58-UD5.

  • Nfarce - Wednesday, August 5, 2015 - link

    Well sadly, ever since SB (I have one that's 4 years old, a 2500K, alongside a newer Haswell 4690K), each new tick/tock has not brought much. The days of getting a 50% boost in performance between a few generations are long gone, let alone a 100% boost, or doubling performance. Also keep in mind that there is a reason for this decrease in performance gains: as dies shrink, electron physics starts becoming an issue. Intel has been focusing more on decreased power usage. At some point CPU manufacturers will need to look at an entirely different manufacturing material and design, as silicon and traditional PCB design are coming to their limits.
  • Mr Perfect - Wednesday, August 5, 2015 - link

    It's not even 30% in high-end gaming. There is a clear improvement between SB and Skylake, but why should I build a whole new PC for 5FPS? I can't justify that expense.

    I'd be curious to see the high-end gaming benchmarks rerun with the next generation of GPUs. Will next gen GPUs care more about the CPU, or does DX12 eliminate the difference altogether?
  • mkozakewich - Thursday, August 6, 2015 - link

    It's unlikely you'll be seeing performance double and double again anymore. If you look at what's been going on for the past several years, we're moving to more efficient processes instead of improving performance. I'm sure Intel's end goal is to make powerful CPUs for devices that fit into people's pockets. At that point you might see more effort go into raw performance.
  • edlee - Wednesday, August 5, 2015 - link

    I am not sure why the conclusion to the review makes it seem i7-2600K users should upgrade to this.
    If you are a gamer, there is no 25% improvement in average or minimum frame rate; it's 3-6% at best.

    Is this the future of intel's tock strategy, to give very little improvement to gamers?
  • VeauX - Wednesday, August 5, 2015 - link

    On the gaming side, you'll never see a speed bump if you are not CPU Limited. Once you have a decent CPU, just put your money in GPU, period.
  • Nagorak - Wednesday, August 5, 2015 - link

    CPU is almost irrelevant for games at this point. As games start to take advantage of more cores, older processors are utilized more efficiently, further negating the need to upgrade. DX12 may improve this further.

    I sort of wonder if Intel isn't on a path to some trouble here. There's basically no point for anyone to upgrade their CPU anymore, not even gamers. Other than a few specialized applications the increase in performance just doesn't really matter, if it exists at all.
  • AndrewJacksonZA - Thursday, August 6, 2015 - link

    I also sometimes think that, but then I remember that we developers *WILL* find a way to make use of more computing power.

    Having said that, I still can't quite justify me upgrading from my E6750 and 6670 @ 1240 x 1024. I slapped in an SSD in February last year and it was like I got a brand new machine.

    Chrome and Edge on Win10 lag a teeny tiny bit though, maybe I can use that as my justification... Perhaps a 5960X or a 5930K though - more cores FTW? Or perhaps a 6700K and get it to 5GHz for the rights to claim some "5GHz Daily Driver" epeen... ;-)
  • Zoomer - Friday, August 14, 2015 - link

    Exactly, I reached the opposite conclusion as Ian. There is no point in upgrading even from SB. If you do, stick to DDR3. Only GTA5 benefits from DDR4.

    It would be interesting to see if regular DDR3 sticks can run on Skylake, perhaps by bumping the Vsa voltage. It's not clear if Ian's overclocking tests were with the IGP disabled - it would be interesting to see if disabling the IGP / reducing Fgt, Vgt helps overclocking any.
  • StevoLincolnite - Wednesday, August 5, 2015 - link

    I'm still happily sitting with my Sandy-Bridge-E. Still handles everything you could throw at it just fine... And still gives Intel's $1,000 chips a run for their money whilst sitting at 5Ghz.
  • mapesdhs - Wednesday, August 12, 2015 - link

    Yup, that's why my most recent gaming build was an R4E & 3930K; it cost less than a new HW build and is much quicker overall. My existing SB gaming PC is 5GHz as well (every 2700K I've tried handles 5.0 just fine).
  • leexgx - Monday, August 10, 2015 - link

    Hmm, maybe I can upgrade from my i7-920 now (really, any of the newer Intel CPUs are faster than it).
  • sheeple - Thursday, October 15, 2015 - link

    DON'T BE STUPID SHEEPLE!!! NEW DOES NOT ALWAYS = BETTER!
  • AntDX316 - Thursday, November 12, 2015 - link

    My last great laptop was back in 2007... I've had bad laptops ever since, and when Skylake brought huge power-saving options I decided to get an Alienware 17 R3 with a 4K IGZO screen and a 980M.
  • mikael.skytter - Wednesday, August 5, 2015 - link

    Looking forward to reading up :) Gonna be an awesome review - as always!
    Thanks!
  • Zponxie - Wednesday, August 5, 2015 - link


    In the section "Skylake's Launch Chipset: Z170":

    "In the previous Z97 chipset, there are a total of 18 Flex-IO ports that can flip between PCIe lanes, USB 3.0 ports or SATA 6 Gbps ports. For Z97, this moves up to 26 and can be used in a variety of configurations"

    Was that last Z97 meant to be Z170?

    Also, thank you for another quality review
  • Ryan Smith - Wednesday, August 5, 2015 - link

    "Was that last Z97 meant to be Z170?"

    Indeed it was. Thank you for pointing it out.
  • ingwe - Wednesday, August 5, 2015 - link

    Man I have been waiting for this! Pumped about DDR4.
  • freaqiedude - Wednesday, August 5, 2015 - link

    Why? It basically has no performance impact whatsoever...
    And the power benefits are negligible, and it's more expensive...
    I never understand why people buy premium RAM anymore; it simply has no impact on performance except for very, very specialized benchmarking applications.
  • richardginn - Wednesday, August 5, 2015 - link

    This CPU is a total joke.... Why Intel would make us pay over 300 bucks for a CPU and not put in GT4e graphics is a total fail.
  • A5 - Wednesday, August 5, 2015 - link

    Good integrated graphics are a waste here.
  • richardginn - Wednesday, August 5, 2015 - link

    A full-on waste. If you are going to spend 300 bucks plus on a CPU you are going to spend at least 200 bucks on a GPU, BUTTT when you can throw GT3e graphics into a Broadwell i7-5775C CPU, you must not bring us the pile of crap that is GT2 graphics for the 6700K CPU.
  • 8steve8 - Wednesday, August 5, 2015 - link

    Not true, I'd gladly pay for the best CPU, but have little interest in buying a GPU that takes extra space and energy/heat.

    Not everyone who wants CPU performance is a hardcore gamer.
  • im.thatoneguy - Wednesday, August 5, 2015 - link

    I'm not a hardcore gamer but when I'm GPU rendering with CUDA my whole UI slows to a crawl and I can hardly move windows. A passable GPU built in would let me use my NVidia cards for CUDA while freeing my CPU integrated graphics for windowing.
  • lilmoe - Wednesday, August 5, 2015 - link

    Just wondering, have you tried that very scenario in Windows 10? Please do. I'm assuming you're using Windows 7 since I've had the same problem. But even with 100% GPU utilization, Windows 10 has been very responsive in comparison, at least for me.
  • Flunk - Wednesday, August 5, 2015 - link

    They're on the die anyway, just cut down. Which makes it a really stupid waste.
  • extide - Thursday, August 6, 2015 - link

    No, the 48EU versions are separate dies from the 24EU versions.
  • Beaver M. - Wednesday, August 5, 2015 - link

    Wrong. Some games use the L4 cache, and you can see the increase in performance clearly. Also consider that Broadwell is running at a lower clock speed.
    Skylake is a joke, if you open your eyes.
  • richardginn - Wednesday, August 5, 2015 - link

    GT2 WILL NEVER BEAT GT3e graphics. It is all about those EUs, and GT2 just does not have enough of them... More L4 cache would help, but it will not be enough.
  • Beaver M. - Wednesday, August 5, 2015 - link

    I am talking about L4 being used and increasing framerates even if the IGP is not being used. You can see it clearly in the benchmarks.
  • Refuge - Wednesday, August 5, 2015 - link

    Then it sounds like you want the CPU with Crystalwell memory, but no iGPU.
  • Beaver M. - Wednesday, August 5, 2015 - link

    I am talking about what is there right now. Skylake is supposed to be an upgrade, yet Broadwell wipes the floor with it in many cases because of that L4, even with lower clock speeds.
  • fokka - Wednesday, August 5, 2015 - link

    There will also be Skylake chips with L4 cache.
  • Beaver M. - Wednesday, August 5, 2015 - link

    An i7-6700k or faster?
    How do you know?
  • Refuge - Wednesday, August 5, 2015 - link

    lol everything was wasted on this chip...
  • ingwe - Wednesday, August 5, 2015 - link

    Now for the new Surface Pro and Macbook Pro to drop. I think those will be really compelling. Choosing will be difficult.
  • JeremyInNZ - Wednesday, August 5, 2015 - link

    I see what you did there... Not falling for the MS vs Apple war you hope to start with that post :p
  • ingwe - Wednesday, August 5, 2015 - link

    Haha. If only I was that clever. I do love watching fanboys fight.

    I do actually think both will be really compelling though--obviously for very difference reasons.
  • ingwe - Wednesday, August 5, 2015 - link

    *different. Edit button
  • BubbaJoe TBoneMalone - Wednesday, August 5, 2015 - link

    Thank you to AnandTech for the Skylake review but... no mention of Skylake-E or its release date. Guess we have to wait until next year.
  • A5 - Wednesday, August 5, 2015 - link

    The E parts will be out when the Skylake Xeons are done. Like you said, probably next year.
  • BubbaJoe TBoneMalone - Wednesday, August 5, 2015 - link

    Thank you to A5 for your reply.
  • milkod2001 - Wednesday, August 5, 2015 - link

    next year
  • Jaguar36 - Wednesday, August 5, 2015 - link

    I still don't see the point in upgrading from Sandy Bridge, let alone anything newer. It's a big chunk of cash for a new mobo, CPU and memory, all for what, 25%?
  • Cumulus7 - Wednesday, August 5, 2015 - link

    Exactly.
    Usually I suggest an upgrade if you get approximately twice the performance (+100%). But for 25%: forget it! Never!!!
  • colonelclaw - Wednesday, August 5, 2015 - link

    The other way of looking at it is that it's amazing how well the Sandy Bridge numbers hold up; that good ol' 2600K is one of Intel's all-time great CPUs. Be happy you backed a winner!
  • mrcaffeinex - Wednesday, August 5, 2015 - link

    I did and I am very happy. It frees up funds to focus on other parts that make a bigger impact like more RAM, a larger SSD, and of course a better video card.

    This is better than Haswell in several ways and I imagine the overclockers are going to have some fun since they have been given back more options than some of the previous generations. At least it looks like Intel is paying more attention to the enthusiasts this time around, even if they are not the largest target market.
  • Cellar Door - Thursday, August 6, 2015 - link

    Well, I'm glad I held back on the 2600K and 3770K and got Haswell!! - So with this kind of reasoning, it's a never-ending circle.

    Look at the platform overall: the PCIe storage, NVMe compatibility, M.2 ports, USB 3.1 which will be all over the place a lot faster than people realize.

    At the same speed it's 37% faster and "In specific tests, it is even higher" - a good-clocking 6700K will be a nice upgrade for anyone with Sandy. Just like Haswell was for Nehalem users.

    Seems a perfectly justifiable upgrade.
  • Kutark - Sunday, August 9, 2015 - link

    Eh. I've been wanting an excuse to upgrade from my 2600k. I am an "enthusiast" and building PC's and such is my hobby. So, it's not always just simply the price/perf value proposition. Christmas is coming up and my nephew doesn't have his own PC yet, and also loves to play steam games (he usually does it at his grandparents). So, this gives me an excuse to build a new setup. I still have my GTX 760 laying around that i upgraded to a 980ti. So a 6700k setup would be a nice pairing with the 980ti and should realistically set me for 3-4 years.
  • kmmatney - Wednesday, August 5, 2015 - link

    I'm an i5 3570K owner. If I'm going to upgrade, I'd look for an i7-4790K (or even a i7 2600K) on Ebay before completely overhauling my system with this.
  • darkfalz - Wednesday, August 5, 2015 - link

    Even that's a waste of money unless you are doing a tonne of Handbrake.
  • mapesdhs - Wednesday, August 12, 2015 - link

    No, get a 2700K instead; they OC much better than the 2600K.

    2700K = 5GHz guaranteed, even with a simple TRUE and one fan (I use the ASUS M4E, built five so far).
  • tim851 - Wednesday, August 5, 2015 - link

    Agreed. Especially when you factor in that that 25% is peak performance. How often does the average user call on peak performance? I think the most common and frequent scenario for average users to need CPU power is gaming. And here, due to the fact that GPUs are the bottleneck, you won't even get 10%.
    From an enthusiast point of view, the last 4 years since Sandy Bridge have been disappointing. If not outright worrying.
  • zShowtimez - Wednesday, August 5, 2015 - link

    With 0 competition at the high end, its not really a surprise.
  • Refuge - Wednesday, August 5, 2015 - link

    ^this^
  • darkfalz - Wednesday, August 5, 2015 - link

    Yeah, sad. Single digit generational IPC improvements and a trickle up of clockspeed - not exactly exciting times in the CPU world. But I'm kind of happy, in a way, as who wants to have to upgrade their whole system rather than just the GPU every 2 years. It strikes me that Intel are doing a pissload of work for very little results though.
  • wallysb01 - Wednesday, August 5, 2015 - link

    What's lost is that these gains are coming with roughly zero increased power draw. Much of the gains of years past were largely due to being able to increase power consumption without melting things. Today, we've picked all the low-hanging fruit in that regard. There is just no point in being disappointed in 5-10% increases in performance; as time moves on it's only going to get worse.

    There is also no point in getting mad at AMD for not providing competition to push intel or at Intel for not pushing themselves enough. If added performance was easy to come by we’d see Intel/AMD or some random start up do it. The market is huge and if Intel could suddenly double performance (or cut power draw in half with the same performance) they would do it. They want you to replace your old Intel machine with a new one just as much as they want to make sure your new computer is Intel rather than AMD.
  • boeush - Thursday, August 6, 2015 - link

    And yet, one would expect much lower operating voltage and/or much higher base clocks with a new architecture on a 14nm process, as compared to the 22nm Haswell. The relatively tiny improvements in everything except iGPU speaks to either misplaced design priorities (i.e. incompetence) or ongoing problems with the 14nm process...
  • Achaios - Wednesday, August 5, 2015 - link

    Very often. You are simply NOT a gamer. There are games that depend almost completely on CPU single threaded performance: World of Warcraft, Total War series games, Starcraft II, etc.
  • Nagorak - Wednesday, August 5, 2015 - link

    The games you listed aren't ones where I'd think having hundreds of FPS would be necessary.
  • jeffkibuule - Thursday, August 6, 2015 - link

    FPS can vary wildly because so many units end up on screen.
  • vdek - Thursday, August 6, 2015 - link

    I'm a gamer, I play a ton of SC2, and my Xeon X5650 6-core @ 4.2GHz does just fine on any of those mentioned games. Why should I upgrade?
  • Kjella - Wednesday, August 5, 2015 - link

    Yeah. I upgraded from the i7-860 to the i7-4790K; the only two benchmarks they have in common in Bench suggest that's roughly a 100% upgrade. And a lot of that is the huge boost to base clock on the 4790 vs the 4770. I prefer running things at stock speed since in my experience all computers are a bit unstable and I'd rather not wonder if it's my overclocking.
    .
    At this rate it looks like any Sandy Bridge or newer is basically "use it until it breaks", at 5-10% increase/generation there's no point in upgrading for raw performance. 16GB sticks only matter if you want more than 4x8GB RAM. PCIe 3.0 seems plenty fast enough. And while there's a few faster connectors, that's accessories. The biggest change is the SSD and there you can always add an Intel 750 PCIe card instead for state of the art 4x PCIe 3.0 NVME drive. Makes more sense than replacing the system.
  • Sunburn74 - Wednesday, August 5, 2015 - link

    Still not budging from my 2600K. There is no evidence of real-world benefit of a PCIe drive compared to a SATA drive. As for the CPUs, the 25% gain IS impressive considering the short time period it has been accomplished in, but is just within the threshold of noticeability. Would prefer to wait till maybe a 50% gain, or when software really starts to take advantage of SSDs.
  • Pneumothorax - Wednesday, August 5, 2015 - link

    You're giving Intel too much slack here. Even the much-maligned P4 generations had a 50% improvement in raw speed in 2 years. We've waited almost 4 years for only 25%. What's crazy is that it cost Intel billions in R&D.
  • heffeque - Wednesday, August 5, 2015 - link

    25% in 4 years... but their iGPU now has more DX12 Tier 3 support than Nvidia and AMD.
  • Nagorak - Wednesday, August 5, 2015 - link

    Unfortunately, no serious gamer will ever use it.
  • Bambooz - Wednesday, August 5, 2015 - link

    "in my experience all computers are a bit unstable"
    in other words: you've been buying shitty components :)
  • kmmatney - Wednesday, August 5, 2015 - link

    OK, come on! You bought a "K" CPU - it's meant to overclock - at least bump it up to 4.2-4.3GHz; it will run that without any voltage increase. In fact, most motherboards (mine included) seem to overvolt Devil's Canyon CPUs, even at stock voltage. You can probably undervolt it and overclock it at the same time, as AnandTech did here:

    You have a guide handed to you for overclocking this cpu:
    http://www.anandtech.com/show/8227/devils-canyon-r...

    My Z97 PC-Mate motherboard was setting my i5 4690K at 1.2V at stock, which is higher than it needs - it can run at 4.5GHz at that setting!
  • IUU - Saturday, August 8, 2015 - link

    I agree with the upgrading part, but not quite with the 25% IPC increase. 25% is awesome.
    Two important things:
    1. It carries the burden of a useless GPU. It could either consume less energy, or the same with more cores. At least then you would use the CPU, where possible, for what it's meant to be used for.
    2. The 25% increase is the "real" world performance. Unfortunately the "real" world is terribly slow to take advantage of the improvements Intel introduces with the new instruction sets. So while it's good to stay down to earth with "real" world performance, it would be nice to also have a mention of the theoretical improvements of the CPU.
  • icebox - Wednesday, August 5, 2015 - link

    I still don't see a reason to upgrade from a 2600K even though I'm really itching to upgrade my hardware :) I really don't care about integrated graphics and I'm pretty sure 80% of buyers of i7 K parts don't either.

    There was no reason when Ivy brought 5 percent, Haswell another 10% and Devil's Canyon 5% more. Now it's a total of 25%. I want more from a total platform change - I moved from 775 to 1155 for a lot more than that. Mainly I want more than 16 PCIe lanes; I understand they'll never give us more than 4 cores because of Xeon and the E series.
  • euler007 - Wednesday, August 5, 2015 - link

    Same here, I'm always waiting for every generation to upgrade my OC'd 2500K, but I mainly use my home rig for gaming and these benchmarks give me no reason to upgrade. Maybe Skylake-E.
  • moerg - Wednesday, August 5, 2015 - link

    so DDR4 offers nothing in gaming?
  • semyon95 - Wednesday, August 5, 2015 - link

    DDR4 is completely useless and it will stay useless for a while
  • Vlad_Da_Great - Wednesday, August 5, 2015 - link

    DDR4 should be replaced with 3D XPoint, the new memory from the INTC/MU JV. Intel has foreseen that, and perhaps next year's CPU models will have a huge leap in performance.
  • boeush - Thursday, August 6, 2015 - link

    3dXP is much faster than NAND, but still nowhere near as fast as DRAM. So it will never replace DDR.
  • boeush - Wednesday, August 5, 2015 - link

    DDR4 is useless at 2133. It won't be quite as useless once 4000+ becomes the affordable norm. It'll take a year or two, but come 2017-18, DDR3 will be a clear-cut dinosaur. Of course, by then HBM 2 and maybe even 3 might be a compelling alternative - if not an upgradeable one...
  • jjj - Wednesday, August 5, 2015 - link

    Great, now cut the GPU and sell it at $60-80 and it's all good.
    Until then, this is one of the biggest ripoffs in tech.
    They do this instead of making a 60mm2 chip (without a GPU) that would cost them $10-20 depending on yields and could easily retail well below $100 even with their obscene margins. They just add bloat - in fact most of the chip is bloat - just to make it look like you are getting something worth paying for.
  • richardginn - Wednesday, August 5, 2015 - link

    Why cut 80 bucks off it??? The Broadwell i7-5775C CPU has GT3e graphics on it and would destroy the crap out of this Skylake CPU in integrated graphics performance.

    Skylake is a full-on total flop right now!!
  • Bambooz - Wednesday, August 5, 2015 - link

    Except no one gives two sh*ts about integrated graphics when buying an i7...
  • 8steve8 - Wednesday, August 5, 2015 - link

    I do
  • Teknobug - Wednesday, August 5, 2015 - link

    Unless it's in a mini PC type thing like the Gigabyte BRIX or Alienware Alpha. But no, I don't buy i7s for their integrated GPU; I'm just gonna say that high-end CPUs shouldn't even include an iGPU.
  • nikaldro - Tuesday, August 11, 2015 - link

    Still, the L4 cache can be useful, and DX12 could make use of both the GPU and iGPU to give better results.
    I mean, you're charging us 350 bucks, and I want the absolute cutting edge for that much. Considering that by next year its 6-core counterpart will cost just a bit more, I think I'll just wait for the Skylake-E parts.
  • jwcalla - Wednesday, August 5, 2015 - link

    I kind of agree. I think I'm done with paying for a GPU I'm never going to use.
  • jardows2 - Wednesday, August 5, 2015 - link

    If you don't overclock, buy a Xeon E3. i7 performance at i5 price, without integrated GPU.
  • freeskier93 - Wednesday, August 5, 2015 - link

    Except the GPU is still there, it's just disabled. So yes, the E3 is a great CPU for the price (I have one) but you're still paying for the GPU because the silicon is still there, you're just not paying as much.
  • MrSpadge - Wednesday, August 5, 2015 - link

    Dude, an Intel CPU does not get cheaper if it's cheaper to produce. Their prices are only weakly linked to the production costs.
  • AnnonymousCoward - Saturday, August 8, 2015 - link

    That is such a good point. The iGPU might cost Intel something like $1.
  • Vlad_Da_Great - Wednesday, August 5, 2015 - link

    Haha, nobody cares about you @jjj. Integrating the GPU with the CPU saves money, not to mention space and energy. Instead of paying $200 for the CPU and buying a dGPU for another 200-300, you get them both on the same die. OEMs love that. If you don't want to use it, just disable the GPU and buy 200W from AMD/NVDA. And it appears now the system memory will come on the CPU silicon as well. INTC wants to exterminate everything, even the cockroaches in your crib.
  • Flunk - Wednesday, August 5, 2015 - link

    Your generational tests look like they could have come from different chips in the same series. Intel isn't giving us much reason to want to upgrade. They could have at least put out an 8-core consumer chip. It isn't even that much more die space to do so.
  • BrokenCrayons - Wednesday, August 5, 2015 - link

    With Skylake's Camera Pipeline, I should be able to apply a sepia filter to my selfies faster than ever before while saving precious electricity that will let me purchase a little more black eyeliner and those skull print leg warmers I've always wanted. Of course, if it doesn't, I'm going to be really upset with them and refuse to run anything more modern than a 1Giga-Pro VIA C3 at 650 MHz because it's the only CPU on the market that is gothic enough pending the lack of much needed sepia support in Skylake.
  • name99 - Wednesday, August 5, 2015 - link

    And BrokenCrayons wins the Daredevil award for the most substantial lack of vision regarding how computers can be used in the future.

    For augmented reality to become a thing we need to, you know, actually be able to AUGMENT the image coming in through the camera...
    Today on the desktop (where it can be used to prototype algorithms, and for Surface type devices). Tomorrow in Atom, and (Intel hopes), giving them some sort of edge over ARM (though good luck with that --- I expect by the time this actually hits Atom, every major ARM vendor will have something comparable but superior).

    Beyond things like AR, Apple TODAY uses CoreImage in a variety of places to handle their UI (eg the Blur and Vibrancy effects in Yosemite). I expect they will be very happy to use new GPU extensions that do this with lower power, and that same lower power will extend to all users of the CI APIs.

    Without knowing EXACTLY what Camera Pipeline is providing, we're in no position to judge.
  • BrokenCrayons - Friday, August 7, 2015 - link

    I was joking.
  • Zisch - Saturday, August 8, 2015 - link

    Hehe … Best comment so far in this thread by a wide margin! Cheers!
  • sepffuzzball - Wednesday, August 5, 2015 - link

    Time to replace my aged i7-930. It has served me well.
  • vdek - Thursday, August 6, 2015 - link

    IMO upgrade to a Xeon X5650; X58 is still rocking it!
  • hans_ober - Wednesday, August 5, 2015 - link

    Superb review!
  • Uxi - Wednesday, August 5, 2015 - link

    Could you please show how you applied the thermal paste with either a photo or a diagram, TIA.
  • Harry Lloyd - Wednesday, August 5, 2015 - link

    The most important question - can you get the full range of BCLK overclocking on non-K CPUs? Will you be able to overclock an E3-1230 v5? That would be sick.
  • extide - Thursday, August 6, 2015 - link

    Wow, I hadn't thought of this, but if the BCLK is unlocked on all models, and un-linked from everything, then technically ALL the CPUs become overclockable again!
  • Azune - Wednesday, August 5, 2015 - link

    Would it be possible to redo the gaming benchmarks with faster memory? A German site has found massive improvements in dedicated gaming performance with 3000MHz memory.
    (See http://www.gamestar.de/hardware/prozessoren/intel-...
  • Ian Cutress - Wednesday, August 5, 2015 - link

    It's on the list of things to do! I need more hands and something like Narnia.
  • Azune - Wednesday, August 5, 2015 - link

    Awesome!
  • wishgranter - Wednesday, August 5, 2015 - link

    Ian, for the overclocking, look here - a better ES sample:
    http://www.guru3d.com/articles_pages/core_i7_6700k...
  • Khenglish - Wednesday, August 5, 2015 - link

    It's a shame this doesn't have the 128MB L4 cache. It obviously helps Broadwell over Haswell in CPU benchmarks. If Skylake had it it'd be a clear and very significant upgrade over Haswell, but without it it's just too minor to warrant an upgrade over Haswell or Ivy Bridge.
  • Brazos - Wednesday, August 5, 2015 - link

    Is this what happens when AMD stops being a competitor?
    And I agree with the comments about the IGPU. Most enthusiasts will purchase a graphics card. Save money, space etc by dropping it.
  • Jumangi - Wednesday, August 5, 2015 - link

    Welcome back to the early 2000's when Intel could put out miniscule upgrades while charging premium prices because of the lack of any real competition.

    A 25% increase over a 4 year old CPU...pathetic.
  • Gigaplex - Wednesday, August 5, 2015 - link

    AMD was very competitive early 2000s.
  • zodiacfml - Thursday, August 6, 2015 - link

    Hmm, you kind of have a point. Though Intel is relentless with innovation chasing its dreams in the mobile and server markets, the Skylake architecture seems optimized for server/compute applications. I think it has been that way for many years already. Maybe the overclocking support Intel is giving to enthusiasts is a sign of this. If AMD were competitive, Intel wouldn't have to optimize so much for server/compute performance.

    I'm baffled: there's an obvious IPC increase and massive improvements in multithreading/Handbrake, but it doesn't show in games. With DX12, I doubt it will help.
  • Eidorian - Wednesday, August 5, 2015 - link

    I am frankly looking at Skylake for the platform as a whole. The improvements between generations are not amazing, which is disconcerting for gaming, but I am coming from a Lynnfield + P55 system built in 2009. This is going to be great for me. I can still see users on Sandy Bridge + P67 holding on to those systems a little longer.
  • postem - Wednesday, August 5, 2015 - link

    I was reluctant to upgrade to Devil's Canyon from a 950 Bloomfield due to the 6-month proximity to Skylake.
    I never had severe performance issues with the 950 @ 4.2GHz, but since I started to use a 980 I saw frame rates consistently drop.
    When I finally updated to a 4790K, man it was good. Not only better response in overall processing, but better frames, much faster in all aspects.

    Bottom line: I don't think you need to upgrade each generation. I would gladly have gone to Haswell-E if it wasn't so $$$, but anyway, DC is giving me a hell of a lot of performance, and I don't think it's worth considering upgrading it to Skylake. It just sucks that Intel changed the whole socket because of a pin.

    If you are coming from 3-4 generations before, you will really see the benefits of the upgrade, I can assure you. I can't say it's the end of Sandy Bridge, but it's coming to its age.

    What is really getting nice is having good CPUs with minimal TDP in laptops. You can have a Broadwell i5 with consumption as low as 10W.
  • MrSpadge - Wednesday, August 5, 2015 - link

    Ian, could you please undervolt the chips? I know you reported 1.20 v at 4.3 GHz as "undervolting", but that's far more than I'd give even a 32 nm CPU and is just considered low because the stock voltage is so insanely high. Give us a few more data points until about 4.0 GHz, please.
  • Flunk - Wednesday, August 5, 2015 - link

    I've got a 2500K overclocked to 4.4Ghz @ 1.2v.
  • MrSpadge - Friday, August 7, 2015 - link

    It works, of course, and is OK for bursty load (like any regular desktop system sees). I'm interested in energy-efficient 24/7 load, however; that's why I said specifically "more than I would give...".
  • royalcrown - Wednesday, August 5, 2015 - link

    How much for that sweet board on page 2 of the article ? Please paint in 4 more ram slots, I'll be mailing in a check today. Also, please paint me a 6 ghz processor with 30% OC to go with it and I'll mail another check !
  • DanNeely - Wednesday, August 5, 2015 - link

    "Here’s a suggestion: bring back the old turbo button on a chassis. When we saw 66 MHz become 75 MHz, it was the start of something magical. If Intel wants to grow overclocking, that’s a fun place to start."

    *BAH*

    The turbo button turned my 486 into an 8086 so I could run old games; pushing it certainly didn't give an OC.
  • Teknobug - Wednesday, August 5, 2015 - link

    Don't forget the math co-processor
  • DanNeely - Wednesday, August 5, 2015 - link

    I had a 486SX, so no hardware FPU in either state.
  • royalcrown - Saturday, August 8, 2015 - link

    Yeah, 80387 FTW !!
  • ant6n - Wednesday, August 5, 2015 - link

    I remember hitting that button accidentally on my 33Mhz 386 (it had no MHz display). It took me a couple of days to figure out why the fuck the computer was so slow all of the sudden.
  • zShowtimez - Wednesday, August 5, 2015 - link

    So here I am, still going to stick with my 4.8Ghz i7 Sandy Bridge.

    Just going to replace my SLI GTX 670s next year when Pascal comes out...

    I can't rationalize upgrading the CPU when all I use it for is gaming.
  • postem - Wednesday, August 5, 2015 - link

    See my other comment. When I switched to the 4790K from the 950, I thought it would be minimal; in some scenarios, yes, but in virtually everything else, it's paying off. Coming from the 950 @ 4.2GHz I saw around a 20-30% frame increase, and in some heavy titles like Total War even more.
  • eek2121 - Wednesday, August 5, 2015 - link

    Yes, but you have to realize that the 4790k is not that much faster than the 2600k. Definitely not worth replacing all the components. The 2600 was significantly faster than your 950.
  • DanNeely - Wednesday, August 5, 2015 - link

    Reliability might start to be a concern in the next year or two. I had a 920 and a 930 (bought right after release and about a year later); but back in June the 920 stopped POSTing. Since I did a precautionary upgrade to a 4790K at the start of the year it didn't impact me; and I haven't gotten around to doing any part swaps to figure out which component failed yet. (Not so I can buy a replacement part; but so I know what's potentially usable as a spare if/when the 930 does the same.)
  • 06GTOSC - Wednesday, August 5, 2015 - link

    Looks to me that I'm not missing anything for gaming by staying with my 4790k.
  • nmm - Wednesday, August 5, 2015 - link

    Well, that was a bit underwhelming. Fantastic article, of course, but I can't agree that this is a nail in the coffin for Sandy Bridge. For myself, I think I'll just upgrade the cooler on my i7 2600k and bump the multiplier up a few notches and hold out for another year. When there are some reasonably cheap NVME options and affordable/fast high capacity DDR4 modules and Pascal GPU's, that will probably be the right time for me to break up with Sandy.
  • piroroadkill - Wednesday, August 5, 2015 - link

    Definitely not the nail in the coffin for Sandy, far from it.

    In games, there's still no point to upgrading..
  • Jon Tseng - Wednesday, August 5, 2015 - link

    TBH I'm not missing anything staying with my QX6850! (65nm FTW). GPU is all that matters nowadays...
  • postem - Wednesday, August 5, 2015 - link

    That was my dilemma when deciding whether to wait 6 months or get DC to replace the old 950. The difference from DC would be mainly negligible, but it would be much more costly due to new mobos, DDR4 and so on.
  • kenansadhu - Saturday, August 8, 2015 - link

    Were you really thinking about upgrading to skylake when you bought your 4790k?
  • Refuge - Wednesday, August 5, 2015 - link

    I wanna know what's going on under the hood. Can't wait for the follow-up article.

    Otherwise I'm disappointed. If you claim this is the end for Sandy Bridge, then anyone who agrees and is upgrading, lemme know, I'll gladly take your old hardware for cheap.

    With this performance I'd be happier buying a Devil's Canyon for less and waiting for Intel to actually give me something worth spending $220+ on.

    Skylake, I am very disappoint.
  • otimus - Wednesday, August 5, 2015 - link

    Says "Sandy Bridge, your time is up", then proceeds to practically show data as to why it is not. Especially for gaming.

    Do you guys just not live in reality, or does getting things for free just flat out cloud your mind to the staggering cost it'd take to go from Sandy Bridge to Skylake for what seems like very few FPS? Even worse, I imagine most folks are still on 60 Hz monitors.
  • Refuge - Wednesday, August 5, 2015 - link

    I have a lot of respect for you Ian, but I completely disagree that Sandy's time is up, and if it is? Skylake isn't the cause. This is such a terrible release haha. I hope it is just some launch kinks that need to be ironed out.

    Otherwise Devil's Canyon is where it's at. Older gen so it is cheaper, and it is more compelling than this slab of silicon. Other than increased iGPU performance with DDR4 memory, there isn't a single gain I saw in any of these benchmarks that is noticeable to an end user.
  • samal90 - Wednesday, August 5, 2015 - link

    So basically, if you are a gamer using a dedicated GPU, upgrading is useless. It's funny how all games have become CPU agnostic. And if you use integrated graphics, then better go with the 5775C.
    Also, why haven't you compared the iGPU with Kaveri or Carrizo? Still waiting for those 8800P benchmarks btw ;)
  • boeush - Wednesday, August 5, 2015 - link

    Well, maybe not *all* games are CPU-agnostic - just the mainstream ones that elevate eye candy over AI and complexity. But I'll bet chess fans will still appreciate a faster CPU, for instance... Too bad there's no chess game in the Anandtech benchmarks...
  • Jtaylor1986 - Wednesday, August 5, 2015 - link

    Ian the title is Middle-Earth: Shadow of Mordor. Please correct
  • Ryan Smith - Wednesday, August 5, 2015 - link

    Aye, you are correct. The graph titles have been updated.
  • core5 - Wednesday, August 5, 2015 - link

    Based on the "What you can buy" results, I'm tempted to get an i3 processor, and run a low-temp gaming rig.
  • watzupken - Wednesday, August 5, 2015 - link

    I actually don't think that Skylake will be much of an incentive for people to ditch their Sandy Bridge rig despite its age. This is especially so for those looking for the best value who have been using a Sandy Bridge processor. My oldest rig currently runs on Ivy Bridge, just 1 generation after SB, and I am not at all impressed with the improvement in performance.

    I was expecting Skylake to be a very minor upgrade over Broadwell since most leaked benchmarks show it barely outperforming Haswell. Graphics-wise, Skylake certainly has more EUs than Broadwell, but it should also be constrained by the shared TDP. So again, I am not expecting a big leap in performance.
  • Enterprise24 - Wednesday, August 5, 2015 - link

    Looks like I will keep my 2600K @ 4.6GHz for another year.
  • ghitz - Wednesday, August 5, 2015 - link

    ditto
  • DIYEyal - Wednesday, August 5, 2015 - link

    Great article as usual. Found a typo on page 7 in the second last paragraph: "The only real benchmark loss was FastStone, which was one second in 48 seconds."
  • MySchizoBuddy - Wednesday, August 5, 2015 - link

    where are the compute benchmarks of the CPU and IGPU?
  • Modzy - Wednesday, August 5, 2015 - link

    I'm still using my 2500K, cruising along at 5.25GHz. Unless the Skylake chips are nuts overclockers I think I'll keep waiting a few more years.
  • Refuge - Wednesday, August 5, 2015 - link

    They aren't, but if you are running that daily, I'm impressed it is still running now, and if it still is in 3 years you better frame that bitch and hang it up in your office!

    What kinda cooler/mobo/psu are you running?
  • Bambooz - Wednesday, August 5, 2015 - link

    @Refuce: How's that anything special? http://valid.canardpc.com/byk3u4
  • Bambooz - Wednesday, August 5, 2015 - link

    *Refuge .. dammit
  • bernstein - Wednesday, August 5, 2015 - link

    am I understanding correctly that DMI3 is just 4x8Gb/s = ~4GB/s, compared to the 2.5GB/s we had with Sandy Bridge?
    that's not even enough for one PCIe3 x4 SSD... let alone a 3x PCIe x2 SSD RAID.
    or when using one 10GbE & one USB3.1 we're basically still limited to PCIe2 x4 / PCIe3 x2
  • repoman27 - Wednesday, August 5, 2015 - link

    DMI 3.0 x4 is essentially the same as PCIe 3.0 x4 from a bandwidth perspective, just as DMI 2.0 x4 was equivalent to PCIe 2.0 x4.

    DMI / PCIe 2.0 operate at 5.0 GT/s but use 8b/10b encoding which makes it 4.0 Gbit/s per lane.

    DMI / PCIe 3.0 bump that up to 8.0 GT/s and switch to the more efficient 128b/130b encoding resulting in 7.877 Gbit/s per lane.

    So the new DMI 3.0 x4 link is 31.5 Gbit/s, or roughly twice the bandwidth of DMI 2.0 x4 at 16 Gbit/s. Just as with previous PCH implementations though, the DMI is heavily oversubscribed. However, the fastest single port on the Z170 is PCIe 3.0 x4, so no devices will be bottlenecked by the DMI while operating individually.
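
    For anyone who wants to double-check the arithmetic above, here is a minimal sketch of the same encoding math (the transfer rates and line codes are the ones quoted in the comment; the helper function is purely illustrative):

        # Minimal sketch of the per-lane and x4 link bandwidth math described above.
        def lane_gbps(transfer_rate_gt_s, payload_bits, total_bits):
            """Effective data rate per lane in Gbit/s after line-code overhead."""
            return transfer_rate_gt_s * payload_bits / total_bits

        dmi2_lane = lane_gbps(5.0, 8, 10)     # DMI/PCIe 2.0: 5.0 GT/s, 8b/10b    -> 4.0 Gbit/s
        dmi3_lane = lane_gbps(8.0, 128, 130)  # DMI/PCIe 3.0: 8.0 GT/s, 128b/130b -> ~7.877 Gbit/s

        print(f"DMI 2.0 x4: {4 * dmi2_lane:.1f} Gbit/s")  # 16.0 Gbit/s, roughly 2 GB/s
        print(f"DMI 3.0 x4: {4 * dmi3_lane:.1f} Gbit/s")  # 31.5 Gbit/s, roughly 3.9 GB/s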
  • jrs77 - Wednesday, August 5, 2015 - link

    I'm really glad that I opted for the i7-5775C instead of waiting for Skylake. I don't need a couple of MHz more for the CPU, as even 3.3GHz is more than enough for most of my workload. I need the best iGPU I can get though, so I can build the smallest and most energy-efficient workstation possible.

    A workstation the size of a Mac Mini with a 3.3GHz 4C/8T CPU and an iGPU as powerful as a GT 740 / R7 250, with only some 125W power draw under full load at the plug in the wall :)
  • Beaver M. - Wednesday, August 5, 2015 - link

    This is one joke of an upgrade...
    Now I still can't decide if I should take an i7-5775C, i7-4790K, i7-5820K or an i7-6700K.
    Good job, Intel...
  • Teknobug - Wednesday, August 5, 2015 - link

    Seems I'd be more inclined to buy an i7 5775C or i5 5675C than a Skylake, I guess.
  • Juggzz - Wednesday, August 5, 2015 - link

    Thank you for the review, but what a pile of junk. What is Intel thinking?? Why are they releasing a product that can't even compete with their 4 year old processors?
  • eek2121 - Wednesday, August 5, 2015 - link

    They out-engineered themselves.
  • buevaping - Wednesday, August 5, 2015 - link

    I am guessing the lithography of the Skylake chipset is 22nm. I am talking about the die on the MB and not the CPU.
  • AnnonymousCoward - Saturday, August 8, 2015 - link

    Try 350nm.
  • extide - Monday, August 10, 2015 - link

    No, it's probably same 32nm as last time.
  • bernstein - Wednesday, August 5, 2015 - link

    I still don't see the point in upgrading from Sandy Bridge, let alone anything newer... especially for a gaming PC with a dGPU, as DirectX12 games on the horizon will eliminate the CPU as the bottleneck.
    - ~23% ***max*** speed (+25% IPC, -4% frequency, +2% DDR4)
    - PCIe3 x16 (yet anything less than a GTX 980 Titan for sure doesn't profit from it...)
    - minor improvements like integrated PCIe SSD boot support & (with the right mainboard) USB3.1 / HDMI2 / Thunderbolt3

    even performance/watt @90W seems to be around 45% better. that's a paltry 10% per year...
    geekbench figures suggest @35/45W it's going to be around 55%. still. not. enough.
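
    For what it's worth, compounding those estimates (they are the commenter's own figures, not the article's) lands close to the quoted "max" number:

        # +25% IPC, -4% frequency, +2% from DDR4, multiplied together
        speedup = 1.25 * 0.96 * 1.02
        print(f"{(speedup - 1) * 100:.1f}%")  # 22.4%, i.e. roughly the ~23% cited above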
  • milkod2001 - Wednesday, August 5, 2015 - link

    Couldn't care less about desktop Skylake (I'm still on a 4770K, it'll be OK for the next 2-3 years).

    I'm looking forward to mobile Skylake. Am I dreaming here if I want Dell to drop a mobile Skylake quad core with improved graphics into the 2016 model of the XPS 13?
  • jeffkibuule - Wednesday, August 5, 2015 - link

    That chassis will not handle a 45W CPU. Sorry.
  • Kidster3001 - Wednesday, August 5, 2015 - link

    Skylake M will be in the 10W range.
  • boeush - Thursday, August 6, 2015 - link

    For an XPS 13 form factor, I'd assume you want U rather than M (so probably more like 15-20W, still very doable.)
  • boeush - Thursday, August 6, 2015 - link

    Heh, just double checked: there won't be an M at all for Skylake. See https://en.wikipedia.org/wiki/Skylake_(microarchit...
  • extide - Monday, August 10, 2015 - link

    Yeah, it's right there -- the Y variant is Core M Skylake.
  • jeffkibuule - Thursday, August 6, 2015 - link

    milkod2001 wants the quad-core H SKU (45-55W) in a chassis of a laptop designed for a dual-core U SKU (15-28W). That's not going to go over so well.
  • extide - Monday, August 10, 2015 - link

    Well, we had a 35W quad core all the way back with Ivy Bridge; it doesn't seem at all unreasonable to see a 28W quad core. It definitely seems technically feasible, at least.
  • HardwareDufus - Wednesday, August 5, 2015 - link

    Using an IvyBridge, 3770k at 4ghz. Still feeling very up to date.

    Guess I'll look at Kaby Lake next year if they have something like an unlocked I7-7775k that includes GT4e. I'd like to see the GT4e support multiple DisplayPort/HDMI and ditch DVI-D.
  • Teknobug - Wednesday, August 5, 2015 - link

    I had to sell my i7 3770K to pay bills (and pay U-Haul for moving), what a great system it was but I'm using an i5 3550 and it still holds up.
  • StrangerGuy - Wednesday, August 5, 2015 - link

    "I’d love to see an i3 part in the future, but I suspect we will have to wait and see if Intel gets serious competition again"

    Knowing Intel's usual market segmentation games, it will be an unlocked i3 2C/4T priced so close to the cheapest quad-core i5 that it makes the i3 next to completely pointless.
  • Teknobug - Wednesday, August 5, 2015 - link

    Yup, I'm more interested in i3's these days, ever since the Haswell ones arrived with pretty good perf/cost, but if it's going to be priced like the i3 43xx was next to the i5 4440 then forget it. lol
  • extide - Monday, August 10, 2015 - link

    We might still be able to overclock the i3's using the BCLK. I hope they keep the BCLK separate in all models, not just K series.
  • joex4444 - Wednesday, August 5, 2015 - link

    On RAM latency and "performance index," I'm running dual channel DDR2-1000 CAS5, which has a 2ns clock time -- quite large relative to DDR3-1866 (1.07ns) or DDR4-2400 (0.83ns). At CAS5, the true latency is 10ns, slightly worse than DDR3-1866 CAS9 (9.65ns) and slightly better than DDR3-1866 CAS10 (10.72ns). DDR4-2400 CAS15 yields 12.5ns - a full 25% slower than DDR2-1000 CAS5.

    Now for the "performance index" mentioned - DDR2-1000 CAS5 has a PI of 200 (1000/5). Compare to DDR3-1866 CAS9 (207), DDR3-1866 CAS10 (187), or DDR4-2400 CAS15 (160) and this 6-year old RAM looks fairly modern.

    Yet the bandwidth one gets from DDR2-1000 CAS5 is in the handful of GB/s, while DDR4 on Skylake is in the 25-30GB/s range...
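
    The arithmetic behind those figures is simple enough to sketch out (this just reproduces the commenter's own numbers; the function names are only for illustration):

        # True CAS latency and "performance index" for the kits mentioned above.
        # The I/O clock is half the MT/s data rate, so one clock period is 2000 / rate ns.
        def true_latency_ns(data_rate_mts, cas):
            return cas * 2000.0 / data_rate_mts

        def performance_index(data_rate_mts, cas):
            return data_rate_mts / cas

        kits = [("DDR2-1000 C5", 1000, 5), ("DDR3-1866 C9", 1866, 9),
                ("DDR3-1866 C10", 1866, 10), ("DDR4-2400 C15", 2400, 15)]
        for name, rate, cas in kits:
            print(f"{name}: {true_latency_ns(rate, cas):.2f} ns, PI = {performance_index(rate, cas):.0f}")
        # DDR2-1000 C5:  10.00 ns, PI = 200
        # DDR3-1866 C9:   9.65 ns, PI = 207
        # DDR3-1866 C10: 10.72 ns, PI = 187
        # DDR4-2400 C15: 12.50 ns, PI = 160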
  • extide - Monday, August 10, 2015 - link

    Granted, what you said is correct, but remember that DDR2-1000 CAS5 was significantly faster than most people were running DDR2.
  • Sttm - Wednesday, August 5, 2015 - link

    Glad to see that my 2600k is still on par for gaming; because this means I wont be tempted to buy a CPU from a company that would pay their employees $4000 not to hire someone because they have the same skin color as me, the same gender.

    Hopefully AMD will come out with a good CPU upgrade for me next year, and I can skip buying from Racist/Sexist Intel.
  • repoman27 - Wednesday, August 5, 2015 - link

    I posted this earlier, but it seems to have disappeared along with the comment it was a reply to. Is there some filtering going on in the comments section these days?

    It's annoying that Anandtech is continuing to make things confusing for everyone by deciding to call USB 3.1 SuperSpeed devices USB 3.0. We don't refer to USB 2.0 keyboards that only operate at 12 Mbit/s as USB 1.0 or even USB 1.1 devices. The problem is with the common practice of using the specification revision number as a stand-in for the maximum supported data transfer mode. Since USB's inception, there have been ordained names for the transfer modes that were supposed to be used in public-facing marketing materials, but everyone pretty much ignored them (Low Speed, Full Speed, High Speed, SuperSpeed, and SuperSpeedPlus).

    The USB 3.1 specification supersedes the USB 3.0 spec. Just as with HDMI, DisplayPort, or any number of other standards, not all features are required to be compliant with a given version. However, as soon as you include any feature that only appears in a newer revision, you either have to roll with that spec or be considered a non-compliant implementation.

    Using Apple as an example, although they did not include a discrete USB 3.1 host controller in the MacBook (Retina, 12-inch), they did make use of the USB Type-C Cable and Connector Specification, Billboard Device Class Specification, USB Power Delivery Specification Revision 2.0, and USB Type-C Alternate Modes including the DisplayPort Alternate Mode. While those features are all technically outside the scope of the actual USB 3.1 spec, using USB Type-C ports raises the minimum current carrying capacity to 3.0 A, which in turn requires slightly elevated VBUS voltages to account for increased losses. The VBUS supply of the new MacBook runs at up to 5.5 V DC instead of the previous limit of 5.25 V DC to ensure voltage levels are within a suitable working range at the device end. The result of this is that the USB port on the new MacBook is fully USB 3.1 compliant despite only supporting SuperSpeed or Gen 1 operation, but is not actually compliant with the original USB 3.0 specification.

    If an OEM is using USB 3.1 Gen 1 in lieu of USB 3.0 to trick people on spec sheets, then yes, it's scummy. If their implementation is such that it qualifies as USB 3.1 but not USB 3.0, then it's merely the correct way of describing it.
  • Ryan Smith - Wednesday, August 5, 2015 - link

    "I posted this earlier, but it seems to have disappeared along with the comment it was a reply to. Is there some filtering going on in the comments section these days?"

    Nope. That's my fault. I deleted the placeholder comment I had put in this morning about catching up on the graphs. I forgot that deleting a root comment also sends its children off to the void. Sorry about that.
  • Wolfpup - Wednesday, August 5, 2015 - link

    Huh. The IPC increase over Broadwell is surprisingly lame. Looking at all this, Haswell was actually a better upgrade than I’d realized.

    I’d sort of thought the team doing Conroe/Penryn/Sandy/Ivy/Skylake was better than the other team…but I guess that’s not universally true, and really Haswell is a better CPU than I’d realized. I mean not that I thought it was BAD, I just didn’t realize it was as much better over Ivy Bridge as it is.

    I’m pleasantly surprised by how big a jump there is going from Sandy Bridge to Skylake. In one thing I actually use, we’re talking about a 70% increase at the same clock speed! That’s seriously impressive, and I bet a lot of people will be upgrading from Sandy Bridge.

    I’ve got Ivy Bridge and Haswell in my main systems, and they’re certainly just fine, but it’s nice to see things are improving!

    Of course then I get to the gaming page, and…honestly it looks like if you’ve got Sandy Bridge, you’re fine, so long as you’re using a dedicated GPU.

    Kind of surprised by how decent Intel’s GPUs are now, though they still tick me off as I still want those transistors used on larger CPUs or more cores!
  • Gich - Wednesday, August 5, 2015 - link

    I'd like to change my Z68 MoBo for a Z170 while keeping my i5 2500k.
  • zlandar - Wednesday, August 5, 2015 - link

    I also do not understand why Sandy Bridge's time is "up". Who sits around running synthetic benchmarks all day? Maybe if you use your computer for a lot of video processing it makes sense. The majority of people who care are going to be gamers, and the bottleneck remains the GPU, not the CPU.

    That kind of statement should be reserved for a cpu release that would make me upgrade my current system now.
  • Achaios - Wednesday, August 5, 2015 - link

    You are wrong. Yet another guy who has got no clue about gaming. There are games that depend almost completely on single threaded performance. World of Warcraft, Total War series games, Starcraft II, to name a few.

    The performance gain in those games (and other CPU bound games) over Sandy Bridge is well worth the upgrade.
  • Sttm - Wednesday, August 5, 2015 - link

    Until they get a DX12 client, at which point that CPU bottleneck evaporates.
  • Achaios - Wednesday, August 5, 2015 - link

    DX 12 will not help in Single Threaded (CPU bound) games at all.
  • mdriftmeyer - Wednesday, August 5, 2015 - link

    Nice red herring. No one is debating the past. The point is that with Mantle/DirectX 12/Vulkan, engines certified for the features those APIs tout will leave single-threaded games in the past; moving forward, implementing DX12/Vulkan in the core engine implies evenly distributed multi-core resource management.
  • abrowne1993 - Wednesday, August 5, 2015 - link

    They tested Total War: Attila and there's hardly a difference, certainly not one worth upgrading for. Can you show me some benchmarks that would convince me to upgrade?
  • Refuge - Wednesday, August 5, 2015 - link

    lol... what?

    None of those games are going to be held back from being playable on an i5-2500k, unless you are running on the iGPU. you are just trying to be silly... right?
  • Teknobug - Wednesday, August 5, 2015 - link

    What substance are you smoking
  • kmmatney - Wednesday, August 5, 2015 - link

    What games are CPU bound with an overclocked Sandy Bridge? Is an extra 3 fps "well worth it"?
    All the games you mention already run perfectly fine on the last few generations of high-end CPUs. Single-threaded performance is better at stock speeds, sure, but you lose a lot of that when you take into account overclocking. Most modern games are multi-threaded nowadays, and the older ones that aren't are old enough to not need the upgrade.
  • Bambooz - Friday, August 7, 2015 - link

    "Single-threaded performance is better at stock speeds, sure, but you lose a lot of that when you take into account overclocking."

    http://i.imgur.com/KCjJPbI.jpg
  • extide - Monday, August 10, 2015 - link

    Uhhh because the difference between stock vs stock is bigger than overclocked vs overclocked..
  • abrowne1993 - Wednesday, August 5, 2015 - link

    It'd be nice if you could throw in the 2500K for the benchmarks (mainly the gaming ones). Seems to me that there are a ton of people still content with their 2500Ks and little reason to upgrade.

    I assume the numbers would be very similar to the 2600K, but it'd still be nice to get a confirmation.
  • Ian Cutress - Wednesday, August 5, 2015 - link

    I haven't yet rerun the 2500K since we updated our gaming benchmarks - if I get a chance I will.
  • Oxford Guy - Thursday, August 6, 2015 - link

    Including the FX processor at around 4.5 GHz would be a lot more relevant.
  • Bambooz - Friday, August 7, 2015 - link

    Why? We know they're utter garbage.
  • JumpingJack - Wednesday, August 12, 2015 - link

    Why? You can read any 9590 review and normalize it against Ivy and figure out it will be way behind.
  • mapesdhs - Wednesday, August 12, 2015 - link

    Indeed, they run rather well, been benching a 2500K @ 4.7 today.
  • qlum - Wednesday, August 5, 2015 - link

    I guess I will wait another gen before upgrading my 3570K. At 4.7GHz (when needed) it performs pretty well, and in most situations my 7950 @ 1138MHz will be the bottleneck anyway. I will probably upgrade once a game really starts demanding it, and when that happens it's going to be a big one. For now I will just wait till the end of the year to upgrade my SSD to either 500GB or 1TB.
  • Achaios - Wednesday, August 5, 2015 - link

    There is a huge increase in the single-threaded performance of Skylake CPUs over Sandy Bridge CPUs, which makes upgrading to Skylake from an SB platform absolutely justified.

    There will be huge gains for overclocked Skylake CPUs in games that depend on single-threaded performance, such as World of Warcraft, Total War series games (Shogun II, Rome II, Attila), Starcraft II, etc.
  • Refuge - Wednesday, August 5, 2015 - link

    Prove it, hell even toss in a GTX 980ti to try and force the CPU to be the bottleneck.

    Guarantee there is no reason to upgrade lol.
  • zlandar - Wednesday, August 5, 2015 - link

    How about SHOWING us this HUGE increase? The review looked at several games including Total War. 2 FPS more from a i7 2600k.

    I can't believe you would even mention WoW as a game that would "benefit" from an upgrade from i7-2600k. Are you kidding me?

    Dude you have no idea what you are talking about.
  • Khenglish - Thursday, August 6, 2015 - link

    WoW is not well coded and is very CPU intensive. Overclocking my 3920xm from stock to 4.4ghz is over a 20% fps improvement in cities, and that doesn't even always get fps up to 50 when crowded.

    Dude you have no idea what you are talking about.
  • Bambooz - Friday, August 7, 2015 - link

    WoW isnt "well"-anything. It's a money sink for idiots. No one gives two shits about WoW "players".
  • Dr_Orgo - Wednesday, August 5, 2015 - link

    Starcraft II is definitely a Single Threaded CPU limited game. I play it a lot on my system 3570k @4.0ghz (hyper 212 limits the overclock) with gtx 970. The gpu never goes above ~30 % usage even lategame. Large battles especially in team games drop to < 30 fps easily even with the most cpu intensive settings turned down (physics, shadows, etc). There literally is no CPU that can maintain 60 fps with max settings in SC2 in lategame. Starcraft II would definitely benefit from Skylake.
  • mikamika - Wednesday, August 5, 2015 - link

    No reason to upgrade from my current 2700K@4.4GHz Sandy Bridge. I was hoping for a major performance improvement from the new architecture but sadly that didn't happen :(. So waiting for Kaby Lake in 2016.
  • boeush - Wednesday, August 5, 2015 - link

    Kaby won't be a new architecture either - just a Skylake equivalent of what Devil's Canyon was for Haswell. In other words, Kaby Lake is just a Skylake refresh.
  • extide - Monday, August 10, 2015 - link

    No, it will be a slight change, about as big as the change from haswell to broadwell, or sandy to ivy -- except we will be on the same process.
  • fokka - Wednesday, August 5, 2015 - link

    i don't know what i should think of this. i like the features and some real world benchmarks do benefit 20% or so, but i honestly expected more in terms of IPC. i mean compared to devil's canyon it hardly seems to be an upgrade and that is haswell which has been around for over two years, architecture-wise.
  • im.thatoneguy - Wednesday, August 5, 2015 - link

    " With the new processors we get [...] the move to DDR4."

    New, if we ignore Haswell-E from 13 months ago.
  • Refuge - Wednesday, August 5, 2015 - link

    In the above quote the processor is "new" not the use of DDR4.
  • Midwayman - Wednesday, August 5, 2015 - link

    So a minor improvement to gaming over a 2600k just as dx12 hits and will make CPU performance less important? Intel is just a little late on this one. I'm going to wait for dx12 gaming benchmarks before I invest in changing my cpu/mb/ram. I honestly can't believe I'm still on sandy bridge at this point.
  • Achaios - Wednesday, August 5, 2015 - link

    DX12 will not improve Single Threaded performance, meaning that a CPU with weak single-threaded performance (e.g. AMD, old CPU families like Nehalem and SB) will continue to suck with DX 12 just as much as it did suck with DX 11.

    SB single-threaded performance, even unreasonably OC'd to 4.7 GHz (something that 70% of the ppl out there who run stock Intel coolers or a Hyper 212 Evo can't do anyway), sucks compared to Skylake. SB is history at this point; it's just that ppl don't really know much about gaming, therefore they cannot understand the importance of single-threaded performance.

    What DX 12 will improve is multi-core utilization, however, single threaded performance will still matter and miracles won't happen.
  • nathanddrews - Wednesday, August 5, 2015 - link

    Who said anything about ST? The shift to MT from game devs is real and demonstrated. Meaningful MT development and utilization make ST less important. I'm glad Windows 10 has a free 1-year upgrade window. By then we'll see DX12 games, matured drivers, and plenty of hard data to show us the best path.
  • mapesdhs - Wednesday, August 12, 2015 - link

    He's also ignoring the fact that no system running a game can function sensibly with just one core. It needs at least one more to handle system overhead, I/O, perhaps security & virus apps in the background, etc. All this adds up & means even a game that doesn't use more than two cores will run better on a quad core machine.
  • ajlueke - Wednesday, August 5, 2015 - link

    Of course, a large number of popular titles are created for consoles first, which have low single-threaded performance, meaning they are optimized for that type of APU and then ported to PC (sometimes very poorly, looking at you MK X and Arkham Knight). DX12 will help with utilization of the APUs inside the modern consoles and, by extension, multicore processors as well. With the Xbox One incorporating Windows features, potentially including a mouse and keyboard, it seems likely that games like Total War and StarCraft that were created exclusively for PC are falling by the wayside, and even those that remain will likely have multicore optimizations.
  • Lillard185 - Wednesday, August 5, 2015 - link

    Still not seeing why I should upgrade, and I'm using an ancient i7-950.
    Let's not forget that when Sandy Bridge came out, Anand was raving about how the i7-2600K was absolutely SLAUGHTERING the i7-980X from only 1 generation ago.
    Now, we have to compare Skylake to CPUs from 4 GENERATIONS ago to make it look impressive, and comparing Skylake to the i7-4790K, we actually see a slight regression in performance.

    This is not acceptable. At least not for me.
  • nonoverclock - Saturday, August 8, 2015 - link

    I used to have the i7-950 but moved up to an i7-4770. Previously, I was happy with the i7-950 but I did see some clear improvements when I upgraded. One unexpected one was the performance of game menus for Need for Speed Hot Pursuit and Hard Reset which now were a lot more zippy.
  • demonbug - Wednesday, August 5, 2015 - link

    Any overclocking results on the i5-6600k? I'm mostly curious if they can reliably hit the base clocks of the 6700k at ~1.2v, or if they are really scraping the bottom of the barrel to get these out with that significantly reduced clock speed.
  • Ian Cutress - Wednesday, August 5, 2015 - link

    I haven't had a chance to overclock it yet due to time restrictions, but I'll see if I can put a post up at some point with the results.
  • izmanq - Wednesday, August 5, 2015 - link

    how come there's no comparison with AMD APU :|
  • Michael Bay - Wednesday, August 5, 2015 - link

    You`re that much of a sadist?
  • Ian Cutress - Wednesday, August 5, 2015 - link

    Check the 'What You Can Buy' section? It's been there from the start.
  • Oxford Guy - Thursday, August 6, 2015 - link

    But of course, you noticeably neglected to include even one FX number, like an 8 thread chip at a typical 4.5 - 4.6 Ghz speed.
  • Bambooz - Friday, August 7, 2015 - link

    Everyone knows faildozers are utter garbage. Why should they waste space in the graphs for them?
  • im.thatoneguy - Wednesday, August 5, 2015 - link

    In the DDR4 section you have a small misprint:
    "DDR3-1600 C11: 13.75 nanoseconds
    DDR4-2133 C15: 13.50 nanoseconds"

    According to the chart, you mixed up DDR4-2133's (C15) latency with DDR4-2666's (C18).
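
    Running the same true-latency formula over those entries bears this out (the DDR4-2666 C18 timing is taken from the comment above, not from the article's chart):

        # latency in ns = CAS cycles * 2000 / data rate in MT/s
        for name, rate, cas in [("DDR3-1600 C11", 1600, 11),
                                ("DDR4-2133 C15", 2133, 15),
                                ("DDR4-2666 C18", 2666, 18)]:
            print(f"{name}: {cas * 2000 / rate:.2f} ns")
        # DDR3-1600 C11: 13.75 ns
        # DDR4-2133 C15: 14.07 ns  (not 13.50)
        # DDR4-2666 C18: 13.50 ns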
  • TallestJon96 - Wednesday, August 5, 2015 - link

    Part way through the so-far excellent review, and I encountered the brutal truth that Skylake is worse at games than Haswell clock for clock...

    That's pretty rough stuff. As someone looking to upgrade from a pitiful i3-2120 to an i5 that I can OC, I may not purchase a Skylake desktop and may stick with Haswell instead.
  • im.thatoneguy - Wednesday, August 5, 2015 - link

    Another typo. In your generational comparison you say "<Previous Gen> to <Next Gen>", except in the case of Skylake where you say "Skylake to Haswell", which is backwards.
  • vFunct - Wednesday, August 5, 2015 - link

    Why does Intel waste their high-end introductions on games?

    They really should go to server Xeons. That's where the money is. Games are a small market.
  • zodiacfml - Wednesday, August 5, 2015 - link

    How do you know? IMO, it seems Skylake is optimized for Xeons.
  • vFunct - Wednesday, August 5, 2015 - link

    Right, which is why Xeons should have been intro'd first.
  • zodiacfml - Thursday, August 6, 2015 - link

    I agree. Maybe, they will. It's just they want to put out the benchmarks as fast as possible.
  • Gigaplex - Wednesday, August 5, 2015 - link

    The Xeon market needs stability. They don't like bleeding edge architectures. Case in point, desktop Haswell exposed fatal bugs in TSX so that feature was pulled. Let the gamers be the testers.
  • Kvaern2 - Thursday, August 6, 2015 - link

    Xeons are built to do mission-critical tasks and thus require a buttload of QA which isn't necessarily required for consumer CPUs.

    Introducing them last makes perfect sense.
  • hojnikb - Wednesday, August 5, 2015 - link

    Do the changes to the base clock mean that we could overclock CPUs like the i3 and Pentium?

    Or is Intel gonna be greedy and lock that :(
  • zlandar - Wednesday, August 5, 2015 - link

    To Achaios: I might be a noob that doesn't understand gaming.

    I understand benchmarks:

    http://www.anandtech.com/show/9483/intel-skylake-r...

    Why don't you explain to the rest of us idiots why we don't see more than a single-digit fps increase going from an i7-2600k to Intel's newest and greatest?
  • Achaios - Wednesday, August 5, 2015 - link

    Because, the gaming benchmarks have been made with the CPU's at stock. The difference in performance will be seen with OC'd CPU's, especially in games that depend on single threaded performance, such as TW: Attila et al. In addition, running a benchmark is not indicative of real in-game performance.
  • zlandar - Wednesday, August 5, 2015 - link

    Find that highly unlikely since it's easier to OC Sandy Bridge than Skylake.

    2 fps. Whoopee.
  • Achaios - Wednesday, August 5, 2015 - link

    No, it doesn't work that way.

    There will be instances in both TW games as well as WoW and Starcraft II where an SB, even overclocked, will get pressed so hard that it slows to a crawl, that is sub-20 FPS. In exactly the same situations, a Skylake CPU will hold itself above 30 FPS, thus retaining playability, whereas SB's game would have been reduced to a lagfest.

    This is why Skylake is a superior gaming CPU and SB is finished.

    To show how nice a person I am, I didn't even mention games such as GTA V that take advantage of HT and more than 4 cores, in which a 2500K would fall flat on its face compared to the performance of a modern CPU.
  • lukevega - Wednesday, August 5, 2015 - link

    @Achaios, your comment concerning a 2500K falling flat on its face in GTA V is not correct, unless you meant one that is not overclocked. I run mine at 4.5GHz with a GTX 970 and a 144Hz monitor, and I get a little over 100fps.
  • Bambooz - Friday, August 7, 2015 - link

    @Achaios: for f*cks sake get lost with your single-threaded "games" bullshit spam
  • BigDDesign - Wednesday, August 5, 2015 - link

    Just spent a half hour reading the comments for these new processors. I used to like reading the comments because they were educational and fun. Lately it's been a bunch of little boys complaining. And then you get your panties in a bunch because Ian believes that Skylake i7 is actually a great time to upgrade. Show some respect..... A comments section is not designed to be a pissing match. Please mention something in the comments section that says something about the new technology and grow the conversation. Intel puts one heck of a lot of time into improving the CPU and you guys are putting them down. Better watch out. Intel could say screw you little haters and not put billions into making faster products, and just count the money.

    I like this new CPU and I have an Ivy Bridge. But I'm on Windows 7 Pro. It would be nice to build a new machine with Windows 10 and keep my existing machine with all its programs set up (over 200 hours... graphic design machine). Keep your damn machine with Win 7, no problem. Just for a minute think of how nice this new setup will be with Win 10 and a whole bunch of new equipment. I used to spend $8,000-$10,000 to build my workstations 15 years ago. 15 years later I can build a full-tilt one for half that. Gaming is cool and all... but there are many other uses of the computer, in case you haven't noticed. Before retirement my workstation consisted of 3 full workstations with 2 monitors, 3 keyboards, 6 RAID arrays, and 22 internal drives (many were Raptors). And I used all of that power working with large photo edits and video renderings, 3D rendering and designing. I will take every bit of speed Intel can put out there. Stop complaining, it will give you wrinkles.
  • boeush - Wednesday, August 5, 2015 - link

    Well, I for one am on board with that epitome of little boyhood - Linus Torvalds - who opines that Intel keeps emphasizing useless graphics that nobody wants at the cost of real CPU performance improvement. The minuscule increase in performance relative to Haswell, despite a new architecture and a process shrink from 22nm to 14nm, is indeed a pretty pathetic display - very much worthy and deserving of derision. At least such is the situation on the desktop: not too surprising, considering it's still a mobile-first design, as have been the last several generations in a row. At this point, it only remains to hope that the mobile Skylakes will show more of a performance boost over mobile Broadwells. We'll see come October...
  • Pneumothorax - Wednesday, August 5, 2015 - link

    Sounds like a case of someone who's been fed steak-shaped hamburger meat for the last 4 years and has forgotten the filet mignon that Intel used to serve us from one generation to the next. I could understand the millennials who've been fed this Core crap for almost half a decade, but someone who lived through the 286-386-486-Pentium-Conroe years sees this release for what it really is.
  • halcyon - Thursday, August 6, 2015 - link

    Yes. Lack of competition, and the simple super-linear voltage/frequency/lithography scaling coming to an end c. 2008-2009. This was discussed widely at the time and very little has changed since. How many have been running 4.5GHz for years now? We are stuck. No amount of pipeline adjustments or additions of cores will improve most non-HPC tasks. The gains have been really minuscule.
  • royalcrown - Saturday, August 8, 2015 - link

    Yeah...Coders will have to learn to optimize, object oriented garbage needs to go bye bye. The next step after that is everyone on the same hardware platform for maximum optimization and no HAL. Full circle back to 1982 !
  • royalcrown - Saturday, August 8, 2015 - link

    How about the Apple II, C-64, IBM 5150, Amiga years...now that's old farts ;)
  • halcyon - Thursday, August 6, 2015 - link

    Flamebait :-)

    Will not bite. Have a nice day!

    BTW, the review was great, as usual. The also very good.
  • zodiacfml - Wednesday, August 5, 2015 - link

    Weird CPU. I couldn't tell what's going on in the discrete gaming benchmarks; there's even a graph where Skylake is only as good as, or even worse than, a Sandy Bridge. I think they have been improving threaded/Hyper-Threading performance to the point that it hurts gaming or a few applications. Productivity and research applications are awesome to run on Skylake. Handbrake is also impressive. They already have a winner for Skylake-based Xeons. I don't pity AMD anymore. I'm numb.
  • yuhong - Wednesday, August 5, 2015 - link

    As a side note, I think many DDR3 modules will work at 1.35V if you lower the clock speed, right?
  • KAlmquist - Wednesday, August 5, 2015 - link

    Sure. In fact you don't necessarily have to lower the clock speed. Just for grins I tried reducing the voltage on my memory to 1.35 volts and memtest indicated that it worked just fine.
  • Bambooz - Friday, August 7, 2015 - link

    My Crucial Ballistix Sport VLP (DDR3-1600 9-9-9-24) run at 1.35V normally. 1.5V is optional in a 2nd XMP profile.
  • fildecuivre - Wednesday, August 5, 2015 - link

    How did you measure the memory latency?
  • Carleh - Wednesday, August 5, 2015 - link

    The PC market is shrinking, yet every new CPU that comes out is a few bucks more expensive than the previous gen. Windows has gone nowhere since version 7. The last several years were very disappointing for the PC.
    There was no real innovation; it was more like "let's try this, perhaps it will work" (Windows 8, 8.1, 10) or "a little faster, a little cooler, a little more expensive" (Intel CPUs). It appears as if those behemoth companies have completely lost their vision and guidance and are drifting in the open sea, moving in the direction of the wind.
  • royalcrown - Saturday, August 8, 2015 - link

    Windows 10 offers really good improvements over 7.

    One is a lot of the older code has been dumped and the codebase modernized.

    Another is that newer DirectX versions put more optimizations in the hands of developers, where they should be.

    8.1 and 10 are a lot better at fixing themselves if something gets corrupted

    The I/O stack is optimized for SSD vs HDD and it makes a good difference in real life.

    No more driver discs/downloads for older platforms like z77 since drivers are baked in.

    I am sure there is bad too, just also some good things.
  • Iketh - Wednesday, August 5, 2015 - link

    this article makes me feel good about my DDR3... I bought this ram in 2012 for $235. 4x4GB 2400 11-11-11 and running it at 2133 9-10-10 1.55v
  • WTSherman - Wednesday, August 5, 2015 - link

    Great, in depth read but as far as I'm concerned there is no point trying to sugar coat it, Skylake is not very impressive and neither is DDR4. I guess we're lucky to get 2-3% increases or even to have new chips coming to market targeting desktops and overclocking at all considering the obsession with mobile devices and power savings these days.
    I don't see how you can come to the conclusion that this is somehow a positive thing for desktop performance enthusiasts.
  • zodiacfml - Wednesday, August 5, 2015 - link

    No power consumption data after a benchmark run? I think these chips are already overclocked versus 65 watt broadwells.
  • Sweepr - Wednesday, August 5, 2015 - link

    The review is flawed.
    Skylake is severely limited by RAM speed, AnandTech should have used DDR4-2666 or better.

    See DDR4-2133 vs DDR4-2666 here:
    http://forums.anandtech.com/showpost.php?p=3760873...

    DDR4-2666 vs DDR4-3000
    http://forums.anandtech.com/showpost.php?p=3760792...
  • neocpp - Wednesday, August 5, 2015 - link

    NPB test results aren't there in the Linux section (shows the NAMD results twice).
  • shadowjk - Wednesday, August 5, 2015 - link

    "Throughout the testing, it was noticeable that multithreaded results seem to (on average) get a better kick out of the IPC gain than single threaded. If this is true, it would suggest that Intel has somehow improved its thread scheduler or offered new internal hardware to deal with thread management."

    I was under the impression that with hyperthreading disabled, thread scheduling was entirely an OS controlled thing. Without hyperthreading, the only thing I can think of is better cpu cache handling?
  • ThomasS31 - Wednesday, August 5, 2015 - link

    Thank you Ian!

    Very comprehensive and a good test from all possible angles, and it makes sense.

    Keep it up! :)
  • Sweepr - Wednesday, August 5, 2015 - link

    Come on Ian, find a faster DDR4 kit and do some comparisons.
    Skylake is severely limited by RAM speed, AnandTech should have used DDR4-2666 or better.

    See DDR4-2133 vs DDR4-2666 here:
    http://forums.anandtech.com/showpost.php?p=3760873...

    DDR4-2666 vs DDR4-3000
    http://forums.anandtech.com/showpost.php?p=3760792...
  • ajlueke - Wednesday, August 5, 2015 - link

    It all depends on your needs and individual setup whether an upgrade is going to be worth it. From a gaming standpoint it is going to be a tough sell, especially for me. I personally have my PC hooked up to my receiver, and from there to the TV, to make use of my multi-channel audio setup. I am limited to 1080p on the HDTV, and I typically game with Vsync on to eliminate tearing. So anything above 60fps isn't going to make much difference as far as my experience goes.
    In Alien Isolation, for example, every CPU is running above 60fps at 1080p, even the most lowly AMD APU.
    Shadow of Mordor is a similar story, but GRID shows a solid difference between the AMD APUs and Intel, though not between the different Intel generations. Upgrade to the GTX 980, however, and everything is back over 60 again. In GTA V, the Intel CPUs are over 60 fps from the 2600K right on up, while the AMD APUs are in the upper 40s. So shelling out money for the latest and greatest Intel CPU setup isn't going to gain me much at all. I don't really even gain much by going Intel.
    Now, if you have invested in a nice 120Hz monitor, the picture changes a bit. There is a definite benefit to going Intel in that scenario, but between the different generations? The area where it seems to net the most improvement is GRID on the R9 290X. But that difference is mitigated by switching to the GTX 980. So again, it seems a better bet to just get a new GPU.
  • structuralintegrity - Wednesday, August 5, 2015 - link

    Your review compares everything to Sandy Bridge, which is great, but the i5-2500K that many people have been sitting on and waiting to see if it's worth upgrading from doesn't seem to appear on any charts.

    Some of your cinebench charts even have lynnfield processors, but not i5-2500k. You should add it to the charts for every benchmark for a straight comparison.

    I know the headings are labelled "tests on i7-6700k", but the headline says you're testing the i5-6600k too and the people upgrading from the i5-2500k would really appreciate seeing it on the charts if you have those benchmarks already!
  • Brett Howse - Wednesday, August 5, 2015 - link

    Don't forget we always have our Bench section at the top of the site where you can compare anything we've tested. Here's what you're looking for: http://anandtech.com/bench/product/288?vs=1544
  • sjakti - Wednesday, August 5, 2015 - link

    Thorough review as usual, thanks so much! I really like the generational overclocking table, that's a really helpful general guide.
  • spikebike - Wednesday, August 5, 2015 - link

    I'm all for faster CPUs, but what's the point in a paper launch of skylake when I can't even find the broadwell's for sale yet? I've yet to find the i5-5675C in stock anywhere.
  • erte0 - Wednesday, August 5, 2015 - link

    Now we're seeing what a monopoly looks like: a 5% gain in every next gen of CPU.
    I'm disappointed. Perf per watt is improved. But what the hell, I (we?) WANT MORE!

    @Ian. Do you really test all those Core i7s with 4 threads? If not, please correct the CPU descriptions in the graphs.
  • RafaelHerschel - Wednesday, August 5, 2015 - link

    I'm really disappointed with the iGPU performance.

    I get it. With a high-end CPU a decent graphics card is the way to go, if gaming is an objective. An i3 combined with a GTX 750 will outperform every iGPU. But I believe that a high-end CPU should excel in all areas.

    I'm considering a new PC for work. I don't need an extremely powerful processor, but since the expenditure is work-related, I don't mind paying a bit more and get a fast processor just for the fun of it. And maybe overkill will save me the occasional second.

    And if I get a fast processor, why not use the PC for occasional casual gaming? The thing is I'm going for a small Mini-ITX build with an SFX PSU and there is no room for a PCIe card. And if I go for a slightly larger Mini-ITX case, it's tempting to install a decent soundcard since I listen to music a lot while working.

    It isn't a big deal, I have a dedicated gaming PC, but the logic somewhat escapes me. I guess I'll go for the i5-5675C.
  • Gigaplex - Wednesday, August 5, 2015 - link

    Discrete internal sound cards are a waste of time if all you care about is music. Spend the money on speakers and a digital receiver if you've got money to burn for sound quality.
  • RafaelHerschel - Wednesday, August 5, 2015 - link

    Audio is very much a matter of personal preference. I often listen with headphones and the combination of a good soundcard plus a decent headphone amp sounds better to me than most other options. Compared to onboard sound it's an expensive solution, compared to a comparable hi-fi system it's rather cheap.

    In the PCs that don't have a discrete soundcard I tend to use the built-in DAC in a headphone amplifier.

    When I play music through inexpensive but reasonable PC speakers I don't get great sound quality, but the soundcard still makes a difference. Since the PC speakers are for background music only, I keep them simple and inexpensive.
  • Qwertilot - Thursday, August 6, 2015 - link

    Patience :) This is just two chips.

    The Skylakes with big - probably rather bigger/faster than the Broadwell iGPU it seems - iGPU's are coming in time.
  • iceveiled - Wednesday, August 5, 2015 - link

    Thanks for the write up, that was informative. I think I'll hold onto my sandy bridge a bit longer though. My 2500k at 4.5 Ghz still meets my 1080p needs. The problem with PC gaming isn't really my hardware, it's really bad ports. (Batman the latest offender). I've got a PS4 for a reason. As for the other computing I do, my computer just still blazes.

    I'll wait for tick. Plus I'm a broke student.
  • Duxa - Wednesday, August 5, 2015 - link

    I am still rocking a Nehalem-based i7 930 OC'd to 4.0GHz, which was keeping up pretty well for the last 5 years against stock-speed procs and gave no reason to upgrade. While there is finally a reason to upgrade, I think I'll be holding off another 1 or 2 generations, until the things I throw at it actually start getting bottlenecked by the CPU before the GPU.
  • halcyon - Thursday, August 6, 2015 - link

    I'm in the same boat with an i7-920 OC'd, with 12GB of tri-channel RAM plus SATA3 and USB3 upgrade cards.

    Now, if I upgrade, I get dual-channel memory, less bandwidth, a +2-3% RAM speed difference and maybe a 15% speed difference in most apps I use.

    What's the point?

    Intel couldn't do better in 6 years? Sad, really sad.

    I'd buy the X99 platform in an instant IFF Skylake-E was available, OCed well and provided real performance gains, and the platform wasn't so behind the times.

    So I will wait for another platform upgrade till 2017, and Intel loses another client until then.

    Oh, the times when AMD could still compete in the CPU arena...
  • sickbeast93 - Wednesday, August 5, 2015 - link

    Hey Ian, great article, you really hit this one out of the park! You were very thorough and detailed in your testing. I really appreciate it, it's very informative.
  • CaedenV - Wednesday, August 5, 2015 - link

    Truly amazing review! Especially considering the lack of information you were given!

    Lingering questions:
    1) Will the new audio IO allow for cellphone-like audio wake from sleep to allow things like Cortana commands to wake the computer? Because that would be pretty neat, especially as this tech makes its way into more portable devices.

    2) I understand that 10GbE on current motherboards is done with an add-on controller, but I somehow thought that we would see a more native support of the new Ethernet standard as consumer oriented 10GbE switches are expected to hit market in the next year or two. Do you think we will see more 10GbE integration on motherboards with this generation? Or are we still a few years out on that?

    3) DDR4 obviously has some maturing to do as far as speed and CL (to say nothing of price) goes. Do you think we will continue to see the dramatic improvements and price reductions that we have seen over the last year? Or is it going to slow down a bit now as it gains adoption and becomes ever more mainstream?
    Perhaps more importantly, are we going to see a crash in the DDR3 market as places start clearing out product to make way for new DDR4 modules?

    4) I am getting closer to feeling justification to upgrade my old Sandy Bridge system... but the justification just isn't there for me yet. In the benchmarks it is clear that the bottleneck is still squarely on the GPU rather than the CPU for games, and when you are running at 4.2+GHz the bottleneck is even further pushed onto the GPU compared to your 3GHz tests.
    For other things like video editing and video rendering (especially to HEVC) the speed increase is certainly there... but we are still talking about processes that take several hours. It is one of those things where so long as the process is done by the time I wake up in the morning, the performance isn't really all that important, and the Sandy Bridge chip is still getting the job done.
    The big selling point to me is the ability to run multiple m.2 drives in a RAID configuration. I find that the bulk of what I do is still limited by the speed of the SSDs rather than the CPU, and moving from SATA3 (especially my older SSDs) to m.2 is going to be a large performance increase that simply cannot be had with my current rig.

    Anywho, awesome review! Can't wait to build a few rigs on this platform for my friends who are hurting to upgrade from their Core2Quad game rigs. They are truly going to be blown away by this!
  • flyingchicken - Wednesday, August 5, 2015 - link

    5% performance gains over devils canyon with an 8% higher TDP. Seems like a downgrade to me. Especially considering the overclocking results that have surfaced online showing ridiculous voltage requirements and temperatures to reach a modest 4.7ghz overclock.
  • boeush - Wednesday, August 5, 2015 - link

    What are the odds that motherboard/BIOS immaturity at this point could be subtracting another 5-10% from the CPU (and even iGPU) performance - beyond the unrealized potential of higher-speed (but more expensive) DDR4 than the base 2133?
  • boeush - Wednesday, August 5, 2015 - link

    *of *than ... blasted autocorrect...
  • SanX - Wednesday, August 5, 2015 - link

    Wow, so much waiting for this final conclusion: the technology is in a total stall. In a few more generations it will reach the slowness of a brain cell.
  • SanX - Wednesday, August 5, 2015 - link

    /* make Edit function you dumb retros
  • zodiacfml - Wednesday, August 5, 2015 - link

    What is the power consumption at 1.2V?
  • muratti - Thursday, August 6, 2015 - link

    Can anybody explain why the Skylake CPU is 91 Watt TDP with a half-as-strong GT2 GPU and no eDRAM, while the Broadwell CPU is a mere 65 Watt TDP with a twice-as-strong GPU and eDRAM?!

    No way I'm going to buy a skylake cpu.
  • Ryan Smith - Thursday, August 6, 2015 - link

    Much higher clockspeeds and a higher TDP limit (better turbo).
  • Qwertilot - Thursday, August 6, 2015 - link

    Also the rumours (credible as they seem to match this release) have the 65w i7 Skylake at 3.4/4, so presumably a bit ahead of Broadwell if you don't need the IGP.

    They seem to be making very real progress around the ~35w quad level but finding it much harder to hold that progress as they push for these 'max clock' quad cores. Maybe they should just pack them in and stick a huge IGP on them instead. Well that or move all overclocking to the 6+ core platform.

    They'll have considered it all though and tend to know what they're doing :)
  • Beaver M. - Thursday, August 6, 2015 - link

    A Broadwell with 128MB of eDRAM, which could be overclocked to 4.5 GHz and use 3000+ DDR4, would be the best. We would have a killer upgrade then, which would even tempt Sandy Bridge users. Skylake, on the other hand, is a joke and only creates even more insecurity among people, since they don't know if they should take a 5820K, 5775C, 4790K or 6700K. I still don't get why they released it in this state and at this price. If it were at least priced similarly to Sandy Bridge...
  • vision33r - Thursday, August 6, 2015 - link

    These new sockets, chipsets, and CPUs are meant to push consumers to a newer platform with higher margins, rather than let them keep reusing their old hardware. For probably 3 years I have used the same set of 8GB DDR3 Corsairs that I kept swapping into different hardware. DDR4 does not interest me one bit.
  • sammyflinders - Thursday, August 6, 2015 - link

    "Sandy Bridge, Your Time Is Up."

    -OK, this statement was not only bold but, given my primary application (gaming), completely false.

    I'm sorry, but as an owner of a 5.0GHz OC Sandy Bridge 2600K and a 980 Ti, those benchmark results you posted show nothing compelling for a substantial financial outlay to upgrade.

    Hundreds and hundreds of dollars for what? A 1-4fps difference in games? No thank you.

    I admit that I have not read the entire review yet, and that other applications no doubt fare better, but as an avid gamer first and foremost, this processor follows suit with all its predecessors since my original Sandy Bridge purchase in offering almost nothing compelling.
  • xsilver - Thursday, August 6, 2015 - link

    it doesn't even need to be 5GHz - the promised overclocking benchmarks should also test performance per clock (maybe run all CPUs from the 2600K onwards @ 4GHz, for example) and see what we get. From the extrapolation of these current benchmarks, I think it's around a 10% performance boost per clock, but you have to change the CPU/RAM/mobo --- the cost is going to be hard to justify for many.
  • SJD - Thursday, August 6, 2015 - link

    There's still an error in the test setup section.... 6600K is showing as 4C/8T when it should be 4C/4T...
  • Dragonlordcv - Thursday, August 6, 2015 - link

    "Sandy Bridge, Your Time Is Up". Not at all on my perspective.
    As i Sandy owner (I7 2600k) , there is no noticeable difference to lead to an upgrade into Skylake especially for games. The only reasonable upgrade will be a top GPU. Skylake is for those who will built a new rig from scratch.
    Nice review though.
  • iwod - Thursday, August 6, 2015 - link

    IF, and if, Skylake is the biggest change to Intel's x86 architecture in recent years, then it will probably take some time for compilers to be optimized for it. Not to mention for compilers to take advantage of the additional instruction sets.
  • zodiacfml - Thursday, August 6, 2015 - link

    You might have a point there. At Tom's Hardware, Skylake performed significantly better on Windows 10 than on W8.1. They don't know why yet.
  • SuperVeloce - Thursday, August 6, 2015 - link

    Yeah, looks like this is the case. Everyone seems to get those better results on win10. Win 7 is the worst of the three.
  • halcyon - Thursday, August 6, 2015 - link

    Remember the time, when you could OC 50% or even 100% higher frequency on your CPU?

    Or the time, when every single c. 1.6 year cycle would bring you at least 50% performance increase?

    Those times are over.

    I'm still running 4-core(8HT)/tri-channel/SATA3/USB3 platform from 2009.

    I see very little reason to upgrade, other than tinkering and spending time.

    The money is burning in my hand. Intel is refusing to come out with something that is really worthwhile.

    Ah well, maybe Skylake-E or maybe Cannonlake-E, or maybe...

    I've been waiting to upgrade for a long time (and have upgraded my GPU several times, as there at least I get some bang for my buck).
  • Bambooz - Friday, August 7, 2015 - link

    I remember my Core 2 Duo E4300 (1.8GHz) that ran at 3.2GHz (400MHz FSB *8) for most of its life, till I replaced it with a Core 2 Quad Q6600 (2.4GHz) that ran at 3.6GHz (400MHz FSB *9). Those were the days when OCing was actually a fun thing to do.. till Intel fucked everyone over by making you pay extra for CPUs that can be OCed, or making the rest incapable of any meaningful OC at all.
  • watzupken - Thursday, August 6, 2015 - link

    Come to think of it, the comparison of the i7 processors actually puts the Skylake i7 6700K in an unfavorable position. Honestly, I am more keen to get a Broadwell i7 5775C if I am looking to upgrade. The significantly better graphics, lower TDP and L4 cache seem like a better deal to me. Clockspeed is no big deal since the i7 5775C is unlocked for overclocking, if I am not mistaken.

    I think Intel just messed themselves up by launching the desktop variant of Broadwell with Iris Pro graphics just ahead of Skylake.
  • yhselp - Thursday, August 6, 2015 - link

    Thank you for the review. You place a lot of emphasis on gaming and overclocking, value for money, upgrading from SB, and rightly so; however, there are no gaming benchmarks with an overclocked Skylake CPU to illustrate that point -- it would be particularly interesting to see how an i5-6600K@4.5Ghz (assuming it can reach that) fares as that would be most representative of this market.

    All that Sandy Bridge gamers see now is a 0.9fps to 5.8fps (excluding GRID) improvement which can't possibly justify the price of a new CPU, motherboard and memory.
  • MiSt77 - Thursday, August 6, 2015 - link

    I would like to argue that it's a little bit too early for the authors' assertion that Skylake only gives meagre performance improvements over Broadwell (and earlier processors).

    Considering that Skylake does in fact feature one big, if not ground breaking, improvement in that (with AVX-512) programmers now have thirty-two¹ 512-bit registers at their disposal, I think Skylake-optimized software should - at least for algorithms amenable to SIMD optimizations - be able to deliver considerable performance improvements.

    In this regard it would be interesting to know, if any of the tested software already comes with proper AVX-512 support (or not, as I suppose) - regrettably, there is not even a single mention of this in the benchmark section ...

    ¹ That is, twice as many as before, which also should help optimizing compilers, as it reduces register pressure.
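
    As a rough illustration of the kind of SIMD code being discussed here (not something from the review - the function name is made up, and it assumes a CPU and compiler with AVX-512F support, e.g. building with -mavx512f), adding two float arrays 16 elements at a time could look like this in C:

    #include <immintrin.h>

    /* Hypothetical sketch: sum two float arrays with AVX-512F intrinsics,
       processing 16 floats per iteration. Assumes n is a multiple of 16. */
    void add_arrays(const float *a, const float *b, float *out, int n)
    {
        for (int i = 0; i < n; i += 16) {
            __m512 va = _mm512_loadu_ps(a + i);   /* load 16 unaligned floats */
            __m512 vb = _mm512_loadu_ps(b + i);
            _mm512_storeu_ps(out + i, _mm512_add_ps(va, vb));  /* store 16 sums */
        }
    }

    The same loop written with 256-bit AVX2 would handle 8 floats per iteration, so the wider registers roughly halve the instruction count for data-parallel loops like this - provided memory bandwidth doesn't become the bottleneck first.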
  • SuperVeloce - Thursday, August 6, 2015 - link

    Um, no. As far as we know, only xeons will get avx-512 enabled.
  • MiSt77 - Thursday, August 6, 2015 - link

    Thank you for this factual correction!

    To my dismay I have to admit that you are right: AVX-512 will only be offered on (some?) Xeon models. As I'm one of those who were waiting for Skylake specifically because of AVX-512, I find this very disappointing: thanks for ruining my day, Intel.

    On the other hand, with the E3-1535M v5 and E3-1505M v5 two (mobile) Xeons were announced recently; maybe my dream of an (AVX-512 enabled) notebook can still become a reality ...
  • Da W - Thursday, August 6, 2015 - link

    Yeee! I'll be able to keep my i7 4770k for at least a decade without upgrading!
    Time to spend that money on a surface pro 4, a new phone, hololens, whatever else, but my desktop will remain usable!!
    Big change from the 90s :)
  • TheGame21x - Thursday, August 6, 2015 - link

    My Core i5 2500k is still chugging along nicely at 4.2 GHz so I think this'll be another generation I'll be skipping. The cost of buying a new motherboard and RAM alongside the new CPU is a bit more than I'm willing to spend as of now given the performance increase.
  • sweeper765 - Thursday, August 6, 2015 - link

    Another Sandy Bridge owner here (2500k @ 4.6GHz 1.27v load).
    I've been waiting for a worthwhile upgrade. This is close but i'm not yet convinced.

    The ipc improvements and new instructions are attractive but there are some negative points too:
    - hotter, always hotter. Why no temperature analysis? I see from other reviews temps in the high 80's with water cooling setups. Is this what passes as normal these days?
    - need to buy DDR4 besides cpu and motherboard. I see there is a strong push towards this, despite having no real benefit relative to DDR3 (same bandwidth at the same frequency). Basically it's a kind of high frequency DDR3L. Please don't say it saves power, it's only a few watts less than ddr3 at best.
    - why is stock voltage so high for 14nm technology? It's higher than my oc voltage on a 32nm 4 years old cpu. Power consumption in load is also pretty high.
    - i was hoping for avx3
    - cannot install win7 from usb are you kidding me? I don't have an optical drive and was doing fine without it for years now
    - price goes up from generation to generation. 2600k launched at 317$, 6700k at 350$
    - connectivity wise i would have liked to see more usb on back panel and more internal sata , especially with sata express consuming 2 standard ports.
  • janolsen - Thursday, August 6, 2015 - link

    Great review, both on details and timeliness :)

    Strange that graphics are slower. Why did Intel choose to downgrade the GPU from Broadwell, when scrapping it makes only a tiny percentage of difference in overall CPU speed?

    And discrete graphics, surprisingly, are also slowed down by the new CPU/chipset.

    No great reason to upgrade it seems...at least not for gamers.
  • HollyDOL - Thursday, August 6, 2015 - link

    Few points...

    1. I think testing with Windows 7 is quite outdated; there seem to be benchmarks hinting at interesting performance gains with Skylake + Windows 10... It needs to be checked out imo

    2. It also seems DDR4-2133 gives quite lousy numbers. Where the difference is noticeable, it looks like DDR4-3000ish hits the sweet spot

    3. There are decent chances performance will still grow with new firmware and/or compiler versions. Alas, for now these numbers are all we have to go on.

    4. The absence of decent graphics (5775C-ish) seems like a step down to me. Either I would like absolutely no silicon wasted on never-used graphics, or graphics I can use for the usual good old "DirectX 9" or older gaming without needing to kick in the dGPU.

    5. There are definitely improvements but atm nothing that big that would make me upgrade to new platform (CPU+MOBO+RAM+GPU+NVMe SSD) to make it worth it (atm 2500K). Guess Kaby Lake and HBM2 nVidia could make it worth it.

    6. Thanks Ian for exhaustive review.
  • Bambooz - Friday, August 7, 2015 - link

    "Let's jump on the Windows 10 bandwagon and leave a solid OS that never caused problems like their so-called successors ( http://www.extremetech.com/computing/164209-window... ) behind for no good reason"
  • mycrobe - Thursday, August 6, 2015 - link

    It's interesting to speculate that Intel put so much effort into improving 14nm yield for Skylake that they scaled back on improvements to the core outside of the GPU and memory interface.

    Perhaps Skylake is somewhere between a tick and a tock. Call it a "tück".

    Implicit in that statement is the hope that the real 14nm tock will be Kaby Lake.
  • versesuvius - Thursday, August 6, 2015 - link

    "There’s no easy way to write this."

    Why? Are you having an AMD Flagship GPU moment with almighty Intel? Is that eating into the fabric of the universe as you know it?
  • zlandar - Thursday, August 6, 2015 - link

    After looking at other reviews of Skylake I'll give props to Ian for including a Sandy Bridge processor in his review. Many of the other reviews don't include SB which is a glaring omission since many PC users still use one in their main rig.

    I still disagree with the assertion SB is "dead" or there is a "compelling" reason to move from SB to Skylake.

    It's not that Skylake is bad or mediocre. It's just not the evolutionary jump that Sandy Bridge was from the 1st gen Core processors. When you ask people to replace a motherboard and RAM for a new processor the expectations are higher.
  • superjim - Friday, August 21, 2015 - link

    "It's not that Skylake is bad or mediocre. It's just not the evolutionary jump that Sandy Bridge was from the 1st gen Core processors. When you ask people to replace a motherboard and RAM for a new processor the expectations are higher."

    This should have been on the conclusions page and sums up how I feel about SL.
  • eSyr - Thursday, August 6, 2015 - link

    "But it is worth remembering that these tests are against a memory clock of 2133 MHz, whereas the others are at 1866 MHz. As a result, the two lines are more or less equal in terms of absolute time, as we would expect." — but latency graph is in CPU clock cycles, isn't it? And, as a consequence, these graphs should be directly comparable (and DDR4 has bigger latency in absolute time), shouldn't they?
  • 8steve8 - Thursday, August 6, 2015 - link

    when can we actually buy this? the 5775c isn't even for sale yet here in the USA
  • Fruitn - Thursday, August 6, 2015 - link

    We have in Denmark. Both the 5775c, 6600K, 6700K :)
  • medi03 - Thursday, August 6, 2015 - link

    Uhm, 290X, not Fury vs 980?
  • Oxford Guy - Thursday, August 6, 2015 - link

    Don't forget only weaker APUs instead of the FX.
  • Oxford Guy - Thursday, August 6, 2015 - link

    They have been doing this for quite some time.
  • Ananke - Thursday, August 6, 2015 - link

    Intel shot itself in the foot with the DDR4 push. No matter how advanced the Skylake is, it is not worth upgrading from DDR3 to DDR4.
    Let's see what will happen on the mobo frontier - mainstream mobos with DDR3 will be the sellers.
  • boeush - Thursday, August 6, 2015 - link

    DDR4 will be of more immediate benefit on laptops and tablets, where every Watt counts. DDR4 will be of more relevance once 3000+ offerings are cheaper and more abundant (there are some available now, but quite expensive.). Give it a few more months... In the meantime, early adopters pave the way with their wallets, as usual.
  • lord_quake - Thursday, August 6, 2015 - link

    yeah well if Intel created a slot-connected level 4 cache on the ring bus for i5 and i7 class CPUs - say 2, 4, 6, 8, 10, 12 or 16 MB of L4 cache mounted on the main board - they would give do-it-yourself builders more reason to buy Intel and more freedom to customize the system they want, and in the end it would save transistor count on the CPU die, because Moore's law is starting to affect things.
  • Oxford Guy - Thursday, August 6, 2015 - link

    I would say that I am shocked that you did not include the FX processor in the charts and instead included only the weaker APUs. However, I am not shocked.
  • xunknownx - Thursday, August 6, 2015 - link

    kind of disappointed that you guys didn't include the Haswell-E results in your benchmark graphs. I understand that X99 is more geared towards enthusiasts, but the entry-level i7-5820K is only about $30 more than the i7-6700K. Anyone considering the 6700K should consider the i7-5820K as well. It should be a no brainer.
  • sudz - Thursday, August 6, 2015 - link

    I really want to see a benchmark for a high CPU bottleneck game - Like Cities Skyline.
  • Khanivore - Thursday, August 6, 2015 - link

    And so the Sandy Bridge plodded on into the sunset till the next time...
  • Chaser - Thursday, August 6, 2015 - link

    Now even more pleased with my 5820K rig I bought two months ago.
  • Artas1984 - Thursday, August 6, 2015 - link

    I disagree with the statement that Skylake should now be a definitive replacement for Sandy Bridge.

    It's like saying that your game runs at 200 FPS slowly, so now you have to upgrade to get 250 FPS. Of course i am not talking about games directly, it's a metaphor, but you get the point.

    Also, with how fast computer electronics develop, people are "forced" to up their quality of life at the expense of buying more important things in this short fu+kin life. Just because things are manufactured does not mean you have to live someone else's life! I for one don't give a shit about smartphones and will never use them anyway, and I will never use 3D goggles or monitors in movies or gaming just because they exist.

    On top of that:

    AMD's chips have not yet reached the performance levels of Sandy Bridge. The piece of crap FX 9590 falls behind the 2600K in every multi-threaded bench and gets beaten by the 2500K in every game!
  • Oxford Guy - Friday, August 7, 2015 - link

    Take a look at this: http://www.techspot.com/review/1006-the-witcher-3-...
  • Oxford Guy - Friday, August 7, 2015 - link

    It seems there's a reason why Anandtech never puts the FX chips into its charts and instead chooses the weak APUs... Is it because the FX is holding its own nicely now that games like Witcher 3 are finally using all of its threads?
  • Oxford Guy - Friday, August 7, 2015 - link

    A 2012 chip priced as low as $100 (8320E) with a $40 motherboard discount (combo with UD3P set me back a total of $133.75 with tax from Microcenter a few months ago) is holding its own with i7 chips when overclocked, and at least i5s. Too bad for AMD that they released that chip so many years before the gaming industry would catch up. Imagine if it was on 14nm right now instead of 32.
  • boeush - Friday, August 7, 2015 - link

    Oh yeah, real impressive: FX 9590 @ 4.7 Ghz is a whole 1% faster than the 4 year old i5 2500K @ 3.3 Ghz. I'm blown away... Particularly since the 9590 overclocks to maybe 5 Ghz if you are lucky, at 200 W with water cooling, while the 2500K overclocks to 4.5 Ghz on air. And it's not as if that game isn't GPU limited like most of the others...

    Fanboi, please.
  • Oxford Guy - Friday, August 7, 2015 - link

    You're missing the point completely, but that's OK. Anyone who looks at the charts can figure it out for themselves, as the reviewer noted. Also, if you would have taken the time to look at that page before spouting off nonsense, you would have noticed that a high clock rate is not necessary for that chip to have decent performance -- negating the entire argument that extreme overclocking is needed. The game clearly does a better job of load balancing between the 8 threads than prior games have, resulting in a much more competitive situation for the FX (especially the 8 thread FX).

    As for being a fanboy. A fanboy is someone who won't put in an FX and instead just puts in a bunch of weaker APUs, the same thing that has been happening in multiple reviews. Name-calling is not a substitute for actually looking at the data I cited and responding to it accurately.
  • Markstar - Friday, August 7, 2015 - link

    I totally agree - looking at the numbers it is very obvious to me that upgrading is not worth it unless you are heavily into video encoding. Especially for gaming, spending the money on a better graphics card is clearly the better investment, as the difference is usually between 1-3%.

    My i5-2500K is "only" at 4.5GHz and I don't see myself upgrading anytime soon, though I have put some money aside for exactly that purpose.
  • sonny73n - Friday, August 7, 2015 - link

    I don't agree with your bold statement: "Sandy Bridge, your time is up". Why do you even compare the Skylake and SB K series at their stock speeds? I have my i5-2500K at 4.2GHz now, with a Prime95 stress test max temp of 64C on air cooling. I can easily clock it to 4.8GHz and have done so, but never felt the need for clocks that high. With ~25% overall system improvement in benchmarks and only 3 to 5% in games, this upgrade doesn't justify the cost of a new MB, DDR4 and CPU. I'm sure a few people can utilize this ~25% improvement, but I doubt it would make any difference in my daily usage. Secondly, a Skylake system alone can't run games. Why upgrade my SB when it can run all the games I wanted it to with an EVGA 780? For gamers, wouldn't it be a lot wiser and cheaper to spend on another 780 instead of spending on a new system? And all that upgrade cost is just for 3 to 5% improvement in games? Sorry, I'll pass.
  • MrSpadge - Friday, August 7, 2015 - link

    Ian, when testing the memory scaling or comparing DDR3 and 4 you shouldn't underclock the CPUs. Fixing their frequency is good, but not reducing it. The reason: at lower clock speeds the throughput is reduced, which in turn reduces the need for memory bandwidth. At 3 vs. 4 GHz we're already talking about approximately 75% the bandwidth requirement that a real user would experience. In this case memory latency still matters, of course, but the advantage of higher bandwidth memory is significantly reduced.
  • boeush - Friday, August 7, 2015 - link

    A good point, but I think you missed this page in the review:

    http://www.anandtech.com/show/9483/intel-skylake-r...

    The other pages where all CPUs are normalized to 3 Ghz are for generational IPC comparison, not memory scaling. The later "what you can buy" pages repeat all the same tests but with all CPUs at full default clocks, as well - to gauge the combined effect of IPC and frequency scaling across generations.

    Still missing and hopefully to be addressed in a future follow-up, is a study of generational overclocked performance, and performance under DDR4 frequency scaling with and without CPU (other than memory) overclocking.
  • MrSpadge - Friday, August 7, 2015 - link

    Well, on the page before the one you linked Ian says:
    "For these tests, both sets of numbers were run at 3.0 GHz with hyperthreading disabled. Memory speeds were DDR4-2133 C15 and DDR3-1866 C9 respectively."
    I think this applies to both memory scaling pages.

    You've got a good point, though, that the "what you can buy" section compares DDR4-2133 and DDR3-1600 (latency unspecified) at default CPU clocks. And from a quick glance the differences there are not that different from the ones obtained in the dedicated memory scaling section.
  • Nutti - Friday, August 7, 2015 - link

    Left out all the AMD FX processors? Looks pretty bad for AMD this way. FX is still much better than 7870K. Zen will nicely catch up with Intel. AMD needs 40% improvement over FX8350 and they will sure get that through better IPC and multithreading.
  • Bambooz - Friday, August 7, 2015 - link

    Wishful thinking from a fanboi
  • Oxford Guy - Friday, August 7, 2015 - link

    Ad hominem isn't a rebuttal, bud.
  • Oxford Guy - Friday, August 7, 2015 - link

    The FX does nicely in a modern game like Witcher 3 that uses all of its threads as can be seen here: http://www.techspot.com/review/1006-the-witcher-3-...

    Anandtech has been doing the "let's throw in a dozen APUs and completely ignore FX" for some time now. The only thing it accomplishes is obscuring the fact that the FX can be a better value for a workstation (rendering and such) that also has some gaming requirements.
  • nils_ - Friday, August 7, 2015 - link

    You probably should have run the Linux Tests through Phoronix Test Suite, the Linux Bench seems rather outdated with Ubuntu 11.04 (we are on 15.04 now).
  • eeessttaa - Friday, August 7, 2015 - link

    Great article as always. I wish Intel had left the FIVR in. I know how hot it got, but instead of removing it they should've improved on its design.
  • Nelviego - Friday, August 7, 2015 - link

    Seems it might finally be time to OC my i7 2600k and give it another 4 1/2 years. ;)
  • Oxford Guy - Friday, August 7, 2015 - link

    Intel made everyone think Skylake was going to be a massive improvement on all fronts. Massive IPC increase. Massive technological advance. People shilled for Intel by claiming it was highly likely that Skylake wouldn't need a new socket and would just use LGA 2011-3.

    Instead, we get ... what? Chips that aren't significantly better than Haswells, let alone Broadwell?

    I guess Intel is sandbagging even more than usual since AMD isn't doing anything new on the CPU front. So much for all of the intense Skylake hype. It amazes me, too, how people are blithely now saying "I guess I'll wait for Kaby Lake" -- the same people, often enough, who said Skylake would revolutionize computing.

    It looks like this is what happens when Intel has minimal competition. The FX chips are still clinging to relevance now that consoles have 8 threads and weak individual cores (not that you'd know it based on the way this site never puts even one of them into its reviews in favor of weaker APUs) -- and because rendering programs like Blender can use their threads which can make them a decent value for budget workstation use, but their design is from 2012 or so. Overclocking is also keeping old chips like the 2500K viable for gaming.

    I admit I fell for the hype a bit. I was expecting at least some sort of paradigm-shifting new tech. Instead... I don't see anything impressive at all. A new socket... a small gain in efficiency... rinse, repeat.

    An article I read recently said that overclocking will become increasingly non-viable as process nodes shrink. It seems we're seeing that already. The article says an Intel executive said Intel is taking overclocking seriously but the company may not have much choice.

    Intel should have included hardware devoted to h.265 encoding for Skylake at least. Maybe it did, but it's not like I can tell by the charts provided. What is the point of putting in that h.265 encoding chart and not including the fastest non-E Haswell (4790K) and a Haswell-E (5820K)? It makes it look like your site is trying to hype Skylake. Don't you think people who are doing a lot of tasks like that which require serious performance (like the "slowest" setting in Handbrake) are going to need to see a comparison with the best available options?
  • SuperVeloce - Saturday, August 8, 2015 - link

    Wait, what? Skylake and 2011-3 in the same sentence? Who, for the love of god, would say such a thing? Power delivery is (again) new and very different from Haswell/Broadwell, so there is no chance to reuse 1150 and 2011-3
  • Oxford Guy - Saturday, August 8, 2015 - link

    The belief put forward was that Broadwell would be compatible with Haswell desktop motherboards and Skylake would be compatible with Haswell-E motherboards.
  • KAlmquist - Saturday, August 8, 2015 - link

    The analysis by Puget Sound Systems offers a plausible explanation of why Skylake has a higher TDP than Haswell or Ivy Bridge: the integrated GPU that comes with Skylake is faster and draws more power. It appears that if you don't use the integrated GPU, Skylake draws slightly less power than Haswell.
  • SuperVeloce - Saturday, August 8, 2015 - link

    That's definitely plausible. The other thing here is the TDP 4790K uses. 88W is too conservative for the clocks and voltages from that chip. They needed to up that I am sure.
  • bobbozzo - Saturday, August 8, 2015 - link

    Error in graph on final page:
    "Gains over Sandy Bridge.png" - the key for green says IVY bridge.
  • tuklap - Saturday, August 8, 2015 - link

    I don't know... Intel seems to keep pushing forward every year with profit in mind. The area where they are really making a breakthrough is non-volatile, high-bandwidth memory, or XPoint...

    If XPoint becomes available, maybe that will give a new speed bump... But Sandy through Skylake is really good...
  • wizyy - Saturday, August 8, 2015 - link

    There is a review which shows 6600k to be quite a nice improvement over popular I5 processors in 10 recent games, over at eurogamer.net. Check it if you're a gamer thinking to upgrade your older I5.
  • SilverManSachs - Saturday, August 8, 2015 - link

    There is a good jump in IPC for the Core i5, less so for the Core i7. This makes sense, as it's harder to push the top-end performance higher at smaller nodes, but they did improve the i5 performance, which is great as i5s are the most sold parts. Also, good overclocking room on the i7.

    Would be very interested to see 'Skylake vs Excavator' CPU-only benchmarks on the mobile 17W parts. Please do that test for us, AT!
  • soldier45 - Sunday, August 9, 2015 - link

    Spending $500+ on Skylake over my 2600K to get 3-5 fps in my games isn't really worth it. Having said that, at the end of the day I'm about to spend $700 on a 980 Ti over a 780 Classified, so yeah, I will end up going with Skylake.
  • asmian - Sunday, August 9, 2015 - link

    The interesting fact for me, faced with building a new rig, is how the i7-6700K compares with the 28-lane Haswell-E i7-5820K. For my usage (design/programming, no interest in SLI/Crossfire, regular Handbrake use), with very comparable mid-range boards (ASRock Z170 Extreme6+ versus ASRock X99 Extreme4 with the USB 3.1 A/C card) the price of CPU + board is almost identical at £490 or so in the UK right now - in fact, the Haswell-E combo would be £15 cheaper. All other added components (DDR4 memory, new OS, M2 SSD etc.) would be identical.

    So do the extra 2 cores at a somewhat lower eventual overclock for that Handbrake usage make up for extremely marginal extra IPC on 4 cores at a higher price (and trading a few extra features for many less SATA ports)? Somehow I doubt it... The only question remaining would be whether waiting another year or more for Skylake-E would be worth it for even more chipset features over X99, but that looks rather marginal as well.
  • asmian - Sunday, August 9, 2015 - link

    >Somehow I doubt it...

    Sorry, no edit - I meant of course the reverse, that 2 extra cores is DEFINITELY better than marginal extra IPC at a slightly higher overclock, despite the slightly higher TDP. Quad-core Skylake at this price AND requiring DDR4 makes Haswell-E look very good indeed.
  • Ethos Evoss - Sunday, August 9, 2015 - link

    Why are they STILL calling them i7 and i5 and i3... they were supposed to change the naming this time,
    like i4, i6, i8? Or rather drop that Apple-ish ''i'' altogether?
  • orion23 - Sunday, August 9, 2015 - link

    Yay for my 2600K @ 4.8ghz from day 1
    Never had as much fun overclocking and building system
    By now, I've changed cases (3x) and PSU's (2X), VGA's (2X). But not my loyal 2600K :)
    What a workhorse it is
  • Kutark - Sunday, August 9, 2015 - link

    I think a lot of people in the comments aren't really understanding the article. They state that the best reason to upgrade isn't really the processor speed, its all the other things the new platform affords you.

    In particular I'm very happy that I will FINALLY be able to get an SSD with speeds faster than what SATA3 allows, as many of the Z170 motherboards have M.2 slots that run not on SATA but on PCIe lanes. It also allows for some real bandwidth in SLI situations. I have a single 980 Ti, and this platform would allow me to SLI another down the road and not impede things.

    Granted, it's not a good value proposition when you look at the end result, but it's a very nice future-proofing platform in my opinion.

    It's kind of like saying that if you have a modded older Mustang that's as quick as a new Mustang, you shouldn't upgrade because the new one is only just as fast or maybe slightly faster. There are more factors to the equation. Things that add to the quality of life, etc.

    In Skylake's case it's mostly stuff related to the chipset. IMO that's fine by me.
  • sonny73n - Wednesday, August 12, 2015 - link

    I think you're an idiot. Understanding the article is one thing, realizing how close it is to the truth is another. Sure it's a nice upgrade for anything prior to Sandy Bridge, but the author has summed up this article with a bold statement, "Sandy Bridge, Your Time Is Up", which I believe is a false statement. Should I have a 5th grader break down the calculation of upgrade options so you can understand? First, bear in mind that there's no such thing as future-proofing in PC hardware like you said, and the K series is made for overclocking.
    Let me break down the upgrade options for my rig - Z68 MB $190, 2500K $230, HSF $60, 8GB RAM $60, PSU $180, GTX 780 $480, SSD $180, Case $80. Total $1460.
    Option 1: Upgrade MB, CPU, HSF and RAM. Old components ($540 new) can eBay for ~$200. New components $560 - $200 = $360 (out of pocket). Performance gain: System Overall 30%, Gaming 3 to 5%.
    Option 2: Upgrade the whole system. Total $1480. Performance gain same as option 1. Now having 2 systems (wonder what I'm gonna do with both).
    Option 3: Upgrade for gaming. Another GTX 780. Performance gain: BF3 1920x1200 4xAA about 95%. Total $480.

    Sure Skylake has some new features. Do I need them? NO. Does my SSD saturate the SATA3 bus (throughput around 550MB/s)? NO. Is there any program (besides Handbrake, which I use rarely) that can utilize the full power of my 2500K OCed mildly at 4.2GHz? NO. Can 980 Ti SLI saturate PCIe 2.0? NO. Am I such an idiot that I have a good running Mustang but would still buy another just because it's a bit better? NO. Does being financially irresponsible add to the quality of life? NO.

    Anyone with a brain that has a SB system or newer would never pick the first 2 options.
  • mapesdhs - Wednesday, August 12, 2015 - link

    If there was a thumbs-up button for your post, I'd be clicking it. :D
  • sonny73n - Thursday, August 13, 2015 - link

    Thanks :-) I wish I could explain it better. He's probably wondering why there's a $20 difference lol. Hint: CPU
  • Kutark - Thursday, August 20, 2015 - link

    This is pretty hilarious and just further proves my point. You had a fundamental misunderstanding of what the article is stating. You also have a fundamental misunderstanding of the concept of an opinion. This article is not an Encyclopedia Britannica entry trying to make statements of fact. It is the OPINION of this website that Sandy Bridge's time is up. I tend to agree with them. And I'm on Sandy Bridge.

    Like most internet heroes, you're focusing on one aspect, price/performance. People buy products for a multitude of other reasons. Just simply getting a pure speed upgrade isn't always the primary factor behind the decision.

    For example, i bought a VW GTI a few years back instead of a Mazdaspeed 3, even though the mazdaspeed 3 was a better performing car, and was cheaper. I bought the VW because of the intangibles. I liked the way it drove, i liked the interior design better, the exterior design better, etc etc etc.

    I will be buying a skylake platform because i like the options the chipset affords me moving forward, in particular the increased number of PCI express lanes which will come in useful when m.2 pcie SSD's come down in price.

    And please don't talk to me about financial responsibility. We're not talking about buying a $500k house when you can really only afford a $300k house. Most of us make enough money that while $1k isn't insignificant, it's not going to break the bank either. Get your head out of your ass.

    But, please, continue on making an ass of yourself, if nothing it is entertaining...
  • FullCircle - Monday, August 10, 2015 - link

    I'm still happy with my SandyBridge i7-2600k.

    I see no reason to upgrade for 25% performance boost...

    I just upgraded my graphics card from GTX 580 to GTX 970, giving me a performance boost of 250%... now that's a worthwhile upgrade...

    25% on the other hand? That's not worth it. CPU advancement has slowed so much there's not much reason to upgrade at the moment unless you have an incredibly old processor. Even the Core i7 processor I have in my old PC is still pretty good.
  • mapesdhs - Wednesday, August 12, 2015 - link

    I upgraded from 3GB 580 SLI to one 980 and even that was a good speed increase. Rocking along with a 5GHz 2700K. For a 2nd system to drive a 48" TV, I considered HW, but in the end for the games I'll be playing (which can use more than 4 cores) a used SB-E build made a lot more sense. ASUS R4E only 113 UKP, 3930K only 185 UKP, etc. Only key item I bought new was another 980.

    It's pretty obvious with hindsight that Intel jumped ahead much more than they needed to with SB/SB-E, so we won't see another leap of that kind again unless AMD or some other corp can seriously compete once more, just as AMD managed to do with Athlon64 back in the day. All this stuff about bad paste under the heat spreaders of IB, HW and still with SL proves Intel is dragging its feet, ditto how lame the 5960X compares to XEONs wrt its low clock, TDP, etc. They could make better, but they don't need to. Likewise the meddling with the PCIe lanes for HW-E; it's crazy that a 4820K could actually be better than a 5820K in some cases. Should have been the other way round: 5820K should have been the 6-core low end with 40 lanes, next chip up at current 5930K pricing should have been an 8-core with 40 lanes, 5960X should have been an 8 or 10 core with 64 or 80 lanes (whatever), with a good 3.5 base clock, priced *above* the current 5960X a tad - that would have been a chip the real enthusiasts with money to burn would have bought, not the clock-crippled 5960X we have atm.
  • experttech - Monday, August 10, 2015 - link

    I have a Sandy Bridge 2600K running on an Asus H67 EVO motherboard, so not overclocked. My motherboard is slowly dying. First the onboard sound died, then the reset stopped working. Now I am wondering whether to upgrade the motherboard to an overclockable Sandy Bridge motherboard or jump to the 6700K. I mostly do video editing and encoding, no gaming. Or wait till the motherboard dies completely and hope Skylake-E or Kaby Lake is out by that time. Any suggestions?
  • sonny73n - Wednesday, August 12, 2015 - link

    The 2600K is an excellent chip. I'd rather have the i7-2600K than the new i5-6600K. You should get a new Z77 MB, but there aren't many still available now. I only saw one Z77 on Newegg; it's the ASRock and I think it costs around $160. You can also find used Z68 and Z77 MBs on Amazon or eBay, but I wouldn't recommend it. Video editing with the 2600K is a piece of cake and x264 encoding is not bad either. Keep the chip and spend your money on a good video card and a nice 4K IPS monitor.
  • experttech - Thursday, August 13, 2015 - link

    Thanks for your reply. I too realized the same. I did notice the only Z77 ASRock motherboard (which is an excellent motherboard by the way), but for the price, I can't justify buying it, especially since so many options are available on the new platform. One interesting thing I noticed is that with the newer instruction sets, my laptop with the i5 5200U actually renders some frames very fast, but overall my i7 2600K renders the finished movie quicker. So though there are IPC improvements in the newer chips, the basic features (performance, multithreading etc) haven't changed night and day. Of course I am comparing a Sandy Bridge i7 to a lower-clocked Broadwell i5, but I am not sure if there will be a tangible difference upgrading to Skylake as of now. So you are right my friend and thanks for the advice!

    I do have a 1440p monitor and its amazing how much real estate you get going from 1080p. Definitely one of the best upgrades I made. I will look into a 4K monitor as they have come down quite a bit in price.
  • phillipstuerzl - Monday, August 10, 2015 - link

    Hi,

    On your 5th page, under Test Setup, you list the i5 6600K as being 4C/8T. This is incorrect. It is not hyperthreaded, and only 4C/4T.

    Great article!
  • Ryan Smith - Tuesday, August 11, 2015 - link

    Thanks!
  • DannyDan - Monday, August 10, 2015 - link

    So do we expect the 1151 socket to have a few good upgraded processors down the road? It really sucked getting a socket 1156 CPU.
  • mdw9604 - Tuesday, August 11, 2015 - link

    Moore's law is a crock of $%!24. 7 years later and Intel still hasn't doubled the performance of the i7 870, core for core.

    They may be able to cram more transistors into a smaller space, but that doesn't mean better performance.
  • Oxford Guy - Thursday, August 13, 2015 - link

    It's funny how every time it fails people say it is being "adjusted", "extended", "massaged", "modified", or something like that. Either it works or it doesn't. It should be called Moore's Heuristic = "process density increases over time" (duh).
  • ES_Revenge - Saturday, August 15, 2015 - link

    You misunderstand Moore's Law. Moore's law has nothing [necessarily] to do with performance. Moore's law only states that the number of transistors possible in a given space will double every two years. It also doesn't just apply to Intel and mainstream CPUs, it applies to *all* integrated circuits. Everything from CPUs to EEPROMS, to SOCs, to microcontrollers, to image sensors...etc. these things are all included as they're all ICs. So, on average, it still holds AFAIK.

    Whether or not a 14nm chip outperforms, or how much it outperforms, a 28nm one (there wasn't one for Intel, but Sandy was 32nm) is NOT what Moore's Law predicts. Other people have construed this "Law" to mean things about performance (including some guy at Intel that once said performance would double every xx months--totally wrong and not what Moore's Law states anyway), but it's not about performance and certainly not just about desktop CPUs.

    You can't say something is a "crock of $%!24" if you don't know what it's about to begin with.
  • Kutark - Thursday, August 20, 2015 - link

    Unfortunately, whether or not someone understands something has never stopped someone from speaking their mind on it. Hell, look at pretty much every election under the sun. The vast majority of people who vote in them couldn't even give you a basic rundown of the issues at hand, yet they sure do have an opinion on it...
  • nsteussy - Tuesday, August 11, 2015 - link

    Ian, under the Linux benchmarks you have the graphic for the NAMD Mol Dynamics twice but none for the NPB fluid dynamics. That said, that is quite the nice bump for NAMD (~24% from a 4770 to a 6770). Very tempting.
  • Visual - Wednesday, August 12, 2015 - link

    I kinda don't like how you keep repeating the generic benchmark descriptions before each graph. I'd prefer if it were hidden by default, visible on hover or toggled by clicking of some info button or similar, or at the very least formatted a bit different than actual article text.

    I'd also like if you had some comments on the actual results, at least where there are some peculiarities in them.

    Case in point: Why is the 5775C IGP so much better in some games?
  • mapesdhs - Wednesday, August 12, 2015 - link

    Agree re comments on results, eg. why does the 2600K dip so badly for Shadow of Mordor @ 4K with the GTX 770? It doesn't happen with the 980, but if the dip was a VRAM issue @ 4K then the 3770K shouldn't be so close to the other CPUs. Weird...
  • wyssin - Wednesday, August 12, 2015 - link

    Has anyone published a review comparing i7-6700k with other cpus all overclocked to, say, 4.5 GHz? For those who typically run an overclocked system, it's not an apples-to-apples comparison to put the new entry up against the older all at stock settings.
    So to make the best-informed decision, it would be very useful to be able to see head-to-head trials at both (1) stock settings and (2) overclocked to a speed they can all reasonably manage (apparently around 4.4 or 4.5 GHz).

    I have the same problems with the Guru3D review and the Gamestar.de review that were mentioned in earlier comments.
  • Oxford Guy - Thursday, August 13, 2015 - link

    The key is to pick a speed that the worst overclocking examples would be able to get to with reasonable voltage. That takes the luck of the draw out of the scenario.
  • beatsbyden - Thursday, August 13, 2015 - link

    Not much improvement. Only worth the money if you're coming from an i5
  • Darkvengence - Thursday, August 13, 2015 - link

    This lack of CPU power needed in gaming is only temporary. Once you have photorealistic graphics in 4K you are going to need crazy powerful GPUs, which need feeding by beastly CPUs; our current technology will seem like a dinosaur CPU in comparison. That is of course a fair few years away, but one day it will happen. I'm glad current CPUs are not being taxed by today's games, even less so with DX12. It gives my gen 1 MSI Nightblade more life with the 4790K, as you can't change the motherboard - it's all custom front panel connectors and stuff. I used to have an i7 920 and I have got to say that is still a good CPU, especially for single GPU systems. I really like Sandy Bridge though, very impressive for its age. But older CPUs lose out mainly by being tied to older chipsets, so you lose the new connectors and bus speeds for hardware.
  • gasparmx - Thursday, November 19, 2015 - link

    I think you're kinda wrong; the point of DX12 is to depend less on the CPU. NVIDIA says in the future you're probably not going to need a beast CPU to play 4K games.
  • djscrew - Friday, August 14, 2015 - link

    I'm so disappointed in SB/DDR4. After all this wait, and the IPC gains with discrete graphics are negative? WTF Intel. I guess my Nehalem system will survive another generation, or maybe three? No compelling reason to upgrade. It's such a shame because I was really looking forward to building a $3k rig. I think I'll shop for a nice 4K panel instead.
  • Ninjawithagun - Friday, August 14, 2015 - link

    And just when I was about to purchase the 6600K and a Z170 mini-ITX motherboard as an upgrade to my 4690K and Z97i Plus motherboard...man, am I glad I ran across this article. Saved myself about $600 for a useless upgrade!
  • ES_Revenge - Friday, August 14, 2015 - link

    Umm what the heck happened to the power consumption? In particular the i7/6700K. It's not really shown thoroughly in this review but the Broadwell CPUs are more power-efficient it seems. While the 6700K has a half GHz faster clock speed, it also has a much lesser GPU. To begin with, both the i5 and i7 Skylake parts have higher TDPs than the Broadwell desktop parts, and then the 6700K can actually draw over 100W when loaded. This is above its TDP and also significantly more than its 6600K counterpart which runs only a few hundred MHz slower. Odd.

    I mean I think we were all waiting for a desktop CPU that didn't have the power constraints as the Broadwell CPUs did but I don't think this is exactly what anyone was expecting. It's like these Skylake CPUs don't just take more power but they do so...for no reason at all. Sure they're faster but not hugely so; and, again, their iGPUs are significantly slower than Broadwell's. So their slight speed advantage came at the price of markedly increased power consumption over the previous gen.

    That only leads me to the question--WTF? lol What happened here with the power consumption? And losing that IVR didn't seem to help anything, eh? Skylake is fast and all but TBH I was more impressed *overall* with Broadwell (and those CPUs you can't even find for sale anywhere, the last time I checked--a few weeks ago). Granted as we've seen in 2nd part of the Broadwell review it's not a stellar OCer but still, overall it seems better to me than Skylake.

    It's kind of funny because when Broadwell DT launched I was thinking of how "Intel is mainly focusing on power consumption these days", meaning I thought they weren't focused enough on performance of DT CPUs. But it seems they've just thrown that out the window but the performance isn't anything *spectacular* from these CPUs, so it just seems like a step backwards. It's like with Broadwell they were showing just how much performance they could do with both CPU and iGPU with a minimum of power consumption--and the result was impressive. Here it's like they just forgot about that and said "It's Skylake...it's new and better! Everyone buy it!" Not really that impressive.
  • janolsen - Friday, August 14, 2015 - link

    Stupid question:
    Can Skylake IGP easily play back 4K video. Thinking of a person just using a 4K screen for Youtube stuff, not gaming...
  • ES_Revenge - Saturday, August 15, 2015 - link

    Yeah it can. This is one of the very few improvements over Broadwell/previous HD Graphics implementations. It has a "full" HEVC decode solution built in, unlike the "hybrid" solutions they had previously. If you look at the 4th page of the review it actually goes pretty in-depth about this (not sure how you missed that?).
  • alacard - Friday, August 14, 2015 - link

    It's clear you put a ton of work into this Ian, many thanks.
  • Flash13 - Monday, August 17, 2015 - link

    So far, the Intel Core i7-6700K 8M Skylake Quad-Core 4.0GHz is just vapor to the public.
  • somatzu - Wednesday, August 19, 2015 - link

    "So where'd you get your degree?"

    "Anandtech comments section."
  • superjim - Friday, August 21, 2015 - link

    I'm still not convinced this is a worthwhile upgrade from Sandy Bridge. If I can get 4.8 from a 2700K and maybe 4.6 from a 6700k, factor in cost difference, is it really worth it? At the end of the day, cpu/mobo/ram would be near $700 for maybe a 15% speed bump overall.
  • watzupken - Friday, September 4, 2015 - link

    From a desktop standpoint, there is very little incentive for one to upgrade. The new gen mainly targets power savings, so it is likely to benefit mobile users, i.e. Ultrabooks and tablets.

    As much as Intel is trying to get those people still on Sandy and Ivy Bridge to upgrade, they fail to account for the cost of upgrading for a paltry improvement in performance. To upgrade from SB, one has to replace the RAM, motherboard and CPU, and on top of that, separately purchase a heatsink since Intel wants to cut costs.
  • CynicalCyanide - Saturday, August 22, 2015 - link

    Question to the Authors: You've noted two DDR4 equipped mobos in the "Test Setup" section, but you've also tested DDR3 equipped Skylake. Which motherboard did you use for that?

    Furthermore, in a previous article it was mentioned that Z170 wouldn't be able to handle 'regular' 1.5V DDR3, but here apparently it wasn't an issue reusing old 1.5V RAM after a voltage adjustment. Was there any special method required aside from booting as per normal into the BIOS and adjusting the voltage?
  • TiberiuC - Saturday, August 22, 2015 - link

    Everything eventually comes down to "Intel vs AMD". What Intel did with Core 2 Duo was the right path to take; what AMD did was wrong, and that sometimes happens when you innovate. AMD stopped with the last FX series and went back to the drawing board, and that is a wise decision. What will Zen do? I am expecting Ivy Bridge performance, maybe touching Haswell here and there. If this doesn't happen, it is bad for them and very bad for us. Intel is starting to milk its customers, acting like there is a monopoly. I bought my 2600K for $300 (after rebates); I have to say that the price of the 6700K is, well, meh...
  • watzupken - Friday, September 4, 2015 - link

    To be honest, I feel the recent AMD chips are not so bad. In my opinion it boils down to 2 things,
    1) They are not able to get software makers to optimize for their chips,
    2) Disadvantage in terms of fab, i.e. 28nm vs 20/14nm.
    Of course, they also don't have pockets as deep as Intel's to begin with. So any misstep can be a serious setback for them.
  • i_will_eat_you - Saturday, December 12, 2015 - link

    AMD is long dead especially for the desktop market and server market. For their latest highend chips they simply slap bigger and bigger fans/heat sinks on to deal with a higher TDP from a ramped up clock. I'm not even sure if they have a particularly good standing in the "APU" market, low end market, etc. ARM and Intel are doing much better.

    They only have a slight gain in the GPU market with the push of HBM, but even this does not give them a strong lead, and they are falling back on Apple-like marketing in an attempt to boost their sales.

    The only reason people might tolerate AMD at the moment is because a lot of tasks will run ok on a CPU that is not the best or not the best value for money.

    Until they release a new architecture and a new fabrication process, they are effectively out of the game. I agree they have no room for error in that.
  • Synomenon - Monday, August 24, 2015 - link

    So it's possible to have the full 16 PCIe 3.0 lanes from the CPU going to the GPU and have 4 PCIe 3.0 lanes from the chipset going to the m.2 drive on a Z170 board?
  • wyssin - Sunday, August 30, 2015 - link

    Here's what I'm talking about.
    In their i7-6700K review article, bit-tech.net compared chips at stock settings AND at a decent overclock. By seeing both of those results, you can see whether an upgrade makes sense for your needs (assuming you are an overclocker).
    http://www.bit-tech.net/hardware/2015/08/05/intel-...
  • oranos - Tuesday, September 15, 2015 - link

    Looks like after 5 years, there is still no reason to upgrade a 2500k.
  • sheeple - Thursday, October 15, 2015 - link

    I TOTALLY agree with you
  • sheeple - Thursday, October 15, 2015 - link

    THIS is funny, I'm using a SUPER OOOOOOLD L5408 Xeon that sips 40 watts and gives the performance of a 4th. gen i3 and runs ALL the latest 2015 games and the L5408 cost me 40 bucks on ebay, HAHAHAHAHAAAAA!!!!
  • sheeple - Thursday, October 15, 2015 - link

    My L5408 isn't even overclocked past 2.76 Ghz and runs The Witcher 3 Wild Hunt (game from 2015 on a machine with a cpu and mobo that together cost me 70 bucks used-the cpu was introduced in beginning of 2008) at 30 fps AVERAGE WITH ALL SETTINGS MAXED @ 1080p using a STOCK GTX 950 LOL!!! Whoever buys one of these "Skynet" Cpu's needs to do more research, SERIOUSLY !!!!
  • sheeple - Thursday, October 15, 2015 - link

    DON'T BE STUPID SHEEPLE!!! NEW DOES NOT ALWAYS = BETTER!
  • manolaren - Saturday, October 31, 2015 - link

    So if Anandtech's tests are accurate, between the Skylake CPUs the i5 is the way to go for a gaming PC. Gaming benchmarks are almost identical, but the i5 is a lot cheaper. Considering Skylake doesn't bring anything groundbreaking for the genre, I can't see any other way for gamers. My only question is whether future games will take advantage of more than 4 cores and make i7 CPUs a must.
  • xxxGODxxx - Saturday, October 31, 2015 - link

    Hi guys I would like to know whether I should buy the 6600k with a z170 mobo at $417 or should I buy a 3930k with a x79 mobo at $330? I'm not too sure if the extra IPC of the 6600k is enough to warrant the extra $87 over the 3930k especially since I will be overclocking the cpu and I will be gaming on a r9 390 (maybe I will add one more 390 in the future) at 1440p.
  • Toyevo - Wednesday, November 25, 2015 - link

    Even now I hesitate at updating a Phenom II X4 945. The Samsung 950 Pro pushed me over the line, and with it the need for PCIe M.2 only available in recent generations. There's no holy grail in CPUs, only what's relevant for each individual today. Of several other systems I have, none demand any change yet. On the Intel side my 2500K (and up) I wouldn't bother with even Skylake. With AMD my FX6300 (and up) are more power hungry but entirely adequate. And our E5-2xxx servers sit on Ivy Bridge until early 2017.

    What does all this mean? Not a lot.. In the same way many of you see Skylake as a non event, I equally saw Broadwell and Haswell as non events. 20 years ago the jumps were staggering, overclocking wasn't nearly as trendy, nor as straight forward, but entirely necessary, the cost of new hardware prohibitively expensive. The generations were so definitive and fast back then.
  • i_will_eat_you - Saturday, December 12, 2015 - link

    This is a good review, especially the look at memory latency. The 4690K is left out however from a lot of benchmarks. If you include that then I don't see much of an attraction to skylake. There is also the concern about the new rootkit support skylake introduces with protected code execution. This is not something I see being used for the good of the consumer.

    My one gripe is the lack of benchmarks for intense game engines (simulations, etc). Total war is there which is a step forward but I'm not sure if that benchmark really measures simulation engine performance.

    If you take games such as Sins of a Solar Empire or Supreme Commander then they have a separate thread for graphics so tend to maintain a decent frame rate even when the game engine runs at a crawl. The more units you add to the map and the more that is going on the slower it goes. But this is not in FPS. It means that ordering a ship across the solar system might take 10 s when there are 1000 units in the game but 5 minutes when there are 100000 units in the game. I would love to see some benchmarks measuring engine performance of games such as this with the unit limits greatly increased. It is a bit of a niche but many sim games (RTS, etc) scale naturally which means you can increase the unit limit, map size, AI difficulty, number of AIs, etc as your hardware becomes more powerful.

    This is especially relevant with CPUs such as the broadwell which might gain a big advantage each game loop processing the very large simulation engine dataset.
  • systemBuilder - Tuesday, July 19, 2016 - link

    Wow your review really sucked. Where are the benchmarks for the i5-6600k? Did you forget?
  • POPCORNS - Friday, August 19, 2016 - link

    To me, It doesn't matter if there's no IPC improvement over Sandy Bridge, Ivy Bridge or Haswell,
    Because I've upgraded from a Wolfdale Celeron (E3300) to a Skylake (6700K), lol.
  • oranos - Thursday, December 29, 2016 - link

    This article seems to be confused. DDR4 brings more sustained framerates for higher resolutions (especially 4k). Really a waste of time doing a 1080p comparison.
  • oranos - Thursday, December 29, 2016 - link

    if you wanted to do a proper test for DDR4 gaming performance you should run a 6700K and GTX 1080 minimum and run multiple games in 4K for testing.
