
  • name99 - Friday, April 15, 2016 - link

    Cloudbooks? Seriously? And you think those "offered with a cloud service of some kind" crapware offerings are a PLUS?

    Intel is so utterly lost here, with this desperate hope that cloudbooks will be what ultrabooks weren't. I guess the thinking goes "well, with ultrabooks we tried to sell to the high end, but the OEMs sabotaged us at every step; so this round let's just skip the BS and admit we're peddling cheap crap in a race to the bottom".

    Of course this is all in reaction to:
    http://www.macrumors.com/2016/04/11/gartner-idc-pc...
    (TL;DR - worldwide PC shipments down 9.6% over the last year --- AGAIN... The sixth consecutive quarterly decline in PC shipments, and lower shipments than in 2007...)
  • JoeyJoJo123 - Friday, April 15, 2016 - link

    You can blame the lack of big performance gains year-over-year in processors for that.

    PCs sold like hotcakes back when big performance gains were to be had over last year's model. In a way, this is kind of a good thing; it means that there's less e-waste going around and people are actually holding onto PCs for a longer period of time, rather than chucking them out the door.
  • Thatguy97 - Friday, April 15, 2016 - link

    Yeah, but on the other hand the CPU side of things now is boring as all hell.
  • MrPoletski - Monday, April 18, 2016 - link

    Well, hopefully Zen will change all that. Even if it doesn't kick Intel's ass, it'll hopefully make them sit up straight, do up their tie, and stop having that triple vodka during their lunch break. Then we might see an Intel with its pants down again. I really hope Zen vs. Skylake ends up being like the Athlon XP vs. Pentium 4 days. Then we'll see the new Core 2 from Intel, rawr.
  • duploxxx - Monday, April 18, 2016 - link

    the issue is that you assume the current slow performance is due to a lack of CPU power, which is not the case :)

    Run your system with an i3 or an i7 with equal memory and storage; for general tasks it will still be crappy MS software dragging you down..... even if Intel releases an i9 :) it's still crappy SW built on that initial fat, slow dog called x86.
  • mapesdhs - Wednesday, April 20, 2016 - link

    Indeed, just wasted an hour trying to fix weird Win7 driver issues...
  • gamerk2 - Monday, April 18, 2016 - link

    The issue is that the tasks we care about (games) are almost entirely GPU limited, to the point where the CPU really doesn't matter to us anymore.

    Throw in the fact that for everyone else a Pentium-class CPU is more than enough, and you get general disinterest about CPU improvements.
  • mapesdhs - Wednesday, April 20, 2016 - link

    Yup, a mere G3258 in an HTPC is surprisingly strong. Paired with a 750 Ti, pretty good for most stuff, handles Google Earth nicely, totally quiet (ASUS Z97I-Plus, SanDisk X300 256GB).
  • name99 - Friday, April 15, 2016 - link

    That's not exactly true. Yes, the Western PC market is saturated. But phones and tablets, and traditional PCs in non-Western markets, were Intel's to lose. And lose them they did...

    It was Intel's choice to prioritize the x86 ISA over their other advantages (like circuit and process prowess), with all that implied for additional complexity and cost over the ARM alternative; and the business consequences that implied (ie never letting Atom get too good compared to the expensive CPUs). The consequences were easily predictable at the time, and WERE predicted (by me, among others); but that's what happens when you're trapped in a bubble where everyone tells you the x86 ISA is so unbelievably fantastic that no-one would ever want to use anything else.

    And it's not even like Intel had no warning. They freaking LIVED through Apple switching from PPC to x86 because x86 provided better low-power chips, and they TOLD Intel that they wanted even lower power chips for mobility...
  • Reflex - Friday, April 15, 2016 - link

    You are partly right and partly wrong. You are correct, Intel hamstrung the Atom, a CPU introduced in 2008 but not really usable until about three years ago due to lack of investment. That was a huge miss. Even now they have yet to integrate it with a proper LTE modem and other features common to ARM designs. It's a huge mistake.

    You are wrong about them prioritizing x86 and that being a mistake. The instruction set is not relevant. There is nothing inherently better about the ARM ISA vs. x86, and as ARM has grown in complexity its power and performance profile has started to become very similar to x86's. Intel staying with the ISA they have invested decades into optimizing is not a mistake; it's leveraging already-expended R&D.

    The mistake, as I stated before, was insisting on product segmentation to the detriment of their truly low power designs.
  • name99 - Friday, April 15, 2016 - link

    Except that the product segmentation is made so problematic BECAUSE of their insistence on the x86 ISA... If they used a different ISA, there would be much less fear about Atom cannibalizing the high end, and they wouldn't need such strong crippling of Atom.

    And the delay in implementation is because of the x86 ISA... Implementing a high end ARM CPU takes, worst case, around 4 years. The equivalent figure for x86 was 7 years at the time of Nehalem, and is now probably 8 years or more.

    x86 is not ORTHOGONAL to the issues you have identified as a problem, it is a CAUSE of them.
  • Reflex - Saturday, April 16, 2016 - link

    I do not agree with this. Part of what makes Intel chips so expensive is an advantage mentioned above: their class-leading process technology. You cannot escape that cost. Intel has the most expensive-to-operate fabs in the world; if they produced a completely generic ARM chip with no special features, it would still have to be sold for more money than any of their competitors could sell theirs for.

    Given that, Intel has to have an advantage that makes that higher initial cost worthwhile. The advantage they attempted to leverage was x86 compatibility. And it is a powerful lever. But their lack of full commitment, for a variety of reasons, made it not worth it for vendors to consider. If they had had a highly integrated SoC based on x86 in 2010, ARM would likely already be reduced to the very low end. But instead they avoided that, for reasons you are correct about above (product segmentation).

    Simply aping the ARM ISA and making a cheap SoC out of it would have still produced an overpriced product for these market segments. And even then, you still aren't addressing the fact that they have repeatedly failed to create any of the necessary supporting logic for a chip, such as a decent LTE modem. That does not change even if it were an ARM chip.
  • jasonelmore - Saturday, April 16, 2016 - link

    Well, when you shrink transistors and go to a smaller process, you save money per transistor. That was the main reason we advanced these lithography processes. Now we are shrinking for power savings.

    Intel has a ~20% lead over Samsung's and TSMC's 14/16nm. Nobody can touch their SRAM tech, which determines cache size. Samsung and TSMC market their nodes as 14/16nm, but their feature size is really closer to 20nm.

    https://www.semiwiki.com/forum/content/3759-intel-...
  • epobirs - Saturday, April 16, 2016 - link

    And why would anyone buy a non-x86 compatible product from Intel instead of a different ISA from another vendor? Running existing x86 code is a huge, huge asset they aren't going to let go unless they can find some great advantage elsewhere they can leverage. Remember, Intel was already in the ARM business a long time ago and sold it off. This was before mobile turned huge but it seems unlikely they were completely blind to that prospect. They just didn't see any advantage for them to wield against other ARM vendors that was greater than x86 compatibility.
  • name99 - Saturday, April 16, 2016 - link

    You're missing the point. Intel had the chance to create mobile chips that were superior to what was available from ARM (because they had superior process and circuits). Why would companies have bought them? Uh, because they were SUPERIOR.
    Instead, by sticking with x86, they were late to the party, shipping garbage that no-one actually wants (how many phones with x86? how many watches do you expect to see with Quark?). But the stupidity doesn't end there, because not only did they not acquire the market they hoped for, they're slowly hurting the market they actually care about.

    Look, if it were such a great strategy, then why do we keep seeing headlines like this:
    http://www.oregonlive.com/silicon-forest/index.ssf...

    (and that follows layoffs last year...)

    Intel is not going to disappear tomorrow. But there is a difference between simply hanging on doing the same old same old as ten years ago, and being part of the future. Intel is now in the same sort of place as a company like IBM --- they have their niche, they make plenty of money, but for the most part they're just not very interesting, and they're unlikely ever to be so. They've swapped innovation for repetition; they've swapped what could have been more spectacular growth for flat profits.
    Basically textbook Clayton Christensen stuff.
  • Reflex - Sunday, April 17, 2016 - link

    I'm not sure how many different ways I can explain this, but let's just make some bullet points:

    - The ISA is irrelevant here. x86 does not cost any more or less power than ARM inherently. x86 does not perform any better or worse than ARM inherently. x86 is not any harder or easier to develop new designs for inherently.
    - The reason Intel's designs take years to complete has nothing to do with the ISA. It has to do with the fact that a given design has to serve more markets. In other words, it has to scale. ARM, on the other hand, serves only very specific markets, and they do not have to worry about how scalable their designs are.
    - The reason scaling is important is that it permits Intel to spread the cost of fab research and upgrades across more products, which permits them to fund continuing research to stay ahead of their competition in this space. Take away that advantage and they would soon NOT be class leading on process tech.
    - The argument that if Intel just used their existing 14nm fabs to build ARM chips they would have a superior product is misleading. Given the R&D cost and build cost that went into those fabs, and given that they cannot sell silicon at a loss, the base price on those chips would be around 2x the base price of a comparable Qualcomm design. While it is true that the CPUs would likely be able to perform at a higher level while consuming less power, no one will double their bill-of-materials cost for that when the end result is going into a tablet or phone. There is a reason why AMD's ARM efforts are aimed at the server room, where absolute lowest cost is not important.
    - Given that simply having higher performance can't justify a higher price, why jettison x86 compatibility at all? It's a huge potential asset.
    - And finally, none of that addresses the fact that Intel's real issue is lack of integration. They have no products that contain all that is needed in one simple SoC. Their efforts at an LTE modem have been crap, and numerous other typically integrated parts simply don't exist for Intel, meaning that if you go Atom you are stuck not only paying more for the CPU, but paying more for all the extra logic you have to buy to make it functional in a phone design, vastly inflating your cost and reducing your power efficiency. That aspect is the largest impediment to success, and it is not addressed by taking the Atom and making it ARM.
  • name99 - Sunday, April 17, 2016 - link

    The problem is that you keep thinking I am using arguments that I am not.
    I am NOT saying that x86 imposes a power or a performance barrier.
    I am saying that it

    (a) imposes a COMPLEXITY barrier. To design a performant x86 CPU seems to take 1.5 to 2x as long as designing an equivalent RISC CPU (POWER, SPARC, ARM, I don't care). We have two pieces of evidence for this:
    - we know how long it takes to design Intel CPUs, and we have a pretty good idea how long it takes to design their competitors
    - we've had AMD say words to this effect when asked why they were considering designing an ARM server chip

    (b) imposes a SEGMENTATION problem. The use of a more-or-less identical instruction set for Intel's low-end as their high-end means that it is OBVIOUS that the low-end will compete against the high-end. This is not the case when a different instruction set is used. Again, this is a business fact, not a matter of opinion. The alternative instruction set Intel used did not have to be ARM (though given Intel's track record with designing instruction sets, they should DEFINITELY have outsourced the problem to someone with competence in this space), it just had to be different from x86. One weak alternative, for example, which would still somewhat solve the complexity problem of (a), would have been to use a maximally simplified version of the x86-64 instruction set and machine model. Dump everything from the 16- and 32-bit world, dump x87, dump PAE and SMM and segment registers and 4 rings and all the rest of it.

    Present a user-facing instruction set that accepts purely 64-bit code (though, if you insist, with the same assembly language), and an OS-facing model that looks like the standard sort of RISC model --- hypervisor, OS, user, hierarchical page tables --- and dump the obsolete IO stuff that's still hanging around --- broken timer models, broken interrupt models, etc.

    This would have allowed for very rapid creation/porting of compilers, assemblers, and other dev tools; and would have allowed any modern code to be compiled to this new target just fine.

    But Intel couldn't figure out what the actual goal was. Was the goal to create a performant phone chip and build a completely new business every bit the equal of the existing PC business? Or was the goal to run Windows and DOS code from 1980 on small form factor devices?
    For some reason (my theory being that they are utterly deluded about the value of absolute x86 compatibility) they assumed the second goal was vastly more important than the first. Well, we've seen how that turned out.

    I'll point out that we have an example of a company that does exactly what I've suggested and handles it just fine, namely IBM. IBM sells z-series at the high end and POWER at the (for IBM) low end. Completely different instruction sets and machine models, with completely different constraints on how each can be evolved. I suspect there is a fair bit of sharing between the two teams as appropriate (process technology, packaging, circuits, verification, perhaps even some aspects of the micro-architecture like the branch predictors and pre-fetchers); and this is the sort of sharing I'd expect Intel likewise to have engaged in.
  • Reflex - Sunday, April 17, 2016 - link

    I'm sorry, but you really don't understand Intel's fabrication facilities. They have no ability to simply slap an ARM design out at 14nm. Their facilities are designed hand in hand with their CPU designs, each complementing the other. The reason their designs take so long has nothing to do with the complexity of x86; indeed, they were producing revamped designs every 2 years until just recently. The reason it takes so long (and I submit the tick-tock schedule was as fast as anything ARM produced) is that they have considerations from the top to the bottom, and that scalability adds complexity.

    As for the rest, where you say what Intel 'should' do: all I can say is that CPU design is not that simple. You just described a design that is nothing like any other ARM player's, which means a lot of work would have to go into it, and it would not be compatible with a lot of existing software up and down the stack.

    And you still seem to be avoiding the obvious: even if everything you say there were true, how does any of that address the fact that Intel does not have the component pieces, such as integrated LTE modems, required to make a competitive product? Even if, by adopting ARM, they could magically create a CPU that permitted annual updates on their latest and greatest processes, that really does not get them past the problem that the product is not what OEMs actually need.
  • name99 - Sunday, April 17, 2016 - link

    Jesus! I tell you MULTIPLE times that I am NOT talking about Intel buying an ARM physical design and fabbing that, and yet you keep claiming that is my suggestion. I mean, half the above comment (the one you replied to) talks about an Intel solution based on (and only on) the x86-64 instruction set!

    This suggests that you are more interested in pushing an agenda than in understanding the world, or your fellow commenters.

    Likewise, you may have noticed that Apple has not (so far) included an LTE modem on their SoC. Hmm. Seems to suggest that it is quite possible to create a compelling (and low-enough-power) solution by using external radio components. Hmm. And the rumors are that the A10 will come with an LTE baseband on the SoC. Provided by Intel, no less. Hmm.

    So once again, I have to wonder what's going on in your head, given your eagerness to
    (a) utterly misrepresent what you read (even when it is repeatedly explained to you that you ARE misrepresenting it), and that you
    (b) seem more than a little unaware of just what exists in Intel's product portfolio.
  • Strunf - Monday, April 18, 2016 - link

    x86 is what keeps Intel relevant. Intel already tried the other-ISA approach with IA-64... how did that go? A new ISA means a huge risk: you need invested partners, and you need to provide them with a cost-competitive solution. Even if it were better at twice the price, no one would care.

    Why do you think MS and Sony went the AMD way, even though Intel+nVIDIA was probably the more performant solution?

    Even if Intel were making ARM SoCs, why would Samsung buy from them when they can make their own SoCs? Why would Apple buy from Intel when they can design their own SoCs and then shop around for a foundry?...
  • BurntMyBacon - Monday, April 18, 2016 - link

    @name99: "(a) imposes a COMPLEXITY barrier. To design a performant x86 CPU seems to take 1.5 to 2x as long as designing an equivalent RISC CPU (POWER, SPARC, ARM, I don't care). We have two pieces of evidence for this:
    - we know how long it takes to design Intel CPUs, and we have a pretty good idea how long it takes to design their competitors
    - we've had AMD say words to this effect when asked why they were considering designing an ARM server chip"

    I believe this is the crux of the issue. I'm going to ignore POWER and SPARC, as you haven't presented any data on their development cycles and by all appearances they seem to have similarly long cycles to Intel. Focusing in on ARM, you compare a four-year cycle to a 7- or possibly 8-year cycle. I do agree that ARM designs do in practice take less time to come to market, but we should note three important considerations. 1) It is much easier to complete a SoC design when you can take a core design that has already reached layout and insert it into your design. 2) The 4-year figure was for an A15 design, if I recall correctly (you didn't specify). As design complexity goes up, so will development time. Consider the time it took for nVidia to develop Denver, AMD to design Seattle, and, more importantly, how long it took Qualcomm to develop Kryo. 3) Your comparison is between a tiny, relatively simplistic, in-order, short-pipe design (ARM) and a massive, complex, out-of-order, long-pipe design (Nehalem). It should be expected that such a chip will take longer to develop. I'd compare to how long it took to design Atom. Seems to me that they've spent much more time on the SoC portion of the chip lately than on the core.

    @name99: "(b) the use of a more-or-less identical instruction set for Intel's low-end as their high-end means that it is OBVIOUS that the low-end will compete against the high-end."

    There are pros and cons to changing the ISA and there are plenty of opinions to go around so I'll leave mine out. Putting the ISA aside, I agree that how Intel handled their product segmentation is largely responsible for the situation they're in.

    There does seem to be a misunderstanding of the way Intel's processors are designed. The early stages of the processor are responsible for fetching instructions and decoding them from x86 into micro-operations (RISC-type instructions). After this point, x86 is no longer relevant to the design. Back when AMD was still trying to steal Intel's lunch, this accounted for a little less than 10% of the transistors in the processor. Given that the decode logic is a fixed quantity, its percentage and overall contribution lessen with every process improvement. I think it is 2% (4% on Atom-sized chips) today. The point is, the chip isn't complex because it is x86; it is complex because that is how Intel designed it.
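    A back-of-the-envelope illustration of that shrinking share (the transistor counts below are purely hypothetical, chosen only to show the fixed-cost-versus-growing-budget arithmetic, not Intel's actual figures):

    ```python
    # Decode logic is a (roughly) fixed transistor cost, while the total
    # transistor budget grows with each process shrink, so its share falls.
    DECODE_TRANSISTORS = 10_000_000  # hypothetical fixed x86 decode cost

    budgets = {  # hypothetical total transistor budgets per node
        "older node": 100_000_000,   # decode ~10% of the chip
        "middle node": 250_000_000,
        "newer node": 500_000_000,   # decode ~2% of the chip
    }

    for node, total in budgets.items():
        print(f"{node}: decode logic = {DECODE_TRANSISTORS / total:.1%} of transistors")
    ```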

    If Intel went the ARM route, they wouldn't buy an ARM core (you seem to agree with this). Their fabs are too expensive to just slap out a standard design anyway. They would likely start by replacing the x86 decode logic with ARM decode logic, making any minor adjustments necessary to the micro-operation handling, and leverage as much of their past work as possible. They could, if they wanted, fab a seriously high-end server chip based on their Core architecture, and their low-end design would likely be based on the Atom architecture. Consequently, their time to market would not change much.

    The bigger question is what all this complexity buys them. Compared to ARM standard designs, the cores are much larger, with much better IPC and much more complex power-saving features. ARM has chosen the simpler route of designing two cores (one performance, one low power) and letting software choose between them. Intel has chosen to design a single core that can act as both. While you can argue that it is hard to do both well, having something like twice the transistor budget per core has its own benefits. Interestingly, Apple and Qualcomm both seem to be moving towards fewer, larger cores as well.
  • extide - Monday, April 18, 2016 - link

    Dude, this has NOTHING to do with the ISA. See A53, A57, etc. There are high-end and low-end ARM CPUs as well. Also, "a high end ARM CPU takes only 4 years" -- well, those high-end CPUs are basically only competitive with Atom ANYWAY! The ISA only really matters to the front end and instruction decoders on the CPU. Much of the back-end/execution side is all the same, regardless of the ISA, so really you have no idea what you're talking about.
  • jasonelmore - Saturday, April 16, 2016 - link

    This. Until Intel gets serious about making LTE modems and integrating them into their tablet SoCs, everyone will choose a Snapdragon chip that has the modem already built in. Even Qualcomm's MSRP for the LTE fee plus SoC is less than what Intel will charge for this platform, which has no LTE.
  • rahvin - Tuesday, April 19, 2016 - link

    Intel's costs have little to do with production cost. They are trying to not undercut their other sales. They are in a sticky situation. They could wipe ARM sales out if they wanted to but they'd have to annihilate their own high end sales margins to do it. So they try to balance power at the low end so they don't erode their margins on the high end. The result is they aren't very competitive on the low end, but this isn't because they can't be. It's strictly because they don't want to erode their margins.
  • zodiacfml - Saturday, April 16, 2016 - link

    Yep, partly right and wrong. The only mistake Intel made was to treat Atom as second class in terms of getting the latest process nodes. They only realized this recently, which is too late now, as smartphones have already reached a good-enough level of performance.
  • hrrmph - Saturday, April 16, 2016 - link

    No, the other folks have it correct, it was the lack of (and continuing lack of) integration. You could put it on the best process node in the world every time, but without LTE it is still only second best.
  • name99 - Sunday, April 17, 2016 - link

    And yet that is EXACTLY what Apple ships...
    As far as I know, Kirin (Huawei's SoCs) also come without on-board LTE. Likewise Exynos (Samsung's SoCs).
    Rockchip, of course, does ship integrated LTE, in their SoFIA line. How do they do that? By using INTEL's modem!

    Pretty much every aspect of this claim (Intel was doomed to failure in mobile because they had no cellular baseband to integrate onto their parts) is false.
  • Strunf - Monday, April 18, 2016 - link

    Apple is not relevant in this context; they make their own phones and design their own chips. It's not like their SoCs were competing against any other SoCs.

    MediaTek SoCs have LTE, the new Exynos has it, the Snapdragons have it... most if not all SoCs with ARMv8 probably have LTE and other options. All of them can provide a more complete solution than Intel can, and probably at a lower cost.
  • BurntMyBacon - Monday, April 18, 2016 - link

    @Strunf: "Apple is not relevant in this context, ..."

    Except that it is possible (as name99 said) that they will be using Intel's LTE modem for their A10.

    @Strunf: "MediaTek SOC have LTE, the new Exynos have it, the Snapdragons have it... most if not all SOC with ARMv8 probably have LTE and other options. All of them can provide a more complete solution than Intel can ..."

    Huawei's Kirin has seen more market penetration than Intel's Atoms. They don't have a modem either. I'm pretty sure Intel also has the stronger brand name, so there must be another reason.

    @Strunf: "... probably at a lower cost."

    That's the one. In a cutthroat market with low margins that is approaching saturation, you have to present a compelling reason for people to spend more. From an objective look at benchmarks, Intel has good core performance, but the end user experience is largely the same as their lower cost competitors.
  • Namisecond - Monday, April 18, 2016 - link

    Processors, on the desktop at least, got good enough around Sandy Bridge. Blame it on SSDs for freeing up the IO bottleneck and making a six-year-old PC better than a recent PC with an HDD.
  • patrickjp93 - Tuesday, April 19, 2016 - link

    No you can't, because the gains have been huge. The problem is software not keeping up with new instructions that do more. All the big-iron servers swap chips every year, so clearly performance gains are there.
  • paddytokey - Friday, April 22, 2016 - link

    SSDs also make for a very valid upgrade option to bring new life back to older notebooks; I've made a lot of people hold off from buying a new laptop just by installing an SSD.
  • haukionkannel - Saturday, April 16, 2016 - link

    Chromebooks are the biggest winners at the moment, so of course Intel tries to compete with those. People don't need anything more powerful than Chromebooks right now, which is why faster computer sales are declining. There are some minor market segments that would like faster computers: gamers, video content makers, and some computational tasks.
    Of those, gamers are so minor that they can be ignored. The others can use 24-core pro processors, because price is not a problem.

    So cloudbooks are the segment where you can expect more sales, if the price is right!
  • BurntMyBacon - Monday, April 18, 2016 - link

    @haukionkannel: "Of those, gamers are so minor that they can be ignored."

    DON'T IGNORE ME!!!
    (We can be very vocal and you know what they say about the squeaky wheel)
  • haukionkannel - Tuesday, April 19, 2016 - link

    And what new and exciting things have gamers gotten from the CPU department during the last 5 years... Hmm... nothing... And I have not seen any signs that Intel is worried about that.
    We are a minority that has no meaning at all...
  • LordConrad - Saturday, April 16, 2016 - link

    I think Intel is saying "cloudbooks" because they don't want to endorse individual companies. Chromebooks are the obvious connection, and they have been selling very well, especially in schools and to people who don't need or want full-blown computers.
  • otherwise - Monday, April 18, 2016 - link

    "Cloudbook" is just a vendor-agnostic way of saying "Chromebook".
  • rahvin - Tuesday, April 19, 2016 - link

    Hate to break it to you, buddy, but Intel isn't pushing the "cloudbook" and never has. Google invented this market segment, and nearly every sale is a Chromebook. Both Intel and Microsoft laughed when Google introduced the machines. Microsoft stopped laughing about 2 years ago when they realized how many had been sold. I'm actually surprised that only 5 million have sold. Chromebooks occupy at least one, and sometimes all three, of the top sales spots on Amazon for laptops.

    Chromebooks are real, sell quite well and for a lot of people they are the only computer they need. This presentation slide is Intel finally admitting the segment is real, just like Microsoft did when they created the HP Stream as a counter.
  • pugster - Tuesday, May 31, 2016 - link

    Not really Intel's fault that PC shipments are down. Rather, companies like Microsoft and Google didn't raise hardware requirements, instead maintaining the same requirements for Windows and Android. I have an Intel Z3735F tablet and it works really great despite having 2 GB of memory and a 32 GB eMMC disk. Intel really flubbed it on the Z3735G by crippling it with 1 GB of memory; you can't really run Windows with that.

    There are always entry-level buyers who just want to surf the web and read emails, and 32 GB of eMMC is good enough for them. Ultrabooks are nice, but Kaby Lake CPUs cost 10x more than Apollo Lake SoCs, and that is cannibalizing sales of their entry-level products.

    Intel's Apollo Lake SoCs are made to compete with the mobile/higher-end ARM market. The problem with Intel is that they couldn't compete with ARM SoCs in the mobile market.
  • BobSwi - Friday, April 15, 2016 - link

    Netbook v2; prolly be good for a spot of ole Linux. Hopefully fewer hoops to jump through than Chromebooks have.
  • Arnulf - Saturday, April 16, 2016 - link

    Assuming they don't gimp the platform (with locked UEFI/bootloader restricted to W10) once again like they did with previous incarnation(s) of Atom.
  • Michael Bay - Saturday, April 16, 2016 - link

    Dream on. This cheap thing will live or die on software, which means Windows only.
  • MonkeyPaw - Friday, April 15, 2016 - link

    The problem for Intel is that they don't really want Atom to be too good. If it is, then the Core line suffers as more people buy good-enough Atom-based systems. ARM SOC companies don't have this problem, as they are still mostly working to make their chips faster. Intel has superior fabrication equipment and probably the best engineers in the business, but the corporate machine spends so much energy making up SKUs with random features missing. I would love to see what Intel could really do with the Atom architecture, but I don't think we ever will.
  • mrdude - Friday, April 15, 2016 - link

    We're at a point where the x86 Intel alternative is the crap, poor-performing solution when compared to the ARM equivalent -- whether Snapdragon, A9, or even a standard A72, they all beat Intel's x86 SoCs in performance, in features, and often in both.

    That's shameful.
  • Thatguy97 - Friday, April 15, 2016 - link

    Well, there's no competition; Intel is in absolutely no hurry.
  • mrdude - Friday, April 15, 2016 - link

    That argument worked when they were rolling in the money, but that's not the case anymore. Intel made 58% of their revenue from the PC market in 2015, and that's a shrinking pie. If Intel isn't providing incentive and innovating in order to spur more sales -- completely irrespective of competition in the marketplace -- then they'll suffer because of it.

    And that's only in the high-end x86 space where there isn't competition. The two comments above you are within the context of the low power 'mobile' space where Intel has been spanked, bullied, and robbed of their lunch money. The issue there wasn't the lack of competition, but *the competition*
  • rahvin - Tuesday, April 19, 2016 - link

    And yet, Intel still makes more money on silicon than anyone else in the industry. We'll see more failures in the ARM segment before Intel is ever at risk. Their margins are still pushing 58% while most ARM producers are in the 3-8% range.
  • Michael Bay - Saturday, April 16, 2016 - link

    >outperform
    Laughable. Throw anything actually complex at ARM and it dies while boiling itself.
  • name99 - Saturday, April 16, 2016 - link

    That's nonsense. Apple's CPUs outperform Skylake-m at equivalent power, with a more consistent lack of throttling, and with substantially superior GPU performance.
    With the A8 Apple basically matched Intel at i3-level in IPC on-core, but still had some throttling issues. With the A9, Apple improved that to match Intel at the i7-level for IPC, both on-core AND in the uncore, substantially reduced throttling, and substantially ramped up frequency.

    Intel has, right now, only one advantage at equivalent power; namely, that Intel can still turbo its low-power chips to ~3GHz for very short spurts, which provides some level of snappiness and reduced latency. I suspect that they will retain that for a while, because I think Apple has access to more low-hanging fruit than adding turbo-ing circuitry, basically improvements that can materially improve their IPC over Intel while also slowly ramping up their frequency.

    Of course Apple is sui generis, and doesn't sell on the open market. But their success acts as a continual spur to both ARM itself, and to the ARM independents like QC and nV. The cadence of ARM improvements is quite a bit faster than the pathetic 3% or so annual x86 improvement; and I think it will persist, especially once some sort of rationality and consolidation takes over the ARM market. (Right now it is crazy that nV, QC, Samsung, AMD, and various server companies like Applied Micro and Cavium are all trying to create basically the exact same sort of CPU. At some point this should consolidate to maybe ARM, QC, and Apple as the three CPU design companies, with talent spread across them much less thinly.)
  • lilmoe - Saturday, April 16, 2016 - link

    "Apple's CPUs outperform Skylake-m at equivalent powers"

    lol
  • zodiacfml - Sunday, April 17, 2016 - link

    Not really outperforms, but comes very close. It's not surprising though, as Apple's chips are already bigger than Intel's.
  • Michael Bay - Monday, April 18, 2016 - link

    He's an armophile lunatic always ready to sperg, just look higher up the thread.
    One can't help but tease.
  • name99 - Friday, April 15, 2016 - link

    I actually don't believe Intel has the best engineers anymore. I suspect that many of the best engineers have moved to Apple.
    Of course working for Apple has some downsides (in particular even stronger secrecy levels than Intel), but the upside is that you're able to spend vastly more time on the interesting problems as opposed to the "Oh god, how do we stay compatible with some stupid decision made 30 years ago" problems. I also suspect that Apple is substantially more willing than Intel to take a risk on daring new ideas (in part because if they slip one rev, who would even know? they'll simply announce the A(n+1) as the A(n) with more cache or more GPU or whatever and 100MHz faster, and things will be fine.)
    Finally almost everyone at Apple gets to be on an "A"-team producing the sexy CPUs (whether it's sexy fast as in the phones, or sexy low-power as in the watches). I expect most of the Intel engineers are in more of a support position, taking the overall design for the new CPU and then performing various cripplings of it to match Intel's ever-expanding portfolio of SKUs. That's got to be pretty soul-destroying, taking a perfectly fine CPU and figuring out how to make it work worse to satisfy some marketing strategy.
  • menting - Friday, April 15, 2016 - link

    Do you even know any engineers working at Apple? Because if you do, you'll know that your comment is pretty far from the truth.
  • zodiacfml - Saturday, April 16, 2016 - link

    Right. Not all came from Intel. The secret of Apple's superior SoCs is that they can afford a big chip. The 6s chip is only slightly smaller than Intel's Core chips.
  • name99 - Saturday, April 16, 2016 - link

    The Apple die sizes are not outrageous. They've been around 100 to 150mm^2 since the A4, and the CPUs are a small part of the die, around 15% or so; the largest single area of the die is the GPU.

    Exynos, for example, has been at similar sizes until the most recent Exynos 7420 (at 80mm^2), and you can argue that that was a very stupid decision on Samsung's part --- they should have used the process shrink to slap more GPU on the die (which they could then also use to lower power, by running more GPU cores at lower frequency whenever circumstances are not too demanding).
  • name99 - Saturday, April 16, 2016 - link

    Well I did work at Apple for 10 years so...
  • jwcalla - Saturday, April 16, 2016 - link

    Well, Intel is on record stating that they're more interested in promoting "diversity" than actual engineering and making money. So, yeah...
  • pSupaNova - Thursday, April 28, 2016 - link

    With diversity comes innovation.

    Anyway, Intel is going to create a much more lucrative/powerful market if they stick to their roadmap of ever-increasing power efficiency.

  • Michael Bay - Sunday, July 3, 2016 - link

    With racial diversity comes only space to destroy™. All those fabulous tech companies became so fabulous precisel because they were steered by white heterosexual men.
  • eddman - Sunday, April 17, 2016 - link

    What's the DMIPS of Apple's A9? Aren't ARM processors still way behind Intel's when it comes to FP calculations?
  • name99 - Monday, April 18, 2016 - link

    iPad Pro vs Surface Pro 4:
    http://browser.primatelabs.com/geekbench3/compare/...

    The single-threaded numbers are ridiculously comparable --- about as many wins for Intel as for Apple. (And that includes FP.)
    Obviously Intel comes out slightly ahead in multi-core because of hyper-threading, but that's simply not very interesting --- if Apple cared about multi-core performance, their easy solution (maybe not always, but certainly for now) would be to simply slap a third core onto the die, like they did for the A8X. The cores are small enough that that would increase die area by no more than maybe 6% or so.
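    As a rough sanity check on that die-area claim, here is a minimal sketch using name99's own earlier figures (a ~100-150mm^2 die with the two-core CPU complex taking ~15% of it; illustrative assumptions, not measurements):

    ```python
    # Back-of-the-envelope estimate of die growth from adding a third core.
    die_area_mm2 = 105.0        # assumed A9-class die, in the 100-150 mm^2 range
    cpu_share_two_cores = 0.15  # assumed: two-core CPU complex is ~15% of the die

    one_core_mm2 = die_area_mm2 * cpu_share_two_cores / 2
    growth_pct = one_core_mm2 / die_area_mm2 * 100
    print(f"Third core adds ~{one_core_mm2:.1f} mm^2, growing the die ~{growth_pct:.1f}%")
    ```

    That lands around 7.5%, the same ballpark as the "maybe 6% or so" above.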

    The GPU numbers are even more stark, with iPad Pro substantially ahead of Surface Pro.
  • Speedfriend - Monday, April 18, 2016 - link

    @name99

    try
    Kraken: Surface Pro i5 1179 v iPad Pro 1535
    Octane: 30,064 v 19,862
    WebXPRT: 326 v 221
    IceStorm Physics: 40,480 v 15,035
    IceStorm Graphics: 72,183 v 50,240
    IceStorm overall: 61,482 v 33,046

    iPad Pro is not even close in almost every benchmark....
  • name99 - Monday, April 18, 2016 - link

    The numbers you give refer to an i5 CPU. I specifically said that A9 beat m-class (~5W, about the same as the Apple target power) x86.
    If you allow the Intel core substantially more power (~15W) then things change. No surprise there.

    The Geekbench numbers, while obviously not perfect, have the advantage that for the most part it's CPU against CPU. The browser numbers are problematic for such an analysis because now you're also comparing across very different algorithms (up to and including possible multiple core use, where hyper-threading kicks in). Browser comparisons within the same browser (say Chrome to Chrome, or Safari to Safari) likely give some useful info, but across browser comparisons, while telling you something about the user experience, are not helpful for comparing cores.

    IceStorm Physics, of course, is basically a test of frequency*number of cores. It's the DMIPS or Whetstone of graphics benchmarking --- utterly useless for providing new information.

    What would be most useful (and what is missing) would be SPEC numbers using the same compiler. Unfortunately these don't yet exist, because the x86 numbers (that I have seen) have so far always been compiled with icc, which performs such aggressive (and apparently extremely code-specific) transformations on SPEC that, once again, you're no longer comparing like with like. There were lots of complaints about this when the iPad Pro 12.9 review came out, so maybe we'll get lucky and have more comparable numbers (all done with LLVM, or at the very least x86 numbers done with gcc) when the detailed iPad Pro 9.7 numbers come out? Or maybe we'll have to wait until November and the A10 ;-( ?
  • Meteor2 - Monday, April 18, 2016 - link

    Normalised for power (15W v 5W), there's little difference in performance. But normalising is pointless -- there are simply no 15W ARM chips around, and none on the horizon. If it were easy, someone would've done it.
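    For what it's worth, a crude version of that normalization using the Octane numbers quoted above (nominal TDPs; performance does not scale anywhere near linearly with power, so treat this as illustration only):

    ```python
    # Naive points-per-watt from the Octane scores posted earlier in the thread.
    octane = {
        "Surface Pro 4 (Core i5, ~15 W)": (30_064, 15.0),
        "iPad Pro (A9X, ~5 W)": (19_862, 5.0),
    }

    for device, (score, watts) in octane.items():
        print(f"{device}: {score / watts:,.0f} Octane points per watt")
    ```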
  • rahvin - Tuesday, April 19, 2016 - link

    TDP is measured differently by everyone, and there is no reliable way to compare other than total system processing and power use. Attempting direct comparisons when they measure differently is an exercise in futility.
  • eddman - Tuesday, April 19, 2016 - link

    That's not Dhrystone/DMIPS. Geekbench isn't exactly trusted when it comes to comparing ARM to x86.

    A Core i7-4770K's DMIPS per clock cycle, per core, is 8.57. A Cortex-A15's in an Exynos 5250 is 3.5. What's the A9's?

    Same goes for the FP numbers in Geekbench.
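    For reference, DMIPS is just raw Dhrystone throughput divided by the VAX 11/780 baseline. A minimal sketch of the arithmetic behind those per-clock figures (the Dhrystone throughput below is back-calculated from the quoted 8.57, not a measured number):

    ```python
    # 1 DMIPS = 1757 Dhrystone iterations/second: the score of the
    # VAX 11/780, the original "1 MIPS" reference machine.
    VAX_BASELINE = 1757

    def dmips_per_mhz(dhrystones_per_sec: float, clock_mhz: float) -> float:
        """The 'DMIPS per clock cycle, per core' figure quoted above."""
        return (dhrystones_per_sec / VAX_BASELINE) / clock_mhz

    # ~52.7M Dhrystones/sec at 3.5 GHz reproduces the 4770K's quoted 8.57
    print(f"{dmips_per_mhz(52_700_000, 3500):.2f} DMIPS/MHz per core")
    ```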
  • KateH - Friday, April 15, 2016 - link

    Re: the comparison chart -- I'm 99% sure Bay Trail had 4 EUs in the iGPU. Huge GFX performance increase going from Bay Trail to Cherry Trail with the extra transistor budget, thanks to 14nm!
  • varad - Friday, April 15, 2016 - link

    I'm interested to see if this platform can show up in NUC-sized boxes [for non-gaming HTPC apps]. The HW decode support for 4K HEVC + VP9 is great news. Is there any mention of whether there is support for HDMI 2.0 + HDCP 2.2?
  • iwod - Friday, April 15, 2016 - link

    Would really like an update on Atom performance. Are they still 1 core, 1 thread? Any IPC improvement?
    Would a 4-core Atom beat a 2-core Sandy Bridge (which I consider to be the tipping point of CPU performance)?
    If not, how about a Core 2 Duo?
  • mkozakewich - Saturday, April 16, 2016 - link

    It apparently does beat the lowest-performing Core i3 chips, and for far lower cost and power. I suppose I'd expect something like this every five years, no?
    http://cpu.userbenchmark.com/Compare/Intel-Core-i3...
  • sonicmerlin - Saturday, April 16, 2016 - link

    Single-core floating point is 41% faster on the i3.
  • Ammaross - Sunday, April 17, 2016 - link

    That's a Cherry Trail vs. a Sandy Bridge mobile CPU. Neither current nor relevant.
  • Arnulf - Saturday, April 16, 2016 - link

    It should easily beat a Core 2 Duo. Expect a very similar level of per-core performance to Core 2, so it's about on par with first-generation Core 2 Quads (Q6600 territory, sans overclocking).
  • rahvin - Tuesday, April 19, 2016 - link

    I'd be surprised; they've been keeping performance below those levels to avoid cannibalization. The Avoton Atoms beat that level of performance, and Intel sold them for a pretty penny, with strict restrictions on how they could be used, to avoid eating into i3 and i5 sales.

    Intel is going to continue to hold down performance on the Atoms for this reason.
  • mkozakewich - Saturday, April 16, 2016 - link

    Ooh, exciting! I'm currently using a $100 Onda V820w, with 2GB of RAM. Honestly, I'm starting to feel the struggle with these kinds of limited ports. I haven't had any luck finding a microHDMI adapter, so it'll be nice when I can just plug everything in through a powered USB hub.
  • aIIergen - Saturday, April 16, 2016 - link

    Cannibalization candidate: $999 notebooks/convertibles/AIOs would be killed by this. Stupid Intel.
    I imagine that two years from now we'll see PCs priced the same as today, but with only crap inside.
  • StrangerGuy - Sunday, April 17, 2016 - link

    Intel and their OEMs are in a toxic relationship. Both sides are more than aware of the huge problem of rampant cost-cutting that is dragging quality down the toilet. But both parties, PR aside, refuse to give in to each other when it comes to actually footing the bill for better-quality PCs.
  • mikk - Saturday, April 16, 2016 - link

    The EU count isn't really unknown; Apollo Lake should have 20 EUs.
  • Vince789 - Saturday, April 16, 2016 - link

    Do these support UFS 2.1?

    I'd like to see some UFS 2.1 in the Surface 4
  • TheinsanegamerN - Sunday, April 17, 2016 - link

    Right? The eMMC 4 storage is what kills the Surface 3 most of the time.
  • zodiacfml - Saturday, April 16, 2016 - link

    I hope they get rid of the Bay Trail SoCs in the Intel 2-in-1s we use at the school I work for.
  • watzupken - Saturday, April 16, 2016 - link

    I'm not sure it's really that low cost, to be honest. Among the initial three Cherry Trail models, the cheapest and most crippled was the x5-Z8300. Going up to the Z8500 adds quite a substantial cost from what I observed. The x7-Z8700 devices are almost competing with laptops carrying low/mid-range Core series processors.
  • Bob Todd - Monday, April 18, 2016 - link

    The Z8500 was in the $99 Kangaroo PC, so I don't think the cost to OEMs was the problem.
  • TesseractOrion - Saturday, April 16, 2016 - link

    Might as well wait for Kaby Lake IMHO....
  • fmaste - Saturday, April 16, 2016 - link

    Will these things be reliable this time?

    http://www.phoronix.com/scan.php?page=news_item&am...
  • haukionkannel - Sunday, April 17, 2016 - link

    Well, it depends on what the Linux kernel programmers do...
    How important is it to them to implement support for the energy-saving features? Who knows...
  • The_Assimilator - Sunday, April 17, 2016 - link

    If Windows can run Bay Trail without issue and Linux cannot... where do you think the problem lies? I'll leave it to your superior trolling skills to come up with an answer.
  • serendip - Sunday, April 17, 2016 - link

    Any actual performance numbers for Apollo Lake vs. the Cherry Trail and Bay Trail-T SoCs? I've got an Atom Z3740 tablet and it's still pretty fast. It's only the lack of hardware HEVC support that hamstrings it, but I rarely use HEVC anyway.

    And when will Intel come up with sane numbering schemes? It's easier to compare sequential numbers like Z3740 vs. Z3800 instead of obscure lake and river names.
  • TheinsanegamerN - Sunday, April 17, 2016 - link

    I don't see how we could have performance numbers for a chip that's not even released yet.
  • Meteor2 - Monday, April 18, 2016 - link

    I just hope there's a Surface 4 with this inside and a couple of Type-C ports.
  • Tujan - Thursday, April 21, 2016 - link

    Way to run a railroad, Intel.
  • Sliderpro93 - Thursday, April 21, 2016 - link

    WTF? Intel is almost done baking the Skylake-refresh-type Kaby Lake chip with Gen 10 graphics, and now they ship Gen 9 graphics? It's a 90% stupid decision. Maybe it's to set up a situation where we have Gen 10 graphics at the Core m/Core i level and, as a cheaper analogue, Gen 9 graphics in budget products? If it's a 24 EU part in a sub-15W TDP for 200 bucks, it might be fine, but we all know where this will go: to Chinese tablet makers. And Intel already released the joke, a Z8300 chip with an awesome CPU and GPU but single-channel RAM (if you don't know, the GPU uses system RAM; single channel = half the GPU power of dual channel). Hope all the chips will be dual channel, and no more stupid stuff from Intel. I am very eager to see Zen tablet APUs. Still don't like the Gen 9 thing.
  • AJLond1 - Sunday, April 24, 2016 - link

    If they can step up the power of the integrated graphics, hardware OEMs could use this to make pretty sweet handheld gaming devices...
  • caralampio - Thursday, August 4, 2016 - link

    Intel must join in on ARM SoCs and/or buy MIPS, and take the high-end mobile device market while they dominate the PC market. This position of dominance would create a good platform from which to expand to the mid and low end of mobile device SoCs. Intel has the deep pockets to do it. If they don't, it's only for lack of innovation and avoidance of risk, like an old giant sleeping in its comfort zone, trusting only in its glorious past. Intel can and must take the ARM and MIPS markets.
