49 Comments
blanarahul - Tuesday, February 3, 2015 - link
Looks like big.LITTLE is here to stay. Oh well.

Morawka - Tuesday, February 3, 2015 - link
If it were up to ARM, every phone would use big.LITTLE, because ARM makes more money per device (they bill licensees per core) on cookie-cutter implementations. IMO it's just a waste of precious board space, and it's a serious compromise, since their big cores use more power than anyone would like.

At most a system needs two little cores, but I'd argue one is enough. Fully custom ARM SoCs like Apple's A8 are engineered around power efficiency at all workloads. When you can't build a custom SoC, you use these cookie-cutter designs and pay out the ass in licensing costs.
levizx - Wednesday, February 4, 2015 - link
Yet Qualcomm opted for 4xA57+4xA53 and 2xA57+4xA53 designs for their stop-gap solution. You would think, if any of what you said were true, they would have gone 4xA57+2xA53 instead.

SunLord - Wednesday, February 4, 2015 - link
If I'm not mistaken, ARM's preferred optimal layout is 2xA57+4xA53, which I think was mentioned on this site in one of the talks with ARM SoC developers.

Valantar - Wednesday, February 4, 2015 - link
Qualcomm, just like every other SoC manufacturer except Apple, is "suffering" under the GIMME ALL THE CORES OMG OMG mentality of certain consumers (and thus also device manufacturers), driven especially by MediaTek and their early octa-core chips (which "beat" Snapdragon 800 devices in ideal scenarios, but in reality performed about half as well). Thus QC's stop-gap solution went octa-core, mainly to please the Asian market. Unfortunately, it seems like stupid solutions like this are here to stay. Apple has shown quite conclusively what can be done with a well-designed dual-core setup, which (in one core design!) scales from high performance (at low clocks, even) to very low power usage. Unfortunately nobody except Nvidia seems to have taken notice, and I very much doubt we'll see a Snapdragon 830/Exynos whatever/MTK ??? with an extra-wide dual-core design. Although I would buy the shit out of that.

Jon Tseng - Wednesday, February 4, 2015 - link
> are "suffering" under the GIMME ALL THE CORES OMG OMGThe bizarro thing is that this is because "bao" (treasure) in Chinese sounds like "ba" (eight).
=> This means the number 8 is considered auspicious.
=> This means having 8 cores in your phone is highly prized.
So basically complex 21st century SoC architecture choices are being dictated by a linguistic coincidence!
Crazy...
simonsmithdh - Wednesday, February 4, 2015 - link
You may be talking about Mandarin, but in Cantonese "ba" sounds like "fa" (wealthy or prosperous). Do you have any evidence for your (plausible) assertion?

hammer256 - Wednesday, February 18, 2015 - link
Yeah, the Cantonese theory is what I always heard. If I'm not mistaken, associating 8 with wealth is more of a southern tradition.

skavi - Wednesday, February 4, 2015 - link
SunLord - Wednesday, February 4, 2015 - link
I like hexa-core SoCs, but octa seems pointless. I wonder if we'll ever see something like a 3xA57+3xA53 or a 1xA57+3xA53.

Alexvrb - Thursday, February 5, 2015 - link
Even Nvidia sells a version of their K1 (and probably future) SoC loaded up with stock ARM cores. Does any mass-market hardware even use their twin-core Denver design? I haven't bothered to look after finding out that most K1s are actually just Tegra 5.

A.Noid - Monday, February 9, 2015 - link
I believe that the Nexus 9 is using Denver. It's 64-bit, I think.

phoenix_rizzen - Wednesday, February 4, 2015 - link
2x big cores + 4x LITTLE cores makes the most sense.

Phones like the Motorola Moto G have shown that 4x LITTLE cores is plenty of power for everyday usage, but isn't quite enough for power users, gaming, and heavy usage. So tack on a pair of big cores to handle those situations and you pretty much have the perfect SoC. Just need a skookum GPU to match.
I'm really looking forward to something like a Z3 Compact with a Snapdragon S808 SoC (2x Cortex-A57 + 4x Cortex-A53 CPUs with Adreno 405 GPU).
Valis - Wednesday, February 4, 2015 - link
Sadly I haven't seen a single device announced yet with the rather fine Snapdragon 808, only the 615 in 5-5.5 inch screens and a bunch of high-end devices with the 810. :-/

name99 - Wednesday, February 4, 2015 - link
How does 4 little cores "make the most sense"?

The problem is that phone workloads are, for the most part, unpredictable.
When you have a predictable workload (for example many server workloads) you can put bounds on acceptable latency, and you can partition your workload, as appropriate, between low and high latency cores.
But on a phone MOST of your workload is going to consist of "respond to user input, and do so as fast as possible". There is some ongoing background stuff that you can relegate to a low performance core, but it's not clear that you need an A53 for that --- Apple seems to do OK running that stuff on an ARM M3. Playing music and decoding video have predictable performance, but they are best done on dedicated HW, not using a general purpose core.
I've heard plenty of these claims about why lots of low power cores are useful, but (outside of the server space) I've never seen a single TECHNICAL explanation justifying the claim. The people supporting lotsa small cores come across as fanboys, while the people actually publishing papers and providing numbers pretty much universally agree that (again, under the constraints of the unpredictable code that runs on phones) race to complete is both lower energy AND provides a better user experience.
For example, do we have any published papers (or something similarly representative; not just vague claims) of games that deliberately pinned certain low-performance, bounded latency threads to the slower cores and thereby achieved lower energy usage than allowing those threads to time-share with other threads on the faster cores?
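For what it's worth, the kind of explicit pinning being asked about is trivial to express on Linux; what nobody seems to publish is evidence that it saves energy. A minimal sketch (Python, using the Linux-only `os.sched_setaffinity`; the assumption that CPUs 0-3 form the little cluster is hypothetical and varies per SoC):

```python
import os

# Hypothetical layout: on many big.LITTLE SoCs the little cluster is
# enumerated first, e.g. CPUs 0-3 little and CPUs 4-7 big. Check
# /sys/devices/system/cpu/ on real hardware before relying on this.
LITTLE_CORES = {0, 1, 2, 3}

def pin_to_little_cores(pid=0):
    """Restrict a process/thread (pid 0 = the caller) to the little cores.

    Only cores that actually exist and are allowed on this machine are
    used, so the sketch also runs on hardware without eight CPUs.
    """
    target = LITTLE_CORES & os.sched_getaffinity(pid)
    os.sched_setaffinity(pid, target)
    return os.sched_getaffinity(pid)
```

A game would call something like this from its bounded-latency background threads (audio mixing, telemetry); the open question above is whether doing so ever beats letting those threads time-share the big cores.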
TeXWiller - Saturday, February 7, 2015 - link
Let's say you have a leaky process. Which would be the better way to spend power while processing or waiting on IO: let the big cores handle the IO, or gate the leakier cores and wake one of them only when a small core reaches maximum sustained load, so that the overhead of flushing caches and switching context becomes a beneficial tradeoff? Server processors might of course have shared, more energy-consuming caches for performance.

It would be like DVFS, but with the addition of "leakage switching". Given the right environment and product constraints (low energy, foundry process) it might be worth the dark silicon.
Morawka - Wednesday, February 4, 2015 - link
The perfect SoC is one that is power efficient at all workloads. The more cores you need to accomplish this, the less efficient you are, both from a cost perspective and from a board-area perspective.

Power gating is a thing, you know; there's no need for a bunch of little cores when you have complete control over every transistor like in today's SoCs.
Morawka - Wednesday, February 4, 2015 - link
Three words, man: time to market.

Qualcomm got caught flat-footed when Apple released the A7, the first 64-bit ARM processor in a phone. They admittedly got caught off guard; they didn't even have anything on the drawing board at that point. Their PR team downplayed the advantages of 64-bit and got flamed by the technical media. Several PR people were fired after the whole debacle, IIRC.
vayal - Wednesday, February 4, 2015 - link
At least it would make sense if it were possible to have both the big and little cores operating in parallel.

psychobriggsy - Wednesday, February 4, 2015 - link
It is possible to have both big and little cores running at the same time (i.e., all 8 cores running simultaneously).

It's just that in typical mobile workloads, nothing actually needs the big cores until you kick off a game, or do that 1s web page render. Hence why ARM suggests two big cores as a sensible configuration.
It's just that the cores are so small, the silicon cost is very low to include more cores (especially A53s) - and this will only become more true at 16nm/14nm.
ARM Chromebooks in 2016 should feel very snappy though.
name99 - Wednesday, February 4, 2015 - link
You have explained why more than 2 large cores is frequently unnecessary. No-one is denying that.

What you haven't justified is why the background work demands 4 small cores rather than just 1.
UpSpin - Wednesday, February 4, 2015 - link
" Fully custom ARM SoC's like apple's a8, are engineered around power efficiency at all workloads."Sure, so what must have been the reason for Apple to include an Cortex M3 core in their A7/A8 SoC. I mean, there should be no reason, if the A7/A8 is so highly efficient both in low power and high power modes. Yet, it isn't. Apple quietly added a companion core for sensor data processing.
With big.LITTLE, there's no need for this. The main CPU can do all this, using a low power CPU. Additionally it can do much more.
You can make the main cores as efficient as possible, but you'll never be able to make them as efficient as dedicated cores, which are trimmed for power efficiency and, most importantly, are built using power-efficient transistors instead of performance transistors.
extide - Wednesday, February 4, 2015 - link
Even big.LITTLE could use something like a Cortex-M3/M4 -- they use SIGNIFICANTLY less power than even an A53 or A7, or even an A5!

extide - Wednesday, February 4, 2015 - link
(As a sensor fusion hub, I mean.)

V900 - Thursday, February 5, 2015 - link
That's nonsense. Show me a phone that uses an A53 core to do the kind of tasks you use an M3 core for, and I'll show you a phone that was designed by somebody without a degree, and which lasts for two hours on a full charge.

You'd see a Cortex-M companion core used alongside big.LITTLE implementations, because using anything other than an M core for simple tasks like counting steps, etc. would be a colossal waste.
Companion cores like the Cortex M class cores are puny compared to A53 or even A7 or A5 cores. A-class cores are easily ten(s) times more powerful. But also use ten(s) times the power.
M-class cores are designed to last for months on a single charge. The smaller ones like the M3 are more of a microcontroller than a microprocessor.
jjj - Tuesday, February 3, 2015 - link
Eager to see some realistic perf numbers. Not sure why they chose to go with very creative math; it just undermines their credibility, and that's hard to regain. Does it beat Denver, or Denver on 16FF+, or Core M? Hopefully we find out soon. The guys with custom cores (AMD included) might not be too happy about this, depending on how their own cores do.

There is a typo in the article: it says "then partners" instead of "ten partners".
azazel1024 - Wednesday, February 4, 2015 - link
Agreed. Note that the claimed 75% increase in efficiency for similar workloads compares an A15 on 28nm against 16nm FinFET... that process and technology difference alone probably more than accounts for a 75% difference in efficiency! That is two full nodes, plus planar to FinFET!

Sounds like something that'll be that much more heavily throttled if they try to fit it into a smartphone. Burst performance is nice, but when you start getting only a few seconds of burst performance and then throttling 40-60%+, it provides speedups to fewer and fewer tasks.
Granted, the efficiency advantages mean higher sustained workload speeds... but the process changes alone should probably provide that; the new architecture sounds like it will likely only provide a very minor boost, except in thermally unconstrained installations or for very short periods of time until throttled. Core is getting closer to ARM power levels and ARM is getting closer to Core power levels.
No free lunch.
hahmed330 - Tuesday, February 3, 2015 - link
Boring... Accept the processor A72..

psychobriggsy - Wednesday, February 4, 2015 - link
"Except".And one of ARM's weaknesses has been memory efficiency, so the new interconnect will surely improve matters there.
I'm looking forward to more details; sadly, ARM's product releases are very different from Intel's and AMD's, as they license designs rather than sell products. At least they have completed the 16nm POP hard macros, so it will be easy to incorporate into a design (I'm sure SoC designers already have designs just waiting for the final cores to slot in).
name99 - Wednesday, February 4, 2015 - link
The interconnect is a lot more interesting than you seem to think.

Intel uses a ring (multiple rings on high-end Xeons) rather than a crossbar (even on Xeon Phi, which seems a bizarre choice).
For ARM to use a crossbar (with the extra silicon that implies compared to a ring) suggests that they are really striving for high-performance inter-CPU communication (and probably that they have a goal, in the nearish future, of switching to multiple large L3 cache slices, along with NUCA and all that implies).
OreoCookie - Tuesday, February 3, 2015 - link
It'll be interesting to see what the architecture is like. Given the performance claims, it has to be significantly wider, though.

Jumangi - Wednesday, February 4, 2015 - link
A bunch of unprovable PR numbers for chips consumers probably won't see for 18 months... yawn.

darkich - Wednesday, February 4, 2015 - link
And some simple math reveals single-core Geekbench performance on par with, if not better than, Broadwell-U, and much better than that Core M joke. It is also much better than the 20nm Cyclone.
And we're talking at least quad core setup here, and sub 3W thermal design!
Absolutely amazing
azazel1024 - Wednesday, February 4, 2015 - link
Where are you getting sub-3W thermal with that performance? Their numbers suggest 3.5x the performance of an A15, but with only 75% better energy efficiency at the same workload as "2014 devices", which I assume means the A15 as well. Those numbers are also comparing 28nm TSMC versus 16nm FinFET+ TSMC, which probably accounts for 100% of the power efficiency gain there.

Since some of that is likely through higher clocks, the efficiency advantage is likely lower at the clock speeds necessary to yield a 3.5x gain in performance.
Considering the A15's TDP, this means a full-on A72 implementation is likely well over 3W when actually performing at 3.5x the performance of an A15. That sounds more like a 6-8W TDP at those levels; when throttled way back after hitting temp caps, it's probably only performing at 1.5-2x an A15 to stay within the 2-3W or so needed to fit in a phone.
That means more power than Broadwell M (a lot more) even if the performance somehow actually manages to be higher.
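The arithmetic implied above can be made explicit. Taking ARM's claims at face value, 3.5x the performance at 1.75x the efficiency (perf per watt) works out to 2x the power; the 2 W A15-class baseline below is an assumed round number for illustration, not an ARM figure:

```python
# Take ARM's claims at face value: 3.5x the A15's performance at
# 1.75x its energy efficiency (perf per watt).
perf_ratio = 3.5
efficiency_ratio = 1.75  # "75% better energy efficiency"

# efficiency = perf / power  =>  power = perf / efficiency
power_ratio = perf_ratio / efficiency_ratio
print(power_ratio)  # 2.0 - double the A15's draw at full performance

# With an assumed ~2 W per A15-class core as the baseline:
assumed_a15_watts = 2.0
print(power_ratio * assumed_a15_watts)  # 4.0 W per core, before throttling
```

In other words, the efficiency claim and the performance claim cannot both hold at constant power; peak A72 draw has to sit well above the A15's.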
V900 - Thursday, February 5, 2015 - link
WOOT!!!!

Just absolutely amazing?!? Surely you mean THE GREATEST TRIUMPH IN CPU DESIGN EVARR!!! Did anybody call the Nobel committee yet?!?
Seriously though: we're talking about a quad-core CPU that has only launched on paper. We have no idea how it'll perform besides very rudimentary figures from the designer(s). It's also around 18 months away from being in an actual product.
The fact that you think it's 'absolutely amazing' that a design 18 months into the future beats Apple's Cyclone (which has been out for over a quarter) and Intel's Core M (which is out now) is perhaps abusing the words 'absolutely amazing', no?
darkich - Friday, February 6, 2015 - link
No, it's not abusing them when you consider the fact that this CPU will be put into many-core setups and run inside your phone... and be more frugal than today's phone CPUs. And "much" (around 80%) better than Cyclone, which is a dual-core setup.
Intel otoh, is on a superior process even now, and isn't able to beat it even with a $270 CPU, with 2-3 times higher power consumption.
So yeah, amazing.
Alexey291 - Friday, February 13, 2015 - link
You really need to trust PR statements a liiiiittle less.

Speedfriend - Thursday, February 5, 2015 - link
"And some simple math reveals the single core Geekbench performance on par, if not better than Broadwell U, and much better that that Core M joke."Not sure on your simple math...
Single core Geekbench
Exynos 5433 = 1178
Supposedly the A72 will give 1.84x the performance, so 2167
Core M in an HP Envy gets 2500 in single-core Geekbench
try again please
darkich - Friday, February 6, 2015 - link
Haha, talk about cherry picking!!

The lowest Exynos score and the highest Core M score.
While in reality the Exynos goes up to 1400 and Core M starts from 1800 (depending on the model... you used a high-end, thermally unrestricted one that draws up to 20W - read the review on notebookcheck.com - and costs over $250!!)
Now you try again please
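For reference, the projections the two posters above are juggling, written out. The scores are the ones quoted in this exchange; the 1.84x uplift is ARM's claimed per-core gain, and treating Geekbench as scaling linearly with it is an assumption:

```python
# Projected A72 single-core Geekbench from the A57 scores cited above.
a72_uplift = 1.84  # ARM's claimed per-core gain over the A57

low_a57 = 1178   # Exynos 5433 score Speedfriend quotes
high_a57 = 1400  # higher Exynos score darkich quotes

print(round(low_a57 * a72_uplift))   # 2168 - below the 2500 Core M score
print(round(high_a57 * a72_uplift))  # 2576 - above the 1800 Core M score
```

So the whole disagreement comes down to which baseline scores each side picks; the projection straddles the Core M range either way.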
Daniel Egger - Wednesday, February 4, 2015 - link
Right, MediaTek and A72... the company known for flooding the market with cheap Cortex-A7 cores. True OctaCore my ass.

name99 - Wednesday, February 4, 2015 - link
You are aware of this thing called Moore's Law, right?...

And what is your complaint against "flooding the market with cheap Cortex-A7 cores"? You are in a position to be able to pay, I don't know, $600 for a really nice phone; but there are plenty of poor people out there who are thrilled to have a smartphone that cost them only maybe $30 or so. I met some of them in Myanmar and a cheap smartphone is the first electronics, hell the first nice thing, they have ever owned. China and MediaTek are doing god's work by bringing so much happiness to so many poor people.
V900 - Thursday, February 5, 2015 - link
Yes, screw stuff like clean water, paved roads, a stable electricity supply 24/7 and all those other things that actually HELP people in third world countries.

What they really need is a shoddy 30 dollar handset, that they can play Candy Crush on for a few hours, and then stroke lovingly the rest of the day/week when there isn't power in the village, and they don't want to walk 15 miles to get it charged.
MrSpadge - Wednesday, February 4, 2015 - link
The A72 seems to provide an impressive gain over the A57, even after subtracting the process-related gains. Every year I want better SoCs to be put into a Lumia 1020 successor, yet my current phone still doesn't want to fail, so I keep waiting for price drops or better hardware.
Two A72 cores seem to be the sensible thing to have. Not too much power draw to eat the battery, but so much punch that all everyday applications should be blindingly fast...

But I'm afraid we'll get 8-core big.LITTLE and maybe quad-core variants... The quad-core is not so bad, but for me two fast cores would be better...
Tams80 - Wednesday, February 4, 2015 - link
For the 1020 successor (if there even is one), they should go back to the 808 PureView design of having a DSP, rather than using a SoC core.

twotwotwo - Wednesday, February 4, 2015 - link
Seems needed. Intel and Apple are busy, and the A15 seemed to get relatively few design wins (blame Qualcomm and Apple). Wonder how long it will take to get into products, though. 2017?

name99 - Wednesday, February 4, 2015 - link
WTF are you talking about? There is a LONG list of A15-based SoCs. You just don't hear about them any more because sites like AnandTech tend to review the newest, fanciest phones.

And ARM claims there will be A72 devices shipping in 2016. This seems plausible. I expect Apple will use up all the 16FF+ capacity this year for A9s, but that capacity will start to be available for other vendors in early 2016.
ruturaj1989@gmail.com - Sunday, March 8, 2015 - link
Surprisingly, it's coming out this very year.

Pork@III - Wednesday, February 4, 2015 - link
I predict even more heated mobile devices :D