It is. The A12X has more than double the S855's GPU performance, and we can expect a ~20% increase in GPU performance from A12X to A13X, since the A12-to-A13 transition brought a similar increase.
Ok, but then again the SD875 (or whatever it will be called) is expected to be on a new architecture after three generations, which generally means a 50%+ jump just from that. With the transition to 5nm, you can expect even more performance on top. That would, after all, be the fairest comparison to the A14 (or A14X) on 5nm later this year, given process node parity. Same with CPUs (don't forget, the A77 in the SD865 was released by ARM the summer before, and only presented in the SD865 in December).
Over the past few years Apple has been doing a consistently better job than Qualcomm regardless of process node. They can probably afford to, since they are in full control of the whole technology stack, including the software, which lets them squeeze out additional performance and efficiency. But this doesn't change the fact that year after year the A-chips are better than their counterparts.
I'm not sure that Apple is much, if at all, more optimized than the Android BSPs. If you're aware of proof to the contrary I'd be interested in reading it.
It doesn't mean optimized the way you envision it. It means more tailored to the design, since Apple has a fixed number of systems it has to support. There are three ways to see it. First: how many years does Apple push iOS updates? That longevity is a function of the hardware's performance as well as of the OS.
Another way to see it is knowing that Apple ships iPhones with much less RAM, meaning their OS and apps have to be designed to use less RAM too.
Likewise, iPhones usually ship with smaller batteries; by designing the OS, SoC, and RAM synergistically they can get away with a smaller battery too. RAM draws power even when idle, so less RAM does translate to lower energy usage.
Yeah, but anything Qualcomm does to boost performance, Apple will be doing too.
The 865 is going to compete with the A14 in 2020, and the 875 will compete with the A15 in 2021. So if we expect the A14 to boost perf by 15% and the A14X by 40%, and the A15 to boost perf again by 10% and the A15X again by 25%, you'll see: 855 = 1.00, 865 = 1.25, 875 = 1.50.
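The compounding above can be sketched numerically. A small toy model; every percentage in it is an assumption taken from the comment (plus an assumed A13 baseline of ~1.25 on the same scale), not a measured figure:

```python
# Toy model of compounded generational performance gains.
# Every percentage below is an assumption from the discussion, not a measurement.

def compound(base: float, gains: list[float]) -> float:
    """Apply successive fractional gains to a base performance index."""
    perf = base
    for g in gains:
        perf *= 1.0 + g
    return perf

# Snapdragon line, normalized to SD855 = 1.00, assuming ~25% then ~20% jumps.
sd855 = 1.00
sd865 = compound(sd855, [0.25])          # 1.25
sd875 = compound(sd855, [0.25, 0.20])    # 1.50

# Apple line, assuming the A13 sits at ~1.25 on the same scale (my assumption),
# with +15% for the A14 and +10% for the A15, per the comment.
a13 = 1.25
a14 = compound(a13, [0.15])
a15 = compound(a14, [0.10])

print(f"SD875: {sd875:.2f}, A14: {a14:.2f}, A15: {a15:.2f}")
```

Under those assumed gains the 875 only pulls level with where the A14 would already be, which is the point being argued either way in this thread.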
The 865 competes with the A13, not with the future A14. Apple sets the cadence in the SoC space and has done so since breaking rank with sheer performance and the transition to a 64-bit architecture.
This is just misrepresentative. For the past two generations ARM's architecture has been closing the gap to Apple: by around 30% in IPC with the A76, and by around 15% with the A77 (the A77 had a 27% IPC gain vs. the A13's 12%). The gap has been getting smaller, and hopefully that will continue. The fact remains that it's closing, at least for the performance cores.
Also, your comparisons are way off. The SD855 was comparable to the A12, just as the SD865 is to the A13, and so on and so forth. That's with the process node and the actual release date of the Cortex core in mind.
I know you want to think that, but the 855 right now barely competes with the A13. The gap may be getting smaller, but when Apple has a multi-year lead in performance, it will take multiple years of iteration to catch up, and that assumes Apple isn't growing either.
From the article, the 865 wasn't very competitive with the older A12:

"On the integer side, the A77 still trails Apple's Monsoon cores in the A11, but the new Arm design now has been able to trounce it in the FP suite. We're still a bit far away from the microarchitectures catching up to Apple's latest designs, but if Arm keeps up this 25-30% yearly improvement rate, we should be getting there in a few more iterations."

"The QRD865 really didn't manage to differentiate itself from the rest of the Android pack even though it was supposed to be roughly 20-25% ahead in theory. I'm not sure what the limitation here is, but the 5-10% increases are well below what we had hoped for. For now, it seems like the performance gap to Apple's chips remains significant."

"There's one apparent issue here when looking at the chart rankings; although there's an improvement in the peak performance, the end result is that the QRD865 still isn't able to reach the sustained performance of Apple's latest A13 phones."

"Looking at the estimated power draw of the phone, it indeed does look like Qualcomm has been able to sustain the same power levels as the S855, but the improvements in performance and efficiency here aren't enough to catch up to either the A12 or A13, with Apple being both ahead in terms of performance, power and efficiency."
But for their tablets, Apple proceeds to beef up the GPU to over 2x the iPhone's, in this case by having 75% more GPU cores, much better thermal capacity, and higher clockspeed: https://www.anandtech.com/show/13661/the-2018-appl...
In PC space, it is beaten by the GTX 1060 but beats the Ryzen 7, and it soundly beats the iPhone by 80% when not CPU bound.
So the inference is that, even if the 865 approaches the performance of an iPhone, it won't approach the performance of an iPad.
One thing to remember when you compare GPUs: benchmarks usually run half-precision operations on mobile versus single precision on desktop (FP16 vs. FP32). Also, on iOS they run Metal, while on Windows it's DX12, Vulkan, or OpenGL. Not the same thing.
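The FP16/FP32 difference is easy to demonstrate; a small NumPy sketch, purely illustrative and not tied to any specific benchmark mentioned here:

```python
import numpy as np

# FP16 has a 10-bit mantissa vs. FP32's 23 bits, and a far smaller range
# (max ~65504 vs. ~3.4e38), so the "same" workload is much cheaper in FP16.
x32 = np.float32(1.0) + np.float32(1e-4)   # increment survives in FP32
x16 = np.float16(1.0) + np.float16(1e-4)   # rounds away: FP16 spacing at 1.0 is ~1e-3

print(x32 > 1.0)             # True
print(x16 > 1.0)             # False: the increment is lost in half precision
print(np.float16(70000.0))   # inf: overflows FP16's range
```

Which is why a mobile GPU score earned on FP16 workloads isn't directly comparable to a desktop score earned on FP32 ones.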
That's a stupid exaggeration. If it were dead, how come Huawei just released the Mediapad M6 this summer with the Kirin 980, and is releasing an even more premium tablet right now? How come Samsung makes a new Tab S every year, along with iterations of cheaper models every now and then? Same with Xiaomi, Lenovo and others. The low-end market is sprawling with new tablets from all sorts of brands. Even LG released a tablet this year. The tablet market may not be huge, and is certainly lackluster in a lot of ways on Android (mainly due to Google neglecting it since the Nexus 9 was a flop; you don't follow up the Nexus 7 v2, the best tablet ever made, with that nonsense), but it very much exists and is desired by a lot of customers. The same reality holds for Apple, which waited 4-5 years before it released a new iPad Mini iteration.
As an avid tablet user myself, even with a lot left to be desired on Android, I still very much like many of their offerings from the past 2 years. I'm currently using both an iPad Mini 5 and a Mediapad M6 8.4". Seeing as I, like most other people, use tablets for media consumption (YouTube, streaming movies/shows, reading books, Reddit, browsing, Spotify), the available supported apps are equally good on both platforms. If I were to dive deeper into dedicated applications, sure, the iPad has much better support, but I really never do, as I use a tablet for the specific uses it was made for. I can see this being a complaint if you're an iPad Pro user and use it for professional work, but I don't really see that being a desire even for those using the regular iPad or iPad Mini. Maybe if you're a gamer, but that's really it.
I use the Huawei more than the iPad Mini due to how much more intuitive Android is. At the end of the day, they both run oversized variants of their smartphone OS, and Android is simply more intuitive in a lot of respects, with iOS still feeling like working with one hand tied behind my back. Where Apple is fantastic, though, is in its hardware implementation, like its excellent screen calibration and touch latency, or having a 3.5mm jack (unlike the M6 or newer Tab S, sadly) with a really good DAC that properly drives my HD650. The Mediapad wins on a more effective, easier-to-use OS and its 16:10 aspect ratio. Android also makes it easier to do things like torrenting (which is great for downloading/streaming movies, football matches, etc.), local file management, sharing, and more. Ironically, Apple has made up for the Mediapad's lack of a jack with its fantastic $7 Type-C-to-3.5mm adapter, which beats DACs upwards of 10x its price (no joke, Apple really knocked it out of the park).
Not to mention the thousands of Samsung tablets with Android that sell like hot cakes. Given the holiday sales on them, they are even more attractive.
Sure, Samsung is terrible at updating software, but that's not a deal breaker by any means. I tried my parents' iPad and it just felt wrong using it; I can't quite put my finger on the reason, it just seems so limited with apps and not very responsive.
One look at the Android app stores tells you just how successful Android tablets are. That's something everyone can check for themselves, instead of just taking some random fanboy's drivel on a forum.
This is just your feeling and not a fact. Some people feel more at home on iPadOS and others on Android. Now, the fact is that the iPad is more powerful than Android tablets.
Performance looks good, but I'm really wary of the external 4G/5G chip and its additional antennas and how they'll affect battery life. I'm not going to buy any new SD865 phone until reviews pop up.
That's the elephant in the room! I haven't seen any good real-world data on just how much power 5G use will actually add. Faster data is nice (in principle), but if 5G cuts battery life by 30-40% vs. current 4G LTE, it's pretty much useless. If anyone here has links to such tests, please post them. Thanks!
Well, I think generally they're going to stick the 5G SoC next to the QSD 865 SoC. That means heat from one will affect the other, and so OEMs will potentially require a larger phone (and larger screen = more drain), but with a smaller area for the battery.
Not to mention, there is going to be a significant battery life hit from using an external radio chipset instead of an integrated one. Remember the huge power savings we saw going from the QSD 600 to the QSD 800 back in 2013; this will be that, kind of reversed.
I think overall, what will happen is that all the improvements in battery technology and the Cortex A77 are going to be nullified. So 2019 phones which had the best battery life and performance are going to look like a "side-grade" compared to 2020 flagships. In essence, I believe the QSD 865 will be much less competitive against the Apple A13 and Exynos 990 than the QSD 855 was against the Apple A12 and Kirin 980.
...I do believe more in-depth benchmarks and reviews will come within 3 months (and validate my hypothesis).
Based on the initial performance estimates from the Cortex A77 announcement article Andrei published back in May, I was actually really excited to see the next generation Snapdragon 865. The lack of performance uplift in the real-world web metrics with the QRD865 is a bit underwhelming, but my main concern is the requirement of an external modem.
I don't have any experience with the current crop of 5G external-modem devices, but back in the day I used several Qualcomm APQ external-modem devices. Back then battery life being terrible was basically a given since batteries were small and Android was pretty bad at power management, but in hindsight I'm sure at least some of it was due to the external modems...
"The web-based benchmarks such as Speedometer 2.0 and WebXPRT 3 showcase similar relatively muted results. Here I had expected Qualcomm to perform really well given the scheduler performance showcase of the Snapdragon 845. The results of the Snapdragon 855 are quite meagre, especially in a steady-state throughput workload such as Speedometer 2.0. Here the Snapdragon 855 only manages to showcase a ~17% improvement over the last generation, and also lags behind the Kirin 980 by some notable amount."
However, compare those QRD855 numbers to the numbers seen in this article with actual shipping devices. Pretty big difference.
In addition to spending $$$ on R&D, Apple can optimize (tailor, really) its SoCs 100% to its OS and vice versa. Also, I'm not sure if anybody has figures on what Apple's SoCs cost internally compared to what Samsung, Xiaomi etc. pay QC for their flagship SoCs. It would be interesting to know how much of this boils down to cost.
Can we stop with these excuses? What cost reasons? Who's stopping them from making two architectures, then, and letting OEMs decide which to use? If Apple does it, why not them? Samsung aiming at large cores with their failed M4 clearly points towards a desire/intention to have larger, more performant cores. Let's not assume there's no need here; there clearly is.
Furthermore, where is the excuse in ARM still being on the A55 for the third straight year? Or Qualcomm being on the same GPU architecture for 3 straight years, with such incremental GPU improvements over the past two years that they have not only let Apple match and then vastly surpass them, but are even getting matched by Mali?
There's simply no excuse for the laziness going on. ARM's performance-core architecture is actually impressive, with still-big year-on-year IPC gains (whereas Apple has actually stagnated here the past two years). But abandoning any work on efficiency cores is inexcusable, as is the fact that none of the OEMs has done anything to deal with this problem.
Probably because ARM designs for general use - mobiles, tablets, TVs, cars etc, whereas Apple designs specifically for their devices. So naturally Apple is able to devote more resources and time to optimize for their platform, and also design cores/chips specific to their use (phone or tablet).
But then again I'm an outsider, so the reality could be entirely different
TIL using the same A55 architecture is "for general use" /s
If ARM had actually done their job and released efficiency cores more often, like Apple does every year, we'd have far more performant and efficient smartphones today across the spectrum. Flagship phones would benefit in idle use (including standby), and also from assigning far more of the resource-mild workloads to these cores than they do today.
But mid-range and low-end phones would benefit a huge amount here, with efficiency cores performing close to performance cores (often one or two generations older and clocked substantially lower). That would also be cheaper, as it would make a cluster of two performance cores less necessary, fitting right in with your logic of making cheap designs for general use.
Apple seems to have started before ARM did. They launched their design just 2 years or so after the announcement of A64, while ARM needed the usual 4-5 years for a new design. I don't believe Apple's designers are that much better than normal (I think ARM handed them the ISA early, and Apple threatened to buy out MIPS if they didn't). ARM has never recovered that lead time.
That said, PA Semi had a bunch of great designers who had already done a lot of work with low-power designs (mostly POWER designs, if I recall correctly).
Another factor is A32 support. It's a much more complex design and doesn't do performance, power consumption, or die area any favors. Apple has ecosystem control, so they dropped the complex parts and did A64 only. This also drastically reduces the time to design any particular part of the core, and less time verifying everything means more time optimizing, versus teams trying to do both at once.
Finally, Apple has a vested interest in getting faster as fast as possible. Arm and the mobile market want gradual performance updates to encourage upgrades. Even if they could design an iPhone killer today, I don't think they would. There's already enough trouble with people believing their phones are fast enough as is.
Apple isn't designing these chips just for phones, though. They make them for their pro tablets, and the performance push is even more important for laptops. The current chip is close to x86 in mobile performance. Their upcoming 5nm designs should be right at x86 performance for consumer applications while using a fraction of the power. They already include a harvested mobile chip in every laptop as the T2. Getting rid of Intel in the MacBook Air would do two things: it would improve profit per unit by a hundred dollars or so (that's almost 10% on low-end models), and it pressures Intel into better deals on higher-end models.
We may see arm move in a similar direction, but they can't get away with mandating their users and developers change architectures. Their early attempts with things like the surface or server chips (a57 was mostly for servers with a72 being the more mobile-focused design) fell flat. As a result, they seem to be taking a conservative approach that eliminates risk to their core market.
The success or failure of the 8cx will probably be extremely impactful on future arm designs. If it achieves success, then focusing on shipping powerful, 64-bit only chip designs seems much more likely. I like my Pixelbook, but I'd be willing to try an 8cx if the price and features were right (that includes support for Linux containers).
Nice post! You're right, it really does seem like Apple's own implementations defined the ARM v8.x spec given how soon after ARM's release their chips dropped. ARM is also crimped by the need to address server markets so their chips have a more complex cache and uncore hierarchies than Apple's and generally smaller caches with lower single threaded performance. Their customers' area budgets are also more limited compared to Apple who doesn't generally integrate a modem into their SoC designs.
I would also add that Qualcomm only makes a dozen or so dollars per chip, whereas Apple makes hundreds of dollars per newest generation iPhone and iPad Pro. Qualcomm's business model just puts them at a disadvantage in this case - they have to make a chip that's not only competitive in performance, but at a low enough cost that a) they can make money selling it, and b) handset vendors can make money using it. Apple doesn't really have to worry about that because for all intents and purposes, their chip division is a part of their mobile division.
I wonder if it's in the cards for Apple to ever include both an Intel processor as well as a full fledged mobile chip in the future, working in the same way as integrated/discrete graphics - the system would primarily run on the A13x, with the Intel chip firing up for Intel-binary apps as needed.
I think there could be some possibility of AMD striking that deal, with some stipulations. They have the semi-custom experience to make it happen and they don't have much to lose in mobile. AMD has already included a small ARM chip on their processors, and Apple already uses AMD GPUs. A multi-chip package would be great here.
I've given some thought to the idea of 8 Zen cores, an 8-core ARM complex, 24CU Navi, 32GB HBM2, and a semi-custom IO die to tie it all together. You could bin all of these out for lower-spec'd devices. The size of this complex would be much smaller than a normal dedicated GPU, CPU, and RAM while using a bit less power. Most lower-end devices would probably only need 2 x86 cores and 8-11 CUs with 8GB of RAM.
>"I wonder if it's in the cards for Apple to ever include both an Intel processor as well as a full fledged mobile chip in the future, working in the same way as integrated/discrete graphics - the system would primarily run on the A13x, with the Intel chip firing up for Intel-binary apps as needed."
Doubt it, if only because x64 is already coming out of patent protection, and with each passing year newer feature revisions will have the same thing happen. By 2025 or 2026 or so, Apple (or anyone else) will just flat out be able to implement x86-64 up through at least Core 2 however they like (be it hardware, software, or some combo with code morphing or the like). That would probably be enough to cover most backwards compatibility; sure, stuff wouldn't run as fast, but it would run. And there'd be a lot of power efficiency to be gained as well.
OS X on ARM seems a given soon. That would allow them to really blur the line between the iPad Pro and the lower-end laptops. Even if they are still technically different OSes, it would make getting real pro apps onto the iPad Pro a ton easier. MS tried this of course, but didn't have the clout or tablet market share to really make it happen. Apple is in a position to force the issue and has switched architectures in the past.
Nope, Apple still supports AArch32, and Apple's 64-bit lead over ARM was 1 year max; actual S810 silicon from Qualcomm was only 15 months later than the A7. You can't possibly say Apple started earlier AND took 2-3 years LESS than ARM's partners to design silicon; that would mean Apple had to beat the A57 by at least 3 years. Reality says otherwise.
ARM announced their 64-bit ISA on 27 October 2011. The A7 launched 19 September 2013, less than two years later. AnandTech's first review of a finished A53/A57 product was 10 February 2015, almost 3.5 years later, and that product was obviously rushed, with a new revision coming out afterwards and the A57 being entirely replaced and forgotten.
Qualcomm and others were shocked because they only had 2 years to do their designs, and they weren't anywhere near complete. A ground-up new design on a brand new ISA in 23 months isn't possible under any circumstances.
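The lead-time gap is easy to check from the dates in the comment above (month arithmetic here is whole-month and approximate):

```python
from datetime import date

# Dates taken from the comment above.
armv8_announced = date(2011, 10, 27)   # ARM announces the 64-bit ISA
apple_a7_launch = date(2013, 9, 19)    # A7 (iPhone 5s) launches
a57_review = date(2015, 2, 10)         # first review of a finished A53/A57 product

def months_between(a: date, b: date) -> int:
    """Whole months between two dates (approximate, ignores day-of-month)."""
    return (b.year - a.year) * 12 + (b.month - a.month)

print(months_between(armv8_announced, apple_a7_launch))  # ~23 months for Apple
print(months_between(armv8_announced, a57_review))       # ~40 months (~3.3 years) for ARM partners
```

So Apple shipped silicon roughly 17 months before the first finished A57 products were even reviewable, which is the lead time being argued about here.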
Apple's SoCs use more die space for the CPU cores; it's as simple as that, so they are not a fair comparison. For roughly the same die size, Qualcomm has to fit in the modem, while Apple has the modem external.
I'm not sure I understand the "fair" bit? The other chip makers are free to design a larger-core variant if they so choose. And, the 865 has the modem external, just like the Apple chips. Also, generally speaking, the SoC + external modem approach should require more power, yet Apple seems to do very well on those benchmarks.
Maybe it's more as per another reply, i.e. Apple just optimises everything, one example being throwing out a32.
That's not an argument; the modem costs money for both parties either way at the end of the day. Also, the Cortex cores are pretty great, with still bigger year-on-year improvements than Apple's (which seem to have stagnated), so the gap is closing, albeit slowly. The big complaint, however, is with things like Qualcomm's complacency on the GPU side, or ARM doing shit-all to give us a new efficiency-core architecture after 3 years.
Apple has surpassed them hugely here, to the point that their efficiency cores deliver more than 2x the performance at half the power. Now, if you want to bring price into this, think about how much that costs OEMs. It costs them by forcing them to use mid-range SoCs with expensive performance cores, when they could make do with only efficiency cores that performed better. It costs them, as well as flagship phones, a lot of power efficiency, forcing hardware compromises or spending on larger batteries to compete.
ARM has been catching up, though. The IPC increases since the A11 have been pretty meagre, whereas the A76 was a pretty sizeable jump (cutting a lot of the gap), and the A77 is doing a 25% IPC jump, whereas the A13 did what, half that? Of course Apple still has a huge lead, but the gap has been getting smaller...
ARM's issue right now, though, is efficiency cores. The fact that their Cambridge team hasn't developed anything for 3 straight years (going on 4), whereas Apple's yearly architecture improvements have given them efficiency cores that are monumentally better in both performance and efficiency, is getting embarrassing. It's hurting Android phones a lot and getting kind of ridiculous at this point. No less frustrating is that none of the SoC players is bothering to make dedicated architectures themselves to make up for it. Qualcomm is complacent even with their GPUs, which have been on the same architecture for 3 straight years and have in that time completely lost the crown to Apple; even ARM's Mali has caught up!
"How is Apple so far ahead in some/many respects, given that Arm is dedicated to designing these microarchitectures?"
Based on what I've read in public reporting, Apple appears to have mostly thrown hardware at the ISA. Apple has the full architectural license, so they can take the abstract spec and lay it out on silicon any way they want. But it appears that all that godzilla transistor budget has gone to caches and such, rather than a smarter ALU, for instance. Perhaps AT has done an analysis of exactly what Apple did with the spec or reference design to make their versions? Did they actually 'innovate' the (micro-)architecture, or did they, in fact, just bulk up the various parts with more transistors?
Thanks Andrei! Any chance you could post the S855 QRD's figures also? These QRDs are "for example" demo units, and the final commercial handsets are often different (faster). Also, any word from QC on how much AI processing power will be needed to run 5G functionality? Huawei's Kirin 990 5G has twice the AI TOPS of their LTE version, and that seems to be because their (integrated) 5G modem uses about half the AI TOPS when actually working in 5G mode.
I don't see the point in showing the QRD855 results, there's a large spectrum of S855 device results out there and likely we'll see the same with the S865. The QRD855 and QRD865 aren't exactly apples-to-apples configuration comparisons either so that comparison doesn't add any value.
None of the tests were made under thermal stress scenarios, and cooling isn't a limitation on the QRDs; the performance showcased is the best the chip can achieve.
Just shows how Samsung does the best implementations of Qualcomm SoCs; even last year's Samsung 855 devices are able to outperform the Snapdragon 865 in many benchmarks.
Did anyone even expect Qualcomm to beat Apple in performance? You were dreaming, then. I don't know whom to blame, Arm or Qualcomm, but the Android world is constantly receiving inferior chips.
IMHO all these SoCs are at a level where the average Joe can make do with any of them and the device will feel snappy and good. Now it comes down to the OS delivering the performance and features that users crave.
The only benchmarks favoring the A13 are the web tests, 3DMark, and Geekbench; the rest are in favor of the Snapdragon. It should perhaps be remembered that this is an SoC, so CPU + ISP + GPU + ..., and when you add it all up, Snapdragon >>>> A13. Just look at the AI benchmarks, which take the entire SoC into account. For GFXBench, someone would need to explain why there's such a big difference when the other GPU benchmarks don't show it; GFXBench hasn't been updated in over a year, so for me it's no longer a reference. For web performance, just watch the speed tests on YouTube to see that this score is not justified.
The best Snapdragon can barely keep up with the A11, as Andrei points out in his analysis. YouTube speed tests are by far the most useless and pointless benchmarks ever devised, which is why not a single reputable source (like AnandTech) ever uses them...
Sorry, but the only question here is how much faster the a14 will be. 40%, 50% or even more...
Why doesn't Qualcomm simply increase their die size and use a larger die properly to at least come closer to Apple? Or maybe it needs more than a larger die; it needs a better architecture. Arm or Qualcomm, whom to blame?
A key problem for smartphones is the power budget. These SoCs already push 5 W and up when running at full tilt, so even a nicely sized battery (5000 mAh) can be drained in 3-4 hours tops if someone runs them accordingly. Apple has managed to accommodate high peak/burst performance while still getting good overall power usage, and I still find their battery life wanting.
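The back-of-the-envelope math above checks out, assuming a typical ~3.85 V nominal Li-ion cell voltage (my assumption; the comment doesn't state it):

```python
# Rough battery-life estimate for a sustained full-tilt SoC load.
# 3.85 V nominal cell voltage is a typical Li-ion figure (assumption).
capacity_mah = 5000
nominal_v = 3.85
soc_draw_w = 5.0

energy_wh = capacity_mah / 1000 * nominal_v   # ~19.25 Wh of stored energy
hours = energy_wh / soc_draw_w                # ~3.85 hours at a constant 5 W

print(f"{energy_wh:.2f} Wh -> {hours:.1f} hours at {soc_draw_w} W")
```

And that ignores the display, radios, and conversion losses, so the real worst case is even shorter.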
Why do they need to? Apple is only Apple, and it only works for them.
If you watch real-world speed tests on YouTube, you'll see how the OP7 Pro flies through the tasks, giving the user a faster and smoother experience.
And go to the ScyllaDB website and see how AWS Graviton 2 stacks up against Intel in benchmarks, and how they mention that benchmarks alone should not be taken as a measure.
Apple's OS lacks a filesystem. It can never be a computer. iOS is a kid-friendly OS. You can't even fucking change the launcher/icons, forget other system-level changes.
Qualcomm needs competition from MediaTek, Exynos, and Huawei HiSilicon, but except for Exynos they're all garbage because they don't let us unlock bootloaders. And Android phones get community-driven ROMs; there is so much choice. Even DAPs from 200 to 3000 USD run Qualcomm technology.
Repairing is also easier thanks to the hardware boxes which can bring a Qualcomm 9008-mode brick back to life, whereas with Apple it's a ball-and-chain ecosystem.
I see my SD835 run like butter through everything I throw at it, and it has an SD slot too.
This stupid white-knighting of Apple processors beating x86, and of their use case versus Android phones, is a big sham. People need to realize benchmarks are not the only consideration when you compare processors across OSes.
A 1995 computer running MS-DOS 6.0 is also butter smooth. I hope you don't think that means an Intel 486 DX4 is faster than an Apple chip.
Please stop with your nonsense about "real world tests". In the real world your 835 has a slower CPU, GPU, and storage. That doesn't mean it is garbage; it is fine that you are happy with it, but it is not your duty to defend the honor of Oppo against facts. I don't want an iPhone either, due to their walled garden, but that doesn't mean I live under the delusion that my brand new Galaxy S10e is anything other than at least 40% slower and twice as inefficient as an iPhone 11...
Coming from an Exynos 9810 Note 9 to an iPhone 11 Pro Max... the SoC in the iPhone is literally times faster and more efficient than the Exynos. The difference is absurdly big, and people still call Apple slower because of design choices (like the slow animations, etc.). It's super smooth in all conditions and at all times, plus it's rofl-fast in any app/game (not to mention apps have functions not available on Android). Good luck running full PC Civilization 6 on Android with decent performance late in the game on a bigger map, and with decent battery life. There is a reason why the game was not ported to Android (and not only piracy): it would run poorly even on most high-end current-gen Android phones.
They could, but are you going to pay for it? Let's say Qualcomm has to bump the price up $50 (inclusive of their profits) to reach the same level of performance; as a consumer you will have to pay roughly $100 more.
In a cut throat Android market, who is going to risk putting up their Smartphone price by $100?
There is a reason why Samsung and Huawei are trying to make SoCs themselves: instead of putting those profits into Qualcomm's hands, they want that money to go towards more die space to better differentiate their products and compete with Apple.
Now here is another question: how many consumers will notice the difference in CPU speed? And how many will notice the difference in modem quality?
It's all a set of trade-offs, not only in engineering, but also in cost, markets, risk... etc.
It is a matter of cost. ARM could design a CPU core that is 4 times the size of the A76 and 50% faster, catching up to Apple. But that would cost a lot of die area and thus money; for high-margin, high-cost devices it is OK, but not for cheap ones. Apple can afford this...
It's not as simple as throwing a lot of transistors at it. You can somewhat tackle the problem that way, but by itself it will not lead to the desired end result. I can elaborate, but it would be a lengthy and highly technical post.
The SPEC2006 tables show that the A13 has performance similar to x86 desktop chips, which may be considered a revolution. Can you please add the frequencies of the chips (both x86 and Apple) too, at least some estimates? Also, what are the memory configs (freq/CAS/...)? It would also be interesting to see the x86 chips in the individual SPEC benchmarks so we can analyze the weak and strong points of Apple's architecture.
The Apple chips are running near their peak frequencies, with some subtests being slightly throttled due to power. The 9900K was at 5GHz, the 3950X at 4.6-4.65GHz, 3200CL16 on the desktop parts.
It would be nice if some contemporary x86 laptop chips could be added to that list (Ryzen/Ice Lake/Coffee Lake...) just for ease of comparison between ARM and x86 mobile chips.
Any strong reason for these tests being compiled with -mcpu=cortex-a53 on Android/Linux?
One might expect that for SoCs with ARMv8.2 on all cores there may be some uplift from at least targeting cortex-a55, if not cortex-a75?
When you're expecting to run on a big core, forcing the compiler to target an in-order core which can only execute one ASIMD instruction per cycle seems likely to restrict performance (insufficient unrolling, etc.). It certainly seems a bit unfair for an AArch64 vs x64 comparison, and probably makes the Apple SoCs look better too (assuming Xcode isn't targeting a LITTLE core by default). It also likely makes newer, bigger cores look worse than they should versus older cores with smaller OoO windows.
I get not wanting to target compilation to every CPU individually, but would be interesting to know how much of an effect this has; perhaps this could contribute to the expected IPC gains for FP not being achieved?
The tuning models have only a very minor impact on the performance results. Whilst using the respective model for each µarch can give another 1-1.5% boost in some tests, as an overall average across all microarchitectures I found that using the A53 model results in the highest performance. This is compared to not supplying any model at all, or to using the common A57 model.
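The "best overall average" selection described here can be sketched as follows. The model names are real `-mcpu` values, but every score below is a made-up illustrative number, not a measured result:

```python
# Toy sketch: pick the -mcpu tuning model that maximizes the geometric-mean
# score across several microarchitectures. All scores are made-up numbers.
from math import prod

# scores[tuning_model][target_uarch] -> hypothetical relative SPEC score
scores = {
    "cortex-a53": {"a76": 1.00, "m4": 1.01, "a77": 1.00},
    "cortex-a57": {"a76": 0.99, "m4": 1.00, "a77": 0.98},
    "no-model":   {"a76": 0.98, "m4": 0.99, "a77": 0.99},
}

def geomean(xs):
    """Geometric mean, the usual way SPEC-style rates are aggregated."""
    xs = list(xs)
    return prod(xs) ** (1 / len(xs))

# The model that is best on average need not be best on any single core.
best = max(scores, key=lambda m: geomean(scores[m].values()))
assert best == "cortex-a53"
```

Under this kind of aggregation it is entirely plausible for the generic A53 model to win on average even when per-µarch models win individual tests by 1-1.5%.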
The A55 model just points to the A53 scheduling model, so they're the same.
Hmm, I took a look at LLVM and the scheduling model is indeed the same for A53 and A55, but A55 should enable instruction generation for the various extensions introduced since v8.0. I can believe that for spec 2006 8.1 atomics/SQRDMLAH/fp16/dot product/etc. instructions don't get generated.
It looks like not much attention has been paid to tweaking the LLVM backend for more recent big cores than A57, beyond getting the features right for instruction generation, so I can believe cortex-a53 still ends up within a couple of percent of more specific tuning. Probably means there's more work to be done on LLVM.
If it is easy to test I think it would be interesting to try cortex-a57, or maybe exynos-m4 tuning on a77 because these targets do seem to unroll more aggressively than other cortex-X targets with the current LLVM backend. I made a toy example on godbolt: https://godbolt.org/z/8i9U5- , though for this particular loop I think a77 would have the vector integer MLA unit saturated with unroll by 2 (and is probably memory bound!), still the other targets would seem more predisposed to exposing instruction level parallelism.
I pointed out to Arm that there isn't much optimisation going on in terms of the models, but they said they're not putting a lot of effort into that, instead trying to optimise the generic Arm64 target.
I tested the A57 targets in the past; I'll have a look at things like the M4 tuning over the coming months as I finally get to port SPEC2017.
Sigh, another comment on x86 vs the A-series. Why don't people understand that running x86 code on ARM will have a massive impact on performance? How do people think a fanless BGA processor with a sub-10W design beats an x86 chip in the real world just because of benchmark-warrior scores? There are so many possible workloads: SIMD, HT/SMT, ALU.
Having scalability is also key. Look at how x86 AMD and Intel do it, making large wafers and offering multiple SKUs with LGA/PGA (AM4) sockets, allowing for maximum robustness.
ARM is all about efficiency and economical bandwidth, and it won't scale like x86 for all workloads. If you add AVX it's dead; same for frequency scaling with HT/SMT. Add TSMC N7, which is only fit for mobile SoCs; Ryzen doesn't scale much in clocks because of this limitation.
ARM is always custom, vendor by vendor. That's bad. Look at MediaTek's trashy no-GPL policy, Huawei as well; the exceptions are Qualcomm's CAF and Exynos. It's a shame that TI's OMAP left.
> Why dont people understand running an x86 code on ARM will have a massive impact in performance ?
Nobody even mentioned anything regarding this, you're going off on a nonsensical rant yet again. For once, please keep the comments section level-headed.
What? It's a genuine point. ARM-based 8-core Windows machines like the Surface Pro X can only emulate 32-bit x86 code; 64-bit emulation isn't here, and running under emulation has an impact (it's slow). That's what I mean: they need native code to run and rival x86.
Rant? Benches = real world, right? Then how come a user can see an OP7 Pro breeze through without lag or shitty performance vs an iPhone? I saw it with my own OP3, downclocked on the Sultan ROM due to the high-clockspeed bug on the 82x platform, and not just me, so many other users. GB scores and benches don't equal performance, especially in the ARM arena.
Except for bragging rights, this is pure white-knighting.
x86 emulation on Arm has absolutely nothing to do with any topic discussed here or QC vs Apple performance. I'm sick and tired of your tirades here as nothing you say remains technical or on point to the matter.
The experience I have, when setting aside other aspects such as iOS's super slow animations, is that the iPhones are far ahead of any Android device out there in performance, which is very much what the benchmarks depict.
Did I mention anything from your article on QC vs x86? I was replying to a comment on the "revolutionary" performance of the A-series vs x86. And then you dismissed the point about x86 on ARM as nonsensical.
So, "super slow animations" and "far ahead". What do you mean by that? An iPhone X vs an 11 Pro will exhibit launch-speed and loading-speed differences, same as an 835 vs an 855, which can be observed. EverythingApplePro did a massive video of iPhones across multiple A-series iterations, which is the ONLY way a user can see the performance improvement.
But for Android vs iOS, you are saying iPhone animation speeds are super slow, yet the benches show a big lead. So how is the user seeing the "far ahead" performance when it's an OP7 Pro vs an iPhone 11 Pro Max? You claim the iPhone is still faster, but in reality the user sees the same thing?
Apparently I'm able to say that because I'm able to differentiate between CPU performance, raw performance, and "platform performance".
CPU performance is clear cut on where we're at and if you're still arguing this then I have no interest in discussing this.
Raw performance is what I would call things that are not actually affected by the OS; web content *is* far faster on the latest iPhone than on Androids, that's a fact. The same goes for actual real applications: when Civilization came to iOS, the developers notably commented on the performance being essentially almost as good as desktop devices; the performance is equal to x86 laptops or better: https://www.anandtech.com/show/13661/the-2018-appl...
And finally, the platform experience includes stuff like the very slow animations. I expect this is a big part as to what you regard as being part of your "experience" and "reality". I even complained about this in the iPhone 11 review as I stated that I feel the hardware is being held back by the software here.
Now here's what might blow your mind: I can both state that Apple's CPUs are far superior at the same time as stating that the Android experience might be faster, because both statements are very much correct.
Okay, thanks for the clarity on raw performance and the other breakdowns like CPU and platform. Yes, I can also see that web performance on the A-series has always been faster vs Android.
I forgot about that article. Good read, though the Civ 6 port lacks the GFX options. I would also mention that TFLOPS can't be compared even within the same company: a Vega 64 is 12 TF vs a 5700 XT at 9 TF, yet the latter completely wrecks the former in the majority of cases except for compute loads utilizing HBM. I know you mentioned FP16 and the other caveats of that figure in the opening; just saying, as many people take the number at face value, especially for the new Xbox Series X and consoles as a whole (they add the CPU into that figure too).
And finally: yes, ARM scales for normal browsing and small tasks vs x86 laptops, which is what the majority of people are doing nowadays (colleagues don't even use PCs), but for higher-performance workloads ARM cannot cut it at all.
Plus, I'd also add that these x86 laptop parts throttle a lot, incl. MacBooks obviously, because they skimp on cooling for thinness, so their consistency isn't there either, just like the A-series.
When I look at the comparisons here, I look only at Android vs Android or Apple vs Apple. Comparing them across different OSes, and with primitive tools at that, is a worthless approach. Firstly, the results need to be normalized: one SoC shows a lead while sucking more power than the other. Secondly, the bloated scores of the Apple SoCs here do not represent real-world results. Most Android phones with the SD855 are as fast as, if not faster than, the iPhone 11.
> Comparing them with different OSes and more so primitive tools is a worthless approach.
SPEC is a native apples-to-apples comparison. The web benchmarks and the 3D benchmarks are apples-to-apples interpreted or abstracted, same-workload comparisons. All the tests here are directly comparable - the tests which aren't and which rely on OS specific APIs, such as PCMark, obviously don't have the Apple data.
> Firstly, the results need to be normalized, one Soc is showing lead while sucking more power than the other.
That's a very stupid rationale. If you were to follow that logic you'd have to normalise little cores up in performance as well because they suck much less power.
> The web benchmarks and the 3D benchmarks are apples-to-apples interpreted or abstracted, same-workload comparisons. All the tests here are directly comparable - the tests which aren't and which rely on OS specific APIs, such as PCMark, obviously don't have the Apple data.
How? Just like Geekbench, different compilers are used. Different distributions of load are made. My Ryzen 2700 can finish five full GB runs in the time an iPhone does one, and yet the single-core score of the iPhone is higher than any Ryzen's. And you are showing the Apple A13 (LOL, the A13 faster than the fastest AMD or Intel chip) using a Jurassic SPEC benchmark?
Talk about dreams vs. reality.
> That's a very stupid rationale. If you were to follow that logic you'd have to normalise little cores up in performance as well because they suck much less power.
We are talking about efficiency here; your beloved Apple chip sucks twice the power of the SD855 or SD865 per workload.
Have you ever loaded a consumer website or run a consumer app on these phones side-by-side? Don't tell me they are not using CPU or memory resources. They are; they are doing most if not all of the workloads on the charts here. While your chart shows Apple having twice the performance of the SD865, the phone doesn't tell lies. A bloated benchmark score does not translate to real-world results.
It is time to stop this worthless propaganda that Android SoCs are inferior to Apple's, along with the laughable IPC-king claim (an iPhone chip faster than desktop processors).
Until an iPhone can play Crysis more smoothly than even low-end laptops, this BS claim that it is the fastest chip should stop.
It really feels like propaganda: every single CPU article gives Apple the limelight because of these benches, on a closed, walled-garden platform from OS to hardware to repair.
The power consumption of A-series processors deteriorating the battery was nicely swept under the rug by Apple's throttling BS. They even added the latest throttle switch for the XS series. But yeah, no one cares. Apple's deep pockets put top lawyers at their disposal to manipulate everything.
As for the consumer-app part: it's the perfect use case, since with the claimed 2-3x dominance of the A-series we should see Android phones lag, yet in real life nothing is observable. And comparing these SoCs to x86 desktop machines with a proper OS and computing use cases - Blender, V-Ray, MATLAB, compilation, MIPS of compression and decompression, decoding/encoding, superior filesystem support, socketed/standardized HW (PCIe, I/O options), virtualization, gaming, DRAM scaling choice (the user can buy whatever memory or hardware they want) - this whole thing screams BS. It would be better if it were highlighted that benches and real work might differ, but that's not the case at all.
The worst is the spineless corporate agenda of allowing the Chinese CPC to harvest every bit from their cloud data center in China, enabling subversion and anti-liberty, a.k.a. anti-American principles.
You forgot I'm member of the Illuminati, half mole-people from my dad's side and half lizard-man from my mother's side. I love my monthly deep state paycheck alongside the Apple subsidies I get for spreading their narrative. Wait till people find out the earth is really flat.
LOL. The lawyer manipulation is about their class actions - the keyboard fiasco, touch disease, Error 53 - not you (just clarifying). And I don't know if you know Louis Rossmann on YT; if not, I suggest watching him to learn how the fleecing is done and how the consumer is always kept in the dark. The revelations of their stranglehold on hardware IC supply to repair services, and their lobbying against repair, are enough to understand and gauge the fundamental pillars of the company and its ethics.
Sorry, I take ethics and choice/liberty into account over utopian performance and an elitist/luxury status-quo stance.
> How? Just like Geekbench, different compilers are used. Different distribution of loads are made.
Please explain to me what the hell "different distributions of loads are made" is meant to mean? You have zero technical rationale behind such statements. All the comparisons here were made with the Clang/LLVM compilers on all platforms - bar the ISA, there is exactly zero difference in the workload logic between the platforms, and Apple's toolchain isn't doing something completely different either that it would suddenly invalidate the comparison.
> You are showing Apple A13 (LOL A13 is faster than the fastest AMD or Intel chip) using Jurassic Spec benchmark?
Yes I am because that is the reality of the matter.
> We are talking about efficiency here, your beloved Apple chip is sucking twice the power than SD855 or SD865 per workload.
And it's finishing the workload about twice as fast, ending up being *almost* as efficient in terms of the energy used by the computation. What matters here is energy efficiency, not power draw, and in this regard Apple's devices are top of the line.
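The power-vs-energy distinction above comes down to simple arithmetic: energy for a fixed workload is average power times runtime. A toy illustration with hypothetical numbers (not measured values for either chip):

```python
# Toy illustration of power vs. energy efficiency: a chip can draw more
# power yet use a similar amount of total energy if it finishes sooner.
def energy_joules(power_watts, runtime_seconds):
    """Energy consumed by a fixed workload = average power x runtime."""
    return power_watts * runtime_seconds

# Hypothetical figures for one workload, not measurements:
sd_power, sd_runtime = 2.0, 100.0       # 2 W sustained for 100 s
apple_power, apple_runtime = 4.0, 52.0  # twice the power, roughly half the time

sd_energy = energy_joules(sd_power, sd_runtime)           # 200 J
apple_energy = energy_joules(apple_power, apple_runtime)  # 208 J

# Despite 2x the power draw, energy per completed workload is nearly equal.
assert abs(apple_energy - sd_energy) / sd_energy < 0.05
```

This is why comparing instantaneous power draw alone, without runtime, says nothing about which chip is more efficient at getting work done.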
> While your chart if showing Apple has twice the performance vs SD865, the phone doesn't tell lies.
What's even your point here? Of course the iPhones are significantly faster in loading webpages?
Return here when you have an actual factual argument to present, because right now you have just been repeating complete nonsense.
> Please explain to me what the hell "different distributions of loads are made" is meant to mean? You have zero technical rationale behind such statements. All the comparisons here were made with the Clang/LLVM compilers on all platforms - bar the ISA, there is exactly zero difference in the workload logic between the platforms, and Apple's toolchain isn't doing something completely different either that it would suddenly invalidate the comparison.
The compiler may be the same, but the task scheduler in Android and Windows is different than in iOS. Many background apps run simultaneously on Android and Windows machines; what about iOS? Frozen apps? LOL
>Yes I am because that is the reality of the matter.
It only matters to you, not to the outside world. If you really think the A9 has better IPC than Ryzen or Skylake, why don't you join Apple's engineers and build the fastest gaming/productivity PC with the Apple A9 chip and sell it like hotcakes? No? Can't be done? Even Apple doesn't claim their SoC is faster than even a low-end desktop today, LOL. They're even milking customers with overpriced Macs with "Intel" inside.
> And it's finishing the workload than twice as fast, ending up being *almost* as efficient in terms of the energy used by the computation. What matters here is the energy efficiency, not the power efficiency, and in this regard Apple's devices are top of the line.
What matters is how fast it can finish the whole task, not each micro-workload nonsense. If I want to zip and upload a file, or encode and upload a video, I only care how fast the whole task finishes. If I want to play games, do I care how fast the damn phone computes vectors, pixel locations, math operations, etc.? I only care how elegant, smooth, and fast the gaming experience is.
The iPhone is not twice as fast at loading any web page, any consumer app, or even at exporting or transcoding videos. Different apps yield different results; you are showing one worthless primitive benchmark where the iPhone is fast, but out there, hundreds or thousands of different apps and websites show the opposite.
Here are one or two for you; one shows twice the performance of the other =D
> the scheduler of tasks in Android and Windows are different than in iOS.
The scheduler isn't any different, because the scheduler doesn't do anything when there's only a single thread on a core to be run. There is literally no scheduling.
> If you really think A9 has better IPC than Ryzen or Skylake
Correction, I don't really just think it, I know it.
> What matters is how fast it can finish the whole task not each micro-workload nonsense.
The whole SPEC suite takes exactly an hour to complete, so quit with the micro nonsense if you have no idea what's even being tested here.
> Here is one or two for you, one is showing twice the performance over the other =D
Both phones don't even use the freaking CPU when transcoding videos - they both offload to their dedicated fixed-function video encoders, much like you can offload encoding on desktop PCs to your GPU's encoders instead of doing it inefficiently on the CPU.
You have absolutely ZERO understanding of what's going on here.
> The scheduler isn't any different, because the scheduler doesn't do anything when there's only a single thread on a core to be run. There is literally no scheduling.
Then the SoC is not maximized but underperforming.
> Correction, I don't really just think it, I know it.
Sure you do. Now where is the fastest processor on the planet? Where is our A9-powered gaming PC, LOL?
> The whole SPEC suite takes exactly an hour to complete, so quit with the micro nonsense if you have no idea what's even being tested here.
Just goes to show how primitive your tool is. 2020 is just around the corner, and here you are still using a 2006 tool. This is like claiming Wolfdale is faster than Ryzen because it can finish SuperPi 1M faster, LOL.
Am I, or are you? Isn't it clear that the SPEC results don't translate to the real world? Where is the double performance shown here? Show us proof that the iPhone has twice the performance; I've posted links with two Android phones decimating the iPhone 11.
Sure, you can claim all day that the iPhone is the fastest phone via SPEC, LOL. I'd rather see it translate into actual performance, not imaginary numbers.
You clearly have no idea what you are talking about. Dunno why Andrei dedicated so much of his time trying to explain to you in primitive language what's going on (so you can understand).
I feel you Andrei. I'm sitting here facepalming at these comments. I think a lot of people truly do not understand what SPEC was designed for or how energy efficiency works.
To the average Joe or Jane, SPEC is a worthless basis of comparison. You can tell the sheep his phone has the fastest SoC on the planet and he will probably believe you.
If you can show an iPhone finishing in half a day a bunch of tasks that an Android phone needs a whole day for, then I will believe that the iPhone has twice the performance of the competition. But if you are just showing a nanosecond difference between two phones against a thousand-point difference in benchmark scores, then keep your palm on your face. =D
I think Andrei has made it clear enough, perhaps not for you, but then Anandtech is not the site for you. Go visit Engadget or something you'll fit right in.
I remember reading an article a couple of years ago where it was mentioned that a couple of key BitBoys staff members had left the company. The writing has been on the wall for years, and recently Adreno architectural development has slowed to a halt.
While Apple's cores are faster, Android flagships will come with shitloads of memory, so when it comes to daily tasks they will still keep pace. The S11+ will supposedly start at 12GB of LPDDR5 RAM vs 4GB of RAM for Apple's flagships.
At this point performance is not the issue for these Android flagships, considering the workloads of a mobile phone. I would prefer them to make it more efficient by working with Google at the OS level. The iPhone's big advantage is how efficient it is relative to the battery size of the phone. Key metrics are web browsing on WiFi and LTE, plus video playback (streaming on Netflix).
As already said - iOS is a lot less RAM-hungry and it's efficient. 4GB is quite enough, plus most Android phones with a lot of memory love to drop apps from it anyway. Not to mention that you will not notice the speed difference until you try to do something demanding, and buying a phone for 1k euros just to browse FB is a bad buying decision anyway (for anyone except those who have money to burn, ofc).
But you will notice the efficiency difference. My iPhone 11 Pro Max lasts twice as long or more than the Exynos Note 9 I've got in light workloads. The same iPhone lasts 3x+ longer in heavy workloads while giving smooth, fast performance and gaming, in contrast to the Note 9.
Qualcomm works for Android, so Apple's competition doesn't cause them much trouble, just embarrassment. It seems the latter does not motivate them much, haha.
This comment might be too late to be seen, but is there any chance we can see the power use of the Zen 2/SKL (ICL?) based devices on the SPEC charts as well? It might be off by a lot, but I'm curious how they compare to the mobile SoCs. If they're too high because they're desktop chips intended for higher TDPs where maximum efficiency isn't needed, maybe it's worth throwing in an Ice Lake-U number as well as a Zen 2 mobile chip when they arrive.
We never measured it accurately, the corresponding platform power for those desktop chips is generally going to be in the 30-40W range or even higher. The laptop platforms are also going to be in the 10-15W range.
Fair enough; thanks for the reply! There is an awful lot more platform-related stuff that isn't optimized on desktop; I kind of derped on that.
Thanks for your work on the article, too. I really enjoy your writeups and sympathize with the author who stays as engaged as you do with *every* commenter.
I'm not an engineer and don't have much knowledge, but one part of this article really concerns me:
"Generally, we’ve noted that there’s a discrepancy factor of roughly 3x. We’ve reached out to Qualcomm and they confirmed in a very quick testing that there’s a discrepancy of >2.5x. Furthermore, the QRD865 phones this year again suffered from excessive idle power figures of >1.3W."
Does this mean that, compared to the SD855, the SD865 consumes more power when idling? Also, was this test conducted with an internet connection or not?
The above quote is only valid for the QRD865; a similar thing happened with the QRD855 test devices. It's not a concern for final commercial devices, so nothing to worry about.
Humiliated by what? Some imaginary, worthless, bloated benchmark scores from a primitive tool that don't translate to the real world? For the last 2 years Apple has been the one catching up in side-by-side comparisons out there.
There are countless shallow and useless arguments to be made from your standpoint. For example, you could argue that turning system animations off "slows down" the "real world experience", because without the animations filling in for the latency, "the average Joe and Jane" perceive "real world" lags/stutters which actually take less time than playing the animation would, i.e. it's faster, not to mention a decreased load on the GPU.
How the hell is the Apple A9 faster than Ryzen or Skylake if the A13 is pathetically slower in this comparison and not even close to the double performance shown in SPEC?
Ahhh yes, poo-poo an industry-standard benchmark like SPEC in an article about SoC benchmarking, then link to a device performance test developed by AndroidAuthority.
@diehardmacfan What exactly is wrong with Speed Test GX 2.0? And it wasn't developed by Android Authority. The SD865 completed a bunch of real-world CPU-related tasks faster than the A13. This makes this "industry standard benchmark like SPEC" quite irrelevant for somebody interested in buying a smartphone, because in actual usage the A13 doesn't present any real performance advantage. Also, in the GPU test the SD865 was only slightly behind even though it pushed more pixels.
If I were only interested in buying a smartphone in order to run SPEC and the GFXBench Aztec Ruins off-screen benchmark all day long, then the iPhone 11 would be my number one pick.
For anything other than that, I don't see any real, tangible performance advantage. This AnandTech performance analysis seems disconnected from the real-world experience of using such high-end devices. Android sites do a better job analyzing the experience and significance of the performance of these mobile SoCs and what it actually means for smartphone users. For example, XDA has a really nice benchmark where they test the overall fluidity of certain smartphones; this tests both OS optimizations and SoC performance.
Excellent point. I am sick and tired of this propaganda to uplift an Apple product just because it shines in one or two primitive, biased benchmarking tools when thousands of other apps say otherwise.
In short: poor validity and poor reliability. There's nothing particularly useful about that test. It generates mixed, or rather obfuscated, scores correlating to an unknown extent with UI design choices, certain drivers, and hardware performance. This is closer to metaphysics, and has no place in science.
Ah, a poor man's attempt to hide the truth. I feel sorry for those buying a phone (or even replacing a desktop) because they see it pass SPEC with flying colors.
You're just a blabbering idiot. You keep pulling things out of your ass; nobody ever said the A9 is faster than Ryzen or Skylake. I dare you to find a quote or data that says that. The A13 was the first to *match* them.
The test you quote isn't single-threaded like the SPEC results, and it's not even a pure CPU test, as it has API components.
There it shows the A13 and even the A9 stomping the latest and greatest Ryzen and Skylake processors.
But then, when you compare the A13 versus Android SoCs in various apps and websites, it is the complete opposite.
I respect you because you have excellent knowledge of what you do, but it all goes down the drain once your critical thinking is subpar and you are so shadowed by your ego that you think yours and only yours is the truth. I would not hesitate to hire you as my design engineer, really, but you have to back your claims with facts. When you state that one is the fastest (especially by a huge margin), it has to be reflected in any test you throw at it.
I would rest my case if you could convince Lisa or Bob that their processors are mediocre compared to Apple's latest SoC, LOL.
That tweet is about IPC of the microarchitectures, not absolute performance.
You literally don't have a single whit of understanding of what's going on here, and you keep making an utter fool of yourself repeating lies. All you see is one bar graph being bigger than another, and suddenly that's your whole basis for the truth of the world.
The actual engineers and architects in the industry know very well where they stand in relation to what Apple is doing; I don't need to convince anybody.
No, you just told the whole world that the fastest chip on the planet is the Apple SoC. A chip with great IPC will give great performance results, right? Your graph is telling us that a 1GHz A12X core is equivalent to a 2GHz Ryzen core, which is utter BS. When AMD or Intel announce that their next processor has a 20% IPC improvement, it shows in any tool, benchmark, or app you throw at it, not the opposite.
Your test methodology and tools are completely flawed and outdated, as they don't translate to real-world results. They are great, though, if you are comparing two similar platforms.
> No, you just told the whole world, that the fastest chip on the planet is the Apple SoC
I did not. High IPC doesn't just mean it's the fastest overall. AMD and Intel still have a slight lead in overall performance.
> A chip with great IPC will give great performance result, right?
As long as the clock-rate also is high enough, yes.
> Your graph is telling us, a 1Ghz A12x core is equivalent to a 2Ghz Ryzen core
That's exactly correct. Apple currently has the highest-IPC microarchitecture in the industry, by a large margin.
> which is utter BS.
The difference between you and me is that I actually have a plethora of data to back this up: actual instruction-counter data from the performance counters, actual tests that show that Apple's µarch is in fact 50% wider than anything else out there.
You are doing absolutely nothing but spewing rubbish comments with zero understanding of the matter. You have absolutely nothing to back up your claims about flawed and outdated methodologies, while I have the actual companies who design these chips agreeing with the data I present.
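For readers following along, the arithmetic behind the IPC-vs-clock argument is straightforward: single-thread throughput is IPC times frequency, so a wide core at a low clock can match a narrower core at a much higher clock. A minimal sketch with made-up numbers (not measured figures for any real chip):

```python
# Toy sketch of the performance = IPC x frequency identity.
def perf(ipc, freq_ghz):
    """Relative single-thread throughput: instructions retired per ns."""
    return ipc * freq_ghz

# Hypothetical cores: a very wide mobile core at a low clock, and a
# narrower desktop core at twice the clock.
wide_core = perf(ipc=4.0, freq_ghz=1.0)
narrow_core = perf(ipc=2.0, freq_ghz=2.0)

# Same throughput despite half the clock - which is exactly what a
# "1GHz core equivalent to a 2GHz core" claim means.
assert wide_core == narrow_core
```

This is also why high IPC alone doesn't make a chip the fastest overall: absolute performance still depends on the clock it actually sustains.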
It is truly disappointing that Android hardware has to run on SoCs with the performance of iPhones 3-4 generations older. I really don't understand, with all the demand there is, why nobody comes up with something at least within range of Apple's SoCs.
You mean 2 generations behind at most, on SPEC. And while interesting technically, it remains debatable how much that actually matters in actual phone use (where having a fast SSD, fast download speeds, and a lot of memory can help more). As does having ~20% better power efficiency, of course.
It would be relatively easy to quadruple the L2 to 1MB, the L3 to 8MB, and the system cache to 16MB and get a ~20% performance gain on SPEC. But the area, and hence the cost of the SoC, would be much larger, which would add to the cost of phones. QC's competitors would be happy to increase their market share with far cheaper SoCs that are equally fast in real-world usage.
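The kind of back-of-the-envelope estimate behind "bigger caches for ~20% on SPEC" can be sketched with a toy average-memory-access-time (AMAT) model. Every number below is an illustrative assumption, not a measurement of any real SoC:

```python
# Toy AMAT (average memory access time) model with illustrative numbers.
# Bigger caches lower the miss rate, which lowers the average access
# latency, which for memory-bound code yields some performance gain.
def amat(hit_cycles, miss_rate, miss_penalty_cycles):
    """Average memory access time in cycles for a single cache level."""
    return hit_cycles + miss_rate * miss_penalty_cycles

# Hypothetical figures: the larger cache cuts the miss rate in half.
small_cache = amat(hit_cycles=4, miss_rate=0.10, miss_penalty_cycles=200)
large_cache = amat(hit_cycles=4, miss_rate=0.05, miss_penalty_cycles=200)

# Speedup on a hypothetical fully memory-bound workload under this model;
# real SPEC workloads are only partly memory-bound, so actual gains from
# cache growth are far smaller than this upper bound.
speedup = small_cache / large_cache
assert speedup > 1.2
```

The model also hints at the cost side of the trade-off: the miss-rate reduction per doubling of cache shrinks, while the die area (and thus SoC cost) keeps growing.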
No, I was looking at the web benchmarks. All of them are miserable compared even to the iPhone X, and web browsing is certainly a key part of the mobile experience.
Yes, and have you actually seen an Android flagship that performs noticeably worse than an iPhone (any iPhone) in web browsing? Because I sure haven't.
What I have seen is Android phones with better connection speeds and better reception in general, especially in crowded places like concerts, stadiums, and subways. In those places the performance of the iPhone X was actually zero in many instances, because there was no signal. If we are talking about the mobile experience, let's not ignore the modem.
"If we are talking about the mobile experience let's not ignore the Modem." And there are connectivity tests for that, although controlling variables in a connectivity test is almost impossible outside of lab conditions. Ultimately these are separate tests and test entirely different aspects. You're free to test both, but if you try to test them simultaneously outputting one single result, you will obtain one worthless data point.
The Android fanatics are out tonight. I only buy Android phones, period, but clearly Apple's CPUs are miles ahead in performance. Anyone use the latest iPad Pro? It's faster than any Windows laptop I've bought or used personally. My boss gave us the latest and greatest dual-core HP ultra-slim laptop, and I immediately replaced it with an 8-core Ryzen computer and said "I don't use laptops for real work". We don't need benchmarks to tell us how fast Apple's CPUs are (though Andrei's benchmarks are perfectly valid); it is immediately apparent when comparing against similarly clocked Intel products. The reasons Ryzen and Intel seem great right now are high clocks and many cores. Run your 9900K in 2-core mode at 2.6GHz and squirm at its low speed. That's what the iPhone uses: a low-clocked dual core. Put 8 together, run them at 4GHz, and you have a monster.
The software running on these platforms is not identical. Some of it depends on CPU extensions that are not equivalent between platforms. A dual-core 2.6 GHz Intel chip will run Win10 slower than an iPad Pro runs iOS. But you could find a Linux distribution and some OSS apps that would run very fast on that 2.6 GHz dual-core Intel.
LOL. You won the most stupid comment here congrats.
"Android fanatics" - so you are an Apple sheep, I guess.
Sandy Bridge OCed to 4GHz+ still keeps up with a 1080Ti without any issues. That shows how Intel milked its monopoly and how Ryzen caught up. And you are crippling an x86 LGA-socket processor to 2.6GHz dual core and comparing it with an iPad Pro? In what use case? What is the ultimate goal here? Let's disable all Lightning and Thunder cores and run 1 Lightning then (you can't do it anyway, since Apple is the overlord here). What the actual fuck, lmao. Also, magically slapping in more A13 cores means x86 Intel and AMD are dead? Haha, you think this is making a sandwich at home? I think you never heard of SPARC or IBM POWER; go and read and get your mind blown on threads, but do not compare that to x86 or Apple A-series alien technology, please. iOS cannot even process a zip file extraction nor a config file for a VPN. That alone burns the whole A-series King to ashes, as it's not used in a real computer at all. A pseudo filesystem and fake file manager app doesn't make it a proper OS. Unfortunately Android is also following the same retarded path, thanks to the Apple disease at Google, embodied by the abomination called the Scoped Storage disaster.
Let me tell you a secret: the laptops you used are all garbage; they are cut-down, bottom-of-the-barrel silicon from failed desktop chips and so on. The age of rPGA Intel is over (the last XM is the 4930MX, a true binned mobile chip like the K series), thanks to the BGA greed of Apple infecting Intel for max profits and people being subverted into using BGA/soldered, throttling, thin-and-light, crappy planned-obsolescence hardware.
Let us run Cinebench on your beloved processor then, or let's run POV-Ray or an H264 transcode. Or how about we play PUBG and stream it at 1080p highest quality?
This is the reason why I see x86 vs ARM talk as irrelevant: AT articles are often quoted to prove the IPC and all the SPEC scores, but they completely ignore scalability, compatibility, legacy code, the HW market, etc., when the compute workloads / OS / software code / HW are entirely different worlds. Like comparing a jet fighter to a jet ski.
Nice rant. Might want to read my comment before going off like a crazy person? I said I only buy Android phones... right there in the first sentence.
Before you blindly state how amazing x86 CPUs are, as I said, run your Intel CPU in dual-core mode at 2.6 GHz and compare how slow it is vs. the A13 Apple chip, which is also 2 performance cores at a low frequency. That's what IPC is. I can't understand why people get so triggered about saying Apple has the highest IPC in the industry. It's a simple fact, and I just have to assume you don't know what you are talking about. Andrei's articles always seem to attract the most illiterate part of the internet.
The A13's IPC is superior to Intel's or AMD's, but the Apple CPU core is huge in terms of transistor budget. IIRC, in A11 times an Apple CPU core had at least 2 times more transistors than an x86 CPU core. Considering that, there is no surprise that Apple has higher IPC.
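The IPC-versus-frequency trade-off being argued here can be sketched in a few lines. Note that the IPC values below are invented purely to show the shape of the argument - they are not measured figures for any real chip:

```python
# Illustrative only: a rough throughput proxy as IPC x clock x cores.
# The IPC numbers here are made up for the sake of the arithmetic.

def throughput(ipc, ghz, cores):
    """Rough single-number proxy: instructions retired per nanosecond."""
    return ipc * ghz * cores

wide_low_clock = throughput(ipc=5.0, ghz=2.6, cores=2)    # wide, low-clocked design
narrow_high_clock = throughput(ipc=3.0, ghz=4.3, cores=2)  # narrower, high-clocked design

# 5.0 * 2.6 * 2 = 26.0 vs 3.0 * 4.3 * 2 = 25.8: comparable throughput,
# reached via very different IPC/frequency trade-offs.
print(wide_low_clock, narrow_high_clock)
```

This is why "run your desktop chip at 2.6 GHz dual-core" is a way of isolating IPC: with clock and core count held equal, whatever performance difference remains is architectural.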
They might as well just make a chart and put "BUZZWORDS" on it at this point.
That is what is so silly about the current state of smartphones. There is so much they can cram into one... but rarely do they do it, and if they do, it's crippled by terrible software.
Google is already facing so much criticism over the Google Photos bullshit; months in, they just say "we are aware of it". lol
I don't think I'll ever understand how Google can take a product that works great, then release a dozen updates with release notes that say "fixes bugs" and completely bork it for users. I mean, what is the fucking incentive? And because everything Google is so entwined with other software, for whatever reason, it fouls up other things too. So now you try to figure out which piece of software is the original culprit. This shit never ends with Google.
Do you know what's amusing? The vast majority of people reading this article don't really understand what those SPEC numbers actually mean, or what they represent for the functionality of the phone. Most only copy-paste the things from the article that they like, especially the parts where it's mentioned that the Qualcomm chips are "years" behind. So it doesn't matter how the phone runs and how fast it can execute real-world tasks.
One thing I don't understand: if I were to buy a Galaxy S11+ instead of an iPhone 11 Pro Max, what would I be missing in terms of performance? What specific advantage would the A13 SoC give me because "it's years ahead" in performance?
My second-hand iPhone 6s runs super smooth, including in heavy, beautiful games - no fps drops or performance issues. Doing so on a Galaxy S6 (the competing model from the same year) is not possible.
This is where the powerful SoC shines - years down the road, where the phone stays relevant and a pleasure to use. Of course you will not have that with your S11+, as Samsung supports their phones fully for only two years (the 3rd is security updates only). Really, they drop serious support even before the first year is out - this is what happened with my Note 9: no attention at all, just quick security updates and no interest beyond that. After all, they put the capable engineers to work only on the new upcoming phones. Apple supports their phones FULLY for 5-6 years, plus this summer they released a security update for the iPhone 5 and 4s - 2011 and 2012 models. Any Android phone from those years receiving a security update?
Also, you can play the full PC Civilization 6 game on the iPhone. There's no Android port, and not only because of piracy, but because on later turns/bigger maps the Android SoCs will choke and the wait time between turns becomes unbearable. I can also list a lot more games exclusive to iOS, many because of performance. Dead Cells, for example: keeping in touch with the devs, the mobile port team (it's outsourced) struggles BIG TIME with performance on Android, thus massively delaying it.
So the main point I want to know: is it worth upgrading to a Snapdragon 865 over a Snapdragon 855? And while we're on the subject, does the Mi Mix 3 5G have fewer LTE bands than the standard Snapdragon 845 Mi Mix 3?
Many comments, likely authored by data-driven nerds (similar to me), are doing their best to ignore the facts: years ago Apple took the performance crown, and it still wears it today. Inevitably, one day someone will usurp Apple's position, but that day does not appear to be soon.
Every comment that offers an explanation or justification as to 'why' Apple holds the top position intrinsically agrees that it does.
So QC did the same as Samsung: just add vanilla ARM cores to their SoC, after all these years of "custom" cores for almost zero gain but tons of problems in certain generations.
I bought an iPhone 11 Pro Max. I took the thing back. The A13 is way overrated. I don't believe these benchmarks. The 865 is better. PUBG is not good on the iPhone 11 Pro Max. The previous iPhone allowed you to set the graphics settings higher than the current settings available on the iPhone 11 Pro Max. Also, the RAM management is appalling.
When playing PUBG longer than 10 minutes, the phone heats up to 50 degrees Celsius. Hot 🔥.
Then it gets worse. Four out of six cores shut down. Throttling.
And the quality of the game just deteriorates.
The A13 is actually only a 5% increase over the A12.
The 865, when using other benchmarks - such as a truly cross-compatible one like Speed Test G - actually beats the A13.
Gary explains is a better comparison. And more accurate.
I am now waiting for an 865 handset.
These tests seem like some sort of laboratory test instead of a real world test.
The SoC's have been designed knowing what kind of other peripherals are attached.
Amazing... when using the iPhone 11 Pro Max... you guys make me laugh. For something with such high statistical measurements in comparison to other SoCs, it only makes the A13 look even more foolish.
Take an 855+... When I use a Realme X2 Pro with the same apps as were on my iPhone 11 Pro Max, the Realme X2 Pro with its 855+ processor absolutely runs circles round the iPhone 11 Pro Max.
For something that supposedly has such high specs, that just makes the phone seem even more confounding. And even more humiliating.
When it comes to gaming: the 855+, or definitely the 865...
The A13 in the iPhone 11 Pro Max is to be avoided by heavy gamers.
And we all know that the flagship Snapdragons make better processors for gamers, who require optimal CPUs.
Sorry, but as a gamer, these benchmarks are not accurate or realistic at all. More like a laboratory benchmark.
I used to design chipsets and pcbs. At a discrete government laboratory. These benchmarks have a huge amount of discrepancies.
The 865 is overall actually a better SoC than the A13. It is way more dynamic than the A13.
Finally, someone with great understanding and experience of how to properly rate a phone. They don't realize that the CPU alone cannot function properly without the help of other modules or components. The iPhone 11 is like a PC with an i7-9900 + RTX 2060 + 3GB DDR4-2400, while an Android phone is like a Ryzen 3900X + RTX 2080 Ti + 2x4GB DDR4-3200.
"I used to design chipsets and pcbs. At a discrete government laboratory. These benchmarks have a huge amount of discrepancies."
Even taking into consideration the rapid decline of comment quality on AT, this... is next level. Kudos, MagicMonkeyBoy, may your crazy never burn out. A++, would read again.
For a start, the RAM management was an OS bug that was fixed in the iOS 13.2.x release, and surely in iOS 13.3, which is current. Secondly, the missing GFX option is because the DEVELOPER didn't update the game for the new iPhone.
I had the same problem with the main game I play, Vainglory. While the screen is on paper the same as the XS Max (in resolution and size), the game UI was horribly buggy and it stayed like that for 2 months until they released an update for that model. Of course you will not have problems in every app, or such a slow reaction from all the devs - but it happens. It goes without saying that after those first few months you will never have such problems over that phone's lifetime, even if it's 5-6 years.
Third - taking Speed Test G seriously is a lol thing to do. With everything stated below, I will not waste my time going technical on why it's not serious.
Btw, used android for 10 years (only high end phones) till I switched to the pro max + I have highly technical background as education, hobby and work - especially in the field of electronics and computers.
Also, talking about how a chip that hasn't even been released is better/worse vs. X - hahahah :) Not to mention what usage it's based on.
Lastly - my iPhone doesn't heat up at all, even in the heaviest games, which are A LOT heavier than your mentioned PUBG joke. Try running full PC Civ 6 on your Android phones, or Dead Cells... oh, no Civ 6, as the performance will be poor on later turns. Also still no Dead Cells, because the devs can't make it run well on the available Android SoCs. ;)
The game runs, but how it will run on big map turn 200+ is another story. :) As for dead cells - this is actually quite common, people think the game is light simple gfx wise, because of the art style/decisions. Actually talked with the devs on that topic - everything is 3d and the game is not that light as you might think. As for your absurd last statement - every developer would target the lowest end as it will bring more potential customers.
Do you want to talk about the hundreds more ios exclusive apps? Or to list the recent android "great games" that are on ios from years? I can also tell you thing or two how much better is to develop for ios vs android, how easy is to optimise for 10 devices vs 100000 or even how decent is actually the GPU in the 6s given it's low resolution. Because on android a crap GPU is paired frequently with high resolution screen and defo atleast 1080p, but 1440 is also seen in the budget oriented phones. So the statement how the regular size iphone 6s can game in 2019 is kinda rushed.
"As for your absurd last statement - every developer would target the lowest end as it will bring more potential customers." - no they dont. Look at grid autosport. Look at fortnite.
"Do you want to talk about the hundreds more ios exclusive apps? Or to list the recent android "great games" that are on ios from years? I can also tell you thing or two how much better is to develop for ios vs android, how easy is to optimise for 10 devices vs 100000 or even how decent is actually the GPU in the 6s given it's low resolution. Because on android a crap GPU is paired frequently with high resolution screen and defo atleast 1080p, but 1440 is also seen in the budget oriented phones. So the statement how the regular size iphone 6s can game in 2019 is kinda rushed" - you are pretty ignorant arent you? Do you really think all games run at the screens native resolution on android? Why do you think graphics options exist? Do you want to talk emulation? How easy it is to run emulators on android? How many more systems are available to emulate? I dont. Because comparing platforms wasnt the point. It was all about android phones being too weak to run those games you mentioned. The devs should make it available for the flagships atleast for now, if they really want to.
So your iPhone never heats up? If you have GFX Tool for PUBG on iOS, get it, put everything to max, and run it. Because even at the in-game max settings I have seen the iPhone X heating up.
I am into android from the start + symbian before than and also senior member with dev/helping known devs with project @ xda. So thank you, I know enough about android.
I know that the iPhone X heats a lot; it was a known design flaw with that phone (if you want to point at a heating Apple device, it will most likely top the list). I am currently with the iPhone 11 Pro Max and it never heats even half of what my Exynos Note 9 does (and the Exynos Note 9 runs cooler than the Snapdragon variant). It's the first iPhone with a cooling solution and it really does wonders; you can refer to Andrei's iPhone review for a deep dive into the matter.
I can play fortnite maxed at 60fps and no fps drops or whatever even after 2 hours of play without major heating and you are talking about PUBG maxed. :) Ofc that heat will be there, but as you can also read in Andrei's articles/reviews - apple's A chips are leading in performance AND efficiency. The heat you will see coming from A13 will be less than what you will see from the current android SOCs and/or literally can play games smoother with higher quality GFX and with more FPS.
Almost none of the heavier games run at native resolution on mobile, but most also run at lower resolution/detail on Android vs. iOS.
Emulation is cool, did a lot on android with it. Including fun stuff like running diablo 2 LOD latest patch on my note 9, believe me - it's playable with the spen when on the go, in home one mouse and the TV = you are good to go. Still, ported or developed games for mobile just works better and you have such a vast library nowdays with high quality games that you really don't need to revisit old classics on your phone. Actually on ios the situation is a lot better, you got a lot more paid apps there vs android.
Btw, I generally prefer Android and could write a 3x longer post about what I love there, but if we are talking about gaming - iOS is the platform to go with.
"I am into android from the start + symbian before than and also senior member with dev/helping known devs with project @ xda. So thank you, I know enough about android." Haha... I should have known you would come up with something like that.
"Btw, used android for 10 years (only high end phones) till I switched to the pro max + I have highly technical background as education, hobby and work - especially in the field of electronics and computers" this one too lol.
And then you somehow decide civ6 and deadcells dont run cus android too weak. No. Its just the devs dont bother with it. They could have restricted it to atleast sd820 devices like what grid autosport devs are doing.
"Emulation is cool, did a lot on android with it. Including fun stuff like running diablo 2 LOD latest patch on my note 9, believe me - it's playable with the spen when on the go, in home one mouse and the TV = you are good to go. Still, ported or developed games for mobile just works better and you have such a vast library nowdays with high quality games that you really don't need to revisit old classics on your phone. Actually on ios the situation is a lot better, you got a lot more paid apps there vs android." Im yet to find some good stuff like god of war, nfs, burnout, wipeout, xenoblade, pokemon, zelda or mario, on android, or any other thousands of games. You could say you can stream them, but same goes for pc games too. Emulators and a switch style gamepad is great on the go. I see apple has done a great job with metal, vulkan is worse than opengl on android 10 sd855. Looking forward to the updatable drivers on the 865.
"I can play fortnite maxed at 60fps and no fps drops or whatever even after 2 hours of play without major heating and you are talking about PUBG maxed. :)" Thats awesome, sd855 heats up a lot on pubg maxed. I guess there is no pubg gfxtool for ios.
There are emulators for ios and you don't need jailbreak to install/play games. They are not on the app store tho, they are on custom stores - still, it's not any different than installing APK from outside playstore. The emulators library is quite big, including ppsspp. As I said tho - android is better for emulators imho + I didn't say android is weak as OS. Weak are the SOCs on android phones compared to the A series of apple. I would totally love to see android phone with apple SOC/similar performance to it and longer full support than two years.
As for GFX Tool, I hope you understand that when you have literally just a few phones to optimise for, you really do a great job of it; in other words, the iOS PUBG variant is greatly optimised for every iPhone that supports it, to extract the best experience with the best possible GFX for the hardware. Of course you can argue that personal preferences apply and tweaking can be done, but it's not that necessary.
I respect your opinion and share a few viewpoints; it's just that, from personal experience, gaming on iOS is generally better. Hard to explain - games run smoother and better. If you love emulators though, Android is obviously the better choice, plus a Snapdragon SoC.
"Weak are the SOCs on android phones compared to the A series of apple." Yeah everyone knows. They still are strong enough for the games you mentioned though, atleast the last 2 years of flagships. And last years 730/730g are also good enough. I guess the devs want even those with 100$ phones play their games. I doubt those people would even bother buying the game once it hits the store.
"The test here is mostly sensible to the performance scaling of the A55 cores. The QRD865 in the default more is more conservative than some existing S855 devices," "mode" not "more": "The test here is mostly sensible to the performance scaling of the A55 cores. The QRD865 in the default mode is more conservative than some existing S855 devices,"
UglyFrank - Monday, December 16, 2019 - link
I imagine the Tab S7 will have this. Meanwhile, the iPad Pro 2020 will most likely have more than double the GPU power.
Kishoreshack - Monday, December 16, 2019 - link
That's not how it works bro
UglyFrank - Monday, December 16, 2019 - link
It is. The A12X has more than double the S855's GPU performance, and we can expect a ~20% increase in GPU performance (A12X to A13X), as the A12 to A13 had a similar increase.
generalako - Monday, December 16, 2019 - link
Ok, but then again the SD875 (or whatever it will be called) is expected to be on a new architecture after three generations, which generally means a 50%+ jump just there. With the transition over to 5nm, you can expect even more performance from that. That would, after all, be the fairest comparison to the A14 (or A14X) on 5nm later this year, due to process node comparisons. Same with CPUs (don't forget, the A77 in the SD865 was released the summer before by ARM, and only presented in the SD865 in December).
close - Tuesday, December 17, 2019 - link
Over the past few years Apple has been doing a consistently better job than Qualcomm regardless of process node. Probably they can afford to, since they are in full control of the whole technology stack, including the software, which means they can squeeze out additional performance and efficiency. But this doesn't change the fact that year after year the A-chips are better than their counterparts.
tuxRoller - Wednesday, December 18, 2019 - link
I'm not sure that Apple is much, if at all, more optimized than the Android BSPs. If you're aware of proof to the contrary I'd be interested in reading it.
michael2k - Wednesday, December 18, 2019 - link
It doesn't mean optimized the way you envision it. It means more tailored to the design, since Apple has a fixed number of systems it has to support. There are three ways to see it. One is how many years Apple pushes iOS updates: that is a function of performance as well as of the OS.
Another way to see it is knowing that Apple ships iPhones with much less RAM, meaning their OS and apps have to be designed to use less RAM too.
Likewise their iPhone usually ships with smaller batteries; by designing the OS, SoC, and RAM synergistically they can use a smaller battery too. RAM happens to use energy even when idle, so less RAM does translate to lower energy usage.
michael2k - Tuesday, December 17, 2019 - link
Yeah, but anything Qualcomm does to boost performance, Apple will be doing too.
The 865 is going to compete with the A14 in 2020, and the 875 will compete with the A15 in 2021. So if we expect the A14 to boost perf by 15% and the A14X to boost perf by 40%, and the A15 to boost perf again by 10% and the A15X to boost perf again by 25%, you'll see:
855 = 1.00
865 = 1.25
875 = 1.50
A13 = 1
A13X = 1.4
A14 = 1.15
A14X = 1.96
A15 = 1.26
A15X = 2.45
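The compounding behind those numbers can be sketched in a few lines. Note that the yearly gain percentages are this comment's assumptions (the Snapdragon steps of ~25% and ~20% are inferred from the listed scores), not benchmark results:

```python
# Compound assumed yearly gains into relative performance scores.
# All gain figures are assumptions from the comment above, not measurements.

def project(base, yearly_gains):
    """Return running scores after compounding each fractional yearly gain."""
    scores = [base]
    for gain in yearly_gains:
        scores.append(scores[-1] * (1 + gain))
    return scores

snapdragon = project(1.00, [0.25, 0.20])  # 855 -> 865 -> 875: ~1.00, 1.25, 1.50
apple_a    = project(1.00, [0.15, 0.10])  # A13 -> A14 -> A15:  ~1.00, 1.15, 1.26
apple_ax   = project(1.40, [0.40, 0.25])  # A13X -> A14X -> A15X: ~1.40, 1.96, 2.45
```

The takeaway of the arithmetic: even if both sides keep gaining, a line that starts behind and compounds at a similar rate never closes the ratio.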
Technically Qualcomm has more room to improve when you compare transistor budgets: the A13 is approximately 8.5b transistors, the A12 7b transistors.
In comparison, the 855 only had 6b transistors, per Qualcomm itself:
https://www.qualcomm.com/media/documents/files/sna...
id4andrei - Tuesday, December 17, 2019 - link
The 865 competes with the A13, not with the future A14. Apple sets the cadence in the SoC space and has done so since breaking rank with sheer performance and the transition to a 64-bit arch.
generalako - Tuesday, December 17, 2019 - link
This is just misrepresentative. For the past two generations ARM's architecture has been closing the gap to Apple. It closed the gap by around 30% in IPC with the A76, and by around 15% in IPC with the A77 (the A77 had a 27% IPC gain vs. the A13's 12% IPC gain). The gap has been getting smaller, and hopefully that will continue. But the fact remains that it's closing for the performance cores.
Also, your comparisons are way off. The SD855 was comparable to the A12, just as the SD865 is to the A13, and so on and so forth. This with process node and the actual release date of the Cortex core in mind.
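The gap-closing arithmetic in that comment can be checked directly (using the 27% and 12% generational IPC gains it cites; the result lands near the "around 15%" figure):

```python
# If ARM's IPC grows 27% in a generation while Apple's grows 12%,
# the relative IPC gap shrinks by the ratio of the two gain factors.
arm_gain, apple_gain = 1.27, 1.12

relative_catch_up = arm_gain / apple_gain  # ~1.134
print(f"ARM closes the relative IPC gap by ~{(relative_catch_up - 1) * 100:.0f}% this generation")
```

In other words, the catch-up per generation is the ratio of growth factors, not the difference of percentages - roughly 13% here rather than 27 minus 12.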
michael2k - Wednesday, December 18, 2019 - link
I know you want to think that, but the 855 right now barely competes with the A13. The gap may be getting smaller, but when Apple has a multi-year lead in performance, it will take multiple years of iteration to catch up, and that assumes Apple isn't growing either.
From the article, the 865 wasn't very competitive with the older A12:
On the integer side, the A77 still trails Apple’s Monsoon cores in the A11, but the new Arm design now has been able to trounce it in the FP suite. We’re still a bit far away from the microarchitectures catching up to Apple’s latest designs, but if Arm keeps up this 25-30% yearly improvement rate, we should be getting there in a few more iterations.
...
The QRD865 really didn’t manage to differentiate itself from the rest of the Android pack even though it was supposed to be roughly 20-25% ahead in theory. I’m not sure what the limitation here is, but the 5-10% increases are well below what we had hoped for. For now, it seems like the performance gap to Apple’s chips remains significant.
...
There’s one apparent issue here when looking at the chart rankings; although there’s an improvement in the peak performance, the end result is that the QRD865 still isn’t able to reach the sustained performance of Apple’s latest A13 phones.
...
Looking at the estimated power draw of the phone, it indeed does look like Qualcomm has been able to sustain the same power levels as the S855, but the improvements in performance and efficiency here aren’t enough to catch up to either the A12 or A13, with Apple being both ahead in terms of performance, power and efficiency.
The 855 was released early this year and was not very competitive with the slightly older A11:
https://www.anandtech.com/show/14031/samsung-galax...
Deboo - Monday, January 27, 2020 - link
Aren't Apple's CPU cores ARM-based?
michael2k - Tuesday, December 17, 2019 - link
The formal term for it is induction.
Apple's GPU is very competitive in phones, sometimes taking the top spot, sometimes not:
https://www.anandtech.com/show/13392/the-iphone-xs...
But for their tablets, Apple proceeds to beef up the GPU to over 2x the iPhone's, in this case by having 75% more GPU cores, much better thermal capacity, and higher clockspeeds:
https://www.anandtech.com/show/13661/the-2018-appl...
In the PC space it is beaten by the GTX 1060 but beats the Ryzen 7, and it soundly beats the iPhone by 80% when not CPU bound.
So the inductive part is that, given the 865 approaches the performance of an iPhone, it won't approach the performance of an iPad.
id4andrei - Tuesday, December 17, 2019 - link
One thing to remember when you compare GPUs: benchmarks usually run half-precision operations on mobile compared to what they run on desktop - FP16 vs. FP32. Also, on iOS they run Metal, while on Windows it's DX12, Vulkan, or OpenGL. Not the same thing.
IUU - Thursday, December 26, 2019 - link
Apple cores beating the Ryzen 7 is a very big claim and requires a big leap of faith.
ph00ny - Monday, December 16, 2019 - link
Probably. Android tablets are pretty dead and there is no real solid demand for "faster" hardware.
generalako - Monday, December 16, 2019 - link
That's a stupid exaggeration. If it were dead, how come Huawei just released the MediaPad M6 this summer with the Kirin 980, and is releasing an even more premium tablet right now? How come Samsung makes a new Tab S every year, along with iterations of cheaper models every now and then? Same with Xiaomi, Lenovo, and others. The low-end market is teeming with new tablets from all sorts of brands. Even LG released a tablet this year. The tablet market may not be huge, and it's certainly lackluster in a lot of ways on Android (mainly due to Google neglecting it since the Nexus 9 was a flop; you don't follow up the Nexus 7 v2 - the best tablet ever made - with that nonsense), but it very much exists and is desired by a lot of customers. This reality is true for Apple too, which waited 4-5 years before it released a new iPad Mini iteration.
As an avid tablet user myself, even with a lot to be desired on Android, I still very much like many of their offerings from the past 2 years. I'm currently using both an iPad Mini 5 and a MediaPad M6 8.4". Seeing as I, like most other people, use tablets for media consumption (YouTube, streaming movies/shows, reading books, Reddit, browsing, Spotify), the available supported apps are equally good on both platforms. If I were to dive deeper into dedicated applications, sure, the iPad has much better support, but I really never do, as I use a tablet for the specific uses it was made for. I can see this being a complaint if you're an iPad Pro user and use it for professional work, but I don't really see that being a desire for those using a regular iPad or iPad Mini. Maybe if you're a gamer, but that's really it.
I use the Huawei more than the iPad Mini due to how much more intuitive Android is. At the end of the day, they both run oversized variants of their smartphone OS, and Android is simply more intuitive in a lot of respects; using iOS still feels like having one hand tied behind my back. Where Apple is fantastic, though, is in its hardware implementation, like its excellent screen calibration and touch latency, or having a 3.5mm jack (unlike the M6 or newer Tab S, sadly) with a really good DAC that properly drives my HD650. The MediaPad wins on a more effective, easier-to-use OS and its 16:10 aspect ratio. Android also makes it easier to do things like torrenting (which is great for downloading/streaming movies, football matches, etc.), local file management, sharing, and more. Ironically, Apple has made up for the MediaPad's lack of a jack with its fantastic 3.5mm-to-Type-C adapter for $7, which beats DACs upwards of 10x its price (no joke - Apple really knocked it out of the park).
imaheadcase - Tuesday, December 17, 2019 - link
Let alone the thousands of Samsung tablets with Android that sell like hot cakes. Given the sales over the holidays, they are even more attractive. Sure, Samsung is terrible at updating software, but that's not a deal breaker by any means. I tried my parents' iPad and it just felt wrong using it; I can't quite put my finger on the reason, it just seems so limited with apps and not responsive.
Things just work great on Android variant for me.
Oliseo - Thursday, December 19, 2019 - link
One look at the Android app stores tells you just how successful Android tablets are. That's something everyone can check, rather than just taking some random fanboy's drivel on a forum. /end debate
Lois - Friday, December 27, 2019 - link
This is just your feeling and not a fact… Some people feel better on iPadOS and some others on Android. Now the fact is that the iPad is more powerful than Android tablets…
Bulat Ziganshin - Monday, December 16, 2019 - link
Unfortunately the first page lacks the latency picture for the Core i9.
Ryan Smith - Monday, December 16, 2019 - link
Whoops! Reload and try it now.
shabby - Monday, December 16, 2019 - link
Performance looks good, but I'm really wary of the external 4G/5G chip and its additional antennas and how they'll affect battery life. Not going to buy any new SD865 phone until reviews pop up.
eastcoast_pete - Monday, December 16, 2019 - link
That's the elephant in the room! I haven't seen any good real-world data on just how much power 5G use will actually add. Faster data is nice (in principle), but if 5G cuts battery life by 30-40% vs. current 4G LTE, it's pretty much useless. If anyone here has links to such tests, please post - thanks!
Kangal - Friday, December 20, 2019 - link
Well, I think generally they're going to stick the 5G SoC next to the QSD 865 SoC.
That means heat from one will affect the other, and so OEMs will potentially require a larger phone (and a larger screen = more drain), but with a smaller area for the battery.
Not to mention, there is going to be significant battery life hit when using an external radio chipset instead of an integrated one. Remember the huge power savings we saw going from the QSD 600 to the QSD 800 back in 2013. So it will be kind of reversed.
I think overall, what will happen is that all the improvements in battery technology and the Cortex-A77 are going to be nullified. So phones from 2019 that had the best battery life and performance are going to see a "side-grade" in the 2020 flagships. In essence, I believe the QSD 865 will be much less competitive against the Apple A13 and Exynos 990 than the QSD 855 was against the Apple A12 and Kirin 980.
...I do believe more in-depth benchmarks and reviews will come within 3 months (and validate my hypothesis).
MrCommunistGen - Monday, December 16, 2019 - link
Based on the initial performance estimates from the Cortex A77 announcement article Andrei published back in May, I was actually really excited to see the next generation Snapdragon 865. The lack of performance uplift in the real-world web metrics with the QRD865 is a bit underwhelming, but my main concern is the requirement of an external modem.

I don't have any experience with the current crop of 5G external-modem devices, but back in the day I used several Qualcomm APQ external-modem devices. Back then battery life being terrible was basically a given since batteries were small and Android was pretty bad at power management, but in hindsight I'm sure at least some of it was due to the external modems...
tuxRoller - Wednesday, December 18, 2019 - link
The disappointing web improvements were also seen last year (https://www.anandtech.com/show/13786/snapdragon-85...): "The web-based benchmarks such as Speedometer 2.0 and WebXPRT 3 showcase similar relatively muted results. Here I had expected Qualcomm to perform really well given the scheduler performance showcase of the Snapdragon 845. The results of the Snapdragon 855 are quite meagre, especially in a steady-state throughput workload such as Speedometer 2.0. Here the Snapdragon 855 only manages to showcase a ~17% improvement over the last generation, and also lags behind the Kirin 980 by some notable amount."
However, compare those 855qrd to the numbers seen in this article with actual shipping devices. Pretty big difference.
generalako - Monday, December 16, 2019 - link
>Not going to buy any new sd865 phone until reviews pop up.

Shouldn't really ever do that anyhow...
Kishoreshack - Monday, December 16, 2019 - link
Don't know why the performance looks disappointing in the web browsing test.

Kishoreshack - Monday, December 16, 2019 - link
What is the reason it performs worse than last year's 855 chip? Are we gonna see the same kind of implementation in real-world devices?
rpg1966 - Monday, December 16, 2019 - link
How is Apple so far ahead in some/many respects, given that Arm is dedicated to designing these microarchitectures?

eastcoast_pete - Monday, December 16, 2019 - link
In addition to spending $$$ on R&D, Apple can optimize (tailor, really) its SoCs 100% to its OS and vice versa. Also, not sure if anybody has figures on just how much Apple's SoCs cost internally compared to what Samsung, Xiaomi etc. pay QC for their flagship SoCs. Would be interesting to know how much this boils down to costs.

jospoortvliet - Monday, December 16, 2019 - link
I think cost is the big factor. Qualcomm and Arm keep chips small for cost reasons. Apple throws transistors at the problem and cares little...

s.yu - Monday, December 16, 2019 - link
I like the approach of throwing transistors :)

generalako - Monday, December 16, 2019 - link
Can we stop with these excuses? What cost reasons? Who's stopping them from making two architectures then, letting OEMs decide which to use -- if Apple does it, why not them? Samsung aiming at large cores with their failed M4 clearly points towards a desire/intention to have larger, more performant cores. Let's not make the assumption that there's no need here--there clearly is.

Furthermore, where is the excuse in ARM still being on the A55 for the third straight year? Or Qualcomm being on the same GPU architecture for 3 straight years, with such incremental GPU improvements over the past two years that they not only let Apple match and vastly surpass them, but are even getting matched by Mali?
There's simply no excuse for the laziness going on. ARM's architecture is actually impressive, with still big year-on-year IPC gains (whereas Apple has actually stagnated here the past two years). But abandoning any work on efficiency cores is inexcusable. As is the fact that none of the OEMs has done anything to deal with this problem.
Retycint - Monday, December 16, 2019 - link
Probably because ARM designs for general use - mobiles, tablets, TVs, cars etc, whereas Apple designs specifically for their devices. So naturally Apple is able to devote more resources and time to optimize for their platform, and also design cores/chips specific to their use (phone or tablet).

But then again I'm an outsider, so the reality could be entirely different.
generalako - Monday, December 16, 2019 - link
TIL using the same A55 architecture is "for general use" /s

If ARM had actually done their job and released new efficiency cores more often, like Apple does every year, we'd have far more performant and efficient smartphones today across the spectrum. Flagship phones would benefit in idle use (including standby), and also in assigning far more resource-light work to these cores than they do today.
But mid-range and low-end phones would benefit a huge amount here, with efficiency cores performing close to performance cores (often one or two generations older and clocked substantially lower). That would also be cheaper, as it would make a cluster of 2 performance cores less necessary--fitting right in with your logic of making cheap designs for general use.
quadrivial - Monday, December 16, 2019 - link
There's a few reasons.

Apple seems to have started before Arm did. They launched their design just 2 years or so after the announcement of A64, while Arm needed the usual 4-5 years for a new design. I don't believe Apple's designers are that much better than normal (I think Arm handed them the ISA early, and Apple threatened to buy out MIPS if they didn't). Arm has never recovered that lead time.
That said, PA Semi had a bunch of great designers who had already done a lot of work with low-power designs (mostly POWER designs if I recall correctly).
Another factor is A32 support. It's a much more complex design and doesn't do performance, power consumption, or die area any favors. Apple has ecosystem control, so they just dropped the complex parts and did A64 only. This also drastically reduces the time to design and verify any particular part of the core, meaning more time optimizing, versus teams trying to do both at once.
Finally, Apple has a vested interest in getting faster as fast as possible. Arm and the mobile market want gradual performance updates to encourage upgrades. Even if they could design an iPhone killer today, I don't think they would. There's already enough trouble with people believing their phones are fast enough as is.
Apple isn't designing these chips for phones though. They make them for their pro tablets. The performance push is even more important for laptops. The current chip is close to x86 in mobile performance. Their upcoming 5nm designs should be right at x86 performance for consumer applications while using a fraction of the power. They're already including a harvested mobile chip in every laptop as their T2. Getting rid of Intel on their MacBook Air would do two things. It would improve profits per unit by a hundred dollars or so (that's almost 10% on low-end models). It also threatens Intel, to get them better deals on higher-end models.
We may see arm move in a similar direction, but they can't get away with mandating their users and developers change architectures. Their early attempts with things like the surface or server chips (a57 was mostly for servers with a72 being the more mobile-focused design) fell flat. As a result, they seem to be taking a conservative approach that eliminates risk to their core market.
The success or failure of the 8cx will probably be extremely impactful on future arm designs. If it achieves success, then focusing on shipping powerful, 64-bit only chip designs seems much more likely. I like my Pixelbook, but I'd be willing to try an 8cx if the price and features were right (that includes support for Linux containers).
Raqia - Monday, December 16, 2019 - link
Nice post! You're right, it really does seem like Apple's own implementations defined the ARM v8.x spec given how soon after ARM's release their chips dropped. ARM is also crimped by the need to address server markets, so their chips have more complex cache and uncore hierarchies than Apple's, and generally smaller caches with lower single-threaded performance. Their customers' area budgets are also more limited compared to Apple, who doesn't generally integrate a modem into their SoC designs.

aliasfox - Monday, December 16, 2019 - link
I would also add that Qualcomm only makes a dozen or so dollars per chip, whereas Apple makes hundreds of dollars per newest generation iPhone and iPad Pro. Qualcomm's business model just puts them at a disadvantage in this case - they have to make a chip that's not only competitive in performance, but at a low enough cost that a) they can make money selling it, and b) handset vendors can make money using it. Apple doesn't really have to worry about that because for all intents and purposes, their chip division is a part of their mobile division.

I wonder if it's in the cards for Apple to ever include both an Intel processor as well as a full fledged mobile chip in the future, working in the same way as integrated/discrete graphics - the system would primarily run on the A13x, with the Intel chip firing up for Intel-binary apps as needed.
quadrivial - Monday, December 16, 2019 - link
I think there could be some possibility of AMD striking that deal with some stipulations. They have the semi-custom experience to make it happen and they don't have much to lose in mobile. AMD already included a small Arm chip on their processors. They already use AMD GPUs too. A multi-chip package would be great here.

I've given some thought to the idea of 8 Zen cores, an 8-core ARM complex, 24CU Navi, 32GB HBM2, and a semi-custom IO die to tie it together. You could bin all of these out for lower-spec'd devices. The size of this complex would be much smaller than a normal dedicated GPU, CPU, and RAM while using a bit less power. Most lower-end devices would probably only need 2 x86 cores and 8-11CU with 8GB of RAM.
zanon - Wednesday, December 18, 2019 - link
>"I wonder if it's in the cards for Apple to ever include both an Intel processor as well as a full fledged mobile chip in the future, working in the same way as integrated/discrete graphics - the system would primarily run on the A13x, with the Intel chip firing up for Intel-binary apps as needed."

Doubt it, if only because x64 is already coming out of patent protection, and with each passing year newer feature revisions will have the same thing happen. By 2025 or 2026 or so, Apple (or anyone else) will just flat out be able to implement x86-64 all the way up to Core 2 at least however they like (be it hardware, software, or some combo with code morphing or the like). That would probably be enough to cover most BC; sure, stuff wouldn't run as fast, but it would run. And there'd be a lot of power efficiency to be gained as well.
Midwayman - Monday, December 16, 2019 - link
OSX on Arm seems a given soon. That would allow them to really blur the line between their iPad Pro and the lower-end laptops. Even if they are still technically different OSes it would make getting real pro apps onto the iPad Pro a ton easier. MS tried this of course but didn't have the clout or tablet market to really make it happen. Apple is in a position to force the issue and has switched architectures in the past.

levizx - Tuesday, December 17, 2019 - link
Nope, Apple still supports AArch32, and Apple's 64-bit is only ahead of ARM by 1 year max; actual S810 silicon by Qualcomm was only 15 months later than the A7. You can't possibly say Apple started earlier AND took 2-3 years LESS than ARM's partners to design silicon. That would mean Apple beat the A57 by at least 3 years. Reality says otherwise.

quadrivial - Tuesday, December 17, 2019 - link
Apple dropped AArch32 starting with the A11.

ARM announced their 64-bit ISA on 27 October 2011. The A7 launched 19 September 2013 -- less than two years later. AnandTech's first review of a finished A53 and A57 product was 10 Feb 2015 -- almost 3.5 years later, and the product was obviously rushed, with a new revision coming out afterwards and the A57 being entirely replaced and forgotten.
Qualcomm and others were shocked because they only had 2 years to do their designs and they weren't anywhere near complete. A ground-up new design in 23 months with a brand new ISA isn't possible under any circumstances.
https://www.google.com/amp/s/appleinsider.com/arti...
ksec - Monday, December 16, 2019 - link
Apple SoCs use more die space for the CPU cores, it is as simple as that, so they are not a fair comparison. For roughly the same die size, Qualcomm has to fit in the modem, while Apple has the modem external.

rpg1966 - Monday, December 16, 2019 - link
I'm not sure I understand the "fair" bit? The other chip makers are free to design a larger-core variant if they so choose. And, the 865 has the modem external, just like the Apple chips. Also, generally speaking, the SoC + external modem approach should require more power, yet Apple seems to do very well on those benchmarks.

Maybe it's more as per another reply, i.e. Apple just optimises everything, one example being throwing out A32.
generalako - Monday, December 16, 2019 - link
That's not an argument -- the modem costs money for both parties either way at the end of the day. Also, Cortex cores are pretty great, with still bigger year-on-year improvements than Apple (which seems to have stagnated), so they are closing the gap, albeit slowly. The big complaint, however, is things like Qualcomm's complacency in GPUs, or ARM doing shit-all to give us a new efficiency-core architecture after 3 years.

Apple has surpassed them hugely here, to the point that their efficiency cores deliver more than 2x the performance at half the power. Now, if you want to bring price into this, think about how much that costs OEMs. It forces them to use mid-range SoCs with expensive performance cores, when they could make do with only efficiency cores that performed better. It costs them, as well as flagship phones, a lot of power efficiency, forcing hardware compromises, or spending more on larger batteries, to compete.
generalako - Monday, December 16, 2019 - link
ARM has been catching up, though. The IPC increases since the A11 have been pretty meagre, whereas the A76 was a pretty sizeable jump (cutting a lot of the gap), and the A77 is doing a 25% IPC jump, whereas the A13 did what, half that? Of course Apple still has a huge lead, but the gap has been getting smaller...

ARM's issue right now, though, is in efficiency cores. The fact that their Cambridge team hasn't developed anything for 3 straight years now (going into the 4th), whereas Apple's yearly architecture improvements have given them efficiency cores that are monumentally better in both performance and efficiency, is getting embarrassing at this point. It's hurting Android phones a lot. No less frustrating that none of the SoC actors are bothering to make any dedicated architectures themselves to make up for it. Qualcomm is complacent in even their GPUs, which have been on the same architecture for 3 straight years and have in this time completely lost their crown to Apple--even ARM's Mali has caught up!
FunBunny2 - Tuesday, December 17, 2019 - link
"How is Apple so far ahead in some/many respects, given that Arm is dedicated to designing these microarchitectures?"

Based on what I've read in public reporting, Apple appears to have mostly thrown hardware at the ISA. Apple has the full-boat ISA license, so they can take the abstract spec and lay it out on the silicon any way they want. But what it appears is that all that godzilla transistor budget has gone to caches and such, rather than a smarter ALU, for instance. Perhaps AT has done an analysis of just exactly what Apple did to the spec to make their versions? Did they actually 'innovate' the (micro-?)architecture, or did they, in fact, just bulk up the various parts with more transistors?
eastcoast_pete - Monday, December 16, 2019 - link
Thanks Andrei! Amy chance to post the S855 QRD's figures also? These QRDs are "for example" demo units, and the final commercial handsets are often different (faster). Also, any word from QC on how much AI processing power will be needed to run 5G functionality? Huawei's Kirin 990 5G has twice the AI TOPS of their LTE version, and that seems to be due to their (integrated) 5G modem using about half the AI TOPS when actually working in 5G mode.

eastcoast_pete - Monday, December 16, 2019 - link
Any chance, of course. Edit function would be nice.

Andrei Frumusanu - Monday, December 16, 2019 - link
I don't see the point in showing the QRD855 results; there's a large spectrum of S855 device results out there and likely we'll see the same with the S865. The QRD855 and QRD865 aren't exactly apples-to-apples configuration comparisons either, so that comparison doesn't add any value.

ChitoManure - Monday, December 16, 2019 - link
Because QRDs from Qualcomm might have a similar cooling system, and the OEMs usually have better thermal designs, which is why they are faster.

Andrei Frumusanu - Monday, December 16, 2019 - link
None of the tests were made under thermal stress scenarios; the cooling isn't a limitation on the QRDs, and the performance showcased is the best the chip can achieve.

Kishoreshack - Monday, December 16, 2019 - link
Man, the web benchmarks are DISAPPOINTING. Feel like buying an S10+ now.
Kishoreshack - Monday, December 16, 2019 - link
Just shows how Samsung does the best implementation of Qualcomm SoCs. Even last year's Samsung 855 devices are able to outperform the Snapdragon 865 in many benchmarks.
Can't wait for S11 now
Kishoreshack - Monday, December 16, 2019 - link
Anyone who expected Qualcomm to beat Apple in performance? You were dreaming.
Don't know whom to blame, Arm or Qualcomm,
but the Android world is constantly receiving inferior chips
Karmena - Monday, December 16, 2019 - link
IMHO all these SoCs are at the level where the average Joe can do with any of them and the device will feel snappy and good. Now it comes down to the OS delivering the performance and features that users crave.

doungmli - Monday, December 16, 2019 - link
The only benchmarks in favor of the A13 chip are the web, 3DMark and Geekbench; the rest are in favor of the Snapdragon. It should perhaps be remembered that this is an SoC, so CPU + ISP + GPU + ..., and when adding it all up the Snapdragon >>>> A13. Just see the AI benchmarks, which take the entire SoC into account. For GFXBench, it would be necessary to explain why there is so much difference, whereas in the other GPU benchmarks there is not this difference; GFXBench hasn't been updated for more than a year, so for me it is no longer a reference. For web performance, just see the speed tests on YouTube to see that this score is not justified.

jospoortvliet - Monday, December 16, 2019 - link
The best Snapdragon can barely keep up with the A11, as Andrei points out in his analysis. YouTube speed tests are by far the most useless and pointless benchmarks ever devised, which is why not a single reputable source (like AnandTech) ever uses them... Sorry, but the only question here is how much faster the A14 will be. 40%, 50% or even more...
Kishoreshack - Monday, December 16, 2019 - link
Why doesn't Qualcomm simply increase their die size & use a larger die properly to at least come closer to Apple? Maybe it needs more than a larger die size;
it needs a better architecture.
Arm or Qualcomm, whom to blame?
eastcoast_pete - Monday, December 16, 2019 - link
A key problem for smartphones is the power budget. These SoCs already pull 5 W and up if running at full tilt, so even a nicely sized battery (5000 mAh) can be drained in 3-4 hours if someone runs them accordingly. Apple has managed to accommodate high peak/burst performance while still getting good overall power usage, and I still find their battery life wanting.

Quantumz0d - Monday, December 16, 2019 - link
Why do they need to? Apple is only Apple and it only works for them.

If you watch real-world speed tests on YouTube, see how the OP7 Pro flies through tasks, giving the user a faster and smoother experience.
And go to the ScyllaDB website and see how AWS Graviton 2 stacks up against Intel in benches, and how they mention that benches alone should not be taken as a measure.
Apple's OS lacks a file system. It cannot ever be a computer. iOS is a kid-friendly OS. You can't even fucking change the launcher/icons, forget other system-level changes.
Qcomm needs competition from MediaTek, Exynos, Huawei HiSilicon, but except Exynos all are garbage because they do not let us unlock bootloaders. And Android phones get community-driven ROMs, so there is so much choice; even DAPs from 200USD to 3000USD have Qcomm technology.
Repairing is also easier due to the HW boxes which can bring a Qcomm 9008 brick back to life. Whereas with Apple it's a ball-and-chain ecosystem.
I see my SD835 run like butter through everything I throw at it, and it has an SD slot too.
This stupid whiteknighting of Apple processors beating x86 and their use case / Android phones is a big sham. People need to realize benches are not the only case when you compare processors across OSes.
jospoortvliet - Monday, December 16, 2019 - link
A 1995 computer running MS-DOS 6.0 is also butter smooth; I hope you don't think that means an Intel 486 DX4 is faster than an Apple chip.

Please stop with your nonsense about "real world tests". In the real world your 835 has a slower CPU, GPU, and storage. Doesn't mean it is garbage - it is fine that you are happy with it, but it is not your duty to defend the honor of Oppo against facts. I don't want an iPhone either, due to their walled garden, but that doesn't mean I live under the delusion that my brand new Galaxy S10e is anything other than at least 40% slower and twice as inefficient as an iPhone 11...
cha0z_ - Friday, December 27, 2019 - link
Coming from an Exynos 9810 Note 9 to the iPhone 11 Pro Max... the SoC in the iPhone is literally times faster and more efficient than the Exynos. The difference is absurdly big, and people still call Apple slower because of design choices (like the slow animations, etc). It's super smooth in all conditions + it's rofl fast in any app/game (not to mention apps get functions not available on Android). GL running full PC Civilization 6 on Android with decent performance late in the game on a bigger map and decent battery life. There is a reason why the game was not ported to Android (and not only piracy) - it would run poorly even on most high-end current-gen Android phones.

ksec - Monday, December 16, 2019 - link
They could, but are you going to pay for it? Let's say Qualcomm has to bump the price up $50 (inclusive of their profits) to reach the same level of performance; as a consumer you will have to pay roughly $100 more.

In a cut-throat Android market, who is going to risk putting their smartphone price up by $100?
There is a reason why Samsung and Huawei are trying to make SoC themselves, instead of putting those profits into Qualcomm's hand, they want those cost to go towards more die space to better differentiate their product and compete with Apple.
Now here is another question: how many consumers will notice the difference in CPU speed? And how many consumers will notice the difference in modem quality?
These are all sets of trade-offs, not only in engineering, but also in cost, markets, risk... etc...
jospoortvliet - Monday, December 16, 2019 - link
It is a matter of cost. Arm could design a CPU core that is 4 times the size of the A76 and 50% faster, catching up to Apple. But that would cost a lot of die size and thus money... for high-margin, high-cost devices it is OK, but not for cheap ones. Ape can afford this...

jospoortvliet - Monday, December 16, 2019 - link
Ape - I mean apple of course!

cha0z_ - Friday, December 27, 2019 - link
It's not as simple as putting a lot of transistors in it. You can somewhat tackle the problem with that, but by itself it will not lead to the desired end result. I can elaborate, but it would be a lengthy and highly technical post.

Bulat Ziganshin - Monday, December 16, 2019 - link
The SPEC2006 tables show that the A13 has performance similar to x86 desktop chips, which may be considered a revolution. Can you please add the frequencies of the chips (both x86 and Apple) too, at least some estimates? Also, what are the memory configs (freq/CAS/...)? It would also be interesting to see the x86 chips in the individual SPEC benchmarks so we can analyze the weak and strong points of Apple's architecture.

Andrei Frumusanu - Monday, December 16, 2019 - link
The Apple chips are running near their peak frequencies, with some subtests being slightly throttled due to power. The 9900K was at 5GHz, the 3950X at 4.6-4.65GHz, with 3200CL16 memory on the desktop parts.

I added the detailed overview of all chips; here it is again: https://images.anandtech.com/doci/15207/SPEC2006_o...
unclevagz - Monday, December 16, 2019 - link
It would be nice if some contemporary x86 laptop chips could be added to that list (Ryzen/Ice Lake/Coffee Lake...) just for ease of comparison between ARM and x86 mobile chips.

sam_ - Monday, December 16, 2019 - link
Any strong reason for these tests being compiled with -mcpu=cortex-a53 on Android/Linux?

One might expect that for SoCs with 8.2 on all cores there may be some uplift from at least targeting cortex-a55, if not cortex-a75?
When you're expecting to run on a big core, forcing the compiler to target an in-order core which can only execute one ASIMD instruction per cycle seems likely to restrict the perf (insufficient unrolling etc.). It certainly seems a bit unfair for the AArch64 vs. x64 comparison, and probably makes the Apple SoCs look better too (assuming Xcode isn't targeting a LITTLE core by default). It also likely makes newer, bigger cores look worse than they should vs. older cores with smaller OoO windows.
I get not wanting to target compilation to every CPU individually, but it would be interesting to know how much of an effect this has; perhaps this could contribute to the expected IPC gains for FP not being achieved?
Andrei Frumusanu - Monday, December 16, 2019 - link
The tuning models only have a very minor impact on the performance results. Whilst using the respective models for each µarch can give another 1-1.5% boost in some tests, as an overall average across all micro-architectures I found that the A53 model gives the highest performance. This is compared to not supplying any model at all, or using the common A57 model.

The A55 model just points to the A53 scheduling model, so they're the same.
sam_ - Monday, December 16, 2019 - link
Hmm, I took a look at LLVM and the scheduling model is indeed the same for A53 and A55, but A55 should enable instruction generation for the various extensions introduced since v8.0. I can believe that for SPEC2006, the 8.1 atomics/SQRDMLAH/fp16/dot product/etc. instructions don't get generated.

It looks like not much attention has been paid to tweaking the LLVM backend for big cores more recent than the A57, beyond getting the features right for instruction generation, so I can believe cortex-a53 still ends up within a couple of percent of more specific tuning. Probably means there's more work to be done on LLVM.
If it is easy to test, I think it would be interesting to try cortex-a57, or maybe exynos-m4 tuning on the A77, because these targets do seem to unroll more aggressively than the other Cortex-X targets with the current LLVM backend.
I made a toy example on godbolt: https://godbolt.org/z/8i9U5- , though for this particular loop I think the A77 would have the vector integer MLA unit saturated with an unroll by 2 (and is probably memory bound!); still, the other targets would seem more predisposed to exposing instruction-level parallelism.
Andrei Frumusanu - Tuesday, December 17, 2019 - link
I pointed out to Arm that there aren't many optimisations going on in terms of the models, but they said that they're not putting a lot of effort into that, and are instead trying to optimise the general Arm64 target.

I tested the A57 targets in the past; I'll have a look again at things like the M4 tuning over the coming months as I finally get to port SPEC2017.
Quantumz0d - Monday, December 16, 2019 - link
Sigh, another comment on x86 vs the A series. Why don't people understand that running x86 code on ARM will have a massive impact on performance? How do people think a fanless BGA processor with a sub-10W design beats an x86 in the real world just because it has Muh Benchwarrior? There are so many possible workloads: SIMD, HT/SMT, ALU.

Having scalability is also key. Look at how x86 AMD and Intel do it, by making a large wafer and having multiple SKUs with LGA/PGA (AM4) sockets, allowing for maximum robustness.
ARM is all about efficiency and economical bandwidth, and it won't scale like x86 for all workloads. If you add AVX it's dead. And frequency scaling with HT/SMT. Add TSMC N7, which is only fit for mobile SoCs; Ryzen doesn't scale much in clocks because of this limitation.
ARM is always custom per vendor, if you look at it. That's bad. Look at MediaTek's trash no-GPL policy. Huawei as well. Except QcommCAF and Exynos. It's a shame that TI OMAP left.
Andrei Frumusanu - Monday, December 16, 2019 - link
> Why dont people understand running an x86 code on ARM will have a massive impact in performance ?Nobody even mentioned anything regarding this, you're going off on a nonsensical rant yet again. For once, please keep the comments section level-headed.
Quantumz0d - Monday, December 16, 2019 - link
What? It's a genuine point. ARM-based 8-core processor Windows machines like the Surface Pro X can only emulate 32-bit x86 code. 64-bit isn't here, and running under emulation will have an impact (slow). That's what I mean. They need native code to run and rival.

Rant? Benches = real world, right? How come a user is able to see an OP7 Pro breeze through, not lag, and not offer shitty performance vs an iPhone? I saw it with my own OP3 downclocked on the Sultan ROM due to the high clockspeed bug on the 82x platform, and not just me, so many other users. GB scores and benches do not only mean performance, esp in the ARM arena.
Except for bragging rights, this is pure whiteknighting.
joms_us - Monday, December 16, 2019 - link
Right, he even claimed a 2015 Apple A9 is faster than today's Skylake and Ryzen processors. Only a complete !Diot will believe this claim.

Quantumz0d - Monday, December 16, 2019 - link
You should see the AT forum. A thread has been dedicated to discussing this BS fanboyism and the outcome was that Apple won.

Andrei Frumusanu - Monday, December 16, 2019 - link
x86 emulation on Arm has absolutely nothing to do with any topic discussed here or QC vs Apple performance. I'm sick and tired of your tirades here, as nothing you say remains technical or on point to the matter.

The experience I have, when dismissing other aspects such as iOS's super slow animations, is that the iPhones are far ahead of any Android device out there in performance, which is very much what the benchmarks depict.
Quantumz0d - Monday, December 16, 2019 - link
Did I mention anything from your article on QC vs x86? I was replying to a comment on the "revolutionary" performance of the A series vs x86. And then you called my point about x86 on ARM nonsensical.

So, "super slow animations" & "far ahead". What do you mean by that? An iPhone X vs an 11 Pro will exhibit the same launching and loading speed differences as an 835 vs an 855, which can be observed. The EverythingApplePro guy did a massive video of iPhones across multiple A-series iterations, which is the ONLY way a user can see the performance improvement.
But when it's Android vs iOS, you are saying iPhone animation speeds are super slow, yet the benches show a big lead. So how is the user seeing the "far ahead" performance out there with the OP7 Pro vs the iPhone 11 Pro Max? You claim the iPhone is still faster, but in reality the user is seeing the same?
Andrei Frumusanu - Monday, December 16, 2019 - link
Apparently I'm able to say that because I'm able to differentiate between CPU performance, raw performance, and "platform performance".

CPU performance is clear cut in terms of where we're at, and if you're still arguing this then I have no interest in discussing it.
Raw performance is what I would call things that are not actually affected by the OS; web content *is* far faster on the latest iPhone than on Androids, that's a fact. Among these are actual real applications: when Civilization came to iOS the developers notably commented on the performance being essentially almost as good as desktop devices; the performance is equal to x86 laptops or better: https://www.anandtech.com/show/13661/the-2018-appl...
And finally, the platform experience includes stuff like the very slow animations. I expect this is a big part as to what you regard as being part of your "experience" and "reality". I even complained about this in the iPhone 11 review as I stated that I feel the hardware is being held back by the software here.
Now here's what might blow your mind: I can both state that Apple's CPUs are far superior at the same time as stating that the Android experience might be faster, because both statements are very much correct.
Quantumz0d - Monday, December 16, 2019 - link
Okay, thanks for that clarity on raw performance and the other breakdowns like CPU and platform. Yes, I can also see that web performance on the A series has always been faster vs Androids.

I forgot about that article. Good read, and on the Civ 6 port, though it lacks the GFX options. I would also mention that TFLOPS cannot even be compared within the same company. Like Vega 64 is 12TF vs a 5700XT at 9TF, and the latter completely wrecks the former in the majority of cases, except for compute loads utilizing HBM. I know you mentioned FP16 and other aspects of the figure in the opening; just saying, as many people take that figure at face value. Especially the new Xbox Series X and consoles as a whole (they add the CPU into that figure too).
And finally, yes, ARM scales in normal browsing and small tasks vs x86 laptops, which is what the majority of people are doing nowadays (colleagues don't even use PCs), but for higher performance and other workloads ARM cannot cut it at all.
Plus I'd also add that these x86 laptop parts throttle a lot, incl. MacBooks obviously, because they skimp on cooling for thinness, so their consistency isn't there either, just like the A series.
joms_us - Monday, December 16, 2019 - link
When I look at the comparisons here, I look only at Android vs. Android or Apple vs. Apple. Comparing them across different OSes, and more so with primitive tools, is a worthless approach. Firstly, the results need to be normalized; one SoC shows a lead while sucking more power than the other. Secondly, the bloated scores of the Apple SoCs here do not represent real-world results. Most Android phones with the SD855 are as fast if not faster than the iPhone 11.
Andrei Frumusanu - Monday, December 16, 2019 - link
> Comparing them with different OSes and more so primitive tools is a worthless approach.

SPEC is a native apples-to-apples comparison. The web benchmarks and the 3D benchmarks are apples-to-apples interpreted or abstracted, same-workload comparisons.
All the tests here are directly comparable - the tests which aren't and which rely on OS specific APIs, such as PCMark, obviously don't have the Apple data.
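A rough sketch of why a SPEC-style aggregate is platform-agnostic (illustrative numbers only, not the actual harness or timings): each sub-test is timed, turned into a ratio against a fixed reference time defined by the suite, and the overall score is the geometric mean of those ratios. The workload logic is identical on every platform; only the measured times differ.

```python
from math import prod

# Hypothetical sub-test timings in seconds; the reference times are fixed
# by the suite, so the same workload yields comparable ratios on any ISA.
reference = {"gcc": 100.0, "mcf": 120.0, "xalancbmk": 80.0}
measured  = {"gcc":  40.0, "mcf":  60.0, "xalancbmk": 32.0}

def spec_style_score(ref, run):
    ratios = [ref[t] / run[t] for t in ref]       # per-test speedup vs reference
    return prod(ratios) ** (1.0 / len(ratios))    # geometric mean of the ratios

print(round(spec_style_score(reference, measured), 3))  # → 2.321
```

The geometric mean keeps one outlier sub-test from dominating the aggregate, which is why single fast or slow sub-scores don't swing the overall number much.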
> Firstly, the results need to be normalized, one Soc is showing lead while sucking more power than the other.
That's a very stupid rationale. If you were to follow that logic you'd have to normalise little cores up in performance as well because they suck much less power.
joms_us - Monday, December 16, 2019 - link
> SPEC is a native apples-to-apples comparison.

Stop right there, Apple vs. Apple only.
> The web benchmarks and the 3D benchmarks are apples-to-apples interpreted or abstracted, same-workload comparisons.
> All the tests here are directly comparable - the tests which aren't and which rely on OS specific APIs, such as PCMark, obviously don't have the Apple data.
How? Just like Geekbench, different compilers are used. Different distribution of loads are made.
My Ryzen 2700 can finish 5 full GB runs as fast as one full GB run on an iPhone, and yet the single-core score of the iPhone is higher than any Ryzen. You are showing the Apple A13 (LOL, the A13 is faster than the fastest AMD or Intel chip) using the Jurassic SPEC benchmark?
Talk about dreams vs. reality.
> That's a very stupid rationale. If you were to follow that logic you'd have to normalise little cores up in performance as well because they suck much less power.
We are talking about efficiency here, your beloved Apple chip is sucking twice the power than SD855 or SD865 per workload.
Have you ever loaded a consumer website or run a consumer app on these phones side-by-side? Don't tell me they are not using CPU or memory resources. They are; they are doing most if not all of the workloads on the charts here. While your chart shows Apple having twice the performance vs the SD865, the phone doesn't tell lies. A bloated benchmark score does not translate to real-world results.
It is time to stop this worthless propaganda that Android SoCs are inferior to Apple's, and the laughable IPC king claim (an iPhone chip faster than desktop processors).
Until iPhone can play Crysis smoother than even low end laptops, this BS claim that it is the fastest chip should stop.
Quantumz0d - Monday, December 16, 2019 - link
Agreed.

It really feels like propaganda; in every single CPU article Apple gets the super limelight because of these benches, on a closed walled-garden platform from OS to HW to repair.
The power consumption of A-series processors deteriorating the battery was nicely swept under the rug by Apple's throttling BS. They even added the latest throttle switch for the XS series. But yeah, no one cares. Apple's deep pockets put top lawyers at their disposal to manipulate everything.
The consumer app part is the perfect use case, since we never see any Android phones lag despite the implied 2-3x dominance of the A series; in real life nothing is observable. And comparing that to x86 desktop machines with a proper OS and computing use cases like Blender, V-Ray, MATLAB, compilation, MIPS of compression and decompression, decoding/encoding, superior filesystem support, socketed/standardized HW (PCIe, I/O options), virtualization and gaming, DRAM scaling choice (the user can buy whatever memory they want, or any HW, obviously)... this whole thing screams BS. It would be better if the benches were highlighted with a note that real work might differ, but that's not the case at all.
The worst is the spineless corporate agenda of allowing the Chinese CCP to harvest every bit from their cloud data center in China, enabling subversion and anti-liberty. A.k.a. anti-American principles.
Andrei Frumusanu - Monday, December 16, 2019 - link
You forgot I'm a member of the Illuminati, half mole-people from my dad's side and half lizard-man from my mother's side. I love my monthly deep state paycheck alongside the Apple subsidies I get for spreading their narrative. Wait till people find out the earth is really flat.
Quantumz0d - Monday, December 16, 2019 - link
LOL. The lawyer manipulation is for their class actions, the KB fiasco, Touch Disease, Error 53... not you (just clarifying), and idk if you know Louis Rossmann on YT. If not, I suggest watching to learn how the fleecing is done and how the consumer is always kept in the dark. The revelations of their stranglehold on HW IC chips supplied to repair services, and their lobbying against repair, are enough to understand and gauge the fundamental pillars of a company and its ethics.

Sorry, I take ethics and choice/liberty into account over utopian performance and an elitist/luxury status quo stance.
Andrei Frumusanu - Monday, December 16, 2019 - link
I pleaded with you to not go into tangential rants for this article again, yet here we are.
Andrei Frumusanu - Monday, December 16, 2019 - link
> How? Just like Geekbench, different compilers are used. Different distribution of loads are made.

Please explain to me what the hell "different distributions of loads are made" is meant to mean? You have zero technical rationale behind such statements. All the comparisons here were made with the Clang/LLVM compilers on all platforms - bar the ISA, there is exactly zero difference in the workload logic between the platforms, and Apple's toolchain isn't doing something completely different either that it would suddenly invalidate the comparison.
> You are showing Apple A13 (LOL A13 is faster than the fastest AMD or Intel chip) using Jurassic Spec benchmark?
Yes I am because that is the reality of the matter.
> We are talking about efficiency here, your beloved Apple chip is sucking twice the power than SD855 or SD865 per workload.
And it's finishing the workload more than twice as fast, ending up being *almost* as efficient in terms of the energy used by the computation. What matters here is energy efficiency, not power efficiency, and in this regard Apple's devices are top of the line.
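The energy-vs-power distinction can be made concrete with a quick calculation (hypothetical figures for illustration, not measured values): energy is power integrated over time, so a chip drawing twice the power that finishes more than twice as fast consumes less total energy for the same task.

```python
# Hypothetical workload figures, chosen only to illustrate the arithmetic.
power_a, time_a = 4.0, 10.0   # watts, seconds: higher power, finishes sooner
power_b, time_b = 2.0, 22.0   # watts, seconds: half the power, much slower

energy_a = power_a * time_a   # joules consumed for the whole workload
energy_b = power_b * time_b

# A draws 2x the instantaneous power yet uses less energy overall,
# because it races to completion and can go back to idle.
print(energy_a, energy_b)     # → 40.0 44.0
assert energy_a < energy_b
```

This "race to idle" arithmetic is why comparing peak power draw alone, without the completion time, says nothing about battery impact.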
> While your chart if showing Apple has twice the performance vs SD865, the phone doesn't tell lies.
What's even your point here? Of course the iPhones are significantly faster in loading webpages?
Return here when you have an actual factual argument to present, because right now you have just been repeating complete nonsense.
joms_us - Monday, December 16, 2019 - link
> Please explain to me what the hell "different distributions of loads are made" is meant to mean? You have zero technical rationale behind such statements. All the comparisons here were made with the Clang/LLVM compilers on all platforms - bar the ISA, there is exactly zero difference in the workload logic between the platforms, and Apple's toolchain isn't doing something completely different either that it would suddenly invalidate the comparison.

The compiler may be the same, but the scheduler of tasks in Android and Windows is different than in iOS. Many background apps are running simultaneously on Android and Windows machines; how about iOS? Frozen apps? LOL
>Yes I am because that is the reality of the matter.
Only matters to you, not in the outside world. If you really think the A9 has better IPC than Ryzen or Skylake, why don't you join the Apple engineers and build the fastest gaming/productivity PC with the Apple A9 chip and sell it like hotcakes? No? Can't be done? Even Apple doesn't claim their SoC is faster than even a low-end desktop today, LOL. Even milking the customers with overpriced Macs with "Intel" inside.
> And it's finishing the workload than twice as fast, ending up being *almost* as efficient in terms of the energy used by the computation. What matters here is the energy efficiency, not the power efficiency, and in this regard Apple's devices are top of the line.
What matters is how fast it can finish the whole task, not each micro-workload nonsense. If I want to zip and upload a file, or encode and upload a video, I only care how fast it will finish the whole task. If I want to play games, do I care how fast the damn phone computes the vectors, pixel locations, math operations etc.? I only care how elegant, smooth and fast the gaming experience will be.
The iPhone is not twice as fast at loading any web page, any consumer app, or even at exporting or transcoding videos. Different apps yield different results; you are showing one worthless primitive benchmark where the iPhone is fast, but out there, hundreds or thousands of different apps and websites show the opposite result.
Here is one or two for you, one is showing twice the performance over the other =D
https://youtu.be/ay9V5Ec8eiY?t=529
https://youtu.be/DtSgdrKztGk?t=432
Andrei Frumusanu - Monday, December 16, 2019 - link
> the scheduler of tasks in Android and Windows are different than in iOS.

The scheduler isn't any different, because the scheduler doesn't do anything when there's only a single thread on a core to be run. There is literally no scheduling.
> If you really think A9 has better IPC than Ryzen or Skylake
Correction, I don't really just think it, I know it.
> What matters is how fast it can finish the whole task not each micro-workload nonsense.
The whole SPEC suite takes exactly an hour to complete, so quit with the micro nonsense if you have no idea what's even being tested here.
> Here is one or two for you, one is showing twice the performance over the other =D
Both phones don't even use the freaking CPU when transcoding videos - they're both offloaded using the dedicated fixed function video encoders much like you can offload encoding on desktop PCs to your GPU's encoders, instead of doing it inefficiently on the CPU.
You have absolutely ZERO understanding of what's going on here.
joms_us - Monday, December 16, 2019 - link
> The scheduler isn't any different, because the scheduler doesn't do anything when there's only a single thread on a core to be run. There is literally no scheduling.

Then the SoC is not maximized but underperforming.
> Correction, I don't really just think it, I know it.
Sure you do. Now where is the fastest processor on this planet? Where is our A9-powered gaming PC LOL.
> The whole SPEC suite takes exactly an hour to complete, so quit with the micro nonsense if you have no idea what's even being tested here.
Just goes to show how primitive your tool is. 2020 is just around the corner, yet here you are still using a 2006 tool. This is like claiming Wolfdale is faster than Ryzen because it can finish a 1M SuperPI run faster LOL.
Dug - Monday, December 16, 2019 - link
You really don't have any argument because you really aren't sure what you are talking about.
joms_us - Monday, December 16, 2019 - link
Am I, or you? Isn't it clear that the SPEC result does not translate to the real world? Where is the double performance shown here? Show us proof that the iPhone has twice the performance; I've posted links with two Android phones decimating the iPhone 11.

Sure, you can claim all day that the iPhone is the fastest phone via SPEC, LOL. I'd rather see it translate to actual performance, not imaginary numbers.
cha0z_ - Monday, December 23, 2019 - link
You clearly have no idea what you are talking about. Dunno why Andrei dedicated so much of his time trying to explain to you in primitive language what's going on (so you can understand).
ThreeDee912 - Monday, December 16, 2019 - link
I feel you Andrei. I'm sitting here facepalming at these comments. I think a lot of people truly do not understand what SPEC was designed for or how energy efficiency works.
joms_us - Monday, December 16, 2019 - link
To an average Joe or Jane, SPEC is a worthless basis of comparison. You can tell the sheep his phone has the fastest SoC on the planet and he will prolly believe you.

If you can show an iPhone finishing a bunch of tasks in half a day that take an Android phone a whole day, then I will believe you that the iPhone has twice the performance versus the competition. But if you are just showing a nanosecond difference between two phones and a thousand-point difference in benchmark scores, then keep your palm on your face. =D
s.yu - Tuesday, December 17, 2019 - link
I think Andrei has made it clear enough, perhaps not for you, but then Anandtech is not the site for you. Go visit Engadget or something, you'll fit right in.
jospoortvliet - Monday, December 16, 2019 - link
Same here. 🤦♀️🤦♂️🤦♀️🤦♂️
joms_us - Monday, December 16, 2019 - link
You must have spent thousands of dollars on expensive phones because the SPEC result is higher on those phones? LOL. You buy them to run SPEC? LOL
milli - Monday, December 16, 2019 - link
I remember reading an article a couple years ago where it was mentioned that a couple of key BitBoys staff members left the company. The writing has been on the wall for years, and recently Adreno architectural development has slowed to a halt.
trivik12 - Monday, December 16, 2019 - link
While Apple's cores are faster, Android flagships will come with shitloads of memory, so when it comes to daily tasks they will still keep pace. The S11+ will supposedly start at 12GB of LPDDR5 RAM vs 4GB for Apple flagships.

At this point performance is not the issue for these Android flagships considering the workloads of a mobile phone. I would prefer them to make it more efficient, working with Google at the OS level. The iPhone's big advantage is how efficient it is relative to the battery size of the phone. Key metrics are web browsing on WiFi and LTE, plus video playback (streaming on Netflix).
NetMage - Friday, December 27, 2019 - link
The iPhone is also efficient in RAM usage - native code versus JIT bytecode means iOS needs 1.5x to 2x less RAM than Android.
cha0z_ - Friday, December 27, 2019 - link
As already said, iOS is a lot less RAM-hungry and efficient. 4GB is quite enough, plus most Android phones with a lot of memory love to drop apps from there too. Not to mention that you will not notice the speed difference until you try to do something demanding... and buying a phone for 1k euro just to browse FB is a bad buying decision anyway (for anyone except those who have money to burn, ofc).

But you will notice the efficiency difference. My iPhone 11 Pro Max will last twice or more as long as the Exynos Note 9 I've got in light workloads. The same iPhone will last 3x+ longer in heavy workloads while giving smooth and fast performance/gaming, contrary to the Note 9.
quiksilvr - Monday, December 16, 2019 - link
I will wait until they develop later processors with 5G built in.
gagegfg - Monday, December 16, 2019 - link
Qualcomm works for Android, so Apple's competition doesn't generate much trouble, just embarrassment. It seems the latter doesn't motivate them much, haha.
Drumsticks - Monday, December 16, 2019 - link
This comment might be too late to be seen, but is there any chance we can see the power use for the Zen 2/SKL (ICL?) based devices on the SPEC charts as well? It might be off by a lot, but I'm curious how they compare to the mobile SoCs. If they're too high because they're desktop chips intended for higher TDPs where maximum efficiency isn't needed, maybe it's worth throwing in an Ice Lake-U number as well as a Zen 2 mobile chip when they come in.
Andrei Frumusanu - Monday, December 16, 2019 - link
We never measured it accurately; the corresponding platform power for those desktop chips is generally going to be in the 30-40W range or even higher. The laptop platforms are also going to be in the 10-15W range.
Drumsticks - Monday, December 16, 2019 - link
Fair enough; thanks for the reply! There is an awful lot more platform-related stuff that isn't optimized on desktop, I kind of derped on that.

Thanks for your work on the article, too. I really enjoy your writeups and sympathize with an author who stays as engaged as you do with *every* commenter.
TEAMSWITCHER - Monday, December 16, 2019 - link
The ads on this site are obnoxious.
TheinsanegamerN - Tuesday, December 17, 2019 - link
-Not using an adblocker in 2019....
GH-CC - Monday, December 16, 2019 - link
I'm not an engineer, not much knowledge, but one part in this article really concerns me:

"Generally, we’ve noted that there’s a discrepancy factor of roughly 3x. We’ve reached out to Qualcomm and they confirmed in a very quick testing that there’s a discrepancy of >2.5x. Furthermore, the QRD865 phones this year again suffered from excessive idle power figures of >1.3W."
Does this mean that compared to the SD855, the SD865 consumes more power when idling?
Also, was this test conducted with internet connection or not?
Andrei Frumusanu - Monday, December 16, 2019 - link
The above quote is only valid for the QRD865; a similar thing happened with the QRD855 test devices. It's not a concern for final commercial devices, so nothing to worry about.
assyn - Monday, December 16, 2019 - link
Seems like Apple is basically untouchable. Even an old A11 humiliates the top Android SoC.. :D
joms_us - Monday, December 16, 2019 - link
Humiliating by what? Some imaginary worthless bloated benchmark scores from a primitive tool that don't translate to the real world? For the last 2 years Apple has been the one catching up in any side-by-side comparison out there.
s.yu - Tuesday, December 17, 2019 - link
There are countless shallow and useless arguments to be made from your standpoint. For example, you could argue that turning system animations off "slows down" the "real world experience", because without the animations filling in for the latency, "the average Joe and Jane" perceive "real world" lags/stutters which in reality take less time than playing the animation takes, i.e. it is faster, not to mention a decrease in the load on the GPU.
Sam6536 - Monday, December 16, 2019 - link
Where are the ROG Phone 2 benchmarks? Not taking the most powerful Android phone into consideration in this test isn't fair.
joms_us - Tuesday, December 17, 2019 - link
How the hell is the Apple A9 faster than Ryzen or Skylake if the A13 is pathetically slower in this comparison, and not even close to the double performance shown in SPEC?

https://cdn57.androidauthority.net/wp-content/uplo...
Makes me wonder if somebody is drinking the Kool-Aid here?
diehardmacfan - Tuesday, December 17, 2019 - link
Ahhh yes, poo-poo an industry-standard benchmark like SPEC for SoC benchmarking in an article about an SoC, then link to a device performance test developed by AndroidAuthority.

Andrei, your patience with idiots is astounding.
Nicon0s - Tuesday, December 17, 2019 - link
@diehardmacfan What exactly is wrong with Speed Test GX 2.0? And it wasn't developed by Android Authority.

The SD 865 completed a bunch of real-world CPU-related tasks faster than the A13. This makes this "industry standard benchmark like SPEC" quite irrelevant for somebody interested in buying a smartphone, because in actual usage the A13 doesn't present any real performance advantage.
Also, in the GPU test the SD 865 was only slightly behind even though it pushed more pixels.
If I were only interested in buying a smartphone in order to run SPEC and the GFXBench Aztec Ruins off-screen benchmark all day long, then the iPhone 11 would be my number one pick.
For anything other than that I don't see any real and tangible performance advantage.
This Anandtech performance analysis seems disconnected from the real-world experience of using such high-end devices. Android sites do a better job analyzing the experience and significance of the performance of these mobile SoCs and what it actually means for smartphone users. For example, XDA has a really nice benchmark where they test the overall fluidity of certain smartphones. This tests both the OS optimizations and SoC performance.
joms_us - Tuesday, December 17, 2019 - link
Excellent point. I am sick and tired of this propaganda to uplift an Apple product just because it shines in one or two primitive and biased benchmarking tools when thousands of other apps say otherwise.
s.yu - Tuesday, December 17, 2019 - link
May I interest you in some rhino horn powder claimed by thousands of traditional Chinese witch... I mean doctors, to enlarge your penis?
s.yu - Tuesday, December 17, 2019 - link
In short: poor validity and poor reliability. There's nothing particularly useful about that test. It generates mixed, or rather obfuscated, scores correlating to an unknown extent with UI design choices, certain drivers, and hardware performance.
This is somewhat metaphysics, and has no place in science.
cha0z_ - Friday, December 27, 2019 - link
That test is fun and great, but totally not representative of anything. Taking it seriously is not serious. :)
MetaCube - Tuesday, December 17, 2019 - link
How are you still not banned ?joms_us - Tuesday, December 17, 2019 - link
Ah, a poor man's attempt to hide the truth. I feel sorry for those buying a phone (even replacing a desktop) because they see it passing with flying colors in SPEC.
Andrei Frumusanu - Tuesday, December 17, 2019 - link
You're just a blabbering idiot. You keep pulling things out of your ass; nobody ever said the A9 is faster than Ryzen or Skylake, I dare you to find a quote or data that says that. The A13 was the first to *match* them.

The test you quote isn't ST like the SPEC results, and it's not even a full CPU test as it has API components.
joms_us - Tuesday, December 17, 2019 - link
Ahh the irony... Let's see who is the blabbering !d!ot here. You reminded us of who the IPC gorilla is...
https://twitter.com/andreif7/status/11569659188089...
There it shows the A13 and even the A9 stomping the latest and greatest Ryzen and Skylake processors.
But then when you compare the A13 versus the Android SoC in various apps and websites, it is the complete opposite.
I respect you because you have excellent knowledge in what you do, but it all goes down the toilet drain once your critical thinking is subpar and you are shadowed by your ego, thinking that yours and only yours is the truth. I would not hesitate to hire you as my design engineer, really, but you have to back your claims with facts. When you state one is the fastest (especially by a huge margin), it has to be reflected in any test that you throw at it.
I would rest my case if you can convince Lisa or Bob that their processors are mediocre compared to Apple's latest SoC LOL.
Andrei Frumusanu - Tuesday, December 17, 2019 - link
That tweet is about the IPC of the microarchitectures, not absolute performance.

You literally have absolutely not a single whim of understanding of what's going on here and keep making a complete utter fool of yourself repeating lies. All you see is one bar graph being bigger than another, and suddenly that's your whole basis for the truth of the world.
The actual engineers and architects in the industry very well know where they lie in relation to what Apple's doing; I don't need to convince anybody.
joms_us - Tuesday, December 17, 2019 - link
No, you just told the whole world that the fastest chip on the planet is the Apple SoC. A chip with great IPC will give a great performance result, right? Your graph is telling us a 1GHz A12X core is equivalent to a 2GHz Ryzen core, which is utter BS. When AMD or Intel announce that their next processor has a 20% IPC improvement, it shows in any tool/benchmark or app you throw at it, not the opposite.

Your test methodology/tools are completely flawed and outdated as they don't translate to real-world results. They are great though if you are comparing two similar platforms.
Andrei Frumusanu - Tuesday, December 17, 2019 - link
> No, you just told the whole world, that the fastest chip on the planet is the Apple SoC

I did not. High IPC doesn't just mean it's the fastest overall. AMD and Intel still have a slight lead in overall performance.
> A chip with great IPC will give great performance result, right?
As long as the clock rate is also high enough, yes.
> Your graph is telling us, a 1Ghz A12x core is equivalent to a 2Ghz Ryzen core
That's exactly correct. Apple currently has the highest-IPC microarchitecture in the industry by a large margin.
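The arithmetic behind the IPC argument can be sketched with illustrative numbers (these are not the measured counter values from the tweet): throughput is the product of IPC and clock frequency, so a wide core at a low clock can match a narrower core at twice the clock.

```python
# Illustrative numbers only: perf = IPC (instructions/cycle) x frequency (GHz),
# i.e. billions of instructions retired per second.
def perf(ipc, ghz):
    return ipc * ghz

wide_core   = perf(ipc=5.0, ghz=1.0)  # wide, low-clocked design
narrow_core = perf(ipc=2.5, ghz=2.0)  # narrower design at double the clock

# Same throughput despite running at half the frequency.
assert wide_core == narrow_core
print(wide_core)  # → 5.0
```

This is why an IPC comparison normalizes out the clock: it measures how much work the microarchitecture extracts per cycle, not which chip is fastest overall.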
> which is utter BS.
The difference between you and me is that I actually have a plethora of data to back this up: actual instruction counts from the performance counters, actual tests that show that Apple's µarch is in fact 50% wider than anything else out there.
You are doing absolutely nothing but spewing rubbish comments with absolutely zero understanding of the matter. You have absolutely nothing to back up your claims about flawed and outdated methodologies, while I have the actual companies who design these chips agreeing with the data I present.
arsjum - Wednesday, December 18, 2019 - link
Andrei,

As a member of Anandtech staff, you should be better than this. This is not an XDA forum.
Come on.
LordConrad - Tuesday, December 17, 2019 - link
Now if Samsung could just increase the anemic L2 cache. I want 1MB per A7x core and 512KB per A5x core.
yankeeDDL - Tuesday, December 17, 2019 - link
It is truly disappointing that Android HW needs to run on SoCs with the performance of iPhones 3-4 generations older. I really don't understand, with all the demand there is, why nobody comes up with something at least within range of Apple's SoCs.
Wilco1 - Tuesday, December 17, 2019 - link
You mean 2 generations behind at most on SPEC. And while technically interesting, it remains debatable how much that actually matters in actual phone use (where a fast SSD, download speeds and a lot of memory can help more). As well as having ~20% better power efficiency, of course.

It would be relatively easy to quadruple L2 to 1MB, L3 to 8MB and the system cache to 16MB and get a ~20% performance gain on SPEC. The area would be much larger and hence the cost of the SoC, which would add to the cost of phones. QC's competitors would be happy to increase their market share with far cheaper SoCs which are equally fast in real-world usage.
yankeeDDL - Tuesday, December 17, 2019 - link
No, I was looking at the web benchmarks. All of them are miserable compared even to the iPhone X. And web browsing is certainly a key part of the mobile experience.
joms_us - Tuesday, December 17, 2019 - link
And yet in real-world comparisons, the iPhone is faster only on a handful of sites (excluding apple.com LOL) by milliseconds. Does that look miserable to you? =D
Nicon0s - Tuesday, December 17, 2019 - link
Yes, and have you actually seen an Android flagship that performs noticeably worse than an iPhone (any iPhone) in web browsing? Because I sure haven't.

What I have seen is Android phones with better connection speed and better reception in general, especially in crowded places like concerts, stadiums, subways etc. In those places the performance of the iPhone X was actually 0 in many instances because there was no signal. If we are talking about the mobile experience, let's not ignore the modem.
s.yu - Tuesday, December 17, 2019 - link
"If we are talking about the mobile experience let's not ignore the Modem."

And there are connectivity tests for that, although controlling variables in a connectivity test is almost impossible outside of lab conditions.
Ultimately these are separate tests and test entirely different aspects. You're free to test both, but if you try to test them simultaneously outputting one single result, you will obtain one worthless data point.
Wilco1 - Tuesday, December 17, 2019 - link
Synthetic browsing benchmarks depend highly on software implementation and tuning, so they are not useful to compare CPU performance.
yeeeeman - Tuesday, December 17, 2019 - link
A bit MEH...
Alistair - Tuesday, December 17, 2019 - link
The Android fanatics are out tonight. I only buy Android phones, period. Clearly Apple's CPUs are miles ahead in performance. Anyone use the latest iPad Pro? It's faster than any Windows laptop I've bought or used personally. My boss gave us the latest and greatest dual-core HP ultra-slim laptop, and I immediately replaced it with a Ryzen 8-core computer and said "I don't use laptops for real work". We don't need benchmarks to tell us how fast Apple's CPUs are (though Andrei's benchmarks are perfectly valid); it is immediately apparent when comparing vs. similarly clocked Intel products. The reasons Ryzen and Intel seem great right now are high clocks and many cores. Run your 9900K in 2-core mode at 2.6 GHz and squirm at its low speed. That's what the iPhone uses, a low-clocked dual core. Put 8 together, run them at 4 GHz, and you have a monster.
id4andrei - Tuesday, December 17, 2019 - link
You're forgetting a stripped down mobile OS and relatively stripped down mobile apps that are part of the speed equation.
Alistair - Tuesday, December 17, 2019 - link
that has nothing to do with it, that's what I'm saying, that's what Andrei is saying
Alistair - Tuesday, December 17, 2019 - link
the other Andrei lol
Alistair - Tuesday, December 17, 2019 - link
go into your BIOS and run your Intel computer in dual-core mode at 2.6GHz, and come back and tell me it is fast...
id4andrei - Wednesday, December 18, 2019 - link
The software running on these platforms is not identical. Some of it depends on CPU extensions that are not equivalent between platforms. A dual-core 2.6 GHz Intel chip will run Win10 slower than an iPad Pro runs iOS. But you could find a Linux distribution and some OSS apps that would run very fast on that 2.6 GHz dual-core Intel.
Quantumz0d - Tuesday, December 17, 2019 - link
LOL. You win the most stupid comment here, congrats. "Android fanatics" - so you are an Apple sheep, I guess.
A Sandy Bridge OCed to 4 GHz+ still keeps up with a 1080 Ti without any issues. That shows how Intel milked its monopoly until Ryzen caught up. And you are crippling an x86 LGA-socket processor to a 2.6 GHz dual core to compare with an iPad Pro? In what use case? What is the ultimate goal here? Let's disable all the Lightning and Thunder cores and run one Lightning then (you can't do it anyway, since Apple is the overlord here). What the actual fuck, lmao. Also, magically slapping in more A13 cores means x86 Intel and AMD are dead? Haha, you think this is making a sandwich at home? I think you never heard of SPARC or IBM POWER; go read and get your mind blown on threads, but do not compare that to x86 or Apple A-series "alien technology", please. iOS cannot even process a zip file extraction, nor a config file for a VPN. That alone reduces the whole A-series "king" to ashes, as it's not used in a real computer at all. A pseudo filesystem and a fake file manager app don't make it a proper OS. Unfortunately, Android is also following the same misguided path, thanks to the Apple disease at Google, which is emulating it with the abomination called the Scoped Storage disaster.
Let me tell you a secret: the laptops you used are all garbage; they are cut-down, bottom-barrel silicon from failed desktop chips and so on. The age of rPGA Intel is over (the last XM is the 4930MX, a true binned mobile chip like the K series), thanks to the BGA greed of Apple infecting Intel for max profits, with people subverted into using BGA/soldered, throttling, thin-and-light, planned-obsolescence hardware.
Let's run Cinebench on your beloved processor then, or POV-Ray, or an H.264 transcode. Or how about we play PUBG and stream it at 1080p highest quality?
This is why I see the x86 vs. ARM talk as irrelevant: AT articles are often quoted to prove IPC and SPEC scores but completely ignore scalability, compatibility, legacy code, the HW market, etc., when the compute workloads, OS, software code, and HW are entirely different worlds. Like comparing a jet fighter to a jet ski.
Alistair - Tuesday, December 17, 2019 - link
Nice rant. Might want to read my comment before going off like a crazy person? I said I only buy Android phones - right there in the first sentence. Before you blindly state how amazing x86 CPUs are, as I said, run your Intel CPU in dual-core mode at 2.6 GHz and compare how slow it is vs. the A13, which is also 2 power cores at a low frequency. That's what IPC is. I can't understand why people get so triggered about saying Apple has the highest IPC in the industry. It's a simple fact, and I just have to assume you don't know what you are talking about. Andrei's articles always seem to attract the most illiterate part of the internet.
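The IPC argument above is just back-of-envelope arithmetic: throughput scales roughly with IPC x clock x cores. Here is a minimal sketch of that reasoning; all numbers are illustrative assumptions (the IPC figures and clocks are made up for the example, not measured values from either chip).

```python
# Back-of-envelope throughput model: performance ~ IPC x clock (GHz) x cores.
# All inputs below are illustrative assumptions, not measurements.

def relative_perf(ipc: float, ghz: float, cores: int) -> float:
    """Crude single-number throughput estimate."""
    return ipc * ghz * cores

# Hypothetical desktop chip restricted to 2 cores at 2.6 GHz,
# vs. a mobile chip at the same core count and clock but with
# an assumed ~60% IPC advantage.
desktop = relative_perf(ipc=1.0, ghz=2.6, cores=2)
mobile = relative_perf(ipc=1.6, ghz=2.6, cores=2)
print(mobile / desktop)  # 1.6: at equal clocks and cores, IPC is the whole gap

# The desktop chip wins back the lead with clocks and cores, not IPC:
desktop_full = relative_perf(ipc=1.0, ghz=4.0, cores=8)
print(desktop_full / mobile)  # ~3.85 under these assumed numbers
```

The point of the sketch is only that "run it at 2 cores and 2.6 GHz" isolates the IPC term: once frequency and core count are equalized, whatever performance gap remains is IPC.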
markol4 - Monday, December 23, 2019 - link
A13 IPC is superior to Intel's or AMD's, but the Apple CPU core is huge in terms of transistor budget. IIRC, in A11 times the Apple CPU core had at least 2x more transistors than an x86 CPU core. Considering that, there is no surprise that Apple has higher IPC.
imaheadcase - Tuesday, December 17, 2019 - link
They might as well just make a chart and put "BUZZWORDS" on it at this point. That is what is so silly about the current state of smartphones. There is so much they could cram into one... but they rarely do, or if they do, it's crippled by terrible software.
Google is already facing so much criticism over the Google Photos bullshit; months in, and they just say "we are aware of it". lol
imaheadcase - Tuesday, December 17, 2019 - link
I don't think I'll ever understand how Google can take a product that works great, then release a dozen updates with release notes that say "fixes bugs" and completely bork it for users. I mean, what is the fucking incentive? Oh, and because everything Google is so entwined with other software, for whatever reason, it fucks up other things. So now you try to figure out which software is the original culprit. This shit never ends with Google.
Nicon0s - Wednesday, December 18, 2019 - link
Do you know what's amusing? The vast majority of people reading this article don't really understand what those numbers shown by SPEC actually mean, or what they represent for the functionality of the phone. Most only copy-paste things from the article that they like, especially the parts where it's mentioned that the Qualcomm chips are "years" behind. So it doesn't matter how the phone runs and how fast it can execute real-world tasks.
One thing I don't understand is if I would buy a Galaxy S11+ instead of an iphone 11 Pro Max what will I be missing in terms of performance? What specific advantage would the A13 SOC give me because "it's years ahead" in performance?
cha0z_ - Friday, December 27, 2019 - link
My second-hand iPhone 6s runs super smooth, including in heavy, beautiful games - no fps drops or performance issues. Doing so on a Galaxy S6 (the competing model from the same year) is not possible. This is where the powerful SoC shines: years down the road, when the phone stays relevant and a pleasure to use. Of course you won't have that with your S11+, as Samsung supports their phones fully for only two years (the 3rd is security updates only). Really, they drop the phone as a full/serious support target even before the first year is over - this is what happened with my Note 9: no attention at all, just quick security updates and no interest beyond that. After all, they put the capable engineers to work only on the new upcoming phones. Apple FULLY supports their phones for 5-6 years, plus they released a security update this summer for the iPhone 5 and 4s - 2011 and 2012 models. Any Android phone from those years receiving a security update?
Also, you can play the full PC Civilization 6 game on the iPhone. There is no Android port, and not only because of piracy, but because on later turns/bigger maps the Android SoCs would choke and the wait time between turns would be unbearable. I can also list a lot more games exclusive to iOS, many because of performance. Dead Cells, for example - keeping in touch with the devs, the mobile port team (it's outsourced) struggles BIG TIME with performance on Android, thus massively delaying it.
tranceazure1814 - Tuesday, December 17, 2019 - link
So the main point I want to know: is it worth upgrading to a Snapdragon 865 over a Snapdragon 855? And while we're on the subject, does the Mi Mix 3 5G have fewer LTE bands than the standard Snapdragon 845 Mi Mix 3?
sweetca - Wednesday, December 18, 2019 - link
Many comments, likely authored by data-driven nerds (similar to me), are doing their best to ignore the facts: years ago, Apple took the performance crown, and it still wears it today. Inevitably, one day, someone will usurp Apple's position, but that day does not appear to be soon. Every comment which offers an explanation or justification as to 'why' Apple holds the top position intrinsically agrees that it does.
Lolimaster - Wednesday, December 18, 2019 - link
So QC did the same as Samsung: just add vanilla ARM cores to their SoC. All these years with "custom" cores for almost zero gain, but tons of problems in certain gens.
MagicMonkeyBoy - Thursday, December 19, 2019 - link
I bought an iPhone 11 Pro Max. I took the thing back. The A13 is way overrated. I don't believe these benchmarks. The 865 is better. PUBG is not good on the iPhone 11 Pro Max: the previous iPhone allowed you to set the graphics settings higher than the settings available on the iPhone 11 Pro Max. Also, the RAM management is appalling.
When playing PUBG longer than 10 minutes, the phone heats up to 50 degrees Celsius. Hot 🔥.
Then it gets worse. Four out of six cores shut down. Throttling.
And the quality of the game just deteriorates.
The A13 is actually only a 5% increase over the A12.
Using other benchmarks, such as the truly cross-compatible Speed Test G, actually reveals that the 865 beats the A13.
https://www.androidauthority.com/snapdragon-865-be...
Gary Explains is a better comparison, and more accurate.
I am now waiting for an 865 handset.
These tests seem like some sort of laboratory test instead of a real-world test.
The SoCs have been designed knowing what kinds of peripherals are attached.
Amazing... when actually using the iPhone 11 Pro Max, you guys make me laugh. For something with such high statistical measurements in comparison to other SoCs, it only makes the A13 look even more foolish.
Take an 855+: when I use a Realme X2 Pro with the same apps that were on my iPhone 11 Pro Max, the Realme X2 Pro with its 855+ processor absolutely runs circles around the iPhone 11 Pro Max.
For something supposedly so high in specs, that just makes the phone seem even more confounding. And even more humiliating.
When it comes to gaming: the 855+, or definitely the 865...
The A13 in the iPhone 11 Pro Max is to be avoided by heavy gamers.
And we all know that the flagship Snapdragons make better processors for gamers, which require optimal CPUs.
Sorry, but as a gamer, these benchmarks are not accurate or realistic at all. More like a laboratory benchmark.
I used to design chipsets and pcbs. At a discrete government laboratory. These benchmarks have a huge amount of discrepancies.
The 865 is overall actually a better SoC than the A13. It is way more dynamic than the A13.
joms_us - Thursday, December 19, 2019 - link
Finally, someone with great understanding and experience of how to properly rate a phone. They don't realize the CPU alone cannot function properly without the help of other modules or components. The iPhone 11 is like a PC with an i7-9900 + GT 2060 + 3GB DDR4-2400, while an Android phone is like a Ryzen 3900X + GTX 2080 Ti + 2x4GB DDR4-3200.
The Garden Variety - Thursday, December 19, 2019 - link
"I used to design chipsets and pcbs. At a discrete government laboratory. These benchmarks have a huge amount of discrepancies."
Even taking into consideration the rapid decline of comment quality on AT, this... is next level. Kudos, MagicMonkeyBoy, may your crazy never burn out. A++, would read again.
cha0z_ - Friday, December 27, 2019 - link
For a start, the RAM management was an OS bug that was fixed in the iOS 13.2.x release and certainly in iOS 13.3, which is current. Secondly, the missing GFX option is because the DEVELOPER didn't update the game for the new iPhone. I had the same problem with the main game I play, Vainglory. While the screen is on paper the same as the XS Max's (in resolution and size), the game UI was horribly buggy, and it stayed like that for 2 months until they released an update for that model. Of course you will not have problems in every app, or such a slow reaction from every dev - but it happens. It goes without saying that after those first few months you will never have such problems in that phone's lifetime, even if it's 5-6 years.
Third: taking Speed Test G seriously is a lol thing to do. With everything stated below, I will not waste my time going technical on why it's not serious.
Btw, I used Android for 10 years (only high-end phones) until I switched to the Pro Max, plus I have a highly technical background in education, hobby, and work - especially in the field of electronics and computers.
Also, talking about how a chip that hasn't even been released is better/worse vs. X - hahahah :) Not to mention what usage that's based on.
Lastly - my iPhone doesn't heat up at all, even in the heaviest games, which are A LOT heavier than your PUBG joke. Try running full PC Civ 6 on your Android phones, or Dead Cells... oh right, no Civ 6, as the performance would be poor on later turns. Also still no Dead Cells, because the devs can't make it run well on the available Android SoCs. ;)
iphonebestgamephone - Saturday, December 28, 2019 - link
Civ 6, the game that runs on a 6s? Dead Cells, a side scroller? The developers are targeting what, the lowest-end Android SoCs?
cha0z_ - Monday, December 30, 2019 - link
The game runs, but how it will run on a big map at turn 200+ is another story. :) As for Dead Cells - this is actually quite common: people think the game is light GFX-wise because of the art style/decisions. I actually talked with the devs on that topic - everything is 3D, and the game is not as light as you might think. As for your absurd last statement - every developer would target the lowest end, as it brings more potential customers.
Do you want to talk about the hundreds more iOS-exclusive apps? Or to list the recent Android "great games" that have been on iOS for years? I can also tell you a thing or two about how much better it is to develop for iOS vs. Android, how easy it is to optimise for 10 devices vs. 100000, or even how decent the GPU in the 6s actually is given its low resolution. Because on Android, a crap GPU is frequently paired with a high-resolution screen, definitely at least 1080p, and 1440p is also seen in budget-oriented phones. So the statement that the regular-size iPhone 6s can game in 2019 is kinda rushed.
iphonebestgamephone - Monday, December 30, 2019 - link
"As for your absurd last statement - every developer would target the lowest end as it will bring more potential customers." - No, they don't. Look at GRID Autosport. Look at Fortnite.
"Do you want to talk about the hundreds more ios exclusive apps? [...] So the statement how the regular size iphone 6s can game in 2019 is kinda rushed" - You are pretty ignorant, aren't you? Do you really think all games run at the screen's native resolution on Android? Why do you think graphics options exist? Do you want to talk emulation? How easy it is to run emulators on Android? How many more systems are available to emulate? I don't, because comparing platforms wasn't the point. It was all about Android phones being too weak to run those games you mentioned. The devs should make them available at least for the flagships for now, if they really want to.
So your iPhone never heats up? If you have GFX Tool for PUBG on iOS, get it and put everything to max and run it. Because even at the in-game max settings I have seen the iPhone X heat up.
cha0z_ - Tuesday, December 31, 2019 - link
I have been into Android from the start, plus Symbian before that, and I'm also a senior member at XDA, developing/helping known devs with projects. So thank you, I know enough about Android. I know that the iPhone X heats up a lot; it was a known design flaw with that phone (if you want to point to a heating Apple device, it would top the list). I am currently on an iPhone 11 Pro Max, and it never heats up even half as much as my Exynos Note 9 does (and the Exynos Note 9 runs cooler than the Snapdragon variant). It's the first iPhone with a cooling solution, and it really does wonders; you can refer to Andrei's iPhone review for a deep dive into the matter.
I can play fortnite maxed at 60fps and no fps drops or whatever even after 2 hours of play without major heating and you are talking about PUBG maxed. :)
Of course that heat will be there, but as you can also read in Andrei's articles/reviews - Apple's A chips are leading in performance AND efficiency. The heat you will see coming from the A13 will be less than from the current Android SoCs, and/or it can literally play games smoother, with higher-quality GFX and more FPS.
Almost none of the heavier games run at native resolution on mobile, but most also run at lower res/detail on Android vs. iOS.
Emulation is cool; I did a lot of it on Android, including fun stuff like running Diablo 2 LoD on the latest patch on my Note 9 - believe me, it's playable with the S Pen on the go, and at home with a mouse and the TV you are good to go. Still, games ported to or developed for mobile just work better, and the library of high-quality games is so vast nowadays that you really don't need to revisit old classics on your phone. Actually, on iOS the situation is a lot better; you get a lot more paid apps there vs. Android.
Btw, I generally prefer Android and could write a post 3x longer about what I love there, but if we are talking about gaming - iOS is the way to go.
iphonebestgamephone - Tuesday, December 31, 2019 - link
"I am into android from the start + symbian before than and also senior member with dev/helping known devs with project @ xda. So thank you, I know enough about android." - Haha... I should have known you would come up with something like that.
"Btw, used android for 10 years (only high end phones) till I switched to the pro max + I have highly technical background as education, hobby and work - especially in the field of electronics and computers" - this one too, lol.
And then you somehow decide Civ 6 and Dead Cells don't run because Android is too weak. No. It's just that the devs don't bother with it. They could have restricted it to at least SD820 devices, like the GRID Autosport devs are doing.
"Emulation is cool, did a lot on android with it. [...] Actually on ios the situation is a lot better, you got a lot more paid apps there vs android." - I'm yet to find good stuff like God of War, NFS, Burnout, Wipeout, Xenoblade, Pokemon, Zelda, or Mario on Android, or thousands of other games. You could say you can stream them, but the same goes for PC games too. Emulators and a Switch-style gamepad are great on the go. I see Apple has done a great job with Metal; Vulkan is worse than OpenGL on Android 10 on the SD855. Looking forward to the updatable drivers on the 865.
"I can play fortnite maxed at 60fps and no fps drops or whatever even after 2 hours of play without major heating and you are talking about PUBG maxed. :)"
That's awesome; the SD855 heats up a lot on PUBG maxed. I guess there is no PUBG GFX tool for iOS.
cha0z_ - Thursday, January 2, 2020 - link
There are emulators for iOS, and you don't need a jailbreak to install/play games. They are not on the App Store, though; they are in custom stores - still, it's no different from installing an APK from outside the Play Store. The emulator library is quite big, including PPSSPP. As I said, though - Android is better for emulators IMHO. Plus, I didn't say Android is weak as an OS; weak are the SoCs in Android phones compared to Apple's A series. I would totally love to see an Android phone with an Apple SoC (or similar performance) and longer full support than two years. As for the GFX tool, I hope you understand that when you have literally just a few phones to optimise for, you really can do a great job with it - in other words, the iOS PUBG variant is greatly optimised for every iPhone that supports it, to extract the best experience with the best possible GFX for the hardware. Of course you can argue that personal preferences apply and tweaking can be done, but it's not that necessary.
I respect your opinion and share a few viewpoints; just from personal experience, gaming on iOS is generally better. Hard to explain - games run smoother and better. If you love emulators, though, Android is obviously the better choice, plus a Snapdragon SoC.
iphonebestgamephone - Tuesday, January 7, 2020 - link
"Weak are the SOCs on android phones compared to the A series of apple." - Yeah, everyone knows. They are still strong enough for the games you mentioned, though, at least the last 2 years of flagships. And last year's 730/730G are also good enough. I guess the devs want even those with $100 phones to play their games; I doubt those people would even bother buying the game once it hits the store. Gamebench did a test, and the Huawei Mate 30 Pro actually performed better than the iPhone 11 Pro and Note 10 in games. https://blog.gamebench.net/huawei-mate-30-pro-ipho...
The iPhone probably had better visual settings/higher resolution as default.
Ahmedrr1 - Sunday, December 22, 2019 - link
Nice
https://www.technewsahmed.com/2019/12/huaweis-p30-...
AceMcLoud - Sunday, December 22, 2019 - link
Ouch, that doesn't look very promising.
ballsystemlord - Friday, February 7, 2020 - link
Spelling error: "The test here is mostly sensible to the performance scaling of the A55 cores. The QRD865 in the default more is more conservative than some existing S855 devices,"
"mode" not "more":
"The test here is mostly sensible to the performance scaling of the A55 cores. The QRD865 in the default mode is more conservative than some existing S855 devices,"
Hrel - Tuesday, April 28, 2020 - link
Man, these Watt listings make no sense at all. 5.12 Watts is shown as lower than 4.24 Watts, then 2.73 W is somehow HIGHER than that?! WTF is going on?
Then 3.33 W is higher than 2.73 W, which makes sense, but then 3.05 W is lower than 2.56 W?! What are these charts?
Hrel - Tuesday, April 28, 2020 - link
Oh, the bar is for the Joules; the Watts aren't visually represented. Runtime being a critical variable, I gotcha now. Lol, I was so confused :)
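The resolution of the confusion above is just the energy equation: joules = average watts x runtime in seconds, so a chip that draws more power but finishes sooner can still post a smaller energy bar. A quick sketch with hypothetical runtimes (the watt figures match the comment; the runtimes are made up to illustrate the point):

```python
# Energy (J) = average power (W) x runtime (s).
# Runtimes below are hypothetical, chosen only to show why the chart's
# bars (joules) don't sort the same way as the watt figures.

def energy_joules(avg_watts: float, runtime_s: float) -> float:
    return avg_watts * runtime_s

fast_chip = energy_joules(5.12, 10.0)  # higher power, but a short run
slow_chip = energy_joules(4.24, 20.0)  # lower power, but twice the runtime

print(fast_chip, slow_chip)  # 51.2 J vs. 84.8 J
assert fast_chip < slow_chip  # the "hungrier" chip still uses less energy
```

This is why energy-to-complete is the fairer efficiency metric in these benchmark charts: it rewards racing to idle, not just a low instantaneous power draw.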