51 Comments
Alistair - Thursday, March 5, 2020 - link
Very glad to hear that they have the resources to bifurcate their data center-focused and gaming-focused GPU designs. More gaming performance for less money.
FreckledTrout - Friday, March 6, 2020 - link
Finally! This is going to help immensely.
peevee - Friday, March 6, 2020 - link
Without decent OpenCL drivers, their compute designs are useless.
Bulat Ziganshin - Sunday, March 8, 2020 - link
Even NVIDIA can't afford that. I expect the same arch, but as usual with more FP64 ALUs, registers, and shared/cache memory.
Spunjji - Wednesday, March 11, 2020 - link
Agreed, this is a good sign in general for their investment in GPUs. I'm hopeful that RDNA was their Zen moment, and RDNA 2 signals a true return to competitiveness with Nvidia.
marees - Thursday, March 5, 2020 - link
*We’ve been covering FAD 2015 throughout the afternoon*
I would have thought you'd have covered it over the last 5 years...
Ryan Smith - Thursday, March 5, 2020 - link
Sometimes it certainly feels like it's been 5 years... Thanks!
dotjaz - Thursday, March 5, 2020 - link
Good, they have finally confirmed my suspicion that "7nm+" is not N7+, but rather an enhanced version of N7HP or N7 "Large Die" - the latter a bit of a misnomer, since the Zen 2 chiplet is smaller than most mobile SoCs such as the Apple A13, Snapdragon 865, and Kirin 990.
dotjaz - Thursday, March 5, 2020 - link
And by "enhance", I mean similar enhancement that enabled N7P, or it could very well be EUV-based but derived from N6.extide - Friday, March 6, 2020 - link
ALL they are saying is that it is not N7. Anything beyond that is pure speculation.
Modatu - Friday, March 6, 2020 - link
This is all good to hear, but what about the software ecosystem? At the moment CUDA is king in the GPGPU space; how do they want to address this?
On the CPU side there are several HPC applications that are heavily tuned to the Intel compiler stack. What is their angle on this front?
Granted, I am looking at this from a pretty niche scientific computing / HPC side.
Gc - Friday, March 6, 2020 - link
AMD's latest on HPC software might have been at SC19. It looks like much of the work on HPC applications was done by developer communities of those applications. I'd guess that AMD's emphasis is on the critical hardware-specific portions (compilers, drivers, OS-level management tools, etc.). AMD might have supported some proof-of-concept ports for testing their compilers, but doesn't have the application expertise to maintain ports of applications and libraries. I guess this means AMD (or AMD's customers, such as HPE/Cray) need good HPC developer relations engineers.

> AMD Introduces ROCm 3.0
>
> Community support for the pre-exascale software ecosystem continues to grow. This ecosystem
> is built on ROCm, the foundational open source components for GPU compute provided by AMD.
> The ROCm development cycle features monthly releases offering developers a regular cadence
> of continuous improvements and updates to compilers, libraries, profilers, debuggers and system
> management tools. Major development milestones featured at SC19 include:
>
> + Introduction of ROCm 3.0 with new innovations to support HIP-clang – a compiler built upon
> LLVM, improved CUDA conversion capability with hipify-clang, library optimizations for both HPC
> and ML.
>
> + ROCm upstream integration into leading TensorFlow and PyTorch machine learning
> frameworks for applications like reinforcement learning, autonomous driving, and image and
> video detection.
>
> + Expanded acceleration support for HPC programming models and applications like OpenMP
> programming, LAMMPS, and NAMD.
>
> + New support for system and workload deployment tools like Kubernetes, Singularity, SLURM,
> TAU and others.
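To give a sense of what the HIP-clang / hipify-clang items mean in practice, here is a minimal sketch of a HIP port - an illustrative example of mine, not from AMD's materials. The SAXPY kernel is hypothetical, but hipMalloc, hipMemcpy, hipLaunchKernelGGL, and hipDeviceSynchronize are the actual ROCm runtime equivalents of the cuda* calls that hipify-clang rewrites:

    #include <hip/hip_runtime.h>

    // Trivial SAXPY kernel. The kernel body is the same source you would
    // write for CUDA; only the runtime calls below change names in a port.
    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        hipMalloc(&x, n * sizeof(float));   // was cudaMalloc
        hipMalloc(&y, n * sizeof(float));   // was cudaMalloc
        // ... fill x and y, e.g. with hipMemcpy (was cudaMemcpy) ...
        hipLaunchKernelGGL(saxpy, dim3((n + 255) / 256), dim3(256), 0, 0,
                           n, 2.0f, x, y);  // was saxpy<<<grid, block>>>(...)
        hipDeviceSynchronize();             // was cudaDeviceSynchronize
        hipFree(x);                         // was cudaFree
        hipFree(y);
        return 0;
    }

For code like this the conversion is nearly mechanical; the hard cases are sources that lean on CUDA-only libraries or intrinsics.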
Yojimbo - Friday, March 6, 2020 - link
Supercomputing uses mostly large, legacy code bases that are parallelized with compiler directives. So there, just building good hardware is enough - along with tens of millions or a hundred million dollars' worth of effort, by AMD and the national labs over the course of 2 or 3 years, to optimize the compilers for AMD's hardware.
The majority of the burgeoning market for GPU compute, however, relies on CUDA. NVIDIA isn't spending hundreds of millions of dollars a year, maybe even a billion dollars a year now, on their software ecosystem - directly targeting various verticals through libraries, integration into storage, databases, and the cloud - just because they can. Improved CUDA conversion will cut it for a national lab that doesn't care much about the small fraction of its code dependent on native CUDA, but it will perform significantly worse than native code and thus make the hardware uncompetitive in most of the commercial market.
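To illustrate the directive-based parallelism mentioned above, here is a minimal, hypothetical sketch (the function is invented; the pragma is standard OpenMP target offload of the kind those legacy code bases rely on):

    #include <cstddef>

    // Legacy-style loop: the loop body is untouched, and the directive is
    // essentially the whole "port". Compiled without OpenMP offload flags
    // it runs serially; with the (compiler-specific) offload flags the
    // same loop is mapped onto a GPU.
    void scale_add(double* y, const double* x, double a, std::size_t n) {
        #pragma omp target teams distribute parallel for map(tofrom: y[0:n]) map(to: x[0:n])
        for (std::size_t i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

Making that pragma produce fast GPU code is exactly where the compiler optimization money goes.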
Modatu - Friday, March 6, 2020 - link
Yeah, unfortunately this does not sound too good.
I mean, we are talking about two 600-pound gorillas who pushed their respective software stacks deep into the market.
I hope they succeed in a way that makes them a real contender (from the software side).
realbabilu - Friday, March 6, 2020 - link
I don't know about a CUDA/OpenCL counter. They have tuned clang and flang into AOCC, an AMD-optimized compiler, to face the Intel compiler - on Linux only. They are also packaging BLIS, FFTW, libFLAME, and ScaLAPACK into AOCL as an optimized library set to go up against Intel MKL.
In my last Fortran Polyhedron benchmark, AOCC + OpenBLAS was slightly better on matrix inversion than AOCC + AOCL or GCC + OpenBLAS. But code compiled with AOCC + OpenBLAS on a desktop Ryzen 3600X was still slightly behind the Intel compiler + Intel MKL result on a notebook i7-9750H.
The znver2 tuning increases speed much like Intel's Qax option did. Still, I hope AMD builds up software development like Intel has, for all platforms including Windows.
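For reference, a matrix-inversion benchmark like that mostly boils down to the LAPACK dgetrf/dgetri pair, with the runtime dominated by whichever BLAS/LAPACK implementation the binary links. A rough sketch - the compile lines in the comments are illustrative guesses, not exact vendor flags:

    // Illustrative link choices (flags approximate, not exact):
    //   AOCC + AOCL:     clang -O3 -march=znver2 inv.cpp -lflame -lblis
    //   AOCC + OpenBLAS: clang -O3 -march=znver2 inv.cpp -lopenblas
    //   Intel + MKL:     icc  -O3 -xHost        inv.cpp -mkl
    #include <lapacke.h>
    #include <vector>

    // Invert an n x n row-major matrix in place via LU factorization.
    // The heavy lifting (dgetrf/dgetri) happens inside the linked library,
    // which is why swapping AOCL, OpenBLAS, or MKL changes the timings.
    int invert(double* a, lapack_int n) {
        std::vector<lapack_int> ipiv(n);  // pivot indices from the LU step
        lapack_int info = LAPACKE_dgetrf(LAPACK_ROW_MAJOR, n, n, a, n, ipiv.data());
        if (info == 0)
            info = LAPACKE_dgetri(LAPACK_ROW_MAJOR, n, a, n, ipiv.data());
        return static_cast<int>(info);
    }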
R3MF - Friday, March 6, 2020 - link
Any news on desktop Renoir?
del42sa - Friday, March 6, 2020 - link
No.
senttoschool - Friday, March 6, 2020 - link
The biggest consumer announcement seems to be that RDNA2 has 50% higher performance per watt while staying on the same node.
I don’t expect it to beat Ampere, but it gives it a fighting chance.
adriaaaaan - Friday, March 6, 2020 - link
My only concern with next-gen AMD is support for DLSS. Nvidia has taken the technology from a gimmicky feature to something with incredible promise, particularly now that ray tracing is the new standard.
eva02langley - Friday, March 6, 2020 - link
You understand that this is bad upscaling? Play your games at a lower resolution and have better visual fidelity; no need to purchase tensor cores for that.
FreckledTrout - Friday, March 6, 2020 - link
+1
ThelVadum - Friday, March 6, 2020 - link
You should check out the latest videos on the subject from Digital Foundry and Hardware Unboxed. DLSS 2.0 as first implemented in Wolfenstein: Youngblood is described as being virtually indistinguishable from native resolution, essentially giving a free performance boost (about 30 percent at 4K with the quality setting, if I'm remembering correctly).
TheinsanegamerN - Monday, March 9, 2020 - link
That matters about as much as the performance boost AMD received with Mantle in Battlefield. One game a card-selling feature does not make.
TheinsanegamerN - Monday, March 9, 2020 - link
Incredible promise? It's making your games blurrier and claiming it's a feature. Just lower your resolution and install a shader mod. Boom, you have DLSS for free.
DLSS may be the worst gimmick since Nvidia-exclusive PhysX or AMD-exclusive Mantle.
Raptord - Friday, March 6, 2020 - link
"well-versed in 2.5D technologies thanks to its use of chipsets on Zen 2 processors"I believe this meant to say "chiplets" rather than "chipsets", no?
WaltC - Friday, March 6, 2020 - link
Just to clarify a point--AMD did not say they weren't using 7nm+; what they said was that most if not all of the + features--including EUV, I presume--had been rolled into the standard "7nm" TSMC process they'll be using. That's the way it appeared to me, anyway. I recall Papermaster talking about that. Looks like just a simple nomenclature change.
WaltC - Friday, March 6, 2020 - link
It also occurs to me that possibly they only spoke of 7nm+ EUV from Samsung for mobile--not yesterday, but some months ago. Interesting, though.
Yojimbo - Friday, March 6, 2020 - link
AMD's on a roll, but there's an elephant in the room when they tout their rosy plans. AMD spent $1.5 billion in R&D in the year just ended. NVIDIA spent $2.8 billion. Intel spent $13.4 billion. Of course Intel has many more pots on the fire than AMD, with Mobileye, their IoT stuff, and most notably their fabs, but you can bet that Intel is spending much more than $1.5 billion on stuff that competes directly with AMD's products, even if you exclude GPUs. So AMD is being outspent in R&D by their competitors by 3 to 1 or more. It would take a combination of genius on the part of AMD and stupidity by their competitors for that to play out in AMD's favor in anything but the short term.
AMD can develop the best GPGPU hardware in the world, but without the software ecosystem they won't make a scratch in the market outside supercomputers. They can't develop that ecosystem with their current R&D spend. Zen is a good and successful product, but it came right at a time when Intel faltered with their 10 nm process. Not only has Intel been hurt in terms of process, but their troubles caused their new core architectures to hardly see the light of day as well. That situation is not going to last more than a year or two from now.
Shlong - Friday, March 6, 2020 - link
Well, AMD had low R&D in 2016/2017 in comparison and came out with Zen. It's not how much you spend, it's about spending it efficiently, which AMD excels at under Lisa Su.
Yojimbo - Saturday, March 7, 2020 - link
Because AMD could get from 1 to 10 out of 100 just by making a piece of hardware, and because AMD came out with their chip right when Intel ran into some extremely fortuitous (for AMD) difficulties. But getting from 10 to 20 or to 30 or to wherever they hope to get is a different matter. There'll be less and less low-hanging fruit.
Of course it's not about how much you spend, it's how you spend it. And, I repeat, if AMD can compete by spending less than 1/3rd the amount, then they are geniuses and their competitors are morons.
Korguz - Saturday, March 7, 2020 - link
" because AMD came out with their chip right when Intel rain into some extremely fortuitous (for AMD) difficulties " that could of been part of the reason why zen is dong so well. but we dont know for where zen would be if intel didnt screw up with 10nm and after. if you say you do, then you should go buy some lotto ticketsYojimbo - Saturday, March 7, 2020 - link
Why buy lotto tickets when you can buy stock with such an intuition? We know exactly what Zen would be. It's a known quantity. What we don't know is where Intel would be, but it's fair to say it would be a lot better off than it is now. And it's fair to say that AMD would have significantly worse margins, because they would have to sell their processors at lower prices to try to win market share. It's odd to think that customers are not at all influenced by a price/performance advantage. If AMD couldn't demonstrate it, AMD wouldn't be winning market share.
Korguz - Saturday, March 7, 2020 - link
" And it's fair to say that AMD would have significantly worse margins because they would have to sell their processors at lower prices to try to win market share " ahh, so you know where intel could of been performance wise, if they didnt screw up 10nm ?? wow.. maybe you should go buy lotto tickets, cause with that type of intuition, you would be able to know what the winning numbers would be before they are even drawn, and then use that money to by stock.Yojimbo - Sunday, March 8, 2020 - link
I know they would be a whole lot better than they are now. And so do you. You just choose to close your eyes.
Korguz - Sunday, March 8, 2020 - link
Nope, I just choose not to speculate on things I don't know, like you should. No one could possibly know where Intel would be if their 10nm process hadn't gone wrong.
TheinsanegamerN - Monday, March 9, 2020 - link
So, in your mind, if 10nm hadn't gone wrong, Intel STILL wouldn't be in any better place than they are now, with 14nm+++ and a CPU arch from 2011?
You don't need to speculate. We know Intel's arch improvements were tied to node, and the lack of a 10nm node has stopped CPU improvements in retail Intel products in their tracks.
We don't know exacts, but we DO know Intel wouldn't be stuck in a holding pattern like they are now. Even if they WERE, somehow, stuck in a holding pattern with a perfect 10nm release, they would still have a leg up over their current position.
jospoortvliet - Tuesday, March 10, 2020 - link
It is indeed reasonable to assume AMD would not have been in such a great position had Intel not screwed up 10nm. They would probably have been competitive, but not big winners.
What surprises me is that Intel seems to have such a hard time getting back on track. Why do the woes of 10nm impact 7nm so much? Wouldn't they already be closer to 5nm? Old roadmaps certainly suggested so.
Haawser - Friday, March 6, 2020 - link
In the next two years AMD will probably move on to Zen 3 / 7nm+, then Zen 4 / 5nm... So how exactly are Intel going to catch up? If AMD slowed down then maybe Intel could catch up, but right now that doesn't look at all likely. So what you've got is a bunch of false assumptions based on a highly speculative 'What if...' and not much more.
Yojimbo - Saturday, March 7, 2020 - link
Intel doesn't need AMD to slow down in order to catch up and overtake them, because in the last few years Intel has been stuck on an old node with old cores. It's basically been 14 nm and Skylake since 2015. They did some emergency improvements to their process and to their architecture with Kaby Lake, Coffee Lake, Cascade Lake, Cooper Lake, etc. But those efforts haven't been able to draw anywhere close to full breaths. In the next 3 years it's going to be 10 nm and then 7 nm, with Sunny Cove, then Willow Cove, then Golden Cove, then Ocean Cove. Intel did not stop development of their architecture; they just haven't been able to implement the developments due to their process troubles.
And Intel offers so much more than just a CPU with their platform that once Intel has righted the ship, customers will most likely prefer to use the existence of AMD's offerings mostly as a way to get better pricing from Intel. That will cause a stall in AMD's data center market share. Anyway, this is my prediction, and I don't see it changing as long as AMD isn't spending the R&D necessary to make inroads into Intel's platform advantage. They aren't going to have a CPU technology lead forever.
Korguz - Friday, March 6, 2020 - link
Korguz - Friday, March 6, 2020 - link
Yojimbo, yeah, 'cause throwing lots of money into something makes it better.
Yojimbo - Saturday, March 7, 2020 - link
So then you think NVIDIA and Intel are wasting billions of dollars. In other words, they are stupid. That's what you have to believe to believe AMD can reach their lofty goals with their current R&D spend.
Korguz - Saturday, March 7, 2020 - link
Come on, Yojimbo, it's time Intel dropped their current architecture and came out with a new one, not based on one from 2015 or whatever. It's been almost 3 years, and Intel still hasn't come out with something new, or even hinted that a new architecture is coming out.
"So then you think NVIDIA and Intel are wasting billions of dollars" - for NVIDIA, no; for Intel, yes. As I just said, Intel is just rehashing the same architecture they have been using since 2015. It's time they dropped it and came out with something new. If you can say that Intel has been using their billions wisely, then I don't know what to say.
Yojimbo - Saturday, March 7, 2020 - link
The reason they have been rehashing it is their process problems. Didn't you read what I wrote? And you're making a logical mistake by arguing that because Intel made a mistake (it was one of process execution, but even ignoring that fact), they are so inept that AMD can consistently succeed with a vastly smaller R&D spend. Frankly, AMD has not shown itself to be the company that could pull that off, if there were to be one.
Haawser - Saturday, March 7, 2020 - link
I wasn't aware that process problems were to blame for their innumerable security issues...? Thanks for clearing that up. /s
Yojimbo - Sunday, March 8, 2020 - link
The security issues do not seem to be affecting their sales. No one really batted an eye.
Korguz - Saturday, March 7, 2020 - link
Ahh, so they can't put a new architecture on their 14nm process till they get their 10nm sorted out?? I understand now....
Yojimbo - Sunday, March 8, 2020 - link
You only don't understand because you refuse to try. Yes, they can't put their new architecture on 14nm, because it was designed for 10nm. Hence the recent stress on the option to back-port architectures across processes going forward - something that people say will be difficult, and are even wondering whether it will actually work in practice. See the AnandTech articles on the issue, for example. Here is the most recent one; search for "back-port": https://www.anandtech.com/show/15580/intel-cfo-our...
You'd have a better idea of what's actually going on if you stopped making stupid sarcastic comments and read stuff instead.
Korguz - Sunday, March 8, 2020 - link
Oh... so when Intel realized, say 4 years ago, that 10nm wasn't going to be ready, and then AMD released Zen, they couldn't have started a back-port so they could have put it on 14nm?? BS. Come on, man - and you accuse me of closing my eyes?
"You'd have a better idea of what's actually going on if you stopped making stupid sarcastic comments and read stuff instead." - and the same goes for you.
ksec - Friday, March 6, 2020 - link
Finally hitting 10% in DC - that is excellent news, even though it is a quarter late. But good news indeed. And with the roadmap laid out, their expectation of 20% CAGR is still a little too low, but knowing how extremely conservative AMD is, this is still a very good target.
While Intel finally waking up causes quite a bit of concern (and excitement), it seems AMD will still have the upper hand for a few years.
Mugur - Monday, March 9, 2020 - link
But what happened with 7nm EUV? Next year? Isn't EUV cheaper to produce (not considering R&D and equipment costs here), with better yields due to the simplification of the process (fewer masks)?