NVIDIA GeForce GTX 980M and GTX 970M: Mobile to the Maxwell
by Jarred Walton on October 7, 2014 9:00 AM EST

Closing the Performance Gap with Desktops
If we look back at the past several generations of GPUs from NVIDIA, the GTX 480 launched in March 2010 and had 480 CUDA cores clocked at 700 MHz with a 384-bit memory interface and 3.7GHz GDDR5 (177.4 GB/s). The mobile counterpart GTX 480M officially launched just a couple months later (though it wasn't really available for purchase for at least another month), but it was a rather different beast. It used the same core chip (GF100) but with a cut-down configuration of 352 cores clocked at 425 MHz and a 256-bit memory interface clocked at 3.0GHz. In terms of performance, it was about 40-45% as fast as the desktop chip. GTX 580 came out in November 2010, with 512 cores now clocked at 772 MHz and 4GHz GDDR5; GTX 580M appeared seven months later in June 2011 with 384 cores at 620 MHz and 3GHz GDDR5, and it used a different chip this time (GF114 vs. GF110). Performance was now around 45-55% of the desktop part.
The story was similar, though improved in some ways, with the GTX 680 and GTX 680M. The 680M had 1344 cores at 720 MHz with 3.6GHz GDDR5 while the GTX 680 had 1536 cores at up to 1058 MHz with 6GHz GDDR5. They were three months apart, and now the mobile chip was around 55-65% of the desktop GPU. GTX 780/780M were basically announced at the same time (though mobile hardware showed up about a month later, in June 2013), and as with 580/580M the notebook part used a smaller chip than the desktop (GK104 vs. GK110). The performance offered was again around 55-65% of the desktop part. Then of course there's the GTX 880M, which is sort of the counterpart to the GTX 780 Ti. It uses a full GK104 (1536 cores) while the 780 Ti uses a full GK110 (2880 cores), and the delay between the 780 Ti and 880M launches was four months. While the desktop GPUs never saw an 800 series, the GTX 880M is down to around 50-60% of the top desktop GPU, the GTX 780 Ti.
That brings us to today's launch of the GTX 980M/970M. You might say that there have been patterns emerging over the past few years that hint at where NVIDIA is going – e.g. Kepler GK107 first launched on laptops back in March 2012, with desktop GPUs coming a month later – but the higher performance parts have almost always been desktop first and mobile several months later, with at best 50-65% of the performance. Now just one month after NVIDIA launched the GTX 980 and 970, they're bringing out the mobile counterparts. What's more, while the mobile chips are yet again cut-down versions of the desktop GPUs, clocks are still pretty aggressive and NVIDIA claims the 980M will deliver around 75% of the performance of the GTX 980. Here's a look at the specifications of the new mobile GPUs.
NVIDIA GeForce GTX 900M Specifications

|                       | GTX 980M        | GTX 970M        |
|-----------------------|-----------------|-----------------|
| CUDA Cores            | 1536            | 1280            |
| GPU Clock (MHz)       | 1038 + Boost    | 924 + Boost     |
| GDDR5 Clock           | 5GHz            | 5GHz            |
| Memory Interface      | 256-bit         | 192-bit         |
| Memory Configuration  | 4GB or 8GB      | 3GB or 6GB      |
| eDP 1.2               | Up to 3840x2160 | Up to 3840x2160 |
| LVDS                  | Up to 1920x1200 | Up to 1920x1200 |
| VGA                   | Up to 2048x1536 | Up to 2048x1536 |
| DisplayPort Multimode | Up to 3840x2160 | Up to 3840x2160 |
The specifications are actually a bit of a surprise, as the core clocks on the 980M are right there with the desktop parts (though it may or may not boost as high). The 980M ends up with 75% of the CUDA cores of the GTX 980 while the memory clock is 29% lower. In terms of pure theoretical compute power, the 980M on paper is going to be 70-75% of the GTX 980. Of course that's only on paper, and actual gaming performance depends on several factors: GPU shader performance and GPU memory bandwidth are obviously important, but the CPU performance, resolution, settings, and choice of game are just as critical. In some games at some settings, the 980M is very likely to deliver more than 75% of the GTX 980's performance; other games and settings may end up closer to 70% or less of the desktop. Regardless, this is as close as NVIDIA has ever come to having their top notebook GPU match their top desktop GPU.
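The paper math above is easy to sanity check. Here's a minimal sketch, assuming the commonly cited desktop GTX 980 base specs (2048 CUDA cores at 1126 MHz, 7GHz effective GDDR5), which aren't listed in the table above:

```python
# Rough paper-spec comparison of the GTX 980M against the desktop GTX 980.
# The desktop figures (2048 cores @ 1126 MHz base, 7GHz effective GDDR5)
# are assumed from NVIDIA's desktop launch specs; boost clocks are ignored.

def shader_throughput(cores: int, clock_mhz: int) -> int:
    """Relative shader throughput: cores x clock (arbitrary units)."""
    return cores * clock_mhz

desktop = shader_throughput(2048, 1126)  # GTX 980
mobile = shader_throughput(1536, 1038)   # GTX 980M

print(f"CUDA cores:        {1536 / 2048:.0%} of desktop")    # 75%
print(f"Shader throughput: {mobile / desktop:.0%} of desktop")
print(f"Memory clock:      {5 / 7:.0%} of desktop (29% lower)")
```

At base clocks this works out to roughly 69% of the desktop's shader throughput, which lines up with the 70-75% on-paper figure once boost behavior is factored in.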
A big part of this is the focus on efficiency with Maxwell GM204. NVIDIA doesn't disclose TDP for their mobile parts, but the top mobile GPUs usually target 100W. NVIDIA went after efficiency in a big way with Maxwell 2, dropping TDP from 250W with the GTX 780 Ti down to 165W with the GTX 980, all while delivering a similar (often slightly better) level of performance. With further binning and refinements to create a notebook GPU, a 100W part would target roughly 60% of the GTX 980's TDP, and power requirements tend to scale steeply near the maximum stable clocks of any microprocessor. Reduce the memory clocks a bit, disable some of the SMM units, and delivering 75% of the performance at 60% of the power shouldn't be too difficult to pull off.
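As a rough illustration of why backing off the clocks saves so much power (a textbook rule of thumb, not NVIDIA's actual numbers): dynamic power scales as P ≈ C·f·V², and near the top of the stable frequency range voltage typically has to rise roughly linearly with clock, so power grows roughly with the cube of frequency there:

```python
# Back-of-the-envelope sketch of why modest clock reductions save a lot of
# power near the frequency limit. Assumes dynamic power P ~ C * f * V^2 with
# voltage scaling roughly linearly with clock -- a rule of thumb, not
# measured GM204 behavior.

def relative_power(clock_scale: float) -> float:
    """Approximate relative power at a scaled clock (V tracks f linearly)."""
    return clock_scale ** 3

print(f"Power at 90% clock: {relative_power(0.9):.0%}")  # ~73%, i.e. ~27% saved
```

Under that assumption, giving up 10% of clock speed buys back roughly a quarter of the power budget, which is how a binned mobile chip can keep most of its desktop sibling's performance.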
Moving on to the GTX 970M, NVIDIA is still using GM204 but with even more SMM units disabled, leaving it with 1280 CUDA cores. The memory bus has also been dropped to a 192-bit interface, but with a slightly lower core clock and fewer cores to feed, the GTX 970M should do well with a 192-bit bus. The smaller memory bus also translates into less total memory this round, and NVIDIA isn't doing any asymmetrical memory interface on the 970M; it will have 3GB GDDR5 standard, with an option for 6GB. It's good to see the option for more than 3GB of VRAM, as a few games are already pushing past that mark.
In terms of theoretical compute performance (cores * clock speed), the GTX 980M will be about 30-35% faster than the GTX 970M in GPU-bound situations. If you're curious, the GTX 970M will also offer around 55-65% of the performance of the desktop GTX 970, so the second tier GPU ends up being closer to what we've seen with previous generations of NVIDIA mobile GPUs.
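The same cores × clock yardstick applies to the two new mobile parts, using the base clocks from the specifications table (boost ignored):

```python
# Relative paper throughput of the two new mobile parts: cores * base clock,
# taken from the specifications table above.
gtx_980m = 1536 * 1038
gtx_970m = 1280 * 924

print(f"GTX 980M over GTX 970M: {gtx_980m / gtx_970m - 1:.0%} faster")  # ~35%
```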
With the launch of the new GTX 970M and GTX 980M, it's also worth mentioning that NVIDIA is officially discontinuing some of the existing mobile parts. The current lineup of mobile GPUs from NVIDIA now consists of GeForce 820M, 830M, and 840M for the casual/less demanding market. The 820M is actually a Fermi-derived part, while 830M and 840M use GM108 with 256 and 384 cores, respectively. At the top of the product stack, the GTX 980M and 970M replace the GTX 880M and 870M, while GTX 860M and 850M continue as the "mainstream gaming" notebook GPUs; 860M also continues to be offered in two variants, a Maxwell GM107 version and a Kepler GK104 version, though the latter hasn't been widely used.
68 Comments
scottrichardson - Tuesday, October 7, 2014 - link
One can only assume this is the GPU that Apple will slot into their upcoming 'Retina' iMacs, unless the AMD rumours hold true?

tviceman - Tuesday, October 7, 2014 - link
It would be crazy of Apple to use Tonga (the rumor) instead of GM204. If Apple really did choose AMD, Nvidia must not have been willing to budge on price.

RussianSensation - Tuesday, October 7, 2014 - link
Actually they wouldn't be crazy. There are at least 2 reasons why this rumour could be true:

1) Nvidia is filing a lot of patent infringement lawsuits against Samsung, Qualcomm, and even Apple. Rumour has it that, in retaliation, Apple will move some of its products to AMD until NV either withdraws the lawsuit or decides to settle at more agreeable terms.
2) NV's GM204 is not cheap and it certainly won't be cheap for the mobile sector. When you are faced with trying to price the iMac 5K without a stratospheric price, you might want to go with a cheaper GPU because that new 5K display will cost an arm and a leg. In other words, since Macs are hardly used for gaming to begin with, you simply balance your priorities to hit appropriate price points your customers will pay based on historical data. It's possible that the inclusion of GM204 would force a new higher end SKU of the iMac when combined with a 5K display.
3) Manufacturing supply. As you can see from the availability of desktop GM204 chips, there are supply issues. Apple might have required NV to provide a certain number of GM204 chips, and Nvidia couldn't guarantee that many in xx time frame.
Now, all of these are rumours and Apple could still use GM204, or GM204 + AMD GPU + Intel GPU in various SKUs of the iMac. However, there are 3 legitimate reasons why Apple would not use Nvidia's GM204 in the next gen iMac.
Doormat - Tuesday, October 7, 2014 - link
4) As shown in the chart on page 2, the max resolution of the Maxwell GPU is 4K. If Apple is going to field a 5K display in the 27" iMac, they will need a GPU that supports this resolution, likely requiring DisplayPort 1.3 (or some draft version of DP 1.3 that was available when the chips and panel were being designed).

Morawka - Tuesday, October 7, 2014 - link
The patent deal has nothing to do with Apple because Apple does not make GPUs. NV's GM204 has the best perf per watt by a huge margin, and we all know that having the best perf per watt is going to cost more than perf per dollar.

With that said, Apple will probably go with AMD for this year's models because Apple always flip-flops GPU vendors between generations.

However, it will run hotter and use more power, and historically AMD/ATI GPUs within Apple products have seen a much higher rate of recalls and defects.
I'm not sure how a mobile amd chip will perform at 5K. The desktop gpu's seem up to the task but we haven't seen a really good mobile amd discrete gpu in a long time.
name99 - Tuesday, October 7, 2014 - link
nVidia has filed lawsuits against Samsung and Qualcomm, NOT Apple. They may have plans to sue Apple --- anyone may have plans to do anything --- but right now they have not sued Apple, have not threatened to sue Apple, and have not even hinted or suggested or implied that they want to sue Apple.
Manufacturing supply (if this rumor is true) strikes me as far more likely. There is a long history of Apple doing things that seemed (especially to haters) like perverse limitations designed to maximally screw their customers over, only for us to learn later that they were limited by supply issues. When you're shipping Apple volumes, you CAN'T simply wish that millions of the last doodab were available when they can only be manufactured in the hundreds of thousands.
We saw this with fingerprint sensors (restricted to iPhone 5S, not 5C, touch or iPads), probably with sapphire on iPhone 6's, and probably with low power RAM on iPhone 6's (that being what's restricting them to 1GB, not some nefarious Apple plan).
Of course for this rumor to be true and the explanation to be relevant requires that AMD can manufacture faster than nV (or has acquired a large inventory). It's not clear to me that this has to be true...
chizow - Tuesday, October 7, 2014 - link
It's not surprising actually; AMD is probably throwing these chips at Apple for next to nothing, and Apple probably feels at least some obligation to prop up OpenCL, the standard they championed way back when, over using the proprietary CUDA, by using AMD chips that clearly aren't as good as Nvidia would be for their needs (performance and low TDP).

Apple's hubris probably leads them to believe they can live without Nvidia and weather the growing pains. My personal experience here at work is that users who rely on Adobe CS have simply shifted to Windows-based workstations with Quadro parts rather than deal with the Mac Pro 2013 GPU/driver issues.
What will really be interesting to see is what Apple does with their MBP, where Kepler was the obvious choice due to efficiency. Maxwell would really shine there, but will Apple be willing to take a big step backwards in performance there just to stay consistent with their AMD trend?
Omoronovo - Tuesday, October 7, 2014 - link
That's certainly possible, but bear in mind that Apple generally doesn't have current top-end mobile hardware in their iMacs. That either means a late arrival for iMacs or perhaps a 960M when it releases.

I doubt AMD will ever get back into Apple's good graces as none of their GPUs have met the power/performance levels nVidia has; bear in mind that price is usually a secondary concern for Apple and their consumers, so those are the only two metrics that really matter here.
WinterCharm - Tuesday, October 7, 2014 - link
Yeah, not to mention that many people were disappointed when Apple included AMD graphics in their Mac Pro. I *need* CUDA. I can't live without it, and many other professionals will tell you the same thing.

RussianSensation - Tuesday, October 7, 2014 - link
This is exactly why open standards like OpenCL should be embraced. Saying things like "I can't live without [insert a proprietary/locked GPU feature]" is what segregates the industry. A lot of programs benefit from OpenCL, and AMD GPUs provide full GPU hardware acceleration for the Adobe Suite (Creative, Mercury, Premiere, etc.) and so on:

http://www.amd.com/en-us/press-releases/Pages/amd-...
http://helpx.adobe.com/premiere-pro/system-require...
Also, the iMac is not a workstation, which means the primary target market of iMacs doesn't perform professional work with CUDA. If you really need professional graphics, you are using a FirePro or Quadro in a desktop workstation, or getting a Mac Pro.