NVIDIA Announces NVIDIA Titan Xp Video Card: Fully Enabled GP102 for $1200
by Ryan Smith on April 7, 2017 6:00 AM EST

If you’ve followed NVIDIA’s video card launches over the last few years, then there’s a pretty clear pattern to the company’s release schedule. If the company starts things off with a cut-down Titan, as they did with the Kepler and Pascal generations, then a full-fledged Titan is sure to follow. And sure enough, with the recent launch of the GeForce GTX 1080 Ti – which effectively put the original Titan X (Pascal) out to pasture – NVIDIA is back again to launch their full-fledged Titan for this generation: the NVIDIA Titan Xp.
As a sort of mid-cycle replacement for the original NVIDIA Titan X (Pascal), this is a bit more of a low-key launch for the company. There’s nothing new to talk about as far as the design goes, the market positioning, etc. Instead what we have is simply a fully-enabled GP102 GPU coming to an NVIDIA prosumer card, making it the most powerful video card NVIDIA offers.
NVIDIA GPU Specification Comparison

| | NVIDIA Titan Xp | GTX 1080 Ti | NVIDIA Titan X (Pascal) | GTX Titan X (Maxwell) |
|---|---|---|---|---|
| CUDA Cores | 3840 | 3584 | 3584 | 3072 |
| Texture Units | 240 | 224 | 224 | 192 |
| ROPs | 96 | 88 | 96 | 96 |
| Core Clock | 1481MHz? | 1481MHz | 1417MHz | 1000MHz |
| Boost Clock | 1582MHz | 1582MHz | 1531MHz | 1075MHz |
| TFLOPs (FMA) | 12.1 TFLOPs | 11.3 TFLOPs | 11 TFLOPs | 6.1 TFLOPs |
| Memory Clock | 11.4Gbps GDDR5X | 11Gbps GDDR5X | 10Gbps GDDR5X | 7Gbps GDDR5 |
| Memory Bus Width | 384-bit | 352-bit | 384-bit | 384-bit |
| VRAM | 12GB | 11GB | 12GB | 12GB |
| FP64 | 1/32 | 1/32 | 1/32 | 1/32 |
| FP16 (Native) | 1/64 | 1/64 | 1/64 | N/A |
| INT8 | 4:1 | 4:1 | 4:1 | N/A |
| TDP | 250W | 250W | 250W | 250W |
| GPU | GP102 | GP102 | GP102 | GM200 |
| Transistor Count | 12B | 12B | 12B | 8B |
| Die Size | 471mm² | 471mm² | 471mm² | 601mm² |
| Manufacturing Process | TSMC 16nm | TSMC 16nm | TSMC 16nm | TSMC 28nm |
| Launch Date | 04/06/2017 | 03/10/2017 | 08/02/2016 | 03/17/2015 |
| Launch Price | $1200 | $699 | $1200 | $999 |
Relative to the previous (and now discontinued) Titan, things are pretty straightforward. The last 2 SMs have been enabled, and both the GPU and memory clockspeeds have seen a minor bump as well.
Perhaps more meaningful is to compare the Titan Xp to its only real competition on the market, the GTX 1080 Ti. What does a Titan get you over a Ti this generation? As it turns out, much the same thing: the last 2 SMs are unlocked, and the memory clockspeed has seen a very small bump. However, reflecting how NVIDIA opted to hobble the GTX 1080 Ti just a little bit to leave room for the inevitable Titan, NVIDIA’s prosumer card gets a bit more memory and a bit more memory bandwidth, thanks to the re-enabling of the full 384-bit memory bus.
Bringing back the last 32-bit memory channel and its associated GDDR5X chip gives the Titan Xp a total of 547.2GB/sec of memory bandwidth, 13% more than its lower-tier sibling. Otherwise on the GPU performance front, we’re looking at 7% more shader/texture/geometry throughput and 9% more ROP throughput. Or to compare it to the last-generation flagship GTX Titan X (Maxwell), from flagship to flagship NVIDIA has improved GPU performance by 84%, ROP throughput by 47%, and memory bandwidth by 63%.
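Those percentages fall straight out of the spec table. As a quick sanity check, here is a back-of-the-envelope sketch using the standard peak-rate formulas (bandwidth is per-pin data rate times bus width in bytes; FP32 throughput counts 2 FLOPs per FMA per CUDA core per clock):

```python
def mem_bandwidth_gb_s(pin_speed_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate x bus width in bytes."""
    return pin_speed_gbps * bus_width_bits / 8

def fma_tflops(cuda_cores: int, boost_clock_mhz: int) -> float:
    """Peak FP32 throughput in TFLOPs: 2 FLOPs per FMA, per core, per clock."""
    return 2 * cuda_cores * boost_clock_mhz * 1e6 / 1e12

titan_xp_bw = mem_bandwidth_gb_s(11.4, 384)    # 547.2 GB/s
gtx1080ti_bw = mem_bandwidth_gb_s(11.0, 352)   # 484.0 GB/s
print(f"{titan_xp_bw / gtx1080ti_bw - 1:.0%}")  # 13% more bandwidth

print(f"{fma_tflops(3840, 1582):.1f}")  # ~12.1 TFLOPs for the Titan Xp
print(f"{3840 / 3584 - 1:.0%}")         # 7% more shader/texture throughput
print(f"{96 / 88 - 1:.0%}")             # 9% more ROP throughput
```

The same arithmetic against the Titan X (Maxwell) column reproduces the flagship-to-flagship gains quoted above.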
As for power and other design considerations, this hasn’t changed. The Titan Xp is still a 250W card, and it’s still designed like the last Titan X, clad in black with NVIDIA’s current-generation heatsink and shroud design. Simply put, if you’re a regular NVIDIA high-end customer, then NVIDIA has made it very easy to pull out your GTX 780Ti/980Ti/Titan and replace it with the new Titan Xp.
However, before we get off the specifications entirely, there’s one aspect of the new Titan Xp that surprises me: the memory capacity. In previous generations, Titan cards have offered the full memory capacity of their associated GPU, equivalent to NVIDIA’s Tesla and Quadro cards. For the original Titan this was 6GB, and for the Titan X (Maxwell) it was 12GB. With the Titan Xp, however, NVIDIA is still only offering 12GB of VRAM, while the otherwise equivalent Quadro P6000 gets 24GB. This is an interesting departure from the norm for the company.
I have a couple of ideas on why this is, though it’s all speculation. The first is market segmentation: NVIDIA needs to enforce better separation between the prosumer Titan and the professional Tesla. Typically this is done via their respective driver sets and what features the cards offer (the Tesla not being set up for workstations, for example). However, the Titan X (Maxwell) was very popular in the previous generation, and it may be that it did a little too well compared to the Tesla; NVIDIA may be concerned about a repeat performance here, even though they’ve created a much greater level of feature separation via the differences between the GP100 and GP102 GPUs.
The other theory is that NVIDIA can’t have it all – they can’t have super-fast GDDR5X and 24GB of it in clamshell mode at the same time. It’s telling on the memory bandwidth front that NVIDIA has overclocked the Titan Xp’s memory a bit, to 11.4Gbps, even though partner Micron’s GDDR5X officially tops out at 11Gbps. Granted, 12Gbps is coming, but if it were ready, I’d expect Micron to be announcing it and NVIDIA to simply run at 12Gbps. In any case, this contrasts starkly with the Quadro P6000, which does get 24GB of VRAM, but with its GDDR5X underclocked to 9Gbps. Or to take it one step further, there’s the Tesla P40, which doesn’t get GDDR5X at all, only GDDR5. NVIDIA and Micron have definitely pushed the envelope with GDDR5X, so given the additional complexities of clamshell mode, it’s not unreasonable to speculate that the 12GB limit is down to technical reasons along these lines.
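To put rough numbers on that capacity-versus-speed tradeoff, here is a quick sketch using the usual per-pin-rate times bus-width arithmetic, with the pin speeds from NVIDIA’s respective product specs:

```python
# Peak bandwidth (GB/s) = per-pin data rate (Gbps) x bus width (bits) / 8
titan_xp = 11.4 * 384 / 8  # 547.2 GB/s: 12GB GDDR5X, pushed past Micron's rated 11Gbps
p6000 = 9.0 * 384 / 8      # 432.0 GB/s: 24GB clamshell GDDR5X, underclocked to 9Gbps

print(f"Quadro P6000 gives up {1 - p6000 / titan_xp:.0%} of the Titan Xp's bandwidth")
```

In other words, doubling the capacity via clamshell mode appears to cost the P6000 roughly a fifth of the bandwidth the Titan Xp enjoys.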
In any case, NVIDIA’s handling of the Titan Xp and their intended market hasn’t changed from the previous generation Titan. This means NVIDIA is walking an interesting line with partners and customers in positioning this as a prosumer card. At $1200 it’s 71% more expensive than the GTX 1080 Ti, all for one last GB of memory and 5-10% more performance. That last bit of flagship performance from NVIDIA has always come at a price, and the Titan Xp is no exception.
The expected customer base, then, is both professionals and consumers, though one that leans more strongly towards professionals than in the Maxwell generation. Professionals may just as well work with NVIDIA directly, whereas consumers have worked (and generally still do) via NVIDIA’s partners, who of course won’t be promoting the Titan Xp since they don’t get to sell it. Which is not to say that you can’t buy it and game on it – for those for whom money is no object, this will happen – and it’s even conveniently listed on the GeForce website. But even then, the NVIDIA Titan Xp (remember, this isn’t a GeForce) doesn’t get the same level of promotion with consumers and gamers as past iterations have. The prosumer card is decidedly more professional, especially with the almost-as-good GeForce GTX 1080 Ti right below it.
Meanwhile, on a personal note, I’m not at all impressed with the name. NVIDIA named the previous Titan X (Pascal) poorly, and they seem content to continue that trend here. Because the previous Pascal-based Titan was called the “Titan X”, it’s very common to see it informally referred to as the “Titan XP”. Except now we have NVIDIA selling the “Titan Xp”, which is not the same card as the “Titan XP”. I appreciate NVIDIA at least not calling this the “Titan X” yet again – and I assume all of this started from someone wanting to call it the “Titan X Plus” – but they really need to find better names. My suggestion: either pick unique Titan names, or at least go with yearly model numbers, like cars and iPads.
Anyhow, for those whose wallets are deep enough for NVIDIA’s latest prosumer and budget deep-learning card, it will set you back $1200. Like the previous Titan, the NVIDIA Titan Xp is being sold exclusively through NVIDIA’s website, and it went on sale today.
Source: NVIDIA
70 Comments
osxandwindows - Friday, April 7, 2017 - link
Why does Nvidia insist on milking customers for gimped GPUs, anyway?

damianrobertjones - Friday, April 7, 2017 - link
Probably due to being a business. We get all of these mild speed upgrades KNOWING that the next release will also offer a small speed increase. I'm pretty sure that, if it were not for $$$$$, we'd be far further along than we are now with regards to computer tech.

ImSpartacus - Friday, April 7, 2017 - link
If not for $$$$, we'd see less R&D spending and be noticeably behind in overall computer tech.

grant3 - Friday, April 7, 2017 - link
Tell us more about this modern technology which was designed without any money.

nathanddrews - Friday, April 7, 2017 - link
Milking? Gimped? This is a full chip at 1/5th the price of the P6000...

SharpEars - Friday, April 7, 2017 - link
Look at the FP64 performance of all of the cards in the table - gimped!

wumpus - Friday, April 7, 2017 - link
This always irritates me (and is true for AMD as well). You should easily be able to get 1/8th of the single-precision FLOPS from doubles (1/4 should be possible, but requires designing around doubles; 1/2 means you *really* designed around doubles and at least half the transistors are dark during singles).

I'm also wondering just how bad you have to be at software FP to not be able to beat 1/64th in FP16. I mean, come on! That is 128 instructions for a 16-bit multiply(/add?)? You should be able to do that naively in 48, and probably under 32. Personally, I'd rather see FP16 as emphasized as INT8 and used for HD pixels.

There's a lot more a GPU should be doing with FP16 than FP64, but it is embarrassing just how much they cripple both (worse for AMD when they keep talking about HSA).
nathanddrews - Friday, April 7, 2017 - link
Ah, gimped for compute. I was thinking only about games.

Samus - Saturday, April 8, 2017 - link
You clearly don't understand NVIDIA's architecture shift since Maxwell. They aren't targeting the FP64 crowd anymore. Shifting away from FP64 enabled them to increase TFLOPs with the freed-up die area. For those special applications that demand FP64 precision there are other high-end cards, or just consider something from AMD, who still has an architecture providing FP64 in the pro-budget space.

On that note, I still think they've outdone themselves with the 1080 Ti. The Xp likely won't sell well in comparison, even among the target market for the Titan. Which is probably OK, because NVIDIA probably doesn't have a lot of fully-enabled GP102s to go around. After all, these are "perfect" chips with no die flaws, and that is going to be a substantial minority of the wafer.
osxandwindows - Friday, April 7, 2017 - link
They sold the same crap in 2016 for the same price.