58 Comments
grahaman27 - Wednesday, May 28, 2014 - link
It's just a money grab.
dagnamit - Wednesday, May 28, 2014 - link
Yeah, they're in the business of making money. Nvidia thinks that's what the market will bear. They're probably right. Titans have proven quite popular in a market that they've created (CUDA). Now they're reaping the benefits of that effort. Good for them.
If you're buying this card for gaming, you're rich or dumb, or both. The biggest problem is that its best use case for gaming, small form factor 1440p/4K, is thrown out the window by the triple-slot cooler.
ddriver - Wednesday, May 28, 2014 - link
OpenCL + ATI gives you about 10 times better bang for the buck. Only people looking to waste money will go for Nvidia compute solutions.
MadMac_5 - Wednesday, May 28, 2014 - link
There's a lot of inertia in the scientific community, though. For the longest time CUDA was the only API that was well documented enough to use, and as such there would be a lot of resistance to changing it. When code is written and released for free by academics or developed by a grad student/postdoc researcher who moves on to somewhere else after 2-4 years, it's a MASSIVE pain to re-write it for a different API like OpenCL. Hell, there's still legacy FORTRAN code running around the Physics community partly because no one has been able to dedicate the time to port it to something newer and make it as efficient.
SolMiester - Wednesday, May 28, 2014 - link
...for about 100x fewer applications!
heffeque - Thursday, May 29, 2014 - link
You only need it to run your own scientific application, not everybody else's apps.
TheJian - Friday, May 30, 2014 - link
ROFL @ OpenCL doing ANYTHING better than CUDA. Only people wanting to waste money buy cards for OpenCL. CUDA gets real work done. I don't see AMD chanting OpenCL benchmark numbers vs. CUDA at all; that means it sucks. Ryan etc. won't test them against each other either, so again it SUCKS, or he would run it to make CUDA look bad. The problem is YOU CAN'T...ROFL. I'm not a fan of proprietary stuff unless it's better, and in this case it is. Same with G-Sync, until proven otherwise by AMD (I suspect their solution will still have lag, and they'll wait on scalers, unlike NV, who built their own rather than waiting). CUDA is taught in hundreds of universities DUE to billions in funding from NV. It's in hundreds of pro apps.
At work, I don't care whose hardware I'm using, I just want to be faster. OpenCL isn't. Free crap generally is...crap. People don't like working for free; get over it or live with second best for life.
Also, AMD must be money-grabbing too, right? 2x 290X is not $1500. But with a worse name in the business you can't charge what the king does. See Intel vs. AMD, same story: you pay a premium for a premium product. Phase-3 drivers for AMD mean no $3000 money grab. People seem to miss why AMD can't charge what NV does: R&D like CUDA, G-Sync, drivers NOT in phase 3... Get the point, people? This isn't a comment on who I like, just stating the facts of business. Apple had a premium for a few years, but then everyone got Retina-like screens, fast chips, etc... Now there's basically price parity at the top. If AMD stops shipping phase-3 drivers and bad reference designs that can't run as advertised, they will be able to charge more.
Evilwake - Sunday, June 8, 2014 - link
I don't use CUDA to play games. These are supposed to be gaming cards, or at least that's what Nvidia says they are; if they're not, where are the pro drivers? You fanboys believe crap smells like grapes as long as it's your team saying so. It's an overpriced card that wants to be a gaming card and wants to be a pro card. For that money, if I need a pro card, which comes with pro drivers, I'd spend the little extra for the real thing.
Flunk - Thursday, May 29, 2014 - link
I don't think it's just that. I think they don't expect many people to buy them. It's there as a halo product to get people to buy their less expensive cards.
If you're a gamer, going over 2x GTX 780 Ti doesn't make a lick of sense financially, and if you're buying for compute, multiple Titan Blacks make more sense.
dstarr3 - Wednesday, May 28, 2014 - link
As a compute card, the price is generally pretty fair. You could spend a lot more. But as a gaming card, it's a right ripoff.
p1esk - Wednesday, May 28, 2014 - link
As a compute card it does not make any sense. For the same amount of money, you can get 3 Titan Blacks (1.5x flops, 1.5x RAM) or 4 780 Tis (2x flops, same RAM).
And yes, I'm aware that the 780 Ti only does 32-bit flops well, but that's actually more than enough for things like neural network simulations.
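For what it's worth, that price/performance argument is easy to sanity-check. Here's a rough sketch; the list prices and peak FP32 numbers below are approximate assumptions for illustration, not official figures:

```python
# Back-of-the-envelope comparison for a ~$3000 compute budget.
# Prices and peak FP32 throughput are approximate 2014 launch figures (assumed).
cards = {
    "1x GTX Titan Z":     {"price": 2999, "fp32_tflops": 8.1,  "vram_gb": 12},
    "3x GTX Titan Black": {"price": 2997, "fp32_tflops": 15.4, "vram_gb": 18},
    "4x GTX 780 Ti":      {"price": 2796, "fp32_tflops": 20.2, "vram_gb": 12},
}

for name, c in cards.items():
    gflops_per_dollar = c["fp32_tflops"] * 1000 / c["price"]
    print(f"{name:20s} ${c['price']:>4}  {c['fp32_tflops']:5.1f} TFLOPS  "
          f"{c['vram_gb']:>2} GB total  {gflops_per_dollar:4.1f} GFLOPS/$")
```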
Omniscientearl - Wednesday, May 28, 2014 - link
But it does make a lot of sense. You're thinking purely in terms of performance per dollar, but what if we look at performance per watt? Depending on how often you're pulling peak performance, you could be saving quite a bit on your electric bill.
p1esk - Wednesday, May 28, 2014 - link
They are very likely to use the same chips in both Titan Black and Titan Z. What makes you think chips in Z are any more power efficient than chips in Black?
To get TDP down they clocked it lower (peak or not), obviously.
JonathonM - Wednesday, May 28, 2014 - link
For companies which use software like ANSYS, the performance and price are great, but the Titan isn't certified the way Quadro or Tesla cards are. Otherwise we would buy heaps of these for compute. So we buy K40s instead at two or three times the price.
p1esk - Wednesday, May 28, 2014 - link
Well, sure, if you need that ECC memory, or whatever else the Teslas offer, and if money is no object, then it makes sense to buy a K40.
But if you don't need that, buying a Titan Z instead of Titan Blacks or 780 Tis wouldn't make sense.
Gigaplex - Wednesday, May 28, 2014 - link
By aggressively binning for the better performance/watt chips of the bunch (there's a lot of variation in the individual chips they manufacture).
p1esk - Wednesday, May 28, 2014 - link
What makes you think Titan Z offers better perf/watt?
If my math is correct, they reduced TDP by 25% and reduced the clock by 20%.
JarredWalton - Thursday, May 29, 2014 - link
Plus let's not forget: $1000 to reduce overall power use at max TDP by 125W is a huge investment. You're not saving on space, you're losing performance, and you're spending $3000 instead of $2000. Running 24/7 at the typical cost of $0.10 per kWh, you'd only "save" $0.30 per day! Break-even point for running 24/7/365 is thus 3333 days. Anyone out there really believe that:
A) The cards will still be functional in nine years?
B) People would want to use the cards in nine years?
Put it another way: if I offered you an X1800 or 7800 GTX right now for free, would you take it and would you actually use it? I think not. (And I believe in my "box of old stuff" I actually have a newer X1950 that has been unplugged and unused for several years.)
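The break-even arithmetic above is easy to verify; here's a minimal sketch of it, assuming the $0.10/kWh rate, the 125W TDP delta, and the 24/7 operation from the comment:

```python
# Payback time for spending an extra $1000 (Titan Z vs. two Titan Blacks)
# to shave 125W off the combined max TDP, at $0.10 per kWh, running 24/7.
extra_cost_usd = 3000 - 2000      # price delta
power_saved_w = 500 - 375         # TDP delta in watts
rate_usd_per_kwh = 0.10

savings_per_day = power_saved_w / 1000 * 24 * rate_usd_per_kwh
break_even_days = extra_cost_usd / savings_per_day

print(f"Daily savings:  ${savings_per_day:.2f}")            # ~$0.30
print(f"Break-even:     {break_even_days:.0f} days "
      f"(~{break_even_days / 365:.1f} years)")               # ~3333 days / ~9.1 years
```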
mmrezaie - Thursday, May 29, 2014 - link
You forgot to mention that you now have to share the bus bandwidth between two GPUs, and if you don't have that many iterations in your code (bootstrapping), you won't get the whole benefit of two GPUs sharing the same bus.
p1esk - Thursday, May 29, 2014 - link
Does CUDA see this card as two separate GPUs, or as one?
TheJian - Friday, May 30, 2014 - link
I'm sorry, I must have missed that 3 Titan Blacks can be run at 375W total. WOW, did NV sneak some serious improvements into the Titan Black since I read the reviews? Where are these magical Titan Blacks? Same with the 780 Ti. You're missing the point here.
p1esk - Friday, May 30, 2014 - link
Who cares about power? First of all, you're not paying for it - your company or university does - and second, even if you paid for it yourself, an extra 100W = $10/month. Not exactly a bank-breaking amount.
BobSwi - Wednesday, May 28, 2014 - link
Talking about niches.
Wonder what will sell more: GTX Titan-Z, or Core i7 Surface Pro 3.
XZerg - Wednesday, May 28, 2014 - link
Easily the Core i7 SP3. i7 doesn't mean quad-core with HT; it could be one of those low-power dual-cores, e.g. the i7-4550U or i7-4610Y...
SirMaster - Wednesday, May 28, 2014 - link
The Core i7 Surface Pro 3 comes with the 1.7GHz (3.3GHz turbo) i7-4650U with HD 5000 graphics.
tabascosauz - Wednesday, May 28, 2014 - link
People are already beginning to buy 295X2, but I highly doubt anyone will buy this card. Just because it's triple slot doesn't mean it won't run hot and loud; the GTX 690 was about as much as NVTTM can handle without scaling back the clocks.
Not to mention that the Titan Black is still out there...what's the point of this card?
HighTech4US - Wednesday, May 28, 2014 - link
Quote: "People are already beginning to buy 295X2"
How exactly do you run CUDA applications on the 295X2?
Oh, right, YOU CAN'T
Senti - Wednesday, May 28, 2014 - link
And who now needs CUDA? Oh, right: NO ONE.
Yes, we had to bear with CUDA when there was no alternative, but now you'd have to be pretty insane to target it instead of OpenCL. No one in industry likes proprietary standards that lock you to one hardware vendor.
The only strong point of this card is DP performance. But DP and no ECC? Ew, where are you going to use that?
NukNuk - Wednesday, May 28, 2014 - link
Plenty of people are running legacy software that uses CUDA - and when they go to replace their dual-Titan compute setup, they will probably just switch to one of these instead of trying to figure out a new hardware/software combination.
Morawka - Wednesday, May 28, 2014 - link
Deep Sea Oil is run by CUDA.
minijedimaster - Thursday, May 29, 2014 - link
For now.
Evilwake - Sunday, June 8, 2014 - link
That's why gas prices are so high, lol.
Evilwake - Sunday, June 8, 2014 - link
Hey fanboy, the 295X2 is a gaming card, not a pro card. You fanboys are all alike on both sides: as long as your team said so, you believe their crap. I don't buy gaming cards to do CUDA or OpenCL, I buy them to play games, and this card doesn't know whether it wants to be a gaming card or a pro card (sorry, no pro drivers). For that price I would spend a little extra - since it seems money means nothing to you guys - and buy the pro card that comes with pro drivers.
Jon Tseng - Wednesday, May 28, 2014 - link
Re: use as a compute card - does this mean it has double-precision floating point enabled? NVDA usually disables that on gaming cards to segment the market, right?
blanarahul - Wednesday, May 28, 2014 - link
All Titan branded cards have the full FP64 capabilities enabled.
ZeDestructor - Wednesday, May 28, 2014 - link
I would much rather DVI also be abandoned, and instead we just have something like AMD's Eyefinity cards: a single row of 6 mini-DisplayPorts (or 3/4 miniDP + 1 DL-DVI) instead. Using that config and some FPGA-style software-configurable wiring, it's feasible to do conversion to DL-DVI using two ports into a passive adapter. Needless to say, passive conversion to HDMI is also trivial, and I doubt it's much more expensive (if at all, in volume quantities) than the usual bundled DVI to VGA adapters that are usually shipped. Oh, and ditch the MOLEX -> PCI-E adapters too. They're not particularly useful in modern times....
Zok - Wednesday, May 28, 2014 - link
Good catch! You can clearly see the "2.5 slot" size in this photo: http://images.anandtech.com/doci/8069/geforce-gtx-...
Zok - Wednesday, May 28, 2014 - link
Whoops. Posting fail. Sorry.
WithoutWeakness - Wednesday, May 28, 2014 - link
"The triple slot design is also going to be notable since when coupled with the split-blower design of the cooler, it further increases the amount of space the card occupies. Axial fan designs such as the one used on GTX Titan Z need a PCIe slot’s worth of breathing room to operate, which means that altogether the GTX Titan Z is going to take up 4 slots of space."
The card appears to be a 2.5-slot design with a 3-slot bracket which should allow enough room for the axial fan to pull in air even if a card is installed in the slot next to it. An empty slot between cards shouldn't be necessary given the design.
Zok - Wednesday, May 28, 2014 - link
Good catch! You can clearly see the "2.5 slot" size in this photo: http://images.anandtech.com/doci/8069/geforce-gtx-...
GeorgeH - Wednesday, May 28, 2014 - link
I'm confused about the video out comments. As far as I know HDMI and DVI are electrically identical (video), so why would a DP -> HDMI adapter be significantly cheaper than a DP -> DVI adapter? I could see dual-link DVI being more expensive, but a glance at Newegg shows options for far less than $100 (although I've no idea about their quality).
Meaker10 - Wednesday, May 28, 2014 - link
Dual link DVI is more reliable at going above 1080p than HDMI and has been used in many high quality but slightly older displays.
SirKnobsworth - Wednesday, May 28, 2014 - link
Certain DisplayPort ports allow HDMI or single-link DVI output with a passive adapter, but dual-link requires some electronics.
ZeDestructor - Wednesday, May 28, 2014 - link
DL-DVI (Dual-Link, up to 2560x1600@60Hz or 1920x1200@120Hz) requires active converters that are in the $50+ range. I picked one up from a Dell clearout auction for ~$25 (IIRC) shipped a while ago.
SL-DVI/HDMI (Single-Link, up to 1920x1200@60Hz), on the other hand, can be done by passive adapters since they use fewer pins than DP (HDMI is 19 pins, SL-DVI is 18 pins with more shielding), and most DP source ports will happily switch down to outputting HDMI/DVI signals instead of DP when connected to a passive adapter, which can be made for under $1.
Ryan Smith - Wednesday, May 28, 2014 - link
The above pretty much sums it up.
DL-DVI requires two sets of TMDS signals while SL-DVI/HDMI requires one set. That makes all the difference since TMDS passthrough mode on DP only has enough pins for a single set of TMDS transmitters. Which is why we can do passive HDMI, but have to do active for DL-DVI.
ZeDestructor - Wednesday, May 28, 2014 - link
Weeell.... technically DL-HDMI exists.....
The Von Matrices - Wednesday, May 28, 2014 - link
HDMI 1.3/1.4 runs its single link at 340MHz compared to the standard 165MHz of SL-DVI and HDMI 1.0/1.1/1.2. Therefore, the HDMI port on this card has roughly equivalent bandwidth to DL-DVI and can theoretically drive a 2560x1600/60Hz monitor. Whether or not any 2560x1600 monitors have hardware that can decode the HDMI 1.3/1.4 signal is another question entirely.
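A rough sanity check of that claim; the blanking totals here are the CVT reduced-blanking timings commonly used for this mode (assumed for illustration, since exact monitor timings vary):

```python
# Can a 340 MHz TMDS link (HDMI 1.3/1.4) carry 2560x1600 at 60Hz?
# h_total/v_total = active + blanking, using assumed CVT reduced-blanking timings.
h_total, v_total, refresh_hz = 2720, 1646, 60

pixel_clock_mhz = h_total * v_total * refresh_hz / 1e6
print(f"Required pixel clock: {pixel_clock_mhz:.1f} MHz")    # ~268.6 MHz

links = [("SL-DVI / HDMI 1.0-1.2", 165),
         ("DL-DVI (2 x 165 MHz)", 330),
         ("HDMI 1.3/1.4", 340)]
for name, limit_mhz in links:
    verdict = "OK" if pixel_clock_mhz <= limit_mhz else "too slow"
    print(f"{name:24s} {limit_mhz} MHz -> {verdict}")
```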
Ammaross - Thursday, May 29, 2014 - link
Let's not forget HDMI also does audio, whereas DVI does not.
Casecutter - Wednesday, May 28, 2014 - link
Release day with lots of press releases but no reviews? Kind of tells the story!
I see Nvidia wanting to dump the initial run they've made onto the channel/market and then be done with this with as little fanfare as possible. That means giving out as few cards as possible to a couple of sympathetic reviewers and selling the rest to those who see a cheap compute/CUDA option, to hopefully cover a good portion of their costs.
tipoo - Wednesday, May 28, 2014 - link
Yeah yeah, halo product that 500 people buy. Where is the top to bottom Maxwell range?
HisDivineOrder - Thursday, May 29, 2014 - link
Currently being manufactured to be released end of this year with delays probably pushing it into next year around March? ;)
Ettepet - Thursday, May 29, 2014 - link
The main beauty of this device is its 6GB per GPU. If it were a bit more efficient (speed/power/size-wise) and not so expensive, one or two would be in a PC or PCIe expansion board I plan to buy within a month.
Where are the 4GB+ GTX 780 Tis? Or 8GB+ GTX 790s? Bring them on... :-)
krskipp - Thursday, May 29, 2014 - link
I'm definitely jumping ship to AMD from now on. Nvidia seem to have put the brakes on double-memory cards by essentially calling them Titan and charging double the money. They routinely give their standard cards *just* enough memory for today's applications, guaranteeing you'll need to upgrade in 18 months or so. My 2 680s in SLI would be desperate for an upgrade if I hadn't been able to get the 4GB version that I currently have (I have a 1440p monitor). So given I'll have a choice between a (likely) 4GB 880 or an 8GB rip-off Titan for my next GPU, I'll go with AMD's 290X follow-up, which will probably have at least 5GB of RAM. In fact, since memory is relatively cheap, AMD are missing a trick by not releasing a 6GB version of the 290X for a small premium. It would be a complete Titan killer if it was priced right. Nvidia have just got greedy and cynical with their specs and pricing structure.
piroroadkill - Thursday, May 29, 2014 - link
There's an 8GiB 290X by Sapphire.
Evilwake - Sunday, June 8, 2014 - link
Yeah, and in reviews it craps all over the Titan Black. We need more companies like Sapphire that step out of the box for their gaming fans.
Evilwake - Sunday, June 8, 2014 - link
At least someone sees the light. I used to love Nvidia, but they've gone crazy; for that amount of money I could put in 3 290Xs and blow that card to bits. Oh, I forgot the CUDA thing - oh well, good thing I know how to code in OpenCL.
Hrel - Friday, May 30, 2014 - link
that shit cray
jman9295 - Monday, March 16, 2015 - link
Looking at the PCB in the 2nd pic, I'm assuming that if Nvidia or one of their vendors wanted to, they could fit all the components of a single Titan Black with the full 6GB on a half-length PCB, since that is essentially what they are doing here. Basically, the PCB will end where the PCIe teeth end. And if they omitted one of the DVI ports, they could make it a single-slot card, assuming a thinner heat sink would be able to adequately cool a Titan Black with a single fan.