  • MrSpadge - Tuesday, October 7, 2014 - link

    I'd like to see AMD at least use their current chips properly by tweaking their clock-voltage profiles. R9 285 uses about 1.15 V even at 918 MHz. That's insanely bad for a GPU using the mature 28 nm process. No wonder their energy efficiency is so bad in comparison.

    And Hawaii: AMD also uses high voltages for the top speed bins, which is fine for a high end product with a massive TDP, but they hardly scale that voltage down when they have to lower the GPU frequency due to hitting power and thermal limits. No wonder performance suffers so much once Hawaii hits these limits.
  • Wreckage - Tuesday, October 7, 2014 - link

    They had to water-cool their dual-chip card. AMD pushed their GPUs to their absolute limit in order to compete with the 7xx series. Now the 9xx series is simply on another level. They will need to drop their prices a lot more to even be considered "competitive", at least in price/performance. I believe the technological gap between them is just too much for AMD to overcome.
  • TiGr1982 - Tuesday, October 7, 2014 - link

    Not completely "on another level", and the technological gap isn't that huge, but yes, AMD is lagging now (counting from the GTX 9xx release); they'd better do something about that, indeed.
  • chizow - Tuesday, October 7, 2014 - link

    No, given the fact it is still on 28nm, Maxwell really is on another level. It not only outperforms the previous generation handily, it uses less power in doing so, all on the same process node. On another level.
  • TiGr1982 - Tuesday, October 7, 2014 - link

    You guys are overexcited by Maxwell. Let's quantify things a bit. I would say "on completely another level" if it's really 2X performance/watt, as was advertised. But it seems to be not the case. On desktop (the subject of this pipeline piece), the GTX 780 Ti uses around 250 W and, yes, it is surpassed by about 10% on average by the GTX 980, which uses around 180 W (not really the advertised 165 W; see AT's own power consumption charts in the GTX 980 review).
    So, the practical performance-per-watt ratio of the GTX 980 vs. the 780 Ti is something like 250/180 * 1.1 ~= 1.53.
    Say, 1.5X. Same for the mobile GPU parts, I assume: in the roughly 100 W envelope of the top MXM modules, you get around 40-50% more performance from the 980M than from the 880M with the same number of CUDA cores, which actually does correspond to the promised average 40% increase in per-CUDA-core efficiency from Kepler to Maxwell.
    Yes, that is exceptionally good by itself, considering that it's the same 28 nm, so all the gains come from the thoroughly overhauled GPU architecture (Maxwell vs. Kepler, that is).
    However, it's not the advertised 2X, and it's not the 2X we used to get in the past with the jump to the next mfg node. So, to me, it's not a "totally different level". But yes, it's much better than Kepler and AMD's current products, and yes, it's a problem for AMD now.
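    (As a sanity check of that arithmetic, here is a minimal Python sketch; the 250 W, 180 W and +10% figures are the rough numbers quoted above, not official specifications:)

        # Rough perf/W comparison of GTX 980 vs. GTX 780 Ti, using the numbers quoted above
        power_780ti = 250.0   # approximate measured board power, W (assumed)
        power_980 = 180.0     # approximate measured board power, W (assumed)
        perf_ratio = 1.10     # GTX 980 roughly 10% faster on average (assumed)

        perf_per_watt_gain = (power_780ti / power_980) * perf_ratio
        print(round(perf_per_watt_gain, 2))  # prints 1.53, i.e. roughly a 1.5x perf/W improvement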
  • dragonsqrrl - Tuesday, October 7, 2014 - link

    It's not quite 2X performance per W, but it's close enough that people aren't compelled to challenge the marketing claim. That in itself should be beyond impressive to any informed enthusiast, so your reaction seems a little strange to me. I don't think the comments you're responding to are 'overexcited' about Maxwell, at least no more so than the countless articles by tech journalists praising GM204 since launch. And I'm sure it varies a bit per benchmarking suite, but according to Anandtech the 980 is ~13% faster than the 780 Ti, not taking into account superior overclocking headroom.
  • TiGr1982 - Tuesday, October 7, 2014 - link

    Well, I'm from the natural sciences, so I just like to quantify things a bit. OK, if it's a 13% advantage of the 980 vs. the 780 Ti, then the perf/watt leap of Maxwell vs. Kepler is 1.57 - closer to 1.6X (60%).
    When I said "overexcited", I just meant the perf/watt ratio is not 2X - it's actually smaller.

    Generally, regardless of the particular GPU vendor from the RGB triad (Red = AMD, Green = nVidia, Blue = Intel with its integrated Iris GPUs), it would be nice to see HBM (GDDR5 is getting long in the tooth, more than 6 years on the market already), new mfg nodes, and DX12 in real action. That's what I would call really interesting, IMHO.
  • dragonsqrrl - Tuesday, October 7, 2014 - link

    I'm guessing Nvidia decided to delay HBM a gen for a reason. I'm not super familiar with it, but I'm a little curious about the practical real-world benefits HBM would bring to upcoming-gen cards unless it's actually paired with a GPU that can take advantage of the additional bandwidth, so at least for now it may very well benefit AMD's architecture more than Nvidia's. Has it been confirmed that AMD will be integrating HBM into their lineup next year?

    Given the current state of memory bandwidth constraints, both companies are having to get creative, and it seems like Nvidia has already found a practical solution moving forward for Maxwell (larger cache and more efficient image compression). But I'm guessing this won't last very long; to scale performance further, beyond a big-die Maxwell on a 384-bit bus, Nvidia will need something new.

    ... actually, what ever happened to GDDR6?
  • TiGr1982 - Tuesday, October 7, 2014 - link

    1) HBM or a similar new memory tech for GPUs will happen eventually, but right now, AFAIK, it's all rumours. Nothing is confirmed as of yet.
    2) GDDR5 is a derivative of DDR3, so "GDDR6" would presumably be a derivative of DDR4.
    Currently, DDR4 does not have that much of a bandwidth advantage over DDR3, so I'm just assuming "GDDR6" may not happen - instead, HBM may be used in the future, skipping "GDDR6" altogether.
    3) Indeed, common sense suggests that big-die Maxwell will probably be on a 384-bit bus, but later on, the next GPU generation ("Pascal") may use a different and more advanced memory solution like HBM or similar.
  • chizow - Tuesday, October 7, 2014 - link

    @dragonsqrrl, HBM or stacked DRAM was always on their roadmap for the arch after Maxwell. That recently changed from Volta to Pascal, but it was never planned for Maxwell. Unified virtual memory was however, but that got bumped down to Pascal. Nvidia did do some tweaks to bandwidth for Maxwell in the form of new compression algorithms and it seems to hold up well on 256-bit Maxwell parts at 4K.

    http://www.anandtech.com/show/7900/nvidia-updates-...
  • tuxfool - Wednesday, October 8, 2014 - link

    It is almost guaranteed that AMD will beat Nvidia to be first to use HBM by virtue of the fact that they're co-developing it with Hynix.

    More bandwidth would benefit Nvidia, especially at the higher end. Despite their new colour compression techniques, the 256-bit bus is bottlenecking at higher resolutions vs. the Hawaii 512-bit bus. It is impressively competitive but still limiting.
  • dragonsqrrl - Tuesday, October 7, 2014 - link

    Sorry, posted before I was finished.

    Also, the power consumption results that a lot of people seem to quote when criticizing the 980's advertised TDP come from total system power draw. Ryan reminded readers in his article that this also includes the potential for higher power draw from the processor and other system components feeding the higher-performance GPU. In benchmarks that either isolate the power draw of the graphics card, or nullify the power impact of the processor, the spread widens, which is further evidence in support of Ryan's analysis. I believe the spread between the 780 Ti and the 980 was ~80W, which brings the average power draw of the 980 to ~170W, assuming 250W for the 780 Ti.
  • TiGr1982 - Tuesday, October 7, 2014 - link

    OK, then the perf/watt leap is 250/170 * 1.13 = 1.66.

    Actually, one can look at these things the other way around: if Maxwell (say, GM204) is close to the most optimal GPU it is possible to build on 28 nm, then this means that Kepler was, in a sense, a waste of silicon and power: greatly oversimplifying, the 192-ALU "monolithic" SMX was not an optimal choice. The choice of 4x32 = 128 ALUs in the SMM is presumably pretty much optimal from a computer-science point of view, I would assume. So it took two years of R&D to get from the sub-optimal ALU configuration to the optimal one (plus the rest of the stuff, like bandwidth-saving texture compression, etc.). Very impressive, but not a breakthrough to me - just an optimization of an initially suboptimal design. That's it.

    Then, the same logic says that AMD's GCN, being roughly on par with Kepler, is also seriously sub-optimal in comparison with Maxwell. So nVidia "did their R&D homework" on 28 nm over the last two years, and if AMD didn't, then that's a grade of A (nV) vs. B- (AMD). That's what Maxwell's appearance means. That's the story, IMHO.
  • dragonsqrrl - Tuesday, October 7, 2014 - link

    Totally agree, almost.
  • chizow - Tuesday, October 7, 2014 - link

    @TiGr

    The problem with your comparison is that Nvidia is not deriving their comparisons against GK110 (Huang said so in his keynote, and all the slide decks confirm this); they are comparing against GK104, which is what they consider the true predecessor part to GM204. Their calculations are probably more along the lines of:

    210/180 * 1.7 ~= 2.0, and you can see the ~2x perf/watt they claim is retained.

    But, throw all that out the window and just look at it from an empirical standpoint, since you say you are from natural sciences.

    Say you have one specimen and it is fast, but its successor, all else being equal, is 1.5-1.7x faster while sharing all other physical characteristics and habitat. You wouldn't consider that impressive? If you want to compare to GK110, you don't think a 250W GM210 on the same 28nm process node that delivers even 1.5x the performance of the GTX 780 Ti would be hugely impressive?
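    (For reference, the same arithmetic against GK104 as a minimal Python sketch; the 210 W and 180 W board-power figures and the 1.7x performance ratio are the rough numbers used in this thread, not NVIDIA's official marketing inputs:)

        # Rough perf/W comparison of GM204 vs. GK104, using the numbers assumed in this thread
        power_gk104 = 210.0   # approximate board power, W (assumed)
        power_gm204 = 180.0   # approximate board power, W (assumed)
        perf_ratio = 1.7      # GM204 roughly 1.7x faster than GK104 (assumed)

        perf_per_watt_gain = (power_gk104 / power_gm204) * perf_ratio
        print(round(perf_per_watt_gain, 2))  # prints 1.98, i.e. roughly the advertised 2x perf/W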
  • TiGr1982 - Tuesday, October 7, 2014 - link

    dragonsqrrl and I already settled on an estimated 1.5-1.7x leap above. Yes, it's impressive, but as I said above, it's just a well-done optimization, meaning that their previous design, aka Kepler, was sub-optimal (same with current AMD GCN, now lagging behind Maxwell considerably). There is no "magic" in there - just a new and much better design.

    In a sense, figuratively speaking, it feels a little bit like going from the P4 to the Core 2 Duo on the same (then 65 nm) process - a much better architectural design does the job (of course, it's not a good comparison, CPUs to GPUs, "apples to oranges", but this comes to mind from history). As you like to say, the new ASIC has a much better architectural design than the old one.
  • eek2121 - Wednesday, October 8, 2014 - link

    Please do be careful there...I like AMD as much as most of these AMD morons around here...but they never recovered from THAT jump...
  • chizow - Wednesday, October 8, 2014 - link

    @TiGr, again more nonsense; if anything, Kepler actually performed better than expected. Look at GK110 (780 Ti) compared to GF100 (480), full generation to full generation plus a process node. It is ~2.6x faster, which is absolutely unprecedented. Typically you expect 1.6-1.8x max, with ~2.0x being amazing. And they used less power in the process. That is an amazing leap in performance. It's no wonder Nvidia was able to jump their SKUs up an entire product range and command $700+ on the high end.

    It's really just 2 home runs in a row for Nvidia and we haven't even seen Big Maxwell yet, while AMD is going in the opposite direction. They are still competitive in terms of performance, but their efficiency is tanking in the process.

    And as I already broke out, your calculations for perf/watt are wrong: Nvidia is comparing GM204 with GK104, and once you make that comparison, their 2x perf/watt claim is accurate. They even use this in their bar graphs and slide decks, so while it is nice that you came to the conclusion it is "only a 1.5-1.7x leap", it's just not accurate. :)

    http://international.download.nvidia.com/geforce-c...
  • TiGr1982 - Wednesday, October 8, 2014 - link

    @chizow, I leave you believing whatever you like. Bye.
  • chizow - Wednesday, October 8, 2014 - link

    @TiGr: likewise, I'm sure you believe whatever you like.
  • Zap - Wednesday, October 8, 2014 - link

    The other thing to consider is how they are measuring the 2x. Is it just the GPU itself, or the entire card, including VRM efficiencies and RAM?
  • typographie - Wednesday, October 8, 2014 - link

    I'm fairly sure that the 165 W claim is the 980's TDP, not its actual power draw. Those aren't the same metric. I don't believe I've seen Nvidia put any clear figures on what they believe the 980's power draw to be.
  • chizow - Wednesday, October 8, 2014 - link

    @typographie: The 165W is typical board power at its RATED stock clockspeeds. The problem you see in many of these reviews is that OEMs are using factory-OC'd cards or increasing the power target of the card during testing. Most reviews will show the differences, but some, like THG, carelessly omit this information only to retract/clarify their results later, causing a lot of confusion. You will see the power draw at bone-stock reference configurations is actually very close to that 165W rating and in line with other cards with rated TDPs around it.

    In any case, as we have seen time and again, power draw is a non-linear function of clockspeed, so the more you push clocks, the disproportionately higher the power draw. Nvidia obviously set their reference clocks so that the 980 was still a convincing winner over the 780 Ti/290X while still maintaining excellent TDP numbers.

    This is in stark contrast to a chip like Hawaii, where it quickly became evident AMD blew past that optimal "efficiency" threshold and went for a win-at-any-cost approach to clockspeeds to meet their goal of beating the Titan/780.
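    (To illustrate that non-linearity, here is a minimal sketch using the usual first-order dynamic-power model, P ~ C * V^2 * f, with the simplifying assumption that voltage rises roughly in proportion to frequency; the clock and power numbers below are purely illustrative, not measured values:)

        # Illustrative only: first-order dynamic power model, assuming V scales roughly with f
        base_freq_mhz = 1126.0   # hypothetical reference clock
        base_power_w = 165.0     # hypothetical reference board power

        for freq_mhz in (1126.0, 1250.0, 1400.0):
            scale = freq_mhz / base_freq_mhz
            power_w = base_power_w * scale ** 3  # P ~ f * V^2 with V ~ f, so power grows ~cubically
            print(f"{freq_mhz:.0f} MHz -> ~{power_w:.0f} W")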
  • Alexvrb - Thursday, October 9, 2014 - link

    Funny how you bash AMD's 285 (overclocked models, some poorly designed) for blowing past its rated TDP... yet when overclocked Nvidia chips blow past TDP, it's "well, they're overclocked".

    Oh, and Chizow... Maxwell can't hold boost. It throttles even in ordinary game titles. Whoops! You should bash them for that as hard as you bashed AMD for it. You know, since you're totally not an Nvidia fanboy and all. Tell it like it is... declare that they need a higher TDP and better cooling.

    Now, me personally, I think the 970 is great! Especially at that price. But I know how much you hate it when a card throttles or blows past stock TDP - look at those Crysis numbers. Or is that only when AMD is involved? Hmm... strange...
  • TiGr1982 - Thursday, October 9, 2014 - link

    @ Alexvrb
    1) Overall, objectively, Maxwell really is great, indeed - no doubt. A huge step ahead in power efficiency at a deep architectural level, I suppose.
    2) I agree that, with all respect, it seems there is a little bit of an nV fanboi sitting inside Mr. chizow, so it's not really constructive to argue with him. E.g., he thinks he's always right, that nV rules the world completely, and, in particular, he does not accept the 1.7X Maxwell average power efficiency improvement estimate based on comparing the 16-SMM GM204 vs. the 15-SMX GK110 - so it must be "2X", as Huang said from the stage, no less than that... Huang is a very smart man, but he is so smart that he likes to mix marketing and reality a little bit in his favour.
    So, I'm done arguing.
  • chizow - Thursday, October 9, 2014 - link

    @TiGr1982

    Again, if you're using the wrong source material, of course you're going to come to the wrong conclusion. If this were an exam, you've already failed, but hey, every AMD fanboi will want to believe what they like, even in the face of evidence directly to the contrary. ;)

    http://international.download.nvidia.com/geforce-c...
  • chizow - Thursday, October 9, 2014 - link

    @alexvrb, no, I didn't bash AMD's 285 for blowing past its rated TDP. I simply said that in order to convincingly beat the 280, it would most likely have to blow past its rated TDP and clockspeeds, and it did! It took an overclocked, non-reference part to beat its 3-year-old predecessor. A performance only true AMD fanboys (you) could love.

    This is of course completely different from the Maxwell situation, which, amazingly, you still try to draw parallels to Turdga with (yeah... not an AMD fanboy, not at all!), where in BONE STOCK situations Maxwell has no problems whatsoever meeting its rated TDP. In fact, it's dead last in most of the tests in terms of power draw compared against cards in this class, yet it still easily outpaces its predecessor (GTX 780 Ti) and the competition's best offerings (R9 290/290X).

    Only in factory-OC'd/overvolted results does it blow past its TDP, but the thing is, there's actually MORE benefit, as it EXTENDS its lead over its predecessors.

    So in summary, throttle or not, TDP or not, Turdga underwhelmed and thus was a turd of a GPU. Meanwhile, Maxwell, and both SKUs based off of it, were absolutely amazing, not only outperforming their predecessors while using less power, but also extending this lead even further with additional overclocking headroom.
  • Alexvrb - Thursday, October 9, 2014 - link

    I've seen some Tonga-based cards that do pretty well on power. I've seen others that were pretty terrible - the manufacturer was too aggressive on voltage. Either way, its biggest opponent is pricing. They need to clear out the 280 and drop the 285 down to about $200.

    Anyway I'm still in shock at your complete reversal over throttling and TDP. Shock, I tell you. You can make a lame attempt to label me a fanboy, but it's pretty laughable. You on the other hand are quite transparent, everyone here knows you're a HUGE Nvidia fanboy. Personally I think Maxwell is great. The 970 in particular is the best overall graphics chip on the market at the moment. But I'm not blind enough to think it has zero faults, at least in reference form.

    Oh, and for the record I just helped a friend configure a custom gaming laptop and I had him opt for a 6GB 970M over the M290X. Gosh I'm such a fanboy, what with my history of using and recommending cards from various vendors over the years.
  • TiGr1982 - Friday, October 10, 2014 - link

    @Alexvrb I simply advise you not to feed this troll. I didn't know about his "personality" two days ago. Now I do. So it's not really worth the time spent typing.
  • chizow - Friday, October 10, 2014 - link

    Troll? LMAO, hey I'm not the one that made this erroneous claim:

    "You guys are overexcited by Maxwell. Let's quantify things a bit. I would say "on completely another level" if it's really 2X performance/watt, as was advertised. But it seems to be not the case."

    And then, when given proof as to why that claim is erroneous, he chooses to ignore it.
  • TiGr1982 - Saturday, October 11, 2014 - link

    @chizow
    OK, let it be 2X in the case of GK104 vs. GM204, if you're really happy with that particular comparison (8 old SMXs vs. 16 new SMMs, and so on). I don't mind, eventually.
    But then, please try to stop calling other people idiots, saying "Turdga", and using other abusive language next time. It looks childish and disrespectful, and it tends to make people end any discussion with such a poster.
    BTW, I fully agree with you from the start regarding the CPU side of things in the other recent thread here on AT devoted to the AMD CEO change.
  • chizow - Friday, October 10, 2014 - link

    The 285's opponent isn't just pricing, it's also performance. And why is this? Because it's not significantly better, and in some cases is *WORSE* than its 3-year-old predecessor, all at a higher price point. And this is the turd of a GPU you chose to defend, while making backhanded comments about Maxwell, which delivers unprecedented improvements in performance and efficiency on the same process node! LMAO.

    And what reversal are you talking about with regard to TDP and throttling? Did you not read the Maxwell reviews? Maxwell has a hard TDP limit, meaning it will absolutely throttle, regardless of temps or clocks, as soon as it hits that TDP wall, but it doesn't matter! Because it doesn't impact its relative performance, unlike the dishonest and disingenuous "win-at-all-costs" approach taken by AMD during Hawaii's launch.

    To put this into perspective against the turd of a GPU you love, Tonga: it would be as if Nvidia rated Maxwell at 165W and 1100MHz, but in reality only released factory-overclocked 200W versions that had to run at 1300MHz just to still lose to the 290X/GTX 780 Ti. That's the kind of junk you obviously have a soft spot for, but thankfully, that's not what Maxwell is! It wins even with its hard TDP of 165W, and when that power target is increased, it's in a class by itself while *STILL* drawing less power than the 290X.

    And LMAO at the fanboy comment. You mean you're not a fanboy when you are literally the only idiot on the internet who will mention Tonga and Maxwell in the same sentence as if they are equals, defending AMD's most unremarkable GPU quite possibly ever, all while downplaying Maxwell? Maxwell is beyond reproach; I would be shocked, I tell you, SHOCKED, to see you find faults when you have so vigorously defended Tonga. :D

    I'll be the first to admit I'm a fan of great products, so obviously I'm going to prefer Nvidia products, Intel products, Samsung, Microsoft, Logitech, Asus etc. Unfortunately for AMD, it's been a long time since they've produced one. What's your excuse for being an AMD fanboy again?
  • Gasaraki88 - Thursday, October 16, 2014 - link

    It's on another level when you consider price/performance/power use
  • Dribble - Tuesday, October 7, 2014 - link

    Prices aren't low enough - anything AMD tries to sell above or anywhere close to the 970 hasn't got a chance. Even if AMD dropped the 290X price to $300 they'd struggle, as most would spend the extra $30 for the significantly cooler, lower-power, newer and shinier bit of kit.
  • bebimbap - Tuesday, October 7, 2014 - link

    From Anandtech's own tests, the GTX 970 seems to perform the same as or better than a 290X in uber mode at 1080p/1440p, but at 4K the 290X seems to pull ahead by a tiny lead - not the 20% better that a $70 price difference would imply. It seems that unless you want to run a 4K multi-GPU setup, the Nvidia 970/980 rules the roost, but even then, 3x 290X costs $1200 and 4x 970s cost $1320: 10% more expensive for 33% more GPU. The $120 difference will probably go to the beefier PSU you need for the AMD system... HMMMMMMMmmmmmmm.....
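    (A quick check of that multi-GPU price math, as a minimal sketch; the per-card prices are just rough street prices implied by the totals above, and ideal scaling is assumed, which never holds in practice:)

        # Rough price-per-GPU comparison for hypothetical 3x 290X vs. 4x 970 builds
        cost_290x_build = 3 * 400.0   # ~$1200 total (assumed per-card price)
        cost_970_build = 4 * 330.0    # ~$1320 total (assumed per-card price)

        price_ratio = cost_970_build / cost_290x_build   # 1.10 -> ~10% more expensive
        gpu_ratio = 4 / 3                                # ~1.33 -> ~33% more GPUs
        print(f"{(price_ratio - 1) * 100:.0f}% more money for {(gpu_ratio - 1) * 100:.0f}% more GPUs")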
  • Subyman - Tuesday, October 7, 2014 - link

    To be fair, the Anandtech review uses the reference cooler on the R9 290X, a far cry from the aftermarket coolers available now. I'd like to see an updated review with newer 290s and 290Xs with their widely available custom coolers, and the custom 970s.
  • bebimbap - Tuesday, October 7, 2014 - link

    That's why I included "uber mode", which I remember was added as a "reference point" for custom-cooled versions, and I compared it against the "stock" numbers of the EVGA GTX 970, "to be fair".

    I wasn't comparing the "FTW" to the "uber mode." The FTW version seems to make the gtx970 vs 290x argument pointless.
  • TiGr1982 - Tuesday, October 7, 2014 - link

    Indeed, you're right - the reference blower cooler on the R9 290/290X is a poor design decision, no less than that, with 94 C load temperatures and still throttling. They shouldn't have released a product with such a cooler in the first place...
  • nathanddrews - Tuesday, October 7, 2014 - link

    The Maxwell reviews have not been very honest in this regard. Comparing aftermarket 970s with custom coolers/overclocks against stock cooler 290s seems to be the norm. Where are all the OC vs OC comparisons?
  • chizow - Friday, October 10, 2014 - link

    @nathanddrews

    Maxwell is still much faster, and draws a lot less power too (over 100W less than 780Ti and 290X). This is with the reference 980 cooler too vs. Asus DirectCU II coolers on the 780Ti and 290X.

    http://hardocp.com/article/2014/10/08/nvidia_gefor...

    Maxwell is just that good.
  • garadante - Tuesday, October 7, 2014 - link

    So much this. And I remember seeing that the 290X's power draw decreased when it wasn't running at a blistering 95 C. And Anandtech could easily compare efficiencies by putting a lowish frame-rate cap on cards so the CPU is doing the same work; compare average GPU utilization for that frame rate with power draw. Also, people need to remember that AMD's flagships are competent at compute, which wastes die area and power from a gaming standpoint. GM204 is a lean, mean gaming machine, but not so for compute, and that gives it the die area and power advantage. Sorry for any typos, typing this from a phone.
  • MrSpadge - Wednesday, October 8, 2014 - link

    Using a frame rate cap to compare efficiency would be great for mobile gaming, as it represents a realistic and clever operating mode which could really be used over there. But for desktop systems this would not be representative of what people actually use.

    And regarding AMD's good compute capabilities: while I envy them, I'm using nVidia for my compute tasks simply because they need CUDA. And the only compute benchmarks being compared use OpenCL, where nVidia limits their performance on purpose by not providing their best compiler. That is their real-world OpenCL performance - but it's not representative of what the cards are capable of with optimized software... which is readily available with CUDA. In summary: I'm not convinced the compute difference is as large as the OpenCL benchmarks show, if using CUDA is an option.
  • Flunk - Tuesday, October 7, 2014 - link

    Multi-GPU setups tend to start dropping off in gains after 2 cards. A lot of games aren't really set up for 3 or 4 GPUs and sometimes the performance is actually worse. Even 2 cards can be annoying if you're playing new games that don't have profiles yet or old games that aren't supported.
  • The Von Matrices - Tuesday, October 7, 2014 - link

    You can't use any *70 card in 4-way SLI. Only the *80 cards and Titans support 4-way SLI.
  • piroroadkill - Wednesday, October 8, 2014 - link

    I agree, basically. It's the lower power that does it. Newer and shinier I don't care about, but given two cards with similar performance at the same price, NVIDIA is offering a cooler running card. They're still winning.
  • TiGr1982 - Tuesday, October 7, 2014 - link

    GPU price wars are definitely a good thing for consumers. Not so for AMD this time, though.

    IMHO, the main issue for AMD here is that the power efficiency of their 28 nm GCN GPUs hasn't really improved at all since the launch of the very first GCN GPU, Tahiti, around 3 years ago - back in December 2011.

    Simple example: R9 290X is around 60% faster than my HD 7950 Boost at around the same frequency (that's just, roughly speaking, 44 GCN compute units in R9 290X vs 28 compute units in HD 7950 Boost), but R9 290X is at the same time 50-60% (~100 W) more power hungry. So essentially there seems to be no power efficiency progress with their top GCN GPUs so far (from Tahiti to Hawaii).

    So, AMD should clearly do something about this in their future products.
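    (A minimal sketch of that efficiency comparison; the 1.6x performance ratio and ~1.55x power ratio are the rough figures from the example above, not measured data:)

        # Rough perf/W comparison of R9 290X (Hawaii) vs. HD 7950 Boost (Tahiti), numbers as assumed above
        perf_ratio = 1.6     # 290X ~60% faster (assumed)
        power_ratio = 1.55   # 290X ~50-60% more power hungry (assumed midpoint)

        perf_per_watt_gain = perf_ratio / power_ratio
        print(round(perf_per_watt_gain, 2))  # prints 1.03, i.e. essentially no perf/W progress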
  • Mondozai - Tuesday, October 7, 2014 - link

    Tonga is going to be a lot better than Hawaii was. Although, my 290 Tri-X works very well with the vast majority of games. It's super quiet and has very low temperatures, and I have only 2 fans in my case, as I put a premium on a quiet-running PC.

    If the 290 gets to about 270 dollars or so, the value proposition becomes that much more compelling. Nvidia is not going to have HBM next year, which AMD will have. Until then, AMD could/should build goodwill by going further on the price war. The drivers are pretty damn good right now, certainly better than their reputation.
  • TiGr1982 - Tuesday, October 7, 2014 - link

    Tonga is somewhat better from the power/performance perspective, but the issue with Tonga is that it's not a top GPU, just a better Tahiti replacement, and there is still no full Tonga card ("Tonga XT"), like an R9 285X, as of yet. It may be released later; who knows.

    Rumors about HBM for AMD Radeons next year are unconfirmed; hopefully they turn out to be true (it's still too early for this stuff to really be disclosed, I guess).
  • MrSpadge - Wednesday, October 8, 2014 - link

    Tonga is a little more efficient, yes, but it still can't match Kepler. Just compare the R9 285 and the GTX 670: both are cut down to a comparable degree and consume about the same power, whereas the GTX 670 is faster and a significantly cheaper chip.
  • HisDivineOrder - Tuesday, October 7, 2014 - link

    Things are going to look very grim for AMD once the 960 releases. Given that the 970 is already at the $300-350 range (best case scenario with sales to come once it stops selling out), the 960 is likely to hit a $250-275 range, which is where the currently released R9 285 (AMD's first Tonga part) is.

    Assuming that the 960 is every bit the hellion that every Maxwell thus far out the gate has been, you have to imagine that at $250-275, it's going to wallop AMD hard in a duel against the lower end Tonga.

    Which leaves room for the uncut Tonga... where? 275-295? Yeah right. Price drops should bring the newer Tonga essentially to slot in where they originally positioned the current Tonga, which immediately undercuts its profit margins before it's more than a month out the gate. And that's assuming that even the unreleased Tonga can match the 960 or even come close. It might not. Given the tendency so far, nVidia's got a real winner with Maxwell while Tonga came in at essentially JUST matching the Tahiti parts it's replacing. That won't be enough.

    And that's assuming nVidia doesn't pull another 970-like move and really go for the kill with a $200-230 price point on the 960. That'd squeeze AMD's latest GPUs into trying to find a profitable place in the $200 or less markets...

    It's hard to believe that these cards could just come out of nowhere and crush AMD's entire product stack so easily, but I guess that's what happens when you sit on essentially the same products you designed over two years ago, rebadging them and slowly trickling out, over the course of years, the newer designs that were meant to arrive within months...
  • SunLord - Tuesday, October 7, 2014 - link

    Tonga is meant to be a direct replacement for the 280 and 280X so we will likely see the price points move around to something like this.
    270 at $135
    270X at $155
    280 at $175
    285 at $200
    280x at $235
    285X at $265
    290 at $290
    290X at $360

    The 280 and 280X are good enough that they will stay in production until it's no longer profitable to sell them at lower price points.
  • The Von Matrices - Tuesday, October 7, 2014 - link

    I don't see a reason to keep any Tahiti GPUs in the lineup when the 285X becomes available. AMD's lineup already has too many cards as is; there's a lot of overhead in keeping so many SKUs around.
  • SunLord - Wednesday, October 8, 2014 - link

    I can see them staying on the market for at least another 6 months if not a year longer depending on chip supply. It all comes down to how many Tahiti wafers AMD bought
  • frenchy_2001 - Tuesday, October 7, 2014 - link

    I feel the Nvidia/AMD fight in graphics is now mirroring the Intel/AMD fight in processors.
    AMD is trying to fight very efficient opponents by brute force. They used to be level or even ahead of the game, but could not take advantage of it and are now completely outpaced.
    In both domains, their competition targeted lower power consumption (Intel starting with the Core architecture, and now Nvidia with Kepler), with the first gen barely hitting the target (Intel's original Core, and Kepler) but the refresh getting there easily (Intel's Core 2, Nvidia's Maxwell).
    If they have no counter, I'm afraid we'll see a repeat of that (Intel going on to target lower and lower TDPs while keeping performance stable, while AMD retreated to lower-cost markets), with AMD going for unreasonable power usage to get some performance out of its architecture (see the R9 290X, or the FX processors at 220W).
  • kyuu - Tuesday, October 7, 2014 - link

    We'll have to see what AMD counters with before we can say the situation is going to mirror the one with AMD and Intel. In any case, the 290(X) hardly had "unreasonable" power usage. The issue was never one of power usage. The issue was that the stock cooler was garbage.
  • Dribble - Wednesday, October 8, 2014 - link

    It is power usage. Total performance = performance per watt * number of watts.
    If you have a less efficient architecture, that limits performance. AMD was less efficient vs. Kepler, so they had to produce a stupidly high-power part to compete, and now vs. Maxwell it becomes impossible.
    It's more obvious in mobile, where you are basically limited to 100W for the biggest parts - we haven't seen a top-end AMD mobile chip since Nvidia produced Kepler, as they couldn't compete at equal power usage. Now, vs. mobile Maxwell, they are far, far behind.
    AMD's biggest problem is not the quality of their coolers; it's that their current architecture is a long way behind their competition in efficiency, so far behind that they basically can't compete.
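    (A minimal sketch of that identity applied to a mobile power budget; the 100 W cap and the relative efficiency figures are rough assumptions from this thread, not measured values:)

        # Total performance = (performance per watt) * (watts available)
        power_budget_w = 100.0   # assumed top-end mobile power budget
        eff_kepler = 1.0         # normalized perf/W baseline (assumed)
        eff_maxwell = 1.5        # ~1.5x better perf/W (rough figure from this thread)

        perf_kepler = eff_kepler * power_budget_w
        perf_maxwell = eff_maxwell * power_budget_w
        print(perf_maxwell / perf_kepler)  # 1.5: at a fixed power cap, the perf gap equals the efficiency gap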
  • TiGr1982 - Wednesday, October 8, 2014 - link

    Indeed; maybe a jump to the next manufacturing node can help them, if it happens next year (2015).
  • hammer256 - Wednesday, October 8, 2014 - link

    Well, the good news is that, at least in this case, Nvidia does not have any advantage in manufacturing node. So it comes down to who allocated their transistors to best suit the task at hand. You can choose different design points to compromise/optimize for, but it's not like Nvidia has access to some magic sauce that AMD can't get. So it comes down to lots of work, money, and time to decide on a design point and implement it. Personally, I would not be surprised if for the next GCN (er, next next?) AMD decides on a design point that gets them excellent efficiency.

    So yeah, I would say that, lucky for us, the playing field is level for GPU competition, as opposed to the CPU side. Maybe that 16nm FinFET at TSMC, or whatever Samsung or GlobalFoundries cook up, can allow AMD to catch up a bit, assuming that they do want to catch up. Frankly, that x86 licensing thing is freaking annoying. Talk about a monopoly at work...
  • D. Lister - Friday, October 10, 2014 - link

    The problem with AMD is, instead of focusing on their primary markets (CPU/GPU), they keep spreading their resources thinner and thinner by trying to expand into newer markets and product lines. It is almost as if they're hoping that with their jack-of-all-trades approach, they might get some lucky advantage somewhere (ala A64 vs P4), so then they can focus more resources in that product line to make some quick bucks.
  • Hrel - Wednesday, October 8, 2014 - link

    Eh, the R9 280 is still a better buy than the GTX 760; power consumption is pretty much identical between those two. The sooner Nvidia can get Maxwell down the stack to "sane" prices the better.

    As an aside, does anyone remember when you could get the GTX X60 level cards for $125-$150? I miss those days.
  • Impulses - Wednesday, October 8, 2014 - link

    Hmm, dunno about GTX x60 cards as x60 has only been a valid NV moniker for about a fourth of the time I've been buying video cards, probably less. (GTX xXxx not too long ago, GF XXXX before that, etc).

    I've always paid $200-300 for the 2nd best card in a line tho, seemed the last couple years that went up drastically ($400 for an R9 290 was considered high value, thankfully I paid $350 + 360 for mine). By that token even GTX 970 is priced a little on the high side even tho it's considered a terrific value given recent pricing.

    Maybe I need to go back and recheck what I've paid in the past... I know my HD6950 were no more than $250 each tho they were higher at launch, and the GTX 260 was $200, can't remember past that.
  • Impulses - Wednesday, October 8, 2014 - link

    I never had anything in the 8800 line and its millions of rebadges, I think; I remember a Radeon 9700 before the GTX 260. Just looked up the price I paid for my NV GF 6800 tho and it was $350 (I think it came bundled with Far Cry 1!).

    So I guess I have actually paid more for cards in the past and just got lucky with timing and whatnot over the last 2-3 cycles. Email receipts are too jumbled to go looking at what I paid for my earliest NV TNT & Riva cards, or the first few Geforces, never mind the Voodoo cards.

    I probably did pay $300+ for some. Flagship top-tier card pricing has certainly gone whackadoo with your Titans and whatnot, but NV & AMD/ATI have always done a good job of filling the midrange and having a card at every conceivable price point; it's just a matter of how much bang for the buck vs. rebadging at any given time.
  • theflow4321 - Wednesday, October 8, 2014 - link

    What the fuck are you guys talking about? Power efficiency? Your last bastion/refuge? Sorry, I did not know you were running server rooms. Welcome back to earth. The last batch of NV cards runs faster because they boosted the clock, and they could do so because their design has only 5 billion transistors or so, with far fewer cores too - something AMD could easily tweak as well: reduce the number of cores vs. frequency. The more texture-oriented the game, the more you get from clock frequency; the more polygons and lighting, the more you get from compute/math power. Here are the AMD cards beating the latest NV offering, to cool you down, in Shadow of Mordor, one of the latest and greatest game releases.
    http://goo.gl/frv4hd
  • hammer256 - Thursday, October 9, 2014 - link

    Dude, relax. For Maxwell, Nvidia chose a good design point and I'm sure had some clever ideas and all that, but fundamentally they are still operating under the same constraints AMD is working with, namely the 28nm process. So it's not like AMD can't do what Nvidia did. Patience, and see what the next round from AMD looks like. No need to be so angry about it...
  • D. Lister - Friday, October 10, 2014 - link

    "Power efficiency? Your last bastion/refuge?"

    It would be the "last refuge" if it was the ONLY advantage of Maxwell (or even Kepler, for that matter) over its competition.

    Secondly, greater power efficiency directly correlates with better thermals, which together mean greater overclocking headroom, which in turn means higher performance.

    "The last batch of NV cards runs faster because they boosted the clock, and they could do so because their design has only 5 billion transistors or so, with far fewer cores too - something AMD could easily tweak as well: reduce the number of cores vs. frequency."

    Erm... again, your point would be valid if some Nvidia fanboy were boasting about their GPU running at higher megahertz.

    "Here are the AMD cards beating the latest NV offering, to cool you down, in Shadow of Mordor, one of the latest and greatest game releases."

    The Maxwell drivers are far from mature so early in the architecture's life cycle. Also, it wouldn't be very wise to base a GPU comparison on a single cherry-picked benchmark.
  • ol1bit - Saturday, October 11, 2014 - link

    I've had 2 ATI/AMD cards (the first was the original Radeon) out of all the cards I've owned. Nvidia drivers have always been better for me, but as a side note, I love competition, so I wonder if AMD has any valid response to the 980 (which is now my new card, with a 670 & 460 before that)?
  • TiGr1982 - Saturday, October 11, 2014 - link

    No, currently they don't - the R9 290X is slower overall than the GTX 980, and the R9 290X is MUCH more power hungry at the same time. That's the issue AMD still has to address.
    The R9 290X is old tech overall - Hawaii is just a "widened" Tahiti plus a few changes - and Hawaii's power efficiency is still the same as that of the almost-3-year-old Tahiti, so at best it's a good competitor to Kepler, but not to Maxwell. Congrats on your new GTX 980 card - it's really new tech and a big step forward in graphics.
  • atlantico - Sunday, October 19, 2014 - link

    I don't care much about power usage; as long as my 1000W PSU can run it and the final performance is good, I give power usage a pass.

    It's a practical issue, nothing else. If the GPU can be cooled adequately, be it with air or water, fine.

    However, as it is with every type of GPU or CPU, better performance per watt allows for better performance overall. But that's not my concern as a user, that's something the engineers have to lose sleep over.

    I'm not running a rendering farm, just a gaming PC.
  • madwolfa - Monday, October 20, 2014 - link

    GTX 970/980 out of stock for the last week? They've been out of stock since launch.
