
  • Stuka87 - Saturday, August 23, 2014 - link

    I am not sure I understand this card. It looks like it is going to be slower than a 280, but it costs more? You'd think dropping it down to a 256-bit bus would be so that they can drop the cost of it, not raise it.

    Plus the naming scheme is a bit confusing, since the 280X is faster than the 285, but the number is lower.
  • Rockmandash12 - Saturday, August 23, 2014 - link

    It uses the new Tonga chip, so it has a lower TDP and all the new features like TrueAudio.
  • Frenetic Pony - Saturday, August 23, 2014 - link

    Which is cool and all, but doesn't help with the confusing naming part.
  • Mr Perfect - Saturday, August 23, 2014 - link

    What someone on another site claimed was that the third digit in the number indicates whether it's a refresh part, while the X suffix tells you which is the performance part. Meaning the 280 refreshes to the 285 and the 280X refreshes to the 285X, but the 280X would still be faster than the 285 since it, well, has an X.

    If true, it's the most confusing naming convention since... well, they're all pretty bad. But still, bigger number traditionally means faster. This is going to confuse people.
  • Samus - Sunday, August 24, 2014 - link

    I still find it ironic that the NVidia GT430 is faster than a GT630 (they're both the same chip, but many GT430s had 3200MHz DDR3 on a 128-bit bus). The confusion is further compounded when you discover there are many GT430s that have a 64-bit bus!

    It's almost like somebody from GM worked at NVidia and used different revisions without changing the part number...too soon?
  • Samus - Sunday, August 24, 2014 - link

    Dropping the memory bus to 256-bit will have a negligible impact on performance. GCN has an efficient memory controller, and 384-bit was always overkill for a 1792 SP configuration. My guess is that the extra bandwidth from the 5.5GHz memory is going to offset whatever loss there is from dropping the previous 384-bit memory bus.

    Although this is an apples-to-oranges comparison, it's similar to nVidia's strategy, where the 770 has a 256-bit bus (~1500 cores) while the 780 has a 384-bit bus (~2300 cores). NVidia probably found a 384-bit bus necessary on the 780 for the same reason AMD decided NOT to keep one on the R9 285: you can't saturate a 384-bit bus with "just" 1792 SPs. The transistors saved by reducing the memory bus size can be better utilized elsewhere (or eliminated, reducing power consumption).

    But what's really interesting is the clock speed. AMD shouldn't have any trouble increasing the base clock to 1000MHz and the boost to 1100MHz, similar to the 256-bit 270X... so how is an essentially simplified Tahiti chip only boosting to 918MHz? You'd assume by now this process would have matured enough to do what nVidia has been doing successfully and really push the clocks.
  • ET - Sunday, August 24, 2014 - link

    I assume that the low clock is used to lower TDP.
  • TechFanatic - Sunday, August 24, 2014 - link

    The low boost clock is likely a direct consequence of Globalfoundries' 28nm SHP node.
    If you recall, a few months back AMD announced that it would be making "specific" GPUs at Globalfoundries, and I have it on good authority that Tonga is one of said GPUs.

    Globalfoundries' 28nm SHP process is more power efficient than TSMC's 28nm HP node but clocks significantly lower,
    which is why we only see a 730MHz boost clock for the integrated GPU in Kaveri.
  • lefty2 - Sunday, August 24, 2014 - link

    The article says that it's TSMC 28nm, not Globalfoundries.
  • Samus - Sunday, August 24, 2014 - link

    There is a question mark (?) for manufacturing process in the chart. TSMC's 28nm is, as I said, pretty mature at this point. I don't understand why the clock speed is dropping when millions of transistors are being eliminated by the memory bus reduction, and presumably additional efficiency and tweaks are being added by the revision of Tahiti.

    This leaves the impression this chip will indeed be a GF part, and that is unfortunate because with high-performance Maxwell parts on the horizon AMD is going to need a miracle to compete over the next 12 months. As far as I'm concerned the 750Ti is the best $150-class GPU you can buy, and it can often be had for $120-$130, effectively undercutting AMD by $50.
  • extide - Monday, August 25, 2014 - link

    It's either that (a GF chip) OR they are neutering the clockspeeds so it doesn't intrude on the R9 280X? Maybe? I dunno.
  • ppi - Monday, August 25, 2014 - link

    Both AMD and nVidia are apparently waiting for TSMC to finally ramp up production on its 20nm process. For AMD, given their limited R&D budget, this is clearly the way to go (along with minor optimizations like the ones we see today).
  • HisDivineOrder - Sunday, August 24, 2014 - link

    I suspect a large reason they went with the 384-bit bus at first was that the 280 (and the 7970/50 series it was originally based on) was designed at a time when the card was meant to be the high end, supporting resolutions above 1080p.

    With the stack now topped with the R9 290/X for that resolution and higher with AA, the 280 is largely being used by users topping out at 1080p or less, probably without AA. For those users, I imagine the bus is largely unused and so it makes sense to target them more directly with a product more tailored to their unspectacular needs.
  • Kevin G - Sunday, August 24, 2014 - link

    The 384-bit memory interface of the Radeon 7950/R9 280 was noticeable as resolutions increased, most notably when using Eyefinity or 4K. I suspect that the drop down to a 256-bit interface could also be felt at 2560x1440.

    The drop wouldn't be so bad if AMD had cranked the memory clock. There was a small 10% memory clock bump, but that doesn't offset cutting off a third of the bus width (rough numbers below).
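
    A rough sketch of the math (assuming the commonly quoted 5.0Gbps effective GDDR5 on the R9 280; the 5.5Gbps figure for the 285 is from this announcement):

    ```python
    # Peak memory bandwidth = bus width (bits) / 8 * effective data rate (Gbps)
    def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
        return bus_width_bits / 8 * data_rate_gbps

    r9_280 = bandwidth_gb_s(384, 5.0)  # 240.0 GB/s
    r9_285 = bandwidth_gb_s(256, 5.5)  # 176.0 GB/s
    print(f"R9 280: {r9_280:.0f} GB/s, R9 285: {r9_285:.0f} GB/s "
          f"({r9_285 / r9_280:.0%} of the old figure)")
    ```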

    The only saving grace for this part would be if AMD prices it aggressively. I don't see that happening until nVidia responds with Maxwell parts and/or the holiday season hits. I'm hoping to see sub-$200 prices then, which would be acceptable if the R9 280 hangs around $220 to $250 USD.
  • mickulty - Saturday, August 23, 2014 - link

    The 285 has a higher texel rate, higher pixel fillrate, and higher compute performance, each by around 10%. I think it's safe to say it's gonna be slightly faster than the 280 (at appropriately low resolutions).
  • Penti - Saturday, August 23, 2014 - link

    Half the performance in DP though.
  • mickulty - Sunday, August 24, 2014 - link

    Yeah, which is annoying but unlikely to affect gaming benchmarks. Unfortunately, as with the memory bus, AMD seems to be focusing on competitive positioning against Nvidia more than on making a good product.
  • chizow - Saturday, August 23, 2014 - link

    What numbers are you basing this on? Everything I've seen has the 285 being slower than the 280 in terms of both pixel and texel fillrate with less bandwidth, less VRAM, and undoubtedly less compute performance (GCN1.1 was neutered compared to 1.0).
  • mickulty - Sunday, August 24, 2014 - link

    R9 285:
    -3.29TFlops
    -102.8GTexels/s
    -29.8GPixels/s
    (according to the slide in this article)

    R9 280:
    -2.96TFlops
    -92.6GTexels/s
    -26.5GPixels/s
    (according to techpowerup's gpu database)
  • chizow - Sunday, August 24, 2014 - link

    Yeah unfortunately those numbers are misleading as that's literally best-case vs. worst-case. TPU is using base clock numbers for the 280 while AT is using max boost clock for the 285. The 280 has a higher boost clock of 933MHz to the 285's 918 and if other GCN 1.1 parts like Hawaii are any indication, it's much more likely the 280 maintains its boost clocks compared to the 285 (due to low TDP limits). In either case, it just highlights another misstep in AMD's boost mechanics. Complete crapshoot on what these cards can maintain in terms of clockspeeds. Might as well just put a bunch of ??? marks in there for specs and say "Bonne Chance!" Not French btw, just thought it would be more original than good luck.
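
    For reference, a quick sketch of where those theoretical numbers come from, assuming the usual 1792 SP / 112 TMU / 32 ROP configuration for both chips (the gap is entirely down to which clock you plug in):

    ```python
    # Theoretical throughput scales linearly with clock for a fixed shader/TMU/ROP count.
    def theoretical(clock_mhz, sps=1792, tmus=112, rops=32):
        ghz = clock_mhz / 1000
        return {
            "TFLOPS":    sps * 2 * ghz / 1000,  # 2 FLOPs per SP per clock (FMA)
            "GTexels/s": tmus * ghz,
            "GPixels/s": rops * ghz,
        }

    print(theoretical(827))  # R9 280 at base clock  -> ~2.96 TFLOPS (TPU's figure)
    print(theoretical(933))  # R9 280 at boost clock -> ~3.34 TFLOPS
    print(theoretical(918))  # R9 285 at boost clock -> ~3.29 TFLOPS
    ```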
  • Alexvrb - Sunday, August 24, 2014 - link

    "The 280 has a higher boost clock of 933MHz to the 285's 918 and if other GCN 1.1 parts like Hawaii are any indication, it's much more likely the 280 maintains its boost clocks compared to the 285 (due to low TDP limits)."

    Nonsense. 7790/260/260X (Bonaire) is GCN 1.1 and it does just fine. GCN 1.1 actually has improved PowerTune capabilities. The 290 series is fine once cooled properly - hence aftermarket cooling solutions helping tremendously. I'm gonna go out on a limb here and say that the 285 is going to hit turbo clocks just fine, especially given the reduced TDP and less heat. PowerTune limits by themselves only really seem to become an issue when overclocking or when you don't have enough cooling.

    The TrueAudio support is a nice bonus, and I'm sure it will be practically an across-the-board addition in the next gen models. On the APU front it's already in Kaveri, although personally I think it would benefit the smaller cat-based APUs more - maybe next die shrink.
  • chizow - Sunday, August 24, 2014 - link

    The 280 is still clocked higher than the 285's max boost, and there is absolutely no guarantee the 285 consistently hits that number, just as the 290s tended to sit far below their max boost clock, especially during extended gaming periods.

    Also, the 285 is most likely built on TSMC's 28nm HPL process, which specifically sacrifices performance for power efficiency. The low TDP and low leakage of Tonga are most likely a hindrance vs. the leaky but potentially fast transistors of Tahiti and Hawaii. Otherwise it would be clocked higher than the 280 out of the box and leave no doubt.

    http://www.tsmc.com/english/dedicatedFoundry/techn...
    "The 28nm low power with high-k metal gates (HPL) technology adopts the same gate stack as HP technology while meeting more stringent low leakage requirements with a trade of performance speed. With a wide leakage and performance spectrum, N28HPL is best suitable for cellular baseband, application process, wireless connectivity, and programmable logics. The 28HPL process reduces both standby and operation power by more than 40%."

    Translation: Tonga is a cheaper-to-manufacture version of Tahiti (we knew this) but sacrifices speed/performance for power efficiency.
  • Alexvrb - Sunday, August 24, 2014 - link

    Look, friend, I don't mean to be rude, but you're being a little dense. Does the more efficient process hinder peak clocks? Maybe. We won't really know for certain until we see overclocking results or a 285X variant. But I really wasn't arguing that point. I only took issue with your "280 is more likely to maintain boost clocks vs 285" comment.

    I'm not arguing about final performance or peak clocks (although you'll have to wait for a review to see if it actually loses any ground in real-world games). I'm saying that the 290's issues with maintaining boost clocks were HEAT related only. Take a reference 290, rip off the cooler, stick on a better cooler. Problem goes away. Alternatively, buy one that already has better-than-reference cooling. It's NOT a design issue of GCN 1.1.

    Also you completely ignored the obvious and perfect example of the Bonaire based cards, which are also GCN 1.1 and have zero problems hitting max turbo. The 285 will have no problems hitting its turbo speeds. The lower TDP of the 285 is NOT a hindrance in terms of holding boost. GCN 1.1 is not at a disadvantage to 1.0.

    Once again, if you want to argue that it hinders peak clocks and thus speed, that's fine. But they set the boost clock based on what the card can do, and unless you disconnect its fan I'm sure it will boost just fine.
  • chizow - Sunday, August 24, 2014 - link

    Look, friend, likewise, you don't seem to understand: my point was that even if the 285 is able to maintain its boost clocks, it will still clock lower than the 280, as we know the max boost clock of the 285 is lower, most likely due to 1) an efficiency-focused process or 2) TDP limits. Meaning, if AMD could clock it higher and leave no doubt about which is the faster card, they would. But they didn't.

    But going out on a limb might not be the safest bet, given AMD has already demonstrated a terrible track record of maintaining advertised boost clocks, especially on their newer GCN 1.1 parts like Hawaii. A cheaper, more efficient, but lower-performance process, coupled with a focus on efficiency, may mean PowerTune prevents it from hitting those boost clocks as frequently as the 250W Tahiti-based 280 does.
  • Alexvrb - Monday, August 25, 2014 - link

    No, that wasn't your original point. You changed your tune (kind of, sort of), but here's what you said originally, and this is what I disagree with.

    "The 280 has a higher boost clock of 933MHz to the 285's 918 and if other GCN 1.1 parts like Hawaii are any indication, it's much more likely the 280 maintains its boost clocks compared to the 285 (due to low TDP limits)."

    I take issue with this. First, the 285 will not have issues reaching and maintaining boost clocks. Second, the 290 ONLY had issues because of heat - everyone else here gets that. Heat won't be an issue for the 285. Finally, the one you keep ignoring - Bonaire (7790, 260, 260X) is GCN 1.1 too.

    As far as peak clocks or anything else, I'm not arguing about that. The new process might prove to be a burden for overclockers, or a boon. Hard to say - depends how well it takes voltage and how leaky it gets while being overvolted.
  • chizow - Monday, August 25, 2014 - link

    Yes, it was my original point: I was illustrating that the numbers the person I replied to posted were a literal worst case vs. best case. Taking both best cases, the 280 is clocked higher to begin with.

    "Yeah unfortunately those numbers are misleading as that's literally best-case vs. worst-case. TPU is using base clock numbers for the 280 while AT is using max boost clock for the 285."

    Secondly, you CAN'T assume anything about the 285 maintaining boost speeds because you don't know if:

    1) the 28HPL process it's made on prevents it from clocking higher,
    2) the cooler is not as efficient as the 280's proven coolers (the card appears smaller than Tahiti boards, more along the lines of the 270 in size), or
    3) the goal of keeping TDP low prevents PowerTune from maintaining peak Boost clockspeeds within the stated 190W TDP.

    AMD's idea of Boost clocks is CLEARLY NOT the same as Nvidia's, so assuming this card is going to hit its Boost the majority of the time is certainly not a given with AMD's track record in this regard, sorry.
  • Alexvrb - Tuesday, August 26, 2014 - link

    You can run the 290 with better cooling and it holds boost just fine. No need to fiddle with PowerTune unless you're overclocking. Heat was the enemy there. The 190W "Typical Board Power" is not going to hold the 285 back. The only way you'll have a problem is if you unplug the fan.

    Their track record regarding boost is fine in that regard. Their track record regarding _reference cooling_ is somewhat blemished, but that is another story entirely.
  • chizow - Wednesday, August 27, 2014 - link

    And heat output is a function of TDP, which can limit the Boost clockspeeds, same as with this 285. You really don't know, just as I don't know with complete certainty, but it is obvious that if AMD could confidently clock this part higher to leave no doubt about which part is faster (especially given the mis-nomenclature), they WOULD have.

    But this should really be the final bit of evidence needed to give merit to what I have been saying all along. Straight from AT's AMD FX CPU reveal yesterday:

    http://anandtech.com/show/8429/amd-set-to-announce...

    "The E designation is slightly interesting. As a tradeoff for a lower TDP of 95 watts versus the 125 watts of the standard CPU, only the amount of boost time is affected. Base and boost clocks are the exactly the same as non-E chips."

    Wonder why that sounds familiar? This is the EXACT same phenomenon I have already described. It is possible that AMD, in an effort to meet TDP limitations, had to sacrifice Boost uptime, just as is the case with these two otherwise identical FX CPUs with different TDPs. This also happens all over the mobile space, most recently with the Surface Pro 3.
  • Alexvrb - Wednesday, August 27, 2014 - link

    The two are related but your logic is flawed.

    It's really an apples-to-oranges comparison. Imagine if the FX chips didn't sustain their turbos unless you put massive cooling on them... you could then conclude they were bouncing off thermal limits. But that isn't happening! The FX chips are POWER limited, not HEAT limited. A more power efficient model creates less heat, but the important part to remember is that the reason they have to reduce clocks (or sustain the high clocks for a shorter duration) is power limits. You can put the CPU on ice and you STILL won't get more performance.

    But consider the 290X. Opposite problem. You continue to fail to realize that the 290X could reach its boost and sustain it as long as it was cooled properly. No adjustments to PowerTune were needed - the TBP wasn't altered. So the 290/290X were HEAT limited, not POWER limited. Exact opposite situation. As long as they keep thermals under control I am quite certain the 285 will reach and hold its boost just fine.

    But go on spinning and twisting this into an argument about something else. Or you COULD just take back your words and admit GCN 1.1 boosts fine (you're still ignoring Bonaire), that it was a HEAT issue, and that the 285 will boost just fine - even if it IS a little slower overall.

    Oh, and yes, power consumed generates waste heat. Excess heat can force a chip to slow down (throttling - especially evident in high-end graphics or when overclocking), so there are cases where a chip slows down due to heat, and not due to power consumption. Things get messier in mobile chips such as APUs because you essentially have both problems to contend with constantly.
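
    To illustrate the distinction with a toy model (illustrative only, not AMD's actual PowerTune algorithm): the clock steps down whenever either the power budget or the temperature limit is exceeded, so which limit you hit determines whether better cooling helps.

    ```python
    # Toy boost/throttle step: either limit can hold the clock below its boost target.
    def next_clock(clock_mhz, boost_mhz, power_w, power_limit_w, temp_c, temp_limit_c, step=13):
        if power_w > power_limit_w or temp_c > temp_limit_c:
            return clock_mhz - step               # throttle
        return min(clock_mhz + step, boost_mhz)   # ramp back toward the boost clock

    # Heat-limited case (reference 290/290X): better cooling lowers temp_c and boost returns.
    # Power-limited case (the FX "E" chips): cooling doesn't help, power_w remains the ceiling.
    ```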
  • chizow - Thursday, August 28, 2014 - link

    Again, you don't seem to understand. Heat is a function of power, and vice versa. You set your target power based on rated TDP - how much heat you can dissipate for a given form factor and cooling solution. Obviously, the same chips are rated differently, and that starts with their target TDP. The E version will not boost for as long as the non-E, which can sustain those speeds because it has a higher target TDP.

    This is the SAME exact phenomenon I have described with the 285. I've given at least 4 strong indicators of it: AMD implementing this exact kind of TDP rating to throttle boost clocks on FX CPUs, TSMC's own guidance for HPL, a lower target TDP (on the same process), a potentially less efficient cooling solution, and the fact that the target Boost clocks are relatively low compared to both the 280 and 290.

    The 290X suffered exactly the same problem, which you once again don't understand. AMD set a high TDP (250W), but the card actually dissipated closer to 300W under load at those Boost clocks. The cooler was UNABLE to dissipate this much heat, and PowerTune and the onboard thermal regulators then throttled the clockspeeds. Only after more efficient and robust cooling solutions were implemented were those cards able to hit their rated Boost clocks.

    I'm not ignoring Bonaire; it obviously had a power target and TDP that its cooling solution was capable of sustaining. Even taking it into account, AMD is *STILL* only shooting 50% for GCN 1.1 parts with PowerTune 2.0, which is still clearly worse than its GCN 1.0 counterparts.

    Bringing us full circle to my original points: 1) the original 280 was clocked higher and actually hit its Boost clockspeed the majority of the time, unlike GCN 1.1's spotty record, and 2) the 280's boost is clocked higher than the 285's, both of which directly refute the original speculation that the 285 has higher theoretical specs than the 280.
  • bwat47 - Sunday, August 24, 2014 - link

    I've yet to see an AMD card that can't consistently maintain its boost clocks.
  • chizow - Sunday, August 24, 2014 - link

    @bwat47 is that comment serious? Did you miss the 290/X launch debacles with Boost clock controversy and rumors of cherry-picked press samples?
  • JDG1980 - Sunday, August 24, 2014 - link

    To be fair, that was mostly due to the crappy stock cooler. Once we started seeing custom Hawaii designs, this problem went away.
  • extide - Monday, August 25, 2014 - link

    Exactly, that was a COOLING issue, and these cards will produce way less heat so that will be a non issue! Not even a valid comparison!
  • chizow - Monday, August 25, 2014 - link

    And on the other side of that argument, Hawaii was the first ASIC that rendered the stock reference blower designs incapable of maintaining stated clockspeeds. Sure, 3rd party custom coolers eventually remedied the situation 2-3 months later, but the fact remains: these coolers were NEVER required in the past just to hit stock rated speeds. In fact, such coolers were generally reserved for 1) significantly overclocked SKUs, 2) running significantly cooler at the same clockspeeds, or both.
  • Alexvrb - Tuesday, August 26, 2014 - link

    If 290 was GCN 1.0 based and it produced that much heat, it would have had the same problem. That has nothing to do with GCN 1.1, nor boost/TBP/PowerTune flaws.
  • chizow - Wednesday, August 27, 2014 - link

    Sure it does; it was the first SKU and arch where AMD used its ambiguous, unstated base clocks, its overstated best-case Boost clocks, and PowerTune 2.0. AMD also felt the pressure to outperform Nvidia's 780, so it overclocked its hot card to the tits to do so, only to find out the stock reference cooler was insufficient to run the GPU at those speeds. The same impetus is there for the 285, except the pressure will be to convincingly beat the 280, which, from early leaks, doesn't seem to be the case.
  • Alexvrb - Wednesday, August 27, 2014 - link

    7790 was the first chip with GCN 1.1 and PT 2.0. It works fine.

    You know what... it took me a while because you're not very obviously a troll but... you're a troll. Goodbye, sir troll, good trolling.
  • chizow - Thursday, August 28, 2014 - link

    LMAO, troll, good one. I've given numerous examples illustrating my point. And again, even if the 7790 works fine, that's still only 50%, which is still a worse track record than the GCN 1.0 boost parts... proving my point to begin with. The 285 is less likely than the 280 to maintain its Boost clockspeeds, because we know the 280 has no problems doing so.
  • HisDivineOrder - Sunday, August 24, 2014 - link

    They're setting an MSRP they can have some wiggle room with. They can release the cards, get a minor markup over the R9 280 equivalent from people who don't know better, and eventually do "a drop" from MSRP, as most people expect, to match up with pricing at or slightly below the current 280/X cards.

    This is also to give them wiggle room when nVidia takes its ancient 760 and adjusts it lower in the way they usually do. AMD has room to lower their pricing, too, without cutting into the bone.
  • lazarpandar - Saturday, August 23, 2014 - link

    Why would they not call this the 275?
  • garadante - Saturday, August 23, 2014 - link

    I think the bigger thing to watch out for here would be 280s going on some nice sales. If I see them dropping to around $150 after rebates, I'd be very tempted to pick up a second to Crossfire with my 7950. Then I'd be able to hold out until after this upcoming generation. The narrower memory bus and less VRAM seem like a huge turnoff, considering the 384-bit/3GB aspect of the 7950/7970 was what made them so appealing over the competing 670/680 at launch.
  • Flunk - Sunday, August 24, 2014 - link

    Might as well, I've got 2x 280x and there really isn't anything that stresses them until you get up to 4K resolution. I'm sure 2x 280 will last you quite a while assuming you're running at a normal resolution.
  • chizow - Saturday, August 23, 2014 - link

    @Ryan, pretty sure your specs table is off for the R9 290; it should still be a 512-bit bus. Other than that, this should probably read:

    "The R9 285 will take up an interesting position in AMD’s lineup, being something of a refresh of a rebadge/rebrand that spans all the way back to Tahiti (Radeon 7970)."

    Since the R9 280/X was more of a rebrand of the original Tahiti 7950/7970. This card seems about a year late; it would've made a lot more sense to launch it alongside the other GCN 1.1 cards like the Hawaii R9 290/X and Bonaire R7 260/X parts, and as the R9 280/X. Instead it's going to be confusing, branded as the R9 285 yet slower than the older 280 at higher resolutions or with higher levels of AA.
  • JDG1980 - Saturday, August 23, 2014 - link

    Any efficiency gains are welcome, but dropping from 250W (nominal) to 190W with a slight performance hit isn't nearly as impressive as what Nvidia has done with Maxwell. This is going to have a hard time keeping up once the big Maxwell chips come out in a couple of months.
  • R0H1T - Sunday, August 24, 2014 - link

    Except that AMD is "nearly" as efficient with their R7 265 as Nvidia is with the 750Ti ~
    anandtech.com/bench/product/1130?vs=1127

    Also, the Radeon 2xx products are based on GCN 1.1, which is at least a year old (the 7790 launched more than a year ago), while Maxwell is relatively new(er). So until GCN 1.1x parts are launched alongside Maxwell's mid-range offerings later in the year, we won't know which of the two is really more efficient.
  • FriendlyUser - Sunday, August 24, 2014 - link

    Who cares if the 750Ti is more efficient? The performance difference is quite significant. Even the comparison with the 265 shows the 265 to be faster overall, with only 3-4 dB more noise at load.

    From my own experience (I own a 7950) the card is quite cool and reasonably quiet. Why should I care if the 750Ti has a lower TDP? What I want is good performance and price. Power efficiency only becomes an issue if you have excessive noise or huge PSU requirements. With the 285, clearly this is not the case.

    Nvidia speaks of power efficiency because this is all they've got.
  • R0H1T - Sunday, August 24, 2014 - link

    So what exactly are you trying to say here? I merely pointed out that Maxwell isn't as efficient on the desktop as people are making it out to be, and that cramming in more transistors is simply not feasible on 28nm anymore; that's why Nvidia & AMD are both waiting for greater access to 20nm fabs.

    This isn't about efficiency, as it's highly impractical to expect a successor to the 290(X) & 780 (Ti) to be made on 28nm, so everyone will have to wait 6-12 months before the next-gen (real) high-end GPUs hit the market, because at the moment Apple & Qualcomm are way ahead in TSMC's priority list.
  • Samus - Sunday, August 24, 2014 - link

    Maxwell is twice as efficient per "core" as Kepler and about twice as efficient per SP as AMD's "Sea Islands" architecture. If nVidia releases a 1500-2000 core Maxwell GPU this year, AMD will be crushed at the top. The 640 Core 750Ti is the most powerful <80-watt GPU in the world by a huge margin. It can be dropped into practically any OEM system without need for a power supply upgrade.

    When you consider it is as fast as or faster than a GTX660 or Radeon R7 260X (and sometimes as fast as an R9 270), I think it's pretty obvious nVidia is going to reign over the next generation of GPU architecture. AMD's current crop is already stretched so thin that it throttles back from overheating (R9 290X).
  • PEJUman - Sunday, August 24, 2014 - link

    ummm... who pays for your electricity bill?
  • JDG1980 - Sunday, August 24, 2014 - link

    That chart you linked shows *performance*, not efficiency. The GTX 750Ti has a TDP of just 60 watts, while the R7 265 comes in at 150W - two and a half times as much. And for all that, the R7 265 only has a slight performance edge. Therefore, the 750Ti is much more efficient.
  • R0H1T - Sunday, August 24, 2014 - link

    And everyone knows TDP != power consumption; did you even check the power consumption figures right at the end? The R7 265 is ~15% faster than the 750Ti in most gaming benchmarks while consuming slightly under 30% more load power in Crysis 3, so something in the region of 10-20% less efficient than the 750Ti, I'd say!
  • chizow - Sunday, August 24, 2014 - link

    Wow, your analysis is WAY off. You do realize those power figures are for TOTAL SYSTEM POWER at the wall, before PSU efficiency, right? That includes the CPU, mainboard, HDDs, memory, system fans, etc.

    So if the 750Ti is drawing 60W under load and the total system is ~180W, everything else is drawing around 120W. The R7 265 system draws 235W, so assuming the same 120W for the rest of the system, you are looking at 115W for just the 265 (rough math below). That's almost 100% more power consumption for a mere 15-20% increase in performance over the Maxwell part.
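
    A back-of-the-envelope version of that estimate (it assumes the rest of the system really does draw the same ~120W in both runs and ignores PSU efficiency losses):

    ```python
    # Estimate card-only draw by subtracting the rest of the system from the wall figure.
    def card_power(wall_watts, rest_of_system_watts=120):
        return wall_watts - rest_of_system_watts

    gtx_750ti = card_power(180)  # ~60 W
    r7_265    = card_power(235)  # ~115 W
    print(f"R7 265 draws roughly {r7_265 / gtx_750ti - 1:.0%} more power "
          f"for a ~15-20% performance lead")
    ```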

    Clearly, Maxwell is MUCH more efficient than anything AMD has to offer, but this is no surprise, as it is the same conclusion anyone who actually understands these dynamics came to months ago.
  • tuxRoller - Sunday, August 24, 2014 - link

    Except that the R9 285 is about 10% faster with respect to pixel/texel fill rates and GFLOPS.
    Where are folks getting the idea it's slower?
  • Alexey291 - Sunday, August 24, 2014 - link

    They see the turbo frequency being lower than the (older, GCN 1.0 based) 280's and instantly assume that it must be slower.
  • tuxRoller - Sunday, August 24, 2014 - link

    Are you serious? That's it? IF this is a newer arch, that makes little sense to assume. If it's not, then AMD is swapping a faster card for a slower one, charging more for it, AND increasing the product number (implying faster perf). That seems unfathomably stupid of AMD, if true.
  • RaistlinZ - Saturday, August 23, 2014 - link

    I don't see the point of this card at $250 when an R9 280 costs $220, is faster, and has more VRAM. Even if the 285 uses less power, it doesn't make up for a $30.00 difference in price.
  • just4U - Sunday, August 24, 2014 - link

    "The GTX 760 is frequently found at $239 – a hair under the R9 285’s launch price .."
    -------

    It seems to me that it's only been very recently (the last month or so) that 760s could be had in the $250 range (same goes for 280s, for different reasons). Most of the time I've seen them hanging in and around the $300 price point.
  • FriendlyUser - Sunday, August 24, 2014 - link

    The 285 will offer superior performance in most games (and, also, TrueAudio and XDMA crossfire). Plus the launch price is MSRP. Real price will quickly stabilize at approximately $200, dominating the GTX 760.
  • hojnikb - Sunday, August 24, 2014 - link

    Considering how cheap 280X cards are nowadays (at least where I come from), there is little reason to buy this :)
  • hojnikb - Sunday, August 24, 2014 - link

    Used ones, obviously. The mining craze was really awesome if you think about it.
  • bwat47 - Sunday, August 24, 2014 - link

    @hojnikb

    There were some new ones on Newegg for only $249 recently (think it was the Sapphire Dual-X); seems to have gone back up to $289 now though.
  • bwat47 - Sunday, August 24, 2014 - link

    Yeah, they seem to be going for as cheap as ~$249 now, which is a very nice deal for that card. I got mine for $299 when they first came out and am still very happy with that purchase.
  • OrphanageExplosion - Sunday, August 24, 2014 - link

    Over the longer term I can't help but feel that the 3GB 7950/R9 280 is going to be the better deal compared to a 2GB R9 285, regardless of the extra perf.

    As Ryan points out, the next-gen consoles are a game-changer. Available VRAM will trump small percentage boosts in performance - a situation that will only become more important once you scale up to 1440p and 4K.

    Adding Watch Dogs to the benchmark suite at ultra settings would be a good first step. Titanfall too - though repeatable benchmarks there are going to be quite a challenge. These two games are just going to be the beginning though.
  • B3an - Sunday, August 24, 2014 - link

    2GB is unacceptable on a card at this price. Some games already recommend 3GB and this will continue to become more standard as the consoles finally have a lot more RAM to work with.
  • Beany2013 - Sunday, August 24, 2014 - link

    Show me a game that needs more than 2GB of VRAM to keep a consistent framerate.

    Metro Last Light is one of the most graphically impressive games out there, and I've never seen it use more than a gig at any time - although it does have a very efficient way of streaming textures in. Crysis 3 seems to use about 1.5-2GB if available - it'll run happily with less, it just streams the textures in when required rather than preloading them in VRAM first. It's not like you start getting half the frame rate the moment you hit the VRAM limit.

    Bear in mind that the consoles use shared RAM - the 8gb is split between the GPU and CPU, so the VRAM has to be considered against operational CPU ram for physics, AI, sound, and anything else that can't be accelerated in pure hardware, etc.

    2GB is perfectly acceptable IMHO - but in a couple of years' time (once the XBone and PS4 start getting optimised and stretch their legs) it *might* not be, unless a game requires 2GB of VRAM just to draw the scene in front of/around you.
  • OrphanageExplosion - Sunday, August 24, 2014 - link

    Titanfall for one. Watch Dogs, for another.

    Titanfall has no background texture streaming. Everything is dumped into VRAM and 3GB is required just to house the highest quality artwork, regardless of resolution.

    Ubisoft has indicated that perf issues with Watch Dogs are down to a lack of unified RAM and suggests as much VRAM as possible for ultra settings.
  • tuxRoller - Sunday, August 24, 2014 - link

    At what resolution, though? Although I do agree that it has enough RAM, and if you need more, they mentioned a 4GB option.
  • _zenith - Monday, August 25, 2014 - link

    Bioshock Infinite eats up 2.5GB at 1080p with all settings maxed out (in-game UI settings only; no driver mods etc)...
  • Beany2013 - Monday, August 25, 2014 - link

    Sigh. One word, kids.

    Caching.

    Something that TF, BS:I and WD all do to varying degrees. They don't *need* 3GB of VRAM, otherwise you'd not see 2GB GPUs besting 3GB GPUs in the benchmarks when the 2GB card has the better GPU in it - the games are mostly GPU limited, not VRAM limited.

    To prove the point, have a look at the 260X 2GB and 7850 1GB in this benchmark:
    http://www.gamersnexus.net/game-bench/1352-titanfa...

    The difference is not huge on average with Insane graphics at 1080p, which, if the game *needed* 3GB, would be unplayable on both cards - not from the perspective of an elite gamer who spits on anything less than 100fps, but from a technical perspective - it'd be sub-10fps as it constantly loaded entire environments into VRAM as it filled up.

    We're talking a 55fps average vs a 50fps average, which is likely VRAM-affected given that both run the same basic GPU, but it could also be accounted for by the modernisation the 260X got over the 7850: higher clock speed, faster memory, better thermal management meaning more time at peak speed, etc. Either way, the 7850 isn't crippled by the lack of VRAM by any stretch of the imagination.

    Oh, and Ubisoft have already demonstrated their coding incompetence by crippling the graphical fidelity from where it was in the E3 demo, along with code comments like 'is pc, who cares', which shows how little of a toss they gave about the PC version. So I'll be taking no lessons from them on system requirements and optimisation, thanks.

    The fact is that no game out there *needs* 3GB of VRAM unless you are playing at 4K, in full detail, and expect silky smooth framerates. I don't think anyone paying $250 for a GPU expects to get that, as you can only really get it with a truly top-flight card at this stage, which is well north of $300.
  • Beany2013 - Monday, August 25, 2014 - link

    I will concede, though, that it's a funny spec at a funny price point.
  • hojnikb - Sunday, August 24, 2014 - link

    Unless you're playing on 1440p+, 2GB is still plenty enough.
  • B3an - Sunday, August 24, 2014 - link

    No it's not. And when buying a new card you should be thinking of the future, not just current games, some of which already need more than 2GB.
  • hojnikb - Sunday, August 24, 2014 - link

    I can play pretty much any game perfectly fine with only 1GB of VRAM.
    Yeah, some games can USE more than 2GB of VRAM. That doesn't mean they NEED it.
  • PEJUman - Sunday, August 24, 2014 - link

    I play on a 2160p 28" screen. I ended up disabling AA since I can no longer see the pixels from 1.5' away. Not only do I gain some free memory (780 Ti, 3GB), it also increases compatibility with older games that have no official AA support.

    I really think HiDPI displays will disrupt how we set our 3D settings; AA will eventually be obsolete. Personally, I'm starting to lean towards 2160p with no AA vs. my old setup at 1600p with 4xAA.
  • MrSpadge - Sunday, August 24, 2014 - link

    Ryan, Tonga is "rumored pretty surely" to have increased the number of raster engines from 2 to 4 over Tahiti. This should increase game performance at similar clocks and shader counts quite a bit, as Tahiti was never really well-balanced in this regard. We don't know it for sure, but at least mentioning it in the article might have avoided or tamed down a lot of the criticism in the comments here. 2GB vs. 3GB of RAM is pretty much the only drawback left compared to the 280.
  • techguyz - Sunday, August 24, 2014 - link

    I don't like it. It still requires two 6-pins. I like the decrease in wattage... but systems running 400-500W PSUs in the first place aren't typically strong enough on the CPU end to drive such a card. $250 isn't a great price either.

    Less VRAM obviously hurts future-proofing, because once you hit a game that needs more than 2GB, you're automatically crippled regardless of whether you have more than enough GPU power.
    Memory bus speed is very important for things like anti-aliasing.

    This is essentially an R9 270's guts with an R9 280's GPU chip and texture unit count.

    Keep in mind that AMD still lists the R9 280's MSRP as $279, so hopefully we may be looking at a $180 card instead. In that price segment, it would be very competitive.

    As for the 'Tonga' architecture, there's really not much on it. It doesn't really seem to bring anything worthwhile.
  • FermiKel - Sunday, August 24, 2014 - link

    How is it a hard launch when you don't have cards on sale on the day of the announcement?
  • FermiKel - Sunday, August 24, 2014 - link

    Did my comment get deleted? I wonder if companies are back to the days when they announced products before giving them to reviewers?
  • PEJUman - Sunday, August 24, 2014 - link

    Anybody else think that AMD does NOT have a new GPU here? Maybe just a die-harvested version of Hawaii? It wouldn't be the first time for AMD/ATI (the 5830, aka Cypress LE). The ratios and power scaling fit somewhat within the 290X/290 scaling, as does the fact that it's a virtual card and therefore likely a drop-in solution for board makers.

    Maybe I am wrong, but if this is a new design, it's the first time AMD has launched something near the high end without any architecture info/details.
  • haukionkannel - Sunday, August 24, 2014 - link

    I think that both companies have been developing new GPUs for 20nm or even smaller nodes for many years, and have been forced to improvise because 28nm is all that is practically available.
    If this is a cheaper product to make than the 280, then it is a useful product. At first it will be more expensive, just as the Nvidia 750 is quite expensive for its GPU power, but in the longer run the smaller chip will make it more economical.
  • Drunktroop - Monday, August 25, 2014 - link

    ATI is saying that it can score around P7000 in 3DMark Firestrike (w/ 4960X)

    The Tahiti LE was released back in Q4 2012 at ~USD 249, with a similar power rating (185W board power, IIRC).
    Its Fire Strike score should be around P5500 when paired with a 4960X, so this is roughly a 25% improvement.
    Not too bad considering the architecture is mostly the same and we are still on the same process node, but there's still not enough reason to upgrade after almost two years.
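
    (A quick check of that arithmetic, using the scores quoted above - it comes out slightly above 25%:)

    ```python
    # Rough Fire Strike uplift implied by the quoted scores (P7000 vs ~P5500).
    uplift = 7000 / 5500 - 1
    print(f"~{uplift:.0%} improvement")  # ~27%
    ```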
  • HighTech4US - Monday, August 25, 2014 - link

    Who the heck is ATI?
  • piroroadkill - Tuesday, August 26, 2014 - link

    Array Technologies Incorporated, a Canadian GPU company.

    A common name for the graphics division of AMD, and the longest-standing name for Radeon cards. The preferred name for AMD Radeon cards among enthusiasts, as it is the name they've known for the longest time.

    But you were just being pedantic, so, bleh.
