The AMD Radeon R9 380X Review, Feat. ASUS STRIX
by Ryan Smith on November 23, 2015 8:30 AM EST
Meet the ASUS STRIX R9 380X OC
For the launch of the Radeon R9 380X, AMD sampled us with ASUS's STRIX R9 380X OC. Arguably the highest-end of the R9 380X launch cards, the STRIX R9 380X OC carries a factory overclock tied for the largest of any R9 380X, an optional further software overclock, and a $259 price tag.
Radeon R9 380X Cards

|              | ASUS STRIX R9 380X OC            | Reference R9 380X |
|--------------|----------------------------------|-------------------|
| Boost Clock  | 1030MHz / 1050MHz (GPU Tweak OC) | 970MHz            |
| Memory Clock | 5.7Gbps GDDR5                    | 5.7Gbps GDDR5     |
| VRAM         | 4GB                              | 4GB               |
| Length       | 10.75"                           | N/A               |
| Width        | Double Slot                      | N/A               |
| Cooler Type  | Open Air                         | N/A               |
| Price        | $259                             | $229              |
The STRIX R9 380X is the latest entry in ASUS's popular STRIX family of cards. STRIX began as ASUS's brand for upscale video cards, occupying a slot between their standard cards and their high-end Republic of Gamers cards, but with the majority of ASUS's cards now falling under the STRIX branding, it has arguably become their de facto mainstream lineup of video cards.
The STRIX R9 380X OC ships at 1030MHz for the core clock, a 60MHz (6%) boost over the reference R9 380X. On top of that, ASUS offers a pre-programmed 1050MHz mode via their GPU Tweak software, though a further 20MHz amounts to very little in practice. Otherwise ASUS touches only the GPU clockspeed, leaving the memory clock at AMD's default of 5.7Gbps. Out of the box, the STRIX R9 380X OC should be around 4% faster than a reference R9 380X card.
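As a quick sketch of the arithmetic behind those figures (the real-world gain trails the raw clock uplift because the memory clock, and thus memory bandwidth, is unchanged):

```cpp
// Back-of-the-envelope arithmetic for the factory overclock's uplift.
#include <cstdio>

int main() {
    const double refMHz = 970.0, ocMHz = 1030.0, tweakMHz = 1050.0;
    printf("Factory OC uplift:   %.1f%%\n", (ocMHz / refMHz - 1.0) * 100.0);    // ~6.2%
    printf("GPU Tweak OC uplift: %.1f%%\n", (tweakMHz / refMHz - 1.0) * 100.0); // ~8.2%
    // Real-world gains trail the clock uplift because games are also
    // memory-bandwidth limited, and the memory clock is untouched here.
    return 0;
}
```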
Like the other STRIX cards we've looked at this year, ASUS has focused on workmanship and a common visual theme. The STRIX R9 380X OC features a version of one of ASUS's DirectCU II coolers, combining an oversized fan assembly with a three-heatpipe heatsink. The fan assembly in turn uses a pair of the company's "wing-blade" fans, each measuring 94mm in diameter, which accounts for the assembly's overall size.
As is usually the case on ASUS cards, the STRIX R9 380X OC implements ASUS's variation of zero fan speed idle technology, which the company calls 0dB Fan technology. While ASUS is no longer the only partner shipping zero fan speed idle cards, they are still one of the most consistent users of the technology, which, surprisingly, still hasn't made its way into every open air card on the market.
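Neither AMD nor ASUS publishes the exact curve their cards follow, but the control logic behind zero-fan-idle designs is straightforward to sketch. The temperature thresholds and duty-cycle values below are hypothetical, chosen only to illustrate the hysteresis that keeps the fans from rapidly toggling at the cutoff point:

```cpp
// A minimal sketch of zero-fan-idle ("0dB") control logic with hysteresis.
// All thresholds are hypothetical; ASUS does not publish the STRIX's
// actual fan curve.
#include <algorithm>
#include <cstdio>

int fanDutyPercent(int gpuTempC, bool fanCurrentlyOn) {
    const int spinUpTempC   = 65;   // fan starts above this (assumed value)
    const int spinDownTempC = 55;   // fan stops again below this (assumed)
    const int minDuty = 30, maxDuty = 100;
    const int maxTempC = 90;        // duty reaches 100% here (assumed)

    // Hysteresis: the 10C gap keeps the fan from toggling on and off
    // when the GPU hovers near the threshold.
    const bool fanOn = fanCurrentlyOn ? (gpuTempC > spinDownTempC)
                                      : (gpuTempC > spinUpTempC);
    if (!fanOn) return 0;           // the silent "0dB" idle region

    // Linear ramp from minDuty at spin-up to maxDuty at maxTempC.
    const int duty = minDuty + (maxDuty - minDuty) * (gpuTempC - spinUpTempC)
                                                   / (maxTempC - spinUpTempC);
    return std::clamp(duty, minDuty, maxDuty);
}

int main() {
    printf("idle 40C: %d%%\n", fanDutyPercent(40, false));  // 0 (fan off)
    printf("load 80C: %d%%\n", fanDutyPercent(80, true));   // 72% duty
    return 0;
}
```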
Sitting below the fan assembly, the DirectCU heatsink used in ASUS's 380X card is a typical tri-pipe configuration. The aluminum heatsink runs virtually the entire length of the card – extending past the PCB – with a pair of 8mm heatpipes and a 10mm heatpipe providing additional heat transfer between the Tonga GPU and the rest of the heatsink. ASUS's design doesn't make contact with anything other than the GPU – the GDDR5 RAM chips sit uncovered – with the airflow passing through the heatsink being sufficient to cool those chips.
Moving on to the PCB, ASUS has implemented their standard Super Alloy family of MOSFETs, capacitors, and chokes. ASUS uses an 8-phase VRM here, taking advantage of the already-oversized fan assembly to fit a slightly taller-than-normal PCB with room for all of the power phases.
Flipping over to the back side of the card, we find a full-size backplate running the length of the card. There are no critical components on the back of the card, so while the backplate doesn't provide any cooling, it does protect the card and reinforce it against bending. To that end, a small lip extends past the backplate and meets up with the heatsink, preventing the heatsink from flexing towards the board. Small details such as these are why the STRIX cards have consistently been the most solid of the custom cards to pass through our hands this year; the card is well-supported and isn't free to warp or bend.
Looking at the back we can also see the two 6-pin power connectors used to supply additional power to the card, along with the red and white power LEDs for each connector. Like some of their other cards, ASUS has flipped the PCIe power connectors so that the clip is on the back side of the card, keeping the clip clear of the heatsink and making the card easier to plug in and unplug. On a side note, I suspect this will be one of the last cards we review with two 6-pin connectors rather than a single 8-pin connector. The two configurations are electrically equivalent – two 75W 6-pin connectors deliver the same 150W as one 8-pin connector – and with cards like the R9 Nano already shipping with a single 8-pin connector, dual 6-pin cards will become increasingly rare.
As for display I/O, ASUS is using a rather typical 1x DL-DVI-I, 1x DL-DVI-D, 1x DisplayPort, 1x HDMI port configuration. Dual DVI ports, though hardly space-efficient, have been a common fixture on sub-$250 cards this generation, and will likely remain so for some time to come due to the slower adoption of newer display standards in the APAC market, where analog VGA has only recently been phased out.
Finally, on the software front, the STRIX R9 380X OC includes ASUS's GPU Tweak II software. The software hasn't significantly changed since we last looked at it in July, offering the basic overclocking and monitoring functions one would expect from a capable overclocking package. GPU Tweak II allows control over clockspeeds, fan speeds, and power targets, while also monitoring all of these values and more.
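GPU Tweak II's internals aren't public, so as a rough sketch of what any such utility does under the hood, consider the following; the vendor_* functions are invented stand-ins (stubbed so the sketch compiles), not a real API:

```cpp
// A hypothetical sketch of the clock/fan/power control flow an overclocking
// utility implements. The vendor_* functions are invented stand-ins, stubbed
// here so the sketch compiles; they are not GPU Tweak II's actual API.
#include <cstdio>

static int g_coreMHz = 1030;  // the STRIX's shipping clock, per this review

static int  vendor_get_temp_c()                { return 72; /* stub sensor */ }
static void vendor_set_core_clock_mhz(int mhz) { g_coreMHz = mhz; }
static void vendor_set_power_target_pct(int p) { printf("power target: %d%%\n", p); }

int main() {
    // Apply the pre-programmed 1050MHz "GPU Tweak OC" profile, raising the
    // power target so the higher clock isn't immediately power-limited.
    vendor_set_core_clock_mhz(1050);
    vendor_set_power_target_pct(110);  // 110% is an illustrative value

    // Monitor, and back off to the shipping clock if the card runs hot.
    if (vendor_get_temp_c() > 80)
        vendor_set_core_clock_mhz(1030);

    printf("core clock: %dMHz\n", g_coreMHz);
    return 0;
}
```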
Wrapping things up, as briefly mentioned earlier, the STRIX R9 380X OC is the most expensive of the R9 380X launch cards, with ASUS charging a $30 premium over AMD's reference MSRP for a price of $259. Premium, factory overclocked cards aren't anything new, but this puts ASUS in a bit of a precarious spot: with the much more powerful Radeon R9 390 cards starting at $289, the premium price further amplifies the R9 390's spoiler effect.
Comments
FriendlyUser - Monday, November 23, 2015 - link
This is not a bad product. It has all the nice Tonga features (especially FreeSync) and good tessellation performance, for whatever that is worth. But the price is a little higher than what would make it a great deal. At $190, for example, this card would be the best card in the middle territory, in my opinion. We'll have to see how it plays out, but I suspect this card will find its place in a few months, after a price drop.
Samus - Monday, November 23, 2015 - link
Yeah, it's like every AMD GPU...overpriced for what it is. They need to drop prices across the entire line by about 15% just to become competitive. The OC versions of the 380X are selling for only dollars less than some GTX 970s, which use less power, are more efficient, are around 30% faster, and arguably have better drivers and compatibility.
SunnyNW - Monday, November 23, 2015 - link
To my understanding, the most significant reason for the decreased power consumption of Maxwell 2 cards (the 950/960/970, etc.) was the removal of certain hardware from the chips themselves, specifically pertaining to double precision. Nvidia seems to recommend the Titan X for single precision but the Titan Z for DP workloads. I bring this up because so many criticize AMD for being "inefficient" in terms of power consumption, but if AMD did the same thing, would they not see similar results? Or am I simply wrong in my assumption? I believe AMD may not be able to do this currently due to the way their hardware and architecture are configured for GCN, but I may be wrong about that as well, since I believe their 32-bit and 64-bit "blocks" are "coupled" together. Obviously I am not a chip designer or any sort of expert in this area, so please forgive my lack of total knowledge; I ask in hopes that someone with greater knowledge on the subject can educate me and the many others interested.
CrazyElf - Monday, November 23, 2015 - link
It's more complex than that (AMD has used high-density libraries and has very aggressively clocked its GPUs), but yes, reducing DP performance could improve performance per watt. I will note, however, that this was done on the Fury X; it's just that it was bottlenecked elsewhere.
Samus - Tuesday, November 24, 2015 - link
At the end of the day, is AMD making GPUs for gaming or GPUs for floating point/double precision professional applications? The answer is both. The problem is, they have multiple mainstream architectures with multiple GPU designs/capabilities in each. Fury is the only card that is truly built for gaming, but I don't see any sub-$400 Fury cards, so it's mostly irrelevant since the vast majority (90%) of GPU sales are in the $100-$300 range. Every pre-Fury GPU incarnation focused more on professional applications than it should have.
NVidia has one mainstream architecture with three distinctly different GPU dies. The most fully-enabled design focuses on FP64/double precision, while the others drop the FP64 die space in favor of more practical, mainstream applications.
BurntMyBacon - Tuesday, November 24, 2015 - link
@Samus: "At the end of the day, is AMD making GPUs for gaming or GPUs for floating point/double precision professional applications?"
Both.
@Samus: "The answer is both."
$#1+
@Samus: " Fury is the only card that is truly built for gaming, but I don't see any sub-$400 Fury cards, so it's mostly irrelevant since the vast majority (90%) of GPU sales are in the $100-$300 range. Every pre-Fury GPU incarnation focused too much on professional applications than they should have."
They tried the gaming-only route with the 6xxx series. They went back to compute-oriented designs with the 7xxx series. Which of these had more success for them?
@Samus: "NVidia has one mainstream architecture with three distinctly different GPU dies. The most enabled design focuses on FP64\Double Precision, while the others eliminate the FP64 die-space for more practical, mainstream applications."
This would make a lot of sense save for one major issue: AMD wants the compute capability in their graphics cards to support HSA. They need most of the market to be HSA-compatible to incentivize developers to make applications that use it.
CiccioB - Tuesday, November 24, 2015 - link
HSA and DP64 capability have nothing in common. People constantly confuse GPGPU capability with DP64 support.
nvidia GPUs have been perfectly GPGPU capable, and in fact they are even better than AMD's for consumer calculations (FP32).
I would like you to name a single GPGPU application that you can use at home that makes use of 64-bit math.
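For readers following this exchange, the distinction under debate comes down to the width of the arithmetic a kernel performs. Below is a minimal, illustrative CUDA sketch of the same operation in single and double precision; the buffers are left uninitialized since only throughput is at issue, and timing the two launches on a consumer Tonga or Maxwell card would expose the FP64 deficit directly:

```cpp
// A minimal sketch of the FP32-vs-FP64 distinction: the same a*x+y kernel
// in both precisions. Consumer Tonga and Maxwell parts execute the double
// version at a small fraction of their FP32 rate.
#include <cuda_runtime.h>

__global__ void axpy_fp32(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];  // runs on the plentiful FP32 ALUs
}

__global__ void axpy_fp64(int n, double a, const double* x, double* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];  // requires the scarce FP64 units
}

int main() {
    const int n = 1 << 20;
    float  *xf, *yf; double *xd, *yd;
    cudaMalloc(&xf, n * sizeof(float));  cudaMalloc(&yf, n * sizeof(float));
    cudaMalloc(&xd, n * sizeof(double)); cudaMalloc(&yd, n * sizeof(double));

    axpy_fp32<<<(n + 255) / 256, 256>>>(n, 2.0f, xf, yf);
    axpy_fp64<<<(n + 255) / 256, 256>>>(n, 2.0,  xd, yd);
    cudaDeviceSynchronize();

    cudaFree(xf); cudaFree(yf); cudaFree(xd); cudaFree(yd);
    return 0;
}
```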
Rexolaboy - Sunday, January 3, 2016 - link
You asked a question that's already been answered in the post you replied to. AMD wants to influence the market to support FP64 compute because it's ultimately more capable. The lack of consumer programs using FP64 compute is exactly why AMD is trying so hard to release cards capable of it: to influence the market.
FriendlyUser - Tuesday, November 24, 2015 - link
It's not just DP, it's also a lot of bits that go towards enabling HSA: stuff for memory mapping, async compute, etc. AMD is not just building a gaming GPU; they want something that plays well in compute contexts. Nvidia is only competitive thanks to the CUDA dominance they have built and their aggressive driver tuning for pro applications.
BurntMyBacon - Tuesday, November 24, 2015 - link
@FriendlyUser: "It's not just DP, it's also a lot of bits that go towards enabling HSA: stuff for memory mapping, async compute, etc. AMD is not just building a gaming GPU; they want something that plays well in compute contexts."
This. AMD has a vision where GPUs are far more important to compute workloads than they are now. Their end goal is still Fusion. They want the graphics functions integrated into the CPU so completely that you can't draw a circle around them, accessed with CPU commands. When this happens, they believe they'll be able to leverage the superior graphics on their APUs to close the performance gap with Intel's CPU compute capabilities. If Intel releases better GPU compute, they can still lean on discrete cards.
Their problem is that there isn't a lot of buy-in to HSA. In general, there isn't a lot of buy-in to GPU compute on the desktop. Sure, there are a few standouts and more than a few professional applications, but nothing that makes the average non-gaming user start wishing for a discrete graphics card. Still, they have to include the HSA capabilities (including DP compute) in their graphics cards if they ever expect it to take off.
HSA in and of itself is a great concept, and I expect it will eventually gain favor and come to market (perhaps by another name). However, it may be ARM chip manufacturers and phones/tablets that benefit most from it. Some ARM manufacturers have already announced plans to build HSA-compatible chips. If HSA does reach phones/tablets first, as now looks likely, I have to wonder where all the innovative PC programmers went that they couldn't find a good use for it with several years' head start.
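As a closing illustration of the single-address-space idea behind HSA that this thread circles around, here is a minimal sketch using CUDA unified (managed) memory, the nearest mainstream analogue; HSA itself is a separate, vendor-neutral specification, so this stands in for the concept rather than showing HSA's actual API:

```cpp
// A minimal sketch of the single-address-space concept, using CUDA unified
// (managed) memory as a stand-in for HSA's shared virtual memory. One
// allocation is visible to both CPU and GPU with no explicit cudaMemcpy
// staging, which is the ergonomic win HSA is after.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float* data = nullptr;
    cudaMallocManaged(&data, n * sizeof(float));  // visible to CPU and GPU

    for (int i = 0; i < n; ++i) data[i] = 1.0f;      // CPU writes directly
    scale<<<(n + 255) / 256, 256>>>(data, 2.0f, n);  // GPU reads and writes
    cudaDeviceSynchronize();                         // required before CPU reads

    printf("data[0] = %.1f\n", data[0]);             // prints 2.0
    cudaFree(data);
    return 0;
}
```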