The ASUS X99-E-10G WS Motherboard Review: 10GBase-T Networking with Intel’s X550-AT2
by Ian Cutress on November 7, 2016 9:00 AM EST. Posted in: Motherboards, Intel, Asus, 10G Ethernet, X99, 10GBase-T, X99-E-10G WS, X550, X550-AT2
SLI, SLI, SLI
In our last review of a motherboard capable of 4-way GPU gaming, it was eloquently stated that our failure to test the feature was an indication of AnandTech's current path on quality. I would have preferred a discussion about actually being able to source four identical GPUs at the time (and then showing 'wow, it works' in the review). There are plenty of things we don't test as rigorously on each product as some users might like, such as PCIe x1 slot bandwidth and latency, or USB 2.0 performance on CentOS, either due to time, relevance, or, as was the case before, a lack of hardware. This is still the case: I do not have four expensive and high-profile GPUs, and I live a whole continent and an ocean away from our GPU testing facility. I have, however, procured a third identical GTX 980, and will show that at least 3-way SLI works in our test suite.
GPU scaling beyond two cards depends on the game engine and the drivers, and on whether the game implements specific multi-GPU modes to accelerate game features. For some titles the GPU is not the bottleneck: the limit might be CPU performance, PCIe bandwidth, or DRAM, or the game may simply not scale, leaving us reliant on the performance of a single card.
Both GTA5 and Alien Isolation scaled to two cards with our hardware setup, but failed to move going to three. GRID is typically a title that scales with almost anything; however, the jump from two to three cards was only 7%.
Shadow of Mordor gets the best scaling, but only at 4K and not at 1080p. At 1080p the move from one GPU (98 FPS) to two GPUs (150 FPS) is significant, but the step to three GPUs (158 FPS) is not. At 4K the scaling keeps going, from one GPU (39 FPS) to two GPUs (69 FPS) and three GPUs (80 FPS), although the last jump is smaller. At 4K we are running our Ultra preset, indicating that some other compute-heavy part of frame rendering might be the limiting factor in AFR modes.
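To put those numbers in context, the quick sketch below (illustrative Python, not part of our benchmark suite) turns the Shadow of Mordor frame rates quoted above into per-card scaling efficiency; only the FPS figures come from our results, the rest is just arithmetic.

```python
# Minimal sketch: convert the quoted multi-GPU frame rates into scaling
# factors and per-card efficiency. The FPS values are the Shadow of Mordor
# results given in the text; everything else is illustrative.

def scaling_efficiency(fps_by_gpu_count):
    """Return (scaling vs one GPU, per-card efficiency) for each GPU count."""
    base = fps_by_gpu_count[1]
    report = {}
    for gpus, fps in sorted(fps_by_gpu_count.items()):
        scaling = fps / base          # e.g. 1.53x for two cards at 1080p
        efficiency = scaling / gpus   # 1.0 would be perfect scaling
        report[gpus] = (scaling, efficiency)
    return report

shadow_of_mordor = {
    "1080p Ultra": {1: 98, 2: 150, 3: 158},
    "4K Ultra":    {1: 39, 2: 69, 3: 80},
}

for preset, results in shadow_of_mordor.items():
    print(preset)
    for gpus, (scaling, eff) in scaling_efficiency(results).items():
        print(f"  {gpus} GPU(s): {scaling:.2f}x scaling, {eff:.0%} per-card efficiency")
```

Running those numbers shows why 4K is the more interesting case: per-card efficiency with two cards sits near 77% at 1080p but falls to roughly 54% with three, whereas at 4K the figures are closer to 88% and 68% respectively.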
Testing 10GBase-T and X550-AT2
Similar to our GPU testing, we do not have the ideal hardware for Ethernet testing. In our previous 10G motherboard review, we implemented an ESXi 6 platform and used two Windows Server VMs, each with eight threads, 16GB of DRAM, and one of the 10G ports. As a result, each VM had a direct OS-to-OS 10G connection over manually configured IP addresses, and testing was done across that link.
Testing 10G with ESXi was actually more difficult this time around. The X550 series is not supported by the drivers in the default image, requiring the admin to install the relevant plug-in. While this enabled the ports to work in the Windows Server 2016 VMs, ESXi would not allow them to run in VMXNET3 mode, which is typically the high-performance option. I was unable to find a quick solution, and with the X550 controller being newer, deciphering what needed to be done was also a minefield of frustration.
It is interesting, then, to note that our results for the ASUS board and X550 are similar to previous results with the ASRock board using the X540. This is ultimately because the chips are mostly similar, with the primary difference being the way they communicate with the CPU: the X540 requires PCIe 2.0 x8, while the X550 requires PCIe 3.0 x4. The X550 also introduces some professional-level features, but the 10G copper market remains in Intel's hands without another major player (professional environments tend to turn to fiber instead).
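As a rough sanity check on those link widths, the sketch below works through the arithmetic; the per-lane figures are the commonly quoted effective rates for PCIe 2.0 and 3.0 (assumptions, not values from this review), and they show that both configurations leave plenty of headroom for a dual-port 10GbE controller.

```python
# Illustrative sketch: approximate usable PCIe bandwidth versus what a
# dual-port 10GBase-T controller can push. Per-lane rates (~500 MB/s for
# PCIe 2.0, ~985 MB/s for PCIe 3.0 after encoding overhead) are the usual
# ballpark figures and are assumptions, not review data.

PCIE_EFFECTIVE_MB_PER_LANE = {"2.0": 500, "3.0": 985}

def link_bandwidth_gbps(gen, lanes):
    """Approximate usable bandwidth of a PCIe link in gigabits per second."""
    return PCIE_EFFECTIVE_MB_PER_LANE[gen] * lanes * 8 / 1000

x540_link = link_bandwidth_gbps("2.0", 8)   # X540: PCIe 2.0 x8
x550_link = link_bandwidth_gbps("3.0", 4)   # X550: PCIe 3.0 x4
dual_10g  = 2 * 10                          # two 10GbE ports at line rate

print(f"X540 uplink: ~{x540_link:.0f} Gbps, X550 uplink: ~{x550_link:.0f} Gbps")
print(f"Dual 10GbE at line rate needs ~{dual_10g} Gbps in each direction")
```

Both uplinks come out at roughly 30 Gbps of usable bandwidth against the ~20 Gbps two ports can demand, which is why the narrower PCIe 3.0 x4 link on the X550 is not a practical downgrade.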
When we last performed this test with the ASRock X99 WS-E/10G, a number of our readers were very helpful in describing ways in which 10G network performance (with the right hardware and knowledge) could be improved. Given that our test is point-to-point without a managed switch, and given the frustration of learning to debug the environment, I highly recommend reading the post by Jammrock back in that review. But both 10G ports do work, I can tell you that.
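For readers curious what a bare-bones point-to-point throughput check looks like, here is a minimal sketch (illustrative only; it is not the tool behind our review numbers, and the port and addresses are hypothetical) that pushes bulk TCP traffic from one host to the other and reports the achieved rate.

```python
# Minimal point-to-point TCP throughput sketch (illustrative; not the review's
# test suite). Run "server" on one host and "client <server_ip>" on the other;
# the 10G ports sit on a private subnet with manually configured IPs, as
# described in the article. Port and sizes below are arbitrary assumptions.
import socket, sys, time

PORT = 50010        # arbitrary test port
CHUNK = 1 << 20     # 1 MiB send buffer
DURATION = 10       # seconds of bulk transfer

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _addr = srv.accept()
        total = 0
        start = time.time()
        while True:
            data = conn.recv(CHUNK)
            if not data:
                break
            total += len(data)
        secs = time.time() - start
        print(f"Received {total / 1e9:.2f} GB in {secs:.1f}s "
              f"= {total * 8 / secs / 1e9:.2f} Gbps")

def client(host):
    payload = b"\x00" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        sent = 0
        start = time.time()
        while time.time() - start < DURATION:
            conn.sendall(payload)
            sent += len(payload)
    secs = time.time() - start
    print(f"Sent at ~{sent * 8 / secs / 1e9:.2f} Gbps")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])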
63 Comments
maglito - Monday, November 7, 2016 - link
Article is missing references to XeonD with integrated 10Gbps networking in a much lower power envelope (Supermicro and ASRock Rack have great solutions). Also switches from Mikrotik (CRS226-24G-2S+RM) and Ubiquiti (EdgeSwitch ES‑16‑XG).
dsumanik - Monday, November 7, 2016 - link
Fair enough, but one thing this article is NOT missing is better multi-GPU testing, thank you Ian. In this day and age it is important to test every aspect of the board, not take the mfg's word for it, or you wind up being a part of their beta test.
Then when the bugs occur and sales slow, the BIOS team gets allocated to the more popular boards, and you wait in limbo (sometimes permanently) for fixes.
Gadgety - Monday, November 7, 2016 - link
I agree.
prisonerX - Monday, November 7, 2016 - link
The XeonD has 10G MACs, which are not the particularly power-hungry part of 10G Ethernet; it's the PHY block, and in particular 10GBase-T, which is the power hog. XeonD doesn't implement those.
BillR - Tuesday, November 8, 2016 - link
Correct, the PHY is where the bulk of the power is used. I would expect the performance between the XeonD and the X550 to be similar since they use the same basic Ethernet MAC block logic. I would be a bit leery of using another LAN solution though; the Intel solution has been pretty rock solid. A problem I rarely have to think about is the best problem of all.
ltcommanderdata - Monday, November 7, 2016 - link
You mentioned PCIe switches add a little bit of overhead which isn't a problem for graphics cards, but is the small added latency likely to be a concern for more sensitive applications like audio cards? Or is it better to use PCIe slots that are not on the PCIe switch for those?
Also is there any sense yet on a time-to-market schedule for 2.5G/5G Ethernet controllers and when motherboards and routers will start showing up with them?
TheinsanegamerN - Monday, November 7, 2016 - link
My guess is never. Outside of a very specific niche, nobody needs more than 1Gbps.
JoeyJoJo123 - Monday, November 7, 2016 - link
>My guess is never, nobody needs HD. The human eye can't see past 640x480 interlaced.
Eden-K121D - Monday, November 7, 2016 - link
nah 320X240 is the max
prisonerX - Monday, November 7, 2016 - link
640K should be enough for anyone.