GIGABYTE BRIX Gaming UHD GB-BNi7HG4-950 mini-PC Review
by Ganesh T S on October 28, 2016 7:30 AM EST

HTPC Credentials
The previous BRIX Gaming BXi5G-760 had two 40mm fans that had to work overtime to keep things cool. With the redesigned chassis, the GB-BNi7HG4-950 can handle the cooling duties with just one 80mm fan. Under idle conditions and during hardware-accelerated tasks like video playback, the fan spins at less than 700 RPM. Out of all the mini-PCs with discrete GPUs that we have evaluated, this one ranks in the quieter half. This is a good thing for users intending to throw HTPC duties at the PC, though it is not going to satisfy the purists. Under sustained loading conditions such as gaming, the unit does get loud (rough sound level measurements using a smartphone app showed around 55 dB at the bottom of the unit, compared to around 70 dB under similar conditions for the Zotac ZBOX MAGNUS EN980).
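To put that gap in perspective, a quick back-of-the-envelope conversion (a Python sketch using the rough smartphone-app readings quoted above) shows what a 15 dB difference amounts to in sound pressure:

```python
# Rough comparison of the two measured sound levels. The dB scale is
# logarithmic: every 20 dB corresponds to a 10x change in sound pressure.
def pressure_ratio(db_a: float, db_b: float) -> float:
    """Sound-pressure ratio implied by a difference in dB SPL."""
    return 10 ** ((db_a - db_b) / 20)

brix_db = 55   # GB-BNi7HG4-950 under gaming load (smartphone-app estimate)
zbox_db = 70   # Zotac ZBOX MAGNUS EN980 under similar load

print(f"{pressure_ratio(zbox_db, brix_db):.1f}x the sound pressure")
# -> ~5.6x: the EN980 emits about 5.6 times the sound pressure, which
# listeners typically perceive as roughly two to three times as loud.
```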
Refresh Rate Accuracy
Starting with Haswell, Intel, AMD and NVIDIA have been on par with respect to display refresh rate accuracy. The most important refresh rate for videophiles is obviously 23.976 Hz (the 23 Hz setting). Similar to other systems equipped with NVIDIA GPUs, the GIGABYTE GB-BNi7HG4-950 has no trouble refreshing the display at a rate close to this setting. Out of the box, the display refreshes at 23.971 Hz. Tweaking custom resolution settings should make it possible to match the near-perfect numbers delivered by modern Intel GPUs.
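To see why even this small deviation matters, consider how quickly the mismatch accumulates into a dropped or repeated frame. A minimal sketch, using the 23.971 Hz out-of-the-box figure measured above:

```python
# A frame has to be dropped or repeated every 1/|f_source - f_display|
# seconds, since that is how long it takes the mismatch to accumulate
# one full frame of drift.
source_fps = 24000 / 1001   # 23.976... fps film content
display_hz = 23.971         # out-of-the-box refresh rate measured above

drift_per_second = abs(source_fps - display_hz)
print(f"One frame drop/repeat every {1 / drift_per_second:.0f} seconds")
# -> roughly every 199 seconds, i.e. a visible hiccup every ~3.3 minutes
```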
The gallery below presents some of the other refresh rates that we tested out. The first statistic in madVR's OSD indicates the display refresh rate.
Network Streaming Efficiency
Evaluation of OTT playback efficiency was done by playing back our standard YouTube test stream and five minutes from our standard Netflix test title. The YouTube stream plays back a 1080p H.264 encoding via HTML5. Since YouTube now defaults to HTML5 for video playback, we have stopped evaluating Adobe Flash acceleration. Note that only NVIDIA exposes GPU and VPU loads separately; both Intel and AMD bundle the decoder load along with the GPU load. The following two graphs show the power consumption at the wall for playback of the HTML5 stream in Mozilla Firefox (v 49.0.1).
GPU load and VPU load were around 16.88% and 14.09%, respectively, for the YouTube HTML5 stream. GPU load in the steady state for the Netflix streaming case was 9.82%, and the VPU load was 17.65%.
Netflix streaming evaluation was done using the Windows 10 Netflix app. Manual stream selection is available (Ctrl-Alt-Shift-S), and debug information / statistics can also be viewed (Ctrl-Alt-Shift-D). The same statistics gathered for the YouTube streaming experiment were also recorded here.
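Since only NVIDIA exposes the video engine load separately, numbers like the ones above can be reproduced by polling the driver during playback. A minimal sketch, assuming nvidia-smi is on the PATH and that the driver exposes the utilization.decoder query field (present in recent drivers; treat the field name as an assumption on older ones):

```python
# Sample GPU and VPU (video engine) utilization via nvidia-smi once a
# second, as done during the streaming experiments above. Assumes a
# single GPU and that the driver reports utilization.decoder.
import subprocess
import time

QUERY = [
    "nvidia-smi",
    "--query-gpu=utilization.gpu,utilization.decoder",
    "--format=csv,noheader,nounits",
]

def sample_loads():
    """Return (gpu_pct, vpu_pct) from a single nvidia-smi poll."""
    out = subprocess.check_output(QUERY, text=True).strip()
    gpu, dec = (int(v) for v in out.split(","))
    return gpu, dec

for _ in range(10):
    gpu, vpu = sample_loads()
    print(f"GPU: {gpu:3d}%  VPU: {vpu:3d}%")
    time.sleep(1)
```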
Decoding and Rendering Benchmarks
In order to evaluate local file playback, we concentrate on EVR-CP, Kodi and madVR. We already know that EVR works quite well even with the Intel IGP for our test streams. As this is not a fully GPU-focused review, we only used madVR with its default settings. Starting with this review, we are including 4Kp60 HEVC Main and Main10 videos in our decoding and rendering benchmarks. All the playback is done on a 1080p display driven over HDMI. This means that the decoded videos have to be scaled up or down appropriately, and this can drive up GPU loading independent of the decoding process itself. The latest nightly build of MPC-HC (from October 2016) was used so as to pick up the latest version of LAV Filters. For Kodi, we used Beta 3 of the 17.0 release.
In our earlier reviews, we focused on presenting the GPU loading and power consumption at the wall in a table (with problematic streams in bold). Starting with the Broadwell NUC review, we decided to represent the GPU load and power consumption in a graph with dual Y-axes. Eleven different test streams of 90 seconds each are played back with a gap of 30 seconds between each of them. The characteristics of each stream are annotated at the bottom of the graphs below. Note that the GPU usage is graphed in red and needs to be considered against the left axis, while the at-wall power consumption is graphed in green and needs to be considered against the right axis.
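For readers who want to reproduce this presentation from their own logs, here is a minimal matplotlib sketch of the dual-Y-axis layout; the data arrays are placeholders, not measurements from this review:

```python
# Recreate the dual-Y-axis presentation: GPU load (%) in red against the
# left axis, at-wall power (W) in green against the right axis.
import matplotlib.pyplot as plt
import numpy as np

t = np.arange(0, 120)                          # seconds into the run
gpu_load = np.random.uniform(20, 40, t.size)   # placeholder GPU load (%)
power_w = np.random.uniform(35, 55, t.size)    # placeholder power (W)

fig, ax_load = plt.subplots()
ax_power = ax_load.twinx()                     # second Y axis, shared X

ax_load.plot(t, gpu_load, color="red")
ax_power.plot(t, power_w, color="green")
ax_load.set_xlabel("Time (s)")
ax_load.set_ylabel("GPU load (%)", color="red")
ax_power.set_ylabel("Power at wall (W)", color="green")
plt.show()
```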
Frame drops are evident whenever the GPU load consistently stays above the 85 - 90% mark.
The PC has absolutely no trouble with any of our test streams.
It should already be evident from the above graphs that the decoding of HEVC Main and Main10 profile videos is completely hardware accelerated. Given that the rebadging of the GTX 965M as the GTX 950 might lead to some confusion, we take a small detour to look at the codec support reported by DXVA Checker.
We can see that 4K HEVC Main10 is indeed supported for hardware-accelerated decode. This is because the GTX 950 in this PC is actually a GTX 965M based on the GM206M die, which has the updated VPU with full HEVC decode support that is absent in the other Maxwell-series GPUs.
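DXVA Checker is a GUI tool, but the same question (does a given clip actually hardware-decode?) can be approximated from a script. A hedged sketch using ffmpeg's DXVA2 hwaccel path; ffmpeg falls back to software decoding with a warning when hwaccel setup fails, so any warning output is treated as a failure, and clip_4k_main10.mkv is a hypothetical file name:

```python
# Heuristic hardware-decode check: ask ffmpeg to decode a clip through
# DXVA2 and discard the output. A silent run with return code 0 suggests
# the clip decoded through the hardware path without incident.
import subprocess

def hw_decode_ok(path: str) -> bool:
    result = subprocess.run(
        ["ffmpeg", "-v", "warning", "-hwaccel", "dxva2",
         "-i", path, "-f", "null", "-"],
        capture_output=True, text=True,
    )
    return result.returncode == 0 and not result.stderr.strip()

print(hw_decode_ok("clip_4k_main10.mkv"))  # hypothetical test clip
```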
50 Comments
StevoLincolnite - Friday, October 28, 2016 - link
nVidia must be giving these GPUs away. Such a missed opportunity not going with Pascal.

aj654987 - Wednesday, November 2, 2016 - link
Alienware Alpha r2 with the GTX 960 desktop GPU is a better deal than this.

Samus - Wednesday, November 2, 2016 - link
I don't think you can get an i7 in the Alpha r2... not that it really matters for gaming, but the extra horsepower of the i7-6700HQ in the Brix might help its GTX 950 creep up on the GTX 960 in the Alpha r2. But I agree, they are similar in almost every other aspect (even size) and the Alpha r2 is cheaper.
setzer - Friday, October 28, 2016 - link
Regarding the last comment about going with the Skull Canyon NUC + external GPU: I'm not sure that is really a better solution.
It's true that it gives the user the option of adding more graphics power (and easy upgradability). On the other hand, it also requires buying a discrete graphics card, which is not as straightforward as on desktops. This is because you are restricted on one side by the soldered CPU (which you cannot change, though the Skull Canyon NUC's CPU should not be a problem for some time) and on the other side by the bandwidth between the system and the external enclosure (just 4 lanes of PCIe 3.0 bandwidth).
This last point makes it hard to figure out which graphics card is actually the best for your constraints. Instead of choosing from all the graphics cards up to the power limit of the enclosure, you have to figure out which ones actually offer the best price-performance. Of course you can drop a Titan in there, but will the difference compared to a GTX 965M (over 16 lanes of PCIe) be significant?
Regarding this last point, would it be possible to test external enclosures and figure out actual metrics for the performance gains?
wavetrex - Friday, October 28, 2016 - link
I wonder if I can build a house out of these bricks ... excuse me, Brix :)

Joking aside, very few people would know it's an actual computer.
nico_mach - Monday, October 31, 2016 - link
It's a SQUARE trash can! Progress! Where's the pedal, tho?

hubick - Friday, October 28, 2016 - link
I'm typing this on my Skull Canyon NUC, and have a Razer Core, and having read the benchmarks before buying, the PCIe 4x limitation is surprisingly small. IIRC, it's somewhere in the ballpark of 10-15% or so, and that doesn't really change when going from a 980 to a 1080 either. It makes sense when you think about it... you're essentially transferring textures, shaders, and a bunch of vector information to the GPU for rendering, and that will be pretty much constant regardless of whether you're rendering the output at 720p@30Hz or 4K@60Hz.

aj654987 - Wednesday, November 2, 2016 - link
Why would you even bother with that? Might as well build an ITX system for less money and less clutter.
Guys, are there any passive mini-PCs coming out with Kaby Lake?

TheinsanegamerN - Friday, October 28, 2016 - link
There are no PCs, period, with Kaby Lake yet. Kaby Lake isn't out yet.