Cheap SATA controller cards use exactly the same chips that mobo makers add to push the port count beyond what the chipset provides. Complaining the board doesn't come with extra ports via a cheap controller and then complaining cheap controllers are no good? Seriously?
A proper TR build would be at least $3000; in that price range, $200 for a good HBA like the LSI 9300-8i should not be an issue. Surely, AMD offers great value and brought extremely high performance to a new level of affordability, but this is not, I repeat, NOT a product for penny pinchers.
The correct solution for adding more SATA is to use SAS2 cards. You can pick up a nice x4 PCIe Gen3 LSI or IBM SAS2 card for about 100 bucks or less on eBay. Without any expander, you can run 8 drives off two of those SAS controller cards, since each one gives you x4 lanes, and you can easily get 500+ MB/s out of each controller.
The problem, however, is that these cards suck a golfball through a garden hose performance-wise compared to onboard softraid. And before people go ape-sh!t that softraid is bad-mkay... ever looked under the hood of, say, EMC / NetApp or any semi-pro/pro NAS vendor out there? It's all softraid. Why? Because the hardware RAID chipsets that can actually cope with 3Gb/8Gb throughput are relatively new and start around $3K. So, a poor man's 8x SATA 600 onboard chipset is hard to beat.
Yeah, you can get U.2 SFF-8643 to 4x SATA branch-out cables. I have a feeling it won't work directly off U.2 PCIe 3.0 x4 (although who knows?), but surely a PCIe SAS controller providing some SFF-8643 connectors will work. That is the way I was thinking.
The fact you still can't buy them is one thing, and it will be expensive. Then again, if money ain't a thing, you'll agree one kidney should be enough anyway. Buy this for your games / media (yes, it's in the sales sheet). The real OGs would go for something like SanDisk's InfiniFlash..... I know right .... https://www.youtube.com/watch?v=iWvrOItRSyQ
It's a shame Intel's never made a PCIe card using its own controller, because of course Intel's SATA3 ports on its own boards always work nicely. But then, if they did, loads of people would buy the card to fit to older boards (especially X58, Z68, X79, etc.) instead of upgrading to a newer board, and it'd be cool to have such an option for AMD boards as well. Never gonna happen though, I guess.
A file server is a good wife excuse to build with a Threadripper. He can pretend he needs one core per hard drive and in order to stream video to more than one TV he needs two GPUs. It works.
Actually, many people in the developed world call a technician for even trivial things like changing a fuse. Building a computer, as simple as it is, is an out-of-this-world achievement in their eyes.
Weird, normally such a person wouldn't even ask a relevant question. My gf doesn't care what my HTPC has inside, as long as it runs YT and plays DVDs, etc. ok, and she can use the wifi link to find stuff for her Kindle.
Not everything that has more than 2 HDDs is a server. I also have a gaming PC that has 10 HDDs. They are low capacity, and connected to a low-end HBA, but this way I won't ever lose any data. And I just can't think of a reason to pay well over $1000 for a 10-disk-capable NAS.
You really must love noise then, and wasting power. Besides, the more drives you have, the higher the odds some of them die. You can get a couple of large HGST drives: huge capacity, excellent reliability. The odds of both drives failing are minuscule, and would require the entire system burning up, which would be just as devastating regardless of how many drives you have. You don't need 10 drives for the sake of the number. 10 TB HGST He10 drives are like $350.
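The drive-count odds argument above is easy to sanity-check with a few lines of Python. This is a toy illustration with independent failures assumed; the 2% annualized failure rate (AFR) is my own placeholder figure, not a number from the thread:

```python
def p_any_failure(n_drives, afr=0.02):
    """Probability that at least one of n drives fails within a year,
    assuming independent failures at the given annualized failure rate."""
    return 1 - (1 - afr) ** n_drives

def p_all_fail(n_drives, afr=0.02):
    """Probability that every one of n drives fails within the same year."""
    return afr ** n_drives

print(p_any_failure(10))  # 10-drive box: at least one failure is fairly likely
print(p_any_failure(2))   # 2-drive setup: much lower
print(p_all_fail(2))      # both halves of a 2-drive mirror dying: tiny
```

With these assumptions, ten drives are several times as likely as two to see at least one failure in a year, while losing both drives of a mirror at once is vanishingly improbable, which is the point being made.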
Besides, TR falls in neither the "budget" nor the "gaming" category. It's a workstation CPU, and it actually does pretty badly in gaming, considering its price. Paying that much money for a product you are not gonna use for the job it is best at, and then complaining the mobo doesn't fit your senseless and wrong usage scenario - that's kinda dumb.
It is a new high end product, you are gonna put new high end components in it, not 10 poor old tiny HDDs.
It's not bad at gaming; however, it is not wise to use this solely for gaming. It's a workstation that you can use to handle your day-to-day workload, and that just happens to be fairly decent for gaming when you have time.
Anyone building a gaming rig that chooses to buy a TR fails at life.
Ask the manufacturers to make M.2 PCIe cards with a SATA controller and a heap of SATA ports on board. I want them to provide these, and exclude SATA from motherboards, to save space and reduce complexity and cost. That way, the motherboards are cleaner for people who only require NVMe, and allow for people who need SATA to sacrifice M.2 slots for SATA.
The RGB cancer isn't why these boards are so expensive. It's stupid, but only adds a few dollars to the cost; not a few hundred.
It's all the extra PCB layers they need to support the 4000 connections to the CPU socket and route all the extra PCIe lanes. Limiting the number of PCIe lanes in mainstream chips is as much about being able to use smaller sockets and fewer PCB layers to keep board costs down as it is a desire on AMD and Intel's part to upsell to X299/X399 systems. The fact that the mobo vendors are probably expecting to sell dozens of mainstream boards for every one of these halo products doesn't help either because it means that the R&D costs can't be spread anywhere near as widely.
For the two boards that have them, the $100 for a 10GbE NIC doesn't help any.
Still annoying and useless. They just wanted to appeal to the extra gaming segment when this entire system is not for gaming. Now add the fact that almost every goddarn high end videocard AND memory kit now has shitty RGB lights as well. It's a waste of power.
Are you assuming enthusiasts are all just gamers? You can be an enthusiast that loves gaming AND content creation. If you want no-nonsense then maybe go EPYC instead (YES, it is a workstation platform too, not just server): http://b2b.gigabyte.com/Server-Motherboard/MZ31-AR...
Amazing how many people just assume that a PC user is either a gamer or not a gamer. There's a lot of crossover with content creation these days, and also people who stream. Being able to play a game, record the gameplay, convert a previous session and upload it to YT/etc. will be a boon for those who make a living doing such things.
Anyone interested in pairing the Asus Zenith Extreme with the Threadripper version of the Noctua NH-U14S should note that the cooler blocks the first PCI-E x16 slot on the board. Noctua says Asus didn't follow the AMD clearance guidance on this board; you can see in the pic that the top slot is very very close to the CPU bracket.
NH-U12S TR4-SP3 will fit though: http://noctua.at/en/nh-u12s-tr4-sp3/specification. I read a German article that showed that this cooler provides very similar cooling perf and noise as compared to the 14S. I have 2 other systems that use variations of the 14s, and I can not find any noticeable noise difference. Still an unfortunate design decision from Asus.
Other than the MSI board, I notice that none of them are putting USB2 ports on the back panel. Does that mean the interference problems that some USB2 devices encountered in 3.0 ports have been fixed; or that the mobo makers just feel anyone who needs them can use an IO bracket attached to an onboard header?
With CPU current approaching 150 amps, all these MBs are brain dead right out of the gate for using 50-year-old 8+ phase Buck converter topology, where VRMs with multiple phases still have poor response settling times (not one-cycle settling but 20 cycles) and lowish efficiencies well below 98%, needing substantial heat sinks, while being saddled with the vagaries of inductive energy storage, which takes up a lot of space and costs much more. Instead they should be using hybrid-PWM switching, eliminating the hard switching of high currents, where fractional-cycle resonance ensures zero-current switching and almost no switching losses. Resonance frequency scaling of the inductor/capacitor eliminates the need for ferrite cores altogether, reducing both size and core losses.
Rehashing the same old power technologies, with only stepwise minuscule improvements at each iteration, is not going to bode well at all. These engineers need to join the rest of us in the 21st century and stop rehashing inductive energy storage solutions in power supplies. These new power topologies have been available for five to eight years now. And furthermore, this will also reduce EMI noise.
I might add that eliminating the multi-phase Buck converters for a solution based on PWM-resonance switching (hybrid switching), fractional-cycle resonance (one-cycle settling responses), and resonance scaling of the resonant components would likely permit an X399 mITX board to be made, with all that PSU space recovered for other purposes. Who will take up the challenge? AMD, ASRock? Or will Intel beat AMD in the rush to high current capability at 99% efficiency? Or one of the other competitors out there! This is a clarion call to action to once and for all retire the Buck converter!
I doubt they will even make a micro atx board, much less itx...
It would be a waste: the gigantic socket will take up most of the space, there won't be enough room for all 4 memory channels, and the CPU's generous amount of PCIe lanes will be utterly wasted.
Yes, and yet ASRock put the X99 on an mITX board. However, cutting heat from the VRMs by 60% to 70% or more should be done anyway, not to mention on the graphics cards, which draw even more power. Why settle for 95/96% efficiency (or worse in some cases) when one could get 98.5% to 99% efficiency? But then again you might be right concerning Threadripper, although I would still buy it with M.2 slots on the back. The power delivery could be moved entirely to the back of the board, without heat sinks, with a total height of only 1.5mm (possibly at the center of the socket itself on the back). The challenge of course is just as applicable to the graphics cards.
I agree about how little sense it would make; but there have been a few LGA2011 mITX boards for people who only needed the higher core count. One of the two currently listed on Newegg uses SoDIMMs and riser boards around the CPU socket to make everything fit. Doing it for Threadripper would definitely be harder on account of the bigger socket; but I won't say never.
mITX is very hard even with riser boards & only 4x SO-DIMMs
but mATX on the other hand is very plausible... let's hope for a mATX version of the ASRock Pro Gaming with its 10GbE (and at least one of its Intel 1GbE ports), at least 2 of its M.2 slots and 6+ SATA ports... and please ASRock, add a front USB 3.1 Gen2 header...
Amps, or amperes, is a unit of current, not a unit of power. Power is measured in watts.
"Rehashing the same old power technologies with only stepwise minuscule improvements at each iteration, is not going to bode well at all." - this is unfortunately a fact, and the de facto motto of the industry.
I mean it is kinda naive to assume several years old switching circuits will find their way into the mainstream just because they are a few percent better.
We've had gas turbine engines for decades, but they are still only used in tanks, helicopters or naval ships, while mainstream vehicles are still stuck with the internal combustion engine, which is easier to break, harder to maintain and a whopping 60% less efficient.
A few % better - you are looking at it wrong. Platinum PSU units with peak 94% efficiency dissipate 6% as heat. At 99% efficiency, only 1% is dissipated as heat. That is a 5/6 reduction in heat, or 83% less heat, which is not just a few percent better. Further, the new topology yields a >70% cost reduction, which is also not insignificant. Gas turbines are also too noisy, by the way, which is the main reason they are not considered (expensive when they go wrong too), and thus not a good comparison versus the improvements being discussed here. Are you saying the linked article on PWM-resonance and resonance scaling topology is not worthwhile, or that the Buck converter's inductors do not make for a severely limited and highly noisy power solution? Perhaps power electronics engineering is not your field of interest at all!
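The "heat, not efficiency points" framing above reduces to a one-line formula; a quick sketch in Python (the function name is mine, the percentages are the ones quoted in the thread):

```python
def heat_reduction(eff_old, eff_new):
    """Fractional cut in dissipated heat when moving between two converter
    efficiencies, each given as a fraction (e.g. 0.94 for 94%)."""
    return 1 - (1 - eff_new) / (1 - eff_old)

# 94% -> 99% efficiency: five-sixths of the heat disappears (~83%)
print(heat_reduction(0.94, 0.99))
```

The same formula gives the 87% figure quoted elsewhere in the thread for a 92% -> 99% comparison, since the heat goes from 8% of throughput to 1%.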
An ICE is not 60% less efficient than a gas turbine. A gas turbine doesn't scale downward well. One of similar power would actually be less efficient than an equivalently rated ICE. Gas turbines are impractical for vehicles.
One benefit of a turbine would be no tailgating. The intake would suck in and crap out small critters all over your windshield.
Yes, I appreciate that - we draw power (watts) at a rate of joules of energy per second. Notice I said "we draw power", but that is also reflected directly and proportionally as current in amps (a rate of coulombs per second). There was no direct inference that power is measured in amps, only that (a) we are drawing a measure of power, and (b) this results, proportionally, in something approaching 150A. Yes, I probably could have said it better, but in no wise did I say power is measured in AMPS.
I do not believe the de facto motto of the industry is "we are wasting 5% in heat, so let us not bother with wasting only 1%". Clearly you did not read the link to the Power Electronics article (I surmise), as then you would have realized the solution offers a huge reduction in cost as well as heat. The cost factor is something everybody is interested in, even the industry at large, in my humble opinion.
What I meant is the solution you linked to is like 99% efficient, a good buck converter is what? Like 90-92%?
That's nowhere near the difference between an internal combustion engine and a gas turbine, the latter being more than twice as efficient. And still no adoption, even though the solution is not really all that complex, and decades old. They still only use gas turbines in the most demanding applications, which is pretty much the same as with the converters from that article, which that dude developed for NASA's most demanding applications. I am pretty sure computers at NASA run on buck converters too, and they will use his designs only for the stuff they launch into space.
You probably don't realize how immense an impact it would have if all cars on the planet became more than twice as efficient: burning less than half the fuel, outputting less than half the harmful emissions, traveling twice as far on a single fill. It would completely dwarf the benefits of boosting computer power converters from 90 to 99% :)
I am not saying it is not cool, I am just saying there have been far more beneficial, higher-priority solutions that haven't been adopted yet, for a lot longer, so you should not be surprised that the entire industry hasn't switched to a new power converter design overnight. They will do as they always do: milk the cow until it dies, then make it into jerky, and only then go for the new and better thing.
As noted earlier, 92% efficiency means 8% heat, versus 99% meaning 1% heat - a 7/8 reduction in heat, or an 87% reduction. Thinking this is only a 7% improvement is incorrect; in this example it is an 87% improvement. That is not small potatoes. When this is coupled with a corresponding large cost reduction, it becomes apparent the chip manufacturers would rather make more money using the older technology. Maybe the PSU engineers are just plain lazy, or are not following the advances in their field because they are snowed under with work. Let's keep the discussion focused on power supplies.
The 600W at 12V fed to the motherboard with ~8% losses means ~50W of heat in the VRMs (driving a 180W CPU and a 320W GPU, leaving 100W for other parts). The ATX power supply is 90% efficient or less, putting another 60W of heat into the same case. The GPU's own power supply wastes another 25W, for a total of 135W of waste heat (as opposed to the useful heat generated within the various components themselves, which comes from useful work being done). It is tough to manage this 660W in the case, but over 135W of it (>20% as a low-ball estimate) is waste heat from the power supplies alone (ATX PSU, VRMs, GPU supply), of which over 80%, or 110W, could be eliminated by abandoning the Buck converter topology. This is of course a simplistic view, as other components are ignored here for brevity's sake; I suspect >25% of the heat comes from power supplies alone, and that can be dramatically curtailed. It's a three-ringed circus: the ATX PSU steals 10% for itself, the motherboard steals another 8%, and the GPU steals a further 8%. We have no control over the power drawn by the mega chips themselves, but we do have control over the power supplies that drive them, and manufacturers could be doing a lot better here. And this is by no means a monster power-hungry system, with only one high-end graphics card. A >85% reduction in power supply waste heat can be realized if the Buck converter is abandoned, and that applies to resonant LLC power supplies also. The motherboard and ATX PSU manufacturers need to take this aspect much more seriously.
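The waste-heat budget above is easy to tabulate. A tiny sketch, where every figure is the commenter's ballpark estimate rather than a measurement:

```python
# All numbers are the commenter's estimates for a single-GPU system.
vrm_heat = 50       # motherboard VRMs: ~8% of 600 W delivered at 12 V
psu_heat = 60       # ATX PSU at ~90% efficiency, dumping heat in the case
gpu_vrm_heat = 25   # the GPU's own on-board power stage

waste = vrm_heat + psu_heat + gpu_vrm_heat  # 135 W of pure conversion loss
share = waste / 660                         # fraction of the total case heat
print(waste, round(share * 100, 1))
```

Summing the three conversion stages gives the 135 W figure, and dividing by the ~660 W total shows conversion losses alone are roughly a fifth of the thermal load the case cooling has to handle.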
AMD's Thread Ripper X399 and Intel's X299 platforms should have been their first attempt at abandoning the Buck, half-bridge, and resonant LLC topologies. They failed us in that regard. We need this fiasco to come to an end by using hybrid PWM-resonant switching and resonance scaling, which eliminates the ferrite cored inductors altogether, and replaces them with just copper traces on the PCB. This is not rocket science.
Yeah, but then again, an overclocked TR is like 200W, even an entry level car is like 200 KILOWATTS. So percentages are not really that much indicative.
The facts remain. A TR mobo with a better power regulator circuit will save like 10 watts of power, a car with a gas turbine engine will save like 100 KW of power.
That's 10 000 times larger saving measured on absolute scales. What's more important in your opinion? Saving a watt, or saving 10 000 watts? Naturally, I'd rather have both. The goal here is to illustrate how low of a priority it is to improve mobo power delivery compared to some other, longer standing improvement opportunities that have been ignored.
They tried turbine cars, they are terrible due to the way they deliver power. Several successful drag cars have used them as in that application the power delivery works well. Same reason we aren't going to be driving rocket cars.
You should look into it. A gas turbine is not a jet engine. It is actually more efficient, and it doesn't utilize jet propulsion. Gas turbines power certain tanks, ships and helicopters. They are also used in power plants.
Honestly, could you just stop with your stupid gas turbine rant, since you don't really seem to grasp what efficiency is, or even what power a typical engine has.
Gas turbines are very efficient in use at or around their designed typical load. They are not efficient under medium and low load scenarios, where they will drop below modern gasoline combustion engines.
Those come with 200 kW - in high-powered sports cars, or top-of-the-line luxury limousines. An "entry level car" will have at most 75 kW peak power; and guess what: most of the time they are used far below that maximum output.
Modern gasoline car engines typically reach 45% efficiency, which they achieve in their typical load scenarios, at less than 50% of their design load.
Modern gas turbines can reach up to 60% efficiency, which is great - but this is usage at their design load. At half load, the efficiency will drop below 30%. The majority of miles driven with cars are at below half load.
What we expect from car engines is efficiency at their usage, while having enough reserves for quick acceleration. Gas turbines cannot do this efficiently, and gas turbines are notoriously laggy in variable load.
However, they can be used effectively in fully-hybrid cars, where peak-load is achieved by battery-backed electric motors. But since these engines are so expensive to produce, it is simply more cost-effective to use fully-electric cars for this.
Of course a gas turbine consumer vehicle would also utilize a battery buffer. You basically charge it while stationary, drive on battery until the charge runs out, at which point the turbine kicks in to supply power and recharge. An all-electric drive will significantly simplify the design and the transmission, and will ensure maximum torque at any power level.
You are way off; only F1 engines approach 50% efficiency, and they only last a few races, which is the cost of that efficiency. Totally impractical for consumer vehicles. The typical operational efficiency of consumer vehicles is as low as 20%. And they are also intrinsically limited in terms of torque delivery, which happens in a specific and rather narrow RPM range.
So transitioning to gas turbine engines would be not a 100% but a 200% increase in efficiency. I guess little minds simply cannot appreciate the significance of that. Not to mention it would de facto force wide hybrid vehicle adoption, at very little overhead compared to internal combustion engines, as a gas turbine with the same power delivery will weigh a quarter as much and deliver three times as much energy from the same amount of fuel.
Gas turbine engines are also actually easier to make and maintain; they have far fewer moving parts. You seem to be confusing a regular gas turbine engine with the ultra-efficient kind, which requires expensive and painstaking 5-axis machining of the components. Those exceed 70% efficiency, but if you aim for 60%, the manufacturing is much cheaper, easier and faster. Overall much more cost efficient.
Which is exactly why they are not being adopted. It will result in massive loss of revenue - cheaper engines, that need less of the expensive maintenance, less parts, and burn much less fuel. The priority of the industry is profit, and gas turbine vehicles will result in a massive dip in that aspect. Internal combustion engines are pathetic in terms of efficiency, but are very profit-effective.
Get a clue, ddriver. This is not about power saved, whose numbers are minuscule compared to the power output of an automobile engine - IT IS ALL ABOUT THE TEMPERATURE OF THE INTEGRATED CHIPS, WHICH ARE LIMITED TO NOT MUCH MORE THAN 100 deg C, and the difficulty of providing sufficient air flow or liquid cooling to remove that heat. Failure to remove the heat dramatically reduces the life of the chips, by as much as 60% - 70%. The problem is greatly curtailed by not using circuit designs that generate ever larger amounts of waste heat. Please stick to the subject matter of this posting, which is about reducing CPU, GPU and VRM operating temperatures without using huge heat sinks and liquid cooling radiators. How does one reduce these temperatures? The first step is to eliminate >85% of the heat in the ATX PSU, motherboard VRMs, and GPU VRMs, to reduce the total heat load on the cooling system. And that technology is already available, as mentioned above. Please cease with the 100 kW rhetoric, which is meaningless in the context of this temperature problem (yes, little k for kilo, not capital K). Let's talk about the excess temperature issue. Get it?
Threadripper is a HEDT platform and thus deserves a HEDT VRM solution, not the same old worn-out technologies that use air-gapped ferrite-cored inductors, when resonance scaling permits increased resonant capacitance in exchange for a much smaller resonant inductance using cheaper air-cored inductors - and to boot, a dramatic reduction in both size and cost. AMD should lead this charge and bring forth an appropriate reference VRM, using PWM-resonant switching and resonance scaling of the Cr/Lr resonant components. ARE YOU LISTENING, AMD - LET'S RETIRE THE BUCK CONVERTER ACROSS THE BOARD.
Here is a clue - remove the stock heatsink, install better cooling. Takes like 5 minutes. Heat problem solved. Crude, but it delivers result.
The industry standards are so low, there is barely a product, regardless of its price range, that someone with basic engineering cannot tangibly improve in a few easy steps.
An example: I recently got a Yoga 720 2-in-1. Opened it up, removed the cooling, put on good TIM, reinstalled the cooling, and now I have a 5-minute, $5 improvement that gave me a 10% boost in performance, temperatures and battery life. They are just lazy, and don't go for even the most obvious, easiest-to-implement improvements.
They DON'T WANT IT TO RUN COOL. They deliberately engineer it to run at its limit, so close that often they actually mess it up, so that the device can fail and you can get a new one. It is a time bomb, planned obsolescence, and you can bet your ass they would have done the same regardless of the power delivery circuit involved. It may actually be a far more delicate and harder-to-address time bomb than hot-running VRMs, which you can easily cool down by ordering a custom heatpipe solution that will set you back like $50. That's a rather quick and affordable way to solve your problem, compared to complaining about it in this cesspool of mediocrity ;)
Sorry ddriver, but I disagree with all your perspectives on this matter. You are clearly not capable of addressing the technical issues of power supply design for efficiency, and cannot get to grips with electronic circuitry (or do not want to) and how different designs compare. You appear only interested in hijacking the original subject matter for your own purposes. You never contributed a single element addressing the original purpose of this thread, so you have lost your credibility as a serious participant in my book, and hence you and I are done.
Based on my experience with the Asus Zenith Extreme, you can expect a bumpy ride, which should not be surprising with a new product. My last 3 systems were Intel X58, X79, and X99 boards purchased shortly after their releases, and this platform (X399) has had the most issues. I expect that in a few months, after some BIOS and driver updates, the experience will get much better. I suspect the validation process is not as thorough as for the Intel boards. Here are a few issues that I have experienced as an early adopter:
- System would not boot with 2 video cards (resolved with BIOS update)
- The 10G network card would randomly disconnect (resolved? with driver update)
- System sometimes will not come back from sleep and requires a hard reset (no resolution yet)
- USB devices disconnecting/reconnecting randomly (no resolution yet)
I'm crossing my fingers I made the correct choice with MSI's x399 offering. I too have been burned by the ASUS early-adopter-penalty and although Gigabyte has been good to me in the past, the MSI offered everything I needed and then some (although I'm firmly in the "get rid of the tacky LED" camp). Everything is getting stuffed into a Cooler Master HAF XB II EVO (with no glass but with the mesh top panel). Even if it's not perfect it can't be worse than running Windows on Boot Camp with a "trash can" (aptly named) Mac "Pro".
You've got to give some credit to AMD's clever marketing team for naming this X399 when Intel has X299, and then calling it "The Most Advanced Desktop Motherboard in the World".
But in reality there is no Thunderbolt 3.0 support, and it will most likely be updated next year or later with PCIe 4.0 support.
Does it support DDR4-4600?
AMD sure likes to play the numbers game, and not just with the chipset name but with the number of cores. Just remember, it's mostly marketing: yes, it's still 16 cores, but 16 cores from one manufacturer does not mean the same as 16 cores from the competition.
If you're considering the ROG Zenith, be aware that it's having tons of problems overclocking CPU and memory. There's a huge thread on overclock dot net that is filled with people having problems with the BIOS. There WAS an Asus rep who was trying to help, but he's pretty much disappeared in the last few weeks.
Asus has really huge troubles in their software segment. I still remember they needed like 50 patches to get their high end routers stable. And even then they couldn't get everything they promised into working condition, and even disabled some features in later firmwares.
Same with bioses. I still remember a time where you had to enable the "floppy" connector in a motherboard (even if you had no drive) if you wanted the bios to actually apply the overclocking settings.
A note for those with older ASUS ROG and other ASUS X79 boards, etc.: there's a thread on the ASUS ROG forum site where a guy has provided modded BIOS files to support booting from NVMe devices, as well as rolling in all the updates to the latest microcode, Intel RAID, etc. Very handy indeed, and he takes requests for other ASUS boards. SSDs like the 950 Pro have their own boot ROM, but a BIOS with boot support is perfect for the 960 EVO/Pro and other models which don't have their own boot ROM.
I just bought two R4Es (one basically new), a 4930K, a 16GB/2400 DDR3 kit and a 120mm AIO for a total of 320 UKP. Who needs new stuff? :D My next new build will be TR or EPYC though I'm sure.
Cynically I'm going to guess that if you use several generation old cards that did support 4 way SLI you could combine all 4 together for an act of supreme WTFery.
I looked at the manual for the Zenith Extreme. It claims that the two slots the block diagram in this review marks as x4 are actually x8, with the lowest one on the board being x8 as long as you don't have a U.2 drive connected to the U.2 port. So it is unclear which is correct: the block diagram or the manual.
The noisy 40mm fan is a major deal breaker for me - what are ASUS thinking by including this? They should add better passive cooling - I don't need the stupid little fan failing or becoming less effective in 2-3 years, leading to random instability.
Real shame as my last two motherboards (Intel and AMD) have been from ASUS and I've been really happy, but for Threadripper they're off my list until they design a passively cooled motherboard.
Must admit I quite like the look of that Gigabyte Designare-EX, and likewise until now almost all my boards have been ASUS (except for some X58/P55 boards from ASRock, though I have three top-end ASUS P55 boards as well).
I note the absence of EVGA. Have they basically quit the mbd business?
Nice, but one deficiency that's carried through a lot of boards is dealing with headers. The writing's tiny, the locations are awkward even if the board's not populated, and the documentation often isn't clear. The one good thing is that it's usually only done once (at build time).
He's either trolling or just ignorant. I know a guy who has MS, got a PhD in a relevant discipline and now does research on his own condition. A TR board would be ideal for the work he does, using lots of GPUs for compute, etc. (biomedical apps) Then there are those who game but also want to stream and encode, and of course solo professionals who can't afford dual-socket XEONs or Opteron-type boards. And plenty of people work at home, so the location of a system at home means nothing. I have a 36-CPU supercomputer in my garage. :D
The MSI Pro Carbon looks like it has 4 PCI-e 3.0 X16 slots from the diagram in the article. I have to wonder whether those slots are actually electrically X16 slots or some other configuration. I have not been able to find any information on their site or in this article that gives the electrical configuration of those slots. Anyone know for sure?
I'm waiting for a 6+1 PCIe 3.0 x16 slots, x399 motherboard, with 3x PLX switches to allow for 6x GPUs, for a rendering workstation. The ASRock X99 WS-E and the Asus X99 WS-E are no longer available in the market.
Quote from article: "Thunderbolt 3 certification requires a few things from the CPU side like graphical output which we haven't been able to do. We expect this will be developed upon through Raven Ridge and possibly get more groundwork down to activate TB3 on the X399 Designare EX." End quote from article.
Then how do Intel Skylake-X with X299 motherboard have Thunderbolt 3 certification, when Skylake-X has no integrated graphics?
Hence why the quoted statement sounds fishy to me.
*facepalm* Because Thunderbolt was developed, and is owned by Intel. They can license whatever they want to be Thunderbolt capable, because they literally created it, and set the standard for what is & isn't. Intel hasn't actually opened up Thunderbolt licensing & certification for non-Intel platforms yet like they promised though, so Gigabyte can't do anything about anything until they do & let AMD CPU's be TB3 capable.
nathanddrews - Friday, September 15, 2017 - link
The ROG Zenith has all the networking IO I want, but is lacking in SATA ports. Hmm...
tarqsharq - Friday, September 15, 2017 - link
With all those extra PCI-E lanes you can just use add-in boards for anything you need more of.
Gothmoth - Friday, September 15, 2017 - link
yeah lol at another 100 euro... i tried SATA cards from 6 different brands and all SUCKED: delock, i-tec, syba, logiclink.
just read the reviews at retailers.. these cheap cards are buggy as hell.
i ended up with an adaptec card that works well. but it cost 100+ euro.
nathanddrews - Friday, September 15, 2017 - link
Yes, that has been my experience as well.
ddriver - Friday, September 15, 2017 - link
Cheap SATA controller cards use exactly the same chips that mobo makers use to increase the number of ports beyond what the chipset provides. Complaining the board doesn't come with extra ports via a cheap controller and then complaining cheap controllers are no good? Seriously?
A proper TR build would be at least $3000; in that price range, $200 for a good HBA like the LSI 9300-8i should not be an issue. Surely, AMD offers great value and brought extremely high performance to a new level of affordability, but this is not, I repeat, NOT a product for penny pinchers.
mWMA - Monday, September 18, 2017 - link
The correct solution for adding more SATA is not SATA cards but SAS2 cards. You can pick up a nice x4 PCIe Gen3 LSI or IBM SAS2 card for about 100 bucks or less on eBay. Without any HBA, you can run 8 drives off those 2 SAS controller cards since each one will give you x4 lanes; you can easily get 500+ MB/s out of each controller.
mWMA - Monday, September 18, 2017 - link
Correction.. not to use SATA cards.. use SAS2 or SAS3 cards instead
BoemlauweBas - Friday, October 20, 2017 - link
The problem, however, is that these cards suck a golfball through a garden hose performance-wise compared to onboard softraid. And before people go ape-sh!t that softraid is bad-mkay... ever looked under the hood of, let's say, EMC / NetApp or any semi/pro NAS vendor out there? It's all softraid. Why? Because the hardware RAID chipsets that can actually cope with 3Gb/8Gb throughput are relatively new & start around $3K. So, a poor man's 8x SATA 600 onboard chipset is hard to beat.
karatekid430 - Thursday, October 26, 2017 - link
Yeah, you can get U.2 SFF-8643 to 4x SATA breakout cables. I have a feeling it won't work directly off a U.2 PCIe 3.0 x4 port (although who knows?), but surely a PCIe SAS controller providing some SFF-8643 connectors will work. That is the way I was thinking.
CheapSushi - Sunday, September 17, 2017 - link
How about going for something more serious then instead of low end: http://www.highpoint-tech.com/USA_new/series-ssd71...
That would give you 16 SATA drives.
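For perspective on whether add-in cards like these are bandwidth-limited, here is a rough sanity check (nominal PCIe figures; the per-drive speed is an illustrative assumption):

```python
# Rough bandwidth sanity check for an x4 PCIe 3.0 card feeding SATA drives.
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> ~0.985 GB/s usable.
usable_per_lane = 8.0 * 128 / 130 / 8      # GB/s, ~0.985
hba_bandwidth = 4 * usable_per_lane        # ~3.94 GB/s for an x4 card

# Assume ~250 MB/s sequential per HDD (optimistic for spinning disks).
drives, per_drive = 16, 0.25               # GB/s each
aggregate = drives * per_drive             # 4.0 GB/s from 16 drives

print(f"x4 PCIe 3.0 usable bandwidth: ~{hba_bandwidth:.2f} GB/s")
print(f"{drives} HDDs at {per_drive * 1000:.0f} MB/s: {aggregate:.2f} GB/s")
# With 16 fast drives the x4 link itself becomes the ceiling; 8 drives fit easily.
```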
BoemlauweBas - Friday, October 20, 2017 - link
The fact you still can't buy them is one thing, and it will be expensive. Then again, if money ain't a thing & you'll agree one kidney should be enough anyway, buy this (yes, it's in the sales sheet) for your games / media. The real OGs would go for something like SanDisk's InfiniFlash..... I know, right... https://www.youtube.com/watch?v=iWvrOItRSyQ
plonk420 - Tuesday, September 19, 2017 - link
you go fairly high end motherboard, but then you get terrible SATA cards? ಠ_ಠ
yeah, do like Supermicro (there's a couple of cards in the low $100s) or at least that Adaptec.
just like IDE cards back in the day... buy a crap card, get crap performance.
mapesdhs - Tuesday, September 19, 2017 - link
It's a shame Intel's never made a PCIe card using its own controller, because of course Intel's SATA3 ports on its own boards always work nicely. But then, if they did, loads of people would buy the card to fit to older boards (especially X58, Z68, X79, etc.) instead of upgrading to a newer board, and it'd be cool to have such an option for AMD boards as well. Never gonna happen though, I guess.
Oksana - Saturday, September 19, 2020 - link
I really like Thunderbolt, but also want to build a Threadripper computer. Is it possible to have the best of both worlds and add a Thunderbolt card?
ddriver - Friday, September 15, 2017 - link
It has six. How many do you need? Considering it also has 3 M.2 slots. That 10 gbit LAN card is for connecting to a proper NAS server. You don't cram a workstation with a dozen HDDs.
nathanddrews - Friday, September 15, 2017 - link
I'm not building a workstation.
ddriver - Friday, September 15, 2017 - link
So you are building a file server then? Senseless choice of hardware then ;)
HomeworldFound - Friday, September 15, 2017 - link
A file server is a good wife excuse to build with a Threadripper. He can pretend he needs one core per hard drive and in order to stream video to more than one TV he needs two GPUs. It works.
ddriver - Friday, September 15, 2017 - link
So a dumb wife having to approve your purchases is a realistic scenario? :D
HomeworldFound - Friday, September 15, 2017 - link
In my location telling people that you can build a computer makes people want to touch you.
Holliday75 - Friday, September 15, 2017 - link
Where is this magical land?
ddriver - Friday, September 15, 2017 - link
Actually, many people in the developed world call a technician for even trivial things like changing a fuse. Building a computer, as simple as it is, is an out-of-this-world achievement in their eyes.
mapesdhs - Tuesday, September 19, 2017 - link
Weird, normally such a person wouldn't even ask a relevant question. My gf doesn't care what my HTPC has inside, as long as it runs YT and plays DVDs, etc. ok, and she can use the wifi link to find stuff for her Kindle.
mapesdhs - Tuesday, September 19, 2017 - link
NB: I was replying to ddriver saying, "So a dumb wife having to approve your purchases is a realistic scenario? :D".
Vatharian - Friday, September 15, 2017 - link
Not everything that has more than 2 HDDs is a server. I also have a gaming PC that has 10 HDDs. They are low capacity, and connected to a low-end HBA, but this way I won't ever lose any data. And I just can't think of a reason to pay well over $1000 for a 10-disk-capable NAS.
ddriver - Friday, September 15, 2017 - link
You really must love noise then, and wasted power. Besides, the more drives, the higher the odds some of them die. You can get a couple of large HGST drives: huge capacity, excellent reliability. The odds of both drives failing are minuscule, and would require the entire system burning up, which would be just as devastating regardless of how many drives you have. You don't need 10 drives for the sake of the number. 10 TB HGST He10 drives are like $350.
Besides, TR falls in neither the "budget" nor the "gaming" category. That's a workstation CPU, and it actually does pretty badly in gaming, considering its price. Paying that much money for a product you are not gonna use for the job it is best at and complaining the mobo doesn't fit your senseless and wrong usage scenario - that's kinda dumb.
It is a new high end product, you are gonna put new high end components in it, not 10 poor old tiny HDDs.
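The more-drives-higher-odds point above can be made concrete with a quick independence calculation (the 2% annual failure rate per drive is an illustrative assumption, not a measured figure):

```python
# Probability that at least one drive fails in a year, assuming independent
# failures at an illustrative 2% annual failure rate (AFR) per drive.
AFR = 0.02

def p_any_failure(n_drives: int, afr: float = AFR) -> float:
    # P(at least one fails) = 1 - P(all survive)
    return 1 - (1 - afr) ** n_drives

for n in (2, 10):
    print(f"{n} drives: {p_any_failure(n):.1%} chance of at least one failure/yr")
# 2 drives -> ~4%, 10 drives -> ~18%: several times the exposure, as argued above.
```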
Guwapo77 - Saturday, September 16, 2017 - link
It's not bad at gaming; however, it is not wise to use this solely for gaming. It's a workstation that you can use to handle your day-to-day workload, and that just happens to be fairly decent for gaming when you have time.
Anyone building a gaming rig that chooses to buy a TR fails at life.
cyberguyz - Thursday, December 28, 2017 - link
It has 2x M.2 slots, not 3x. You need an add-in card to get the 3rd with this one.
fazalmajid - Sunday, September 17, 2017 - link
Annoying, but an LSI add-in HBA will generally outperform integrated SATA by a wide margin.
karatekid430 - Thursday, October 26, 2017 - link
Ask the manufacturers to make M.2 PCIe cards with a SATA controller and a heap of SATA ports on board. I want them to provide these, and exclude SATA from motherboards, to save space and reduce complexity and cost. That way, the motherboards are cleaner for people who only require NVMe, and people who need SATA can sacrifice M.2 slots for SATA.
Vorl - Friday, September 15, 2017 - link
Heads up: the X399 Taichi product link points to the Pro Gaming motherboard instead.
milkod2001 - Friday, September 15, 2017 - link
Grossly overpriced boards. Can they drop this LED nonsense & ugly plastics and make proper boards at a reasonable $250?
DanNeely - Friday, September 15, 2017 - link
The RGB cancer isn't why these boards are so expensive. It's stupid, but only adds a few dollars to the cost, not a few hundred.
It's all the extra PCB layers they need to support the 4000 connections to the CPU socket and route all the extra PCIe lanes. Limiting the number of PCIe lanes in mainstream chips is as much about being able to use smaller sockets and fewer PCB layers to keep board costs down as it is a desire on AMD and Intel's part to upsell X299/X399 systems. The fact that the mobo vendors are probably expecting to sell dozens of mainstream boards for every one of these halo products doesn't help either, because it means that the R&D costs can't be spread anywhere near as widely.
For the two boards that have them, the $100 for a 10GB NIC doesn't help any.
tamalero - Sunday, September 17, 2017 - link
Still annoying and useless. They just wanted to appeal to the gaming segment, when this entire system is not for gaming.
Now add the fact that almost every goddarn high end video card AND memory kit now has shitty RGB lights as well.
It's a waste of power.
CheapSushi - Sunday, September 17, 2017 - link
Are you assuming enthusiasts are all just gamers? You can be an enthusiast that loves gaming AND content creation. If you want no-nonsense then maybe go EPYC instead (YES, it is a workstation platform too, not just server): http://b2b.gigabyte.com/Server-Motherboard/MZ31-AR...
mapesdhs - Tuesday, September 19, 2017 - link
Amazing how many people just assume that a PC user is either a gamer or not a gamer. There's a lot of crossover with content creation these days, and also people who stream. Being able to play a game, record the gameplay, convert a previous session and upload it to YT/etc. will be a boon for those who make a living doing such things.
CheapSushi - Sunday, September 17, 2017 - link
Finally a voice of reason for the naggers.
ddarko - Friday, September 15, 2017 - link
Anyone interested in pairing the Asus Zenith Extreme with the Threadripper version of the Noctua NH-U14S should note that the cooler blocks the first PCI-E x16 slot on the board. Noctua says Asus didn't follow the AMD clearance guidance on this board; you can see in the pic that the top slot is very very close to the CPU bracket.
glennst43 - Friday, September 15, 2017 - link
The NH-U12S TR4-SP3 will fit though: http://noctua.at/en/nh-u12s-tr4-sp3/specification. I read a German article showing that this cooler provides very similar cooling performance and noise compared to the 14S. I have 2 other systems that use variations of the 14S, and I cannot find any noticeable noise difference. Still an unfortunate design decision from Asus.
DanNeely - Friday, September 15, 2017 - link
Other than the MSI board, I notice that none of them are putting USB2 ports on the back panel. Does that mean the interference problems that some USB2 devices encountered in 3.0 ports have been fixed; or that the mobo makers just feel anyone who needs them can use an IO bracket attached to an onboard header?
LordanSS - Saturday, September 16, 2017 - link
My money is on the latter.... =/
vgray35@hotmail.com - Friday, September 15, 2017 - link
With CPU power draw approaching 150 amps, all these MBs are brain dead right out of the gate for using 50-year-old 8+ phase buck converter topology, where VRMs with multiple phases still have poor response settling times (not one-cycle settling but 20 cycles) and lowish efficiencies well below 98%, needing substantial heat sinks, while being saddled with the vagaries of inductive energy storage, which takes up a lot of space and costs much more. Instead they should be using hybrid-PWM switching, eliminating the hard switching of high currents, where fractional-cycle resonance ensures zero-current switching and almost no switching losses. Resonance frequency scaling of the inductor/capacitor eliminates the need for ferrite cores altogether, reducing both size and core losses.
http://www.powerelectronics.com/power-management/s...
Rehashing the same old power technologies with only stepwise minuscule improvements at each iteration is not going to bode well at all. These engineers need to join the rest of us in the 21st century and stop rehashing inductive energy storage solutions in power supplies. These new power topologies have been available for 5 to 8 years now. And furthermore, this will also reduce EMI noise.
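The key arithmetic behind the efficiency argument in this thread is that cooling cares about waste heat, not the efficiency number itself: going from 94% to 99% efficient cuts heat by roughly five sixths, not five points. A minimal sketch (the load figure is illustrative; the exact percentage depends on whether losses are referenced to input or output power):

```python
# Waste heat dissipated by a converter at a given output power,
# for two efficiencies. Illustrative numbers from the discussion.
def waste_heat(output_w: float, efficiency: float) -> float:
    # Input power = output / efficiency; the difference becomes heat.
    return output_w / efficiency - output_w

load = 300.0                   # watts delivered to the CPU, illustrative
old = waste_heat(load, 0.94)   # ~19.1 W of heat at 94% efficiency
new = waste_heat(load, 0.99)   # ~3.0 W of heat at 99% efficiency
print(f"94% efficient: {old:.1f} W of heat; 99%: {new:.1f} W")
print(f"Reduction: {1 - new / old:.0%}")
```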
vgray35@hotmail.com - Friday, September 15, 2017 - link
I might add that eliminating the multi-phase buck converters for a solution based on PWM-resonance switching (hybrid switching), fractional-cycle resonance (one-cycle settling responses), and resonance scaling of the resonant components would likely permit an X399 mITX board to be made, with all that PSU space recovered for other purposes. Who will take up the challenge? AMD? ASRock? Or will Intel beat AMD in the rush to high current capability at 99% efficiency? Or one of the other competitors out there! A clarion call to action this is, to once and for all retire the buck converter!
ddriver - Friday, September 15, 2017 - link
I doubt they will even make a micro ATX board, much less ITX... It would be a waste: the gigantic socket will take up most of the space, there won't be enough room for all 4 memory channels, and the CPU's generous amount of PCIe lanes would be utterly wasted.
vgray35@hotmail.com - Friday, September 15, 2017 - link
Yes, and yet ASRock put the X99 on a mITX board. However, cutting heat from the VRMs by 60% to 70% or more should be done anyway, not to mention the graphics cards that draw even more power. Why settle for 95/96% efficiency (or worse in some cases) when one could get 98.5% to 99%? But then again you might be right concerning Threadripper, although I would still buy it with M.2 slots on the back. The power delivery could be moved entirely to the back of the board without heat sinks, with a total height of only 1.5mm (possibly at the center of the socket itself on the back). The challenge of course is just as applicable to the graphics cards.
ddriver - Saturday, September 16, 2017 - link
Hm, it seems they launched some LGA2066 boards recently, an ASRock ITX and an MSI mATX. TR4 however is still ATX and up only...
DanNeely - Friday, September 15, 2017 - link
I agree about how little sense it would make; but there have been a few LGA2011 mITX boards for people who only needed the higher core count. One of the two currently listed on Newegg uses SoDIMMs and riser boards around the CPU socket to make everything fit. Doing it for Threadripper would definitely be harder on account of the bigger socket; but I won't say never.
Xajel - Saturday, September 16, 2017 - link
mITX is very hard even with riser boards & only 4x SO-DIMMs, but mATX on the other hand is very plausible... let's hope for a mATX version of the ASRock Pro Gaming with its 10GbE (and at least one of its Intel 1GbE ports), at least 2 of its M.2s and 6+ SATA ports... and please ASRock, add a front USB 3.1 Gen2 header...
ddriver - Friday, September 15, 2017 - link
Amps, or amperes, is a unit of current, not a unit of power. Power is measured in watts.
"Rehashing the same old power technologies with only stepwise minuscule improvements at each iteration, is not going to bode well at all." - this is unfortunately a fact, and the de facto motto of the industry.
ddriver - Friday, September 15, 2017 - link
I mean it is kinda naive to assume several-years-old switching circuits will find their way into the mainstream just because they are a few percent better.
We've had gas turbine engines for decades, but they are still only used in tanks, helicopters or naval ships, while mainstream vehicles are still stuck with the internal combustion engine, which is easier to break, harder to maintain and a whopping 60% less efficient.
vgray35@hotmail.com - Saturday, September 16, 2017 - link
A few % better - you are looking at it wrong. Platinum PSU units with peak 94% efficiency mean 6% dissipated as heat. At 99% efficiency, that is 1% dissipated as heat. That is a 5/6 reduction in heat, or 83% less, which is not just a few percent better. Further, the new topology yields a >70% cost reduction, which is also not insignificant. Gas turbines are also too noisy, by the way, which is the main reason they are not considered (expensive when they go wrong too), and thus not a good comparison versus the improvements being discussed here. Are you saying the linked article on PWM-resonance and resonance-scaling topology is not worthwhile, or that the buck converter's inductors are not a severely limited and highly noisy power solution? Perhaps power electronics engineering is not your field of interest at all!
Manch - Monday, September 18, 2017 - link
An ICE is not 60% less efficient than a gas turbine. A gas turbine doesn't scale downward well; one of similar power would actually be less efficient than an equivalent-rated ICE. Gas turbines are impractical for vehicles.
One benefit of a turbine would be no tailgating. The intake would suck in small critters and crap them out all over your windshield.
vgray35@hotmail.com - Friday, September 15, 2017 - link
Yes, I appreciate that - we draw power (watts) at a rate of joules of energy per second. Notice I said "we draw power", but that is also reflected proportionally as current in amps (a rate of coulombs per second). There was no direct inference that power is measured in amps, only that (a) we are drawing a measure of power, and (b) this corresponds proportionally to roughly 150 A. Yes, I probably could have said it better, but in no wise did I say power is measured in AMPS.
vgray35@hotmail.com - Friday, September 15, 2017 - link
I do not believe the de facto motto of the industry is "we are wasting 5% in heat, so let us not bother with wasting only 1%". Clearly you did not read the link to the Power Electronics article (I surmise), as then you would have realized the solution offers a huge reduction in cost as well as heat. The cost factor is something everybody is interested in, even the industry at large, in my humble opinion.
ddriver - Friday, September 15, 2017 - link
Huge? Like what? 6-7% better? Maybe 8-9%?
What I meant is the solution you linked to is like 99% efficient; a good buck converter is what? Like 90-92%?
That's nowhere near the difference between an internal combustion engine and a gas turbine, the latter being more than twice as efficient. And still no adoption, even though the solution is not really all that complex, and decades old. They still only use gas turbines in the most demanding applications, which is pretty much the same as with the converters from that article, which that dude developed for NASA's most demanding applications. I am pretty sure computers at NASA run on buck converters too, and they will use his designs only for the stuff they launch into space.
You probably don't realize how immense an impact it would have if all cars on the planet became more than twice as efficient: burning less than half the fuel, outputting less than half the harmful emissions, traveling twice as far on a single fill. It would completely dwarf the benefits of boosting computer power converters from 90 to 99% :)
I am not saying it is not cool, I am just saying there have been far more beneficial, far higher-priority solutions that haven't been adopted for a lot longer, so you should not be surprised that the entire industry hasn't switched to a new power converter design overnight. They will do as they always do: milk the cow until it dies, then make it into jerky, and only then go for the new and better thing.
vgray35@hotmail.com - Saturday, September 16, 2017 - link
As noted earlier, 92% efficiency means 8% heat, versus 99% which means 1% heat - a 7/8 reduction in heat, or 87% less. Thinking this is only a 7% improvement differential is incorrect - it is, in this example, an 87% improvement. That is not small potatoes. When this is coupled with a corresponding large cost reduction, it becomes apparent the chip manufacturers would rather make more money using the older technology. Maybe the PSU engineers are just plain lazy, or are not following the advances in their field as they are snowed under with work. Let's keep the discussion focused on power supplies.
vgray35@hotmail.com - Saturday, September 16, 2017 - link
The 600 W @ 12 V fed to the motherboard with ~8% losses is ~50 W of heat in the VRMs (driving a 180 W CPU and 320 W GPU, leaving 100 W for other parts). The ATX power supply is 90% efficient or less, adding another 60 W of heat in the same case. The GPU's power supply wastes another 25 W, for a total of 135 W of waste heat (as opposed to the useful heat generated within the various components themselves from useful work being done).

It is tough to manage this 660 W in the case, and over 135 W of it (>21%, a low-ball estimate) is waste heat from power supplies alone (ATX PSU, VRMs, GPU supply), of which over 80%, or 110 W, could be eliminated by abandoning the buck converter topology. This of course is a simplistic view, as other components are ignored here for brevity's sake. I suspect >25% of the heat comes from power supplies alone, and it can be dramatically curtailed. It's a 3-ringed circus: the ATX PSU steals 10% for itself, the motherboard steals another 8%, and the GPU steals a further 8%. We have no control over the power drawn by the mega-chips themselves, but we do have control over the power supplies that drive them, and manufacturers could be doing a lot better here. And this is by no means a monster power-hungry system, with only one high-end graphics card. A >85% reduction in power supply waste heat can be realized if the buck converter is abandoned, and that applies to resonant LLC power supplies also. The motherboard and ATX PSU manufacturers need to take this much more seriously.
http://www.powerelectronics.com/power-management/s...
AMD's Thread Ripper X399 and Intel's X299 platforms should have been their first attempt at abandoning the Buck, half-bridge, and resonant LLC topologies. They failed us in that regard. We need this fiasco to come to an end by using hybrid PWM-resonant switching and resonance scaling, which eliminates the ferrite cored inductors altogether, and replaces them with just copper traces on the PCB. This is not rocket science.
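The waste-heat budget sketched in this comment can be tallied in a few lines (all figures are the comment's own illustrative estimates: a 90%-efficient ATX PSU and ~92%-efficient VRMs; the split differs slightly from the 50/60/25 W figures above but lands in the same ballpark):

```python
# Tallying the waste-heat budget from the comment above (illustrative
# figures: 90%-efficient ATX PSU, ~92%-efficient VRMs on board and GPU).
atx_output = 600.0                               # W delivered at 12 V
atx_heat = atx_output / 0.90 - atx_output        # ~67 W lost in the PSU itself

vrm_eff = 0.92
board_load = 180.0 + 100.0                       # CPU + "other parts", W
gpu_load = 320.0
board_vrm_heat = board_load / vrm_eff - board_load   # ~24 W
gpu_vrm_heat = gpu_load / vrm_eff - gpu_load         # ~28 W

total_waste = atx_heat + board_vrm_heat + gpu_vrm_heat
print(f"Waste heat: PSU {atx_heat:.0f} W, board VRM {board_vrm_heat:.0f} W, "
      f"GPU VRM {gpu_vrm_heat:.0f} W -> ~{total_waste:.0f} W total")
```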
Oxford Guy - Saturday, September 16, 2017 - link
Motherboard makers seem pretty much incompetent. They can't even be bothered to issue BIOS updates to fix serious bugs.
ddriver - Sunday, September 17, 2017 - link
Yeah, but then again, an overclocked TR is like 200 W, while even an entry-level car is like 200 KILOWATTS. So percentages are not really that indicative.
The facts remain: a TR mobo with a better power regulator circuit will save like 10 watts of power; a car with a gas turbine engine will save like 100 KW of power.
That's a 10,000 times larger saving on an absolute scale. What's more important in your opinion: saving a watt, or saving 10,000 watts? Naturally, I'd rather have both. The goal here is to illustrate how low a priority improving mobo power delivery is compared to some other, longer-standing improvement opportunities that have been ignored.
Icehawk - Sunday, September 17, 2017 - link
They tried turbine cars; they are terrible due to the way they deliver power. Several successful drag cars have used them, as in that application the power delivery works well. Same reason we aren't going to be driving rocket cars.
ddriver - Monday, September 18, 2017 - link
You should look into it. A gas turbine is not a jet engine. It is actually more efficient, and it doesn't utilize jet propulsion. Gas turbines power certain tanks, ships and helicopters. They are also used in power plants.
thomasg - Sunday, September 17, 2017 - link
Honestly, could you just stop with your stupid gas turbine rant, since you don't really seem to grasp what efficiency is, nor even what power a typical engine has.
Gas turbines are very efficient at or around their designed typical load. They are not efficient under medium and low load scenarios, where they will drop below modern gasoline combustion engines.
Those come with 200 kW - in high-powered sports cars, or top-of-the-line luxury limousines.
A "entry level car" will be at max. 75 kW peak power; and guess what: most of the time they are used far below the maximum output.
Modern gasoline car engines typically reach 45% efficiency, which they achieve in their typical load scenarios, at less than 50% of their design load.
Modern gas turbines can reach up to 60% efficiency, which is great - but this is usage at their design load. At half load, the efficiency will drop below 30%. The majority of miles driven with cars are at below half load.
What we expect from car engines is efficiency at their usage, while having enough reserves for quick acceleration. Gas turbines cannot do this efficiently, and gas turbines are notoriously laggy in variable load.
However, they can be used effectively in fully-hybrid cars, where peak-load is achieved by battery-backed electric motors.
But since these engines are so expensive to produce, it is simply more cost-effective to use fully-electric cars for this.
ddriver - Monday, September 18, 2017 - link
Of course a gas turbine consumer vehicle would also utilize a battery buffer. You basically charge it while stationary, drive on battery until it runs out, at which point the turbine kicks in to supply power and charge. An all-electric drive significantly simplifies the design and the transmission, and ensures maximum torque at any power level.
You are way off: only F1 engines approach 50% efficiency, and they only last a few races, which is the cost of that efficiency - totally impractical for consumer vehicles. The typical operational efficiency of consumer vehicles is as low as 20%. And they are also intrinsically limited in terms of torque delivery, which happens in a specific and rather narrow RPM range.
So, transitioning to gas turbine engines would be not a 100% but a 200% increase in efficiency. I guess little minds simply cannot appreciate the significance of that. Not to mention it would de facto force wide hybrid vehicle adoption, at a very low overhead compared to internal combustion engines, as a gas turbine with the same power delivery will weigh 1/4 as much, and will deliver 3 times as much energy from the same amount of fuel.
Gas turbine engines are also actually easier to make and maintain; they have far fewer moving parts. You seem to be confusing a regular gas turbine engine with the ultra-efficient kind, which requires expensive and painstaking 5-axis machining of the components. Those exceed 70% efficiency, but if you aim for 60%, the manufacturing is much cheaper, easier and faster. Overall much more cost-efficient.
Which is exactly why they are not being adopted. It would result in a massive loss of revenue: cheaper engines that need less of the expensive maintenance, fewer parts, and much less fuel. The priority of the industry is profit, and gas turbine vehicles would cause a massive dip in that respect. Internal combustion engines are pathetic in terms of efficiency, but very profit-effective.
ddriver - Monday, September 18, 2017 - link
"and a very LOW overhead compared to internal combustion engines"vgray35@hotmail.com - Sunday, September 17, 2017 - link
Get a glue ddriver. This is not about power saved, whose numbers are minuscule compared to the power output of an automobile engine - IT IS ALL ABOUT THE TEMPERATURE OF THE INTEGRATED CHIPS, WHICH ARE LIMITED TO NOT MUCH MORE THAN 100 deg C, and the difficulty of providing sufficient airflow or liquid cooling to remove that heat. Failure to remove the heat dramatically reduces the life of the chips, by as much as 60%-70%. The problem is greatly curtailed by not using circuit designs that generate ever larger amounts of waste heat. Please stick to the subject matter of this posting, which is about reducing CPU, GPU and VRM operating temperatures without huge heat sinks and liquid cooling radiators. How does one reduce these temperatures? The first step is to eliminate >85% of the heat in the ATX PSU, motherboard VRMs, and GPU VRMs, to reduce the total heat load on the cooling system. And that technology is already available, as mentioned above. Please cease with the 100 kW rhetoric, which is meaningless in the context of this temperature problem (yes, little k for kilo, not capital K). Let's talk about the excess temperature issue. Get it!
Threadripper is a HEDT platform and thus deserves a HEDT VRM solution, not the same old worn-out technologies that use air-gapped ferrite-cored inductors, when resonance scaling permits increased resonant capacitance in exchange for a much smaller resonant inductance using cheaper air-cored inductors. And to boot, a dramatic reduction in both size and cost. AMD should lead this charge and bring forth an appropriate reference VRM using PWM-resonant switching and resonance scaling of the Cr/Lr resonant components. ARE YOU LISTENING, AMD - LET'S RETIRE THE BUCK CONVERTER ACROSS THE BOARD.
ddriver - Monday, September 18, 2017 - link
Getting glue. Now what?
Here is a clue - remove the stock heatsink, install better cooling. Takes like 5 minutes. Heat problem solved. Crude, but it delivers results.
The industry standards are so low, there is barely a product, regardless of its price range, that someone with basic engineering cannot tangibly improve in a few easy steps.
An example: I recently got a Yoga 720 2-in-1. Opened it up, removed the cooling, put on good TIM, reinstalled the cooling, and now I have a 5-minute, $5 improvement that gave me a 10% boost in performance, temperature and battery life. They are just lazy, and don't go even for the most obvious, easiest-to-implement improvements.
They DONT WANT IT TO RUN COOL. They deliberately engineer it to run at its limit, so close that often they actually mess it up. So that this device can fail, so you can get a new one. It is a time bomb, planned obsolescence, and you can bet your ass they would have done the same regardless of the power delivery circuit involved. It may actually be a far more delicate and harder to address time bomb than hot running VRMs. Which you can easily cool down by ordering a custom heatpipe solution, which will set you back like 50$. That's a rather quick and affordable way to solve your problem, compared to complaining about it in this cesspool of mediocrity ;)
vgray35@hotmail.com - Monday, September 18, 2017 - link
Sorry ddriver, but I disagree with all your perspectives on this matter. You are clearly not capable of addressing the technical issues of power supply design for efficiency, cannot (or do not want to) get to grips with electronic circuitry and how different designs compare, and appear only interested in hijacking the original subject matter for your own purposes. You never contributed a single element addressing the original purpose of this thread, so you have lost your credibility as a serious participant in my book - and hence you and I are done.
glennst43 - Friday, September 15, 2017 - link
Based on my experience with the Asus Zenith Extreme, you can expect a bumpy ride, which should not be surprising with a new product. My last 3 systems were Intel X58, X79, and X99 boards purchased shortly after their releases, and this platform (X399) has had the most issues. I expect that in a few months, after some BIOS and driver updates, the experience will get much better. I suspect that the validation process is not as thorough as for the Intel boards.
Here are a few issues that I have experienced as an early adopter:
System would not boot with 2 video cards (resolved with BIOS update)
The 10G Network card would randomly disconnect (resolved? with driver update)
System sometimes will not come back from sleep and requires a hard reset (no resolution yet)
USB devices disconnecting/reconnecting randomly (no resolution yet)
johnnycanadian - Friday, September 15, 2017 - link
I'm crossing my fingers that I made the correct choice with MSI's X399 offering. I too have been burned by the ASUS early-adopter penalty, and although Gigabyte has been good to me in the past, the MSI offered everything I needed and then some (although I'm firmly in the "get rid of the tacky LEDs" camp). Everything is getting stuffed into a Cooler Master HAF XB II EVO (with no glass, but with the mesh top panel). Even if it's not perfect, it can't be worse than running Windows on Boot Camp with a "trash can" (aptly named) Mac "Pro".
arter97 - Friday, September 15, 2017 - link
ASUS PRIME X399-A: "In the ROG board this lead to a 40mm fan, which is not present here on the Prime."
This is wrong. I own one, and the fan is present under the shroud.
You can even see it from the side shot of the motherboard.
HStewart - Friday, September 15, 2017 - link
You've got to give some credit to AMD's clever marketing team for naming this X399 when Intel has X299, and then calling this "The Most Advanced Desktop Motherboard in the World".
But in reality there is no Thunderbolt 3.0 support, and it will most likely be updated next year or later with PCIe 4.0 support.
Does it support DDR4-4600?
AMD sure likes to play the numbers game, and not just with the chipset name - but with the number of cores too. Just remember it's mostly marketing - yes, it's still 16 cores, but 16 cores from one manufacturer does not mean the same as 16 cores from the competition.
sartwell - Friday, September 15, 2017 - link
Where is the high speed RAM? You cannot get it anywhere.
HowardJones - Friday, September 15, 2017 - link
If you're considering the ROG Zenith, be aware that it's having tons of problems overclocking CPU and memory. There's a huge thread on overclock dot net that is filled with people having problems with the BIOS. There WAS an Asus rep who was trying to help, but he's pretty much disappeared in the last few weeks.
tamalero - Sunday, September 17, 2017 - link
Seems like your average typical Asus mainboard.
Asus has really huge troubles in their software segment. I still remember they needed like 50 patches to get their high-end routers stable, and even then they couldn't deliver everything they promised in working condition, and even disabled some features in later firmwares.
Same with bioses.
I still remember a time when you had to enable the "floppy" connector on a motherboard (even if you had no drive) if you wanted the BIOS to actually apply the overclocking settings.
mapesdhs - Tuesday, September 19, 2017 - link
A note for those with older ASUS ROG and other ASUS X79 boards, etc.: there's a thread on the ASUS ROG forum site where a guy has provided modded BIOS files to support booting from NVMe devices, as well as rolling in all the updates to the latest microcode, Intel RAID, etc. Very handy indeed, and he takes requests for other ASUS boards. SSDs like the 950 Pro have their own boot ROM, but a BIOS with boot support is perfect for the 960 EVO/Pro and other models which don't have their own boot ROM.
I just bought two R4Es (one basically new), a 4930K, a 16GB/2400 DDR3 kit, and a 120mm AIO for a total of 320 UKP. Who needs new stuff? :D My next new build will be TR or EPYC though, I'm sure.
satai - Saturday, September 16, 2017 - link
Are there any details on the X399 AORUS Gaming 7 power delivery solution?
The rumor was that they used only 8 phases, but is this true for the final design?
danjw - Saturday, September 16, 2017 - link
The Asus Zenith Extreme claims to support 4-way SLI, but I thought Nvidia didn't support x4 slots for SLI? Am I wrong? Or is this a false claim?
DanNeely - Saturday, September 16, 2017 - link
Cynically, I'm going to guess that if you use several-generation-old cards that did support 4-way SLI, you could combine all 4 together for an act of supreme WTFery.
danjw - Saturday, September 16, 2017 - link
I looked at the manual for the Zenith Extreme. It claims that the two slots which the block diagram in this review indicates are x4 are actually x8, with the lowest one on the board being x8 as long as you don't have a drive connected to the U.2 port. So it is unclear which is correct: the block diagram or the manual.
danjw - Sunday, September 17, 2017 - link
Apparently, the block diagram wasn't clear. That last slot is x8/x4 depending on whether you have a U.2 drive connected.
CityBlue - Saturday, September 16, 2017 - link
The noisy 40mm fan is a major deal-breaker for me - what are ASUS thinking by including this? They should add better passive cooling - I don't need the stupid little fan failing or becoming less effective in 2-3 years, leading to random instability.
Real shame, as my last two motherboards (Intel and AMD) have been from ASUS and I've been really happy, but for Threadripper they're off my list until they design a passively cooled motherboard.
mapesdhs - Tuesday, September 19, 2017 - link
Must admit I quite like the look of that Gigabyte Designare-EX, and likewise until now almost all my boards have been ASUS (except for some X58/P55 boards from ASRock, though I have three top-end ASUS P55 boards as well).
I note the absence of EVGA. Have they basically quit the mbd business?
Threska - Sunday, September 17, 2017 - link
Nice, but one deficiency that's carried through a lot of boards is dealing with headers. The writing's tiny, the locations are difficult to reach even if the board's not populated, and the documentation a lot of the time isn't clear. The one good thing is that it's usually dealt with only once (at build time).
prisonerX - Monday, September 18, 2017 - link
You need the motherboard equivalent of a large-print book.
Threska - Monday, September 18, 2017 - link
An endoscope, actually, but not everyone has those.
msroadkill612 - Saturday, September 23, 2017 - link
A large-print E-ATX or ATX, then.
Sorry, but $350-550 for a dumb, single-socket, home-bound motherboard is insane.
satai - Monday, September 18, 2017 - link
What part of "HEDT" don't you get?
You probably don't need it, or don't have the money for it. Some people do.
mapesdhs - Tuesday, September 19, 2017 - link
He's either trolling or just ignorant. I know a guy who has MS, got a PhD in a relevant discipline, and now does research on his own condition. A TR board would be ideal for the work he does, using lots of GPUs for compute, etc. (biomedical apps). Then there are those who game but also want to stream and encode, and of course solo professionals who can't afford dual-socket Xeon or Opteron-type boards. And plenty of people work at home, so the location of a system at home means nothing. I have a 36-CPU supercomputer in my garage. :D
wiyosaya - Monday, September 18, 2017 - link
The MSI Pro Carbon looks like it has 4 PCIe 3.0 x16 slots from the diagram in the article. I have to wonder whether those slots are actually electrically x16 or some other configuration. I have not been able to find any information on their site or in this article that gives the electrical configuration of those slots. Anyone know for sure?
mapesdhs - Tuesday, September 19, 2017 - link
According to page 34 of the mbd manual, the main slots are wired x16/x8/x16/x8.msroadkill612 - Saturday, September 23, 2017 - link
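As a rough sanity check on those numbers (a sketch with illustrative figures: Threadripper exposes 64 PCIe 3.0 lanes from the CPU, 4 of which feed the chipset link, leaving 60 for slots and storage; the trio of x4 storage links is assumed for the example):

```python
# Illustrative X399 CPU lane-budget check; slot widths as wired per
# page 34 of the manual, storage links are a hypothetical example.
CPU_LANES = 64
CHIPSET_LINK = 4              # reserved for the X399 chipset link
usable = CPU_LANES - CHIPSET_LINK

slots = [16, 8, 16, 8]        # x16/x8/x16/x8 main slots
storage = [4, 4, 4]           # assumed three x4 M.2/U.2 links

used = sum(slots) + sum(storage)
print(used, usable, used <= usable)  # 60 60 True
```

With these assumed figures the budget is exactly exhausted, which is consistent with boards having to trade slot width against M.2/U.2 connectivity.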
Put another way, no TR board offers three full x16 slots, I think.
markmi - Wednesday, September 20, 2017 - link
Context: X399 AORUS Gaming 7 (rev. 1.0)
I asked GIGABYTE about the ECC mode handling for this board and they reported:
"It can support ECC memory by default in ECC mode."
So the table in the article that says "operates in non-ECC mode" for this board is wrong on that issue, as far as I can tell.
ntsarb - Saturday, September 23, 2017 - link
I'm waiting for an X399 motherboard with 6+1 PCIe 3.0 x16 slots and 3x PLX switches to allow for 6x GPUs, for a rendering workstation. The ASRock X99 WS-E and the Asus X99 WS-E are no longer available in the market.
karatekid430 - Thursday, October 26, 2017 - link
Quote from article: "Thunderbolt 3 certification requires a few things from the CPU side like graphical output which we haven't been able to do. We expect this will be developed upon through Raven Ridge and possibly get more groundwork down to activate TB3 on the X399 Designare EX."
End quote from article.
Then how do Intel Skylake-X systems with X299 motherboards have Thunderbolt 3 certification, when Skylake-X has no integrated graphics?
Hence the quoted statement sounds fishy to me.
Cooe - Thursday, April 4, 2019 - link
*facepalm* Because Thunderbolt was developed by, and is owned by, Intel. They can license whatever they want to be Thunderbolt-capable, because they literally created it and set the standard for what is and isn't. Intel hasn't actually opened up Thunderbolt licensing and certification for non-Intel platforms yet like they promised, though, so Gigabyte can't do anything until they do and let AMD CPUs be TB3-capable.