26 Comments
koinkoin - Friday, August 3, 2012 - link
For HPC solutions I like the Dell C6220: dense, and with 2 or 4GB of memory per CPU core you get a good configuration in a 2U chassis for 4 servers.
But for VMware, servers like the R720 give you more room to play with memory and I/O slots.
Not to mention that those dense servers don't offer the same level of management and user-friendliness.
JohanAnandtech - Friday, August 3, 2012 - link
A few thoughts:
1. Do you still need lots of I/O slots now that we can consolidate a lot of gigabit Ethernet links into two 10GbE ports?
2. Management: OK, a typical blade server can offer a bit more, but the typical remote management solutions that Supermicro now offers are not bad at all. We have been using them for several years now.
Can you elaborate on what you expect from a management solution that you won't see in a dense server?
alpha754293 - Friday, August 3, 2012 - link
re: network consolidation
Network consolidation comes at a cost premium. You can still argue that QDR InfiniBand will give you better performance/bandwidth, but a switch is $6k, and for systems that don't have IB QDR built in, it's about $1k per NIC. Cables are at least $100 a piece.
If you can use it and justify the cost, sure. But GbE is cheap. REALLY REALLY cheap now that it's been in the consumer space for quite some time.
And there aren't too many cases when you might exceed GbE (even the Ansys guys suggest investing in better hardware rather than expensive interconnects). And that says a LOT.
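To put the numbers above in perspective, here is a rough cost sketch using just the figures quoted in this comment (~$6k for a QDR IB switch, ~$1k per adapter, ~$100 per cable); real quotes will obviously vary:

```python
# Back-of-the-envelope interconnect cost per node, using the prices quoted above.
# These are the commenter's rough figures, not vendor list prices.
def ib_cost_per_node(nodes: int, switch: int = 6000, nic: int = 1000, cable: int = 100) -> float:
    """Amortize the switch over the node count and add per-node NIC + cable cost."""
    return switch / nodes + nic + cable

for n in (8, 16, 32):
    print(f"{n:>2} nodes: ~${ib_cost_per_node(n):,.0f} per node for QDR InfiniBand")
```

Even amortized over 32 nodes that is still well over $1k per node, which is why plain GbE stays attractive when the workload doesn't need the bandwidth.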
re: management
I've never tried Supermicro's IPMI, but it looks to be pretty decent. Even if that doesn't work, you can also use a third-party tool like LogMeIn, and that works quite well too! (Although it's not available for Linux, there are Linux/UNIX options out there as well.)
Supermicro also has an even higher density version of this server (4x half-width, 1U DP blade node.)
JonBendtsen - Monday, August 6, 2012 - link
I have tried Supermicro IPMI; it works nicely. I can power the machine on/off and let it boot from a .iso image I have on my laptop. This means that if I have to boot from a rescue CD, I do not even have to plug a CD drive into the machine. Everything can be done from my laptop, even when I am not in the office, or even in the country.
bobbozzo - Tuesday, August 7, 2012 - link
Can you access boot screens and the BIOS from the IPMI?
For Linux, I use SSH (or VNC server), but when you've got memory or disk errors, etc., it's nice to see the BIOS screens.
Bob
phoenix_rizzen - Thursday, August 9, 2012 - link
Using either the web interface on the IPMI chip itself or the IPMIView software from SuperMicro, you get full keyboard, mouse, and console redirection, meaning you can view the POST, BIOS, pre-boot, boot, and console of the system.
You can also configure the system to use a serial console, configure the installed OS to use a serial console, and then connect to the serial console remotely using the ipmitool program.
The IPMI implementation in SuperMicro motherboards (at least the H8DG6/H8DGi series, which we use) is very nice. And stable. And useful. :)
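For anyone who wants to script the operations described above, a minimal sketch is below. It simply shells out to standard ipmitool commands (chassis power control, one-shot boot-device selection, and serial-over-LAN); the BMC address and credentials are placeholders:

```python
# Minimal sketch: wrap a few common ipmitool operations for a remote BMC.
# Host, user, and password are hypothetical placeholders; adjust for your BMC.
import subprocess

BMC = ["ipmitool", "-I", "lanplus", "-H", "10.0.0.50", "-U", "ADMIN", "-P", "changeme"]

def power(action: str) -> None:
    """action: 'status', 'on', 'off', or 'cycle'."""
    subprocess.run(BMC + ["chassis", "power", action], check=True)

def boot_from(device: str) -> None:
    """Set the boot device for the next boot only, e.g. 'pxe', 'disk', or 'cdrom'."""
    subprocess.run(BMC + ["chassis", "bootdev", device], check=True)

def serial_console() -> None:
    """Attach to the serial-over-LAN console (detach with the '~.' escape)."""
    subprocess.run(BMC + ["sol", "activate"], check=True)

if __name__ == "__main__":
    power("status")
```

Mounting an .iso over the network, as described above, still goes through the web interface or IPMIView rather than ipmitool.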
ForeverAlone - Friday, August 3, 2012 - link
Only 128GB RAM? Unacceptable!
Guspaz - Monday, August 20, 2012 - link
It starts to matter more when you're pouring on the VMs. With two sockets there, you're talking 16 cores, or 32 threads. That's the kind of machine that can handle a rather large number of VMs, and with only 128GB of RAM, that would be the limitation regarding how many VMs you could stick on there. For example, if you wanted to have a dedicated thread per VM, you're down to only 4GB per VM, which is kind of low for a server.
darking - Friday, August 3, 2012 - link
I think the price on the webpage is wrong, or at least it differs by market. I just checked the Danish and British webstores, and the 32GB LRDIMMs are priced at around $2200, not the $3800 that the US webpage has.
JohanAnandtech - Friday, August 3, 2012 - link
They probably changed it in the last few days, as HP lowered their price to $2000 a while ago. But when I checked, it was $3800.
dgingeri - Friday, August 3, 2012 - link
"Most 2U servers are limited to 24 memory slots and as a result 384GB of RAM. With two nodes in a 2U server and 16 slots per node, you can cram up to 512GB of RDIMMs in one server."
It's not one server. It's actually two servers. Just because they're in a 2U x half-width form factor doesn't mean they're just one system. There are two systems there. Sure, you can pack 512GB into 2U with two servers, but there are better ways.
1. Dell makes the PowerEdge R620, where you can pack 384GB into 1U; two of those give you the same number of systems in the same space, with 50% more memory.
2. Dell also has their new R720, which is 2U and has a capacity of 768GB. Again, 50% more memory capacity in the same 2U. However, that's two processor sockets short.
3. And now there's the new R820: 4 sockets, 1.5TB of memory, 7 slots, in 2U of space. It's a beast. I have one of these on the way from Dell for my test lab.
Working as an admin in a test lab and dealing with all brands of servers gives me a rather unique insight. I have had very few problems with Dell servers, despite Dell making up nearly 30% of the lab. We've had 7 drives die (all Toshiba) and one faceplate LCD go out. Our HP boxes, at less than 10% of our lab, have had more failures. The IBMs, while also less than 10%, have had absolutely no hardware failures. Our Supermicros comprise about 25% of the lab, yet contribute >80% of the hardware problems, from motherboards that just quit recognizing memory to backplanes that quit recognizing drives. I'm not too happy with them.
JHBoricua - Monday, August 6, 2012 - link
Dgingeri,
Sure, you can load each of those Rxxx Dell servers with boatloads of memory, but you fail to mention that it comes with a significant performance penalty. The moment you put a third DIMM on a memory channel, your memory speed drops from 1600 (IF you started with 1600 memory to begin with) to 1066 or, worse, 800. On a virtualization host, that makes a big difference.
Casper42 - Friday, August 10, 2012 - link
No one makes 32GB @ 1600 yet. So 512GB @ 2DPC would be 1333
And 768GB @ 3DPC would be 1066 or 800 like you mentioned.
384 using 16GB DIMMs would still be 3DPC and would drop from 1600 down to like 1066.
256GB @ 1600 @ 2DPC still seems to be the sweet spot.
BTW, why is the Dell R620 limited to 16GB DIMMs? The HP DL360p Gen8 is also 1U and supports 32GB LRDIMMs
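To make the capacity/speed trade-off in this sub-thread easier to follow, here is a small sketch. It assumes a 2-socket E5-2600 box with 4 memory channels per socket and up to 3 DIMMs per channel, and the MHz values are simply the figures quoted in these comments, not a vendor specification:

```python
# Capacity and (roughly) achievable memory speed for a 2-socket Xeon E5-2600 server.
# Channel/DPC layout is the platform's; the speed figures are the ones quoted above.
CHANNELS_PER_SOCKET = 4
SOCKETS = 2

SPEED_MHZ = {
    # (DIMM size in GB, DIMMs per channel) -> approx. speed quoted in this thread
    (16, 1): 1600, (16, 2): 1600, (16, 3): 1066,
    (32, 1): 1333, (32, 2): 1333, (32, 3): 1066,
}

def config(dimm_gb: int, dpc: int) -> tuple[int, int]:
    """Return (total capacity in GB, approximate memory speed in MHz)."""
    slots = CHANNELS_PER_SOCKET * SOCKETS * dpc
    return slots * dimm_gb, SPEED_MHZ[(dimm_gb, dpc)]

for dimm_gb in (16, 32):
    for dpc in (1, 2, 3):
        cap, mhz = config(dimm_gb, dpc)
        print(f"{dimm_gb}GB DIMMs @ {dpc} DPC -> {cap:>4} GB at ~{mhz} MHz")
```

That reproduces the numbers above: 256GB of 16GB DIMMs at 2 DPC keeps 1600, while the 512GB and 768GB configurations give up memory speed.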
ImSteevin - Friday, August 3, 2012 - link
MMhmmm yeah
Oh yeah ok
I know some of these words.
thenew3 - Friday, August 3, 2012 - link
The latest Dell R620s are 1U servers that can have two 8-core CPUs and 24 DIMM slots. Each slot can hold up to a 32GB DIMM, giving a total memory capacity of 768GB in a 1U space.
We use these in our data centers for virtualization (we're 100% virtualized). Completely diskless (internal RAID 1 dual SD modules for ESXi).
Each machine has four 10Gb NICs plus two 1Gb NICs. All storage is on iSCSI SANs through a 10Gb backbone.
For most virtualization tasks, you really don't need the 2U R720, which has the same CPU/RAM options but gives you more drive bays and expansion slots.
ddr3memory - Sunday, August 5, 2012 - link
A few corrections - the 192GB for HCDIMMs is incorrect - it should also be 384GB.
There is no data available that confirms a 20% higher power consumption for HCDIMMs over LRDIMMs. There is a suspicious lack of benchmarks available for LRDIMMs. It is possible that figure arises from a comparison of 1.5V HCDIMMs vs. 1.35V LRDIMMs (as were available at IBM/HP).
It is incorrect that LRDIMMs are somehow standard and HCDIMMs are non-standard.
In fact HCDIMMs are 100% compatible with DDR3 RDIMM JEDEC standard.
It is the LRDIMMs which are a new standard and are NOT compatible with DDR3 RDIMMs - you cannot use them together.
The 1600MHz HCDIMM mention is interesting - would be good to hear more on that.
ddr3memory - Sunday, August 5, 2012 - link
I have posted an article on the performance comparison of HyperCloud HCDIMMs (RDIMM-compatible) vs. LRDIMMs (RDIMM non-compatible).
Cannot post link here it seems - search for the article on the ddr3memory.wordpress.com blog:
Awaiting 32GB HCDIMMs
ddr3memory - Monday, August 6, 2012 - link
VMware has had good things to say about HCDIMM (not a word from VMware about LRDIMMs though). Search on the net for the article entitled:
Memory for VMware virtualization servers
ddr3memory - Monday, August 6, 2012 - link
The prices mentioned may be off - I see IBM showing the same retail prices for 16GB LRDIMMs/HCDIMMs, and similar at the IBM resellers.
These resellers show 16GB HCDIMMs selling at $431 at costcentral for example, $503 at glcomp, and $424 at pcsuperstore.
Search the internet for this article:
What are IBM HCDIMMs and HP HDIMMs ?
It has the links for the IBM/HP retail prices as well as the reseller prices.
ddr3memory - Monday, August 6, 2012 - link
Your article is very interesting - and the first mainstream (and belated) examination of the LRDIMM (new standard - incompatible with RDIMMs) vs. HCDIMM (100% DDR3 RDIMM compatible) choice for Romley.
I have whittled down the use cases for HCDIMMs, LRDIMMs, and RDIMMs as follows:
The HCDIMM use case is at:
- 16GB at 3 DPC use
- 32GB (outperform both RDIMMs and LRDIMMs)
LRDIMMs are not viable at:
- 16GB (RDIMMs are better)
- 32GB (HCDIMMs are better)
RDIMMs are not viable at:
- 32GB (because they are 4-rank - trumped by LRDIMMs/HCDIMMs)
There is a reason the Netlist HCDIMMs were only released on the virtualization servers from IBM/HP - because at 16GB levels the only niche available for LRDIMM/HCDIMM vs. RDIMM is the 3 DPC space. This will expand considerably at 32GB to mainstream levels as soon as 32GB HCDIMMs are released (they are currently in qualification with IBM/HP and have not been announced yet - though maybe expected shortly).
I had created an infographic covering the memory choices - search the net for the article entitled:
Infographic - memory buying guide for Romley 2-socket servers
HCDIMMs are not available at SuperMicro (as they are for IBM/HP) - so I was surprised you even covered HCDIMMs (since the article is after all referring to the SuperMicro line of servers).
Casper42 - Friday, August 10, 2012 - link
BTW, Johan, I work for HP and asked some of the guys in ISS Technical Marketing why we don't send you our servers for eval like you get from SuperMicro and sometimes Dell.
They felt that you guys didn't do a lot of server reviews, and that your readership wasn't generally the kind of folks that buy HP servers.
So I am curious if you could spin up a poll or something in the future to prove them wrong.
If there is enough support I'm sure we can get you some gear to play with.
I sometimes giggle when I see the stuff people on here get excited about in these reviews, though. "Can you see the BIOS through IPMI?" That's the kind of thing Compaq offered back with the RILOE II, and it has been integrated into the motherboard since iLO 1, which is at least 4 or 5 years old.
iLO 4 on the Gen8 line has taken that a step further: we now hook the display system BEFORE POST starts, so instead of an invalid memory config getting you a series of beeps, you now get a full-blown screen, either on local VGA or on the Remote Console, that straight up tells you that you have a memory mismatch and why. I have seen this demo'd with NO DIMMs even installed in the server, and you still get video and obvious status messages.
Casper42 - Friday, August 10, 2012 - link
Also, you are about $2000 high on the HP SL unless I am missing something.
I found these prices with QuickSpecs part numbers and Google, nothing magical inside HP.
Half of one of these:
http://www.provantage.com/hewlett-packard-hp-62923...
Includes 8 fans and 3 PS
2 of these
http://www.provantage.com/hewlett-packard-hp-65904...
2x2665 with 8GB
Comes to about $11,600
JohanAnandtech - Tuesday, August 14, 2012 - link
Hey Casper, contact me on my mail... thx!
ad99 - Monday, April 1, 2013 - link
You say "a quad rank DIMM with 4Gb chips is a 32GB DIMM (4 Gbit x 8 x 4 ranks)", but I think 4 Gbit x 8 x 4 ranks make only 16GB. Is that right?
ad99 - Monday, April 1, 2013 - link
No, 4 Gbit x 8 x 4 ranks should be 128GB, not 32GB or 16GB. Is that right?
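For what it's worth, here is a quick sanity check of that arithmetic, as a sketch. It assumes the standard 64-bit-wide DDR3 data path (ECC chips excluded); the key variable is whether the 4Gb DRAMs are x8 parts (8 chips per rank) or x4 parts (16 chips per rank):

```python
# DDR3 rank arithmetic: a rank is 64 data bits wide (ignoring the extra ECC chips),
# so the number of chips per rank depends on each chip's data width (x4 or x8).
RANK_WIDTH_BITS = 64
BITS_PER_BYTE = 8

def dimm_capacity_gb(chip_gbit: int, chip_width: int, ranks: int) -> float:
    chips_per_rank = RANK_WIDTH_BITS // chip_width        # x8 -> 8 chips, x4 -> 16 chips
    rank_gb = chips_per_rank * chip_gbit / BITS_PER_BYTE  # one rank's capacity in GB
    return ranks * rank_gb

print(dimm_capacity_gb(4, 8, 4))   # 4Gbit x8 chips, quad rank -> 16.0 GB
print(dimm_capacity_gb(4, 4, 4))   # 4Gbit x4 chips, quad rank -> 32.0 GB
```

So "4 Gbit x 8 chips x 4 ranks" does work out to 16GB; the article's 32GB quad-rank figure implies x4 DRAMs with 16 chips per rank, and 128GB is not reachable from 4Gb chips on a quad-rank DIMM.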