Sizing Up Servers: Intel's Skylake-SP Xeon versus AMD's EPYC 7000 - The Server CPU Battle of the Decade?
by Johan De Gelas & Ian Cutress on July 11, 2017 12:15 PM EST - Posted in
- CPUs
- AMD
- Intel
- Xeon
- Enterprise
- Skylake
- Zen
- Naples
- Skylake-SP
- EPYC
AMD’s EPYC 7000-Series Processors
As announced back at the official launch, AMD is planning to hit both the dual socket and single socket markets. With up to 32 cores, 64 threads, support for 2TB of DRAM per socket, and 128 PCIe lanes per CPU, AMD believes that by offering a range of core counts and frequencies it has the nous to attack Intel, even if it comes with a slight IPC disadvantage.
AMD’s main focus will be on the 2P parts, where each CPU uses 64 of its PCIe lanes (running the Infinity Fabric protocol) to connect to the other socket, meaning that a 2P system still has 128 PCIe 3.0 lanes to go around for add-in devices. The top four SKUs will be available initially, and the other parts should be in the hands of OEMs by the end of July. All of the CPUs have access to the full 64MB of L3 cache, except for the 7200-series parts, which get half.
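As a quick sanity check on that lane math, here is a minimal sketch (in Python, with the lane counts taken from the figures above; the function name is just our own shorthand) of how the 2P budget works out:

```python
# Each EPYC package exposes 128 PCIe lanes. In a 2P system, 64 lanes per
# socket are repurposed as the socket-to-socket Infinity Fabric link,
# leaving the remainder available for add-in devices.

def usable_pcie_lanes(sockets: int, lanes_per_socket: int = 128,
                      lanes_for_fabric: int = 64) -> int:
    """Lanes left for I/O after the inter-socket link is accounted for."""
    if sockets == 1:
        return lanes_per_socket            # nothing diverted to Infinity Fabric
    return sockets * (lanes_per_socket - lanes_for_fabric)

print(usable_pcie_lanes(1))  # 128 lanes in a 1P system
print(usable_pcie_lanes(2))  # still 128 lanes in a 2P system
```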
The new processors from AMD are called the EPYC 7000 series, with names such as EPYC 7301 and EPYC 7551P. The naming of the CPUs is as follows:
EPYC 7551P
- EPYC = Brand
- 7 = 7000 Series
- 55 = Two-digit number indicative of stack positioning / relative performance (non-linear); 30 in the case of the EPYC 7301
- 1 = Generation
- P = Single Socket, not present in Dual Socket
So in the future, we will see second generation ‘EPYC 7302’ processors, or if AMD scales out the design there may be EPYC 5000 processors with fewer silicon dies inside, or EPYC 3000 with a single die but for the EPYC platform socket (obviously, those last two are speculation).
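For readers who like to see the scheme spelled out, below is a small illustrative sketch (our own Python shorthand, not anything AMD publishes) that pulls those fields out of a 7000-series model name:

```python
import re

# Break an EPYC model name into the fields described above:
# series digit, two-digit positioning number, generation digit,
# and an optional trailing 'P' for single-socket-only parts.
EPYC_NAME = re.compile(r"^EPYC (\d)(\d\d)(\d)(P?)$")

def decode_epyc(name: str) -> dict:
    m = EPYC_NAME.match(name)
    if not m:
        raise ValueError(f"Not an EPYC 7000-style model name: {name}")
    series, position, generation, suffix = m.groups()
    return {
        "series": f"{series}000",
        "positioning": position,          # non-linear stack/performance indicator
        "generation": int(generation),
        "single_socket_only": suffix == "P",
    }

print(decode_epyc("EPYC 7551P"))
# {'series': '7000', 'positioning': '55', 'generation': 1, 'single_socket_only': True}
print(decode_epyc("EPYC 7301"))
# {'series': '7000', 'positioning': '30', 'generation': 1, 'single_socket_only': False}
```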
But starting with the 2P processors:
AMD EPYC Processors (2P)

| SKU | Cores / Threads | Base (GHz) | All-Core (GHz) | Max (GHz) | L3 | DRAM | PCIe | TDP | Price |
|-----------|---------|------|------|------|-------|----------------|-------------|-----------|--------|
| EPYC 7601 | 32 / 64 | 2.20 | 2.70 | 3.20 | 64 MB | 8-Ch DDR4-2666 | 128 (8 x16) | 180W | $4200 |
| EPYC 7551 | 32 / 64 | 2.00 | 2.55 | 3.00 | 64 MB | 8-Ch DDR4-2666 | 128 (8 x16) | 180W | >$3400 |
| EPYC 7501 | 32 / 64 | 2.00 | 2.60 | 3.00 | 64 MB | 8-Ch DDR4-2666 | 128 (8 x16) | 155W/170W | $3400 |
| EPYC 7451 | 24 / 48 | 2.30 | 2.90 | 3.20 | 64 MB | 8-Ch DDR4-2666 | 128 (8 x16) | 180W | >$2400 |
| EPYC 7401 | 24 / 48 | 2.00 | 2.80 | 3.00 | 64 MB | 8-Ch DDR4-2666 | 128 (8 x16) | 155W/170W | $1850 |
| EPYC 7351 | 16 / 32 | 2.40 | 2.90 | 2.90 | 64 MB | 8-Ch DDR4-2666 | 128 (8 x16) | 155W/170W | >$1100 |
| EPYC 7301 | 16 / 32 | 2.20 | 2.70 | 2.70 | 64 MB | 8-Ch DDR4-2666 | 128 (8 x16) | 155W/170W | >$800 |
| EPYC 7281 | 16 / 32 | 2.10 | 2.70 | 2.70 | 32 MB | 8-Ch DDR4-2666 | 128 (8 x16) | 155W/170W | $650 |
| EPYC 7251 | 8 / 16  | 2.10 | 2.90 | 2.90 | 32 MB | 8-Ch DDR4-2666 | 128 (8 x16) | 120W | $475 |
The top part is the EPYC 7601, which is the CPU we were provided with for this comparison. This is a 32-core part with simultaneous multithreading, a TDP of 180W, and a tray price of $4200. As the halo part, it also gets the best frequencies: a 2.20 GHz base, 3.2 GHz at maximum turbo (with up to 12 cores active), and 2.70 GHz when all cores are active.
Moving down the stack, AMD will offer 24, 16 and 8-core parts. These disable 1, 2 and 3 cores per CCX respectively, as we saw with the consumer Ryzen processors, which is done in order to keep core-to-core latencies more predictable (as well as keeping access to all of the L3 cache). What is interesting to note is that AMD will offer a 32-core part at 155W (when using DDR4-2400) for $3400, which is expected to be very competitive against Intel (and supports 2.66x more DRAM per CPU).
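To illustrate the symmetry, here is a minimal sketch of the arithmetic: each EPYC package carries four Zeppelin dies with two CCXes each (eight CCXes total), so disabling the same number of cores in every CCX yields the core counts in the stack. The helper name is our own.

```python
# An EPYC package has 4 dies x 2 CCXes = 8 CCXes, each with 4 cores and 8MB of L3.
CCX_PER_PACKAGE = 8
CORES_PER_CCX = 4
L3_PER_CCX_MB = 8

def epyc_config(disabled_per_ccx: int) -> tuple:
    """Core count and L3 when the same number of cores is cut in every CCX."""
    cores = CCX_PER_PACKAGE * (CORES_PER_CCX - disabled_per_ccx)
    l3_mb = CCX_PER_PACKAGE * L3_PER_CCX_MB   # L3 stays intact (halved only on the 7200-series)
    return cores, l3_mb

for disabled in range(4):
    cores, l3 = epyc_config(disabled)
    print(f"{disabled} cores disabled per CCX -> {cores} cores, up to {l3} MB L3")
# 0 -> 32 cores, 1 -> 24 cores, 2 -> 16 cores, 3 -> 8 cores
```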
The 16-core EPYC 7281, while having half the L3, will be available for $650, making it an interesting 2P option. Even the processor at the bottom of the stack, the 8-core EPYC 7251, supports the full 2TB of DRAM per socket as well as 128 PCIe lanes, making it a memory-focused SKU with almost zero competition from Intel for these sorts of builds. For software that requires a lot of memory but pays license fees per core or per socket, this is a nice part.
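A rough way to see why the 7251 is attractive for per-core-licensed software is to work the memory-per-core and license math; the sketch below uses a hypothetical per-core license fee purely for illustration.

```python
# Hypothetical example: per-core-licensed software on an 8-core EPYC 7251
# versus a 32-core EPYC 7601, both with the full 2TB of DRAM per socket.
DRAM_PER_SOCKET_GB = 2048
LICENSE_PER_CORE = 3000          # hypothetical $/core license fee, not a real quote

for name, cores, cpu_price in [("EPYC 7251", 8, 475), ("EPYC 7601", 32, 4200)]:
    gb_per_core = DRAM_PER_SOCKET_GB / cores
    total_cost = cpu_price + cores * LICENSE_PER_CORE
    print(f"{name}: {gb_per_core:.0f} GB of DRAM per core, "
          f"${total_cost:,} for CPU plus licenses per socket")
# EPYC 7251: 256 GB/core, $24,475   vs   EPYC 7601: 64 GB/core, $100,200
```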
For single socket systems, AMD will offer the following three processors:
AMD EPYC Processors (1P)

| SKU | Cores / Threads | Base (GHz) | All-Core (GHz) | Max (GHz) | L3 | DRAM | PCIe | TDP | Price |
|------------|---------|------|------|------|-------|----------------|-------------|-----------|-------|
| EPYC 7551P | 32 / 64 | 2.00 | 2.60 | 3.00 | 64 MB | 8-Ch DDR4-2666 | 128 (8 x16) | 180W | $2100 |
| EPYC 7401P | 24 / 48 | 2.00 | 2.80 | 3.00 | 64 MB | 8-Ch DDR4-2666 | 128 (8 x16) | 155W/170W | $1075 |
| EPYC 7351P | 16 / 32 | 2.40 | 2.90 | 2.90 | 64 MB | 8-Ch DDR4-2666 | 128 (8 x16) | 155W/170W | $750 |
These processors mirror the specifications of their 2P counterparts, but carry a P in the name and come in at considerably lower prices.
Comments
TheOriginalTyan - Tuesday, July 11, 2017
Another nicely written article. This is going to be a very interesting next couple of months.

coder543 - Tuesday, July 11, 2017
I'm curious about the database benchmarks. It sounds like the database is tiny enough to fit into L3? That seems like a... poor benchmark. Real world databases are gigabytes _at best_, and AMD's higher DRAM bandwidth would likely play to their favor in that scenario. It would be interesting to see different sizes of transactional databases tested, as well as some NoSQL databases.

psychobriggsy - Tuesday, July 11, 2017
I wrote stuff about the active part of a larger database, but someone's put a terrible spam blocker on the comments system. Regardless, if you're buying 64C systems to run a DB on, you likely will have a dataset larger than L3, likely using a lot of the actual RAM in the system.

roybotnik - Wednesday, July 12, 2017
Yea... we use about 120GB of RAM on the production DB that runs our primary user-facing app. The benchmark here is useless.

haplo602 - Thursday, July 13, 2017
I do hope they elaborate on the DB benchmarks a bit more or do a separate article on it. Since this is a CPU article, I can see the point of using a small DB to fit into the cache, however that is useless as an actual DB test. It's more an int/IO test. I'd love to see a larger DB tested that can fit into the DRAM but is larger than available caches (32GB maybe?).
ddriver - Tuesday, July 11, 2017
We don't care about real world workloads here. We care about making intel look good. Well... at this point it is pretty much damage control. So let's lie to people that intel is at least better in one thing. Let me guess, the database size was carefully chosen to NOT fit in a ryzen module's cache, but small enough to fit in intel's monolithic die cache?
Brought to you by the self proclaimed "Most Trusted in Tech Since 1997" LOL
Ian Cutress - Tuesday, July 11, 2017
I'm getting tweets saying this is a severely pro AMD piece. You are saying it's anti-AMD. ¯\_(ツ)_/¯

ddriver - Tuesday, July 11, 2017
Well, it is hard to please intel fanboys regardless of how much bias you give intel, considering the numbers. I did not see you deny my guess on the database size, so presumably it is correct then?
ddriver - Tuesday, July 11, 2017
In the multicore 464.h264ref test we have 2670 vs 2680 for the xeon and epyc respectively. Considering that the epyc score is mathematically higher, how does it yield a negative zero? Granted, the difference is a mere 0.3% advantage for epyc, but it is still a positive number.

Headley - Friday, July 14, 2017
I thought the exact same thing