The Intel Optane SSD DC P4800X (375GB) Review: Testing 3D XPoint Performance
by Billy Tallis on April 20, 2017 12:00 PM EST

Test Configurations
So while the Intel SSD DC P4800X is technically launching today, 3D XPoint memory is still in short supply. Only the 375GB add-in card model has been shipped, and only as part of an early limited release program. The U.2 version of the 375GB model and the add-in card 750GB model are planned for a Q2 release, and the U.2 750GB model and the 1.5TB model are expected in the second half of 2017. Intel's biggest enterprise customers, such as the Super Seven, have had access to Optane devices throughout the development process, but broad retail availability is still a little ways off.
Citing the current limited supply, Intel has taken a different approach to review sampling for this product. Their general desire for secrecy regarding the low-level details of 3D XPoint has also likely been a factor. Instead of shipping us the Optane SSD DC P4800X to test on our own system, as is normally the case with our storage testing, this time around Intel has only provided us with remote access to a DC P4800X system housed in their data center. Their Non-Volatile Memory Solutions Group maintains a pool of servers to provide partners and customers with access to the latest storage technologies, and their software partners have been using these systems for months to develop and optimize applications to take advantage of Optane SSDs.
Intel provisioned one of these servers for our exclusive use during the testing period, and equipped it with a 375GB Optane SSD DC P4800X and an 800GB SSD DC P3700 for comparison. The P3700 was the U.2 version of the drive and was connected through a PLX PEX 9733 PCIe switch. The Optane SSD under test was initially going to be a U.2 version connected to the same backplane, but Intel found that the PCIe switch was introducing inconsistency in access latency on the order of a microsecond or two, which is a problem when trying to benchmark a drive with ~8µs best-case latency. Intel swapped out the U.2 Optane SSD for an add-in card version that uses PCIe lanes direct from the processor, but the P3700 was still potentially subject to whatever problems the PCIe switch may have caused. Clearly, there's some work to be done to ensure the ecosystem is ready to take full advantage of the performance promised by Optane SSDs, but debugging such issues is beyond the scope of this review.
Intel NSG Marketing Test Server
CPU | 2x Intel Xeon E5-2699 v4
Motherboard | Intel S2600WTR2
Chipset | Intel C612
Memory | 256GB total, Kingston DDR4-2133 CL11 16GB modules
OS | Ubuntu Linux 16.10, kernel 4.8.0-22
The system was running a clean installation of Ubuntu 16.10, with no Intel or Optane-specific software or drivers installed, and the rest of the system configuration was as expected. We had full administrative access to tweak the software to our liking, but chose to leave it mostly in its default state.
Our benchmarks are a variety of synthetic workloads generated and measured using fio version 2.19. There are quite a few operating system and fio options that can be tuned, but we generally left them at their defaults: the NVMe driver was not switched to polling mode, CPU affinity was not manually set, and power management and CPU turbo behavior were left untouched. There is work underway to switch fio over to nanosecond-precision time measurement, but it has not yet reached a usable state. Our tests therefore record latencies in whole-microsecond increments, and mean latencies that report fractional microseconds are simply weighted averages of those buckets; a reported mean of 8.4µs, for example, indicates that more operations completed closer to 8µs than to 9µs.
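For illustration, a minimal fio job along these lines might look like the following sketch; the device path and the specific parameters are placeholders of our choosing, not Intel's exact test matrix:

```
; illustrative 4KB random-read job at queue depth 1 (hypothetical parameters)
[global]
ioengine=libaio         ; standard async I/O path, no polling mode
direct=1                ; bypass the OS page cache
filename=/dev/nvme0n1   ; raw device path (placeholder)
runtime=60
time_based=1

[qd1-randread]
rw=randread
bs=4k
iodepth=1
numjobs=1
```

Scaling iodepth and numjobs upward from this baseline is how the higher-queue-depth results in the following pages are generated.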
All tests were run directly on the SSD with no intervening filesystem. Real-world applications will almost always access the drive through a filesystem, but they will also benefit from the operating system's cache in main RAM, which is bypassed with this testing methodology.
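As a sketch of what this means at the application level, opening the raw block device with O_DIRECT (which is what fio's direct=1 does) keeps the page cache out of the I/O path; the device path below is again a placeholder:

```c
/* Minimal sketch: reading a raw block device with the page cache bypassed.
   Assumes /dev/nvme0n1 is the device under test (hypothetical path). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* O_DIRECT transfers must be aligned to the logical block size. */
    const size_t block = 4096;
    void *buf;
    if (posix_memalign(&buf, block, block) != 0) {
        perror("posix_memalign");
        return 1;
    }

    int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* A single 4KB read served by the SSD itself, not the OS cache. */
    ssize_t n = pread(fd, buf, block, 0);
    if (n < 0)
        perror("pread");
    else
        printf("read %zd bytes directly from the device\n", n);

    close(fd);
    free(buf);
    return 0;
}
```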
To provide an extra point of comparison, we also tested the Micron 9100 MAX 2.4TB on one of our own systems, using a Xeon E3-1240 v5 processor. To avoid unfairly disadvantaging the Micron 9100, most tests were limited to at most four threads. Our test system was running the same Linux kernel as the Intel NSG marketing test server and used a comparable configuration, with the Micron 9100 connected directly to the CPU's PCIe lanes rather than through the PCH.
AnandTech Enterprise SSD Testbed
CPU | Intel Xeon E3-1240 v5
Motherboard | ASRock Fatal1ty E3V5 Performance Gaming/OC
Chipset | Intel C232
Memory | 4x 8GB G.SKILL Ripjaws DDR4-2400 CL15
OS | Ubuntu Linux 16.10, kernel 4.8.0-22
Because this was not a hands-on test of the Optane SSD on our own equipment, we were unable to conduct any power consumption measurements. Due to the limited time available for testing, we were also unable to systematically test write endurance or the impact of extra overprovisioning on performance. We hope to have the opportunity to conduct a full hands-on review later in the year to address these topics.
Due to time constraints, we were unable to cover Intel's new Memory Drive Technology software, an optional add-on that can be purchased with the Optane SSD. Memory Drive Technology is a minimal hypervisor that presents the Optane SSD to software as if it were RAM: the guest OS sees a pool of memory equal to the amount of installed DRAM plus up to 320GB of the Optane SSD's 375GB capacity. The hypervisor manages data placement to automatically cache hot data in DRAM, so neither applications nor the guest OS can explicitly address or allocate Optane storage. We may get a chance to look at this in the future, as it is an interesting example of how multi-tiered storage will affect the enterprise market over the next few years.
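Intel has not published the placement algorithm its hypervisor uses, but the general idea of transparently caching hot pages in a small fast tier in front of a large slow one can be sketched as follows; this is a toy illustration of the concept, not Intel's implementation:

```c
/* Conceptual sketch of two-tier memory placement: a small "DRAM" tier
   transparently caches hot pages from a large "Optane" tier.
   Toy sizes and a direct-mapped policy are used purely for brevity. */
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE  4096
#define DRAM_PAGES 4     /* real ratio: installed DRAM vs. up to 320GB */
#define SLOW_PAGES 64

static char slow_tier[SLOW_PAGES][PAGE_SIZE];  /* stands in for Optane */
static char dram_tier[DRAM_PAGES][PAGE_SIZE];  /* stands in for DRAM   */
static int  dram_tag[DRAM_PAGES];              /* which slow page each slot holds */

/* Return a pointer the "guest" can use; promotion to DRAM is transparent. */
static char *access_page(int page)
{
    int slot = page % DRAM_PAGES;
    if (dram_tag[slot] != page) {
        /* Miss: write back the cold page, then pull the hot page into DRAM. */
        memcpy(slow_tier[dram_tag[slot]], dram_tier[slot], PAGE_SIZE);
        memcpy(dram_tier[slot], slow_tier[page], PAGE_SIZE);
        dram_tag[slot] = page;
    }
    return dram_tier[slot];
}

int main(void)
{
    strcpy(access_page(7), "hot data lives in DRAM after first touch");
    printf("%s\n", access_page(7));  /* second touch hits the DRAM tier */
    return 0;
}
```

The key property the sketch captures is the one described above: the guest simply addresses one large pool of memory, and data migrates between tiers without the application ever allocating Optane storage explicitly.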
Comments
Ninhalem - Thursday, April 20, 2017 - link
At last, this is the start of transitioning from hard drive/memory to just memory.

ATC9001 - Thursday, April 20, 2017 - link
This is still significantly slower than RAM... maybe for some typical consumer workloads it can take over as an all-in-one storage solution, but for servers and power users, we'll still need RAM as we know it today... and the fastest "RAM" if you will is on-die L1 cache, which has physical limits to its speed and size based on the speed of light!

I can see SSDs going away depending on manufacturing costs, but so many computers are still shipping with spinning disks that I'd say it's well over a decade before we see SSDs become the replacement for all spinning-disk consumer products.
Intel is pricing this right between SSDs and RAM, which makes sense. I just hope this will help the industry start to drive down prices of SSDs!
DanNeely - Thursday, April 20, 2017 - link
Estimates from about two years back had the cost/GB of SSDs undercutting that of HDDs in the early 2020s. AFAIK those were business-as-usual projections, but I wouldn't be surprised to see it happen a bit sooner, with HDD makers pulling the plug on R&D for a generation whose useful lifespan would be cut so short that projected sales fall below the minimums needed to justify the cost of bringing it to market.

Guspaz - Saturday, April 22, 2017 - link
Hard drive storage cost has not changed significantly in at least half a decade, while SSD prices have continued to fall (albeit at a much slower rate than in the past). This bodes well for the crossover.

Santoval - Tuesday, June 6, 2017 - link
Actually it has, unless you regard HDDs doubling in density at the same price every 2 - 2.5 years as not an actual falling cost. $ per GB is what matters, and that is falling steadily for both HDDs and SSDs (although the latter have lately spiked in price due to the flash shortage).

bcronce - Thursday, April 20, 2017 - link
The latency specs include PCIe and controller overhead. Get rid of those by dropping this memory in a DIMM slot and it'll be much faster. Still not as fast as current memory, but it's going to be getting close. Normal system memory is in the range of 0.5us; 60us is getting very close.

tuxRoller - Friday, April 21, 2017 - link
They also include context switching, ISR (pretty board-specific), and block layer abstraction overheads.

ddriver - Friday, April 21, 2017 - link
PCIe latency is below 1 us. I don't see how subtracting less than 1 from 60 gets you anywhere near 0.5.

All in all, if you want the best value for your money and the best performance, that money is better spent on 128 gigs of ECC memory.
Sure, xpoint is non volatile, but so what? It is not like servers run on the grid and reboot every time the power flickers LOL. Servers have at the very least several minutes of backup power before they shut down, which is more than enough to flush memory.
Despite Intel's BS PR claims, this thing is tremendously slower than RAM, meaning that if you use it for working memory, it will massacre your performance. Also, working memory is much more write-intensive, so you are looking at your money investment potentially crapping out in a matter of months. Whereas RAM will be much, much faster and work for years.
4 fast NVMe SSDs will give you like 12 GB/s of bandwidth, meaning that in the case of an imminent shutdown, you can flush and restore the entire contents of those 128 gigs of RAM in like 10 seconds or less. A totally acceptable trade-off for tremendously better performance and endurance.
There is only one single, very narrow niche where this purchase could make sense. Database usage, for databases with frequent low queue access. This is an extremely rare and atypical application scenario, probably less than 1/1000 in server use. Which is why this review doesn't feature any actual real life workloads, because it is impossible to make this product look good in anything other than synthetic benches. Especially if used as working memory rather than storage.
IntelUser2000 - Friday, April 21, 2017 - link
ddriver: Do you work for the memory industry? Or hold stock in them? You have a personal gripe about the company that goes beyond logic.

PCI Express latency is far higher than 1us. There are unavoidable costs of implementing a controller on the interface, and there's also software-related latency.
ddriver - Friday, April 21, 2017 - link
I have a personal gripe with lying, which is what Intel has been doing ever since it announced hypetane. If you find having a problem with lying a problem with logic, I'd say logic ain't your strong point.

Lying is also what you do. PCIe latency is around 0.5 us. We are talking PHY here. Controller and software overhead affect every communication protocol equally.
XPoint will see only minuscule latency improvements from moving to DRAM slots. Even if PCIe has about 10 times the latency of DRAM, we are still talking ns, while XPoint is far slower, in the realm of us. And it ain't no DRAM either, so the actual latency improvement will be nowhere near the approx 450 us.
It *could*, however, see significant bandwidth improvements, as the DRAM interface is much wider. But that would require a significantly increased level of parallelism and a controller that can handle it, and clearly the current one cannot even saturate a PCIe x4 link. More bandwidth could help mitigate the high latency by masking it through buffering, but it would still come nowhere near replacing DRAM without a tremendous performance hit.