The Enterprise TLC Storage Era Begins: Rounding Up 13 SSDs With Samsung, Intel, and Memblaze
by Billy Tallis on January 3, 2019 9:45 AM EST

Mixed Random Performance
Real-world storage workloads usually aren't pure reads or writes but a mix of both. It is completely impractical to test and graph the full range of possible mixed I/O workloads: varying the proportion of reads to writes, the split between sequential and random access, and the block sizes leads to far too many configurations. Instead, we're going to focus on just a few scenarios that are most commonly referred to by vendors, when they provide a mixed I/O performance specification at all. We tested a range of 4kB random read/write mixes at queue depth 32, and also tested the NVMe drives at QD128. This gives us a good picture of the maximum throughput these drives can sustain for mixed random I/O, but in many cases the queue depth will be far higher than necessary, so we can't draw meaningful conclusions about latency from this test. As with our tests of pure random reads or writes, we are using 32 (or 128) threads, each issuing one read or write request at a time. This spreads the work over many CPU cores, and for NVMe drives it also spreads the I/O across the drive's several queues.
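A workload of this shape can be approximated with fio. The job file below is a sketch, not the exact configuration used for our testing: it issues a 70% read / 30% write 4kB random mix using 32 jobs at iodepth 1, for an effective queue depth of 32. The target device and runtime are illustrative placeholders.

```ini
; Sketch of a 70/30 4kB random mix at effective QD32.
; 32 jobs each issue one request at a time, matching the article's setup.
; filename and runtime are illustrative, not the tested values.
[global]
ioengine=libaio
direct=1
rw=randrw
rwmixread=70
bs=4k
iodepth=1
numjobs=32
group_reporting=1
time_based=1
runtime=300

[mixed-qd32]
filename=/dev/nvme0n1
```

Raising `numjobs` to 128 approximates the QD128 variant of the test.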
The full range of read/write mixes is graphed below, but we'll primarily focus on the 70% read, 30% write case that is a fairly common stand-in for moderately read-heavy mixed workloads.
[Interactive charts: 4kB Mixed Random Performance — Queue Depth 32 | Queue Depth 128]
The SATA SSDs are all significantly slower at 70% reads than they were at 100% reads on the previous page, but the higher capacity drives come closer to saturating the SATA link. Among the NVMe drives, the Samsung 983 DCT shows no further improvement from increasing the queue depth from 32 all the way to 128, but the more powerful NVMe drives do need the higher queue depth to deliver full speed. The Intel P4510's improvement at QD128 over QD32 is relatively modest, but the Memblaze PBlaze5 almost doubles its throughput and manages to catch up to the Intel Optane P4800X.
[Interactive charts: QD32 Power Efficiency in MB/s/W | QD32 Average Power in W]
[Interactive charts: QD128 Power Efficiency in MB/s/W | QD128 Average Power in W]
The Intel Optane P4800X is the only drive that stands out with a clear power efficiency advantage; aside from that, the different product segments are on a relatively equal footing. The different capacities within each product line all have similar power draw, so the largest (fastest) models end up with the best efficiency scores. The smaller NVMe drives like the 960GB Samsung 983 DCT and the 2TB Intel P4510 waste some of the performance potential of their SSD controllers, so from a power efficiency standpoint only the larger NVMe drives are competitive with the SATA drives.
[Interactive charts: Performance vs. Read/Write Mix — QD32 | QD128]
The SATA drives and slower NVMe drives generally show a steep decline in performance as the test progresses from pure reads through the more read-heavy mixes, accompanied by an increase in power consumption. For the more balanced mixes and the more write-heavy half of the test, those drives show a slower performance decline and their power consumption plateaus. For the faster NVMe drives (the Memblaze PBlaze5 and Intel Optane P4800X), power consumption climbs through most or all of the test, and they are the only drives for which increasing the queue depth beyond 32 helps on the more balanced or write-heavy mixes. Higher queue depths only help the Samsung 983 DCT and Intel P4510 on the most read-heavy workloads.
Aerospike Certification Tool
Aerospike is a high-performance NoSQL database designed for use with solid state storage. The developers of Aerospike provide the Aerospike Certification Tool (ACT), a benchmark that emulates the typical storage workload generated by the Aerospike database. This workload consists of a mix of large-block 128kB reads and writes, and small 1.5kB reads. When the ACT was initially released back in the early days of SATA SSDs, the baseline workload was defined to consist of 2000 reads per second and 1000 writes per second. A drive is considered to pass the test if it meets the following latency criteria:
- fewer than 5% of transactions exceed 1ms
- fewer than 1% of transactions exceed 8ms
- fewer than 0.1% of transactions exceed 64ms
Drives can be scored based on the highest throughput they can sustain while satisfying the latency QoS requirements. Scores are normalized relative to the baseline 1x workload, so a score of 50 indicates 100,000 reads per second and 50,000 writes per second. Since this test uses fixed I/O rates, the queue depths experienced by each drive depend on its latency, and can fluctuate during the test run if the drive slows down temporarily for a garbage collection cycle. The test will give up early if it detects the queue depths growing excessively, or if the large-block I/O threads can't keep up with the random reads.
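The scoring rules above are simple to express in code. This sketch (the function names are ours, not part of the ACT tool) converts an ACT score to the implied IOPS rates and checks a run against the three latency thresholds:

```python
# Illustrative helpers mirroring ACT's published scoring rules;
# the names are ours, not part of the ACT tool itself.

BASELINE_READS = 2000   # 1x workload: 1.5kB random reads per second
BASELINE_WRITES = 1000  # 1x workload: large-block writes per second

def act_rates(score):
    """IOPS rates implied by an ACT score (e.g. 50x -> 100k reads, 50k writes)."""
    return BASELINE_READS * score, BASELINE_WRITES * score

def act_passes(pct_over_1ms, pct_over_8ms, pct_over_64ms):
    """True if a run meets all three latency QoS criteria."""
    return (pct_over_1ms < 5.0 and
            pct_over_8ms < 1.0 and
            pct_over_64ms < 0.1)
```

For example, `act_rates(50)` yields the (100,000, 50,000) read/write rates mentioned above.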
We used the default settings for queue and thread counts and did not manually constrain the benchmark to a single NUMA node, so this test produced a total of 64 threads scheduled across all 72 virtual (36 physical) cores.
The usual runtime for ACT is 24 hours, which makes determining a drive's throughput limit a long process. For fast NVMe SSDs, this is far longer than necessary for drives to reach steady state. In order to find the maximum rate at which a drive can pass the test, we start at an unsustainably high rate (at least 150x) and incrementally reduce the rate until the test can run for a full hour, then decrease the rate further if necessary to get the drive under the latency limits.
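That search procedure amounts to stepping the workload multiplier down from an unsustainable starting point until a full run passes. A rough sketch, with `run_act` as a hypothetical stand-in for launching an actual one-hour ACT run:

```python
def find_max_rate(run_act, start=150, step=5):
    """Step the ACT workload multiplier down from an unsustainably
    high starting point until a run passes the latency criteria.

    run_act(rate) -> True if the drive passes at that rate; this is a
    hypothetical callback standing in for a real one-hour ACT run.
    """
    rate = start
    while rate > 0 and not run_act(rate):
        rate -= step
    return rate
```

In practice each `run_act` call costs an hour of wall-clock time, which is why even this shortened procedure takes a while per drive.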
Samsung's SATA drives have vastly improved performance over the older PM863: even the entry-level 860 DCT is several times faster, despite being absolutely not intended for workloads this write-heavy. The 3.84TB 883 DCT is a bit slower than the lower capacities, but still offers more than twice the performance of the 860 DCT.
The NVMe drives all outperform the SATA drives, with the Samsung 983 DCT M.2 predictably being the slowest of the bunch. The Intel P4510 outperforms the 983 DCTs, and the Memblaze PBlaze5s are much faster still, though even the PBlaze5 C900 can't quite catch up to the Intel Optane DC P4800X.
[Interactive charts: Power Efficiency | Average Power in W]
The power consumption differences between these drives span almost an order of magnitude. The latest Samsung SATA drives range from 1.6 W up to 2.7 W, while the NVMe drives start at 5.3 W for the 983 DCT M.2 and go up to 12.9 W for the PBlaze5. However, the power efficiency scores don't vary as much. The two fastest NVMe SSDs also take the two highest efficiency scores, but then the Samsung 883 DCT SATA drives offer better efficiency than most of the rest of the NVMe drives. The SATA drives are at a serious disadvantage in terms of IOPS/TB, but for large datasets the SATA drives might offer adequate performance in aggregate at a lower TCO.
36 Comments
FunBunny2 - Thursday, January 3, 2019 - link
"The rack is currently installed in an unheated attic and it's the middle of winter, so this setup provided a reasonable approximation of a well-cooled datacenter."

Well... I don't know where your attic is, but mine is in New England, and the temperature hasn't been above freezing for an entire day for some time. What's the standard ambient for a datacenter?
Ryan Smith - Thursday, January 3, 2019 - link
It is thankfully much warmer in North Carolina. =)

Billy Tallis - Thursday, January 3, 2019 - link
I'm in North Carolina, so the attic never gets anywhere close to freezing, but it was well below normal room temperature during most of this testing. Datacenters aren't necessarily chilled that low unless they're in cold climates or are adjacent to a river full of cold water, but servers in a datacenter also tend to have their fans set to run much louder than I want in my home office.

The Intel server used for this testing is rated for continuous operation at 35ºC ambient. It's rated for short-term operation at higher temperatures (40ºC for 900 hours per year, 45ºC for 90 hours per year) with some performance impact but no harm to reliability. In practice, by the time the air intake temperature gets up to 35ºC, it's painfully loud.
Jezzah88 - Friday, January 4, 2019 - link
16-19 depending on size

drajitshnew - Thursday, January 3, 2019 - link
Is there enough information available for you to at least make a pipeline post clarifying the differences between Z-NAND (Samsung) and traditional MLC/SLC flash?

Billy Tallis - Thursday, January 3, 2019 - link
I should have a review up of the Samsung 983 ZET Z-SSD next month. I'll include all the information we have about how Z-NAND differs from conventional planar and 3D SLC. Samsung did finally share some real numbers at ISSCC 2018, and it looks like the biggest difference enabling lower latency is much smaller page sizes.

MrCommunistGen - Thursday, January 3, 2019 - link
Very much looking forward to the review!Greg100 - Thursday, January 3, 2019 - link
It's a pity that we don't have consumer drives that are fast and at the same time have large enough capacity - 8TB. I would like to have a consumer U.2 drive that has 8TB capacity.

What we have now… only 4TB Samsung and… SATA :(
Will Intel DC P4510 8TB be compatible with Z390 motherboard, Intel Core i9-9900K and Windows 10 Pro? Connection via U.2 to M.2 cable (Intel J15713-001). Of course the M.2 port on the motherboard will be compatible with NVMe and PCI-E 3.0 x4.
I know that compatibility should be checked on the motherboard manufacturer's website, but nobody has checked Intel DC P4510 drives and nobody will, because everyone assumes that the consumer does not need 8TB SSDs.
AnandTech should also test these drives on consumer motherboards. Am I the only one who would like to use the Intel DC P4510 8TB with Intel Z390, Intel Core i9-9900K and Windows 10 Pro? Is it possible? Will there be any compatibility problems?
Billy Tallis - Thursday, January 3, 2019 - link
I don't currently have the necessary adapter cables to connect a U.2 drive to our consumer testbed, but I will run the M.2 983 DCT through the consumer test suite at some point. I have plenty of consumer drives to be testing this month, though.

Generally, I don't expect enterprise TLC drives to be that great for consumer workloads, due to the lack of SLC caching. And they'll definitely lose out on power efficiency when tested at low queue depths. There shouldn't be any compatibility issues using enterprise drives on consumer systems, though. There's no need for separate NVMe drivers or anything like that. Some enterprise NVMe drives do add a lot to boot times, however.
Greg100 - Thursday, January 3, 2019 - link
Thank you :-) So I will try that configuration.

Maybe the Intel DC P4510 8TB will not be the boot champion or the most power-efficient drive at low queue depths, but having 8TB of data on a single drive with fast sequential access has huge benefits for me.
Do you think it is worth waiting for 20TB Intel QLC or 8TB+ client drives? Any rumors?