OCZ Announces First SATA Host Managed SSD: Saber 1000 HMS
by Billy Tallis on October 15, 2015 8:00 AM EST - Posted in
- SSDs
- OCZ
- Enterprise SSDs
- Barefoot 3
Today OCZ is introducing the first SATA drive featuring a technology that may be the next big thing for enterprise SSDs. Referred to by OCZ as "Host Managed SSD" technology (HMS) and known elsewhere in the industry by the buzzword "Storage Intelligence", the general idea is to let the host computer know more about what's going on inside the SSD and to give it more influence over how the SSD controller goes about its business.
Standardization efforts have been underway for more than a year in the committees for SAS, SATA, and NVMe, but OCZ's implementation is a pre-standard design that may not be compatible with what is eventually ratified. To provide some degree of forward compatibility, OCZ is releasing an open-source abstraction library intended to offer a hardware-agnostic interface that can be used with future HMS devices.
OCZ's HMS implementation provides a vendor-specific extension of the ATA command set. A mode switch is required to access the HMS features; with HMS mode off, the drive behaves like a normal non-HMS SATA drive and all background processing such as garbage collection is managed autonomously by the SSD. With HMS enabled, the host computer can request that the drive override its normal operating procedure and either disable all background processing or perform it as a high-priority task. If background processing is left disabled for too long, the drive will re-enable it on its own when needed and suffer the immediate performance penalty of emergency garbage collection.
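From the host's point of view, the control model boils down to three states per drive. The sketch below is only an illustration of that model: the mode names and the `set_background_mode()` helper are hypothetical placeholders, not OCZ's actual library interface or command encoding.

```python
from enum import Enum

class BackgroundMode(Enum):
    AUTONOMOUS = "autonomous"        # HMS off: drive schedules garbage collection itself
    DEFERRED = "deferred"            # HMS on: host asks the drive to postpone housekeeping
    HIGH_PRIORITY = "high_priority"  # HMS on: host asks the drive to do housekeeping now

def set_background_mode(device: str, mode: BackgroundMode) -> None:
    """Placeholder for the vendor-specific ATA request the real abstraction
    library would issue; the actual opcodes are not public in this article,
    so this sketch only records the intent."""
    print(f"{device}: request background processing mode -> {mode.value}")
```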
The intention is to allow better aggregate performance from an array of drives. An example OCZ gives is an array divided into three pools of drives. At any given time, two pools are actively receiving writes while the drives in the third pool focus solely on the "background" housekeeping operations. The two pools in active use defer all background processing and operate with peak performance and consistency. By cycling the pools through the two modes, the goal is that none of the active drives ever reaches the steady state of constantly performing background processing to free up space for incoming writes. This provides a big improvement in performance consistency, and it can also provide a minor improvement to the overall throughput of the array.
Obviously, the load balancing and coordination required by such a scheme is not part of any traditional RAID setup. OCZ expects early adopters of HMS technology to make use of it from application-layer code. HMS does not require any new operating system drivers, and OCZ will be providing tools and reference code to facilitate using HMS. They plan to eventually expand this into a comprehensive SDK, but for now everybody is in the position of having to explore how best to make use of HMS for their specific use case. For some customers, that may mean load balancing several pools of SSDs attached to a single server, while others may find it easier to temporarily take an entire server offline for housekeeping.
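A minimal sketch of that pool-rotation idea in application code might look like the following. It reuses the hypothetical `set_background_mode()` helper from the earlier sketch; the device names, pool sizes, and rotation interval are purely illustrative and not OCZ recommendations.

```python
import itertools
import time

# Assumes BackgroundMode / set_background_mode from the earlier sketch.
POOLS = [
    ["/dev/sda", "/dev/sdb"],   # pool 0
    ["/dev/sdc", "/dev/sdd"],   # pool 1
    ["/dev/sde", "/dev/sdf"],   # pool 2
]
ROTATION_INTERVAL = 60  # seconds; tuning would be workload-specific

def rotate_pools() -> None:
    """Keep two pools serving writes with housekeeping deferred while the
    third pool catches up on garbage collection at high priority."""
    for housekeeping_idx in itertools.cycle(range(len(POOLS))):
        for idx, pool in enumerate(POOLS):
            mode = (BackgroundMode.HIGH_PRIORITY if idx == housekeeping_idx
                    else BackgroundMode.DEFERRED)
            for dev in pool:
                set_background_mode(dev, mode)
        # The application's write path would direct new writes only to the
        # two pools currently in DEFERRED mode.
        time.sleep(ROTATION_INTERVAL)
```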
OCZ has also envisioned that future HMS products may expand the controls from managing background processing to also changing overprovisioning or power limits on the fly, but they have no specific timeline for those features.
The drive OCZ is introducing with HMS technology is a variant of their existing Saber 1000 enterprise SATA SSD. The Saber 1000 HMS differs only in the SSD controller firmware; otherwise it is still a low-cost drive using the Barefoot 3 controller and is intended primarily for read-oriented workloads. Pricing is the same with or without HMS capability, though the Saber 1000 HMS is only offered in the 480GB and 960GB capacities. The warranty in either case is limited to 5 years, but because the HMS controls can affect write amplification, the Saber HMS write endurance rating is based on the actual Program/Erase cycle count of the drive rather than the total amount of data written to the drive.
| OCZ Saber 1000 HMS | 480GB | 960GB |
| --- | --- | --- |
| 4kB Random Read IOPS | 90k | 91k |
| 4kB Random Write IOPS | 22k | 16k |
| Random Read Latency | 135µs | 135µs |
| Random Write Latency | 55µs | 55µs |
| Sequential Read | 550 MB/s | 550 MB/s |
| Sequential Write | 475 MB/s | 445 MB/s |
| MSRP | $370 | $713 |
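To see why the P/E-based endurance rating mentioned above behaves differently from a fixed terabytes-written figure, a back-of-the-envelope calculation helps: the host data a drive can absorb over its life is roughly capacity × rated P/E cycles ÷ write amplification, so the same cycle budget stretches further when HMS keeps write amplification low. The numbers below are purely illustrative, not OCZ's ratings.

```python
def host_writes_tb(capacity_gb: float, rated_pe_cycles: int, write_amplification: float) -> float:
    """Approximate host data (in TB) a drive can accept before exhausting its P/E budget."""
    return capacity_gb * rated_pe_cycles / write_amplification / 1000

# Illustrative only: a 480GB drive with an assumed 3,000-cycle budget.
for waf in (1.5, 3.0, 5.0):
    print(f"WAF {waf}: ~{host_writes_tb(480, 3000, waf):.0f} TB of host writes")
```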
As a read-oriented drive with relatively little overprovisioning, the Saber 1000 has a lot to gain from HMS in terms of write performance and consistency, and it may allow the Saber 1000 HMS to compete in areas the Saber 1000 isn't fast enough for.
In addition to control over the garbage collection process, the Saber 1000 HMS provides a similar set of controls for managing when the controller saves metadata from its RAM to the flash. This is the information the controller uses to keep track of where each piece of data is physically stored and which blocks are free to accept new writes. Every write to the drive adds to the metadata log, so the changes need to be periodically flushed from RAM to flash. This is one of the key data structures that the drive's power loss protection needs to preserve, so the size of the in-RAM metadata log may also be limited by the drive's capacitor budget.
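In host code, those metadata controls would presumably be used much like the garbage collection controls: finish housekeeping and dump the log while the drive is otherwise idle, so a later burst of writes doesn't stall on a forced dump. The sketch below is a hypothetical illustration of that policy; `flush_metadata_log()` stands in for whatever request the real library exposes.

```python
# Assumes BackgroundMode / set_background_mode from the earlier sketch.
def flush_metadata_log(device: str) -> None:
    """Placeholder for the vendor-specific metadata log dump request."""
    print(f"{device}: request metadata log dump")

def end_of_write_burst(device: str) -> None:
    """Hypothetical idle-time policy: catch up on housekeeping, persist the
    metadata log, then go back to deferring background work."""
    set_background_mode(device, BackgroundMode.HIGH_PRIORITY)
    flush_metadata_log(device)
    set_background_mode(device, BackgroundMode.DEFERRED)
```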
To enable software to make effective use of these controls, the Saber 1000 HMS provides an unprecedented view into the inner workings of the drive. Software can query the drive for the NAND page size, the erase block size, the number of blocks per bank, and the number of banks in the drive. The total program and erase counts are reported separately, and information about free blocks is reported as a drive-wide total as well as the average, maximum, and minimum per bank. The drive also provides a status summary of whether garbage collection or metadata log dumping is active, and whether either is needed. OCZ's reference guide provides recommendations for interpreting all of these indicators.
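The kind of policy that reference guide enables might look like the sketch below, which checks the drive's reported status and falls back to high-priority housekeeping when any bank runs low on free blocks. Only the reported fields come from OCZ's description; the query mechanism, the `HMSStatus` structure, and the threshold are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class HMSStatus:
    # Fields OCZ says the drive reports; how they are retrieved is not shown here.
    page_size: int
    erase_block_size: int
    banks: int
    blocks_per_bank: int
    free_blocks_total: int
    free_blocks_min_per_bank: int
    gc_needed: bool
    metadata_dump_needed: bool

def needs_emergency_housekeeping(status: HMSStatus, min_free_per_bank: int = 8) -> bool:
    """Hypothetical policy: never let any bank run out of free erase blocks."""
    return (status.gc_needed
            or status.metadata_dump_needed
            or status.free_blocks_min_per_bank < min_free_per_bank)
```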
The Saber 1000 HMS will be available in early November for bulk purchases. The technical documentation and reference code should be available online today.
Source: OCZ
17 Comments
superunknown98 - Thursday, October 15, 2015 - link
This seems like a nice feature for raid arrays that require consistency but how much CPU power will it take to essentially be SSD controller?

alaricljs - Thursday, October 15, 2015 - link
Typically disk IO (even SSD) causes the CPU to sit around and ponder what to do. You would have to integrate your software and hardware impossibly well to not have lots of CPU cycles doing nothing when it's time for disk IO. I'd put money down that while this *will* use CPU cycles to accomplish the IO that overall the service time across computation and IO will decrease while CPU utilization will climb a small amount. I'm sure there will be use cases where this isn't what you want... but I can't think of any that this tech (or similar) would be considered for in the first place.

DerekZ06 - Friday, October 16, 2015 - link
I'll take that bet. This just introduces a way for the Host and SSD to communicate what's going on and how things should be done. Currently SSD's are communicating over a standard designed for Hard Disks so the SSD has to do a lot of guesswork.

woggs - Thursday, October 15, 2015 - link
This doesn't move any real effort to the host. It just give the host a switch to turn garbage collection on and off or to make it high priority. This is no cpu effort in the host. This allows the host to fire-off garbage collection at high priority when not using the drive, then taking all the SSD internal bandwidth when it wants. If used properly, it makes sense. If used improperly, it will do bad things.

ZeDestructor - Thursday, October 15, 2015 - link
Call me when SSDs simplify way down and just expose everything and let the OS take full control of the SSD - ie: let the OS see the raw blocks with full information of how it's wired to the controller and read/write to them directly instead of having just LBAs and relying on the underlying controller.

aryonoco - Thursday, October 15, 2015 - link
Right?! We threw out 40 years of hard-learned CS principles when we allowed OEMs to sell us these magic black boxes that provide no visibility to the host OS (or in fact flat out lie to them a lot of the time).

melgross - Thursday, October 15, 2015 - link
You're kidding, right? That's all we need is for IT to screw this up too.

ZeDestructor - Friday, October 16, 2015 - link
If your IT can't benchmark for shit, then I'm sorry for you. For the rest of us, alternative methods better suited to our uses would be much more interesting.

woggs - Thursday, October 15, 2015 - link
You are clueless about what goes on inside an SSD.

ZeDestructor - Friday, October 16, 2015 - link
I'd like to know how "clueless" I am. The first step to learning is knowing you don't know something, etc.