By Rahul Advani, Director of Flash Products, Enterprise Storage Division, PMC
With the rise of big data applications, such as in-memory analytics and database processing where performance is a key consideration, enterprise Solid-State Drive (SSD) use is growing rapidly. IDC forecasts the enterprise SSD segment to be a $5.5 billion market by 2015¹. In many cases, SSDs are used as the highest tier of a multi-tier storage system, but there is also a trend towards all-SSD storage arrays as price-performance metrics, including dollar per IOP ($/IOP) and dollar per workload ($/workload), make them an attractive option.
Flash-based SSDs are not only growing as a percentage of all enterprise storage; they are also almost always the critical component for ensuring a superior end-user experience through caching or tiering. The one constant constraint on wider use of NAND-based SSDs is cost, so it makes sense that the SSD industry is focused on technology re-use as a means to deliver cost-effective solutions that meet customers’ needs and increase adoption.
Take the Serial Attached SCSI (SAS) market as an example: there are three distinct SSD usage models, commonly measured in Random Fills Per Day (RFPD) over 5 years, i.e. how many times an entire drive can be filled every day for 5 years. Read-intensive workloads sit at 1-3 RFPD, mixed workloads at 5-10 RFPD and write-intensive workloads at 20+ RFPD. Furthermore, different customer bases, such as enterprise and hyperscale datacenters, have different requirements for the application optimizations and scale at which SSDs are used in their infrastructure. These differences typically show up in the number of years of service required, performance, power and sensitivity to corner cases during validation. The dilemma for SSD makers is how to meet these disparate needs while still offering cost-effective solutions to end users.
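To put these endurance classes in perspective, the short sketch below converts an RFPD rating into the total volume of writes a drive must absorb over a five-year service life. The capacity and RFPD values are hypothetical, chosen only for illustration.

```c
#include <stdio.h>

/* Rough endurance estimate: total data written over the service life of a
 * drive rated at a given RFPD (random fills per day). The capacity and RFPD
 * values below are illustrative, not product specifications. */
int main(void)
{
    const double capacity_tb = 1.6;      /* hypothetical drive capacity in TB */
    const double rfpd        = 10.0;     /* mixed-workload class: 5-10 RFPD   */
    const double days        = 5 * 365;  /* five-year service life            */

    double total_writes_tb = capacity_tb * rfpd * days;

    printf("A %.1f TB drive at %.0f RFPD must absorb about %.0f TB "
           "(%.1f PB) of writes over 5 years.\n",
           capacity_tb, rfpd, total_writes_tb, total_writes_tb / 1000.0);
    return 0;
}
```

A write-intensive drive at 20+ RFPD roughly doubles that figure again, which is why the endurance class drives so much of the NAND and over-provisioning choice.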
In enterprise applications, software defined storage has many different definitions and interpretations, from virtualized pools of storage to storage as a service. For this article, we will stick to the application of software and firmware in flash-based SSDs to cost-effectively address applications ranging from cold storage to high-performance SSDs and caching. There are a few key reasons why the industry prefers this approach:
- As the risk and cost associated with controller development have risen, using software to deliver optimizations is not only becoming popular, it is becoming a necessity. A controller development typically amounts to several tens of millions of dollars for the silicon alone, and it often requires several revisions to the silicon, which adds further cost and risk of errors.
- The personnel skillsets required for high-speed design and specific protocol optimizations (SAS or NVMe) are not easy to find. Thus, software-defined flash, using firmware that has traditionally been deployed to address bugs found in the silicon, is increasingly being used to optimize solutions for different usage models in the industry. For example, the firmware and configuration optimizations for PMC’s SAS flash controller described below cost around one-tenth of the silicon development, and those savings show up in the final product cost.
- Product validation costs can also be substantial, and validation cycles long, for enterprise SSDs, so time-to-market solutions leverage silicon and firmware re-use as extensively as feasible.
These optimizations must also span a wide range of SSD configurations (a hypothetical configuration sketch follows this list):
• Different NAND densities and over-provisioning levels
• Different types of NAND (SLC/MLC/TLC) at different nodes
• Different power envelopes (9W and 11W typical for SAS, 25W for PCIe)
• Different amounts of DRAM
• Support for both Toggle and ONFI interfaces, to maintain flexibility in NAND sourcing
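As a rough sketch of how such build-time knobs might be expressed in flexible controller firmware, the structure below collects the dimensions from the list above. The type, field names and example values are assumptions for illustration, not PMC’s actual configuration interface.

```c
/* Illustrative firmware configuration block for a flexible SSD controller.
 * All names and values are hypothetical; a real controller exposes its own
 * configuration interface. */

enum nand_cell_type { NAND_SLC, NAND_MLC, NAND_TLC };
enum nand_interface { NAND_IF_ONFI, NAND_IF_TOGGLE };

struct ssd_config {
    enum nand_cell_type cell_type;        /* SLC / MLC / TLC              */
    enum nand_interface nand_if;          /* ONFI or Toggle               */
    unsigned raw_capacity_gb;             /* total NAND populated         */
    unsigned exported_capacity_gb;        /* user capacity after over-provisioning */
    unsigned dram_mb;                     /* DRAM for maps and buffers    */
    unsigned power_limit_w;               /* e.g. 9 W / 11 W SAS, 25 W PCIe */
};

/* Example: a mixed-workload SAS build with roughly 28% over-provisioning. */
static const struct ssd_config mixed_workload_sas = {
    .cell_type            = NAND_MLC,
    .nand_if              = NAND_IF_TOGGLE,
    .raw_capacity_gb      = 2048,
    .exported_capacity_gb = 1600,
    .dram_mb              = 2048,
    .power_limit_w        = 11,
};
```

A cold-storage build might swap in TLC NAND, shrink the DRAM and over-provisioning, and lower the power limit, while the silicon underneath stays the same.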
The table below shows the many different configurations that PMC’s 12G SAS flash processor supports:
Using a flexibly architected controller, you can modify features including power, flash density, DRAM density, flash type and host interface bandwidth for purpose-built designs based on the same device. This allows you to span the gamut from cold storage (cost-effective but lower performance) to a caching adaptor (premium memory usage and higher performance) through different choices in firmware and memory. The key is that the firmware and hardware be architected flexibly. Here are three common design challenges that can be solved with software defined flash and a flexible SSD processor:
- Protocol communication with the flash devices: Not only does NAND from different vendors (ONFI and Toggle protocols) differ, but even within each vendor’s offerings there can be changes to the protocol. Examples include moving from five to six bytes of addressing, or adding prefix commands ahead of normal commands. Implementing the protocol in firmware provides the flexibility to adapt to these changes, and it also allows flash vendors to design in special access capabilities (see the command-sequencing sketch after this list).
- Inconsistent rules for the order of programming and reading: A firmware-based solution can adapt to variable rules and use different variations of flash, even newer flash that was not available when the hardware was developed. Keeping both the low-level protocol handling and the control of programming and reading in firmware yields a solution flexible enough to use many types and variations of flash.
- Fine-tuning algorithms and product differentiation: Moving up to the higher-level algorithms, such as garbage collection and wear leveling, there are many intricacies in flash. Controlling everything from the low level up to these algorithms in firmware allows them to be fine-tuned to work best with different types of flash, taking advantage of the differences flash vendors build into their products so they can be best leveraged for diverse applications (see the victim-selection sketch below).
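To make the protocol-handling challenge concrete, here is a minimal sketch of a firmware-defined command layer that absorbs vendor differences such as five- versus six-byte addressing and optional prefix commands. The descriptor, helper functions and prefix path are assumptions for illustration; only the 0x00/0x30 sequence reflects the common ONFI-style page read.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical per-part NAND descriptor filled in at attach time. */
struct nand_quirks {
    uint8_t addr_cycles;     /* 5 or 6 address bytes                    */
    bool    needs_prefix;    /* some parts require a vendor prefix cmd  */
    uint8_t prefix_opcode;   /* vendor-specific prefix opcode, if any   */
};

/* Assumed low-level primitives provided by the flash channel hardware. */
void nand_send_cmd(uint8_t opcode);
void nand_send_addr(uint8_t byte);

/* Issue a page-read command, adapting to the part's addressing rules.
 * Because this sequencing lives in firmware, supporting a new part is a
 * descriptor change rather than a silicon revision. */
void nand_read_page(const struct nand_quirks *q, uint64_t row, uint16_t col)
{
    if (q->needs_prefix)
        nand_send_cmd(q->prefix_opcode);

    nand_send_cmd(0x00);                     /* READ, first command cycle  */

    nand_send_addr(col & 0xFF);              /* two column address cycles  */
    nand_send_addr((col >> 8) & 0xFF);

    for (uint8_t i = 0; i < q->addr_cycles - 2; i++)  /* remaining row cycles */
        nand_send_addr((row >> (8 * i)) & 0xFF);

    nand_send_cmd(0x30);                     /* READ, second command cycle */
}
```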
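And to illustrate the fine-tuning challenge, the sketch below shows a simple garbage-collection victim-selection policy in which the weight given to wear is a firmware-tunable parameter per NAND type. The data structure, scoring formula and weighting are hypothetical; production controllers use considerably more sophisticated policies.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical per-block bookkeeping for garbage collection. */
struct block_info {
    uint32_t valid_pages;   /* pages still holding live data   */
    uint32_t erase_count;   /* program/erase cycles consumed   */
};

/* Lower score = better GC victim. weight_wear is the kind of knob firmware
 * can retune per flash type: a higher value spreads wear more aggressively
 * (e.g. for lower-endurance TLC), a lower value favors reclaiming the most
 * free space per erase. */
static size_t pick_gc_victim(const struct block_info *blocks, size_t nblocks,
                             uint32_t weight_wear)
{
    size_t best = 0;
    uint64_t best_score = UINT64_MAX;

    for (size_t i = 0; i < nblocks; i++) {
        uint64_t score = (uint64_t)blocks[i].valid_pages * 100
                       + (uint64_t)blocks[i].erase_count * weight_wear;
        if (score < best_score) {
            best_score = score;
            best = i;
        }
    }
    return best;
}
```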
A flexible architecture that can support software defined flash optimizations is the key to supporting many different usage models, types of NAND and configurations. It also helps reduce cost, which will accelerate the deployment of NAND-based SSDs and ultimately enhance the end-user experience.
Source: 1. IDC Worldwide Solid State Drive 2013-2017 Forecast Update, doc #244353, November 2013.