Two interesting takes on storage controller bottlenecks have appeared in the past few months. The first is a late June post entitled I Have Seen the Future of Solid-State Storage, and It’s Scale-Out, in which Network Computing’s Howard Marks discusses SSDs’ impact on storage controller bandwidth. According to Marks, today’s controllers have been more than powerful enough to support current levels of hard disk performance without causing any bottlenecks, even with CPU-intensive business continuity features such as thin provisioning, snapshots, and replication tacked on. Add SSDs, however, and the controller struggles to keep up. Why? Because, as Marks explains, the power needed from the storage controller is a function of IOPS, not disk capacity. With five or ten MLC SSDs delivering the same number of IOPS as 1,000 disk drives, the typical single- or dual-controller architecture of legacy arrays simply won’t cut it. Add business continuity features such as snapshots and you have even more of a problem.
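To see why a handful of SSDs can swamp a controller sized for spinning disks, here is a back-of-the-envelope sketch. The per-drive figures below are our own illustrative assumptions (a typical enterprise HDD delivers on the order of a couple hundred random IOPS, while a single MLC SSD can deliver tens of thousands), not numbers from Marks’ article:

```python
# Assumed per-drive random IOPS figures (illustrative, not from Marks):
HDD_IOPS = 200      # roughly what a 15K RPM enterprise hard disk sustains
SSD_IOPS = 40_000   # a conservative figure for a single MLC SSD

# Total random IOPS a 1,000-spindle hard disk array presents to the controller:
hdd_array_iops = 1_000 * HDD_IOPS

# How many MLC SSDs deliver the same load:
ssds_needed = hdd_array_iops / SSD_IOPS

print(hdd_array_iops)  # 200000 IOPS
print(ssds_needed)     # 5.0 drives
```

Under these assumptions, five SSDs generate the same IOPS load as a thousand spindles, which is exactly the regime Marks says a single- or dual-controller design was never built for.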
Marks points out that today’s SSD vendors have approached this issue in two ways. Pure Storage, Nimble, and other array vendors simply limit the expandability of their arrays to ensure they don’t max out their storage controllers and still have power to spare for business continuity functions. Other SSD vendors, including Kaminario, have chosen instead to implement a scale-out architecture that scales compute power along with storage capacity. The result is more storage, more IOPS, and room to spare for enterprise-level business continuity.
The writing is on the wall, according to Marks. As IT shops start deploying all-SSD arrays, they’ll need to scale not only SSD capacity, but the compute power to support it. That’s why Marks says, “I have seen the future of solid state storage and it’s scale out.”
This is no surprise to us, as you’ve read in our past blogs. But now it turns out that SSDs aren’t the only reason legacy storage controllers have a short future. InfoStor’s Henry Newman points out in his June 18 blog, Disk Drive Performance Becoming Insufficient (catchy title), that legacy controllers can’t even support the latest hard disk arrays. He runs the numbers on current high-performance enterprise SATA drives and finds that 220TB worth of drives could potentially max out the bandwidth of today’s 40 GB per second controllers. In today’s enterprise, 220TB ain’t as much as it used to be. His conclusion: “I think we are going the way of the SMP vendors for the future, and controllers will be designed in clusters of boxes based on commodity hardware.” We agree, of course.
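A rough reconstruction of the kind of math Newman is doing, using assumed per-drive figures of our own (1TB capacity and ~180 MB/s sustained sequential throughput per enterprise SATA drive; only the 220TB and 40 GB/s figures come from the blogs):

```python
# Assumed per-drive figures (illustrative, not Newman's actual numbers):
DRIVE_CAPACITY_TB = 1.0        # enterprise SATA drive capacity
DRIVE_THROUGHPUT_MBPS = 180    # sustained sequential MB/s per drive
CONTROLLER_GBPS = 40           # controller bandwidth cited in the blog

# How many drives make up 220TB, and what they can push in aggregate:
drives = 220 / DRIVE_CAPACITY_TB
aggregate_gbps = drives * DRIVE_THROUGHPUT_MBPS / 1000

print(drives)          # 220.0 drives
print(aggregate_gbps)  # 39.6 GB/s -- right at the controller's limit
```

With numbers in that ballpark, a couple hundred commodity drives saturate the controller on sequential bandwidth alone, before any business continuity features enter the picture.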
Scale-up is dead. Long live scale out.
Tags: business continuity, Clusters, hard disk arrays, hard disk performance, Henry Newman, Howard Marks, InfoStor, IOPS, Kaminario, MLC, Network Computing, performance bottlenecks, replication, SATA, scale out, scale up, snapshots, SSD, SSD array, storage controller, thin provisioning