Another interesting seminar at Stephen Foskett’s Tech Field Day, hosted by Nigel Poulton, addressed the best architecture for an SSD array. Participants included Thomas Isakovich, CEO and founder of Nimbus Data Systems; Umesh Maheshwari, CTO and founder of Nimble Storage; Jonathan Goldick, software CTO at Violin Memory; and Dave Wright, founder and CEO of SolidFire. It was a pretty lively debate, with Goldick grinning broadly through much of it and taking jabs at the others. One couldn’t help but wonder what he was grinning about.
The truth is, there was a lot of disagreement over the best array architecture, and at times the argument got a tad heated and personal, much to the delight of the audience. However, there were three things everyone could agree on. First, the best architecture is one that provides an ideal balance of scalability, shareability, reliability, and performance, not performance alone. Second, for all but the few most performance- and latency-sensitive applications, it’s more important to provide consistent, predictable performance across a range of applications than to provide the absolute best performance. And third, the best architecture is a mix of commodity hardware and a software architecture designed from the ground up for SSD. Sound familiar?
Interesting first question: “Do array vendors have their heads up their rears putting SSD on the network rather than inside the server (not my language)?” The answer: an SSD array brings latency down from roughly 10 to 20 milliseconds to a few hundred microseconds. The extra 50 to 100 microseconds saved by a server-based architecture doesn’t matter for all but the absolute most performance-hungry applications, and it constrains you to a single server. Goldick countered that latency was important, but acknowledged that most applications don’t need the lowest latency.
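To put those figures in perspective, here’s a back-of-the-envelope comparison using the round numbers quoted in the debate (a minimal sketch; real latencies vary widely by workload and hardware):

```python
# Rough latency comparison using the round numbers from the debate
# (illustrative only; actual values depend on workload and hardware).

DISK_ARRAY_US = 15_000       # ~10-20 ms for a spinning-disk array
SSD_ARRAY_US = 300           # a few hundred microseconds over the network
SERVER_SIDE_SAVING_US = 75   # ~50-100 us saved by server-resident flash

disk_to_array = DISK_ARRAY_US / SSD_ARRAY_US
array_to_server = SSD_ARRAY_US / (SSD_ARRAY_US - SERVER_SIDE_SAVING_US)

print(f"Disk array -> networked SSD array: ~{disk_to_array:.0f}x lower latency")
print(f"Networked SSD array -> server-side SSD: ~{array_to_server:.1f}x lower latency")
```

Moving from disk to a networked SSD array cuts latency by roughly 50x; moving the flash into the server buys only about another 1.3x, which is the panel’s point about diminishing returns.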
Then came the question of all-Flash vs. hybrid disk/Flash. Maheshwari argued that a mix of Flash and disk provides a balance of performance and capacity. Wright touted an architecture that can balance capacity and performance for each application, including a mix of high-performance and less costly, lower-performance Flash. What about caching? Goldick dismissed Flash cache for its inability to accelerate writes and its tendency to break down every time a change is made or an application is restarted. Isakovich favored tiering at the application layer to allow for best-of-breed solutions in each tier, rather than an expensive single-tier storage solution.
Next came a question about the impact of inline deduplication on performance. Isakovich was openly skeptical that deduplication could be nondisruptive, pointing out snidely that “none of the vendors who make that claim allow you to test their solutions with dedupe turned off.” Goldick promoted immediate asynchronous deduplication, rather than inline, as the way to minimize disruption. Most of the panelists agreed that solutions should allow dedupe for the applications it benefits and let you turn it off for those it doesn’t.
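For readers less familiar with the mechanics, the performance concern is easy to see in a toy model: inline dedupe adds a fingerprint computation and an index lookup to every write before it can be acknowledged. A minimal sketch of the idea (hypothetical names, fixed-size blocks, an in-memory index; production arrays use far more sophisticated indexing):

```python
import hashlib

BLOCK_SIZE = 4096  # assume fixed-size 4 KiB blocks for illustration

class DedupeStore:
    """Toy content-addressed block store illustrating inline dedupe."""

    def __init__(self):
        self.index = {}     # fingerprint -> stored block
        self.refcount = {}  # fingerprint -> number of logical references

    def write_block(self, data: bytes) -> str:
        # The extra work in the write path: hash the block and look it
        # up before acknowledging the write. This per-IO cost is what
        # the panelists were debating.
        fp = hashlib.sha256(data).hexdigest()
        if fp in self.index:
            self.refcount[fp] += 1  # duplicate: store nothing new
        else:
            self.index[fp] = data   # unique: store the block
            self.refcount[fp] = 1
        return fp

store = DedupeStore()
a = store.write_block(b"x" * BLOCK_SIZE)
b = store.write_block(b"x" * BLOCK_SIZE)  # dedupes against the first write
assert a == b and store.refcount[a] == 2
```

Goldick’s post-process alternative moves the hash-and-lookup step out of the acknowledgment path and runs it asynchronously, trading temporary extra capacity for steadier write latency.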
To achieve nondisruptive dedupe, Wright argued for a scale-out architecture, similar to that of the K2, that has a high ratio of CPU to memory and doesn’t put excessive pressure on a single controller or controller pair. Goldick countered that in a scale-out architecture every controller has to be informed of any deduplication that takes place, so there is an impact on performance.
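One common way scale-out designs sidestep the broadcast problem Goldick describes is to partition the fingerprint index across nodes, so each fingerprint has exactly one owner and a write consults only that node. A minimal sketch of that idea (purely illustrative; no panelist confirmed this is their exact scheme):

```python
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]  # hypothetical cluster

def owner_of(fingerprint: str) -> str:
    # Partition the dedupe index by fingerprint: each fingerprint maps
    # to exactly one node, so a write consults a single owner instead
    # of informing every controller in the cluster.
    return NODES[int(fingerprint, 16) % len(NODES)]

fp = hashlib.sha256(b"some 4 KiB block").hexdigest()
print(f"Fingerprint {fp[:12]}... is owned by {owner_of(fp)}")
```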
As I said, they all agreed that the future belongs to a combination of off-the-shelf hardware and a software architecture geared for SSD. They also agreed that individual disk-format SSDs do not provide the consistent performance users need. Uh, we agree.
Tags: architecture, cache, Dave Wright, deduplication, disk, Flash, hybrid, Jonathan Goldick, Kaminario, latency, Nigel Poulton, Nimble Storage, Nimbus Data Systems, off-the-shelf hardware, scale-out architecture, SolidFire, SSD, SSD array, Stephen Foskett, Tech Field Day, Thomas Isakovich, tiering, Umesh Maheshwari, Violin Memory