2014-02-25

By William Stevenson

In our previous blog, Who is Winning in the Storage Value Chain, we observed that the enterprise value of legacy array vendors declined last year, while the enterprise value of storage component and flash vendors increased substantially.  We inferred that equity investors appear to be betting that commodity, flash, and cloud vendors will capture value in the storage market at the expense of legacy vendors.

Whether these trends continue depends on whether the new offerings and new vendors can move beyond the initial segments where they have found success and gain broad acceptance in Tier 1 and Tier 2 enterprise storage environments, which are currently the core of the legacy array vendors’ business.

Most of the new vendors are acutely aware of the need to move beyond narrow segments. Fusion-IO saw tremendous growth on the strength of its sales into the data centers of Facebook and Apple (more than 50% of total sales), but then struggled to sustain that growth across the broader enterprise market. Violin and IBM FlashSystem flash arrays are great for accelerating Oracle databases, but where else can they be sold? Nimble has done well selling its hybrid arrays to SMB customers; it remains to be seen whether it can move beyond that.

On the other hand, legacy storage array vendors are acutely aware that the large public cloud data centers long ago moved away from their products. Facebook found that NetApp storage arrays were far too expensive as the number of photos uploaded to its site exploded, so it built its own storage from commodity servers using Fusion-IO flash cards and proprietary software. Google and Microsoft have taken similar approaches. Legacy array vendors are not eager to see this type of architecture move into the enterprise, but a growing number of customers are actively planning to do just that.

Most enterprise storage customers tend to be conservative, and only a few are likely to rip out their existing infrastructure and replace it with a brand new product. The data housed in those storage arrays is a core business asset, and customers want to be assured that they can access it, protect it, and manage it effectively. But the cost of housing that data has been rising, so more customers are opening up to the possibility of weaving new technologies in around the edges, especially if doing so improves overall data management.

So what will be required for flash appliances and software defined storage to move into core Tier 1 and Tier 2 applications?  There are core functionalities that customers expect. The economics need to be disruptive. And most customers prefer that someone else has wrung the bugs out of the product.

5 Must-Have Capabilities for Winning in the Enterprise Storage Market

To move into core Tier 1 and Tier 2 applications, flash appliances and software defined storage will need the following five must-have enterprise storage capabilities, which together deliver disruptive economics and performance.

1. Scale out, highly available block and file architecture

Not limited to 2-node clusters—support for multi-petabyte environments and performance scale out across multiple controllers

Supports geo-distributed workloads, not just DR/hot failover
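
To make “performance scale out across multiple controllers” a bit more concrete, here is a minimal sketch of consistent-hash data placement, one common way scale-out systems spread blocks and their replicas across controller nodes. It is illustrative only, not a description of any particular product; the controller names, virtual-node count, and replica count are all assumptions.

```python
# Minimal sketch of consistent-hash placement across storage controllers.
# Illustrative only: node names, vnode count, and replica count are hypothetical,
# and real scale-out systems add rebalancing, failure handling, and metadata services.
import hashlib
from bisect import bisect_right

class PlacementRing:
    def __init__(self, nodes, vnodes=64, replicas=2):
        self.replicas = replicas
        self.ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes for i in range(vnodes)
        )

    @staticmethod
    def _hash(key):
        return int(hashlib.sha256(key.encode()).hexdigest(), 16)

    def locate(self, block_id):
        """Return the controllers responsible for a block (primary plus replicas)."""
        start = bisect_right(self.ring, (self._hash(block_id), ""))
        owners = []
        for offset in range(len(self.ring)):
            _, node = self.ring[(start + offset) % len(self.ring)]
            if node not in owners:
                owners.append(node)
            if len(owners) == self.replicas:
                break
        return owners

ring = PlacementRing(["ctrl-a", "ctrl-b", "ctrl-c", "ctrl-d"])
print(ring.locate("volume7/block00042"))   # e.g. ['ctrl-c', 'ctrl-a']
```

The point of the ring is that adding a fifth controller moves only a fraction of the blocks onto the new node, so capacity and IOPS grow incrementally instead of requiring a forklift upgrade.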

2. Compelling economics

Hybrid flash and HDD persistent storage volumes—and the flexibility to match performance/cost to specific workloads

Dramatically smaller space and power requirements

Built on commodity hardware

To throw out a few numbers:

< $0.50/IOPS (mixed read/write)

< $2/GB (hybrid flash/HDD deployments)

>10 million IOPS/rack

>1 million IOPS/KW

Most of these specs have already been greatly surpassed, but it will be hard to gain much traction without at least hitting them.
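
To see how those targets translate into hardware, here is a quick back-of-the-envelope check. Every per-node figure below is a hypothetical assumption, not a measurement of any product:

```python
# Back-of-the-envelope check of the target metrics for a hypothetical commodity rack.
# All per-node figures are assumptions for illustration, not vendor measurements.
nodes_per_rack = 20          # 2U hybrid flash/HDD nodes in a rack
iops_per_node  = 600_000     # mixed read/write IOPS per node (assumed)
usable_tb_node = 100         # usable TB per node after RAID/replication (assumed)
watts_per_node = 550         # average power draw per node (assumed)
cost_per_node  = 120_000     # hardware + software price in dollars (assumed)

rack_iops = nodes_per_rack * iops_per_node
rack_kw   = nodes_per_rack * watts_per_node / 1000
rack_gb   = nodes_per_rack * usable_tb_node * 1000
rack_cost = nodes_per_rack * cost_per_node

print(f"IOPS per rack : {rack_iops:,.0f}  (target > 10,000,000)")
print(f"IOPS per kW   : {rack_iops / rack_kw:,.0f}  (target > 1,000,000)")
print(f"$ per IOPS    : {rack_cost / rack_iops:.2f}  (target < $0.50)")
print(f"$ per GB      : {rack_cost / rack_gb:.2f}  (target < $2)")
```

With these assumed numbers the rack clears every threshold, which is the point: on commodity hardware the targets above are a floor, not a stretch goal.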

3. Extensive data protection and efficiency features:

RAID, replication, snaps, clones, dedupe, intelligent data placement, etc.
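
As one illustration of how these efficiency features work under the hood, inline deduplication boils down to fingerprinting each block and storing each unique fingerprint only once. A minimal sketch, with a hypothetical 4 KiB block size:

```python
# Minimal sketch of block-level deduplication: store each unique block once and
# keep per-volume pointers to fingerprints. Hypothetical 4 KiB blocks; a real
# array adds compression, reference counting, and crash-consistent metadata.
import hashlib

BLOCK_SIZE = 4096
block_store = {}      # fingerprint -> block bytes (stored once)
volume_map  = {}      # (volume, logical block number) -> fingerprint

def write_block(volume, lbn, data):
    fingerprint = hashlib.sha256(data).hexdigest()
    block_store.setdefault(fingerprint, data)     # skip the write if already stored
    volume_map[(volume, lbn)] = fingerprint

def read_block(volume, lbn):
    return block_store[volume_map[(volume, lbn)]]

# Two volumes writing identical 4 KiB blocks consume the space of one.
write_block("vol1", 0, b"\x00" * BLOCK_SIZE)
write_block("vol2", 0, b"\x00" * BLOCK_SIZE)
print(len(block_store))   # 1 unique block stored
```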

4. Granular Quality of Service Management:

Guaranteed SLAs for critical workloads without overprovisioning, not just throttling of bandwidth hogs
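
To make “guaranteed SLAs without overprovisioning” concrete: one common approach is a per-volume policy with a guaranteed IOPS floor and a ceiling, enforced by the I/O scheduler rather than by simply throttling whichever workload happens to be noisy. Below is a minimal sketch with hypothetical policy numbers and a deliberately simplified allocation rule:

```python
# Minimal sketch of per-volume QoS: each volume gets a guaranteed IOPS floor and a
# ceiling, and spare capacity is shared afterward. All policy numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class QosPolicy:
    min_iops: int   # guaranteed floor for the volume
    max_iops: int   # ceiling, even when the system is otherwise idle

def allocate(system_iops, demand, policies):
    """Grant each volume its floor first, then share what is left, capped at max."""
    grants = {v: min(demand[v], policies[v].min_iops) for v in demand}
    spare = system_iops - sum(grants.values())
    for v in sorted(demand, key=lambda v: policies[v].max_iops - grants[v]):
        extra = min(demand[v] - grants[v], policies[v].max_iops - grants[v], spare)
        grants[v] += max(extra, 0)
        spare -= max(extra, 0)
    return grants

policies = {"oltp": QosPolicy(50_000, 100_000), "backup": QosPolicy(5_000, 200_000)}
demand   = {"oltp": 80_000, "backup": 500_000}
print(allocate(200_000, demand, policies))
# {'oltp': 80000, 'backup': 120000}: the OLTP volume keeps its SLA even while
# the backup job is hammering the system.
```

Real systems typically re-run this kind of allocation every scheduling interval and often add burst credits, but the shape of the policy is the same: a floor that protects the critical workload and a ceiling that keeps any one volume from starving the rest.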

5. Simplified Management

Ultimately, the customer wants to manage data as a unified process. New solutions need to move the ball toward this goal.

Few, if any, vendors are there yet. It is hard to develop efficient scale out technology quickly if the product was not initially designed to do so, and it will be even harder to create disruptive economics by stuffing flash into an old storage array. Sanbolic has been working on this problem for 12 years now, so check out our capabilities when you get a chance.

Most of the new flash storage system vendors use dedicated storage controllers running on commodity servers, but the flexibility to run on a converged compute/storage architecture will become increasingly important. Nutanix and SimpliVity are examples of this approach. We wouldn’t be surprised to see converged compute/storage become a core element of Cisco’s Unified Computing System, without VCE or FlexPod.

An interesting question is how customers will choose to buy new storage technologies. Most are accustomed to buying a hardware box that they plug in and then spend days or weeks configuring. At a recent Goldman Sachs conference, the CEOs of Nutanix and SimpliVity agreed that customers currently prefer buying an appliance, but that “software only” sales deployed on commodity hardware are likely to grow in the future. Given that storage is really about data architecture, not hardware deployments, will storage become a larger practice area for system integrators over the next few years? That could accelerate adoption in the Tier 1 accounts that legacy vendors are working to protect. In any case, a lot of change is coming.

Previous:

http://sanbolic.com/who-is-winning-in-the-storage-value-chain/

http://sanbolic.com/what-does-ibms-system-x-sale-to-lenovo-mean-for-the-storage-market/
