2016-08-19

Keyword tags:

Cloud Storage

Data Backup

In our previous note we discussed the predicament legacy MSPs are in. For telco carriers and IDC operators to regain their competitive edge in the cloud era, they will have to transform their current assets – their storage architectures. I don’t mean building new cloud-based storage resources, but converting existing legacy storage arrays into cloud.

What does that mean? Essentially, we strip the software away from the hardware, using a software layer to eliminate hardware silos and transform them into a unified pool of resources that supports dynamic allocation. With this approach, data can be migrated and copied from any location to any other, regardless of the underlying storage hardware.
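The pooling idea above can be sketched in a few lines of code. This is a minimal illustration, not any real product's API: the class and method names (`Backend`, `StoragePool`, `create_volume`, `migrate`) are hypothetical, and the placement policy (pick the backend with the most free space) is just one plausible choice.

```python
class Backend:
    """One physical array (any vendor) exposed as raw capacity."""
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

class StoragePool:
    """Software layer: a unified pool over heterogeneous backends."""
    def __init__(self, backends):
        self.backends = backends
        self.volumes = {}  # volume name -> backend it lives on

    def create_volume(self, name, size_gb):
        # Dynamic allocation: place the volume on whichever backend has
        # the most free space, regardless of vendor or hardware generation.
        backend = max(self.backends, key=lambda b: b.capacity_gb - b.used_gb)
        if backend.capacity_gb - backend.used_gb < size_gb:
            raise RuntimeError("pool exhausted")
        backend.used_gb += size_gb
        self.volumes[name] = backend
        return backend.name

    def migrate(self, name, size_gb, target):
        # Because the pool owns placement, data can move between arrays
        # without the application noticing.
        source = self.volumes[name]
        source.used_gb -= size_gb
        target.used_gb += size_gb
        self.volumes[name] = target

pool = StoragePool([Backend("vendor-a-array", 100), Backend("vendor-b-array", 500)])
print(pool.create_volume("db01", 50))  # lands on the emptier vendor-b array
```

The application only ever sees the pool; which vendor's array actually holds the bytes becomes an internal placement decision that can change at any time.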

Actually, this technology is not new. More than a decade ago, before cloud computing was widely promoted as a concept, it was referred to as "storage virtualization". After another round of hype, it’s now more commonly known as "software-defined storage."

Regardless of what it’s called, does this mean reusing these technologies can breathe new life into legacy MSPs?

No. Why?

Most existing storage virtualization products are designed for enterprise customers. Compared to enterprise applications, MSP applications place far higher demands on storage.

Specifically, MSPs look at the following requirements:

1. The storage environment isn’t just bigger – it’s also more complicated.

An MSP supports many different IT architectures for many corporations at the same time. Driven by different business needs, these enterprises have configured their systems differently, with varying hardware and software. MSPs must therefore be able to build a variety of storage systems for their customers, and the total cost of maintenance is the sum of all these different storage environments, making it larger and more complex. At the same time, business considerations prevent MSPs from favoring any one vendor's products and technology. This presents a challenge for both cost and management.

2. Requirements for size and scalability are higher

As mentioned previously, MSPs support multiple companies at once, so it’s paramount that storage can be scaled easily. That means scaling not just storage capacity, but performance as well.

3. Of course, speaking of performance

Performance is a determining factor between MSPs, and indeed, many spare no effort to provide the best possible performance. As more and more MSPs sell their services based on SLAs, higher performance guarantees higher returns, and enterprises are more willing to pay in advance for it. Flash arrays are an essential component of such an architecture, but integrating new flash arrays into an existing disk-array-based infrastructure, boosting performance while controlling cost and complexity, is a big challenge.
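One common way to get flash performance without all-flash cost is tiering: a small flash tier absorbs the hottest blocks while bulk data stays on cheaper disk. The toy sketch below illustrates that idea under made-up names and thresholds; it is not how any particular product implements it.

```python
FLASH_CAPACITY = 2   # how many blocks the (small, expensive) flash tier holds
HOT_THRESHOLD = 3    # accesses before a block counts as "hot"

access_counts = {}   # block id -> number of reads seen
flash_tier = set()   # blocks currently promoted to flash

def read_block(block_id):
    """Serve a block, promoting it to flash once it becomes hot."""
    access_counts[block_id] = access_counts.get(block_id, 0) + 1
    if block_id in flash_tier:
        return "flash"  # hot data: fast path
    if access_counts[block_id] >= HOT_THRESHOLD and len(flash_tier) < FLASH_CAPACITY:
        flash_tier.add(block_id)  # promote: future reads hit flash
    return "disk"       # cold (or just-promoted) data: cheap path
```

The economics follow from the skew of real workloads: if a small fraction of blocks receives most of the reads, a flash tier sized for that fraction captures most of the performance benefit at a fraction of the cost.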

4. MSPs are more sensitive to storage costs

For an MSP, every cost for storage hardware, software, and personnel comes directly out of profits. Price competition is often fierce, which means every penny spent on the back end has to be scrutinized. Compared to other enterprises, MSPs probably feel more pressure to reduce costs, and are therefore often keener to adopt new technologies and products that improve efficiency.

5. CAPEX is great, but OPEX is the ultimate goal

Shifting spending from up-front capital expenditure to operating expenditure lowers the initial investment and allows linear expansion based on business needs, significantly reducing financial risk.

6. Universal architecture

A versatile architecture brings the advantage of scalability, but it also raises the bar for compatibility: the platform must not only run on all mainstream technologies, but also provide the same performance, functionality, and manageability across them.

7. Storage management via SLA

SLAs are an important indicator of an MSP’s service delivery; therefore, the tools an MSP adopts to manage its storage infrastructure need to follow an SLA-driven approach. For example, to guarantee the performance of mission-critical applications, management tools must connect to different hardware devices and provide a unified platform for SLA management.
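In practice, SLA-driven management means every volume carries a service-class target that the platform continuously checks against measured behavior, whatever the underlying vendor hardware. A minimal sketch, with hypothetical class names and latency targets:

```python
# Hypothetical SLA classes; real targets would come from customer contracts.
SLA_POLICIES = {
    "mission-critical": {"max_latency_ms": 5},
    "standard":         {"max_latency_ms": 50},
}

def check_sla(volume_class, measured_latency_ms):
    """Return True if the measured latency meets the volume's SLA target."""
    target = SLA_POLICIES[volume_class]["max_latency_ms"]
    return measured_latency_ms <= target

# A mission-critical volume at 3 ms is within its SLA; at 12 ms the same
# policy flags a violation the platform would act on (e.g. move it to flash).
assert check_sla("mission-critical", 3) is True
assert check_sla("mission-critical", 12) is False
```

The point is that the unit of management becomes the service class, not the array: the same check runs against every device the platform fronts.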

8. Existing architectures need to stand the test of time

The ability to integrate future technologies is particularly important for MSPs, since emerging technologies and architectures are key factors in the competition between them. In addition, considering their own technical capability, time to market, and investment risk, many companies prefer to have MSPs deploy new applications on their behalf.

Put all these requirements together and you get a storage virtualization super-product: design specifications that far exceed enterprise-class requirements, support for emerging technologies, and all at a low cost. Most software-defined storage and virtualization products will struggle to meet these requirements.

So is this just a pipe dream?

For technologies and products from before the cloud computing era, it was nearly impossible. There was no such demand at the time, and flash arrays did not yet exist, so there is a huge gap between what those products deliver and what MSPs require.

Even rapidly expanding startups find it difficult to hit these targets, as they lack experience collaborating and partnering with legacy IT vendors, especially storage vendors, making it difficult to build compatible solutions that meet all the requirements.

Only companies that have accumulated long experience, kept pace with the times, and continued to innovate and improve their products will be able to meet the needs of MSPs.

FalconStor’s FreeStor meets all these requirements. How? And how does it compare to similar offerings on the market? We’ll leave that for the next installment.
