2013-10-10

In 2012, Gartner predicted that enterprise adoption of virtualization would increase 14 percent by the end of the year. The trend toward virtual environments and the accompanying technologies shows no signs of cooling off and is predicted to keep growing at a remarkable rate. As it becomes the new industry standard, virtualization is having a substantial impact on data center architecture and management.

There has been an explosion in the development and use of virtual machines (VMs) as data demands continue to grow. In a virtual environment, a software program simulates the functions of physical hardware, enabling new levels of hardware utilization, flexibility and cost savings. Because virtualization lets organizations run many applications at once on shared hardware, it also creates demand for unprecedented amounts of storage. This surge in demand warrants a fresh approach to storage: specifically, a solution that offers effective management, flexibility and efficiency.

The Benefits of Virtualization

Enterprises have realized a number of benefits from virtualizing their servers, chief among them cost savings and flexibility. Virtualization enables organizations to make more efficient use of the data center's hardware. Much of the time, physical servers in a data center are simply idling; by running virtual servers on that hardware, an organization can put its central processing units (CPUs) and the rest of its hardware to fuller use, capturing virtualization's cost efficiencies.

Virtualization also allows for increased flexibility. It lets organizations reduce the number of physical machines in their infrastructure by moving workloads to virtual machines. If an organization decides to change hardware, the data center administrator can simply move the virtual server to the newer, more advanced hardware, achieving better performance at lower cost. Before virtual servers, administrators had to install the new server and then reinstall and migrate all the data stored on the old one, a far more convoluted process. Moving a virtual machine is considerably simpler than moving a physical machine.

Virtualizing at Scale

Virtualization's rise in popularity is widespread, but demand has spiked most sharply among data centers hosting a large number of servers, somewhere in the range of 20–50 or more. These organizations can realize the cost-efficiency and flexibility benefits described above to a considerable degree. Moreover, servers are far easier to manage once virtualized. The sheer physical challenge of administering many physical servers can become arduous for data center staff; virtualization enables administrators to run the same number of servers on fewer physical machines, simplifying data center management.

Keeping Pace With Demand

For all its benefits, the increasing adoption of virtual servers is placing stress on traditional data center infrastructure and storage devices.

In a way, the problem stems directly from the popularity of virtual machines. The original VM models used local storage in the physical server, making it impossible for administrators to move a virtual machine from one physical server to another with a more powerful CPU. Introducing shared storage, either network-attached storage (NAS) or a storage-area network (SAN), to the VM hosts solved this problem and made it possible to stack many virtual machines on each host. This configuration eventually evolved into today's server-virtualization scenario, in which all physical servers and VMs are connected to a unified storage infrastructure.
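To see why shared storage changes the economics of a move, consider a minimal sketch; the classes and the migrate helper below are illustrative assumptions, not any real hypervisor's API. Because the VM's disk image lives on the NAS or SAN rather than on the host, migrating the VM amounts to reassigning which host runs it:

```python
# Minimal sketch of why shared storage makes VM moves cheap.
# VirtualMachine and migrate() are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class VirtualMachine:
    name: str
    disk_path: str   # lives on shared NAS/SAN storage, not on the host
    host: str        # the physical server currently running the VM

def migrate(vm: VirtualMachine, new_host: str) -> None:
    # With local storage, the disk would have to be copied to new_host first.
    # With shared storage, the disk stays put; only the assignment changes.
    vm.host = new_host

vm = VirtualMachine("web-01", "/san/volumes/web-01.img", host="server-a")
migrate(vm, "server-b")       # e.g., move to a host with a faster CPU
print(vm.host, vm.disk_path)  # disk path is unchanged by the move
```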

The drawback to this approach? Data congestion.

A single point of entry can quickly become a point of failure. Because all traffic moves through one access point, data gets gridlocked during episodes of excessive demand. Considering that the number of VMs and the volume of data are only expected to increase, it is clear that storage architecture must improve. Infrastructure must keep up with the pace that data growth has set.

Proceeding With Caution

Organizations converting their data centers to virtualization will all face these growing pains. Early adopters of virtualized servers have already experienced the problems associated with single entry points and are working to mitigate their impact.

Fortunately, there is hope for organizations looking to maximize the benefits of virtualization: they can avoid the data congestion of traditional scale-out environments by eliminating the single point of entry. Today's NAS and SAN storage solutions inevitably funnel data through a single access point, which causes congestion during periods of heightened demand. Instead, organizations should opt for a solution that has several entry points and distributes data uniformly across all servers. Such a system can maintain strong performance and low latency even when many users access it at once.
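As a rough illustration of this idea, the sketch below uses hash-based placement to map objects to storage nodes. The node names and the place function are assumptions for illustration, not any particular vendor's design. Because placement is computed from the key alone, every node can act as an entry point and give the same answer, so no single gateway has to mediate traffic:

```python
# Illustrative sketch of multiple entry points with uniform data placement.
# Node names and place() are hypothetical, not a real product's scheme.
import hashlib

NODES = ["node-01", "node-02", "node-03", "node-04"]

def place(key: str, nodes: list[str]) -> str:
    """Map an object key to a storage node by hashing the key.

    Placement depends only on the key, so every node (entry point)
    computes the same answer; no central gateway is needed.
    """
    digest = hashlib.md5(key.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

# Any client can ask any node where an object lives:
for key in ["vm-disk-007", "vm-disk-008", "vm-disk-009"]:
    print(key, "->", place(key, NODES))
```

A hash spreads keys roughly evenly, which is what keeps load uniform across servers; production systems typically use consistent hashing so that adding a node reshuffles only a fraction of the keys.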

Currently, this is the most direct solution, but the next generation of storage infrastructure has some intriguing new alternatives to offer.

Computing and Storage Integration

The next generation of storage infrastructure has introduced a new strategy to combat the storage challenge of scale-out virtual environments. The new approach involves running VMs inside the storage nodes themselves (or running the storage inside the VM hosts), turning each storage node into a compute node.

This approach essentially flattens the entire infrastructure. Traditionally, an organization using shared storage in a SAN runs its VM hosts as a layer above the storage, with all traffic funneled into a unified, single-entry storage system. To solve the data-gridlock problems associated with that design, many organizations are moving away from the traditional two-layer architecture toward a single layer in which both the virtual machines and the storage run on the same machines.
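A hypothetical sketch of such a flattened node might look like the following; the ConvergedNode class and its methods are illustrative only. The point is that a VM and the slice of the storage pool it uses live on the same machine, so reads need not traverse a separate storage layer:

```python
# Hypothetical sketch of a node that is both a storage node and a compute
# node; the class and method names are illustrative assumptions.
class ConvergedNode:
    def __init__(self, name: str):
        self.name = name
        self.objects: dict[str, bytes] = {}  # this node's slice of the pool
        self.vms: list[str] = []             # VMs running on this node

    def store(self, key: str, data: bytes) -> None:
        self.objects[key] = data

    def run_vm(self, vm_name: str) -> None:
        # The VM reads its disk image from self.objects directly,
        # with no hop through a separate storage layer or single gateway.
        self.vms.append(vm_name)

node = ConvergedNode("node-01")
node.store("web-01.img", b"...disk image bytes...")
node.run_vm("web-01")
print(node.vms, list(node.objects))
```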

Moving On

Despite the challenges faced during its early developmental stages, virtualization has proven to be quite successful. The flexibility, efficiency and cost savings that accompany infrastructure virtualization have made a lasting impression on enterprises. If organizations continue to learn from the mistakes of those who came before them, they will be able to build scale-out virtual environments that improve performance while cutting infrastructure costs.


About the Author

Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, Stefan has designed and built numerous enterprise-scale data storage solutions intended to be cost-effective for storing huge data sets. From 2004 to 2010, Stefan worked in this field for Storegate, the wide-reaching Internet-based storage solution for consumer and business markets with the highest possible availability and scalability requirements. Previously, he worked on system and software architecture on several projects for Swedish giant Ericsson, the world-leading provider of telecommunications equipment and services to mobile- and fixed-network operators.

