2014-06-11

When it comes to numbers on the global IT market, every analyst has an opinion. Forrester pegged global IT spending at $2.1 trillion last year. Meanwhile, Gartner says the market is on track to reach $3.8 trillion in 2014. Despite the variation in these forecasts, everyone agrees that enterprise use of cloud computing technologies is growing the fastest.

Some analysts are predicting that cloud computing will have a multi-trillion dollar impact on the global economy. According to McKinsey & Company, the total economic impact of cloud technology could reach $6.2 trillion annually by 2025.

But how are customers paying for their cloud services in this global economy? How that question is answered can determine a company’s strategic direction. At the heart of the issue is data and the software used to determine not just the cost but the benchmarks needed to make IT an effective provider for the organization.

For the most part, cloud services charge by the hour. That’s like a utility company charging a customer for the size of their house: the billing rate stays the same no matter how efficient the homeowner may be. Better insulation? Turning off all the lights when leaving the house? Sorry, the rate remains the same. More granular billing, based on the actual usage of compute, storage and networking, makes a lot more sense.

Just as data defines the new stack infrastructure, it also drives the change in pricing models for cloud services. And like any data-centric approach, it’s the measurement of the data that matters most.

When a company uses data effectively, it can take a more granular approach and give the customer ways to think through how to manage existing IT resources.

A more granular methodology first requires establishing a standard measure of infrastructure capability, one that comprises an allocation of each of the core resources used by all applications: compute (CPU), memory (RAM), network I/O (Mbps) and storage I/O (IOPS).
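To make that concrete, here is a minimal sketch of what such a standard unit might look like, assuming hypothetical per-unit allocations (the field names and values are illustrative, not drawn from any particular provider):

```python
from dataclasses import dataclass

@dataclass
class ResourceUnit:
    """One standardized unit of infrastructure capability.

    The four dimensions mirror the core resources named above; the
    allocation per unit is hypothetical and would be set by the provider.
    """
    cpu_ghz: float = 1.0         # compute (CPU)
    ram_gb: float = 1.0          # memory (RAM)
    network_mbps: float = 10.0   # network I/O (Mbps)
    storage_iops: float = 100.0  # storage I/O (IOPS)
```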

The first task is to monitor the running workloads for compute, memory, storage and networking, taking measurements on a scheduled basis for each of the four dimensions. Measured every five minutes, the data can show which resources are used the most. Consumption of each resource can then be expressed in units of measurement and averaged on an hourly basis, and a monthly average is computed from the hourly samples for the month.
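A minimal sketch of that sampling-and-averaging step, assuming twelve five-minute samples per hour and illustrative dimension names (nothing here comes from a specific product):

```python
from statistics import mean

DIMENSIONS = ("cpu", "ram", "net", "iops")

def hourly_averages(samples):
    """Roll five-minute samples up into hourly averages per dimension.

    Each sample is a dict such as
    {"cpu": 0.42, "ram": 0.61, "net": 0.10, "iops": 0.35};
    twelve consecutive samples make one hour.
    """
    hours = [samples[i:i + 12] for i in range(0, len(samples), 12)]
    return [{d: mean(s[d] for s in hour) for d in DIMENSIONS} for hour in hours]

def monthly_average(hourly):
    """Compute the monthly average per dimension from the hourly figures."""
    return {d: mean(h[d] for h in hourly) for d in DIMENSIONS}
```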

For example, consider a standard app that has one front-end web server and a back-end database server. Let’s assume that the database is running on a memory-intensive virtual machine and the app server is CPU-intensive.

It’s often the case that a cloud service provider’s bill will be based on the total capacity of the virtual machines allocated to the customer, not on what those machines actually consume.

Charging for the resources most used would be similar to the way utilities bill customers: the greatest amount of usage determines the bill. If the room with the air conditioning is consuming the most electricity, that consumption essentially defines the overall kilowatt-hour cost.
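Under that model, each hour’s charge would be driven by whichever dimension consumes the most standardized units. Here is a hedged sketch of that calculation; the per-unit allocations, rate and sample figures are invented purely for illustration:

```python
def units_consumed(measured, per_unit):
    """Convert raw measurements into standardized units per dimension."""
    return {dim: measured[dim] / per_unit[dim] for dim in measured}

def hourly_charge(measured, per_unit, rate_per_unit_hour):
    """Bill on the dominant dimension: the one consuming the most units."""
    units = units_consumed(measured, per_unit)
    dominant = max(units, key=units.get)
    return dominant, units[dominant] * rate_per_unit_hour

# Hypothetical per-unit allocations: 1 CPU, 1 GB RAM, 10 Mbps, 100 IOPS.
PER_UNIT = {"cpu": 1.0, "ram": 1.0, "net": 10.0, "iops": 100.0}

# One hour of usage for the memory-intensive database VM and the
# CPU-intensive web server from the example above.
db_hour = {"cpu": 1.2, "ram": 6.0, "net": 4.0, "iops": 250.0}
web_hour = {"cpu": 3.5, "ram": 2.0, "net": 20.0, "iops": 80.0}

print(hourly_charge(db_hour, PER_UNIT, 0.05))   # RAM drives the database bill
print(hourly_charge(web_hour, PER_UNIT, 0.05))  # CPU drives the web server bill
```

In this sketch the database VM ends up billed on memory and the web server on CPU, mirroring the two-tier example above.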

Cost is one reason companies use cloud services. But understanding the cost structure of cloud services can determine the strategic direction for how resources are allocated.

The most direct impact is on hardware purchases. Less hardware will need to be purchased, and with that, the budget for the manpower needed to run that hardware can be refocused on more worthwhile projects. The savings can free up IT budgets by 30 percent or more.

Once an effective data measurement system is in place, the customer can form a budget strategy that does not rely on guesswork about how much the resources they use will cost. The information is all there, based on the units of measurement used to track consumption.

This is an example of the software-defined data center. Software monitors the systems, abstracts away the underlying hardware and provides the measurement tools for understanding costs and reallocating hardware budgets.

These new, sophisticated systems allow users to bypass hardware configuration altogether and allow virtualization to run directly on cloud software platforms. In a post-NSA era, there will be an increasing emphasis placed on securing these new cloud environments. By measuring the data, customers will avoid the costs of guessing what to protect and what to manage in increasingly sophisticated technical environments.

Enterprises now have to be able to apply security, monitor threats and remain compliant (and auditable) across existing IT, private clouds, public clouds and hybrid combinations. The explosion in security/compliance capabilities will mirror the growth in cloud demand. How companies manage that data will define these important new practices.

The change in IT spend is upon us, with many more turns ahead. We are already seeing shifts in the software community: as companies become more comfortable with open source, competitive and custom solutions coexist. Solution providers that don’t have a firm hand in the dynamic cloud environment will need to change too, and quickly, if they want to stay afloat in the new world of IT.

This is just the beginning of a new era in software. And data is what defines it all.

Simon Aspinall is president of Service Provider Business and CMO at Virtustream, a sponsor of The New Stack.

Feature image via Creative Commons on Flickr.
