Utility infrastructure is the logical separation of operating systems and applications from physical servers, storage, processing power, and memory. The goal is to improve capacity utilization and increase the agility of IT. Info-Tech Research Group shares approaches that can help organizations make the most of this strategy.

Also See:
Free Information Technology Standards and Guidelines
Cloud Computing: Five Reasons to Proceed with Caution

Utility infrastructure promises the small enterprise increased agility, efficiency, and opportunity for competitive advantage through the dynamic allocation of IT resources. This is a wholesale change to the way IT operates, and it carries massive implications for the allocation and management of IT assets.

Key Considerations:

1. Utility infrastructure is coming. Given the opportunities for savings and efficiency, Info-Tech believes that a move towards a utility infrastructure-style model is inevitable. The traditional server infrastructure model is inefficient. A typical configuration consists of disparate, single-function servers, each running specific applications and databases. To insulate against capacity shortages, the majority of applications are allocated far more processing power and storage than they need in order to meet peak activity requirements (distributed physical servers typically use only 20% to 40% of their capacity). Mirrored backup servers exacerbate this glut of resource provisioning.
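To see how quickly the waste adds up, the utilization figures above can be rolled up across a fleet of single-function servers. A minimal Python sketch, with all per-server figures invented purely for illustration:

```python
# Sketch: aggregate utilization across disparate single-function servers.
# All capacity and load figures below are hypothetical, not measurements
# from any specific environment.

servers = {
    "email":    {"capacity_ghz": 8.0,  "avg_load_ghz": 2.0},  # 25% utilized
    "database": {"capacity_ghz": 16.0, "avg_load_ghz": 5.6},  # 35% utilized
    "web":      {"capacity_ghz": 8.0,  "avg_load_ghz": 1.6},  # 20% utilized
}

total_capacity = sum(s["capacity_ghz"] for s in servers.values())
total_load = sum(s["avg_load_ghz"] for s in servers.values())
utilization = total_load / total_capacity

# Well under half of the provisioned capacity is doing useful work.
print(f"Fleet utilization: {utilization:.0%}")
```

Even with each server sized for its own peak, the fleet as a whole sits mostly idle, which is the gap a pooled utility model aims to close.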

The server infrastructure is inflexible. Capacity upgrades require manual re-provisioning and application downtime while servers or storage are added.

The server infrastructure is opaque. Few IT managers have a clear view of how spending on IT resources aligns with specific business initiatives.

2. Someone has to pay. Moving to a utility infrastructure will mean re-organizing the delivery of IT into centralized systems and services. Any resulting costs for additional resources and ongoing services (such as virtualization software, storage management, and dashboards) will have to be absorbed and managed by either IT or the business units.

Scenario 1: IT pays for the computing infrastructure. If all small enterprise servers were previously accounted for as part of IT's budget, this will not be a huge leap. The project will only require approval on the IT budget from the CFO, CEO, or other senior executive responsible for overseeing IT spend. In this case, senior management must be on board, and individual business units could remain unaffected (depending on how budget approval and oversight is managed). However, if disparate servers were previously owned by business units and included as a part of departmental budgets, the shift to a utility infrastructure will create a surplus on the departmental side and a deficit on the IT side that must be accounted for.

Scenario 2: Business units pay for computing based on a utility chargeback model. In this case, usage of IT resources by business units is tracked by IT managers. Each business unit pays for those resources that they use via transfer payments to the IT department. Business leaders will need a say in how this system is structured because their costs may increase in order to finance this new infrastructure.
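The chargeback mechanics in Scenario 2 can be sketched in a few lines of Python. The unit names, metrics, and rates below are hypothetical examples, not recommended prices:

```python
# Sketch of a usage-based chargeback: each business unit pays IT a
# transfer payment in proportion to the resources it consumed.
# Rates and usage figures are placeholder values for illustration.

RATES = {"cpu_hours": 1.00, "storage_gb_month": 0.10}  # $ per unit

usage = {
    "sales":   {"cpu_hours": 400, "storage_gb_month": 200},
    "finance": {"cpu_hours": 150, "storage_gb_month": 500},
}

def chargeback(unit_usage):
    """Transfer payment owed to IT for one business unit's consumption."""
    return sum(RATES[metric] * amount for metric, amount in unit_usage.items())

for unit, consumed in usage.items():
    print(f"{unit}: ${chargeback(consumed):,.2f}")
```

Because business-unit costs now vary with the rate card, business leaders will understandably want a say in how `RATES` is set.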

Next: Streamlining resources to improve capacity

{mospagebreak title=Streamlining Resources}

3. Procurement for new resources is streamlined. Small enterprises will continue to move to more commoditized hardware. Purchasing decisions will be made to increase the capacity of the overall system, not to facilitate specific applications. In this sense, the small enterprise should aim to replace proprietary systems with commodity processors, memory, and storage. Utility infrastructure will run on low-cost, modular servers such as industry-standard x86 servers or blades that can scale to meet overall demand.

4. If IT pays for the computing infrastructure. The role of IT changes from purchasing and installing new server boxes based on application need to monitoring the capacity usage of the overall system and bringing new processors on and offline as required. If processing and storage resources are hosted in a "cloud," this could literally mean scaling capacity by "flipping a switch." For more information, refer to the ITA Premium Small Enterprise research note, "Cloud Computing: Five Reasons to Proceed with Caution".

If business units pay for computing based on a utility chargeback model. Responsibilities must be defined to address the amount of involvement from IT. Specifically, is it up to business units to monitor and manage their usage and request services as needed, or does IT provision resources based on demand and charge departments accordingly? The question of who can authorize the scaling up of resources must be addressed, especially if the enterprise is on a platform that cannot be turned off or scaled downward. In terms of asset lifecycle planning, server life may be extended because additional capacity and redundancy can be added to reduce the risks associated with older machines.

5. Scaling down is difficult. While utility infrastructure provides agility and upward scalability, it is not a "true utility" like water or electricity because reducing capacity (and cost) when demand falls is not always an option. Some outsourcing vendors, such as IBM, HP, and Sun, allow this type of functionality for hosted computing services. For example, vendors offer large-scale platforms as a base machine, but additional CPUs and associated software licensing can be "turned on" when needed. However, on other platforms, additional resources can be brought online but, once on, cannot be turned off (from a billing standpoint).

The latter scenario is more indicative of what happens when the small enterprise owns the resources outright: processors and storage can be bought to increase capacity, but from an asset perspective, removing resources is difficult. They can be turned off to reduce power consumption; however, the asset remains on the enterprise's balance sheet. As such, the IT department must clarify with management who "owns" excess capacity. Is it an IT cost, or one distributed across the enterprise? How will this affect chargeback models?

6. Licensing models will become more complex. License management will become more onerous. Software vendors have been forced to change their licensing strategies to reflect hardware changes such as multi-core processors and virtualization. However, this is a customer-driven change in the industry, so the market will ultimately determine fair value. If vendors set exorbitant per-processor/per-virtualized-instance licensing rates, for example, the market will respond by seeking alternatives. Ultimately, small enterprises can expect licensing rates to be rebalanced until customers once again feel the value equation works in their favor.

Regardless of pricing, licensing complexity will increase. In order to maintain compliance, managers will now have to keep track of multiple licensing schemes with different rules. Consider the following examples:

IBM has introduced an alternative to per-processor licensing with its Processor Value Unit (PVU) model. The new licensing scheme attempts to charge customers based on processor performance rather than on how many physical machines are used to accomplish the task.

Similarly, Microsoft allows for up to four virtual instances of Windows Server 2008 Enterprise Edition to run on a single physical machine. The Windows Server 2008 Data Center Edition allows for an unlimited number of instances on a single machine. This, however, still does not account for a completely virtualized infrastructure where physical machines are clustered and individual boxes become irrelevant.
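The two licensing schemes above translate into different counting rules. A hedged Python sketch of each; the PVU-per-core rating and the mapping of VMs to licenses are placeholder simplifications, so consult the vendors' actual terms before relying on any figure:

```python
# Sketch of the two license-counting rules described above.
# The PVU rating of 70 per core is a placeholder; IBM publishes
# per-processor-family tables that determine the real rating.
import math

def pvus_required(cores, pvu_per_core=70):
    """IBM-style PVU model: charge scales with rated processor capacity,
    not with the number of physical machines used."""
    return cores * pvu_per_core

def enterprise_licenses(vm_count, vms_per_license=4):
    """Windows Server 2008 Enterprise-style rule: up to four virtual
    instances run under a single license, so licenses round up."""
    return math.ceil(vm_count / vms_per_license)

print(pvus_required(cores=8))            # 560 PVUs for an 8-core host
print(enterprise_licenses(vm_count=10))  # 3 licenses for 10 VMs
```

Tracking several such schemes side by side, each with its own counting rule, is exactly the compliance burden the section describes.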

Next: Recommendations for Implementing a Utility Infrastructure

{mospagebreak title=Recommendations}


Although the utility infrastructure model is far from mature, enterprises can prepare in advance by taking the first steps toward delivering it. Consider the following:

1. Adopt a business-value approach to IT services. The three main benefits of a utility infrastructure are 1) better utilization of IT assets and available capacity, 2) the ability of IT to dynamically direct and scale computing resources to meet the rapidly changing needs of the business, and 3) the accounting and control benefits of being able to allocate IT assets directly to specific business departments or projects, thus linking IT spending to the success of business initiatives.

2. Inventory current server deployments. Many small enterprises do not have a clearly documented picture of where all of their servers are located, who controls them, which applications are running on which servers across the entire enterprise, or how various server and application resources interact with each other. Gathering this information and gaining control over how capacity is deployed is a necessary first step towards consolidation and moving to a utility infrastructure. Decision makers also need dashboards that consolidate usage, capacity, and availability data from multiple servers and storage devices.
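A minimal inventory record, and the kind of roll-up a dashboard would present, can be sketched as follows. The fields and figures are hypothetical, chosen only to show the shape of the data:

```python
# Sketch of a server inventory and a dashboard-style consolidation of
# usage, capacity, and availability. All records are invented examples.

inventory = [
    {"host": "srv-01", "owner": "finance", "app": "ERP",
     "capacity_gb": 500, "used_gb": 350, "uptime_pct": 99.9},
    {"host": "srv-02", "owner": "sales", "app": "CRM",
     "capacity_gb": 250, "used_gb": 60, "uptime_pct": 99.5},
]

def rollup(servers):
    """Consolidate per-server data into the figures a dashboard would show."""
    cap = sum(s["capacity_gb"] for s in servers)
    used = sum(s["used_gb"] for s in servers)
    return {
        "total_capacity_gb": cap,
        "total_used_gb": used,
        "utilization_pct": round(100.0 * used / cap, 1),
        "min_uptime_pct": min(s["uptime_pct"] for s in servers),
    }

print(rollup(inventory))
```

Even this simple roll-up answers the questions the paragraph raises: where capacity sits, who controls it, and how much of it is actually used.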

3. Prepare for IT chargebacks. Adopting a chargeback model will involve changing internal billing systems to charge for IT resources consumed. Even if this is not on the immediate horizon, IT departments in small enterprises should nevertheless seek to control costs and gain visibility into departmental usage of resources by associating assets with line-of-business applications and services.

Make services measurable. The first step is breaking computing into measurable processing and storage units. Look to vendors such as IBM and Sun for examples of how this is done in the industry. For example, most IBM middleware is now licensed using Processor Value Units (PVUs). Similarly, Sun's grid computing utility is priced simply at $1 per CPU-hour. In addition to identifying the units of measurement, IT managers will also have to implement mechanisms to track and monitor usage and enforce limits.
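Metering in CPU-hours, the unit cited above, reduces to very simple arithmetic. A sketch modeled loosely on flat per-CPU-hour utility pricing; the job sizes are hypothetical:

```python
# Sketch of per-job metering in CPU-hours, billed at a flat utility rate
# (loosely modeled on the $1 per CPU-hour grid pricing cited above).

RATE_PER_CPU_HOUR = 1.00  # $ per CPU-hour; illustrative rate

def job_cost(cpus, hours, rate=RATE_PER_CPU_HOUR):
    """Cost of one job = CPUs used x wall-clock hours x rate."""
    return cpus * hours * rate

# A 16-CPU batch job running for 3 hours:
print(job_cost(cpus=16, hours=3))  # 48.0 -> $48.00
```

The hard part in practice is not the arithmetic but the tracking and enforcement mechanisms the paragraph calls for.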

Offer a menu of options. IT departments can also offer tiered pricing based on different Service Level Agreements (SLAs). For example, certain applications might require higher levels of availability or performance than others. Similarly, users may demand different backup and security mechanisms depending on the sensitivity of the data being accessed.

Empower business units. A shortcoming of single-tiered models is that they tend to cater to the highest common denominator, leaving many users paying more than they need to. In this case, complaints by departments claiming "we could do it cheaper ourselves" may very well be true. Allowing business unit customers to select the lowest-cost services that meet their minimum requirements would help resolve this problem.
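The tiered menu and lowest-cost selection described in the two points above can be sketched together. Tier names, prices, and availability levels are all hypothetical:

```python
# Sketch of a tiered SLA menu: a business unit picks the cheapest tier
# that still meets its minimum availability requirement. All tier
# definitions below are invented for illustration.

TIERS = {
    "bronze": {"availability": 0.99,  "price_per_gb_month": 0.05},
    "silver": {"availability": 0.995, "price_per_gb_month": 0.10},
    "gold":   {"availability": 0.999, "price_per_gb_month": 0.25},
}

def cheapest_tier(min_availability):
    """Return the lowest-cost tier meeting the stated SLA floor."""
    eligible = [
        (t["price_per_gb_month"], name)
        for name, t in TIERS.items()
        if t["availability"] >= min_availability
    ]
    if not eligible:
        raise ValueError("no tier meets the requirement")
    return min(eligible)[1]

print(cheapest_tier(0.995))  # silver: meets the SLA without paying for gold
```

A department that only needs 99.5% availability pays the silver rate rather than subsidizing gold-level service it never asked for.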

Help set requirements. IT departments will have to work closely with business units to set minimum requirements to ensure that business leaders are not cutting costs to the point of compromising their ability to deliver value (i.e. by risking performance degradation).

Clarify options. Providing users with a price/performance analysis for various levels of service would also help business units in enterprises understand the value of IT resources and the tradeoffs required.

Determine accounting rules for excess capacity. Since IT has limited ability to reduce the total capacity of the IT infrastructure once additional resources are brought online, there must be a mechanism to account for this excess when changes to business requirements cause demands to drop. There must be an understanding with senior management that this cost is separate from the IT budget. Either the cost will have to be spread across the business units, or IT should be permitted to create a "profit" to offset this cost.
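One of the two accounting options above, spreading the excess-capacity cost across business units, can be sketched as a pro-rata allocation. All figures are hypothetical:

```python
# Sketch: spreading the cost of stranded excess capacity across business
# units in proportion to their share of current usage. Figures invented.

def allocate_excess(excess_cost, usage_by_unit):
    """Distribute a fixed excess-capacity cost pro rata by usage."""
    total = sum(usage_by_unit.values())
    return {unit: excess_cost * u / total for unit, u in usage_by_unit.items()}

shares = allocate_excess(12000.0, {"sales": 300, "finance": 100, "ops": 200})
print(shares)  # sales carries half the cost, matching its half of usage
```

Whichever mechanism is chosen, the key is that it is agreed with senior management in advance, before demand drops and the stranded cost appears.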

4. Standardize the infrastructure. Adopting standard configurations and approved purchase lists for the IT infrastructure will help move the small enterprise towards a more hardware-agnostic view of computing where processing power and storage space are simply commodity items. A single platform is also desirable. If independent departmental initiatives have resulted in multiple data repositories, duplication of applications, and rogue servers, enterprise-wide standards will greatly reduce complexity and costs. A standardized infrastructure will also make service levels more predictable and reduce maintenance and service costs. This is not an initiative that will take place overnight. Standardization is an incremental process that may take several years as servers move through their asset lifecycles. For assistance with setting up IT standards, refer to the Info-Tech Advisor template, "Information Technology Standards and Guidelines."

5. Virtualize servers where possible. Server consolidation and virtualization improve server capacity by allowing multiple virtual servers to exist within a single physical box or cluster/grid of servers. Virtualization also allows new physical resources (i.e. servers or storage) to be added with minimal disruption. As an example, software from Virtual Iron can migrate a virtual server from one physical server to another, thus allowing IT managers to bring new hardware on or offline without any disruption to applications.

6. Identify shortcomings in the control and scalability of the networked storage environment. Small enterprises should seek two critical components in order to deliver utility storage capabilities:

Storage Resource Management (SRM) tools. While these applications are required for large enterprises with heterogeneous storage environments, smaller enterprises that deal with a single-vendor storage solution will be able to obtain similar functionality using proprietary tools. Dedicated SRM vendors focus on asset management, storage utilization, workflow management, and automated policy enforcement to prevent unwanted and redundant data from being introduced into the system. SRM can also help monitor SLAs with business users, identify wasted storage space, and aid in storage provisioning. However, the major shortcoming of SRM offerings is that they lack industry-wide standards for interoperability (in heterogeneous environments).

Currently, the Storage Networking Industry Association (SNIA) is working to develop and standardize interoperable storage management technologies.

Predictive change management for SANs. A major component of a utility infrastructure is the ability to accurately and dynamically make changes to the storage environment. Unfortunately, many tasks, such as adding a server, a switch, or a redundant path between devices, must still be performed manually. If IT agility is to be gained, change management is needed so that enterprises can maintain application availability while applying SAN changes, improve SAN management practices, and support the growth of the storage infrastructure. Predictive change management software is still in its infancy; however, IT departments with an eye towards utility infrastructure can get ahead of the game by developing internal frameworks for SAN changes and vetting change processes to identify best practices.

7. Start small. Small enterprises cannot buy a utility infrastructure; they must build one. Begin with one application on a cluster or grid of low-cost servers and incrementally move more applications to this group. Then, as additional capacity is required, gradually consolidate more databases and servers onto the cluster/grid.

8. Consider power systems. Small enterprises will want to plan for a modular system that can grow with capacity requirements, including the need for high-density power in the case of racks and blades. Also investigate scalable Uninterruptible Power Supplies (UPS) for disaster recovery purposes. To accommodate chargebacks, managers will also want to look into power metering for a more accurate picture of overhead costs. Vendors such as IBM and HP have power management tools for their blade systems.

Bottom Line

Small enterprises will find that moving to a utility infrastructure will not happen overnight. It will require a long-term, phased approach with significant challenges along the way.