Why Wachovia Is Banking On Virtualization

By Mel Duvall


Three years ago, frustration over the length of time it took to roll out new applications and services started Wachovia down a path to seek out ways to reduce IT complexity.


With the backing of senior management, the IT team at the nation's fourth largest bank has experimented with a number of leading-edge technologies and concepts, such as Web services, virtualization, and processor arrays, to achieve its vision of a more responsive, flexible, and energy-efficient IT infrastructure.


Now, as the bank prepares to move into its new headquarters in downtown Charlotte, N.C., in the summer of 2009, it is using the opportunity to put many of the lessons learned into action. Among the most promising is a computing platform that will make use of a combination of virtualization technologies to deliver better performance to the bank's traders and simplify the process of upgrading or repairing servers.



"We've tried to push a number of things forward and to be more innovative so that ultimately we can compete better," says Jacob Hall, head of platform design and data center technology for Wachovia's investment banking group. "You don't get there overnight—it's an evolution. But we think we're headed in the right direction."


The push in the right direction came one day as Hall was venting frustration with his boss, chief technology officer Mark Cates, over the length of time it took to roll out new applications at the bank. Hall wanted to deploy an event messaging service that would give employees with a common area of interest, such as users of a particular managed service, a simple means to share information, be notified of changes, and post questions. At the time, employees were doing this through a hodgepodge of technologies, including email lists and collaboration software, but Hall wanted a single standardized method.


"I asked Mark, 'When is it worthwhile to adopt a new technology?' because I saw lots of groups adopting new technologies that competed with one another, essentially providing the same functionality," recalls Hall. "You know—we would have 15 different reporting tools basically providing the same reporting capability."


Cates' reply served as a catalyst for Hall in his drive to reduce IT complexity at the bank. "He said, 'Jake, if you bring in one thing and remove nothing, there's really no value. If you bring in one thing and remove another, there may be some value. But, if you can bring in one thing and remove two things, there may be significant value for us,'" recalls Hall.

"That really focused our attention on the fact that we have so many opportunities to remove two or more things and replace them with one."



That simple premise has coalesced into a three-pronged strategy largely enabled by virtualization technologies:


  • The first is to provision first. Rather than order new hardware or software, Wachovia first tries to reuse existing infrastructure or to reclaim spare capacity through virtualization technologies.
  • The second aim is to achieve high availability by default. Instead of purchasing backup and continuity technology, Wachovia now aims to build redundancy into its architecture.
  • The third target is to be Green by Design. Hall says by provisioning first, by achieving high availability by default, and by using virtualization technologies to ensure hardware is being more fully utilized, Wachovia can save power through design.

Creating a Data Center "MapQuest"


In order to meet the first goal of the three-pronged strategy, provision first, Wachovia needed to get a better handle on exactly what software and hardware were deployed in its data centers, and how those systems interacted with and supported the business. The company had emerged from a period of mergers and acquisitions, including the acquisition of Prudential Securities in 2003, and had adopted a number of different trading systems in the process. Wachovia now has more than 50 data centers worldwide supporting the investment banking group, and a much larger number of buildings with software and systems to support. In all, Hall estimates there are some 4,500 such locations, spanning Asia, London, South Africa, and the U.S.


Starting in early 2006, Wachovia began a series of architecture reviews to determine exactly which servers supported which applications and how those applications supported the business. The technology team began by writing simple Perl scripts that would run on a machine and capture information about its function: the programming language in use, the processes that were running, which network connections were active, and which other applications the machine was interacting with.
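The article doesn't show the bank's scripts, but the kind of per-machine probe it describes can be sketched in a few lines. This is a hypothetical Python illustration (Wachovia's originals were Perl); the record layout, hostname, and sample netstat line are invented for the example.

```python
# Hypothetical sketch of an inventory probe: parse netstat-style output to
# learn what a host talks to, then fold it into one record per machine.

def parse_connections(netstat_lines):
    """Turn `netstat -tn`-style lines into (local_port, remote_host) pairs."""
    deps = []
    for line in netstat_lines:
        parts = line.split()
        if len(parts) < 5 or parts[0] != "tcp":
            continue  # skip headers and non-TCP rows
        local, remote = parts[3], parts[4]
        local_port = int(local.rsplit(":", 1)[1])
        remote_host = remote.rsplit(":", 1)[0]
        deps.append((local_port, remote_host))
    return deps

def summarize(hostname, processes, netstat_lines):
    """One inventory record per host: what runs on it, what it depends on."""
    return {
        "host": hostname,
        "processes": sorted(set(processes)),
        "talks_to": sorted({h for _, h in parse_connections(netstat_lines)}),
    }

record = summarize(
    "eqt-trade-01",  # invented hostname
    ["java", "java", "perl"],
    ["tcp 0 0 10.1.2.3:7070 10.9.9.9:1521 ESTABLISHED"],
)
print(record["talks_to"])  # ['10.9.9.9']
```

Aggregating such records across machines is what makes the later dependency mapping and visualization possible.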



While the home-grown querying application worked relatively well, the technology team began to explore whether a commercial alternative could perform the same task without the need to write new scripts, and perhaps provide more detail. At the same time, Hall says, he and his team began talking about how much better it would be if they had a visual map of the data center, showing all of the hardware and software components and how they connected with one another. Essentially, a data center MapQuest.


The first piece of the puzzle was filled by a software package from Tideway Systems of London, called Foundation. Foundation automatically maps business applications to the underlying physical infrastructure, including the dependencies between applications and systems. Hall says one bonus of the Tideway product over Wachovia's home-grown application is that it provides a modeling tool: it can take an equities trading platform, for example, and visually model the various servers and components that support the platform.



Next, Wachovia formed a partnership with the University of North Carolina and IntePoint, a Charlotte-based provider of simulation and visualization software, to help it create the 3-D map of its data center operations. That project has been underway for about a year, and Hall says almost all of the servers, applications, and people connected to those applications on the investment side of the bank have been mapped. In addition, physical assets on the map, such as servers and storage devices, have been colorized based on how much of their capacity is being utilized and their peak power consumption.


"Now we can see power usage by building and by application, so it's easier to visualize which machines we can eliminate to improve performance," says Hall.


In addition, it allows Wachovia to better determine how it can make use of existing capacity through such technologies as server virtualization. And that leads to the goal of provision first. "Rather than ordering first, we've learned to provision to the virtual or existing infrastructure first and only buy hardware if we need it," says Hall.



High Availability


Virtualization technologies are also playing a key role in Wachovia's second strategy, achieving high availability by default.


Hall says that while Wachovia is adopting machine virtualization technologies such as VMware, it also wanted to look at other ways virtualization could have an impact—from the number of drivers on a machine, to the number of cards, to the amount of change involved in adopting a new process or technology.


As a result, the bank began experimenting with building "processor arrays," essentially dense blade racks with no local storage and no directly connected input/output (I/O) devices (such as Ethernet or fibre channel cards, hard drives, etc.). To make this happen, Wachovia piloted virtual I/O technology.


One of the benefits of server virtualization is that several operating systems hosting applications can be consolidated onto a single machine, making better use of its capacity. One of the problems this creates, however, is that the applications end up sharing all of the machine's resources, including its connections to the network and storage. The typical solution is to add more I/O devices, such as Ethernet cards or fibre channel connections (a high-speed data transmission technology often used to connect servers with clustered storage devices). The additional I/O devices provide more throughput and can be used to keep an application from being exposed to other networks.



Virtual I/O technology provides a means to consolidate all the cables and cards typically needed to support a machine, virtualized or not, down to one or two wires. Instead of physical cards, virtual I/O uses software to accomplish the same task. Wachovia has been working with several vendors, including Scalent of Palo Alto, Calif., to implement virtual I/O in its data centers. One of the benefits of the Scalent technology, says Hall, is that it is agnostic of hardware suppliers, allowing Wachovia to maintain a neutral position. In tests, Hall's group found the virtual I/O technology could increase I/O performance by 300%.


"With the advanced computing farms and processor arrays we want to put into our data centers, if you only have two wires going to every box, you can put 60 blades inside of a chassis and not be concerned that the number of cables are going to be so great it causes air flow problems or a problem in swapping out a cable if a connection goes bad," says Hall.
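Hall's two-wires-per-box point makes the cabling win easy to quantify. In this back-of-envelope sketch, the 60-blade chassis and two-wire figures come from the article; the four cables per blade for a traditional build (two Ethernet plus two fibre channel) is an assumption for illustration.

```python
# Back-of-envelope cable count: traditional per-blade cabling vs. virtual I/O.
blades_per_chassis = 60       # per the article
cables_per_blade = 4          # assumed: 2 Ethernet + 2 fibre channel
traditional = blades_per_chassis * cables_per_blade
virtual_io = 2                # per the article: "two wires going to every box"
print(traditional, virtual_io)  # 240 vs. 2 cables per chassis
```

At that density, the reduction is what makes the airflow and cable-swap concerns Hall mentions go away.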



The benefits of virtual I/O technology in combination with processor arrays include the ability to troubleshoot, replace, or upgrade blades without bringing down the entire chassis, achieving the goal of high availability by default. "There's an upgrading benefit, there's a repair/troubleshooting benefit, and by simplifying the chassis design to only contain power and management, the chassis become cheaper and easier to inventory," adds Hall.



Green By Design


Many of the lessons learned over the past several years will be leveraged in Wachovia's new headquarters, scheduled to open in Charlotte in the summer of 2009. The 48-story building is being built to Leadership in Energy and Environmental Design (LEED) specifications, a set of standards developed by the U.S. Green Building Council for environmentally sustainable construction. To be LEED certified, a building is evaluated against criteria including energy efficiency, water conservation, indoor air quality, renewable materials, and use of local suppliers.


Wachovia plans to implement its processor array model to support the demanding applications delivered to its traders. Traders are typically power users—they use a lot of video, require streaming market data, and often have anywhere from four to eight monitors at their desks. "They're not our typical client base—they represent maybe only 10% of our organization," says Scott Haynes, senior platform and data center architect. "So, we're looking for something that's going to be very high performance."



Instead of having a high-end desktop computer, traders will instead connect to a processor array through a portal device on their desks. The device, being supplied by Teradici of Burnaby, B.C., is shaped somewhat like a hockey puck. Users gain access to the processing power of a blade server, but from an administrative standpoint it is much easier to manage than having individual workstations deployed on desks, says Haynes. In the event of a system failure, IT personnel will be able to switch a user to a "hot" spare blade server in a matter of minutes, greatly improving uptime.


In addition, traders are frequently moved between task groups on the trading floor to respond to market conditions. With typical workstation scenarios, such moves are time consuming and costly (estimated at about $1,000 per move). With the virtualized model, Haynes says traders can have their desktops switched to any location on the floor.



For the remainder of the bank's non-trading employees, Wachovia is also looking to deploy thin clients, but with less processing power dedicated to each client on the back end. "That's going to give us the ability to take the processor array model and really make it shine," says Haynes, "because we can now move people around, and at night we can take all of the excess computing power and put it into our grid for computational stuff.


"It's really about flexible computing. We're really tired of having all these high energy desktops at people's desks that are used for only seven hours a day," he says. Pilots are currently taking place to see how well the system can scale. Wachovia hopes to learn, for example, how many thin clients it can plug into a processor array before performance deteriorates, and to determine which users are best suited for the technology.
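The scaling question the pilots are asking can be framed as a simple capacity model. The sketch below is purely illustrative: the core counts, per-user share, and headroom factor are assumptions, not Wachovia figures.

```python
# Hedged capacity sketch: how many thin clients one blade supports before
# per-user share drops below a chosen floor. All numbers are illustrative.
def max_users(blade_cores, cores_per_user, headroom=0.8):
    """Users per blade, holding back 20% headroom for peaks by default."""
    return int(blade_cores * headroom / cores_per_user)

print(max_users(16, 0.5))  # 25 users per 16-core blade at these assumptions
```

A pilot then effectively measures the real-world `cores_per_user` for each class of worker, which is also how "which users are best suited" gets answered.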


"We're basically going to see how deep we can stack these guys before they say, 'you know what - give me a desktop'," says Haynes.


And as the new systems are deployed and virtual technology implemented, the team is keeping chief technology officer Mark Cates' message of replacing two things with one in mind. That has translated into implementing a wide range of Web services: applications that provide a single function, such as looking up a credit report, and that can be accessed by multiple programs. In one instance, Hall took a look at five new project requests and saw that each required a reporting tool. Rather than purchase five new tools, he was able to create a single Web service to support all five projects. "We've done the same in such areas as messaging, document conversion, and Web hosting," says Hall.


Haynes and Hall admit the last several years have not been without bumps. In one instance, the team attempted to create a "mega service," a central repository for a wide range of Web services that programmers could use to build applications in Lego-block fashion. A wide variety of applications could tap into the mega service to run components of a program on an as-needed basis.


The mega service made sense in theory, but in practice it didn't get a lot of use. "I think the problem is, developers like coding to what they think is the best design, and some of the things we did with the mega service took that away," says Hall.


But for every idea that didn't work, there have been many others, like virtual I/O, that are moving forward.


"Thinking back, there are a lot of lessons we learned," says Hall. "We hired a lot of great people, tried a lot of interesting things and worked with some great vendors. And, really, the management team deserves credit for allowing this to happen."
