Interview: Intel's Ajay Chandramouly On Xeon E5-2600 Processor Advantages in Big Data

Content sponsored by Intel; for more information, please visit here.


Background

Ajay works at Intel IT, a role that uniquely qualifies him to provide insights into the new Intel Xeon E5-2600 processor family from an end-user IT perspective. In this interview, Ajay describes how Intel IT has successfully deployed and leveraged the Xeon processor family across its 87 global data centers to drive key operational cost savings throughout Intel Corporation. He also offers key insights about the processor for CIOs and senior IT business decision makers in the areas of Big Data processing, cloud computing, data center management, and the security issues surrounding data storage and management. Please see Ajay's complete bio at the bottom of this article.


From a CIO's perspective, the volume of data most organizations amass grows dramatically with each passing year. Traditionally, when planning and evaluating the performance of computing systems, CIOs and senior IT professionals develop a capacity management plan: a road map of the organization's required computing resources looking several years into the future.

Looking back over the last decade or so, we know that the data warehouses of yesterday and the Big Data systems of today tend to become I/O bound before they become CPU bound.

Question:

How does the I/O architecture of the E5 family of chips accommodate the enormous I/O demands of Big Data applications, such as Hadoop clusters and in-memory database applications (such as SAP or Oracle), allowing them to process data more efficiently than previous systems built on Intel architectures?

Specifically, what are the special design elements in these chips to assist in these I/O intensive processing tasks?

Answer:

A primary focus of the E5-2600 design was to identify and greatly reduce potential I/O bottlenecks in the new processor. An important goal of the chip, in this context, is to move data into and out of the processor as quickly as possible. The Sandy Bridge-based E5 processor now has up to 8 cores, and each core is multi-threaded, effectively yielding up to 16 hardware threads per socket. The CPU gets its bandwidth primarily from three sources: (1) the last-level cache, (2) local memory bandwidth, and (3) access to remote memory. Let's take each of these in turn:


1) Last-level cache - Intel has added on-die memory with very low latency and very high bandwidth, located as close to the cores as physically possible. A ring-topology interconnect allows all of the cores to access this cache simultaneously; logically it appears as one cache, but physically it is divided into multiple slices. With the new E5, Intel has added up to 2 additional cores for more raw computational performance and up to 8 MB of additional last-level cache.

2) Memory bandwidth - Intel moved to 4 channels of DDR3 with higher-than-ever per-DIMM memory capacities, so that more data can be stored local to the processor, reducing clock cycles wasted waiting for data to arrive from the storage system. The goal of reducing latency in getting data to the processor is also reflected in our increased integration, with PCI Express built directly into the base silicon. We've also supported higher bandwidth throughout the platform with increased QPI speed, support for DDR3-1600 memory, and the first-ever enterprise support for PCIe 3.0.

3) Access to remote memory - When a CPU needs data that resides on another socket, the request is handled over a QPI (QuickPath Interconnect) link, a physical connection between the CPUs. The new CPU has up to 2 QPI links connecting one socket to another.
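As a back-of-the-envelope check on point (2), the peak theoretical memory bandwidth per socket follows directly from the figures above. This is a sketch: the interview gives the channel count and DDR3-1600 speed, while the 64-bit channel width is assumed (it is standard for DDR3).

```python
# Peak theoretical memory bandwidth for one socket, from the figures
# mentioned above. The 8-byte (64-bit) channel width is standard DDR3,
# not something stated in the interview itself.
transfers_per_sec = 1600e6   # DDR3-1600 runs at 1600 MT/s
bytes_per_transfer = 8       # one 64-bit channel moves 8 bytes per transfer
channels = 4                 # four memory channels per socket

peak_bytes_per_sec = transfers_per_sec * bytes_per_transfer * channels
print(f"{peak_bytes_per_sec / 1e9:.1f} GB/s")  # 51.2 GB/s
```

Real workloads see less than this theoretical peak, which is why the cache and QPI improvements above matter as much as the raw channel count.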


Nearly every area that could improve processor performance has been addressed: more cores, more cache, greater memory bandwidth, and deeper integration.

Integrating PCI Express 3.0 into the die itself has yielded a 30% decrease in latency (the amount of time data has to sit waiting to be processed by the CPU). In previous architectures, the I/O subsystem was on a separate chip; having it on the processor itself greatly improves throughput and reduces latency.
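The value of keeping data close to the cores, which the last-level cache discussion above emphasizes, can be felt even from a high-level language. A minimal sketch: walking the same array in sequential versus random order touches identical data, but the random walk typically runs slower because it defeats the caches and the hardware prefetcher.

```python
import random
import time

N = 1_000_000
data = list(range(N))

seq_order = list(range(N))
rand_order = list(range(N))
random.shuffle(rand_order)

def walk(order):
    """Sum the array elements in the given visiting order, timing the pass."""
    start = time.perf_counter()
    total = 0
    for i in order:
        total += data[i]
    return total, time.perf_counter() - start

seq_total, seq_time = walk(seq_order)
rand_total, rand_time = walk(rand_order)

# Same work, same result; only the memory access pattern differs.
assert seq_total == rand_total
print(f"sequential: {seq_time:.3f}s  random: {rand_time:.3f}s")
```

The exact gap depends on the machine, but the principle is the one described above: locality lets the cache hierarchy hide memory latency.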


Question:

For the Cloud Service provider or a data center operator, one of the biggest expenses after procurement is the cost of power consumption to operate the facility and the server farms. The Xeon E5 family supports Intel Node Manager 2.0, which is a feature embedded in the hardware that provides the information and control needed for efficient, policy-based power management.

Can you speak about the improvements in energy efficiency in the chip?


Answer:

Users can realize up to a 50% improvement in energy efficiency with the new CPU. This is achieved primarily by scaling memory, cache, and I/O activity to match the needs of the cores. An active core is designed to scale its power with its utilization. The processor can tune its interfaces across 23 points of control to match performance to power consumption, so that systems do not consume power unnecessarily but instead tightly link performance to the amount of energy consumed. As an extension of these chip improvements, Intel Node Manager and Intel Data Center Manager are tools that help IT manage and monitor power consumption more effectively.
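Node Manager itself is configured through platform management tooling, but the basic arithmetic behind any power monitor, deriving average power from two cumulative energy-counter samples, is easy to sketch. On Linux, such counters are exposed in microjoules by the RAPL interface under /sys/class/powercap; the helper below is a hypothetical illustration, not a Node Manager API.

```python
def average_power_watts(energy_uj_start: int, energy_uj_end: int,
                        interval_s: float) -> float:
    """Average power over an interval, from two cumulative energy
    readings in microjoules (the unit RAPL's energy_uj files use)."""
    joules = (energy_uj_end - energy_uj_start) / 1e6
    return joules / interval_s

# Example: 95 J consumed over a 1-second sample window -> 95 W.
print(average_power_watts(1_000_000, 96_000_000, 1.0))  # 95.0
```

A policy-based power manager samples such counters continuously and throttles components when the derived power exceeds a configured cap.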



Question:

Can you address the security advantages of special chip instructions dedicated to data encryption and the performance enhancements of these instructions over chips that do not contain dedicated encryption instructions?


Answer:

There are several embedded security features in the new Xeon E5, one of which is AES-NI (Advanced Encryption Standard New Instructions). AES-NI helps IT shops quickly encrypt data flowing through a company's computer systems, protecting it from attackers, and it enables the encryption process to take place without a significant performance hit.

Six new instructions have been added that offer full hardware support for AES (the Advanced Encryption Standard). The new instructions are important in protecting critical data in a wide range of communications applications, including embedded devices, network security appliances, web servers, and routers.
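Whether a given host's CPU exposes these instructions can be checked from software. A minimal sketch, assuming a Linux host, parses the flags line of /proc/cpuinfo; libraries such as OpenSSL perform a similar detection and use AES-NI automatically when it is present.

```python
def has_aesni(cpuinfo_text: str) -> bool:
    """Return True if the 'aes' CPU flag appears in a /proc/cpuinfo dump."""
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("flags"):
            _, _, flags = line.partition(":")
            return "aes" in flags.split()
    return False

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            print("AES-NI available:", has_aesni(f.read()))
    except FileNotFoundError:
        print("no /proc/cpuinfo on this host")
```

On systems without the flag, software falls back to table-based AES implementations, which is where the performance hit the interview mentions comes from.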


Intel TXT (Trusted Execution Technology) addresses security needs across server deployment, especially in virtualized or cloud-based environments. It helps protect your server prior to an OS or hypervisor launch by verifying that the launch environment is in a known, trusted state. Additionally, Intel TXT enables new use models. For instance, you may create pools of platforms with trusted hypervisors and use the platform trust status to constrain the migration of sensitive virtual machines or workloads. This helps raise the overall protection of critical data.
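The trusted-pool use model described above amounts to a scheduling constraint. A hypothetical sketch of that policy check (the class and attribute names are illustrative, not any real orchestration API):

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    txt_trusted: bool   # platform passed its TXT measured launch

@dataclass
class Workload:
    name: str
    sensitive: bool     # must only run on trusted platforms

def eligible_hosts(workload: Workload, hosts: list) -> list:
    """Constrain migration targets: sensitive workloads may only land
    on hosts whose trust status was established at launch."""
    if workload.sensitive:
        return [h for h in hosts if h.txt_trusted]
    return list(hosts)

pool = [Host("node-a", True), Host("node-b", False)]
print([h.name for h in eligible_hosts(Workload("payroll-vm", True), pool)])
# ['node-a']
```

In a real deployment the trust attestation would come from the platform itself rather than a boolean field, but the scheduling decision has this shape.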


Question:

Can you provide any insights into how the new Xeon Processor will be used within the Intel IT environment?

Answer:

The new Xeon E5-2600 product family will be the new mainstream platform across our entire environment, including our Office and Enterprise private cloud. For Intel IT, technology refresh has been critical to delivering business value back to the corporation; a proactive refresh cycle has helped support hundreds of millions of dollars in efficiencies. We have achieved server consolidation ratios of 20:1 by deploying the latest generation of Xeon servers.


In our private cloud, which is composed mostly of these two-socket Xeon servers, we have saved $9M annually to date in hard cash savings. This does not include non-cash benefits such as the productivity and efficiency gains that come from increased agility. For instance, it used to take 90 days to provision a server; now it takes under 3 hours, or even as little as 45 minutes in some cases. These efficiency gains have been a key motivator in building our own private cloud of Xeon servers at Intel IT. One of our key goals is to enable our engineers to go from idea to implementation within a day.


We currently have 87 data centers across the world. By proactively refreshing our servers with the latest-generation Xeon-based servers, we have identified an opportunity to further reduce the number of data centers by another 35% over the next few years. The Intel technology refresh strategy is not limited to server refresh; it also includes storage, network, and facilities. As the latest generation of Xeon servers has come to market, we have seen the bottleneck in the operation of our data centers shift from computing to storage and networking. For example, we are upgrading our network to 10 Gigabit Ethernet (approximately a 25% network operating cost savings) and adopting Xeon-based storage solutions ($9.2M in savings).


A key metric for all IT managers is driving effective utilization across all data centers. By refreshing across compute, storage, and networking, we are driving toward 80% effective utilization across our global data center resources. Intel currently manages 38.2 petabytes of data, up from 24.9 petabytes in 2011. As our data volumes grow each year, we look to leverage the latest Xeon processor technology to stay ahead of the growth curve and enable our engineers to continue delivering on the promise of Moore's Law.


For more information about the Xeon E5-2600 processor family, please visit here.

To participate in the Intel LinkedIn Group please visit here.



About Ajay Chandramouly

Ajay works at Intel IT and is the Cloud Computing and Data Center Industry Engagement Manager.

Ajay has over 13 years of experience in the technology industry, including over 10 years at Intel Corporation. He has held a variety of IT, software, and hardware engineering positions at Intel and the Lawrence Livermore National Laboratory. Ajay is a highly sought-after speaker who has presented at numerous forums worldwide, including Computerworld's Storage Networking World and the National Defense Association. A highly regarded expert in cloud computing and data center management, he has been interviewed and cited in prestigious publications such as Data Center Knowledge, Forbes, and BusinessWeek, and has co-authored several white papers that can be found on www.intel.com/it. Ajay's current role is to define and articulate Intel's leadership cloud and data center strategy and roadmap with his senior IT peers. Ajay holds both an MBA and an MSE from UC Davis.

Follow Ajay on Twitter: @ajayc47

Follow Ajay’s blogs


Bill has been a member of the technology and publishing industries for more than 25 years and brings extensive expertise to the roles of CEO, CIO, and Executive Editor. Most recently, Bill was COO and Co-Founder of CIOZone.com and the parent company PSN Inc. Previously, Bill held the position of CTO of both Wiseads New Media and About.com.
