A pioneer in a risk-averse world

Peter Gross was recognized for his outstanding contribution to the industry at the 2010 Data Center Leaders’ Awards

22 November 2011 by Ambrose McNevin - DatacenterDynamics


Peter Gross is an electrical engineer. After gaining his MBA he started his career with Teledyne in the early 1980s, working on the design of uninterruptible power supply (UPS) systems. He set up EYP MCF in 1997, sold it to HP in 2008, and today is an HP Vice President and Managing Partner in Technology Global Consulting, responsible for power, cooling and energy efficiency in the data center.

 
When he began, back in what he describes as the golden age of data centers, when water-cooled mainframes were everywhere, data centers were designed almost entirely around them. Electrical system design was considered a ‘black art’, as few engineers had the experience to work on it. As Gross puts it, designing 400Hz electrical systems for IBM mainframes meant working on the most expensive and important equipment in the heart of the data center.
 
After this, Gross’s career took a turn into consultancy, in his case with a relatively small boutique called PRK Associates, where he provided data center services for financial services institutions. “Financial services were without a doubt the most important users of mission-critical environments. From the late 1980s to the mid 1990s, I spent many hours at the New York Stock Exchange and Merrill Lynch,” he says. “I worked on some very interesting projects: designing and supporting the many functions of data center operation.”
 
It is at this point Gross mentions for the first time the risk-averse nature of the data center business. “Because reliability is key and data centers are so essential to the welfare of an enterprise, it is a very delicate balance between introducing new architectures and topologies to improve efficiency, reliability and other characteristics, and getting the customer to trust that it is safe to do so,” he explains.
 
“It is a challenge to get the customers to try them (new architectures) out. You really have to gain the confidence of the clients; prior experience counts if you want them to believe new is good.”
 
This challenge of innovating in an industry averse to change has been an emphasis for Gross and the teams he has worked with. “Throughout the years, identifying the new trends has been key. We were among the first to focus on energy efficiency and in the early 2000s were among the first to develop a piece of software assessing the energy performance of the data center,” he says.
 
Despite clinging to tried and tested methods, even the data center industry was pushed forward thanks to big changes in electrical products and designs.
 
“For the longest time — until the late 1990s — the single most important point was that the UPS was a single point of failure. System architecture didn’t allow for redundancy because of the inability to supply more than one source of power to the computer equipment,” he says.
 
Testing, commissioning and maintenance throughout the life of a facility were enormously important. Facilities within financial services or airlines were key to operations and always had planned maintenance downtime to test and recalibrate UPS systems, generators and circuit breakers. Any of these systems could cause an outage.
 
New architectures
Two major developments fundamentally changed the operations of the data center. These were the static transfer switch and dual-cord distribution.
 
The ability to provide dual power changed data center design, topology and architecture. The static switch was an intermediate step: it delivered power from two sources, and although it remained a single point of failure itself, it was a major step forward.
 
“One of the things I am most proud of came about ten years ago when we designed a reliability model for the electrical system. The data center population was not big enough to build a system based on historical data. So, the reliability model was the first analytical tool that looked at the individual components and the whole system and, based on statistical information, predicted system behaviour and quantified risk,” Gross says. This enabled the design of better architectures.
 
“By gathering statistical data on mean time to repair and mean time to failure, [we could] develop the most reliable architecture and topologies to date.” In the late 1990s, this was the single most important element in the development of the data center.
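
To make the arithmetic behind such a model concrete, here is a minimal Python sketch of component-level availability analysis: each component’s availability is derived from its mean time between failures and mean time to repair, then combined in series (everything must work) or in parallel (redundant paths). The MTBF and MTTR figures and the topology below are invented for illustration; they are not data from Gross’s model.

def availability(mtbf_hours, mttr_hours):
    """Steady-state availability of a single component."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def series(*components):
    """All components must work, e.g. utility feed -> UPS -> PDU."""
    result = 1.0
    for a in components:
        result *= a
    return result

def parallel(*paths):
    """At least one redundant path must work, e.g. dual-corded A/B feeds."""
    unavailability = 1.0
    for a in paths:
        unavailability *= (1.0 - a)
    return 1.0 - unavailability

# Hypothetical single distribution path: utility -> UPS -> PDU
utility = availability(mtbf_hours=5_000, mttr_hours=2)
ups = availability(mtbf_hours=250_000, mttr_hours=8)
pdu = availability(mtbf_hours=1_000_000, mttr_hours=4)
single_path = series(utility, ups, pdu)

# A dual-corded load fed from two independent paths
dual_path = parallel(single_path, single_path)

print(f"single-path availability: {single_path:.6f}")
print(f"dual-corded availability: {dual_path:.9f}")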
 
Inflection points
As for other inflection points, Gross identifies the next period of major disruption as the dotcom boom (1999-2001), when extraordinary volumes, speed to market and economic factors drove many innovations in data center design.
 
The second inflection point came in the mid 2000s with the advent of the mega data centers being built by the likes of Google, Microsoft, Yahoo! and other online or web-driven businesses.
 
These caused a huge shift in priorities and drove further innovation. Before this type of facility, the only factor that counted was reliability; this was the first time that cost and energy efficiency emerged as primary factors. Reliability was achieved not by building the most robust facility possible but through software and failover.
 
This brought a new set of products into data center design. Unfortunately, most of the resulting innovation could not be transferred to traditional data centers because, as Gross says, most enterprises are not willing to sacrifice reliability to such an extent.
 
Hybrid data and data center
“This is the most challenging task today: never before in this industry has there been so much uncertainty. This is about the future of hardware, configuration and topology. The data center choices are between private, public and hybrid clouds. There are so many choices at every level of data center infrastructure design and application,” Gross says.
 
What is needed is a multi-tier hybrid design, and today we are seeing the first attempts to move from monolithic high reliability towards a real correlation between application criticality and the infrastructure supporting it. This will come about because, according to Gross, in any typical environment the proportion of applications that are truly critical to a business is typically about 15%. Twenty percent are important but not mission-critical, and the remaining applications are important but not vital.
 
Traditionally, the goal has been to design for the most mission-critical applications, so a 50,000-sq-ft data center was always designed with the top tier of the most vital applications in mind.
 
Today a hybrid data center will respond to the criticality of the application, supporting different density levels and responding differently to new requirements. These data centers are not uniform, and they give you the ability to respond quickly in supplying the services the business demands.
 
But the shift to this responsive type of data center will not happen overnight, and Gross believes this is the reason for the surge in the popularity of colocation: not because [the colocation providers] can provide these services, but because users know they can buy time without a big financial commitment while waiting for answers and working out the direction of the industry.
 
There are many examples of this type of disruption. “One typical example is where data center systems are going with the convergence of infrastructure and IT,” Gross says. “Until today, these were two segregated disciplines. When asked to produce a data center, as an engineer you had a general idea of what was needed. But you built something and populated the space, and really all you could hope for was that it matched the needs of IT.”
 
The holy grail is controlling the IT and the facility in an integrated fashion. It is finding a way to get the facility to respond to IT behaviour within the dynamics of a virtualized environment, where you have varying degrees of utilization.
 
“The ability of cooling systems to properly modulate thermal conditions to IT loads is a typical example of what we are trying to do,” Gross says.
 
“A typical application has major implications in terms of cost, reliability and efficiency, and we have systems attempting to do that. We are trying to do the same thing in terms of power: finding a way to provision power as and when it is needed.”
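
As a purely illustrative sketch of the idea, the short Python example below adjusts cooling output to follow the measured IT load rather than running the plant flat out. The function names, gains and load figures are invented for the example and do not represent HP’s control systems.

def cooling_target_kw(it_load_kw, overhead_factor=1.1):
    """Target cooling capacity: the IT load plus a small margin for losses."""
    return it_load_kw * overhead_factor

def adjust_fan_speed(current_pct, supplied_kw, target_kw, gain=0.05):
    """Take a simple proportional step toward the target cooling capacity."""
    error_kw = target_kw - supplied_kw
    new_pct = current_pct + gain * error_kw
    return max(20.0, min(100.0, new_pct))  # keep fans within safe bounds

# Example: a virtualized load that falls from 800kW to 500kW overnight
fan_pct = 85.0
for it_load_kw, supplied_kw in [(800.0, 880.0), (650.0, 860.0), (500.0, 700.0)]:
    fan_pct = adjust_fan_speed(fan_pct, supplied_kw, cooling_target_kw(it_load_kw))
    print(f"IT load {it_load_kw:.0f}kW -> fan speed {fan_pct:.1f}%")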
 
Happy at HP
This objective of designing a data center for the application is in part why Gross now works for HP. Gross says that though HP is first and foremost an IT company, it recognizes that you cannot segregate the data center from the rest of the IT space.
 
In the evolution of IT, the data center opens up a significant number of opportunities. The major revenue opportunity is the ability to integrate new solutions and connect servers, racks and cabinets intimately to the rest of the data center. Buying EYP gave HP a systems integration capability and a more substantial role in the data center space.
 
Flexible data center
Today, Gross’s main focus is Flex DC. If you look at the space, at one extreme you have the traditional bricks and mortar, and at the other you have a POD, the containerized data center. There is a big gap between them and that is the target market for Flex DC, he says.
 
“Flex DC is a compromise, but not a bad compromise. It still has all the attributes of a traditional data center. You can run anything in the space (it is IT vendor-neutral), but it has most of the benefits of the POD, such as energy efficiency, lower cost, speed to market and modularity,” Gross says.
 
“In this world, cost is the main driver. Performance is important but we have got to the point in the evolution of the data center where cost escalation, especially in comparison to the cost of IT hardware, is not sustainable. The majority of data centers are not 2MW (even at the low end) and it is prohibitive to spend $25m per megawatt. That’s why users go wholesale.”
 
“But the answer is Flex DC, which we’ve started deploying,” Gross says. It delivers the ability to build with prefabricated blocks (cooling blocks, power modules and so on) in buildings that are themselves prefabricated.
 
“We shift labour from site to factory. We move from construction to supply chain management. By building standardized blocks, there is predictability in the product. Users get line of sight and a reduction in delivery time from two years down to six months.”

All these things are addressed. This is not building a big warehouse with lots of UPSs and chillers. This is a real 21st century data center with the ability to easily readjust to changes in load and IT equipment. Gross believes that flexibility is the foundation of modern data center operation. He continues to push the envelope.
