Can data centers be built to operate with energy efficiency?
Given that data centers have a reputation for being energy hogs, so-called green data centers might seem like an oxymoron. But professionals who work in the sector say the industry has been trying to get greener and is starting to succeed. The fact that the U.S. Green Building Council’s fourth version of LEED includes data centers is one sign of the times.
“By bringing data centers into the suite of LEED rating systems, we’re removing barriers so that even more data facilities can participate in LEED and build sustainably,” says Jacob Kriss, a spokesman for the USGBC.
Several data centers have already attained LEED certification, including two facilities, in Richmond, Va., and Atlanta, that were retrofitted by data center company QTS. The Richmond center got high LEED marks under the commercial interiors program for water efficiency and optimized energy use in HVAC systems, according to Brian Johnston, chief technology officer for QTS.
Thomas Traugott, senior vice president of the data center solutions group at Cassidy Turley, says the standard metric used to rate data centers is known as Power Usage Effectiveness (PUE), the ratio of total facility power to the power that actually reaches the computing equipment. A 2.0 PUE rating means that for every 1 kW of power that goes to run computers, another 1 kW is needed for cooling and support. An energy miser with a 1.5 PUE needs only 0.5 kW of overhead to run 1 kW of computing power. An energy hog with a 3.0 rating needs 2 kW of overhead to run just 1 kW.
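The ratings above boil down to simple arithmetic. A minimal sketch (a hypothetical helper, not an industry tool) of the relationship Traugott describes:

```python
# PUE = total facility power / IT power, so the cooling-and-support
# overhead per kW of IT load is (PUE - 1).

def overhead_kw(pue: float, it_kw: float = 1.0) -> float:
    """Power spent on cooling/support for a given IT load, in kW."""
    return (pue - 1.0) * it_kw

print(overhead_kw(2.0))  # 1.0 kW of overhead per 1 kW of computing
print(overhead_kw(1.5))  # 0.5 kW -- the energy miser
print(overhead_kw(3.0))  # 2.0 kW -- the energy hog
```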
Companies are starting to realize that those PUE numbers make a huge difference to their bottom line. Traugott offered a hypothetical: say you've got a 10,000-sq.-ft. data center paying a generic $0.10 per kilowatt-hour for 730 hours a month. With a low 1.5 PUE rating, you're paying $109,500 a month. With a higher 2.0 PUE rating, the bill hits $146,000.
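Traugott's figures can be reproduced directly. Working backward, they imply roughly 1,000 kW of IT load, a detail the example leaves unstated; this sketch assumes that figure:

```python
RATE = 0.10   # dollars per kWh, from Traugott's hypothetical
HOURS = 730   # hours per month
IT_KW = 1000  # assumed IT load implied by the quoted bills

def monthly_bill(pue: float) -> float:
    """Monthly electric bill in dollars: total power x hours x rate."""
    return pue * IT_KW * HOURS * RATE

print(round(monthly_bill(1.5), 2))  # 109500.0
print(round(monthly_bill(2.0), 2))  # 146000.0
```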
One notable green data center is currently under construction in Clifton, N.J., by Telx, a provider to Cassidy Turley clients, says Traugott. When fully occupied, it will have a PUE rating of 1.5. Telx estimates that for each 0.1 of reduced PUE, the firm will save $740,000 in electric bills and reduce carbon emissions by 6.7 million pounds.
Experts are working to make data centers more efficient in many ways, the most common of which is containment. The idea is to use screens to contain the cold air that cools the equipment at the front of the server racks, and to provide an escape chimney for the hot air exhausted from the racks' back.
Another method is to use so-called variable frequency drives. The old-fashioned way to go about this would be for a 30-ton cooling unit to be either completely on or turned completely off, Traugott says. With a variable frequency drive, the cooling unit can be programmed to provide only the power needed. That is particularly important as data centers grow and expand.
“Even a data center with a PUE rating of 1.5 at full load may start out only 10 percent occupied and that will give it a 3.0 PUE,” Traugott says. Being able to ramp up the cooling power, he adds, “makes you efficient along the way.”
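Traugott's point is that some overhead is fixed and doesn't shrink with the IT load. A toy model (a hypothetical construction, not from the article) makes this concrete; the fixed/variable split below is chosen so the model reproduces his figures of 1.5 at full load and 3.0 at 10 percent occupancy, and real facilities will vary:

```python
FIXED = 1 / 6     # fixed overhead, as a fraction of full IT load
VARIABLE = 1 / 3  # overhead that scales with the IT load actually drawn

def pue(occupancy: float) -> float:
    """PUE at a given occupancy, with IT load normalized to 1.0 at full."""
    it = occupancy
    overhead = FIXED + VARIABLE * it
    return (it + overhead) / it

print(round(pue(1.0), 2))  # 1.5 at full load
print(round(pue(0.1), 2))  # 3.0 at 10 percent occupancy
```

Variable frequency drives attack the variable part of that overhead; the fixed part is why a mostly empty facility looks so inefficient on paper.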
Chris Johnston, senior vice president at the engineering firm Syska Hennessy Group, says the "900-lb. gorilla" in the data center room is an inefficient computer system, not just the way the data center is run.
“I can buy very efficient computer equipment and pay a high price or buy the lowest-price thing,” Johnston says. “Unfortunately, a lot of the big users buy cheap stuff.”
Johnston says that advances in computer equipment are helping to make data centers more efficient. One way is that computer equipment can now tolerate more heat. In the 1980s, computer equipment rooms needed to be at about 70 degrees. Now, some can tolerate up to 80 degrees, he says.
Another way is through a process called virtualization. "That lets a particular computer or server handle more than one program," he says. "Say, you have a data center and you've got servers for all the e-mail in New York and D.C. At peak time I have to have all of them, but at night you can shift application programs over to one server and let the others go to sleep."
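The consolidation Johnston describes can be sketched as a packing problem: when each workload's overnight demand is small, they all fit on one server and the rest can sleep. This is a minimal illustration with made-up load figures, not how any particular virtualization product works:

```python
def servers_needed(loads, capacity=1.0):
    """Greedy first-fit packing of workload sizes onto servers.

    Each load is a fraction of one server's capacity; returns how many
    servers must stay powered on to host all of them.
    """
    servers = []  # running total of load placed on each powered-on server
    for load in sorted(loads, reverse=True):
        for i, used in enumerate(servers):
            if used + load <= capacity:
                servers[i] += load
                break
        else:
            servers.append(load)  # no room anywhere: power on another server
    return len(servers)

peak = [0.8, 0.7, 0.6]    # hypothetical daytime e-mail load per server
night = [0.2, 0.15, 0.1]  # the same applications overnight

print(servers_needed(peak))   # 3 -- every server stays on
print(servers_needed(night))  # 1 -- the others can go to sleep
```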
While data centers use many redundancies to keep crucial computer systems powered without a nanosecond of lost time, it's not those redundancies that are the biggest problem, Johnston says. "The biggest consumer of energy is inefficient computer hardware and inefficient use of that hardware," he says.
And even those redundancies are starting to be pulled back. “It used to be that people wanted the data center to withstand Armageddon,” Johnston says. “That’s starting to change.”
Still, there are energy sinks that can be maddening. Johnston recalled the case of one data center client that got its PUE rating down to 1.8 from 2.2. The problem was that the HVAC system created humidity by, believe it or not, boiling water. "So you had one cooling unit burning electricity to put humidity in the air," he says. "The one next to it was on full cooling to offset the heat created to make that humidity."