April 2017

Innovative Approaches to Energy Efficiency in Data Centers

Roland Broch, the administrator of the eco Data Center Expert Group and the eco Data Center Star Audit, talks to dotmagazine about innovative approaches to energy efficiency in data centers, from water cooling to Kyoto wheels.

Water storage tank, 72 l (Source: Roland Broch)

DOTMAGAZINE: What different aspects of the topic “power” have you covered in the eco Data Center Expert Group?

ROLAND BROCH: The first topic is the aspect of energy efficiency – consuming less power in the servers themselves – and another is wasting less power on cooling. So, you have energy consumption to produce cooling and energy consumption to transport cooling. Then you have – though this is less important – the electrical loss when transforming the voltage in the UPS (uninterruptible power supply) systems. But cooling accounts for the largest share of energy consumption after the IT servers. Then, a few years ago, a new generation of rack servers appeared with an idle mode that saves power when the processing load is low. If there is little activity on the CPU, for example, the server can switch to idle mode, because the power supply unit then needs less power.

A totally different aspect is that you save power if you do less redundant computing, or smarter computing – this is the aspect of virtualization, for example. Or you can look at whether calculations are really needed – there have been some scientific projects in the past dealing with power-saving software design, for example. We don’t usually cover topics like this in the Data Center Expert Group meetings, because our members come more from the colocation and web hosting industry, and they are more interested in aspects of efficient IT infrastructure.

DOT: What activities has the group undertaken in relation to energy efficiency?

BROCH: A few years ago we started with a white paper, with research into the power density typically used for different business models – so, whether you are a web hosting provider or a colocation provider, or you do calculations in a scientific area – high-performance computing, for example – you will have different power densities. It’s important to know your power density in advance, so that you have the right data for data center planning. If you design the data center with too little power density, you will need to do an upgrade, and this is quite expensive. If everything is already built and you then discover that you need more power, it is extremely expensive. If you go the other way and design a data center for too much power density, you can’t run it efficiently.

At the moment, we are working on an update of this study. I’ve done some interviews with experts, and they said that they have not seen much change over the past few years – and I’m currently researching the reasons for that. In a typical data center business you have maybe about 3 kW per rack, and they said that today you often have a higher density with more efficient servers, so that, in sum, the load stays the same. You can then process more calculations with the same energy.
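To illustrate why it pays to know the power density before building, here is a minimal sketch in Python with purely hypothetical figures (the rack count, the densities, and the retrofit scenario are assumptions for illustration, not numbers from the study):

racks = 200                      # planned number of racks (assumed)
design_density_kw = 3.0          # ~3 kW per rack, the typical figure mentioned above
actual_density_kw = 4.5          # assumed: tenants later deploy denser hardware

design_it_load_kw = racks * design_density_kw
actual_it_load_kw = racks * actual_density_kw
shortfall_kw = actual_it_load_kw - design_it_load_kw

print(f"Designed for {design_it_load_kw:.0f} kW of IT load, "
      f"actually needed {actual_it_load_kw:.0f} kW "
      f"-> {shortfall_kw:.0f} kW of power and cooling capacity to retrofit")

Every kilowatt of shortfall has to be covered by additional power distribution and cooling capacity after the facility is built – exactly the expensive upgrade described above.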

DOT: How do you measure the overall efficiency of a data center? 

BROCH: To measure the efficiency of a data center, you can use KPIs like the PUE (Power Usage Effectiveness) – this is the ratio of the total power consumed by the facility to the power needed for the IT itself. And this depends on the business model and on different cooling designs. If you use fresh air from outside and you are located in a cooler country, you will save more power than if you operate a data center in Spain or somewhere in Africa or Asia, for example.
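As a quick worked example of that definition (the kilowatt figures are assumed for illustration only):

def pue(total_facility_kw, it_load_kw):
    # Power Usage Effectiveness: total facility power divided by IT power.
    return total_facility_kw / it_load_kw

# Assumed figures: 1,000 kW of IT load plus 500 kW for cooling, UPS losses,
# lighting, and so on gives a PUE of 1.5; a value of 1.0 would be the
# theoretical ideal, where all power goes to the IT equipment.
print(pue(total_facility_kw=1500, it_load_kw=1000))   # -> 1.5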

But you shouldn’t use the PUE to compare data centers with different designs. It’s a really good indicator for monitoring the improvement you have made over the years. If you have the same business model and comparable data centers, then you can compare them, but it doesn’t make sense to compare a colocation data center with a high-performance data center used for scientific research – the latter are often more efficient because they have less redundancy. The more redundancy you have, the less efficient you are. Larger data centers often have more potential for efficiency improvements than smaller ones.

DOT: Looking at the eco Datacenter Star Audit (DCSA), what aspects of power are covered there? 

BROCH: Historically, the focus has been on reliability and availability. You have a certificate that stands for a high degree of redundancy – that’s the historical purpose of this audit. Then, on the other hand, we started a few years ago with an extra Green Star, which is awarded not for the overall efficiency, but for the effort made towards efficiency – so we look at the potential of the data center and at whether the operator is willing to do something about efficiency.

Of course, if you have a bad design, this star is not awarded. You have to be efficient, but we don’t measure it in an absolute sense. We look at whether the management of the data center is willing to make improvements. So there is a potential for improvement, and if they have documented that they have started and are continuously improving on energy efficiency over the months and years, then we will say that this is worth a Green Star. You can’t do everything at once.

DOT: Tell me about some of the more innovative solutions you’ve seen when it comes to energy efficiency.

BROCH: There are different approaches to saving energy. Most of the solutions I’ve seen focus on saving power on cooling. Cooling can make up about 40 percent or so of the overall power consumption in a data center, then you have about 30 percent for the servers, then less for the UPS systems, a little bit for the lights, and so on. A really efficient approach is water cooling, and you can do it in totally different ways. You can cool the processor directly on the motherboard – tiny copper pipes, cooling with water or oil. This is common in high-performance data centers for scientific uses. And there is a different approach that uses water to cool the rack doors: hot air runs through the servers, and on the back door of the server rack this hot air is cooled down. The heat goes into the water cycle and is taken out of the data center (this is the method used by e3Computing, for example). This is much more efficient than using air – water has a far greater capacity to transport heat than air, so you need less power to move the heat with water than with air; you need huge fans to move air. So this is a really efficient approach, but a lot of data center providers don’t like the idea of having water in the server room.
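To put the water-versus-air point in rough numbers, here is a back-of-the-envelope sketch; the 10 kW rack and the 10 K temperature rise are assumed example figures, and the material properties are standard textbook values:

def volume_flow_m3_per_s(heat_kw, density_kg_m3, specific_heat_j_kg_k, delta_t_k):
    # Volume flow needed to carry away a heat load at a given temperature rise.
    return heat_kw * 1000 / (density_kg_m3 * specific_heat_j_kg_k * delta_t_k)

heat_kw, delta_t = 10.0, 10.0   # assumed: one 10 kW rack, 10 K coolant temperature rise
air = volume_flow_m3_per_s(heat_kw, 1.2, 1005, delta_t)      # air: ~1.2 kg/m3, ~1005 J/(kg*K)
water = volume_flow_m3_per_s(heat_kw, 1000, 4186, delta_t)   # water: ~1000 kg/m3, ~4186 J/(kg*K)
print(f"Air:   {air:.2f} m^3/s")       # roughly 0.83 m^3/s of air
print(f"Water: {water*1000:.2f} L/s")  # roughly 0.24 L/s of water

Moving thousands of times less volume for the same heat load is what makes the pumps in a water loop so much cheaper to run than the large fans an air-cooled design needs.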

Another approach is, for example, the system that Kyoto Cooling uses – an exchange of cold and hot air through a big rotating wheel. The benefit of this system is that you don’t use direct cooling, where fresh air from outside goes straight into the data center. That is often dangerous if there is pollution outside, or fire and smoke outside. The Kyoto wheel uses indirect cooling: you have a big rotating wheel with a fine mesh through which the air can run; half of the wheel is inside the data center, and half is in contact with the air outside. The cold air from outside goes through the mesh and cools the structure down; then, as that part of the wheel rotates into the data center, the hot air runs through the structure and heats it up – and as it rotates outside again, it takes the heat outside. The benefit is that you don’t need big open holes in the wall – good for security reasons. If you want to cool a data center with direct fresh air, then you have big holes in the wall, and you have to protect them from attack.
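As a simplified illustration of this kind of indirect, air-to-air heat exchange (the effectiveness value and the temperatures are assumed example figures, not Kyoto Cooling specifications):

def supply_temp_c(return_air_c, outside_air_c, effectiveness):
    # Temperature of the air sent back to the IT space after passing the wheel.
    return return_air_c - effectiveness * (return_air_c - outside_air_c)

# Assumed: 35 C hot-aisle return air, 15 C outside air, 75% wheel effectiveness.
print(supply_temp_c(return_air_c=35.0, outside_air_c=15.0, effectiveness=0.75))  # -> 20.0 C

The two air streams never mix; only the wheel’s mass carries the heat across the wall, which is why no large openings for outside air are needed.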

Then you have prototypes: for example, submerging the motherboard completely in oil, or the Microsoft concept for tiny data centers in the ocean – kept underwater in waterproof racks. So these are different ideas that directly affect the architecture of the server rack.

DOT: What about power sources and power generation for data centers? 

BROCH: Most business cases use the public grid as the main power source, with an emergency power supply on standby (for example, diesel generators). Then there are studies that do it the other way around – you generate your own power and fall back on the public grid in emergencies. One source of independent power generation is gas; another is fuel cells. There’s one initiative in the Frankfurt area called RheinMain BLUE Cluster, which is a project using fuel cells as the primary power source. Another idea is a project looking at creating gas from algae and using this to power fuel cells, but this is still a prototype. Then there are also cogeneration power plants – a mini power station that the data center can own and operate, and which can serve as the main power supply or as emergency power.

For emergency power, the diesel generators take some time to start and reach full load, so during that time you need an uninterruptible power supply (UPS) system. This can be done with an array of batteries, but they need maintenance. Another option is flywheel technology, which stores kinetic energy. You can use this as an active UPS system for galvanic isolation – this basically means that the momentum and inertia of the flywheel mass, which is coupled to the public grid, stabilizes the current, so that the sensitive data center equipment is not subjected to sudden peaks and troughs.
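To give a feel for the energy involved, here is a rough sketch; the flywheel’s moment of inertia, its speed, the usable fraction, and the IT load are all assumed example figures, not data from any specific product:

import math

def flywheel_energy_j(inertia_kg_m2, rpm):
    # Kinetic energy stored in a rotating mass: E = 1/2 * I * omega^2
    omega_rad_s = rpm * 2 * math.pi / 60
    return 0.5 * inertia_kg_m2 * omega_rad_s ** 2

stored_j = flywheel_energy_j(inertia_kg_m2=150, rpm=3000)  # assumed flywheel
usable_j = stored_j * 0.5        # assumed: only part of the energy is usable
                                 # before the speed drops too low
load_kw = 250                    # assumed critical IT load
ride_through_s = usable_j / (load_kw * 1000)
print(f"Stored: {stored_j/1e6:.1f} MJ, ride-through at {load_kw} kW: {ride_through_s:.0f} s")

A bridge time on the order of ten to twenty seconds is enough to cover the start-up of the diesel generators described above.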

Roland Broch is Head of Member Development and the contact person for the Data Center Expert Group at eco - Association of the Internet Industry