Let’s take a little flight of fancy – imagine what would happen if your city, or your state, suffered a massive blackout. OK, we’ve probably all experienced something of the sort – the inconvenience of having to throw out ruined perishables from a warm fridge; the search for candles, only to realize that since you changed over to smoking an e-cigarette you no longer have a lighter in the house; finding the torch, only to discover that the batteries are flat; surviving the night without TV, stereo, or Internet. Domestically, it can be a pain, but hey – we all went camping when we were young. We can rough it.
But head out the front door, and there’s another side to a blackout. Shop payment systems are largely electronic, so you will need cash to pay for any supplies you want to stock up on. The banks also depend on their IT systems – so you might have trouble getting the cash you need. And if you delve deeper into the supply chain which keeps our populations fed and watered – as Marc Elsberg does in his chilling thriller “Blackout – Tomorrow Will Be Too Late”, which describes a Europe-wide power outage caused by a cyber-terrorist attack – you’ll start to realize how vulnerable we would be without our computer-controlled logistics, our electronic bank cards, our hospitals, or even our water supply.
Business continuity at stake
So, where would we be if our IT suddenly failed? In our increasingly digitalized society, most businesses today are utterly dependent on their IT infrastructure for their business activities, and a sustained power outage will have a serious impact on business continuity and, in turn, on the economy. That’s why data center builders and operators, for example, take great pains to ensure their back-up power systems are in place and functional. Of course, other critical infrastructure providers – such as hospitals and banks – also need such emergency systems in place. This requirement is being written into law across a range of countries and in the European Union, placing the burden of responsibility on infrastructure providers.
However, sensitive IT equipment is vulnerable not only to a sustained power outage. Fluctuations in the grid lasting less than 10 ms (a mere blink of an eye) can be a problem, and need to be bridged effectively to ensure the smooth operation of IT. Despite the laudable objectives behind the push for a change in energy policy around the world, such fluctuations are becoming more frequent in the age of renewable energy.
Renewable energy – green but volatile
Renewable energy takes many forms – hydroelectricity, wave power, biomass, solar and wind power, to name a few. All of these, biomass excepted, are geographically and climatically limited. Many of them are also volatile sources of power – not generating a stable output, but fluctuating wildly depending on the time of day, weather conditions, and so on. Unfortunately, a power grid can only function effectively if consumption and generation are kept in balance at all times. As soon as more power is consumed than generated, the grid frequency drops – and if the imbalance isn’t corrected quickly, you have a blackout.
Anyone who has lived in a hot climate knows what I’m talking about: a day in the high 30s (degrees centigrade) or up into the 40s – something which is becoming more common even in cooler and temperate climates – will have everyone turning on their company and domestic air conditioners, with government officials pleading for moderation to avoid a complete breakdown in the power grid. It’s the power equivalent of a ban on washing your car in a mid-summer drought. But it goes the other way too – if too much power is being generated and not used, then power plants will need to be taken offline to ensure stability.
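The balance requirement can be made concrete with a toy model: when load exceeds generation, grid frequency falls at a rate roughly proportional to the shortfall (the so-called swing equation). A minimal back-of-envelope sketch – all figures here are illustrative assumptions, not real grid parameters:

```python
# Toy model of grid frequency under a generation/load imbalance
# (simplified swing equation). All numbers are illustrative.
f_nominal = 50.0      # Hz (European grid frequency)
H = 5.0               # assumed system inertia constant, in seconds
S_base = 1000.0       # assumed total generating capacity, in MW

def df_dt(power_deficit_mw):
    """Rate of frequency change (Hz/s) for a given power deficit."""
    return -power_deficit_mw / (2 * H * S_base) * f_nominal

# A sudden 100 MW shortfall (10% of assumed capacity):
rate = df_dt(100.0)                 # -0.5 Hz per second
f_after_1s = f_nominal + rate * 1.0 # 49.5 Hz after one second
print(f"Frequency falls at {rate:.2f} Hz/s -> {f_after_1s:.2f} Hz after 1 s")
# European grids begin shedding load automatically around 49.0 Hz,
# so even a brief imbalance must be corrected within seconds.
```

The point of the sketch: without inertia and fast reserves, even a modest shortfall drives the grid toward automatic load-shedding in a matter of seconds.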
Coming back to renewables, sources like solar and wind power are extremely volatile, and require a very stable underlying basis of power generation. To make effective use of such types of power, there needs to be very effective storage of the excess power generated, in order to be able to use it to feed the grid when there is a deficit, as Staffan Reveman argues (see "Data Centers in the Transformation to Sustainable Energy Consumption"). Energy storage brings to mind batteries – and if you consider the batteries required for an electric car with today’s technology, imagine the challenge of powering a national grid with them. Certainly, there are other ways of storing “energy”, such as pumping water into a high-altitude lake, so that it can be released into a hydroelectric plant on demand to act as a stopgap. But high altitude lakes are also geographically limited.
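To get a feel for the scale of pumped storage, here is a back-of-envelope calculation using the potential energy formula E = m·g·h. The reservoir size, head, and efficiency figure are assumptions for illustration, not data from any real plant:

```python
# Back-of-envelope: energy stored in a pumped-hydro reservoir.
# All figures are illustrative assumptions, not a real plant.
g = 9.81                    # gravitational acceleration, m/s^2
volume_m3 = 10_000_000      # assumed: 10 million cubic metres of water
height_m = 300              # assumed head between upper and lower reservoir
density = 1000              # density of water, kg/m^3

energy_joules = volume_m3 * density * g * height_m
energy_mwh = energy_joules / 3.6e9      # 1 MWh = 3.6e9 J
usable_mwh = energy_mwh * 0.80          # assumed ~80% round-trip efficiency

print(f"Stored: {energy_mwh:,.0f} MWh, usable: {usable_mwh:,.0f} MWh")
# A sizeable mountain reservoir stores gigawatt-hours -- orders of
# magnitude more than today's largest grid-scale battery arrays.
```

Even this modest hypothetical reservoir holds thousands of megawatt-hours, which illustrates why pumped hydro remains the dominant form of grid storage – and why its geographic limits matter.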
Could batteries solve the problem of grid stability?
So the race is on to develop effective energy storage technology – not just for industries like e-mobility, but for the stability and maintenance of our power supply as a whole. The approaches taken in the technology race to develop grid-scale storage range from batteries that store energy chemically, like lithium-ion batteries, to flow batteries, which store energy in electrolytes rather than electrodes, to compressed-air storage and even the idea of using parked and plugged-in electric cars for night-time storage. Other storage solutions are also being trialed, such as at the Solana power plant in Arizona, where excess heat is stored in vats of molten salt and later used to boil water to drive turbines. Flywheels made from carbon fibers and operating in a vacuum are also promising, thanks to their virtually unlimited number of charge and discharge cycles.
Certainly, Tesla’s lithium-ion technology is looking increasingly promising, with battery production not only for e-cars, but also at the scale of a house – and most recently, the company’s endeavor to provide South Australia with a grid-scale battery array of 100 MW within 100 days. One Australian company, Zen Energy, wants to give Elon Musk a run for his money with its solar-based technology, also supplying grid-scale batteries to rectify the instability of the state grid in South Australia.
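For a sense of what a battery array of this size means in practice, a rough calculation helps – the household load figure and the 100 MWh capacity used below are illustrative assumptions:

```python
# How long could a grid-scale battery supply a group of homes?
# Capacity and average household load are assumptions for illustration.
battery_mwh = 100.0             # assumed usable battery capacity
avg_home_load_kw = 1.5          # assumed average continuous household draw
homes = 30_000

total_load_mw = homes * avg_home_load_kw / 1000   # 45 MW aggregate load
hours = battery_mwh / total_load_mw               # ~2.2 hours of supply
print(f"{homes:,} homes at {avg_home_load_kw} kW each: ~{hours:.1f} hours")
# Valuable for smoothing peaks and stabilising grid frequency,
# but far from carrying a whole grid through a sustained outage.
```

The arithmetic makes the role of such batteries clear: they are frequency-stabilisers and peak-bridgers, not a substitute for baseline generation.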
Protecting sensitive IT hardware from grid fluctuations
In the meantime, however, fluctuations in grids caused by the increased use of volatile power sources are still an issue that needs to be dealt with. But back to where we started: what does this mean for the security and reliability of our processes and data? Data centers not only need large-scale generators to provide an extended power supply if the supply from the grid drops completely; they also need uninterruptible power supply (UPS) systems, which pick up the slack within milliseconds and bridge the time until the main emergency power generators can come online (see "A UPS Solution with Kinetic Energy" by Armin Höfner from Telehouse Germany for one solution to this). They also need to test these systems regularly – optimally doing what are known as black building tests: cutting the power during operations to test emergency systems. To the uninitiated, perhaps a heart-stopping moment – but I have it on good authority that the data center operators know what they’re doing.
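The kinetic-UPS idea can be made concrete with a rough flywheel sizing exercise. The stored energy is E = ½·I·ω², and only the energy above a minimum speed is usable. All parameters below are assumptions for illustration, not the specification of the Telehouse solution or any real product:

```python
import math

# Rough sizing of a kinetic (flywheel) UPS.
# All parameters are illustrative assumptions.
mass_kg = 1000.0          # assumed flywheel rotor mass
radius_m = 0.5            # assumed rotor radius
rpm_full = 10_000         # assumed full operating speed
rpm_min = 5_000           # assumed speed below which output sags

I = 0.5 * mass_kg * radius_m**2        # solid cylinder: I = 1/2 m r^2

def stored_j(rpm):
    """Kinetic energy (J) at a given rotational speed."""
    omega = rpm * 2 * math.pi / 60     # convert rpm to rad/s
    return 0.5 * I * omega**2          # E = 1/2 I omega^2

usable_j = stored_j(rpm_full) - stored_j(rpm_min)
load_mw = 1.0                          # assumed data-center load to bridge
bridge_s = usable_j / (load_mw * 1e6)
print(f"Usable energy: {usable_j/1e6:.1f} MJ -> "
      f"bridges {load_mw} MW for ~{bridge_s:.0f} s")
# Ample margin for diesel generators, which typically start within seconds.
```

Even this hypothetical one-tonne rotor bridges a megawatt-class load for well over the 10 ms grid fluctuations mentioned earlier – the sizing challenge is covering the longer gap until the generators take over.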
Data centers can undergo a range of certifications to ensure that they are doing everything possible to guarantee reliability and availability at all times, such as ISO, Uptime Institute, TÜV, and the eco Data Center Star Audit (DCSA). The eco DCSA (as described in "Innovative Approaches to Energy Efficiency in Data Centers") assesses not only the structural and physical security of the data center, but also the redundancy concept, supply security, and organizational processes, as well as offering a green star certification of energy efficiency. For companies looking to outsource their IT to the cloud, good results in independent certifications offer guidance in choosing a data center matched to the criticality of their data and processes.