July 2017 - Data Center | Green IT | Quantum Computing | Energy Efficiency

HPC - Tackling Problems that are Just Not Solvable by Physical Methods (Transcript)

Testing, modelling, simulations – innovations in fields as disparate as blood circulation and aviation depend on high-powered computing applications to crunch the numbers and analyze the vast quantities of data produced in the development of new and better products. Andy Long from Hydro66 talked to dotmagazine’s Judith Ellis about the exciting applications of high performance computing (HPC), data center requirements for HPC, and abundant green power.

© Mat Richardson


Transcript

DOTMAGAZINE: Andy, tell me a little bit about high performance computing. How are companies using HPC to optimize product design and production? 

When you look at some of the more interesting or more kind of out-there applications I've read about (…), you can rapidly decrease the time to market for products, but also make massive savings in the development.

ANDY LONG: Well, what's really interesting about the use of high performance computing is the innovative ways that companies are using it. I mean, it goes back to the ‘80s originally, especially in the automotive field. But what it lets you do is try all sorts of different designs without having to build the product in real life. So, the classic example is crash simulation, but also noise, vibration, and harshness modelling. And then when you look at some of the more interesting or more kind of out-there applications I've read about – the acoustic simulation for different internal designs of dishwashers, or designing new aircraft by simulating the aerodynamic performance of a particular body shape at a certain speed and a given engine power – you can rapidly decrease the time to market for products, but also make massive savings in development. 

DOT: So what are the really exciting innovative applications that are happening at the moment? 

There are certain types of problems that are just not solvable by physical methods.

LONG: Ah, there's loads. I mean, there are certain types of problems that are just not solvable by physical methods. So, for example, using computational fluid dynamics (CFD) modelling to look at the way that blood flows around inside an artificial heart pump, companies have been able to miniaturize components and reduce turbulence to make sure that these things are actually safe to go into people. But there are also really interesting applications in the design of new drugs, and in crunching the massive amount of data that you get when you're creating these new tailored therapies. So, we see gene splicing and gene therapies being used, and a world of customized medicine, but it's only through HPC that you can actually gain the insights to understand how modifying different gene sequences will play out when you actually go in and do the chemistry. 

DOT: What impact do you think that quantum computing, when it eventuates, will have on HPC? 

If anybody successfully builds a large-scale general purpose quantum computer, then we've got much bigger problems to worry about because it breaks a whole bunch of cryptography.

LONG: Well, that's a big open question. I mean, there are those that think that, at the moment, the quantum machines that you can get (and I think the D-Wave machine is the most interesting and exciting one) can only really be used on a subset of problems – some of the classic NP-hard problems, like optimizing networks, or optimizing the use of radiation for radiotherapy, where it's a particular type of problem that fits really nicely into the quantum model. If anybody successfully builds a large-scale general purpose quantum computer, then we've got much bigger problems to worry about because it breaks a whole bunch of cryptography – in fact, it breaks essentially all widely deployed public-key cryptography, if you look up Shor's algorithm. So at the moment it's very specific, it's very expensive – you know, Google and NASA just went halves on one – but who knows? 20 years from now, even 10 years from now, I think we'll be seeing a lot of quantum machines in the deep learning space, in the AI space as well. 


DOT: Changing the focus slightly, and looking at the data center. What are the requirements for HPC applications in terms of infrastructure? How is it different from other applications? 

LONG: I think the biggest difference is that you have to pack the compute and the storage tightly together, and that can give legacy data center designs issues in terms of power density. So, you need to crunch a lot of compute and a lot of power into very small areas. Typically, this means going from the standard data center rack at five kilowatts of power up to the 10, 15, or 20 kilowatt range. And if you get into the 30, 40, and 50 kilowatt range, you need to start looking at technologies like liquid cooling.
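As a rough illustration of the ranges mentioned above (the server counts and wattages here are hypothetical, not from the interview), rack power density and the point where liquid cooling becomes relevant can be sketched as:

```python
# Hypothetical sketch: estimating rack power density from server counts.
# The thresholds mirror the kilowatt ranges mentioned in the interview;
# the per-server figures are illustrative assumptions.

def rack_power_kw(servers_per_rack, watts_per_server):
    """Total rack power draw in kilowatts."""
    return servers_per_rack * watts_per_server / 1000

def cooling_strategy(kw):
    """Rough rule of thumb based on the ranges above."""
    if kw <= 5:
        return "standard air cooling"
    elif kw <= 20:
        return "high-density air cooling"
    else:
        return "liquid cooling"

# A dense HPC rack: 40 nodes at 800 W each is 32 kW, well into
# the range where liquid cooling comes into play.
print(cooling_strategy(rack_power_kw(40, 800)))
```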

DOT: How energy efficient is HPC, and how can it be optimized? 

And then the last stage of creating efficiency is within the machine and the application itself: getting the code right and getting the algorithm right so that you get the most bang for your buck out of the cycles that are available to you.

LONG: So there are two or three things that affect the efficiency of an HPC environment. One is the environment in which it sits. So, if you're in a 10 or 20 year old legacy on-premise data center, you're probably running a PUE – which is power usage effectiveness – of somewhere between 1.5 and 2. If you're up at 2, it means for every kilowatt that you use on the computing, you're wasting a kilowatt in cooling and in other infrastructure. That's a lot of waste. You then have to look at the physical machines themselves, and the efficiency of the power supplies. We've seen Facebook and other providers doing DC power distribution in some cases, which is very interesting. And then the last stage is within the machine and the application itself: getting the code right and getting the algorithm right so that you get the most bang for your buck out of the cycles that are available to you. 
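The PUE arithmetic described above can be sketched in a few lines (the facility figures here are illustrative assumptions, not measurements from the interview):

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# A PUE of 2 means every kilowatt of compute costs another kilowatt of overhead.

def pue(total_facility_kw, it_load_kw):
    """Total facility power divided by IT equipment power."""
    return total_facility_kw / it_load_kw

def overhead_kw(total_facility_kw, it_load_kw):
    """Power spent on cooling and other non-IT infrastructure."""
    return total_facility_kw - it_load_kw

# A legacy facility drawing 2,000 kW to run 1,000 kW of IT load:
print(pue(2000, 1000))         # PUE of 2.0
print(overhead_kw(2000, 1000)) # 1,000 kW lost to cooling and infrastructure

# By contrast, a 1.07 PUE facility needs only 1,070 kW total
# for the same 1,000 kW of IT load.
print(overhead_kw(1070, 1000))
```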

DOT: At Hydro66, you do a lot of HPC in your data center. How does that work for you in terms of location – because you're a long way away from the rest of the world, I believe.

Instead of taking power to these centers of information and population, we're now seeing the data being taken to centers of power.

LONG: Yeah, well we're not that far – it's an hour from Stockholm. But the real change that's happened in the market is that for decades we were all told that you had to be in city centers to do any form of data center computing, even HPC, and that was because of the cost of bandwidth. You know, it was millions and millions of dollars just to get an STM-1 from A to B. But the cost of bandwidth has been falling exponentially, and at the same time the cost of power has been going up. And so, instead of taking power to these centers of information and population, we're now seeing the data being taken to centers of power. So you can imagine – whether it's the data streaming out of autonomous or semi-autonomous vehicles, other deep-learning type applications, the Internet of Things, or some of the more traditional industrial applications – with the huge amount of data being produced that then needs to be crunched and analyzed, the key thing is really the power infrastructure and the expandability. 

So, one of the things that we thought about when we built Hydro66 in Sweden was that we wanted to be renewable – so we were only ever going to do 100% renewable power, and to do that by connecting to the grid and using renewable electrons, if you like, not just buying certificates – but also expandability. The river that we are on has 4,000 megawatts of generation. One of the hydro plants has 980 megawatts of generation on it, and that's more than all of the data centers in London, Frankfurt, Amsterdam, and Paris put together. So you know, as we look at this exponential growth of data and of compute, for us one of the main things is going to be just having the headroom – and that was the primary reason for us locating out there. I mean, cold weather helps as well, right, but abundant four cent green power is a pretty good story. 


We think there is a fundamental rethinking of the way that data centers are designed and built going on. Facebook, Google, and Yahoo were at the forefront of that and we're adopting those principles.

DOT: Is there anything else you'd like to add? 

LONG: You know, it's going to be an interesting couple of years ahead. We're seeing some new standards evolving for data center resilience, both from the open standards that are being put forward by Lex Coors and the Green Grid, and also the really interesting work that the Infrastructure Masons group is doing. So we think there is a fundamental rethinking going on of the way that data centers are designed and built. Facebook, Google, and Yahoo were at the forefront of that, and we're adopting those principles and adopting those ideas – we're down at a 1.07 PUE now. So we're really excited about the next couple of years as the industry begins to really think about efficiency as well as about, you know, connectivity and the traditional concrete boxes that you see. 

Andy Long has over 20 years' experience in telecoms and finance across sales, business development, and management. At Black Green Capital he was responsible for leading the commercial launch and build of Hydro66, and he continues to advise on strategy. Prior to Black Green Capital he worked at Easynet, BSkyB, Fisher Investments, and AllianceBernstein. He holds a degree in Computer Science from the University of Edinburgh.


Please note: The opinions expressed in Industry Insights published by dotmagazine are the author’s own and do not reflect the view of the publisher, eco – Association of the Internet Industry.