Exascale Supercomputers: The Next Frontier
The last few years finally saw the arrival of supercomputers capable of petascale performance. In all, seven systems from the U.S., China, Japan and France had achieved the milestone of processing a million billion floating-point operations per second (flops) by the end of 2010. But even before this target was reached, computer scientists and engineers were setting their sights on an even loftier goal: exascale computing.
The supercomputer has become a mainstay of both theoretical and applied science. Climate modeling, genome analysis, protein folding, nuclear fusion research and many other fields all benefit from continuing gains in processing power. Now, with a range of exascale initiatives, the U.S. and Europe have set a goal of building a supercomputer one thousand times more powerful than any today. And they want to do it sometime between 2018 and 2020.
At first glance, this goal seems achievable. After all, three orders of magnitude in seven to nine years falls comfortably within the scope of Moore’s Law. But whereas the move from terascale to petascale processing was considered evolutionary, the jump to exascale will require revolutionary advances. Simply scaling up current technology won’t work. For instance, the Cray Jaguar supercomputer at Oak Ridge National Laboratory has more than a quarter of a million processor cores, over 360 terabytes of memory and uses 7.8 megawatts at peak power. Its combination of air and liquid cooling removes enough waste heat to potentially warm several large buildings. Scaling such a system up a thousand-fold just isn’t feasible, as the rough numbers sketched below make clear.
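To see why, here is a minimal back-of-envelope sketch using the approximate Jaguar figures cited above. The flat thousand-fold scaling factor, and the assumption that nothing else improves, are mine and serve only to illustrate the point.

```python
# Hypothetical sketch: what naive 1000x scaling of a Jaguar-class system
# would imply. The inputs are the approximate 2010 figures cited above;
# the uniform scaling factor is an illustrative assumption, not a design.

jaguar_cores = 250_000        # roughly a quarter of a million cores
jaguar_memory_tb = 360        # terabytes of memory
jaguar_power_mw = 7.8         # megawatts at peak

scale = 1000                  # petascale -> exascale, all else held equal

print(f"Cores:  {jaguar_cores * scale:,}")                   # ~250 million cores
print(f"Memory: {jaguar_memory_tb * scale / 1000:,.0f} PB")  # ~360 petabytes
print(f"Power:  {jaguar_power_mw * scale / 1000:,.1f} GW")   # ~7.8 gigawatts
```

Nearly eight gigawatts is several large power plants’ worth of electricity for a single machine, which is why the power and cooling problems dominate the exascale discussion.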
So new technologies will have to be developed. We’ll need processors with considerably lower power consumption and heat output, faster optical interconnects and improved algorithms that make better use of available processing cycles. And that’s just for starters.
Will we be able to achieve this goal in the timeframe that’s been set? Possibly, but only just. Peter Kogge, a professor of computer science and engineering, recently wrote in IEEE Spectrum about his concerns over realizing exascale computing. Kogge was editor and study lead for the Exascale Computing Study initiated by the Defense Advanced Research Projects Agency (DARPA). That study illuminated a number of obstacles that will have to be overcome. (Note: The DARPA study was commissioned in 2007 to determine the feasibility of exascale computing by 2015.)
But the dream of exascale supercomputers is important enough that DARPA, the U.S. Department of Energy and private industry are forging ahead despite such concerns. Last year, Intel opened three new centers dedicated to exascale research: the Exascale Computing Research Center in Paris, France, the ExaCluster Laboratory in Juelich, Germany and the ExaScience Lab in Leuven, Belgium.
Why is exascale supercomputing so important? The world faces significant challenges in the coming decades. Dealing with climate change, peak oil and a multitude of engineering challenges will require tremendous computing resources. At the same time, we’ve entered an era of massive data sets. Everything from genomics and proteomics to molecular modeling to nanotechnology will benefit from these advances. In short, much of the science of the 21st century will be impacted by exascale supercomputing.
The great thing about a grand challenge like this is that even if it takes longer than expected to achieve, all kinds of research and innovation will yield benefits along the way. New processor architectures, improvements in energy efficiency and advances in parallel algorithms are but a few of the gains we can expect to eventually trickle down to other, more publicly accessible uses.
But the U.S. and Europe aren’t the only players pursuing the exascale dream. China has made clear its intention to keep building the world’s fastest supercomputers. As of November 2010, its Tianhe-1A supercomputer was ranked the world’s fastest by TOP500.org. (TOP500.org ranks the world’s 500 fastest supercomputers according to their performance on a benchmark that solves a dense system of linear equations; a simplified sketch of the idea appears below.) China is also building its third National Supercomputing Center in Changsha, Hunan Province, a massive complex expected to be completed by the end of 2011. China has set a goal of building an exascale supercomputer sometime between 2016 and 2020, which should give us cause for concern. Given that focus, and the speed with which Tianhe-1A took the top spot, China could jump far ahead if we don’t make this a priority.
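For readers curious about what that ranking actually measures, here is a minimal single-machine sketch of the idea behind the benchmark. The real TOP500 code runs a distributed dense solve across thousands of nodes; this NumPy version, with an arbitrarily chosen problem size, only illustrates how a flops figure falls out of solving a dense linear system.

```python
# Sketch of the TOP500 benchmark idea: solve a dense system Ax = b, count
# the floating-point operations the factorization requires, and divide by
# wall-clock time. Single-node illustration only; n is an assumed size.
import time
import numpy as np

n = 4000                                   # illustrative problem size
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)                  # LU factorization + triangular solves
elapsed = time.perf_counter() - start

flop_count = (2.0 / 3.0) * n**3 + 2.0 * n**2   # standard operation count for the solve
print(f"{flop_count / elapsed / 1e9:.1f} gigaflops")
```

The higher a machine’s sustained flops rate on this one well-structured problem, the higher it climbs on the list.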
Fortunately, the Obama administration has asked for $126 million in the 2012 budget for the development of next-generation supercomputers. Whether it will receive approval from the new Congress remains to be seen. In my opinion, a decision not to fund such important technology could have far-reaching consequences for our competitiveness in the world and would show a real lack of foresight.