The Supercomputer Race

Recent reports that China is barreling ahead in its development of supercomputers should give the U.S. considerable cause for concern. China has devoted significant resources to its supercomputer program in recent years, and earlier this year it captured the number two spot on the TOP500 list. TOP500.org ranks the world’s 500 fastest supercomputers according to their performance on the LINPACK benchmark, which solves a dense system of linear equations. The test yields a score based on the computer’s speed, measured in double-precision floating point operations per second (flops).
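For a concrete sense of how that measurement works, here is a minimal sketch in Python. NumPy’s dense solver stands in for the real HPL benchmark code, which runs a carefully tuned LU factorization distributed across thousands of nodes, and the problem size here is purely illustrative.

```python
import time
import numpy as np

n = 4096                                  # illustrative size; real HPL runs use far larger systems
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))           # dense random matrix, double precision
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)                 # LU factorization plus triangular solves
elapsed = time.perf_counter() - start

flops = (2 / 3) * n**3 + 2 * n**2         # classic LINPACK operation count
print(f"sustained: ~{flops / elapsed / 1e9:.1f} gigaflops")
```

A typical desktop running this might sustain tens of gigaflops; Nebulae’s 1.271 petaflops is tens of thousands of times that.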

To give a little perspective: China didn’t have a single supercomputer ranked in the TOP500 until the mid-1990s. In June 2004, it placed a system in the top ten for the first time. In May 2010, its Nebulae system became the second fastest in the world with a performance of 1.271 petaflops. (A petaflop is 10^15, or a quadrillion, floating point operations per second.) While China still has only one-tenth as many TOP500 supercomputers as the U.S., it has been quickly catching up on this metric as well. (Note: TOP500.org ranks the world’s most powerful, commercially available, non-distributed computer systems. Many countries have military and intelligence agency supercomputers that are not included in this list.)

China’s Nebulae system operates from the newly built National Supercomputing Centre in Shenzhen, which is also the site of recent, extensive construction that will presumably house even more serious supercomputing power in the near future. “There clearly seems to be a strategic and strong commitment to supercomputing at the very highest level in China,” stated Erich Strohmaier, head of the Future Technology Group of the Computational Research Division at Lawrence Berkeley National Laboratory.

The next major goal for supercomputers is the building of an exascale system, one capable of 10^18 flops, sometime between 2018 and 2020. Such a system would be hundreds of times faster than the Jaguar supercomputer at Oak Ridge National Laboratory, currently the world’s fastest at roughly 1.76 petaflops. The U.S. Exascale Initiative is committed to developing this technology, which brings with it many challenges of scale: power consumption, memory bandwidth, and hardware reliability all become far harder problems at that size. At the same time, Europe and China have accelerated their investment in high-performance systems, with the Europeans on a faster development track than the U.S. There are concerns the U.S. could be bypassed if it doesn’t sustain the investment needed to stay ahead.
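The back-of-envelope arithmetic behind that comparison is simple to check, taking Jaguar’s published TOP500 LINPACK score and a nominal 1-exaflop target:

```python
# Rough scale of the exascale jump. Jaguar's ~1.759-petaflop LINPACK
# score is its published TOP500 figure; the 1-exaflop target is nominal.
exascale_target = 1e18     # flops
jaguar_rmax = 1.759e15     # flops
print(f"~{exascale_target / jaguar_rmax:.0f}x faster")   # roughly 570x
```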

This isn’t just about who has the highest ranking on a coveted list; it’s not a sporting event with a big fanfare for the winner. These computers are crucial for modeling, simulation, and large-scale analysis, everything from modeling complex weather systems to simulating biological processes. As our understanding of highly complex systems grows, the only way we’re going to be able to keep moving forward is with ever more computing power. At the same time, exascale computing is anticipated to be a highly disruptive technology, not only because of what it will be able to do, but because of the technologies that will be created in the course of developing it. Ultimately, these technologies will end up in all kinds of new products, not unlike what happened with the Apollo space program. Falling behind at this stage would put the U.S. at a significant disadvantage in almost every aspect of science and product development.

Just as concerning, I believe, is what falling behind would mean for developing an AGI, or artificial general intelligence. There’s been a lot of speculation by experts in the field of AI as to when (if ever) we might develop a human-level artificial intelligence. A recent survey of AI experts indicates we could realize human-level AI or greater within the next couple of decades; more than half of the experts surveyed thought this milestone would occur by mid-century. While there are many different avenues that may ultimately lead to an AGI, it’s a good bet that most of them will require serious computing power, both for the research itself and potentially for the substrate of the AGI.

It’s been speculated that there are considerable risks in developing a computer with human-level or greater intelligence, but there are a number of risks in not doing so as well. Whoever builds the first AGI will very probably realize an enormous competitive advantage, both economically and politically. Additionally, the world faces a growing number of existential threats which AGIs could play a critical role in helping us to avoid.

In this time of budget deficits and spending cuts, it would be easy to dismiss Big Science programs such as the Exascale Initiative as less crucial to the nation’s well-being than they really are. That would be a grave mistake. The question isn’t how we can afford to commit ourselves to this research, but how we can afford not to.