Review: Superintelligence by Nick Bostrom
Since the advent of the computer age, and quite possibly before, popular media has seized on the idea of a superintelligent machine and turned it into a clichéd cautionary tale. Too often a monolithic mind with an ill-defined need for global destruction or domination is eventually thwarted because a chisel-jawed protagonist identifies and exploits some flimsy flaw in the machine’s operations or logic, saving the world at the last possible moment. Of course, few of these movies or books move beyond the trope to adequately consider the difficulties of building such a machine, nor do they really explore just how alien such an intellect could actually be.
For many years, artificial intelligence aficionados – AI professionals, cognitive scientists, philosophers and even hobbyists – have discussed the realities of just such a development in a much more serious vein. As with any debate about so speculative a subject, this has produced a number of good ideas and papers, along with a great deal of frivolous speculation. Some of the better work to come from this ongoing discussion has been several papers by Oxford professor of philosophy Nick Bostrom. Bostrom has now expanded considerably on that work in his new book, “Superintelligence: Paths, Dangers, Strategies.” He takes the reader step by step from the potential methods of achieving a superintelligence, and the difficulties in doing so, through to the likelihood of it actually occurring within a specific time frame. Given that several expert surveys converge on human-level machine intelligence arriving around the middle of this century, it seems prudent to begin giving the matter critical consideration as soon as possible.
One of the primary dangers Bostrom discusses (as have others) is the potential for an intelligence explosion – the point at which an artificial intelligence has acquired sufficient ability to direct its own self-improvement. This potentially creates a positive feedback loop that could see a human-level machine intelligence (HLMI) rapidly develop into a true superintelligence. Such a superintelligence is often defined as an “intellect that greatly exceeds the cognitive performance of human beings in virtually all domains of interest” and may surpass the combined intellect of the world’s entire human population. Such a mind would be completely alien to us, potentially with motives and goals very much in conflict with our own. This is a deeply worrying scenario, one that could in fact bring about the end of humanity.
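Bostrom captures these dynamics with a deliberately simple relation (paraphrasing the book’s chapter on the kinetics of an intelligence explosion):

Rate of change in intelligence = Optimization power / Recalcitrance

Optimization power is the quality-weighted design effort being applied to improving the system; recalcitrance is how strongly the system resists improvement. The feedback loop arises because, past some threshold, the system’s own growing intelligence adds to the optimization power directed at itself. If recalcitrance holds steady or falls while that happens, the rate of improvement compounds, and steady progress can tip abruptly into an explosion.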
Bostrom explores the potential severity of the existential threat such an entity could present, along with the considerable difficulty, if not impossibility, of containing it. The strategies for containment differ greatly depending on the kinetics of the intelligence explosion – that is, whether the intelligence achieves criticality in a fast run-up taking hours to days, a medium one taking months to years, or a slow run-up unfolding over decades or even centuries. Interestingly, Bostrom later argues that under certain global conditions, a fast run-up might actually be the preferred option. If we have any say in the matter, that is.
As Bostrom points out repeatedly, we need to deal with this issue well before an AI actually achieves takeoff. Any sufficiently powerful superintelligence is extremely likely to have motives and goals that conflict with our own and will strive to thwart any of our efforts to change or control it. More importantly, it will have the ability to do so. Basically, we have one opportunity to get this right; there won’t be any chance for a do-over. This is the point Elon Musk was making recently when he said, “with artificial intelligence we are summoning the demon.” Supernatural references aside, developing a superintelligence is playing with superheated fire.
It should be noted that “Superintelligence,” published by Oxford University Press, is an academic book, not a trade book. While well reasoned and clearly explained, this is no narrative romp through the subject, and the casual reader may find the material arduous at times. For instance, unless you have some familiarity with the methods used to build AI systems, it’s unlikely you’ll appreciate the benefits of Bayesian inference (in essence, systematically updating beliefs as evidence accumulates). But you don’t really need to. Far more important are concepts such as Bostrom’s explication of Eliezer Yudkowsky’s idea of providing a seed AI with humanity’s “coherent extrapolated volition,” or CEV – essentially a means of coding a humanity-compatible value system into an AI while avoiding the many problems culture-bound morality has historically caused for our species.
The book closes with an exploration of possible strategies in light of the challenges it has identified. Because the entire future existence of humanity may be at stake, these are decisions that cannot be taken lightly. Far from solving this challenging dilemma, “Superintelligence” forces us to understand just how complex a task it will really be. As important a work as Bostrom’s book is, it isn’t a book of answers. Rather, it is a clarion call to those who will one day find the solutions to this very considerable problem. That is, if we are lucky.