Human Nature and AI Doomerism

Midjourney image of a robot and a boy

We live in a time of immense and often rapid change. Being in the midst of all of this, it’s easy for us to see much of the world in negative terms. Perhaps nothing illustrates this tendency so much as AI doomerism.

Certainly, some degree of this alarmist response to AI and machine intelligence has run throughout the 20th century. But far from being limited to our modern era, such reactions to intelligent creations go back to Mary Shelley’s Frankenstein, the Golem of Jewish folklore, and beyond.

Perhaps it should come as no surprise, then, that this negative response has only increased in recent years as artificial intelligence has grown in capability. This has occurred for many reasons, not least of which are our own evolved tendencies and biases. As a species, we ascribe human characteristics and motivations to the world around us far too easily. This trait runs through beliefs and metaphors as diverse as animism, Mother Nature, mythologies, and the Gaia hypothesis. The presumed evolutionary benefit is that by assigning human traits to other aspects of nature as our species was developing, we created an interpretation and understanding of the world that increased our prospects of survival.

We saw this anthropomorphic response during the early years of AI, when Joseph Weizenbaum created an early chatbot called ELIZA in order to study communication between humans and machines. Based on a series of text-based scripts and simple word matching, ELIZA’s most famous script, DOCTOR, interacted with users in the manner of a Rogerian psychotherapist. To Weizenbaum’s surprise and chagrin, many users, including his own secretary, interacted with the program as though it were a real person.
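
To give a sense of how simple the underlying mechanism was, here is a minimal sketch of ELIZA-style keyword matching and pronoun reflection in Python. The rules and canned responses below are invented for illustration; they are not drawn from Weizenbaum’s actual DOCTOR script:

```python
import re

# A few illustrative ELIZA-style rules: a keyword pattern paired with a
# response template. These are invented examples, not the DOCTOR script.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (.*)", re.IGNORECASE), "Why does your {0} concern you?"),
]

# Swap first- and second-person words so the echoed fragment reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    # Default Rogerian deflection when no keyword matches.
    return "Please, go on."

print(respond("I am unhappy about my job"))
# -> Why do you say you are unhappy about your job?
print(respond("The weather is nice"))
# -> Please, go on.
```

There is no understanding anywhere in this loop, only pattern matching and templated echoes, which makes the readiness of users to treat the program as a person all the more striking.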

Since that time, we’ve seen countless similar responses from lay individuals and even from software developers and computer scientists. Whether this reaction is hardwired into us or a product of our own socialization and enculturation remains to be seen.

One consequence of all this is that we are all too ready to accept the notion that these devices will soon become conscious, go rogue, and act with human-like motives and ingenuity. We do this even though the silicon circuits on which they are based are many orders of magnitude less complex than the neurons and brains we’ve sought to emulate.

This isn’t to say that AI can’t or won’t be a threat; it can be, and it already is. Not because it wants to take over the world, but because the people using it already have consciousness, motives, and ingenuity. In other words, the biggest risks we face from AI come not from AI itself but from the people who wield it.

AI is and will remain a tool of tremendous usefulness, especially in our increasingly complex world. The benefits we are already seeing and will continue to see from it – from drug discovery and healthcare to robotics and cybersecurity to logistics and defense – far outweigh the risks and drawbacks of developing and using it.

But rather than worry about an impending AI apocalypse, an exceedingly unlikely development, we need to focus on the far more likely and immediate risks. These include privacy intrusion, algorithmic manipulation via social media and other platforms, employment bias, copyright infringement, and cybercrime. None of these will be initiated by the AI systems themselves, but by end users, corporations, governments, and even the AI developers. These are the true risks of AI that we need to anticipate, identify, and address so that we can continue to build the future we prefer.