Bard AI: The Fault Lies in Our Selves

I’ve been sitting back for a bit, letting the dust settle around all of the noise and breathless speculation regarding several advances in generative AI, particularly ChatGPT. Generative AI based on large language models (LLMs) has tremendous potential – from code generation and copywriting to recommendation systems and AI assistants. The impact shouldn’t be understated, but neither should the potential for negative and unanticipated consequences, which is what I want to explore here.

As so often happens with new technologies, the media has been quick to jump on these recent developments to fill their news cycles, while the corporations that stand to make a buck are all too happy to let them. Compounded by marketing and sales hype, the coverage quickly becomes a spiraling feedback loop that fills our channels with impossible, or at least infeasible, promises of a utopic vision of the future.

Meanwhile, in the not-so-utopic present, we shouldn’t be surprised to find many of these technologies being turned to the creation of pornography. As we’ve seen throughout history, this is almost an inevitability, a reality that can be traced from Stone Age artifacts to the printing press, early Internet bulletin boards, VHS’s triumph over Betamax, DVDs, video streaming and even online payments. Many of these technologies were just gleams in the eyes of their inventors until the all-too-dependable driver of human sexual response propelled them to mass adoption. Make of it what you will: for so many such advances, sex remains the ultimate killer app.

However, many of these new technologies are also enabling new forms of exploitation and abuse. For instance, deepfakes are now being created and used for everything from cyberbullying to revenge porn. These violations are anything but victimless crimes and are already leading to considerable psychological distress, severe depression and even suicide among their victims.

With generative AI, this trend will no doubt continue over the coming years, despite whatever safeguards we may try to create around it. We’re already seeing a host of objectionable new tools that leverage these recent advances, allowing almost anyone to create exploitative deepfakes and AI-generated porn. Even when not used for creating pornography outright, far too many publicly available AI image generators routinely produce imagery that sexually objectifies the subject.

For instance, many users, particularly women, have found that programs like Lensa, which allow users to portray themselves in historical and fictional guises, overly sexualize them, even to the point of rendering some users in a state of undress.

This shouldn’t come as a tremendous surprise since these engines are mostly being trained on content that already exists, nearly all of it from the Internet. In other words, their foundations are all of our own predilections and peccadillos. What were once comparatively private desires that might be shared with a loved one or within limited subgroups are now being incorporated into a globally aggregated fantasy pastiche for all to see.

Unfortunately, those may be the least of our worries. With the recent release of ChatGPT from OpenAI, we find ourselves faced with several issues I think should concern us to a much greater degree. As they say, a picture paints a thousand words, but in the case of pornography they’re pretty much the same words over and over again. Chatbots based on LLMs, by contrast, achieve their magic through a brilliant but mindless statistical manipulation of language based on every combination of all the words that have ever been written. (Or at least all those that were available on the internet as of a few years ago.) Given ChatGPT’s ability to summarize complex content quickly and on demand, it should come as no surprise that this technology is now being considered a contender for the next generation of internet search.
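To make that “statistical manipulation” a little more concrete, here is a deliberately crude sketch: a toy bigram model in Python that picks each next word purely from observed word-pair frequencies. The miniature corpus and its output are invented for illustration, and real LLMs use vastly more data and transformer networks rather than lookup tables, but the underlying move of predicting the next token from statistics over prior text is the same basic idea.

```python
import random
from collections import defaultdict

# Invented miniature "corpus"; real systems train on hundreds of billions of words.
corpus = (
    "the model predicts the next word "
    "the model has no understanding of the next word "
    "the next word is chosen by statistics"
).split()

# Count which words follow which: the entire "knowledge" of this toy model.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(seed: str, length: int = 10) -> str:
    """Extend the seed by repeatedly sampling a next word from observed frequencies."""
    words = [seed]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:   # no observed continuation, so stop
            break
        words.append(random.choice(candidates))  # duplicates make frequent pairs more likely
    return " ".join(words)

print(generate("the"))
# Possible output: "the next word is chosen by statistics"
```

The point of the toy is that nothing in it knows what any of the words mean; it only knows what tends to follow what, which is why scale alone doesn’t confer understanding.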

Of course, this has Google in a quandary because search is the basis for the majority of its revenue. Ironically, Google and other tech giants have been comparatively conservative in releasing their more recent AI tools. After all, it was researchers at Google Brain who in 2017 introduced the transformer architecture that’s used in LLMs like GPT-3, as well as Google’s own LaMDA, which powers its recently introduced chatbot, Bard. Yet they’ve routinely held back from rushing their work into the public domain while various technical and ethical concerns are explored. On the other hand, startups like OpenAI, which have far less to lose and much to gain, race to get to market as quickly as possible. Now faced with ChatGPT, the general consensus is that Google must rapidly adapt and innovate or risk being left behind.

So, Google is rushing to get back out in front. Just this past week, the tech giant announced its own LLM-powered chatbot, Bard. Unfortunately, it also held a demo in which Bard blundered badly, providing the wrong answer to what should have been a straightforward query.

This worries me a great deal. Not that Google’s stock took a hit or that they may miss their quarterly projections, but that in using these transformers to drive next-gen search, we could perpetuate and reinforce a growing body of erroneous information that comes to pervade much of the world’s knowledge. Unfortunately, I don’t believe the solution is simply better curation of the initial training data sets or adding certain filters. There is far too much nuance in language and human interaction for this approach to succeed without systems that can more fully understand the context of the question and the commonsense rules of the world we live in.

As AI ethicist Margaret Mitchell recently pointed out, the error in Google’s demo stems from the question’s phrasing. More specifically, the answer being returned is predicated on Bard’s first-order-logic interpretation of the question. (Interpretation, not understanding—a critical distinction.)

Unfortunately for Google, most people don’t think or communicate or ask questions using first-order logic. Instead, we use language that leverages our implicit common sense, relative to the context of both the question and the questioner. When someone asks us a question, we intuitively understand which of a range of meanings each part of that question conveys while making allowances for any less-than-ideal logic structure they may throw into the mix. The chatbot doesn’t do this, nor is it going to be consistently helpful if we can’t adhere to the logic it requires. As I’ve written for years, the evolution of interfaces over the past century has routinely made our interactions with machines more natural, not less so. Forcing us to now frame queries in a machine-friendly format would be a major step in the wrong direction.

I’m not saying LLM-powered search can’t be a very useful tool, especially when trained on a limited and targeted subset of data for a purpose like creating a helpdesk chatbot. But a general knowledge search engine based on the good, the bad and the ugly that is shared across the internet is an epistemological disaster in the making. Errors will routinely be made, many of which will get past whatever checks, balances and filters we may have in place. When this occurs a few times, it will probably just be an inconvenience. But if it is allowed to continue over years and decades, we stand to iteratively pollute the collective body of knowledge we’ve assembled since the internet began, and possibly well before.
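As a rough illustration of that narrower, targeted case, here is a minimal sketch of a hypothetical helpdesk assistant that is only allowed to answer from a small, curated set of support articles rather than the open internet. Everything here is invented for the example: the article names and text are made up, the retrieval is naive keyword overlap chosen for clarity rather than quality, and call_llm is a placeholder for whichever completion API you actually use.

```python
import re

# Invented support articles standing in for a curated helpdesk knowledge base.
HELP_ARTICLES = {
    "password-reset": "To reset your password, open Settings > Account and choose Reset Password.",
    "billing": "Invoices are issued on the first of each month and can be downloaded from the Billing page.",
    "data-export": "You can export your data as a CSV file from Reports > Export.",
}

def _words(text: str) -> set:
    """Lowercased word set with punctuation stripped."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def retrieve(question: str) -> str:
    """Pick the article sharing the most words with the question (naive keyword overlap)."""
    q = _words(question)
    return max(HELP_ARTICLES.values(), key=lambda text: len(q & _words(text)))

def build_prompt(question: str) -> str:
    """Constrain the model to the retrieved article instead of its open-ended training data."""
    article = retrieve(question)
    return ("Answer using ONLY the support article below. "
            "If it does not contain the answer, say you don't know.\n\n"
            f"Article: {article}\n\nQuestion: {question}")

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: wire this up to a real completion API of your choosing.
    raise NotImplementedError

print(build_prompt("How do I reset my password?"))
```

Constraining the model to a vetted corpus doesn’t eliminate errors, but it keeps the universe of things it can assert small enough to audit, which is precisely what a general-knowledge search engine built on the whole internet cannot offer.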

Information and knowledge can’t exist without errors, of course. But as a narrative-based species, we are especially reliant on and vulnerable to stories. Our sense of who we are, our relationship to our families, friends, culture, society and institutions, even our consciousness itself, are all based in the narratives we create, share and internalize. We need only look to the recent cult of QAnon to have an inkling of how bad this could become.

This is far from just being about unintentional errors. It’s easy to foresee how such a search engine could be weaponized, whether for general disinformation, profit, political gain, or even literally rewriting history, a favorite pastime of despots everywhere. If this sounds alarmist, consider that only a few years ago an earlier kind of chatbot, Microsoft’s Tay, needed less than 24 hours of intentional abuse on Twitter to be converted into a foul-mouthed Nazi sympathizer. During the past decade, Russia has deployed an army of troll farms and chatbots to steer public opinion on social media and influence multiple elections around the world. So don’t think this sort of thing can’t and won’t continue; it’s happening even as you read these words.

But worse than all of this is what happens to public trust when everything can be questioned as a potential fabrication or an intentional, outright lie. As Orwell wrote: “If thought corrupts language, language can also corrupt thought.” Society is based on many things, but when we can’t even know whether what we think we know is true, everything really falls apart.