
05.07.22

THE DARK SIDE OF CHATBOTS

Chatbots are proliferating rapidly thanks to advancements in AI. Most of us have come into contact with them, commonly in a customer service capacity. Formulaic responses aside, they can save a lot of time. Who wants to spend over an hour on hold, only to be hung up on?


Chatbots mimic whatever information is fed to them. The result reflects the integrity (or otherwise) of whoever supplied that information. Given the nefarious potential, some have begun to question whether this emerging technology could be used to spread hatred and false information online.


If you thought we’d reached peak misinformation dystopia already, it appears you thought wrong. There’s a new robot on the block, and its modus operandi is straight out of an episode of Black Mirror.


Last year, Business Insider reported on a story first published in the San Francisco Chronicle about a new breed of incoming “hyperrealistic chatbots”. It begins:


“After Joshua Barbeau's fiancée passed away, he spoke to her for months. Or, rather, he spoke to a chatbot programmed to sound exactly like her.


“In a story for the San Francisco Chronicle, Barbeau detailed how Project December, a software that uses artificial intelligence technology to create hyperrealistic chatbots, re-created the experience of speaking with his late fiancée. All he had to do was plug in old messages and give some background information, and suddenly the model could emulate his partner with stunning accuracy.”


I don’t know about you, but a robot version of my loved one speaking to me after they’ve passed feels, at best, spooky. Remember the ‘holographic illusion’ of Kim Kardashian’s late father, Robert Kardashian, arranged by Kanye West for her 40th birthday? I’m still trying to forget it.


The article continues:


“It may sound like a miracle (or a "Black Mirror" episode), but the AI creators warned that the same technology could be used to fuel mass misinformation campaigns.”


As we’ve seen in the past with the Facebook / Cambridge Analytica debacle, misinformation spreads like wildfire online and can lead to very real political consequences. The pandemic accelerated the expansion of our virtual lives, to an even greater extent than we could have imagined back in 2016. And of course, targeted misinformation about both COVID-19 itself and the vaccine was a key feature of the online world in 2020/21. But just how does this new tech work, and why is it so much more worrying than what’s come before?


The article continues:


“Project December is powered by GPT-3, an AI model designed by the Elon Musk-backed research group OpenAI. By consuming massive data sets of human-created text (Reddit threads were particularly helpful), GPT-3 can imitate human writing, producing everything from academic papers to letters from former lovers.


“Misinformation is already rampant on social media, even with GPT-3 not widely available. A new study found that YouTube's algorithm still pushes misinformation, and the non-profit Center for Countering Digital Hate recently identified 12 people responsible for sharing 65 percent of COVID-19 conspiracy theories on social media. Dubbed the ‘Disinformation Dozen,’ they have millions of followers.”


Scouring Reddit to inform your humanity – what could possibly go wrong? It’s a sad fact that misinformation and extreme opinions are what capture our attention most quickly in the digital age. The more polarizing the content, the more engagement it is likely to get, whether driven by outrage or approval.


Unfortunately, sowing the seeds of division is very easy to do. Such content is instrumental in fostering an “us vs. them” mentality. The upshot is that we all become more deeply entrenched in our own views, instead of trying to find a middle ground.


The article continues:


“In the last decade, an approach to AI known as “machine learning” has leaped forward, fusing powerful hardware with new techniques for crunching data. AI systems that generate language, like GPT-3, begin by chewing through billions of books and web pages, measuring the probability that one word will follow another. The AI assembles a byzantine internal map of those probabilities. Then, when a user prompts the AI with a bit of text, it checks the map and chooses the words likely to come next.”
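
To make that “map of probabilities” a little more concrete, here’s a minimal sketch in Python: a toy bigram model that counts which word follows which in a training text, then generates new text by sampling from those counts. The corpus and function names are my own illustration, not from the article; real systems like GPT-3 learn a neural network over billions of parameters rather than a simple lookup table, but the “predict the likely next word” intuition is the same.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Count how often each word follows each other word."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        counts[current_word][next_word] += 1
    return counts

def generate(model, prompt, length=10):
    """Extend a prompt by repeatedly sampling a likely next word."""
    output = prompt.split()
    word = output[-1]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break  # no known continuation for this word
        candidates = list(followers.keys())
        weights = list(followers.values())
        word = random.choices(candidates, weights=weights)[0]
        output.append(word)
    return " ".join(output)

# A toy corpus. GPT-3 does the same thing in spirit, but over half a
# trillion words, with learned parameters instead of a lookup table.
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug"
model = train_bigram_model(corpus)
print(generate(model, "the cat"))
```

Feed this toy a few dozen words and it babbles. Feed the same basic idea half a trillion words and enough computing power, and it starts writing letters from former lovers.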

The sheer volume of information processed by AI is mind-boggling. Equally mind-boggling is the amount of information we’re exposed to daily. It’s hard to remember a time when we weren’t bombarded with high-adrenaline news from social media, television, and radio. Whether it’s a real-time news feed about the spread of a pandemic or the unfolding outbreak of war, our brains are on constant high alert. As differences of opinion play out on social media, much of the reaction is fuelled by incorrect information, leaving many people feeling more divided than ever.


This latest version of GPT is built on far more “parameters” of information than its predecessors:


“The first version of GPT, built in 2018, had 117 million internal “parameters.” GPT-2 followed in 2019, with 1.5 billion parameters. GPT-3’s map is more than 100 times bigger still, assembled from an analysis of half a trillion words, including the text of Wikipedia, billions of web pages and thousands of books that likely represent much of the Western canon of literature.


“It’s easy to see how bad actors could abuse GPT-3 to spread hate speech and misogyny online, to generate political misinformation and to impersonate real people without their consent.”
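
A quick back-of-the-envelope check puts those quoted figures in perspective. One caveat: the 175 billion parameter count for GPT-3 below is the widely reported public figure, my addition; the article itself only says “more than 100 times bigger”.

```python
# Parameter counts for each GPT generation. The GPT-3 figure is the
# widely reported public number, not stated in the quoted article.
gpt_1 = 117_000_000        # 117 million (2018)
gpt_2 = 1_500_000_000      # 1.5 billion (2019)
gpt_3 = 175_000_000_000    # 175 billion (assumed from public reports)

print(f"GPT-2 vs GPT-1: {gpt_2 / gpt_1:.0f}x")  # ~13x
print(f"GPT-3 vs GPT-2: {gpt_3 / gpt_2:.0f}x")  # ~117x: "more than 100 times bigger"
```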


Bad bots were instrumental in spreading myths and rumours about COVID-19. But not all bots are bad. There is nothing inherently sinister about the computer programmes themselves, and companies like Twitter use bots to enhance people’s experience online. Like the chatbot that saves you the indignity of hold music, many are being used in ways that make modern life more convenient.


But with GPT-3, the potential for misuse is so great that the company that created it hasn’t fully released it:


“OpenAI (which, through a spokesperson, did not make anyone available to answer questions for this story) cited such dangers when it announced GPT-2 in February 2019. Explaining in a blog post that GPT-2 and similar systems could be ‘used to generate deceptive, biased, or abusive language at scale,’ the company said it would not release the full model. Later it made a version of GPT-2 available; GPT-3 remains in beta, with many restrictions on how testers can use it.”


It’s comforting to know there are restrictions in place on how this new tech can be used. With great power comes great responsibility, after all. But it also suggests we’ve yet to reach the apex of ‘fake news’, unless sophisticated ways of identifying what’s a bot and what’s not emerge. Otherwise, who could predict what further consequences misinformation might have for public health or democracy, not to mention our day-to-day sanity?


However, one professor is keen to assert that, for all its ingenuity, GPT-3 doesn’t truly have imposter potential:


“Despite their size and sophistication, GPT-3 and its brethren remain stupid in some ways. ‘It’s completely obvious that it’s not human intelligence,’ said Melanie Mitchell, the Davis Professor of Complexity at the Santa Fe Institute and a pioneering AI researcher. For instance, GPT-3 can’t perform simple tasks like tell time or add numbers. All it does is generate text, sometimes badly - repeating phrases, jabbering nonsensically.”


So, there you have it. If you are in any doubt it’s a real person you’re dealing with, just ask them what time it is, or to recite their times tables. That should foil them.


On a serious note, it serves as a timely reminder – don’t let your guard down online. As chess grandmaster and chairman of the Human Rights Foundation, Garry Kasparov, once tweeted: “The point of modern propaganda isn't only to misinform or push an agenda. It is to exhaust your critical thinking, to annihilate truth.”
