Image by Joe Woods

06.09.20

THE AI TAKEOVER

A lot of things that come from the mouth of Elon Musk – or his Twitter account, for that matter – should be taken with a shovel-load of salt. But when he talks about the potential species-ending threat that Artificial Intelligence (AI) poses, perhaps we’d do well to sit up and listen.


Quoting the Monty Python line, “Nobody expects the Spanish Inquisition”, Musk hasn’t been shy in sharing his fears that one day – in the not-too-distant future – AI will turn on us.


Way back in 2014, Musk called for greater regulatory oversight during an interview at MIT’s AeroAstro Centennial Symposium:


"I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful... I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish."


He even went so far as to compare advancements in the field to summoning a demon:


"With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out."


Fast forward to a July 2020 interview with Maureen Dowd for The New York Times, and Musk’s cautionary stance remains unchanged:


"My assessment about why A.I. is overlooked by very smart people is that very smart people do not think a computer can ever be as smart as they are… And this is hubris and obviously false."


Musk’s view is informed by his hands-on work with AI at Tesla, allowing him to state with some confidence:


"We’re headed toward a situation where A.I. is vastly smarter than humans and I think that time frame is less than five years from now. But that doesn’t mean that everything goes to hell in five years. It just means that things get unstable or weird."


The Cambridge Dictionary defines 'unstable' and 'weird' as:


Unstable (adjective): not firm and therefore not strong, safe, or likely to last

Weird (adjective): very strange and unusual, unexpected, or not natural


It may not represent everything going to hell – to use Musk’s terminology – but it doesn’t sound like a walk in the park either. “Not likely to last” is especially sinister when you apply it to the human race. Perhaps those dystopian novels and movies about robot uprisings were right on the money – maybe it really is a question of “when” rather than “if”.


Musk’s main concern is reportedly DeepMind, a secretive London-based AI lab owned by Google, in which Musk himself was an early investor:


"Just the nature of the A.I. that they’re building is one that crushes all humans at all games…I mean, it’s basically the plotline in ‘War Games.’"


Now, I must admit I haven’t seen WarGames (or, if I have, it’s long forgotten), but the plot synopsis makes for interesting reading:


"The film follows David Lightman (Matthew Broderick), a young hacker who unwittingly accesses War Operation Plan Response (WOPR), a United States military supercomputer originally programmed to predict possible outcomes of nuclear war. Lightman gets WOPR to run a nuclear war simulation, believing it to be a computer game. The computer, now tied into the nuclear weapons control system and unable to tell the difference between simulation and reality, attempts to start World War III."


But this isn’t the first time Musk has referenced a movie to make his point. Explaining his involvement with DeepMind to US news channel CNBC in June 2014, Musk stated:


"I like to just keep an eye on what's going on with artificial intelligence. I think there is potentially a dangerous outcome there… There have been movies about this, you know, like Terminator. There are some scary outcomes. And we should try to make sure the outcomes are good, not bad."


Unsurprisingly, Demis Hassabis – CEO of DeepMind – takes a different view, as his thought piece for The Economist illustrates:


"I believe artificial intelligence (AI) could usher in a new renaissance of discovery, acting as a multiplier for human ingenuity, opening up entirely new areas of inquiry and spurring humanity to realise its full potential. The promise of AI is that it could serve as an extension of our minds and become a meta-solution."


He continues:


"Traditional AI programs operate according to hard-coded rules, which restrict them to working within the confines of what is already known. But a new wave of AI systems, inspired by neuroscience, are capable of learning on their own from first principles. They can uncover patterns and structures that are difficult for humans to deduce unaided, opening up new and innovative approaches. 


"For example, our AlphaGo system mastered the ancient game of Go just by competing against itself and learning from its own mistakes, resulting in original, aesthetically beautiful moves that overturned thousands of years of received wisdom. Now, players of all levels study its strategies to improve their own game…


"…By deepening our capacity to ask how and why, AI will advance the frontiers of knowledge and unlock whole new avenues of scientific discovery, improving the lives of billions of people."


Now that doesn’t sound so scary, does it? If anything, it sounds exhilarating. A mind-blowing, “imagine the possibilities” moment that tech fanatics like me relish… But to accept what Hassabis says at face value would be naïve. Consider why the CEO of a company is writing an article for The Economist in the first place – it’s a PR exercise. A prime opportunity to tout the company’s wares to a wider audience.


I have no doubt that AI will prove hugely beneficial in a multitude of ways – even so far as to improve the lives of billions of people, as Hassabis states. But it would be folly to overlook the risks such advancements could pose.


As the late, great Professor Stephen Hawking warned in a 2014 interview with the BBC:


"The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever increasing rate…Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."


The following year, Hawking signed an open letter calling for safety measures to be instituted. To date, the letter has been signed by over 8,000 people, including Hassabis, Apple co-founder Steve Wozniak and – you’ve guessed it – Elon Musk.


A document entitled "Research Priorities for Robust and Beneficial Artificial Intelligence" was attached to the letter, offering examples of research directions that could help maximise the societal benefit of AI. The document includes a quote from Microsoft’s now-Chief Scientific Officer, Eric Horvitz:


"...we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes - and that such powerful systems would threaten humanity. Are such dystopic outcomes possible? If so, how might these situations arise?... What kind of investments in research should be made to better understand and to address the possibility of the rise of a dangerous superintelligence or the occurrence of an 'intelligence explosion'?"


I don’t know about you, but I find this reassuring. At least on some level. To know that these questions are being asked by some of the greatest minds of the modern era is to know that we – as a species – are in some small way prepared for what comes next. However weird it may be…
