
THE DANGER OF AI IS WEIRDER THAN YOU THINK

01.07.20

WHAT'S IT ABOUT?

Overview from TED

The danger of artificial intelligence isn't that it's going to rebel against us, but that it'll do exactly what we ask it to do, says AI researcher Janelle Shane.

Sharing the weird, sometimes alarming antics of AI algorithms as they try to solve human problems - like creating new ice cream flavors or recognizing cars on the road - Shane shows why AI doesn't yet measure up to real brains.

"It has the approximate computing power of an earthworm, or maybe at most a single honeybee, and actually, probably maybe less."

Janelle Shane

AI Researcher

While moonlighting as a research scientist, Janelle Shane found fame documenting the often hilarious antics of AI algorithms.


MY TAKE...

Artificial Intelligence (AI) has come a long way in the past few decades. So much so, you’d be forgiven for thinking that those clever robots from sci-fi movies that seemingly “think” for themselves may already exist in real life. You’d be wrong. For now, at least…


Janelle Shane – in her intriguing TED Talk “The Danger of AI is Weirder Than You Think” – explains the common misconception:


"In movies, when something goes wrong with AI, it's usually because the AI has decided that it doesn't want to obey the humans anymore, and it's got its own goals, thank you very much.


"In real life, though, the AI that we actually have is not nearly smart enough for that. It has the approximate computing power of an earthworm, or maybe at most a single honeybee, and actually, probably maybe less."


Now I’m no David Attenborough but I imagine earthworms to be pretty basic creatures, capable of essential survival skills (eating, mating, sleeping?) but not much else. That said, I’ve never actually considered whether earthworms sleep before… A quick Google search later and the little critters prove themselves to be quite enigmatic:


"It really depends on the definitions of sleep. If sleep is defined as a period of inactivity, then worms indeed sleep. If sleep is defined as a loss of consciousness, typical brain wave patterns consistent with “sleep” and closed eyes (which worms do not have), then worms do not sleep."


Thanks for enlightening me, Uncle Jim (or should that be Earthworm Jim?!)


Equipped with a better understanding of just how basic the computing power we’re talking about is, I dive back into Shane’s talk:


"We're constantly learning new things about brains that make it clear how much our AIs don't measure up to real brains.


"So, today's AI can do a task like identify a pedestrian in a picture, but it doesn't have a concept of what the pedestrian is beyond that it's a collection of lines and textures and things. It doesn't know what a human actually is.


"So, will today's AI do what we ask it to do? It will if it can, but it might not do what we actually want…"


Whilst not following orders sounds like robot rebellion to me, the issues we face with AI are far more nuanced than that. And I’m curious to find out why…

BREAKING THE ICE

Shane’s presentation is punctuated with humorous doodles depicting the differences between expectation and reality.


My favourite centres on a pile of parts.


The AI is asked to assemble them into a robot to get from Point A to Point B. Whilst a traditional computer programme would provide step-by-step instructions on what to do with each individual component (akin to that Dem Bones ditty – the leg bone is connected to the… knee bone, the knee bone is connected to the…), the AI is only informed of the goal itself: to reach Point B from Point A. The AI succeeds in solving the challenge through trial and error, but not in the way you might expect.


It assembles the parts into a tower which then topples over to land at Point B. Genius, you might argue. But it’s not the “correct” way. We wanted to see a little robot guy with arms and legs walking from one side of the screen to the other. This AI, however, has no concept of arms and legs. Only shapes and measurements.
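

To make that concrete, here’s a toy sketch of goal-only trial and error – my own illustration, not the experiment Shane describes, with every name and number invented for the purpose. The only feedback the search gets is how close it finishes to Point B, so a tower that topples scores just as well as anything that walks:

```python
import random

# Toy, hypothetical version of the "get from Point A to Point B" task.
# A "design" is simply which of five identical parts to stack; the only
# feedback is how close the structure's tip ends up to Point B. Nothing
# in the score says "build legs" or "walk".

GOAL_X = 10.0      # Point B (Point A is at x = 0)
PART_LENGTH = 3.0  # each stacked part adds this much height

def landing_point(design):
    """Crude physics: a stacked tower topples, and its tip lands roughly
    one tower-height away from where it was built."""
    return PART_LENGTH * sum(design)

def score(design):
    """Higher is better: negated distance from the tip to Point B."""
    return -abs(GOAL_X - landing_point(design))

best = None
for _ in range(1000):  # pure trial and error
    candidate = [random.randint(0, 1) for _ in range(5)]
    if best is None or score(candidate) > score(best):
        best = candidate

print(best, landing_point(best))
# Typically settles on stacking three parts (tip lands at x = 9.0):
# the "solution" is a tower that falls over, not a robot that walks.
```

Because the objective never mentions limbs, gait or staying upright, the search is perfectly happy to “cheat” – which is exactly the behaviour Shane is describing.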


In 2006, at the height of his Peep Show fame, David Mitchell threw a curveball by narrating a somewhat questionable reality TV show for E4, entitled Beauty and the Geek. Advertised as “the ultimate social experiment”, the show partnered attractive women with brainy men so that each could teach the other about unfamiliar subject areas – fashion, hair, make-up and the like for the “geeks”, and physics, chemistry and maths for the “beauties”.


The penultimate episode saw the remaining “couples” tackle a series of timed tasks. One such challenge called for them to decide upon the best way to extract a key from a block of ice. 


Presented with a kettle, matches, firelighters and a blowtorch, the “geeks” were seen furiously working out the ice’s compressive strength and optimal melting temperature. One “beauty”, meanwhile, opted for smashing the ice block against a wall…


“I should know exactly how easy it is to break ice…” the geek groans, frantically rubbing his face in the hope of inspiration.


“Just throw it. Just throw it, it’ll break,” the beauty replies.


“You reckon?”


“Well yeah, obviously. It’s only water.”


Bingo! The key was free within seconds, much to the consternation of her partner in crime.


I see AI working in much the same way. We – as humans – represent the “geeks” in this scenario. We are learned. We benefit from a wealth of knowledge and experience, and sometimes we seek complex solutions to what are, in truth, simple problems.


By designing the toppling tower, the AI achieved its objective – just as the “beauty” freed the key – but not in the way we originally anticipated. Whilst humans follow conventions (designing a robot with arms and legs that resemble our own), AI rips up the rulebook and smashes the ice block. The results are the same, but the methods vary wildly in design, speed and complexity.


The trick, then, is figuring out how to teach AI other approaches.

DANGER AHEAD!

Had the rules of Beauty and the Geek dictated that the ice block could not be smashed, then the “geeks” and their scientific knowledge of melting points would have come to the fore, and the “beauty” would have learned to approach the problem in a different way.


Shane illustrates this point by sharing an animation of a robot tackling an obstacle course:


"So, this little robot here is being controlled by an AI. The AI came up with a design for the robot legs and then figured out how to use them to get past all these obstacles. But when David Ha set up this experiment, he had to set it up with very, very strict limits on how big the AI was allowed to make the legs."


She expands upon the issue further:


"Seeing the AI do this, you may say, OK, “no fair”, you can't just be a tall tower and fall over, you have to actually, like, use legs to walk. And it turns out, that doesn't always work, either.


"This AI's job was to move fast. They didn't tell it that it had to run facing forward or that it couldn't use its arms. So, this is what you get when you train AI to move fast, you get things like somersaulting and silly walks. It's really common. So is twitching along the floor in a heap."


Later in the talk, Shane explains that working with AI is less like working with another human, and more like working with “some kind of weird force of nature”:


"It's really easy to accidentally give AI the wrong problem to solve, and often we don't realize that until something has actually gone wrong."


Trial and error, you could say.


It’s clear that AI can be out of control, but not in the ways you’d expect from what you’ve seen in the movies. Not yet, anyway. Many big names in science and technology – including Hawking, Musk, Wozniak and Gates – have expressed concerns about the risks posed by AI. Discrepancies between “what we want” and “what we get” could prove to be problematic – even catastrophic – in the future.


The following examples from a Future of Life Institute article illustrate what might happen if well-intentioned goals are undermined by destructive methods:


"If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.


"As these examples illustrate, the concern about advanced AI isn’t malevolence but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem."


The article continues:


"Because AI has the potential to become more intelligent than any human, we have no surefire way of predicting how it will behave. We can’t use past technological developments as much of a basis because we’ve never created anything that has the ability to, wittingly or unwittingly, outsmart us.


"The best example of what we could face may be our own evolution. People now control the planet, not because we’re the strongest, fastest or biggest, but because we’re the smartest. If we’re no longer the smartest, are we assured to remain in control?"


Scary stuff! The good news? We’re not there yet. And it could be a very, very long time before we are.


In her closing lines, Shane shines a light on the stage we’re at right now:


"When we're working with AI, it's up to us to avoid problems…We have to learn what AI is capable of doing and what it's not, and to understand that, with its tiny little worm brain, AI doesn't really understand what we're trying to ask it to do.


"…we have to be prepared to work with AI that's not the super-competent, all-knowing AI of science fiction. We have to be prepared to work with an AI that's the one that we actually have in the present day.


"And present-day AI is plenty weird enough."


You can say that again!

For more entertaining, insightful and thought-provoking talks, visit TED.com
