Overview from TED

Here's a paradox: as companies try to streamline their businesses by using artificial intelligence to make critical decisions, they may inadvertently make themselves less efficient. Business technologist Sylvain Duranton advocates for a "Human plus AI" approach – using AI systems alongside humans, not instead of them – and shares the specific formula companies can adopt to successfully employ AI while keeping humans in the loop.

"We have a choice here. Carry on with algocracy or decide to go to 'Human plus AI.' And to do this, we need to stop thinking tech first, and we need to start applying the secret formula."

Sylvain Duranton

Management Consultant

Sylvain Duranton is the global leader of BCG GAMMA, a unit dedicated to applying data science and advanced analytics to business. He manages a team of 800+ data scientists and has implemented more than 100 custom AI and analytics solutions for companies across the globe.



Sylvain Duranton’s TED talk has received more than 2.3 million views to date, and it’s easy to see why. The rise of artificial intelligence (AI) is a very real concern, especially when it pertains to the workplace.

Back in 2020, I published a video highlighting the benefits of Universal Basic Income (UBI). Why UBI is necessary – or will be in the years to come – is simple. In 2016, the US Council of Economic Advisers estimated that 83% of US jobs that pay less than $20 per hour could be automated in the future. That’s around 65 million jobs, and that number looks certain to rise.

Making tens of millions of humans redundant in favour of robots is never going to be sustainable, or ethical for that matter.

But whilst UBI is one solution to the problem, Duranton has other ideas.

He is adamant that “Human plus AI” is the only option to bring the benefits of AI to the real world:

“In the end, winning organizations will invest in human knowledge, not just AI and data. Recruiting, training, rewarding human experts. Data is said to be the new oil, but believe me, human knowledge will make the difference, because it is the only derrick available to pump the oil hidden in the data.”

I found the talk hugely refreshing. Duranton is upfront and honest about the downsides, repeating “long, costly and difficult” throughout the presentation. But he also provides numerous solid examples to support his view that AI-only systems will ultimately fail.

He opens his talk with a paradox that provides the backbone for his pro-human argument:

"For the last 10 years, many companies have been trying to become less bureaucratic, to have fewer central rules and procedures, more autonomy for their local teams to be more agile. And now they are pushing artificial intelligence, AI, unaware that cool technology might make them more bureaucratic than ever. Why? Because AI operates just like bureaucracies."

He continues:

“The essence of bureaucracy is to favour rules and procedures over human judgment. And AI decides solely based on rules. Many rules inferred from past data but only rules. And if human judgment is not kept in the loop, AI will bring a terrifying form of new bureaucracy - I call it "algocracy" - where AI will take more and more critical decisions by the rules outside of any human control. Is there a real risk? Yes.”

And Duranton should know. In his role as a management consultant, he leads a team of 800 AI specialists responsible for the deployment of more than 100 customised AI solutions for large companies around the world. According to Duranton, more and more corporate executives are calling for “costly, old-fashioned humans” to be taken out of the loop entirely – a “human-zero mindset”, as Duranton calls it.

Ethics aside, you can see the appeal. Why pay employee wages, office rent, utilities and the like, when a chatbot can handle all your customer service enquiries? In an increasingly technology-reliant world, AI is sexy and exciting. It’s an area with plenty of uncharted territory, ready to be explored and exploited.

But an AI-only operation – or one that overlooks the value of human intervention – does come with risks. And big ones at that.


For a long time, Amazon’s automated marketing emails have been the subject of ridicule. You buy a new lawnmower to replace your trusty Flymo, only to be inundated with emails about other lawnmowers they offer. Achieving the right balance for email targeting is a skill, and one which Amazon’s algorithms failed to master for many years.

Duranton provides two examples to demonstrate the “very dumb things” AI can do when left to its own devices. The first, a humorous tweet from a customer:

“Dear Amazon, I bought a toilet seat because I needed one. Necessity, not desire. I do not collect them. I am not a toilet seat addict. No matter how temptingly you email me, I’m not going to think, oh go on then, just one more toilet seat, I’ll treat myself.”

The second took a morbid turn:

“Had the same situation with my mother’s burial urn. For months after her death, I got messages from #Amazon saying, ‘If you like THAT…’”

Uneasy laughter reverberates around the auditorium.

But as Duranton points out, some mistakes are no laughing matter. Like the 346 casualties from two Boeing 737 MAX crashes, when pilots could not interact properly with a computerised command system.

It’s terrifying to envisage a world with AI at the helm. This “algocracy” that Duranton speaks of is a world with little to no accountability; one where decisions are made based on data without common sense, intuition or imagination. You need only look at the exam results scandal in the UK during the pandemic to appreciate how dire the consequences can be when algorithms get it wrong.

And Duranton called it.

In this talk – from September 2019 – Duranton highlighted the real risk of relying on algorithms in this manner:

“Take an AI engine rejecting a student application for university. Why? Because it has ‘learned’, on past data, characteristics of students that will pass and fail. Some are obvious, like GPAs. But if, in the past, all students from a given postal code have failed, it is very likely that AI will make this a rule and will reject every student with this postal code, not giving anyone the opportunity to prove the rule wrong.

“And no one can check all the rules, because advanced AI is constantly learning. And if humans are kept out of the room, there comes the algocratic nightmare. Who is accountable for rejecting the student? No one, AI did. Is it fair? Yes. The same set of objective rules has been applied to everyone. Could we reconsider for this bright kid with the wrong postal code? No, algos don't change their mind.”
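The mechanism Duranton describes – a model inferring an absolute rule from past data and applying it without appeal – can be illustrated with a deliberately naive sketch. This is a toy in Python, not Duranton’s system or any real admissions engine; the postcodes and outcomes are entirely hypothetical:

```python
# Toy illustration of "rules inferred from past data": a naive model
# that memorises one admit/reject rule per postcode. All data is made up.
from collections import defaultdict

def learn_rules(history):
    """Infer a per-postcode rule from past (postcode, passed) records."""
    outcomes = defaultdict(list)
    for postcode, passed in history:
        outcomes[postcode].append(passed)
    # Rule learned: admit a postcode only if any past student from it passed.
    return {pc: any(results) for pc, results in outcomes.items()}

def decide(rules, postcode):
    # Unknown postcodes default to rejection -- the rule admits no exceptions.
    return "admit" if rules.get(postcode, False) else "reject"

past = [("75001", True), ("75001", False),
        ("93200", False), ("93200", False)]
rules = learn_rules(past)

print(decide(rules, "93200"))  # every applicant from 93200 is rejected
print(decide(rules, "75001"))  # one past pass is enough to admit
```

The point of the sketch is the failure mode, not the model: because every past student from "93200" failed, the rule rejects all future applicants from that postcode, and nothing in the decision path lets a "bright kid with the wrong postal code" prove the rule wrong.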

So how can “algocracy” be avoided? That’s where the secret formula comes in.


According to Duranton, to successfully adopt “Human plus AI” we need to stop thinking tech first and start applying the secret formula. He suggests:

“10% of the effort is to code algos; 20% to build tech around the algos, collecting data, building UI, integrating into legacy systems; But 70%, the bulk of the effort, is about weaving together AI with people and processes to maximize real outcome.”

That’s a shockingly large proportion of resources, but you can see why it’s necessary.

Duranton states:

“Citizens in developed economies already fear algocracy. Seven thousand were interviewed in a recent survey. More than 75% expressed real concerns on the impact of AI on the workforce, on privacy, on the risk of a dehumanized society. Pushing algocracy creates a real risk of severe backlash against AI within companies or in society at large.”

Not only is human input required to programme algorithms and interfaces, but to continuously define the parameters that AI works within. As Duranton succinctly puts it, humans “set the limits between personalisation and manipulation, customisation of offers and discrimination, targeting and intrusion.”

On that basis, “Human plus AI” offers the best of both worlds: a successful, collaborative approach that removes inefficiencies whilst keeping human beings on the payroll. Not because it’s the “right” thing to do, but because humans are inherently valuable. AI should serve to enhance humans, not exterminate them. 

Unless we’re talking Daleks, that is…

For more entertaining, insightful and thought-provoking talks, visit
