AI Bias: A Flawed Algorithm or a Human Problem?
- Mike Lamb
- Mar 14
- 5 min read
In January 2020, Detroit Police pulled up outside the home of Robert Julian-Borchak Williams and arrested him in front of his wife and two daughters. His crime? Stealing five watches from a shop that he’d never even visited.
Despite protesting his innocence, Robert spent nearly 30 hours behind bars before being released on bail. Eventually, the truth emerged – an AI-powered facial recognition system had mistaken him for another Black man.
This was the first documented case in the US of facial recognition leading to a false arrest. But Robert’s story isn’t unique. AI is now being used almost everywhere, and the machine learning systems and algorithms making decisions on things as important as our healthcare, our job applications and our mortgage approvals are not as fair and bias-free as some like to claim.
So, how did our AI helpers learn to discriminate? And, if they really are as prone to prejudice as human decision-makers, what do we do about it?

More than a glitch...
Bias in technology isn’t just a recent issue – it goes back years before algorithms and AI. Take photography. Back when we'd all take our holiday snaps to be developed, Kodak was king. The world’s leading photographic company issued “Shirley cards” – reference photographs of a model – to help lab technicians calibrate colours in the photographs they were developing. For much of the 20th century, the model was always a woman with light skin. The result? Lighter skin tones were rendered accurately, but darker skin tones often weren’t.
When Kodak finally addressed its colour bias, it had less to do with a moral awakening and more to do with furniture companies and chocolatiers complaining that photographs weren’t capturing the right brown tones.
It was a similar story with the “racist soap dispenser”. Its automatic sensor had been designed and tested on a limited range of skin tones and, as a result, it wouldn’t dispense soap onto hands with darker skin.
While technology isn’t inherently biased, it reflects and even amplifies its creators' biases. As data journalist Meredith Broussard puts it in the title of her book on the subject, bias in technology is More Than a Glitch.
She warns that, as AI permeates more aspects of our lives, we must be wary of a growing phenomenon she calls “technochauvinism” – the misguided belief that technology will always do things better and more fairly than humans.

AI in health
AI is already being hailed as a game-changer in healthcare, where it is being used to help detect cancers that human eyes would have missed, develop bespoke new treatments, and solve some of the big puzzles in biology – breakthroughs that promise to revolutionise medicine.
Here in the UK, the Government recently promised to “unleash the power of AI” across public services, including the launch of the world’s biggest trial of AI breast cancer diagnosis.
There are many reasons to be encouraged, with the promise that AI will improve the speed and accuracy of cancer screening while helping to lighten the load on under-the-cosh doctors, nurses and GPs.
This is undoubtedly a cause for optimism. However, we shouldn’t lose sight of AI’s limitations and fall into the trap of believing that machines will always make better decisions than humans.

Meredith Broussard experienced this firsthand. In More Than a Glitch, she reveals that when she was diagnosed with breast cancer, an AI analysis of her mammograms indicated no concerns. Thankfully, a human radiologist saw something that the machine didn’t. She explains:
The difference between how the computer ranked my cancer and how my doctor diagnosed the severity of my cancer has to do with what brains are good at, and what computers are good at. We tend to attribute human-like characteristics to computers, and we have named computational processes after brain processes, but when it comes right down to it a computer is not a brain.
The vital experience and instinct a doctor uses to diagnose a patient can’t be replicated by the mathematical superpowers of AI. Broussard summarises:
That ability to detect anomalies is at the heart of my doctor’s ability to spot the malignant particles in an X-rayed blob... A computer can’t instinctively detect something that is ‘off,’ because it has no instincts. Computer vision is a mathematical process based on a grid.
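To make her point concrete: to a computer, an image really is just a grid of numbers, and “seeing” is arithmetic performed over that grid. Here is a minimal sketch in Python – the pixel values are invented purely for illustration:

```python
# A tiny 5x5 grayscale "image" is nothing but a grid of brightness
# values (0 = black, 255 = white). All values here are made up.
image = [
    [ 12,  15,  11,  14,  13],
    [ 14, 200, 210, 198,  12],
    [ 13, 205, 255, 201,  15],
    [ 11, 199, 208, 197,  14],
    [ 12,  13,  14,  12,  11],
]

# "Spotting" the bright blob means applying a mathematical rule,
# such as thresholding -- there is no instinct involved anywhere.
bright_pixels = [(r, c) for r, row in enumerate(image)
                 for c, value in enumerate(row) if value > 128]
print(f"{len(bright_pixels)} pixels above threshold: {bright_pixels}")
```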
Some of AI’s issues stem from the same problem that led to badly developed photos – skewed data. In 2021, Google trained its AI skin-cancer detection model on 64,837 images – but only 3.5% were from people with darker skin. As Broussard concludes, this has significant and worrying consequences. “The skin cancer AIs are likely to work only on light skin because that’s what is in the training data. This is a clear bias.”
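Skewed data like this is detectable before a model is ever trained. Below is a hypothetical sketch of such a check – the records, the Fitzpatrick-style skin-type labels and the 15% threshold are all invented for illustration, not drawn from Google’s actual dataset:

```python
from collections import Counter

# Hypothetical training records: (image_id, Fitzpatrick skin type I-VI).
# Types V-VI correspond to darker skin. These rows are invented.
records = [
    ("img_00001", "II"),  ("img_00002", "I"),  ("img_00003", "III"),
    ("img_00004", "II"),  ("img_00005", "II"), ("img_00006", "I"),
    ("img_00007", "III"), ("img_00008", "V"),
    # ... tens of thousands more rows in a real dataset
]

def audit_skin_type_balance(records, darker_types=("V", "VI"), threshold=0.15):
    """Report the share of each skin type and flag underrepresentation."""
    counts = Counter(skin_type for _, skin_type in records)
    total = sum(counts.values())
    for skin_type in sorted(counts):
        share = counts[skin_type] / total
        print(f"Type {skin_type}: {counts[skin_type]} images ({share:.1%})")
    darker_share = sum(counts[t] for t in darker_types) / total
    if darker_share < threshold:
        print(f"WARNING: only {darker_share:.1%} of images show darker skin "
              f"(types {'/'.join(darker_types)}); the model may underperform on it.")

audit_skin_type_balance(records)
```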

AI at the bank
AI is also quietly shaping our financial futures, with banks and lenders using algorithms and machine learning tools to decide who gets a mortgage, at what rate, and who gets shut out altogether. Broussard highlights some startling research here: in the US, AI algorithms have been systematically putting Black mortgage applicants at a disadvantage. Even with identical incomes, assets, and credit scores, Black applicants are up to 80% more likely to be rejected or offered higher interest rates than comparable White applicants.
Mortgage lenders are legally forbidden from considering an applicant’s race. So why does supposedly objective AI make such clearly unfair decisions? The answer lies in the distinction between “mathematical fairness” and “social fairness”. AI excels at spotting mathematical patterns, but it struggles with the more nuanced nature of fairness in society.
While the AI models making these decisions don’t “see” race directly, they infer patterns from data – patterns rooted in centuries of economic discrimination and the uneven distribution of wealth across neighbourhoods.
This results in AI perpetuating these biases and making them harder to challenge when the computer says “no.”
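A toy simulation shows how this happens. In the sketch below (every number is invented), a “race-blind” approval rule penalises a neighbourhood where one group happens to be concentrated. The model never sees race, yet approval rates diverge sharply:

```python
import random

random.seed(42)

def make_applicant():
    """Synthetic applicant: the model will only ever see area and credit score."""
    group = random.choice(["A", "B"])
    # Simulated housing segregation: group B mostly lives in area 2.
    area = 2 if random.random() < (0.8 if group == "B" else 0.2) else 1
    credit = random.gauss(680, 50)
    return group, area, credit

def model_approves(area, credit):
    # "Race-blind" rule: penalise area 2 because historical defaults there
    # were higher -- a pattern itself produced by past discrimination.
    return credit - (40 if area == 2 else 0) > 660

applicants = [make_applicant() for _ in range(10_000)]
rates = {}
for g in ("A", "B"):
    members = [a for a in applicants if a[0] == g]
    approved = sum(model_approves(area, credit) for _, area, credit in members)
    rates[g] = approved / len(members)
    print(f"Group {g}: approval rate {rates[g]:.1%}")

# Rule-of-thumb "four-fifths" test: a ratio below 0.8 flags disparate impact.
print(f"Disparate-impact ratio: {min(rates.values()) / max(rates.values()):.2f}")
```

The rule never references race, but because neighbourhood stands in for it, the outcome is much the same as if it had.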

How to tackle AI bias
It’s not just health and mortgages; across a whole range of fields, important decisions are increasingly being entrusted to AI. Given that we’ve known about the potential for tech-driven bias for decades, why hasn’t it been fixed by now?
AI ethicist Timnit Gebru warned:
I’m not worried about machines taking over the world. I’m worried about groupthink, insularity, and arrogance in the AI community.
Gebru should know. In 2020 she was fired from Google after raising the alarm about biases embedded in the company’s large language models. She warned that AI systems trained on vast amounts of text from the internet risked reproducing some of the worst prejudices seen online – from racism and sexism to outright hate speech.
Mutale Nkonde, a fellow with the Stanford Digital Civil Society Lab, told the New York Times:
Her firing only indicates that scientists, activists and scholars who want to work in this field – and are Black women – are not welcome in Silicon Valley.
Google and other leading tech companies have frequently faced criticism for the lack of diversity among their staff. Data published last year showed that, among the top 75 Silicon Valley firms, only 30% of employees were women.
AI isn’t inherently good or bad, neutral or biased. But, like any other technology, it holds a mirror up to the people who created it.

To tackle bias in AI, the answer isn’t just better systems – it’s more diverse teams. More representation in tech leads to more inclusive technology, ensuring that AI does not simply replicate the biases of its creators.
Broussard argues that engineers need to recognise their own blind spots to create fairer systems, and that companies should embrace transparency and accountability – regularly auditing their algorithms, openly sharing how they’re trained, and proactively addressing problems.
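What might such an audit look like in practice? One simple form – sketched below with invented data and a hypothetical logging format – is to compare error rates across groups, checking whether one group is wrongly rejected more often than another:

```python
from collections import defaultdict

def audit_error_rates(results):
    """results: iterable of (group, predicted_approval, actually_creditworthy)."""
    tallies = defaultdict(lambda: {"fn": 0, "pos": 0, "fp": 0, "neg": 0})
    for group, predicted, actual in results:
        t = tallies[group]
        if actual:                     # applicant was genuinely creditworthy
            t["pos"] += 1
            t["fn"] += not predicted   # wrongly rejected
        else:
            t["neg"] += 1
            t["fp"] += predicted       # wrongly approved
    for group, t in sorted(tallies.items()):
        fnr = t["fn"] / t["pos"] if t["pos"] else 0.0
        fpr = t["fp"] / t["neg"] if t["neg"] else 0.0
        print(f"Group {group}: wrongly rejected {fnr:.1%}, wrongly approved {fpr:.1%}")

# Invented example: group B is rejected far more often despite being creditworthy.
audit_error_rates([
    ("A", True, True), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", False, True), ("B", True, True), ("B", False, True), ("B", False, False),
])
```

A persistent gap between groups on the “wrongly rejected” line is exactly the kind of problem Broussard argues companies should be surfacing and fixing before deployment.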
Ultimately, the future of AI shouldn’t be driven by unchecked optimism or the belief that technology alone can solve everything. Instead, it requires intentional efforts to dismantle bias, diversify the tech industry, and ensure that it serves all of us.