
How to Predict the Future

  • Writer: Mike Lamb
  • Sep 17
  • 9 min read
Why are we so bad at predicting the future – and how can we get better at it?

Humans have been trying to peer into the future for millennia and, for just as long, we’ve been blindsided by events that we should have seen coming — stock market crashes, natural disasters, the fact they’re still making Smurf movies in 2025...


For much of history, our forecasting flops can be put down to some of the questionable methods we used. Crystal balls, tea leaves, and horoscopes proved far from reliable guides.



Today, we’ve traded soothsayers for science. With the accumulated knowledge of humanity, and the internet at our fingertips, you’d think that we’d have entered a golden age of foresight. So why haven’t we?

Meteorology shows what’s possible: a relentless focus on data, measurement and the scientific method mean that today’s four-day weather forecasts are as accurate as one-day forecasts were 30 years ago. (Good news for everyone, especially forecasters hoping to avoid another Michael Fish moment.)


But in politics, economics and society, our forecasts barely seem to have progressed at all. It can often feel like we’re little better off than our ancestors consulting their star charts.

In this month’s explainer video, I explored the science of prediction, the breakthrough that could revolutionise how we see the future, and the steps we can all take to think like superforecasters. Watch it here:



We’re worse at forecasting than you think…


“Explaining the present and the past requires expertise. In forecasting the future, experts are generally no better than everybody else. They might be worse.” – Science writer Matt Ridley
“There’s no chance that the iPhone is going to get any significant market share. No chance.” – Microsoft CEO Steve Ballmer, April 2007

History is littered with epically bad forecasts – often made by people we’d expect to know better. In October 1929, Yale economist Irving Fisher confidently declared that “stock prices have reached what looks like a permanently high plateau.” Days later, the Wall Street Crash triggered a decade-long global depression. Nearly a century on, many of the world’s economic experts fared no better at foreseeing the 2008 crash.


At first glance, these might look like the outliers. Perhaps Steve Ballmer’s infamous iPhone prediction is so well remembered precisely because it was unusual? Maybe experts usually get it right, and we just remember the howlers?


This was the question psychologist Philip Tetlock set out to address. After two decades studying political judgment – a field overflowing with confident predictions – he reached a damning conclusion: most experts, whose prophecies were treated as gospel, were “roughly as accurate as a dart-throwing chimpanzee.”


[Image: A chimp holds a dart beside a target with five missed darts, captioned “Roughly as accurate as a dart-throwing chimpanzee.”]

Flawed forecasts were not rare; they were just rarely tracked or remembered. Unlike Ballmer, Fisher, or the shipping magnate who infamously claimed the Titanic was “unsinkable,” most predictions are quickly forgotten.


As Tetlock writes in Superforecasting: The Art and Science of Prediction:


Every day, the news media deliver forecasts without reporting, or even asking, how good the forecasters who made the forecasts really are. Every day, corporations and governments pay for forecasts that may be prescient or worthless or something in between. And every day, all of us – leaders of nations, corporate executives, investors, and voters – make critical decisions on the basis of forecasts whose quality is unknown.

Perhaps more shockingly, Tetlock found that expertise could actually lead to worse predictions. The bad takes weren’t just coming from media pundits chasing headlines. They were coming from intelligence agencies – the world’s leading authorities on foresight.


Faced with this evidence, you or I might have thrown our hands up and concluded that forecasting complex human affairs is a fool’s errand. Fortunately, Tetlock didn’t. Instead, he turned his attention to finding a better, more scientific way to make predictions.


Forecasting as a science


In 2003, US and UK intelligence agencies gave their governments an important and seemingly well-informed forecast: the Iraqi leader Saddam Hussein was hiding weapons of mass destruction. It was a key factor in the case for invasion. And it turned out to be wrong.


The fallout forced some uncomfortable questions. How could the world’s most powerful agencies mess up something this big?


In the US, it led to the intelligence community launching an unusual experiment. The Aggregative Contingent Estimation program (ACE) established a series of forecasting tournaments. Between 2010 and 2015, participants were challenged to answer hundreds of specific, time-bound questions: Will Iran agree a nuclear deal this year? Will the Euro fall below $1.20 in the next six months?


Competing teams included professional analysts and academic experts. But one outsider entry – a team of online volunteers assembled by Philip Tetlock – stole the show.


[Image: Portrait of psychologist Philip Tetlock, captioned “The Good Judgment Project”.]

The best performers in Tetlock’s Good Judgment Project team were almost twice as accurate as the control group, and even beat professionals with access to classified information.


So, what was their secret? Just as meteorology had transformed itself through data and measurement, Tetlock applied the same scientific rigour to human judgment. Predictions had to be expressed in percentages. Accuracy had to be measurable. Imprecise terms like “likely” were out.


Tetlock went further, using Brier scores – a statistical measure that captures not just whether a prediction proved right, but how well calibrated the forecaster’s confidence was.


As he explains:


Brier scores measure the distance between what you forecast and what actually happened. So Brier scores are like golf scores: lower is better. Perfection is 0. A hedged fifty-fifty call, or random guessing in the aggregate, will produce a Brier score of 0.5. A forecast that is wrong to the greatest possible extent – saying there is a 100% chance that something will happen and it doesn’t, every time – scores a disastrous 2.0, as far from The Truth as it is possible to get.
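To see the arithmetic in action, here’s a minimal Python sketch of the two-outcome Brier score described above – the example forecasts and outcomes are invented for illustration:

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Two-outcome Brier score.

    forecast: probability (0-1) assigned to the event happening.
    outcome:  1 if it happened, 0 if it didn't.
    Measures the squared distance on both the "happens" and
    "doesn't happen" sides, so perfection is 0 and the maximally
    wrong call scores 2.0.
    """
    return (forecast - outcome) ** 2 + ((1 - forecast) - (1 - outcome)) ** 2

print(brier_score(0.5, 1))   # hedged fifty-fifty call -> 0.5
print(brier_score(1.0, 0))   # "100% certain" and wrong  -> 2.0
print(brier_score(0.9, 1))   # confident and right       -> 0.02

# A forecaster's overall score is the average over all their forecasts.
forecasts = [(0.7, 1), (0.3, 0), (0.9, 1), (0.6, 0)]
print(sum(brier_score(f, o) for f, o in forecasts) / len(forecasts))
```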

This was a world away from the vague, fuzzy language still common in intelligence reports. One of the most infamous US foreign policy disasters – the Bay of Pigs fiasco – stemmed from exactly this problem. President John F. Kennedy was told the invasion had a “fair chance” of success. He heard “good odds”. His chiefs actually meant “25%”.


Once Tetlock crunched the scores, a remarkable pattern emerged: among his thousands of volunteers, a small group consistently outperformed the rest. He dubbed them the superforecasters.


How to think like a superforecaster


So what makes a superforecaster so super? You might assume it's genius-level IQ, statistical wizardry or a magical sixth sense. But Tetlock’s research showed it’s much simpler than that – and something we can all learn.


As he put it:


Foresight isn’t a mysterious gift bestowed at birth. It is the product of particular ways of thinking, of gathering information, of updating beliefs. These habits of thought can be learned and cultivated by any intelligent, thoughtful, determined person.

Text "What makes superforecasters ...super?" in bold blue and white on a black background with abstract yellow and blue lines.

Here are six habits that set superforecasters apart:


1. Take the outside view


The Renzettis live in a small house at 84 Chestnut Avenue. Frank Renzetti is 44 and works as a bookkeeper for a moving company. Mary Renzetti is 35 and works part-time at a day care. They have one child, Tommy, who is five. Frank’s widowed mother, Camilla, also lives with the family.

Question: How likely is it that the Renzettis have a pet?

Where do you start? For many of us, it would be inside the story. Maybe Frank and Mary would want a pet for Tommy. Especially as he doesn’t have a brother or sister. But then again, if it’s a small house and they’ve got Camilla there, they might be struggling for space.


Superforecasters don’t do this. They begin with the outside view – the base rate. About 66% of US households have a pet. That’s their starting point. Only then do they move on to the “inside view” to fine-tune their forecast.
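Here’s a rough sketch of that two-step process in Python. The base rate is the 66% figure above; the inside-view adjustments, expressed as likelihood ratios, are invented purely for illustration:

```python
# Outside view first, inside view second. Working in odds makes the
# inside-view nudges easy to apply multiplicatively.

def to_odds(p: float) -> float:
    return p / (1 - p)

def to_prob(odds: float) -> float:
    return odds / (1 + odds)

base_rate = 0.66                 # outside view: US households with a pet
odds = to_odds(base_rate)

# Inside view, as rough likelihood ratios (all invented assumptions):
odds *= 1.2   # young only-child -> slightly more likely to have a pet
odds *= 0.9   # small, crowded house -> slightly less likely

print(f"Adjusted estimate: {to_prob(odds):.0%}")  # ~68%
```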


2. Think in probabilities


At this point, you might still think that putting a 64.3% probability on a future event occurring sounds faintly ridiculous. After all, there are things we’re certain will happen, things we’re certain won’t happen, and the rest is just a big maybe.


Many people struggle with weather forecasts for this reason. As psychologist Gerd Gigerenzer found, a substantial number of Berliners thought that a 30% chance of rain tomorrow meant either that it would rain for 30% of the day, or that 30% of Berlin would be rained on. It’s harder to get your head around what that figure really means – if tomorrow happened 100 times, it would rain on 30 of those days (and that’s if this were an accurate forecast).


Superforecasters embrace the scientific approach: uncertainty is everywhere, so “maybe” needs a lot more fine-tuning, and the most effective way to do that is with numbers. It’s why superforecasters will obsess over a difference of a percentage point or two in their forecasts. This is the key to their success. As the Financial Times’ Robert Armstrong explains:


Someone who knows the difference between a 45/55 bet and a 55/45 bet is a better gambler, or to put it generally, a better decision maker than someone who thinks in the crude terms of yes, no, or maybe.
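To see why those few percentage points matter, here’s a quick illustrative calculation – the even-money stakes and win probabilities are hypothetical:

```python
# Expected value of a repeated even-money bet: win your stake with
# probability p, lose it otherwise. The sign flips between 45% and 55%.

def expected_value(p_win: float, stake: float = 1.0) -> float:
    return p_win * stake - (1 - p_win) * stake

for p in (0.45, 0.50, 0.55):
    print(f"p(win) = {p:.0%}: EV per $1 staked = {expected_value(p):+.2f}")

# Over 1,000 such bets, the 55/45 gambler expects to be up about $100,
# the 45/55 gambler down about $100 - and "maybe" tells you nothing.
```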

3. Break big problems into smaller questions


Superforecasters use “Fermi questions” to break down what seem like impossible questions into smaller, guessable ones.


A classic Fermi puzzle is: How many piano tuners are there in Chicago?


Without access to any other information, where do you start? Do you just take a wild guess? The Fermi approach is to develop your answer step-by-step, with a series of guesses. What’s the population of Chicago? How many households? How many are likely to own a piano? How often do they need tuning? How many jobs might one piano tuner be able to do in a year?


If you divide your best guess at the number of tunings needed each year by your jobs-per-tuner estimate and end up with a number in the region of 80, you’ll be pretty close.
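Here’s one way that chain of guesses might look in Python – every figure is a round-number assumption, not a researched fact:

```python
# A crude Fermi estimate for Chicago's piano tuners, built entirely
# from guesstimates.

population = 9_000_000            # rough guess for greater Chicago
households = population / 2.25    # guess: ~2.25 people per household
pianos = households * 0.02        # guess: 1 household in 50 has a piano
tunings_per_year = pianos * 1     # guess: each piano tuned once a year
jobs_per_tuner = 4 * 250          # guess: 4 tunings a day, 250 working days

print(f"Estimated piano tuners: ~{tunings_per_year / jobs_per_tuner:.0f}")
# -> ~80
```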


As Tetlock says:


The surprise is how often remarkably good probability estimates arise from a remarkably crude series of assumptions and guesstimates.

4. Seek out contrary evidence


Once we’ve made up our minds, most of us will only really pay attention to new information if it confirms we were right. Superforecasters do the opposite. They actively hunt for evidence that proves them wrong. “For superforecasters, beliefs are hypotheses to be tested, not treasures to be guarded,” says Tetlock.


This mindset helps them avoid the intellectual trap that many experts fall into – becoming so invested in their theories that they feel they have to defend them. As Richard Feynman is said to have remarked: “Doubt is not a fearful thing, but a thing of great value.”


Superforecasters keep multiple perspectives in play and form their forecasts with a healthy dose of doubt.


5. Update your views as the world changes


A forecast is only as good as the data it’s based on. So when the data changes, so should the forecast.


Superforecasters are prolific at updating their predictions. They expect to change their minds. Tetlock calls this mindset “perpetual beta”:


The strongest predictor of rising into the ranks of superforecasters is perpetual beta, the degree to which one is committed to belief updating and self-improvement. It is roughly three times as powerful a predictor as its closest rival, intelligence.
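One concrete (if idealised) way to update a forecast as new information arrives is Bayes’ rule. In this hypothetical sketch, a 30% forecast gets revised up after a positive signal and back down after a setback – all the numbers are invented:

```python
# Bayes' rule: revise a probability in light of new evidence.

def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Posterior probability after seeing one piece of evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

p = 0.30                          # initial forecast: 30% chance of a deal
p = bayes_update(p, 0.8, 0.4)     # encouraging signal   -> ~46%
p = bayes_update(p, 0.3, 0.7)     # setback reported     -> ~27%
print(f"Current forecast: {p:.0%}")
```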

6. Keep score


Most of us make predictions and conveniently forget them – especially the wrong ones. Superforecasters log every forecast, calculate their accuracy and perform detailed post-mortems.


Tetlock explains why this matters:


Forecasters who use ambiguous language and rely on flawed memories to retrieve old forecasts don’t get clear feedback, which makes it impossible to learn from experience. They are like basketball players doing free throws in the dark.

The only way to improve is to see clearly where you went wrong, what you did right – and where you just got lucky.


So, make a prediction, track it and then evaluate it. If you want to be nerdy about it, you can even log it using an online tool like Fatebook.
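If an online tool feels like overkill, even a few lines of Python will do the job. Here’s a bare-bones forecast log in the spirit of the advice above – the example question and numbers are made up, and it reuses the two-outcome Brier formula from the earlier sketch:

```python
# Write the forecast down, resolve it later, track your average Brier score.
import datetime

log = []  # each entry: question, probability, date made, outcome (or None)

def predict(question: str, probability: float) -> None:
    log.append({"q": question, "p": probability,
                "made": datetime.date.today(), "outcome": None})

def resolve(question: str, happened: bool) -> None:
    for entry in log:
        if entry["q"] == question:
            entry["outcome"] = 1 if happened else 0

def average_brier() -> float:
    scored = [(e["p"], e["outcome"]) for e in log if e["outcome"] is not None]
    return sum((p - o) ** 2 + ((1 - p) - (1 - o)) ** 2
               for p, o in scored) / len(scored)

predict("Will it rain in Berlin tomorrow?", 0.3)
resolve("Will it rain in Berlin tomorrow?", False)
print(f"Average Brier score so far: {average_brier():.2f}")  # 0.18
```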


[Image: A superhero inside the scientific-method cycle: Observation, Hypothesis, Experiment, Analysis.]

What forecasting can – and can’t – do


Of course, there are limits to foresight. Meteorology has made huge strides so that today’s one-day weather forecasts are more than 90% accurate. But accurately forecasting more than two weeks out will probably never be possible.


The weather, like financial markets and geopolitics, is a chaotic system. Tiny variables can snowball into huge outcomes that would be impossible to foresee at the outset. This is the famous butterfly effect: a flap of wings in Brazil could, in theory, set off a chain of events that causes a tornado in Texas.
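You can watch this sensitivity play out in a few lines of Python. The logistic map is a textbook chaotic system: two starting points that differ by one part in a million soon bear no resemblance to each other (the starting values here are arbitrary):

```python
# The butterfly effect in miniature: iterate the logistic map at r = 4,
# where its behaviour is chaotic.

def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1 - x)

a, b = 0.400000, 0.400001   # a one-in-a-million difference at the start
for step in range(50):
    a, b = logistic(a), logistic(b)

print(a, b)  # after 50 steps the two trajectories have fully diverged
```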


Superforecasters stick to specific, time-bound questions like “Will Google be forced to sell off Chrome this year?” rather than “Will AI replace white-collar workers?” Even then, as Tetlock warns, anything beyond five years is basically just guesswork.


Most of us aren’t going to be taking part in forecasting tournaments. But having the ability to make informed predictions about what’s ahead is something we can all benefit from. And learning to think in probabilities, take the outside view and regularly update your beliefs won’t only make you better at forecasting – it’ll make you a better thinker.


Economist Tim Harford sees a broader payoff. It might even make us better people:


Thinking seriously about the future requires keeping an open mind, understanding what you don’t know, and seeing things as others see them. If the end result is a good forecast, perhaps we should see that as the icing on the cake.

[Image: Cartoon figure standing under a sun, surrounded by rockets, charts and “AI”.]

And finally…


A new era of astronomy. This week marks ten years since the discovery of gravitational waves – ripples in the fabric of space-time that were predicted by Albert Einstein’s theory of general relativity. Now, a new gravitational wave discovery promises another breakthrough in our understanding of black holes. Astrophysicist Simon Stevenson writes about it in The Conversation.


Here’s my 2021 explainer video on gravitational waves:


