
Microchips and the Puzzle of Human Progress - Part One

Advances in technology over the past century have been staggering. It took just 66 years to progress from launching the first airplane to landing on the moon. Life-changing new smart gadgets, revolutions in communication and miraculous medicines now come so thick and fast that we almost take them for granted.


What has driven this incredible rate of innovation? It all comes down to a tiny piece of technology: the transistor. 


Our ability to keep making these transistors smaller and fit ever-increasing numbers of them onto microchips has driven what many believe to be the most extraordinary era of technological progress in history. But there is a limit to just how small we can make them, and we may be close to reaching it. At the same time, geopolitical factors are leading to a global shortage of chips.

What might this mean for our daily lives, and for the arc of human progress?


Watch my short explainer to find out…



The small invention that changed the world


It’s Thursday, 1 July 1948, and tucked away on page 46 of The New York Times is a short article about a new invention: “A device called a transistor, which has several applications in radio where a vacuum tube ordinarily is employed, was demonstrated for the first time yesterday.”


The previous day, Bell Labs – the research arm of the telephone company that grew out of Alexander Graham Bell’s original venture – held a press conference to show off its latest innovation. As the relatively low-key newspaper write-up reveals, people had yet to realise that the wired blocks of germanium on display in Bell Labs’ New York office would change the world in ways we could barely imagine, and set humanity on a path of unprecedented technological progress.


The genius of William Shockley, Walter Brattain and John Bardeen’s invention was its ability to harness the unique properties of a class of materials called semiconductors. Most materials either conduct electricity (like copper) or block it (like wood). Semiconductors like germanium and silicon are different. They don’t conduct much electricity on their own, but the team at Bell Labs discovered that by adding small amounts of certain impurities – a process known as doping – they could make these materials act like a switch. The resulting transistor could pass and even amplify an electric current when switched on, and block the current entirely when switched off.


Exciting things started to happen when transistors were grouped together. Just four of them were enough to build a portable radio, an invention that spread rock’n’roll music across the world. A few hundred created the first handheld digital calculators. By 1969, some 17,000 of them were powering the Apollo Guidance Computer and taking man to the moon.

In computing, these on and off states are used to represent binary code. More transistors mean more processing power. If you’re reading this newsletter on an iPhone 14, the device in your hand is powered by a chip with 15 billion of them – each many thousands of times smaller than the width of a human hair.
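
To make that switch-to-number link concrete, here’s a minimal Python sketch of my own – the on/off states are made up for illustration, and real chips are vastly more complicated – showing how a row of eight switches can be read as a single byte:

```python
# Each "transistor" here is just a switch: on (1) or off (0).
transistors = [1, 0, 0, 1, 0, 1, 1, 0]  # hypothetical on/off states

# Read the row of switches as one binary number.
value = 0
for state in transistors:
    value = value * 2 + state  # shift left, then add the next bit

print(value)                   # 150 -- one byte's worth of information
print(2 ** len(transistors))   # 256 distinct states from just 8 switches
```

Every extra switch doubles the number of states the chip can distinguish, which is why the raw transistor count matters so much.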


This incredible exponential growth – from the four-transistor radio to the 15 billion-transistor smartphone chip in a single lifetime – was predicted back in 1965 by an engineer called Gordon Moore. Moore’s Law, as his forecast famously became known, holds that the number of transistors on a chip doubles roughly every two years.
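
As a rough back-of-the-envelope check on that “single lifetime” claim – my own arithmetic, assuming the two-year doubling period of the revised law – the jump from 4 transistors to 15 billion works out to about 32 doublings:

```python
import math

# Moore's Law as a formula: N(t) = N0 * 2 ** ((t - t0) / T),
# where T is the doubling period (roughly two years).
start = 4               # transistors in an early portable radio
end = 15_000_000_000    # transistors in the iPhone 14's chip

doublings = math.log2(end / start)
print(f"{doublings:.1f} doublings")                                 # ~31.8
print(f"~{doublings * 2:.0f} years at one doubling every two years")  # ~64
```

Around 64 years of steady doubling – almost exactly the span from the first transistor radios to today’s smartphones.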


In his 2022 book Chip War, Chris Miller argues that these building blocks of the digital age have now overtaken oil as the world’s most critical resource. To understand the modern world, we need to understand microchips.


A brief history of microchips


1955: The Harwell CADET became the first fully transistorised computer in Europe, and possibly the world. It used 324 transistors and the hardware was broken up into separate modules which had to be soldered together by hand, with masses of wires running between them. This method of connecting transistors required lots of space and left the machine prone to failure.


1958: Engineer Jack Kilby joined Texas Instruments and put some quiet time during the summer to good use by designing an ‘integrated circuit’. This innovation helped solve the ‘masses of wires’ problem afflicting transistor-based computers by grouping several transistors on the same piece of semiconductor material. These came to be known as ‘chips’, because of the way each circuit was ‘chipped’ from a wafer of silicon.



Electronics scientist Jay Lathrop started working at Texas Instruments the same year, bringing with him a process he had helped to invent: photolithography, which would revolutionise chip production. Look through a microscope the wrong way around and it makes large objects appear smaller. Using this principle, Lathrop’s technique enabled large patterns to be printed directly onto semiconductors in miniature. A version of this process is still used today.
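
To illustrate the reduction principle with a toy calculation of my own – the numbers are invented, and real photolithography involves far more than simple division – a pattern drawn large on a mask shrinks by the optical reduction factor when projected onto the wafer:

```python
# The "microscope backwards" idea: a big, easy-to-draw pattern
# is optically shrunk onto the chip. Illustrative numbers only.
mask_feature_um = 500    # hypothetical feature size as drawn on the mask
reduction_factor = 100   # hypothetical optical reduction

printed_feature_um = mask_feature_um / reduction_factor
print(f"{printed_feature_um} micrometres on the wafer")  # 5.0 micrometres
```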


1959: Robert Noyce, co-founder of Fairchild Semiconductor, had been working on the same problem as Jack Kilby and came up with an even better solution. Rather than grouping transistors on top of a semiconductor, his version placed them within it. This removed the need for any freestanding wires at all and created a flat microchip design with huge potential for further miniaturisation.


1965: The April 1965 edition of Electronics magazine included an article written by Gordon Moore, Director of R&D at Fairchild Semiconductor. Moore predicted that the number of transistors that could be squeezed onto a chip would double every 12 months for at least the next decade. This prediction, later dubbed ‘Moore’s Law’, proved to be correct. In 1975, Moore updated his forecast to a doubling roughly every two years. Incredibly, that rate of progress has more or less held ever since.


[Image: Apollo 11 on the moon in 1969, alongside a microchip.]

1969: Neil Armstrong and Buzz Aldrin took a giant leap for mankind when they set foot on the moon, but it was semiconductor technology that helped to get them there.


The story of the space race and the early development of microchips are inextricably linked. NASA was quick to recognise the potential of the microchip and, in the early 1960s, became Fairchild Semiconductor’s biggest customer. Its investment was crucial to getting this new technology off the ground and making it commercially viable. By the time Apollo 11 landed on the moon, Robert Noyce and Gordon Moore had both left Fairchild to found a new company called Integrated Electronics, or Intel for short.


1989: True to Moore’s Law, by 1989 Intel had managed to cram one million transistors onto a chip. The resulting 486 processor became the standard for desktop computers in the early 1990s. By 2006 Intel had produced the first chip with one billion transistors, with a two-billion-transistor chip following two years later.
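
Those milestones track Moore’s revised forecast remarkably well. Here’s a quick sketch of the arithmetic, using the round figures quoted above:

```python
import math

# Milestones quoted above: (year, transistors per chip)
milestones = [(1989, 1_000_000), (2006, 1_000_000_000), (2008, 2_000_000_000)]

# Average doubling time between consecutive milestones.
for (y1, n1), (y2, n2) in zip(milestones, milestones[1:]):
    doublings = math.log2(n2 / n1)
    print(f"{y1}-{y2}: one doubling every {(y2 - y1) / doublings:.1f} years")
# 1989-2006: one doubling every 1.7 years
# 2006-2008: one doubling every 2.0 years
```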


To be continued…

