
Deepfakes: Why Seeing Isn't Believing
Can we still believe what we see? Not anymore. The last few years have seen a rise in “deepfakes” – manipulated videos in which one person’s likeness is convincingly replaced with another’s, often with the intention of misleading viewers.
Deepfakes rely on a process called “deep learning” – a subset of machine learning – and are created using artificial neural networks, algorithms loosely modelled on the human brain. Most deepfakes are built with a pair of such networks known as a generative adversarial network (GAN): one network consumes huge amounts of footage and teaches itself to generate convincing imitations, while a second network critiques the output, detecting flaws that the first network then learns to correct.
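To make that “generate, critique, improve” loop concrete, here is a minimal sketch of a single GAN training step in PyTorch. Everything in it (the tiny network sizes, the flattened 28×28 image dimension, the learning rates and the `train_step` helper) is an illustrative assumption for the sake of brevity, not the architecture of any real deepfake system, which works on full video frames at vastly greater scale.

```python
# Minimal GAN sketch: a generator learns to produce fakes while a
# discriminator learns to spot them. All dimensions and names here
# are illustrative assumptions, not a real deepfake pipeline.
import torch
import torch.nn as nn

LATENT_DIM = 64   # size of the random "seed" vector (assumed)
IMAGE_DIM = 784   # a flattened 28x28 image, for illustration only

# Generator: turns random noise into a candidate image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMAGE_DIM), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks (0 = fake, 1 = real).
discriminator = nn.Sequential(
    nn.Linear(IMAGE_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. The critic improves: learn to tell real images from fakes.
    noise = torch.randn(batch, LATENT_DIM)
    fakes = generator(noise).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fakes), fake_labels))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. The generator improves: learn to fool the critic.
    noise = torch.randn(batch, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Usage, with random stand-in data in place of real training images:
train_step(torch.randn(32, IMAGE_DIM))
```

The key design point is the tug of war: every improvement in the discriminator’s ability to spot fakes becomes a training signal that makes the generator’s fakes harder to spot, which is why the results keep getting more convincing.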
Whilst deep learning brings huge benefits (as seen in the development of Google Translate, self-driving cars and image sorting), these same advances in facial mapping and synthesis undermine an assumption we take for granted: that video is objective evidence of what actually happened. So, what can be done about it?
