
Manipulating Memories with AI

Nowadays, taking photographs is so quick and easy that the idea of “saving your film” for a few choice shots feels like a lifetime ago. There are eighteen-year-olds today — who can drink, drive and vote — who have never known the rigmarole of taking camera film to the High Street for processing, nor the wait that followed before your holiday snaps were ready for collection.

It’s not an experience I’m especially nostalgic about, but it does feel like photos held more value “back then”.

A collage of cartoon images against a yellow background, featuring a central figure with arms outstretched. The surrounding panels depict various social activities: clinking beers, a 'rock on' gesture at a concert, a group of friends bowling and playing mini golf.

Families would spend time together arranging the prints in scrapbooks or display folders, reminiscing about this, that and t’other. You’d remember where each photograph was taken and, more importantly, why you took it.

Compare that with today, when many young people have camera rolls full of pictures — but mostly of themselves.

Taking a “selfie” isn’t a trend. It’s the way things are done now.

Do you remember the last time you asked a stranger to take a photo of you and your family? It was probably at a special family gathering — a milestone birthday, for instance — or at a beautiful or meaningful place where the surroundings warranted more than just a close-up of your squished-up faces squinting in the sunlight.

Years ago, being asked to take a photo of others — or offering to — was commonplace, especially on holiday. It was always a brief yet pleasant interaction. “Oh, yes please — that would be lovely. If you don’t mind?” you’d say, before explaining that the big round button on top of the camera was the one to press. It didn’t matter if they were in possession of an identical camera model.

Sometimes someone would mention the flash; other times you were prompted to “say cheese” or some other silly phrase to coax a smile. And that was it. You’d say thank you, never see that stranger again, and judge their artistic prowess two weeks later when you returned home from Snappy Snaps.

The impact of the Coronavirus pandemic may mean we’ll never return to the way things were. Inviting a stranger to stand within a metre of you and touch your belongings is just asking for trouble. Tourists rely on lengthy limbs or selfie sticks now. Why risk it when you can say “go, go gadget arm” instead?

Besides, if the first picture doesn’t work out you can always take 500 more.

The rise of smartphones has fundamentally changed the way people capture the moment. Bob closed their eyes? No problem, take another shot. Janet doesn’t like the way they look? “Say cheese part two”… But endlessly striving for perfection doesn’t end there.

Editing software — such as Photoshop — has made amateur photographers of us all. There’s red-eye removal, teeth-whitening, blemish blur… not to mention a myriad of toggles to make the sky bluer than blue and the grass greener than green.

But what of “unwanted” objects? And, no, I’m not talking about your mother-in-law…

This has been an area of focus in machine learning circles for some time now, with a cool, new AI tool emerging as the latest breakthrough. Thought up by Jia-Bin Huang at Virginia Tech after a visit to the zoo, the tool is designed to digitally remove unwanted obstructions — such as fences, raindrops and reflections — from photographs.

According to an article in New Scientist:

"The AI’s neural network analyses several frames in a movie, or in a “motion photo” taken by some smartphones, and identifies the various objects in the image. It then uses any slight change of angle between frames to calculate the distance to each object… The algorithm then separates out the objects into different layers and removes the foreground layer, providing an unobstructed view of the objects behind."

The article continues:

"Removing objects from images isn’t new, but this method of doing so is. Rather than labelling certain features as things to be removed or not, the neural network automatically discovers the distracting foreground objects in the process of learning. It is also less computationally draining."

The algorithm currently takes three minutes to process a 1296-by-864-pixel image. In time — by speeding up this process — Huang hopes the tool can run on smartphones too.
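The layering idea in the quote above can be sketched in miniature. The snippet below is a toy illustration of the parallax principle — entirely hypothetical, not Huang’s actual algorithm: objects close to the camera (a fence, say) appear to shift more between frames than the distant scene behind them, so thresholding the per-pixel motion yields a foreground “obstruction” mask that could then be stripped away.

```python
import numpy as np

def split_layers(shift_map, threshold=4.0):
    """Label pixels as foreground if their apparent motion between
    frames is large (near objects show more parallax than far ones)."""
    return shift_map > threshold

# Fake per-pixel shift magnitudes for a 6x6 image: a distant scene that
# moved ~1 px between frames, plus a near "fence" column that moved ~8 px.
shift = np.ones((6, 6))
shift[:, 2] = 8.0                 # the obstruction

mask = split_layers(shift)
print(int(mask.sum()))            # -> 6: one foreground pixel per row
```

The real system, of course, infers those shifts from video frames rather than being handed them — but the separation step rests on the same distance-from-motion logic.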

It certainly sounds like a more positive use of AI than previous developments.

A May 2018 article from New Scientist entitled “AI that deletes people from photos makes rewriting history easy” serves to demonstrate the dark side of manipulating media. Quoting James Hays from the Georgia Institute of Technology, the article concludes:

"Authoritarian regimes sometimes remove people who have fallen out of favour from photographs. They will likely continue to rely on humans to do this — but automatic editing, which may similarly distort reality, could be abused in the hands of those wishing to spread misinformation online… If there’s a danger, it’s about the scale."

This danger should not be underestimated. It’s one thing to remove a photobomber from a family portrait at the seaside, it’s another thing entirely to doctor “official” images. If nothing else, it’s an important reminder to stay vigilant about “fake news”. At one time, seeing was believing. Not anymore.

But it’s not all doom and gloom.

A smiling grayscale face features on the left, against a blue background. To the right, a vertical pink strip displays four emojis depicting different emotions: sad, angry, stressed and happy. An arrow points to the "happy" one, signifying a match with the photograph.

Take GAN Paint Studio, for example. In July 2019 the tool (first launched in November 2018) featured in an article on The Next Web. GAN stands for “Generative Adversarial Network”, and GANPaint allows you to manipulate an image — adding trees or grass to a scene, say — without ruining its original details. Related objects (a building, for instance) will be adjusted to keep the image looking realistic.

A quote from Antonio Torralba, part of the MIT-IBM Watson AI Lab, highlights an important point of difference from similar applications:

"All drawing apps will follow user instructions, but ours might decide not to draw anything if the user commands to put an object in an impossible location. It’s a drawing tool with a strong personality, and it opens a window that allows us to understand how GANs learn to represent the visual world."

The article continues:

"The team used a two-part network to train the model. It used generators to create samples of realistic photos, and discriminators to identify the differences between generated and real-life pictures. The input from discriminators is used to modify the generator model to make it better at creating realistic images. This method allows them to test other GANs and improve them by analyzing them for “artifact” units that need to be removed."
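That generator-versus-discriminator loop can be shrunk down to a toy example. The sketch below is my own minimal illustration — not the MIT-IBM Watson AI Lab’s code, and all names and numbers are invented: a one-line generator learns to mimic samples from a simple bell curve, nudged only by the discriminator’s feedback, exactly the two-part arrangement the article describes.

```python
import numpy as np

# Toy 1-D GAN (illustrative only). Generator g(z) = a*z + b tries to
# mimic samples from N(3, 0.5); discriminator D(x) = sigmoid(w*x + c)
# tries to tell real from fake. D's feedback trains G.

rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

a, b = 1.0, 0.0        # generator parameters
w, c = 0.0, 0.0        # discriminator parameters
lr, batch = 0.05, 64

for step in range(3000):
    real = rng.normal(3.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * (np.mean((dr - 1) * real) + np.mean(df * fake))
    c -= lr * (np.mean(dr - 1) + np.mean(df))

    # Generator step: adjust a, b so D mistakes fakes for real.
    coeff = sigmoid(w * fake + c) - 1
    a -= lr * np.mean(coeff * w * z)
    b -= lr * np.mean(coeff * w)

fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 1000) + b))
print(round(fake_mean, 2))   # close to 3.0 once the generator has learned
```

GANPaint operates on images with deep networks rather than two scalars, but the adversarial back-and-forth — generator improving because the discriminator keeps catching it out — is the same.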

The best part? You can try it for yourself!

The site may look primitive, but there’s still fun to be had. An introductory paragraph explains:

"#GANPaint draws with object-level control using a deep network. Each brush activates a set of neurons in a GAN that has learned to draw scenes."

It remains to be seen whether greater AI intervention will be welcomed in the future. For years, celebrities and the public alike have railed against Photoshopped images showing models with unattainably smooth skin, distorted figures and “perfect” hair. Surely further advancements in AI will only serve to perpetuate the problem.

Images may look “good” after editing, but if they no longer provide an accurate representation of the subject then what’s the point? We’re only robbing ourselves of authentic memories — the very thing that cameras were invented for in the first place.

