
A Child-Safe Internet

In years gone by, parents would worry about letting their children play out in the street. Nowadays, it’s the potential dangers lurking online that are just as likely to keep mums and dads awake at night.


The explosion of games, apps, and video content has happened faster than regulators and the rest of us can possibly keep up with. In that respect, the internet is like the new Wild West.


Some countries are now turning to facial recognition to identify the age of those consuming content on the web. Could this hold the key to creating a child-safe internet? Or will it represent a further erosion of our online privacy?

A cartoon image of a social media profile page, with a mouse cursor hovering over the 'About' tab.

In an attempt to tackle gaming addiction in China, internet users who want to play games after 10pm now need to verify their age — or get booted off.


Chinese gaming firm Tencent told The Guardian:


“We will conduct a face screening for accounts registered with real names and which have played for a certain period of time at night... Anyone who refuses or fails the face verification will be treated as a minor, as outlined in the anti-addiction supervision of Tencent’s game health system, and kicked offline.”

Could a similar approach be replicated elsewhere?


One big challenge seems to be finding a consistent and reliable way of verifying internet users’ ages that governments, businesses and individuals can agree on. While we can all put parental controls in place with our internet service providers and phone operators, in reality these are far from foolproof, especially for a generation of tech-savvy young people.

As the Guardian piece explains, in the offline world we have two main approaches to age verification — checking some form of official ID, or simply looking at people (after all, there’s no need for ID cards to stop seven-year-olds being served in a bar).


These long-established simple safeguards are important because — let’s be honest — who among us didn’t push the boundaries as a child and, say, try to sneak into a film we were too young to see?

But the traditional approach faces a double whammy of challenges in the digital age. There is an almost infinite supply of content online to be regulated, and no consistent way of verifying the identity and age of people looking at this content.

Regulators have acknowledged the problem but potential solutions have been slow to materialise.


Back in 2017 there was a lot of fanfare when the UK Government introduced the Digital Economy Act. It proposed doing something which had never been done before — introducing a mandatory requirement for online age verification. It was relatively narrow in scope, applying only to sites showing content unsuitable for children, but was welcomed by many as a step in the right direction for child safety.


But then… nothing. Various deadlines were missed and the plug was finally pulled in 2019.


A cartoon image of a desktop computer. On the screen is an image like a typical YouTube page, featuring a sad cat video compilation.

Sadly, while regulators drag their heels, the real-world impact of the lack of online protections for children continues to grow. In the UK, it is estimated that at least 1.4 million children view inappropriate content online every month. This, in turn, is having a very worrying effect on the wellbeing of the impressionable children viewing it.

Could this be where AI steps in? There are encouraging signs that increasingly sophisticated AI could eventually be a reliable 21st-century equivalent to the responsible adult at the checkout, bar or box office. It can verify the age of potential customers without necessarily needing to collect personal data.

One company offering such a service is Yoti. It has partnered with CitizenCard to offer a digital version of its ID, and is working with self-service supermarkets to experiment with automatic age recognition of individuals. Following tests against people from a wide range of demographics, it believes its system is as good as a person at telling someone’s age.


It reports that operating a ‘Challenge 21’ policy using its system — asking for strong proof of age from anyone who looks under 21 — would catch 98% of 17-year-olds and 99.15% of 16-year-olds. If this approach is scalable and can be implemented widely online, the impact would be huge.
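To make those numbers a little more concrete, here is a minimal Python sketch of how a ‘Challenge 21’ style gate works. The age estimate would come from a facial age estimation model like Yoti’s; the threshold and function below are illustrative placeholders, not Yoti’s actual code. The three-year buffer between the legal age of 18 and the challenge threshold of 21 is what absorbs the estimator’s error, which is why so few 16- and 17-year-olds slip through.

```python
CHALLENGE_THRESHOLD = 21  # buffer above the legal age of 18 absorbs estimation error


def checkout_decision(estimated_age: float) -> str:
    """Decide what a self-service checkout should do, given an AI age estimate.

    The estimate would come from a model such as Yoti's facial age estimation;
    this function simply applies the 'Challenge 21' policy described above.
    """
    if estimated_age >= CHALLENGE_THRESHOLD:
        return "approve sale"             # looks comfortably over 18
    return "request strong proof of age"  # looks under 21, so challenge them


# A 17-year-old whose age is over-estimated at 19.4 is still challenged,
# because 19.4 sits below the 21 threshold. Only estimates of 21+ skip the check.
print(checkout_decision(19.4))  # request strong proof of age
print(checkout_decision(24.0))  # approve sale
```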


An article on BiometricUpdate quoting Yoti CEO Robin Tombs explains:

“The combination of biometric liveness checks with AI age estimation will prevent almost anyone between seven and ten years old from joining social media platforms. While some 11 and 12-year-olds would get through, the number of 13-year-olds needing parental consent or to use the Yoti app would be limited.


“People judged by the system to be 25 or more years old will be given the choice to have their age algorithmically confirmed, after which the image is deleted. People judged as 24 or younger can use a Yoti digital ID to share their over 18 status anonymously, allowing access to age-restricted goods at physical retailers, gambling, and online services like adult content or ecommerce websites.”
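To visualise the flow Tombs describes, here is a rough Python sketch of that decision logic. The helper functions (for the liveness check, the age estimate and the digital ID fallback) are hypothetical stand-ins rather than Yoti’s actual API; the point is the order of the steps and the 25-year-old threshold.

```python
AGE_CONFIRMATION_THRESHOLD = 25  # estimates of 25+ are accepted algorithmically


def age_gate(selfie, passes_liveness_check, estimate_age, has_digital_id_proof) -> bool:
    """Return True if a user may access an 18+ service, following the quoted flow.

    The three callables are placeholders for a liveness detector, an age
    estimation model and an anonymous digital ID check respectively.
    """
    # 1. Liveness check: make sure this is a live person, not a photo of a photo.
    if not passes_liveness_check(selfie):
        return False

    # 2. AI age estimation on the selfie.
    estimated_age = estimate_age(selfie)

    # 3. Judged 25 or older: confirm algorithmically, then discard the image
    #    (the quote stresses the image is deleted once age is confirmed).
    if estimated_age >= AGE_CONFIRMATION_THRESHOLD:
        del selfie
        return True

    # 4. Judged 24 or younger: fall back to sharing an anonymous
    #    over-18 attribute from a digital ID such as Yoti's.
    return has_digital_id_proof()
```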


In June 2022, Yoti announced a partnership with Meta-owned Instagram:


“If someone in the US changes their date of birth on Instagram from under the age of 18, to 18 or over, they are required to verify their age… Yoti’s facial age estimation will be one of three options people will have to do this. The other two will be by uploading an ID document and social vouching. These are both powered by Meta.”


Arguably, it’s a step forward, but the trade-off between privacy and submitting photos to train the neural network behind the algorithm is bound to raise some parental concerns. Yoti has anticipated this.


A cartoon image of a child stood alone in a blank space, holding a school bag.

A sandbox run by the UK Information Commissioner’s Office (ICO), together with Yoti and partners like SaaS company GoBubble (GoBubbleWrap), is ensuring that age estimation technology for children under 13 is developed with privacy in mind:

“The company is building a biometric dataset by asking parents to submit a photo of their child through a web portal, specifically for the purpose of training age estimation algorithms for improved performance. Yoti points out that the process is not facial recognition, as it does not match faces. It also does not reveal any personal information, but simply estimates age based on analysis by a neural network.


“Yoti is also launching an education campaign and video competition to help parents and children understand AI and related concepts, including dataset consent and the differences between facial recognition and biometric analysis.”
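The distinction in that last line is worth spelling out. Facial recognition answers “who is this?” by matching a face against a database of known identities, whereas age estimation answers “roughly how old is this face?” and needs no identity database at all. The sketch below is illustrative only, with placeholder logic rather than any real model:

```python
from typing import Dict, List, Optional


def recognise_face(face_embedding: List[float],
                   identity_database: Dict[str, List[float]],
                   threshold: float = 0.8) -> Optional[str]:
    """Facial RECOGNITION: match a face against stored identities.

    Requires a database of known people's face templates and returns *who*
    the person appears to be. This is what Yoti says its system does NOT do.
    """
    def similarity(a: List[float], b: List[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norms = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
        return dot / norms if norms else 0.0

    best_match, best_score = None, 0.0
    for name, template in identity_database.items():
        score = similarity(face_embedding, template)
        if score > best_score:
            best_match, best_score = name, score
    return best_match if best_score >= threshold else None


def estimate_age(face_image) -> float:
    """Age ESTIMATION: predict a single number from one image.

    No identity database, no matching, nothing stored about who the person is.
    A real system would run a neural network here; this stub only shows the
    shape of the input and output.
    """
    return 27.3  # placeholder estimate
```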


While we wait for legislation to catch up with the digital world our children are growing up in, innovative tech-based solutions pioneered by Yoti and others appear to be the best hope we have. Thanks to AI, we may at least be able to offer young people the same protections online that we can in the real world.
