Half of the world’s student population was out of school earlier this year due to the COVID-19 pandemic, according to UNESCO. Over the past few months, these children have presumably spent all of their time at home, but only a portion of their day focusing on school work. They’ve had more free time on their hands, which, in turn, means more time spent on the internet. This presents a very real threat, leaving minors vulnerable to online harms.
This vulnerability won’t magically disappear once students return to school, which nearly all in the UK are expected to do this month.
Thankfully, the UK Government has been working hard to tackle this issue. It is due to publish its full response to the Online Harms bill imminently and the Home Affairs Committee recently sought evidence on online harms arising from the COVID-19 lockdown period and the adequacy of the government’s proposals to counter them. Additionally, the ICO Age Appropriate Design Code took effect on Sept. 2.
Overall, the government is aiming to deliver a higher level of protection for children and expects companies to “use a proportionate range of tools, including age-assurance and age-verification technologies, to prevent children accessing inappropriate behaviour, whether that be via a website or social media.” But what does this actually mean for businesses, and how can the government set a realistic precedent when it comes to age verification?
Vanity age verification vs. true age verification
Not all age verification processes are equal, and clearly some don’t work as well as they should. Recent research by Jumio found that 54% of UK age-restricted sites have been unable to keep minors from accessing their products or services despite 67% believing it is their responsibility to prevent this from happening.
Organizations operating in age-restricted spaces use a range of methods to prohibit minors from accessing sites and products, the weakest of which is asking users to self-report their own age when accessing a website.
Taking a risk-based approach
Regulated industries such as financial services, which also fall under the age-restricted banner, must comply with know your customer (KYC) and anti-money laundering (AML) regulations, which govern how they verify the identities of new customers. The premise is that knowing your customers — performing identity verification, reviewing their financial activities and assessing their risk factors — can keep money laundering, terrorism financing and other types of illicit financial activities in check.
The UK government needs to champion a similar risk-based approach to age verification. The greater the likelihood of social harm, the greater the need for robust, non-anonymous methods of age verification. According to the Protecting Minors Report, businesses selling products, such as alcohol or fireworks, are less likely (50%) to depend on weak age-verification methods than those offering a service, like pornography (71%). This is somewhat intuitive, since any requirement to divulge a user’s actual identity is likely to result in significant customer abandonment on pornographic sites.
Overall, 95% of those surveyed say it’s important to ensure minors do not access age-restricted services, which shows that businesses want to do the right thing. Nevertheless, harms need to be addressed and thought needs to go into how minors can truly be protected.
Face-based biometrics is the most thorough method of truly determining an identity and, by extension, age. Most organizations know that there are better, stronger methods of age verification than having the user self-report that information, but it’s generally not in their own self-interest to leverage these technologies.
Protecting anonymity is one reason for using weaker forms of age verification, but this can be dangerous. Many porn (and even dating) sites know many of their members do not want to divulge their real identities — sometimes this is out of embarrassment, but some wish to remain anonymous because they intend to inflict harm or perpetrate fraud. Maintaining a balance between anonymity and the right amount of identity verification can be tricky, but in cases where clear harm can occur to a minor, age and identity verification should be compulsory.
Improving credibility, security and efficiency with one method
When done right, robust age verification can protect minors from online harms without having a negative impact on the customer experience or conversion rates.
It starts by requiring a user to capture a photo of their government-issued ID. Identity verification solutions can extract personal information, such as date of birth, from ID documents, which can be used to calculate the current age of the person creating the account, and can also determine if the document has been manipulated. Next, the user needs to take a corroborating selfie, which is compared to the ID to determine that the person possessing the ID is who they claim to be, and certified liveness detection ensures that the person is physically present. After the age and identity of a user has been verified online, biometric-based authentication can ensure that all future logins and transactions are made by the original account owner.
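The age-calculation step in that flow is simple but easy to get wrong. As a minimal sketch — not any vendor's actual API, and assuming the date of birth has already been extracted from the ID document — the check might look like:

```python
from datetime import date


def age_from_dob(dob: date, today: date) -> int:
    """Return completed years of age as of `today`."""
    years = today.year - dob.year
    # Subtract one year if the birthday hasn't occurred yet this year.
    if (today.month, today.day) < (dob.month, dob.day):
        years -= 1
    return years


def meets_minimum_age(dob: date, minimum_age: int, today: date = None) -> bool:
    """True if the person is at least `minimum_age` years old."""
    today = today or date.today()
    return age_from_dob(dob, today) >= minimum_age


# A user born 15 June 2008, checked on 1 September 2020, is 12.
print(age_from_dob(date(2008, 6, 15), date(2020, 9, 1)))           # 12
print(meets_minimum_age(date(2008, 6, 15), 18, date(2020, 9, 1)))  # False
```

The birthday comparison matters: naively subtracting years would overstate a user's age for part of the year, which is exactly the kind of error an age-gate cannot afford.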
Striking a balance
One strategy being put forth by leading dating sites is to offer two options. The first is to let users who want to preserve their anonymity create accounts with a limited number of identity checks. The second is for users who want to earn a certification badge to voluntarily undergo identity and age verification checks. With this approach, members of the dating site can self-select whether they want to only date those who have earned the verification badge. But even with this “free market” approach, safeguards need to be in place to protect members from catfishing, fraudulent schemes and physical harm, regardless of whether the user has been verified or not.
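That opt-in badge model reduces to a simple matching rule: unverified profiles are hidden from members who have chosen to see only verified ones. A minimal sketch, with entirely hypothetical field names:

```python
from dataclasses import dataclass


@dataclass
class Member:
    name: str
    verified: bool        # has earned the voluntary verification badge
    verified_only: bool   # opted to see only verified profiles


def visible_matches(viewer: Member, members: list[Member]) -> list[Member]:
    """Return the profiles the viewer can see under the opt-in badge model."""
    return [
        m for m in members
        if m is not viewer and (not viewer.verified_only or m.verified)
    ]


members = [
    Member("A", verified=True, verified_only=True),
    Member("B", verified=False, verified_only=False),
    Member("C", verified=True, verified_only=False),
]
print([m.name for m in visible_matches(members[0], members)])  # ['C']
```

Note that this only filters visibility; as the paragraph above argues, it is no substitute for platform-level safeguards against catfishing and fraud.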
After all, it is completely appropriate to hold any organization that profits from selling age-restricted products and services accountable for the potential harms caused by their platform, depending on the industry and the likely harm of onboarding a bad actor. The UK Government should absolutely champion this approach if it is to truly protect minors from online harms.