Technology - August 30, 2025

UK’s Online Safety Act Sparks Global Push for AI-Powered Age Verification Systems to Protect Children from Online Harm

The global digital safety movement has seen a surge in artificial intelligence-powered products designed to shield children from harmful online content. The shift is especially visible in legislation such as the U.K.’s Online Safety Act, which requires tech companies to protect minors from age-inappropriate content, hate speech, bullying, fraud, and child sexual abuse material (CSAM). Failure to comply can result in penalties of up to 10% of a company’s global annual revenue.

Similar legislation is gaining traction in the U.S., where the Kids Online Safety Act would hold social media platforms accountable for preventing harm to children. These regulatory measures have already prompted significant changes at major tech companies, with Pornhub and other adult sites implementing age verification systems to restrict access to their content.

Beyond pornography sites, companies such as Spotify, Reddit, and X have also introduced age assurance mechanisms to safeguard users from explicit or inappropriate materials. However, these measures have sparked privacy concerns within the tech industry.

At the forefront of age verification technology is Yoti, a company whose AI estimates a user’s age from facial features, accurate to within roughly two years for individuals aged 13 to 24. Through a partnership with the U.K.’s Post Office, Yoti aims to capitalize on the country’s emerging market for government-issued digital ID cards. While Yoti is a prominent player in identity verification software, it faces competitors including Entrust, Persona, and iProov.

The rise of digital identification methods has spurred debates over privacy issues and potential data breaches. According to Pete Kenyon, a partner at law firm Cripps, “Trust is key and will only be earned by the use of stringent and effective technical and governance procedures adopted in order to keep personal data safe.”

Rani Govender, policy manager for child safety online at British child protection charity NSPCC, asserts that privacy can be maintained while ensuring child safety. “Tech companies must make deliberate, ethical choices by choosing solutions that protect children from harm without compromising the privacy of users,” she stated. “The best technology doesn’t just tick boxes; it builds trust.”

In addition to software solutions, hardware innovations are also being developed to safeguard children online. Finnish phone manufacturer HMD Global recently launched the Fusion X1, a smartphone with built-in AI that prevents minors from recording, sharing, or viewing sexually explicit content across all apps. The technology was developed by SafeToNet, a British cybersecurity firm focused on child safety.

James Robinson, vice president of family vertical at HMD, emphasized the need for further advancements in this area, stating, “We believe more needs to be done in this space.” He added that HMD had conceived the idea for child-friendly devices before the Online Safety Act came into effect, and welcomed the government’s increased focus on child safety.

The launch of HMD’s child-oriented smartphone aligns with growing momentum behind the “smartphone-free” movement, which encourages parents to limit their children’s access to smartphones. Child safety has also become a priority for digital behemoths like Google and Meta, both of which have faced criticism for exacerbating mental health issues in children through online bullying and social media addiction. “For years, tech giants have stood by while harmful and illegal content spread across their platforms, leaving young people exposed and vulnerable,” Govender says. “That era of neglect must end.”