Big news for anyone running an online platform: last week, Ofcom published its long-awaited Protection of Children Codes of Practice under the Online Safety Act.
With children’s online engagement at an all-time high, the new rules set out over 40 practical measures that digital services must implement – including filtering out harmful content, enforcing highly effective age checks, and giving children easier ways to report content that harms them.
The Online Safety Act, introduced to make the UK “the safest place in the world to be online,” places legal obligations on online services to prioritise user safety, particularly for children.
If your service is used by under-18s, compliance is no longer optional. Services must complete children’s risk assessments by 24th July 2025 and implement appropriate safety measures by 25th July 2025.
Failure to comply could result in fines of up to £18 million or 10% of global revenue, whichever is greater.
There is a lot to digest from the codes, but I am keen to share some top-line thoughts on the new requirement for highly effective age checks – now a necessary step to create safer, age-appropriate experiences.
Highly effective age assurance: a new compliance standard
Platforms must now implement ‘highly effective’ age assurance methods that are accurate, robust, reliable, and fair. Crucially, outdated methods such as self-declaration or simple debit card checks — now widely seen as ineffective — are no longer acceptable.
Acceptable methods include:
- ID verification: Uploading an official ID, combined with a live image match to confirm the uploader’s identity.
- Facial age estimation: Using AI to accurately estimate a user’s age from a selfie, without storing images or personal data. Once the technology returns an estimated age, the image is deleted. No documents are needed and no personal details are shared.
- Digital ID app: Sharing a verified age attribute securely with a platform, without revealing any other personal information.
Privacy and user choice are paramount
Platforms are encouraged to offer users a choice of methods to build trust and acceptance of age assurance; several of these methods allow users to share only their age or age range.
This data minimisation approach is better for both users and platforms, which only need to know that users are the right age to access either the platform itself or certain types of content.
People should also be able to choose between different methods to prove their age online. This lets people select the method they trust and the one they feel the most comfortable with. It also ensures age checking is as inclusive and accessible as possible.
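To make the data-minimisation idea concrete, here is a minimal, purely illustrative sketch. All names are hypothetical: it assumes an age-assurance provider that checks a date of birth on its own side and passes the platform nothing but a yes/no attribute and a coarse age range.

```python
# Hypothetical sketch of data minimisation in age assurance.
# The platform never sees a date of birth or identity document -
# it receives only the minimal age attribute it needs.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgeAttribute:
    """The only claim shared with the platform (names are illustrative)."""
    over_18: bool
    age_range: str  # e.g. "13-15", "16-17", "18+"

def provider_check(birth_year: int, current_year: int) -> AgeAttribute:
    """Runs on the assurance provider's side; the birth year never leaves it.
    (Year subtraction is a simplification for illustration.)"""
    age = current_year - birth_year
    if age >= 18:
        return AgeAttribute(over_18=True, age_range="18+")
    if age >= 16:
        return AgeAttribute(over_18=False, age_range="16-17")
    return AgeAttribute(over_18=False, age_range="13-15")

def platform_gate(attr: AgeAttribute) -> str:
    """The platform decides access from the attribute alone."""
    return "adult-content-allowed" if attr.over_18 else "age-appropriate-experience"
```

The design choice this illustrates: because the platform's decision depends only on `AgeAttribute`, swapping in a different assurance method (ID verification, facial age estimation, a digital ID app) changes nothing on the platform's side.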
Undoubtedly, Ofcom’s new Child Protection Codes are a turning point for digital safety. We welcome the fact that Ofcom has confirmed that if services set a minimum age (for example, 13+), they must apply highly effective age checks, or else assume younger children are present and tailor all their content accordingly.
We expect Ofcom and the ICO to continue looking jointly at how age assurance can support age checks below 18 – for instance at 13 and 16 – to enable age-appropriate design of services.
The Online Safety Act has changed the rules – now comes the real work.
The post Age assurance has become a non-negotiable appeared first on UKTN.