These days, the internet “looks a hell of a lot more like Las Vegas than ‘Little House on the Prairie.’”
That’s how Andrew Ferguson, chair of the Federal Trade Commission, described the online experience of children in his opening remarks for an FTC workshop on age verification last week.
The event took place on Wednesday, January 28, which also happened to be Data Privacy Day, an annual “holiday” of sorts to raise awareness about privacy issues and encourage better data protection practices.
A belated salute to all who celebrate. May your consent be informed and your opt-outs be honored.
Them’s the rules
Jokes aside, the timing didn’t feel accidental, and the workshop’s topic went straight to the heart of one of the FTC’s biggest focus areas: protecting kids online.
“The internet we encounter today does not look like one even modestly influenced by the choices of parents with small children,” Ferguson said.
Which brings us to the Children’s Online Privacy Protection Act (COPPA), the federal law enforced by the FTC that restricts how online services collect and use personal data from children under 13 without parental consent.
COPPA has been on the books for more than 25 years. By this point, some of the kids born the year it took effect in 2000 have kids of their own, for Pete’s sake.
The commission amended the COPPA Rule last January – the first update to the rule since 2013 – imposing new restrictions on the collection, retention and sharing of children’s personal information, including limits on targeted advertising and tighter parental consent requirements.
But the updated rule is still only patching a framework that predates social media, apps, real-time ad auctions, algorithmic feeds and AI-driven recommendations. And now that same framework is being stretched to cover modern age-verification tools that didn’t even exist when COPPA was written.
That tension – between an aging and patched-up law and an internet that’s grown up faster than its safeguards – is part of what the commission is probing.
The FTC is using the workshop as a vehicle to gather information and input that could feed into a policy statement on age verification and, perhaps, additional tweaks to the COPPA Rule.
But it’s important to note that this gathering wasn’t about whether kids should be online – they already are, of course – but rather about what meaningful age verification might actually look like in practice, and at what privacy cost.
Safe by design
From biometric scans to third-party verification systems, every option carries trade-offs, and the trade-offs aren’t limited to kids.
Any system that forces more rigorous age checks will inevitably sweep in adults, too, raising First Amendment questions about access to legal content – primarily pornography – and practical questions about how much friction and data collection people will tolerate just to browse the web.
But despite those valid concerns, the momentum is moving in only one direction.
The feds aside, states are racing ahead with their own social media and “harmful to minors” laws that effectively require age checks, meaning that age verification is no longer optional.
For anyone in the ad-supported ecosystem, that means rethinking how audiences are identified, targeted and tracked – and doing it (ideally) in a way that both satisfies privacy regulators and doesn’t mess with revenue.
The only truly sustainable way to get there is to bake in some good old privacy by design from the beginning: collect the minimum data needed to complete a task, limit how that data is used and reused, delete what you don’t need and treat age assurance as part of the product architecture rather than a bolt-on or a compliance fix.
It takes effort, but it’s also what businesses should be doing anyway.
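To make that concrete, here’s a minimal sketch in Python of what data minimization might look like for an age gate. (The names, structure and 30-day retention window are illustrative assumptions, not anything the FTC or the workshop prescribed.) The idea: keep the verdict, not the evidence behind it, and delete even the verdict on a schedule.

```python
from dataclasses import dataclass
from datetime import date, datetime, timedelta, timezone

# Hypothetical retention window; the point is that one exists at all.
RETENTION = timedelta(days=30)

@dataclass(frozen=True)
class AgeAssuranceResult:
    """The only record kept: a verdict, not the evidence behind it."""
    user_id: str           # pseudonymous ID, not a name or email
    is_over_13: bool       # the single fact the product actually needs
    verified_at: datetime  # kept solely to enforce the retention window

def verify_age(user_id: str, birthdate: date) -> AgeAssuranceResult:
    """Derive the verdict, then let the raw input fall out of scope."""
    age_days = (date.today() - birthdate).days
    # The birthdate is never written to storage; only the verdict survives.
    return AgeAssuranceResult(
        user_id=user_id,
        is_over_13=age_days >= 13 * 365.25,
        verified_at=datetime.now(timezone.utc),
    )

def purge_expired(results: list[AgeAssuranceResult]) -> list[AgeAssuranceResult]:
    """Delete what you don't need: drop verdicts past the retention window."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [r for r in results if r.verified_at >= cutoff]
```

The design choice worth noticing is that the sensitive input never touches storage, so there’s nothing riding around in a database waiting to be breached, subpoenaed or repurposed for targeting.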
As child privacy advocate Amelia Vance, founder and president of the Public Interest Privacy Center, a nonprofit focused on student and child privacy, pointed out during the workshop, a lot of the risk simply disappears “before the age” – as in, the age gate – if platforms take a beat to consider privacy and safety from the start.
“Many risks disappear when platforms build with safe defaults,” Vance said. “And this isn’t ‘kid-proofing’ the internet – this is limiting tracking, not automatically allowing people to talk with strangers … [and not] allowing people to have default public profiles. Safety by design reduces what is otherwise riding on age assurance.”
🙏 Thanks for reading! And regards from my beloved feline chocolate milk fiend. As always, feel free to drop me a line at allison@adexchanger.com with any comments or feedback.
