The regulatory wave is coming, and it’s picking up speed. Australia has banned social media for under-16s. France is eyeing September 2026 for enforcement. Denmark is watching closely. The US state of Virginia has proposed a one-hour daily limit on social media for minors. And last month, Disney was slapped with fines in the US for violating children’s privacy laws on YouTube.

If you make or manage kids content, you’d be forgiven for reading that list and feeling the walls close in.

It’s difficult to argue against regulation when discussing kids’ safety online. Many will imply that your motivation is somehow nefarious, too commercial, or that you simply don’t care enough about children. But in regulating bad content, and access to it, out of kids’ way, we’re creating conditions that could drive producers, creators and investors out of the kids market altogether. And when that happens, we won’t get a safer internet for children. We’ll get a content void. Market failure.

It’s not just government policy creating disincentives for the industry. Platform pressure, however overdue, carries its own risks. YouTube’s recent test illustrates the tension: AI moderation software rolled out in the name of child protection can morph into creator punishment.

YouTube’s AI determines whether users are under 18 based on their activity. Sounds reasonable, right? Protect kids even if they lie about their age? Except there’s a problem: if YouTube’s AI decides your audience skews young (even if you never tagged your content as Made for Kids), your revenue gets throttled automatically. Non-personalised ads. Algorithmic restriction. The full Made for Kids treatment, whether you opted in or not.

For creators already navigating the world’s most opaque algorithm, this is existential. You can’t strategise against criteria you can’t see; you’re essentially playing roulette with your livelihood. The predictable result is that creators will increasingly avoid making anything that could even unintentionally appeal to kids. When creators can’t understand the rules, they can’t adapt. And making a living on YouTube was already hard before AI started making revenue decisions for them.

The economics are breaking further.

Let’s be crystal clear about the business reality. Kids content on YouTube operates under COPPA restrictions that slash ad revenue by up to 95%, leaving CPMs (revenue per thousand views) so low as to be largely immaterial without significant scale. Now layer on platform bans. Add AI systems that can effectively recategorise your content, and the question is obvious: why would any rational person still be making kids content?

The unintended consequence is that independent creators decide kids content is too high-risk for too little reward.

We saw this with COPPA’s initial impact. The creators who could pivot did. The ones who couldn’t either accepted poverty-level returns or quit. What remained were scaled operations like Moonbug, which could weather low CPMs through sheer volume, and even then investors get twitchy.

So who’s left? The legacy players, who are already in defensive mode: Paramount, laying off Nickelodeon staff; Sky Kids, which ceased commissioning originals in 2025. They aren’t positioned to fill the gap; they’re already trying to survive their own structural challenges.

Same goes for public broadcasters: the BBC, the ABC and PBS are all commissioning less kids content whilst having to argue their own value to their respective governments. Most of them are broke, and the idea that regulation will naturally create opportunities for legacy or public media assumes resources and mandates that largely no longer exist.

A market failure scenario we must consider.

Regulation tightens. Platforms de-risk by restricting kids content. Independent creators exit because the economics don’t work. Legacy players cut budgets. Public broadcasters can’t scale to meet demand.

What fills the void?

The best case: a handful of well-funded producers make content that satisfies the regulatory checkboxes, but at the cost of creativity. We get content that nobody loves but nobody gets sued over.

The more realistic case: a fragmented landscape of VPNs, age-verification workarounds, kids hiding in corners of the internet, content shared on closed social platforms. The exact opposite of what the regulation intended to achieve.

None of this is an argument against protecting children online. The exploitative content, the attention-hacking tricks, the data harvesting – all of it needs addressing. But there’s a difference between smart regulation that creates guardrails and blunt regulation that burns down the ecosystem.

Right now, I believe we’re heading toward the latter. We’re creating a regulatory environment where the safest business decision is simply not to make kids content at all. The low barriers to entry that democratised content creation will be rebuilt, higher than ever.

What happens next?

We’re at an inflection point. The permissive era of YouTube-first strategies and low-barrier content creation is ending. That’s probably inevitable and not entirely bad. But if we’re not careful, we’ll regulate ourselves into a market failure where kids content becomes economically unviable for everyone except the organisations least equipped to innovate.

The regulatory reckoning is here and there’s no doubt we need it. But are we protecting children by improving content, or protecting them by ensuring there’s barely any content left to consume?

Welcome to 2026, where the challenge isn’t creating great kids media. It’s whether anyone will still find it worth creating at all.


About the Author

Jo Redfern is a leader in media, specialising in strategy for youth IP that entertains and educates across YouTube, social gaming (Roblox and Fortnite), TikTok, and broadcast.
