AI
Jan 9, 2026
YouTube has long been the world's digital stage, a place where creators like MrBeast, KSI, PewDiePie, and thousands of others have built careers, launched brands, and transformed entertainment. But the platform's relationship with artificial intelligence shifted decisively when YouTube terminated two of its most prominent AI-driven channels. Screen Culture and KH Studio, known for producing hyper-realistic fake movie trailers using AI, were removed despite having more than two million subscribers and over a billion views between them. Their takedown underscored a growing concern inside YouTube: popularity alone is no longer enough if content crosses the line between creativity and deception.
by Kasun Illankoon, Editor in Chief at Tech Revolt
That moment marked a broader turning point. As artificial intelligence tools have advanced, a new conflict has emerged: should AI-generated videos be allowed, monetised, or even encouraged on the platform? In 2025, YouTube made it clear it will no longer turn a blind eye. Recent policy changes targeting AI-produced and repetitive content signal what could be a defining shift in the future of online video.
At the heart of this battle is YouTube’s updated YouTube Partner Program (YPP) policy, which took effect in mid-2025. The platform has moved to penalise, and in some cases remove monetisation from, low-effort videos that are “mass-produced, repetitious, or inauthentic.” This crackdown is widely interpreted as a response to the explosion of cheaply made AI content, often described as “AI slop,” flooding recommendations and search results.
Research suggests the scale of the issue is significant. Studies indicate that between 21% and 33% of content surfaced to new users may be AI-generated, reshaping what audiences encounter when they first land on YouTube. For viewers, this often means endless variations of the same format: automated voiceovers, recycled visuals, fabricated narratives, and little editorial judgement.
Supporters of YouTube’s tougher stance argue that this intervention is overdue. For creators who invest time, money, and originality into their work, AI slop represents an existential threat. Channels like MrBeast succeed not through volume, but through spectacle, planning, and human storytelling. Likewise, creators such as KSI built their audiences on personality, commentary, and cultural relevance, qualities that are difficult to replicate convincingly through automation alone.
YouTube has been careful to stress that it is not banning AI content outright. Creators can still use AI tools, provided there is clear human input, originality, and added value. Reaction videos, commentary, and educational content remain eligible for monetisation if they are not simply auto-generated at scale. This distinction matters, even if it has been lost in parts of the creator discourse.
Still, critics argue the crackdown does not go far enough. Many creators and users believe AI-generated content continues to overwhelm the platform, crowding out meaningful work and making discovery harder for emerging voices. Calls for clearer labelling, AI content filters, and stricter enforcement have grown louder, particularly as deepfake-style videos blur the line between fiction and reality.
The removal of fake AI movie trailers following pressure from major studios also exposed legal and ethical fault lines. When synthetic content mimics real intellectual property or misleads audiences, the consequences extend beyond YouTube’s creator economy and into copyright law, media trust, and misinformation.
There are compelling arguments on both sides.
Why the crackdown is good:
It protects viewer experience, preserves trust, and safeguards creators who rely on originality rather than automation. It also limits deceptive content that can mislead audiences and damage confidence in the platform.
Why it could be problematic:
Legitimate creators using AI responsibly may fear demonetisation, while inconsistent enforcement risks penalising experimentation. Without transparency tools, audiences still struggle to understand what they are watching.
Ultimately, YouTube's stance is not anti-AI; it is anti-inauthenticity. The platform is attempting to balance innovation with credibility, automation with accountability. Whether this approach succeeds will depend not just on policy, but on whether YouTube is willing to realign its algorithmic incentives with the values it claims to defend.
The battle between YouTube and AI is not about stopping technology. It is about deciding what kind of creativity deserves to be rewarded, and whether the future of content will be driven by human imagination, machine efficiency, or an uneasy mix of both.