AI
Feb 23, 2026
A short video clip begins circulating online. Tom Cruise and Brad Pitt appear locked in a high-octane rooftop fight sequence. The camera glides cinematically. The lighting is theatrical. The choreography feels authentic. The facial expressions are uncannily precise.
But the scene was never filmed.
It was generated using Seedance 2.0, an AI-powered video tool developed by ByteDance, the parent company of TikTok.
Within days, the clip amassed millions of views, fuelling debate across film circles, legal forums and technology communities. The realism was not the only shock. The fact that it could be produced without a studio, camera crew or actor consent was what unsettled the industry.
These are not studio productions. They are AI outputs.
The implications extend far beyond entertainment.
By Kasun Illankoon, Editor in Chief at Tech Revolt
According to PwC, the global entertainment and media industry is projected to surpass US$2.8 trillion by 2027. Meanwhile, generative AI could contribute between US$2.6 trillion and US$4.4 trillion annually to the global economy, according to McKinsey. As AI video tools mature, they intersect directly with one of the world’s most valuable creative industries.
The question is no longer whether synthetic media is viable. It is what happens next.
For decades, high-end video production required infrastructure: cameras, crews, locations, logistics and budgets. Now, a laptop and a prompt may be enough to simulate comparable visuals.
Monesh Konchady, General Manager at Urban Edge Films, believes the shift is evolutionary rather than destructive.
“With everyone being able to generate high-quality video through AI, traditional production models won’t disappear, but they will evolve, like they did when we moved from film to computers,” he said.
“High-end storytelling, strategy, and brand thinking will become more valuable, while execution-heavy tasks become faster and more cost-efficient. Production shifts from being equipment-driven to idea-driven.”
The distinction is critical. Large-scale shoots will likely remain essential for feature films, live broadcasts and sporting events. However, commercial production, branded content, explainers and digital campaigns may increasingly incorporate AI-assisted workflows.
“The reduced costs and short turnarounds help create content at a rapid pace, which works wonderfully for a content-hungry economy,” Konchady added.
This aligns with broader industry trends. Global digital advertising spend surpassed US$600 billion in 2023, with brands under constant pressure to produce more content across more platforms. AI-generated video promises to compress production timelines from weeks to hours.
But the structural shift may alter where value sits in the production chain.

Photo: Monesh Konchady, General Manager at Urban Edge Films
“Agencies and production houses will have to reposition themselves from logistics co-ordinators to creative consultants,” Konchady said. “The value will move upstream to story building, ideation and unique narration designs, with prompt architecture being the core skill.”
Hybrid workflows, he suggests, will dominate.
“AI will not replace cinematographers, editors, or directors; it will augment them. Professionals who adapt will gain leverage. The industry won’t shrink but will become more fluid. Those who resist automation risk being replaced not by AI, but by creators who use AI effectively.”
If production models are evolving, legal frameworks are under greater strain.
Many generative models are trained on vast datasets scraped from publicly available material — including copyrighted works. Laws in most jurisdictions were not designed to address machine learning systems capable of recombining patterns from millions of images, scripts or performances.
Konchady argues that legal reform must move quickly.
“Copyright laws will have to evolve rapidly to adapt to this new landscape,” he said.
“Key aspects need to be looked at immediately. If AI generates content, who owns it: the user, the developer, or the model provider? Clear legal frameworks must define whether AI is a tool (like a camera) or a co-creator.”
Ownership is only one dimension. Training data transparency may become the larger battleground.
“Many AI models are trained on vast amounts of copyrighted material. There must be transparency around datasets and fair compensation models for original creators whose work contributed to training systems. Licensing pools or opt-in registries may become necessary.”
Globally, lawsuits related to generative AI and copyright are increasing. Creative unions and rights holders have called for clearer labelling and consent mechanisms. Policymakers in the European Union, United States and parts of Asia are considering regulatory approaches, but frameworks remain fragmented.
Identity rights may prove even more urgent.
“Most importantly, identity rights. Deepfakes and voice cloning need strict laws and guidelines. Anybody replicated digitally should be aware, authorized and compensated.”
The viral Seedance examples underscore this tension. When a synthetic clip resembles a real actor or public figure, the distinction between parody, fair use and infringement becomes blurred.
Beyond economics and law lies a more fundamental issue: trust.
According to the Edelman Trust Barometer, global trust in media has fluctuated significantly over the past decade. The rise of AI-generated imagery and video adds another layer of complexity. If audiences can no longer assume that what they see is authentic, verification becomes central.
Konchady does not see synthetic media as inherently corrosive.
“Synthetic media does not inherently erode trust, but opacity does. Trust will depend on transparency.”
He argues that responsible deployment could enhance rather than undermine credibility.
“When clearly labelled and responsibly used, AI-generated content can increase efficiency, accessibility, and personalization. The key is disclosure standards. Watermarking, metadata tagging, and platform-level labelling systems must become universal.”
Several technology companies are already experimenting with digital watermarking and content provenance standards. Industry coalitions are developing cryptographic verification systems designed to track the origin of media files.
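The core mechanism behind these provenance systems can be illustrated with a minimal sketch. Standards such as C2PA bind far richer, cryptographically signed metadata to a file; the example below only shows the underlying idea of tying a manifest to a file's exact bytes with a content hash, so that any later alteration is detectable. The `Studio X` creator name is a hypothetical placeholder, not drawn from any real system.

```python
import hashlib


def make_provenance_record(media_bytes: bytes, creator: str, tool: str) -> dict:
    """Build a minimal provenance manifest for a media file.

    A real standard (e.g. C2PA) would also sign this record and chain
    edit history; here we only bind the manifest to the file's bytes.
    """
    return {
        "creator": creator,
        "generating_tool": tool,
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }


def verify_provenance(media_bytes: bytes, record: dict) -> bool:
    """Return True only if the file's bytes still match the manifest."""
    return hashlib.sha256(media_bytes).hexdigest() == record["sha256"]


# Stand-in bytes for a generated clip (hypothetical example).
clip = b"fake-video-bytes"
record = make_provenance_record(clip, creator="Studio X", tool="Seedance 2.0")

print(verify_provenance(clip, record))                # unmodified file -> True
print(verify_provenance(clip + b"edit", record))      # altered file -> False
```

The design point is that trust attaches to the record, not the pixels: a viewer or platform that can check the manifest does not need to judge whether footage "looks real".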
“We are entering an era where ‘real’ and ‘synthetic’ coexist,” Konchady said. “Trust will shift from assuming authenticity to verifying provenance. Institutions, platforms, and creators who embrace transparency will retain credibility.”
Media literacy will also play a role. As generative tools become widely accessible, audiences may need to develop new critical frameworks for assessing digital content.
The most complex question may not be creative or legal, but structural.
AI video tools lower barriers to entry dramatically. A single creator can now generate cinematic visuals, sound design and animation without traditional infrastructure. This has implications for emerging markets, small businesses and independent storytellers worldwide.
Konchady sees genuine opportunity.
“On one hand, AI dramatically lowers the barrier to creation. A single individual can now generate cinematic visuals, sound design, and animation without traditional infrastructure. This empowers small businesses, independent creators, and emerging markets. That is genuine creative democratisation.”
Yet there is an opposing dynamic.
“On the other hand, the most powerful AI models are controlled by a handful of large technology companies. These entities control infrastructure, compute power, and distribution algorithms. So, while creation is decentralizing, control over tools and reach may centralize.”
This duality reflects a broader pattern in digital economies. Platforms often democratise participation while consolidating power. The infrastructure required to train large AI models — high-performance computing, massive datasets and global distribution networks — remains concentrated.
“The outcome depends on regulation, open-source innovation, and competitive ecosystems,” Konchady said. “If open models and interoperable platforms flourish, democratisation will dominate. If access becomes restricted or overly monetized, consolidation could outweigh creative freedom.”
He concludes with a broader observation:
“The future of creativity will not be defined by AI alone, but by who controls it.”
Seedance and similar systems represent more than viral novelty. They signal a structural inflection point in how visual media is conceived, produced and distributed.
Synthetic media does not automatically eliminate traditional production, nor does it guarantee creative liberation. It reshapes incentives. It redistributes value. It forces legal and regulatory recalibration. It alters the baseline assumption of authenticity.
For now, large-scale filmmaking continues. Studios still operate. Crews still shoot on location. But parallel to that infrastructure, a new layer of programmable creativity is emerging — scalable, replicable and increasingly indistinguishable from conventional output.
The Age of Synthetic Media is not a distant scenario. It is unfolding in real time.
The decisive question is not whether anyone can create anything.
It is who will define the rules of that creation.