
Exclusive: Inside the Growing Industry Influence Over AI Regulation

By Kasun Illankoon, Editor-in-Chief at Tech Revolt

March 6, 2026

Artificial intelligence is advancing at a pace that few policymakers could have imagined just a decade ago. Governments across the world are now racing to build regulatory frameworks that can guide the development of AI while balancing innovation, security, and public trust. Yet as these frameworks take shape, a crucial question is emerging within policy circles: who is helping shape the rules that govern the technology?


In practice, the process of regulating emerging technologies rarely happens in isolation. Regulators often rely on input from industry leaders, technical experts, startups, and academic institutions to understand how complex systems operate. For a field as rapidly evolving as AI, this collaboration is frequently seen as essential. Without technical insight, regulatory frameworks risk being outdated before they are even implemented.

However, the relationship between policymakers and technology companies is also becoming more complex. As governments draft standards, guidelines, and compliance requirements, industry actors are increasingly present in consultations, advisory groups, and technical working sessions. While such participation can bring valuable expertise, it also raises questions about balance, transparency, and long-term competition.

One perspective comes from Caesar Medel, CEO and Founder of ZIPTrust, a venture supported by the Canadian University Dubai Incubator. Medel argues that industry participation should not be limited to commentary alone, but should extend to providing the technical frameworks that make regulation practical.

“Think of AI regulation like building a new skyscraper in Downtown Dubai,” Medel explained. “You wouldn’t just ask the landlord how it should be built; you’d ask the architects and the engineers who know the strength of the steel.”

According to him, the same principle applies to digital systems. Regulators may define the rules, but the companies building the infrastructure often understand how those rules can be implemented in real-world systems.

“Right now, industry input is often just ‘advice’,” he said. “At ZIPTrust and TrustPaper, we believe the industry’s role is to provide the actual digital blueprints. We aren’t just telling regulators what could work; we are building the tools that make compliance automatic. Industry input should be the foundation, not just the decoration on the building.”

This view reflects a broader shift taking place across technology governance. Increasingly, regulation is not just about defining limits on behaviour, but also about embedding compliance mechanisms directly into the technology itself. In areas such as digital identity, cybersecurity, and data governance, software architecture can determine whether rules are enforceable at scale.
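To make the idea of embedding compliance into the technology itself concrete, here is a minimal, hypothetical sketch of the "policy as code" pattern. The retention rule, the Record fields, and the compliant_fetch helper are illustrative assumptions, not any real regulatory requirement or ZIPTrust code; the point is that a rule enforced in the data path is applied automatically for every caller, rather than relying on a manual checklist.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical retention rule, expressed as code rather than as a policy manual:
# personal records older than the retention window may not be returned.
RETENTION_WINDOW = timedelta(days=365)  # illustrative value, not a legal standard


@dataclass
class Record:
    subject_id: str
    collected_at: datetime
    contains_personal_data: bool


def compliant_fetch(records: list[Record]) -> list[Record]:
    """Return only records the (hypothetical) retention rule permits.

    Because the rule sits in the data path itself, every caller inherits
    the check automatically; it cannot be skipped or forgotten.
    """
    cutoff = datetime.now(timezone.utc) - RETENTION_WINDOW
    return [
        r for r in records
        if not r.contains_personal_data or r.collected_at >= cutoff
    ]


if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    data = [
        Record("a", now - timedelta(days=30), True),    # kept: inside the window
        Record("b", now - timedelta(days=400), True),   # dropped: personal and too old
        Record("c", now - timedelta(days=400), False),  # kept: not personal data
    ]
    print([r.subject_id for r in compliant_fetch(data)])  # ['a', 'c']
```

The same pattern scales from a single function up to middleware and infrastructure-level policy engines, which is why architectural choices can determine whether a rule is enforceable at all.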

Yet the growing influence of industry expertise also raises concerns about conflicts of interest. When technology providers help shape regulatory frameworks, critics often question whether the resulting rules could favour certain business models or platforms.

Medel believes the solution lies in designing systems where trust does not depend solely on the organisation operating the technology.

“A conflict of interest usually happens when the person guarding the vault also has the key to the treasure,” he said. “In the old world of tech, big companies held both.”

He argues that emerging technologies such as blockchain can shift this balance by distributing control away from centralised platforms.

“With ZTSign and TrustPaper, we are changing the lock,” Medel explained. “By using blockchain and tokenisation, the key stays with the user, not the company. It’s like a digital notary that doesn’t care who you are—it only cares if the signature is real. When the technology itself is neutral, there’s no room for a conflict of interest. The math doesn’t have a favourite.”
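Medel's "digital notary" metaphor corresponds to ordinary public-key signature verification. The sketch below is a generic illustration, assuming the widely used third-party Python cryptography package and standard Ed25519 keys; it is not ZTSign's or TrustPaper's actual implementation. It shows why the check is identity-blind: verification succeeds or fails based only on the key, the document bytes, and the signature.

```python
# Generic digital-signature sketch; illustrative only, not ZTSign/TrustPaper code.
# Requires the third-party "cryptography" package: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The signer keeps the private key; only the public key is shared.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

document = b"Party A agrees to deliver 100 units by 1 June."
signature = private_key.sign(document)

# Any verifier can run this check. It does not matter who they are:
# the outcome depends only on the key, the document, and the signature.
try:
    public_key.verify(signature, document)
    print("signature valid")
except InvalidSignature:
    print("signature invalid")

# Change a single byte and the same math rejects it.
try:
    public_key.verify(signature, document + b"!")
    print("tampered document accepted")  # never reached
except InvalidSignature:
    print("tampered document rejected")
```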

Still, as AI governance frameworks mature, another issue is beginning to draw attention: whether regulatory complexity could unintentionally reinforce the dominance of large technology platforms.

Compliance requirements for advanced technologies often involve extensive documentation, auditing systems, risk assessments, and data governance structures. While these mechanisms can strengthen accountability, they can also require resources that smaller companies may struggle to provide.

Medel describes this risk as the emergence of “digital fences.”

“If we make the rules so complicated and expensive that only the giants can afford to follow them, we end up with a neighbourhood where only five people can live,” he said.

From the perspective of startups and emerging technology companies, the challenge lies in designing standards that maintain high levels of accountability without creating barriers to entry. If compliance becomes too costly, smaller innovators may be excluded from markets before they have the chance to compete.

“At WatDubai.com, our goal is to keep the gates open,” Medel added. “We want to make trust as easy to access as a high-speed internet connection. Standards shouldn’t be a wall to keep people out; they should be a bridge that allows a small startup to compete with a giant on a level playing field.”

The debate highlights a broader dilemma facing regulators around the world. On one hand, governments must ensure that AI systems are safe, transparent, and accountable. On the other, excessive regulation could slow innovation or consolidate power within a small number of dominant firms.

Many policymakers are now exploring models that involve multi-stakeholder governance, bringing together government institutions, technology providers, academic researchers, and civil society groups. The aim is to create frameworks that reflect a wider range of perspectives while still maintaining regulatory independence.

At the same time, the concept of trust is emerging as a central pillar of the next phase of technological development. As AI becomes embedded in finance, healthcare, infrastructure, and public services, the credibility of digital systems may ultimately determine whether societies embrace or resist the technology.

Medel believes this shift marks the beginning of a new era in the technology industry.

“We are moving from an era of ‘Move Fast and Break Things’ to an era of ‘Move Fast and Prove Things,’” he said.

In this environment, companies will increasingly compete not only on the capabilities of their AI models but also on the reliability and transparency of their systems.

“In the long run, the companies that win won’t just be the ones with the smartest AI, but the ones people actually trust,” Medel said. “Innovation will flourish when trust is built in.”

He describes this future as part of a growing “trust economy”, where digital interactions must carry the same level of credibility as traditional human agreements.

“Imagine a world where every digital interaction is as solid as a handshake in the souq,” he said. “That’s the trust economy we are building. It turns competition into a race for integrity, and that’s a race everyone wins.”

As governments continue to refine AI governance frameworks, the balance between regulatory oversight and industry expertise will likely remain a defining issue. Technical insight is essential for shaping workable policy, yet maintaining transparency and independence will be equally critical to ensuring that the rules serve the broader public interest.

The challenge for policymakers may ultimately lie not in excluding industry voices from the conversation, but in structuring that conversation in ways that safeguard competition, accountability, and long-term innovation.
