AI

Mar 6, 2026

Exclusive: Inside the Growing Industry Influence Over AI Regulation

Artificial intelligence is advancing at a pace that few policymakers could have imagined just a decade ago. Governments across the world are racing to develop regulatory frameworks that can guide the development of AI while balancing innovation, security, and public trust. As these frameworks take shape, a key question is increasingly surfacing in policy circles: who is helping shape the rules that govern the technology?

by Kasun Illankoon, Editor-in-Chief at Tech Revolt

In practice, the process of regulating emerging technologies rarely happens in isolation. Policymakers often rely on insights from industry leaders, academic researchers, and technical experts to understand how complex systems operate. For a field as rapidly evolving as AI, such collaboration is often necessary. Without technical input, regulatory frameworks risk being outdated before they are even implemented.

Yet the growing presence of technology companies in regulatory discussions has also sparked a wider debate about influence, balance, and governance. Many regulatory consultations now involve representatives from technology firms, startups, and industry associations who help inform standards, compliance mechanisms, and operational frameworks. Their technical expertise can be valuable, but it also raises questions about how regulatory processes ensure independence and accountability.

Dr. Balamurugan Balusamy, Chairperson of the School of Engineering & IT at the Manipal Academy of Higher Education Dubai campus, believes industry expertise plays an important role, but should not be the sole voice guiding policy decisions.

“Industry inputs are valuable in shaping AI regulations,” he said. “However, they can sometimes be influenced by the commercial goals and strategic agendas of individual companies. Since each organisation approaches AI from its own perspective, relying solely on industry viewpoints may introduce certain biases into the regulatory process.”

Balusamy argues that effective AI governance requires a broader coalition of voices beyond the technology sector itself. Academic institutions, policymakers, civil society groups, and independent research bodies should all play a role in shaping the rules.

“To address this, regulatory discussions should bring together a broader and more balanced set of perspectives that include academia, policymakers, and civil society,” he said. “This ensures that user welfare and wider societal goals remain central.”

The importance of balanced participation becomes even more evident when addressing one of the most persistent concerns surrounding emerging technology governance: potential conflicts of interest. When companies that build AI systems are also involved in discussions about how those systems should be regulated, maintaining neutrality can become a delicate process.

Balusamy notes that regulatory institutions often rely on structured governance mechanisms to manage these risks.

“Potential conflicts of interest in AI regulatory discussions are typically managed through transparency, disclosure requirements, and balanced stakeholder participation,” he explained. “Since industry stakeholders may have commercial interests, it is important that their inputs are complemented by perspectives from academia, independent experts, and public policy institutions.”

This approach reflects a broader shift in technology governance toward multi-stakeholder frameworks. Instead of relying solely on government agencies or private companies, regulators increasingly attempt to create collaborative platforms where different sectors contribute expertise.

“When regulatory bodies develop stakeholder frameworks that consider all participants involved in the AI lifecycle—from development and deployment to usage, evaluation, and continuous upgrades—it helps create common ground among stakeholders,” Balusamy said.

Such frameworks aim to reduce friction between competing interests while enabling innovation and regulatory oversight to evolve simultaneously.

From the industry perspective, some technology leaders argue that their participation in regulatory discussions is not simply about protecting commercial interests but about ensuring that policies can actually be implemented in real-world systems.

Caesar Medel, CEO and Founder of ZIPTrust, a venture supported by the Canadian University Dubai Incubator, believes that industry knowledge can help transform regulatory principles into practical solutions.

“Think of AI regulation like building a new skyscraper in Downtown Dubai,” Medel said. “You wouldn’t just ask the landlord how it should be built; you’d ask the architects and the engineers who know the strength of the steel.”

According to Medel, industry participation should go beyond offering commentary on policy drafts. Instead, technology providers can help design systems that embed regulatory compliance directly into digital infrastructure.

“Right now, industry input is often just advice,” he said. “At ZIPTrust and TrustPaper, we believe the industry’s role is to provide the actual digital blueprints. We aren’t just telling regulators what could work; we are building the tools that make compliance automatic.”

This concept reflects an emerging trend in digital governance where compliance is built directly into technology platforms rather than enforced solely through external oversight. Systems such as digital identity frameworks, cryptographic verification, and blockchain-based authentication are increasingly being discussed as ways to automate regulatory processes.

Medel believes such technologies can also address concerns around conflicts of interest by reducing reliance on centralised control.

“A conflict of interest usually happens when the person guarding the vault also has the key to the treasure,” he explained. “In the old world of tech, big companies held both.”

He argues that decentralised technologies can help shift this balance by giving users greater control over digital verification systems.

“By using blockchain and tokenisation, the key stays with the user, not the company,” he said. “It’s like a digital notary that doesn’t care who you are—it only cares if the signature is real. When the technology itself is neutral, there’s no room for a conflict of interest. The math doesn’t have a favourite.”
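Medel’s “digital notary” analogy can be made concrete with a short sketch. The snippet below is illustrative only and is not ZIPTrust’s actual system: it uses an HMAC over the document’s contents as a simplified stand-in for a production signature scheme such as Ed25519, and the key and document values are hypothetical. What it demonstrates is the property Medel describes: verification depends only on whether the signature matches the content, never on the identity of whoever presents it.

```python
import hashlib
import hmac

# Hypothetical signing key, for illustration only. In a real asymmetric
# scheme the signer would hold a private key and verifiers a public one.
NOTARY_KEY = b"example-signing-key"

def sign(document: bytes) -> str:
    """'Notarise' a document: produce a signature over its exact contents."""
    return hmac.new(NOTARY_KEY, document, hashlib.sha256).hexdigest()

def verify(document: bytes, signature: str) -> bool:
    """The 'digital notary': it checks only that the signature matches the
    document's contents. It knows nothing about who submitted it."""
    expected = hmac.new(NOTARY_KEY, document, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

doc = b"Compliance report Q1"
sig = sign(doc)
print(verify(doc, sig))                 # True: signature matches the contents
print(verify(b"Tampered report", sig))  # False: any change breaks the check
```

The neutrality Medel points to is visible in `verify`: there is no identity parameter at all, so the check cannot favour one party over another.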

Another challenge facing policymakers is ensuring that evolving standards do not unintentionally reinforce the dominance of major technology platforms. Compliance with complex regulatory requirements often demands significant resources, including specialised legal teams, large datasets, and computing infrastructure.

Balusamy notes that this dynamic can sometimes create barriers for smaller companies.

“Evolving AI standards can sometimes unintentionally favour dominant platforms,” he said. “Large technology companies typically have greater access to data, computing infrastructure, and financial resources to comply with complex regulatory and technical standards.”

If not carefully designed, these standards could make it more difficult for startups or independent innovators to enter the market.

“To avoid this imbalance, the development of AI standards should actively involve academia, independent research institutions, and policy think tanks,” Balusamy added. “Academic researchers bring neutral, evidence-based perspectives, while research firms can provide independent evaluations of technological impact and feasibility.”

Medel echoes similar concerns, describing the risk of creating regulatory environments that inadvertently exclude smaller innovators.

“There is a real danger of digital fences,” he said. “If we make the rules so complicated and expensive that only the giants can afford to follow them, we end up with a neighbourhood where only five people can live.”

From his perspective, standards should focus on accessibility rather than exclusivity.

“Standards shouldn’t be a wall to keep people out; they should be a bridge that allows a small startup to compete with a giant on a level playing field,” he said.

Ultimately, the long-term impact of AI regulation will depend on how inclusively these frameworks are developed. For Balusamy, balanced policy design will be essential in ensuring that the AI ecosystem remains competitive and innovative.

“If policies are shaped primarily around the capabilities of large technology platforms, smaller companies and emerging innovators may face significant entry barriers,” he explained. “However, if policymakers involve a wider range of stakeholders—including academia, independent research organisations, startups, and industry—regulations can foster a more balanced and sustainable innovation environment.”

Such collaboration may also help ensure that ethical considerations remain central to the development of AI systems.

“In the long run, well-structured AI policies can guide technological growth in a responsible way, balancing commercial interests with ethical considerations and societal needs,” Balusamy said.

Medel believes the future of the technology sector may increasingly revolve around a new competitive principle: trust.

“We are moving from an era of ‘Move Fast and Break Things’ to an era of ‘Move Fast and Prove Things,’” he said.

In this environment, the success of AI companies may depend not only on technological capability but also on their ability to demonstrate reliability and transparency.

“In the long run, the companies that win won’t just be the ones with the smartest AI, but the ones people actually trust,” Medel said. “Innovation will flourish when trust is built in.”

As governments continue refining AI governance frameworks, the challenge will be maintaining a balance between technical expertise and regulatory independence. Industry voices will almost certainly remain part of the conversation, but ensuring that these discussions remain transparent, inclusive, and accountable may ultimately determine how successfully AI regulation supports both innovation and public trust.