AI
Apr 30, 2026
Exclusive: AI Adoption Is Outpacing Security Readiness, Proofpoint's 2026 Report Warns


A sweeping new cybersecurity report reveals that the gap between how fast organizations are adopting AI and how prepared they are to protect it has become one of the most urgent and underappreciated risks in enterprise technology today.
by Kasun Illankoon, Editor in Chief at Tech Revolt
Somewhere inside your organization, an AI assistant is summarizing emails, routing customer queries, or making decisions inside a workflow nobody fully mapped out. Maybe it was deployed six months ago; maybe it rolled out just last quarter. Either way, odds are high that your security team cannot tell you with confidence what would happen if that system were compromised.
That is not a hypothetical concern. According to Proofpoint's newly released 2026 AI and Human Risk Landscape report, it is the defining security challenge of this moment in enterprise technology. The study, which surveyed more than 1,400 security professionals across 12 countries, paints a detailed and somewhat alarming picture of where AI adoption has outrun institutional readiness - and what the consequences are already looking like in live environments.
The headline finding is both simple and striking: 87 percent of organizations globally have deployed AI assistants beyond the pilot stage, and 76 percent are actively rolling out autonomous agents. Yet more than half of those same organizations are not fully confident their security controls would detect a compromised AI system. That is not a lag. That is a structural gap, and it is widening.
One of the more unsettling patterns in the report is what might be called the confidence illusion - the tendency for organizations to believe they have security coverage without being able to verify that it actually works.
Globally, the majority of organizations report having AI security controls in place. But when pressed on whether those controls would actually catch a compromised AI assistant or an exploited autonomous agent, confidence drops sharply. Among UAE respondents, for instance, 57 percent say they have AI security coverage, while 51 percent simultaneously admit they are not fully confident those controls would detect a problem. And 40 percent of organizations in the country that had controls in place still experienced an AI-related security incident.
This kind of gap between perceived and actual security posture is not new to cybersecurity. What is new is the speed and complexity introduced by AI systems that are now embedded across core workflows. When a threat actor exploits a traditional software vulnerability, investigators generally know where to look. When a compromise involves an AI assistant operating across email, messaging platforms, and third-party cloud applications, the forensic picture becomes exponentially harder to reconstruct.
Understanding why this problem is so difficult requires understanding what enterprise AI deployment actually looks like in practice. It is not a single tool deployed in a single place. It is a constellation of assistants, agents, integrations, and automations spread across the same collaboration infrastructure that organizations already struggle to secure.
The report identifies third-party SaaS and cloud applications as the most common threat vector for AI-related incidents, affecting 58 percent of organizations globally. But exposure does not stop there. It extends across email systems, social and messaging platforms, file-sharing services, and SMS. For UAE organizations that have already experienced an AI-related incident, that number climbs to 59 percent for third-party SaaS and cloud applications, and 51 percent for file-sharing platforms specifically.
What this means in practical terms is that the attack surface is not just larger than it was two years ago. It is also faster. AI-assisted threats can propagate across interconnected systems at a pace that traditional incident response frameworks were not designed to handle.
"Organizations are scaling AI assistants and autonomous agents across core workflows, yet many cannot confirm their controls are effective or fully investigate incidents that move across collaboration channels," said Ryan Kalember, Chief Strategy Officer at Proofpoint. "As AI becomes embedded in how work gets done, security leaders must rethink how they protect trusted interactions across people, data and AI systems."

Photo: Ryan Kalember, Chief Strategy Officer at Proofpoint
Even when organizations detect a potential AI-related incident, many face a second, equally serious problem: they cannot investigate it effectively.
Reconstructing what happened in a security incident requires visibility - the ability to pull together logs, correlate activity across systems, and trace how a threat moved from point A to point B. For incidents that stay within a single system, that is manageable. For incidents that span email, cloud platforms, collaboration tools, and AI agents simultaneously, it becomes a different problem entirely.
Globally, only one-third of organizations say they are fully prepared to investigate an AI or agent-related incident. In the UAE, the number is somewhat better at 53 percent - but that still means nearly half of organizations in one of the world's most AI-forward markets would struggle to reconstruct events if something went wrong. Thirty-nine percent of UAE respondents specifically report difficulty correlating threats across channels.
This is not a skills problem, or not only a skills problem. It is also an architecture problem. Most organizations' security stacks were built for a world where threats moved through identifiable, bounded systems. AI has dissolved many of those boundaries, and the tools have not fully caught up.
Layered on top of all of this is a problem that predates AI entirely but that AI is significantly aggravating: the sheer complexity of modern enterprise security stacks.
The report's findings on tool sprawl are striking even by cybersecurity industry standards. Ninety-eight percent of organizations in the UAE say managing multiple security tools is at least moderately challenging. Sixty-eight percent describe it as very or extremely difficult. Respondents cite overlapping and redundant tools, difficulty correlating threats across those tools, and slow investigation times as their primary pain points.
This matters for AI security in particular because the fragmentation of security tooling means that the visibility required to track AI-related incidents simply does not exist in many organizations. You cannot investigate what you cannot see, and you cannot see what is spread across ten different tools with no unified view.
The industry response to this is consolidation, and the report suggests it is well underway. Fifty-one percent of UAE organizations are actively pursuing vendor and tool consolidation, and 47 percent believe a unified platform is more effective than a collection of point solutions. Over the next twelve months, 65 percent plan to expand AI protections, and an equal number expect to move toward a unified platform approach.
The United Arab Emirates offers a particularly instructive lens on these dynamics, both because of its aggressive AI adoption posture and because of the specific threat environment it operates in.
Ninety-two percent of UAE organizations have deployed AI assistants beyond the pilot stage, and 80 percent are advancing autonomous agents. These are among the highest adoption rates in the study. At the same time, 55 percent describe their security posture as catching up, inconsistent, or reactive. Forty-one percent have already experienced a suspicious or confirmed AI-related incident.
"The UAE has established itself as one of the fastest growing markets in the region in AI adoption, and that ambition is not slowing down," said Emile Abou Saleh, Vice President Emerging Markets at Proofpoint. "Yet cybersecurity is still catching up, and the stakes are rising. We're seeing a significant surge in phishing attacks in both volume and sophistication, driven in part by geopolitical tensions that are fueling more targeted campaigns across the region, and cybercriminals are increasingly exploiting both AI and human behavior to amplify their impact."

Photo: Emile Abou Saleh, Vice President Emerging Markets at Proofpoint
The phishing dimension is worth particular attention. As AI makes it easier and cheaper to generate convincing, targeted content at scale, the human attack surface - the part of any security posture that depends on people making good decisions - becomes more, not less, important to defend. The report's framing of this as an "AI and Human Risk" landscape is deliberate: the two are not separate problems.
"Many UAE organizations are already dealing with these threats in live environments," Abou Saleh added. "The good news is that organizations across the UAE are actively investing to course correct. The focus now needs to shift to getting the foundations right: improving visibility, strengthening governance, and building security that keeps pace with how AI is actually being used, including the growing human attack surface that phishing continues to exploit."
The report does not spend much time on abstract recommendations, which is part of what makes it useful. The problems it identifies are specific, and the directions it points toward are practical.
Governance frameworks need to catch up to deployment realities. Forty-eight percent of UAE organizations report having no risk assessment process for AI workflows. Forty-four percent have no model for detecting compromised agents or exploited AI assistants. Forty-one percent have gaps in security training for AI-related threats. These are not gaps that require entirely new security paradigms. They require organizations to apply existing security discipline to contexts where they have not yet applied it.
Investigation capabilities need to be built before incidents happen, not after. Thirty-nine percent of UAE organizations say they already have difficulty correlating threats across channels. Adding more AI systems to those environments without improving cross-channel visibility will only make this harder.
And the tool consolidation trend the report documents is likely to accelerate, because the alternative - managing an increasingly complex AI threat landscape with an increasingly fragmented security stack - is untenable at scale.
The broader picture the report paints is of an industry at an inflection point. AI adoption in the enterprise is not slowing down. The question is whether security practice can evolve fast enough to keep pace with it - or whether the gap between deployment velocity and security readiness will continue to widen until something goes seriously wrong.
For most organizations right now, the honest answer is that they do not fully know. And not knowing, in this context, is itself the risk.