AI
Apr 15, 2026


As enterprises accelerate their adoption of generative AI, concerns around data exposure and governance are rising just as quickly. Trellix, a cybersecurity firm focused on intelligence-led resilience, is responding with an expanded set of data security capabilities and a structured framework aimed at helping organisations deploy AI tools without compromising sensitive information.
The move comes at a time when AI adoption is outpacing traditional security measures. In 2025, 88% of businesses integrated AI into at least one function, a shift that has also driven the emergence of “shadow AI”—unsanctioned tools used without oversight. This rapid uptake is already having financial consequences, with the average cost of data breaches increasing by $670,000.
At the centre of the challenge is visibility. Many organisations lack clear policies on how employees interact with AI systems, particularly when handling sensitive or regulated data. Even approved platforms can introduce risk when governance frameworks fail to keep pace with usage.
“The rapid adoption of AI tools across the enterprise is creating new and often invisible data risks,” said Alex Au Yeung, Chief Product Officer, Trellix. “Trellix brings together policy, visibility, and enforcement in a unified framework to help customers control how data is used across both sanctioned and shadow AI.”
Trellix’s approach combines enhancements across Data Loss Prevention (DLP), data encryption, and database security into a three-part framework. The model focuses on establishing clear usage policies, reinforcing them through governance and training, and providing real-time oversight of how AI tools interact with enterprise data.
A parallel can be seen in how organisations previously struggled to manage cloud adoption. Much like early cloud deployments, where business units adopted tools faster than security teams could govern them, AI is now following a similar trajectory—expanding rapidly while leaving gaps in oversight. Trellix’s framework attempts to close that gap by embedding controls directly into workflows rather than treating security as an afterthought.
“The regulatory landscape around data security continues to evolve, making compliance a moving target,” said Kristin Lowery, Field CISO, Optiv. “As organizations increasingly leverage AI in the workplace, having the right controls and visibility is essential to ensure compliant data-handling practices and prevent potential data leaks. This combination of associate training, data-handling processes, and controls is critical for responsibly integrating new AI tools.”
Key features include real-time monitoring of AI-related data risks, protection against unauthorised database access, and encryption controls that restrict how sensitive data is accessed across devices and platforms. Supporting services also focus on policy development, technical implementation, and employee training—areas increasingly seen as critical to managing AI risk.
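At its simplest, real-time DLP monitoring of AI interactions amounts to scanning outbound prompt text for sensitive patterns before it leaves the enterprise boundary. The sketch below is purely illustrative: the pattern set, function names, and block-by-default policy are assumptions for demonstration, not a description of Trellix’s implementation, and production DLP engines rely on far richer detection (classifiers, data fingerprinting, exact-data matching).

```python
import re

# Illustrative patterns only; real DLP products maintain large, tuned rule sets.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in an outbound AI prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

def allow_prompt(text: str) -> bool:
    """Apply a block-by-default policy: reject the prompt if anything sensitive matches."""
    return not scan_prompt(text)
```

A gateway applying this check could log matches for governance reporting while blocking the request, which is the kind of combined visibility-and-enforcement control the article describes.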
As AI continues to reshape enterprise operations, the focus is shifting from adoption to control, with security frameworks becoming a central requirement rather than an afterthought.