Exclusive: 63% of Businesses Have an AI Exit Plan, but Many Could Not Execute It Without Significant Disruption
Companies built AI strategies around big cloud vendors. A new wave of research suggests that was a dangerous bet, and the race to build sovereign AI infrastructure is already underway.
By Hans Roth, Senior Vice President and General Manager, EMEA, Red Hat
There is a particular kind of anxiety gripping enterprise IT departments right now. It is not about whether artificial intelligence works. It is about who controls it, where it lives, and what happens when a vendor relationship goes wrong.
For years, the dominant logic of enterprise technology was straightforward: pick a big cloud provider, go deep on their ecosystem, and let them handle the complexity. That logic is now cracking under the weight of AI. As organizations move AI from pilot programs into the operational core of their businesses, the risks of dependency are becoming impossible to ignore — geopolitical shifts, regulatory pressure, and the sheer opacity of proprietary systems are forcing executives to ask uncomfortable questions about what they actually own.
A new survey of 500 IT decision-makers conducted by Red Hat puts numbers to an anxiety that many in the industry have long felt but rarely quantified. The picture it paints is one of an enterprise world caught between ambition and vulnerability, deploying AI at scale while quietly acknowledging that the foundations it rests on may not be as solid as the business case promised.
The Exit Strategy Problem
Start with the most striking data point: 63 percent of businesses say they have a defined plan to switch AI providers if their primary vendor were suddenly to restrict access to its services. On the surface, that sounds reassuring. Organizations are thinking ahead. They have contingencies.
Dig deeper, and the reassurance evaporates. Nearly 40 percent of those businesses admit that actually executing that switch would cause moderate to significant disruption to business continuity. Another 30 percent of organizations do not have any exit strategy at all. What this means, in practical terms, is that the majority of enterprises are either operating without a safety net or carrying a strategy that exists more on paper than in operational reality.
This is not a minor governance gap. When AI systems sit at the center of supply chains, customer service operations, financial modeling, and internal communications, a forced migration is not a project management problem. It is a continuity crisis. And the conditions that could trigger it are no longer theoretical: a vendor going under, a government restricting cross-border data flows, a geopolitical rupture disrupting cloud infrastructure.
The companies that are thinking clearly about this are not just asking "how do we switch providers?" They are asking a harder, more fundamental question: how do we build systems where we are not entirely dependent on any single provider's goodwill to keep the lights on?
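What that looks like at the code level is less exotic than it sounds. The sketch below is purely illustrative, written in Python with hypothetical class and provider names rather than anything drawn from the survey or from any particular product: business logic talks to a thin, neutral interface, and the choice of backend (a hosted vendor API or a model served on infrastructure the organization controls) becomes a configuration decision rather than a rewrite of every call site.

```python
from abc import ABC, abstractmethod


class CompletionProvider(ABC):
    """Minimal interface that business logic codes against."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class HostedVendorProvider(CompletionProvider):
    """Stand-in for a proprietary cloud AI API (hypothetical)."""

    def complete(self, prompt: str) -> str:
        # A real implementation would call the vendor's SDK or REST endpoint here.
        return f"[hosted vendor] response to: {prompt}"


class SelfHostedProvider(CompletionProvider):
    """Stand-in for an open model served on infrastructure the organization controls."""

    def complete(self, prompt: str) -> str:
        # A real implementation would call an internally hosted inference service here.
        return f"[self-hosted] response to: {prompt}"


def get_provider(name: str) -> CompletionProvider:
    # Choosing the backend by configuration keeps a provider switch
    # a config change, not a rewrite of every caller.
    providers = {"hosted": HostedVendorProvider, "self_hosted": SelfHostedProvider}
    return providers[name]()


if __name__ == "__main__":
    provider = get_provider("self_hosted")  # flip to "hosted" without touching callers
    print(provider.complete("Summarize this quarter's incident reports."))
```

The details are disposable; the shape is the point. An exit plan is only executable when no call site hard-codes a single vendor.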
Trust Is Now a Technical Specification
There is a phrase in the Red Hat research that is worth sitting with: trust as a technical requirement. It sounds abstract until you think about what it actually means for the people running enterprise AI systems.
Seventy-nine percent of IT leaders in the survey identified transparency and auditability as the most valuable open source benefits for building trust in their AI strategy over the next three years. Seventy-seven percent cited the need for customization to meet business and regulatory needs. Seventy-five percent said they need greater control over how AI is built and where it runs.
Read those numbers together and they tell a coherent story. Organizations are not just looking for AI that performs well on benchmarks. They are looking for AI they can inspect, verify, and modify: systems where they can answer, at any moment, the question of what is actually happening inside the machine. With proprietary systems, that answer is often unavailable by design. The model weights are secret. The training data is undisclosed. The infrastructure is somewhere in a data center you will never see.
For regulated industries, this opacity is not just uncomfortable; it is increasingly untenable. Financial regulators, healthcare compliance frameworks, and data sovereignty laws across Europe and Asia are creating binding requirements around explainability and data residency that a black-box AI deployment simply cannot satisfy. The organizations moving fastest on open source AI are often doing so not out of ideology but out of regulatory necessity.
Seventy-seven percent of the organizations surveyed said they should prioritize open source principles, including transparency, auditability, and open source licensing, to achieve what the industry is starting to call AI sovereignty. That is a majority consensus in favor of a fundamentally different model of technology ownership than the one that has dominated enterprise software for the past decade.
The Governance Gap at the Heart of Agentic AI
If the control question around conventional AI is complicated, the arrival of agentic AI has made it considerably more urgent. Agentic systems do not just generate responses to prompts. They plan, they execute, they trigger workflows, and they take actions in the world — autonomously, at speed, and often across multiple interconnected systems.
Eighty-eight percent of organizations in the Red Hat survey already use these tools. Only 31 percent feel they have strong governance in place to manage them. That is a governance gap affecting the vast majority of enterprise AI deployments, and it sits precisely at the moment when the stakes of getting things wrong are highest.
The implications run across operational security, legal liability, and regulatory compliance. An agentic system that is poorly governed is not just inefficient — it is a liability that can act in ways its operators did not anticipate and cannot fully explain after the fact. As regulators worldwide turn their attention to AI accountability, organizations that cannot demonstrate meaningful oversight of their autonomous systems are going to find themselves in difficult conversations with authorities who are no longer willing to treat AI as a special case.
The path forward requires building governance into infrastructure from the beginning, not layering it on as an afterthought. Red Hat's work with Telenor AI Factory illustrates what this looks like in practice: by standardizing on cloud-native AI platforms, Telenor was able to create a vendor-neutral foundation that addresses data residency requirements while maintaining national control over data and processes. The guardrails are not a policy document sitting in a compliance folder. They are embedded in the technical architecture itself.
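To illustrate what guardrails in the architecture can mean at the most basic level, here is a deliberately minimal sketch in Python, with hypothetical action names, and not a description of the Telenor deployment itself: every action an agent wants to take passes through a policy check and lands in an audit trail before anything executes.

```python
import datetime
import json

# Hypothetical allowlist: actions an agent may take without human sign-off.
AUTO_APPROVED_ACTIONS = {"create_ticket", "fetch_report"}


class ActionBlocked(Exception):
    """Raised when an agent requests an action outside its mandate."""


def governed_execute(agent_id: str, action: str, payload: dict, audit_log: list) -> str:
    """Run an agent action only after a policy check, recording it for audit.

    The guardrail is code in the execution path, not a document in a
    compliance folder: every action is checked and logged before it runs.
    """
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "payload": payload,
    }
    if action not in AUTO_APPROVED_ACTIONS:
        entry["outcome"] = "blocked_pending_review"
        audit_log.append(entry)
        raise ActionBlocked(f"'{action}' requires human approval")

    entry["outcome"] = "executed"
    audit_log.append(entry)
    # The real side effect (API call, workflow trigger) would happen here.
    return f"executed {action}"


if __name__ == "__main__":
    log: list = []
    governed_execute("agent-42", "create_ticket", {"summary": "Disk usage alert"}, log)
    try:
        governed_execute("agent-42", "wire_transfer", {"amount": 100_000}, log)
    except ActionBlocked as exc:
        print(f"Blocked: {exc}")
    print(json.dumps(log, indent=2))
```

In production the policy would live in the platform layer rather than in application code, but the principle is the same: the control point sits in the execution path, not in a document.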
What Digital Sovereignty Actually Looks Like
The term "digital sovereignty" has become something of a buzzword in technology policy circles, invoked freely by politicians and vendors alike without much precision about what it actually requires in practice. The Red Hat data offers a more grounded definition, one rooted in operational capability rather than political aspiration.
Digital sovereignty, in this framing, is the practical ability to move workloads, maintain operations, and control data regardless of what any individual vendor decides to do. It requires a consistent architecture that works across different environments — not a patchwork of vendor-specific tools that creates new dependencies every time it solves an old one. It means technical independence: the capacity to maintain systems even when a provider's status changes, when geopolitical conditions shift, or when regulatory requirements in one jurisdiction conflict with a vendor's operational norms.
Orange Business has operationalized this through its Cloud Avenue sovereign cloud platform, integrating Red Hat OpenShift to give enterprise customers the ability to modernize their applications while retaining full control over their data and infrastructure. The architecture is the answer. When the underlying platform is open, portable, and governed by the organization rather than the vendor, the question of sovereignty becomes answerable in technical terms rather than aspirational ones.
The Strategic Bet Organizations Are Making
What the Red Hat research ultimately documents is a strategic inflection point. For much of the last decade, enterprise technology strategy was organized around vendor relationships. The assumption was that depth of integration with a major platform created competitive advantage through capability access, even if it also created dependency.
AI has complicated that assumption in ways that are difficult to reverse. The data residency demands of sovereign nations, the explainability requirements of regulators, the operational risk of vendor concentration, and the governance complexity of autonomous systems are all pushing in the same direction: toward architectures that prioritize control, portability, and transparency over the convenience of lock-in.
The 71 percent of organizations seeking stronger community-driven trust and the 69 percent looking to reduce vendor lock-in are not making a philosophical statement. They are making a practical calculation about where the risk lies in the next phase of enterprise AI.
Organizations that begin building sovereign AI infrastructure now are positioning themselves to navigate whatever regulatory, geopolitical, or market disruptions come next without losing operational continuity. The ones that do not are accumulating a kind of hidden liability — one that will only become visible when conditions change and the exit strategy they thought they had turns out to be harder to execute than anyone anticipated.
The AI transformation is real, and it is permanent. The question now is not whether to participate, but whether the foundations being built today will hold when the conditions that made them seem adequate have changed beyond recognition.