Mar 24, 2026

In this exclusive interview with Tech Revolt, Jonathan Bryce, Executive Director of Cloud & Infrastructure at the Linux Foundation, shares insights from over 15 years in open source leadership. He leads both the Cloud Native Computing Foundation and the OpenInfra Foundation, and co-founded OpenStack, helping scale it into a global ecosystem. Previously, he spent a decade in data centre infrastructure and co-founded an early cloud company acquired by Rackspace.
How have you seen the cloud native ecosystem evolve over the past few years, particularly with the rapid growth of technologies like Kubernetes?
The evolution has been profound, moving from cloud native being an "emerging choice" to a near-universal enterprise standard. To understand this shift, I like to think of it as the move toward "Invisible Infrastructure."
Just as we don’t think about the complex systems behind booking a flight on Delta or calling a Lyft, cloud native has become the powerful, silent engine powering our daily lives. The proof is in the adoption: our research shows that 98% of organizations have now adopted cloud native techniques, and Kubernetes production use has reached 82% of container users.
We’ve moved into an era where:
● Kubernetes as the Utility Grid's Brain: Kubernetes acts as the central engine—the Operating System for AI—a foundational component holding up the entire utility grid of global cloud infrastructure. This is especially vital now that 66% of organizations are using it to manage their generative AI inference workloads.
● The Inference Imperative (the Utility Grid's New Load): The industry is moving from the bursty training phase to the inference phase, where immediate, intelligent answers at scale are required. This new load is rapidly becoming the largest compute workload in human history.
● Zero Trust for the Smart Utility Grid: Our approach to security has evolved from a simple perimeter to a high-security smart building model on the utility grid, where every microservice requires its own authentication. This embodies the Zero Trust architectures necessary for modern scale and compliance.
● Platform Engineering: Paving the Utility Grid's Service Road: To bridge the execution gap, we are focused on platform engineering. Instead of developers navigating a wilderness to set up infrastructure plumbing, we are focused on building a paved road. This encodes expertise and allows teams to bypass complexity, ensuring the utility grid reliably delivers value.
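The Zero Trust point above can be made concrete with a minimal sketch: every service-to-service request carries its own short-lived, verifiable credential, and nothing is trusted merely for being inside the network. The service names and the shared-secret HMAC scheme here are purely illustrative; production systems in the CNCF ecosystem typically use mTLS or SPIFFE/SPIRE workload identities instead.

```python
import hashlib
import hmac
import json
import time

# Hypothetical registry of per-service keys. In a real Zero Trust setup,
# each workload would hold its own cryptographic identity (e.g. an mTLS
# certificate or SPIFFE ID) issued by a trusted control plane.
SERVICE_KEYS = {"billing": b"billing-secret", "orders": b"orders-secret"}


def issue_token(service: str, ttl: int = 60) -> str:
    """Mint a short-lived, HMAC-signed token identifying the calling service."""
    payload = json.dumps({"svc": service, "exp": time.time() + ttl})
    sig = hmac.new(SERVICE_KEYS[service], payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"


def verify_token(token: str) -> bool:
    """Authenticate every request; never trust network position alone."""
    payload, sig = token.rsplit("|", 1)
    claims = json.loads(payload)
    key = SERVICE_KEYS.get(claims.get("svc"))
    if key is None or time.time() > claims["exp"]:
        return False
    expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

The essential property is that verification happens on every call, per microservice, with an expiring credential, rather than once at a perimeter firewall.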
Today, the ecosystem is no longer just about adoption—it’s about narrowing the execution gap and standardizing the stack so that this infrastructure remains as reliable and invisible as the utility grid for the AI-native era.
The CNCF hosts many of the most widely adopted open-source cloud native projects. What factors determine when a project progresses from Sandbox to Incubating and eventually Graduated status?
The progression through Sandbox, Incubating, and Graduated status is a measure of maturity—not just technical, but cultural and operational.
● Sandbox serves as the initial stage, designed for new projects to test innovative ideas and gain community validation. In this environment, projects are encouraged to grow their user and contributor base. This phase is crucial for the community to test, iterate, and develop next-generation standards, particularly for emerging technologies like those in the AI and security fields.
● Incubating is the stage where a project proves its long-term viability. It must demonstrate significant growth, a healthy, diverse group of maintainers, and clear evidence of production adoption. It must be resilient, reliable, and relevant.
● Graduated status is the pinnacle. This requires a project to have achieved a level of maturity, security, and stability that makes it an indispensable component of the cloud native stack. It’s about being a true industry standard with a robust, vendor-neutral governance model that can sustain it for years to come. Ultimately, it’s about proving real-world enterprise impact.
Collaboration is a core principle of CNCF and the wider Linux Foundation ecosystem. How important is vendor neutrality in maintaining innovation within cloud native technologies?
Vendor neutrality is not merely a guideline; it is one of the most critical principles the CNCF provides to the cloud native ecosystem. Innovation thrives when governance is open, transparent, and, most importantly, neutral.
This neutrality ensures that projects are built on the best technical merit and community consensus, rather than being constrained by any single company's product roadmap. This lowers the barrier for collaboration, providing the assurance that allows even competitors to contribute to a common open source stack with confidence.
By preventing vendor lock-in and guaranteeing that long-term control remains community driven, CNCF creates a resilient, unified platform. This is why true innovation at a global scale happens: the core standards are accessible and trustworthy for everyone, transforming competitive friction into collective progress. The future of cloud native depends on open governance.
With organisations increasingly building distributed applications, what are the biggest challenges developers face when adopting cloud native architectures today?
The primary challenges are shifting from purely technical hurdles to cultural and operational ones. Our data shows that cultural challenges (47%) have overtaken technical complexity (34%) as the top barrier. This can be summarized in two key areas:
● The Talent Bottleneck: We are facing significant talent gaps across AI (68%) and platform engineering (56%). The rate of innovation is outpacing the supply of engineers capable of building, securing, and operating these systems.
● The Inference Imperative: On the technical side, the industry is grappling with the shift from training large models to managing inference at a massive scale—the “Inference Imperative.” This requires a new, high-efficiency, standardized infrastructure to handle the immense scale needed for agentic workloads.
The solution to both is a two-pronged strategy.
First, platform engineering. By encoding expertise into open source tools like Backstage, we capture the tribal knowledge and provide standardized blueprints. This allows developers to focus on innovation, effectively scaling the capabilities of our experts across the entire organization.
Second, organizations need to invest in upskilling and training through community-led programs to close the skills gap at an industrial scale. CNCF provides training and certifications; in fact, over the past 10 years, CNCF has launched 15 certifications and certified over 330,000 people. To further this mission, we launched the Kubestronaut program, recognizing community leaders who have mastered the full suite of Kubernetes certifications. For the most dedicated, we established the Golden Kubestronaut—an elite designation held by fewer than 300 professionals worldwide who have passed every CNCF certification.
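The "paved road" idea described above can be sketched in a few lines: the platform team encodes its expertise into a vetted blueprint once, and application teams request infrastructure through it instead of hand-assembling configuration. The function, field names, and defaults below are hypothetical, chosen only to illustrate the pattern (tools like Backstage implement it at far greater scale with software templates).

```python
# Organizational best practices, decided once by the platform team.
# These values are illustrative, not recommendations.
GOLDEN_DEFAULTS = {
    "replicas": 3,            # resilience baseline
    "cpu_limit": "500m",
    "memory_limit": "512Mi",
    "run_as_non_root": True,  # security policy baked in, not opt-in
}


def paved_road_deployment(app: str, image: str, **overrides) -> dict:
    """Return a deployment spec with the paved-road defaults applied.

    Teams may tune known knobs, but cannot silently leave the paved road
    by introducing settings the platform team has not vetted.
    """
    unknown = set(overrides) - set(GOLDEN_DEFAULTS)
    if unknown:
        raise ValueError(f"not on the paved road: {sorted(unknown)}")
    cfg = {**GOLDEN_DEFAULTS, **overrides}
    return {
        "app": app,
        "image": image,
        "replicas": cfg["replicas"],
        "resources": {"cpu": cfg["cpu_limit"], "memory": cfg["memory_limit"]},
        "securityContext": {"runAsNonRoot": cfg["run_as_non_root"]},
    }
```

The design choice is the point: expert knowledge lives in the blueprint, so every team gets security and resilience defaults for free, and deviations are explicit rather than accidental.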
CNCF’s flagship conference, taking place in Amsterdam from 23–26 March 2026, is expected to gather a wide range of developers and enterprises. What key themes or discussions will define this year’s event?
The urgent convergence of cloud native and AI, particularly as the market transitions from hype to production, defines this year's event. The key themes will be:
● The Inference Imperative: Addressing the scaling and cost challenges of running specialized, smaller models in production—the crucial work that happens underneath the agent layer.
● The Agentic Reality Check: Moving past speculative AI to focus on the reality of building, securing, and operating massive, real-world AI systems, which is highlighted by events like Agentics Day and the Cloud Native AI + Kubeflow Day.
● AgenticOps and Digital Sovereignty: Discussing the rise of AI agents as the primary users of software and the infrastructure required to ensure data sovereignty and compliance.
● Closing the Talent Gap: Focusing on the "Human + AI" future and how our community, through training, certification, and Platform Engineering Day, is creating opportunity and upskilling developers at an industrial scale.
The schedule's focus on tracks like Platform Engineering, Observability, and Security (including Open Source SecurityCon) showcases the ecosystem's maturity, driven by the need for competitive advantages in automation, resilience, and scale.
Events like the CNCF flagship conference often serve as a platform for major announcements. Should attendees expect significant project updates, collaborations, or ecosystem developments this year?
Yes, absolutely! The announcements directly aim to address the current challenges in AI production. Attendees should expect:
● Standardizing the AI Stack: We will be framing the roadmap for open, certified standards that eliminate vendor lock-in and provide a robust foundation for inference infrastructure.
● Agentic Workload Guardrails: We anticipate major updates that provide a production-grade standard for autonomous, resilient workflows for the rising use of AI agents.
● Core Infrastructure Evolution: The ecosystem is hardening with significant updates to key foundational technologies to drive adoption of more capable, future-proof APIs and scale resources more efficiently.
● Project Maturity: We will celebrate significant milestones for projects that provide the essential policy-as-code and data-handling capabilities required for resilient, production-grade AI applications.
The conference brings together contributors from Graduated, Incubating, and Sandbox projects. How valuable is it to have these different stages of innovation interacting in one place?
The interaction between all three stages is vital because it represents the entire innovation life cycle of the cloud native ecosystem.
The Graduated projects provide the stable, battle-tested foundation, such as Kubernetes, Prometheus, and Envoy. They are the standards that organizations build their entire infrastructure upon.
The Incubating and Sandbox projects represent the future. This is where the community tests, iterates, and develops the next generation of standards, such as the technologies we are seeing emerge in the AI and security spaces.
Bringing them together in one place ensures that the cutting-edge innovations of the Sandbox are informed by the production experience of the Graduated projects, and vice versa. It’s an organic feedback loop that allows the open source community to move with speed, stability, and collective purpose.
For organisations and developers attending the event for the first time, what opportunities does the conference provide for learning, collaboration, and engaging with the broader cloud native community?
The KubeCon + CloudNativeCon events are the central meeting point for the entire cloud native ecosystem on a global scale. They provide a dynamic setting where the frontiers of open source meet the industrial realities of production, shaping the next generation of engineers. The events are the perfect opportunity to engage with the cloud native community on a human level, whether it's to learn, to teach, or simply to network.
● Upskilling and Learning via the Tracks: It’s a direct source of the most current knowledge, delivered across specialized tracks. For instance, the Application Development track is critical for developers looking to master best practices for building scalable applications, improving their workflows, and leveraging innovative tooling. You learn directly from the maintainers who are actively building these open source technologies, not just from implementers. This targeted education is a core part of our commitment to addressing the Talent Bottleneck.
● Certification and Validation: By engaging directly with the ecosystem, you gain exposure to the training and certification programs that provide globally recognized skills to advance your career.
● Community and Collaboration: Most importantly, KubeCon is where the community comes together. You can engage with the human stories—the developers who have transformed their careers through open source contributions—and find mentors and collaborators to move from just consuming technology to actively helping to define the future of the cloud native landscape.