Oct 27, 2025
e& enterprise, the digital transformation arm of global technology group e&, today unveiled SLM-in-a-Box at GITEX Global 2025: a ready-to-use Small Language Model (SLM) solution developed in collaboration with Intel that makes enterprise AI adoption faster, more affordable, and sovereign-ready.
The solution, available in AWS Marketplace, delivers pre-deployed SLMs optimized by Intel to lower cost per user, backed by e& enterprise’s expertise in integration and governance. For organisations in government, critical infrastructure, banking, financial services and insurance (BFSI), healthcare, energy, and telecom, it offers a practical way to deploy AI with confidence: scaling from pilot projects to full production within budget and in compliance with regional regulations, while delivering immediate, measurable results, whether in improving customer service, automating content workflows, or supporting smarter decisions.
“Most businesses don’t need massive AI models that drain budgets and raise compliance risks; they need AI that is cost-effective, compliant, and ready to deliver results today,” said Khalid Murshed, CEO, e& enterprise. “SLM-in-a-Box is a step towards democratising AI adoption in our region, providing enterprises with a clear path forward. This solution offers a pragmatic alternative with a smaller and faster AI technology that scales with demand, respects data sovereignty, and delivers measurable outcomes from day one. Our partnership with Intel is about turning AI from a promise on a roadmap into a working tool for customer service teams, healthcare providers, and financial institutions across the region.”
Available today in AWS Marketplace, SLM-in-a-Box cuts set-up and integration time for IT teams while enabling cloud-native scaling as demand grows. At its core, the solution leverages Intel® AI for Enterprise Inference and Amazon EC2 C7i instances powered by custom Intel Xeon processors with built-in AI acceleration, allowing developers to run generative AI (GenAI) models at scale: securely, efficiently, and at a fraction of the cost of large-model deployments.
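For teams evaluating the solution, a natural first integration step is a simple inference call against the deployed model. The sketch below assumes the deployment exposes an OpenAI-compatible endpoint, a common pattern for SLM serving stacks but not confirmed in this announcement; the endpoint URL, API key, and model name are placeholders.

```python
# Minimal sketch: querying a pre-deployed SLM over an OpenAI-compatible API.
# Assumptions (not confirmed by the announcement): the SLM-in-a-Box deployment
# exposes an OpenAI-compatible chat-completions endpoint; the URL, key, and
# model name below are placeholders for your deployment's actual values.
from openai import OpenAI

client = OpenAI(
    base_url="https://your-slm-endpoint.example.com/v1",  # placeholder endpoint
    api_key="YOUR_DEPLOYMENT_KEY",                         # placeholder credential
)

response = client.chat.completions.create(
    model="your-deployed-slm",  # placeholder model identifier
    messages=[
        {"role": "system", "content": "You are a concise customer-service assistant."},
        {"role": "user", "content": "Summarise the customer's billing complaint in two sentences."},
    ],
    temperature=0.2,   # low temperature for predictable, task-focused output
    max_tokens=150,
)

print(response.choices[0].message.content)
```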
“The launch of SLM-in-a-Box with e& enterprise and AWS is a major step in democratizing AI for businesses across the Middle East. At its heart, this solution leverages the performance of Intel Xeon processors. We're providing a pragmatic alternative that boosts inference performance, making Small Language Models dramatically cheaper and faster to run,” said Taha Khalifa, Middle East and Africa General Manager, Intel.
The announcement comes as enterprises prioritise operational efficiency in AI programmes. A significant share of enterprise AI spending sits in inference rather than training, pushing teams toward smaller, specialised models that deliver faster time to value for tasks such as summarisation, classification, retrieval-augmented generation (RAG), and domain copilots.
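To illustrate the retrieval-augmented generation pattern mentioned above, the sketch below shows the basic flow: retrieve the most relevant internal document for a query, then assemble a grounded prompt for the SLM. The sample documents and keyword-overlap retriever are toy stand-ins; a production deployment would typically use embeddings with a vector store and send the final prompt to the deployed model endpoint, as in the earlier sketch.

```python
# Illustrative-only sketch of retrieval-augmented generation (RAG) prompt assembly.
# The documents and the keyword-overlap retriever are toy stand-ins; a real
# deployment would use embeddings plus a vector store, then send the assembled
# prompt to the deployed SLM endpoint.

DOCUMENTS = {
    "billing-policy": "Refunds for duplicate charges are processed within 5 business days.",
    "sla-summary": "Enterprise support tickets are acknowledged within 1 hour, 24/7.",
    "data-residency": "Customer data is stored and processed within in-region facilities.",
}

def retrieve(query: str) -> str:
    """Return the document whose words overlap most with the query (toy retriever)."""
    query_words = set(query.lower().split())
    scores = {
        name: len(query_words & set(text.lower().split()))
        for name, text in DOCUMENTS.items()
    }
    best = max(scores, key=scores.get)
    return DOCUMENTS[best]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = retrieve(query)
    return (
        "Answer using only the context below.\n"
        f"Context: {context}\n"
        f"Question: {query}"
    )

if __name__ == "__main__":
    print(build_prompt("How quickly are duplicate charges refunded?"))
```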
For enterprises, the value is immediate: lower deployment and operating costs, faster time to value thanks to pre-deployed SLMs, flexible scaling across workloads and industries, and the assurance of built-in compliance, with deployments that respect local data-sovereignty requirements.
The joint initiative stems from e& enterprise and Intel’s shared goal of democratising enterprise AI in the Middle East by delivering a more accessible, affordable, and practical AI solution. By packaging models, infrastructure optimisation, and local delivery into a single offering, e& enterprise and Intel aim to accelerate AI adoption and ensure that organisations realise immediate, tangible benefits at scale rather than waiting for possible future outcomes.