At the start of this year, the UK government published its AI Opportunities Action Plan, aiming to “ramp up AI adoption across the UK to boost economic growth, provide jobs for the future and improve people’s everyday lives”.
The action plan has three goals: invest in AI (infrastructure, talent and regulation), position the UK as an “AI maker, not an AI taker” and drive cross-economy AI adoption. The last of these puts the onus on the public sector, stating that it should “rapidly pilot and scale AI products and services” to “drive better experiences and outcomes for citizens and boost productivity.”
The UK has an opportunity to lead, with billions of pounds in potential economic gains if we “fully embrace the technology”. But there are challenges to address along the way: multi-cloud complexity, data security concerns, legacy processes and a technology skills gap.
Multi-cloud flexibility
Today’s UK public sector operates under a cloud-first policy for procuring new or existing services, with guidance that individual organisations should take strategic decisions based on their unique requirements. With 467 government departments, this approach has led to a fragmented landscape of proprietary cloud platforms, each with its own contract, rules and costs. It’s easy to say with hindsight, but this may not represent the most efficient way to take advantage of cloud transformation. The full benefits of the cloud rely on interoperability and interconnectivity, with a consistent security layer providing resilience, not silos that mimic old ways of working.
The lessons from the government’s cloud adoption journey are clear: AI presents a second chance to build more flexible foundations that can stand up to future unknowns, and to start reusing common blueprints for success rather than reinventing the wheel in numerous places. A secured and supported platform, like Red Hat OpenShift, can manage hybrid multi-cloud in a unified way, enabling closer collaboration within and between teams while providing a consistent experience and consistent controls. Greater portability means the freedom to run AI wherever it makes sense for data sensitivity and to optimise resource usage and cost, whether on premises, in public clouds or at the edge.
Right-sized AI
AI deployments built on large language models (LLMs) can face resource, locality and cost challenges for many organisations. One option to reduce costs and speed up deployment is model compression and efficient inference with tools such as vLLM, an open source project focused on optimising inference for generative AI models. vLLM is the basis of the newly launched Red Hat AI Inference Server, built to enable faster, higher-performing and more cost-efficient AI inference across data centres, public clouds and the edge.
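As a rough sketch of what efficient inference can look like in practice (assuming a Python environment with vLLM installed; the model identifier below is illustrative, not a recommendation), vLLM’s offline API can serve a compact model in a few lines:

```python
from vllm import LLM, SamplingParams

# Prompts to run through the model in a single batch.
prompts = [
    "Summarise the goals of the UK AI Opportunities Action Plan in two sentences.",
]

# Low temperature for more deterministic, factual-style output.
sampling_params = SamplingParams(temperature=0.2, max_tokens=128)

# Model identifier is a placeholder; any compact instruction-tuned model
# available to the deployment environment could be substituted.
llm = LLM(model="ibm-granite/granite-3.0-2b-instruct")

# Generate completions and print the first candidate for each prompt.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```

The same model could equally sit behind vLLM’s OpenAI-compatible HTTP server for production-style integration, rather than being called in-process as above.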
Another option is to use small language models (SLMs), an efficient and compact alternative to large foundation models. They are designed to deliver high performance on specific tasks with far fewer computational resources: a 10-billion parameter SLM, for instance, can be optimised for enterprise applications at a fraction of the cost and complexity of a 400-billion parameter LLM. Because SLMs can be trained and tuned on private datasets, they help keep sensitive data in-house, minimising third-party risk and supporting regulatory compliance. They are purpose-built for targeted use cases, and their smaller size makes them more flexible and easier to deploy across hybrid multi-cloud environments.
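To make that resource gap concrete, here is a minimal back-of-envelope sketch, assuming 16-bit weights and counting only the memory needed to hold the parameters (real deployments also need memory for the KV cache and activations, so these are lower bounds):

```python
# Approximate memory footprint of model weights at 16-bit precision
# (2 bytes per parameter).
BYTES_PER_PARAM_FP16 = 2

def weight_memory_gb(num_params: float) -> float:
    """Gigabytes needed just to hold the model weights."""
    return num_params * BYTES_PER_PARAM_FP16 / 1e9

slm_params = 10e9    # 10-billion parameter small language model
llm_params = 400e9   # 400-billion parameter large model

print(f"10B SLM weights:  ~{weight_memory_gb(slm_params):.0f} GB")   # ~20 GB
print(f"400B LLM weights: ~{weight_memory_gb(llm_params):.0f} GB")   # ~800 GB
```

Roughly 20 GB of weights fits on a single modern GPU; roughly 800 GB does not, which is where much of the cost and complexity difference comes from.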
AI-enabled teams
Talent is an essential component of a leading AI nation. Recent Red Hat survey data identified AI as the number one urgent skills gap for the UK’s IT managers. Addressing this will require changes at the policy level, including investment in future talent and encouraging international experts to relocate to the UK. Public sector organisations must also prioritise skilling and reskilling existing personnel, building diverse teams with not just technical expertise but also skills in AI ethics, data bias and model explainability. To help organisations with this, Red Hat is offering a complimentary AI skills development initiative for women in the UK public sector.
Government organisations should lean on trusted partners with the right experience to support them and look for platforms and tools that make AI more accessible. For example, the open source project InstructLab makes it easier for non-technical people to contribute relevant business knowledge to an AI model so that organisations are not solely reliant on data scientists to train and tune models.
Transparency, security and control
For 30 years, Red Hat has been committed to an open source development model, building and supporting communities to drive innovation across sectors using the latest technologies. We are now bringing this open source leadership to the world of AI with Red Hat AI.
Red Hat AI provides an enterprise AI platform for model training and inference. It is designed for increased efficiency, a simplified experience and the flexibility to deploy anywhere across a hybrid multi-cloud environment. With a continuous focus on co-engineering, integration and testing with partners, Red Hat aims to provide as much flexibility, freedom of choice and sovereignty as customers need when choosing models, platforms and hardware for their AI strategy.
The aforementioned Red Hat survey data shows that when establishing trust in generative AI for the enterprise, the most important factor for UK IT managers was access to transparent, modifiable models with explainable sources. Alongside speed of innovation, this is a major reason why open source will continue to be so important for AI, helping to enhance transparency and control for organisations.
Open source AI in the public sector
Red Hat has long been relied on by public sector organisations globally, such as the UK’s Meteorological (Met) Office, Department for Work and Pensions, and Ministry of Defence. Supporting AI innovation is a core tenet of Red Hat’s value to the public sector. For example, the Government of Ireland’s Department of Agriculture, Food and the Marine (DAFM) worked with Red Hat to create SmartText, a machine learning (ML) text analysis platform that scans documents and images for sensitive information and prevents unauthorised access. DAFM can now correctly categorise documents while protecting back-end systems, delivering new features faster (reducing development time from weeks to days) and improving security and stability.
Elsewhere in Europe, Red Hat has worked with the Basque Government Informatics Society (EJIE) to develop an AI-powered automated translation tool, running on GPU-accelerated containers, that serves thousands of daily users. Also in Spain, the Government of Castilla-La Mancha used Red Hat AI to create a generative AI assistant that streamlines processes, minimises errors and reduces response times for its environmental impact assessments. Meanwhile, the U.S. Department of Veterans Affairs is collaborating with Red Hat to use AI and ML in a data-driven suicide prevention solution.
Ready for what’s next
The AI Opportunities Action Plan offers great potential for the public sector to step up and lead the UK’s AI adoption and innovation. AI may be the fastest-moving technology space we’ve seen yet, so it’s more important than ever for the right foundations to be in place. With hybrid multi-cloud frameworks and open source, the UK’s public sector can become more efficient, flexible and skilled so that it can seize opportunities as they come.
For the latest updates on Red Hat AI, watch the May 2025 Red Hat Summit keynotes on demand. Next up, join us in London for an in-person event, AI innovation with open source, on 26th June.
About the author
Jonny Williams is Chief Digital Adviser for the UK Public Sector at Red Hat and author of "Delivery Management: Enabling Teams to Deliver Value". Prior to joining Red Hat, he was Head of Delivery at Homes England.
Having enabled teams to deliver value for over ten years, he now helps organisations uncover effective modern approaches to work and understand the impact of open source technology.