Self-improvement in large language models refers to a model's ability to enhance its own performance over time through iterative learning, without relying on external retraining by developers. As demand for AI grows, traditional development methods, which rely on resource-intensive retraining cycles over vast human-curated datasets, become increasingly impractical. We are researching techniques for LLM self-improvement that enable efficient, cost-effective retraining while avoiding catastrophic forgetting and other forms of model collapse, and while improving helpfulness, truthfulness, and safety in the process.
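The loop described above can be sketched in miniature. This is a hypothetical illustration, not our implementation: `toy_generate` and `toy_verify` stand in for a real LLM sampler and a reward/verifier model, and the surviving examples would feed a fine-tuning step (with safeguards such as replaying original data to limit forgetting).

```python
import random

def toy_generate(prompt, n=8, seed=0):
    # Stand-in for sampling n candidate completions from the model.
    rng = random.Random(seed)
    return [f"{prompt}-candidate-{rng.randint(0, 100)}" for _ in range(n)]

def toy_verify(candidate):
    # Stand-in for a verifier / reward model: keep "even" candidates.
    return int(candidate.rsplit("-", 1)[1]) % 2 == 0

def self_improvement_round(prompts):
    # One round: generate candidates, filter with the verifier,
    # and collect the survivors as the next fine-tuning batch.
    training_batch = []
    for p in prompts:
        kept = [c for c in toy_generate(p) if toy_verify(c)]
        training_batch.extend((p, c) for c in kept)
    return training_batch

batch = self_improvement_round(["q1", "q2"])
```

In a real system, the batch above would be used to update the model itself, closing the loop so the next round of generation starts from a stronger policy.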
We are researching distributed computing networks to provide the computing power needed to support a large community of users accessing our AI models. Rather than relying on a single machine, these networks divide the inference workload across many nodes, distributing tasks to the most suitable hardware based on availability and performance needs. This decentralised approach reduces reliance on large, energy-intensive data centres and enables the use of edge devices and underutilised hardware, contributing to a smaller carbon footprint.
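The routing idea can be shown with a minimal, hypothetical scheduler. The node fields and scoring rule here are illustrative assumptions, not the production system: eligibility is a memory check, and "most suitable" is approximated as the least-loaded eligible node.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_memory_gb: float
    queue_depth: int       # pending tasks: a proxy for availability
    available: bool = True

def route_task(nodes, required_memory_gb):
    # Keep only nodes that are up and can hold the model shard.
    eligible = [n for n in nodes
                if n.available and n.free_memory_gb >= required_memory_gb]
    if not eligible:
        raise RuntimeError("no eligible node for task")
    # Prefer the least-loaded eligible node.
    return min(eligible, key=lambda n: n.queue_depth)

nodes = [
    Node("edge-phone", 4, queue_depth=1),
    Node("workstation", 24, queue_depth=3),
    Node("idle-server", 64, queue_depth=0),
]
chosen = route_task(nodes, required_memory_gb=16)  # -> the idle-server node
```

A real scheduler would also weigh latency, energy cost, and hardware capability, but the shape is the same: filter for eligibility, then rank by load.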
Our research focuses on Lean Language Models as a scalable, sustainable path forward for AI. We believe compact, efficient models, when enhanced with retrieval, tool-use, and test-time compute, can match or exceed the performance of larger, resource-intensive models while remaining deployable on device or in the cloud. We are advancing this direction through model distillation, self-improvement, reasoning models, and architectural efficiencies.
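One concrete test-time-compute technique referenced above is best-of-N sampling: a compact model spends extra inference compute by drawing several candidates and keeping the highest-scoring one. The sketch below is a toy illustration; `toy_model` and `toy_scorer` are assumed stand-ins for a lean LLM and a verifier.

```python
import random

def toy_model(prompt, rng):
    # Stand-in for one stochastic sample from a small model.
    return f"{prompt}:answer-{rng.randint(0, 9)}"

def toy_scorer(answer):
    # Stand-in for a reward/verifier model: higher trailing digit = better.
    return int(answer[-1])

def best_of_n(prompt, n=16, seed=0):
    # Trade inference compute for quality: sample n, keep the best.
    rng = random.Random(seed)
    candidates = [toy_model(prompt, rng) for _ in range(n)]
    return max(candidates, key=toy_scorer)

best = best_of_n("2+2")
```

The same pattern underlies more elaborate test-time strategies (self-consistency voting, verifier-guided search), all of which let a small model buy accuracy with extra inference steps rather than extra parameters.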
We believe that Personalised AI is a core enabler of meaningful, long-term human-AI interaction. We are investigating modular persona adapters to enable efficient, fine-grained customisation of a model's tone, style, and voice, tailored to individual users or brand identities without requiring full model retraining. Additionally, we are exploring methods for long-term memory, including a graph-based approach, allowing the model to retain and reason over personal context across sessions.
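A graph-based memory can be illustrated with a minimal sketch: personal context is stored as (subject, relation, object) triples, and facts about a subject are recalled for injection into the prompt. The class and field names are hypothetical, not our implementation.

```python
from collections import defaultdict

class GraphMemory:
    """Toy triple store: subject -> list of (relation, object) edges."""

    def __init__(self):
        self.edges = defaultdict(list)

    def remember(self, subject, relation, obj):
        # Persist one fact as a graph edge.
        self.edges[subject].append((relation, obj))

    def recall(self, subject):
        # Everything known about a subject, e.g. for prompt injection
        # at the start of a new session.
        return list(self.edges[subject])

memory = GraphMemory()
memory.remember("user", "prefers_tone", "formal")
memory.remember("user", "works_in", "finance")
facts = memory.recall("user")
```

Because facts are edges rather than flat text, a richer version can follow relations across hops (user → project → deadline), which is what lets the model reason over context instead of merely retrieving it.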
We are researching agentic systems capable of autonomous reasoning, tool use, and iterative problem solving to support complex tasks like deep research. These agents combine lean language models with retrieval, code execution, and planning to perform multi-step investigations, while remaining deployable on device or in trusted environments. We are particularly focused on human-AI collaboration: ensuring agents work under user guidance, clarify intent, and adapt their depth of analysis to the task.
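The agentic cycle of planning, tool use, and observation can be sketched as a toy loop. This is an assumed minimal shape, not our agent framework: `toy_planner` stands in for an LLM-driven planner, and a single calculator tool stands in for retrieval or code execution.

```python
def calculator(expr):
    # Toy tool: evaluate a simple arithmetic expression safely-ish
    # (builtins stripped; illustration only).
    return eval(expr, {"__builtins__": {}})

TOOLS = {"calculator": calculator}

def toy_planner(task, observations):
    # Stand-in for an LLM planner: call the tool once, then finish.
    if observations:
        return ("finish", observations[-1])
    return ("calculator", task)

def run_agent(task, max_steps=5):
    # Plan -> act -> observe loop with a hard step budget, so the
    # agent's depth of analysis is bounded and controllable.
    observations = []
    for _ in range(max_steps):
        action, arg = toy_planner(task, observations)
        if action == "finish":
            return arg
        observations.append(TOOLS[action](arg))
    raise RuntimeError("step budget exhausted")

result = run_agent("17 * 3")  # -> 51
```

The step budget and the explicit action/observation trace are the hooks for human oversight: a user can inspect each step, adjust the budget, or require clarification before the agent proceeds.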
Locai L1 is our flagship family of British foundational large language models, designed to be aligned with the laws, languages, and cultural context of the United Kingdom.
This patent introduces novel methods for fine-tuning machine learning models while preserving previously learned knowledge. The techniques address the critical challenge of catastrophic forgetting that occurs during incremental learning.
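The patent's specific method is not reproduced here; as a hedged illustration of the general problem, the sketch below shows a simple importance-weighted anchor penalty (in the spirit of elastic weight consolidation) that discourages new-task updates from drifting far from weights learned on the old task. All names and numbers are illustrative.

```python
def penalized_gradient(grad_new_task, weights, anchor_weights, importance, lam=1.0):
    # Gradient of: new-task loss + (lam/2) * sum_i F_i * (w_i - w*_i)^2,
    # where w* are the old-task weights and F_i their importance.
    return [
        g + lam * f * (w - w0)
        for g, w, w0, f in zip(grad_new_task, weights, anchor_weights, importance)
    ]

grads = penalized_gradient(
    grad_new_task=[0.5, -0.2],
    weights=[1.0, 2.0],
    anchor_weights=[0.8, 2.0],   # weights after training on the old task
    importance=[10.0, 0.1],      # per-weight importance (Fisher-style)
)
# The heavily anchored first weight gains roughly +2.0 from the penalty;
# the unimportant, undrifted second weight is left essentially untouched.
```

The effect is that fine-tuning is free to move weights the old task did not depend on, while important weights are pulled back toward their previous values, mitigating catastrophic forgetting.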
This foundational patent covers autonomous decision-making capabilities in AI agents while maintaining safety and control through sophisticated guardrails.
This patent describes breakthrough techniques for enhancing speech recognition performance, particularly in challenging acoustic environments.
This patent covers innovative approaches to automatically classify and route telecommunication calls in real-time using advanced machine learning techniques.