Liquid AI Released LFM2.5-350M: A Compact 350M Parameter Model Trained on 28T Tokens with Scaled Reinforcement Learning
In the current landscape of generative AI, ‘scaling laws’ have generally dictated that more parameters mean more intelligence. Liquid AI is challenging this convention with the release of LFM2.5-350M. The model is a technical case study in intelligence density, combining extended pre-training (from 10T to 28T tokens) with large-scale reinforcement learning.
The significance of LFM2.5-350M lies in its architecture and training efficiency. While most AI companies have focused on frontier models, Liquid AI is targeting the ‘edge’, devices with limited memory and compute, by showing that a 350-million-parameter model can outperform models more than twice its size on several evaluated benchmarks.

Architecture: The Hybrid LIV Backbone
The core technical differentiator of LFM2.5-350M is its departure from the pure Transformer architecture. It uses a hybrid structure built on Linear Input-Varying Systems (LIVs).
Traditional Transformers rely entirely on self-attention, whose compute scales quadratically with sequence length; meanwhile, the Key-Value (KV) cache grows linearly with the context window and comes to dominate memory at long contexts.
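A quick back-of-envelope calculation makes this trade-off concrete. The sketch below is illustrative only: the layer count, head geometry, and fp16 precision are assumptions for a generic small Transformer, not LFM2.5-350M’s actual configuration.

```python
# Back-of-envelope: how attention cost grows with context length.
BYTES_FP16 = 2

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len):
    """KV cache memory: one K and one V tensor per layer, per token."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * BYTES_FP16

def attention_flops(seq_len, d_model):
    """Rough FLOPs for one full self-attention pass: O(seq_len^2 * d_model)."""
    return 2 * seq_len**2 * d_model

for seq_len in (2_048, 8_192, 32_768):
    mem = kv_cache_bytes(n_layers=16, n_kv_heads=8, head_dim=64, seq_len=seq_len)
    flops = attention_flops(seq_len, d_model=1024)
    print(f"{seq_len:>6} tokens: KV cache ~{mem / 2**20:6.1f} MiB, "
          f"attention ~{flops / 1e9:8.1f} GFLOPs")
# KV memory grows linearly with context; attention compute grows quadratically.
```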
Liquid AI addresses this by using a hybrid backbone consisting of:
10 Double-Gated LIV Convolution Blocks:
These handle the majority of the sequence processing. LIVs function similarly to advanced Recurrent Neural Networks (RNNs) but are designed to be more parallelizable and stable during training, and they maintain a constant-size state, which reduces memory I/O overhead.
6 Grouped Query Attention (GQA) Blocks:
By integrating a small number of attention blocks, the model retains high-precision retrieval and long-range context handling without the full memory overhead of a standard Transformer.
This hybrid approach allows LFM2.5-350M to support a 32k context window (32,768 tokens) while maintaining an extremely lean memory footprint.
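To make the block pattern concrete, here is a minimal PyTorch sketch of this kind of hybrid stack. The double-gating scheme, dimensions, and block ordering are assumptions for illustration; Liquid AI’s actual LIV operator is more involved than a gated depthwise convolution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoubleGatedConvBlock(nn.Module):
    """Sketch of a double-gated causal convolution block (LIV-style):
    an input gate and an output gate around a depthwise short convolution."""
    def __init__(self, d_model: int, kernel_size: int = 3):
        super().__init__()
        self.in_proj = nn.Linear(d_model, 3 * d_model)  # value + two gates
        self.conv = nn.Conv1d(d_model, d_model, kernel_size,
                              groups=d_model, padding=kernel_size - 1)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x):  # x: (batch, seq, d_model)
        v, g_in, g_out = self.in_proj(x).chunk(3, dim=-1)
        v = (v * torch.sigmoid(g_in)).transpose(1, 2)       # -> (B, C, T)
        v = self.conv(v)[..., : x.size(1)].transpose(1, 2)  # trim pad -> causal
        return x + self.out_proj(v * torch.sigmoid(g_out))

class GQABlock(nn.Module):
    """Grouped Query Attention: K/V heads are shared across groups of
    query heads, shrinking the KV cache versus full multi-head attention."""
    def __init__(self, d_model: int, n_heads: int = 8, n_kv_heads: int = 2):
        super().__init__()
        self.nh, self.nkv, self.hd = n_heads, n_kv_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.kv_proj = nn.Linear(d_model, 2 * n_kv_heads * self.hd)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x):  # x: (batch, seq, d_model)
        B, T, _ = x.shape
        q = self.q_proj(x).view(B, T, self.nh, self.hd).transpose(1, 2)
        k, v = self.kv_proj(x).chunk(2, dim=-1)
        k = k.view(B, T, self.nkv, self.hd).transpose(1, 2)
        v = v.view(B, T, self.nkv, self.hd).transpose(1, 2)
        # Broadcast each K/V head to its group of query heads.
        k = k.repeat_interleave(self.nh // self.nkv, dim=1)
        v = v.repeat_interleave(self.nh // self.nkv, dim=1)
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return x + self.out_proj(out.transpose(1, 2).reshape(B, T, -1))

# Hypothetical stack mirroring the reported 10 conv + 6 attention layout.
d_model = 512
blocks = nn.ModuleList([DoubleGatedConvBlock(d_model) for _ in range(10)]
                       + [GQABlock(d_model) for _ in range(6)])
x = torch.randn(2, 128, d_model)
for block in blocks:
    x = block(x)
print(x.shape)  # torch.Size([2, 128, 512])
```

Note that the conv blocks carry no per-token cache at all; only the six GQA blocks contribute KV state, which is where the memory savings come from.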
Performance and Intelligence Density
LFM2.5-350M was pre-trained on 28 trillion tokens, an extremely high training-to-parameter ratio. This ensures that the model’s limited parameter count is utilized to its maximum potential, resulting in high ‘intelligence density.’
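The ratio itself is simple arithmetic:

```python
# Tokens seen per parameter for LFM2.5-350M.
params = 350e6   # 350 million parameters
tokens = 28e12   # 28 trillion pre-training tokens
print(f"{tokens / params:,.0f} tokens per parameter")  # 80,000
```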
Benchmarks and Use Cases
The LFM2.5-350M is a specialist model designed for high-speed, agentic tasks rather than general-purpose reasoning.
| Benchmark | Score |
|---|---|
| IFEval (Instruction Following) | 76.96 |
| GPQA Diamond | 30.64 |
| MMLU-Pro | 20.01 |
The high IFEval score indicates the model is efficient at following complex, structured instructions, making it suitable for tool use, function calling, and structured data extraction (e.g., JSON). However, the documentation explicitly states that LFM2.5-350M is not recommended for mathematics, complex coding, or creative writing; for those tasks, the reasoning capabilities of larger parameter counts remain necessary.
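In practice, a structured-extraction call might look like the sketch below. The Hugging Face repo id `LiquidAI/LFM2.5-350M` and the chat-template behavior are assumptions here; check the model card for the exact identifier and prompt format.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2.5-350M"  # assumed repo id; verify on the model card
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [
    {"role": "system",
     "content": "Extract the order as JSON with keys item, quantity, city."},
    {"role": "user",
     "content": "Please send two espresso machines to our Berlin office."},
]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True,
                                 return_tensors="pt")
output = model.generate(inputs, max_new_tokens=64)
print(tok.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
# Expected shape of answer: {"item": "espresso machine", "quantity": 2, ...}
```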

Hardware Optimization and Inference Efficiency
A major hurdle for AI developers is the ‘memory wall’: the bottleneck created by moving data between the processor and memory. Because LFM2.5-350M utilizes LIVs and GQA, it drastically reduces KV cache size, boosting throughput. On a single NVIDIA H100 GPU, the model can reach a throughput of 40.4K output tokens per second at high concurrency.
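The concurrency angle can be sketched with the same kind of estimate as before. Only the six-attention-block count comes from the article; the GQA geometry, precision, context length, and batch size below are illustrative assumptions.

```python
# KV cache at high concurrency: hybrid stack vs. a hypothetical
# all-attention stack of the same width.
BYTES_FP16 = 2
N_KV_HEADS, HEAD_DIM = 8, 64

def kv_cache_gib(n_attn_layers, seq_len, batch):
    """One K and one V tensor per attention layer, per token, per sequence."""
    b = 2 * n_attn_layers * N_KV_HEADS * HEAD_DIM * seq_len * batch * BYTES_FP16
    return b / 2**30

batch, seq_len = 256, 4_096  # 256 concurrent requests, 4k tokens each
print(f"hybrid (6 GQA blocks):     {kv_cache_gib(6, seq_len, batch):.1f} GiB")
print(f"all-attention (16 blocks): {kv_cache_gib(16, seq_len, batch):.1f} GiB")
```

Under these assumptions the all-attention stack needs roughly 2.7x the KV memory for the same batch, which directly limits how many concurrent requests fit on one GPU.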
The Liquid AI team reports device-specific low-memory inference results that make local deployment viable:
Snapdragon 8 Elite NPU: 169MB peak memory (RunAnywhere, Q4).
Snapdragon GPU: 81MB peak memory (RunAnywhere, Q4).
Raspberry Pi 5: 300MB peak memory (Cactus Engine, int8).
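These peaks are broadly consistent with weight-only arithmetic. The sketch below ignores activations, KV cache, and runtime overhead, and reported peaks also depend on how each engine maps weights into memory.

```python
# Weight-only footprint of 350M parameters at common precisions.
params = 350e6
for name, bits in (("fp16", 16), ("int8", 8), ("q4", 4)):
    print(f"{name}: ~{params * bits / 8 / 2**20:.0f} MiB of weights")
# fp16: ~668 MiB | int8: ~334 MiB | q4: ~167 MiB
```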
Key Takeaways
Extreme Intelligence Density: By training a 350M-parameter model on 28 trillion tokens, the Liquid AI team achieved an extremely high 80,000:1 token-to-parameter ratio, allowing it to outperform models more than twice its size on several benchmarks.
Hybrid LIV Architecture: The model departs from pure Transformers by using Linear Input-Varying Systems (LIVs) combined with a small number of Grouped Query Attention (GQA) blocks, significantly reducing the memory overhead of the KV cache.
Edge-First Efficiency: It is designed for local deployment with a 32k context window and a remarkably low memory footprint, reaching as low as 81MB on mobile GPUs and 169MB on NPUs via specialized inference engines.
Specialized Agentic Capability: The model is highly optimized for instruction following (IFEval: 76.96) and tool use, though it is explicitly not recommended for complex coding, mathematics, or creative writing.
Massive Throughput: The architectural efficiency enables high-speed utility, generating up to 40.4K output tokens per second on a single H100, making it ideal for high-volume data extraction and real-time classification.
Check out the Technical details and Model Weights.