Launched in early preview last May, Gemma 3n is now officially available. It targets mobile-first, on-device AI applications, using new techniques designed to increase efficiency and improve performance, such as per-layer embeddings and transformer nesting.
Gemma 3n uses Per-Layer Embeddings (PLE) to reduce the RAM required to run a model while maintaining the same number of total parameters. The technique consists of loading only the core transformer weights into accelerated memory, typically VRAM, while the rest of the parameters are kept on the CPU. Specifically, the 5-billion-parameter variant of the model only requires 2 billion parameters to be loaded into the accelerator; for the 8-billion variant, it’s 4 billion.
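To make the idea concrete, here is a minimal PyTorch sketch of the pattern PLE describes, not Gemma's actual implementation: the per-layer embedding table stays in CPU RAM, and only the rows needed for the current tokens are gathered and copied to the accelerator inside each layer. All class and parameter names here are hypothetical.

```python
import torch
import torch.nn as nn

class PLELayer(nn.Module):
    """Hypothetical transformer sub-layer augmented with a per-layer embedding.

    The core weights (self.ffn) live on the accelerator, while the
    per-layer embedding table deliberately stays in CPU RAM.
    """
    def __init__(self, vocab_size, d_model, ple_dim, device="cuda"):
        super().__init__()
        self.ffn = nn.Linear(d_model + ple_dim, d_model, device=device)
        # Kept on CPU: only the rows for the current token ids are
        # transferred each step, so the full table never occupies VRAM.
        self.ple_table = nn.Embedding(vocab_size, ple_dim, device="cpu")

    def forward(self, hidden, token_ids):
        # Gather just the needed embedding rows on CPU, then move them over.
        ple = self.ple_table(token_ids.cpu()).to(hidden.device)
        return self.ffn(torch.cat([hidden, ple], dim=-1))
```

Under this scheme the accelerator only ever holds the core layer weights plus a small per-step slice of the embedding tables, which is how a 5-billion-parameter model can run within a 2-billion-parameter memory budget.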
Another novel technique is MatFormer (short for Matryoshka Transformer), which allows transformers to be nested so that a larger model, e.g. with 4B parameters, contains a smaller version of itself, e.g. with only 2B parameters. This approach enables what Google calls elastic inference and allows developers to choose either the full model or its faster but fully-functional sub-model. MatFormer also supports a Mix-n-Match method to let developers create intermediate-sized versions:
This technique allows you to precisely slice the E4B model’s parameters, primarily by adjusting the feed forward network hidden dimension per layer (from 8192 to 16384) and selectively skipping some layers.
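Based on that description, the following is a rough sketch of what Mix-n-Match-style slicing could look like, assuming each layer's FFN projections are stored as plain weight matrices; the layout and function names are illustrative, not Gemma's actual checkpoint format.

```python
import torch

def slice_ffn(up_proj, down_proj, new_hidden):
    """Shrink an FFN by keeping only its first `new_hidden` hidden units.

    up_proj:   (hidden_dim, d_model) -> (new_hidden, d_model)
    down_proj: (d_model, hidden_dim) -> (d_model, new_hidden)
    """
    return up_proj[:new_hidden, :], down_proj[:, :new_hidden]

def mix_n_match(layers, hidden_dims, skip=()):
    """Assemble an intermediate model: per-layer FFN widths, some layers dropped."""
    sub_model = []
    for i, layer in enumerate(layers):
        if i in skip:
            continue  # selectively skip some layers
        up, down = slice_ffn(layer["up"], layer["down"], hidden_dims[i])
        sub_model.append({"up": up, "down": down})
    return sub_model

# Example: shrink one layer's FFN hidden dimension from 16384 to 8192.
layer = {"up": torch.randn(16384, 512), "down": torch.randn(512, 16384)}
sub = mix_n_match([layer], hidden_dims=[8192])
```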
In the future, Gemma 3n will fully support elastic inference, enabling dynamic switching between the full model and the sub-model on the fly, depending on the current task and device load.
Another new feature in Gemma 3n aimed at speeding up inference is KV cache sharing, which is designed to reduce time-to-first-token, a key metric for streaming-response applications. According to Google, the technique is particularly effective with long contexts:
The keys and values of the middle layer from local and global attention are directly shared with all the top layers, delivering a notable 2x improvement on prefill performance compared to Gemma 3 4B.
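Conceptually, the top layers reuse the key/value tensors computed at a middle layer instead of projecting their own during prefill. Below is a toy, single-head sketch of that flow; it ignores the local/global attention split and causal masking, and all sizes and names are invented.

```python
import torch
import torch.nn as nn

NUM_LAYERS, D_MODEL, SHARE_FROM = 12, 256, 6  # invented sizes

class TinyAttnLayer(nn.Module):
    """Toy attention layer with a separable K/V projection."""
    def __init__(self):
        super().__init__()
        self.q = nn.Linear(D_MODEL, D_MODEL)
        self.kv = nn.Linear(D_MODEL, 2 * D_MODEL)
        self.out = nn.Linear(D_MODEL, D_MODEL)

    def project_kv(self, x):
        return self.kv(x).chunk(2, dim=-1)

    def forward(self, x, kv):
        k, v = kv
        attn = torch.softmax(self.q(x) @ k.transpose(-2, -1) / D_MODEL ** 0.5, dim=-1)
        return x + self.out(attn @ v)

def prefill(layers, hidden):
    kv_cache = {}
    for i, layer in enumerate(layers):
        if i < SHARE_FROM:
            kv_cache[i] = layer.project_kv(hidden)  # lower layers: own K/V
        else:
            # Top layers reuse the middle layer's keys/values directly,
            # skipping their own K/V projections during prefill.
            kv_cache[i] = kv_cache[SHARE_FROM - 1]
        hidden = layer(hidden, kv_cache[i])
    return hidden, kv_cache

layers = nn.ModuleList(TinyAttnLayer() for _ in range(NUM_LAYERS))
out, cache = prefill(layers, torch.randn(1, 16, D_MODEL))
```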
Gemma 3n also brings native multimodal capabilities, thanks to its audio and video encoders. On the audio front, it enables on-device automatic speech recognition and speech translation.
The encoder generates a token for every 160ms of audio (about 6 tokens per second), which are then integrated as input to the language model, providing a granular representation of the sound context.
Google says they have observed strong results translating between English and Spanish, French, Italian, and Portuguese. While the Gemma 3n audio encoder can process arbitrarily long audio thanks to its streaming architecture, it is limited to clips of up to 30 seconds at launch.
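Putting those two figures together gives the token budget for a maximum-length clip:

```python
TOKEN_WINDOW_S = 0.160   # one audio token per 160 ms of input
MAX_CLIP_S = 30          # initial clip-length limit at launch

tokens_per_second = 1 / TOKEN_WINDOW_S           # 6.25, i.e. "about 6"
max_tokens = int(MAX_CLIP_S / TOKEN_WINDOW_S)    # 187 tokens for a full 30 s clip
```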
As a final note about Gemma 3n, it is worth highlighting its new MobileNet-V5 vision encoder, which supports resolutions of 256×256, 512×512, and 768×768 pixels and can process up to 60 frames per second on a Google Pixel device. In comparison with the vision encoder in Gemma 3, it delivers a 13x speedup with quantization (6.5x without) and has a memory footprint that is four times smaller.