IBM recently announced the Granite 4.0 family of small language models. The family aims to deliver faster inference and significantly lower operational costs than larger models, at acceptable accuracy. Granite 4.0 features a new hybrid Mamba/transformer architecture that substantially reduces memory requirements, enabling the models to run on considerably cheaper GPUs at a much lower cost.
IBM states:
LLMs’ GPU memory requirements are often reported in terms of how much RAM is needed just to load up model weights. But many enterprise use cases—especially those involving large-scale deployment, agentic AI in complex environments, or RAG systems—entail lengthy context, batch inferencing of several concurrent model instances at once, or both.
According to IBM, Granite can cut the RAM needed to handle long inputs and multiple concurrent batches by over 70%. Inference speed purportedly remains high even as context length or batch size grows, and accuracy remains competitive with larger models, especially on instruction-following and function-calling benchmarks.
IBM attributes these gains over larger models to its hybrid architecture, which combines a small number of standard transformer-style attention layers with a majority of Mamba layers, specifically Mamba-2. With nine Mamba blocks for every transformer block, Granite gets linear scaling with context length in the Mamba layers (versus quadratic scaling in transformers), while the transformer attention layers preserve the contextual dependencies that matter for in-context learning and few-shot prompting.
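To make the ratio concrete, here is a minimal PyTorch sketch of such a 9:1 interleaving. The MambaBlock and AttentionBlock classes are simplified stand-ins (a gated depthwise convolution and plain softmax attention), not IBM's actual modules, and the group count is illustrative:

```python
import torch
import torch.nn as nn

class MambaBlock(nn.Module):
    """Stand-in for a Mamba-2 state-space block: cost grows linearly
    with sequence length (modeled here as a gated causal convolution)."""
    def __init__(self, d_model: int):
        super().__init__()
        self.conv = nn.Conv1d(d_model, d_model, kernel_size=4, padding=3, groups=d_model)
        self.gate = nn.Linear(d_model, d_model)

    def forward(self, x):  # x: (batch, seq, d_model)
        h = self.conv(x.transpose(1, 2))[..., : x.size(1)].transpose(1, 2)
        return x + h * torch.sigmoid(self.gate(x))

class AttentionBlock(nn.Module):
    """Standard softmax attention: quadratic in sequence length,
    but provides exact token-to-token lookups."""
    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):
        out, _ = self.attn(x, x, x, need_weights=False)
        return x + out

class HybridStack(nn.Module):
    """Illustrative 9:1 interleaving: nine Mamba blocks per attention block."""
    def __init__(self, d_model: int, groups: int = 4):
        super().__init__()
        layers = []
        for _ in range(groups):
            layers.extend(MambaBlock(d_model) for _ in range(9))  # linear-time layers
            layers.append(AttentionBlock(d_model))                # one quadratic layer
        self.layers = nn.ModuleList(layers)

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x
```

Because nine out of every ten layers run in linear time, the lone quadratic attention layer dominates memory only at very long contexts, which is the intuition behind the memory and throughput claims above.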
Additionally, since Granite uses a mixture-of-experts design, only a subset of its weights is active in any forward pass, which further keeps inference costs down.
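Below is a minimal sketch of the top-k expert routing behind that claim, assuming a generic switch-style router; the expert count and k are illustrative and do not reflect Granite's actual configuration:

```python
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    """Minimal top-k mixture-of-experts layer (illustrative, not IBM's
    implementation). Each token is routed to only k experts, so only a
    fraction of the layer's weights is exercised per forward pass."""
    def __init__(self, d_model: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x):                           # x: (tokens, d_model)
        scores = self.router(x)                     # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)  # pick k experts per token
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e            # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out
```

With 8 experts and k=2, only a quarter of the expert weights participate in each token's forward pass, mirroring how Granite Small activates 9B of its 32B total parameters.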
Granite ships three model variants with the hybrid architecture, conveniently called Micro, Tiny, and Small, to cater to different use cases. At one end, Micro (3B parameters) addresses high-volume, low-complexity tasks where speed, cost, and efficiency take precedence (e.g., RAG, summarization, text extraction, text classification). At the other end, Granite Small (32B total parameters, 9B active) is intended for enterprise workflows that require stronger performance without the cost of frontier models (e.g., multi-tool agents and customer support automation). A further model, Granite Nano (0.3B and 1B parameters), targets edge devices with limited connectivity and compute.
An empirical study of Mamba-based language models had already hinted at the potential of Mamba-2 hybrid architectures over pure Transformer and pure SSM models on some tasks:
Our primary goal is to provide a rigorous apples-to-apples comparison between Mamba, Mamba-2, Mamba-2-Hybrid (containing Mamba-2, attention, and MLP layers), and Transformers for 8B-parameter models trained on up to 3.5T tokens, with the same hyperparameters.
[…] Our results show that while pure SSM-based models match or exceed Transformers on many tasks, both Mamba and Mamba-2 models lag behind Transformer models on tasks that require strong copying or in-context learning abilities (e.g., five-shot MMLU, Phonebook Lookup) or long-context reasoning. In contrast, we find that the 8B-parameter Mamba-2-Hybrid exceeds the 8B-parameter Transformer on all 12 standard tasks we evaluated (+2.65 points on average) and is predicted to be up to 8× faster when generating tokens at inference time.
IBM open-sourced the Granite 4.0 models under the Apache 2.0 license. This contrasts with Meta's Llama licensing, whose open-source nature is disputed by members of the open-source community. The Llama 4 Community License Agreement, for instance, states that the license rights do not apply to persons residing in the EU or companies headquartered in the EU.
Granite models are available on Hugging Face and watsonx.ai. Interested readers can try the models in a dedicated online playground. IBM provides cookbooks for fine-tuning Granite, as well as a Colab example applying Granite to contract analysis.
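As an illustration, a Granite 4.0 checkpoint can be run with the Hugging Face transformers library roughly as follows; the model id is an assumption based on the ibm-granite organization's naming and should be checked against the model card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model id; substitute the exact Granite 4.0 variant you need.
model_id = "ibm-granite/granite-4.0-micro"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize the key clauses in this contract: ..."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=200)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0, inputs.shape[-1]:], skip_special_tokens=True))
```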
IBM has achieved accredited certification under ISO/IEC 42001:2023 for the AI management system (AIMS) covering IBM Granite. The ISO/IEC 42001 standard addresses the ethics, transparency, and continuous-learning challenges posed by AI, providing a structured way to manage risks and opportunities.
