IBM Research has recently introduced Granite-Docling-258M, a new open-source vision-language model (VLM) designed for high-fidelity document-to-text conversion while preserving complex layouts, tables, equations, and lists.
Unlike typical OCR systems that rely on large, general-purpose models, Granite-Docling is purpose-built for document parsing. With only 258 million parameters, it delivers accuracy on par with models several times its size — offering a major cost and efficiency advantage. The model goes beyond plain text extraction by retaining precise document structure, including math notation, table layouts, and code blocks, making it well-suited for retrieval-augmented generation (RAG) pipelines and dataset preparation.
Granite-Docling builds on the earlier SmolDocling-256M-preview, replacing the SmolLM-2 backbone with a Granite 3-based architecture and upgrading the visual encoder from SigLIP to SigLIP2. The new version addresses stability issues seen in the preview, such as token repetition and incomplete parses, thanks to improved dataset filtering and annotation cleanup.
Early community reactions have highlighted the model’s potential for on-device use. On Reddit, one commenter noted:
“0.3B? Impressive. Almost like even low-end phones will have solid local LLM inferencing in the future.”
To which a member of the IBM team responded:
“Thanks, we are trying to push as much as we can with the smaller models, as some tasks do not need such large models.”
IBM Research highlights that Granite-Docling’s benchmark results on standard document-understanding datasets show consistent improvements across accuracy, structural fidelity, and layout retention. The Hugging Face model card includes the full performance data, where Granite-Docling matches or exceeds larger proprietary systems on metrics such as table structure recognition and equation parsing, while using a fraction of the memory.
At the core of Granite-Docling’s performance is DocTags, a structured markup format that describes every page element — including tables, charts, code, forms, and captions — along with their spatial and logical relationships. This explicit tagging allows the model to separate content from structure, producing outputs that are compact, machine-readable, and easily convertible to formats such as Markdown, JSON, or HTML.
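To make this concrete, here is a toy sketch: a hand-written DocTags-style string, whose tag names and <loc_*> coordinate tokens approximate the published format rather than reproducing verbatim model output, and a few lines of Python that strip the spatial tokens and map elements to Markdown. It only illustrates how the format keeps content separable from structure; real conversions are handled by the Docling toolchain.

```python
import re

# Illustrative DocTags-style input; tag names and <loc_*> coordinate tokens
# approximate the published format and are not verbatim model output.
DOCTAGS = (
    "<doctag>"
    "<section_header_level_1><loc_58><loc_44><loc_400><loc_60>"
    "Quarterly Results</section_header_level_1>"
    "<text><loc_58><loc_70><loc_420><loc_96>"
    "Revenue grew in all segments.</text>"
    "</doctag>"
)

def doctags_to_markdown(s: str) -> str:
    """Toy converter: drop location tokens, then map a couple of
    element types to Markdown equivalents."""
    s = s.replace("<doctag>", "").replace("</doctag>", "")
    s = re.sub(r"<loc_\d+>", "", s)  # discard spatial coordinates
    out = []
    for tag, body in re.findall(r"<(\w+)>(.*?)</\1>", s):
        if tag.startswith("section_header"):
            out.append(f"# {body}")   # headers become Markdown headings
        elif tag == "text":
            out.append(body)          # plain paragraphs pass through
    return "\n\n".join(out)

print(doctags_to_markdown(DOCTAGS))
```

Because the structural tags and coordinates are explicit tokens, downstream tools can either discard them (as here, for Markdown) or keep them for layout-aware processing.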
The model also introduces experimental multilingual support for Arabic, Chinese, and Japanese, expanding beyond the English-only scope of its predecessor. While these capabilities are still early-stage, IBM notes that global language coverage will be a central goal in future releases.
Granite-Docling is intended to complement the Docling library, which provides customizable document-conversion pipelines and agentic AI integration. Used together, the library and the model combine high accuracy with flexible orchestration for enterprise document workflows.
IBM says upcoming work will include larger Granite-Docling models (up to 900M parameters), expanded evaluation datasets through Docling-eval, and deeper integration of DocTags within IBM watsonx.ai.
Granite-Docling-258M is available now on Hugging Face under the Apache 2.0 license.