DeepSeek AI has developed DeepSeek-OCR, an open-source system that compresses long textual contexts into vision tokens through optical 2D mapping, with the goal of improving how large language models (LLMs) handle text-heavy inputs. The team describes the method as a “new paradigm for context compression,” arguing that visual encoding can store and retrieve language more efficiently than traditional text tokenization.
DeepSeek-OCR comprises two key components: the DeepEncoder for visual compression and DeepSeek3B-MoE-A570M as the decoder. At compression ratios below 10×, where roughly ten text tokens are condensed into a single vision token, it achieves about 97% OCR decoding precision. Even at a 20× ratio it retains around 60% accuracy, showing that meaningful content survives even aggressive token reduction.
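To make those figures concrete, here is a small back-of-the-envelope calculation in Python. The page size of 1,000 text tokens is an illustrative assumption; the ratios and precision numbers are the ones cited above.

```python
# Back-of-the-envelope illustration of optical context compression.
# The 10x/~97% and 20x/~60% figures are the reported results cited above;
# the page size of 1,000 text tokens is an illustrative assumption.
import math

def vision_token_budget(text_tokens: int, compression_ratio: float) -> int:
    """Vision tokens needed to carry `text_tokens` at a given compression ratio."""
    return math.ceil(text_tokens / compression_ratio)

page_text_tokens = 1000  # assumed size of one text-heavy page

for ratio, precision in [(10, 0.97), (20, 0.60)]:
    vision_tokens = vision_token_budget(page_text_tokens, ratio)
    print(f"{ratio}x compression: {vision_tokens} vision tokens, "
          f"~{precision:.0%} decoding precision")

# Expected output:
# 10x compression: 100 vision tokens, ~97% decoding precision
# 20x compression: 50 vision tokens, ~60% decoding precision
```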
Source: https://arxiv.org/pdf/2510.18234
The DeepEncoder architecture minimizes activation memory while handling high-resolution inputs effectively. By combining windowed and global attention mechanisms with a 16× convolutional compressor, it processes large images without running into GPU memory limits. DeepSeek-OCR has outperformed advanced systems such as GOT-OCR 2.0 and MinerU 2.0, achieving higher precision with fewer than 800 vision tokens per page.
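The PyTorch sketch below illustrates the general shape of such a pipeline: attention over a large grid of patch tokens, a 16× convolutional downsampling step, then global attention over the much smaller token set. The layer choices, dimensions, and stand-in attention modules are illustrative assumptions, not the released DeepEncoder architecture.

```python
# Hedged sketch of a DeepEncoder-style pipeline: local attention over many patch
# tokens, a 16x convolutional compressor, then global attention over far fewer
# tokens. Sizes and modules are illustrative assumptions, not the real model.
import torch
import torch.nn as nn

class ConvCompressor16x(nn.Module):
    """Two stride-2 convolutions over the 2D token grid: 4x fewer per side = 16x fewer tokens."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=3, stride=2, padding=1),
            nn.GELU(),
            nn.Conv2d(dim, dim, kernel_size=3, stride=2, padding=1),
        )

    def forward(self, tokens: torch.Tensor, grid: int) -> torch.Tensor:
        # tokens: (batch, grid*grid, dim) -> reshape to a 2D feature map, compress, flatten back
        b, n, d = tokens.shape
        x = tokens.transpose(1, 2).reshape(b, d, grid, grid)
        x = self.net(x)                       # (b, d, grid/4, grid/4)
        return x.flatten(2).transpose(1, 2)   # (b, n/16, d)

dim, grid = 768, 64  # 64x64 = 4,096 patch tokens from a high-resolution page (illustrative)
local_attn = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)   # stands in for the windowed-attention stage
global_attn = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)  # global attention over the compressed tokens
compressor = ConvCompressor16x(dim)

patches = torch.randn(1, grid * grid, dim)
x = local_attn(patches)   # attention over the full patch grid (windowed in the real design)
x = compressor(x, grid)   # 4,096 -> 256 tokens before the expensive global stage
x = global_attn(x)
print(x.shape)            # torch.Size([1, 256, 768])
```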
The decoder, powered by a mixture-of-experts (MoE) design, allows specialized processing for different OCR subtasks while maintaining speed and accuracy. This enables the model to read charts, formulas, and multilingual documents with precision comparable to full-scale OCR suites, while consuming significantly fewer computational resources.
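A minimal sketch of token-level top-k mixture-of-experts routing, the general mechanism behind such a decoder, follows. The expert count, hidden sizes, and top-k value are illustrative assumptions rather than the actual DeepSeek3B-MoE-A570M configuration.

```python
# Minimal sketch of token-level top-k MoE routing: each token activates only a
# few experts, keeping active parameters per token small even when total
# capacity is large. All hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, dim: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim). Route each token to its top-k experts and mix their outputs.
        weights = F.softmax(self.router(x), dim=-1)           # (tokens, experts)
        topk_w, topk_idx = weights.topk(self.top_k, dim=-1)   # (tokens, k)
        topk_w = topk_w / topk_w.sum(dim=-1, keepdim=True)    # renormalize the selected weights
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = (topk_idx == e)                            # which tokens selected expert e
            if mask.any():
                rows = mask.any(dim=-1)
                w = (topk_w * mask).sum(dim=-1, keepdim=True)[rows]
                out[rows] += w * expert(x[rows])
        return out

moe = TopKMoE(dim=512)
hidden_states = torch.randn(16, 512)   # 16 decoder hidden states (illustrative)
print(moe(hidden_states).shape)        # torch.Size([16, 512])
```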
The research team positions DeepSeek-OCR as more than an OCR system — it is a potential foundation for memory mechanisms in next-generation LLMs. By storing long contexts as compressed vision tokens, models could effectively “remember” past information without inflating token counts.
Early reactions from the AI community reflect curiosity. One Reddit user wrote:
This looks like what Gemini 2.5 already has, unless they were using extra tools behind the scenes. I had text-heavy images that used fewer tokens than the actual transcribed text, and it was able to process them without issue.
Following the release, developers discussed the practical side of running the model locally. On Reddit, a user asked:
I wish I knew how to run these vision models on my desktop computer. They don’t convert to GGUFs, and I’m not sure how else to run them, as I could definitely use something like this right now. Any suggestions?
Another user stepped in with clarification:
Via Python transformers, but this would be full precision, so you need some VRAM. 3B should fit in most GPUs, though.
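For readers in the same situation, the sketch below shows the kind of workflow the commenter describes: loading the model through Hugging Face transformers. The repository ID, the trust_remote_code requirement, and the exact inference entry point are assumptions here; the project's README and model card should be treated as authoritative.

```python
# Hedged sketch of running the model locally via Hugging Face transformers, as
# the commenter suggests. The repo id and inference interface are assumptions;
# check the official GitHub repository and model card for the exact calls.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-OCR"   # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(
    MODEL_ID,
    trust_remote_code=True,       # the repo ships custom vision-language code
    torch_dtype=torch.bfloat16,   # drop this for full fp32 precision at roughly twice the VRAM
)
model = model.to("cuda").eval()   # a ~3B-parameter model should fit on most modern GPUs

# The OCR entry point (prompt format, image input) is defined by the repo's
# custom code; consult its README before wiring up actual inference.
```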
DeepSeek-OCR’s code and model weights are publicly available on GitHub, with the company inviting researchers to reproduce and extend its results. The system’s performance — compressing and decoding large textual documents through a visual channel — could influence how future LLMs balance efficiency and memory.