Hugging Face has unveiled FinePDFs, the largest publicly available corpus built entirely from PDFs. The dataset spans 475 million documents in 1,733 languages, totaling roughly 3 trillion tokens. At 3.65 terabytes, FinePDFs introduces a new dimension to open training datasets by tapping into a resource long considered too complex and expensive to process.
While most large-scale language model datasets rely on HTML sources such as Common Crawl, PDFs offer unique advantages. They tend to capture higher-quality, domain-specific content, particularly in law, academia, and technical writing. However, extracting usable text from PDFs has historically been difficult: some contain embedded text, others require OCR, and formatting issues can complicate parsing.
The FinePDFs pipeline addressed these challenges through a mix of text-based extraction (Docling) and GPU-powered OCR (RolmOCR), alongside deduplication, language identification, and PII anonymization. According to Hugging Face, this dual strategy allowed them to process documents at scale while maintaining quality across diverse formats.
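The dual extraction strategy can be illustrated with a minimal sketch. The routing logic below is an assumption for illustration only, not Hugging Face's actual code: the function name, the coverage signal, and the 0.8 threshold are all hypothetical, but they capture the basic idea of sending PDFs with a usable embedded text layer to a text-based extractor (Docling) and everything else to GPU OCR (RolmOCR).

```python
def choose_extraction_route(has_text_layer: bool, text_coverage: float) -> str:
    """Route a PDF to text-based extraction or OCR.

    Hypothetical routing rule: if the PDF ships an embedded text layer
    covering most of its pages, use a text-based extractor (Docling in
    the FinePDFs pipeline); otherwise fall back to GPU-powered OCR
    (RolmOCR). The 0.8 coverage threshold is illustrative.
    """
    if has_text_layer and text_coverage >= 0.8:
        return "docling"   # fast, CPU-only extraction of embedded text
    return "rolmocr"       # OCR for scanned or image-only PDFs

# A scanned document with no text layer goes to OCR;
# a born-digital PDF with near-full coverage goes to Docling.
print(choose_extraction_route(False, 0.0))   # rolmocr
print(choose_extraction_route(True, 0.95))   # docling
```

Downstream steps such as deduplication, language identification, and PII anonymization would then run on the extracted text regardless of which route produced it.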
The dataset covers a wide array of languages, with English making up the largest share at over 1.1 trillion tokens. Spanish, German, French, Russian, and Japanese each contribute over 100 billion tokens. Smaller languages are also represented, with 978 having more than 1 million tokens.
To evaluate FinePDFs, Hugging Face trained 1.67B-parameter models on subsets of the dataset. Results showed that FinePDFs performs nearly on par with SmolLM-3 Web, a state-of-the-art HTML dataset. More importantly, combining the two provided a notable performance boost across benchmarks, reinforcing the idea that PDFs bring complementary knowledge.
This emphasis on evaluation drew immediate questions from the community. On LinkedIn, data scientist Arthur Wuhrmann asked:
What is the evaluation? What is the score?
Hynek Kydlíček, a machine learning engineer at Hugging Face, responded that the team tracks the probabilities of correct choice for various benchmarks, signaling a focus on probability-based reporting rather than a single score.
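To make the metric concrete, here is a hedged sketch of probability-of-correct-choice reporting. The function name and normalization are assumptions for illustration, not Hugging Face's implementation: given per-choice log-probabilities (e.g. summed token log-probabilities for each multiple-choice option), a softmax renormalizes them and the reported value is the probability mass on the gold answer.

```python
import math

def prob_correct_choice(choice_logprobs, correct_index):
    """Probability mass the model assigns to the correct answer.

    Illustrative metric: renormalize per-choice log-probabilities
    with a numerically stable softmax, then read off the probability
    of the gold choice. A higher value means the model is more
    confident in the correct option, even if it never tops 50%.
    """
    m = max(choice_logprobs)                              # for stability
    exps = [math.exp(lp - m) for lp in choice_logprobs]
    return exps[correct_index] / sum(exps)

# Four-way multiple choice; the model favors option 0 (the gold answer):
print(round(prob_correct_choice([-1.0, -3.0, -3.0, -3.0], 0), 3))  # 0.711
```

Unlike raw accuracy, this signal is continuous, which makes it less noisy for small models trained on dataset subsets.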
Researchers have noted the dataset’s potential for advancing long-context training, since PDF documents are often much longer than web pages. Some in the AI community see it as a milestone in data transparency, as Hugging Face not only released the dataset but also documented its processing pipeline, from OCR detection to deduplication.
FinePDFs is available under the Open Data Commons Attribution license, making it free to use for research and development. The dataset is hosted on Hugging Face Hub, with access through datasets, huggingface_hub, and the in-house Datatrove processing library.