Hugging Face has released FineTranslations, a large-scale multilingual dataset containing more than 1 trillion tokens of parallel text across English and 500+ languages. The dataset was created by translating non-English content from the FineWeb2 corpus into English using Gemma3 27B, with the full data generation pipeline designed to be reproducible and publicly documented.
The dataset is primarily intended to improve machine translation, particularly in the English→X direction, where performance remains weaker for many lower-resource languages. By starting from text originally written in non-English languages and translating it into English, FineTranslations provides large-scale parallel data suitable for fine-tuning existing translation models.
Beyond translation, Hugging Face reports that the resulting English corpus retains substantial cultural and contextual information from the source languages. In internal experiments, models trained on the translated English text achieved performance comparable to those trained on the original FineWeb dataset, suggesting that FineTranslations can also serve as a high-quality supplement for English-only model pretraining.
The dataset is sourced from FineWeb2, which aggregates multilingual web content from CommonCrawl snapshots collected between 2013 and 2024. To reduce skew toward highly repetitive or domain-specific material, such as religious texts and Wikipedia pages, only language subsets with a bible_wiki_ratio below 0.5 were included. For each language, up to 50 billion tokens were processed, with quality classifiers from FineWeb2-HQ applied where available, and random sampling used otherwise.
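The selection rules above amount to a simple filter-and-cap pass over per-language statistics. The sketch below illustrates the logic; the field names (`bible_wiki_ratio`, `tokens`) and language codes are assumptions for illustration, not the pipeline's actual schema.

```python
# Hypothetical sketch of the subset-selection rules described above.
# Field names and language codes are illustrative assumptions.

CAP = 50_000_000_000  # per-language processing cap: 50B tokens

def select_subsets(stats):
    """Keep language subsets with bible_wiki_ratio below 0.5,
    capping each at 50B tokens for processing."""
    selected = {}
    for lang, info in stats.items():
        if info["bible_wiki_ratio"] < 0.5:
            selected[lang] = min(info["tokens"], CAP)
    return selected

stats = {
    "swh_Latn": {"bible_wiki_ratio": 0.2, "tokens": 60_000_000_000},
    "xyz_Latn": {"bible_wiki_ratio": 0.7, "tokens": 1_000_000_000},
}
print(select_subsets(stats))  # swh_Latn capped at 50B; xyz_Latn excluded
```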
Translation was carried out at scale using the datatrove framework, which enabled robust checkpointing, asynchronous execution, and efficient GPU utilization on the Hugging Face cluster. Documents were split into chunks of up to 512 tokens, with a sliding-window strategy to preserve context across segments. Additional safeguards were introduced to mitigate common large-scale translation issues, including early classification of toxic or spam-like content, strict formatting constraints, and post-processing to ensure consistency of line breaks and structure.
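The chunking step described above can be sketched as a sliding window over a token sequence: each chunk holds up to 512 tokens and repeats the tail of the previous chunk so the model retains context across segment boundaries. This is a minimal sketch under assumed parameters (a 64-token overlap), not the pipeline's actual implementation.

```python
def chunk_with_overlap(tokens, max_len=512, overlap=64):
    """Split a token sequence into chunks of up to max_len tokens,
    carrying `overlap` tokens over from the previous chunk so that
    context is preserved across segment boundaries."""
    if len(tokens) <= max_len:
        return [tokens]
    chunks = []
    step = max_len - overlap  # advance by max_len minus the shared tail
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break  # final chunk reached the end of the document
    return chunks

doc = list(range(1000))  # stand-in for a tokenized document
chunks = chunk_with_overlap(doc)
print(len(chunks))  # → 3
```

Each chunk would then be translated independently, with the overlapping tokens giving the model enough surrounding text to translate segment boundaries consistently.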
Each dataset entry includes aligned original and translated text chunks, language and script identifiers, token counts, quality and educational scores, and references to the original CommonCrawl source. The dataset can be accessed through the Hugging Face datasets library, streamed for large-scale processing, or consumed directly using datatrove-based pipelines.
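A record shaped as described above might be handled as follows. The field names here are illustrative assumptions based on the description, not the dataset's actual schema.

```python
# Hypothetical shape of a FineTranslations record; field names are
# assumptions for illustration, not the dataset's actual schema.
def summarize_entry(entry):
    """Return a one-line summary of an aligned translation record."""
    return (f"{entry['language']} ({entry['script']}): "
            f"{entry['token_count']} tokens, "
            f"quality={entry['quality_score']:.2f}")

entry = {
    "text": "Habari ya dunia",            # original-language chunk (illustrative)
    "translation": "News of the world",   # aligned English translation
    "language": "swh",
    "script": "Latn",
    "token_count": 4,
    "quality_score": 0.87,
    "source_url": "https://example.org/article",  # CommonCrawl provenance
}
print(summarize_entry(entry))  # → swh (Latn): 4 tokens, quality=0.87
```

In practice, records like this can be iterated lazily via the datasets library's streaming mode (`load_dataset(..., streaming=True)`), which avoids downloading the full corpus for large-scale processing.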
Commenting on the release, Achref Karoui said:
Awesome! This release will bridge the gap and allow communities to better align popular models with their languages.
FineTranslations is available now on Hugging Face. The dataset is released under the Open Data Commons Attribution (ODC-By) v1.0 license, and its use is subject to CommonCrawl’s terms.
