Pico-Banana-400K is a curated dataset of 400,000 images developed by Apple researchers to make it easier to create text-guided image editing models. The images were generated using Google’s Nano-Banana to modify real photographs from the Open Images collection and were then filtered using Gemini-2.5-Pro based on their overall quality and prompt compliance.
According to the researchers, the dataset aims to close a gap in the availability of large-scale, high-quality, and fully shareable image editing datasets. Existing alternatives are either human-curated and therefore limited in size, or entirely synthetic, relying on proprietary models like GPT-4o.
“What distinguishes Pico-Banana-400K from previous synthetic datasets,” the researchers write, “is our systematic approach to quality and diversity. We employ a fine-grained image editing taxonomy to ensure comprehensive coverage of edit types while maintaining precise content preservation and instruction faithfulness through MLLM-based quality scoring and careful curation.”
As mentioned, Apple researchers started by selecting a number of real photographs from Open Images, spanning humans, objects, and text-containing scenes. They then created a set of editing prompts and used them to drive Nano-Banana to edit the photos accordingly. Finally, they used Gemini-2.5-Pro to analyze the results, either filtering out failed edits or retrying the prompt to improve them. The evaluation criteria used to determine success or failure are instruction compliance (weighted at 40%), editing realism (25%), preservation balance (20%), and technical quality (15%).
About 56K of the rejected edits were nevertheless retained as failure cases to support robustness and preference learning.
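The weighted rubric and the accept-or-retain decision lend themselves to a short sketch. The following is a minimal illustration assuming the judge model returns per-criterion scores on a 0–1 scale; the four weights come from the article, while the acceptance threshold and all names are hypothetical.

```python
# Minimal sketch of the weighted quality gate described above. The four
# criterion weights come from the article; the 0-1 scoring scale, the
# acceptance threshold, and all names are illustrative assumptions.

WEIGHTS = {
    "instruction_compliance": 0.40,
    "editing_realism": 0.25,
    "preservation_balance": 0.20,
    "technical_quality": 0.15,
}

ACCEPT_THRESHOLD = 0.70  # hypothetical cutoff, not stated in the article


def weighted_quality(scores: dict[str, float]) -> float:
    """Combine per-criterion scores (each in [0, 1]) into a single number."""
    return sum(weight * scores[name] for name, weight in WEIGHTS.items())


def triage(scores: dict[str, float]) -> str:
    """Keep a passing edit; route a failing one to the failure subset."""
    if weighted_quality(scores) >= ACCEPT_THRESHOLD:
        return "accept"
    return "retain_as_failure"  # still useful for preference learning
```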
The researchers devised 35 edit types organized into eight categories, including pixel and photometric adjustments (e.g., changing the overall color tone), object-level semantics (e.g., relocating an object or changing its color), scene composition (e.g., adding a new background), stylistic transformation (e.g., converting a photo to a sketch), and more.
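For illustration, the categories and examples named above can be laid out as a simple mapping. This is a partial, assumed structure covering only the four categories the article lists, not the paper's full taxonomy.

```python
# Partial sketch of the edit taxonomy as a plain mapping. Only the four
# categories and example edits named in the article appear here; the
# remaining four categories (35 edit types in total) are omitted.
EDIT_TAXONOMY = {
    "pixel_and_photometric": ["change overall color tone"],
    "object_level_semantics": ["relocate an object", "change an object's color"],
    "scene_composition": ["add new background"],
    "stylistic_transformation": ["convert photo to sketch"],
    # ...four further categories not listed in the article
}
```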
The prompts themselves were generated using Gemini-2.5-Flash with a specific system prompt instructing it to “write ONE concise, natural language instruction that a user might give to an image-editing model […] be aware of visible content (objects, colors, positions) and be closely related to the image content”. The resulting, usually lengthy prompts were then summarized into shorter, human-like prompts using Qwen2.5-7B-Instruct to produce more realistic results.
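The two-stage instruction pipeline can be sketched as follows. The model names match the article, but the client wrappers `call_gemini_flash` and `call_qwen` are hypothetical stand-ins for whatever inference stack the researchers actually used, and the summarization prompt wording is an assumption.

```python
# Sketch of the two-stage instruction pipeline: Gemini-2.5-Flash writes a
# detailed editing instruction, Qwen2.5-7B-Instruct compresses it into a
# short, human-like request. Both wrappers below are hypothetical stubs.

SYSTEM_PROMPT = (
    "Write ONE concise, natural language instruction that a user might give "
    "to an image-editing model [...] be aware of visible content (objects, "
    "colors, positions) and be closely related to the image content"
)

SUMMARIZE_PROMPT = (  # assumed wording, not quoted from the paper
    "Rewrite the following editing instruction as a short, casual request:\n"
)


def call_gemini_flash(system_prompt: str, image_path: str) -> str:
    raise NotImplementedError("plug in a Gemini-2.5-Flash client here")


def call_qwen(prompt: str) -> str:
    raise NotImplementedError("plug in a Qwen2.5-7B-Instruct client here")


def build_instruction_pair(image_path: str) -> tuple[str, str]:
    """Return a (long, short) instruction pair for one source image."""
    long_instruction = call_gemini_flash(SYSTEM_PROMPT, image_path)
    short_instruction = call_qwen(SUMMARIZE_PROMPT + long_instruction)
    return long_instruction, short_instruction
```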
In addition to the main dataset, which contains 257K single-turn examples, each pairing a source image with a text instruction and the resulting edit, Pico-Banana-400K includes three specialized subsets. The first, with 72K examples, contains multi-turn instructions for studying sequential editing, reasoning, and planning across consecutive modifications. The second, with 56K examples, comprises the failed edits mentioned earlier and is intended for alignment research and reward-model training. The third pairs long and short editing instructions to support the development of instruction rewriting and summarization capabilities.
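For quick reference, the composition described above can be summarized as a small mapping. The key names and layout are purely illustrative assumptions, not the dataset's actual on-disk structure, and the size of the long/short subset is not given in the article.

```python
# Illustrative summary of the dataset's composition; key names are
# assumptions, not the actual file layout. The long/short subset's
# size is not stated in the article, hence None.
PICO_BANANA_SUBSETS = {
    "single_turn":      {"examples": 257_000, "use": "core text-image-edit training data"},
    "multi_turn":       {"examples": 72_000,  "use": "sequential editing, reasoning, planning"},
    "preference":       {"examples": 56_000,  "use": "failed edits for alignment and reward models"},
    "long_short_pairs": {"examples": None,    "use": "instruction rewriting and summarization"},
}
```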
Pico-Banana-400K is available on Apple’s CDN through GitHub under the Creative Commons Attribution–NonCommercial–NoDerivatives (CC BY-NC-ND 4.0) license, whereas the Open Images originals follow the CC BY 2.0 license.
