Google quietly released the ML Kit GenAI application programming interfaces (APIs) last week, allowing Android developers to build apps that leverage the capabilities of Gemini Nano. As per a document added to its developer forum, the Mountain View-based tech giant is now letting developers access the image description feature of the artificial intelligence (AI) model as well. Earlier, the model was only available as experimental access, and developers could not publish apps made using the large language model (LLM).
Developers can now use Gemini Nano to build AI-powered Android apps
First spotted by Android Authority, the new support document was added by the tech giant to Gemini Nano’s Android developer page. The page now mentions a new API dubbed ML Kit GenAI that will allow developers to “harness the power of Gemini Nano to deliver out-of-box performance for common tasks through high-level interfaces.”
The page also highlights that the API is built on AICore, an Android system service, and that it enables on-device execution of Gemini Nano-like models, even if developers do not understand the underlying models. Apps built using the AI model will also run locally, powered by the device’s system-on-a-chip (SoC).
With the ML Kit GenAI APIs, developers will be able to access new features such as text summarisation, message proofreading, rewriting messages, as well as adding short descriptions to images. Notably, Google has also scheduled a session at I/O 2025 dubbed “Gemini Nano on Android: Building with on-device Gen AI.” The company will likely explain the capabilities of the model and how developers can integrate these features into the apps they’re building.
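The exact class and method names live in Google’s ML Kit GenAI documentation; as a loose illustration of the kind of high-level, on-device call the page describes, the Kotlin sketch below uses hypothetical TextSummarizer and StubSummarizer types as stand-ins rather than the real ML Kit API surface.

```kotlin
import kotlinx.coroutines.runBlocking

// Hypothetical stand-in for the kind of high-level interface the
// ML Kit GenAI APIs describe; this is NOT the real ML Kit class.
interface TextSummarizer {
    suspend fun summarize(input: String): String
}

// Stub implementation so the sketch is self-contained. In a real app,
// this role would be played by the on-device Gemini Nano model running
// through the AICore system service, not by string truncation.
class StubSummarizer : TextSummarizer {
    override suspend fun summarize(input: String): String =
        input.lineSequence().first().take(120)
}

fun main() = runBlocking {
    val summarizer: TextSummarizer = StubSummarizer()
    val summary = summarizer.summarize(
        "Long article text to be condensed entirely on the device."
    )
    println(summary)
}
```

The appeal of such a high-level interface is that inference happens locally on the device’s SoC, so app developers get common tasks like summarisation without shipping a model or sending user text to a server.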
Google first released Gemini Nano to developers in October 2024 as part of the AI Edge Software Development Kit (SDK). However, this was only available as experimental access, which meant developers could not publish apps built using it.
Additionally, the SDK only supported developing apps for the Google Pixel 9 series, while the new API allows building apps for all compatible Android devices. The SDK also only supported text-based features, and the image description feature was not available.