Google has revealed its ambitions for the near future of Android, and unsurprisingly they are largely focused on artificial intelligence. Under the banner of Gemini Intelligence, the group intends to deeply integrate its AI model to transform the user experience.
The initiative aims to “help you stay ahead of the curve by acting proactively throughout your day.” Devices will become more proactive, but Google assures that the user retains full control.
The rollout will begin this summer starting with the latest Samsung Galaxy and Google Pixel phones, before expanding to other devices later in the year.
Automating tasks to change how applications are used
Gemini Intelligence introduces multi-step automation capable of navigating between different applications to accomplish complex actions.
No more manual switching from one application to another: Gemini Intelligence will be able, for example, to find a course reading list in Gmail, then order the necessary books in a shopping app. Google says it has spent months refining multi-step automation on the Galaxy S26 and the Pixel 10.
This capability is amplified by the addition of visual context. It will be possible to take a photo of a shopping list and ask Gemini to fill a basket on a delivery service, or to photograph a travel brochure so it can find a similar excursion on Expedia.
The user remains in control, validating the final action and tracking progress via notifications.
New features to simplify navigation and input
The web browsing experience on mobile is about to evolve. From the end of June, Gemini will integrate with Chrome on Android to summarize pages, compare information, and handle tasks such as booking appointments through automated navigation.
Auto browse is coming soon to Chrome on @Android, letting you automate time-consuming digital chores. Prompt, approve a plan, then return when the task is complete. pic.twitter.com/I2ipo7nnQ5
— Chrome (@googlechrome) May 12, 2026
Form filling will also be streamlined. Working with Google’s autofill, and with prior consent, Gemini will be able to pull relevant information from connected applications to fill in fields on its own.
Voice typing takes a leap forward with Rambler, a new feature of Gemini Intelligence built into Gboard. The tool is designed to “adapt to the way people actually express themselves.” It strips out hesitations, repetitions, fillers like “uh” and “ah”, and verbal tics to produce a clear, concise message while preserving the user’s style.
Rambler even handles mixing languages within a single sentence. Google emphasizes that audio is used only for real-time transcription, with nothing stored or recorded.
A new stage for interface customization
With Gemini Intelligence, which arrives alongside an updated design language based on Material 3 Expressive, Google is taking a first step toward generative user interfaces. A practical application is Android widgets.
With Create My Widget, users will be able to create fully personalized widgets simply by describing what they want in natural language. For example, asking “Suggest three protein-rich recipes each week” generates a dedicated widget.
This approach makes it possible to build tools tailored to each user’s needs. Custom widgets will also be available on Wear OS smartwatches.
