Google’s story with glasses is not over. Quite the opposite. After years of silence, the company has put them on stage again, and it has done so with a project that revives a familiar name: Project Aura. It is not the first time we have heard it; the name has been around since the days of Google Glass, which ended up in a dark corner of technology history. But this time, the approach is very different.
During Google I/O 2025, where we also saw proposals such as AI Mode and Beam, the company showed live the state of development of its new mixed reality glasses. It is a proposal born in collaboration with Xreal that, according to those responsible for the project, aims to bring the Android experience to the XR universe, with context and responses in real time.
A live demo, without tricks or touch-ups. It all started when Shahram Izadi, head of the device area, posed a question to the audience, captured in a video published on YouTube: “Who’s up for seeing an early demo of the Android XR glasses?” The answer came from backstage. Nishtha Bhatia, part of the team, appeared on the scene remotely and began to show the glasses in real operation.
The first thing we saw was an interface superimposed in real time over the surroundings. Through the integrated camera, the glasses showed what was in front of her while Bhatia received messages, played music, looked up directions and interacted with Gemini, the conversational assistant, all through voice commands. Without taking out her phone. Without tapping anything.
In one of the most striking moments, the demo showed how she could ask which band was behind a painting she was looking at. Gemini responded, although with the occasional delay attributable to connection problems. She also asked for a song by that band to be played on YouTube Music, which happened without any manual intervention. Everything was captured in the image shared in real time.
Live translation and a small failure on stage. The final test consisted of a conversation between Izadi and Bhatia in different languages. She spoke in Hindi, he in Farsi. The glasses, through Gemini, offered simultaneous translation with voice interpretation. The system worked correctly for a few seconds, but those responsible decided to cut the demo short when they detected a failure.
Despite the stumble, the message was clear: Google wants to play again in the field of connected glasses, this time with a more mature foundation, supported by its service ecosystem, by Gemini and by collaborations with key players in the XR world. The difference, at least for now, lies in the approach: practical, real-time experiences, without embellishments or long-term promises.
Images | Google
In WorldOfSoftware | Google already has an agentic AI capable of programming for you: it’s called Jules and it aims to stand up to OpenAI