Gemini could be the big Google Maps upgrade we’ve been waiting for

News Room
Published 11 October 2025 (last updated 9:00 AM)

00:00 – Mishaal Rahman: Google Maps will soon let you ask Gemini for coffee shop recommendations.

00:04 – C. Scott Brown: And Android will eventually let you fully automate apps like the Rabbit R1.

00:09 – Mishaal Rahman: I’m Mishaal Rahman.

00:11 – C. Scott Brown: And I’m C. Scott Brown, and this is the Authority Insights Podcast where we break down the latest news and leaks surrounding the Android operating system.

00:19 – Mishaal Rahman: So, I don’t know about you, Scott, but I have quite a bit of trouble when it comes to scouring Google Maps for like a restaurant I should go try out with some friends. We just have so many dang good options here in Houston. It’s just such a melting pot of different ethnic cultures and different restaurant options. Most of the time I’ve been relying on word of mouth. Like I’ve been asking my friends, my family, extended family members, or just going to Reddit, like the Houston subreddit for recommendations. But maybe soon in the future, I’ll ask Gemini instead through Google Maps.

00:48 – C. Scott Brown: So, you might trust AI to make something like a food recommendation for you, but would you trust AI to actually act on your behalf and buy things for you? So, tech companies are really hoping that you will trust AI to perform those kinds of tasks on your behalf. But so far, all these AI agents have been either based in a browser or in some other situation like that. But Google is now working on an AI agent that can control your Android apps for you, right on your phone, but we’re not so sure that’s a good idea.

And another change that has us scratching our heads is Google’s plan to bring its new Nano Banana image creator, which everyone loves, to Google Lens and Circle to Search. So, I mean, Nano Banana’s great, but why Google Lens and Circle to Search? That’s kind of confusing. Those things are made for searching.

01:43 – Mishaal Rahman: Riding that gravy train a little bit too hard there, aren’t you, Google?

01:46 – C. Scott Brown: Yeah. Yeah.

01:48 – Mishaal Rahman: I’m not really sure how I feel about that particular use of the integration right there in Google Lens and Circle to Search. But I am really excited to see this new Ask Maps feature that Google’s working on for Google Maps. So, I mean, to be clear, Ask Maps itself is not actually new, because this is something that Google actually rolled out late last year, you know, the ability to ask Gemini about a certain place in Google Maps. But the way it works right now is that you have to actually highlight a place: you tap on a location, this card shows up, the usual detail card, and then you might see this “Ask Maps about this place” kind of at the very top in the overview tab. And you can kind of ask anything you want to know about a place. You even have some recommendations, like if you select a bar, you can ask, do they have a full bar? Is it quiet? What’s the dress code like? Et cetera. But this is only once you’ve selected a location, once you’ve actually found the place you’re already looking for and you’re curious to learn more information about it. But Google’s experimenting with a new version of Ask Maps. According to our APK insights team, or our APK teardown team, whatever you want to call it, our team that does the teardowns, AssembleDebug took a look through the Google Maps app and he discovered that Google is working on a new interface for Ask Maps. This interface is accessed through a chip right below the search bar. You can see it here in this screenshot, well, if it’s not obscured by the watermark we have. Underneath the search bar, there’s this Ask Maps chip. You tap that, and it pulls up this full screen, almost full screen interface that kind of resembles the main Gemini app, actually. You have this hello indicator, you have these chips at the bottom with, like, recommended things you can ask about, a search bar with a microphone icon. And basically, it’s just like a full Gemini interface, but within the Google Maps app. And I think this is a big deal compared to, you know, what we had before, which is just location dependent. Now, this is kind of like a generalized entry point to Gemini within Google Maps. You can ask it about pretty much anything that you want. So, I mean, I personally think this is going to be a game changer for how we search maps for locations, for restaurants, for coffee shops, et cetera. But what do you think, Scott?

04:12 – C. Scott Brown: Yeah, no, totally agree. Game changer. In my life, if I’m traveling alone, I would definitely use something like this. But when I’m traveling with my partner, she usually does all this stuff. She spends hours trawling through Google Maps and Instagram and Reddit and figuring things out, like if we’re traveling to a new place we’ve never been to before, she’ll save all these different things, and then she’s kind of like my own personal Gemini, because we get to the location and then I say, we should get, you know, burgers tonight or whatever, and she’ll pull up her maps and figure out where she’s saved the best location in the area, and then we’ll go there. But being able to just talk to, you know, what seems to be just Gemini, like a Gemini overlay over Google Maps, being able to do that would definitely be great in those rare situations where we’re in a place where she hasn’t already done that and we can just, you know, ask Maps and figure it out from there. I also see this being really advantageous if you’re with a group of people, and I don’t know about you, but that’s always really difficult for me, because you’re trying to accommodate all these different needs. You know, if you’re a group of, let’s say, five people, one’s a vegetarian, one’s a vegan, one’s gluten-free, one would prefer if we went somewhere where they could get something that was, like, not super rich, maybe low calorie, one person doesn’t drink, one person wants to have a beer. You know, it’s like figuring all that out, and just being able to say to Gemini, these are the things we want, please give me three possible choices within a, you know, 10 block radius of where I am right now. Game changer. Like, that would be so much easier than what we have to do now, which is, you know, people throwing back things in a group chat and links being sent and, you know, reading menus, and it’s just chaos. So no, this is definitely, you know, we spoke last week about implementations of AI that are genuinely useful and not just, we’re throwing AI at this thing because why not, we need AI because that’s what we do now. This is one of those situations where I’m like, no, this is actually a really good idea. We’re going to talk later in this episode about things that might not be such a great idea, but this one, this one I think is really good.

06:24 – Mishaal Rahman: Yeah, I mean, like, whenever I’m researching, like whenever I’m, like, booking a hotel for an event like Google I/O, for example, lately, since I’ve been there a couple of times now, I’ve kind of settled on a couple of locations that I return to. But whenever I’m looking for a new event in an area I haven’t been to, like, I use a lot of the filters that they give you. The hotel brand filter, because I prefer to stay with, like, the Hilton brand just because of, you know, membership and stuff like that. Price range. I also like actually having multiple tabs open because I want to make sure, oh, does this hotel have free parking? Does it not have free parking? Does it have this amenity? So, like, being able to just open this Ask Maps chip within Google Maps and say, okay, I’m looking for hotels from these brands in this price range with these amenities, would just be an absolute game changer. Like, right now, I could technically do that by opening the main Gemini website or the Gemini app. But the problem is it’s not really built into Maps the way, of course, Maps itself would be. Like, it would just give me a list, a text list of, oh, here are these locations. Then I would have to manually, oh, open them up in Google Maps, because, like, within the Gemini app, I can’t visualize how far that is from the venue that I’m actually going to, right? I would have to, like, separately open that in Google Maps to see, okay, this is this location, these are the transit options I would take, or, like, this is how long it would take for me to ride an Uber. But, like, having this built into Google Maps, and potentially, like, it creating a list and showing everything on a map, would just be a huge game changer for planning.

07:53 – C. Scott Brown: For sure, but the issue that we sort of got to keep in mind is the trustworthiness of this information. You know, like, how much of this is Gemini going to be delivering accurately? And so far, from what we’ve seen, it seems that there are problems. So, you know, it’s not like Gemini is going to be flawless. Sometimes you’re going to say, oh, I’m looking for a place that has a gluten-free menu. Gemini is going to check the Maps listing, find a menu from five years ago that had a completely gluten-free thing, and then you get to the restaurant and they say, oh, we haven’t had a gluten-free menu in years. And, you know, so it’s like, that’s where I think a lot of the issues are going to come through.

08:37 – Mishaal Rahman: Yeah, like if you’re asking Gemini questions about, oh, like, tell me about this city or what is the tallest mountain, like Shiv did here in these examples for our article, like it’s going to handle those queries fine. But as you mentioned, if you’re asking specific questions about things that require the restaurant information or the location information to be up to date, then you might not get accurate results because, of course, it requires people to actually go in and update that information. And there are a lot of places, like smaller locations where people don’t provide up-to-date information. You might have years old things. You might have a location that’s completely permanently closed, and Google Maps is not even aware that it’s been closed for a while because no one’s reported it as being closed. Or the pricing on a menu is like years out of date, and it doesn’t fit with your budget anymore. So, yeah, I am a little concerned about that.

09:25 – C. Scott Brown: To be fair to this feature, and Gemini, and AI search in general, like, those would be problems that you would face without Gemini. Like, you know, if you just go into the Google Maps listing, you look at the menu and you say, menu looks great, let’s go, and then you get there and they’re like, yeah, the menu is wildly different now. That’s something that you would face automatically, but yeah, there would be other situations where maybe a human would be able to ascertain things. They would be able to look at the menu and say, wait a minute, this menu is from 2015. Like, things are probably different now, whereas Gemini might not be able to make that distinction. So, I think it’s going to be a your-mileage-may-vary thing, but the concept, I think, is awesome.

10:08 – Mishaal Rahman: So, speaking of the concept, I’m kind of curious to hear your thoughts on how this will impact the way a lot of people go about hunting for hidden gems in their city. You know, a lot of people just drive around without Google Maps, or they just rely on word of mouth: they talk to their friends, family, co-workers, they go on forums like Reddit, like, recommend me some hidden gem Indian spots in the city, you know, or they ask, like, what are the best Mexican food places, or what’s this underrated Italian place, you know, and they get recommendations from real people. But now we have the ability to ask Gemini, which, of course, is trained on responses that were given by real people, so people who contributed information to Google Maps, like the local guides, Redditors, and so on. But do we think this is going to fully replace asking people for recommendations? Or do we think we’re going to see a resurgence, maybe, of people who are eschewing AI and just driving around and randomly stumbling upon something that they saw with their own eyes?

11:08 – C. Scott Brown: I think, universally, the concept of getting your information directly from a human is going to get more and more important as the years go by. Yeah, I mean, nothing’s going to change. You know, one of my favorite dishes is pork ragu. Like, you know, pasta, hot fresh pasta, hot meat sauce on top, tons of cheese, you know, love it. Could eat it all day. And yeah, so I could talk to a friend or a family member or something, and they could go and they could say, like, oh, we went to this restaurant and we had this dish and it was exactly what you want. Like, it was the perfect pork ragu, you got to go. That’s not something that you can really get from AI, you know; that personal connection isn’t going to be there unless I, like, literally told Gemini, like, I log into Gemini and I say, here is the exact description, here’s a tome on what I love about this particular dish, and then Gemini saved that and was able to deduce that through imagery and comments. But even then, I feel like it wouldn’t be trustworthy. That human element of somebody being like, “I’ve tasted this. I know what you like. This is the restaurant you need to go to.” That’s never going to change, and that’s going to be something that’s going to become more and more important as time goes on. As a guy with a YouTube channel, I’m very much anticipating that being an issue going forward as well, now that we’re seeing a lot of AI slop appear on YouTube. You know, you don’t really know, is the voiceover for this channel a real person, or did someone just ask ChatGPT to make something for them and then feed it out as audio that sounds human, and we just stole B-roll from, you know, Android Authority and from Marques and from all these people and tossed it into a video, and now we’re a tech creator? Like, that kind of thing is going to become even more of a problem. So yeah, that human element is something that is always going to be needed. So no, this isn’t going to be a replacement for that. I think this is going to be a solution for speeding up a process that could now take, you know, an hour plus. You know, I’ve definitely had group chats with people trying to figure out where we’re going to stay or where we’re going to go eat, and it takes a long time if you have a big group of people. So this is just going to pare that down and just make it faster, and that’s fine. That, once again, is something that AI should be doing, something that’s going to make life easier. But yeah, replacing that human element, restaurant reviewers, the Michelin Star system, those things are fine. Like, they’re not going anywhere.

13:40 – Mishaal Rahman: Yeah. But I am curious to think about what might be some of the potential unintended consequences of this change? Like, we kind of see right now, because of the rise of Google Maps, there are a lot of restaurants that have kind of optimized themselves using SEO tactics, you know, kind of the things you only expect websites like Android Authority and, you know, our competitors to do. Real-life restaurants are using SEO to make themselves appear higher in Google Maps search results. Like, you have restaurants that are just named Chinese food or, like, Italian food near me. You know, I think there was, like, a viral pizza shop in New York City that was called, like, best pizza place near you. And the only reason to name yourself that is so that you show up when people search for pizza places in New York, right? But with the advent of generative AI, you know, and the fact that it’s able to pull together so much information, do we think that this is going to make that kind of situation worse? Are we going to see a lot more kind of fake information, or, like, AI-targeted information, injected into Google Maps listings?

14:41 – C. Scott Brown: Yeah, I mean it probably will make it worse, but I mean, I don’t know, like, yeah, if I searched for best pizza near me, and a restaurant came up that was literally called best pizza near me, I wouldn’t go. Like I’d be like, that’s ridiculous. That’s clearly a sham. That’s clearly somebody who’s just trying to game the system. I’m not interested.

15:03 – Mishaal Rahman: I mean, hey, if it’s 3:00 a.m. and you’re really craving pizza, you know, like you’re too
15:08 – C. Scott Brown: Not in New York City. Oh my god.

15:10 – Mishaal Rahman: You’re too wasted to think about it too hard.

15:12 – C. Scott Brown: You can’t throw a Starbucks cup in New York City without landing on a pizza place, so I don’t think that would be a problem. But yeah, the common sense element of knowing, like, oh, wait, this restaurant has a 4.4 out of 5 star rating. I mean, that means it’s got to be good. And then you notice, wait a minute, there’s only three reviews. And then, you know, this other restaurant has 4.2, but a thousand reviews. So you can figure things out by deducing, you know, using your brain. And so I think that’s not going to change. I think, yeah, there may be different ways of trying to game the AI system that restaurants sort of implement, but, you know, if you see a bunch of photos and it’s all, like, influencer-esque women standing in front of angel wings at the restaurant, you know that the food is going to be terrible, because it’s only trying to appeal to people trying to get their Instagram on, which is fine. You want to go to a restaurant and do your Instagramming. That’s great. I’m glad you have that. But I’m more interested in the place that has the actual good food, and they’re not going to have angel wings on their restaurant. So it’s like, you’re going to have to use your brain still a little bit, and Gemini’s just going to hopefully make it easier to do, you know, the deduction that you need to do.
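
One simple way to make the deduction Scott describes concrete, weighing a star rating against how many reviews back it up, is a shrunk (Bayesian-style) average. To be clear, this is only an editorial illustration: nothing in the episode says Google Maps or Gemini ranks places this way, and the prior values below are arbitrary assumptions.

```kotlin
// Illustrative only: a Bayesian-style "shrunk" rating that discounts high averages
// backed by very few reviews. Neither Google Maps nor Gemini is confirmed to use
// this formula; it just formalizes "weigh the rating against the review count."
fun adjustedRating(
    average: Double,          // the listed star rating, e.g. 4.4
    reviewCount: Int,         // how many reviews back it up
    priorMean: Double = 3.5,  // assumed "typical" restaurant rating (made up)
    priorWeight: Int = 50     // how many phantom reviews the prior counts as (made up)
): Double =
    (average * reviewCount + priorMean * priorWeight) / (reviewCount + priorWeight)

fun main() {
    println(adjustedRating(4.4, 3))     // ~3.55: three glowing reviews barely move the needle
    println(adjustedRating(4.2, 1000))  // ~4.17: a thousand reviews dominate the prior
}
```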

16:25 – Mishaal Rahman: I mean, Scott, I think you’re giving a lot of people too much credit. Like, if people are unable to discern AI slop, they might be the kind of people who will fall for it and go actually have real-life slop from these restaurants, because they still exist, you know; like, they’re clearly doing well enough to continue to function.

16:47 – C. Scott Brown: I’m an optimist, Mishaal, that’s all I can say, you know, I have great faith in humanity.

16:53 – Mishaal Rahman: You got a lot of faith in people.

16:56 – Mishaal Rahman: All right, so moving on to our next story. So this is a really interesting piece, because behind the scenes, for, like, over a year and a half, Google’s been working on this project called Project Astra, which is, like, their next generation universal AI assistant, basically an even more intelligent form of Gemini that can not only, you know, do stuff on your phone, like all the usual stuff that Google Assistant and Gemini can currently do, but potentially interact with the real world, and, like, you know, remember things that you’ve seen and actually do things in the real world. And as part of this new Project Astra, this research project that Google’s been working on, at Google I/O they showed off a version of Project Astra that can control your Android phone. It can control apps on your Android phone. So they have this demo running on a Pixel device where they actually had this guy who wanted help repairing his bike. And what he did was he set his phone aside on the table, and then, as he was working on his bike, he would ask it questions. Like, for example, he would ask it to pull up information from a manual. And then Project Astra would open that manual. It would find the manual online, and then it would scroll to the specific page, finding the information that he was asking for. And then, based on that information, the person would ask, okay, can you find me some related YouTube videos that explain how to fix this part? And then it would open YouTube, it would search YouTube, and scroll and find the relevant video for the person to watch so he could fix his bike. So I think this was a really big, interesting project that Google showed off, because, you know, right now, assistants, they’re hands-free in the sense that you can ask one question, but if it involves anything like looking up information and actually scrolling through documents, filling out forms, switching apps, things like that, it can’t do any of that right now. Like, it’s relegated to anything it can pull from the web. So, I think, you know, it would be a big deal if there was some way to actually automate the apps on your Android phone.

Lo and behold, it seems Google is finally working on something that allows for that, because I found evidence that Google is working on a new feature called Computer Control, and what this feature allows for is, as I just mentioned, agentic AI control of your Android apps. So why is Google working on this feature? Like, what is the importance of this? Well, the problem with Google’s Project Astra demo was the way it worked in the background: you can see here in this demo, this little indicator in the top left corner suggests that this is a screen recording, that Project Astra, the demo that Google made, is literally recording your screen, and then it’s scrolling through and injecting inputs, probably using the Accessibility API. Now, the problem with that is, since it’s recording your screen, it’s actually, like, actively using your screen, which means you can’t do anything with your phone at the same time as the AI agent is doing whatever task you asked it to do. So if you ask it to find a manual or search for a YouTube video, you can’t touch your phone at all while it’s doing that, because otherwise you would interrupt its flow. It wouldn’t be able to find where it needs to go next. That’s obviously a problem, because in Google’s demo, in like a footnote, they mentioned that it’s running at 2x speed. The video was sped up twice as fast, which means it’s running significantly slower than what the video originally implies. And you only notice that if you actually pay attention to the footnote. So, what Google is working on, their solution to this problem, is Computer Control. And the way this feature works is that it will create a virtual display in the background, and this virtual display will have the app in question that you want automated launched onto it. And then the app that’s using the Computer Control API will be able to send tap, swipe, and scroll inputs to that application that’s running in the virtual display in the background. And all of this is happening while you’re still able to use your phone. So you can do whatever you want on the main display, and you have a virtual display in the background where this agentic AI, or Gemini, is able to control that application. And yeah, I mean, there are a lot of interesting aspects of this, like how Google built some kind of privileged mechanism to control it. So only pre-installed applications with a highly privileged permission can access it, and apps that use this framework can only control the specific app they were granted permission to by the user, so they can’t just open other applications whenever they want to and start seeing and controlling them. Another cool aspect of this framework is that they developed a mechanism to allow for the trusted virtual display to be mirrored onto a separate, more interactive one. And I think that will allow you to stream that interactive virtual display to a computer, so that you can basically remote into what the AI agent is doing. So maybe, potentially, if it makes a mistake, you can then step in and change things yourself, and then it can continue doing its work in the background. So, I mean, this is really interesting to me, just because AI gadgets like the Rabbit R1, when they launched, they were almost universally derided for being useless. Like, Scott, I think you mentioned earlier, right before we started this call, that you actually briefly used the Rabbit R1. What did you think of Google’s take on this feature?
Do you think this is a good idea? Like do you think Rabbit R1 maybe was just ahead of its time and that the problem with that device was just the fact that you need separate hardware and not the fact that it was just a bad idea itself?
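
To help picture the architecture described above, here is a minimal sketch of launching an app onto a background virtual display using existing public Android APIs (DisplayManager and ActivityOptions). The Computer Control framework itself is not public, so the privileged permission model and the input-injection stub below are assumptions based on the teardown, not real API surface; in practice, driving another app’s activity on a display you create requires system-level privileges, which lines up with the feature being limited to pre-installed apps.

```kotlin
// Sketch only: existing public APIs used to illustrate the described design.
import android.app.ActivityOptions
import android.content.Context
import android.content.Intent
import android.graphics.PixelFormat
import android.hardware.display.DisplayManager
import android.hardware.display.VirtualDisplay
import android.media.ImageReader

fun launchAppOnBackgroundDisplay(context: Context, targetAppIntent: Intent): VirtualDisplay {
    // Frames from the background app land in this ImageReader instead of the screen,
    // so the user keeps full use of the main display while the agent works.
    val reader = ImageReader.newInstance(1080, 2400, PixelFormat.RGBA_8888, 2)

    val displayManager = context.getSystemService(DisplayManager::class.java)
    val virtualDisplay = displayManager.createVirtualDisplay(
        "agent-workspace",   // display name
        1080, 2400, 420,     // width, height, densityDpi
        reader.surface,
        DisplayManager.VIRTUAL_DISPLAY_FLAG_OWN_CONTENT_ONLY
    )

    // Launch the app to be automated onto the virtual display rather than the default one.
    // For an arbitrary third-party app this normally needs privileged/system permissions.
    val options = ActivityOptions.makeBasic()
    options.setLaunchDisplayId(virtualDisplay.display.displayId)
    targetAppIntent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
    context.startActivity(targetAppIntent, options.toBundle())

    return virtualDisplay
}

// Stand-in for the input-injection half of the feature: there is no public API today
// for sending taps, swipes, or scrolls into another app's window.
fun sendTapToDisplay(displayId: Int, x: Float, y: Float) {
    TODO("Would require the privileged Computer Control mechanism described in the teardown")
}
```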

22:58 – C. Scott Brown: Well, there are multiple problems with the Rabbit R1 as anyone can tell you in the tech world. But the core idea of Rabbit R1 was sound. It didn’t need extra physical hardware. There’s no reason to have an extra item with you to do these things. It should all happen on your phone, which is one of the big reasons why everyone said immediately when the Rabbit R1 was announced, why isn’t this just an app? Why do we need separate orange hardware for this? And so that was problem number one. But problem number two was that it didn’t have access to your life. You know, it only had access to whatever Rabbit was able to get access to. And, you know, with what Google is trying to do here with what you’ve discovered, sounds more like what we actually need. You know, we actually need an agentic AI that is already able to act on our behalf using the thing that is most personal to us, which is the smartphone. So, I think that what Google is doing here is not only a step ahead of the Rabbit R1, but also just fundamentally more in touch with what it needs to be to be successful. Now, obviously, I haven’t used this yet. I have no idea how well this is going to work. You know, there are a lot of security and safety concerns with this, which I’m sure we’re going to talk about in a couple of minutes. So yeah, I’m cautiously optimistic about this. I do like the concept of being able to talk to Gemini and have it do something, you know, mundane on my behalf. You know, just as a quick example, my Pixel 10 Pro is a 128GB model and, you know, if I take a bunch of videos and photos, that storage fills up quickly. So one thing I have to do every once in a while is go through and make sure that everything that’s been uploaded to my Google Photos account, stays on the Google Photos account, but everything that’s been on the actual phone hardware itself gets deleted because I don’t need it on my phone anymore. It’s backed up to the cloud. So I have to go through and delete those things manually using a couple of button taps and swipes and things like that. Mundane, stupid, don’t want to do it anymore. I just have the agentic AI go through and do it for me. Like that seems really useful. Simple things, making a restaurant reservation, just being able to say, please make a restaurant reservation, please call this restaurant for me and make this thing, do whatever. Those all seem like really great ideas, and you don’t need an orange block to do that. You can just do that with your phone. I like this concept. But yeah, the issue that concerns me, the thing that makes me very, you know, worried is that, you know, I wouldn’t trust, you know, a stranger to act on my behalf. I’m not going to hand a stranger my credit card information and say, go buy me these concert tickets. I don’t know if I trust the AI to do that either. So like that’s where I’m sort of getting nervous. Simple things, quick little actions that are mundane and take, you know, 30 seconds of my day that I don’t want to do anymore. Totally. Please, take this away from me AI, but Google obviously doesn’t want to do that. That’s not sexy. Google wants the sexiness of, oh, I want to stay at this hotel. Book this hotel room for me. Give it your credit card information, tell it your personal information, tell it all this stuff. Like that, I’m just like, I get nervous.

26:27 – Mishaal Rahman: I mean, I’m in the same boat. Like, that’s the whole reason I haven’t tried, I haven’t done anything with those agentic AI browsers like Comet. You know, like, I don’t trust generative AI with actually, you know, completing purchases or booking hotels or anything of that sort for me. Like, I want to make sure I have the final say before the details, the payment details, go through, because I do not want to give it that level of control over my, you know, purchasing power. But, I mean, to be fair, there are a lot of users who might really benefit from this, especially users who have accessibility issues using a computer, or elderly folks, you know, who struggle to navigate websites or figure out what button to press or, you know, how to attach documents and images and stuff like that. Like, this could be really beneficial and really help them out, assuming it gets things correct, of course, which is the big if, because we’ve seen how things can go wrong when it screws up. And if you’re doing something like filling out a visa application before you’re traveling to, you know, another country, a single screw-up could mean your application is rejected, and you don’t want that happening. You don’t want AI to get blamed for that mishap, or at least Google doesn’t want its AI to be blamed for that mishap. So yeah, I like the idea that this opens the door for kind of generalized, anything-you-can-imagine AI control, because it can literally read your screen, it can literally simulate taps and inputs. But I much prefer the idea of having it structured, and having it be kind of a back and forth between an application and the AI, where the application tells the AI, here’s what I can do, here’s what I’m exposing to you, and then the AI can actually just execute those functions, which is kind of like the Intents that we have, or like app shortcuts on iOS, you know. And it’s actually exactly what Google is already working on simultaneously, side by side with this Computer Control feature, because with the release of Android 16, I actually spotted that Google is working on an API called App Functions, and this API is basically exactly what I said: it opens the door for Gemini to perform tasks in third-party apps, tasks that were specifically exposed to Gemini by the app developer. So, for example, if a restaurant application wants to create a function to allow AI chatbots to order food on the user’s behalf, it can create a specific function called “order food” and then define the parameters that the AI chatbot could go through before it can order food. I would be a lot more comfortable with that approach. What about you, Scott?
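
To make the “order food” example concrete, here is a rough sketch of the shape such an exposed function could take. It is illustrative only: the real App Functions API arriving with Android 16 (the androidx.appfunctions libraries) defines its own annotations, schemas, and registration mechanism that are not reproduced here, and the annotation and types below are invented for this example.

```kotlin
// Hypothetical stand-in annotation; the real API has its own annotations, code
// generation, and a registration step that are omitted from this sketch.
annotation class ExposedToAssistant

data class OrderResult(val confirmationId: String, val estimatedMinutes: Int)

class RestaurantFunctions {

    // The only thing the assistant may call: a narrow, typed entry point whose
    // parameters the app developer chose to expose.
    @ExposedToAssistant
    fun orderFood(dishName: String, quantity: Int, deliveryAddress: String): OrderResult {
        require(quantity in 1..10) { "Unsupported quantity" }   // the app keeps its own validation
        return placeOrder(dishName, quantity, deliveryAddress)  // app-internal logic, hypothetical
    }

    private fun placeOrder(dish: String, qty: Int, address: String): OrderResult =
        OrderResult(confirmationId = "ORDER-123", estimatedMinutes = 35)
}
```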

29:02 – C. Scott Brown: Yeah, I mean, that makes sense to me, but now we’re talking about fundamentally changing the entire app ecosystem, the entire concept of using an application. And I know that lots of organizations, I think including Google, are saying, like, the future is no apps. The future is that you just have an AI and the AI does everything. But it’s going to be, I don’t know, a decade before... I mean, maybe not a decade, but it’s going to be many long years before we can even get close to that kind of difference in how people do things today versus how Google and other companies are envisioning we’ll do them in the future. In the meantime, it’s going to be difficult to get people to sort of accept that. I think a lot of people would be nervous if you told them, like, oh, this AI is going to, you know, use your credit card to buy something, or, yeah, like you said, get you a visa for, you know, travel. Like, there are so many things that a lot of people would just be like, nope, don’t trust it with, you know, medical information, like making a doctor’s appointment. Like, you know, that kind of thing is just very sensitive, and it’s something that, you know, has to be done slowly and methodically. And I know this doesn’t exactly have to do with what we’re talking about, but the Google Home team made it clear to me in a conversation I had with them semi-recently that the reason the Home team moves slowly compared to the rest of the Google AI team is because Google knows that it can’t mess up in the home. If it messes up in the home, people don’t trust it, and then they don’t want Google in the home anymore. So, the Google Home team has to move relatively slowly compared to, maybe, the Chrome team, for example. I think that this agentic AI situation is another place where Google needs to move slowly and methodically and deliberately. Is Google going to do that, though? Or is it going to chase and go as fast as it can, because it has to beat OpenAI, it has to beat all these other browsers and all these other things? I don’t know. So I really hope that Google understands the gravity of the situation. If it messes up, and it has some sort of situation where, you know, you find out that a bunch of, you know, senior citizens in a nursing home somewhere all had their life savings taken away by AI because it did the stupid thing with an agentic system, that’s terrible, and that’s something that’s going to ruin it for the long term, because people are going to forever remember that situation and they’re never going to want to adopt this. So Google’s got to move slowly and deliberately here, and I just don’t know if it’s going to do that, but, you know, I’ll give it the benefit of the doubt. I’ll wait patiently and see what happens. But yeah, Google’s move-fast-break-things kind of situation is a little nerve-wracking when it comes to this kind of thing.

31:44 – Mishaal Rahman: I’m a little skeptical, personally, of, you know, all the AI companies’ claims that the future is going to be AI chatbots interacting with apps and we’re not going to be using apps at all. Like, I just don’t see that happening, because I don’t see app companies allowing that to happen. Like, I don’t know if you saw OpenAI’s recent announcement, they introduced their apps feature, and they have partners like Spotify. You can ask it to play certain songs from Spotify. And I think Spotify’s probably okay with that, because, you know, you still have to subscribe to Spotify if you want to, like, play certain songs or skip ads, right? You’re still paying Spotify in some way. But, like, a lot of applications just would not be okay with that. How would they make money if they were fully integrated into AI chatbots? Where would they get revenue from? How would they get the data that these AI chatbots are getting? Like, I think most companies want apps. They want you to use their apps because they want the control, they want the data, they want to directly funnel users into their subscriptions, and kind of having everything integrated into a singular AI chatbot, I just don’t see that fulfilling their needs.

32:49 – C. Scott Brown: Yeah, this is a situation where Google, OpenAI, these companies that are massive, massive data companies are basically trying to shoehorn their way into these much more traditional businesses like Spotify. Spotify, you know, is a tech company and very advanced and big, whatever, but it’s fundamentally a subscription service. Like that’s all it is. You pay money to them, they give you music. Like it’s a very simple transaction. And yeah, the data from their applications is invaluable too. There are tons of applications out there that are not subscription services and they make money specifically because you’re using the product. You know, Instagram for example, you know, like you’re not paying to use Instagram, you’re just using it for free. But the reason that you know that you’re able to use it for free is because you are the product. Your data is valuable to Meta and they use that to make money. So if you’re using an agentic AI to post something to Instagram, that’s time that you’re not spending in the Instagram app. That’s time that Meta does not make money off of you. So yeah, it’s going to take a lot for all these user habits and for all these business structures to change. You know, granted, things changed pretty quickly when the smartphone first came around. You know, 2007 when the iPhone came out, it was only a couple of years before we were neck deep in the app ecosystem that we’re still in today. So yeah, it could happen quickly, but at the same time, like it’s not going to happen because Google says, we did this thing. Like that’s just not going to be enough because there’s going to be, you know, thousands and thousands of mega corporations that are going to be negatively impacted by that. So yeah, this is a huge can of worms that Google is opening, or it’s not just Google, like we’re talking about Google because of this particular feature, but this could apply to any company that’s trying to do this. Even Rabbit. It’s going to take a lot more than just like we did this thing. Isn’t this cool? Like, you got to change like, you know, decades of culture and the way people just are used to doing things. So yeah, I’m skeptical. I’m excited because I do like the concept, but I’m also skeptical that it’s actually going to do anything.

35:00 – Mishaal Rahman: Yeah. I think we’re just still firmly in the period of time where AI companies are just throwing things at the wall and seeing what sticks. So, like, they’re experimenting. The next big thing they all think is going to happen is agentic AI. So Google, Microsoft, you know, Amazon, OpenAI, they don’t want to be behind in this race. So they’re all rushing to, you know, release their own versions of Computer Control features to make sure that, you know, if this is indeed the next big thing, and if indeed the future is apps that you only interact with through AI chatbots, that they’re ready for it, right? They just want to be the first ones there. And, you know, that’s why we’re kind of seeing companies throwing things out. They’re not even really sure if this is the right place to include this feature, or if this is the right way the feature should be implemented. They’ve just got to have it out there, because they want people to use their version of the feature and not their competitors’. And that’s kind of why I think we’re seeing Google potentially integrate Nano Banana into Google Lens and Google Circle to Search. You know, as you mentioned at the top of the show, Google is experimenting with putting Nano Banana in more and more places, and Google Lens and Circle to Search don’t really quite make sense, because they’re both tools for searching, right? Like, you don’t make videos or images through Google Lens or Circle to Search; it’s literally in the name, Circle to Search. You’re circling something to search for it, right? Like, what part of that implies image creation? But apparently Google thinks, okay, we’ve got this hugely popular surface, Circle to Search, as well as Google Lens. Nano Banana is going absolutely bananas in terms of popularity. I see ads for it everywhere. Like, the newsletter I read every morning, Morning Brew, and also Techmeme, they have ads every single day for Nano Banana, and it’s clearly one of Google’s most successful products. So they’re like, okay, we’re going to bring this to as many... we want as many people as possible using and switching to Gemini. So we’re going to bring Nano Banana creation capabilities to Google Lens as well as Circle to Search. And I think that’s probably why they’re doing this. That’s why they’re basically just dumping this capability into an interface that wasn’t really built for it; but because Google Lens is incredibly popular and Circle to Search is incredibly popular, this is another avenue for them to promote Gemini.

37:17 – C. Scott Brown: Yeah, I mean, this is definitely a classic case of, we need to have AI in everything, and just shoehorn it into whatever it needs to be shoehorned into. You know, you and I have previously discussed how far behind Apple is when it comes to these kinds of AI features. And, you know, Apple has a huge problem on its hands. Like, that’s a huge problem. Siri is terrible. Apple’s lack of innovation in the AI space is holding it back. There are a lot of problems with that. But at the same time, the fact that Apple is not messing itself up by throwing all these things at the wall and hoping something sticks is laudable as well. You know, their strategy is wildly different from what Google is doing. Apple wouldn’t just, like, develop this and be like, we’re going to put this new thing that Siri does into literally everything we have. Like, Apple wouldn’t do that, because of what you’re saying, because it doesn’t make sense. Like, why is Nano Banana in Circle to Search? This makes zero sense at all. But yeah, Google just wants to show it’s the leader. When it comes to AI, we are the leader. We’re going to put AI in our AI, and that’s what we’re going to do. Please continue to invest in us, please. That’s what Google is trying to do. And yeah, so it’s weird. But there are things, you know... I can think of situations in which injecting AI, or AI generation I should say, into a place where it maybe wouldn’t seem to make sense could make sense. For example, there could be a situation where maybe your child, you know, has your phone and is looking at a piece of candy, and they use Circle to Search and they scan that little piece of candy, and the Google thing says, you know, this is a Mike and Ike. Mike and Ike is a candy that tastes, like, fruity, you know, whatever. And then the child could ask something like, how are Mike and Ikes made? And then, boom, AI comes together and creates a little short clip for that child that says, like, this is how a Mike and Ike is made. I can imagine that, and I can imagine that being fun. I can imagine that being something that would be popular and cool. So that’s a situation where injecting AI generation into a search function would make a degree of sense. But that’s not where we are; you know, that’s, like, steps and steps and steps ahead of where we are right now. So it’s like, maybe Google is laying the framework for things that it wants to do. I can respect that, but yeah, like, I don’t know, it will get very tiring to just have AI injected into everything, even when it doesn’t make sense for it to be there.

40:07 – Mishaal Rahman: I mean, maybe it’s just another avenue for them. Like, maybe the thought process behind putting Nano Banana in these two services is just that they are hoping this will be another way for it to go viral. Like, for example, people use Google Lens and Circle to Search a lot. Maybe they are sent a meme and they want to, like, search up, find more information about that meme or that image or something, and then they have this create option just sitting right there. You know, they’ve never used Nano Banana before, because you have to actively go to the Gemini app to use it, or in Google Photos you use the Ask Photos creator, right? You have to actively go and use it. But if you’re just searching for images, looking up information on something, and you just have that create option there, maybe you might think, okay, why don’t I just click that button and turn the sky red or something, or just add a hat to this dog, or something silly like that, right? And then you have this image creation, and you might be surprised, genuinely surprised, by how good the image creation is, because I know I’ve been really surprised by how good Nano Banana is. I’ve been using it to generate, like, the hero image in the thumbnails for this very podcast, and it’s just crazy how good it is. So I think when people give it a shot and they’re like, wow, this is shockingly good, then they might share it on social media, it might go viral. That’s more people spreading the word about Nano Banana, and thus more people signing up and using Gemini. I think that might be the goal behind Google sticking Nano Banana in these two surfaces.

41:26 – C. Scott Brown: I wonder if Google is disappointed in the name that they chose. Nano Banana.

41:33 – Mishaal Rahman: It doesn’t tell you anything about it, right? Nano Banana, like, that’s clearly a code name. Why did they stick with that?

I think the reason for that, actually, is because in the lead-up to the announcement, a lot of influencers were vague-posting about, oh, this new image creation thing is amazing, because they were all using the code name for it. Google didn’t have an official name for it. I think the official name actually is Gemini 2.5 Flash Image editing, or image creation. Like, it doesn’t have an actual name besides Nano Banana.

42:02 – C. Scott Brown: That’s worse than Nano Banana but it’s still like, I don’t know, Nano Banana like

42:08 – Mishaal Rahman: I mean, I think you’re expecting too much. Like, Google having good naming practices? This is the same company that couldn’t decide on a name for Google Wallet. Actually, like, we went full circle. We had, like, what? We had Google Wallet, Android Pay, GPay, Google Pay, Google Wallet... like, yeah, we’re expecting a little too much from Google right there.

42:29 – C. Scott Brown: Google and Qualcomm, they need some help when it comes to naming their products. But yeah, no, I definitely think that, you know, we’re talking about AI throughout this whole podcast, and we’re talking about the ramifications that it could potentially have, not only for these different products, but just, like, our day-to-day lives. You know, I think that Google and all these companies need to think about where they’re going. Like, right now I feel like there’s, like, a cartoon that I can imagine where the train is coming down the tracks, and the cartoon characters are frantically trying to put the track down so that the train doesn’t roll off the tracks. So they’re just, you know, building the track as the train is actually moving on it, and that’s where I feel like we’re at with AI right now. It’s like, the train is barreling ahead and we’re just trying to lay down track, and it’s like no one actually has the time or the forethought to think, where the hell is this train actually going? Like, where are we laying the track to? And I think that’s what Google needs to sort of take a step back and figure out. And to be totally fair to Google, there might be a grand overarching plan to this that they’re just not telling us, because they don’t want, you know, everyone else, OpenAI and all these other companies, to know what they’re doing. But Google has a long history of not planning ahead. You know, just look at all the products that they’ve launched that they’ve eventually moved to the Google graveyard. Just think about the fact that there is a Google graveyard. Like, that alone is a sign that Google doesn’t usually do very well when it comes to long-term thinking. So, I don’t know, like, where are we going, and when are we going to get there, and what’s it going to be like when we get there? Those are the big questions I always have in my mind whenever we talk about these kinds of new innovations. So, yeah, we’re just going to have to wait and see.

44:24 – Mishaal Rahman: I mean, I think, personally, as long as you ignore the hype-building by people like Sam Altman and you just look at the actual things you can do with the tools that are on offer today, it’s quite incredible. Like, there are people who are kind of naysayers about the advancements that AI has brought, and then you look at the things you can do with Nano Banana. Like, I do not know how to use Photoshop at all. I am terrible at image creation and image editing. But with Nano Banana, I can create thumbnails that I would never have been able to do years ago. Or with Gemini, how it helps me proofread my own work and helps me understand code and things like that. Like, it’s amazing what you can do with it. As long as you look past the hype and actually just play around with the tools and see what works best for your own particular workflow... like, maybe it might not be, what are they calling it, AGI, artificial general intelligence, right? They’re all hyping that up, or that we’re going to have robots that can completely think on their own. Like, I just ignore the hype, the hype people on Twitter and stuff like that. But look at the things you can actually do with the tools today, and it’s incredible how far we’ve come.

45:34 – C. Scott Brown: No, I agree. There are a lot of, you know, AI-based tools that have made a genuine difference in my life, Gemini being a major one. I use Gemini all the time for brainstorming video ideas, like you said, like proofreading and fact-checking and helping to understand code or complicated concepts. Gemini Live, you know, when I’m traveling, I talk to Gemini Live, you know, basically like a travel guide. Like, I show Gemini Live a monument or something, I say, what is this? And Gemini tells me, and it’s like, great, I didn’t have to spend time reading Wikipedia or ask some random person. I can just talk to Gemini. Like, there are all sorts of things that AI is being helpful with today. And let’s not forget that AI has been in our phones for a decade, you know; like, most of the cool features that your Pixel can do are AI-based, and they always have been AI-based. It’s just that they haven’t been, you know, promoted as such, because AI wasn’t a hot, sexy topic. So, you know,

46:33 – Mishaal Rahman: They used to call everything machine learning back then, back in the good old days.

46:36 – C. Scott Brown: Yeah, because they probably felt that AI, the term, artificial intelligence, they probably felt that it didn’t market well. They probably felt that when you use the term artificial intelligence, people think Terminator, you know, they think of negative things. So they were like, okay, we’re not going to use that term, we’re going to use machine learning instead. And at some point in the past like two years, machine learning is out the door. No one says machine learning anymore. And we’re back to AI and artificial intelligence. So, it’s like it’s all been there, it’s all been happening. It’s just that now we’re seeing advancements a little bit faster and we’re seeing Wall Street only care about AI and so now every company is just AI, AI, AI, AI. Eventually, the bubble’s going to burst, you know, maybe not this year, but 2026, 2027, the bubble’s going to burst and all these companies that are popping up that are like, all we do is do AI stuff, they’re going to drown. But what we’re left with is going to be similar to when the dot com bubble burst, you know, whatever amount of time it was, I’m really old, so, you know, maybe 50 years ago, I don’t even remember. But when the bubble burst, you know, we still had the Internet, we still have dot coms, we still have all the basic things that the dot com, you know, boom had. It’s just that the fluff, the junk is gone, and we just have the Internet. So, I think that we’re going to see a similar thing happen with AI, but in the meantime, we’re just going to have to suffer through, you know, some company coming along and being like we’ve cured cancer with AI and it’s like, oh, okay.

48:06 – Mishaal Rahman: Well, I mean, fortunately, Google hasn’t made such bold claims yet with AI, but what they are doing is, you know, you get Gemini, you get Gemini, everybody gets Gemini. That’s Google’s approach to AI so far. Okay, and that’s everything we’ve got for you this week. You can find links to all the stories mentioned in this episode in the show notes and you can find more amazing stories to read over on androidauthority.com.

48:29 – C. Scott Brown: Thanks for listening to the Authority Insights Podcast. We publish every week on YouTube, Spotify, and other podcast platforms. You can follow us everywhere on social media at Android Authority, and you can follow me personally on Instagram, Bluesky, and my own YouTube channel at C. Scott Brown.

48:47 – Mishaal Rahman: As for me, I’m on most social media platforms posting day in and day out about Android. If you want to keep up with the latest on Android, go to my Linktree and follow me on the social media platform that you like best. Thanks for listening.
