The Information has published a report with interesting tidbits about Apple’s partnership with Google, which will have Gemini serve as the foundation for its AI features, including the new Siri. Here are the details.
In-house finetuning, no Google or Gemini branding
Yesterday’s joint announcement that Apple had decided to rely on Gemini to power its AI features was light on technical specifics.
The companies stated that Apple’s Gemini-based features “will continue to run on Apple devices and Private Cloud Compute,” which means that Google won’t have access to user data by design, but that was it.
Today, The Information published an interesting look at some aspects of the partnership, including the fact that Apple will be able to adjust its version of the Gemini model independently:
Apple can ask Google to tweak aspects of how the Gemini model works, but otherwise Apple can finetune Gemini on its own so that it responds to queries the way Apple prefers, the person involved in the project said.
The report also partly answers a question that multiple people have been either asking themselves or speculating about when it comes to how prominent the Google branding will be throughout the experience:
In the current prototype of Apple’s Gemini-based system, AI answers don’t include any branding related to Google or Gemini, this person said.
Although the final experience may change from the current implementation, this partly echoes a Bloomberg report from late last year, in which Mark Gurman said:
I don’t expect either company to ever discuss this partnership publicly, and you shouldn’t expect this to mean Siri will be flooded with Google services or Gemini features already found on Android devices. It just means Siri will be powered by a model that can actually provide the AI features that users expect — all with an Apple user interface.
The Information also notes that Apple expects the Gemini-powered Siri to improve its performance on answers related to world knowledge by actually answering the question (“such as describing the population of a country or scientific information”) rather than listing links for the user to visit.
Gemini-powered Siri to get better at providing emotional support
The Information’s report also notes that Apple expects the Gemini-powered Siri to become better at providing emotional support:
Another common set of questions Siri has historically struggled with involved emotional support, such as when a customer tells the voice assistant it is feeling lonely or disheartened. In the Gemini-based version, Siri will give more thorough conversational responses the way ChatGPT and Gemini do, this person said.
Setting emotional support as a goal could be a risky move, as there’s no shortage of documented cases in which vulnerable users have harmed themselves after having conversations with chatbots.
In many instances, rather than offering appropriate safety guidance or steering users toward real-world help, the systems hallucinated, misread the situation, or failed to grasp the stakes of the conversation, sometimes with serious consequences.
How exactly the Gemini-powered Siri will handle these situations when they inevitably come up remains to be seen.
About those two different systems
Last August, at a company-wide meeting, Apple’s head of software, Craig Federighi, addressed one of the biggest sticking points of Apple’s fumbled Siri revamp.
At the time, Bloomberg reported:
Federighi explained that the problem was caused by trying to roll out a version of Siri that merged two different systems: one for handling current commands — like setting timers — and another based on large language models, the software behind generative AI. “We initially wanted to do a hybrid architecture, but we realized that approach wasn’t going to get us to Apple quality,” Federighi said.
While The Information’s report doesn’t go in-depth to address this technical aspect exactly, it does note the following:
While certain common Siri tasks such as setting a timer, reminder or sending a specific text message to a phone contact will continue to be powered by technology stored on Apple devices, the new version of Siri would also be able to handle instances in which the customer’s question isn’t clearly understood.
For example, if someone asks Siri to send a text message to their mother or sister, but the customer doesn’t store their names that way in their contacts, the Gemini-based Siri could search through their messages to figure out which of their contacts is most likely to be their mother or sister, this person said.
In other words, it appears that Apple is still seeking to merge traditional, low-stakes natural language processing tasks (such as setting timers or creating simple reminders) with more complex, non-deterministic tasks into a single, streamlined experience. And rightly so, from a user experience standpoint.
The problem is that while this may seem trivial at first glance, it has proven to be a challenging endeavor, even for Google and Amazon. So it will be interesting to learn more about this as we get closer to the actual rollout of the first features from the partnership.
Timeline
Finally, the report also reaffirms that the rollout of Apple’s Gemini-powered AI features will be gradual:
Some of the features will launch this spring. Others, including Siri’s ability to remember past conversations it had with a customer, or proactive features that could suggest they leave home to avoid traffic ahead of an airport pickup that’s listed on their Apple calendar, are expected to be announced at the company’s annual developer conference in June, this person said.
You can read The Information’s full report here.