Apple study looks into how people expect to interact with AI agents – 9to5Mac

News Room · Published 13 February 2026 (last updated 2026/02/13 at 6:55 AM)

A team of Apple researchers set out to understand what real users expect from AI agents, and how they’d rather interact with them. Here’s what they found.

Apple explores UX trends for the era of AI agents

In the study, titled Mapping the Design Space of User Experience for Computer Use Agents, a team of four Apple researchers says that while the market has been investing heavily in the development and evaluation of AI agents, some aspects of the user experience have been overlooked: how users might want to interact with them, and what these interfaces should look like.

To explore that, they divided the study into two phases: first, they identified the main UX patterns and design considerations that AI labs have been building into existing AI agents. Then, they tested and refined those ideas through hands-on user studies with an interesting method called Wizard of Oz.

By observing how those design patterns hold up in real-world user interactions, they were able to identify which current AI agent designs align with user expectations, and which fall short.

Phase 1: The taxonomy

The researchers looked into nine desktop, mobile, and web-based agents, including:

  • Claude Computer Use Tool
  • Adept
  • OpenAI Operator
  • AIlice
  • Magentic-UI
  • UI-TARS
  • Project Mariner
  • TaxyAI
  • AutoGLM

Then, they consulted with “8 practitioners who are designers, engineers, or researchers working in the domains of UX or AI at a large technology company,” which helped them map out a comprehensive taxonomy with four categories, 21 subcategories, and 55 example features covering the key UX considerations behind computer-using AI agents.

The four main categories included:

  • User Query: how users input commands
  • Explainability of Agent Activities: what information to present to the user about agent actions
  • User Control: how users can intervene
  • Mental Model & Expectations: how to help users understand the agent’s capabilities

In essence, that framework spanned everything from aspects of the interface that let agents present their plans to users, to how they communicate their capabilities, surface errors, and allow users to step in when something goes wrong.
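One way to picture the framework is as a simple data structure mapping each top-level category to the question it covers. This is an illustrative sketch based on the article's paraphrases, not the paper's own notation, and it omits the 21 subcategories and 55 example features.

```python
# Hypothetical model of the study's four top-level UX categories.
# Descriptions are paraphrased from the article; subcategories omitted.
TAXONOMY: dict[str, str] = {
    "User Query": "how users input commands",
    "Explainability of Agent Activities": "what information to present to the user about agent actions",
    "User Control": "how users can intervene",
    "Mental Model & Expectations": "how to help users understand the agent's capabilities",
}

def describe(category: str) -> str:
    """Return a one-line description of a taxonomy category."""
    return f"{category}: {TAXONOMY[category]}"
```

A designer auditing an agent UI could walk such a mapping as a checklist, asking whether each category is addressed somewhere in the interface.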

With all of that at hand, they moved on to phase 2.

Phase 2: The Wizard-of-Oz study

The researchers recruited 20 users with prior experience with AI agents, and asked them to interact with an AI agent via a chat interface to perform either a vacation rental task or an online shopping task.

From the study:

Participants were provided with a mock user chat interface through which they could interact with an “agent” played by the researcher. Meanwhile, the participants were also presented with the agent’s execution interface, where the researcher acted as the agent and interacted with the UI on screen based on the participant’s command. On the user chat interface, participants could enter textual queries in natural language, which then appeared in the chat thread. Then, the “agent” began execution, where the researcher controlled the mouse and keyboard on their end to simulate the agent’s actions on the web page. When the researcher completed the task, they entered a shortcut key that posted a “task completed” message in the chat thread. During execution, participants could use an interrupt button to stop the agent, and a message “agent interrupted” would appear in the chat.

In other words, unbeknownst to the users, the AI agent was, in reality, a researcher sitting in the next room, who would read the text instructions and perform the requested task.
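The chat flow the study describes, a query starts execution, which ends either in a "task completed" message or via the interrupt button, can be sketched as a small state machine. All class and method names here are hypothetical; the study describes the interface, not an implementation.

```python
from enum import Enum, auto

class AgentState(Enum):
    IDLE = auto()
    EXECUTING = auto()
    COMPLETED = auto()
    INTERRUPTED = auto()

class ChatSession:
    """Minimal sketch of the Wizard-of-Oz chat flow: the user submits a
    query, the 'agent' executes, then either posts 'task completed' or
    is stopped by the interrupt button."""

    def __init__(self) -> None:
        self.state = AgentState.IDLE
        self.thread: list[str] = []  # messages shown in the chat thread

    def submit_query(self, text: str) -> None:
        self.thread.append(f"user: {text}")
        self.state = AgentState.EXECUTING

    def complete(self) -> None:
        # In the study, a shortcut key posted this message.
        if self.state is AgentState.EXECUTING:
            self.thread.append("task completed")
            self.state = AgentState.COMPLETED

    def interrupt(self) -> None:
        # In the study, an interrupt button triggered this message.
        if self.state is AgentState.EXECUTING:
            self.thread.append("agent interrupted")
            self.state = AgentState.INTERRUPTED
```

Keeping the two terminal states distinct matters for the analysis phase: chat logs record whether a task ended by completion or by user intervention.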

For each task (vacation rental or online shopping), participants were asked to complete six functions with the help of the AI agent. In some of them, the agent would deliberately fail (for example, by getting stuck in a navigation loop) or make intentional mistakes (for example, by selecting something different from what the user instructed).

At the end of each session, the researchers asked participants to reflect on their experience and propose features or changes to improve the interaction.

They also analyzed video recordings and chat logs from each session to identify recurring themes in user behavior, expectations, and pain points when interacting with the agent.

Main findings

Once all was said and done, the researchers found that users want visibility into what AI agents are doing, but they don’t want to micromanage every step; otherwise, they could just perform the tasks themselves.

They also concluded that users want different agent behaviors depending on whether they’re exploring options, or executing a familiar task. Likewise, user expectations change based on whether they’re familiar with the interface. The more unfamiliar they were, the more they wanted transparency, intermediate steps, explanations, and confirmation pauses (even in low-risk scenarios).

They also found that people want more control when actions carry real consequences (such as making purchases, changing account or payment details, or contacting other people on their behalf), and also found that trust breaks down quickly when agents make silent assumptions or errors.

For instance, when the agent encountered ambiguous choices on a page, or deviated from the original plan without clearly flagging it, participants instructed the system to pause and ask for clarification, rather than just pick something seemingly at random and move on.
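The behavior participants asked for, pausing to ask for clarification on consequential or ambiguous actions instead of guessing, amounts to a confirmation gate. The sketch below is a hypothetical illustration of that pattern; the action names and the `confirm` callback are assumptions, not anything from the paper.

```python
# Hypothetical confirmation-gate pattern: pause before consequential or
# ambiguous actions rather than silently picking an option and moving on.
# The set below mirrors the article's examples of high-stakes actions.
CONSEQUENTIAL = {"make_purchase", "change_payment_details", "contact_person"}

def next_step(action: str, ambiguous: bool, confirm) -> str:
    """Execute `action` only after user confirmation when it is
    consequential or the current choice is ambiguous.

    `confirm` is a callable taking a prompt string and returning bool,
    e.g. a dialog in a real UI.
    """
    if action in CONSEQUENTIAL or ambiguous:
        if not confirm(f"About to perform '{action}'. Proceed?"):
            return "paused: awaiting clarification"
    return f"executed: {action}"
```

Low-risk, unambiguous steps pass through untouched, which matches the other finding: users reject both silent assumptions and step-by-step micromanagement.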

In that same vein, participants reported discomfort when the agent wasn’t transparent about making a particular choice, especially when that choice could lead to the wrong product being selected.

All in all, this is an interesting study for app developers looking to adopt agentic capabilities in their apps, and you can read it in full here.
