Agentic UX Over “Chat”: How to Design Multi-Agent Systems People Actually Trust | HackerNoon

News Room · Published 27 November 2025 (last updated 9:17 PM)

When I was first tasked with integrating generative AI into Viamo’s IVR platform, which serves millions of people across emerging markets in Africa and Asia, it didn’t take me long to recognise that we couldn’t just stick a chat interface on it and call it a day, however much that would have simplified our technical and development challenges. We were designing for people who rely on voice interfaces precisely because they need critical information about healthcare, agriculture, and finance, and who have little patience for AI that fails them or gives them wrong answers when their time and bandwidth are limited.

That project taught me a lesson I think every designer should learn: designing for agentic AI is not about making it chat-friendly, but about designing intelligent systems that work reliably, transparently, and predictably inside workflows people already trust. Over seven years of designing products across fintech, logistics, and software platforms, I have found that the most effective AI implementations don’t replace human judgement; they augment it in ways people can, and ultimately will, trust.

The Fatal Flaw of Chat-First Thinking

A dangerous paradigm has taken hold in this industry through its obsession with chat interfaces: everyone is trying to build a “ChatGPT for Y”. Few stop to ask whether, just because chat can be bolted onto the product, chat interaction is actually what the product needs.

Chat has its place. It is perfect for open-ended exploration and creative tasks where the journey matters as much as the destination. But most business tasks demand accuracy, auditability, and repeatability. When I designed the supplier interface for Waypoint Commodities, a system handling million-dollar fertiliser and chemical trade transactions, users didn’t need a friendly chat window for exploratory conversations about their transactions. They needed interfaces that let AI point out errors, identify optimal routes, and highlight compliance concerns without clouding critical transactions in uncertainty or vagueness.

The primary issue with chat-centric AI is that it hides decision-making behind a facade of conversation. Users can’t easily inspect what information was used, what logic was applied, and what alternatives were considered. That is acceptable for low-stakes queries, but disastrous for consequential choices. When we designed Waypoint’s shipment monitoring system, which tracked orders through fulfilment, users had to be assured that AI messages about potential delays or market fluctuations rested on facts the system had actually retrieved and verified, not on fabricated observations.

Multi-Agent Systems Require Multi-Modal Interfaces

The paradigm shift in my thinking came when I stopped designing for a single AI model and started designing for environments where multiple specialised AI agents operate together as a system.

It meant abandoning the one-window chat paradigm entirely. Instead, we built a multi-window interface in which several interaction methods could operate simultaneously. Quick facts got immediate AI voice responses. Troubleshooting ran through a guided interaction in which the AI answered preliminary questions before redirecting the user to an expert system. Users searching for information on government facilities got formatted replies with cited sources. Each interaction method carried distinct visual and audio signals that set user expectations accordingly.
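
As an illustrative sketch (not Viamo’s actual implementation; the intent labels and mode names are assumptions), a dispatcher like this can map a classified user intent to one of those interaction modes:

```python
from enum import Enum

class Mode(Enum):
    VOICE_ANSWER = "immediate voice response"
    GUIDED_FLOW = "guided troubleshooting"
    CITED_TEXT = "formatted reply with cited sources"
    HUMAN = "human expert"

# Hypothetical intent labels; a real system would get these from a classifier.
INTENT_TO_MODE = {
    "quick_fact": Mode.VOICE_ANSWER,
    "troubleshooting": Mode.GUIDED_FLOW,
    "government_info": Mode.CITED_TEXT,
}

def route(intent: str) -> Mode:
    """Pick an interaction mode for a classified intent.

    Unknown intents fall through to a human rather than to a guess,
    which is the safer default in a high-stakes voice system.
    """
    return INTENT_TO_MODE.get(intent, Mode.HUMAN)
```

The fallback-to-human default matters as much as the happy path: users should never be handed to the wrong modality just because the classifier was unsure.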

The outcomes validated this strategy: response accuracy improved by more than thirty per cent, and user engagement rose. More significantly, abandonment fell by twenty per cent as users stopped leaving conversations out of frustration with mismatched expectations. Because users understood they were speaking to an AI system with a defined body of knowledge, rather than waiting for human expertise, they adjusted their questions and their patience accordingly.

Designing for Verification, Not Just Automation

One of the most important principles of agentic UX design that I uphold is that automation without verification is merely technical debt masquerading as AI. Each AI agent in a system should come with an escape hatch that lets users validate its reasoning and override its decisions when required, not because one lacks faith in AI’s abilities, but because one respects the fact that users carry final responsibility in regulated environments and high-value transactions.

When I designed the admin dashboard for onboarding new users at Waypoint, we had a typical automation project: AI would process incorporation documents, extract essential information, and automatically populate user profiles, reducing onboarding from several hours to minutes. But we understood that inaccuracies could push a company into non-compliance or, worse, create fraudulent user profiles. The remedy was not more accurate AI processing, but a verification system in which AI-generated profiles remained pending until a human admin activated them.

In our interface, we indicated the AI’s confidence level for each extracted field:

  • High-confidence fields appeared in black text with a green tick;
  • Medium-confidence fields appeared in orange with a neutral symbol;
  • Low-confidence or missing fields appeared in red with a warning symbol.

This gave admins enough context to spot errors the AI had missed in about thirty seconds per profile.

The outcome was clear: onboarding time fell by forty per cent compared with fully manual methods, with greater accuracy than either humans or AI alone. More significantly, the admin staff trusted the system because they could actually follow its logic. Any error on the AI’s part was easy to spot on the verification page, and that built the trust which let us successfully roll out further AI functionality later on.

Progressive Disclosure of Agent Capabilities

Another subtle but essential area of agentic UX that most designers struggle with is informing users of what agents can and cannot accomplish without overwhelming them with possibilities. This is especially true for generative AI systems, as we found at FlexiSAF Edusoft, where I developed systems whose capabilities ranged widely but unpredictably across tasks. Users, in this case students and parents, needed guidance through often complex admission procedures, and they needed to know which answers the AI could provide and which required human interaction.

Our implementation surfaced capability hints based on interaction. As users typed, they saw examples of questions the AI answered well alongside questions better handled by the institution’s staff. A user typing about application deadlines, for instance, would see that the AI was strong on “When is the deadline for engineering applications?” but that “Can I be exempted from payment of application fees?” was better directed to a human.
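
The hint mechanism might be sketched as a keyword-triggered lookup; the hint table and the simple substring matching here are assumptions for illustration, not FlexiSAF’s implementation:

```python
# Hypothetical hint table; a real system would derive these examples
# from logged interaction outcomes rather than hand-curation.
HINTS = {
    "deadline": {
        "ai_can": ["When is the deadline for engineering applications?"],
        "ask_human": ["Can I be exempted from payment of application fees?"],
    },
    "fee": {
        "ai_can": ["How much is the application fee?"],
        "ask_human": ["Can I pay my fees in instalments?"],
    },
}

def capability_hints(partial_query: str) -> dict[str, list[str]]:
    """Return example questions the AI handles well vs. ones to route
    to staff, keyed off what the user has typed so far."""
    hints: dict[str, list[str]] = {"ai_can": [], "ask_human": []}
    text = partial_query.lower()
    for keyword, h in HINTS.items():
        if keyword in text:
            hints["ai_can"] += h["ai_can"]
            hints["ask_human"] += h["ask_human"]
    return hints
```

Showing both columns at once is the point: the “ask a human instead” examples do as much expectation-setting as the “AI can answer this” ones.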

Additionally, we built a feedback loop through which users could indicate whether an AI response had fully answered their question. This improved the model, but it also gave users a way to say they needed escalation rather than being left stranded by an AI system. The system would surface relevant resources and, failing that, connect them with a human. Support tickets decreased without sacrificing satisfaction, because people felt they had been listened to rather than abandoned.

Transparency as a Trust-Building Factor

Trust, of course, is established not by better AI algorithms but by transparent system design that lets users see what the system knows, why it reached its conclusions, and where its limits are. On our eHealth Africa project, involving logistics and supply-chain data for the medical sector, this was non-negotiable: if AI agents predicted the timing of vaccine shipments or suggested optimal delivery routes, those justifications had to be explainable, because human decision-makers were deciding whether rural clinics received life-saving commodities on time.

To address this, we built what I call “reasoning panels” that appeared alongside AI suggestions. They did not expose the model’s internal computations, only the factors behind each recommendation: road conditions, previous delivery times on the route, weather, and available transport capacity. Field operatives could quickly tell whether the AI was working from outdated information, or whether they themselves had missed a recent fact such as a bridge closure. The panels made the AI a transparent aid rather than an opaque black box.
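
A reasoning panel of this kind might be modelled as a recommendation plus the timestamped factors behind it, which also makes stale inputs easy to surface; this is a hypothetical sketch, not the eHealth Africa implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Factor:
    name: str               # e.g. "road conditions"
    summary: str            # human-readable value shown in the panel
    observed_at: datetime   # lets field staff spot outdated inputs

@dataclass
class ReasoningPanel:
    recommendation: str
    factors: list[Factor] = field(default_factory=list)

    def stale_factors(self, now: datetime, max_age_hours: float = 24.0) -> list[str]:
        """Names of factors older than max_age_hours, flagged so users
        can see when the AI is reasoning from outdated information."""
        cutoff = max_age_hours * 3600
        return [f.name for f in self.factors
                if (now - f.observed_at).total_seconds() > cutoff]
```

Attaching an observation timestamp to every factor is the key design choice: it turns “is this advice current?” from a guess into something the interface can answer.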

Transparency applied to failure as well as success. We built helpful failure states that explained why the AI could not offer a recommendation, instead of falling back on a generic error message. If it could not suggest an optimal route because it lacked connectivity data, it said so explicitly and told the user what they could do without a route recommendation.

Designing Handoffs Between Agents and Humans

Perhaps the least developed theme in agentic UX is the handoff: exactly when and how an AI agent should pass control of a system or an interaction to a human, whether that human is a colleague or the user themselves. This is where most trust is lost in multi-agent systems. One of my first projects to tackle it explicitly was Bridge Call Block for Viamo, a system that transferred users from IVR interactions to human customer service reps.

Our context-transfer protocol ensured that after every AI interaction, a structured summary appeared on the operator’s screen before they greeted the user: what the user had asked, what the AI had intended to say, and why the AI escalated the call. Users never had to repeat themselves, and operators had the full interaction context. This small detail of interaction design markedly improved average handling time and user satisfaction, because people felt respected and that their time had not been wasted.
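
A context-transfer summary like this can be modelled as a small structure rendered into an operator-facing card; the field names are hypothetical, chosen to mirror the three items the summary contained:

```python
from dataclasses import dataclass

@dataclass
class HandoffSummary:
    user_request: str          # what the user asked
    ai_planned_response: str   # what the AI intended to say
    escalation_reason: str     # why the AI handed the call off

    def render(self) -> str:
        """Plain-text card shown to the operator before the greeting,
        so the user never has to repeat themselves."""
        return (f"Asked: {self.user_request}\n"
                f"AI planned: {self.ai_planned_response}\n"
                f"Escalated because: {self.escalation_reason}")
```

Making the escalation reason a first-class field, rather than burying it in a transcript, is what lets the operator open with the right context instead of a cold greeting.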

The reverse handoff, from human back to AI agent, needed equal care. When operators referred users back to the automated system, the interface helped them set accurate expectations about which tasks the AI could handle autonomously, so users returned to the system informed rather than frustrated.

Pragmatic Principles for Agentic UX Design

After years of designing AI-enabled systems, I have distilled some pragmatic guidelines that help me design agentic UX effectively:

Firstly, design for the workflow, not for the technology. Users don’t care whether they’re being helped by AI, rules, or human intelligence. They only care about whether they can accomplish their tasks effectively and conveniently. Begin from the target outcome and work backwards, identifying where AI-enabled agents add value and where they add complexity, and proceed accordingly.

Secondly, define meaningful boundaries between AI-enabled agents. Users need to know when they are crossing from one kind of intelligence to another, whether retrieval, model inference, or human judgement. Establish consistent visual and interaction cues so they never have to wonder what kind of answer they’re going to get, or when.

Thirdly, build verification into your workflow design in a way that respects user expertise. AI should speed decision-making by surfacing pertinent information and suggesting courses of action, but the decisions themselves should rest with human users, who hold context the AI lacks. Design verification flows into your interfaces so that this division of labour is explicit, not an afterthought.

The projects that succeeded, securing funding, measurably boosting engagement, and serving thousands of users, did not succeed because we possessed, or attempted to create, the most sophisticated AI. They succeeded because our interfaces let users understand what the AI system was doing, and that understanding built the trust that let them hand it increasingly complex tasks over time. That is what makes them genuine examples of agentic UX.
