Are we still building for humans, or for AI agents? The answer is changing faster than most product teams can keep up with, and several innovations this year back that up. One of the most recent is Google’s release of the Agent Payments Protocol (AP2) in September 2025.
For the first time, we’re witnessing AI agents and systems independently execute payments and transactions on behalf of users. Essentially, this means you can no longer assume that the person at the other end of every transaction is human. It also changes everything, from what user experience now means to how products must be built. How then do we navigate this new reality? Let’s find out in this piece.
Google’s AP2 Enables Agent-First Experiences
It’s impossible to browse tech media without coming across one or two articles tagging 2025 as the year of AI agents. It’s hard to argue with this narrative: we’ve seen some of the best AI rollouts in years, and frankly, it only gets better from here.
While traditional agents were designed to respond to prompts or perform single tasks, the aim with AI agents is to create systems that can plan and execute, use external tools, adapt to feedback, and do much more independently. The expectations are high, and it’s easy to see why the sector is pumped and eager to see these ideas implemented.
Many of the releases seen this year haven’t lived up to expectations, only scratching the surface of these promises. That changed with Google’s AP2 release on September 17th. AP2 is an open protocol that enables agents to authenticate, purchase, and transact autonomously while keeping payment flows compliant with existing financial regulations.
In simple terms:
● Shopping shifts from “add to cart” to “delegate to agent”
● Scheduling becomes syncing between AI proxies
● Engagement evolves from clicks to intent alignment
A Google Cloud blog post announcing the release detailed some of AP2’s design functionality, like smarter shopping, accessing personalized offers, and coordinating tasks around purchases. Imagine a customer discovers a leather jacket they want is unavailable in a specific size, and they tell their agent: “I really need this jacket in a size XL, and I’m willing to pay 30% more for it.” The agent then monitors the store under these predetermined conditions and automatically executes the order the moment the variant becomes available at an acceptable price. This is exactly how the post pictures AP2 playing out in real-world use cases.
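To make that flow concrete, here’s a minimal sketch of the delegated-purchase loop in TypeScript. The mandate shape and store calls (IntentMandate, checkAvailability, executePurchase) are illustrative stand-ins, not AP2’s actual schema or API; the real protocol captures conditions like these in signed mandates.

```typescript
// Minimal sketch of a delegated-purchase loop. All types and endpoints
// here are hypothetical stand-ins, not AP2's actual schema or API.

interface Offer {
  sku: string;
  price: number;
}

interface IntentMandate {
  item: string;               // what the user wants
  size: string;               // required variant, e.g. "XL"
  basePrice: number;          // reference price in USD
  maxPriceMultiplier: number; // 1.3 = willing to pay up to 30% more
  expiresAt: Date;            // delegation should never be open-ended
}

// Stubs for illustration; a real agent would call merchant and payment endpoints.
declare function checkAvailability(item: string, size: string): Promise<Offer | null>;
declare function executePurchase(offer: Offer, mandate: IntentMandate): Promise<void>;

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function monitorAndBuy(mandate: IntentMandate): Promise<boolean> {
  const priceCap = mandate.basePrice * mandate.maxPriceMultiplier;
  while (new Date() < mandate.expiresAt) {
    const offer = await checkAvailability(mandate.item, mandate.size);
    if (offer && offer.price <= priceCap) {
      // The agent only ever acts within the bounds the user pre-authorized.
      await executePurchase(offer, mandate);
      return true;
    }
    await sleep(60_000); // poll once a minute until the mandate expires
  }
  return false; // mandate lapsed without a qualifying offer
}
```

Note the price cap and expiry: a delegation that is bounded and time-boxed is what separates “acting on my behalf” from an open-ended spending authorization.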
One of the major challenges with the rollouts seen this year is that they work great in demos, then crash and burn when handling actual business processes. In fact, LangChain research found that performance quality is the #1 concern of 82% of organizations deploying AI agents. However, the potential AP2 holds in terms of functionality is high, and considering Google’s recent huge success with Veo 3, there is a good chance it will meet expectations. As such, this could be the biggest enabler of agent-first experiences for product leaders.
Designing for the Digital Twin (AI Agents)
IBM defines a digital twin as a digital representation of an object or system that updates in real time, using simulations, machine learning, and reasoning to support decision-making. The concept itself isn’t new: in manufacturing and infrastructure, digital twins have been around for decades, and McKinsey calls them “the ultimate convergence of data and design,” referring to digital models that mirror physical systems in real time. What’s new is the direction they’re taking today, with the integration of artificial intelligence.
This new direction forces product teams to rethink design at a fundamental level, especially since traditional user experience is built around human cognition and interaction. We’re at a point where those approaches won’t suffice when the user at the other end of the screen might not even be human. Product leaders now have numerous new factors to consider. For instance, there may be a need to prioritize APIs and semantic metadata that agents can interpret and act upon.
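What “agent-readable” could look like in practice: instead of forcing agents to scrape rendered HTML, a product page can expose structured data. The sketch below uses schema.org’s existing Product/Offer vocabulary; the product values themselves are invented for illustration.

```typescript
// Structured product data an agent can parse deterministically, expressed
// with the schema.org Product/Offer vocabulary (typically embedded in a page
// as <script type="application/ld+json"> or returned by an API).
// The product values are invented for illustration.
const productMetadata = {
  "@context": "https://schema.org",
  "@type": "Product",
  name: "Leather Jacket",
  sku: "LJ-0042",
  offers: {
    "@type": "Offer",
    price: 260.0,
    priceCurrency: "USD",
    availability: "https://schema.org/InStock",
  },
} as const;
```

Unambiguous fields like price and availability are precisely what a delegated agent needs in order to match an offer against the conditions its user set.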
At the same time, we might need to start looking into verification loops and ways to confirm that an agent’s actions truly represent its user’s intent. This model essentially changes what it means to “design for the user,” as the next customer journey we curate might not be a direct interaction, but one mediated through intelligent proxies.
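Here’s a minimal sketch of what such a confirmation step could look like, assuming a signed, time-boxed mandate. The SignedMandate shape and the signature stub are hypothetical, though AP2 does ground this idea in cryptographically signed credentials.

```typescript
// Minimal sketch of checking a proposed action against a user's mandate.
// The shapes below are illustrative, not AP2's actual schema.

interface SignedMandate {
  id: string;
  maxPrice: number;  // the upper bound the user agreed to
  expiresAt: Date;   // delegation is time-boxed
  signature: string; // proves the user actually issued this mandate
}

interface ProposedAction {
  mandateId: string;
  price: number;
}

// Stub: a real implementation would verify a digital signature
// against the user's public key.
const verifySignature = (mandate: SignedMandate): boolean =>
  mandate.signature.length > 0;

function actionMatchesIntent(action: ProposedAction, mandate: SignedMandate): boolean {
  if (!verifySignature(mandate)) return false;      // provenance: the user issued it
  if (new Date() > mandate.expiresAt) return false; // freshness: delegation hasn't lapsed
  return (
    action.mandateId === mandate.id &&
    action.price <= mandate.maxPrice                // scope: within authorized bounds
  );
}
```

Checks like these are what make “the agent did what I meant” auditable rather than assumed.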
Lessons for Product Leaders
The most important lesson to take from innovations like this is how quickly the industry is evolving, and how we need to be actively involved in moving with the times. If AI agents in digital finance transactions become a working system, how do we integrate ourselves into such a model? We can start by asking the following questions:
● Are we ready to design for agents as first-class customers?
● Can our APIs and systems support autonomous decision-making?
● Do we measure success by engagement, or by how faithfully agents reflect user values?
The Future of the Human-Product Relationship
Designing for AI agents is ultimately about redefining the relationship between people and products. While the focus is shifting toward creating experiences for autonomous agents rather than direct human users, it’s essential to remember that these agents are extensions of their users. They reflect their identity, preferences, and ethics. When you design with those human values at the core, you’re not just building for today’s interfaces; you’re building products resilient enough to survive the next wave of AI innovation.
