Paul Jones / Android Authority
When you take a zoomed-out view of the AI landscape, you see tools, services, and products mushrooming from every corner. It feels like AI is now a part of our lives in every way that matters.
But zoom in a little further, and you’ll notice something else. It’s the AI companies that are fighting for your attention and racing to shove AI into every possible place, whether it makes sense or not. AI pins that clip onto your shirt collar as an omnipresent AI tool fall squarely into this category. They feel forced, especially when there are clearly better form factors that could do the job far more naturally.
Even if the rumored Apple AI pin becomes a reality, I’d still pick Google’s AI glasses over it any day.
AI pins are doomed from day one

Donald Norman’s The Design of Everyday Things underlines a simple reality: products succeed only when they align with users’ mental models. They fail when people are asked to adapt to entirely new behaviors instead of extending existing ones that already feel intuitive.
We adjusted to touchscreens fairly quickly a couple of decades ago because we were already using our thumbs to type on BlackBerry keyboards. Touch wasn’t an alien interaction; it simply replaced physical keys with something more convenient — one that needed a light touch instead of a firm press.
And I don’t even need to go that far back to make this point. We’ve already seen the Humane AI pin fall flat, despite the novelty and high-tech hardware that should have attracted early adopters.
When abandonment rates for wearables are already high (and I’ve contributed to that statistic myself), device makers should focus on reducing friction instead of introducing new form factors and expecting people to adapt overnight. People even rejected smart glasses when they looked absurd and overly robotic, and only started accepting them once they began to look “normal.” Adding another cyborg-esque piece of hardware to clothing isn’t the way to go.
Novelty cannot exist for its own sake. Hardware needs a reason to exist — something it can do that existing devices cannot. Earbuds can’t read your brain signals; for that, you’d need dedicated Neuralink-like hardware. But why would you opt for a pin hanging off your T-shirt (and making you look like a cyborg) when existing hardware can already match or surpass its capabilities with ease?
You are already wearing the future
Reducing AI’s potential to be just another chatbot is short-sighted. A dedicated AI pin has no eyes, relies entirely on ambient audio, responds only in voice, and expects voice input in return. That’s an incredibly limiting form factor, especially when most of us are already wearing devices that have been far smarter for well over a decade.
Given how much investment the industry is pouring into visual AI, it’s clear where consumer-facing AI is headed. Companies want access to what you see and hear as the next data frontier, while users want an assistant that can stay with them throughout the day without being intrusive — one that understands context and discreetly helps. That convergence makes visual AI a win-win.
Smart glasses, particularly those with AR displays, are emerging as the next big AI project for nearly every major tech company. They offer far more consumer-friendly use cases than virtual-reality headsets. I can walk with navigation directions overlaid directly in my field of view instead of checking my phone every few seconds. I can take genuinely candid photos without pointing a camera at someone and making them self-conscious.
Smart glasses can also pair with the smartwatch already on your wrist, allowing input through hand gestures and unlocking yet another interaction layer. If voice commands don’t work in a packed subway, a discreet finger tap or a quick interaction on your watch can. When you’re getting better functionality, richer multimodal interaction, and multiple output surfaces — visuals in front of your eyes and haptics or touchscreen on your wrist — it’s hard to justify a device that relies on just one mode of interaction.
I would never settle for an AI pin — Humane’s or Apple’s — that supports only a single interaction mechanism.
Google’s real-world head start

Google collecting vast amounts of personal data is often framed as a negative — and rightly so. But when it comes to AI products, that data also translates into real-world understanding.
Take Street View, for example. It spans cities and towns across the globe, not just select neighborhoods in global hubs like New York or London. It has regularly updated imagery that reflects how streets actually look.
Google Lens has further personalized visual understanding by letting us capture and analyze the objects we interact with daily, while Assistant — and now Gemini — has spent years answering everyday questions through voice.
Put all of this together, and Google arguably understands our physical surroundings better than any other company. I wouldn’t be surprised if Apple taps into that vast trove of Google’s knowledge indirectly, just as it has more recently with Visual Intelligence and Siri.
We’re also already deeply familiar with how Android works in our hands. Extending those gestures and behaviors — for navigation, notifications, reminders, and translation — only strengthens the case for Google’s approach. Once again, extending existing behavior always finds more takers than trying to rewrite habits from scratch.
Inevitable vs. statement pieces

There’s a clear difference between the future some companies want and the one that’s actually likely to arrive. Smartphones aren’t disappearing in a couple of years, replaced entirely by AI pins. Habits don’t form in weeks, nor do they change as easily.
Trying to force AI pins into daily routines might work for statement pieces — the kind you see clipped onto jackets on intellectual-sounding podcasts — but making them mainstream is a far tougher ask, even with Apple’s resources and pop-culture clout.
What is inevitable is making existing hardware smarter and more AI-ready. One of AI’s biggest strengths is how hardware-agnostic it can be. Google running Gemini on years-old Home speaker hardware proves that point. Just as phones and watches gradually became smarter over time, eyewear is heading in the same direction — and AI is only accelerating that shift.
The things we’ve worn on our wrists and noses for centuries are far more likely to drive the next phase of AI hardware than some awkward accessory that doesn’t belong with everyday outfits. If Apple truly wants to lean into its fashion-forward image, focusing on AR glasses would make far more sense. Or it could simply borrow from Google’s playbook — something it’s never shied away from.