Wei Duan is a technologist pioneering AI innovation for wearable devices. Currently a technical leader at Snap Inc., Wei previously co-founded 8glabs as CTO, where he led a team of engineers and raised nearly $3 million in seed funding. With experience at Google working on speech recognition AI and significant contributions to machine learning research, Wei is developing new approaches to creating responsive, private, and intuitive AI assistants.
In an era where AI increasingly shapes our daily interactions with technology, Wei’s work addresses critical challenges in latency, privacy, and natural interaction that have hampered the effectiveness of AI assistants on wearable devices.
**Please tell us about your journey into the world of AI and wearable technology.**
I’m currently developing Snapchat’s AI chatbot. Previously I worked on speech recognition for Google Assistant, focusing on reducing model memory usage while improving transcription accuracy and latency—challenges that informed my passion for creating more responsive AI assistants. In 2022, I co-founded 8glabs as CTO, where I built a team from scratch and led the development of AI-powered products. Throughout my career, I’ve been driven by making AI more intuitive and accessible. Wearable technology presents an exciting frontier because it can seamlessly integrate AI assistance into our daily lives in ways that feel natural rather than intrusive.
**What inspired you to focus on AI assistants for wearable devices, and what challenges are you addressing?**
I observed a disconnect in current AI assistants: despite their power, they suffer from high latency, privacy issues, and unnatural interactions—particularly problematic for wearables that should feel like natural extensions of ourselves.
My focus is developing on-device AI models that process input directly on edge devices, with algorithms specifically designed to analyze video from wearables. This approach eliminates server-related delays that break natural conversation, enhances privacy by keeping data on personal devices, and creates more contextually aware and seamless interactions.
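To make the idea concrete, here is a minimal sketch of on-device frame analysis: a compact vision model labels a camera frame locally, so nothing ever leaves the device. The model choice (MobileNetV3-Small) and the single-image example are illustrative stand-ins, not an actual production pipeline.

```python
# Minimal sketch of on-device visual understanding: a small model runs locally
# on a captured frame, so no image data is sent to a server. MobileNetV3-Small
# is a stand-in for whatever compact model a real wearable would ship with.
import torch
from torchvision.models import mobilenet_v3_small, MobileNet_V3_Small_Weights
from PIL import Image

weights = MobileNet_V3_Small_Weights.DEFAULT
model = mobilenet_v3_small(weights=weights).eval()  # small enough for edge hardware
preprocess = weights.transforms()                   # resize/normalize as the model expects


@torch.inference_mode()
def describe_frame(frame: Image.Image) -> str:
    """Label one camera frame entirely on-device and return the top category."""
    batch = preprocess(frame).unsqueeze(0)          # (1, 3, H, W) tensor, stays local
    scores = model(batch).softmax(dim=1)
    return weights.meta["categories"][int(scores.argmax())]


# A real wearable would run a continuous capture loop; a single still image
# ("frame.jpg" is a placeholder path) is enough to show the round trip.
print(describe_frame(Image.open("frame.jpg").convert("RGB")))
```

Because inference happens where the frame is captured, the latency budget is bounded by the model itself rather than by a network round trip, which is what makes conversational, glanceable assistance feel natural.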
This technology could transform how we receive immediate assistance based on what we’re seeing, and it advances the long-standing pursuit of machines that genuinely understand visual context.
**How have your experiences at Google and 8glabs shaped your approach to AI innovation?**
These environments gave me complementary perspectives. At Google, I learned the importance of scalability and robustness when building systems for millions of users. Working on speech recognition taught me to optimize models for real-world constraints while maintaining accuracy.
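One concrete flavor of that constraint-driven optimization is post-training quantization, which stores weights as 8-bit integers instead of 32-bit floats and cuts memory roughly fourfold. The toy model below is only a stand-in, not anything from a production speech system; it simply shows the size difference the technique buys.

```python
# Sketch of post-training dynamic quantization: weights are stored as int8,
# shrinking the model's memory footprint. The layer sizes here are invented
# and stand in for a real acoustic model.
import io
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(80, 512),   # e.g. log-mel features in
    torch.nn.ReLU(),
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 128),  # toy output layer
).eval()

quantized = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)


def serialized_mb(m: torch.nn.Module) -> float:
    """Approximate model size by serializing its state dict to memory."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes / 1e6


print(f"fp32 model: {serialized_mb(model):.2f} MB")
print(f"int8 model: {serialized_mb(quantized):.2f} MB")
```

The interesting engineering work is in keeping accuracy intact after compression like this, which is exactly the kind of trade-off that real-world speech systems force you to confront.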
At 8glabs, I experienced the full lifecycle of product development. This startup environment forced me to be agile and make critical architecture decisions balancing cost, timeline, and performance. Leading a team taught me the importance of clear technical communication and inspiring innovation while maintaining focus.
From Google, I brought discipline in optimization; from my startup experience, I gained the ability to move quickly and see the bigger picture of how technology serves user needs.
**How do you see AI and AR transforming wearable devices?**
This combination will transform wearables from passive tools into intelligent companions that understand and enhance our perception. It’s about creating interactions where technology anticipates needs and provides context-aware assistance.
What excites me most is enhancing human capability. Imagine devices that help people with memory impairments recognize faces, provide real-time translation in foreign countries, or offer professionals hands-free access to specialized knowledge during complex tasks.
The key is creating AI systems that respond quickly enough and understand context well enough to feel like natural extensions of human cognition rather than separate tools we must consciously operate.
**What are the biggest ethical considerations in developing AI for wearable devices?**
Privacy is paramount. These devices capture incredibly personal data—what we see, hear, and sometimes physiological information. Processing on-device rather than in the cloud addresses some concerns, but it’s just the beginning.
User autonomy is equally important. AI assistants should augment human capabilities without creating dependency or removing control. I believe in designing transparent systems that users can understand and override when needed.
We must also ensure these technologies work for diverse populations and don’t amplify existing biases or inequalities. This requires diverse training data and testing with varied user groups.
**What developments do you predict in AI for wearable technology in the next 5-10 years?**
We’ll see a fundamental shift to on-device processing, enabled by more efficient algorithms and specialized hardware. This will cut latency, ease privacy concerns, and enable more continuous, contextual assistance.
AI will become better at understanding human intent through multimodal sensing—combining voice, vision, and other inputs for richer context understanding. This will make interactions feel more natural and reduce the need for explicit commands.
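As a rough illustration of what multimodal understanding can look like at its simplest, the sketch below concatenates a speech embedding with a camera-frame embedding and maps them to intent scores. The dimensions and the late-fusion design are assumptions made for the example; real systems are considerably more sophisticated.

```python
# A back-of-the-envelope illustration of multimodal intent understanding via
# late fusion: separately encoded audio and vision features are concatenated
# and projected to intent scores. All dimensions and the intent count are
# invented for the example.
import torch
import torch.nn as nn


class LateFusionIntent(nn.Module):
    def __init__(self, audio_dim: int = 256, vision_dim: int = 512, num_intents: int = 8):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(audio_dim + vision_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_intents),
        )

    def forward(self, audio_emb: torch.Tensor, vision_emb: torch.Tensor) -> torch.Tensor:
        # Each embedding comes from its own encoder (speech model, vision model);
        # fusing them lets the spoken request be interpreted in visual context.
        return self.fuse(torch.cat([audio_emb, vision_emb], dim=-1))


model = LateFusionIntent()
audio = torch.randn(1, 256)    # stand-in for a speech-encoder output
vision = torch.randn(1, 512)   # stand-in for a camera-frame embedding
print(model(audio, vision).softmax(dim=-1))
```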
Personalization will deepen: assistants will adapt to individual users, learning their specific patterns and preferences in nuanced ways. And we’ll see new interaction paradigms beyond voice and touch, from subtle gestures and eye movements to perhaps direct neural interfaces, making interactions even more seamless.
The journey to creating truly responsive, private, and natural AI assistants for wearable devices is just beginning, and I’m excited to be part of shaping this future.