AI-enabled smart glasses are rapidly moving closer to everyday use in universities, potentially ushering in the kind of frictionless and inclusive education that has long been advocated by educationalists.
With the flick of an eye or the swipe of a finger, students will be able to translate texts, access real-time feedback or record sessions for rewatching without even having to pick up a smartphone. Carefully implemented, this could offer genuinely life-changing benefits for students and staff alike.
For instance, smart glasses can provide real-time optical character recognition that can “read images” for vision-impaired students. They can instantly transcribe speech, allowing hearing-impaired students to read what others are saying. They can read text aloud, without the need for earphones, to support students with dyslexia. And they offer real-time audio support and AI-driven text reorganisation to support a variety of learning needs.
Neurodiverse people could also benefit from various AI functionalities designed to support different ways of processing information, and those with psychiatric conditions such as schizophrenia could find support to reduce cognitive load.
But the conversation must not end here. The implementation of this technology needs careful oversight. The problem is that the pace of technological change is outstripping the capacity of governance to address the host of concerns it raises.
Smart glasses look like regular eyewear so their potential to facilitate student cheating is enormous. Last year, for instance, a student in Japan was found to be using camera-equipped smart glasses to cheat on a university entrance exam by secretly photographing questions and sharing them online in real time with paid tutors. Even viva voce exams – often touted as a solution to students’ illicit use of AI to write their essays – would not be immune to such cheating.
Privacy is another big concern. A 2024 study found that nearly one in five smart glasses users admitted to filming others without their consent. Let us imagine a future where, without even touching a smartphone, someone wearing such glasses in the classroom, seminar room or Zoom meeting can track your eye movement, facial expression, attention levels, emotional responses and even physiological cues, such as pupil dilation or blink rate. Would you (or could you) self-censor under such hyper-monitoring?
Moreover, would you (or should you) be worried about being deepfaked? As AI systems improve their capacity to generate synthetic voices, faces and videos, data harvested from smart glasses could be weaponised against academics, distorting their words and intent. A realistic deepfake can be made with as few as 20 images in 15 minutes.
Hence, the absence of consent to be monitored and recorded is not merely a privacy concern: it signals a direct threat to the conditions that enable academic freedom to exist.
This is the biggest threat to higher education – but, as things stand, it is also the biggest regulatory and research blind spot. There are plenty of studies focused on smart glasses and wearable 3D tech as pedagogical innovations, but they don’t address how their integration could be normalising self-censorship. And while privacy and plagiarism concerns are being debated and addressed in regulations such as the GDPR and the AI Act in Europe, the detrimental impacts of smart glasses on academic freedom are not being discussed – even as the conditions so toxic to it are normalised through habituated surveillance.
Increased doxing is another peril. Recently, two Harvard students demonstrated the possibility of combining smart glasses with facial recognition to identify people and almost instantly access their personal information. What will be the impact on our capacity as public intellectuals to speak freely when we fear our likeness could be deepfaked into racist or sexist commentary in real time and live-streamed alongside our home address?
Of course, deepfaking is possible already. The difference with smart glasses is that you will have no idea that you are being recorded. When someone holds up a phone or other device, it signals intent – and gives the opportunity to question, resist or opt out. Smart glasses that are indistinguishable from regular eyewear offer no such opportunity.
And make no mistake: our students will very soon be watching us through them. They are already being heavily marketed as mainstream products and can be bought for less than a mobile phone. In a sector with one of the highest rates of digital technology adoption, it will only be a year or two before they are ubiquitous.
We therefore have only a brief window to develop checks on their use that support diversity without cementing the prospect of being deepfaked and doxed in real time as an unavoidable fact of our working conditions.
Frictionless learning is a nice idea. But when it comes to academic freedom, a little friction is not a flaw: it is a vital safeguard.
Janine Arantes is senior lecturer and research fellow, Andrew Welsman is lecturer and research associate and Bec Marland is lecturer and research associate at Victoria University’s Institute for Sustainable Industries and Liveable Cities and College of Arts, Business, Law, Education and IT.