Meta's vision of “always-on AI” and what this means for TIPSS
Emma Barrett, University of Manchester and SPRITE+ lead for the XRCET-TIPSS Deep Dive
Attending the UnitedXR Europe conference in December 2025 really brought home to me the rapid pace of innovation as XR converges with other emerging technologies, and the risks and opportunities these developments raise. One prominent example is how quickly the convergence of AI and augmented reality is moving from concept to consumer reality, and how unprepared we still are. This post outlines Meta's vision and the TIPSS challenges it raises.

Meta was prominently in attendance at UnitedXR Europe, promoting their augmented reality glasses and the 'always-on' AI that they see as the eventual future of 'smart' glasses. In this future, AR glasses would use AI continuously to gather and interpret the world around the wearer and provide audio and visual feedback to the user. Meta was ruthlessly focused on the current and future positives (placing particular emphasis on the accessibility benefits for people with visual or hearing impairments) and much vaguer on the risks and possible harms.
Smart glasses in practice
At an invite-only breakfast meeting, some of us had the chance to try out the latest Meta AR glasses. The glasses are controlled via subtle hand and finger gestures detected by a 'neural wristband' that uses electromyography. Menus and apps were easy to navigate, even with minimal movement. (One Meta representative described these as "covert" gestures. We think he meant to say "discreet", but it was a telling slip...)
The visual display was neither obtrusive nor too distracting (at least within the confines of a conference room), with clear safety and convenience benefits for a user following navigation directions without having to take out their phone and look down at it.
One use case for AI+AR is real-time translation with live captions (although, as one colleague pointed out, the unfortunate caption placement means that the user appears to stare at the speaker's chest. Oops.)
Another AI-enabled feature modifies the image of a person in front of the glasses when the wearer says "Hey Meta, imagine the person in front of me as...". One of my colleagues asked to be imagined as The Grinch, and sure enough, his image, complete with a Grinch face, was quickly generated.
Toolkits for developers - and a quid pro quo?
These use cases are just the start. Always-on AI (if/when it happens) would drive further innovation, but in the meantime, Meta has launched a toolkit to enable independent developers to produce apps for the current generation of AR glasses. Their vision is of an AR app ecosystem that will drive creative new use cases and, of course, sales and uptake.
In every presentation, the Meta reps characterised regulators and lawmakers as holding back innovation through excessive regulation. We'd heard this refrain many times, of course, but what raised our eyebrows was the suggestion that developers should lobby their elected representatives and government officials directly, to urge them not to "over-regulate". This made the offer of a free developer toolkit feel like a quid pro quo arrangement, rather than a genuine attempt to democratise the AR app ecosystem.
Always-on AI + AR = more TIPSS challenges
Always-on AI + AR convergence has TIPSS implications that go beyond the already well-documented issues with existing AR glasses, and raises the potential for harms that were previously only theoretical.
The privacy risks are the most daunting. As generative AI models risk running short of training data in the form of words and pictures scraped from the internet, what users see, hear and do via smart glasses could provide a rich new seam of data about the physical world and human behaviour, including everyday actions, reactions, and social interactions.
Meta argues this data collection will enrich lives, but such intimate, continuous monitoring also creates an attractive target for criminal actors. Government security agencies in democratic and non-democratic countries might also see value in access to this data - but within what safeguards? How might users' attention and decision-making be influenced by the information the algorithms share - and do not share? And continuous capture could feed the data engine that drives targeted advertising. Imagine your walk on a chilly day being interrupted by an ad for your favourite hot beverage at Starbucks. Or your smart glasses noticing your child has a rash and recommending a (sponsored) product to treat it.
As well as reacting to a wearer's questions and instructions, could smart glasses' AI one day proactively provide suggestions, maybe even coaching users in how to behave in a particular scenario? What does it mean for human autonomy if an AI whispering in a wearer's ears could turn the user into a 'meat puppet'?

It's not just users' data that is at risk. Many researchers have highlighted risks to bystander privacy from the data collected when users activate recording features in AR glasses. Always-on AI will not require specific activation of a camera; data capture will happen continuously. Bystanders won't be able to opt out, and may not even be aware that their images are being harvested and their behavioural data processed.
Meta's lukewarm responses to questions about risks
We, and other attendees, raised some of our concerns with the Meta reps*. It's fair to say the answers were... disappointing.
Privacy
We asked about the privacy implications, suggesting that bystanders might find AR glasses, particularly with AI functions, 'creepy'. Meta compared concerns about being captured by AR glasses to early anxieties about smartphone recording in public, arguing people “will get used to it.” They argued that "over-signaling" to bystanders could "overwhelm" people. Another rep later talked about "signal fatigue” and described efforts to establish bystander consent as being "friction" that would undermine user experience.
But being captured via glasses is not the same as being filmed by a smartphone. Leaving aside the point that "people will get used to it" is not an adequate strategy for managing the many privacy-related risks of digital image capture on a phone, let alone on AR glasses, a phone pointed towards a person or scene is far more obvious than a glasses-wearing user taking pictures via "covert" subtle gestures. Meta's glasses currently feature a light that comes on when they are recording, but this is a subtle indication: most bystanders don't know what it means, won't in any case know what will be done with their data, and there are ways to get around it.
The energy costs of always-on AI
The response to our question about this was that "it’s a problem for any AI". But continuous multimodal inference (i.e., processing audio, video, gestures, and environmental context) has high compute and energy requirements, far exceeding those of text inference. In practice, continuous deep processing is not viable at scale: even if technically possible, it would be economically prohibitive. Instead, always-on AR will likely rely on on-device inference for lightweight, continuous context monitoring, with only specific user actions or salient changes in the environment triggering deeper processing in cloud-based models. This would reduce the compute and energy requirements (and potentially address some privacy concerns), but perhaps at the expense of the all-powerful “super intelligence” that consumers might feel they are being promised, and the quality of data that would make targeted advertising worthwhile.
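To make that tiered pattern concrete, here's a minimal sketch in Python. To be clear: this is our illustration of the general escalation pattern, not Meta's actual architecture - the scoring, threshold, and model stand-ins are all invented.

```python
# A minimal sketch of tiered always-on inference: a cheap on-device
# check runs continuously, and only explicit user actions or salient
# changes trigger the expensive (cloud-based) model. All names and
# values here are invented for illustration.

from dataclasses import dataclass
import random


@dataclass
class Frame:
    """One tick of multimodal sensor input from the glasses."""
    audio_level: float   # 0..1, e.g. normalised microphone energy
    scene_change: float  # 0..1, e.g. frame-difference score from the camera
    user_gesture: bool   # wristband detected an explicit command gesture


def on_device_salience(frame: Frame) -> float:
    """Tier 1: lightweight, always-on scoring that can run locally at
    low power. Stands in for a small on-device model."""
    return max(frame.audio_level, frame.scene_change)


def cloud_inference(frame: Frame) -> str:
    """Tier 2: expensive multimodal model call - in reality a network
    request to a large model. Placeholder only."""
    return "rich description of the current scene"


# Invented value; tuning it trades energy (and data leaving the device)
# against responsiveness.
SALIENCE_THRESHOLD = 0.8


def run_loop(frames: list[Frame]) -> None:
    for frame in frames:
        score = on_device_salience(frame)      # continuous, cheap
        if frame.user_gesture or score >= SALIENCE_THRESHOLD:
            result = cloud_inference(frame)    # rare, expensive
            print(f"escalated (score={score:.2f}): {result}")
        # Otherwise: no network call, no deep processing, minimal energy.


if __name__ == "__main__":
    demo = [Frame(random.random(), random.random(), random.random() < 0.05)
            for _ in range(10)]
    run_loop(demo)
```

The threshold is where the trade-offs live: raise it and less data leaves the device (cheaper, more private, less capable); lower it and the assistant feels more genuinely 'always on', at a higher compute, energy, and privacy cost.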
The developer toolkit
We asked about responsible app development. For instance, if images of bystanders could be easily modified on the fly, what's to stop someone creating a nudification app that will 'undress' the women and children in front of them in real time? Meta pointed us towards their code of conduct for developers, which includes acceptable use safeguards. (The effectiveness of these remains to be seen.)
Meta isn't the only player on the smart glasses pitch
Meta has clearly thought about safeguards for some TIPSS challenges that their new tech raises. But even if we give Meta the benefit of the doubt and assume that they have watertight safeguards and can control what apps are deployed on their glasses, what about other manufacturers and platforms?
AR glasses are experiencing strong consumer demand, and established and upstart companies alike are rapidly bringing new models to market. Alternative operating systems, most notably AndroidXR (launched at the end of 2025 and integrating Google's Gemini AI), will be attractive to developers. Can we be sure that every producer of AI+AR glasses is committed to (and capable of deploying) effective safeguards?
But… when, or if?
Always-on AI is not here yet, and would have to overcome technical challenges and potential consumer and policymaker reluctance. And the business model is risky. From an economic perspective, Meta needs to bring in enough revenue from AI glasses to meet the financial costs of processing - perhaps they envision a subscription model, and/or lucrative targeted advertising. Will consumers be hooked, or will the glasses gather dust once the novelty has worn off? How will Meta's glasses fare against competing smart AI glasses from other developers and platforms? Without a pathway to profitability and a durable moat, Meta's always-on AI would be an expensive flop.
What's needed - and quickly
Research
There are some excellent projects already underway (for instance, AUGSOC Project, Good Enough Ethics, and XR4Human). But there just aren't enough researchers working on these issues. Funding is scarce, and several members of our XR TIPSS Community of Interest have reported a frustrating lack of success in UKRI bids.
Over the next 18 months, the SPRITE+ XR Community of Interest will be mapping and prioritising areas where new research will add to our understanding and inform harm mitigation measures. These might include, for instance:
- Understanding the trade-offs needed to make always-on AI+AR technically and economically viable
- Understanding the genuine (versus hyped) benefits and harms of always-on AI+AR
- Understanding how XR-relevant standards inform concrete design decisions, and evaluating their use in the wild
- Understanding what shapes attitudes to privacy and smart glasses among different populations
- Mapping current regulatory frameworks against the new risks posed by AI+AR
- Evaluating whether and how safeguards are implemented across the rapidly proliferating range of smart glasses, for instance via pen-testing, red teaming and privacy policy analysis
- Identifying meaningful, low-friction consent mechanisms for users and bystanders
Regulatory guidance
Last time we checked, Ofcom, the ICO, and most other UK regulators offered no specific XR guidance on their websites. But AI-enabled smart glasses are well and truly here, and AI capabilities are developing rapidly. If, as some suggest, the uptake of AR glasses mirrors the trajectory of smartphone adoption in the 2010s, action is urgently needed to avoid a repeat of the accompanying harms. And if the future regulatory landscape is vague, developers may hold off innovating until potential constraints are clearer.
A wider public conversation
Discussion of AR capabilities and the potential risks and benefits of AI+AR is currently confined to a somewhat niche community of tech enthusiasts and tech-risk specialists. But AR glasses are going to become more widespread. For many people they will start to replace several smartphone functions and add new capabilities. As they do so, they'll begin to intrude into everyone's lives. We need to talk about the risks and benefits as a society, helping developers and policymakers understand what is - and is not - acceptable.
----
* If you want to hear Meta's Matthew Chalmers attempt to answer questions from Bertrand de la Chapelle (the Internet & Jurisdiction Policy Network) about the risks and harms associated with always-on AI, here's a link to the relevant part of their 'fireside chat' from UnitedXR 2025.
Acknowledgements
My attendance at UnitedXR and the writing of this blog post were funded by SPRITE+ (grant number EP/W020408/1) and the University of Manchester. The commentary presented here was greatly informed by discussions with my fellow SPRITE+ XR-TIPSS Community of Interest colleagues, including Mark McGill, Pejman Saeghe, and Aislinn Bergin, but any errors are mine alone.
