Why AI Wearables Feel Exciting And Weirdly Invasive At The Same Time
There’s something undeniably cool about asking your glasses what you’re looking at and getting an instant answer. Or having a watch that notices patterns in your health before you do. AI wearables promise to make our lives smarter, more connected, more efficient, all whilst keeping our hands free and our phones in our pockets.
But the same technology that feels like having a personal assistant on your wrist also feels like inviting surveillance into your most intimate moments. And we’re only just beginning to reckon with what that actually means.
The Promise of AI That Actually Gets You
I get the appeal of AI wearables. Really, I do. Meta’s Ray-Ban smart glasses let you point at a shelf of products and ask which one meets your needs, and the AI analyses everything in real time to tell you. The Apple Watch Series 11 detects irregular heart rhythms and hypertension patterns by monitoring your blood vessels over 30 days, passively, without you doing anything.
These aren’t hypotheticals. These are real features available right now in 2025, and they’re genuinely useful. The Meta Ray-Ban Display glasses, unveiled in September 2025, take things further with an in-lens display that shows messages, translations, and turn-by-turn navigation without you ever pulling out your phone. Pair them with the Meta Neural Band, a wristband that reads muscle signals to control the glasses, and you’ve got something that feels legitimately futuristic.
AI that’s there when you need it, invisible when you don’t.
Your Data Is the Product
To make all of this work, AI wearables need data. A lot of it. Your heart rate, sleep patterns, location history, what you’re looking at, what you’re saying, who you’re with, where you go, how you move, what you buy.
A 2025 study analysing privacy policies from 17 leading wearable manufacturers found concerning gaps in transparency, particularly around encryption and data sharing practices. Xiaomi, Wyze, and Huawei scored highest for privacy risk, whilst Google, Apple, and Polar ranked lowest. But even the best performers collect staggering amounts of biometric data, and what happens to that data isn’t always clear.
Children’s wearables are particularly concerning. Devices marketed to minors frequently collect geolocation, audio, and health data with minimal parental awareness or consent, and in some cases have been found to act as surveillance devices.
The fundamental tension is that for AI to be helpful, it needs context. But context requires constant monitoring. And constant monitoring is surveillance.
The Humane AI Pin as Cautionary Tale
The most instructive example might be the Humane AI Pin, which launched in April 2024 with enormous hype and died in February 2025 with barely a whimper.
Created by former Apple designers, the Pin was marketed as a privacy-conscious, screenless AI assistant that would free users from their phones. It clipped onto your clothing, had a built-in camera and projector, and promised to handle everything through voice commands.
It was also, by nearly universal consensus, terrible. It couldn’t set timers. It frequently gave wrong answers. It required its own mobile plan and gave you a separate phone number. The battery died within hours. Returns eventually outpaced sales. In February 2025, HP acquired Humane and promptly bricked all existing AI Pins, deleting all user data.
The Humane AI Pin failed because it tried to replace smartphones without being better than smartphones. But it also highlighted a deeper problem. We don’t actually want to be constantly monitored, even if the monitoring is supposedly private.
Meta’s Strategy of Normalisation
Meta’s strategy has been more successful, precisely because it doesn’t ask you to give up anything. Ray-Ban smart glasses look like normal glasses. They’re stylish, comfortable, and useful. You can take photos, make calls, listen to music, and get AI assistance whilst looking like a regular person wearing regular glasses.
But here’s what makes this approach more insidious. By normalising cameras and microphones on people’s faces, Meta is fundamentally changing social expectations about privacy and consent.
When someone wearing Ray-Ban smart glasses looks at you, are they just looking, or are they recording? Are they running facial recognition? Asking their AI assistant about you? You have no way to know. Half the time, neither do they, because the AI is always listening, always watching, always ready.
The question isn’t whether someone is recording you. It’s whether you’ll ever know if they are.
The Health Data Gold Mine
Perhaps the most valuable and vulnerable data comes from health wearables. Healthcare data records are worth significantly more on the black market than payment card information. They contain comprehensive personal information about your physical condition, your habits, your vulnerabilities.
Insurance companies could use health data to risk-profile individuals, potentially leading to higher premiums. Employers could access data reflecting negatively on candidates’ health or productivity. And fitness tracker companies aren’t bound by medical privacy laws in many jurisdictions, meaning they can legally sell your health data to third parties.
The Apple Watch collects data on your heart rate, blood oxygen, sleep quality, menstrual cycles, and even detects falls. It’s genuinely helpful for health monitoring. But it’s also creating a detailed profile of your physical state that exists forever, somewhere, in someone’s database.
The Transparency Problem
The biggest issue isn’t that AI wearables collect data. It’s that users have almost no idea what’s being collected, how it’s being used, or who has access to it.
Privacy policies are dense legal documents that most people never read properly. Most of us just click “agree” and hope for the best.
There’s a pattern here: tech companies collect as much data as legally possible, stay as vague as legally permitted about what they do with it, and gradually erode user control over their own information.
Where Does This Leave Us?
AI wearables represent genuine technological progress. Real-time translation, health monitoring, hands-free assistance. These are legitimately useful capabilities that improve people’s lives, but they also represent an unprecedented level of intimate surveillance. We’re willingly strapping devices to our bodies that monitor our every move, analyse our health, record what we see, and listen to what we say, all in exchange for convenience.
The uncomfortable truth is that both things can be true simultaneously. AI wearables are exciting because they work. And they’re invasive because they work.
What we desperately need, and largely don’t have, are robust regulations that give users meaningful control over their data. Not lengthy privacy policies, but simple, enforceable rules about what can be collected, how it can be used, and who can access it.
Existing regulations are steps in the right direction, but they’re reactive measures in a rapidly evolving landscape. By the time regulation catches up, the surveillance infrastructure will already be normalised.
The question isn’t whether AI wearables will become mainstream. They already are. The question is whether we’ll build them in a way that respects privacy and autonomy, or whether we’ll wake up in five years and realise we’ve voluntarily constructed the most comprehensive surveillance system in human history.
Right now, it could go either way. And that’s precisely what makes this moment so important, and so unsettling.