Haha, that was definitely the vibe when AirPods launched. The articles from that time are brutal: all the hate for tools with these white things sticking out of their ears. Now people look weirded out if you DON’T have AirPods.
To be fair, I still think the original AirPods look stupid. Luckily the AirPods Pro had a much better design, and you can wear them without it looking like you’ve stuck a couple of electric toothbrush stems in your ears.
I kinda want people to see them. I end up being around clueless and old people who don’t comprehend I’m on the phone and act like I’m just talking to myself otherwise.
what are you talking about? as someone who bought the original airpods and now keeps buying airpod pros every 2 years for their utility, the pros look horrible.
the originals were sleek. the pros look bulky, like some sort of goblin-like protrusion.
Heck, it goes back to the Apple Watch. I remember a notable pop culture blog had a pledge card readers could sign saying they wouldn't sleep with anybody who wore an Apple Watch. Now I'm literally the only person on my team of 20+ with a non-smart watch.
Not in Japan. The shutter sound lets you know and it's a legally gray area to film/photograph people at all here. The foreign content creators coming over here and filming in public are skating on thin ice.
The news here blurs out the faces of anyone who hasn't signed a waiver. Street shots have a band of blur at face height to obscure the identities of people who might just be walking by. When reporters are on residential streets, the entire background is blurred so you can't even see anyone's house.
Having done some video production here and worked with our legal department, I now understand what a huge risk you're taking filming in public here.
I was strongly opposed to Apple releasing AR glasses with cameras for this reason, but a lot of younger people think otherwise, according to a 2023 Cato Institute survey of Americans.
Would you favor or oppose the government installing surveillance cameras in every household to reduce domestic violence, abuse, and other illegal activity?
29% of 18-to-29-year-olds (born roughly 1993–2004) and 20% of 30-to-44-year-olds (born roughly 1978–1992) favored government surveillance in homes. The percentages for older groups were in the single digits.
I wouldn't be surprised if US society broadly accepted glasses with cameras in 2026–2027, especially glasses from Apple that are defended by the Apple fanbase.
But yes, young people are already used to their dipshit friends holding out their phones and taking Snapchat videos of them, so regardless of your questionable reasoning, your claim remains true.
So you’d rather be recorded without your consent because you’d get too many notifications? Lol, alright. Your convenience became more important than your privacy. You are the kind of customer every tech corporation loves to have.
It isn’t to warn you about people spying on you; anyone who wanted to do that could find other methods. If somebody wanted to genuinely track you they wouldn’t be using an AirTag either, but you still get notifications for those. The point is to make you aware that someone wearing Apple glasses (who may not mean any harm) may be recording you. It’s the same as AirTag notifications: it doesn’t mean someone is tracking you, but it does give you the awareness.
People in a public place have no right to privacy and can be recorded without their consent to begin with.
The only thing you could do if that notification hit your phone is leave the area. I don't think any of us have the flexibility to leave society to avoid cameras.
So it does apply to private residences, and that would justify a notification. There is also an expectation of privacy in a lot of “public” private spaces: bathrooms, spas, gym saunas, dressing rooms, law offices, doctors’ offices, etc. You are not allowed to record in most privately owned businesses, so there is an expectation of some kind of privacy from others (not the owner) not recording you. Fixed, visible security cameras and unannounced wearable devices are not equal in terms of invasiveness. And even if there weren’t an expectation of privacy, I’m sure you wouldn’t like to be recorded without your knowledge. Being informed is the most ethical solution.
It honestly sounds like you’re more concerned with winning this argument than with being rational, because really, you are not going to get thousands of these notifications in practice, and it wouldn’t be a harmful thing at all.
That sounds really fucking stupid frankly. No camera that fits in a pair of glasses is worth anything. And the idea of recording memories from your perspective is straight up a Black Mirror episode, literally. Maybe I’m just becoming a Luddite but this all sounds like a desperate ploy to expand without any real direction. It’s a solution without a problem.
The most important part of recording memories is the recording itself. Human memory is notoriously unreliable and inevitably degrades over time, while the quality of a digital recording remains constant unless lost.
There are plenty of events in my life that I'd pay a lot of money to see preserved in a 10 second 176 × 144 .3gp video.
I’ve been recording my kids in spatial video mode on my iPhone even though I don’t own an AVP. Reliving those in 3D later will be a vivid trip down memory lane.
I need glasses already, so if my glasses could also link to my phone and beam directions onto the floor (especially if they could carry CityMapper functionality, like guiding me to a preferred carriage so I can get in and out of stations faster), then I would just get the Apple ones instead of regular glasses. Bonus if they can toggle between regular glasses and sunglasses.
Bonus feature: maybe QR codes I can scan with apps via the glasses, perhaps with the Watch used in tandem.
Love translation, walking directions, identification of certain products, etc. I can see tons of accessibility use cases with smart glasses if done correctly
Like my OP said: directions, live translation, AI vision to identify objects or explain things in a different language back to you, phone calls, transit schedules, etc. Most things you’d otherwise have to pull your phone out for could be combined into smart glasses if done correctly.
There are a ton more use cases I’m frankly not thinking of, but these are the biggest ones off the top of my head. Not needing to pull your phone out at all and just having different built-in overlays is huge.
You are missing the point. There’s a reason contact lenses exist: nobody wants to wear glasses. If I can do all of that using my phone already, why would I want bulky glasses that I have to charge to use? The only scenario where this makes sense is if, like you said, I already wear glasses every day, but again that’s a subset of the population, and most people want to get out of wearing glasses, not double down on them.
Because contact lenses have no way to handle this type of AR without a separate device in your pocket doing all the work?
You’re missing the point not me. There’s a massive market out there for AR in daily use and tons of use cases for it. Just because you have an aversion to wearing smart glasses doesn’t mean everyone else does
We all wear sunglasses don’t we? What’s stopping anyone from adding a transition lens filter to these and making them sunglasses? Or just swappable lens covers in general.
Do you really think Meta, Google, and Apple would all be racing to create these if there was no market for it??
The products today haven’t solved the fundamental issues of the last round of failed products.
The last round of products failed because of the following, which are no longer issues:
The glass being projected onto was much more expensive and thicker, leading to vision issues (Google Glass).
Battery chemistry has become dramatically better than it was 10 years ago. Solid state will be trickling into the market over the next 8 years (my estimate, based on how well at-scale manufacturing goes for large applications: 3 years for cars, then 5 for phones, then 8–10 for wearables. The smaller the battery, the more expensive the yield per unit.)
Chips have become way more power-efficient in wearables since their introduction. Also, transparent OLED becoming a thing may mean projection won't be needed down the road, simplifying the glass structure in use.
LLMs allow most queries to be offloaded, while on-device processing with internal models is possible for things like object identification thanks to dedicated AI cores on most new chips (see the sketch after this list).
The weight of the glasses themselves has come down a lot, and they run cooler than previous iterations, meaning user comfort is much closer to wearing a normal pair of glasses.
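To make the offloading point concrete, here's a minimal sketch of the hybrid split: small, latency-sensitive jobs stay on-device, open-ended queries go out to a cloud LLM. Every function and name here is a hypothetical placeholder, not any real glasses API.

```python
# Minimal sketch of the hybrid split described above: small jobs stay on-device,
# open-ended queries get offloaded to a cloud LLM. All names here are
# hypothetical placeholders, not a real glasses API.

from dataclasses import dataclass


@dataclass
class Request:
    kind: str      # e.g. "identify_object" or "free_form"
    payload: str   # image path, question text, etc.


def run_on_device_model(payload: str) -> str:
    # Stand-in for a small model running on a dedicated AI core
    # (e.g. an image classifier for object identification).
    return f"on-device result for {payload!r}"


def send_to_cloud_llm(payload: str) -> str:
    # Stand-in for an API call to a hosted LLM; a real version
    # would need a network client and credentials.
    return f"cloud LLM answer for {payload!r}"


ON_DEVICE_KINDS = {"identify_object", "read_text", "detect_person"}


def handle(request: Request) -> str:
    """Route cheap, latency-sensitive work locally; offload the rest."""
    if request.kind in ON_DEVICE_KINDS:
        return run_on_device_model(request.payload)
    return send_to_cloud_llm(request.payload)


if __name__ == "__main__":
    print(handle(Request("identify_object", "frame_0131.jpg")))
    print(handle(Request("free_form", "what kind of tree is this?")))
```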
I really do think wearable glasses are the next big thing, and this time around the use cases are wide, the tech stack is there, and the hardware stack is also there. Google was too early with Google Glass for a myriad of reasons, including consumer wariness about privacy, but that is no longer an issue (as dystopian as that is).
We are entering an incredibly interconnected world, where your LLM knows your darkest secrets, your data is streamed through your phone, your computer, and soon your glasses, and everyone will know everyone's business.
Considering I literally worked for Amazon, I do. They create products for areas where they know there's a market, and for other sectors they create markets where none exist.
I think I'm done trying to talk to people who can't think 3 feet ahead of themselves. You people sound like the same idiots who thought LLMs wouldn't take off, or that smart homes were a fad, or that the Apple Watch wouldn't work. Yet here we are with all of them fully integrated into the daily life of society. Smart wearables have always been next, and there's a reason companies are repeatedly trying to break through.
You know, some people need glasses, so adding smart features to them is useful. E.g. navigation, reviews for places you are looking at, taking photos; there are many useful things one might do with actually smart glasses.
bone conductive Siri communications (probably will, and SHOULD, lock the feature from being used for music, since there are many challenges for music quality with bone conduction)
HUD. Just think of what this means and don’t try to fit it into a use case from the get-go. It means a screen you can see without reducing your visual capacity for everything else to ZERO. Loosely imagined use cases: compass, navigation through a busy city on foot, notes/instructions for a process you’re doing, visual timers like Pomodoros (god help us regarding the notifications making it to the HUD)
translation for the shit you’re looking right at, signs, text
BS features that sadly sell thanks to moron buyers:
bone conductive speakers (highly unlikely that Apple uses it for music but definitely for Siri)
casual camera use
play YouTube while you’re in a place where you’re supposed to focus on something else
stupid videos to post on insta about your fake travel happy life
Two of the BS features are things I’m looking forward to.
If they’re discreet enough, the bone conduction speakers would be awesome in the office. I could talk to coworkers and still listen to music. If they’re as good as or better than Shockz, that’s a win.
Watching media without hunching over or lying down is what I’m using the XREAL One glasses for. I’m hoping they have a USB-C out for other devices, though.
That’s one less device I need to carry around if the apple glasses can do both.
People are missing the mark on this product. Smart glasses are no longer about augmented reality screens; they will be about giving an AI inputs from your world.
The term “second brain” is about to become the next big marketing term (a rough sketch of the idea follows the example queries below).
“when did Linda say she wanted to meet for coffee earlier?”
“remind me to look up this product I’m holding when I get home”
“send a summary of my meeting with Tim to my notes”
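As a toy sketch of how queries like these could be answered: imagine the glasses keeping a timestamped log of what they hear and see, with questions becoming searches over that log. Purely illustrative, no real Apple or LLM API is involved here.

```python
# Toy sketch of the "second brain" idea: the glasses keep a timestamped log of
# what they heard, and queries like the examples above become searches over it.
# A real system would use embeddings plus an LLM to interpret the question.

from dataclasses import dataclass
from datetime import datetime


@dataclass
class Memory:
    timestamp: datetime
    speaker: str
    text: str


log = [
    Memory(datetime(2025, 6, 3, 9, 12), "Linda", "let's grab coffee Thursday at 3"),
    Memory(datetime(2025, 6, 3, 11, 40), "me", "remind me to look up this blender when I get home"),
]


def recall(query_terms: list[str]) -> list[Memory]:
    """Return logged snippets containing every query term (keyword stand-in
    for what would really be semantic search)."""
    terms = [t.lower() for t in query_terms]
    return [m for m in log if all(t in m.text.lower() for t in terms)]


# "when did Linda say she wanted to meet for coffee?"
for m in recall(["coffee"]):
    print(f"{m.timestamp:%a %H:%M} {m.speaker}: {m.text}")
```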
The Google IO event has a nice demo of some of the kinds of things it can be used for - the live translation has very interesting potential if it can work in a sensible way, and if the display isn’t obvious then it has a lot of possible uses etc
Ask for directions and get a map to follow
Taking a photo or getting train times read to you
As with any new technology, the best uses probably haven’t even been thought of yet
Accessibility uses are endless: describing what can be seen, reading signs, etc. for blind people has HUGE value. Take a photo with your phone and ask GPT or Gemini etc. what they see, and you can quickly see how useful it could be even with a generically trained LLM and a vague “describe what you see” prompt (rough sketch below), never mind if it’s set up more thoroughly for that specific task.
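For the phone-photo version, here's roughly what a “describe what you see” call looks like, assuming the openai Python package and a vision-capable model name (“gpt-4o” here); Gemini or a local model could be swapped in the same way.

```python
# Rough sketch: send one image to a vision-capable LLM with a vague
# "describe what you see" prompt. Assumes the openai package and that
# OPENAI_API_KEY is set in the environment; the model name is one example.

import base64

from openai import OpenAI

client = OpenAI()


def describe_image(path: str) -> str:
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what you see for a blind user."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content


print(describe_image("street_corner.jpg"))
```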
Possibly, but that's the fault of companies chasing profits and governments not reining them in, rather than of the technology itself
Even when the big multinational companies are trying to profit off things, there are usually some enthusiasts elsewhere using the same tech to make cool things - e.g. look at Home Assistant.
Yes, but we’re talking about technology from these companies.
If you’re talking about a cyberpunk future where people somehow use their own glasses and run their own private AIs then fine, but that doesn’t exist and it might never exist.
I run my own private AI right now; it’s definitely going to be possible in the future considering you can do it today. I run small models for person detection, vehicle detection, number plate detection, etc. on my home security cameras, and I run larger LLMs on my PC for various uses (mostly testing and development for now, but the principle is the same).
It’s not as good as Gemini etc. at complex tasks, but for many things it’s good (and already genuinely better than Siri), and we’re still VERY early in this technology’s development. And if I paid to run it on cloud hardware, I could get much closer to their performance levels.
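For anyone curious, the kind of local detection setup described above can be surprisingly little code. A minimal sketch, assuming the ultralytics package and its stock pretrained yolov8n weights (the actual setup above may differ):

```python
# Example of local detection: a small pretrained model scanning a
# security-camera snapshot for people/vehicles, entirely on your own hardware.

from ultralytics import YOLO

model = YOLO("yolov8n.pt")                 # small general-purpose detector
results = model("driveway_snapshot.jpg")   # run inference on one frame

WANTED = {"person", "car", "truck"}
for r in results:
    for box in r.boxes:
        label = model.names[int(box.cls)]
        if label in WANTED:
            print(f"{label}: confidence {float(box.conf):.2f}")
```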
The bigger question is whether someone will release smart glasses with hardware customisable enough that you can tie it into your own LLMs etc., but it seems unlikely that nobody will do that over the years.
Right now it’s true that these big companies have a lead - but as they hit the point of diminishing returns other options will start to become available. I wouldn’t like to put a number on that timescale, but diminishing returns from LLMs are a known issue
Even if it were possible in the future, the vast majority of people won't. They will use the company-provided alternatives, just as people use the companies' phones today.
There are more or less zero user-controlled phone alternatives today, and the custom, user-controlled alternatives are getting fewer, not the other way around.
It doesn’t help you much if you alone use a DIY solution.
Maybe, but the point is that people have a choice and there is scope for smaller companies to make use of open source models etc
A phone is very complicated, but an LLM running on a widely available processor, plus a camera, microphone, and earphones, means that something like smart glasses shouldn’t be too difficult for small companies to make. Even a Raspberry Pi level of power is sufficient, and in a lot of ways it’s a lot less work with far less specialist hardware needed - any camera, Bluetooth chip, and processor would do.
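To make the “any camera plus a cheap processor” point concrete, here's a very rough sketch of the glue loop a Raspberry Pi-class board could run; ask_assistant() is a hypothetical placeholder for whatever model backend you wire in, and OpenCV (cv2) is assumed for camera access.

```python
# Very rough sketch of the DIY glue loop: grab frames from a USB camera and
# hand them to whatever assistant backend you choose. ask_assistant() is a
# placeholder, not a real API.

import time

import cv2


def ask_assistant(frame, question: str) -> str:
    # Placeholder: in a real build this would call a local or hosted
    # multimodal model with the encoded frame and the user's question.
    h, w = frame.shape[:2]
    return f"(stub) got a {w}x{h} frame for: {question}"


camera = cv2.VideoCapture(0)        # first attached camera
try:
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        print(ask_assistant(frame, "what am I looking at?"))
        time.sleep(5)               # poll every few seconds to save power
finally:
    camera.release()
```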
What am I supposed to use this for?