Watching the announcement, every feature felt like something my phone already does—better.
With glasses, you have to aim your head at whatever you want the AI to see. With a phone, you just point the camera while your hands stay free. Even in Meta’s demo, the presenter had to look back down at the counter because the AI couldn’t see the ingredients.
It feels like the same dead end we saw with Rabbit and the Humane pin—clever hardware that solves nothing the phone doesn’t already do. Maybe there’s a niche if you already wear glasses every day, but beyond that it’s hard to see the case.
If executed well, I think this could remove a lot of friction from the process. I can definitely unlock my phone and hold it in one hand while I prep and cook, but that's annoying. If my glasses could monitor progress and tell me what to do, and with what, while I'm doing it, that's far more convenient. It's clearly not there yet, but in a few years I have no doubt it will be. And this is just the start. Once the glasses have displays, they'll be able to offer AR. Imagine working on electronics or a car with the instructions overlaid on the display while the AI talks you through each step.
I'm oldish, so maybe I'm biased, but this sort of product seems like something almost no one outside a few technophiles will want, yet something the industry desperately needs you to want. It's like 3D TV: a solution in search of a problem, because manufacturers need the next big thing with its associated high margins.
To me the phone is a pretty good form factor. Convenient enough (especially with voice control), unobtrusive, socially acceptable, and I need to own one anyway because it's a phone. I'm a geek, so I think this tech is cool, but I see zero chance I would use one, even if it were a few steps better than it is.
I get why it feels bleak—low-effort AI output flooding workflows isn’t fun to deal with. But the dynamic isn’t new. It only feels unprecedented because we’re living through it. Think back: the loom, the printing press, the typewriter, the calculator.
When Gutenberg’s press arrived, monks likely thought: “Who would want uniform, soulless copies of the Bible when I can hand-craft one with perfect penmanship and illustrations? I’ve spent my life mastering this craft.”
But most people didn’t care. They wanted access and speed. The same trade-off shows up with mass-market books, IKEA furniture, Amazon basics. A small group still prizes the artisanal version, but the majority just wants something that works.
I'm not sure it's so much that most people don't care as that hand-crafted items are more expensive. As evidence of popular interest, "craftwashing"[1] mass-produced goods with terms like "artisanal" and "small-batch" can be an effective marketing strategy. To stick with the Bible example, a facsimile of the 1611 King James still commands a hefty premium[2] over a regular printing. Or for paintings: who would prefer a print over the original?
There's also the "Cottagecore" aesthetic that was popular a few years ago, which is conceptually similar to the Arts and Crafts movement or the earlier Luddites.
It's been interesting reading this thread and seeing that others have also switched to using Codex over Claude Code. I kept running into a huge issue with Claude Code creating mock implementations and general fakery when it was overwhelmed. I spent so much time tuning my input prompt just to keep it from making things worse that I eventually switched.
Granted, it's not an apples-to-apples comparison since Codex has the advantage of working in a fully scaffolded codebase where it only has to paint by numbers, but my overall experience has been significantly better since switching.
Other systems don't have a bespoke "planning" mode, and there you need to "tune your input prompt" because they rush straight into implementation, guessing at what you wanted.
"As users experience heightened excitement during intimate encounters, the contained insects will occasionally emerge to stimulate sensory receptors, amplifying pleasure through sheer surprise."
They’ve poured effort into replicating the feel of a notebook: restricted toolset, textured screen, stylus handwriting, etc., but I'm at a loss why this is worth hundreds of dollars plus a subscription instead of just using paper notebooks.
- Paper-like feel? Actual paper still wins.
- Undo, folders, search, tags? Flipping through a notebook and adding sticky notes gets you there faster.
- Templates? A $10 pad of graph or dotted paper gives infinite variety.
Handwriting-to-text and cloud sync are perhaps the strongest case, but even there it's probably faster to draft on paper and digitize with a keyboard or speech.
Does it have a lock screen, and is it reasonably secure? That would be the only incentive for someone who wants to keep handwritten notes without having to lock the notebook in a safe every time it's left unattended.
I'm not thinking of security against a state actor, just people in the same household or office who might have too much curiosity.
I bought a Remarkable and ended up returning it. It was a cool device, but you're exactly right, I had a hard time justifying the cost over a $10 notebook.
I built a small plugin that sends you to your reading list whenever you open a common time-sink site. It’s been a simple but effective way to avoid slipping into distraction.
I ran into the same problem—my reading list kept growing but I never actually got through it. Feeds are engineered to feel effortless; opening my backlog felt like work.
Instead of blocking sites outright, I tried redirecting attention at the key moment. I built a small extension that sets a daily reading goal, then reroutes me from doomscrolling sites until I hit it. After that, I can browse freely. It’s been a better balance: turning the feed’s habit loop into a nudge for something I actually want to do.
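For anyone wanting to build something similar, here's a minimal sketch of the core redirect logic as a Manifest V3 content script. The site list, goal count, storage key, and reading-list URL are all illustrative placeholders, not the actual extension's code (and it would need @types/chrome to compile):

```typescript
// content-script.ts - hypothetical sketch, injected on the listed sites
// via the manifest. Names like DAILY_GOAL and "readingProgress" are
// made up for illustration.

const TIME_SINKS = ["news.ycombinator.com", "twitter.com", "reddit.com"];
const DAILY_GOAL = 3; // articles to read before feeds are unlocked

async function maybeRedirect(): Promise<void> {
  if (!TIME_SINKS.some((host) => location.hostname.endsWith(host))) return;

  // chrome.storage.local survives restarts; keying the counter by date
  // means it resets naturally each morning, no background alarm needed.
  const key = `readingProgress:${new Date().toDateString()}`;
  const stored = await chrome.storage.local.get(key);
  const readToday: number = stored[key] ?? 0;

  if (readToday < DAILY_GOAL) {
    // Send the user to their backlog instead of the feed.
    location.replace("https://example.com/my-reading-list");
  }
}

maybeRedirect();
```

The reading-list page would increment the counter as items get marked read; once the goal is hit, the script stops rerouting and the feeds open normally.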
Turns out you can just click and drag to select everything in Minesweeper, and it reveals all the hidden numbers. There’s even a sneaky little “debug” text in the bottom-left corner that shows where all the bombs are.
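If it's a DOM-based web version, both leaks share the same root cause: state that shouldn't be visible is shipped to the client anyway. A hypothetical fix, assuming a typical cell model and renderer (not the actual game's code), is to only put the number in the DOM once the cell is revealed, and to strip the debug overlay before shipping:

```typescript
// Hypothetical DOM renderer for a cell. The bug pattern: the
// adjacent-mine count was always present in the DOM, merely styled
// as hidden, so select-all exposed it as selected text.

interface Cell {
  revealed: boolean;
  adjacentMines: number;
}

function renderCell(cell: Cell): HTMLElement {
  const el = document.createElement("div");
  el.className = "cell";
  // Nothing secret enters the DOM until the cell is actually revealed.
  el.textContent = cell.revealed ? String(cell.adjacentMines) : "";
  // Belt and braces: disable text selection on the board.
  el.style.userSelect = "none";
  return el;
}
```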