Gemini needs a sidekick for Android XR to work, and it could be on your wrist

Wear OS Weekly

My weekly column focuses on the state of Wear OS, from new developments and updates to the latest apps and features we want to highlight.

Wear OS (then called Android Wear) and Google Glass both went public in 2014, but they had nothing to do with one another. Now, Google is expected to show off its new AR glasses at I/O 2025 later this month. But this time, Google would be smart to make Android watches part of its XR experience. Yes, really!

Google I/O 2025 starts on May 20, but Google and Samsung have been showing off Android XR demos for months, starting with Samsung's Project Moohan XR headset and most recently at an XR TED Talk with the “Project HAEAN” AR glasses that record and memorize what you see.

Google has no Wear OS panels planned for I/O, but it will hold two Android XR panels. One’s focused on the Android XR SDK and AI tools, while the other centers on building AR apps by adding “3D models, stereoscopic video, and hand-tracking” to existing Android apps.

In other words, it’s a reasonable bet that we’ll see AR glasses tech on the I/O stage. Since I’ll be attending I/O, I hope I’ll finally have a chance to demo them and see how well the Gemini commands and gesture controls work.

But having watched Marques Brownlee’s Android XR demo and read The Verge’s writeup of its AR glasses demo, I can already tell that no matter how well they work, voice and gesture controls alone aren’t going to cut it for Android XR. And Android smartwatches are the obvious backup choice.

Outdoor use cases, indoor controls

(Image credit: Android Central)

I tend to use controllers when I play VR games on my Meta Quest 3, but every time I’ve used hand tracking on a Quest, Apple Vision Pro, Snap Spectacles, and other XR devices, my response is usually, “Wow, this almost works!”

In a room that’s too dark or in direct sunlight, the inside-out camera tracking will struggle to capture your hand gestures properly. In ideal lighting, with your hand always held up in the camera’s view, you can pinch to select menu options with reasonable accuracy. But I still expect missed inputs and prefer the simplicity of a controller.

Now picture me using these glasses outdoors, where those deliberate, unnatural gestures might make passersby think I’m gesturing at them, or just that I’m a weirdo.

Smart glasses are meant to blend in, but this is a double-edged sword; calling attention to the fact that I’m wearing tech will only bring back the “Glasshole” problem and make people uncomfortable. (Maybe they’ll be called X-aRseholes?)

Google’s Project HAEAN Android XR glasses demo (Image credit: Gilberto Tadday / TED)

Gemini voice commands are a more seamless fit. The demos I’ve seen show that Gemini can carry out actions reliably after a few seconds of processing. In the multimodal Live mode, you simply point at or focus on something and Gemini answers your question about it, no controller required.

But with my Ray-Ban Meta smart glasses, even for something as simple as asking Meta AI to take a photo, I (again) only really talk to the assistant when no one’s around.

Google likes the idea of people talking freely to AR glasses at any time. And sure, maybe they’ll become ubiquitous enough that public AI chats are socially acceptable. But if I’m on public transit, in an office, or at the grocery store, I might ask the occasional quiet question; mostly, I’d much rather have a less disruptive, non-spoken alternative.

Maybe you’re less concerned about societal norms than I am. You’ll still have to worry about ambient noise disrupting commands or accidentally triggering Gemini. And there are always a few seconds of waiting for Gemini to process your request, and trying again if it gets it wrong, while tapping buttons feels more immediate.

(Image credit: Meta)

When Meta designed its Orion AR glasses, it also created an sEMG neural band that recognizes finger gestures so you can subtly trigger actions without vocalizing or keeping your hands in view. Meta knew this problem would need to be solved at some point to make AR glasses viable.

But in Google and Samsung’s case, they already have ready-made wearables with input screens, gesture recognition, and other tools that’d mesh surprisingly well with smart and AR glasses.

Why Wear OS and Android XR should sync

(Image credit: Andrew Myrick / Android Central)

We mostly use Android watches to check notifications, track workouts, and ask Assistant questions. But they can also trigger actions on other devices: taking a photo, unlocking your phone via UWB, toggling Google TV controls, checking your Nest Doorbell feed, and so on.

Imagine if Wear OS had an Android XR mode. It could still show phone notifications, but its display (when tilted to wake) would mirror whichever app you have open on your glasses. Contextual actions like video playback controls, taking a photo, or pausing a Gemini Live chat would trigger immediately with a tap.
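None of this exists today, but some of the watch-side plumbing already does. As a rough sketch, here's how a Wear OS app could fire a contextual action at a paired device over the existing Wearable Data Layer; the message path and the idea that glasses would even register as a Data Layer node are my assumptions, not anything Google has announced.

```kotlin
import android.content.Context
import com.google.android.gms.wearable.Wearable

// Hypothetical message path; a real Android XR "remote control" contract
// would be defined by Google, not by a watch app.
private const val XR_ACTION_PATH = "/xr/control/action"

/**
 * Sends a one-shot control action (e.g. "play_pause", "take_photo",
 * "pause_gemini_live") to every connected Data Layer node.
 *
 * Assumption: the glasses or headset shows up as a connected node the way
 * a paired phone does today. That part is pure speculation.
 */
fun sendXrAction(context: Context, action: String) {
    val messageClient = Wearable.getMessageClient(context)
    Wearable.getNodeClient(context).connectedNodes
        .addOnSuccessListener { nodes ->
            nodes.forEach { node ->
                messageClient.sendMessage(node.id, XR_ACTION_PATH, action.toByteArray())
            }
        }
}
```

The glasses side would just need a listener that maps those paths onto whatever app is currently in focus, which is exactly the kind of glue Google and Samsung would have to build.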

Even better, imagine if you could twist the Pixel Watch 3’s crown or Galaxy Watch 8 Classic’s rotating bezel like a scroll wheel in menus or browsers, specifically affecting whichever window you’re looking at. That sounds much better than pinching and flicking your hand over and over!
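Again, purely a sketch: Wear OS already exposes crown and bezel rotation to apps as rotary input, so a watch-side controller could capture those ticks and forward them as scroll deltas. In Compose for Wear OS that looks roughly like this, where forwardScrollToGlasses is a stand-in for whatever transport Google would actually use:

```kotlin
import androidx.compose.foundation.focusable
import androidx.compose.runtime.Composable
import androidx.compose.runtime.LaunchedEffect
import androidx.compose.runtime.remember
import androidx.compose.ui.Modifier
import androidx.compose.ui.focus.FocusRequester
import androidx.compose.ui.focus.focusRequester
import androidx.compose.ui.input.rotary.onRotaryScrollEvent
import androidx.wear.compose.material.Text

@Composable
fun XrScrollSurface(forwardScrollToGlasses: (Float) -> Unit) {
    val focusRequester = remember { FocusRequester() }

    Text(
        text = "Scrolling the focused XR window",
        modifier = Modifier
            // Crown/bezel rotation arrives as rotary scroll events on the
            // focused element; forward the delta instead of scrolling locally.
            .onRotaryScrollEvent { event ->
                forwardScrollToGlasses(event.verticalScrollPixels)
                true // consume the event
            }
            .focusRequester(focusRequester)
            .focusable()
    )

    // Rotary events only reach a focused node, so request focus once.
    LaunchedEffect(Unit) { focusRequester.requestFocus() }
}
```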

Galaxy Watches support a few basic gestures like double taps and knocking, and I wonder if these could reinforce Android XR controls, offering a second signal that you want to select or move something, even if the camera misses the input.

(Image credit: Nicholas Sutrich / Android Central)

I’d generally feel more excited about AR glasses if I knew I had a tactile backup option to voice commands, even if Gemini and hand gestures are the primary, expected control schemes. The only question in my mind is whether Google can make Wear OS work as a controller.

This patent site spotted Samsung patents for using a smartwatch or smart ring for XR controls, though the article is painfully vague on details, except to say that the emphasis was more on the Galaxy Ring than the Galaxy Watch.

It’s evidence, at least, that Samsung’s engineers are looking for alternative XR control schemes. The Project Moohan XR headset may ship with controllers, but the eventual goal is to sell all-day smart glasses and AR glasses; those require a more subtle and consistent control scheme than gestures and commands — at least in my opinion.

I understand why Samsung’s first instinct would be to use smart rings as controllers; they’re seamless and don’t have a separate OS to worry about. But until I hear otherwise, I’ll keep arguing that Wear OS would be a better fit and more useful!
