Meta’s Smart Glasses Are Becoming Artificially Intelligent. We Took Them for a Spin.
In a sign that the tech industry keeps getting weirder, Meta soon plans to release a big update that transforms the Ray-Ban Meta, its camera glasses that shoot videos, into a gadget seen only in sci-fi movies.
This month, the glasses will be able to use new artificial intelligence software to see the real world and describe what you’re looking at, similar to the AI assistant in the movie “Her.”
The glasses, which come in various frames starting at $300 and lenses starting at $17, have mostly been used for shooting photos and videos and listening to music. But with the new AI software, they can be used to scan famous landmarks, translate languages and identify animal breeds and exotic fruits, among other tasks.
To use the AI software, wearers just say, “Hey, Meta,” followed by a prompt, such as “Look and tell me what kind of dog this is.” The AI then responds in a computer-generated voice that plays through the glasses’ tiny speakers.
The concept of the AI software is so novel and quirky that when we — Brian X. Chen, a tech columnist who reviewed the Ray-Bans last year, and Mike Isaac, who covers Meta and wears the smart glasses to produce a cooking show — heard about it, we were dying to try it. Meta gave us early access to the update, and we took the technology for a spin over the past few weeks.
We wore the glasses to the zoo, grocery stores and a museum while grilling the AI with questions and requests.
The upshot: We were simultaneously entertained by the virtual assistant’s goof-ups — for example, mistaking a monkey for a giraffe — and impressed when it carried out useful tasks such as determining that a pack of cookies was gluten-free.
A Meta spokesperson said that because the technology was still new, the artificial intelligence wouldn’t always get things right, and that feedback would improve the glasses over time.
Meta’s software also created transcripts of our questions and the AI’s responses, which we captured in screenshots. Here are the highlights from our month of coexisting with Meta’s assistant.
Pets
Asked what kind of dog it was looking at, the assistant said: “A cute Corgi dog sitting on the ground with its tongue out.” Correct, especially the part about being cute.
Zoo Animals
The AI was wrong the vast majority of the time, in part because many animals were caged off and far from the camera. It mistook a primate for a giraffe, a duck for a turtle and a meerkat for a giant panda, among other mix-ups. On the other hand, I was impressed when the AI correctly identified a species of parrot known as the blue-and-gold macaw, as well as zebras.
The strangest part of this experiment was speaking to an AI assistant around children and their parents. They pretended not to listen to the only solo adult at the park as I seemingly muttered to myself.
Food
When Meta’s AI worked, it was charming. I picked up a pack of strange-looking Oreos and asked it to look at the packaging and tell me if they were gluten-free. (They were not.) It answered questions like these correctly about half the time, though I can’t say it saved time compared with reading the label.
But the entire reason I got into these glasses in the first place was to start my own Instagram cooking show — a flattering way of saying I record myself making food for the week while talking to myself. These glasses made doing so much easier than using a phone and one hand.
The AI assistant can also offer some kitchen help. If I need to know how many teaspoons are in a tablespoon and my hands are covered in olive oil, for example, I can ask it to tell me. (There are three teaspoons in a tablespoon, just FYI.)
But when I asked the AI to look at a handful of ingredients I had and come up with a recipe, it spat out rapid-fire instructions for an egg custard — not exactly helpful for following directions at my own pace.
Having a handful of recipe options to choose from might have been more useful, but that would probably require tweaks to the user interface and maybe even a screen inside my lenses.
A Meta spokesperson said users could ask follow-up questions to get tighter, more useful responses from the assistant.
Monuments and Museums
Landmark recognition was hit or miss. As I drove home from the city to my house in Oakland, I asked Meta what bridge I was on while looking out the window in front of me (both hands on the wheel, of course). The first response was the Golden Gate Bridge, which was wrong. On the second try, it figured out I was on the Bay Bridge, which made me wonder if it just needed a clearer shot of the newer portion’s tall, white suspension poles to be right.
After the update, I also tried looking at images of famous works of art on my computer screen, including the Mona Lisa, and the AI correctly identified those.
Languages
Bottom Line
Meta’s AI-powered glasses offer an intriguing glimpse into a future that feels distant. The flaws underscore the limitations and challenges in designing this type of product. The glasses could probably do better at identifying zoo animals and fruit, for instance, if the camera had a higher resolution — but a nicer lens would add bulk. And no matter where we were, it was awkward to speak to a virtual assistant in public. It’s unclear whether that will ever feel normal.
But when it worked, it worked well and we had fun — and the fact that Meta’s AI can do things like translate languages and identify landmarks through a pair of hip-looking glasses shows how far the tech has come.
Copyright 2024 New York Times News Service. All rights reserved.