I don’t want to seem anti-AI. I really do think it can be a valuable tool for a lot of tasks. But almost every time I see it being shown off in consumer electronics, it isn’t presented as a tool but as a do-everything interface that defines how you interact with whatever device uses it. That’s how Meta frames its AI, and that isn’t the best use of the technology. Meta itself proved that when its live demo of Meta AI on its newest Meta Ray-Ban Smart Glasses completely fell apart.
If you didn’t see it live, the demo was supposed to show off how Meta AI on smart glasses can be useful in daily tasks like cooking. The demonstrator wore them in front of a counter with ingredients on it and asked Meta AI how to make Korean barbecue sauce. The idea was that the cameras on the glasses would let the AI identify what ingredients were available and use that information to walk him through the process. It started out promising, with the voice response complimenting him on what he had ready and telling him which ingredients were common in Korean barbecue flavors. The demonstrator then asked the glasses how to get started.
Meta AI failing to explain how to make steak sauce. (Credit: Meta)
Then Meta AI started telling him what to do now that he had mixed the main ingredients together for the sauce base. Except, he hadn’t mixed the sauce base yet. The demonstrator then asked the AI for the first step. Once again, Meta AI told him what to do now that his sauce base was ready. Then the camera cut back to Meta CEO Mark Zuckerberg: “The irony of the whole thing is that you spend years making the technology, and then the Wi-Fi of the day kind of catches you.”
The AI failed the demo. By the time the presentation moved on to Zuckerberg dealing with video call and control band glitches (which I won’t fault AI for, but they were still very funny), I could have gotten the information to make Korean barbecue sauce and whipped up a batch in my kitchen. And honestly? If I had started at the same time as that live demo, without AI or a specific recipe on hand, I still could have made the sauce faster than if the AI had actually worked. I could have even used a smart device to do it. The best-case scenario for Meta’s Live AI in that cooking demo was still a completely unnecessary layer of automation between the user and the information needed to perform the task.
If you want to cook something new, you need some kind of recipe. Maybe it’s a set of formal instructions or maybe it’s a loose outline of ingredients and cooking times, but you need that base. Recipes are really easy to find, whether you search for them yourself, use an app, or watch a cooking video on YouTube. The information is available, and just as importantly, it’s (usually) tested by whoever wrote or recorded it. It doesn’t need to be synthesized from scratch and processed in the cloud, step by step.
A dramatization of Amazon Alexa talking the user through cooking steps on an Echo Show 15. (Credit: Amazon)
Speaking of step-by-step, that’s one of the biggest problems with AI guidance for cooking. Meta AI was supposed to walk the demonstrator through making the sauce that way. It was going to tell him how to start by gathering the ingredients. Then it was going to tell him to mix the right ones to make the base. Then, after making the base, it was going to tell him what to do next. It failed at telling him how to mix the base, and that information was never made available to him.
When you follow a recipe, you read the entire recipe first, then you go through the steps. This lets you know what you should be prepared to do, when the individual ingredients and tools will be necessary, and how long it will take. Meta AI wouldn’t have given him that information even if it did work. It was going to be just voice prompts walking him through the process as he went. That’s not good for cooking or learning how to cook.
The only useful element the Meta AI demo could have shown would have been how to cook using the ingredients on hand, identified by the glasses. Even then, I wouldn’t trust an AI to give me good suggestions on substitutions or how to cook a dish without certain ingredients. Those tricks are hard enough to get right if you’re an experienced chef, and expecting an AI to have the right insight on what any change will do to flavor and texture is a big gamble.
I can see how you could use AI for cooking. I’ve seen it successfully used in the past not just on phones and tablets, but on smart displays with voice assistants. It’s simple: “Alexa, show me a recipe for what I want to make.” And a recipe will show up on the screen, maybe with a video or voice instructions. And yes, voice assistants count as AI.
The Meta AI demo offers none of that information. No displayed lists of ingredients or steps to look over before you start. No reference material to use the same recipe on your own later. No choice in whose recipe you use, because the AI is synthesizing it based on the information in its knowledge base. You ask it for something, and it gives you that thing in a single, generated form. In automating the process to be more instantly convenient, it keeps a lot of information and control out of the user’s hands. That’s what most of these current AI systems have been trying to do: cut out a huge part of any given process in the name of streamlining it through technology.
The live demo of the Meta Ray-Ban Display smart glasses and Meta Neural Band, which had some glitches probably not related to AI. (Credit: Meta)
Maybe the Meta Ray-Ban Display will give that information. It isn’t just voice-controlled, but has a graphical interface with direct interactivity via its Meta Neural Band. Perhaps it will let you treat Meta AI like Alexa, having it simply bring up a recipe you can read through and save for later. I’d be mildly surprised if it does, though, since the Meta Ray-Ban Display isn’t just the company’s newest smart glasses, but its newest “AI glasses.” Meta is pushing its AI not as a tool built into devices, but as the heart that drives those devices.
Automation and convenience are great, but not if it means taking choice out of people’s hands. I don’t mind asking AI for help cooking if I can choose the recipe and read it myself. I don’t mind asking AI to play music for me if I can choose what it plays. I don’t mind asking AI for answers if I can clearly see and go to its sources. But when it cuts out everything that would otherwise let me fully control my experience, that’s too high a price to pay.
And even if it weren’t, it still means relying on network-connected, cloud-based technology that can simply break and leave you with nothing to work with. If Meta Connect proved anything, it proved that.