
### Unveiling the Impressive and Alarming AI Features of Ray-Ban’s Meta Smart Glasses

Meta is adding new AI capabilities to its smart glasses. Using them has been unexpectedly eye-opening.

Some of the most notable features of the Ray-Ban Meta smart glasses were ones I hadn't yet been able to test at the time of my initial review. Chief among them is the glasses' ability to answer questions about whatever you're looking at, a capability Meta calls "multimodal AI." It lets you get translations simply by looking at text, or identify plants and landmarks. The other significant update I was looking forward to was the integration of real-time information into the Meta AI assistant; as of last autumn, the assistant had a "knowledge cut-off" of December 2022, which limited the kinds of queries it could answer.

Fortunately, Meta has started rolling out these features, with multimodal search currently in an "early access" phase. After a few weeks of use, I've found them enlightening about the current state of AI technology. Multimodal search shows real potential, especially for travel-related tasks, but Meta AI's real-time information processing is still unreliable, often giving inaccurate answers to straightforward questions.

When I first saw Meta's multimodal search demoed at Connect last fall, I envisioned game-changing potential for the company's smart glasses. The first-generation collaboration between Meta and Ray-Ban produced aesthetically pleasing glasses that weren't especially practical. Despite the initial awkwardness of wearing an AI-enabled camera on my face, the prospect of an AI assistant that could see what I see suggested functionality that might outweigh the discomfort.

The multimodal feature, in my opinion, holds significant promise, particularly for tasks like travel where real-time translations and word definitions can be incredibly useful. While the current functionality is limited, the potential is evident.

For tasks like monument identification during travel, having this feature could be invaluable. Although I haven’t had the chance to test it extensively, the early access version of multimodal search seems promising for such applications.

On the other hand, the usefulness of multimodal search in everyday scenarios at home remains to be seen. While it can identify some plant species and other random objects, its practical applications seem limited at the moment.

One interesting use case I explored was seeking recipe suggestions using Meta AI. The AI provided reasonable recommendations based on the ingredients I had on hand, showcasing its potential utility in certain scenarios.

In terms of real-time information, Meta AI still faces challenges. It often answers queries incorrectly, especially ones about current affairs. Despite Meta saying the assistant relies partly on Bing for real-time data, the accuracy of its responses leaves much to be desired.

Overall, while Meta’s smart glasses offer exciting possibilities for enhanced convenience and travel experiences, there is a clear need for improved accuracy and reliability in the AI capabilities. The current functionalities, while promising, require further refinement to reach their full potential.

Last modified: January 26, 2024