In addition to unveiling a new gesture-powered search feature for Android devices, Google has introduced an AI-powered addition to its visual search capabilities in Google Lens. Starting today, users can point their camera at something, or upload a photo or screenshot to Lens, then ask a question about what they see and get answers generated by AI.
The update builds on Lens's existing multisearch feature, which lets users search with text and images at the same time. Previously, those searches returned visual matches; now they return AI-generated results that offer insights as well.
Image Credits: Google
For instance, Google suggests the feature could help you learn about plants. Snap a photo of a plant and ask, "When do I water this?" Instead of simply showing related images, Lens identifies the plant and tells you how often to water it, such as "every two weeks." Google says the answers draw on information from across the web, including websites, product pages, and videos.
The feature also works alongside Google's new search gesture, Circle to Search. With the two combined, users can start a generative AI query with a gesture and then follow up with questions about the item they've selected.
It's worth noting that while Lens's multisearch feature offers generative AI insights, it's not the same product as Google's experimental genAI search, SGE (Search Generative Experience), which remains opt-in only.
Image Credits: Google
The AI-powered overviews for multisearch in Lens are rolling out to everyone in the U.S. in English, starting today. Unlike some of Google's other AI experiments, the feature isn't confined to Google Labs. To use it, tap the Lens camera icon in the Google search app for iOS or Android, or in the search bar on Android phones.
Like Circle to Search, the addition is meant to keep Google Search relevant in the AI era. As the web fills with SEO-optimized content, Circle to Search and this AI-powered capability in Lens aim to improve search results by drawing on Google's broader knowledge of the web, spanning the many pages in its index, while presenting the results in a different format.
Still, relying on AI means answers won't always be accurate or relevant. The web is not an encyclopedia, so an answer is only as good as the underlying source material and the AI's ability to interpret it without introducing errors.
Google notes that its genAI products, like the Google Search Generative Experience, cite their sources so users can fact-check the answers. And while SGE remains in Labs, Google has said it plans to bring generative AI advancements to a broader audience when it makes sense, as with this rollout of multisearch results.
The AI overviews for multisearch in Lens are live now, while the gesture-based Circle to Search arrives on Jan. 31.