
### Google Teases Revolutionary Android Features Powered by Gemini AI

Google Gemini on your smartphone has serious potential.

The buzz surrounding Google’s Gemini AI coming to Android primarily revolves around its flashier features, yet it’s the mundane tasks that have caught my attention.

The potential for the AI to handle routine chores intrigued me. When I tasked Gemini with locating British Airways emails in Gmail, it not only found the messages but also categorized them (T&Cs updates, upcoming trips, account and privacy) and provided summaries of relevant emails for quick reference. This functionality was more remarkable to me than any other Google showcase I’ve witnessed.
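For anyone curious what this kind of categorise-and-summarise request looks like from the developer side, here is a minimal sketch using the google-generativeai Python SDK. It is only an illustration of the sort of prompt involved, assuming you have an API key; the on-device assistant plugs into Gmail directly, and the email snippets below are placeholders rather than a real Gmail integration.

```python
# Minimal sketch: asking a Gemini model to categorise and summarise emails.
# Assumes the google-generativeai Python SDK and an API key; the email text
# below is placeholder data, not a real Gmail integration.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-pro")

emails = [
    "Subject: Your booking LHR-JFK ... (full email body here)",
    "Subject: Updates to our terms and conditions ... (full email body here)",
    "Subject: Changes to your Executive Club account ... (full email body here)",
]

prompt = (
    "Group these emails into categories such as 'upcoming trips', "
    "'T&Cs updates' and 'account and privacy', then give a one-line "
    "summary of each:\n\n" + "\n---\n".join(emails)
)

response = model.generate_content(prompt)
print(response.text)  # categorised list with short summaries
```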

When I asked Gemini on my Pixel 8 to suggest a burger joint near Tottenham Court Road for dinner with friends on Friday, along with pubs, bars, and a late-night spot afterwards, it produced solid suggestions that drew on its familiarity with the area. It streamlined the planning and saved me time.

The potential applications of AI for everyday tasks excite me more than conjuring eerie images, coding, or scanning images for shopping items. Smartphones have long grappled with feature overload, overwhelming users with an array of functions that can be daunting to navigate. People have more pressing matters in their lives than mastering the intricacies of using their phones efficiently.

I envision Google’s Gemini roadmap incorporating AI further into a smartphone’s core functions. I anticipate being able to instruct the chatbot to activate my mobile hotspot for an hour, create a shared folder in Photos, and add my last 10 images to it in a single typed command.

Picture Gemini crafting a video showcasing all the new features added to your phone in a recent update. I want to be able to do these things instantly, from a brief sentence, on the fly; that is the essence of mobile computing.

This technology should eliminate the need to scour Reddit, blogs, or help guides just to locate a specific setting. Encouragingly, judging by current progress, chatbots like Gemini and Galaxy AI appear to be heading in that direction.

However, there are still limitations to overcome. When I requested Gemini on my Pixel 8 to enable dark mode, book an Uber, or retrieve an old shopping list from Google Keep, it couldn’t fulfill these commands yet.

I gave the AI more complex tasks, such as drafting a refund request for a recent purchase by scanning my Gmail for the pertinent details, including the order number. It managed to compose the email, but it couldn’t send it or take the refund process any further, at least not yet.

The prospect of a future iteration of Gemini mastering such tasks is tantalizing. Envision third-party integrations enabling your personal chatbot to autonomously handle a substantial portion of the refund process based solely on your initial request—an enticing selling point for this technology.

As Google integrates Gemini with Assistant, there’s potential for enhancing functions that previously faltered in the company’s Nest smart home lineup. Imagine instructing Gemini, in a single sentence, to ensure your heating and lights activate daily at 6 p.m. All the neglected skills that Google is eliminating from Assistant could become instantly accessible, leveraging Gemini’s capabilities.

Users won’t need to specify which skill to use; instead, they can simply ask Gemini to complete a task and let it draw on whichever skills it has available. This is already happening in practice: when Gemini engages with Maps or Workspace, the logos for those services appear as the AI processes the request.
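To make that routing idea concrete, here is a small, purely illustrative sketch in Python: a registry of skills with plain-language descriptions, and a dispatcher that sends a request to whichever skill matches best. The skill names and the naive keyword matching are hypothetical stand-ins; in Gemini itself, the model, not a keyword matcher, decides which extension to use.

```python
# Illustrative sketch of "pick the right skill for the task" routing.
# The skills, descriptions, and keyword matching are hypothetical stand-ins;
# a real assistant lets the model itself decide which tool to invoke.
from typing import Callable

def maps_skill(task: str) -> str:
    return f"[Maps] Looking up places for: {task}"

def workspace_skill(task: str) -> str:
    return f"[Workspace] Searching Gmail and Docs for: {task}"

# Each skill is registered with a plain-language description of what it covers.
SKILLS: dict[str, tuple[str, Callable[[str], str]]] = {
    "maps": ("restaurants pubs bars directions places nearby", maps_skill),
    "workspace": ("emails documents calendar files summaries", workspace_skill),
}

def route(task: str) -> str:
    """Send the task to the skill whose description overlaps most with it."""
    words = set(task.lower().split())
    _, (_, skill) = max(
        SKILLS.items(),
        key=lambda item: len(words & set(item[1][0].split())),
    )
    return skill(task)

print(route("find a burger place and some pubs near Tottenham Court Road"))
print(route("summarise my British Airways emails"))
```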

While image creation, editing, and AI interactions may dominate headlines, the true smartphone revolution lies in uncovering hidden phone features and, crucially, saving valuable time.

Update February 19th: A recent report by Chrome Unboxed suggests that Google may be developing a new Chromebook featuring a built-in Assistant hardware key. The discovery in the Chromium Repositories references an Assistant key for a forthcoming Chromebook codenamed “Xol.” Details about the device are scarce, with development having commenced on January 3rd this year. Although it remains uncertain whether this is a Google-manufactured laptop, the emergence of Gemini hints at an intriguing future for the Chrome OS laptop range.

As Gemini evolves, Chromebooks are likely to integrate the AI more deeply into the operating system. Chromebooks run Chrome OS, a streamlined system that relies heavily on cloud-based software, which positions these laptops well for Gemini tasks like video creation, image editing, and other resource-intensive operations that are typically challenging on a Chromebook.

The latest Gemini update, version 1.5, significantly expands the context window, enabling it to handle larger queries and more data simultaneously. According to the company’s blog post announcing the update, this enhancement supports an hour of video, 30,000 lines of code, or over 700,000 words. That opens up the possibility of a new Gemini-powered Chromebook tackling serious business and productivity tasks effectively.
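As a rough sanity check on those numbers, and assuming Gemini 1.5’s widely reported context window of about one million tokens together with the common rule of thumb of roughly 0.75 English words per token, the arithmetic lines up with the blog post’s figure:

```python
# Back-of-the-envelope check on the "over 700,000 words" claim.
# Both numbers below are assumptions: the ~1,000,000-token context window is
# the widely reported figure for Gemini 1.5, and 0.75 words per token is a
# common rough heuristic for English text.
context_window_tokens = 1_000_000
words_per_token = 0.75

approx_words = context_window_tokens * words_per_token
print(f"~{approx_words:,.0f} words")  # ~750,000 words, consistent with "over 700,000"
```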
