Last month, AI Mode gained Google Lens integration, bringing multimodal search that lets users combine images and text in a single query. Google is now working on a follow-up feature called “Live for AI Mode.”
In this APK Insight post, we’ve decompiled the latest version of an application that Google uploaded to the Play Store. When we decompile these files (called APKs, in the case of Android apps), we can see lines of code that hint at possible future features. Keep in mind that Google may or may not ever ship these features, and our interpretation of them may be imperfect. We focus on features that are closer to being finished, to show how they might work if they do ship.
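For readers curious about the method: decompiling an APK with a tool such as apktool exposes the app’s string resources, which can then be searched for feature-related text. Below is a minimal Kotlin sketch of that workflow; the `decoded/res/values` path and the `"live"` keyword are illustrative assumptions, not actual values from this teardown.

```kotlin
import java.io.File
import javax.xml.parsers.DocumentBuilderFactory

// Search decompiled string resources (e.g. produced by
// `apktool d google-app.apk -o decoded`) for a keyword.
// The directory and keyword here are illustrative, not real values.
fun findStrings(valuesDir: File, keyword: String) {
    val xmlFiles = valuesDir.listFiles { f ->
        f.name.startsWith("strings") && f.extension == "xml"
    } ?: return
    for (file in xmlFiles) {
        val doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(file)
        val nodes = doc.getElementsByTagName("string")
        for (i in 0 until nodes.length) {
            val node = nodes.item(i)
            val name = node.attributes.getNamedItem("name")?.nodeValue.orEmpty()
            val text = node.textContent.orEmpty()
            if (keyword in name.lowercase() || keyword in text.lowercase()) {
                println("$name: $text")
            }
        }
    }
}

fun main() = findStrings(File("decoded/res/values"), "live")
```

Matching resource names against a keyword like this is how teardowns typically surface in-development features before any UI is visible.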
Strings in the latest beta of the Google app (version 16.17) reveal that Google Search is set to get an equivalent of Gemini Live (Project Astra), but with a focus on search rather than personal assistance. Users will be able to have a real-time voice conversation with AI Mode to refine their searches, and can mute the microphone or end the session at any time.
According to those strings (there has been no official announcement), Live for AI Mode is experimental and may make mistakes, and it requires AI Mode to be enabled. The capability integrates with Google Lens, letting users speak to start a search. Users may need to turn up their device volume to hear responses, can tap to interrupt playback, and should note that asking a follow-up question starts a new search entirely.
While a session is active, a notification will provide controls such as ending the session or turning off the microphone. Beyond the camera, users will also be able to share their screen during a search.
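On Android, persistent session controls like these are typically delivered as actions on an ongoing notification. The Kotlin sketch below shows that general pattern with NotificationCompat; the service, channel id, action names, and icons are hypothetical, not pulled from the Google app.

```kotlin
import android.app.Notification
import android.app.PendingIntent
import android.app.Service
import android.content.Context
import android.content.Intent
import android.os.IBinder
import androidx.core.app.NotificationCompat

// Hypothetical service handling the live session; not from the Google app.
class LiveSessionService : Service() {
    override fun onBind(intent: Intent?): IBinder? = null
}

const val CHANNEL_ID = "live_session" // illustrative channel id

fun buildLiveNotification(context: Context): Notification {
    // Route each action back to the (hypothetical) session service.
    fun action(name: String): PendingIntent = PendingIntent.getService(
        context, /* requestCode = */ name.hashCode(),
        Intent(context, LiveSessionService::class.java).setAction(name),
        PendingIntent.FLAG_IMMUTABLE
    )
    return NotificationCompat.Builder(context, CHANNEL_ID)
        .setSmallIcon(android.R.drawable.ic_btn_speak_now)
        .setContentTitle("Live for AI Mode")
        .setContentText("Listening…")
        .setOngoing(true) // persists while the session is active
        .addAction(0, "End session", action("ACTION_END"))
        .addAction(0, "Turn off microphone", action("ACTION_MUTE"))
        .build()
}
```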
Notably, Live for AI Mode will offer a “Transcript” view so users can read the conversation, and Google aims to surface links for exploring the web while users chat. Users can tap a link to dig deeper into a topic, unmute to resume the conversation, and interrupt AI Mode’s responses simply by tapping or speaking. Video can also be turned on or off during a session.
With Live for AI Mode, Google continues to rework how users interact with Search. Paired with the Google Lens integration, real-time voice conversation would be a substantial step forward for multimodal search, though, as with any APK teardown, there is no guarantee of when, or whether, the feature will officially roll out.