In a significant move to enhance user security, Google has announced the rollout of new artificial intelligence (AI)-powered scam detection features aimed at protecting Android users and their personal information. According to Google, these features specifically target conversational scams, which often start as seemingly harmless interactions before escalating into dangerous situations.
As phone scammers increasingly employ spoofing techniques to disguise their actual numbers and impersonate trusted businesses, Google has taken proactive steps. The tech giant has partnered with financial institutions to gain insights into the types of scams that customers frequently encounter. This collaboration has enabled Google to develop sophisticated AI models capable of identifying suspicious patterns during conversations.
These AI models operate entirely on-device, preserving user privacy while providing real-time warnings during potential scam interactions. Users will be alerted if a conversation is likely a scam and can either dismiss the alert or report and block the sender.
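To illustrate the general shape of such a flow, the hypothetical Kotlin sketch below scores an incoming conversation locally and surfaces an alert the user can dismiss or use to report and block the sender. This is not Google's implementation: the classifier, its keyword heuristic, the thresholds, and all names are assumptions made purely for illustration.

```kotlin
// Illustrative sketch only: hypothetical types, not Google's actual implementation.
// Shows the general shape of an on-device flow: score a conversation locally,
// then surface an alert the user can dismiss or use to report and block.

data class Message(val sender: String, val body: String)

enum class UserAction { DISMISS, REPORT_AND_BLOCK }

// Hypothetical on-device classifier. A real system would run a trained model
// locally; a crude keyword heuristic stands in here so the sketch is runnable.
class OnDeviceScamClassifier(private val threshold: Double = 0.5) {
    private val suspiciousPhrases = listOf(
        "wire transfer", "gift card", "verify your account", "urgent payment"
    )

    fun scamLikelihood(conversation: List<Message>): Double {
        val text = conversation.joinToString(" ") { it.body.lowercase() }
        val hits = suspiciousPhrases.count { it in text }
        return minOf(1.0, hits / 2.0)  // toy score in [0, 1]
    }

    fun isLikelyScam(conversation: List<Message>) =
        scamLikelihood(conversation) >= threshold
}

fun handleIncoming(
    conversation: List<Message>,
    knownContacts: Set<String>,
    classifier: OnDeviceScamClassifier,
    promptUser: (String) -> UserAction
) {
    val sender = conversation.last().sender
    // Per the article, detection applies to senders not saved in the contact list.
    if (sender in knownContacts) return
    if (classifier.isLikelyScam(conversation)) {
        when (promptUser("This conversation looks like a likely scam.")) {
            UserAction.DISMISS -> Unit  // user keeps the conversation
            UserAction.REPORT_AND_BLOCK -> reportAndBlock(sender, conversation)
        }
    }
}

fun reportAndBlock(sender: String, conversation: List<Message>) {
    // On report, only the sender's details and recent messages would leave the device.
    println("Reporting $sender with ${conversation.takeLast(3).size} recent messages; blocking.")
}

fun main() {
    val convo = listOf(
        Message("+15550100", "Hi! Quick favor?"),
        Message("+15550100", "I need an urgent payment via gift card today.")
    )
    handleIncoming(convo, knownContacts = setOf("+15550123"),
        classifier = OnDeviceScamClassifier()) { alert ->
        println(alert)
        UserAction.REPORT_AND_BLOCK
    }
}
```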
The scam detection feature is activated by default for conversations with phone numbers not saved in the device's contact list. Google has emphasized that user conversations remain private. If a user decides to report a chat as spam, only the sender's details and recent messages will be shared with Google and telecommunications carriers.
This feature is set to launch first in English in the U.S., U.K., and Canada, with broader global expansion planned at a later date. Additionally, a similar scam detection feature for phone calls, first introduced in November 2024, is being rolled out to all English-speaking Pixel 9+ users in the United States.
Unlike the messaging feature, scam detection for calls is off by default, giving users control over whether to use it. Call audio is processed ephemerally on the device, and the feature is not used during conversations with saved contacts. Users who enable it will hear auditory notifications, including a beep at the start of and during the call, to indicate that scam detection is active.
This development comes shortly after Google reported that over 1 billion Chrome users have adopted the Enhanced Protection mode of its Safe Browsing feature. When enabled, this mode offers additional security through advanced AI and machine learning models aimed at identifying dangerous URLs associated with known phishing, social engineering, and scam tactics.
Safe Browsing's Enhanced Protection models are designed to recognize URLs that closely resemble trusted domains, providing users with a higher level of security against dangerous downloads and online threats.
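As a rough illustration of the lookalike-domain idea (not Safe Browsing's actual models), the hypothetical Kotlin sketch below flags hostnames that sit within a small edit distance of a list of trusted domains. In a real system this would be only one weak signal combined with many others; the trusted list, distance threshold, and function names here are assumptions for illustration.

```kotlin
// Illustrative sketch only: a generic lookalike-domain check, not Safe Browsing's models.
// Flags hostnames within a small edit distance of trusted domains.

fun editDistance(a: String, b: String): Int {
    // Classic Levenshtein dynamic-programming table.
    val dp = Array(a.length + 1) { IntArray(b.length + 1) }
    for (i in 0..a.length) dp[i][0] = i
    for (j in 0..b.length) dp[0][j] = j
    for (i in 1..a.length) for (j in 1..b.length) {
        val cost = if (a[i - 1] == b[j - 1]) 0 else 1
        dp[i][j] = minOf(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost)
    }
    return dp[a.length][b.length]
}

fun looksLikeTrustedDomain(host: String, trusted: List<String>, maxDistance: Int = 2): Boolean =
    trusted.any { t -> host != t && editDistance(host, t) <= maxDistance }

fun main() {
    val trusted = listOf("google.com", "paypal.com", "mybank.com")
    for (host in listOf("paypa1.com", "g00gle.com", "example.org")) {
        val flagged = looksLikeTrustedDomain(host, trusted)
        println("$host -> ${if (flagged) "possible lookalike" else "no close match"}")
    }
}
```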
By integrating these AI-powered features, Google demonstrates its ongoing commitment to safeguarding users against evolving scam tactics, ensuring that personal information remains protected in an increasingly digital world.