You may recall the awkward moment Apple had a few weeks ago, when users discovered that their photos were quietly being scanned to identify landmarks. The feature had never been communicated to users beforehand, and the revelation caused a significant uproar among security experts. Now Google finds itself in a similar situation, and once again the issue is not the technology itself but the lack of transparency.
Apple's Enhanced Visual Search works by sending encrypted representations of parts of photos to the cloud, where they are compared against a global index of points of interest. While the process is touted as privacy-preserving, cryptographer Matthew Green expressed his frustration, noting, “It’s very frustrating when you learn about a service two days before New Year’s and find that it’s already enabled on your phone.”
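Based on Apple's published description of the feature, the pipeline runs roughly as follows: an on-device model first decides whether a photo likely contains a landmark, computes a vector embedding of that region locally, and encrypts the embedding homomorphically so the server can score it against the index without ever seeing the plaintext. The sketch below illustrates that shape; every type and function in it is a hypothetical stand-in, not a real Apple API.

```kotlin
// Hypothetical sketch of the Enhanced Visual Search flow as Apple describes it.
// Every type and function here is a stand-in for illustration; none of this
// is a real Apple API.

typealias Embedding = FloatArray

class ClientKey
data class EncryptedQuery(val ciphertext: ByteArray)
data class EncryptedResult(val ciphertext: ByteArray)

// Stand-in for the on-device model that decides whether the photo likely
// contains a landmark and, if so, embeds that region locally.
fun embedLandmarkRegion(photoBytes: ByteArray): Embedding? =
    if (photoBytes.isEmpty()) null else FloatArray(128)

// Stand-ins for the cryptographic steps: the embedding is encrypted so the
// server can score similarity without ever seeing the plaintext vector.
fun homomorphicEncrypt(embedding: Embedding, key: ClientKey): EncryptedQuery =
    EncryptedQuery(ByteArray(embedding.size))

fun serverMatch(query: EncryptedQuery): EncryptedResult =
    EncryptedResult(query.ciphertext) // this step runs in the cloud

fun decryptLabel(result: EncryptedResult, key: ClientKey): String =
    "landmark-label" // placeholder answer

fun lookUpLandmark(photoBytes: ByteArray, key: ClientKey): String? {
    val embedding = embedLandmarkRegion(photoBytes) ?: return null // nothing worth sending
    val query = homomorphicEncrypt(embedding, key) // plaintext never leaves the device
    val result = serverMatch(query)                // cloud compares against the global index
    return decryptLabel(result, key)               // only the device can read the answer
}
```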
Google's current predicament revolves around SafetyCore, an Android system service delivered as an update that enables on-device image scanning. The technology is designed to blur or flag sensitive content, and because it operates entirely on-device, it is arguably more private than Apple's Enhanced Visual Search. However, the fact that it was installed without any notification to users has sparked skepticism.
Previously, I highlighted SafetyCore's potential to enhance security in Google Messages and suggested it could do the same for Gmail, shifting security scanning from Google’s servers to the user’s phone. Despite those benefits, the lack of openness remains a significant issue.
GrapheneOS, the team behind the security-focused Android-based operating system of the same name, offers some reassurance, stating that SafetyCore “doesn’t provide client-side scanning used to report things to Google or anyone else. It uses on-device machine learning models to classify content as spam, scams, malware, etc., allowing apps to check content locally without sharing it with a service.”
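To make that distinction concrete, the sketch below shows the model GrapheneOS is describing: an app hands content to a local classifier and acts on the verdict itself, with nothing reported to a server. The interface here is invented for illustration, since Google has not published SafetyCore's API surface.

```kotlin
// A minimal sketch of the on-device classification model GrapheneOS describes.
// The interface is hypothetical (SafetyCore's actual API is not public), but
// the shape is the point: the app asks a local model for a verdict and acts
// on it itself, and nothing is sent to a server.

enum class ContentLabel { SAFE, SPAM, SCAM, SENSITIVE }

data class DisplayDecision(val blur: Boolean, val showWarning: Boolean)

// Stand-in for the on-device model; a real implementation would run local
// ML inference over the bytes. No content is uploaded at any point.
fun classifyLocally(imageBytes: ByteArray): ContentLabel {
    return ContentLabel.SAFE // stub verdict for the sketch
}

// Example caller: a messaging app deciding how to render an incoming image.
fun prepareIncomingImage(imageBytes: ByteArray): DisplayDecision =
    when (classifyLocally(imageBytes)) {
        ContentLabel.SENSITIVE,
        ContentLabel.SPAM,
        ContentLabel.SCAM -> DisplayDecision(blur = true, showWarning = true)
        ContentLabel.SAFE -> DisplayDecision(blur = false, showWarning = false)
    }
```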
However, GrapheneOS also highlights the transparency issue, emphasizing that “it’s unfortunate that it’s not open source and released as part of the Android Open Source Project.” The machine learning models are not open either, which compounds the problem.
Google asserts that SafetyCore “provides on-device infrastructure for securely and privately performing classification to help users detect unwanted content. Users control SafetyCore, and it only classifies specific content when an app requests it through an optionally enabled feature.” Nevertheless, the core issue remains that users were not informed about its installation.
According to ZDNet, “Google never told users this service was being installed on their phones. If you have a new Android device or one with software updated since October, you almost certainly have SafetyCore on your phone.” And just as with Apple, “one of SafetyCore's most controversial aspects is that it installs silently on devices running Android 9 and later without explicit user consent,” raising concerns about privacy and user control.
If you “don’t trust Google,” as ZDNet suggests, you can manage SafetyCore yourself. To uninstall or disable the service, open the main 'Apps' menu in your phone's Settings, show system apps, and look for 'SafetyCore' (typically listed as 'Android System SafetyCore').
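For those comfortable with a command line, the service can also be removed for the current user over adb with the standard pm uninstall --user 0 command followed by the package name, which community reports give as com.google.android.safetycore; verify the name on your own device, and note that some users have reported the service reappearing after later Google Play system updates.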
These back-to-back experiences at Apple and Google highlight the importance of transparency. If these companies plan to transform our phones into AI-powered machines, they must inform users beforehand and give them the option to consent. Otherwise, they will keep feeding fear of the unknown.