Artificial Intelligence to Prevent Physical Snooping of Smartphones

Most security applications and systems for smartphones and tablets focus on digital intrusions; for example, internet security suites scan devices for malware, while virtual private network (VPN) apps make connections safer at public Wi-Fi hotspots. Physical security, on the other hand, has not received as much attention from developers, but a team of Google researchers intends to change this through artificial intelligence.

According to a recent report published by ZDNet, two researchers working on machine learning projects recently demonstrated a system that can alert smartphone users when someone is peeking over their shoulder to snoop. The idea is to run an artificial intelligence routine tied to a face recognition neural network that has learned the smartphone owner’s face; when the algorithm estimates that a different face is gazing at the screen from within a certain distance, the owner is notified of the privacy intrusion.
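The report describes this behavior only at a high level, not as an implementation, so the following is just a minimal Python sketch of that decision logic. Every helper in it (detect_faces, face_embedding, gaze_on_screen) is a hypothetical stand-in for the real on-device models and is stubbed so the example runs; the distance threshold is likewise illustrative.

```python
import numpy as np

# Hypothetical stand-ins for the real on-device models, stubbed so the sketch runs.
def detect_faces(frame):
    """Return a list of face crops found in a camera frame (empty in this stub)."""
    return []

def face_embedding(face):
    """Return a feature vector for a face, as a face recognition network would."""
    return np.zeros(128)

def gaze_on_screen(face):
    """Return True if the face's estimated gaze is directed at the screen."""
    return False

def is_owner(face, owner_embedding, threshold=0.6):
    """Compare a face's embedding to the enrolled owner's embedding."""
    distance = np.linalg.norm(face_embedding(face) - owner_embedding)
    return distance < threshold

def check_frame(frame, owner_embedding):
    """Flag any non-owner face that appears to be looking at the screen."""
    for face in detect_faces(frame):
        if not is_owner(face, owner_embedding) and gaze_on_screen(face):
            return True  # possible shoulder surfer -> notify the owner
    return False
```

In a real pipeline, the owner’s embedding would presumably be computed once at enrollment, and the check would run on successive frames from the front-facing camera.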

Physical snooping of smartphones is known in popular culture as “shoulder surfing,” and it has become more common with the advent of “phablets,” smartphones with extra-large touchscreen displays that tend to attract more attention. In espionage circles, shoulder surfing is a clandestine tactic for obtaining sensitive information such as message content, username and password credentials, apps used, and browsing habits. In recent months, security researchers have warned about malicious hackers practicing shoulder surfing in crowded places such as train stations.

The system relies on gaze detection and face recognition, technologies that fall under the broader fields of computer vision and spatial awareness. The researchers have not yet completed a scientific paper on the project, which is scheduled to be formally presented in December 2017 at a conference on neural networks.

It should be noted that the artificial intelligence model behind this security feature does not require a connection to remote servers, which is a common strategy in AI applications these days. The machine learning and algorithmic processes used in this project are small enough to be coded into a routine that runs much like a background process in Windows.
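The report does not say which runtime the researchers used, but fully offline inference of a small model can be illustrated with TensorFlow Lite, a common choice for on-device machine learning on Android. The model file name and input preprocessing below are assumptions for illustration only.

```python
import numpy as np
import tensorflow as tf

# Load a (hypothetical) quantized face/gaze model bundled with the app.
# No network connection is involved: inference happens entirely on the device.
interpreter = tf.lite.Interpreter(model_path="gaze_face_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def run_local_inference(frame: np.ndarray) -> np.ndarray:
    """Run one forward pass on a preprocessed camera frame, fully offline.

    The frame must already be resized and scaled to match the model's
    expected input shape and dtype.
    """
    interpreter.set_tensor(input_details[0]["index"], frame.astype(np.float32))
    interpreter.invoke()
    return interpreter.get_tensor(output_details[0]["index"])
```

Keeping the model small enough to run this way is what allows the check to operate continuously in the background without sending camera frames off the device.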

The video the researchers circulated to demonstrate the feature shows a Google Pixel smartphone running the latest version of the Android mobile operating system. In the video, the researchers explain that the facial recognition and gaze detection steps together take only milliseconds after the front-facing camera of the Pixel acquires the target image. Battery consumption was not discussed in the video.


Dil Bole Oberoi