So many pictures, so little time to tag all those smartphone-generated memories. A team of university researchers has come up with a remedy.
The TagSense app, created by Duke University and University of South Carolina researchers using Google Nexus phones, exploits motion, light, and other sensors on the user's phone, and on nearby phones, to piece together information about the photos being taken. This, they say, will make it easier for users to search for photos later.
The prototype app was discussed at the Association for Computing Machinery's International Conference on Mobile Systems, Applications and Services (MobiSys), held last month in Washington, D.C. Details of the app are explained in a paper titled "TagSense: A Smartphone-based Approach to Automatic Image Tagging."
"When a TagSense-enabled phone takes a picture, it automatically detects the human subjects and the surrounding context by gathering sensor information from nearby phones," said Xuan Bao, a Ph.D. student in computer science at Duke, in a statement.
The phone’s accelerometer, for example, can detect whether the subject of a photo is stationary or moving, and other sensors can determine everything from lighting to weather to whether the people in a picture are silent or laughing. These attributes feed into the tagging system and complement rapidly improving facial recognition technologies in categorizing, and later identifying, photos.
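The idea of turning raw sensor readings into photo tags can be sketched roughly as follows. This is a hypothetical illustration, not the researchers' actual code; the function name, reading keys, and thresholds are all assumptions.

```python
# Hypothetical sketch of TagSense-style tag inference from phone sensor
# readings. Keys and thresholds are illustrative assumptions.

def infer_tags(readings):
    """Map raw sensor readings from one phone to candidate photo tags."""
    tags = []
    # Accelerometer: high variance suggests the phone's owner was moving.
    if readings.get("accel_variance", 0.0) > 1.5:
        tags.append("moving")
    else:
        tags.append("standing")
    # Light sensor: a rough indoor/outdoor guess from ambient brightness (lux).
    lux = readings.get("light_lux", 0)
    tags.append("outdoor" if lux > 1000 else "indoor")
    # Microphone: sound level as a crude proxy for laughter vs. silence.
    if readings.get("sound_db", 0) > 60:
        tags.append("laughing")
    else:
        tags.append("quiet")
    return tags

# Readings gathered from the photographer's phone and nearby opted-in phones
# could each be mapped this way, then merged into the photo's tag set.
print(infer_tags({"accel_variance": 2.3, "light_lux": 4000, "sound_db": 45}))
# → ['moving', 'outdoor', 'quiet']
```

In the system the paper describes, readings like these would come from each nearby phone at the moment the shutter fires, so a single photo accumulates context from several devices.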
The researchers envision group members opting in to allow their phones' sensors to be used to compile TagSense information. (Controversy has surrounded less privacy-sensitive photo tagging systems, such as Facebook's facial recognition features.)
A commercial app could be released within a few years, the researchers say.