The race is on to develop tools to help sift through the vast quantities of video that will be produced by wearable camera technology like Google Glass and Looxcie.
“The amount of what we call ‘egocentric’ video, which is video that is shot from the perspective of a person who is moving around, is about to explode,” said Kristen Grauman, associate professor of computer science in the College of Natural Sciences. “We’re going to need better methods for summarizing and sifting through this data.”
Grauman and her colleagues developed a technique that uses machine learning to automatically analyze recorded video and assemble a short “story” that summarizes the footage more effectively than existing methods.
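At a high level, one common family of video-summarization approaches clusters frames and keeps a small, diverse set of representatives as keyframes. The sketch below illustrates that general idea with a simple k-means keyframe selector over precomputed per-frame feature vectors. It is a minimal illustration of the concept only, not the researchers' published method; the function name `summarize` and the synthetic data are assumptions for this example.

```python
import numpy as np

def summarize(frame_features, n_keyframes, n_iters=20):
    """Select representative keyframes by clustering per-frame features.

    Illustrative sketch only -- not the published method. Assumes each
    frame has already been reduced to a fixed-length feature vector.
    """
    X = np.asarray(frame_features, dtype=float)
    # Seed centroids with frames spaced evenly through the video.
    centroids = X[np.linspace(0, len(X) - 1, n_keyframes).astype(int)]
    for _ in range(n_iters):
        # Assign every frame to its nearest centroid (k-means step).
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # Recenter each cluster on the mean of its assigned frames.
        for k in range(n_keyframes):
            if (labels == k).any():
                centroids[k] = X[labels == k].mean(axis=0)
    # The "story": the real frame nearest each centroid, in time order.
    dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
    return sorted(set(dists.argmin(axis=0).tolist()))

# Synthetic "video": 90 frames passing through three distinct scenes.
rng = np.random.default_rng(1)
frames = np.vstack([np.full((30, 8), v) for v in (0.0, 5.0, 10.0)])
frames += rng.normal(0.0, 0.1, frames.shape)
keyframes = summarize(frames, 3)
```

On the synthetic clip above, the selector returns one keyframe from each of the three scenes, in chronological order. A real system would replace the raw feature vectors with learned descriptors and add criteria such as the egocentric importance cues Grauman's group studies.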