While Google Glass wasn’t the success the company wanted it to be, it’s now being used to help children with autism recognize and classify their emotions.
At Stanford University, researchers Catalin Voss and Nick Haber are using both face-tracking technology and machine learning to create home treatments for individuals with autism. The project, titled the Autism Glass Project, is part of the Wall Lab in the Stanford School of Medicine.
For children with autism, reading others’ emotions and understanding the needs of others can often be a difficult task. The Google Glass software developed by Voss and Haber uses face tracking to extract ‘action units’ from faces, helping individuals with autism identify the emotions of the people they interact with.
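The ‘action units’ referenced here come from the Facial Action Coding System (FACS), which describes expressions as combinations of elementary facial muscle movements. As a purely illustrative sketch (not the project’s actual implementation, which learns its classifier from data), a minimal rule-based mapping from detected action units to basic emotions might look like this:

```python
# Illustrative sketch only: maps detected FACS action units (AUs) to basic
# emotions using a few well-known combinations. The rule sets and function
# names here are assumptions for illustration, not the project's software.

# Canonical AU combinations for some basic emotions, per the FACS literature.
EMOTION_RULES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness": {1, 4, 15},       # inner brow raiser + brow lowerer + lip corner depressor
    "surprise": {1, 2, 5, 26},   # brow raisers + upper lid raiser + jaw drop
    "anger": {4, 5, 7, 23},      # brow lowerer + lid tighteners + lip tightener
}

def classify_emotion(detected_aus):
    """Return the emotion whose rule set best matches the detected AUs."""
    detected = set(detected_aus)
    best_emotion, best_score = "neutral", 0.0
    for emotion, required in EMOTION_RULES.items():
        # Score = fraction of this rule's AUs that were actually detected.
        score = len(required & detected) / len(required)
        if score > best_score:
            best_emotion, best_score = emotion, score
    # Require a majority of the rule's AUs before committing to a label.
    return best_emotion if best_score > 0.5 else "neutral"

print(classify_emotion([6, 12]))        # smiling face
print(classify_emotion([1, 2, 5, 26]))  # wide-eyed, jaw dropped
print(classify_emotion([17]))           # no rule matches well
```

In practice, a system like this would feed landmark-based AU detections from each video frame into a trained classifier rather than fixed rules, but the pipeline shape (detect face, extract action units, infer emotion) is the same.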
For the project’s second phase, a 100-child study will be conducted to see whether the system is viable as an at-home treatment.
The difficulty researchers face is translating use of the device into measurable learning, meaning that the child retains the skills even when no longer wearing the Google Glass device.
“We didn’t want this to be a prosthesis,” Haber told TechCrunch.
To understand exactly how this off-device learning would work, Haber and Voss’s team conducted a first phase in 2014, which involved 40 studies carried out in their Stanford lab.
First, the team studied the interaction between the children and computer screens, then continued by allowing the children to interact with their surroundings via a game called “Capture the Smile,” developed by MIT’s Media Lab.
For the game, children wear the Glass device and look for people who exhibit specific emotions. The game monitors performance and combines video analysis with questionnaires to ultimately build what TechCrunch calls a ‘quantitative phenotype’ of autism for each study participant.
According to the CDC, around one in 50 children is diagnosed with some form of autism.
Ultimately, the game can help track each child’s progress and long-term emotion-recognition capabilities via the device.
The next phase will take several months to complete and will allow increased parent involvement, something that is crucial to the treatment process.