Automatic Behavioral Analysis from Faces and Applications to Risk Marker Quantification for Autism
Date
2018
Authors
Hashemi, Jordan
Abstract
This dissertation presents novel methods for behavioral analysis with a focus on early risk marker identification for autism. Our contributions include a method for pose-invariant facial expression recognition, a self-contained mobile application for behavioral analysis, and a framework to calibrate a trained deep model with data synthesis and augmentation. First, we focus on pose-invariant facial expression recognition. It is known that 3D features have higher discriminative power than 2D features; however, 3D features are usually not readily available at testing time. For pose-invariant facial expression recognition, we utilize multi-modal features at training time and exploit the cross-modal relationship at testing time. We extend our pose-invariant facial expression recognition method and present additional methods to characterize a multitude of behaviors related to risk marker identification for autism. In practice, identification of children with neurodevelopmental disorders requires low-specificity screening with questionnaires followed by time-consuming, in-person observational analysis by highly trained clinicians. To alleviate this time- and resource-expensive risk identification process, we develop a self-contained, closed-loop mobile application that records a child’s face while he/she watches specific, expertly curated movie stimuli and automatically analyzes the child’s behavioral responses. We validate our methods against the ratings of expert human raters. Using the developed methods, we present findings on group differences in behavioral risk markers for autism and on interactions between motivational framing context, facial affect, and memory outcome. Lastly, we present a framework that uses face synthesis to calibrate trained deep models to deployment scenarios they were not trained on. Face synthesis creates novel realizations of a face image; it is an effective technique, but it is predominantly employed only at training time and in a blind manner (e.g., blindly synthesizing as many variations as possible). We present a framework that optimally selects synthesis variations and employs them both during training and at testing, leading to more efficient training and better performance.
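To make the cross-modal training idea above concrete, the following is a minimal, hypothetical PyTorch sketch: both 2D and 3D features are used during training, with the 2D embedding pulled toward the corresponding 3D embedding through an alignment loss, while only 2D features are required at test time. The architecture, feature dimensions, loss weighting, and variable names are illustrative assumptions and do not reproduce the models developed in the dissertation.

# Sketch: train with 2D + 3D features, test with 2D only (cross-modal alignment).
# All dimensions and weights below are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_dim, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, emb_dim))
    def forward(self, x):
        return self.net(x)

enc2d = Encoder(in_dim=136)     # e.g., 68 2D facial landmarks (x, y)
enc3d = Encoder(in_dim=204)     # e.g., 68 3D facial landmarks (x, y, z)
classifier = nn.Linear(128, 7)  # e.g., 7 facial expression classes

params = list(enc2d.parameters()) + list(enc3d.parameters()) + list(classifier.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(x2d, x3d, labels, align_weight=0.5):
    """One step: classify from the 2D embedding while pulling it toward the
    3D embedding, which is available only at training time."""
    z2d, z3d = enc2d(x2d), enc3d(x3d)
    loss = ce(classifier(z2d), labels) + align_weight * nn.functional.mse_loss(z2d, z3d)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def predict(x2d):
    """At test time only the 2D features are needed."""
    with torch.no_grad():
        return classifier(enc2d(x2d)).argmax(dim=1)

# Toy usage with random data, just to show the expected shapes.
x2d, x3d = torch.randn(8, 136), torch.randn(8, 204)
labels = torch.randint(0, 7, (8,))
train_step(x2d, x3d, labels)
print(predict(x2d))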
Citation
Hashemi, Jordan (2018). Automatic Behavioral Analysis from Faces and Applications to Risk Marker Quantification for Autism. Dissertation, Duke University. Retrieved from https://hdl.handle.net/10161/16951.