Deep Learning for Applications in Inverse Modeling, Legislator Analysis, and Computer Vision for Security
Using machine learning, particularly deep learning, judiciously requires identifying how to extract features from data and how to leverage those features effectively to make predictions. This dissertation concerns deep learning methods for three applications: inverse modeling, legislator analysis, and computer vision for security. To address inverse problems, we present a new method, the Mixture Manifold Network, which uses multiple neural backward models in a forward-backward architecture. We experimentally demonstrate that the Mixture Manifold Network outperforms computationally fast generative-model baselines while approaching the performance of computationally slow iterative methods. For legislator modeling, we seek to learn representations that capture legislator attitudes that may not be reflected in their voting records. We present a model that instead considers their tweeting behavior, using reactions to former President Donald Trump on Twitter as an illustrative example. For computer vision, we address two security-related applications using deep convolutional feature extractors. In the first, we combine domain adaptation with deep object detection to identify threatening items, such as guns, knives, and blunt objects, in X-ray scans of air passenger luggage. In the second, we apply an occlusion-robust classifier to infrared imagery. For each application, we describe the relevant datasets, how the presented methods extract features from that data, and how each of our proposed models produces efficacious predictions.
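The forward-backward idea behind the inverse-modeling approach can be illustrated with a toy sketch: several backward models each propose a candidate inverse solution, the known forward model re-simulates every candidate, and the candidate whose re-simulation best matches the target observation is kept. All functions and names below are hypothetical stand-ins for illustration, not the dissertation's actual models.

```python
def forward_model(x):
    """Hypothetical known forward simulator: maps a design x to an observation."""
    return x ** 2

# Several "backward models": each proposes a candidate inverse solution for a
# target observation y. In the dissertation these would be learned neural
# networks; here they are toy closed-form guesses covering different branches
# of the (non-unique) inverse problem.
backward_models = [
    lambda y: y ** 0.5,     # branch 1: positive root
    lambda y: -(y ** 0.5),  # branch 2: negative root
    lambda y: y / 2.0,      # a deliberately poor candidate
]

def invert(y_target):
    """Propose a candidate with every backward model, re-simulate each with
    the forward model, and keep the candidate whose re-simulated observation
    best matches y_target."""
    candidates = [b(y_target) for b in backward_models]
    return min(candidates, key=lambda x: abs(forward_model(x) - y_target))
```

Re-simulating candidates through the forward model is what lets a multi-branch inverse method cope with non-uniqueness: each backward model can specialize to one region of the solution manifold, and the forward pass arbitrates among them.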
Spell, Gregory Paul (2023). Deep Learning for Applications in Inverse Modeling, Legislator Analysis, and Computer Vision for Security. Dissertation, Duke University. Retrieved from https://hdl.handle.net/10161/27647.
Duke's student scholarship is made available to the public using a Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND) license.