Security-Aware Sensor Fusion in Autonomy
Date
2025
Authors
Hallyburton, Spencer
Abstract
Autonomous vehicles (AVs), such as unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs), have made remarkable progress over the past decade. These systems now perform complex tasks ranging from self-driving to aerial surveillance to infrastructure monitoring. However, as these technologies move closer to large-scale deployment, their increasing dependence on black-box deep neural networks to build situational awareness exposes them to security risks. While much research has focused on improving safety under benign conditions, adversarial threats such as sensor spoofing, data manipulation, and insider attacks remain underexplored and poorly defended in real-world systems. Addressing these emerging security challenges is essential to realizing the full potential of safe autonomy in safety-critical environments.
This dissertation advances the field of security-aware sensor fusion in autonomy through four key contributions spanning research tools, vulnerability analysis, and assured algorithms. First, it introduces AVstack, an open-source, reconfigurable software platform for the design, implementation, testing, and analysis of AVs. AVstack supports rapid prototyping, longitudinal evaluation, and multi-agent collaboration by bridging the gap between disparate datasets, simulators, and AV components. Its modular design enables reusable and flexible experimentation across perception, planning, and control pipelines, providing a foundation for advancing research on multi-sensor and multi-agent autonomy.
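As a concrete illustration of this modular design, the sketch below composes a swappable perception-planning-control pipeline. The class and function names are hypothetical and do not reflect the actual AVstack API; the point is that any stage can be replaced (e.g., a different detector, or a dataset replay in place of a simulator) without touching the rest of the stack.

```python
# Hypothetical sketch (NOT the actual AVstack API): a modular pipeline
# whose perception, planning, and control stages are independently
# swappable across datasets and simulators.
from dataclasses import dataclass
from typing import Any, Callable, List


@dataclass
class Pipeline:
    perception: Callable[[Any], List[dict]]  # sensor frame -> detections
    planner: Callable[[List[dict]], dict]    # detections -> plan
    controller: Callable[[dict], dict]       # plan -> actuation command

    def step(self, frame: Any) -> dict:
        detections = self.perception(frame)
        plan = self.planner(detections)
        return self.controller(plan)


# Stub components; in practice these would wrap trained models,
# dataset replays, or simulator interfaces.
lidar_detector = lambda frame: [{"class": "car", "xyz": (10.0, 0.0, 0.0)}]
stop_planner = lambda dets: {"speed": 0.0 if dets else 10.0}
simple_controller = lambda plan: {"throttle": plan["speed"] / 10.0}

pipeline = Pipeline(lidar_detector, stop_planner, simple_controller)
print(pipeline.step(frame=None))  # {'throttle': 0.0}: brake for the car
```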
Building on this platform, the dissertation presents the first comprehensive security analysis of LiDAR and camera-LiDAR fusion under realistic black-box threat models. It introduces the frustum attack: a novel, stealthy LiDAR spoofing technique capable of evading sensor fusion defenses by maintaining semantic consistency between sensing modalities. Through evaluations of state-of-the-art algorithms on industry-grade simulators, this work identifies LiDAR as a critical component of the trusted computing base. It further demonstrates that both physical and cyber-level attacks can compromise perception pipelines, motivating the development of security-aware fusion defenses.
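The sketch below illustrates the geometry that makes such a spoof semantically consistent: spoofed LiDAR returns are sampled inside the 3D viewing frustum of a genuine camera detection, so both modalities agree on an object while its apparent depth is attacker-chosen. The pinhole model and all parameters are generic illustrations, not the dissertation's attack code.

```python
# Simplified frustum-spoof geometry (illustrative only): back-project
# pixels from a real 2D detection to an attacker-chosen depth.
import numpy as np


def frustum_spoof_points(bbox_2d, K, depth, n_points=60, jitter=0.05):
    """Spoofed LiDAR returns inside the frustum of a camera detection.

    bbox_2d: (u_min, v_min, u_max, v_max) pixel box of a real detection
    K:       3x3 camera intrinsic matrix
    depth:   attacker-chosen distance along the viewing rays (meters)
    """
    u_min, v_min, u_max, v_max = bbox_2d
    rng = np.random.default_rng(0)
    # Sample pixels inside the box, then back-project each pixel ray:
    # x = depth * K^{-1} [u, v, 1]^T, normalized so that z == depth.
    uv = rng.uniform([u_min, v_min], [u_max, v_max], size=(n_points, 2))
    pix = np.hstack([uv, np.ones((n_points, 1))])
    rays = (np.linalg.inv(K) @ pix.T).T
    pts = depth * rays / rays[:, 2:3]
    pts += rng.normal(0.0, jitter, pts.shape)  # surface-like scatter
    return pts  # (n_points, 3) points in the camera frame


K = np.array([[720.0, 0.0, 640.0], [0.0, 720.0, 360.0], [0.0, 0.0, 1.0]])
spoof = frustum_spoof_points((600, 300, 700, 420), K, depth=8.0)
```

Because every spoofed point projects back inside the original bounding box, a naive camera-LiDAR consistency check sees nothing amiss.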
To strengthen resilience against perception attacks in multi-agent systems, this dissertation introduces a full-stack framework for security-aware sensor fusion in networks of collaborative autonomous agents. By modeling trust as a latent, probabilistic variable informed by real-time perception and agent behavior, our Multi-Agent Trust Estimation (MATE) framework enables detection and identification of (dis)trusted agents and information. We integrate this trust model into a Trust-Informed Fusion step that weights agent contributions to shared situational awareness by trust estimates, mitigating the influence of insider threats. Experiments in high-fidelity smart city simulations demonstrate that this approach significantly reduces the impact of false positives, false negatives, and translation attacks, while maintaining high performance in benign scenarios. These results advance the state of collaborative autonomy by enabling more secure and reliable situational awareness in contested, safety-critical environments.
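A minimal sketch of the trust-then-fuse idea follows. It assumes, as one plausible instantiation of a latent probabilistic trust variable, Beta-distributed trust updated by agreement evidence; the update rule and all names are illustrative, not the MATE implementation.

```python
# Illustrative trust estimation and trust-weighted fusion
# (not the MATE implementation).
import numpy as np


class AgentTrust:
    """Latent trust as Beta(alpha, beta); the mean is the trust estimate."""

    def __init__(self):
        self.alpha, self.beta = 1.0, 1.0  # uninformative prior

    def update(self, agrees: bool, weight: float = 1.0):
        # Pseudo-counts: agreement with corroborated tracks raises trust;
        # unexplained disagreement lowers it.
        if agrees:
            self.alpha += weight
        else:
            self.beta += weight

    @property
    def mean(self) -> float:
        return self.alpha / (self.alpha + self.beta)


def trust_informed_fusion(estimates, trusts):
    """Fuse per-agent estimates, weighting each by its trust mean."""
    w = np.array([t.mean for t in trusts])
    return (w[:, None] * np.asarray(estimates)).sum(axis=0) / w.sum()


# Three agents report an object's position; agent 2 is a lying insider.
trusts = [AgentTrust() for _ in range(3)]
for _ in range(10):
    trusts[2].update(agrees=False)  # repeated disagreement -> distrust
fused = trust_informed_fusion([[10.0, 0.0], [10.2, 0.1], [25.0, 5.0]], trusts)
print(fused)  # dominated by the two consistent reports
```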
In addition to collaborative defenses, this dissertation proposes a neuro-symbolic sensor fusion architecture for enhancing perception assuredness in single-agent platforms equipped with multiple complementary sensors. Recognizing the limitations of deep neural networks as pattern-matchers vulnerable to adversarial manipulation, we introduce a hybrid approach to perception in autonomy that integrates structured logical reasoning with data-driven learning. Leveraging advancements in scene graph generation and foundation models, this neuro-symbolic framework grounds perception in high-level semantic relationships and commonsense reasoning. Through feasibility studies using physics-based simulators and real-world datasets, we show that this approach improves resilience, interpretability, and security in AI-driven perception. By bridging low-level sensing and high-level reasoning, our neuro-symbolic architecture lays the groundwork for more trustworthy and robust perception.
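As a feasibility-level illustration of this hybrid idea, the sketch below screens neural detections against a hypothetical symbolic rule set, flagging outputs that violate high-level semantic constraints; the rules and the detection schema are invented for illustration only.

```python
# Illustrative neuro-symbolic filter: neural detections are validated
# against commonsense constraints before being trusted downstream.
# The rule set and detection schema here are hypothetical.

def on_road(det):
    return det.get("surface") == "road"

def plausible_size(det):
    lo, hi = {"car": (3.0, 6.0), "pedestrian": (0.3, 1.0)}.get(
        det["class"], (0.1, 30.0))
    return lo <= det["length_m"] <= hi

RULES = {"car": [on_road, plausible_size], "pedestrian": [plausible_size]}

def symbolic_filter(detections):
    """Keep detections consistent with the semantic rules; flag the rest
    for downstream scrutiny (e.g., re-querying another sensor)."""
    kept, flagged = [], []
    for det in detections:
        ok = all(rule(det) for rule in RULES.get(det["class"], []))
        (kept if ok else flagged).append(det)
    return kept, flagged

dets = [
    {"class": "car", "length_m": 4.5, "surface": "road"},
    {"class": "car", "length_m": 4.2, "surface": "sidewalk"},  # suspicious
]
kept, flagged = symbolic_filter(dets)
print(len(kept), len(flagged))  # 1 1
```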
Citation
Hallyburton, Spencer (2025). Security-Aware Sensor Fusion in Autonomy. Dissertation, Duke University. Retrieved from https://hdl.handle.net/10161/33333.
