Extending Test-Time Domain Adaptation to Reinforcement Learning for Robot Control
Date
2025
Authors
Oswald, Christopher
Abstract
For robots to be useful in dynamic, real-world environments, the machine learning models used for robot control must generalize to domains that were not encountered during training. Test-time domain adaptation methods aim to allow machine learning models to adapt to new target domains as information becomes available at test time. In this work, we first motivate the need for source-free, unsupervised test-time domain adaptation methods for robot control. We then investigate the limitations of using self-supervised auxiliary tasks to adapt the soft actor-critic algorithm in continuous control reinforcement learning environments. We find that self-supervised auxiliary tasks pose a significant risk of overfitting to the auxiliary objective at test time, severely degrading performance on the main control task. This risk is especially acute in the continual learning setting, where a model must adapt to multiple target domains sequentially. We conclude by suggesting directions for future research to improve this method and advance test-time domain adaptation more broadly.
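To illustrate the adaptation scheme the abstract describes, the sketch below shows one way a self-supervised auxiliary task can update a shared encoder at test time while the control policy stays frozen. It is a minimal sketch only: the inverse-dynamics auxiliary task, the network shapes, and all names (Encoder, InverseDynamicsHead, test_time_update) are assumptions for illustration, not the thesis's actual implementation. The comment on the optimizer step marks where the overfitting risk described above enters: repeated unsupervised updates can drive the encoder toward features that serve the auxiliary task but no longer support the main control policy.

import torch
import torch.nn as nn

# Hypothetical shapes: flat observations and continuous actions.
OBS_DIM, ACT_DIM, FEAT_DIM = 24, 6, 64

class Encoder(nn.Module):
    """Shared feature extractor used by both the policy and the auxiliary head."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, FEAT_DIM))

    def forward(self, obs):
        return self.net(obs)

class InverseDynamicsHead(nn.Module):
    """Self-supervised auxiliary task: predict the action taken between s_t and s_{t+1}."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * FEAT_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, ACT_DIM))

    def forward(self, z_t, z_next):
        return self.net(torch.cat([z_t, z_next], dim=-1))

encoder, aux_head = Encoder(), InverseDynamicsHead()
# Only the shared encoder is updated at test time; the SAC policy head stays frozen.
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)

def test_time_update(obs, action, next_obs):
    """One unsupervised adaptation step on a transition observed during deployment."""
    z_t, z_next = encoder(obs), encoder(next_obs)
    aux_loss = nn.functional.mse_loss(aux_head(z_t, z_next), action)
    opt.zero_grad()
    aux_loss.backward()
    opt.step()  # repeated steps can overfit the encoder to the auxiliary task
    return aux_loss.item()

# Example usage on a batch of transitions collected in the target domain.
obs = torch.randn(32, OBS_DIM)
action = torch.randn(32, ACT_DIM)
next_obs = torch.randn(32, OBS_DIM)
print(test_time_update(obs, action, next_obs))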
Citation
Oswald, Christopher (2025). Extending Test-Time Domain Adaptation to Reinforcement Learning for Robot Control. Master's thesis, Duke University. Retrieved from https://hdl.handle.net/10161/32915.
Except where otherwise noted, student scholarship that was shared on DukeSpace after 2009 is made available to the public under a Creative Commons Attribution / Non-commercial / No derivatives (CC-BY-NC-ND) license. All rights in student work shared on DukeSpace before 2009 remain with the author and/or their designee, whose permission may be required for reuse.
