Transfer learning in continuous RL under unobservable contextual information

dc.contributor.advisor

Zavlanos, Michael

dc.contributor.author

Liu, Chenyu

dc.date.accessioned

2021-01-12T22:32:13Z

dc.date.available

2021-07-11T08:17:08Z

dc.date.issued

2020

dc.department

Mechanical Engineering and Materials Science

dc.description.abstract

In this paper, we consider a transfer Reinforcement Learning (RL) problem in continuous state and action spaces, under unobserved contextual information. The context here can represent a specific mental view of the world that an expert agent has formed through past interactions with this world. We assume that this context is not accessible to a learner agent, who can only observe the expert data and does not know how they were generated. Our goal is then to use the context-aware continuous expert data to learn an optimal context-unaware policy for the learner using only a few new data samples. To date, such problems are typically solved using imitation learning, which assumes that both the expert and learner agents have access to the same information. However, if the learner does not know the expert context, using the expert data alone will result in a biased learner policy and will require many new data samples to improve. To address this challenge, we formulate the learning problem that the learner agent solves as a causal bound-constrained Multi-Armed Bandit (MAB) problem. The arms of this MAB correspond to a set of basis policy functions that can be initialized in an unsupervised way using the expert data and represent the different expert behaviors affected by the unobserved context. The MAB constraints, on the other hand, correspond to causal bounds on the accumulated rewards of these basis policy functions, which we also compute from the expert data. The solution to this MAB allows the learner agent to select the best basis policy and improve it online. The use of causal bounds reduces the exploration variance and, therefore, improves the learning rate. We provide numerical experiments on an autonomous driving example showing that our proposed transfer RL method improves the learner's policy faster than imitation learning methods and enjoys much lower variance during training.
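For intuition, the sketch below shows one way a causal bound-constrained bandit over basis policies could look in Python. It is only an illustration of the idea described in the abstract, not the algorithm developed in the thesis: the class name `CausalBoundUCB`, the UCB1-style index, and the bound arrays `causal_lo`/`causal_hi` are assumptions introduced here, with the bounds standing in for the causal reward bounds estimated from the expert data.

```python
import numpy as np

class CausalBoundUCB:
    """Hypothetical UCB-style bandit whose arms are basis policies and whose
    optimistic indices are clipped into expert-derived causal reward bounds."""

    def __init__(self, n_arms, causal_lo, causal_hi):
        self.n = np.zeros(n_arms)        # pull counts per basis policy
        self.mean = np.zeros(n_arms)     # empirical mean reward per arm
        self.lo = np.asarray(causal_lo)  # causal lower bounds on expected reward
        self.hi = np.asarray(causal_hi)  # causal upper bounds on expected reward
        self.t = 0

    def select(self):
        self.t += 1
        # Play every basis policy once before trusting the index.
        untried = np.where(self.n == 0)[0]
        if untried.size > 0:
            return int(untried[0])
        bonus = np.sqrt(2.0 * np.log(self.t) / self.n)
        # Clipping the optimistic index into the causal interval is what
        # shrinks the exploration variance relative to a plain UCB rule.
        index = np.clip(self.mean + bonus, self.lo, self.hi)
        return int(np.argmax(index))

    def update(self, arm, reward):
        self.n[arm] += 1
        self.mean[arm] += (reward - self.mean[arm]) / self.n[arm]

# Hypothetical usage: three basis policies with expert-derived bounds.
bandit = CausalBoundUCB(3, causal_lo=[0.1, 0.0, 0.2], causal_hi=[0.6, 0.4, 0.9])
arm = bandit.select()            # run the chosen basis policy for one episode
bandit.update(arm, reward=0.5)   # feed back its accumulated reward
```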

dc.identifier.uri

https://hdl.handle.net/10161/22222

dc.subject

Artificial intelligence

dc.subject

Robotics

dc.subject

Computer science

dc.title

Transfer learning in continuous RL under unobservable contextual information

dc.type

Master's thesis

duke.embargo.months

5.884931506849314

Files

Original bundle

Name: Liu_duke_0066N_15964.pdf
Size: 1.34 MB
Format: Adobe Portable Document Format
