Duke University Libraries
DukeSpace Scholarship by Duke Authors

Using Reinforcement Learning and Bayesian Optimization on Problems in Vehicle Dynamics and Random Vibration Environmental Testing

Date
2022
Author
Manring, Levi Hodge
Advisor
Mann, Brian P
Repository Usage Stats
98 views, 12 downloads
Abstract

To accomplish the increasingly complex tasks that humans seek to achieve through technology, advancing the understanding and application of control systems is paramount. For relatively simple dynamic systems, model-based analytical control policies, such as Proportional-Integral-Derivative (PID) or Linear-Quadratic-Regulator (LQR) controllers, can be created without much trouble. However, for systems whose dynamics are very complex or even unknown, more advanced control techniques are necessary, especially when there is an interest in optimizing the control policy. This dissertation presents the application of nonlinear control methods to challenging problems in vehicle automation and environmental testing.

The first part of this dissertation presents the application of Reinforcement Learning (RL) to control a vehicle so that it becomes unstuck from a ditch. A simulation model of a vehicle moving on an arbitrary ditch surface was developed, with consideration of four different wheel-slip conditions. The transitions between the four state-spaces were developed, along with an integration routine to accurately integrate and switch between each of the four wheel-slip conditions. Two RL algorithms were applied to control the vehicle to escape the ditch: Probabilistic Inference for Learning COntrol (PILCO) and Deep Deterministic Policy Gradient (DDPG). PILCO was used to demonstrate the need to incorporate wheel-slip and the need for a neural-network approach to capture all regions of the vehicle's dynamic behavior. Reward functions were designed to incentivize the RL algorithms to achieve the desired goal. Both Rear-Wheel-Drive (RWD) and All-Wheel-Drive (AWD) simulation models were tested, and successful control policies achieved the goal of getting the vehicle unstuck from the ditch while minimizing wheel-slip. Additionally, the control policies were tested over a wide range of ditch profile shapes, demonstrating a region of robustness.
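The reward-shaping idea described above can be sketched as follows. The function name, state variables, and weights here are illustrative assumptions, not the dissertation's actual reward design; the point is only the structure of rewarding progress toward the ditch rim while penalizing wheel-slip:

```python
def ditch_escape_reward(x, x_goal, wheel_slip, slip_weight=0.1):
    """Illustrative shaped reward for a ditch-escape task:
    reward progress toward the ditch rim (x_goal) while
    penalizing wheel-slip. All names and weights are
    hypothetical, chosen only to show the trade-off."""
    progress = -abs(x_goal - x)                  # closer to the rim -> larger reward
    slip_penalty = slip_weight * abs(wheel_slip)  # discourage spinning the wheels
    return progress - slip_penalty
```

An RL agent such as DDPG would receive a reward of this shape at every simulation step, so policies that reach the rim with little slip accumulate the highest return.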
The second part of this dissertation presents a control solution in the area of environmental testing, where there is increasing demand for more challenging and aggressive testing procedures. This dissertation presents a study on the convergence of the Matrix Power Control Algorithm (MPCA) for Random Vibration Control (RVC) testing, a particular type of environmental testing. A moving-average method was presented to reduce control-loop times and the amplification of measurement noise. Additionally, Bayesian optimization was employed to optimize the control parameters and the moving-average window size. An Euler-Bernoulli beam and the Box Assembly with Removable Component (BARC) structure were used in simulation and experiment, respectively, to demonstrate improvement in the convergence of MPCA over baseline performance. In the experimental implementation, a LabVIEW controller was developed to implement the convergence improvements. This dissertation also presents a method for comparing Frequency Response Functions (FRFs), a data-analysis problem in environmental testing. A Log-Frequency Shift (LFS) method was developed to shift a comparison FRF so that the dominant features (modes) of two FRFs are aligned, which allows existing FRF comparison metrics to correlate more closely with expert intuition. The Phase Similarity Metric (PSM) method was also introduced as an effective way to compare the phases of two FRFs. These methods were demonstrated in simulation of an Euler-Bernoulli beam and validated experimentally with random vibration applied to a thin beam.
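The moving-average idea above can be sketched in a few lines. This is an assumption-laden sketch, not the dissertation's LabVIEW implementation: `smoothed_psd` and its arguments are hypothetical names, and the scheme shown is a plain mean over the most recent response-PSD estimates:

```python
import numpy as np

def smoothed_psd(psd_history, window):
    """Average the most recent `window` response-PSD estimates
    before the next control update; a larger window damps
    measurement noise at the cost of slower tracking.
    (Illustrative sketch only.)"""
    recent = np.asarray(psd_history[-window:])
    return recent.mean(axis=0)
```

A Bayesian optimizer could then treat `window` (together with the other control parameters) as a tunable input, scoring each candidate by a convergence metric for the control loop.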

Description
Dissertation
Type
Dissertation
Department
Mechanical Engineering and Materials Science
Subject
Engineering
ditch
dynamics
nonlinear
optimal control
random vibration
vehicle automation
Permalink
https://hdl.handle.net/10161/25210
Citation
Manring, Levi Hodge (2022). Using Reinforcement Learning and Bayesian Optimization on Problems in Vehicle Dynamics and Random Vibration Environmental Testing. Dissertation, Duke University. Retrieved from https://hdl.handle.net/10161/25210.
Collections
  • Duke Dissertations
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License.
