dc.description.abstract |
<p>Physiological signals measured from the body, such as brain activity and motor
behavior, can be used to infer different physiological states or processes in humans.
Signal processing and machine learning often play a fundamental role in this inference,
providing principled approaches to analyzing and interpreting physiological data for a
variety of applications, such as medical diagnosis and human-computer interaction.
In this work, these approaches were utilized and adapted for two separate applications:
brain-computer interfaces (BCIs) and the assessment of visual-motor skill in virtual
reality (VR).</p><p>The goal of BCI technology is to allow people with severe motor
impairments to control a device without the need for voluntary muscle control. Conventional
BCIs operate by converting electrophysiological signals measured from the brain into
meaningful control commands, eliminating the need for physical interaction with the
system. However, despite encouraging improvements over the last decade, BCI use remains
primarily in research laboratories. One of the biggest obstacles limiting their daily
in-home use is the significant amount of time and expertise that is often required
to set up the biosensors (electrodes) for recording brain activity. The most common
modality for brain recording is electroencephalography (EEG), which typically employs
gel-based “wet” electrodes for recording signals with high signal-to-noise ratios
(SNRs). However, while wet electrodes record higher-quality signals than dry electrodes,
they hinder frequent use because applying them to the scalp is a complex and
time-consuming process. Therefore, in this research, a signal processing
solution was implemented to help mitigate noise in a dry electrode system to facilitate
a more practical BCI device for everyday use in people with severe motor impairments.
This solution used a Bayesian algorithm that automatically determined, online, how much
EEG data to collect based on the quality of the incoming data. The hypothesis for this
research was that the algorithm would detect the need for additional data collection in
low-SNR scenarios, such as those encountered with dry electrode systems, and would
collect enough data to improve BCI performance.
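</p><p>As a rough illustration of how such a data-driven stopping rule can operate, the sketch
below accumulates classifier evidence for each candidate selection and stops collecting data
once the posterior over candidates is sufficiently confident. It is an illustration only: the
Gaussian score model, the confidence threshold, and all names (e.g. confidence_threshold) are
assumptions rather than the algorithm used in this work.</p><pre>
import numpy as np

rng = np.random.default_rng(0)

def classifier_score(is_target, snr):
    """Simulated per-stimulus classifier score; target stimuli score higher on
    average, and the SNR controls how well the two distributions separate."""
    return rng.normal(loc=snr if is_target else 0.0, scale=1.0)

def bayesian_stopping_trial(n_choices=6, target=2, snr=0.5,
                            confidence_threshold=0.95, max_sequences=10):
    """Collect stimulus sequences until the posterior over choices is
    confident enough, then stop and return the selected choice."""
    log_posterior = np.zeros(n_choices)  # uniform prior, in log space
    for sequence in range(1, max_sequences + 1):
        for choice in range(n_choices):
            score = classifier_score(choice == target, snr)
            # Log-likelihood ratio of "this choice is the target" versus
            # "it is not", for unit-variance Gaussian score distributions.
            log_posterior[choice] += (score ** 2 - (score - snr) ** 2) / 2.0
        posterior = np.exp(log_posterior - log_posterior.max())
        posterior /= posterior.sum()
        if posterior.max() >= confidence_threshold:
            break
    return int(np.argmax(posterior)), sequence

for snr in (1.5, 0.4):  # higher SNR stops earlier; lower SNR needs more data
    choice, sequences_used = bayesian_stopping_trial(snr=snr)
    print(f"snr={snr}: selected choice {choice} after {sequences_used} sequences")
</pre><p>Under a rule of this kind, lower-SNR input simply requires more sequences to reach
the confidence threshold, which is the behavior hypothesized for the dry electrode
recordings.</p>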
<p>In addition to this solution, two anomaly detection techniques were implemented to
characterize the differences between the wet and dry electrode recordings and to determine
whether additional signal processing would further improve BCI performance with dry
electrodes.
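</p><p>The two techniques are not named in this abstract; as one plausible example of the
general idea (the detector, the features, and the numbers below are assumptions made for
illustration), an anomaly detector can be trained on epochs from wet-electrode recordings and
then used to flag dry-electrode epochs whose signal statistics deviate from that reference:</p><pre>
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

def epoch_features(n_epochs, noise_scale):
    """Per-epoch summary features (here, channel variance and peak-to-peak
    amplitude); noisier recordings yield larger, more variable values."""
    variance = rng.gamma(shape=2.0, scale=noise_scale, size=(n_epochs, 1))
    peak_to_peak = rng.gamma(shape=2.0, scale=2.0 * noise_scale, size=(n_epochs, 1))
    return np.hstack([variance, peak_to_peak])

wet = epoch_features(n_epochs=200, noise_scale=1.0)  # reference recordings
dry = epoch_features(n_epochs=200, noise_scale=3.0)  # noisier recordings

# Fit the detector on wet-electrode epochs only, then score both sets.
detector = IsolationForest(random_state=0).fit(wet)
wet_flagged = (detector.predict(wet) == -1).mean()
dry_flagged = (detector.predict(dry) == -1).mean()
print(f"flagged as anomalous: wet={wet_flagged:.0%}, dry={dry_flagged:.0%}")
</pre><p>The fraction of flagged epochs then gives one simple, quantitative view of how the
two electrode types differ.</p><p>Taken as a whole, this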
research demonstrated the impact of noise in dry electrode recordings on BCI performance
and showed the potential of a signal processing approach for noise mitigation. However,
further signal processing efforts are likely necessary for full mitigation and adoption
of dry electrodes for use in the home.</p><p>The second study presented in this work
focused on signal processing and machine learning techniques for assessing visual-motor
skill during a simulated marksmanship task in immersive VR. Immersive VR systems offer
flexible control of an interactive environment, along with precise position and orientation
tracking of realistic movements. These systems can also be used in conjunction with
brain monitoring techniques, such as EEG, to record neural signals as individuals
perform complex motor tasks. In this study, these elements were fused to investigate
the psychophysiological mechanisms underlying visual-motor skill during a multi-day
simulated marksmanship training regimen. On each of three days, twenty participants used a
mock firearm controller to shoot simulated clay pigeons launched from behind a trap house.
Through repeated practice of
this protocol, participants significantly improved their shot accuracy and precision.
Furthermore, systematic changes in the variables extracted from the EEG and kinematic
signals were observed that accompanied these improvements in performance. Using a
machine learning approach, two predictive classification models were developed to
automatically identify the combinations of EEG and kinematic variables that best
differentiated successful (target hit) from unsuccessful (target miss) trials and
high-performing participants (top quartile) from low-performing participants (bottom
quartile).
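</p><p>A minimal sketch of this kind of classification analysis is shown below; the feature
names, the synthetic labels, and the choice of a cross-validated logistic regression are
assumptions made for illustration, not the models developed in the study:</p><pre>
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n_trials = 400

# Hypothetical per-trial features: EEG band power and kinematic summaries.
eeg_alpha_power = rng.normal(size=n_trials)
eeg_theta_power = rng.normal(size=n_trials)
aim_path_length = rng.normal(size=n_trials)
aim_smoothness = rng.normal(size=n_trials)
X = np.column_stack([eeg_alpha_power, eeg_theta_power,
                     aim_path_length, aim_smoothness])

# Synthetic hit/miss labels: hits more likely with shorter, smoother aim paths.
logit = 0.3 * eeg_alpha_power - 0.8 * aim_path_length + 0.6 * aim_smoothness
y = (1.0 / (1.0 + np.exp(-logit)) > rng.random(n_trials)).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
# Standardized coefficients indicate which variables drive the separation.
print(model.fit(X, y)[-1].coef_)
</pre><p>Cross-validated performance and the standardized coefficients show how well, and
through which variables, the two classes can be separated.</p>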
<p>Finally, to capture the more complex patterns of human motion in the spatiotemporal
domain, time series methods for motion trajectory prediction were developed that used the
raw tracking data to estimate the future motion of the firearm controller. The objective of
this approach was to predict whether the controller’s virtually projected ray would intersect
the target before the trigger was pulled, with the eventual goal of alerting participants in
real time when a shot was likely to be suboptimal.
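</p><p>A minimal sketch of the prediction problem is shown below; it substitutes a
deliberately simple constant-velocity extrapolation of the aim direction for the time series
models developed in the study, and the sampling rate, prediction horizon, and hit radius are
assumed values:</p><pre>
import numpy as np

def predict_hit(aim_history, target_direction, dt, horizon_s, hit_radius_rad=0.02):
    """Extrapolate recent aim samples (azimuth, elevation in radians) with a
    constant-velocity model and report whether the projected ray is expected
    to pass within hit_radius_rad of the target before the horizon elapses."""
    aim = np.asarray(aim_history, dtype=float)   # shape: (n_samples, 2)
    velocity = (aim[-1] - aim[-2]) / dt          # rad/s along each axis
    steps = int(round(horizon_s / dt))
    future = aim[-1] + np.outer(np.arange(1, steps + 1) * dt, velocity)
    miss_distance = np.linalg.norm(future - target_direction, axis=1)
    return bool(hit_radius_rad >= miss_distance.min())

# Hypothetical usage: 90 Hz tracking of an aim sweep toward a target at
# (0.10, 0.05) rad, predicting 200 ms ahead of the expected trigger pull.
dt = 1.0 / 90.0
sweep = np.stack([np.linspace(0.00, 0.06, 10),
                  np.linspace(0.00, 0.03, 10)], axis=1)
print(predict_hit(sweep, target_direction=np.array([0.10, 0.05]),
                  dt=dt, horizon_s=0.2))
</pre><p>In a real-time setting, a predicted miss at this point could be used to cue the
participant before the trigger is pulled.</p><p>Overall, the findings from this research
project point towards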
a comprehensive psychophysiological signal processing approach that can be used to
characterize and predict human performance in VR, which has the potential to revolutionize
the design of current simulation-based training programs for realistic visual-motor
tasks.</p>