Evaluating the effects of image persistence on dynamic target acquisition in low frame rate virtual environments


© 2016 IEEE. User performance in virtual environments with degraded visual conditions due to low frame rates is an interesting area of inquiry. Visual content shown in a low frame rate simulation has the quality of the original image, but persists for an extended period until the next frame is displayed (so-called high persistence, HP). An alternative, called low persistence (LP), involves displaying the rendered frame for a single display frame and blanking the screen while waiting for the next frame to be generated. Previous research has evaluated the usefulness of the LP technique in low frame rate simulations during a static target acquisition task. To gain greater knowledge about the LP technique, we conducted a user study to evaluate user performance and learning during a dynamic target acquisition task. The acquisition task was evaluated under a high frame rate (60 fps) condition, a traditional low frame rate HP condition (10 fps), and the experimental low frame rate LP technique. The task involved the acquisition of targets moving along several different trajectories, modeled after a shotgun trap shooting task. The results of our study indicate that the LP condition approaches high frame rate performance within certain classes of target trajectories. Interestingly, we also see that learning is consistent across conditions, indicating that it may not always be necessary to train under a visually high frame rate system to learn a particular task. We discuss the implications of using the LP technique to mitigate low frame rate issues, as well as its potential usefulness for training in low frame rate virtual environments.
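The HP/LP distinction described above can be illustrated with a minimal sketch. Assuming a 60 Hz display refreshing six times per 10 fps rendered frame (the rates used in the study), the function names and structure below are illustrative, not taken from the paper's implementation:

```python
# Illustrative sketch (not the authors' code): what a 60 Hz display shows on
# each refresh when the simulation renders at only 10 fps, i.e. six display
# refreshes per rendered frame.
DISPLAY_HZ = 60
RENDER_FPS = 10
REFRESHES_PER_FRAME = DISPLAY_HZ // RENDER_FPS  # 6 refreshes per frame

def high_persistence(num_refreshes):
    """HP: each rendered frame is held on screen until the next one arrives."""
    return [f"frame{r // REFRESHES_PER_FRAME}" for r in range(num_refreshes)]

def low_persistence(num_refreshes):
    """LP: each rendered frame is shown for one refresh, then the screen is
    blanked while waiting for the next frame to be generated."""
    return [f"frame{r // REFRESHES_PER_FRAME}" if r % REFRESHES_PER_FRAME == 0
            else "blank"
            for r in range(num_refreshes)]
```

For six refreshes (one rendered frame), HP yields `["frame0"] * 6`, while LP yields `["frame0"]` followed by five `"blank"` refreshes, which is the trade-off the study evaluates: a stale but continuous image versus a fresh but intermittent one.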






Published Version (Please cite this version)


Publication Info

Zielinski, David J., Hrishikesh M. Rao, Nicholas D. Potter, Marc A. Sommer, Lawrence G. Appelbaum, and Regis Kopper (2016). "Evaluating the effects of image persistence on dynamic target acquisition in low frame rate virtual environments." 2016 IEEE Symposium on 3D User Interfaces (3DUI 2016) - Proceedings, pp. 133–140. doi:10.1109/3DUI.2016.7460043. Retrieved from https://hdl.handle.net/10161/11598.




David Zielinski

AR/VR Technology Specialist

David J. Zielinski is currently a technology specialist for the Duke University OIT Co-Lab (2021-present). He previously worked in the Department of Art, Art History & Visual Studies (2018-2020) and the DiVE Virtual Reality Lab (2004-2018), under the direction of Regis Kopper (2013-2018), Ryan P. McMahan (2012), and Rachael Brady (2004-2012). He received his bachelor's (2002) and master's (2004) degrees in Computer Science from the University of Illinois at Urbana-Champaign, where he worked on a suite of virtual reality musical instruments under the guidance of Bill Sherman. He is an experienced VR/AR software developer, researcher, and educator.


Marc A. Sommer

Professor of Biomedical Engineering

We study circuits for cognition. Using a combination of neurophysiology and biomedical engineering, we focus on the interaction between brain areas during visual perception, decision-making, and motor planning. Specific projects include the role of frontal cortex in metacognition, the role of cerebellar-frontal circuits in action timing, the neural basis of "good enough" decision-making (satisficing), and the neural mechanisms of transcranial magnetic stimulation (TMS).

Unless otherwise indicated, scholarly articles published by Duke faculty members are made available here with a CC-BY-NC (Creative Commons Attribution Non-Commercial) license, as enabled by the Duke Open Access Policy. If you wish to use the materials in ways not already permitted under CC-BY-NC, please consult the copyright owner. Other materials are made available here through the author’s grant of a non-exclusive license to make their work openly accessible.