Browsing by Author "Zielinski, David J"
Now showing 1 - 5 of 5
Item (Open Access)
Evaluating the effects of image persistence on dynamic target acquisition in low frame rate virtual environments
(2016 IEEE Symposium on 3D User Interfaces, 3DUI 2016 - Proceedings, 2016-04-26) Zielinski, David J; Rao, Hrishikesh M; Potter, Nicholas D; Sommer, Marc A; Appelbaum, Lawrence G; Kopper, Regis
© 2016 IEEE. User performance in virtual environments with degraded visual conditions due to low frame rates is an interesting area of inquiry. Visual content shown in a low frame rate simulation has the quality of the original image, but persists for an extended period until the next frame is displayed (so-called high persistence, HP). An alternative, called low persistence (LP), involves displaying the rendered frame for a single display frame and blanking the screen while waiting for the next frame to be generated. Previous research has evaluated the usefulness of the LP technique in low frame rate simulations during a static target acquisition task. To gain greater knowledge about the LP technique, we conducted a user study to evaluate user performance and learning during a dynamic target acquisition task. The acquisition task was evaluated under a high frame rate (60 fps) condition, a traditional low frame rate HP condition (10 fps), and the experimental low frame rate LP technique. The task involved the acquisition of targets moving along several different trajectories, modeled after a shotgun trap shooting task. The results of our study indicate that the LP condition approaches high frame rate performance within certain classes of target trajectories. Interestingly, we also see that learning is consistent across conditions, indicating that it may not always be necessary to train under a visually high frame rate system to learn a particular task.
We discuss implications of using the LP technique to mitigate low frame rate issues, as well as its potential usefulness for training in low frame rate virtual environments.

Item (Open Access)
Exploring the effects of image persistence in low frame rate virtual environments
(2015 IEEE Virtual Reality Conference, VR 2015 - Proceedings, 2015-08-25) Zielinski, David J; Rao, Hrishikesh M; Sommer, Marc A; Kopper, Regis
© 2015 IEEE. Virtual reality applications aim to provide real-time graphics running at high refresh rates. However, there are many situations in which this is not possible due to simulation or rendering issues. When running at low frame rates, several aspects of the user experience are affected. For example, each frame is displayed for an extended period of time, causing a high persistence image artifact. The effect of this artifact is that movement may lose continuity, with the image jumping from one frame to the next. In this paper, we discuss our initial exploration of the effects of high persistence frames caused by low refresh rates, comparing them to high frame rates and to a technique we developed to mitigate the effects of low frame rates. In this technique, the low frame rate simulation images are displayed with low persistence by blanking out the display during the extra time such an image would otherwise be shown. To isolate the visual effects, we constructed a simulator for low and high persistence displays that does not affect input latency. We conducted a controlled user study comparing the three conditions on 3D selection and navigation tasks. Results indicate that the low persistence display technique may not negatively impact user experience or performance compared to the high persistence case.
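The low persistence technique described in these abstracts, in which each rendered frame is shown for a single display refresh and the screen is blanked until the next simulation frame arrives, can be pictured as a per-refresh display schedule. The sketch below is a minimal illustration of that idea only; the function name and parameters are ours, not code from the papers.

```python
def persistence_schedule(sim_fps, display_hz, mode, n_sim_frames):
    """Return, for each display refresh, which simulation frame is shown.

    In a low frame rate simulation each rendered frame spans several
    display refreshes (e.g. 10 fps on a 60 Hz display spans 6 refreshes).
    Under high persistence (HP) the frame is repeated on every refresh;
    under low persistence (LP) it is shown for a single refresh and the
    display is blanked (None) until the next frame is ready.
    """
    refreshes_per_frame = display_hz // sim_fps  # e.g. 60 // 10 == 6
    schedule = []
    for frame in range(n_sim_frames):
        if mode == "HP":
            schedule.extend([frame] * refreshes_per_frame)   # repeat frame
        elif mode == "LP":
            schedule.append(frame)                           # show once
            schedule.extend([None] * (refreshes_per_frame - 1))  # blank
        else:
            raise ValueError("mode must be 'HP' or 'LP'")
    return schedule
```

For the 10 fps conditions studied here, `persistence_schedule(10, 60, "HP", 1)` repeats the frame for all six refreshes, while the LP variant shows it once and blanks the remaining five.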
Directions for future work on the use of low persistence displays in low frame rate situations are discussed.

Item (Open Access)
Revealing context-specific conditioned fear memories with full immersion virtual reality
(Front Behav Neurosci, 2011) Huff, Nicole C; Hernandez, Jose Alba; Fecteau, Matthew E; Zielinski, David J; Brady, Rachael; Labar, Kevin S
The extinction of conditioned fear is known to be context-specific and is often considered more contextually bound than the fear memory itself (Bouton, 2004). Yet recent findings in rodents have challenged the notion that contextual fear retention is initially generalized. The context-specificity of a cued fear memory to the learning context has not been addressed in the human literature, largely due to limitations in methodology. Here we adapt a novel technology to test the context-specificity of cued fear conditioning using full immersion 3-D virtual reality (VR). During acquisition training, healthy participants navigated through virtual environments containing dynamic snake and spider conditioned stimuli (CSs), one of which was paired with electrical wrist stimulation. During a 24-h delayed retention test, one group returned to the same context as acquisition training, whereas another group experienced the CSs in a novel context. Unconditioned stimulus expectancy ratings were assayed online during fear acquisition as an index of contingency awareness. Skin conductance responses time-locked to CS onset were the dependent measure of cued fear, and skin conductance levels during the interstimulus interval were an index of context fear. Findings indicate that early in acquisition training, participants express contingency awareness as well as differential contextual fear, whereas differential cued fear emerged later in acquisition. During the retention test, differential cued fear retention was enhanced in the group who returned to the same context as acquisition training relative to the context shift group.
The results extend recent rodent work to illustrate differences in cued and context fear acquisition and the contextual specificity of recent fear memories. Findings support the use of full immersion VR as a novel tool in cognitive neuroscience to bridge rodent models of contextual phenomena underlying human clinical disorders.

Item (Open Access)
Sensorimotor learning during a marksmanship task in immersive virtual reality
(Frontiers in Psychology, 2018-02-06) Rao, Hrishikesh M; Khanna, Rajan; Zielinski, David J; Lu, Yvonne; Clements, Jillian M; Potter, Nicholas D; Sommer, Marc A; Kopper, Regis; Appelbaum, L Gregory
Sensorimotor learning refers to improvements that occur through practice in the performance of sensory-guided motor behaviors. Leveraging novel technical capabilities of an immersive virtual environment, we probed the component kinematic processes that mediate sensorimotor learning. Twenty naïve subjects performed a simulated marksmanship task modeled after Olympic Trap Shooting standards. We measured movement kinematics and shooting performance as participants practiced 350 trials while receiving trial-by-trial feedback about shooting success. Spatiotemporal analysis of motion tracking elucidated the ballistic and refinement phases of hand movements. We found systematic changes in movement kinematics that accompanied improvements in shot accuracy during training, though reaction and response times did not change over blocks. In particular, we observed longer, slower, and more precise ballistic movements that replaced effort spent on corrections and refinement.
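One common way to separate a ballistic phase from a refinement phase in motion-tracking data is to split the hand-speed trace where speed first falls below a small fraction of its peak. The sketch below illustrates that general idea only; the threshold value and function are our own illustrative assumptions, not the authors' exact spatiotemporal analysis.

```python
def segment_phases(speeds, threshold_frac=0.1):
    """Split a hand-speed trace into ballistic and refinement phases.

    The ballistic phase runs from movement onset through the speed peak
    until speed first drops below `threshold_frac` times the peak value;
    everything after that point is treated as refinement (small
    corrective movements).  Returns (ballistic, refinement) as two lists.
    """
    if not speeds:
        return [], []
    peak_idx = max(range(len(speeds)), key=lambda i: speeds[i])
    cutoff = threshold_frac * speeds[peak_idx]
    split = len(speeds)  # default: no refinement phase detected
    for i in range(peak_idx + 1, len(speeds)):
        if speeds[i] < cutoff:
            split = i
            break
    return speeds[:split], speeds[split:]
```

With per-trial traces segmented this way, one could compare the duration and precision of the ballistic portion across training blocks, in the spirit of the kinematic changes the study reports.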
Collectively, these results leverage developments in immersive virtual reality technology to quantify and compare the kinematics of movement during early learning of full body sensorimotor orienting.

Item (Open Access)
Wireless, Web-Based Interactive Control of Optical Coherence Tomography with Mobile Devices
(Transl Vis Sci Technol, 2017-01) Mehta, Rajvi; Nankivil, Derek; Zielinski, David J; Waterman, Gar; Keller, Brenton; Limkakeng, Alexander T; Kopper, Regis; Izatt, Joseph A; Kuo, Anthony N
PURPOSE: Optical coherence tomography (OCT) is widely used in ophthalmology clinics and has potential for more general medical settings and remote diagnostics. In anticipation of remote applications, we developed wireless interactive control of an OCT system using mobile devices.
METHODS: A web-based user interface (WebUI) was developed to interact with a handheld OCT system. The WebUI consisted of key OCT displays and controls ported to a webpage using HTML and JavaScript. Client-server relationships were created between the WebUI and the OCT system computer. The WebUI was accessed on a cellular phone mounted to the handheld OCT probe to wirelessly control the OCT system. Twenty subjects were imaged using the WebUI to assess the system. System latency was measured using different connection types (wireless 802.11n only, wireless to remote virtual private network [VPN], and cellular).
RESULTS: Using a cellular phone, the WebUI was successfully used to capture posterior eye OCT images in all subjects. Simultaneous interactivity by a remote user on a laptop was also demonstrated. On average, use of the WebUI added only 58, 95, and 170 ms to the system latency using wireless only, wireless to VPN, and cellular connections, respectively. Qualitatively, operator usage was not affected.
CONCLUSIONS: Using a WebUI, we demonstrated wireless and remote control of an OCT system with mobile devices.
TRANSLATIONAL RELEVANCE: The web and open source software tools used in this project make it possible, in principle, for any mobile device to control an OCT system through a WebUI. This platform can serve as a basis for remote, teleophthalmology applications using OCT.
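As a rough illustration of the client-server round trip behind such a WebUI, the sketch below pairs a toy command dispatcher, standing in for the OCT host computer, with a client call that times one request-reply cycle, in the spirit of the paper's added-latency measurements. The class, command names, and message format are hypothetical, not taken from the paper.

```python
import json
import time


class OCTControlStub:
    """Toy stand-in for the OCT system computer's command handler.

    The real WebUI sends commands from a browser to the OCT host over a
    client-server link; this in-process dispatcher merely illustrates the
    request-reply shape of that exchange.  Commands are JSON messages
    with an "action" field (names here are hypothetical).
    """

    def __init__(self):
        self.scanning = False

    def handle(self, message):
        """Parse one JSON command and return a JSON reply string."""
        cmd = json.loads(message)
        if cmd.get("action") == "start_scan":
            self.scanning = True
            return json.dumps({"ok": True, "scanning": True})
        if cmd.get("action") == "stop_scan":
            self.scanning = False
            return json.dumps({"ok": True, "scanning": False})
        return json.dumps({"ok": False, "error": "unknown action"})


def round_trip_ms(stub, message):
    """Send one command and return (reply, elapsed milliseconds)."""
    t0 = time.perf_counter()
    reply = stub.handle(message)
    return reply, (time.perf_counter() - t0) * 1000.0
```

Over a real wireless or cellular link, the elapsed time returned by `round_trip_ms` would include network transport, which is where the reported 58 to 170 ms of added latency would appear.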