With the help of partners On-X, Theoris, Renault, CEA List, and Lumiscaphe, we developed software that allows an augmented tablet (iPad mini) to be used inside a CAVE. The CAVE displays virtual automotive designs, and stakeholders can use the tablet to interact with the models. This is an interesting and unusual setup because a tablet augments virtual reality (VR), a scheme I like to call "Computer Mediated VR." With this in mind, interesting theoretical and philosophical questions arise about a user's sense of presence in multiple nested virtual and physical environments.
Unique perceptual problems may also arise from the use of heterogeneous screens in a single immersive virtual environment, possibly affecting users' abilities to perform spatial tasks. It is unclear whether users can effectively integrate information across displays.
This project was also fun because I had a few chances to visit the Renault Technocentre in Guyancourt, France, to see their VR systems and learn about virtual design review at a major automaker.
Paper presented in the IEEE VR 2015 PERCAR workshop. Institut Image – Le2i [PDF]
Paper presented in the IEEE VR 2015 Lab and Project track.
In collaboration with partners, including Bouygues Construction, a virtual reality (VR) theater (the Callisto room) was built to allow decision makers to explore virtual buildings before they have been constructed. A key objective of this project was providing a 1:1 sense of scale, and my primary role was researching the technology considerations needed to make this goal a reality.
While there have been many past studies on the topic of distance perception in VR, the overwhelming majority of those studies were conducted using a head-mounted display. Because the Callisto room is built around a large curved screen, I explored the possibility that users might utilize distance cues provided by the physical system in addition to frequently-studied virtual cues. If this is the case, the use of a curved screen may present additional challenges.
Though this project ended in 2014, I have continued to conduct follow-up experiments. In addition to the work on physical cues described above, I have also been exploring the perceptual impacts of movement cost (e.g., effort) and cooperative locomotion interfaces (including a multi-CAVE setup).
Spatial Cognition IX, Lecture Notes in Artificial Intelligence 8684.
The Virtusphere is a locomotion interface, resembling a human-sized hamster ball, that allows for infinite locomotion through virtual worlds using a semi-natural walking action. Experienced users can perform quite well in the sphere, but new users find it challenging. Before using the Virtusphere for a study session, each participant must be trained to use it safely and effectively. Additionally, cognitive resources are required when using unnatural interfaces, raising the possibility that using the sphere may interfere with completion of experimental tasks if participants are not trained sufficiently. A study was conducted to investigate the learning curve, both in terms of basic locomotion performance and success on concurrent cognitive tasks. Results showed that participants continued to struggle with concurrent cognitive tasks long after their basic movement performance had reached a high level.
Additionally, we were interested in improving the Virtusphere interface itself. In this vein, we experimented with using additional orientation sensors (InertiaCube 3) in order to more accurately capture the user's intentions.
These experiments were conducted in the Virtusphere with graphics displayed wirelessly on a head-mounted display (SX60 or RelaxView). Stimuli were generated with the Unreal Engine and Vizard.
The interdisciplinary Transregional Collaborative Research Center Spatial Cognition: Reasoning, Action, Interaction, involving the Universities of Bremen and Freiburg, was established by the Deutsche Forschungsgemeinschaft (DFG). I primarily studied participants in the Virtusphere by placing them in virtual worlds that violated the rules of Euclidean geometry (impossible worlds). Our goal was to better understand how people mentally represent spatial scenes. In the earlier studies, users were asked to identify the shortest paths between two points in the virtual world. More recently, we developed a blind-walking paradigm in which users explore a virtual world and are then placed on a virtual plane with minimal directional cues, where they are asked to retrace their steps through the previous (sometimes impossible) corridor.
This represents the second conceptual part of my dissertation. In the previous project, described below, I observed competition between some unnatural aspects of locomotion interfaces and certain concurrent tasks. For this project I created a new fuzzy adaptive system to adjust interface parameters according to a user's concurrent task load. In a study, users of the fuzzy system exhibited higher locomotion performance compared to users of a baseline system. Concurrent task performance was mixed, warranting further investigation.
This experiment was conducted in the C6 CAVE in the Virtual Reality Applications Center at Iowa State University. The scenarios were controlled by the VirtuTrace experiment engine. The fuzzy system was built on the fuzzylite open-source fuzzy logic control library.
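Independent of the C++/fuzzylite implementation used in the study, the core adaptation idea can be sketched in a few lines. This is a minimal illustrative example, not the actual system: the membership breakpoints, the rule consequents, and the `locomotion_gain` mapping are all hypothetical stand-ins for the real interface parameters.

```python
# Minimal sketch of a Sugeno-style fuzzy controller that lowers
# locomotion interface sensitivity as a user's concurrent task load
# rises. All breakpoints and rule outputs are illustrative, not the
# values used in the actual study system.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def locomotion_gain(task_load):
    """Map an estimated task load in [0, 1] to an interface gain in [0.5, 1.5]."""
    # Fuzzify: degree to which the load is low / medium / high.
    low = tri(task_load, -0.5, 0.0, 0.5)
    med = tri(task_load, 0.0, 0.5, 1.0)
    high = tri(task_load, 0.5, 1.0, 1.5)
    # Rules (constant consequents, weighted-average defuzzification):
    #   low load  -> full gain (1.5)
    #   med load  -> neutral gain (1.0)
    #   high load -> damped gain (0.5)
    num = low * 1.5 + med * 1.0 + high * 0.5
    den = low + med + high
    return num / den if den else 1.0
```

In the real system, the load estimate would come from the user's observed concurrent-task behavior, and the output would adjust locomotion parameters continuously during the trial.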
Presence: Teleoperators and Virtual Environments.
This represents the first conceptual part of my dissertation. It comprises two studies, the first investigating the cognitive demands of using virtual locomotion interfaces with unnatural aspects and the second investigating the impact of a restricted field of view on cognitive demands during virtual locomotion. These studies identified cognitive resources required for certain interface actions. Cognitive resources are considered to come from finite pools. If an interface requires cognitive resources, they will not be available for concurrent domain-specific tasks.
These experiments were conducted in the C6 CAVE in the Virtual Reality Applications Center at Iowa State University. The scenarios were controlled by the VirtuTrace experiment engine.
Presence: Teleoperators and Virtual Environments.
This was a large interdisciplinary team funded by a grant from the Air Force Office of Scientific Research. The objective was to enable a small number of operators to control a large number of semi-autonomous unmanned vehicles on a remote battlefield, using immersive virtual reality (VR). It is important for operators in such systems to maintain high levels of situational awareness. Therefore, information must be presented effectively and cognitive load must be carefully considered.
As part of this work, I toured some user experience/human factors labs at Wright-Patterson Air Force Base near Dayton, Ohio. There I had a chance to try some desktop teleoperation scenarios, and I was struck by just how difficult it is for a single operator to manage even a handful of vehicles on basic surveillance missions.
This work centered around the C6 CAVE in the Virtual Reality Applications Center at Iowa State University. At the time, this was the highest-resolution immersive VR system in the world: 100 million pixels rear-projected on six walls.
This stemmed from a class project in HCI 575X (Computational Perception), relating to my interests in the human sense of self and neuroplasticity. I worked with an interdisciplinary team of three other graduate students to develop a virtual reality game in which all left-right movements were inverted. In other words, when a user took one step to the left, the graphics moved two steps to the left, giving the illusion of having stepped to the right. The game was a simple one that required players to bounce a virtual ball off of their chests to hit virtual boxes ("Body Pong"). Though this was quite challenging, some members of the team (myself included) became relatively skilled at it.
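The inversion mapping itself is simple enough to state in code. This is a minimal sketch, not the original C++/OpenSceneGraph implementation; the coordinate convention (positive x to the user's right) and the function names are assumptions for illustration.

```python
# Sketch of the left-right inversion used in the "Body Pong" game.
# Assumed convention: positive x is to the user's right.
# In a CAVE the user views graphics on fixed physical screens, so to
# make a 1-step physical move feel like a step in the OPPOSITE
# direction, the world is translated 2 steps in the SAME direction,
# mirroring the user's position relative to the virtual world.

def world_offset(physical_x):
    """Translate the virtual world by twice the user's lateral displacement."""
    return 2.0 * physical_x

def virtual_x(physical_x):
    """The user's resulting lateral position within the virtual world."""
    return physical_x - world_offset(physical_x)  # equivalent to -physical_x
```

A user who steps one unit left (`physical_x = -1`) sees the world shift two units left (offset `-2`), leaving them at `virtual_x = +1`: apparently one step to the right.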
This application was designed for the old C4 CAVE in the Virtual Reality Applications Center at Iowa State University. It was originally written entirely in C++ and OpenGL, eventually moving to OpenSceneGraph.
Brief video of me playing the game.
This was a distributed simulation system developed in the Advanced Modeling, Optimization, and Systems (AMOS) lab at Wright State University. The software allowed researchers to simulate the operations of unmanned vehicles and investigate various optimization problems (weapons-to-target assignment, etc.). My role on this project was to maintain the existing C# architecture, though it was already in very good shape. For my Master's thesis, I extended the system to allow non-programmer users to define unmanned vehicle agent behaviors.
Abstract: College students are primarily concerned with the price and convenience of the food they choose to eat. Environmental impact is not a consideration in their food decisions. We present a web-based solution that simplifies meal choices and addresses the perception that home-prepared meals are inconvenient and expensive. The solution provides a web service that suggests convenient recipes that use local and seasonal ingredients tailored to the user's location. This promotes sustainable food purchasing habits. The solution uses a location-aware mobile device as an example platform. The study presents the participatory design process that informed the development of this solution.
This was a project for the CHI 2009 student design competition. A total of 125 papers were submitted and this was one of just ten to be invited to the poster session. To the best of my knowledge, this makes me the only person (certainly the first) from Iowa State University to have been accepted to the Student Design Competition twice.
CHI 2009 Student Design Competition.
I spent a summer as a user experience intern at LSI Corporation's (now NetApp) Engenio Storage Group, in Wichita, Kansas. I worked with an international team of human-factors engineers to design and test user workflows and interfaces to be used for managing enterprise-grade disk storage systems. I took the lead on creating pixel-perfect interface mockups, using Visio, for a few new features. I also conducted hardware and software user studies (with storage experts as subjects) in a full-featured usability lab (one-way glass, video equipment, etc.).
This job was also very educational for me in terms of storage systems. I was forced to very quickly learn all I could about storage terminology and technology (firmware, controllers, RAID, etc.).
My design is in use on systems worldwide. Large drive tray support [PNG]
This changed in the released version.
Abstract: Research in homelessness points to a recent increase in the population of homeless women. Survivors of domestic violence who become homeless as a result of their flight from an abusive situation seem to comprise an increasingly significant segment of this group. GuardDV is a system that seeks to address the safety concerns of domestic violence survivors who do not possess a stable residence. The system warns the potential victim and the corresponding law enforcement organizations about the physical proximity of the aggressor. For this project, an interdisciplinary team committed to improving the quality of life of homeless DV survivors employed qualitative accounts as part of a participatory design effort.
This was a project for the CHI 2008 student design competition. A total of 36 (I think?) papers were submitted, and this was one of just 12 to be invited to the poster session. There, it was reviewed and selected as one of four to proceed to the final presentation stage. We were the first team from Iowa State University to attend the Student Design Competition.
CHI 2008 Student Design Competition.
VirtuTrace is a full-featured "experiment engine," allowing researchers to quickly develop immersive experiment protocols for display in a CAVE (for example). The project stemmed from Professor Nir Keren's work measuring the responses of real firefighters as they made stressful decisions in virtual scenarios. I contributed heavily to this project because I used it to conduct all of the experiments for my dissertation.
VirtuTrace is built in C++ on top of many libraries, including VR Juggler, OpenSceneGraph, and Bullet Physics. My primary role on the project was implementing locomotion interfaces and player physics, though due to the small team size I was involved in almost all architecture decisions. We placed the highest importance on sound development practices, keeping the architecture flexible and extensible while preserving high performance and maintainability.
I used to develop and maintain websites as a consultant for small- to medium-sized businesses. I worked mostly in raw HTML and CSS to create lean, standards-compliant, and easy-to-maintain sites.
I still do a fair amount of basic website design and administration for my own purposes. Some sites, such as this one, are written in raw HTML. I tend to use WordPress for blogging, though I do have past experience working with PHP and MySQL. I self-host all of my sites on a VPS running Ubuntu and Apache. In all cases, I focus on performance, security, and usability.
A travel blog for me and my wife.