Volume 3, Issue 2: Summer 2022. DOI: 10.1037/tmb0000073
Systematic research examining affordances such as navigability is important for understanding how to design virtual learning environments. To explore this, a fully immersive virtual cemetery modeled after Edgar Lee Masters’ (1915) fictitious Spoon River Anthology was created to assess how motion mechanics related to participant navigability would influence learning outcomes. A 2 × 2, fully crossed, between-subjects (N = 116) experiment manipulating navigation type (natural walking vs. teleportation) and type of piloting (proximal vs. distal orientations) was conducted. Participants used the Oculus Rift S virtual reality (VR) headset with a portable backpack, allowing unimpeded movement throughout the study. Results demonstrated that navigation type and piloting orientation differentially influenced learning performance. Specifically, an interaction effect occurred in which participants in the natural walking with distal orientation condition showed significantly higher scores on narrative testing, spatial location, and spatial reconstruction tasks than participants in the teleportation with distal orientation condition. Implications for how these findings may be used in VR environments where the prime objective is learning are discussed.
Keywords: motion mechanics, affordances, navigation, learning, virtual reality
Action Editor: Nick Bowman was the action editor for this article.
Disclosures: There are no known commercial or financial conflicts of interest for any of the authors listed on this article.
Data Availability: Data are available upon request. Analysis methods are described in the article. Psychometric measures are listed in the appendix.
Correspondence concerning this article should be addressed to Edward Downs, Department of Communication, University of Minnesota Duluth, 465 A.B. Anderson Hall, 1121 University Drive, Duluth, MN 55812, United States. Email: firstname.lastname@example.org
In March of 2014, Facebook CEO Mark Zuckerberg closed a business transaction by purchasing Oculus VR for $2 billion in cash and stock options. The message was clear. The largest social media company in the world was going “all in” on virtual reality (VR) technology. In the years that followed, a number of players in the VR industry emerged: Sony, Samsung, and HTC, among others. The responses that these immersive technologies generated at trade shows and public demonstrations were impressive. In 2021, Meta, the newly created parent company of Facebook, invested untold sums with the goal of bringing Stephenson and Cline’s VR “metaverse” to fruition. Tech writers (e.g., Young, 2021) recognize that education is a core component in Meta’s vision. As is often the case with emerging technologies and platforms, even in this second wave of VR, the conversation has again turned to their application as learning tools.
The virtual worlds generated and displayed by VR technologies certainly have the capacity to teach. Some companies have stated that VR is the future of learning—even though the empirical evidence to support these claims is conflicting (Makransky et al., 2017; Merchant et al., 2014). There is little doubt that VR technologies can make a learning experience more enjoyable (Cheng et al., 2017), engaging (Franceschi et al., 2008), or creative (Thornhill-Miller & Dupont, 2016). However, many questions remain unanswered with respect to the myriad ways a VR program can be created and its subsequent relationship to measurable learning outcomes. Virtual environment experiences often enlist many elements of interaction with the virtual space, including how people move through the space, how they interact with objects in the environment, or how a user is embodied through an avatar representation. A starting point for all interactions is to consider how people will move or navigate through the virtual environment. Many types of navigation are possible, including walking, walking-in-place, flying (Usoh et al., 1999; Wendt et al., 2010), teleportation, in which people transport to new locations instantly, or redirected walking techniques that attempt to maximize space by leveraging people’s inability to perceive small manipulations to orientation (Grechkin et al., 2016; Razzaque et al., 2001; Steinicke et al., 2009). Nilsson et al. (2018) provide an encompassing survey of the many ways navigation has been explored in VR. Many of the choices come with trade-offs to maximize user comfort, ability to freely explore room-scale or larger spaces, and ease of use, all while minimizing motion or simulator sickness symptoms. In this work, we specifically focus on motion mechanics that allow free exploration of VR environments through room-scale locomotion and teleportation.
These are among the most common navigation types employed as they afford free exploration within VR environments, are easy for people to apply, and work for a wide range of VR experiences. These motion mechanics are interactive and lend themselves to experiential learning (Kolb, 1984) in VR environments. This research explores how motion mechanics, specifically, the technological affordance of navigability, can differentially impact learning performance in a fully immersive VR environment. An experiment was conducted that manipulated the way participants were able to navigate a virtual space (either through teleportation or natural walking) and type of piloting—or how far participants typically moved between salient information locations within the virtual environment (proximal vs. distal orientations). At the end of the experience, participants completed a narrative learning assessment, as well as spatial positioning, and spatial reconstruction measures. The following review provides a theoretical framework to anchor this study and concludes with research questions for empirical testing.
Situated cognition (Brown et al., 1989) states that learning is a product of the activities and situations in which it is produced. Proponents argue that traditional classroom exercises are devoid of the culture and jargon used by practitioners in the field, and because of that, classroom exercises lack authenticity. This lack of authenticity is said to create a disconnect between what is learned in the classroom and what is necessary to be successful in the real world. As researchers note, how an individual perceives a given activity can be determined by the tools available in an environment for use. Fully immersive, headset-based VR can provide a strong sense of immersion in situated learning contexts; for example, exploring life on a Fijian island (Schott & Marshall, 2018), taking an immersive virtual field trip to World War II (WWII) memorials (Cheng & Tsai, 2019), or exploring the International Space Station (Rupp et al., 2019). However, more research is required to understand how technological affordances, such as how one employs motion to navigate through a digital space, fit within the situated learning paradigm. In other words, fully immersive virtual environments have the potential to take traditional classroom exercises and situate them in a larger, digital cultural context; however, different tools or technological affordances within that environment can differentially influence learning. Different activities, and ostensibly, different tools (or technological affordances in the vernacular of the Modality–Agency–Interactivity–Navigability [MAIN] model) are expected to produce different learning outcomes, not equivalent experiences (Brown et al., 1989). One such affordance is related to motion mechanics as described by the Mechanics, Dynamics, Aesthetics (MDA) framework.
Researchers suggest that theoretically anchored studies using learning theories and principles of game design are necessary to advance our understanding of learning in VR (Fowler, 2015; Radianti et al., 2020). The MDA model (Hunicke et al., 2004) breaks game and interactive design into three core components: mechanics, dynamics, and aesthetics. This framework posits that designers create a game and that game players consume the designer’s product. However, the difference between digital games and other sources of entertainment (e.g., books, music, movies, etc.) is how interactivity changes the individual experience while working toward an objective. A game’s designers may incorporate events to elicit specific emotional reactions from a player (aesthetics), while also keeping them engaged through principles of “fun” that can be further broken down into, but not limited to, fantasy, narrative, challenge, fellowship, discovery, expression, and so on. If aesthetics are what keeps the player invested, dynamics are the link to how this happens within the game by creating the conditions through which these aesthetic principles are brought about. For example, questing for a legendary sword that can slay evil due to its magical properties (i.e., exploration, fantasy, and narrative). Mechanics can be thought of as the structure or “rules” of a created experience. These rules define the actions and behaviors a player can perform in a digital space. For example, how the player swings the magical sword due to different button inputs. Similarly, how one is allowed to move through a digital space, for example, via teleportation or via natural walking in a VR environment or covering large versus small distances when exploring a space would be examples of motion mechanics. With these examples, navigation or navigability is at once a principle of game design through mechanics in the MDA framework and a technological affordance in the MAIN model.
A technological affordance is understood as a capability that is provided by a technology or medium. Changes in technological affordances can change user experiences in technological environments. The MAIN model (Sundar, 2008) identifies four technological affordances that are capable of changing user perceptions. Modality refers to the structural features of a medium; Agency refers to the attribution of sources for information; Interactivity refers to action potentials embedded within a technology or medium; and Navigability refers to the ability to understand one’s position in a digital space. All of the affordances in Sundar’s (2008) MAIN model are recognized as being situated within a medium. Therefore, navigability—and by extension, different navigation styles in a VR environment—may be understood as a property of that medium. Changing the way one moves in a VR environment is a navigability affordance manipulation. Combined with the MDA framework, motion mechanics may be understood as any navigational affordance incorporated into VR environments that allows users to move in and locomote through these digital spaces. Two such mechanics explored in this research are navigation type and piloting.
Navigation type is the method or style of motion, for example, natural walking, flying, grappling, or teleporting, that allows a person to move themselves through an immersive virtual experience. As people interact with an environment through navigation (whether virtual or real), the visual information and optic flow generated by the locomotion impact a person’s internal representation of the environment (Rieser, 1999; Rieser & Pick, 2007). These works help to drive our choices for navigation as an independent variable. How people walk or navigate through a virtual learning space may matter, and the decision criteria for why to utilize a certain motion mechanic are not always straightforward. Ideally, people could utilize natural walking, but this is not always possible. Natural walking relies on a person’s natural ability to move around and explore a space. It is preferred over other motion mechanics (Xie et al., 2018) and beneficial for developing cognitive maps of large-scale spaces (Ruddle et al., 2011). It also has the potential to reduce symptoms of simulator sickness and likely provides a more veridical understanding of that space because of the relationship between perception, action, and proprioceptive representations (Rieser, 1999). To facilitate natural walking in VR, a person’s motions are captured in real time, often using several cameras to determine the person’s position in the physical space. Inertial sensors, such as accelerometers and gyroscopes, usually embedded within the VR headset, provide information about the person’s head orientation and the direction they are looking. These positions and orientations are then relayed back to a computer capable of rerendering a scene from the person’s perspective in real time.
Position tracking within a physical space can be done with either externally mounted infrared cameras within the room that watch for infrared reflective markers on the person or through inside-out camera tracking that uses cameras onboard the VR headset to determine the position of the headset within the room. Regardless of which tracking apparatus is utilized, such methods of movement are limited by the size of a physical space rather than the size of the digital virtual space. Sophisticated hardware treadmill systems have the potential to provide large-scale walking experiences through virtual spaces, but these technologies require a substantial investment (Kulkarni et al., 2015; Souman et al., 2011).
When room size limits the scale of motion, alternative navigation techniques can be used to afford exploration of the virtual space. Many interfaces have been developed to increase the limits of spatial navigation, such as using joysticks or thumb toggles (Cirio et al., 2009), walking in place (Nilsson et al., 2016), applying translation gains to move across greater distances (Interrante et al., 2007), and redirected walking (Grechkin et al., 2016; Razzaque et al., 2001; Steinicke et al., 2009). Another common technique that expands the limits of navigation beyond physical space is teleportation. Teleportation in VR allows users to change positions instantly by targeting the location they wish to occupy with the controller and pressing a controller button, thus affording the exploration of virtual spaces potentially much larger than a physical space can accommodate. Teleportation may be the only option for navigation across larger virtual environments contained within small physical boundaries. However, teleportation may not always be good for understanding the spatial layout of a virtual environment, especially if the distance between noteworthy positions is large. In general, humans are quite good at spatial updating within most real or digital environments, provided they contain sufficient visual cues for generating optic flow. However, VR users may not recall spaces well in simple, sparsely populated virtual environments, nor when the distance traveled between areas of interest increases (Chance et al., 1998; Riecke et al., 2007).
Comparisons of physically walking versus teleporting in VR demonstrate that walking provides spatial updating and physical orientation cues, which may allow for better memory recall and performance on subsequent tasks (Paris et al., 2019; Xie et al., 2018). Navigating through the environment may lead to extended learning of spatial information, beyond any narrative content or experiences coded into a learning environment. It is also possible that teleportation will not induce enough self-motion cues to assist with the spatial updating of a space due to the instantaneous changes that occur between different visual perspectives. This idiothetic information from one’s own movements includes vestibular, muscular, and proprioceptive signals. Though walking has been shown to have better outcomes for learning, one limitation of walking as a method of movement in VR that needs consideration, from both a design and implementation standpoint, is that physical space is often limited (Paris et al., 2019). This suggests that a test of these two navigation types needs to be situated in a VR space where both are feasible.
In addition to navigation type, there are other variables related to movement (read: navigability) that could potentially influence learning outcomes in virtual environments. Piloting refers to how one navigates or moves through space with regard to certain landmarks in that space (Gallistel & Matzel, 2013). This mechanism both signals one’s own position in a space and allows a person to build a cognitive map of a space based on salient environmental cues. If events in a VR learning environment occur over short distances, we refer to this as a proximal orientation. Making small movements from place to place in VR in order to learn presented information may facilitate understanding of a smaller, concentrated amount of information. If events in a VR learning environment occur over long distances, we refer to this as a distal orientation. Instantaneous changes in position across long distances due to teleporting can be disorienting, and users may have to spend additional time reacquiring their spatial position (Bowman et al., 1997, 1999; Moghadam et al., 2020). Distal motions may at once facilitate a more holistic orientation and understanding of a space if a person is walking through it but may impede the depth of information learned if a person is teleporting through it. Proximal motions may facilitate a more linear, concentrated understanding of a smaller portion of that same space. Both piloting orientations can differentially impact a person’s understanding of the space as well as the information contained in that space. Both may be important and beneficial in different scenarios, depending on many digital environmental factors, including scaling, size, and learning objectives of the VR space.
While researchers are beginning to understand the role technological affordances play in the user experience, systematic works indicate that a large number of studies make claims about learning that are not theoretically anchored and do not actually test how the manipulation of those affordances may differentially impact measurable learning outcomes (see Mikropoulos & Natsis, 2011; Radianti et al., 2020, for systematic reviews). Research has proposed that any learning space needs to be designed with predetermined learning outcomes in mind (Biggs, 2003). One learning measure in this study uses multiple-choice questions to test knowledge of the narrative contained within the VR environment, and two of the three learning measures examine spatial understanding of the VR environment. The first is a spatial updating task to recall specific locations in the virtual environment, and the second is a spatial reconstruction task to test spatial learning. These different levels of learning, as specified in Bloom’s taxonomy (Bloom, 1956; Krathwohl & Anderson, 2009), allow us to examine learning beyond memory of the narrative and examine spatial understanding as well.
To recap this review of the literature, while both navigation type and piloting type are theoretically grounded in research related to learning and technology (e.g., the MAIN model affordance of navigability and the MDA framework’s mechanics), there is not enough published work to predict a priori the direction in which these independent variables may differentially impact dependent learning outcomes. Formally articulated, this experimental research will address the following research question: Research Question 1: Controlling for the amount of time participants will interact with VR materials, how do navigation type and piloting orientation influence learning outcomes as measured by a declarative knowledge test, spatial updating measure, and spatial reconstruction task, in a fully immersive, virtual reality environment?
As mentioned earlier, conventional wisdom and industry marketing suggest that, if nothing else, VR environments are entertaining. At least some research supports this conjecture (e.g., Cheng et al., 2017). Theoretical work examining the role of affect (positive and negative) on learning recognizes that the relationships are complex. Bidirectional relationships are thought to exist between affect, motivation, and cognition (e.g., Linnenbrink, 2006). Programmatic quantitative and qualitative research indicates that both positively and negatively valenced academic emotions are related to motivation, learning strategies, cognitive resources, and academic achievement (Pekrun et al., 2002). Focus group research examining adult learners found that enjoyment of a life-long learning curriculum led to feelings of intellectual stimulation (Lightfoot & Brady, 2005). Independent of educational content, affordances can elicit affective responses, too (Sundar, 2008). Published works suggest at least a preference for natural walking over teleportation—read: navigation—(Sayers, 2004; Sayyad et al., 2020), but no available research predicts a directional affective response to proximal or distal piloting orientations in VR. Given that this information would be of interest to designers of both entertainment and educational VR environments, we ask the question: Research Question 2: Does enjoyment of the VR experience differ across conditions?
Finally, previous experimental research with other digital learning environments has demonstrated that presence (and by proxy, derivatives of presence) can significantly mediate the relationship between affordances and outcome variables (e.g., Downs & Oliver, 2016; Rupp et al., 2019). Given both the empirical findings, as well as the theoretical links that scholars have made between presence and learning in VR environments (e.g., Cheng et al., 2017; Dalgarno & Lee, 2009; Fowler, 2015), it is important to examine presence and its derivatives (i.e., immersion and engagement; Witmer & Singer, 1998) in a larger learning model. As such, we ask the question: Research Question 3: Will immersion and engagement mediate the relationships between type of motion and learning outcomes and/or piloting orientation and learning outcomes?
To explore the role that navigation plays on learning in a virtual environment, a fully immersive, virtual rendition of “Spoon River Cemetery,” based on American author Edgar Lee Masters’ (1915) fictitious Spoon River Anthology, was created. The virtual environment borrows from Masters’ work by telling the stories of Spoon River’s citizens through the epitaphs on their tombstones. To create a novel experience for participants who may have read Spoon River Anthology, a new story was written for the inhabitants of Spoon River. This original reinterpretation folded in a medical mystery and changed some of the relationships from Masters’ original work. The information presented in the narrative was related to the lives, occupations, and relationships with other inhabitants of Spoon River, and for many, the cause of their death. The immersive VR environment, complete with 13 tombstones and a Celtic cross (situated in the middle of the cemetery), was programmed in Unity 2018.2.10 (see Figure 1). All conditions took place in an open lab room approximately 6.4 m wide by 10 m long. The virtual Spoon River environment was designed to fit completely within the physical space of the room, thus ensuring that participants could naturally walk to all tombstone locations.
All participants used an Oculus Rift S VR headset. A Micro-Star International (MSI) VR One portable backpack computer with an NVIDIA GTX 1070 graphics processing unit (GPU) drove the Oculus Rift S display. Participants donned the backpack and wore it throughout the VR portions of the experiment. This setup allowed all participants to move freely about the experimental space without being tethered by wires to a computer. A single Oculus Rift S controller was held in the right hand of all participants in both the teleportation and natural walking conditions. The controller was rendered in the three-dimensional environment and was primarily used to allow participants to teleport to a new location by pointing to one of the location markers and pressing the A button on the controller. Participants in the natural walking conditions held the controller and could see it for consistency across the between-subjects conditions but were instructed to not press any buttons or triggers as they were nonfunctional and not necessary when naturally walking through the VR space.
Participants were randomly assigned to one of four conditions, manipulating navigation type (natural walking vs. teleportation) and type of piloting (via proximal or distal orientations). This resulted in a 2 × 2, fully crossed, factorial design. After completing a pencil-and-paper demographic questionnaire, participants were introduced to the VR equipment and given instruction on how to wear and use it. Work by Rand et al. (2015) argued that user safety takes precedence over other cognitive resources, such as those used for learning. As such, it was important to create a training environment for participants before they interacted with the experimental stimulus. A training room was created which visually depicted a brick wall with stone placards attached to it. Transparent cyan cylinders were placed on the ground in front of the placards, indicating which placards the participant could visit. Participants always had two locations to choose from (until there was only one unvisited location remaining). Participants were instructed on how to walk or teleport to whichever cyan cylinder they chose. In all practice conditions, when participants reached the location, phrases on the placard would appear to them, and they would have 10 s to view the information and read the phrases aloud before the words disappeared. Reading aloud confirmed that the font type and size were sufficient for participants to read prior to entering their experimental condition. A visual countdown timer appeared above the presented information to indicate time remaining. After the timer ended, the words disappeared from the placard, the timer disappeared, and two new cyan cylinders appeared at unvisited locations. Participants also practiced range-of-motion skills (e.g., squatting and turning in place) so they could see how their perspective would change in the VR environment with various head and body movements.
The experiences in the training environment were designed to approximate experiences in the experimental conditions. After visiting the seven locations in the training environment, participants began the experimental portion of the task in Spoon River Cemetery. The simulation for all participants began at the entrance gate of the VR cemetery directly in front of a Celtic cross statue. Participants were instructed to visit each of the 13 tombstones using the same method of movement as used in the training environment. They had the option to pick one of two possible tombstones to move to, indicated by a cyan location cylinder (see Figure 2). This choice allowed participants’ paths through the cemetery to be slightly different while always aligning with the piloting condition (nearby choices for the proximal condition and far away choices for the distal condition). When they reached these locations, information about a specific Spoon River resident would appear on the tombstone. To control for the amount of time a participant could see the information, participants had 35 s to read each tombstone before the epitaph’s text disappeared. This process continued until each tombstone was visited and participants explored the space on their own. Before starting the experiment in the virtual cemetery, participants were told that after their experience, they would be given a quiz about the information presented to them on the tombstones in the environment.
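The two-choice path mechanic described above can be illustrated with a short sketch. This is a hypothetical selection rule, not the study's actual implementation (which is not specified): from the participant's current position, offer the two nearest unvisited tombstones in the proximal condition and the two farthest in the distal condition.

```python
import math

def next_choices(current, unvisited, condition):
    """Offer two candidate tombstone positions consistent with the piloting
    condition: the two nearest unvisited stones for "proximal", the two
    farthest for "distal". Positions are (x, z) coordinates in meters.
    Illustrative rule only; the study's actual path logic may differ."""
    ranked = sorted(
        unvisited,
        key=lambda p: math.hypot(p[0] - current[0], p[1] - current[1]),
    )
    return ranked[:2] if condition == "proximal" else ranked[-2:]

# Hypothetical layout: four unvisited tombstones along a line.
stones = [(1.0, 0.0), (2.0, 0.0), (5.0, 0.0), (6.0, 0.0)]
near = next_choices((0.0, 0.0), stones, "proximal")  # two closest stones
far = next_choices((0.0, 0.0), stones, "distal")     # two farthest stones
```

Whatever rule is used, keeping both offered choices consistent with the assigned condition ensures that a participant's freely chosen path still produces mostly short hops (proximal) or mostly long traversals (distal).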
After the virtual Spoon River Cemetery experience, participants were taken out of VR and asked to complete a pencil-and-paper multiple-choice test that assessed how much the participant learned from the stories presented on the tombstones. This declarative knowledge, or factual knowledge that can be memorized, represents the most basic level of understanding in Bloom’s taxonomy (Bloom, 1956) and as such, represents a common form of questioning that students are very familiar with. This narrative assessment consisted of 20 multiple-choice items with each item having four answers to choose from. Sample questions include “Who is the richest person in Spoon River?” and “Who is responsible for the most deaths in Spoon River?” Upon completion of this measure, participants completed another paper-and-pencil questionnaire that asked questions pertaining to immersion, engagement, and enjoyment while participating in the VR experience.
As previously stated, after the narrative test measure, a pencil-and-paper questionnaire was administered to measure subjective experiences. See the Appendix for a list of all psychometric measurement items. Immersion was measured on a 7-point Likert scale (1 = strongly disagree to 7 = strongly agree). Questions included: I felt like I was physically inside the VR environment; I felt immersed in the VR environment; I felt like I was surrounded by the VR environment. The three items were deemed reliable and internally consistent with Cronbach’s α = .85.
Novelty was measured on a 7-point Likert scale (1 = strongly disagree to 7 = strongly agree). Sample items included: The experience I had today using this VR technology was a new one for me; The experience I had today with this VR technology was very routine for me (reverse coded). The three items were deemed reliable and internally consistent with Cronbach’s α = .77.
Enjoyment was measured on a 7-point Likert scale (1 = strongly disagree to 7 = strongly agree). Sample items included: I enjoyed the experimental task I participated in today; I would like to have experiences like this again in the future; I was disappointed participating in the task today (reverse coded). The six items were deemed reliable and internally consistent with Cronbach’s α = .83. Participants then went on to complete two additional learning measures: spatial updating and spatial reconstruction.
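The reverse coding and internal consistency (Cronbach's α) computations reported for these scales can be sketched as follows. The response data below are illustrative only, not the study's data; on a 1–7 scale, reverse coding maps 7 to 1, 6 to 2, and so on.

```python
import numpy as np

def reverse_code(response, scale_min=1, scale_max=7):
    """Reverse-code a Likert response: on a 1-7 scale, 7 -> 1 and 1 -> 7."""
    return scale_max + scale_min - response

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data: 5 respondents x 3 items on a 7-point scale.
responses = np.array([
    [6, 7, 6],
    [5, 5, 4],
    [7, 6, 7],
    [3, 4, 3],
    [6, 6, 5],
])
alpha = cronbach_alpha(responses)
```

Reverse-coded items would be transformed with `reverse_code` before being passed into `cronbach_alpha`, so that all items point in the same direction.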
Upon completion of the pencil-and-paper measures, participants donned the VR gear a second time and were put back into the Spoon River Cemetery VR experience. Participants were asked to close their eyes and to visualize themselves at the entrance of the cemetery—imagining standing just inside the entrance gate with the Celtic cross monument in front of them (the Celtic cross was not a visited tombstone, but a marker situated in the middle of the cemetery and had no epitaph written on it). They were asked to imagine all the tombstones they visited when they were in the cemetery. This visualization lasted for approximately 30 s. Participants were asked to open their eyes and the second VR experience began. This time, none of the tombstones that they visited from the first VR experience were present. Only the large Celtic cross in the center of the cemetery and the surrounding cemetery gates remained as reference points. Participants were asked to look around and find a white location marker on the ground just in front of the entrance gate. They were asked to walk to this marker as it would be used as a home base for them to walk back to during this measure.
Participants were read one name at a time from the tombstones that they visited when in the original experimental condition. They were instructed to physically walk (regardless of previous experimental navigation condition) to the location where they were standing when they read this person’s tombstone. When the participant indicated where they thought they had been standing, this position in the virtual space was recorded and the participant was asked to return to the white marker. This process was repeated 10 times with different names from the cemetery. Prior to the study, the researchers selected 10 random names from the cemetery to be used for this task. Each participant received the same 10 names but in a randomized order. When all 10 positions were recorded, participants removed the VR equipment. If a participant stopped within one-half of a meter of the center of the appropriate tombstone, they received one point. This value was chosen as the width of the tombstones was approximately .75 m. If they were outside of the half-meter radius, they did not receive a point. The maximum score, if all location positions were within the established parameter, was 10 out of 10. The process for this measure was the same for all participants, regardless of experimental condition. Participants in the teleportation conditions were informed that it was safe to walk to any of the tombstones as the virtual experience was designed to fit within the physical lab space.
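The half-meter scoring rule for this spatial updating task can be sketched as follows. The coordinates are illustrative, not the study's recorded positions.

```python
import math

def score_spatial_updating(recalled, actual, radius=0.5):
    """Count recalled (x, z) positions that fall within `radius` meters of
    the corresponding tombstone center (one point each, per the rule above)."""
    score = 0
    for (rx, rz), (ax, az) in zip(recalled, actual):
        if math.hypot(rx - ax, rz - az) <= radius:
            score += 1
    return score

# Illustrative positions: three of four recalls fall inside the 0.5 m radius.
actual = [(1.0, 2.0), (4.0, 5.0), (7.0, 1.0), (2.5, 8.0)]
recalled = [(1.2, 2.1), (4.3, 5.2), (7.0, 1.1), (4.0, 8.0)]
```

In the study itself, ten positions were scored this way, yielding a maximum of 10 points.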
Prior to beginning the final spatial reconstruction task, participants engaged in a distractor task to assist with clearing any immediate information from the spatial location task from working (short-term) memory. Participants were instructed to take 17 names mounted on foam-core placards (some of which they saw on tombstones in the Spoon River Cemetery) and put them in alphabetical order using the first letter of the last name. The average time participants took to complete the alphabetization task was 116 s, substantially longer than the 30-s filled delay deemed necessary to clear working (short-term) memory (Baddeley & Hitch, 1974).
Following the distractor task, participants completed the final learning assessment, the spatial reconstruction task. This measure asked participants to place miniature tombstones (foam-core-mounted replicas of the tombstones they experienced in VR) on a scaled-down, physical diorama of the Spoon River Cemetery. A blank 24″ × 36″ diorama board with red dots represented the locations where tombstones could be placed (see Figure 3). Participants were given the position of one central landmark (the Celtic cross located in the center of the VR cemetery) and were instructed to look at the board as if they were standing at the entrance of Spoon River Cemetery. In both the alphabetization task and the tombstone placement task, four extra names were added as foils in order to assess both the correct placement of the original tombstones and the correct identification of the names of Spoon River's residents. When participants completed the reconstruction task, the completed diorama was digitally photographed for later coding. If a participant placed the correct tombstone in the proper position on the diorama, they received one point. If the position or name was incorrect, they did not receive a point. The maximum score a person could receive on this assessment was 13 out of 13 correctly placed tombstones.
This concluded the experiment. If participants elected to complete the study for course/extra credit, their names were recorded, and they were debriefed and thanked for their time before exiting the lab.
All experimental protocols and measures were approved by the institutional review board at the University of Minnesota, and participants gave written informed consent before the start of laboratory procedures. A power analysis was conducted using G*Power (Faul et al., 2007). Given that multivariate analysis of variance (MANOVA) falls under the category of F tests, and assuming a small effect size (.06) and an α of .05, we would have 80% power to detect the effect of navigation type and piloting method on learning outcomes if n = 117 participants completed the study. Participants were recruited through flyers, word of mouth, and from undergraduate courses that offered extra credit for participation. Each participant was screened for susceptibility to motion sickness, media-induced epileptic seizures, and visual disorders that could affect reading and cognitive function before the start of the experiment.
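The logic of such an a priori power analysis (find the smallest N whose noncentral-F power reaches the target) can be illustrated with a simplified univariate analogue. G*Power's MANOVA routines use multivariate approximations, so the exact N will differ; the f² effect-size convention, parameter defaults, and function name below are assumptions.

```python
from scipy.stats import f as f_dist, ncf

def required_n(f2=0.06, alpha=0.05, target_power=0.80,
               df1=1, n_groups=4, n_max=1000):
    """Smallest total N whose power reaches target_power for an F test
    with effect size f^2 and numerator df df1; denominator df is N
    minus the number of cells in the design."""
    for n in range(n_groups + 2, n_max):
        df2 = n - n_groups
        crit = f_dist.ppf(1 - alpha, df1, df2)  # critical F under H0
        power = ncf.sf(crit, df1, df2, f2 * n)  # noncentrality = f^2 * N
        if power >= target_power:
            return n
    return None
```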
A total of N = 122 participants began the present study. Three participants reported feeling dizzy/nauseous due to the VR technology and elected to terminate their part in the experiment. For an additional three participants, technical issues (battery/computer glitches) interrupted their sessions. Due to the lack of information and/or interruptions, these six participants were dropped from the data set. The remaining participants (n = 116) were university students aged 18–36 (M = 20 years, SD = 2.64), the group most likely to be using VR technologies for educational purposes. A slight majority of the participants were female (n = 65, 56%), and 32 different campus majors were represented. By condition, the cell sizes were relatively equal: walking, distal (n = 29); teleporting, distal (n = 30); walking, proximal (n = 29); and teleporting, proximal (n = 28).
Prior to the beginning of the experimental session, participants were asked how they were feeling on a scale from 1 (not well/sick) to 7 (great/very healthy) and were asked if they had any health conditions that they felt might interfere with their participation (e.g., illness, nausea, dizziness, vision impairment, equilibrium issues). Participants also answered self-report questions about previous VR or augmented reality (AR) technology use (e.g., Google Cardboard, Samsung VR, Sony PSVR, Oculus Rift, HTC Vive, HoloLens) and estimated the total amount of time they had cumulatively spent in these environments. More than half of the total sample of students (n = 65) reported having never used or experienced any VR or AR technologies prior to the experiment. Of those with previous experience, 36 participants had experienced one type of VR or AR technology, 10 had experienced two technologies, four had experienced three technologies, and one had experienced four different VR technologies. In terms of cumulative hours of VR time, the vast majority (n = 111) of participants estimated they had spent between 0 and 10 hr in VR. Four participants reported between 11 and 20 hr of VR experience, and one participant reported between 31 and 40 hr. Participants self-reported that the Spoon River Cemetery VR experience was novel (M = 5.97, SD = 1.40), with no statistical differences across conditions (p = .847) and with 44% (n = 51) averaging a 7 on the 7-point novelty measure.
Linear measurements taken "as the crow flies" between stops at the tombstones in the VR version of Spoon River Cemetery revealed differences in the amount of ground covered between proximal and distal conditions. Participants in the two distal piloting conditions (i.e., walking or teleporting to the furthest tombstones) traveled the furthest (M = 58.26 m, SD = 1.58), compared to those in the two proximal conditions (M = 18.86 m, SD = 3.58). The distal and proximal conditions differed statistically, t(78.43) = 76.69, p < .001 (interpreted with equal variances not assumed). Those in the distal conditions covered more than three times the distance of those in the proximal conditions.
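The distance metric described above (straight-line segments summed across consecutive stops) can be sketched as follows; the coordinate handling and function name are illustrative assumptions, not the authors' code.

```python
import math

def path_length(stops):
    """Total ground covered: sum of straight-line ("as the crow flies")
    distances between consecutive stop coordinates, in meters."""
    return sum(math.dist(a, b) for a, b in zip(stops, stops[1:]))
```

Per-participant totals computed this way could then be compared across conditions with Welch's t test (equal variances not assumed), e.g., `scipy.stats.ttest_ind(distal, proximal, equal_var=False)`.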
Statistical analyses are detailed in the following Results section. The psychometric measurement items are listed in the Appendix. The data set is available upon request.
The original analysis plan for testing the learning measures was to conduct univariate tests for each dependent learning measure (narrative test, spatial location, and spatial reconstruction). However, the three dependent measures for learning were all significantly correlated at p < .001 (see Table 1). One of the bivariate correlations (between spatial location and spatial reconstruction) indicated a potential multicollinearity issue, Pearson's r = .705. Because this correlation is right at the conventional .70 threshold, and because statistical problems caused by multicollinearity generally occur at higher values (.90 and higher; Tabachnick & Fidell, 2013), the decision was made to include all three dependent learning measures in the multivariate test. A factorial MANOVA was conducted to test the effect of navigation type (natural walking vs. teleportation) and piloting method (proximal vs. distal) on all three dependent learning measures. Skewness statistics for each of the dependent measures were all within the ±1.0 threshold, indicating the data were normally distributed. Levene's test and Box's M test were not significant, indicating homogeneity of variance and homogeneity of variance–covariance matrices, respectively. An examination of Mahalanobis distances indicated that there were no outliers in the data set. Thus, all criteria for multivariate testing were met. The factorial MANOVA revealed neither a main effect for navigation type, Wilks' Λ = .96, F(3, 108) = 1.426, p = .24, partial η2 = .038, nor for piloting method, Wilks' Λ = .99, F(3, 108) = .54, p = .65, partial η2 = .02. However, a significant interaction effect between navigation type and piloting method occurred: Wilks' Λ = .91, F(3, 108) = 3.43, p = .02, partial η2 = .09. Because only the interaction effect was significant in the multivariate test, only the interaction effects for the subsequent univariate follow-up tests are reported.
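The Mahalanobis distance screen mentioned above is conventionally run by comparing each case's squared distance from the centroid of the dependent variables against a chi-square cutoff with df equal to the number of DVs (typically at p < .001; Tabachnick & Fidell, 2013). A hedged sketch, with hypothetical names; the authors used SPSS, not this code.

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_outliers(X, alpha=0.001):
    """Flag multivariate outliers among the rows of X (cases x DVs):
    squared Mahalanobis distances vs. a chi-square cutoff, df = # DVs."""
    X = np.asarray(X, dtype=float)
    centered = X - X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    # d2_i = c_i^T S^-1 c_i for each centered row c_i
    d2 = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
    return d2 > chi2.ppf(1 - alpha, df=X.shape[1])
```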
Table 1
Bivariate Correlation Matrix for Three Learning Dependent Variables
Dependent variable learning measures
1. Narrative test
2. Spatial location
3. Spatial reconstruction
*p < .001.
To test the effect of navigation type (natural walking vs. teleportation) and piloting method (proximal vs. distal) on the number of correct answers on the 20-question, multiple-choice test, a factorial analysis of variance (ANOVA) was conducted. The univariate test indicated that the interaction effect between navigation type and piloting method remained statistically significant, F(1, 112) = 6.252, p = .014, partial η2 = .053. Those in the natural walking, distal orientation condition scored the highest of the four groups on the narrative test (M = 15.26, SD = 3.43), followed by the teleportation, proximal orientation condition (M = 13.89, SD = 3.99), the natural walking, proximal orientation condition (M = 12.83, SD = 4.42), and the teleportation, distal orientation condition (M = 12.43, SD = 3.62). Post hoc pairwise comparisons of the four conditions indicated that natural walking with distal orientation differed significantly from both the teleporting with distal orientation condition (p = .013) and the natural walking with proximal orientation condition (p = .037). The natural walking with distal orientation condition performed best overall, while the teleportation with distal orientation condition performed the worst. When the Holm's sequential Bonferroni adjustment was made to attenuate family-wise α error, both significant pairwise findings were no longer significant. Given the known conservative nature of these adjustments, their inconsistent application in the literature, and the importance of not overcorrecting into a Type II error, we interpret the largest difference as significant (O'Keefe, 2003). The natural walking with distal orientation condition differed significantly from the teleporting with distal orientation condition. Figure 4 illustrates how navigation type and piloting method affected the outcomes of the narrative test.
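Holm's sequential Bonferroni procedure, used here and in the analyses that follow, can be sketched in a few lines: sort the m p values in ascending order, test the i-th smallest against α/(m − i), and stop at the first failure. A minimal illustration (not the authors' code):

```python
def holm_adjust(p_values, alpha=0.05):
    """Holm's step-down procedure. Returns a reject/retain decision
    for each p value, in its original position."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one test fails, all larger p values are retained
    return reject
```

With the six pairwise comparisons among four groups, even the smallest p value here (.013) is tested against .05/6 ≈ .0083, consistent with the reported loss of significance after adjustment.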
To test the effect of navigation type (natural walking vs. teleportation) and type of piloting (proximal vs. distal) on the number of correct locations to which participants walked within .5 m of the center of the correct tombstone, a factorial analysis of variance was conducted. Results showed no significant difference between the proximal and distal conditions, p = .85, or between the walking and teleporting conditions, p = .165. However, a significant interaction effect was found, F(1, 110) = 8.41, p = .005, partial η2 = .071. Those in the natural walking, distal condition scored the highest of the four groups (M = 3.15, SD = 1.83), followed by the teleportation, proximal condition (M = 2.71, SD = 1.76), the natural walking, proximal condition (M = 2.17, SD = 2.02), and the teleportation, distal condition (M = 1.6, SD = 2.04). Post hoc pairwise comparisons of the four conditions indicated that natural walking with a distal orientation differed statistically from teleportation with a distal orientation (p = .003) and approached a significant difference from natural walking with a proximal orientation (p = .06). Teleporting with a distal orientation differed significantly from teleporting with a proximal orientation (p = .029). Holm's sequential Bonferroni adjustment to attenuate family-wise error indicated that natural walking with a distal orientation remained statistically different from teleportation with a distal orientation. Again, participants in the natural walking with distal orientation condition performed the best overall, while those in the teleportation with distal orientation condition again performed the worst. Figure 5 illustrates the interaction between navigation and piloting for the spatial updating task.
A subsequent univariate test indicated a significant interaction effect between type of navigation and piloting orientation on spatial reconstruction, F(1, 111) = 4.68, p = .033, partial η2 = .04. Those in the natural walking, distal condition scored the highest of the four groups (M = 7.4, SD = 3.16), while the teleportation, distal condition scored the lowest (M = 4.67, SD = 3.7). The teleportation, proximal condition (M = 6.39, SD = 3.22) and the natural walking, proximal condition (M = 6.31, SD = 3.39) were situated between the two. Post hoc pairwise comparisons of the four conditions indicated that natural walking with a distal orientation differed significantly from the teleport with distal orientation condition (p = .004). For this variable, it is worth noting that teleporting with a distal orientation also approached significant differences from both the teleport, proximal condition (p = .055) and the natural walking, proximal condition (p = .065). Holm's sequential Bonferroni adjustment to attenuate family-wise error indicated that natural walking with a distal orientation remained significantly different from the teleport with distal orientation condition. As with the previous two tests of learning, participants in the natural walking with distal orientation condition performed best overall on correct tombstone placement, while those in the teleportation with distal orientation condition performed the worst. Figure 6 illustrates the effect of navigation and piloting on spatial reconstruction.
To test the effect of type of motion and type of piloting on task enjoyment, a factorial analysis of variance was conducted. The data collected for task enjoyment were negatively skewed and violated the homoscedasticity assumption. A log10 transformation was conducted to normalize the data, and the subsequent test was interpreted using the HC3 option of the PROCESS macro in SPSS, which estimates heteroscedasticity-consistent standard errors (Hayes & Cai, 2007). For consistency, means and standard deviations are reported in the original metric. There were no significant main effects for type of motion or piloting. However, an interaction between the two variables approached significance, F(1, 112) = 3.53, p = .063, partial η2 = .031. Once again, those in the natural walking with distal orientation condition scored the highest of the four groups (M = 6.54, SD = .57), and the lowest scoring group was the teleportation with distal orientation condition (M = 5.89, SD = 1.19). While the omnibus test was not statistically significant, post hoc pairwise comparisons indicated a statistically significant difference between the natural walking with distal orientation condition and the teleport with distal orientation condition (p = .003). This finding remained significant with the application of Holm's sequential Bonferroni correction. The consistent pattern and practical importance of this finding are elaborated upon in the Discussion section.
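For negatively skewed scores, a log10 transformation is typically applied after reflecting about the scale maximum, since the log otherwise compresses the wrong tail. The reflection step is our assumption (the article reports only that a log10 transformation was used), and the names below are illustrative:

```python
import numpy as np

def reflect_log10(scores, scale_max=7):
    """Normalize negatively skewed Likert-type scores: reflect about
    (scale_max + 1), take log10, and negate so that higher transformed
    values still correspond to higher raw enjoyment."""
    scores = np.asarray(scores, dtype=float)
    return -np.log10(scale_max + 1 - scores)
```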
Researchers have recommended that mediating variables can be examined in the context of VR learning environments (e.g., Cheng et al., 2017; Dalgarno & Lee, 2009; Fowler, 2015). This study measured immersion and engagement (Witmer & Singer, 1998), both thought to be subdimensions of presence that could potentially mediate the relationship between motion mechanics in VR environments and learning outcomes. The scale for engagement was not reliable and was dropped from subsequent analyses. For the mediation tests, a composite total learning score was created by combining the results of all three learning variables. Analysis using Hayes’ PROCESS macro, V4.0 (Model Number 4 for a test of mediation) revealed that immersion did not mediate the relationship between type of motion (walking or teleporting) and the composite total learning outcome score. Similarly, immersion did not mediate the relationship between piloting orientation and composite total learning outcome score.
Broadly speaking, this research project examined how the affordance of navigability could be manipulated to differentially impact learning outcomes in a virtual learning environment. More to the point, the type of navigation used (natural walking vs. teleportation) was crossed with two piloting orientations (proximal vs. distal) to determine if there were differences in learning outcomes as measured by narrative testing, spatial location, and spatial reconstruction activities. The following paragraphs will discuss the results in terms of learning outcomes in VR environments, implications for theory, and offer some modest suggestions for the development of educational materials in VR and future research in this domain.
Regarding learning outcomes, two noteworthy patterns emerged. First, the natural walking with distal orientation condition scored the highest on all three of the learning outcomes. Second, the teleportation with distal orientation condition scored the lowest on all three of the learning outcomes. Our research question asked whether any combination of conditions differentially influenced learning outcomes. The pattern of responses indicated that there is clearly a preferred combination of motion mechanics for learning outcomes. If learning as measured by narrative testing, spatial location, and spatial reconstruction is important, then natural walking with distal orientation is the best choice.
This finding is consistent with previous theorizing in this area. A review by Chrastil and Warren (2012) covering active versus passive learning in virtual environments theorized that there are advantages to locomoting naturally, including increased learning through attention and encoding. Walking through the environment and directly experiencing it (experiential learning theory; Kolb, 1984) may have led to stronger internal representations (Rieser, 1999; Rieser & Pick, 2007) of the space and potentially better spatial memory for locations within the environment (Presson & Hazelrigg, 1984). Natural walking involves these cognitive processes, engaging working memory and the ability to encode spatial information to a higher degree than in the teleporting condition, where participants do not physically move. Evidence from this study supports this idea when the positions of interest were distally placed. The fact that participants in the natural walking with distal orientation condition consistently outperformed all other conditions on the learning measures suggests that not all navigability conditions are created equal when it comes to learning outcomes. Walking, a largely automatic process in adults, may have allowed participants an extra few seconds to reflect on the information they had engaged with while moving toward their next heading, allowing them to perform better on the learning measures. By contrast, in the teleport condition, available resources were not used to process relevant narrative and spatial information, but rather to reorient to the rapid change in scenery and new position in space, thereby detracting from the overall learning potential.
While the multivariate test examining type of motion and piloting orientation revealed a small-to-medium effect size (partial η2 = .09) and the subsequent univariate tests revealed small effect sizes (narrative test, partial η2 = .053; spatial location task, partial η2 = .071; spatial reconstruction, partial η2 = .04), further examination underscores the importance of examining both statistically significant and practically relevant findings. For example, the practical relevance of the interaction between navigation type and piloting method on narrative testing is noteworthy. The multiple-choice format used for the narrative test in this study is similar to the multiple-choice exam format frequently used in U.S. high schools and colleges. When converted to percentage scores, those in the natural walking with distal orientation condition scored 76.3%, those in the teleporting with proximal orientation condition scored 69.45%, those in the natural walking with proximal orientation condition scored 64.1%, and those in the teleporting with distal orientation condition scored 62.15% on the narrative test. By many grading rubrics, the percentage differences just reported are the differences between earning a grade of C, D+, F, and F (respectively) on an exam. This pattern of percentage results is consistent with other published experimental studies examining learning with technology (e.g., Downs et al., 2011, 2015). Content creators, teachers, and programmers should be aware that technological affordances (i.e., navigability mechanics: types of navigation and piloting orientation) can be manipulated in ways that affect learning outcomes.
Although the omnibus ANOVA test for enjoyment was not statistically significant (p = .063), the pairwise comparison between natural walking with distal orientation and teleport with distal orientation was significant (p = .003), and it is worth mentioning that the natural walking with distal orientation condition received the highest enjoyment rating of all four conditions. Further post hoc analysis revealed that enjoyment was significantly positively correlated with all three tests of learning: the narrative test (r = .21, p = .025), the spatial updating task (r = .20, p = .037), and the spatial reconstruction task (r = .26, p = .004). Those who design and customize learning environments may wish to consider ways to make VR environments more enjoyable. As this study demonstrates, natural walking with distal orientation maximized both learning outcomes and enjoyment, and enjoyment was significantly positively correlated with learning outcomes.
Regarding the mediation models, immersion did not mediate the relationship between type of motion or type of piloting and learning outcomes. Engagement was not tested as a mediating variable because the measure was not reliable. Broadly speaking, the findings reported here indicate that immersion does not mediate the relationship between type of motion or piloting orientation and learning. This finding is inconsistent with other experimental work conducted in interactive digital environments (e.g., Balakrishnan & Sundar, 2011; Downs & Oliver, 2016). As different conceptualizations and operationalizations of presence exist, our understanding of VR environments would benefit from continued testing of its many nuanced facets.
The study of emerging VR, AR, and extended reality (XR) technologies as teaching and learning devices is still in its infancy. As such, scholars (e.g., Fowler, 2015) rightfully lament the lack of any unified pedagogical theory of learning in VR. Nonetheless, as this study demonstrated, existing theories examining the role of technological affordances (Sundar, 2008) are capable of contributing to our understanding of best practices for learning in virtual and augmented spaces.
This study provides a foundation from which to anchor a consistent program of research into learning in VR environments examining the relationship between types of navigation and learning. However, some observations may help to focus scholars interested in pursuing this line of inquiry. Natural walking provides a representation of VR spaces that is continuous. Since it is continuous, time that would ordinarily be spent orienting, as is necessary for the teleportation conditions, can be spent reflecting on information that has been learned in the VR environment. The teleportation condition, on the other hand, facilitates motion with instantaneous movement from one place to another and does not allow time for spatial updating or reflection on newly learned information. Xie et al. (2018) referred to this as the difference between continuous (e.g., natural walking) and discrete (e.g., teleporting) means of locomotion. The findings from this study are consistent with Xie et al. (2018). Future research would benefit from examining how alternative forms of continuous motion, especially in larger virtual worlds (e.g., flying or use of mechs and vehicles), influence learning outcomes.
Related to this point, the present study examined a relatively small virtual environment due to the physical lab space available to construct an environment that also considered participant safety (the experiment was conducted within the bounds of a 3.048 × 6.096 m lab space). Therefore, proximal and distal distances were operationalized within the bounds of these measurements, with total distances covered averaging 18.88 m and 58.22 m, respectively. Given these dimensions, the environment tested in this study may be regarded as a “visual line of sight” or VLOS environment. Though studies have shown that individuals tend to underestimate distances in VR compared to the physical world (see Creem-Regehr et al., 2015, for review), further studies could examine motion mechanics across vast distances in VR (e.g., Elder Scrolls V Skyrim—estimated size 14.5 square miles; Philpott-Kenny, 2020). These vastly larger environments are referred to as “beyond line of sight” or BLOS environments. Future work should examine if certain combinations of navigational properties facilitate differences in learning outcomes between VLOS and BLOS VR environments.
Another limitation has to do with the choice to use a cemetery as a backdrop for the VR experiment. While the visual imagery stayed true to Masters' Spoon River Anthology, the virtual environment shown to participants could make some users uncomfortable depending on their cultural background. However, a one-sample t test indicated that participants' scores for enjoyment were significantly higher than the midpoint of the scale, t(115) = 28.61, p < .001, Cohen's d = .854. Given that the average enjoyment score for the entire sample of students was M = 6.27 on a 7-point scale, any systematic negative cultural or individual responses to cemeteries in this sample of participants can reasonably be ruled out. Nonetheless, future research using these types of stimuli should continue to screen for content sensitivity.
The nature of this study limits our understanding of learning outcomes as it utilized one-time only measurements. Future studies should be developed with a longitudinal design to examine whether the learning gains would hold over time. Also important would be research that examined learning over multiple trials and even with other people in collaborative VR spaces. Given the controls necessary to eliminate the influence of repetition (the timer and disappearing epitaphs on the tombstones), it is also important to examine how participants would learn in a VR environment that allows participants to freely navigate a space in their own way, on their own time (e.g., see Ferguson et al., 2020).
Finally, embodiment should also be examined in future work related to digital learning spaces. Current systems typically resort either to utilizing a nonanimated body in VR environments as a proxy for the self or to forgoing a body altogether, giving the viewer the perspective of floating through the ether. However, as technology allows, an embodied, fully animated avatar could change learning outcomes in digital learning spaces. A moving digital body mapped to an existing real body might encourage egocentric coding of spatial information, that is, an understanding of objects in relation to the self, as opposed to allocentric coding, in which features of an environment are processed in relation to each other. Future work should examine whether egocentric or allocentric coding differentially impacts learning outcomes.
Whether or not VR lives up to its billing as the future of learning is still up for debate. Nonetheless, learning can occur in VR environments. This study lends support to the growing body of research suggesting that the design of virtual learning environments matters. When holding content and time spent interacting with materials constant, how an environment is designed (that is, how affordances like navigability are manipulated) can have measurable effects on learning outcomes. Future research would benefit from testing learning outcomes across media, across various technological affordances, and across various learner characteristics to determine what works best to fulfill desired learning objectives.
Although the title of this study suggests that misplacing tombstones from the virtual Spoon River Cemetery spatial reconstruction task may be the “grave errors” in question, the authors contend that the gravest of errors is the assumption by designers of virtual educational spaces that all forms of navigation in VR environments are equally suited for learning. To summarize, the results from this experiment demonstrated that natural walking with distal piloting performed best on all three tested learning outcomes (narrative testing, spatial location, and spatial reconstruction). Future research needs to continue to examine navigation methods as well as other technological affordances to match the appropriate mechanics to desired outcomes.
The experience I had today using this virtual reality (VR) technology was a new one for me.
The experience I had today with this VR technology was very routine for me. (RC)
This was the first time I have used VR technology like this before.
I enjoyed the experimental task I participated in today.
I thought that the task I participated in was frustrating. (RC)
I had fun participating in the experiment today.
I was disappointed participating in the task today. (RC)
I would like to have experiences like this again in the future.
I thought the task I participated in today was boring. (RC)
I felt like I was physically inside the VR environment.
I felt immersed in the VR environment.
I felt like I was surrounded by the VR environment.
The virtual world was responsive to actions that I initiated.
I was aware of events occurring in the lab space when I was in the virtual world.
It was easy to manipulate objects in the virtual world.
The virtual world made me feel disoriented. (RC)
Using the control mechanisms was intuitive.
I got proficient in moving around through the virtual environment.
The visual display interfered with my ability to perform the required activities. (RC)
I could concentrate on the assigned tasks in the virtual environment because the control mechanisms were easy to use.
Note. RC = reverse coded.