Misinformation in Virtual Reality

by James Brown, Jeremy Bailenson & Jeffrey Hancock, Stanford University, USA.

Virtual Reality (VR) use is growing, and the medium is becoming ever more immersive, creating perceptual experiences that approach the fidelity of reality. The positive applications of this technology have grown with its popularity. One such use case is VR exposure therapy, in which patients make full use of VR’s unique immersive quality to slowly and safely expose themselves to a particular fear. Another is education, where VR has been shown to be effective at promoting climate change awareness and the learning of physical tasks. However, there is growing concern that these same immersive properties may be used to manipulate users and amplify misinformation. The current information ecosystem, which includes text, images, and video, has already been shown to be an ample breeding ground for misinformation, with significant ramifications for society, democracy, and trust. The potency of immersive VR makes the potential for manipulative persuasion using misinformation even more alarming, and the lack of research on the topic relevant to trust and safety stakeholders further exacerbates the issue. By examining VR’s various features and affordances, researchers at Stanford have begun to develop the first framework for how VR might influence misinformation effects. This analysis can help inform trust and safety measures for platforms hosting VR content and guide further research in this rapidly evolving medium.

Misinformation has become increasingly salient in today’s media environment due to the internet’s ability to rapidly spread false or misleading information. Exposure to misinformation can have negative consequences, as people often struggle to discern credible news from false information online. To advance the understanding of misinformation in VR, it is essential to examine the relationships between VR features, the presentation of information as true or false, and people’s beliefs and perceptions. Before proceeding, we introduce and define a new concept: “misexperience.” In this article, the VR content is the misinformation, the user’s activity inside VR with that content is the misexperience, and the false beliefs that may follow are misperceptions.

Research on VR and misinformation is scarce, but studies have shown that virtual experiences can change subsequent attitudes and behaviors. VR research often focuses on presence, which refers to the psychological impact of experiences that feel real to users. Direct experiences in VR have been shown to translate to beliefs and behaviors differently than indirect experiences. Although little research directly tests this hypothesis, we expect that VR’s power to create direct “misexperiences” may have a correspondingly powerful effect on misperceptions.

The concept of affordances is vital for understanding VR’s potential for fostering misperceptions. It allows us to examine how VR technology’s features can shape human beliefs and behavior in non-deterministic ways. An affordance-based approach avoids technological determinism by recognizing both the potential persuasive power of a VR feature and a user’s psychological experience of that feature in influencing their beliefs or behaviors. This approach considers the interaction between different VR features and user characteristics. For example, immersive features such as “field-of-view” and “haptic touch” pertain to the medium’s technological aspects, while content features such as “embodiment” and “persistence” relate to the information conveyed about users or other actors in the environment. This approach provides insights into the implications of specific VR features for misinformation without being overly deterministic and helps recognize that exposure to misinformation in VR does not automatically lead to misperceptions. See the paper “Misinformation in Virtual Reality” for a list of Immersive and Content affordances and examples of their impact on the user in an example misexperience.

VR has the potential to bring about significant positive change in the world by fostering connectedness, creating communities, and serving as a tool for education, collaboration, and psychological exploration. However, as VR stands on the cusp of widespread adoption, we should avoid repeating the mistakes that traditional social media platforms made a decade ago. This is especially important because some of those same social media companies are also the primary drivers of VR: ninety percent of all VR headsets sold in 2022 were made by either Meta or ByteDance. Current mitigation efforts include minimalist content moderation for standalone experiences or community-based moderation in social VR applications. However, these methods may be insufficient for addressing misinformation and other abuses in VR environments.

To address the challenges of misinformation in VR, greater media literacy is necessary among both content providers and users. VR content providers and creators should collaborate with the trust and safety community and watchdog organizations while scaling the size and resources of their teams for more effective moderation. Additionally, research should validate how misinformation in VR affects attitudes and behaviors, how quickly misinformation spreads in VR, and how platforms can increase detection rates. By validating these concerns and shaping further discussion and mitigation efforts, the positive potential of VR can be better realized while its risks are minimized.

For the full publication: Brown, J. G., Bailenson, J., & Hancock, J. (2023). Misinformation in Virtual Reality. Journal of Online Trust and Safety, March 2023.

Authors

James Brown, MBA Candidate at Stanford Graduate School of Business, Stanford University, USA.

Jeffrey Hancock, Founding Director of the Stanford Social Media Lab and Professor in the Department of Communication, Stanford University, USA.

Jeremy Bailenson, Thomas More Storke Professor in the Department of Communication and Founding Director of Stanford University’s Virtual Human Interaction Lab, Stanford University, USA.