Exploring the potential of immersive virtual environments for learning American Sign Language

by Jindi Wang, Durham University, UK.

Effective communication forms the bedrock of human connection, and for the deaf community, sign language is a powerful bridge to the hearing world. Yet learning sign language online has proven challenging for many students, with traditional approaches often feeling tedious and uninspiring. In this article, we describe our work on rethinking how American Sign Language (ASL) is taught.

In our recent research, detailed in this publication, we introduced an immersive virtual environment for learning the ASL numbers 0–9. Our motivation was to address the limitations of website-based learning and to give students a fresher, more engaging way to study.

Immersive ASL Learning: A Visionary Leap

Our hypothesis was that an immersive virtual environment could transform the learning experience, fostering greater engagement than traditional website-based methods. To test this, we conducted a user survey built on six assessment scales: Attractiveness, Efficiency, Perspicuity, Dependability, Stimulation, and Novelty. To collect the feedback that served as the study's data source, we invited 15 users (8 male, 7 female) to test the immersive sign language learning environment we developed, and a separate group of 15 users (8 male, 7 female) to test the website-based ASL learning environment. Most participants had little or no prior knowledge of ASL or other sign languages.

The user experience analysis revealed a clear preference for the immersive VR environment. For our participants, the appeal of experiential learning in a virtual space outweighed the convenience and user-friendliness of the web-based platform. They described a sense of immersion that went beyond traditional learning methods, fostering a deeper connection with the language and culture of ASL.
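For readers who want to run a similar comparison, the sketch below shows one way to score and compare two independent groups on these six scales in Python. The scale names match ours, but everything else is an assumption: the -3 to +3 response range (as in the standard User Experience Questionnaire), the choice of the Mann-Whitney U test for small independent samples, and the randomly generated placeholder scores, which are not our study's data.

```python
# Minimal sketch: compare per-scale user-experience scores between the
# two conditions (immersive VR vs. website). Scale names match the
# study; the -3..+3 range, the test choice, and all scores below are
# illustrative assumptions, not the study's actual data or analysis.
import numpy as np
from scipy.stats import mannwhitneyu

SCALES = ["Attractiveness", "Efficiency", "Perspicuity",
          "Dependability", "Stimulation", "Novelty"]
N_PER_GROUP = 15  # 15 participants tested each environment

rng = np.random.default_rng(0)
# Placeholder scores: one row per participant, one column per scale.
vr_scores = rng.uniform(-3, 3, size=(N_PER_GROUP, len(SCALES)))
web_scores = rng.uniform(-3, 3, size=(N_PER_GROUP, len(SCALES)))

for i, scale in enumerate(SCALES):
    vr, web = vr_scores[:, i], web_scores[:, i]
    # Mann-Whitney U is a reasonable default for small independent
    # samples of ordinal questionnaire scores.
    stat, p = mannwhitneyu(vr, web, alternative="two-sided")
    print(f"{scale:14s} VR mean={vr.mean():+.2f}  "
          f"web mean={web.mean():+.2f}  U={stat:.1f}  p={p:.3f}")
```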

Promising Results and Future Horizons

In conclusion, our findings support the initial hypothesis, showing a clear user preference for the immersive environment over the website-based mode. Users expressed satisfaction with the virtual learning space, which sets the stage for future enhancements. Our roadmap includes dynamic elements such as backdrop movement, scene changes, and animation prompts to deepen the immersive experience. We also plan automatic settings to streamline user interactions and a user-friendly interface to guide users through the system controls.

Looking ahead, we aim to develop a robust sign recognition model, which would enable the inclusion of more sophisticated sign language learning materials. This marks the next chapter in our work on ASL education, where immersive technology becomes a catalyst for deeper engagement and better learning outcomes.
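Our paper does not commit to a particular recognition architecture, but as an illustration of what such a model could look like for the ASL digits 0–9, here is a minimal sketch of one common approach: extracting hand landmarks with MediaPipe and classifying them with a k-nearest-neighbours model. The libraries, features, and classifier are all illustrative assumptions on our part, and the `dataset` of labelled digit images is hypothetical.

```python
# Sketch of one possible sign recognition model for ASL digits 0-9:
# MediaPipe hand landmarks fed into a simple k-NN classifier. This is
# an illustrative assumption, not the architecture used in the paper.
import cv2
import mediapipe as mp
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

mp_hands = mp.solutions.hands

def landmarks_from_image(path: str) -> np.ndarray | None:
    """Return a flat (63,) array of 21 (x, y, z) hand landmarks, or None."""
    image = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        result = hands.process(image)
    if not result.multi_hand_landmarks:
        return None  # no hand detected in this image
    lm = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm]).flatten()

def train(dataset: list[tuple[str, int]]) -> KNeighborsClassifier:
    """Fit a classifier on a hypothetical list of (image_path, digit) pairs."""
    X, y = [], []
    for path, digit in dataset:
        feats = landmarks_from_image(path)
        if feats is not None:
            X.append(feats)
            y.append(digit)
    return KNeighborsClassifier(n_neighbors=3).fit(X, y)
```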

In the evolving landscape of education technology, our immersive ASL learning environment illustrates the potential of virtual reality to break down barriers and create inclusive spaces for diverse learners. As we refine and expand the system, we envision immersive technologies reshaping sign language education and fostering easier communication and understanding between deaf and hearing communities.

The continuing evolution of immersive technology is opening the way for a real shift in how we approach language learning, creating environments that go beyond traditional boundaries and support learners in new ways. Our journey is only beginning, and the possibilities for enriching education through immersive technologies are vast.

To explore the complete research findings, access the full paper here.

Author

Jindi Wang, Postgraduate Student at Durham University, UK.