{"id":33817,"date":"2024-07-02T14:32:36","date_gmt":"2024-07-02T12:32:36","guid":{"rendered":"https:\/\/media-and-learning.eu\/?p=33817"},"modified":"2024-07-03T12:47:29","modified_gmt":"2024-07-03T10:47:29","slug":"accelerating-xr-workflows-with-artificial-intelligence-recap","status":"publish","type":"post","link":"https:\/\/media-and-learning.eu\/subject\/higher-education\/accelerating-xr-workflows-with-artificial-intelligence-recap\/","title":{"rendered":"Accelerating XR workflows with Artificial Intelligence &#8211; recap"},"content":{"rendered":"\n<p>by <strong>Pippa Brownlie<\/strong>, XR ERA, The Netherlands.<\/p>\n\n\n\n<p>On the 26th of May, we were thrilled to collaborate with the Media &amp; Learning Association for a Meetup on the pertinent topic of \u201c<a href=\"https:\/\/media-and-learning.eu\/event\/xr-in-he\/\" target=\"_blank\" rel=\"noreferrer noopener\">Accelerating XR workflows with Artificial Intelligence<\/a>\u201d.<\/p>\n\n\n\n<p>After a welcome from Jeremy Nelson &#8211; Senior Director of Creative Studios at the University of Michigan &#8211; the Meetup began with a poll aiming to gauge the extent of the audience\u2019s experience with AI and XR, as well as the ways in which they have, thus far, put the technologies to use. The results revealed strong similarities among the majority of attendees.<\/p>\n\n\n\n<p>In terms of technological familiarity with both AI and XR, most declared themselves either \u2018not familiar\u2019 or \u2018somewhat familiar\u2019, with a relatively even split between the two categories. The use cases already explored tended to fall within the entertainment, education, and healthcare industries. 
The most significant challenges cited included technical limitations, a lack of skilled professionals, and a general perception of technological complexity.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>ABOUT OUR SPEAKERS<\/strong><\/h2>\n\n\n\n<p><strong>Pete Stockley<\/strong> is a Learning Systems Specialist at Omnia Vocational School in Espoo, Finland.<\/p>\n\n\n\n<p><strong>Paul Melis<\/strong> is a Senior Visualisation Advisor at SURF in the Netherlands.<\/p>\n\n\n\n<p><strong>Orestis Spyrou<\/strong> is a Researcher in Extended Realities at the Social &amp; Creative Technologies Lab of Wageningen University &amp; Research, also in the Netherlands.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>GENERATIVE AI IN XR &#8211; Pete Stockley<\/strong><\/h2>\n\n\n\n<p>As Pete explains, Omnia Vocational School is an organisation in which the individual needs of students are prioritised. Although this is without doubt a fantastic educational opportunity, from a staff perspective it often leaves limited time for researching, trialling, and implementing new tools. At Omnia, however, they are fortunate. Considerable attention is paid to AI and XR, to the extent that both AI and XR workgroups are actively in place. These aim to deepen understanding of generative AI tools, create no-code learning experiences, meet the ever-adapting needs of teachers in vocational training, and offer students the opportunity to actively explore AI and XR, both in content and tools.<\/p>\n\n\n\n<p>What, in practice, does this approach mean at Omnia? To demonstrate, Pete provides the example of the \u2018Seven Wonders of the World\u2019 experience which he created in collaboration with a colleague from the school\u2019s department of tourism. 
In this, all seven locations known as the world\u2019s seven wonders were AI-generated using ThingLink &#8211; a Finnish-designed platform which facilitates the simple creation of interactive and immersive experiences. Although this generative AI function of ThingLink was used to fabricate the locations which students would explore, the information which they would read while doing so was instead generated through Copilot. The image below provides a clear breakdown of precisely which AI tools were employed, so please feel welcome to take a look and try them out for yourselves!<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"583\" src=\"https:\/\/media-and-learning.eu\/files\/2024\/07\/image-15.png\" alt=\"\" class=\"wp-image-33818\" srcset=\"https:\/\/media-and-learning.eu\/files\/2024\/07\/image-15.png 1024w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-15-300x171.png 300w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-15-768x437.png 768w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-15-370x211.png 370w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-15-270x154.png 270w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-15-570x325.png 570w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-15-740x421.png 740w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>In the case of this particular experience, amongst the primary reasons ThingLink was chosen was its support for adding interactive tags to 3D models, images, and videos, as well as 360-degree videos. While exploring the seven virtual environments, students could touch these tags to receive information regarding specific aspects of the landscape before them. Although this informative text was AI-generated, it is important to note that all text in the experience had first been proof-read by educators to ensure its accuracy. 
While this method, of course, still requires a certain time commitment from educational staff, it remains considerably faster than writing every text from scratch.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"980\" height=\"550\" src=\"https:\/\/media-and-learning.eu\/files\/2024\/07\/image-16.png\" alt=\"\" class=\"wp-image-33819\" srcset=\"https:\/\/media-and-learning.eu\/files\/2024\/07\/image-16.png 980w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-16-300x168.png 300w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-16-768x431.png 768w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-16-370x208.png 370w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-16-270x152.png 270w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-16-570x320.png 570w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-16-740x415.png 740w\" sizes=\"auto, (max-width: 980px) 100vw, 980px\" \/><\/figure>\n\n\n\n<p>As Pete explains, no tool is perfect, yet this is not necessarily something with which to be too greatly concerned. Owing to their AI-generated nature, the environments produced by ThingLink are not wholly accurate, especially in their smaller details. This would not be acceptable for all use cases, yet in this case it presents little cause for worry. What, as Pete stresses, these environments are excellent for is allowing both students and teachers to gain active, hands-on experience within the realms of AI and XR while enhancing the educational experience. 
The information absorbed by the students via the interactive tagged hotspots is, after all, fully accurate.<\/p>\n\n\n\n<p>From developing this Seven Wonders of the World VR (as well as browser) experience, Pete has derived several takeaways, the most significant of which include:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Ideally, work in small teams.<\/li>\n\n\n\n<li>Look for each tool\u2019s possibilities and restrictions &amp; discover these hands-on whenever possible!<\/li>\n\n\n\n<li>Test results in authentic learning scenarios.<\/li>\n\n\n\n<li>Ask for feedback from peers and students, clarifying whether the tool provided added value.<\/li>\n\n\n\n<li>Share the knowledge!<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>XR &amp; AI at SURF &#8211; Paul Melis<\/strong><\/h2>\n\n\n\n<p>Following Pete Stockley\u2019s in-depth presentation of his AI and XR educational case study, we were also delighted to welcome Paul Melis from SURF to present his insights into the field from a technically-oriented perspective.<\/p>\n\n\n\n<p>Having first provided some background concerning SURF as a public values-oriented organisation and clarified its central role within the education sector as opposed to one directly involved in teaching, Paul proceeds to show the results of the prompt which he gave to Meta\u2019s Llama 3 8B model: \u201cwhat are the uses of AI for extended reality workflows?\u201d. 
These results can be seen in the image below and, as Paul acknowledges, several more could likely be found.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"980\" height=\"541\" src=\"https:\/\/media-and-learning.eu\/files\/2024\/07\/image-17.png\" alt=\"\" class=\"wp-image-33820\" srcset=\"https:\/\/media-and-learning.eu\/files\/2024\/07\/image-17.png 980w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-17-300x166.png 300w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-17-768x424.png 768w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-17-370x204.png 370w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-17-270x149.png 270w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-17-570x315.png 570w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-17-740x409.png 740w\" sizes=\"auto, (max-width: 980px) 100vw, 980px\" \/><\/figure>\n\n\n\n<p>Taking data as his starting point, he then turns to discuss the various types of data which are generated through XR technology\u2019s use, both before and during user interaction. His next point is important to bear in mind: XR headsets are already making use of AI on an operational level, whether this is openly promoted or not. 
In this way, not only are they gathering complex data, but they are also putting it to complex uses.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"980\" height=\"548\" src=\"https:\/\/media-and-learning.eu\/files\/2024\/07\/image-18.png\" alt=\"\" class=\"wp-image-33821\" srcset=\"https:\/\/media-and-learning.eu\/files\/2024\/07\/image-18.png 980w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-18-300x168.png 300w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-18-768x429.png 768w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-18-370x207.png 370w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-18-270x151.png 270w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-18-570x319.png 570w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-18-740x414.png 740w\" sizes=\"auto, (max-width: 980px) 100vw, 980px\" \/><\/figure>\n\n\n\n<p>Amongst the many reasons for this data\u2019s complexity is its frequent inaccessibility. Indeed, even when accessible, it can seldom be easily handled owing to latency and lengthy processing times. Moreover, it is often the case that existing algorithms are not yet sufficient to perform certain fundamental tasks. Despite this complexity, there are, however, potential means of easing the situation.<\/p>\n\n\n\n<p>Considering what is currently possible for running AI on XR devices, Paul raises for discussion the possibility of processing data locally rather than through the cloud. This method is especially beneficial for latency, as well as when an application has no need for a network connection. From ethical and privacy perspectives too, this method of local processing has potential advantages. 
The question is, when will it become truly possible at any scale?<\/p>\n\n\n\n<p>To give an impression of the present feasibility of running AI locally, Paul clarifies that the AI query with which he opened his presentation was, indeed, run locally. Problematically, doing so occupies approximately half of his computer\u2019s GPU memory. If he were to run this same query on a Quest 3 headset, it would consume almost all of the GPU memory available on the device. Despite this workflow\u2019s glaring present-day limitations, Paul does, however, offer a silver lining: Qualcomm is actively working towards a solution which, if successful, holds great potential.<\/p>\n\n\n\n<p>Moving now to another bid to increase AI\u2019s capacity within the XR realm, Paul introduces us to WillMa, SURF\u2019s in-house generative AI platform, which is still under development. Developed and maintained by SURF\u2019s Machine Learning team, WillMa provides a selection of open-source AI models from Hugging Face and offers the following functions:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Conversational\/instructional\/coding LLMs<\/li>\n\n\n\n<li>Text-to-speech &amp; speech-to-text (Dutch)<\/li>\n\n\n\n<li>Image generation (Stable Diffusion)<\/li>\n\n\n\n<li>In the future, the possibility for users to upload their own models<\/li>\n<\/ul>\n\n\n\n<p>In SURF\u2019s case, hosting this platform in-house allows them the benefits of digital sovereignty, the opportunity to develop in accordance with their own community\u2019s goals, the possibility of integrating with other SURF infrastructure, and an easy means of experimentation.<\/p>\n\n\n\n<p>To put SURF\u2019s current work into context within the realms of AI and XR, Paul concludes his presentation with the example of their ongoing project concerning Digital Heritage in XR. 
Here, within a virtual or mixed reality environment, 3D scans of objects can be displayed and interacted with by users who, after picking up the digital objects using hand tracking, are able to pose questions regarding both the objects themselves and their broader contexts. Such questions are then answered by the WillMa generative AI, which formulates a response based upon metadata previously entered by SURF.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Unlocking the Future of XR Entertainment by Pioneering 3D AI Empowered Copilots &#8211; Orestis Spyrou<\/strong><\/h2>\n\n\n\n<p>Orestis begins his presentation not with his slides, but instead with a live demo of the generative AI mixed reality application \u2018Digibroer\u2019 on the Quest 3. Viewing this app from the perspective of Orestis himself in the headset, we are immediately greeted by the sight of a digital avatar of Orestis standing in front of his real background. This is made possible through passthrough. As the real Orestis begins to speak with his avatar, the avatar replies. 
Although it responds at first with great enthusiasm &#8211; if not always the most applicable sentiments &#8211; the avatar\u2019s accuracy and understanding improve greatly as the interaction proceeds and Orestis poses questions regarding digital twins in agriculture.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"980\" height=\"483\" src=\"https:\/\/media-and-learning.eu\/files\/2024\/07\/image-19.png\" alt=\"\" class=\"wp-image-33822\" srcset=\"https:\/\/media-and-learning.eu\/files\/2024\/07\/image-19.png 980w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-19-300x148.png 300w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-19-768x379.png 768w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-19-370x182.png 370w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-19-270x133.png 270w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-19-570x281.png 570w, https:\/\/media-and-learning.eu\/files\/2024\/07\/image-19-740x365.png 740w\" sizes=\"auto, (max-width: 980px) 100vw, 980px\" \/><\/figure>\n\n\n\n<p>Following this highly memorable demo, Orestis proceeds to contextualise the app, which he created with his colleagues at Wageningen University &amp; Research. As he elaborates, the principal motivation behind its development was the desire to create an educator with 24\/7 availability, covering not only a vast range of topics, but also answering queries not easily addressed through traditional book learning.<\/p>\n\n\n\n<p>When discussing what he believes to be immersive technologies\u2019 most significant benefits within the education sector, Orestis cites their highly intuitive nature as well as their ability to transport users into locations and scenarios otherwise beyond reach. Moreover, the involvement of AI brings the additional benefits of increased personalisation and interactivity. 
As he acknowledges, when it comes to XR and AI, there remain numerous technical challenges such as interoperability, yet there is much about which to be positive. Upon this, we ought to concentrate!<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Q&amp;A<\/strong><\/h2>\n\n\n\n<p>During the Q&amp;A which followed, several questions were asked regarding SURF\u2019s WillMa platform as well as the technicalities of both Pete\u2019s and Orestis\u2019 apps, yet one question elicited particularly varied responses from our speakers:<\/p>\n\n\n\n<p>\u2018Did AI add value and speed to your workflows? Did it change the way you are working in any way? Do we all now need to become prompt engineers or can we simply dive in?\u2019<\/p>\n\n\n\n<p>In his response, Paul expressed his mixed feelings. For now at least, he seldom uses AI tooling in his work, having found currently available tools insufficient to meet his needs. How quickly these tools will improve to the point where he begins to use them frequently remains to be seen, yet he concludes by remarking upon the astonishing speed of their development at present.<\/p>\n\n\n\n<p>By contrast, even at present, Pete finds there to be considerable value in the use of AI and XR, both independently and in combination. Although their effective use will come about only with time, effort, and a willingness to accept that things may not work perfectly on the first attempt, the results are, for him, ultimately more than worthwhile. Moreover, as your familiarity with the technology increases through its use, you will likely make up any time once considered lost.<\/p>\n\n\n\n<p>When answering this question, Orestis focussed particularly upon the final part, namely whether it is necessary to become a prompt engineer yourself. 
Regarding this, he raised the possibility of requesting the assistance of ChatGPT when writing prompts, asking the LLM to help with phrasing and refinement.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Conclusion<\/strong><\/h2>\n\n\n\n<p>On behalf of both ourselves at XR ERA and the Media &amp; Learning Team, we would like to say a big thank you to Paul, Pete, and Orestis for joining us to share their thoughts and insights!<\/p>\n\n\n\n<p>In July, there will be no Meetup owing to a short XR ERA summer vacation, but we\u2019ll be back in August as usual and will keep you fully up to date with what we have planned over the coming weeks.<\/p>\n\n\n\n<div class=\"wp-block-media-text is-stacked-on-mobile\" style=\"grid-template-columns:26% auto\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"1024\" src=\"https:\/\/media-and-learning.eu\/files\/2024\/07\/Pippa-1-1024x1024.png\" alt=\"\" class=\"wp-image-33964 size-full\" srcset=\"https:\/\/media-and-learning.eu\/files\/2024\/07\/Pippa-1-1024x1024.png 1024w, https:\/\/media-and-learning.eu\/files\/2024\/07\/Pippa-1-300x300.png 300w, https:\/\/media-and-learning.eu\/files\/2024\/07\/Pippa-1-125x125.png 125w, https:\/\/media-and-learning.eu\/files\/2024\/07\/Pippa-1-768x768.png 768w, https:\/\/media-and-learning.eu\/files\/2024\/07\/Pippa-1-1536x1536.png 1536w, https:\/\/media-and-learning.eu\/files\/2024\/07\/Pippa-1-370x370.png 370w, https:\/\/media-and-learning.eu\/files\/2024\/07\/Pippa-1-270x270.png 270w, https:\/\/media-and-learning.eu\/files\/2024\/07\/Pippa-1-570x570.png 570w, https:\/\/media-and-learning.eu\/files\/2024\/07\/Pippa-1-740x740.png 740w, https:\/\/media-and-learning.eu\/files\/2024\/07\/Pippa-1.png 2000w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p class=\"has-large-font-size\">Author<\/p>\n\n\n\n<p><strong>Pippa Brownlie<\/strong>, XR ERA, The 
Netherlands<\/p>\n<\/div><\/div>\n","protected":false},"excerpt":{"rendered":"<p>by Pippa Brownlie, XR ERA, The Netherlands. On the 26th of May, we were thrilled to collaborate with the Media &amp; Learning Association for a Meetup on the pertinent topic of \u201cAccelerating XR workflows with Artificial Intelligence\u201d. Having been welcomed by Jeremy Nelson &#8211; Senior Director of Creative Studios at the University of Michigan &#8211; [&hellip;]<\/p>\n","protected":false},"author":4,"featured_media":32360,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_mo_disable_npp":"","footnotes":""},"categories":[271,362,4,275],"tags":[],"class_list":["post-33817","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ar-vr","category-artificial-intelligence","category-featured-articles","category-higher-education"],"featured_image_src":"https:\/\/media-and-learning.eu\/files\/2024\/04\/bigstock-Online-Meeting-Metaverse-Avata-466061609-2-1500x500-1.jpg","author_info":{"display_name":"Dovile 
Dudenaite","author_link":"https:\/\/media-and-learning.eu\/author\/dovile-dudenaite\/"},"_links":{"self":[{"href":"https:\/\/media-and-learning.eu\/api-json\/wp\/v2\/posts\/33817","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/media-and-learning.eu\/api-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/media-and-learning.eu\/api-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/media-and-learning.eu\/api-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/media-and-learning.eu\/api-json\/wp\/v2\/comments?post=33817"}],"version-history":[{"count":6,"href":"https:\/\/media-and-learning.eu\/api-json\/wp\/v2\/posts\/33817\/revisions"}],"predecessor-version":[{"id":33968,"href":"https:\/\/media-and-learning.eu\/api-json\/wp\/v2\/posts\/33817\/revisions\/33968"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/media-and-learning.eu\/api-json\/wp\/v2\/media\/32360"}],"wp:attachment":[{"href":"https:\/\/media-and-learning.eu\/api-json\/wp\/v2\/media?parent=33817"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/media-and-learning.eu\/api-json\/wp\/v2\/categories?post=33817"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/media-and-learning.eu\/api-json\/wp\/v2\/tags?post=33817"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}