The incorporation of artificial intelligence (AI) into augmented reality (AR) and virtual reality (VR) applications will affect practically every company, although the speed at which such applications are rolled out will vary by industry.
In spring 2023, the World Economic Forum’s Kelly Ommundsen, head of digital inclusion, and Jaci Eisenberg, head of alumni relations, noted that the integration of generative AI (GenAI) and the metaverse could create dynamic virtual environments, where AI-generated content adapts and responds to users’ actions, creating personalised and engaging experiences.
“It can also enable new forms of art, entertainment and communication, pushing the boundaries of what is possible in the virtual realm,” they added.
AI, in its many forms – from machine and deep learning to generative AI (GenAI) and its large language models (LLMs) – will support the creation of interactive, immersive virtual environments, make sense of sensor data for digital twins, create content for the metaverse and gaming applications, and allow users to interact in many ways.
It will enable interactive storylines and system reactions, facilitate the creation of responsive avatars, fill in backgrounds and empty spaces in gaming landscapes and digital twin environments, and allow developers to scale virtual worlds quickly. Use cases for AI in extended reality (XR) applications are truly limitless.
AI and XR build on architecture and construction
AI can support visualisations of natural phenomena by pre-processing data. For example, a research team from the University of Arizona and Purdue University is developing an application to make sense of the large amount of data that agricultural sensors can gather.
The researchers are leveraging AI and VR to develop VR-Bio-Talk, a platform that will enable users to create advanced visualisations and analyse the data to better understand biological systems and agricultural practices.
“The goal of the project is to create a virtual reality environment that mirrors real-world agricultural environments and is paired with an artificial intelligence program,” says the University of Arizona. AI will process and condense collected data and then transfer the information to virtual representations. The National Science Foundation is supporting the project with funding of more than $2m.
According to Duke Pauli, director of the Center for Agroecosystems Research at the University of Arizona, the platform’s benefits could translate to most applications in which sensor data and virtual representation will play a role.
“Extracting information from data has always required programming expertise, but having the ability to break down technical barriers and let users converse natively with their data will enable researchers to focus more on their science and generate greater societal impact,” says Pauli.
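The pre-processing step described above – condensing large volumes of raw sensor readings before handing them to a virtual representation – can be sketched in a few lines. This is a minimal illustration, not VR-Bio-Talk’s actual design: the field names, metrics and the choice of averaging as the aggregation are all assumptions made for the example.

```python
from statistics import mean

def condense_readings(readings):
    """Group raw (plot_id, metric, value) sensor readings and average them.

    Aggregation by mean is an illustrative assumption; a real pipeline
    might use ranges, anomaly scores or model-derived summaries instead.
    """
    grouped = {}
    for plot_id, metric, value in readings:
        grouped.setdefault((plot_id, metric), []).append(value)
    # One summary record per plot/metric pair, ready to hand to a
    # virtual representation instead of thousands of raw readings.
    return {key: round(mean(values), 2) for key, values in grouped.items()}

raw = [
    ("plot-1", "soil_moisture", 0.31),
    ("plot-1", "soil_moisture", 0.35),
    ("plot-1", "canopy_temp", 24.8),
    ("plot-2", "soil_moisture", 0.22),
]
summary = condense_readings(raw)
print(summary[("plot-1", "soil_moisture")])  # averaged value for the plot
```

The point of the conversational layer Pauli describes is that users would query such summaries in natural language rather than writing this kind of code themselves.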
In industrial applications, the combination of AI and XR is already changing processes. In architecture and construction, building information modelling (BIM) and computer-aided design (CAD) have seen use for decades. Related information has therefore existed for a long time in a digital format.
Leading design software provider Autodesk offers solutions for construction, engineering, product design and entertainment applications. Its portfolio is based on the core AutoCAD platform for computer-aided design, Revit design software and Tandem to create digital twins. The company increasingly integrates AI applications. Autodesk’s chief marketing officer, Dara Treseder, points out that this integration allows users to “automate menial tasks, augment creativity and analyse complex information”.
Autodesk is also working on an experimental research effort to create GenAI applications for design and manufacturing purposes. Project Bernini uses AI to generate functional three-dimensional shapes from various types of input, such as two-dimensional images, text, voxels or point clouds. The project grew out of scientific research on generating 3D shapes. The company expects AI to redefine architecture, with algorithms driving efficiency, innovation and sustainability in the industry.
“From automated drafting to real-time updates in building information modelling to energy consumption analysis, AI streamlines workflows, allowing architects to focus on creating visionary designs,” it says.
AI and XR change design
Design applications well beyond architecture and construction are set to benefit tremendously from the combination of AI and XR. Adobe has integrated the AI features Generative Fill and Generative Expand into its Photoshop design software, enabling customers to use text prompts to add design elements and expand image dimensions, respectively. The company’s Illustrator graphics software and stock image database also offer AI functions.
To further the use of AI-based applications to create visuals, AI computing firm Nvidia and Shutterstock, a provider of stock photography and footage, have entered a partnership to build a model for AI-created imagery for industrial and entertainment applications.
Other applications look at ways to enhance digital twins with photorealistic environments that instantly respond to users’ actions. Younite AI, for example, is leveraging AI’s capabilities to “create virtual reality projects that range from cultural and historical experiences to future-focused simulations and training”.
Such advanced environments can find use in scenario simulations and decision-making support, as well as in consumer-facing applications, including marketing materials and product manuals.
AI, XR and intelligent agents
AI agents are another consideration for entertainment and consumer-facing commercial applications.
GenAI has a very natural place in gaming applications, where it has the potential to create an entire support industry. Artificial Agency has deployed AI to develop generative behaviour for gaming applications. The founders target game developers, who will use the company’s product to create sophisticated game mechanics that support interactive storytelling.
Similarly, Inworld AI is developing an AI engine for media and gaming applications to create “experiences with autonomous behaviour, evolving narratives and worlds that respond to each action”.
NTT Docomo has also been working on a GenAI model that creates non-player characters for virtual environments from text descriptions supplied by developers.
Josh Rush, founder and CEO of Surreal Events, says: “AI-powered agents will inevitably play a huge role at scale…. On the development side, it will allow for quicker, cheaper, and higher-fidelity art creation and deployment cycles. And on the channel management front, it will reduce long-term operating costs.”
In sales applications, Walmart is working on an Adaptive Retail campaign that promises to provide personal shopping experiences. The effort will leverage AI, AR and an immersive commerce platform. To that end, the company created an AI-enabled Customer Support Assistant and Wallaby, which comprises several retail-focused LLMs to provide customer-facing experiences. The efforts aim to embed retail in other activities consumers are engaged in, such as placing offers in gaming environments.
Walmart has also created Retina, an AR platform that uses AI to create virtual 3D products, allowing consumers to access product information when viewing these objects. Another effort explores the use of headsets to visualise furniture in various settings.
AI and XR combine real-world and virtual elements
Finally, there are also applications that highlight AI’s ability to connect virtual applications with objects and aspects of the real world. Two Finnish companies, for example, are leveraging AI to create AR-enhanced vision for automotive and aerospace applications.
Basemark is developing a toolkit to enable developers of automotive applications to create AR experiences via its AI-based computer vision technology. The company’s product Rocksolid AR is designed to enable immersive augmented reality experiences on any in-vehicle display.
Similarly, Distance Technologies is using what its founders call “contextual AI” to create glasses-free mixed reality experiences, such as projecting street signs onto a car’s windscreen or maps into a pilot’s cockpit. Alphabet’s venture arm, GV, has invested in the company, which is working on a solution that can turn any transparent surface into an AR display, with the automotive and aerospace industries as initial target markets. The approach should have immediate application opportunities in other industries as well.
More experimental, but potentially truly powerful for augmented applications, is AI’s ability to render real-world objects as interactive elements. Google has developed a prototype system that can transform physical objects into virtually augmented ones.
The project, XR-Objects, employs multimodal LLMs to create the effect. The researchers give the example of an apple and an orange on a table: the system identifies the fruits and then presents a virtual menu that names each fruit, offers information about it, and lets users compare the two, share the information with a friend, or add them to a shopping list.
The researchers note that XR applications for the most part treat real-world environments as a passive backdrop while only the digital elements have interactive features. XR-Objects intends to remedy that limitation by merging physical objects and virtual elements seamlessly.
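The interaction pattern behind this idea – a recognised physical object gaining a small menu of virtual actions – can be sketched simply. The detection step is mocked here, and the action names are illustrative assumptions rather than the prototype’s actual interface.

```python
def build_menu(label, other_labels):
    """Return the virtual actions offered for a recognised object.

    In the real prototype, a multimodal LLM would identify the object
    and generate the information behind each action; here the labels
    are supplied directly and the actions are hypothetical.
    """
    actions = [
        f"What is this? ({label})",
        "Share with a friend",
        "Add to shopping list",
    ]
    # Comparison actions only make sense when other objects are in view.
    actions += [f"Compare with {other}" for other in other_labels]
    return actions

detected = ["apple", "orange"]  # stand-in for multimodal-LLM detection output
menu = build_menu(detected[0], detected[1:])
for item in menu:
    print(item)
```

The key design point is that the menu is generated per object at runtime, which is what turns a passive backdrop into an interactive one.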
There are concerns associated with the indiscriminate use of AI to create XR environments and related processes, as a previous article, The irresistible marriage of AI and XR, notes. But the benefits of embedding AI applications are manifold, and productivity will increase substantially with AI-generated visualisations and AI-complemented data use. AI and XR combine to create powerful solutions for practically every industry and, over time, for every company.
Martin Schwirn is the author of “Small data, big disruptions: How to spot signals of change and manage uncertainty” (ISBN 9781632651921). He is also senior advisor for strategic foresight at Business Finland, helping startups and incumbents find their position in tomorrow’s marketplace.