Moving towards a Future of Testing in the Metaverse

Key Takeaways

  • Defining and understanding the metaverse concept begins with exploring envisioned characteristics such as its immersiveness, interconnectedness and ability to deliver an endless set of experiences to consumers.
  • Making the metaverse a reality comes along with a number of engineering risks and quality concerns ranging from data privacy and security, to personal safety and virtual harassment.
  • Applying test-driven design principles to the development of the metaverse will allow teams to identify risks early and ensure that the metaverse is testable.  
  • Achieving acceptable levels of test coverage in the metaverse may only be possible with advanced test automation capabilities powered by AI and machine learning.
  • Testers bring invaluable skills such as user empathy, creativity, curiosity, collaboration and communication to metaverse development and will likely play a key role in enabling its success.

Although the idea of the metaverse began as fiction, it is likely to become a reality soon. The metaverse will bring together a variety of modern computing technologies to realize a grand vision of an Internet experience with deeper social connections and experiences. However, the specification, design, development, validation, and overall delivery of the metaverse present grand engineering challenges. In this article I’ll describe the metaverse concept, discuss its key engineering challenges and quality concerns, and then walk through recent technological advances in AI and software testing that are helping to mitigate these challenges. To wrap up, I share some of my thoughts on the role of software testers as we move towards a future of testing in the metaverse.

The Metaverse

With all the hype and chatter around the metaverse, it’s becoming increasingly difficult to describe exactly what the metaverse is and what it looks like. To be clear, the metaverse doesn’t actually exist yet and so a good way to  describe it is as a hypothetical iteration of the Internet as a single, universal, simulated world, facilitated by a variety of modern computing technologies.  In three words, the metaverse will be immersive, interconnected, and endless.  Let’s explore these three characteristics a bit more.

Immersion

The metaverse will draw people into a plethora of experiences using virtual and real environments, or some combination of the two. It is projected that new and different levels of immersion will be achieved through the use of  virtual, augmented, and merged or mixed reality technologies, collectively referred to as extended reality (XR).  User experience designer Tijane Tall describes the key differentiators among the immersiveness of these experiences as follows:

  • Virtual Reality (VR): the perception of being physically present in a non-physical world.  VR uses a completely digital environment, which is fully immersive by enclosing the user in a synthetic experience with little to no sense of the real world.  
  • Augmented Reality (AR): having digital information overlaid on top of the physical world. Unlike VR, AR keeps the real world central to the user experience and enhances it with virtual information.  
  • Merged or Mixed Reality (MR): intertwining virtual and real environments. MR might sound similar to AR, but it goes beyond the simple overlay of information and instead enables interactions between physical and virtual objects.

Technologies showcased at CES last year also promise to enable a new level of immersion. For example, a company called OVR Technology showed a VR headset with a container for eight aromas that can be mixed together to create various scents. The headset, which will bring smell to virtual experiences, is scheduled for release later this year.

Interconnection

Virtual worlds that are coined as “metaverses” today are mostly, if not all, separate and disjointed. For example, there are little to no integrations between the popular gaming metaverses Roblox and Fortnite. Now, what if the opposite were true? Imagine for a moment that there were deep integrations between these two experiences, so deep that one could walk their avatar from Roblox into Fortnite and vice-versa, and upon doing so their experience would transition seamlessly. In the metaverse, even if there are distinct virtual spaces, this type of seamless transition from one space, place, or world to another will exist. Things like avatar customizations and preferences will be retained if desired. This is not to say that everything in the world should look exactly the same; instead, there would be visual equivalence rather than strict equality. As a result, my sports T-shirt in the Fortnite space may look different from the one in Roblox, but the color and branding make it apparent that this is my avatar. Integrations among various technologies such as blockchain, security, cryptocurrency, non-fungible tokens (NFTs), and more will be necessary to establish a fully interconnected metaverse.

Endlessness

The possibilities for realizing different experiences in the metaverse, with any number of users, will be effectively endless. We can already see this happening on several platforms: modern video games and VR/AR experiences are proving that almost anything you can imagine can become an immersive experience. Of course, this is a slight exaggeration, since every technology has limits in practice, but the vast range of experiences likely to be available, combined with immersion and interconnectedness, is what makes the idea of the metaverse so appealing.

Metaverse Engineering Risks and Quality Concerns

As interest and investment in the metaverse grows, many are raising concerns about the potential risks in an environment where the boundaries between the physical and virtual world are blurred. Some of the key engineering risks and quality concerns surrounding the development of the metaverse are:

  • Identity and Reputation: Ensuring that an avatar in the metaverse is who they say they are, and protecting users from avatar impersonation and other activities that can harm their reputation.  
  • Ownership and Property: Granting and verifying the creation, purchase, and ownership rights to digital assets such as virtual properties, artistic works, and more.  
  • Theft and Fraud: Stealing, scamming, and other types of crimes for financial gain as payment systems, banking and other forms of commerce migrate to the metaverse.  
  • Privacy and Data Abuse: Malicious actors making their presence undetectable in the metaverse and invisibly joining meetings or eavesdropping on conversations. There is also a significant risk of data abuse and a need for protections against misinformation.  
  • Harassment and Personal Safety: Protecting users from various forms of harassment while in the metaverse, especially when using XR technologies. The advent of these types of experiences means that harassment and personal safety are no longer just physical concerns, but virtual ones that must be guarded against.  
  • Legislation and Jurisdiction: Identifying any boundaries and rules of the virtual spaces that are accessible to anyone across the world, and making sure they are safe and secure for everyone.  Governance of the metaverse brings together several of the aforementioned risks.  
  • User Experience: If the metaverse is to become a space where people can connect, form meaningful relationships and be immersed in novel digital experiences, then the visual, audio, performance, accessibility and other user experience related concerns must be addressed.

Mitigating Metaverse Risks with Continuous Testing

Software testing is all about assessing, mitigating, and preventing risks before they become real and cause project delays and damage. I always encourage engineering teams to take a holistic view of software testing and treat it as an integral part of the development process. This is the idea that testing is continuous: it begins as early as product inception and persists even after the system has been deployed to production.

Test-Driven Metaverse Design

A research colleague of mine once described testing as the headlights of a software project during its early stages. The analogy he gave was of a car driving down a dangerous, winding road at night, with the only visible light on the road being that projected from the car’s headlights. The moving car is the software project, the edges of the road represent risks, and the headlights are testing-related activities. As the project moves forward, testing sheds light on the project risks and allows engineering teams to make informed decisions through risk identification, quantification, estimation, and ultimately mitigation. Similarly, as we start to design and develop the metaverse, teams can leverage test-driven design techniques for risk mitigation. These may include:

  • Acceptance Test-Driven Design (ATDD): using customer, development, and testing perspectives to collaborate and write acceptance tests prior to building the associated functionality. Such tests act as a form of requirements to describe how the system will work (a minimal sketch follows this list).
  • Design for Testability (DFT): developing the system with a number of testing and debugging features to facilitate the execution of tests pre- and post-deployment.  In other words, testing is treated as a design concern to make the resulting system more observable and controllable.
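
To make the ATDD idea concrete, here is a minimal sketch of an acceptance test a team might write before any avatar-transfer functionality exists, capturing the seamless-transition requirement discussed earlier. The avatar_service module, Avatar class, and transfer_avatar function are hypothetical names used purely for illustration, not a real metaverse API.

```python
# test_avatar_portability.py
# An acceptance test written *before* the feature exists, in the ATDD spirit.
# avatar_service, Avatar, and transfer_avatar are hypothetical placeholders.

from avatar_service import Avatar, transfer_avatar  # hypothetical module


def test_avatar_keeps_customizations_across_worlds():
    # Given an avatar customized in one virtual world...
    avatar = Avatar(world="world_a", outfit="sports_tshirt", color="blue")

    # ...when it is transferred into another world...
    transferred = transfer_avatar(avatar, target_world="world_b")

    # ...then its identity-defining customizations are preserved,
    # even if the target world renders them in a different art style.
    assert transferred.world == "world_b"
    assert transferred.color == "blue"
    assert "sports" in transferred.outfit
```

Written this way, the test doubles as an executable requirement: it fails until the behavior it describes is implemented.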

Metaverse Testing

Achieving acceptable levels of coverage when testing the metaverse will likely require a high degree of automation. Compared with traditional desktop, web, or mobile applications, the state space of a 3D, open-world, extended-reality, online experience is exponentially larger. In the metaverse, at any moment you will be able to navigate your avatar to a given experience, equip various items and customizations, and interact with other human or computer-controlled characters. The content itself will be constantly evolving, making it a continuously moving target from an engineering perspective. Without sufficient test automation capabilities, creating, executing, and maintaining tests for the metaverse would be extremely expensive, tedious, and repetitive.

AI for Testing the Metaverse

The good news is that advances in AI and machine learning (ML) have been helping us to create highly adaptive, resilient, scalable automated testing solutions. In my previous role as chief scientist at test.ai, I had the pleasure of leading multiple projects that applied AI and ML to automated testing. Here are some details on the most relevant projects and promising directions that leverage AI for automated testing of metaverse-like experiences.

AI for Testing Digital Avatars

Advances in computer vision unlock a realm of possibilities for test automation. Bots can be trained to recognize and validate visual elements just like humans do. As a proof-of-concept, we applied visual validation to the digital personas developed by SoulMachines. This involved training object detection classifiers to recognize scenarios like when the digital person was speaking, waiting for a response, smiling, serious, or confused. Leveraging AI, we developed automated tests to validate conversation-based interactions with the digital avatars. This included two forms of input actions: one using the on-screen textual chat window, and the other tapping into the video stream to “trick” the bots into thinking that pre-recorded videos were live interactions with humans. A test engineer could therefore pre-record video questions or responses for the digital person, and the automation could check that the avatar had an appropriate response or reaction. Large, transformer-based natural language processing (NLP) models like OpenAI’s GPT-3, and its newest variant ChatGPT, can also be leveraged to generate conversational test input data or to validate expected responses.
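
As a rough illustration of that flow, the sketch below drives a conversation with a digital avatar and checks its reaction. DigitalAvatarSession, play_prerecorded_video, capture_frames, and classify_avatar_state are hypothetical placeholders standing in for the video-injection hooks and trained vision classifiers described above, not a real library.

```python
# Hypothetical sketch of a conversation-based avatar test.
# Everything imported from avatar_vision is a placeholder name for the
# capabilities described in this article.

from avatar_vision import DigitalAvatarSession, classify_avatar_state

EXPECTED_STATES = {"speaking", "smiling"}


def test_avatar_reacts_to_greeting():
    session = DigitalAvatarSession.launch()

    # Inject a pre-recorded greeting so the avatar "believes" it is
    # interacting with a live person on the video stream.
    session.play_prerecorded_video("videos/hello_how_are_you.mp4")

    # Capture a few seconds of the avatar's reaction and classify each frame
    # with the trained state classifiers (speaking, waiting, smiling, ...).
    frames = session.capture_frames(seconds=3)
    observed_states = {classify_avatar_state(frame) for frame in frames}

    # The avatar should answer and show a friendly expression.
    assert observed_states & EXPECTED_STATES, (
        f"expected one of {EXPECTED_STATES}, observed {observed_states}"
    )
```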

Even in the early stages of their development, the trained bots were able to learn to recognize interactions that were relevant to the application context and ignore others. Let’s look at a concrete example. The figure below shows the results of running a test where the goal was to validate that the bot was able to respond appropriately to the gesture of smiling. We all know that smiles are contagious and it’s very hard to resist smiling back at someone who smiles at you, so we wanted to test this visual aspect of the bot interactions. The automation therefore launched the digital person, tapped into the live video stream, and showed the digital avatar a video of one of our engineers who, after a few moments, started to smile. The automation then checked the avatar’s response to the smile, and here was the result.

As shown in the figure, if you compare the bot’s current observation of the avatar with the prior observation, you will notice two differences. First, the avatar’s eyes are closed at the moment of capture, as indicated by the blue boxes; second, it is smiling broadly enough that its teeth are now visible (red boxes). However, the difference mask generated by our platform reports only one difference: the smile. Can you guess why? Perhaps a bug in our testing platform? No, quite the contrary. The bots have learned that blinking is part of the regular animation cycle of the digital avatar. They are not trained on a single image, but on videos of the avatars, which include regular movements. With those animations recognized as part of the ground truth, the bot distinguishes that the big smile is a deviation from the norm, and so produces an image difference mask highlighting that change and that change only. Just like a human would, the AI notices that the avatar smiled back in response to someone smiling at it, and knows that the eyes blinking at the moment of screen capture is just coincidental.
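
The behavior described above can be approximated with a simple masked image diff: pixels that vary across normal animation footage (such as the eyes during blinks) are excluded before a new frame is compared against the baseline. This is only a simplified sketch of the idea using NumPy and OpenCV on RGB frames, not the actual platform implementation.

```python
import cv2
import numpy as np


def build_animation_mask(baseline_frames, threshold=25):
    """Mark pixels that already vary during the avatar's normal animation
    cycle (e.g. blinking), learned from a list of baseline video frames."""
    stack = np.stack(baseline_frames).astype(np.int16)
    variation = stack.max(axis=0) - stack.min(axis=0)   # per-pixel range
    return variation.max(axis=-1) > threshold           # True = expected motion


def unexpected_diff_mask(baseline_frame, new_frame, animation_mask, threshold=25):
    """Return a mask of differences NOT explained by normal animation."""
    diff = cv2.absdiff(baseline_frame, new_frame).max(axis=-1)
    return (diff > threshold) & ~animation_mask
```

A frame in which the avatar smiles would then light up only the mouth region of the resulting mask; blink-induced differences around the eyes fall inside the animation mask and are suppressed, mirroring the result in the figure.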

AI for Testing Gameplay

When it comes to playing games, AI has come a long way. Decades ago, bots used brute-force computation to play trivial games like tic-tac-toe. Today, they combine self-play with reinforcement learning to reach expert levels in more complex, intuitive games like Go, Atari games, Mario Brothers, and more. This raises the question: if bots can do that with AI, why not extend them with the aforementioned visual testing capabilities? When placed in this context, the test automation problem for gameplay doesn’t really seem as hard. It’s really just a matter of bringing the previously mentioned AI-based game testing technologies together into an environment that combines them with real-time, AI-based gameplay.

Let’s take a look at an example. Suppose you’re tasked with testing a first-person shooter, where players engage in weapon-based combat. The game has a cooperative mode in which you can have either friendly players or enemy players in your field of view at any given time. During gameplay, your player has an on-screen, heads-up-display (HUD) that visually indicates health points, the kill count, and whether there is an enemy currently being targeted in the crosshair of your weapon. Here’s how you can automatically test the gameplay mechanics of this title:  

  • Implement Real-Time Object Detection and Visual Diffing. Using images and videos from the game, you then train machine learning models that enable the bots to recognize enemies, friendlies, weapons, and any other objects of interest. In addition, you train them to report on the visual differences observed in-game when compared to the previously recorded baselines.
  • Model the Basic Actions the Bots Can Perform. In order for the bots to learn to play the game, you must first define the different moves or steps they can take. In this example, the bots can perform actions such as moving forward and backwards, strafing left and right, jumping or crouching, aiming, and shooting their weapon.   
  • Define Bot Rewards for Reinforcement Learning. Now that the bots can perform actions in the environment, you let them take random actions and then give them a positive or negative reward based on the outcome. In this example, you could specify three rewards (a minimal reward-function sketch follows this list): 
    • A positive reward for locking onto targets to encourage the bot to aim its weapon  at enemy players.
    • A positive reward for increasing the kill count so that the bot doesn’t just aim at enemies, but fires its weapon at them.
    • A negative reward for a decrease in health points to discourage the bot from taking damage.  
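
Here is a minimal reward-function sketch for the three rewards just described. The HudState fields and reward weights are hypothetical; in practice these values would come from the object detection models reading the on-screen HUD.

```python
from dataclasses import dataclass


@dataclass
class HudState:
    enemy_in_crosshair: bool  # read from the crosshair indicator in the HUD
    kill_count: int           # read from the kill counter in the HUD
    health_points: int        # read from the health bar in the HUD


def compute_reward(prev: HudState, curr: HudState) -> float:
    reward = 0.0
    # Positive reward for locking onto a target: encourages aiming at enemies.
    if curr.enemy_in_crosshair:
        reward += 0.1
    # Positive reward for each new kill: encourages firing, not just aiming.
    reward += 1.0 * max(0, curr.kill_count - prev.kill_count)
    # Negative reward for losing health: discourages taking damage.
    reward -= 0.5 * max(0, prev.health_points - curr.health_points)
    return reward
```

The weights shown are arbitrary; tuning them is part of getting the bot to balance aggression against self-preservation during training.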

With object detection, visual diffing, action rigging, and goal-based reinforcement learning capabilities in place, it is time to let the bot loose to train in the game environment. Initially, the bot will not be very good at attaining any of the goals such as attacking enemies and not taking damage. However, over time, after thousands of episodes of trying and failing, the bot gets better at playing the game.  During training, visuals can be used as a baseline for future comparisons or the bot can be trained to detect visual glitches.  Here’s a video of one of the trained bots in action. Live in-game action is shown at the top-left, while a visualization of what the bot sees in near real-time is shown in the top-right.  

AI for Testing Virtual Reality

By combining software-based controller emulation with commodity hardware such as a Raspberry Pi, it is possible to automate several types of hardware devices, including gaming consoles, controllers, and video streaming devices. Such integrated tools and drivers allow the bots to observe and manipulate the input-output functions of these devices. As part of our research and development efforts into gaming and metaverse testing, we built integrations with VR headsets. Once we could control inputs and observe outputs in VR, it was just a matter of tying that API into a subsystem we refer to as the Gaming Cortex, which is essentially the ML brain that combines the real-time object detection and goal-based reinforcement learning mentioned previously.

The final result is that engineers or external programs can make calls to the VR API controller and leverage it to define and execute tests in that environment. Here’s a look at it in action as we execute a script that programmatically modifies the yaw, causing the headset to rotate within the VR space.
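
A script along those lines might look roughly like the sketch below. VRController and its methods (connect, set_yaw, capture_frame, frames_similar) are hypothetical stand-ins for the VR API controller described above, not a real SDK.

```python
# Hypothetical sketch of a yaw-rotation test driven through a VR API controller.
import time

from vr_test_harness import VRController  # hypothetical module


def test_full_yaw_rotation_returns_to_start():
    headset = VRController.connect("headset-01")
    start_view = headset.capture_frame()

    # Rotate the headset through a full turn in 30-degree increments.
    for yaw_degrees in range(0, 361, 30):
        headset.set_yaw(yaw_degrees)
        time.sleep(0.2)  # give the scene time to render at each step

    # After a 360-degree rotation, the rendered view should match the start.
    end_view = headset.capture_frame()
    assert headset.frames_similar(start_view, end_view)
```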

A Future of Testing in the Metaverse

I firmly believe that in addition to the technical and engineering challenges that come along with creating something as complex as the metaverse, its development will bring with it several opportunities for testers to play a vital role in the future of the Internet. As software experiences become more “human”, skills like user empathy, critical thinking, risk analysis, and creativity become even more necessary and will be emphasized. Being such a grand vision, the metaverse also requires a significant level of “big picture” thinking, which is another skill that many testers bring to the table. These are the kinds of skills I associate with not just good, but great testers.

In cases where AI/ML are a core part of the metaverse development stack, testing skills like data selection, partitioning, and test data generation will move testers to the front of the development process. With that we can also expect to see more focus on testing as a development practice, leveraging approaches like acceptance test-driven design and design for testability to ensure that the metaverse is not only correct, complete, user-friendly, safe and secure, but that it is testable and automatable at scale.

A huge thanks to Yashas Mavinakere, Jonathan Beltran, Dionny Santiago, Justin Phillips, and Jason Stredwick for their contributions to the work described in this article. 
