In this exclusive interview, TheGamingEconomy speaks to Yuheng Chen, COO of rct studio, to discuss the development of their AI-driven solutions and first-party titles, following a successful Series A funding round of USD$10m (£8.1m) led by Makers Fund.
How will the recent funding round be used to support development of Morpheus Engine, and its constituent Motion Prediction Generation System & Chaos Box? What role are the investors taking on in supporting the company?
We’ve already been busy putting the USD$10m (£8.1m) to work in developing many of the features you see with the Motion Prediction Generation System and Chaos Box today.
These new modules work alongside our Morpheus Engine, an AI-powered creative system that opens up almost infinite narrative alternatives and story structures by deciphering the meaning of words and transforming them into 3D-rendered animations. Together, they serve as the infrastructure behind our mission to develop dynamic entertainment experiences for interactive films and virtual reality (VR) gaming environments.
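To make the "words into animations" idea concrete, here is a toy sketch of the first step such a pipeline might take: extracting a structured action from a sentence that an animation system could then render. This is an illustrative assumption on my part, not rct studio's actual pipeline; the verb table and field names are invented.

```python
import re

# Hypothetical verb-to-animation-clip mapping; real systems would use a
# learned language model rather than a lookup table.
VERBS = {"walks": "walk_cycle", "jumps": "jump", "waves": "wave_arm"}

def sentence_to_action(sentence):
    """Very rough subject-verb-object extraction for a controlled grammar."""
    words = re.findall(r"[a-z]+", sentence.lower())
    for i, w in enumerate(words):
        if w in VERBS:
            actor = " ".join(words[:i]) or "unknown"
            target = " ".join(words[i + 1:]) or None
            return {"actor": actor, "clip": VERBS[w], "target": target}
    return None  # no recognised action in the sentence

print(sentence_to_action("The hero walks to the door"))
# {'actor': 'the hero', 'clip': 'walk_cycle', 'target': 'to the door'}
```

The structured output is the kind of intermediate representation a rendering layer could consume; the hard part, which this sketch elides, is handling open-ended language.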
Alongside the role this funding has played in the development of these technologies, we’ve taken a big step towards our goal of releasing our first dynamic entertainment title by the end of the year. Makers Fund, which led our Series A round, is focused on the future of interactive entertainment. They bring a lot of expertise in the gaming industry and have broad connections with all the major players in the space that will be helpful as we scale the company.
How does the Motion Prediction Generation System save time within developer workflows? To which use cases can it currently best be applied?
It takes an enormous amount of time and effort for developers to come up with compelling stories and create equally compelling animations for them. For example, the development of animations for Activision Blizzard's Call of Duty: Infinite Warfare took three years and an animation team of 15. But with AI technology like ours, production costs can be slashed and the workflows needed to develop immersive entertainment experiences can be significantly shortened.
Specifically, the Motion Prediction Generation System takes what today requires years of animation work from large and experienced teams and instead generates lifelike animation in real time, creating a new framework for designing the virtual worlds of the future. The system gives developers the ability to iterate and control character animation in real time from the cloud, which is a huge resource-saver. Typically, developers rely on the time-intensive process of producing character animations in advance, as sequential frames that don't adapt to real-time environments.
The system does this with deep learning, trained on large amounts of data collected from real human motion patterns. A neural network then predicts a character's natural motion in the next frame from the previous frame's data, selecting the most relevant motion under the parameters that control the animation's dynamics while also accounting for environmental factors.
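The next-frame prediction described above can be sketched as a small network that maps the previous pose plus environment features to a pose update. This is a minimal illustration under my own assumptions (dimensions, architecture, and the untrained random weights are all invented), not rct studio's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

POSE_DIM = 12   # e.g. joint angles of a simplified skeleton (assumption)
ENV_DIM = 4     # e.g. terrain slope, obstacle distance, target speed
HIDDEN = 32

# Randomly initialised weights stand in for a trained network.
W1 = rng.normal(0, 0.1, (POSE_DIM + ENV_DIM, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, POSE_DIM))

def predict_next_pose(prev_pose, env):
    """One forward pass: previous frame + environment -> next frame."""
    x = np.concatenate([prev_pose, env])
    h = np.tanh(x @ W1)        # non-linear hidden layer
    return prev_pose + h @ W2  # residual update keeps motion smooth

pose = rng.normal(size=POSE_DIM)        # starting pose
env = np.array([0.1, 2.0, 1.5, 0.0])    # environment features

# Roll the model forward frame by frame, feeding each prediction back in.
for _ in range(3):
    pose = predict_next_pose(pose, env)

print(pose.shape)  # (12,)
```

The residual formulation (predicting a delta from the previous pose) is a common choice in motion-synthesis work because it biases the output toward temporal continuity.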
What are the key similarities and differences between the Chaos Box system and other artificial intelligence programmes, such as DeepMind's AlphaZero? How are these aspects beneficial for video game developers?
Chaos Box is the power behind each individual character action in the storyline that rct studio’s Morpheus Engine develops. It’s a module that’s able to combine creative elements with an understanding of human behaviour to bring characters to life and create endless possibilities for how scenarios and entire storylines play out. Unlike other solutions, we’re talking about powering actions for every non-player character.
By simulating these character actions, Chaos Box can figure out the most compelling resolution for a story, and there's no need to script every alternative for each scene in advance. Once the main storylines, character descriptions, motivations, and parameters are mapped, the Chaos Box algorithm completes the rest. This is a massive resource-saver.
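The idea of mapping motivations and parameters and letting a simulation play out the rest can be sketched as a simple agent loop, where each character's next action is derived from its state rather than from a script. This is a toy illustration under my own assumptions (the character fields, rules, and names are invented), not the Chaos Box algorithm itself.

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    name: str
    motivation: str                        # high-level goal the author maps out
    mood: int = 0                          # internal state the simulation updates
    log: list = field(default_factory=list)

def choose_action(actor, other):
    """Derive the next action from the actor's state, not a fixed script."""
    if actor.motivation == "revenge" and actor.mood < 0:
        return "confront"
    if actor.motivation == "peace":
        return "negotiate"
    return "observe"

def simulate(characters, steps=3):
    for _ in range(steps):
        a, b = characters
        action = choose_action(a, b)
        a.log.append(action)
        # Actions feed back into state, so storylines diverge over time.
        b.mood += -1 if action == "confront" else 1
        characters.reverse()  # alternate turns
    return characters

cast = [Character("Ava", "revenge", mood=-2), Character("Ben", "peace")]
simulate(cast)
```

Because each action changes the other character's state, different starting motivations produce different event sequences without any branch being authored in advance, which is the resource saving the paragraph above describes.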
We believe all gaming companies will soon be able to leverage this technology to easily build interactive, open-world environments where AI-powered characters co-adapt to produce elaborate storylines. As a result, consumers will no longer be confined to fixed dialogues and rigid interactions between non-player characters.
In the future, we think that by learning from the millions of data points behind Morpheus-powered experiences, we can unlock the secrets of human behaviour patterns in different scenarios. And being able to do this has benefits across industries outside of gaming and entertainment as well — perhaps even fields such as health and science.
What are plans for the rollout of Morpheus Engine and for the rct studio company through the remainder of 2020 and beyond?
Our goal is to put out our first title on virtual reality gaming platforms like Oculus by the end of this year. We will likely raise another round of funding in the coming months to meet this goal, as this will be the next capital-intensive step we take.
We anticipate that around 80% of our business will be focused on developing titles directly for consumers in the future. However, we'll also be opening up the use of our Morpheus Engine technology to both gaming and movie studios as we assist them with animation production and virtual world development through the power of AI. There is a great opportunity to integrate our technology into their current workflows so they can save valuable time and resources and reduce costs.
We’re already working with AAA video game developers on an upcoming title and have interest from major film studios on future collaborations on interactive entertainment experiences. I hope to be able to update you on progress with both in the near future!