Tencent, Qualcomm, and Vivo recently announced a new partnership, with a view to refining game experiences by harnessing the potential of on-device mobile AI.
We’ve all heard so much about AI in recent months that it’s easy to assume any reference to it is simply over-enthusiastic marketing speak. But what this collection of tech giants has underway is pretty fascinating. It’s also easy to get lost in, as the technical side is a little complicated. So let’s take a look at the new initiative and what it might mean for game ad tech and martech.
Qualcomm is a long-established semiconductor outfit whose chips and other technologies power numerous devices across the world. Vivo, meanwhile, is a rapidly expanding Chinese smartphone producer, currently endeavouring to woo gamers with its iQOO devices. Tencent, of course, is the multinational tech giant behind – or involved in – many of the biggest games and game companies there are.
Together, the three companies are looking to explore ways to harness the power of Qualcomm’s AI Engine for gaming, in a collaboration titled ‘Project Imagination’. The trio hope to offer more immersive, high-performing gaming experiences by harnessing the potential of on-device AI, neural network learning, and inference. Stay with us.
On-device mobile AI is significant because, to date, most artificial intelligence on smartphones has run via the cloud. Put another way, the artificially intelligent computers or technologies that can monitor and inform mobile games have done so remotely.
But what of the talk of ‘neural network training’ and ‘inference’? ‘AI’ itself refers to a broad sweep of technological abilities, generally where computers undertake some kind of activity or decision-making process autonomously. ‘Machine learning’, meanwhile, refers to the specific ability of computers to learn. That learning process often takes place through training the neural network of an AI-powered machine. Let’s say a hypothetical computer with a neural network is deployed to learn to identify pictures of horses from a selection of images. That computer may be trained by viewing hundreds of thousands of images, eventually grasping a sense of what a horse looks like – and what it doesn’t look like. Its ‘neural network’ is essentially the part of its ‘brain’ that can learn through experience. Neural networks – to a degree, at least – function like our own brains, using interconnected neurons. That’s a slight oversimplification, but the parallel holds. As such, ‘neural network training’ is the self-education of a computer’s ‘brain’.
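To make the idea concrete, here is a heavily simplified sketch of that training loop in Python. It shrinks the hypothetical horse classifier down to a single artificial neuron and two invented numeric ‘features’ per image – everything here (the features, the data, the learning rate) is illustrative, not anything the partnership has described.

```python
import math
import random

random.seed(0)

# Stand-in for the horse-classifier example: each "image" is reduced to two
# made-up features, and the label says whether it depicts a horse.
def make_example():
    is_horse = random.random() < 0.5
    # horses cluster around (1.0, 1.0); non-horses around (0.0, 0.0)
    x = [random.gauss(1.0 if is_horse else 0.0, 0.3) for _ in range(2)]
    return x, 1.0 if is_horse else 0.0

# A single artificial neuron: a weighted sum squashed through a sigmoid.
def predict(weights, bias, x):
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# 'Training' nudges the weights after every example so the prediction moves
# toward the correct label -- gradient descent in miniature.
weights, bias, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(5000):
    x, y = make_example()
    err = predict(weights, bias, x) - y
    weights = [w - lr * err * xi for w, xi in zip(weights, x)]
    bias -= lr * err

# After training, the neuron labels fresh examples it has never seen before.
correct = sum(
    (predict(weights, bias, x) > 0.5) == (y == 1.0)
    for x, y in (make_example() for _ in range(1000))
)
print(f"accuracy on new examples: {correct / 1000:.0%}")
```

Real on-device models have millions of neurons rather than one, but the principle is the same: repeated exposure to labelled examples gradually adjusts the network’s internal weights.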
‘Inference’, meanwhile, refers to the process of applying knowledge gained through neural-network training. Let’s consider our hypothetical horse identification computer. Having learned what a horse looks like, to identify one it does not need to learn everything from scratch every time. It can ‘infer’ what a horse looks like from what it has previously learned. It should be able to do so quickly and efficiently, without having to analyse huge datasets. Again, a comparison with the human process holds up loosely. If we know what a horse looks like, we can easily identify one each time we see one. We don’t need to learn what a horse is every time we see one.
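The distinction shows up clearly in code. Continuing the illustrative sketch above, inference needs only the weights that training produced – no dataset, no learning loop. The specific weight values below are invented for the example:

```python
import math

# Hypothetical weights a trained horse classifier might have ended up with;
# inference needs only these stored numbers, not the training images.
WEIGHTS, BIAS = [4.0, 4.0], -4.0

def infer(features):
    """One cheap forward pass: no dataset, no learning, just arithmetic."""
    z = sum(w * f for w, f in zip(WEIGHTS, features)) + BIAS
    return 1.0 / (1.0 + math.exp(-z))

# Each new image is classified from the stored weights alone.
for features in ([1.1, 0.9], [0.1, -0.2]):
    label = "horse" if infer(features) > 0.5 else "not a horse"
    print(features, "->", label)
```

That cheapness is precisely why inference is the part best suited to running on a smartphone: a forward pass is a small, fast computation, even when the heavyweight training happened elsewhere.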
Of course, the potential here goes far beyond equine identification. Neural networks can learn and infer a great deal about how users behave, how games are played, how money is spent, how ads are viewed, and much more besides.
As noted, mobile devices and games have previously been granted the ability to learn and infer through remote, cloud-based AI. AI has long been helping mobile game developers, publishers, and distributors; but the industry has only scratched the surface of the opportunity. Qualcomm, Tencent, and Vivo hope to ramp up that potential by housing the intelligence – and the ability to learn – on individual smartphones.
“Qualcomm Technologies has been committed to driving forward the application and implementation of on-device AI, and seeking to create rich use cases and innovative user experiences, such as gaming in collaboration with our AI ecosystem partners using the advanced Qualcomm AI Engine on Snapdragon Mobile Platforms”, confirmed Frank Meng, chairman of Qualcomm China, in a statement to the press. “Our technologies have already driven a broad range of AI use cases. Through this work between four parties [two coming from Tencent], we are looking forward to exploring more innovative AI features and applications on devices.”
“We are excited to work with Qualcomm Technologies, Honor of Kings, and Tencent AI Lab using the fourth-generation Qualcomm AI Engine, and create powerful and smart AI experiences starting with the ‘Project Imagination’”, added Fred Wong, general manager of creative innovation at Vivo. “We have introduced our new Multi-Turbo together with the launch of iQOO in March this year, a technology that can fully tap the potential of hardware and, thus, offer both professional players and gaming enthusiasts with more stable, faster mobile gaming experiences. Meanwhile, we are also taking concerted efforts for building an esports AI team in mobile named ‘SUPEX’, and hope to continuously enhance and optimise the strength of the AI team through the environment in MOBA games, and ultimately deliver better competition experiences to mobile esports.”
The impact on ad tech?
The potential to bolster mobile game development and live service maintenance is relatively obvious. If each individual smartphone has the ability to learn and infer, developers and publishers can readily analyse the ways games are played, the ways money is spent, or the success of a particular update or event. Game outfits could then tweak their products and services accordingly – potentially even distinctly for individual users, thanks to the on-device abilities. And it sounds like Vivo is even building an esports team made up of AI agents (‘agents’ being individual AI programmes or entities). That may see phones without human users enter esports competitions as teams, providing data, game testing, and even sparring partners for human players.
But what about the worlds of game ad tech and martech? We could certainly see vast improvements in targeted ads and bespoke content. Perhaps uniquely crafted ads could be delivered to players on a case-by-case basis, arriving at the time and frequency that suits a single user. The format of ads could be automatically tailored to suit the preferences of individuals, using inferred analysis of their previous responses and interactions with ads. At some point, it may even be possible to generate playables on the fly, each tailored to the preferences of a single user. Broadly, we are talking here about refined, optimised ‘hyper-personalisation’.
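To sketch how automatic format-tailoring might work in practice, here is one speculative approach: an on-device picker that keeps a running engagement average per ad format and mostly serves the best-performing one, occasionally exploring the others (a simple epsilon-greedy bandit). The format names, engagement rates, and the `AdFormatPicker` class are all invented for illustration – nothing here reflects how Project Imagination actually works.

```python
import random

random.seed(1)

# Invented ad formats for the sketch.
FORMATS = ["playable", "rewarded_video", "interstitial"]

class AdFormatPicker:
    """Epsilon-greedy choice of ad format from one user's engagement history."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.shown = {f: 0 for f in FORMATS}
        self.engaged = {f: 0 for f in FORMATS}

    def choose(self):
        # Mostly exploit the best-known format; occasionally explore others.
        if random.random() < self.epsilon or not any(self.shown.values()):
            return random.choice(FORMATS)
        return max(FORMATS, key=lambda f: self.engaged[f] / max(self.shown[f], 1))

    def record(self, fmt, engaged):
        self.shown[fmt] += 1
        self.engaged[fmt] += int(engaged)

# Simulate one user who, unbeknownst to the picker, strongly prefers playables.
true_rates = {"playable": 0.6, "rewarded_video": 0.2, "interstitial": 0.05}
picker = AdFormatPicker()
for _ in range(2000):
    fmt = picker.choose()
    picker.record(fmt, random.random() < true_rates[fmt])

best = max(FORMATS, key=lambda f: picker.engaged[f] / max(picker.shown[f], 1))
print("learned preference:", best)
```

Because everything above runs from one user’s own interaction history, the whole loop could in principle live on the handset – which is exactly the appeal of on-device inference for this kind of personalisation.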
We could also see unique, personalised monetisation offerings or deals, bespoke IAP items, and events that respond to player behaviour. And then there’s the notion of AI-powered applications that can hold convincing, natural conversations with players over text or even via spoken audio. With regard to contextual in-game advertising, we could see non-player characters share marketing messaging through live conversation. Placed ads could converse with users, bolstering engagement. Equally, customer service could be delivered more convincingly through in-game conversation. Elsewhere, customer insights could be far more precisely understood and rapidly delivered, with algorithms drawing on myriad datapoints gathered across hundreds of thousands of intelligent devices.
All of those ideas are speculative, of course. But the potential of high-powered on-device machine learning and inference for ad tech and martech is tremendous.
We have confidence in one thing: in the future, game ad tech will be doing a lot more than correctly identifying pictures of horses.