AI, Performance Capture, Games and Deepfakes

The Uncanny Valley Got Uncannier

Photo by Quino Al on Unsplash

Performance capture has long promised to deliver real-time virtual character performances of convincing, even unmatched, quality. For many years, though, the technology was available only to filmmakers with large budgets and time to spare for non-live SFX.

CG (computer graphics) is now commonplace in movies and television, and artificial intelligence is poised to become the most important new technology to emerge in Hollywood in some time. The overlap of the two fields promises an unprecedented level of creativity in areas such as “performance capture” and synthetic performance, but it also enables dark applications such as deepfakes. More on that later…

On the positive side, applying AI to live film/video performance could enable creative tools such as real-time editing, letting directors modify a script on the fly, as well as performances by fantastic creatures and real-world actors alike, both current and past.

The impact on creativity, as well as on key production cost drivers such as scheduling and budgeting (two of the largest items among film/TV cost factors), could be broad and profound. Controlling how and when principal photography takes place, how many retakes are needed, and how much can be done in postproduction will give directors and VFX producers an unprecedented level of creative control.

Creating believable, 100% digital human characters on screen is the ultimate test of AI applied to VFX: the line between the “uncanny valley,” where characters are superficially good replicas of humans but lack real human qualities and come across as “creepy,” soulless simulacra, and believable synthetic performance is a tricky and subtle one. The human eye is incredibly adept at picking up the infinitesimal details that separate a real human from a synthetic one, even when the differences are not evident at a conscious level.

At the moment, AI works best where there is a need to fill in the details of a human or CG performance, or where a real actor must be “de-aged” or made to look older (think Guardians of the Galaxy Vol. 2, where Kurt Russell was de-aged to fit into the skin of his character, Ego).

AI is also finding its way into video games. This is not new at all – games have been at the forefront of AI research for many years (think of Alan Turing’s early chess-playing program, or the bosses in FromSoftware’s action RPG Dark Souls 3) – but the new frontier of AI in video games is the procedural generation of virtual worlds and characters from high-level rules, rather than low-level scripts and pre-created assets. The behavior of AI-powered non-playable characters could also become completely non-scripted, giving gameplay a range and variety not seen before.
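
As a concrete illustration of what “high-level rules” can mean here, the sketch below is a minimal, hypothetical Python example (the rule table, thresholds, and function names are assumptions for illustration, not taken from any particular engine): a short list of elevation rules plus a seeded, smoothed random field is enough to derive a different tile map for every seed, with no hand-placed assets.

```python
# Minimal rule-driven procedural map generation (hypothetical toy example).
# A designer supplies high-level terrain rules; the world is derived from
# a seeded, smoothed random field. Standard library only.
import random

RULES = [  # (minimum elevation, terrain), evaluated top-down
    (0.8, "mountain"),
    (0.55, "forest"),
    (0.3, "grassland"),
    (0.0, "water"),
]


def generate_map(width, height, seed=42):
    rng = random.Random(seed)
    # Crude random field standing in for proper noise (e.g. Perlin/simplex).
    raw = [[rng.random() for _ in range(width)] for _ in range(height)]
    world = []
    for y in range(height):
        row = []
        for x in range(width):
            # Average each cell with its neighbours for spatial coherence.
            neighbours = [raw[j][i]
                          for j in range(max(0, y - 1), min(height, y + 2))
                          for i in range(max(0, x - 1), min(width, x + 2))]
            elevation = sum(neighbours) / len(neighbours)
            # Apply the first high-level rule the elevation satisfies.
            terrain = next(t for lo, t in RULES if elevation >= lo)
            row.append(terrain)
        world.append(row)
    return world


if __name__ == "__main__":
    symbols = {"mountain": "^", "forest": "T", "grassland": ".", "water": "~"}
    for row in generate_map(40, 12):
        print("".join(symbols[t] for t in row))
```

Changing the seed, or editing a single line in the rule table, yields an entirely new map, which is the appeal of driving content from rules rather than pre-created assets.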

AI also holds great interest for game makers for its potential in areas such as content creation, intelligent non-playable character behaviors, predicting player behavior and adapting games accordingly, map and level creation, and even labor replacement in areas such as testing, art creation, and design.

Weta Digital’s Alita: Battle Angel

With such potential, though, also comes a dark side: the ability to create videos that look real to a majority of viewers is clearly very dangerous and disruptive. Enter deepfakes, a technique that uses artificial intelligence to synthesize human imagery. It superimposes existing images and videos onto other images or videos using a machine learning technique called a “generative adversarial network,” or GAN. The technique allows one to swap the face of person A into a video of person B, as shown by a recent, widely circulated video of President Obama.
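
For readers curious about the mechanics, the following is a minimal, hypothetical PyTorch sketch of the adversarial idea behind GANs (network sizes, image resolution, and function names are assumptions for illustration, not the code of any actual deepfake tool): a generator learns to produce images from random noise while a discriminator learns to tell generated images from real ones, and each network improves by competing with the other.

```python
# Minimal GAN training sketch in PyTorch (toy example; all sizes and names
# are illustrative assumptions, not code from any real deepfake system).
import torch
import torch.nn as nn

LATENT_DIM = 100  # size of the random noise vector fed to the generator


class Generator(nn.Module):
    """Maps random noise to a 3x64x64 image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, 3 * 64 * 64), nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)


class Discriminator(nn.Module):
    """Estimates the probability that an image is real rather than generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)


def train_step(gen, disc, real_images, g_opt, d_opt, loss_fn):
    """One adversarial round: the discriminator learns to spot fakes,
    then the generator learns to fool the discriminator."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Update the discriminator on real and freshly generated images.
    fakes = gen(torch.randn(batch, LATENT_DIM)).detach()
    d_loss = loss_fn(disc(real_images), real_labels) + loss_fn(disc(fakes), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Update the generator so its output is classified as "real".
    g_loss = loss_fn(disc(gen(torch.randn(batch, LATENT_DIM))), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()


if __name__ == "__main__":
    gen, disc = Generator(), Discriminator()
    g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)
    # Stand-in batch of "real" images; an actual pipeline would load a face dataset.
    real_batch = torch.rand(8, 3, 64, 64) * 2 - 1
    print(train_step(gen, disc, real_batch, g_opt, d_opt, nn.BCELoss()))
```

Real face-swapping systems build on this adversarial idea with far larger convolutional architectures plus face detection, alignment, and blending steps, but the core generator-versus-discriminator loop is the same.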

 

In light of the recent social media exploits by Russia’s IRA (Internet Research Agency) and its efforts to disrupt and sow discord in American and European political discourse, it’s easy to see why we should all be concerned about what we see online and what we believe to be “true.” It’s also easy to see why the Pentagon is researching techniques to mark authentic videos and to detect and expose deepfakes. The potential for good is great, but the potential risks of abuse are also overwhelming, and many abuses could go unnoticed, creating outcomes that greatly surpass the effects of foreign electoral influence in the 2016 and 2018 elections.

It is a foregone conclusion that the next election cycle will see an influx of deepfakes, so it is imperative for everyone with an interest in democracy and self-determination to become educated about this technology and its applications, and above all to exercise healthy skepticism and the civic responsibility to speak up and flag abuses and fakes wherever they might appear.

To learn more about performance capture, synthetic performance, and deepfakes, the articles and videos below provide excellent jumping-off points.

 

Performance capture and synthesis

 

Video games and AI

Deepfakes, fake news

Giancarlo Mori