In the cut scenes with Midgar's inhabitants,
we have used a technology that automatically generates lip syncing
from the characters' spoken dialogue and also adjusts their facial animations.
These audio-based automatic adjustments add an extra layer of realism
to scenes that would otherwise look less lifelike.
The system automatically adapts a character's lip syncing to match whichever language is being spoken.
It also analyses the emotion in the spoken dialogue and reflects it in the characters' facial expressions.
And when a character utters a sound, their eyebrows move, accompanied by matching body movements.
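To give a rough idea of the principle at work here, the following is a minimal Python sketch of audio-driven lip sync, mapping recognised phonemes to visemes (mouth shapes); the phoneme table, timings and function names are hypothetical illustrations, not the actual system described above.

    # Minimal sketch: turn timed phonemes (as a speech aligner might
    # produce them from the voice track) into viseme keyframes.
    # The table below is a hypothetical fragment; real tables cover
    # roughly forty phonemes per language, which is why lip sync can
    # adapt automatically to whichever language is spoken.
    PHONEME_TO_VISEME = {
        "AA": "open",       # as in "father"
        "IY": "wide",       # as in "see"
        "UW": "round",      # as in "blue"
        "M":  "closed",     # as in "map"
        "F":  "lip_teeth",  # as in "fan"
    }

    def lip_sync_keyframes(timed_phonemes):
        """Turn (start_seconds, phoneme) pairs into (time, viseme) keyframes."""
        return [(start, PHONEME_TO_VISEME.get(p, "neutral"))
                for start, p in timed_phonemes]

    # Example: three phonemes recognised in the audio file.
    print(lip_sync_keyframes([(0.00, "M"), (0.12, "AA"), (0.30, "UW")]))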
The word AI may have a broad range of definitions, but in these cut scenes,
we have used it to track information such as who is involved in a conversation
and the environment where it takes place.
These AI factors automatically drive the camera work: where to place the camera, who to aim it at,
who is speaking at any given moment, who they are paying attention to while speaking, and so on.
We generate a variety of movements according to all of this data.
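As an illustration of how such rule-based shot selection can work, here is a small Python sketch; the Shot structure and the specific rules are assumptions made for the example, not the game's actual camera system.

    # Sketch: pick a camera setup from conversation metadata --
    # who is speaking, who is listening, and the environment.
    from dataclasses import dataclass

    @dataclass
    class Shot:
        placement: str  # where to place the camera
        target: str     # who to aim the camera at

    def choose_shot(speaker, listeners, environment):
        """Select a shot from speaker, listeners and setting (illustrative rules)."""
        if environment == "cramped":
            # Tight interiors favour a close-up on the speaker.
            return Shot(placement=f"close_up_{speaker}", target=speaker)
        if len(listeners) > 1:
            # Group conversations favour a wide shot holding everyone in frame.
            return Shot(placement="wide_group", target=speaker)
        # Two-person dialogue: over the listener's shoulder, aimed at the speaker.
        return Shot(placement=f"over_shoulder_{listeners[0]}", target=speaker)

    print(choose_shot("Cloud", ["Tifa"], "open_street"))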
There are six basic emotions: anger, disgust, fear, happiness, sorrow and surprise.
Animations need to be created for each of these emotions, for every character.
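One simple way to organise those per-character clips, shown purely as an assumed data layout rather than the actual tooling, is an emotion enum keyed to a naming convention:

    # Sketch: the six basic emotions as an enum, with a hypothetical
    # clip-naming convention for per-character facial animations.
    from enum import Enum

    class Emotion(Enum):
        ANGER = "anger"
        DISGUST = "disgust"
        FEAR = "fear"
        HAPPINESS = "happiness"
        SORROW = "sorrow"
        SURPRISE = "surprise"

    def facial_clip(character, emotion):
        """Look up the facial-animation clip for one character and emotion."""
        return f"{character}_face_{emotion.value}"

    print(facial_clip("Barret", Emotion.ANGER))  # Barret_face_anger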
We also created a sliding-scale chart for the automatic emotional adaptation,
called the Emotion Switch Model. It has two axes:
the horizontal axis runs from happiness to distress,
the vertical one from surprise to tiredness.
And they are set up for every character.
The emotions are picked up from the actors' voices in the sound files
and reflected in the motions.
When you place a character at a point on that chart,
the corresponding emotion and animation are automatically generated and displayed.
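To make the two-axis idea concrete, here is a small Python sketch that blends emotions from a point on such a chart; the bilinear blending scheme and the corner names are assumptions for illustration, not the Emotion Switch Model's actual implementation. Weights like these could then select and mix the pre-made emotion animations automatically.

    # Sketch: bilinear blend weights for a point (x, y) on a two-axis
    # emotion chart, with x in [-1, 1] running from distress to happiness
    # and y in [-1, 1] running from tiredness to surprise.
    def emotion_weights(x, y):
        """Return per-corner blend weights that sum to 1."""
        u, v = (x + 1) / 2, (y + 1) / 2  # remap each axis to [0, 1]
        return {
            "happy_surprised":      u * v,
            "happy_tired":          u * (1 - v),
            "distressed_surprised": (1 - u) * v,
            "distressed_tired":     (1 - u) * (1 - v),
        }

    # A character placed slightly happy (x=0.3) and quite surprised (y=0.8):
    for corner, weight in emotion_weights(0.3, 0.8).items():
        print(f"{corner}: {weight:.2f}")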