FFVII REMAKE Official MODS Thread

jeangl123

Pro Adventurer
AKA
Jean
Cloud is prepared for Christmas
https://www.nexusmods.com/finalfantasy7remake/mods/107
Some other cool new mods were also released with hair and eye color options.
 

Prism

Pro Adventurer
AKA
pikpixelart
Wow...these ARE coming in fast, and way faster than I thought. Hell yes. I guess it's time to buy the game again, lol. I just got Intergrade on the PS5...

I wonder how much documentation there is for modding so far. I'll take a look, since I'd definitely be interested in doing some 3D modifications.
 

ultima espio

Pro Adventurer
Wow...these ARE coming in fast, and way faster than I thought. Hell yes. I guess it's time to buy the game again, lol. I just got Intergrade on the PS5...

I wonder how much documentation there is for modding so far. I'll take a look, since I'd definitely be interested in doing some 3D modifications.

This sums it all up:
Data current as of 12/24/21: No model imports are currently possible. This includes anything not already in the game: weapons, clothing, etc. No character swaps are currently possible unless they are the same character in different clothing that is already in the game (i.e. the dresses, hurt/dirty, gloveless). We cannot put any character over any other or it will T-pose. Some work has managed to get some characters to not look completely Frankensteined, but nothing that could be an official mod release. Sounds, materials, and in-game model swaps have been modded, but very little else so far. Data tables have also been modded. A non-pak method using 3DMigoto and another method injecting the DDS into the uasset are currently our best bets for texture mods.

Tutorials are on Discord
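
For anyone who wants to experiment with the pak route in the meantime: pak-based replacements are usually packed with Epic's own UnrealPak tool (FF7R is on UE 4.18, so a matching UnrealPak build is safest) and dropped into the game's Paks folder as a _P patch pak. Here's a rough sketch in Python of that packing step; every path and mod name in it is a placeholder, so adjust to your own setup:

```python
import subprocess
from pathlib import Path

# Placeholder paths -- point these at your own UnrealPak install and mod folder.
UNREALPAK = Path(r"C:\UE4\Engine\Binaries\Win64\UnrealPak.exe")
MOD_ROOT = Path(r"C:\mods\MyTextureMod")       # mirrors the game's internal layout,
                                               # e.g. End\Content\...\SomeTexture.uasset
OUT_PAK = Path(r"C:\mods\MyTextureMod_P.pak")  # the _P suffix makes UE4 load it
                                               # on top of the base pak

# UnrealPak reads a "response file": one line per entry, in the form
# "<files on disk>" "<mount point inside the pak>"
response = OUT_PAK.with_suffix(".txt")
response.write_text(f'"{MOD_ROOT}\\*.*" "..\\..\\..\\*.*"\n')

# -Create builds the pak from that response file.
subprocess.run([str(UNREALPAK), str(OUT_PAK), f"-Create={response}"], check=True)
print(f"Done -- drop {OUT_PAK.name} into the game's Paks folder.")
```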
 

Prism

Pro Adventurer
AKA
pikpixelart
From the existing mods, it seems modders can definitely only work with existing in-game models for now. That's usually how the modding progressions I've seen in the past go: starting with textures and model swaps, then getting increasingly more advanced. (When SSBU got to the point where model imports were possible, they initially had to be under the original model's filesize; then innovations were made so that any model import works.) I'm looking forward to seeing where FFVII-R's modding progresses.

And thank you for the pointer to the tutorials. On the Nexus Discord, I'm assuming?
 

Eerie

Fire and Blood
Yeah, there are a lot of really interesting facial expressions "off camera". We see Cloud having the biggest grins with Tifa, for example. Or being completely lost when he talks about his time in Shinra. Lots of interesting details that didn't make it into the final cut but were still worked on, and they give so much context to everything.

Edit: thinking about everyone who said Tifa had no reaction to Sephiroth:


Well she did... someone decided to cut that out :s
 
Last edited:
I assume that all (or at least 95%) of the heavy lifting with the facial expressions is done automatically by the software that detects the tone of the conversation. (See 5:58 to 9:53 in the behind-the-scenes video "Inside FFVIIR – Episode 5: Graphics and Visual Effects".) Still, it's neat to see that the facial-expression AI doesn't stop acting even when character faces are off-screen.

*EDIT*
In the cutscenes with Midgar's inhabitants, we have used a technology that automatically generates lip syncing from the words of the characters and also adjusts their facial animations. The audio-based automatic adjustments provide an extra layer of realism to scenes that would otherwise look less lifelike. It changes the lip syncing of the character to match the spoken language automatically. It also analyses the emotions of the spoken dialogue and reflects them in the facial expressions. And when a character utters a sound, their eyebrows move, accompanied by corresponding bodily movements.

The word AI may have a broad range of definitions, but in these cutscenes we have used it to track information such as who is involved in a conversation and the environment where the conversation takes place. These AI factors automatically select the camera angles: where to place the camera, who to aim the camera at, who speaks at any one time, who they are paying attention to when they're speaking, and so on. We create various movements according to all of this different data.

There are six basic emotions: anger, disgust, fear, happiness, sorrow and surprise. Animations need to be created for these emotions, for all the characters. We also created a sliding-scale chart for the automatic emotional adaptation called the Emotion Switch Model. It has two axes: the horizontal axis is for happiness and distress, the vertical one for surprise and tiredness. They are set up for every character. Based on the actors' voices, the emotions are picked up from the sound file and reflected in the motions. When you place a character at one point on that chart, the emotion and animation are automatically generated and shown.

With the AI even deciding on camera angles, it makes you wonder how often the scene developers chose to manually control the camera, and what expressions/angles they might have missed because they weren't aware of the full extent of the expressions the AI was creating.
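
Just to make that two-axis chart concrete for myself, here is a little toy sketch in Python of how placing a character on it could be turned into blend weights over the six basic emotions. To be clear, this is pure guesswork on my part (the anchor coordinates and the inverse-distance weighting are made up), not anything from the video:

```python
import math

# Toy take on the "Emotion Switch Model" from the video -- my own guesswork,
# not SE's implementation.
# Horizontal axis: distress (-1) .. happiness (+1)
# Vertical axis:   tiredness (-1) .. surprise (+1)

# Hypothetical anchor points for the six basic emotions on that chart.
EMOTION_ANCHORS = {
    "happiness": (1.0, 0.2),
    "surprise": (0.0, 1.0),
    "anger": (-0.8, 0.5),
    "fear": (-0.6, 0.8),
    "disgust": (-0.9, 0.0),
    "sorrow": (-0.7, -0.6),
}

def blend_weights(x, y):
    """Map a point on the chart to normalized animation blend weights.
    Anchors closer to the point get more weight (inverse-distance weighting)."""
    raw = {}
    for emotion, (ax, ay) in EMOTION_ANCHORS.items():
        dist = math.hypot(x - ax, y - ay)
        raw[emotion] = 1.0 / (dist + 1e-6)  # guard against division by zero
    total = sum(raw.values())
    return {e: w / total for e, w in raw.items()}

# A point toward "distress" and "surprise" -- roughly where you might place
# Tifa backing away from Sephiroth:
for emotion, w in sorted(blend_weights(-0.5, 0.7).items(), key=lambda kv: -kv[1]):
    print(f"{emotion:9s} {w:.2f}")
```

An animation system could then feed those weights into a blend tree, which would explain how every character gets a plausible expression without hand-animating each scene.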
 
Last edited:

Eerie

Fire and Blood
Yeah but in Tifa's case here, she doesn't talk? How did the AI detect what kind of face to give her?
 
My reading is that the procedural animations are used for all characters, though with a larger set of facial-capture data to draw from for the main characters. That said, the true extent of the AI is a bit unclear. It *might* be fully responsible for Tifa's expression in that one scene, based on factors like her body motion capture (if the AI takes that into account), where she backs away in trepidation. It's really hard to say, given the general versatility of AI and the fact that the video doesn't go into complete detail about how it works.
 

Tetsujin

he/they
AKA
Tets
They do use it for all characters, but only for the simple, generic stuff, like when you talk to NPCs or have little cutscenes as part of a sidequest, etc. The simple back-and-forth stuff. You can tell there's a huge difference in how characters animate in those vs. the fully animated cinematic cutscenes. That emotion on Tifa's face here comes from a human being, whether it was facial capture or keyframe animation. I'd bet on it. At least 5 bucks. Maybe 4.50 :monster:
 

a_apple 2.0

Pro Adventurer
AKA
a_apple
I assume that all (or at least 95%) of the heavy lifting with the facial expressions is done automatically by the software that detects the tone of the conversation. (See 5:58 to 9:53 in the behind-the-scenes video "Inside FFVIIR – Episode 5: Graphics and Visual Effects".) Still, it's neat to see that the facial-expression AI doesn't stop acting even when character faces are off-screen.

*EDIT*
In the cutscenes with Midgar's inhabitants, we have used a technology that automatically generates lip syncing from the words of the characters and also adjusts their facial animations. The audio-based automatic adjustments provide an extra layer of realism to scenes that would otherwise look less lifelike. It changes the lip syncing of the character to match the spoken language automatically. It also analyses the emotions of the spoken dialogue and reflects them in the facial expressions. And when a character utters a sound, their eyebrows move, accompanied by corresponding bodily movements.

The word AI may have a broad range of definitions, but in these cutscenes we have used it to track information such as who is involved in a conversation and the environment where the conversation takes place. These AI factors automatically select the camera angles: where to place the camera, who to aim the camera at, who speaks at any one time, who they are paying attention to when they're speaking, and so on. We create various movements according to all of this different data.

There are six basic emotions: anger, disgust, fear, happiness, sorrow and surprise. Animations need to be created for these emotions, for all the characters. We also created a sliding-scale chart for the automatic emotional adaptation called the Emotion Switch Model. It has two axes: the horizontal axis is for happiness and distress, the vertical one for surprise and tiredness. They are set up for every character. Based on the actors' voices, the emotions are picked up from the sound file and reflected in the motions. When you place a character at one point on that chart, the emotion and animation are automatically generated and shown.

With the AI even deciding on camera angles, it makes you wonder how often the scene developers chose to manually control the camera, and what expressions/angles they might have missed because they weren't aware of the full extent of the expressions the AI was creating.
If an AI existed that could animate facial expressions that well, games would take a lot less time to make :mon:
Obviously the devs animated the entire scene and then decided what angles to use lol
 
Funny thing is, one reason I got attached to the "AI did most of the work" impression from my initial viewings of the behind-the-scenes video is that I considered it the only logical explanation for how Square was able to create FF7R in the first place. :lol:

Though in hindsight, it's more reasonable to assume that lots of facial expressions were derived from scene-specific motion capture (and I would hope they did body + face capture at the same time). All that said, I'm still left wondering to what extent facial expressions were automated based on the emotion sliders and the aforementioned AI.
 

Eerie

Fire and Blood
Looking at pictures of motion-capture actors since the pandemic though, they ALWAYS wear masks, so I'm thinking their expressions would be minimal... I'm not sure how much it was used in the first part?
 

a_apple 2.0

Pro Adventurer
AKA
a_apple
SE definitely doesn't use face tracing for Remake or most of their IPs. Normally, games do that when the character model is heavily based on an actor, like The Last of Us or the Resident Evil remakes.
 

Tetsujin

he/they
AKA
Tets
SE definitely doesn't use face tracing for Remake or most of their IPs. Normally, games do that when the character model is heavily based on an actor, like The Last of Us or the Resident Evil remakes.

They used facial capture for Remake; they talk about it in the behind-the-scenes video. And plenty of games that use it have characters that look nothing like the actors. TLOU and RE are actually good examples of that; Capcom in particular often uses the likeness of models instead of the actors.
 