Video: Ninja Theory’s upcoming action game, Hellblade, has a protagonist with expressions that actually look, well, human. Here’s how they did it.
Through the use of the Unreal Engine and a few other things, Ninja Theory can take the live performance of an actress and render it directly into the game, as it happens.
“We could set up and shoot scenes directly within the virtual world with virtual actors, without having to worry about what the end result would look like,” said Tameem Antoniades, a developer on Hellblade. It’s pretty impressive! Typical mocapping looks ancient by comparison.
I love this bit, too:
Comments
10 responses to “The Tech Behind A PS4 Game’s Realistic Facial Expressions”
I would be much more surprised if they had manually created and animated the face instead of copying it with motion capture.
Isn’t that what Naughty Dog…errr…does? They have motion capture, but only use it as reference since their character’s faces are so different to the mocap actors.
But why bother key-framing when you can capture all the tiny, nuanced movements the human eye doesn't consciously notice anyway? Depending on the goal, I prefer animating via motion capture, and in particular prototyping through motion capture, as it saves literal weeks and months on a project.
Is tweaking mocap animations still a nightmare due to the amount of data/keyframes?
A guy I worked with a while back had to edit some provided mocap data for a project and the process sucked ass. Then again, none of us were animators.
I'd guess there are better solutions out there these days; e.g. converting dense per-frame mocap keys into a manageable set of steps.
Yeah, the software I use captures at 120 FPS, which is far denser than you need, so you cut the keyframes down to every fourth or fifth frame. The IR sensors are also getting much better at guessing the locations of fingers and hands, so it's all pretty sweet now.
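The thinning step the commenter describes can be sketched in a few lines. This is a minimal illustration, not any specific mocap tool's API: it assumes each joint's curve is just a list of (frame, value) keys, and `thin_keys` is a hypothetical helper name.

```python
# Sketch of thinning dense 120 FPS mocap keys to every Nth frame,
# keeping the first and last keys so the clip's endpoints survive.
# All names here are hypothetical, for illustration only.

def thin_keys(keys, step=4):
    """Return every `step`-th key, always retaining the final key."""
    if len(keys) <= 2:
        return list(keys)
    thinned = keys[::step]          # keep frames 0, step, 2*step, ...
    if thinned[-1] != keys[-1]:
        thinned.append(keys[-1])    # preserve the clip's last pose
    return thinned

# One key per frame for a 1-second clip captured at 120 FPS.
dense = [(frame, frame * 0.01) for frame in range(120)]
sparse = thin_keys(dense, step=4)
print(len(dense), "->", len(sparse))  # 120 -> 31
```

In a real pipeline you would interpolate between the surviving keys (most animation packages do this automatically), and a curve-fitting decimator that keeps keys where the motion changes fastest would preserve more detail than this fixed-interval version.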
Well, from what I can see, this motion capture misses smaller details below a certain threshold, all of which can be entered manually. Manual animation can also be more effective in terms of the amount of detail it carries.
Avatar, which has by far the best facial animation, was done entirely via mocap followed by clean-up; very little was keyframed, because doing what they did by hand would take forever. Keyframing can help exaggerate certain aspects of an animation or add detail where none existed, but mocap, when set up correctly, yields higher-quality detail and results, which is why so many studios are moving over to it.
To each his own, though. I have friends who aren't interested in using mocap at all, whereas it's all I'd ideally use unless it's a simple animation.
This is all well and good, and applause is in order for them. BUT it's all in vain if the gameplay gets shoved into the back seat, broken and crooked.
I'm afraid that if this and 'graphics' technology are what they [the developers] keep highlighting and concentrating on, leaving the gameplay less polished and less deep, then what is the (profanity) point? [Similar to their previous game, Heavenly Sword.]
IMO, I'd rather they concentrate on the gameplay first, so we get a fun, engaging, deep gameplay experience before the 'movies' most developers focus on and churn out these days.
That’s quite a realistic looking character, impressive.
I think the actors sometimes tend to over-emphasize their expressions for the sake of the technology, which just makes me laugh. It's like trying to act in slow motion in front of the camera.