It won’t come as a surprise just how much work went into The Last of Us 2’s characters. But you might appreciate knowing just how far Naughty Dog went to get the eyes — just the eyes — right.
Naughty Dog’s Waylon Brinck and Qingzhou Tang gave a talk at SIGGRAPH, the global graphics technology conference, earlier this year about the technical art behind The Last of Us 2. That talk hasn’t been uploaded online, but slides from the SIGGRAPH presentation have been.
The slides contain an incredible amount of granular detail, covering the shading features Naughty Dog created just for The Last of Us 2. Those include technology for deformable snow, surface wetness and, critically, improvements to how eyes are rendered in video games.
The above shot shows the difference between how Ellie’s eyes looked in the initial Last of Us 2 reveal trailer and how they were finally implemented in the game. According to Brinck and Tang, the original implementation was good — but only in certain lighting conditions and scenarios.
Eyes are one of those things that are immediately immersion breaking — if something’s off, you’ll notice it, and you’ll never be able to unsee it. So Naughty Dog had to overhaul their eye rendering so it was flawless in every environment.
The first step of the process was getting a high-quality ground truth, or reference image, as well as shots in different lighting scenarios. So if you don’t like eyes up close, well, uh, apologies for this shot:
Part of the overhaul included adding screen-space shadows for characters’ eyelashes and eyelids. It was a “brute force” approach, according to the developers, but one that came with very little cost, since the amount of screen space those shadows occupy at any given moment is very small.
Even in a shot where the facial model occupies the majority of the screen — like this one of Joel below — the hit to the PlayStation 4’s GPU was less than a millisecond, which isn’t a bad trade-off for how realistic the effect is.
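To give a sense of how the technique works in general (this is a generic sketch, not Naughty Dog’s actual implementation), a screen-space shadow test marches from a pixel toward the light through the depth buffer, and flags the pixel as shadowed if any sample along the way sits closer to the camera than the ray does:

```python
import numpy as np

def screen_space_shadow(depth, x, y, light_dir, steps=8, bias=0.01):
    """Minimal screen-space shadow march (hypothetical sketch).
    depth      -- 2D depth buffer, smaller values are closer to the camera
    (x, y)     -- pixel being shaded
    light_dir  -- (dx, dy, ddepth): screen-space step toward the light,
                  plus the depth change per step along the ray
    Returns True if something (e.g. an eyelash) occludes the light."""
    h, w = depth.shape
    ray_depth = depth[y, x]
    for i in range(1, steps + 1):
        sx = int(round(x + light_dir[0] * i))
        sy = int(round(y + light_dir[1] * i))
        if not (0 <= sx < w and 0 <= sy < h):
            break  # marched off screen: assume unoccluded
        ray_depth += light_dir[2]  # advance the ray's depth one step
        if depth[sy, sx] < ray_depth - bias:
            return True  # occluder found between pixel and light
    return False

# A flat surface with one occluder between the pixel and the light:
buf = np.full((8, 8), 1.0)
buf[4, 5] = 0.5  # something in front, e.g. an eyelash
print(screen_space_shadow(buf, 2, 4, (1.0, 0.0, 0.0)))  # True
```

Because only the few pixels covered by the eye region ever run this march, the per-frame cost stays tiny — which lines up with the sub-millisecond figure the developers quote.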
What’s fascinating is just how much time was spent nailing the little details. In the SIGGRAPH talk, the Naughty Dog developers spoke about how the limbus (the dark ring around the edge of the iris) wasn’t showing up accurately. The limbus can be nearly impossible to see when a subject looks straight into a light source, so if you think about what it takes to reproduce that behaviour in a video game, getting it right can be incredibly tricky.
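One way to picture the problem (a hypothetical sketch, not code from the talk): treat the limbus as a narrow darkening ring at the iris boundary, then fade the darkening as incoming light intensity rises, mimicking how the ring washes out when someone looks into a light source:

```python
def smoothstep(e0, e1, x):
    """Standard smooth Hermite interpolation between e0 and e1."""
    t = max(0.0, min(1.0, (x - e0) / (e1 - e0)))
    return t * t * (3.0 - 2.0 * t)

def limbus_darkening(r, iris_radius=1.0, ring_width=0.12,
                     light_intensity=0.0):
    """Hypothetical limbus term (illustrative only). r is the radial
    distance from the pupil centre, in iris-radius units. Returns a
    multiplier for the base eye colour: < 1.0 inside the dark ring."""
    # 1.0 inside a narrow band straddling the iris edge, 0.0 elsewhere
    ring = (smoothstep(iris_radius - ring_width, iris_radius, r)
            * (1.0 - smoothstep(iris_radius, iris_radius + ring_width, r)))
    # Strong direct light scatters through the ring, so fade it out
    fade = max(0.0, 1.0 - light_intensity)
    return 1.0 - 0.6 * ring * fade

print(limbus_darkening(1.0))                       # darkest, on the ring
print(limbus_darkening(1.0, light_intensity=1.0))  # washed out by light
```

The `0.6` darkening strength and the ring width are made-up tuning values; the point is only that the ring’s visibility has to respond to lighting, which is exactly what makes it hard to get right in every scene.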
Now you’d think, by this point, that the eyes were looking pretty good. They’d sorted out how to make them look realistic under multiple lighting conditions and environments.
But naturally, as AAA games often do — and Naughty Dog is famous for doing — things had to go a step further.
“We had a moment in the game where we wanted some wolves to have retro-reflective eyes, like many nocturnal animals do,” Brinck said in the presentation.
So that naturally meant going back to the drawing board, because animals like wolves (and cats) have an extra reflective layer behind the retina (the tapetum lucidum) that humans don’t:
Because of this extra reflective layer, Naughty Dog put together a short piece of code to calculate the reflection in the eye’s pupil and apply a mask from the iris. A subtle iridescence layer was added on top, which had the benefit of making the nocturnal animals look a little scarier.
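The description suggests something along these lines — a hypothetical sketch, not the studio’s shader: boost the reflection when the light sits near the camera (which is how retro-reflection behaves), mask it to the pupil region, and tint the result for a cheap iridescence:

```python
import math

def retro_reflective_eye(view_dir, light_dir, r, pupil_radius=0.4,
                         strength=1.5):
    """Hypothetical eyeshine term (illustrative only). The reflective
    layer bounces light back toward its source, so the glow peaks when
    the direction to the camera and the direction to the light (both
    measured from the eye) nearly coincide. r is the radial distance
    from the pupil centre. Returns an additive RGB glow."""
    def normalize(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)
    v, l = normalize(view_dir), normalize(light_dir)
    # Retro-reflection: alignment of view and light directions
    align = max(0.0, v[0]*l[0] + v[1]*l[1] + v[2]*l[2])
    mask = 1.0 if r <= pupil_radius else 0.0  # hard pupil mask, for brevity
    glow = strength * (align ** 8) * mask     # tight peak toward the light
    # Cheap iridescence: shift the glow colour with viewing alignment
    return (glow, glow * (0.6 + 0.4 * align), glow * (1.0 - 0.5 * align))

# Light right behind the camera: full eyeshine in the pupil
print(retro_reflective_eye((0, 0, 1), (0, 0, 1), 0.0))
# Light off to the side: no eyeshine at all
print(retro_reflective_eye((0, 0, 1), (1, 0, 0), 0.0))
```

The exponent, strength, and tint coefficients here are invented tuning values — the recognisable structure is the view/light alignment term plus the pupil mask, which is why the effect only appears when a light (like a flashlight) points straight at the animal.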
That fit really well with the theme of the game, so the developers naturally applied the same effect to the Infected.
Now keep in mind – this is just the technical work that went into the eyes. The full presentation, which you can check out here, goes into detail on how The Last of Us 2’s breakable glass was built (and the four main techniques behind it), the power of Naughty Dog’s “Uber Shader”, which affects almost every surface in the game, how heightmaps were used to give a better sense of depth to things like a pile of rocks, how the realistic feeling of “wetness” was achieved, and what it takes to get snow to deform.
There’s also a talk on what next-gen hardware can offer going forward. Take this comparison of Abby from The Last of Us 2 versus the reference shot:
The left is Abby’s in-game model, and the right is the reference shot of Jocelyn Mettler. It’s pretty damn good! But look more closely and you can pick out plenty of areas where improvements can be made. The detail in the hair is an obvious challenge (hair has always been a problem for video games), and the in-game model doesn’t quite replicate the way light bounces off a person’s face. The fuzziness of the lower cheek isn’t replicated on the in-game model at all, and details like the ridges in the neck or the pores in the skin still have a ways to go.
But that’s a look at how much work it takes to get those minute details right — and where technical artists are headed with the next generation. It’s a seriously fascinating read, so definitely check out the full presentation.
Correction: The original story said Laura Bailey’s face was used for Abby’s face, but it was Jocelyn Mettler (Bailey was responsible for the voice overs). Apologies for the error.