It’s a classic joke about video games. Video games can simulate complex interlocking networks and systems, calculate the movement of thousands of individuals all at once, model weather patterns and generate entire galaxies on the fly, but hair? That’s a step too far.
Despite seeming like a natural extension of character models — which have made extraordinary strides over the past console generation — hair has always been especially challenging. Add problems with older rendering technology and weaknesses in things like subsurface light scattering and how different skin tones react to different lighting conditions, and you’ve got a whole host of problems stopping characters from looking as authentic as they should.
It’s understandable, then, that fans go absolutely ballistic for any game that gets close. Take a look at the reaction to Black Desert in 2014, how much we loved Destiny‘s hair at launch, or how much Child of Light made us stop and appreciate Aurora.
So, why exactly is video game hair so hard to get right?
A large part of the problem with video game hair is that it’s not a static thing. Hair, and the tens of thousands of strands and follicles within it, is constantly moving. And that constant movement needs to be calculated in real-time, not to mention the calculations necessary when anything happens to hair — like when it gets wet, blows in the wind, or reacts to light, and so on.
If you’re wondering why you’ve seen so many bald dudes or characters in armour or wearing helmets in older games, it’s partly because it’s easier to render. It’s supremely difficult to do realistic hair well — or at least hair that doesn’t look ridiculous — unless it’s a hairstyle that’s pressed flat against a person’s skull. You see this a lot in characters with longer hair. The hair would generally be attached to the character model as a texture or layers of textures onto the body, almost like an appendage, and it’d move in unusual ways.
“I guess the main reason animation-wise is that if you want someone to have any kind of hair-style other than short or bald, you are going to need to figure out how to move it, and in modern games that’s normally with physics,” Mark Pearce, an animator at Greenheart Games (Game Dev Tycoon, Tavern Keeper), explained over email.
“This means unless you have a lot of bald or short haired characters, you’re going to have a lot of physics considerations to make, and that’s not even mentioning clothing, other accessories, etc.”
To help with this, AMD and Nvidia developed their own software suites to allow for more advanced hair simulation. AMD’s offering was called TressFX, and it was first used in 2013’s Tomb Raider reboot. Nvidia developed their own system called HairWorks, and like AMD’s, their system used a graphics card’s regular geometry processing to emulate what was happening to hair on a finer level.
But the problem? Both technologies carried significant performance penalties, even on the most high-end gaming rigs. So one of the major problems is budget. The amount of CPU/GPU time dedicated to hair has to be weighed against all the other things that could be happening on-screen (or being rendered off-screen) at any given moment. There’s also the time spent working out the best compromise at an aesthetic level, and the time developers have to spend hunting down all the edge-case scenarios where Hair Goes Wild.
“Hair, like anything else, you just see that every day,” Renaud Galand, Blizzard’s lead character artist on Overwatch and Overwatch 2, told Kotaku Australia. “That’s why it’s so hard to get a face to look right. We keep hearing about the uncanny valley — hair is part of that as well to some extent. When it’s looking off, you know it right away. You can’t point your finger on it, but you know it.”
To illustrate the complexity, the developers at Greenheart Games sent me a screenshot from Unity showing just some of the options they have to consider when managing the physics of hair.
“There’s also plugins and other solutions you can try,” Patrick Klug, co-founder of Greenheart Games, explained. “As you might imagine, it’s a lot of fiddling around with settings that give the best compromise of look & feel vs. how likely it is to break in all the corner cases.”
For an indie project, the Tavern Keeper team tried modelling hair via character joints (as seen in the screenshot above) but “we observed them misbehaving too often”. At the time we spoke, Greenheart Games were looking at using cloth physics instead. That’s a whole other layer of complexity, and getting that level of simulation is difficult even for the biggest developers.
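The joint-based approach described above can be pictured as a chain of bones, each pulled back toward its rest pose by a damped spring every frame. Here’s a minimal single-joint sketch in Python — the function, constants and values are illustrative assumptions, not Greenheart’s actual setup:

```python
# Minimal damped-spring model of one link in a hair "bone" chain:
# each frame, the joint is pulled back toward its rest angle, with
# damping so it settles instead of oscillating forever.

def step_joint(angle, velocity, rest_angle, dt, stiffness=40.0, damping=6.0):
    """Advance one joint of a hair bone chain by one timestep (explicit Euler)."""
    # Spring acceleration toward the rest pose, minus velocity damping
    accel = stiffness * (rest_angle - angle) - damping * velocity
    velocity += accel * dt
    angle += velocity * dt
    return angle, velocity

# Settle a displaced joint over ten seconds of simulation
angle, velocity = 0.8, 0.0   # radians; knocked well away from rest
for _ in range(600):         # 600 frames at dt = 1/60 s
    angle, velocity = step_joint(angle, velocity, rest_angle=0.0, dt=1 / 60)
```

The “misbehaving” the team observed is a known hazard of this setup: crank the stiffness too high or let the timestep grow (say, during a frame hitch) and explicit integration like this can overshoot and explode rather than settle.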
In Hollywood, for instance, Pixar established an entire department dedicated to costume, hair, and physics simulation. That work led to the creation of a core simulation engine that would handle the exceedingly complex calculations in a realistic manner — like Violet constantly playing with her long hair in 2004’s The Incredibles.
Pixar, of course, has a very different set of goals, challenges and hardware budgets to work with. Video games run at much higher frame rates, are calculating a wider array of interlocking systems that have to respond to any manner of inputs from a player, and also have the challenge of dealing with scalable hardware.
And that’s before you get into the practical challenges. Even the biggest video game studios don’t have quite the same degree of freedom or support as Pixar, which can afford to have teams spend years on internal tools thanks to its thousands of employees and literal billions of revenue over the decades. (That said, some of the biggest video game developers definitely try.)
So it’s practically impossible for a video game to have a situation like Ratatouille, where chef Colette has over 176,000 strands of hair that authentically sway, bounce and shift as she flies around the kitchen. Video games don’t have the polygon budget to support that degree of realism unless there’s practically nothing else on screen that needs rendering.
But players still want — and expect — good hair in video games. So how is it done?
Here’s one basic rundown of how the process might work. You take a round of reference photos of the model representing your character. Artists will then look at different shots of similar hair, or other hairstyles they think might work with the character they have in mind.
After narrowing down the look they want, more reference photos are shot from different angles. This helps the artists get a proper idea of how the hair should appear; most games will give the player control of the camera at some point, and the hair needs to look accurate from all angles. A third round of reference shots might be required, and these can be shots of the hair against a white background.
This helps artists get the hair’s texture right. From there, that detail can be brought into another program (like Maya or Blender), where the texture is rendered down into “hair cards”, which artists then manually place onto the character model.
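A hair card is essentially a textured quad: four vertices built from a root point on the scalp, a direction the strand clump grows along, and a width, with a semi-transparent hair texture mapped across it. A rough sketch of how one card’s geometry might be constructed (the function name, layout and units here are illustrative assumptions):

```python
def make_hair_card(root, direction, side, length, width):
    """Build the four corner vertices of a single hair card quad.

    root      -- (x, y, z) point on the scalp where the card is anchored
    direction -- unit vector the strand clump grows along
    side      -- unit vector across the card (perpendicular to direction)
    length    -- how far the card extends from the root
    width     -- card width; the hair texture (with alpha) is mapped onto it
    """
    half = width / 2.0
    tip = tuple(r + d * length for r, d in zip(root, direction))
    return [
        tuple(r - s * half for r, s in zip(root, side)),  # root-left
        tuple(r + s * half for r, s in zip(root, side)),  # root-right
        tuple(t + s * half for t, s in zip(tip, side)),   # tip-right
        tuple(t - s * half for t, s in zip(tip, side)),   # tip-left
    ]

# One card growing straight down from the back of the head (metres)
card = make_hair_card(root=(0.0, 1.7, 0.0), direction=(0.0, -1.0, 0.0),
                      side=(1.0, 0.0, 0.0), length=0.3, width=0.05)
```

In practice artists place dozens to hundreds of these cards by hand, layering and curving them to build up the volume of a full hairstyle.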
Different studios have their own process, tools and particular requirements. So this isn’t a hard and fast process for all game development. But there are plenty of good reference points. There’s a great short video from Myrkur Games’ Eydís Ósk Ingadóttir here, and a great talk below from Johan Lithvall, a specialist character artist who worked with Guerrilla on Horizon Zero Dawn.
To better understand the AAA experience, I reached out to several major publishers and studios for this story. Due to complications around the coronavirus pandemic and its impact on development — particularly with developers working on upcoming releases — most were unable to assist.
However, one major studio that was able to get back to me was Blizzard. I was keen to hear back from them, particularly as I wondered how different the process might be for stylised games. A character like Overwatch‘s Tracer has great hair, but it’s not detailed with individual strands to the same extent as, say, Lara from Rise of the Tomb Raider, Aloy’s long hair and braids in Horizon Zero Dawn, or Cal’s medium-length locks from Jedi Fallen Order.
A stylised look is more scalable, too, which helps in retaining an aesthetic on lower-end platforms and consoles like the Switch. But even then hair is still an enormous challenge, according to Blizzard’s Renaud Galand. And one of the major difficulties facing hair is the way it’s lit, as much as the detail of the hair itself.
“A lot of people forget that what makes hair hair is the way it reacts to light, and that’s what really makes it believable or not,” Galand said.
“If you take a step back and look at games from the PS2 or PS3, the main difference about what we could do back then and what we’re able to do right now is more resources at our disposal, which means we have more budget to put roundness and definition into the hair. But for me, the lighting engines are really awesome and really progressing, so you get bounce highlighting, anisotropic highlights, scattering, translucency that not a lot of people think of,” Galand added.
A specific benefit with the hair in Overwatch, and one that applies to Overwatch 2, is a sculpting phase. Like almost every studio, Blizzard’s characters start life as concept art, but having a stylised look helps avoid a lot of the painful minutiae involved in the creation of realistic hair.
“We actually sculpt hair; that’s something that’s been pretty liberating on Overwatch. We didn’t have to groom or go with a bajillion small, little polygons to try to achieve something that had volume. We could directly go in and actually sculpt the hair to give initial volume,” Galand said.
After the sculpting and modelling phase, work then goes into the materials and shaders. That’s the point at which graphics engineers get involved, because they have to work in concert with the rest of the development team to accomplish and balance everyone’s needs.
For Tracer’s design in the original Overwatch, Galand said the developers really wanted to nail anisotropic highlights and the fuzziness of the tips of Tracer’s hair. “We wanted the tips to be fuzzy and the anisotropic highlights — which is like the particular highlight that you have on hair that has a way of bending to the light in a very particular way — to be not necessarily accurate, but to have some of the properties that you could see on real hair. And then we put a bit of a spin on it, to stylise that, to give a bit more of that stylised/anime look to it.”
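The anisotropic highlight Galand describes is commonly approximated in real-time rendering with the classic Kajiya-Kay model, where the specular term depends on the angle between the hair strand’s tangent direction and the half vector between light and viewer, rather than on a surface normal. A hedged sketch of that textbook term in Python — Overwatch’s actual shader is proprietary, and these names and the exponent value are assumptions:

```python
import math

def kajiya_kay_specular(tangent, half_vector, exponent=80.0):
    """Kajiya-Kay specular term for hair.

    tangent     -- unit vector along the hair strand
    half_vector -- unit half vector between light and view directions
    exponent    -- shininess; higher values give a tighter highlight
    """
    t_dot_h = sum(t * h for t, h in zip(tangent, half_vector))
    # sin of the angle between tangent and half vector
    sin_th = math.sqrt(max(0.0, 1.0 - t_dot_h * t_dot_h))
    return sin_th ** exponent

# Brightest when the half vector is perpendicular to the strand...
bright = kajiya_kay_specular((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
# ...and falls to zero when it runs along the strand.
dark = kajiya_kay_specular((1.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```

Because the highlight follows the strand direction instead of a smooth surface normal, it naturally produces the characteristic band of light that “bends” across a head of hair — which a stylised game can then exaggerate or soften to taste.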
But even in a scenario like that, there are millions of things that can go wrong that spoil the presentation. A useful example of just how many little details go into the creation of authentic hair can be found in the regular updates for Squadron 42, the single-player off-shoot of the crowdfunded Star Citizen.
Here’s a passage from the latest developer update, explaining some of the work done for a new hair shader:
The hair shader received several improvements, such as the ability to disable specular highlights on eyelashes, improved boundary occlusion at hairlines, support for ambient lights in forward shading, and better hair quality during camera cuts. Dual-lobe approximation for the skin shader was added and the eye shader received a couple of improvements as well.
For example, the depth of field effect needed several changes to work better with the new hair shader. Tweaks were also made to the sub-surface-scattering effect to improve occlusion at the boundary between materials, such as around teeth and hair.
It’s not directly mentioned, but the Star Citizen devs here are talking about the same problem that Galand mentioned. And it’s the same problem that caused Naughty Dog to build a model of the human iris for The Last of Us 2. Their original solution was good, but it didn’t work in all environments. And eyes, teeth and hair all share a similar flaw: if any of them aren’t right, the player will feel something is “off”, even if they can’t identify exactly what the problem is.
That’s why things like occlusion matter. If the lighting doesn’t properly deal with things like the edges where hair meets the forehead, it can make the difference between a character looking like they have a haircut versus hair that’s stuck on. It’s a crude example, but it’s just one instance of how things in video games can go wrong.
Galand gave another example of just how much of a difference lighting can make. While talking over a Zoom call, Galand pointed out his own hair — a shorter take on this style — as an example. “We’re on camera, right? I’m looking at my hair and I see … drop shadows from one strand of hair onto another, which gives an overall shadow of the hair onto the face.”
“We don’t have — when you do a movie or something like that, you have all the resources you want, you don’t have to render more than one frame a second, so you can put all the lighting power you want, and that really helps in creating all of those subtle shadows,” Galand said. “When it came to Overwatch, I remember in the first test, we were missing a bit of that. The hair was really feeling completely detached, and not grounded like the rest of the characters. So we had to go back and realise and understand from a video game point of view, and the engine we were dealing with at the time, we had to put a lot more of that information into the textures.”
“But for me, the thing that we struggle with the most, and that I still feel in some games from time to time, is that the hair feels completely disconnected from the face. It almost looks like a wig and kind of floats up there. So it takes a little bit more resources and fine-tuning to make sure that the whole thing feels like it’s combined together.”
One game that offers a potential window into where video games might go is the latest version of FIFA 21. As opposed to using “hair cards”, EA’s Frostbite engine can now render and simulate physics on individual strands of hair. In a presentation for the Digital Dragons 2020 industry event, EA Frostbite engineers Jon Valdes and Robin Taillandier explained the advantages of their new strand-based system.
It allows for some seriously impressive movement: a character wearing a headband can have strands at the top of their head moving vastly less than the ends of their hair. However, there’s an obvious cost. Not every character on the pitch can have their hair modelled to this level of quality, even using the power of the next-gen consoles.
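Strand simulation like this is typically built on position-based techniques: each strand is a chain of points advanced with Verlet integration, distance constraints keep segments at their rest length, and points near the scalp (or held under a headband) are pinned in place. A toy single-strand version in Python — the real Frostbite system is far more elaborate, and everything below is an illustrative assumption:

```python
# Toy position-based strand: Verlet integration plus distance constraints.
# Points 0 and 1 are pinned (think: roots held under a headband), so the
# free end of the strand swings while the top barely moves at all.

GRAVITY = (0.0, -9.8)   # metres per second squared
REST_LEN = 0.1          # rest length of each strand segment, in metres
PINNED = {0, 1}         # indices of points that never move

def step_strand(points, prev_points, dt, iterations=10):
    """Advance the strand one timestep; returns (new_points, points)."""
    new = []
    for i, ((x, y), (px, py)) in enumerate(zip(points, prev_points)):
        if i in PINNED:
            new.append((x, y))
            continue
        # Verlet step: next = 2*current - previous + gravity * dt^2
        new.append((2 * x - px + GRAVITY[0] * dt * dt,
                    2 * y - py + GRAVITY[1] * dt * dt))
    # Project segment lengths back toward REST_LEN, keeping pins fixed
    for _ in range(iterations):
        for i in range(len(new) - 1):
            (ax, ay), (bx, by) = new[i], new[i + 1]
            dx, dy = bx - ax, by - ay
            dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
            corr = (dist - REST_LEN) / dist / 2.0
            if i in PINNED and i + 1 in PINNED:
                continue
            if i in PINNED:        # move only the lower point
                new[i + 1] = (bx - dx * corr * 2, by - dy * corr * 2)
            elif i + 1 in PINNED:  # move only the upper point
                new[i] = (ax + dx * corr * 2, ay + dy * corr * 2)
            else:                  # split the correction between both
                new[i] = (ax + dx * corr, ay + dy * corr)
                new[i + 1] = (bx - dx * corr, by - dy * corr)
    return new, points

# A horizontal strand of six points; gravity pulls the free end down
strand = [(i * REST_LEN, 0.0) for i in range(6)]
prev = list(strand)
for _ in range(120):   # two seconds at 60 fps
    strand, prev = step_strand(strand, prev, dt=1 / 60)
```

Even in this toy version you can see why the cost scales so brutally: a real head has tens of thousands of strands, each needing its integration and constraint passes every frame, which is why only star players get the full treatment.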
So in FIFA 21, the benefits are isolated to the star players. But advancing the technology this far also brings new problems to the surface. For instance, Frostbite doesn’t seem able to properly account for the sweat that builds up. Naturally, your face and head get sweaty as you run up and down a football pitch, and FIFA has shown beads of sweat on players for years. But we’re not quite at the point where games can work out which parts of a character’s hair have been affected by sweat, and how that impacts its physics and motion.
Similarly, the strand-based rendering — and this was a substantial problem in the earlier strand-based TressFX and HairWorks rendering in games like The Witcher 3 and Tomb Raider — has an elasticity and floatiness that feels unrealistic. Watching it slowed down in videos or GIFs certainly doesn’t help, but it also stands out because we’ve been accustomed to years of video games devoting barely any resources to hair physics. (Early versions of TressFX and HairWorks could also be supremely janky, sometimes with hilarious results.)
But even in instances where character models only have a “skinned” head of hair, as opposed to a fully simulated one, the graphical improvement is massive. It’s especially effective on shorter haircuts, and facial hair looks miles better too.
Something that’s especially prominent in the shots of Messi above is the shadow cast from the fringe onto the forehead. It’s the kind of shadow that we’ve heard real-time ray tracing is especially good for, but as the Frostbite developers noted in their talk above, computers struggle when trying to render anything smaller than a pixel.
“Ray tracing will basically grab a surface and will bounce the lighting information on that surface onto another surface, [so] it’s kind of like self-casting in a way,” Blizzard’s Galand said. He believed that the solution would eventually come from greater power. There’s definitely potential in the PlayStation 5 and Xbox Series X, but it will take years before developers become fully accustomed to the limits of the new hardware, and the various tradeoffs they can and can’t make in different scenarios.
“I think eventually we’ll end up with enough power that we’ll be able to grab like a pre-rendered solution for real-time,” Galand said. “We won’t actually have pre-rendered, but we’ll have as much power in real-time that pre-rendered engines have right now. Yes, at some point we won’t have to actually paint anything anymore. Are we there yet? I’m yet to see a hair rendered with the help of ray-tracing, so I can’t comment on that too much, but I guess the next generation will tell us that.”
Still, that doesn’t mean existing game engines and technology can’t be impressive today. I asked Galand if there were any projects that he was particularly impressed by, and while FIFA‘s new Frostbite tech hadn’t been publicly announced at the time, the Blizzard artist said he was especially taken aback by the detail in Game Science’s Black Myth: Wu Kong, the Chinese Souls-like game that went viral last year.
If you look closely in Wu Kong — and keeping in mind this is supremely early footage of a game that may not be released for at least a few years — you’ll notice that Wu Kong‘s hair and fur doesn’t really react to the environment. That changes a little when Wu Kong transforms, but even then the hair remains largely “baked” onto the character model.
“Is it the most realistic hair? I don’t think it is,” Galand said, stressing this was just his personal perspective. “But from an artistic point of view, just the way that the hair is included in the rest of the environment, where it’s not just about the hair but the whole picture when you look at the image, I think that game is just beautiful. And it features quite a bit of fur and hair.”
Of course, by the time Wu Kong gets closer to release, applying more physics to the character’s hair might be a choice the studio makes. But they might choose to spend their budget on better reflections, ray traced lighting or global illumination, or maintaining some overhead to uphold a high frame rate.
But we should never neglect the potential for developers to make magic happen. As my chat with Galand finished up, he mentioned that Shadow of the Colossus was astonishing for its time. “The thing that we can all agree on: nobody understood how the guys who created Shadow of the Colossus on the PS2 did it,” he said.
“We were all floored: this looks like real fur. How did they manage to pull this off?”
The simplest answer to why video game hair has been so difficult until now is because it’s a video game. Their very nature opens up literally millions upon millions of possibilities for things to go wrong: player inputs, interactions with non-player characters, different angles, scenarios and approaches, and video games have to be capable of handling all of those without breaking.
But even as the PS5, Xbox Series X and the next generation of gaming PCs offer more potential, that power has to be balanced against what developers value most. Different studios also have their own technology and tools to solve the unique problems they run into. Frostbite’s strand technology will be available to EA studios, for instance, although not all EA studios will use it: Respawn famously used Unreal Engine to create Jedi Fallen Order, while Apex Legends was built on a heavily modified version of Valve’s Source engine that Respawn used on Titanfall.
Basically, every studio will come up with their own solution, look at how others have solved similar problems, and then incorporate those ideas or parts of those ideas into future projects. Those solutions are being developed right now, as studios turn their eye towards games unhindered by the PS4 or Xbox One. What will be really exciting is all the projects that come after that, once everyone’s had more experience seeing where the next generation can go.
This article has been updated since its original publication.