The 3D Graphics Techniques Helping Our Indie Game Look Great

As some readers know, I also moonlight as an independent games developer, based out of Melbourne. At the start of this year, I began working on our studio’s third game, Fear Equation, which we officially announced yesterday. The response has been positive so far, particularly on how sweet the visuals look. Sure, it can’t compete with the latest AAA games, but we’ve spent a lot of time getting as close as we can… and here’s how.

With each release, the fidelity of our games has increased substantially. We went from the 2D, coded-from-scratch Zafehouse: Diaries (2012):

To the Unity 4-powered Deadnaut (2014):

And now we’re hacking away at Fear Equation — a game that puts you in the shoes of an engineer driving a modified freight train through a fog that brings nightmares to life. It’s also developed in Unity 4:

Yep, we’ve levelled a few times in the graphics department, to say the least. There are two factors at play that helped bring the graphics of Fear Equation up to snuff: an awesome 3D artist, Gavin Jenkins, and two 3D techniques we implemented — Physically-Based Rendering (PBR) and Screen Space Directional Occlusion (SSDO). We also have tasteful bloom, Subpixel Morphological Anti-Aliasing (SMAA) and a few other effects going on, but it’s PBR and SSDO that are doing the heavy lifting.

What are these effects exactly? Let me do my best to explain, though I’ll warn you now, it gets technical in some parts. Bear with me!


PBR: Physically-Based Rendering

Now, this is a concept you’ve likely read about or heard of before, as it’s one of the headline features of both Unreal Engine 4 and Unity 5, as well as all the top engines from the likes of Crytek and DICE. There are many, many papers online discussing what PBR is, but the only one you really need to read is from Disney. Well, from Brent Burley at Walt Disney Animation Studios, really.

It might be surprising to learn that the same people who made Wreck-It Ralph and Frozen are also indirectly pioneering the latest generation of 3D graphics in video games, but when you stop and think about it, it makes perfect sense. Up until recently, PCs and consoles weren’t sufficiently powerful to replicate the visuals we see in animated films (at respectable frame rates, anyway), but that gap is beginning to close and, as it does, it’s becoming increasingly practical to implement these advances in cutting-edge games… with a few tweaks in the interests of performance.

Physically-based rendering is exactly what it says on the tin: figuring out the equations that determine how light interacts with physical surfaces in reality and applying those formulas to 3D rendering. Lighting a surface can be broken down into two parts: the diffuse term and specular term. “Term” basically means “what numbers do I use (and what do I do with them) to figure out how this should look”.

The diffuse term determines how a surface reflects light in all directions. So if the sun hits a green piece of paper, then it should reflect green to the viewer. Since pretty much forever, games have used the Lambertian model for the diffuse term. It’s fast, simple and does an excellent job of approximating how light is reflected off a matte surface.
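
If you’d like to see that in code, here’s a minimal sketch of the Lambertian diffuse term in Python. The names and values are my own for illustration, not Fear Equation’s actual shader code:

```python
import numpy as np

def lambert_diffuse(albedo, normal, light_dir, light_colour):
    # Diffuse reflectance: surface colour scaled by the cosine of the
    # angle between the surface normal and the direction to the light.
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    n_dot_l = max(np.dot(n, l), 0.0)  # surfaces facing away receive no light
    return albedo * light_colour * n_dot_l

# A green piece of paper in white sunlight reflects green to the viewer.
green_paper = np.array([0.1, 0.8, 0.1])
sunlight = np.array([1.0, 1.0, 1.0])
print(lambert_diffuse(green_paper, np.array([0.0, 1.0, 0.0]),
                      np.array([0.3, 1.0, 0.2]), sunlight))
```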

In fact, the Lambertian model is so effective, Epic stuck with it for Unreal Engine 4 after assessing several alternatives. According to senior graphics programmer Brian Karis, Epic “couldn’t justify the extra [performance] cost” of more sophisticated models, when the difference in visual quality was negligible.

Now, the specular term is the exciting part and where most of the research from both game developers and 3D animation studios goes. You know the glare you see on a reflective surface, like a metal pipe or a polished wooden table? That’s specular.

Everything has a specular reflection; it’s just more noticeable on shiny surfaces than rough ones. But to replicate real life, we need to take specular into account across the board, rather than applying it only where we think it should be. This is another core aspect of PBR — taking a single lighting model and using it consistently. As such, we need a way to tell our 3D engine how “rough” a surface is — the less smooth the surface, the more light is lost in its scratches and holes, and the less reflective it is.

In PBR, these imperfections are called “micro-facets”, and the formula that determines how light interacts with them — combined with several other terms into what’s collectively called the bidirectional reflectance distribution function (BRDF) — is at the heart of any specular model.
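
To make that a little more concrete, here’s a rough Python sketch of the general micro-facet specular shape (the Cook-Torrance form). The stand-in D, F and G functions in the demo are deliberately crude, just to show how the pieces plug together:

```python
import numpy as np

def specular_brdf(n, l, v, distribution, fresnel, geometry):
    # The Cook-Torrance micro-facet form: D * F * G / (4 (n.l)(n.v)).
    # D: how the micro-facets are oriented, F: Fresnel reflectance,
    # G: how much the facets shadow and mask each other.
    h = (l + v) / np.linalg.norm(l + v)   # half vector between light and view
    n_dot_l = max(np.dot(n, l), 1e-4)
    n_dot_v = max(np.dot(n, v), 1e-4)
    d = distribution(np.dot(n, h))
    f = fresnel(np.dot(v, h))
    g = geometry(n_dot_l, n_dot_v)
    return d * f * g / (4.0 * n_dot_l * n_dot_v)

# Toy stand-ins, purely to exercise the structure:
n = np.array([0.0, 1.0, 0.0])
l = np.array([0.6, 0.8, 0.0])   # already unit length
v = np.array([-0.6, 0.8, 0.0])
print(specular_brdf(
    n, l, v,
    lambda ndh: max(ndh, 0.0) ** 50,            # crude highlight shape
    lambda vdh: 0.04 + 0.96 * (1.0 - vdh) ** 5, # Schlick's Fresnel approximation
    lambda nl, nv: nl * nv))                    # crude geometry term
```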

Just like diffuse reflection, specular has its own granddaddy: the Blinn-Phong model. Again, it’s only in recent years that games have departed from this proven approach for more realistic models, including Cook-Torrance and Ward.
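
For reference, Blinn-Phong is simple enough to sketch in a few lines. Again, this is my Python restating of the textbook model, not our shader:

```python
import numpy as np

def blinn_phong_specular(normal, light_dir, view_dir, shininess):
    # The highlight is driven by the angle between the surface normal
    # and the 'half vector' sitting between the light and the viewer.
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    h = (l + v) / np.linalg.norm(l + v)
    return max(np.dot(n, h), 0.0) ** shininess

# Same geometry, slightly off the mirror direction: high shininess makes
# the highlight fall off fast (a tight glare), low shininess keeps it broad.
n = np.array([0.0, 1.0, 0.0])
l = np.array([1.0, 1.0, 0.0])
v = np.array([-1.0, 0.6, 0.0])
print(blinn_phong_specular(n, l, v, 256))  # tight highlight: almost gone here
print(blinn_phong_specular(n, l, v, 8))    # broad highlight: still visible
```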

I’ll freely admit the maths involved here is over my head, but all you need to understand is that they’re equations that take (mostly) the same input, but provide different output. That output is — to put it simply — a bright circle of some description, though the specifics of how that circle looks differ between formulas.

To provide a visual example, here’s what the micro-facet distributions of real-life chrome, GGX and Beckmann look like (with the latter two, like Blinn-Phong, being ways of calculating the specular term):

Image: Brent Burley / Walt Disney Animation Studios

It’s clear that GGX provides the closest approximation to the real deal, and it’s this model, slightly modified, that has been adopted by Disney and by Epic for Unreal Engine 4. The implementations aren’t identical — Epic uses a number of optimisations that sacrifice a negligible amount of quality for speed — but the core features are the same.

And so that’s what we’re using in Fear Equation. The base implementation is a carbon copy of Disney’s, though like Epic, we ditched the fancy diffuse term for good old Lambertian, for precisely the same performance reasons.
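
For the curious, here’s the GGX distribution term restated in Python, next to Beckmann, using the roughness-squared remapping from the Disney paper. It’s a sketch of the published maths rather than our actual shader, but it shows both GGX’s famous long “tail” and what cranking roughness does to the highlight:

```python
import numpy as np

def d_ggx(n_dot_h, roughness):
    # GGX / Trowbridge-Reitz distribution, with Disney's alpha = roughness^2.
    a2 = (roughness ** 2) ** 2
    denom = n_dot_h ** 2 * (a2 - 1.0) + 1.0
    return a2 / (np.pi * denom ** 2)

def d_beckmann(n_dot_h, roughness):
    # Beckmann distribution, using the same remapping so the two compare.
    a2 = (roughness ** 2) ** 2
    c2 = n_dot_h ** 2
    return np.exp((c2 - 1.0) / (a2 * c2)) / (np.pi * a2 * c2 ** 2)

# GGX's long 'tail': moving away from the highlight's centre (n.h = 1),
# GGX keeps returning noticeable energy where Beckmann has fallen to zero.
# That soft glow around the hotspot is what matches real chrome.
for n_dot_h in (1.0, 0.95, 0.9, 0.8):
    print(f"n.h={n_dot_h:.2f}  GGX={d_ggx(n_dot_h, 0.3):.4f}  "
          f"Beckmann={d_beckmann(n_dot_h, 0.3):.4f}")

# And roughness, as in the screenshots below: rougher surfaces spread the
# same energy over a wider, dimmer highlight.
for r in (0.1, 0.3, 0.6, 0.9):
    print(f"roughness={r}: highlight peak = {d_ggx(1.0, r):.2f}")
```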

Here are some screenshots so you can see how “roughness” factors into the lighting calculation. First, GGX in Fear Equation with average roughness:

The same shot, but I’ve increased the roughness to near maximum on the metal surface near the bulb:

Finally, here’s Blinn-Phong. I’d say the images speak for themselves.


SSDO: Screen Space Directional Occlusion

Real-time 3D graphics is all about faking what real life looks like, and applying a facsimile of ambient occlusion is just another trick programmers have up their sleeves. All the little shadows you see in cracks or between adjoining surfaces, such as bricks and cobblestones, are ambient occlusion. While traditional shadowing techniques do a good job with medium to large scale stuff, we need to work a bit harder to cover the small things.

Enter Screen Space Ambient Occlusion, or SSAO. Pioneered by Crytek, SSAO has become the standard for adding depth to a scene by applying detail shadows. It works by looking at the “depth” of a pixel — that is, how far away it is from the camera — and comparing it to its neighbours. This information can then be used to tell how occluded, or blocked, a pixel is and darken it appropriately.
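
Here’s a deliberately naive Python sketch of that idea. Real SSAO samples a hemisphere in 3D and filters the result, but the core depth-comparison trick looks something like this:

```python
import numpy as np

def ssao_naive(depth, radius=3, strength=1.0):
    # For each pixel, count how many nearby depth samples are closer to
    # the camera. More 'blockers' in the neighbourhood = darker pixel.
    h, w = depth.shape
    ao = np.ones_like(depth)
    offsets = [(dy, dx) for dy in (-radius, 0, radius)
               for dx in (-radius, 0, radius) if (dy, dx) != (0, 0)]
    for y in range(h):
        for x in range(w):
            blockers = sum(1 for dy, dx in offsets
                           if 0 <= y + dy < h and 0 <= x + dx < w
                           and depth[y + dy, x + dx] < depth[y, x])
            ao[y, x] = 1.0 - strength * blockers / len(offsets)
    return ao  # multiply into ambient light: 1 = open, lower = occluded

# A depth buffer with a near 'wall' on the left: pixels just to its right
# pick up a contact shadow.
depth = np.full((8, 8), 0.8)
depth[:, :4] = 0.2
print(ssao_naive(depth).round(2))
```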

SSAO applied during the rendering process of Deus Ex: Human Revolution

While a clever approach and relatively fast, SSAO is still an approximation and not without its disadvantages. The biggest is that SSAO does not take direct lighting into account, so you sometimes get areas that are unnaturally dark despite being in full light. Smarter SSAO algorithms try to account for this, but they still fall short because the technique is a post-process effect.

Post-process means it is applied after the scene is rendered, at which point the ways in which the frame can be manipulated are limited. Imagine taking a photo and then trying to darken parts of it in Photoshop to, say, change a morning scene to an evening one. It’d be better if you’d just taken it at the right time instead.

This is where more sophisticated approaches are needed. One that’s received attention from studios such as Crytek is Screen Space Directional Occlusion (SSDO). The idea is to take into account the direction and strength of one or more light sources on a pixel and use that to determine the ambient occlusion level, rather than figuring it out from depth alone.

To calculate the occlusion term for each pixel, we store the world normals and scene depth in special buffers (or G-buffers, to use the proper jargon). A “world normal” is a directional vector that tells us which way a pixel is facing. The catch is that we have to save this information in a format the GPU can easily understand and process, so why not repurpose the colour channels of a texture — red, green and blue — as co-ordinate channels for X, Y and Z?
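
The packing trick itself is one line each way. A Python sketch (illustrative, not engine code):

```python
import numpy as np

def encode_normal(n):
    # Squash a unit vector's -1..1 components into a texture's 0..1 range.
    n = n / np.linalg.norm(n)
    return n * 0.5 + 0.5  # X, Y, Z become R, G, B

def decode_normal(rgb):
    # The lighting pass undoes the packing when it reads the G-buffer.
    n = rgb * 2.0 - 1.0
    return n / np.linalg.norm(n)

up = np.array([0.0, 1.0, 0.0])           # a floor pixel, facing straight up
print(encode_normal(up))                 # [0.5, 1.0, 0.5]: a pale green
print(decode_normal(encode_normal(up)))  # round-trips back to [0, 1, 0]
```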

Here’s what that looks like:

It’s pretty… in an acid trip sort of way. We also need depth, stored as a value between zero and one, which looks like this:

I’ve adjusted the scaling a bit so you can see the depth “gradient” better. Normally you’d see a mostly black image, as a depth buffer is non-linear, devoting more precision to pixels close to the camera and less to those further away.
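
If you want a feel for just how lopsided that storage is, here’s a small Python sketch that converts a stored depth value back to eye-space distance, assuming a D3D-style zero-to-one depth buffer and made-up near/far planes (conventions differ between platforms and engines):

```python
def linearise_depth(d, near=0.1, far=1000.0):
    # Convert a stored 0..1 depth sample back to eye-space distance,
    # assuming a D3D-style projection. near/far are made-up for the demo.
    return near * far / (far - d * (far - near))

# Most of the 0..1 range is spent within a few units of the camera:
for d in (0.0, 0.5, 0.9, 0.99, 0.999):
    print(f"stored {d:5}: {linearise_depth(d):8.2f} units away")
```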

With this information, we can compute the occlusion values for each pixel, which is also stored as a vector of sorts:

With the final occlusion “buffer”, we can more accurately represent how “blocked” a pixel is. And rather than applying the shadow on top of the finished image, which would give us that tell-tale “smudged charcoal” look of SSAO, we plug the calculated occlusion term directly into our lighting equation as the scene is rendered, instead of after.
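
Boiled down to its essence, the difference looks like this in Python. This is my own simplification of the SSDO idea, not Fear Equation’s shader: each hemisphere sample carries its own incoming light, and only the samples that pass the depth-buffer visibility test contribute:

```python
import numpy as np

def ssdo_direct(albedo, normal, sample_dirs, visible, incoming_light):
    # Each hemisphere sample carries its own incoming light; only the
    # directions the depth-buffer test found unblocked contribute. The
    # occlusion never gets applied 'on top' as a grey smudge.
    colour = np.zeros(3)
    for direction, vis, light in zip(sample_dirs, visible, incoming_light):
        if not vis:
            continue  # this direction is blocked: its light never arrives
        colour += albedo * light * max(np.dot(normal, direction), 0.0)
    return colour / len(sample_dirs)

# A pixel in a corner: the sample pointing at the wall fails the depth
# test, so the blue skylight from that side is lost, and the pixel is
# both darkened and tinted by what remains.
normal = np.array([0.0, 1.0, 0.0])
dirs = [np.array([0.7, 0.7, 0.0]), np.array([-0.7, 0.7, 0.0])]
vis = [True, False]
sky = [np.array([1.0, 0.9, 0.7]), np.array([0.4, 0.5, 1.0])]
print(ssdo_direct(np.array([1.0, 1.0, 1.0]), normal, dirs, vis, sky))
```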

The following three screenshots show a scene without ambient occlusion, SSAO and SSDO (in that order — click the images for larger versions):

I won’t say definitively that SSDO is the be-all, end-all of ambient occlusion techniques (real-time global illumination is where it’s at), but given SSDO’s more accurate representation of real lighting, we’re sticking with it.

While Fear Equation won’t win any awards for its graphics, we’re happy with how it’s turned out visually and it completely suits the style we’re going for. Hopefully players will appreciate the effort we’ve gone to in making the game look swell. Now we’re refining the fun part, which I hear is also important.

If the game piques your interest, look out for it later this year (barring delays from cosmic invasions or supernatural disasters).


In addition to his weekend work on Kotaku Australia, Logan Booker works as an independent developer at Screwfly Studios, along with David Kidd. Their first game was Zafehouse: Diaries, released in September 2012 and their second title was Deadnaut, published in November 2014. Fear Equation is Screwfly’s just-announced third title. You can follow Logan and David on Twitter, though they won’t be offended if you just check out their games instead.


