Even though the hardware's now available, most gamers have yet to get their hands on real-time ray tracing. Battlefield V became the first game out of the gate, with ray-traced reflections and more complex lighting.
But one of the more interesting techniques was deep learning super sampling (DLSS), which leverages neural networks to improve the quality and performance of anti-aliasing. It went live in Final Fantasy 15 last week, and the initial results are interesting.
DLSS's arrival in Final Fantasy XV came as a surprise, given Square Enix's announcement that future development on the PC version of FF15 had been cancelled. A version of the FF15 benchmark had been released publicly (and to press before that, to allow for GPU testing), but given the occlusion and LOD problems still present in the benchmark, and its opaque nature to begin with, it wasn't particularly useful.
But now that DLSS has been incorporated into FF15 proper, we can get a better sense of how deep learning might benefit real-world performance in games.
So, some basics. The same rig I used for testing the RTX 2080 Ti and RTX 2080 was used here, although I only ran the tests with the 2080 Ti for the sake of brevity. (It's been quite the week.)
The tests were run at 4K on the Highest preset, with the only change being whether DLSS was toggled on or off. HDR wasn't enabled for the purposes of testing.
To help illustrate the difference in motion, I used Nvidia Shadowplay to record the two videos below - one with DLSS running, and one without. I've encoded them both down to 1080p at 40Mbps so you can get some idea of the performance difference in two early cut-scenes. Factor in the small performance hit from recording - about 1 to 2% - and you'll get an idea of what the uninterrupted performance is like.
In the opening cut scene, which has a ton of fire effects (but few characters on screen), there's a 10-12fps improvement with DLSS enabled. When Noctis and his crew start pushing the Regalia around, the 2080 Ti hovers just under 60fps with DLSS disabled. With the neural network technique turned on, performance hovers around 65-68fps.
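To put those numbers in relative terms, here's a quick back-of-the-envelope calculation. The 58fps baseline and the 66.5fps DLSS figure are assumptions I've plugged in as midpoints of "just under 60fps" and the 65-68fps range quoted above, not exact measurements:

```python
# Rough percentage uplift from the frame rates quoted above.
# The specific input values are illustrative midpoints, not measured data.
def pct_uplift(baseline_fps: float, dlss_fps: float) -> float:
    """Return the frame-rate improvement as a percentage of the baseline."""
    return (dlss_fps - baseline_fps) / baseline_fps * 100

# e.g. ~58fps without DLSS vs. ~66.5fps with it
print(round(pct_uplift(58, 66.5), 1))  # roughly a 14-15% improvement
```

That's in the same ballpark as the uplift other outlets reported from the standalone FFXV benchmark.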
It's worth noting at this point that there's not just one possible implementation of Nvidia's AI-powered anti-aliasing. As revealed when the RTX cards launched earlier this year, developers can choose to implement a version of DLSS that concentrates on accuracy - better AA for images moving at high speed, for instance - or they can focus on performance.
On the left, DLSS is off. On the right, DLSS is enabled.
Anti-aliasing is the kind of thing you notice most in motion, or when details are zoomed in. So what I've done here is take a shot from the opening Hammerhead cut scene - where you're introduced to Cindy and Cid for the first time - and zoomed in on Prompto in the background.
Nvidia Hairworks is on in both scenes, it's worth noting. That's on by default as part of the Highest preset.
On the left is the effect without DLSS, and on the right with DLSS on. Here's the original scene without DLSS, for context. (If you're on desktop, click on one image and use the left/right arrows to do a quick before/after so you get a better idea of the changes.)
And with DLSS on:
The major difference with this scene is a change in the lighting. You can see a different, slightly nicer reflection in the windshield. There's some more contrast - and grime, perhaps? - on Cindy. The Regalia in the background is a little less flat, and the shadows on Gladiolus are a little darker when DLSS is enabled.
The "Regalia" detail in the bottom right of the image is a little more blurred with DLSS on, however. That's consistent with some early testing that came out of the FFXV benchmark when the RTX cards first launched, although it's not been until now that we've been able to get some in-game confirmation.
How much this was impacted by the game's time of day is an open question. The shot is taken from the first cut scene, where the player has had no control beyond pushing the Regalia forward in the Stand By Me sequence before the title card rolls, so the conditions should be the same. But it's the kind of thing I'd want to confirm with further testing.
For one final comparison, I ran outside of Hammerhead and got into a quick battle with some Sabretusks. Thanks to YouTube's compression, it'll be too difficult to gauge the difference in visual quality, but the performance benefit is obvious. Without DLSS, FFXV hovers under 50fps for most of the battle, often closer to 45fps.
When DLSS is enabled, the frame rate is around the 60fps mark. Take away the impact of recording gameplay through Shadowplay, and consider the headroom you'd gain by dropping down a preset or disabling Nvidia Hairworks, and you've got yourself a pretty nice looking game at 4K running at 60fps with room to spare.
I'm looking forward to doing a bit more of a deep dive when I get more time. But the performance benefits in FFXV are obvious. DLSS makes a noticeable difference, and the visual result is generally more pleasing as well - artifacting on hair is reduced, objects at distance are sharper, and character models pop out a little more.
It's not a technique that will work in every scene. But for the improvement in FPS? I'll take it. Now what can other developers do when they apply deep learning to their games?