When Nvidia launched their RTX GPUs, the cards shipped with a wealth of potential to leverage AI in different scenarios. One of those was deep learning super sampling (DLSS), an AI-powered anti-aliasing technique designed to improve frame rates at higher resolutions by using neural network upscaling. The technique worked in practice, but the hit to image quality varied from game to game. And there were other limitations, like only being able to use DLSS at certain resolutions. It all added up to a feature that sounded nice but was super limited, because Nvidia had to individually train their neural networks for every single game they wanted to support.
It wasn't efficient, and not many games supported DLSS as a result. But Nvidia has completely redone how their AI-powered anti-aliasing technique works, implementing a new system that will be applicable for all games going forward without the need to train networks individually.
Before we get into what the new technique is, a quick recap on DLSS. It's one function in Nvidia's deep neural network technology stack, specific to anti-aliasing (AA). Older AA techniques like temporal anti-aliasing (TAA) or multisample anti-aliasing (MSAA) have worked well enough, but because of how they work, they run into a ton of issues with games in motion, or performance problems at higher resolutions.
Temporal anti-aliasing in particular has a ton of accuracy problems, because it estimates what the image should look like based on data from previous frames. That's fine if you're dealing with slow, largely static scenes, but add small details in the background - text especially - or a lot of motion, and you'll get a ton of ghosting and blurred edges where there should be fine detail.
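To make the ghosting problem concrete, here's a toy sketch of the history blend at the heart of TAA. This is a deliberately simplified model in Python, not real renderer code: the pixel values, the blend weight and the moving "object" are all made up for illustration.

```python
def taa_blend(history, current, alpha=0.1):
    """Blend the new frame into an accumulated history buffer.
    TAA-style exponential averaging: mostly history, a little new frame."""
    return [(1 - alpha) * h + alpha * c for h, c in zip(history, current)]

# A bright "object" (1.0) on a dark background (0.0) moves one pixel to the
# right each frame. The history buffer keeps a fading trail behind it -
# that leftover signal is the ghosting you see on moving detail.
history = [0.0, 0.0, 0.0, 0.0]
for position in range(4):
    frame = [1.0 if i == position else 0.0 for i in range(4)]
    history = taa_blend(history, frame)

print(history)  # pixels the object already left are still non-zero
```

Static pixels converge to the right value after a few frames, which is why TAA looks fine on still shots; it's the fading trail behind the moving pixel that smears text and fine detail.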
DLSS is designed to correct that problem through the use of reference images. A supercomputer runs through tens of thousands of rendered images at a super high resolution to establish a ground truth. The RTX GPU then takes a frame of a game running at a lower resolution, and uses the ground truth as a basis for determining what the image should look like at a higher resolution, ideally letting a game rendered at 1440p look like native 4K without the extra overhead.
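That ground-truth training loop can be sketched in miniature. Everything below is a hypothetical toy, not Nvidia's network: a single interpolation weight stands in for the neural network, a short ramp of numbers stands in for the high-resolution reference images, and "training" is just picking the weight that best reproduces the ground truth.

```python
def upscale(low, w):
    """Double the length of a 1-D signal: keep each sample and insert an
    interpolated sample between neighbours, weighted by w."""
    out = []
    for a, b in zip(low, low[1:]):
        out.append(a)
        out.append((1 - w) * a + w * b)
    out.append(low[-1])
    return out

def downsample(high):
    """Take every other sample to simulate rendering at lower resolution."""
    return high[::2]

# The "supercomputer" step: sweep the weight and keep whichever value
# minimises the squared error against the high-resolution ground truth.
ground_truth = [0.0, 0.5, 1.0, 1.5, 2.0]   # a smooth ramp at "high res"
low_res = downsample(ground_truth)          # [0.0, 1.0, 2.0]
best_w = min((w / 100 for w in range(101)),
             key=lambda w: sum((u - g) ** 2
                               for u, g in zip(upscale(low_res, w),
                                               ground_truth)))

print(best_w, upscale(low_res, best_w))
```

The real version replaces that single weight with a deep network and the ramp with tens of thousands of rendered frames, but the shape of the process is the same: fit once against high-resolution references offline, then run the cheap fitted model on low-resolution frames at play time.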
The benefit for most people is that, generally, DLSS comes with a sizeable FPS improvement. How much varies from game to game. In Metro Exodus, the FPS jump was barely there and certainly not worth the bizarre hit to image quality. In Final Fantasy XV, the game was perfectly playable at 4K/60fps as a result, although being the first implementation of DLSS, there were some visual quirks, like you can see in this shot of Prompto.
And that's generally been the problem with DLSS. You get more FPS, but it comes at the cost of artifacting, shimmering and other graphical quirks that result in a lower quality image than if you were just playing at a higher resolution outright. It's not the win-win scenario Nvidia originally pitched back at the RTX launch, but it's not a loss for gamers either: it's just a matter of picking the tradeoff that suits you best.
However, that might be about to change.
The Australians at Hardware Unboxed - you've seen some of their work here occasionally, as they're also writers for Techspot - had a lengthy conversation with Nvidia recently, asking about the implementation of DLSS in Control and, more importantly, Wolfenstein Youngblood and Deliver Us The Moon. Youngblood is especially fascinating, not because it's a great game, but because it's the first major game with the new version of DLSS.
According to conversations between Hardware Unboxed and Nvidia, the first iteration of the next stage of DLSS - or what's being called "an approximation of the work in progress AI network" - appeared in Remedy's Control. The key technical point is that Control's implementation of DLSS didn't actually use the special tensor cores that are exclusively part of Nvidia's RTX cards, but regular shader cores that you can find on any GPU.
That in and of itself is huge, because it opens up the possibility of DLSS as a technique being expanded to non-RTX cards (everyone with a 10-series Pascal card or a GTX 16-series card like the 1660 Ti). But Control still suffered from some of the legacy DLSS problems: artifacting and shimmering in fine details, particularly in cut-scenes with close-ups of Jesse's hair, and you could only use DLSS if you were playing at 1440p or 4K.
The new implementation of DLSS, which Nvidia are just calling DLSS but Hardware Unboxed (rightly) suggests should be called DLSS 2.0, has no such restrictions. The new SDK has removed all resolution requirements, so you can use DLSS at 1080p. Nvidia are also using a generalised training technique for DLSS going forward, so games don't have to be trained individually with a supercomputer, massively broadening the range of titles that DLSS will be able to support.
Unfortunately, according to Nvidia's engineers, the future version of DLSS will still require the specialised tensor cores on RTX GPUs. Their argument was that the AI cores offer better potential performance and image quality. Control's implementation was also supposedly hugely difficult, and it wouldn't be applicable across other types of games, so Nvidia wanted a model they could more easily adapt to future games through Game Ready driver updates, rather than needing updates from the developers themselves.
If you've played Youngblood and you have an RTX card, chances are you'll have seen the changes in how DLSS appears in the options. There are quality, balanced and performance options for DLSS, and the image quality is much closer to a native 4K image. It's not identical, but for most gamers it's pretty damn good, and somewhere between 25 and almost 50 percent better average FPS at 4K is a huge win. If you're running at 1080p or 1440p, however, you're best off sticking with the Quality preset.
All in all, it's a great evolution in how DLSS works. Being able to use techniques like this at lower resolutions will also be handy for anyone trying to get the most out of 240Hz monitors, or ones with even higher refresh rates. And it's what gamers want hardware companies to do: continue iterating on their algorithms and feature set for existing owners.
As the next generation cycles in, and people start pushing for 4K games to run at 120fps or higher, techniques like DLSS are going to be key to hitting those framerates. It'll be really interesting to check back in with DLSS later this year, if the rumoured Nvidia GPUs on a new, smaller process node finally launch - and once gamers have more games using the new DLSS implementation.