Nvidia’s RTX 3080 Ti Is Sounding Like An Absolute Monster

After lots of COVID-19-related delays, Nvidia is finally set to lift the lid on its Ampere series of GPUs. But before CEO Jensen Huang takes the virtual stage, a ton of information has already leaked about the company’s next gaming flagship … and some of the features sound downright bonkers.

Details about Ampere, Nvidia’s next generation of GPUs, are due to be announced in a virtual keynote just before midnight tonight. The address was originally meant for the GPU Technology Conference (GTC), which was scheduled for March 23 until the coronavirus pandemic put a hold on everything.

GTC typically isn’t a place where gaming takes prominence, but there are a few clues about what to expect. Nvidia’s press release, for one, says “Get Amped”, an obvious nod to the name of the company’s next-gen architecture (Ampere). Innovations in AI have increasingly found their way into consumer gaming GPUs, however, most prominently with the advancement of the neural network-powered deep learning super sampling (DLSS) that’s been used so effectively in Control and Wolfenstein: Youngblood.

But what’s been really juicy is a string of leaks about the RTX 3000 series, and the RTX 3080 Ti in particular. I’ll start with the infrastructural rumours, because those sound supremely cool and are also less likely to be bullshit than CUDA core counts, clock speeds, and what have you.

YouTuber Moore’s Law Is Dead posted about half an hour of information this week, revealing that the Ampere generation will feature something called NVCache. The idea is that the GPU will be able to dynamically use bandwidth from your SSD, VRAM and system memory to improve load times. In principle it’s a little similar to what the next-gen consoles are doing, although because the PC is a far more open platform than the consoles, there are going to be a lot more question marks about how effective it is in practice.

The tensor cores on the Ampere generation are supposedly getting more utility, too. They’ll reportedly be able to compress and decompress data stored in VRAM, which is designed to cut VRAM usage in instances where you want to play at, say, 4K. The Ampere cards are also said to be vastly more efficient at ray tracing, with one source telling Moore’s Law Is Dead that the upcoming GA102 Ampere GPU – the chip most likely to be found on the RTX 3080 Ti – was four to five times faster in Minecraft RTX than a Titan RTX card.

DLSS is supposedly getting an upgrade as well, although like the current implementation it’ll only work with a given game once a Game Ready driver has shipped for it. It’s still much easier for Nvidia to support in new games, thanks to the improvements made in DLSS 2.0, which you can read about below. But the interesting rumour here is that DLSS 3.0 will be enabled by default in games, and that Nvidia will encourage benchmarking sites (and supposedly reviewers/press/influencers) to test all games with DLSS 3.0 enabled.

That one wouldn’t surprise me a great deal. Playing through Control’s latest DLC, I saw about a 30 percent FPS improvement on my rig at 1440p, and the image quality – while not as sharp or accurate as native 1440p – was good enough that I preferred to play with DLSS enabled. When Control was first seeded out to reviewers, there was an FPS improvement with DLSS, but stuttering and the very noticeable tradeoffs to image quality (like artifacting around Jesse Faden’s hair in cutscenes) meant the game was more enjoyable with DLSS disabled. If DLSS 3.0 continues that level of improvement, it’d be weird for Nvidia not to encourage benchmarks with it enabled: everyone wants to look as good as possible, and PC gamers will almost always take significantly higher frame rates if the tradeoff is minimal or at least acceptable.

[referenced url=”https://www.kotaku.com.au/2020/02/nvidia-rtx-dlss-quietly-got-a-hell-of-a-lot-better/” thumb=”https://www.kotaku.com.au/wp-content/uploads/sites/3/2019/08/control-final-4-410×231.jpg” title=”Nvidia Very Quietly Made DLSS A Hell Of A Lot Better” excerpt=”When Nvidia launched their RTX GPUs, the cards shipped with a wealth of potential to leverage AI in different scenarios.”]

What’s really juicy, though, is some of the raw specs. The current rumour is that the GPU is now heavily bottlenecked by the CPU at 1080p and 1440p, which would make 4K the target resolution even for mid-range builds. The 3080 Ti itself is tipped to have over 5000 CUDA cores, around 40 percent more memory bandwidth than the 2080 Ti, and a 40 percent boost in 4K rasterisation performance in “unfavourable games”. Boost clocks would hit around 2.2GHz on the flagship Ampere gaming GPUs, with the cut-down versions hitting even higher clocks thanks to the power efficiencies gained from the jump to a 7nm process.
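
To put that bandwidth rumour in context, here’s a quick back-of-the-envelope sketch in Python. The 2080 Ti baseline (352-bit bus, 14Gbps GDDR6) is a published spec; the 3080 Ti figure is just the rumoured 40 percent uplift applied to it, not a confirmed number.

```python
# Back-of-the-envelope GPU memory bandwidth maths:
# bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps)

def bandwidth_gb_s(bus_width_bits: int, mem_speed_gbps: float) -> float:
    """Peak theoretical memory bandwidth in GB/s."""
    return bus_width_bits / 8 * mem_speed_gbps

# RTX 2080 Ti stock: 352-bit bus, 14Gbps GDDR6
rtx_2080_ti = bandwidth_gb_s(352, 14)     # ~616 GB/s

# Rumoured ~40 percent uplift for the 3080 Ti (assumption, not a spec)
rumoured_3080_ti = rtx_2080_ti * 1.4      # ~862 GB/s

print(f"RTX 2080 Ti:        {rtx_2080_ti:.0f} GB/s")
print(f"Rumoured 3080 Ti:   {rumoured_3080_ti:.0f} GB/s")
```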

Ampere’s feature set is getting a bump too. The NVENC hardware encoder will reportedly allow for 8K/60 encoding with H.264 and H.265, and the super-impressive RTX Voice will graduate from beta. NVENC is already excellent for streamers, and extra efficiency will always be welcome, particularly by lower-tier streamers or anyone stuck on an internet connection that caps their maximum upload bitrate (basically everyone in Australia).
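
For a sense of how streamers already lean on NVENC, here’s a minimal sketch that shells out to FFmpeg’s hevc_nvenc encoder from Python to re-encode a clip at a capped bitrate. The file names and bitrate are placeholders, and it assumes an FFmpeg build compiled with NVENC support plus an RTX-class GPU and a recent driver.

```python
# Minimal sketch: offloading an H.265 encode to NVENC via FFmpeg.
# Assumes ffmpeg is on PATH and was built with --enable-nvenc.
import subprocess

def nvenc_encode(src: str, dst: str, bitrate: str = "6M") -> None:
    """Re-encode a clip on the GPU with HEVC NVENC at a capped bitrate."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", src,              # input recording
            "-c:v", "hevc_nvenc",   # H.265 on the NVENC hardware block
            "-b:v", bitrate,        # cap video bitrate for upload limits
            "-c:a", "copy",         # leave the audio stream untouched
            dst,
        ],
        check=True,                 # raise if ffmpeg exits with an error
    )

# Hypothetical usage with placeholder file names
nvenc_encode("gameplay.mkv", "gameplay_hevc.mp4")
```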

With all that in mind, tonight’s keynote should be interesting viewing. It’s worth noting that we’re unlikely to get a lot of detail around specific GPUs, because that’s not what Nvidia uses GTC for: it’s a conference about AI, advancements in graphics and architectural improvements. The gaming GPUs probably won’t launch until late September or early October, going off the late-September 2018 release of the RTX 2080 Ti and RTX 2080. Moore’s Law Is Dead added that there will be a Game Ready driver for Cyberpunk 2077 at launch, although that’s not much of a surprise – it’s still the biggest game of the year at this point, unless GTA 6 gets a surprise 2020 release date.

