Nvidia's RTX 3080 Ti Is Sounding Like An Absolute Monster


After plenty of COVID-19-related delays, Nvidia is finally set to lift the lid on their Ampere series of GPUs. But before the company's CEO even takes the (virtual) stage, a ton of information has already leaked about the next gaming flagship ... and some of the features sound downright bonkers.

Details about Ampere, Nvidia's next generation of GPUs, are due to be announced in a virtual keynote just before midnight tonight. The address was meant to open the GPU Technology Conference (GTC), which was scheduled for March 23 until the coronavirus pandemic put everything on hold.

GTC typically isn't a place where gaming takes prominence, but there are a few clues about what to expect. The release, for one, says "Get Amped", an obvious nod to the name of the company's next-gen architecture (Ampere). And innovations in AI have increasingly found their way into consumer gaming GPUs, most prominently with the advancement of the neural network-powered deep learning super sampling (DLSS) that's been used so effectively in Control and Wolfenstein: Youngblood.

But what's been really juicy is the string of leaks about the RTX 3000 series, and the RTX 3080 Ti in particular. I'll start with the infrastructure rumours, because they sound supremely cool and are also less likely to be bullshit than CUDA core counts, clock speeds and what have you.

YouTuber Moore's Law Is Dead posted about half an hour of information this week, claiming that the Ampere generation will feature something called NVCache. The idea is that the GPU will be able to dynamically use bandwidth from your SSD, VRAM and system memory to improve load times. In principle it's a little similar to what the next-gen consoles are doing, although because the PC is a much more open platform than the consoles, there are going to be a lot more question marks about how effective this is in practice.

The tensor cores on the Ampere generation are supposedly getting more utility, too. They'll reportedly be able to compress and decompress data stored in VRAM, which is designed to cut VRAM usage in instances where you want to play at, say, 4K. The Ampere cards are also said to be vastly more efficient at ray tracing, with one source telling Moore's Law Is Dead that the upcoming GA102 Ampere GPU - the chip most likely to be found on the RTX 3080 Ti - was four to five times faster in Minecraft RTX than a Titan RTX card.

DLSS is supposedly getting an upgrade as well, although like the current implementation it'll only work with games once a Game Ready driver has shipped. It's still much easier for Nvidia to implement in new games, thanks to the improvements made in DLSS 2.0, which you can read about below. But the interesting rumour here is that DLSS 3.0 will be enabled by default in games, and that Nvidia will encourage benchmarking sites (and supposedly reviewers/press/influencers) to test all games with DLSS 3.0 enabled.

That one wouldn't surprise me a great deal. Playing through Control's latest DLC, I saw about a 30 percent FPS improvement on my rig at 1440p with DLSS enabled, and the image quality - while not as sharp or accurate as native 1440p - was good enough that I preferred to play with DLSS on. When Control was first seeded out to reviewers, there was an FPS improvement with DLSS, but stuttering and very noticeable tradeoffs to image quality (like artefacting around Jesse Faden's hair in cutscenes) meant the game was more enjoyable with DLSS disabled. If DLSS 3.0 continues that level of improvement, it'd be weird for Nvidia not to encourage benchmarks with it enabled: everyone wants to look as good as possible, and PC gamers will almost always prefer significantly higher frame rates if the tradeoff is minimal or at least acceptable.
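
To put that 30 percent in concrete terms, here's a trivial back-of-the-envelope sketch. The only real figure is the roughly 30 percent uplift from my own Control runs; the 60 fps baseline is just a hypothetical round number for illustration.

```python
# A ~30 percent DLSS uplift, in frame rate and frame time terms.
# The 30 percent figure is from my Control runs at 1440p; the 60 fps
# baseline is a hypothetical round number, not a measured result.

baseline_fps = 60.0              # assumed native 1440p frame rate
dlss_fps = baseline_fps * 1.30   # ~30 percent improvement with DLSS on

print(f"{baseline_fps:.0f} fps -> {dlss_fps:.0f} fps")
print(f"Frame time: {1000 / baseline_fps:.1f} ms -> {1000 / dlss_fps:.1f} ms")
# 60 fps -> 78 fps; 16.7 ms -> 12.8 ms per frame
```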

Nvidia Very Quietly Made DLSS A Hell Of A Lot Better

When Nvidia launched their RTX GPUs, the cards shipped with a wealth of potential to leverage AI in different scenarios.

Read more

What's really juicy, though, is some of the raw specs. The current rumour is that the GPU is so fast it's heavily bottlenecked by the CPU at 1080p and 1440p, which would push 4K towards becoming the standard even for mid-range gaming PCs. The 3080 Ti itself is tipped to have over 5000 CUDA cores, around 40 percent more memory bandwidth than the 2080 Ti, and a 40 percent boost in 4K rasterisation performance in "unfavourable games". Boost clock speeds would hit around 2.2GHz on the flagship Ampere gaming GPUs, with the cut-down versions hitting even higher clocks thanks to the power efficiencies gained by jumping to the 7nm process.
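
For a sense of scale, here's a quick arithmetic sketch of what those rumoured figures imply. The percentages and the "over 5000 cores" claim come from the leak; the baseline numbers are the RTX 2080 Ti's published specs; everything else is illustration.

```python
# What the rumoured uplifts imply against the RTX 2080 Ti's published specs.
# The 40 percent bandwidth figure and the 5000-core floor come from the leak;
# the 2080 Ti baseline numbers are Nvidia's official specs.

rtx_2080_ti_cuda_cores = 4352        # published spec
rtx_2080_ti_bandwidth_gb_s = 616     # 352-bit GDDR6 at 14Gbps

rumoured_bandwidth_uplift = 0.40     # "around 40 percent more memory bandwidth"
rumoured_cuda_core_floor = 5000      # "over 5000 CUDA cores"

implied_bandwidth = rtx_2080_ti_bandwidth_gb_s * (1 + rumoured_bandwidth_uplift)
implied_core_uplift = rumoured_cuda_core_floor / rtx_2080_ti_cuda_cores - 1

print(f"Implied memory bandwidth: ~{implied_bandwidth:.0f} GB/s")              # ~862 GB/s
print(f"CUDA core uplift at the 5000-core floor: ~{implied_core_uplift:.0%}")  # ~15%
```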

Ampere's feature set is getting a bump too. The NVENC hardware encoder will reportedly handle 8K/60 video in H.264 and H.265, and the super-impressive RTX Voice will be updated out of beta. NVENC is already excellent for streamers, and extra encoding efficiency will always be welcomed, particularly by lower-tier streamers or those stuck on internet connections that cap the maximum bitrate they can upload (basically everyone in Australia).
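
As a rough illustration of why that efficiency matters on a capped upload, here's a small sketch. The roughly 40 percent bitrate saving for H.265 over H.264 at comparable quality is a common rule of thumb rather than a measured figure, and the bitrates below are arbitrary example numbers, not recommendations.

```python
# Why encoder efficiency matters on a capped upload: a rough sketch.
# The ~40 percent H.265-over-H.264 saving is a rule-of-thumb assumption,
# and the bitrates are illustrative only.

upload_cap_mbps = 6.0      # hypothetical maximum sustainable upload
h264_1080p60_mbps = 8.0    # rough bitrate for decent 1080p60 H.264
hevc_saving = 0.40         # assumed H.265 efficiency gain over H.264

hevc_equivalent_mbps = h264_1080p60_mbps * (1 - hevc_saving)

print(f"H.264 1080p60 target: {h264_1080p60_mbps:.1f} Mbps (over the {upload_cap_mbps:.1f} Mbps cap)")
print(f"H.265 equivalent:     {hevc_equivalent_mbps:.1f} Mbps (fits under the cap)")
```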

With all that in mind, tonight's keynote should be interesting viewing. It's worth noting that we're unlikely to get a lot of detail on specific GPUs, because that's not what Nvidia uses GTC for: it's a conference about AI, advancements in graphics and architectural improvements. The gaming GPUs likely won't be released until late September or early October, going off the September 2018 launch of the RTX 2080 and RTX 2080 Ti. Moore's Law Is Dead added that there will be a Game Ready driver for Cyberpunk 2077 at launch, although that's not much of a surprise - it's still the biggest game of the year at this point, unless GTA 6 gets a surprise 2020 release date.


Comments

    Awesome, but the price.... I bought a 2080 RTX soon after launch from the US, it was a good deal and is STILL a good deal. The price has stayed pretty high.

    These are really strong cards but, I mean, it's obvious that Tis especially are for enthusiasts only; the price will be high. I'm guessing north of $1600 for a 3080 Ti, which for everyone who has anything north of a 970 is a real cause for consideration, because all of those cards are very strong at 1080p and 1440p.

    It's exciting though. Cool to see improvement is still coming down strong.

      I'm guessing north of $1600 for a 3080 TI

      I wish... I am predicting around $2500 and closer to $3000 for the higher end OC versions, possibly higher.

      2080 Tis are going for around $2200 to $2500 now. Supply lines are being affected by Covid, but it's not going away anytime soon, and it's possible the suspected September release will still be affected.

        Yeah. I checked after making that comment. I felt a bit silly buying a 2080 at the time, but I got lucky. The prices now.... so basically you can build a really decent 1440p machine for the price of one 3080 Ti..... pretty mad. But it's an enthusiast part, I guess. It's like complaining about the price of Ferraris. They're not for everyone.

          I reckon it'll partially depend on what AMD comes out with. Maybe $2499 at launch, dropping down to $1899 by end of year.

            Might be a decent article idea? The price of GFX cards has gone ballistic in the last few generations, and it doesn't feel like performance has increased anywhere near the same level. For example, 1080 TI vs 2080 TI in any non-ray traced application.

              It will all depend on AMD's Vega 2. If Vega 2 is fast enough, the RTX 3080 Ti will be $1000. If not, $1200.

            It really doesn't matter what AMD comes out with. AMD has not been competitive in the high-end market... well, ever. And there's no indication of that changing any time soon. The 2080 Ti launched in the $2200-2500 range and that's where the price has remained. The 3080 Ti will take that price point or increase it, since there will be nothing to rival it.
            I would like nothing more than to be proven wrong, since a 3080 Ti would make a nice upgrade from the 1080 Ti, but there's little chance that will be the case :(

        Holy moly..... likely similar pricing to a PS5 and a very nice 65 inch 4K TV.

    Real-time H.265 encoding is nice and all, but it's kind of useless until Twitch supports H.265 ingest.

    The DLSS 3.0 slide in the MLID video is contradictory as hell, and a lot of the other info seems a touch haphazard.

    It'd be cool if we do get that big of a jump, but I'm filing this under FUD personally.

    So $5-6000aud for a GPU, lol, who cares, that's ridiculous, only morons will buy this.
