Some Wild Nvidia GPU Benchmarks Appear

It’s still a good few months before any new graphics cards are likely to be announced. Still, we know they’re coming: AMD have been talking up “Big Navi,” their first card with real-time ray tracing support, while Nvidia’s line has been due for a process shrink for a couple of years.

And while rumours have been floating around ahead of AMD’s Investor Day, two new Nvidia cards have popped up on the Geekbench database.

If you’re unfamiliar, Geekbench is one of the more common synthetic benchmarks for desktops and mobiles. It’s not generally a GPU-focused benchmark, but like most synthetic tests, it does report a whole bunch of information about the system being tested.

Which is why it was so interesting when one user discovered a couple of new entries in Geekbench’s database: a 32GB GPU with 124 multiprocessors, which would work out to about 7,936 CUDA cores, and a 24GB GPU with 118 multiprocessors, which equates to 7,552 CUDA cores.
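For the curious, here's a minimal sketch of the arithmetic behind those core counts. It assumes 64 CUDA cores per streaming multiprocessor, as on Nvidia's current Turing architecture – an assumption, since the per-SM layout of an unannounced GPU isn't public:

```python
# Rough estimate of total CUDA cores from the multiprocessor (SM)
# count that Geekbench reports. Assumes 64 cores per SM, as on
# Turing -- a guess for unreleased silicon.
CORES_PER_SM = 64

def cuda_cores(multiprocessors: int) -> int:
    """Estimate total CUDA cores from a reported SM count."""
    return multiprocessors * CORES_PER_SM

print(cuda_cores(124))  # 7936 -- the 32GB entry
print(cuda_cores(118))  # 7552 -- the 24GB entry
```

If the next architecture packs a different number of cores per SM, the totals scale accordingly, which is another reason to treat these figures as ballpark numbers.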

The only device Nvidia has on the market with 32GB of RAM at the moment is the Tesla V100 GPU, a data centre product designed for AI training, inferencing, computational science and the kind of data crunching that regular users will never, ever do. The RTX 2080 Ti, which is the top of the stack for most regular gamers, has only 4,352 CUDA cores and 11GB of GDDR6 RAM.

There are obviously a few caveats with the Geekbench listing, primarily the fact that the core clocks/speeds wouldn’t be final, particularly given when the test was uploaded. The memory wouldn’t be final either – Nvidia has been using GDDR6 in their consumer cards and HBM2 in their data centre offerings. If Nvidia did go with something as monstrous as this, it likely wouldn’t be part of the RTX 30 series, but the next generation of data centre GPUs, with the gaming GPUs based on cut-down versions of the same chip. It’s even possible that the next-generation GPUs will have fewer CUDA cores and multiprocessors than what’s on the market today, if the performance gains from a process shrink, power efficiencies (enabling higher clock speeds) and a better architecture are substantial enough.

The Geekbench results also don’t tell us anything about the research Nvidia has been doing into chiplet designs, something they’ve been investigating for years. “If it became economically the right thing to do to assemble GPUs from multiple chiplets, we basically have de-risked the technology. Now it’s a tool in the toolbox for a GPU designer,” Nvidia chief scientist Bill Dally said last year.

Still, it’s fun to ruminate on. Plenty of people are pondering a GPU upgrade this year wholly for Cyberpunk 2077, and honestly I can’t blame them. But what would you want to see from a next-gen GPU that the RTX 2080 Ti, 2080 or 2070 doesn’t currently offer?

For me, 4K / 120fps would be the benchmark. It’ll be the minimum we’ll expect from some games on the next-gen consoles, and it’d be a shame if the PCs of 2020 couldn’t match at least that without too many tradeoffs.


