Some Wild Nvidia GPU Benchmarks Appear

We’re still a good few months away from any new graphics cards being announced. Still, we know they’re coming: AMD have been talking up “Big Navi,” their first card with real-time ray tracing support, while Nvidia’s line has been due for a process shrink for a couple of years.

And while rumours have been floating around ahead of AMD’s Investor Day, two new Nvidia cards have popped up on the Geekbench database.

If you’re unfamiliar, Geekbench is one of the more common synthetic benchmarks for desktops and mobiles. It’s not generally a GPU-focused benchmark, but like most synthetic tests, it does report a whole bunch of information about the system being tested.

Which is why it’s so interesting that one user discovered a couple of new entries in Geekbench’s database: a 32GB GPU with 124 multiprocessors, which would work out to about 7,936 CUDA cores, and a 24GB GPU with 118 multiprocessors, which equates to 7,552 CUDA cores.
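Those core counts come from the usual ratio of 64 CUDA cores per streaming multiprocessor (SM) in recent Nvidia architectures like Volta and Turing. A quick sketch of the arithmetic, assuming that ratio holds for whatever these chips are:

```python
# Assumption: 64 CUDA cores per SM, as in Volta/Turing-class GPUs.
# Future architectures could change this ratio.
CUDA_CORES_PER_SM = 64

def cuda_cores(sm_count: int) -> int:
    """Estimate total CUDA cores from the SM count reported by Geekbench."""
    return sm_count * CUDA_CORES_PER_SM

print(cuda_cores(124))  # 32GB listing -> 7936
print(cuda_cores(118))  # 24GB listing -> 7552
```

For comparison, the same arithmetic on the RTX 2080 Ti’s 68 SMs gives its 4,352 cores.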

The only device Nvidia has on the market with 32GB of RAM at the moment is the Tesla V100 GPU, a data centre product designed for AI training, inferencing, computational science and the kind of data crunching that regular users will never, ever do. The RTX 2080 Ti, which is the top of the stack for most regular gamers, has only 4,352 CUDA cores and 11GB of GDDR6 RAM.

There are obviously a few caveats with the Geekbench listing, primarily that the core clocks and speeds wouldn’t be final, particularly given when the test was uploaded. The memory wouldn’t be final either – Nvidia has been using GDDR6 in their consumer cards and HBM2 in their data centre offerings. If Nvidia did go with something as monstrous as this, it likely wouldn’t be part of the RTX 30 series, but the next generation of data centre GPUs, with the gaming GPUs based on cut-down versions of the same chip. It’s even possible that the next-generation GPUs will have fewer CUDA cores and multiprocessors than what’s on the market today, if the performance gains from a process shrink, power efficiencies (enabling higher clock speeds) and better architecture are substantial enough.

The Geekbench results also don’t tell us anything about the research Nvidia has been doing into chiplet designs, something they’ve been investigating for years. “If it became economically the right thing to do to assemble GPUs from multiple chiplets, we basically have de-risked the technology. Now it’s a tool in the toolbox for a GPU designer,” Bill Dally, Nvidia’s chief scientist, said last year.

Still, it’s fun to ruminate on. Plenty of people are pondering a GPU upgrade this year wholly for Cyberpunk 2077, and honestly I can’t blame them. But what would you want to see from a next-gen GPU that the RTX 2080 Ti, 2080 or 2070 doesn’t currently offer?

For me, 4K / 120fps would be the benchmark. It’ll be the minimum we’ll expect from some games on the next-gen consoles, and it’d be a shame if the PCs of 2020 couldn’t match at least that without too many tradeoffs.


  • And no doubt they still will be out of most people’s price ranges.
    They don’t need to compete with anyone so they price gouge the crap out of the top of the range cards.

    • These are 100% new Tesla machine learning cards so yes they will be outside of everyone but corporations price range.

      But a stripped down 12GB 6000-7000 CUDA 3080TI could be possible, it will probably cost more than the 2080TI because, you know, NVIDIA.

  • I’m one of those who has been upgrading just for Cyberpunk 2077. I mean sure, other games too, but it’s one of the main ones. So far I’ve upgraded my mobo, cpu and ram, I’ve got a brand new fancy monitor for 144hz, and I’m waiting for the reveal of Ampere to decide on my gpu. Exciting times ahead.

  • Plenty of people are pondering a GPU upgrade this year wholly for Cyberpunk 2077

    I was looking over the pics on the Steam page for Cyberpunk 2077 and some of them hopefully aren’t indicative of the final product – one pic makes it look like an early Xbox 360 game.

  • …and honestly I can’t blame them.


    For me, 4K / 120fps would be the benchmark.

    Nah, I’ve given up on chasing more pixels. Diminishing returns. I stick to 1080p (not enlarged displays, to maximize density) and get rock-solid 144 fps. I’ll probably justify the expense of an upgrade once lighting, rather than resolution, can be improved, which I guess is what ray-tracing is attempting to sell us on. Give me 1080p 144Hz RTX over 1440p/4k 144Hz non-RTX and I’ll pay up.
