About three years ago we got our first look at Nvidia’s Kepler architecture, which powered the GTX 680, a $US500 card packing 3.54 billion transistors and 192GB/s of memory bandwidth. It was the top of the line for the GeForce 600 series, but it wasn’t the end for the Kepler architecture.
While we eagerly awaited next-gen GeForce 700 cards, Nvidia dropped the GeForce GTX Titan, wielding 7.08 billion transistors for an unwieldy price of $US1,000. The Titan instantly claimed king of the hill, and even though the Radeon R9 290X brought similar performance for half the price six months later, Nvidia refused to budge on the Titan’s MSRP.
This was every gamer’s dream GPU for half a year, but its fate was sealed when the GTX 780 Ti shipped many months later (Nov/13), offering more CUDA cores at a more affordable $US700.
Although the GTX Titan was great for gaming, that wasn’t the sole purpose of the GPU, which was equipped with 64 double-precision cores per SMX for 1.3 teraflops of double-precision performance. A capability previously only found in Tesla workstations and supercomputers, this made the Titan ideal for students, researchers and engineers after consumer-level supercomputing performance.
A year after the original Titan’s release, Nvidia followed up with a full 2880-core version known as the Titan Black, which boosted the card’s double-precision performance to 1.7 teraflops. A month later, the GTX Titan Z put two Titan Blacks on one PCB for 2.7 teraflops of compute power, though this card never made sense at $US3000 — triple the Titan Black’s price.
Since then, the Maxwell-based GeForce 900 series has arrived, with the GTX 980’s unbeatable performance-per-watt ratio leading the charge as today’s undisputed single-GPU king. The GTX 980’s modest 2048 cores and 5.2 billion transistors fit in a 398mm2 die, making it 29 per cent smaller with 26 per cent fewer transistors than the flagship Kepler parts.
We knew there was more to come from Maxwell, and here it is. Six months after the GTX 980, Nvidia is back with the GeForce GTX Titan X, a card that’s bigger and more complex than any other. However, unlike previous Titan GPUs, the new Titan X is designed exclusively for high-end gaming and as such offers compute performance similar to the GTX 980.
Announced at GDC, there’s plenty to be psyched about: headline features include 3072 CUDA cores, 12GB of GDDR5 memory running at 7Gbps, and a whopping 8 billion transistors. At its peak, the GTX Titan X will deliver 6600 GFLOPS single precision and 206 GFLOPS double precision processing power.
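Those headline GFLOPS figures fall straight out of the core count and clock speed. A quick sketch of the arithmetic, assuming the typical 1075MHz boost clock quoted later in the review and two operations per core per clock (one fused multiply-add):

```python
# Deriving the Titan X's quoted peak throughput from its spec sheet.
CUDA_CORES = 3072
BOOST_CLOCK_GHZ = 1.075
OPS_PER_CORE_PER_CLOCK = 2      # a fused multiply-add counts as two ops

sp_gflops = CUDA_CORES * OPS_PER_CORE_PER_CLOCK * BOOST_CLOCK_GHZ
dp_gflops = sp_gflops / 32      # double precision runs at 1/32 the SP rate

print(round(sp_gflops))  # 6605, matching the quoted 6600 GFLOPS
print(round(dp_gflops))  # 206 GFLOPS double precision
```

The 1/32 double-precision ratio is the same one Nvidia quotes for the GTX 980’s GM204, which is why the Titan X is pitched at gamers rather than compute users.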
Nvidia reserved pricing information until the last minute as it delivered the opening keynote at its GPU Technology Conference — unsurprisingly the Titan X will be $US999. But without getting bogged down in how stupid that was — let’s focus on the fact that we get to show you how the GTX Titan X performs and that it’s a hard launch with availability expected today.
Titan X’s GM200 GPU in Detail
The GeForce Titan X is a processing powerhouse. The GM200 chip carries six graphics processing clusters and 24 streaming multiprocessors housing 3072 CUDA cores (single precision).
As noted earlier, the Titan X features a core configuration of 3072 SPUs which take care of pixel/vertex/geometry shading duties, while texture filtering is performed by 192 texture units. With a base clock frequency of 1000MHz, the texture filtering rate is 192 Gigatexels/sec, over 33 per cent higher than the GTX 980. The Titan X also ships with 3MB of L2 cache and 96 ROPs.
The memory subsystem of the GTX Titan X consists of six 64-bit memory controllers (384-bit in total) feeding 12GB of GDDR5 memory. The 384-bit wide memory interface and 7GHz memory clock deliver 336.5GB/sec of peak memory bandwidth, 50 per cent higher than the GTX 980.
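The fill-rate and bandwidth figures are simple products of unit counts and clocks. As a sanity check, here is the arithmetic, with the GTX 980’s reference numbers (128 texture units at a 1126MHz base clock, 256-bit bus at 7Gbps) assumed for the comparison; the nominal 7Gbps figure yields 336GB/s, while the quoted 336.5GB/s reflects the exact effective memory clock:

```python
# Verifying the quoted fill-rate and bandwidth deltas over the GTX 980.
titan_x_texel_rate = 192 * 1.000   # 192 TMUs x 1.0GHz base = 192 GT/s
gtx_980_texel_rate = 128 * 1.126   # 128 TMUs x 1.126GHz base

titan_x_bandwidth = 384 / 8 * 7.0  # 384-bit bus x 7Gbps GDDR5 = 336 GB/s
gtx_980_bandwidth = 256 / 8 * 7.0  # 256-bit bus x 7Gbps GDDR5 = 224 GB/s

print(titan_x_texel_rate / gtx_980_texel_rate)  # ~1.33, "over 33 per cent"
print(titan_x_bandwidth / gtx_980_bandwidth)    # 1.5, "50 per cent higher"
```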
And with its massive 12GB of GDDR5 memory, gamers can play the latest DX12 games on the Titan X at 4K resolutions without worrying about running short on graphics memory.
Nvidia says that the Titan X is built using the full implementation of GM200. The display/video engines are unchanged from the GM204 GPU used in the GTX 980. Also like the GTX 980, overall double-precision instruction throughput is 1/32 the rate of single-precision instruction throughput.
As mentioned, the base clock speed of the GTX Titan X is 1000MHz, though it does feature a typical Boost Clock speed of 1075MHz. The Boost Clock speed is based on the average Titan X card running a wide variety of games and applications. Note that the actual Boost Clock will vary from game to game depending on actual system conditions.
Setting performance aside for a moment, one of the Titan X’s other noteworthy features is its stunning board design. As was the case with previous Titan cards, the Titan X has an aluminium cover. The metal casing gives the board a premium look and feel, while the card’s unique black cover sets it apart from predecessors — this is the Darth Vader of Titans.
A copper vapour chamber is used to cool the Titan X’s GM200 GPU. This vapour chamber is combined with a large, dual-slot aluminium heatsink to dissipate heat off the chip. A blower-style fan then exhausts this hot air through the back of the graphics card and outside the PC’s chassis. The fan is designed to run very quietly, even under load when the card is overclocked.
If you recall, the GTX 980 reference board design included a backplate on the underside of the card with a section that could be removed in order to improve airflow when multiple GTX 980 cards are placed directly adjacent to each other (as with 3- and 4-way SLI, for example). In order to provide maximum airflow to the Titan X’s cooler in these situations, Nvidia does not include a backplate on the Titan X reference.
The Titan X reference board measures 10.5 inches long. Display outputs include one dual-link DVI output, one HDMI 2.0 output and three DisplayPort connectors. One 8-pin PCIe power connector and one 6-pin PCIe power connector are required for operation.
Speaking of power connectors, the Titan X has a TDP rating of 250 watts and Nvidia calls for a 600W power supply when running just a single card. That TDP is a little over 50 per cent higher than the GTX 980’s, though it is still 14 per cent lower than the Radeon R9 290X’s.
Nvidia says that, as a gaming enthusiast’s graphics card, the Titan X has been designed for overclocking and implements a six-phase power supply with overvoltaging capability. An additional two-phase power supply is dedicated to the board’s GDDR5 memory.
This 6+2 phase design supplies Titan X with more than enough power, even when the board is overclocked. The Titan X reference board design supplies the GPU with 275 watts of power at the maximum power target setting of 110 per cent.
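The 275 watt figure is simply the 250W TDP scaled by that maximum power target, as a quick sketch shows:

```python
# The maximum board power is the TDP scaled by the power target slider.
TDP_WATTS = 250
MAX_POWER_TARGET = 1.10  # the 110 per cent maximum setting

print(TDP_WATTS * MAX_POWER_TARGET)  # 275.0 watts
```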
Nvidia has used polarised capacitors (POSCAPS) to minimise unwanted board noise as well as moulded inductors. To further improve Titan X’s overclocking potential, Nvidia has improved airflow to these board components so they run cooler compared to previous high-end GK110 products, including the original GTX Titan.
Moreover, Nvidia says it pushed the Titan X to speeds of 1.4GHz using nothing more than the supplied air-cooler during its own testing, so we’re obviously interested in testing that.
Testing Methodology
All GPU configurations have been tested at 2560×1600 and 3840×2160 (4K UHD), and for this review we will be discussing the results from both resolutions.
All graphics cards have been tested with core and memory clock speeds set to the AMD and Nvidia specifications.
Test System Specs
- Intel Core i7-5960X (3.0GHz)
- x4 4GB Kingston Predator DDR4-2400 (CAS 12-13-13-24)
- Asrock X99 Extreme6 (Intel X99)
- Silverstone Strider Series (700W)
- Crucial MX200 1TB (SATA 6Gb/s)
- Nvidia GeForce Titan X (12288MB)
- Gainward GeForce GTX 980 (4096MB)
- Gainward GeForce GTX 970 (4096MB)
- Palit GeForce GTX 780 Ti (3072MB)
- Palit GeForce GTX 780 (3072MB)
- AMD Radeon R9 295X2 (2x 4096MB)
- Gigabyte Radeon R9 290X (4096MB)
- Gigabyte Radeon R9 290 (4096MB)
- Microsoft Windows 8.1 Pro 64-bit
- Nvidia GeForce 347.52
- AMD Catalyst 14.12 Omega
Benchmarks: Crysis 3, BioShock
When gaming at 2560×1600 in Crysis 3 the GTX Titan X rendered an impressive 51fps on average, 46 per cent faster than the GTX 980 and 39 per cent faster than the R9 290X, while it was just 19 per cent slower than the R9 295X2.
Jumping to 4K reduced the average frame rate to just 26fps, though if we were to disable anti-aliasing we would likely achieve playable performance. Even so, at 26fps the GTX Titan X was 48 per cent faster than the GTX 980 and 24 per cent faster than the R9 290X, while also being 24 per cent slower than the R9 295X2.
The GTX Titan X has no trouble with BioShock Infinite at 2560×1600 using the ultra-quality preset, delivering a highly playable 96fps, 35 per cent faster than the GTX 980 and a whopping 71 per cent faster than the R9 290X. Despite crushing the R9 290X, the GTX Titan X was still 17 per cent slower than the R9 295X2.
Even at 4K the Titan X is able to deliver playable performance with its 49fps being 29 per cent faster than the GTX 980, 58 per cent faster than the R9 290X and 22 per cent slower than the R9 295X2.
Benchmarks: Metro Redux, Tomb Raider
The GTX Titan X looks to be a beast when testing with Metro Redux as the R9 295X2 scales poorly at 2560×1600, allowing the Titan X to be 12 per cent faster than AMD’s dual-GPU solution in addition to topping the GTX 980 and R9 290X.
In Tomb Raider the Titan X again impressed at 2560×1600 with 82fps, 41 per cent faster than the GTX 980, though it trailed the R9 295X2’s 92fps by around 11 per cent. At 4K, the Titan X was 21 per cent faster than the GTX 980, but only 18 per cent faster than the R9 290X and 8 per cent slower than the R9 295X2.
Steven Walton is a writer at TechSpot. TechSpot is a computer technology publication serving PC enthusiasts, gamers and IT pros since 1998.
Comments
26 responses to “Nvidia GeForce GTX Titan X: Bloody Fast, Surprisingly Efficient”
$999? Yet another card I’ll look at longingly from a distance but will never touch… ever. 🙁
I doubt it will be 999 in Australia…
I think you are missing a 1 in front of that 999.
Nah.
It will be more.
Last $999 card I bought in Australia cost me just under $1400
everybody on this planet knows Australia rips people off on computer parts
Just got an email from pc case gear. They start at 1499 for the evga flavour!
Am I the only one who prefers the silver cooler?
Nope, still want one though 😛
This guy needs to read his graphs properly. He says that the titan x is 28 percent faster than the 295×2 when running tomb raider yet the fps is 82 compared to 92 for the 295×2. Am I missing something here?
Hopefully the 390x gets some good numbers up so amd can compete with nvidia and we get better prices!
heh major typo. It did mention at the end that TItan X is slower than 295×2 by 8 percent.
Yeah, and also the percentages in his comments on the Crysis benchmarks appear to be way off the mark too. I haven’t read beyond this yet.
Waiting for the GTX980Ti
I wonder if we’ll even see a GTX 980Ti now, given Nvidia have explicitly stated that the GTX Titan X is intended to be a high-end gaming card.
Same. We know it’s only a few months away.
Hint to buyers. . . always wait for the Ti models. They’re the ones that will see you through the next two generations.
They don’t make Titan Ti models, and rumors say that the 980Ti won’t be as close to the Titan X as the 780Ti was to the Titan Black.
gtx 980ti X 2 then u dont have to worry about amd catching up for 18 months
Learnt my lesson from the original Titan. Bought a pair for SLI funsies, put them on water, then watched Nvidia release the 780Ti within a very short period that beat the Titan for $400-odd less. It was based on the same GK110 chip too – with less RAM than the Titan. Sold the Titans shortly after, before they lost too much value. Bought an XFX R9 290X Black Double Dissipation for less than $600, and mildly overclocked, performance was comparable to the Titan for well under half the price. (Give or take 5-10%) I benched them myself.
There’s a pattern forming – Nvidia have gotten a bit full of themselves of late, and this card will get pipped within 6 months by another Nvidia card for even less. Way to shit on your high end customers. Been a Nvidia boy since my first 7300GT SLI setup, but not anymore. But watch people still drool and shit their drawers every time Nvidia farts. They’re fast becoming the Apple of the GPU world.
They are becoming Apple! They make fantastic products but their pricing is insane and their stubbornness is just as bad. They are still selling the Titan Z at 3000 dollars. If amd play their cards right they may just claw back some market share this generation.
AMD have already won me over. I couldn’t care less about the prestige, which is what Nvidia seems to sell these days. Of course their cards are great – but it doesn’t seem to be the focus anymore.
Getting shat on after dropping $2500 on GPUs sucked – but the kicker was when my out-of-the-box 290X (albeit mildly OC’d, custom air cooler) largely kept up with a custom (svl7) BIOS, watercooled and overclocked GTX Titan.
This is why it’s imperative that AMD stick around and provide competition otherwise we’d be paying through the nose for a new gpu!
I’ve got the r9 295×2 and I love what Amd has done this generation – hopefully they keep it up with the new gen gpus…
This most likely won’t happen this time as the 980Ti will be a cut-down version of the Titan X..
I really want to see some benchmarks with Adobe/Davinci Resolve/PFClean and other graphics software. the 12GB actually gets used by those programs, and $1000 is cheap for 12GB if it performs.
Anyone with a brain will realise you can get two 290x/970GTX for considerably cheaper and gain more performance.
The main thing you’re paying for here is that large memory pool, which is nice, but if it’s not coupled with enough performance to outperform SLI/Crossfire alternatives it’s kind of a moot point. I’m sure they’ll sell a few of these to people who have more money than sense, but basically $999 is an utter ripoff; they should offer a 6GB version for half that, then maybe they will have some interest.
Would I be better off getting the 980Ti or waiting for the Fury X?
Nvidia release these cards too fast.
If they released a Titan then waited two years until the next then it would be tempting. But its every six months or something.
Too fast, too often. Just reminds consumers of how they have (or could have) wasted their money.