2015-03-17

Never one to shy away from high-end video cards, in 2013 NVIDIA took the next step towards establishing a definitive brand for its flagship cards with the launch of the GeForce GTX Titan. Proudly named after NVIDIA’s first massive supercomputer win – the Oak Ridge National Laboratory Titan – it set a new bar in performance. It also set a new bar in build quality for a single-GPU card, and at $999 it set a new bar in price as well. As the first true “luxury” video card, the GTX Titan made it clear that NVIDIA would gladly sell you one of their finest video cards if your pockets were deep enough for it.

Since 2013 the Titan name has stuck around for additional products, although none of them have had quite the same impact as the original. The GTX Titan Black was a minor refresh of the GTX Titan, moving to a fully enabled GK110B GPU, and from a consumer/gamer standpoint it was somewhat redundant due to the existence of the nearly-identical GTX 780 Ti. Meanwhile the dual-GPU GTX Titan Z was largely ignored, its performance sidelined by its unprecedented $3000 price tag and AMD’s very impressive Radeon R9 295X2 at half the price.

Now in 2015 NVIDIA is back with another Titan, and this time they are looking to recapture a lot of the magic of the original Titan. First teased back at GDC 2015 in an Epic Unreal Engine session, and used to drive more than a couple of demos at the show, the GTX Titan X gives NVIDIA’s flagship video card line the Maxwell treatment, bringing with it all of the new features and sizable performance gains that we saw from Maxwell last year with the GTX 980. To be sure, this isn’t a reprise of the original Titan – there are some important differences that make the new Titan not the same kind of prosumer card the original was – but from a performance standpoint NVIDIA is looking to make the GTX Titan X as memorable as the original. Which is to say that it’s by far the fastest single-GPU card on the market once again.

NVIDIA GPU Specification Comparison

| | GTX Titan X | GTX 980 | GTX Titan Black | GTX Titan |
|---|---|---|---|---|
| CUDA Cores | 3072 | 2048 | 2880 | 2688 |
| Texture Units | 192 | 128 | 240 | 224 |
| ROPs | 96 | 64 | 48 | 48 |
| Core Clock | 1000MHz | 1126MHz | 889MHz | 837MHz |
| Boost Clock | 1075MHz | 1216MHz | 980MHz | 876MHz |
| Memory Clock | 7GHz GDDR5 | 7GHz GDDR5 | 7GHz GDDR5 | 6GHz GDDR5 |
| Memory Bus Width | 384-bit | 256-bit | 384-bit | 384-bit |
| VRAM | 12GB | 4GB | 6GB | 6GB |
| FP64 | 1/32 FP32 | 1/32 FP32 | 1/3 FP32 | 1/3 FP32 |
| TDP | 250W | 165W | 250W | 250W |
| GPU | GM200 | GM204 | GK110B | GK110 |
| Architecture | Maxwell 2 | Maxwell 2 | Kepler | Kepler |
| Transistor Count | 8B | 5.2B | 7.1B | 7.1B |
| Manufacturing Process | TSMC 28nm | TSMC 28nm | TSMC 28nm | TSMC 28nm |
| Launch Date | 03/17/2015 | 09/18/2014 | 02/18/2014 | 02/21/2013 |
| Launch Price | $999 | $549 | $999 | $999 |

To do this NVIDIA has assembled a new Maxwell GPU, GM200 (aka Big Maxwell). We’ll dive into GM200 in detail a bit later, but from a high-level standpoint GM200 is the GM204 taken to its logical extreme. It’s bigger, faster, and yes, more power hungry than GM204 before it. In fact at 8 billion transistors occupying 601mm2 it’s NVIDIA’s largest GPU ever. And for the first time in quite some time, virtually every last millimeter is dedicated to graphics performance, which coupled with Maxwell’s performance efficiency makes it a formidable foe.

Diving into the specs, GM200 can for most intents and purposes be considered a GM204 + 50%. It has 50% more CUDA cores, 50% more memory bandwidth, 50% more ROPs, and roughly 50% more die size. Packing a fully enabled version of GM200, this gives the GTX Titan X 3072 CUDA cores and 192 texture units (spread over 24 SMMs), paired with 96 ROPs. Meanwhile considering that even the GM204-backed GTX 980 could outperform the GK110-backed GTX Titans and GTX 780 Ti thanks to Maxwell’s architectural improvements – 1 Maxwell CUDA core is quite a bit more capable than its Kepler counterpart in practice, as we’ve seen – GTX Titan X is well positioned to shoot past the previous Titans and the GTX 980.



Feeding GM200 is a 384-bit memory bus driving 12GB of GDDR5 clocked at 7GHz. Compared to the GTX Titan Black this is one of the few areas where GTX Titan X doesn’t have an advantage in raw specifications – there’s really nowhere to go until HBM is ready – however in this case numbers can be deceptive as NVIDIA has heavily invested in memory compression for Maxwell to get more out of the 336GB/sec of memory bandwidth they have available. The 12GB of VRAM on the other hand continues NVIDIA’s trend of equipping Titan cards with as much VRAM as they can handle, and should ensure that the GTX Titan X has VRAM to spare for years to come. Meanwhile sitting between the GPU’s functional units and the memory bus is a relatively massive 3MB of L2 cache, retaining the same 32K:1 cache:ROP ratio of Maxwell 2 and giving the GPU more cache than ever before to try to keep memory operations off of the memory bus.
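
As a quick sanity check of those bandwidth and capacity figures, the back-of-envelope math is straightforward; the short Python sketch below is our own illustration (not an NVIDIA formula), using the bus width, GDDR5 data rate, and the 24 x 4Gb chip configuration discussed elsewhere in this review.

```python
# Back-of-envelope memory subsystem math for GTX Titan X.
# Our own illustrative sketch; assumes 24x 4Gb GDDR5 chips at a 7Gbps effective data rate.

bus_width_bits = 384     # memory bus width
data_rate_gbps = 7       # effective GDDR5 data rate per pin
num_chips = 24           # 12 chips on the front of the PCB, 12 on the back
chip_density_gbit = 4    # 4Gb per chip

bandwidth_gbs = bus_width_bits * data_rate_gbps / 8   # bits -> bytes
vram_gb = num_chips * chip_density_gbit / 8

print(f"Peak memory bandwidth: {bandwidth_gbs:.0f} GB/sec")  # 336 GB/sec
print(f"Total VRAM: {vram_gb:.0f} GB")                       # 12 GB
```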

As for clockspeeds, as with the rest of the Maxwell lineup GTX Titan X is getting a solid clockspeed bump from its Kepler predecessor. The base clockspeed is up to 1GHz (reported as 1002MHz by NVIDIA’s tools) while the boost clock is 1075MHz. This is roughly 100MHz (~10%) ahead of the GTX Titan Black, further extending GTX Titan X’s lead. However as is common with larger GPUs, NVIDIA has backed off on clockspeeds a bit compared to the smaller GM204, so GTX Titan X won’t clock quite as high as GTX 980 and the overall performance difference on paper is closer to 33% when comparing boost clocks.
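
To show where that ~33% on-paper figure comes from, here’s a quick sketch of our own that scales shader throughput by CUDA core count and boost clock. It deliberately ignores architectural and real-world clock differences, so it’s only meaningful for comparing the two Maxwell 2 cards against each other.

```python
# Paper FP32 throughput comparison from published CUDA core counts and boost clocks.
# Our own estimate; real-world clocks vary with power/thermal limits.

def fp32_tflops(cuda_cores, boost_mhz):
    return cuda_cores * 2 * boost_mhz * 1e6 / 1e12   # 2 FLOPs/core/clock (FMA)

titan_x = fp32_tflops(3072, 1075)   # ~6.6 TFLOPS
gtx_980 = fp32_tflops(2048, 1216)   # ~5.0 TFLOPS

print(f"GTX Titan X vs GTX 980 (on paper): +{(titan_x / gtx_980 - 1) * 100:.0f}%")   # ~33%
```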

Power consumption on the other hand is right where we’d expect it to be for a Titan class card. NVIDIA’s official TDP for GTX Titan X is 250W, the same as the previous single-GPU Titan cards (and other consumer GK110 cards). Like the original GTX Titan, expect GTX Titan X to spend a fair bit of its time TDP-bound; 250W is generous – a 51% increase over GTX 980 – but then again so is the number of transistors that need to be driven. Overall this puts GTX Titan X on the high side of the power consumption curve (just like GTX Titan before it), but it’s the price for that level of performance. Practically speaking 250W is something of a sweet spot for NVIDIA, as they know how to efficiently dissipate that much heat and it ensures GTX Titan X is a drop-in replacement for GTX Titan/780 in any existing system designs.



Moving on, the competitive landscape right now greatly favors NVIDIA. With AMD’s high-end having last been refreshed in 2013 and with the GM204-based GTX 980 already ahead of the Radeon R9 290X, GTX Titan X further builds on NVIDIA’s lead. No other single-GPU card is able to touch it, and even GTX 980 is left in the dust. This leaves NVIDIA as the uncontested custodian of the single-GPU performance crown.

The only things that can really threaten the GTX Titan X at this time are multi-GPU configurations such as GTX 980 SLI and the Radeon R9 295X2, the latter of which is down to ~$699 these days and is certainly a potential spoiler for GTX Titan X. To be sure, when multi-GPU works right either of these configurations can shoot past a single GTX Titan X; however when multi-GPU scaling falls apart we have the usual problem of such setups falling well behind a single powerful GPU. Such setups are always a risk in that regard, and consequently as a single-GPU card the GTX Titan X offers the best bet for consistent performance.

NVIDIA of course is well aware of this, and with GTX 980 already fending off the R9 290X NVIDIA is free to price GTX Titan X as they please. GTX Titan X is being positioned as a luxury video card (like the original GTX Titan) and NVIDIA is none too ashamed to price it accordingly. Complicating matters slightly however is the fact that unlike the Kepler Titan cards the GTX Titan X is not a prosumer-level compute monster. As we’ll see it lacks its predecessor’s awesome double precision performance, so NVIDIA does need to treat this latest Titan as a consumer gaming card rather than a gaming + entry level compute card as was the case with the original GTX Titan.

In any case, with the previous GTX Titan and GTX Titan Black launching at $999, it should come as no surprise that this is where GTX Titan X is launching as well. NVIDIA saw quite a bit of success with the original GTX Titan at this price, and with GTX Titan X they are shooting for the same luxury market once again. Consequently GTX Titan X will be the fastest single-GPU card you can buy, but it will once again cost quite a bit to get. For our part we'd like to see GTX Titan X priced lower - say closer to the $700 price tag of GTX 780 Ti - but it's hard to argue with NVIDIA's success on the original GTX Titan.

Finally, for launch availability this will be a hard launch with a slight twist. Rather than starting with retail and etail partners such as Newegg, NVIDIA is going to kick things off by selling cards directly, while partners will start to sell cards in a few weeks. For a card like GTX Titan X, NVIDIA selling cards directly is not a huge stretch; with all cards being identical reference cards, partners largely serve as distributors and technical support for buyers.

Meanwhile selling GTX Titan X directly also allowed NVIDIA to keep the card under wraps for longer while still offering a hard launch, as it left fewer avenues for leaks through partners. On the other hand I'm curious how partners will respond to being cut out of the loop like this, even if it is just temporary.

Before diving into our look at the GTX Titan X itself, I want to spend a bit of time talking about the GM200 GPU. GM200 is a very interesting GPU, and not for the usual reasons. In fact you could say that GM200 is remarkable for just how unremarkable it is.



From a semiconductor manufacturing standpoint we’re still at a standstill on 28nm for at least a little bit longer, pushing 28nm into its 4th year and having all sorts of knock-on effects. We’ve droned on about this for some time now, so we won’t repeat ourselves, but ultimately what it means for consumers is that AMD and NVIDIA have needed to make do with the tools they have, and in lieu of generational jumps in manufacturing have focused on architectural efficiency and wringing out everything they can get out of 28nm.

For NVIDIA those improvements came in the form of the company’s Maxwell architecture, which has made a concentrated effort to focus on energy and architectural efficiency to get the most out of their technology. In assembling GM204 NVIDIA built the true successor to GK104, putting together a pure graphics chip. From a design standpoint NVIDIA spent their energy efficiency gains on growing out GM204’s die size without increasing power, allowing them to go from 294mm2 and 3.5B transistors to 398mm2 and 5.2B transistors. With a larger die and larger transistor budget, NVIDIA was able to greatly increase performance by laying down a larger number of high performance (and relatively larger themselves) Maxwell SMMs.

On the other hand for GM206 and the GTX 960, NVIDIA banked the bulk of their energy savings, building what’s best described as half of a GM204 and leading to a GPU that didn’t offer as huge of a jump in performance from its predecessor (GK106) but also brought power usage down and kept costs in check.

Not Pictured: The 96 FP64 ALUs

But for Big Maxwell, neither option was open to NVIDIA. At 551mm2 GK110 was already a big GPU, so a large (33%) increase in die size as with GM204 was not practical. Neither was leaving the die size at roughly the same area and building the Maxwell version of GK110, gaining only limited performance in the process. Instead NVIDIA has taken a third option, and this is what makes GM200 so interesting.

For GM200 NVIDIA’s path of choice has been to divorce graphics from high performance FP64 compute. Big Kepler was a graphics powerhouse in its own right, but it also spent quite a bit of die area on FP64 CUDA cores and other compute-centric functionality. This allowed NVIDIA to use a single GPU across the entire spectrum – GeForce, Quadro, and Tesla – but it also meant that GK110 was a bit of a jack-of-all-trades. Consequently, when faced with another round of 28nm chips and intent on spending their Maxwell power savings on more graphics resources (a la GM204), NVIDIA built a big graphics GPU. Big Maxwell is not the successor to Big Kepler, but rather it’s a really (really) big version of GM204.

GM200 is 601mm2 of graphics, and this is what makes it remarkable. There are no special compute features here that only Tesla and Quadro users will tap into (save perhaps ECC), rather it really is GM204 with 50% more GPU. This means we’re looking at the same SMMs as on GM204, featuring 128 FP32 CUDA cores per SMM, a 512Kbit register file, and just 4 FP64 ALUs per SMM, leading to a puny native FP64 rate of just 1/32. As a result, all of that space in GK110 occupied by FP64 ALUs and other compute hardware – and NVIDIA won’t reveal quite how much space that was – has been reinvested in FP32 ALUs and other graphics-centric hardware.
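
To put that 1/32 rate in perspective, here is a rough estimate of the resulting throughput; this is our own arithmetic from the per-SMM ALU counts above, assuming everything runs at the official boost clock, and it is not an official NVIDIA rating.

```python
# Rough FP32 vs FP64 throughput estimate for a fully enabled GM200.
# Our own back-of-envelope sketch from the per-SMM ALU mix; not an official spec.

smm_count = 24
fp32_alus_per_smm = 128
fp64_alus_per_smm = 4
boost_hz = 1075e6

fp32_tflops = smm_count * fp32_alus_per_smm * 2 * boost_hz / 1e12
fp64_gflops = smm_count * fp64_alus_per_smm * 2 * boost_hz / 1e9

print(f"FP32: ~{fp32_tflops:.1f} TFLOPS")              # ~6.6 TFLOPS
print(f"FP64: ~{fp64_gflops:.0f} GFLOPS (1/32 rate)")  # ~206 GFLOPS
```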

NVIDIA Big GPUs

| | Die Size | Native FP64 Rate |
|---|---|---|
| GM200 (Big Maxwell) | 601mm2 | 1/32 |
| GK110 (Big Kepler) | 551mm2 | 1/3 |
| GF110 (Big Fermi) | 520mm2 | 1/2 |
| GT200 (Big Tesla) | 576mm2 | 1/8 |
| G80 | 484mm2 | N/A |

It’s this graphics “purification” that has enabled NVIDIA to improve their performance over GK110 by 50% without increasing power consumption and with only a moderate 50mm2 (9%) increase in die size. In fact in putting together GM200, NVIDIA has done something they haven’t done for years. The last flagship GPU from the company to dedicate this little space to FP64 was G80 – heart of the GeForce 8800 GTX – which in fact didn’t have any FP64 hardware at all. In other words, this is the “purest” flagship graphics GPU in 9 years.

Now to be clear here, when we say GM200 favors graphics we don’t mean exclusively, but rather that it favors graphics and its associated FP32 math over FP64 math. GM200 is still an FP32 compute powerhouse, unlike anything else in NVIDIA’s lineup, and we don’t expect it will be matched by anything else from NVIDIA for quite some time. For that reason I wouldn’t be too surprised if we see a Tesla card based on it aimed at FP32 users such as the oil & gas industry – something NVIDIA has done once before with the Tesla K10 – but you won’t be seeing GM200 in the successor to Tesla K40.

This is also why the GTX Titan X is arguably not a prosumer level card like the original GTX Titan. NVIDIA shipped the original GTX Titan with its full 1/3 rate FP64 enabled, having it pull double duty as the company’s consumer graphics flagship while also serving as their entry-level FP64 card. For GTX Titan X however this is not an option since GM200 is not a high performance FP64 GPU, and as a result the card is riding only on its graphics and FP32 compute capabilities. That doesn’t mean NVIDIA won’t also try to pitch it as a high-performance FP32 card for users who don’t need Tesla, but it won’t be the same kind of entry-level compute card that the original GTX Titan was. In other words, GTX Titan X is much more consumer focused than the original GTX Titan.

Tesla K80: The Only GK210 Card

Looking at the broader picture, I’m left to wonder if this is the start of a permanent divorce between graphics/FP32 compute and FP64 compute in the NVIDIA ecosystem. Until recently, NVIDIA has always piggybacked compute on their flagship GPUs as a means of bootstrapping the launch of the Tesla division. By putting compute in their flagship GPU, even if NVIDIA couldn’t sell those GPUs to compute customers they could sell them to GeForce/Quadro graphics customers. This limited the amount of total risk the company faced, as they’d never end up with a bunch of compute GPUs they could never sell.

However in the last 6 months we’ve seen a shift from NVIDIA at both ends of the spectrum. In November we saw the launch of a Tesla K80, a dual-GPU card featuring the GK210 GPU, a reworked version of GK110 that doubled the register file and shared memory sizes for better performance. GK210 would not come to GeForce or Quadro (though in theory it could have), making it the first compute-centric GPU from NVIDIA. And now with the launch of GM200 we have distinct graphics and compute GPUs from NVIDIA.

NVIDIA GPUs By Compute

| | GM200 | GK210 | GK110B |
|---|---|---|---|
| Stream Processors | 3072 | 2880 | 2880 |
| Memory Bus Width | 384-bit | 384-bit | 384-bit |
| Register File Size (Per SM) | 4 x 64KB | 512KB | 256KB |
| Shared Memory / L1 Cache (Per SM) | 96KB + 24KB | 128KB | 64KB |
| Transistor Count | 8B | 7.1B(?) | 7.1B |
| Manufacturing Process | TSMC 28nm | TSMC 28nm | TSMC 28nm |
| Architecture | Maxwell | Kepler | Kepler |
| Tesla Products | None | K80 | K40 |

The remaining question at this point is what happens from here. Was this divorce of compute and graphics a temporary action, the result of being stuck on the 28nm process for another generation? Or was it the first generation in a permanent divorce between graphics and compute, and consequently a divorce between GeForce/Quadro and Tesla? Is NVIDIA finally ready to let Tesla stand on its own?

With Pascal NVIDIA could very well build a jack-of-all-trades style GPU once more. However having already divorced graphics and compute for a generation, merging them again would eat up some of the power and die space benefits from going to 16nm FinFET, power and space that NVIDIA would likely want to invest in greater separate improvements in graphics and compute performance. We’ll see what Pascal brings, but I suspect GM200 is the shape of things to come for GeForce and the GTX Titan lineup.

Now that we’ve had a chance to look at the GM200 GPU at the heart of GTX Titan X, let’s take a look at the card itself.

From a design standpoint NVIDIA put together a very strong card with the original GTX Titan, combining a revised, magnesium-less version of their all-metal shroud with a high performance blower and vapor chamber assembly. The end result was a high performance 250W card that was quieter than some open-air cards, much quieter than a bunch of other blowers, and shiny to look at to boot. This design was further carried forward for the reference GTX 780 series, its stylings copied for the GTX Titan Z, and used with a cheaper cooling apparatus for the reference GTX 980.

For GTX Titan X, NVIDIA has opted to leave well enough alone, having made virtually no changes to the shroud or cooling apparatus. And truth be told it’s hard to fault NVIDIA right now, as this design remains the gold (well, aluminum) standard for a blower. Looks aside, after years of blowers that rattled, or were too loud, or didn’t cool discrete components very well, NVIDIA is sitting on a very solid design that I’m not really sure how anyone would top (but I’d love to see them try).

In any case, our favorite metal shroud is back once again. Composed of a cast aluminum housing and held together using a combination of rivets and screws, it’s as physically solid a shroud as we’ve ever seen. Meanwhile having already done a partial black dye job for GTX Titan Black and GTX 780 Ti – using black lettering and a black-tinted polycarbonate window – NVIDIA has more or less completed the dye job by making the metal shroud itself almost completely black. All that remains unpainted are the aluminum accents and the Titan lettering (Titan, not Titan X, curiously enough). The card measures 10.5” long overall, which at this point is NVIDIA’s standard size for high-end GTX cards.

Drilling down we have the card’s primary cooling apparatus, composed of a nickel-tipped wedge-shaped heatsink and ringed radial fan. The heatsink itself is attached to the GPU via a copper vapor chamber, something that has been exclusive to GTX 780/Titan cards and provides the best possible heat transfer between the GPU and heatsink. Meanwhile the rest of the card is covered with a black aluminum baseplate, providing basic heatsink functionality for the VRMs and other components while also protecting them.

Finally at the bottom of the stack we have the card itself, complete with the GM200 GPU, VRAM chips, and various discrete components. Unlike the shroud and cooler, GM200’s PCB isn’t a complete carry-over from GK110, but it is nonetheless very similar, with only a handful of changes made. This means we’re looking at the GPU and VRAM chips towards the front of the card, while the VRMs and other discrete components occupy the back. Specific to GTX Titan X, NVIDIA has done some minor reworking to improve airflow to the discrete components and reduce temperatures, along with employing molded inductors.

As with GK110, NVIDIA still employs a 6+2 phase VRM design, with 6 phases for the GPU and another 2 for the VRAM. This means that GTX Titan X has a bit of power delivery headroom – NVIDIA allows the power limit to be increased by 10% to 275W – but hardcore overclockers will find that there isn’t an extreme amount of additional headroom to play with. Based on our sample the actual shipping voltage at the max boost clock is fairly low at 1.162v, so in non-TDP constrained scenarios there is some additional headroom through overvolting, up to 1.237v in the case of our sample.

In terms of overall design, the need to house 24 VRAM chips to get 12GB of VRAM means that the GTX Titan X has chips on the front as well as the back. For this reason, and unlike the GTX 980, NVIDIA is once again skipping the backplate, leaving the back side of the card bare just as with the previous GTX Titan cards.

Moving on, in accordance with GTX Titan X’s 250W TDP and the reuse of the GTX Titan cooler, power delivery for the GTX Titan X is identical to its predecessors. This means a 6-pin and an 8-pin power connector at the top of the card, to provide up to 225W, with the final 75W coming from the PCIe slot. Interestingly the board does have another 8-pin PCIe connector position facing the rear of the card, but that goes unused for this specific GM200 card.

Meanwhile display I/O follows the same configuration we saw on GTX 980: 1x DL-DVI-I, 3x DisplayPort 1.2, and 1x HDMI 2.0, with a total limit of 4 displays. In the case of GTX Titan X the DVI port is somewhat antiquated at this point – the card is generally overpowered for the relatively low maximum resolutions of DL-DVI – but on the other hand the HDMI 2.0 port is actually going to be of some value here since it means GTX Titan X can drive a 4K TV. Meanwhile if you have money to spare and need to drive more than a single 4K display, GTX Titan X also features a pair of SLI connectors for even more power.

In fact 4K will be a repeating theme for GTX Titan X, as this is one of the primary markets/use cases NVIDIA will be going after with the card. With GTX 980 generally good up to 2560x1440, the even more powerful GTX Titan X is best suited for 4K and VR, the two areas where GTX 980 came up short. In the case of 4K even a single GTX Titan X is going to struggle at times – we’re not at 60fps at 4K with a single GPU quite yet – but GTX Titan X should be good for framerates between 30fps and 60fps at high quality settings. To fill the rest of the gap NVIDIA is also going to be promoting 4Kp60 G-Sync monitors alongside the GTX Titan X, as the 30-60fps range is where G-Sync excels. And while G-Sync can’t make up for lost frames, it can take some of the bite out of sub-60fps framerates, making for a smoother/cleaner experience than it would otherwise be.

Longer term NVIDIA also sees the GTX Titan X as their most potent card for VR headsets, and they made sure that GTX Titan X was on the show floor at GDC to drive a few of the major VR demos. Certainly VR will take just about whatever rendering power you can throw at it, if only in the name of reducing rendering latency. But overall we’re still very early in the game, especially with commercial VR headsets still being in development.

Finally, speaking of the long term, I wanted to hit upon the subject of the GTX Titan X’s 12GB of VRAM. With most other Maxwell cards already using 4Gb VRAM chips, the inclusion of 12GB of VRAM in NVIDIA’s flagship card was practically a given, especially since it doubles the 6GB of VRAM the original GTX Titan came with. At the same time however I’m curious to see just how long it takes for games to grow into this space. The original GTX Titan was fortunate enough to come out with 6GB right before the current-generation consoles launched, and with them their 8GB memory configurations, leading to a rather sudden jump in VRAM requirements that the GTX Titan was well positioned to handle. Much like 6GB in 2013, 12GB is overkill in 2015, but unlike the original GTX Titan I suspect 12GB will remain overkill for a much longer period of time, especially without a significant technology bump like the consoles to drive up VRAM requirements.

Also kicking off alongside GTX Titan X today will be the first article to use our new 2015 GPU benchmark suite.

For 2015 we have upgraded or replaced most of our games, retiring several long-time titles including Bioshock: Infinite, Metro, and our last DirectX 10 game, Crysis Warhead. Our returning titles are Battlefield 4 and Crysis 3, the former of which is still a popular MP title to this day, and the latter continuing to pulverize GPUs well before we hit its highest settings.

Joining these 2 games are 7 new titles. Middle Earth: Shadow of Mordor and Far Cry 4 are our new action/shooter games, while Dragon Age: Inquisition rides the line between an action game and an RPG. Meanwhile for strategy games we have Civilization: Beyond Earth and Total War: Attila, these two games representing the latest entries in their respective series. Rounding out our collection is GRID Autosport, the latest GRID game from Codemasters, and the unique first person puzzle/exploration game The Talos Principle from Croteam.

AnandTech GPU Bench 2015 Game List

| Game | Genre | API(s) |
|---|---|---|
| Battlefield 4 | FPS | DX11 + Mantle |
| Crysis 3 | FPS | DX11 |
| Shadow of Mordor | Action/Open World | DX11 |
| Civilization: Beyond Earth | Strategy | DX11 + Mantle |
| Dragon Age: Inquisition | RPG | DX11 + Mantle |
| The Talos Principle | First Person Puzzle | DX11 |
| Far Cry 4 | FPS | DX11 |
| Total War: Attila | Strategy | DX11 |
| GRID Autosport | Racing | DX11 |

With new low-level APIs ramping up in 2015, we’re going to be paying particular attention to APIs starting this year, as everyone is interested in seeing what Vulkan (née Mantle) and DirectX 12 can do. Unless otherwise noted, going forward all benchmarks will be using low-level APIs when available, meaning DX12/Vulkan/Mantle when possible.

Meanwhile from a design standpoint our benchmark settings remain unchanged. For lower-end cards we’ll look at 1080p at various quality settings when practical, and for high-end cards we’ll be looking at 1080p and above at the highest quality settings. The one exception to this is 4K, where at 2.25x the pixel count of 1440p it remains difficult to hit playable framerates; in that case we’ll also include a lower quality setting to showcase what kind of quality hit it takes to make 4K playable on current video cards.

As for our hardware testbed, it remains unchanged from 2014, being composed of an overclocked Core i7-4960X housed in an NZXT Phantom 630 Windowed Edition case.

Kicking off our 2015 benchmark suite is Battlefield 4, DICE’s 2013 multiplayer military shooter. After a rocky start, Battlefield 4 has since become a challenging game in its own right and a showcase title for low-level graphics APIs. As these benchmarks are from single player mode, based on our experiences our rule of thumb here is that multiplayer framerates will dip to half our single player framerates, which means a card needs to be able to average at least 60fps if it’s to be able to hold up in multiplayer.

After stripping away the Frostbite engine’s expensive (and not wholly effective) MSAA, what we’re left with for BF4 at 4K with Ultra quality puts the GTX Titan X in a pretty good light. At 58.3fps it’s not quite up to the 60fps mark, but it comes very close, close enough that the GTX Titan X should be able to stay above 30fps virtually the entire time, and never drop too far below 30fps in even the worst case scenario. Alternatively, dropping to Medium quality should give the GTX Titan X plenty of headroom, with an average framerate of 94.8fps meaning even the lowest framerate never drops below 45fps.

From a benchmarking perspective Battlefield 4 at this point is a well optimized title that’s a pretty good microcosm of overall GPU performance. In this case we find that the GTX Titan X performs around 33% better than the GTX 980, which is almost exactly in-line with our earlier performance predictions. Keeping in mind that while GTX Titan X has 50% more execution units than GTX 980, it’s also clocked at around 88% of the clockspeed, so 33% is right where we should be in a GPU-bound scenario.

Otherwise compared to the GTX 780 Ti and the original GTX Titan, the performance advantage at 4K is around 50% and 66% respectively. GTX Titan X is not going to double the original Titan’s performance – there’s only so much you can do without a die shrink – but it continues to be amazing just how much extra performance NVIDIA has been able to wring out without increasing power consumption and with only a minimal increase in die size.

On the broader competitive landscape, this is far from the Radeon R9 290X/290XU’s best title, with GTX Titan X leading by 50-60%. However this is also a showcase title for when AFR goes right, as the R9 295X2 and GTX 980 SLI both shoot well past the GTX Titan X, demonstrating the performance/consistency tradeoff inherent in multi-GPU setups.

Finally, shifting gears for a moment, gamers looking for the ultimate 1440p card will not be disappointed. GTX Titan X will not get to 120fps here (it won’t even come close), but at 78.7fps it’s still the single-GPU card best positioned to drive high refresh rate 1440p displays. In fact it’s the only single-GPU card to do better than 60fps at this resolution.

Still one of our most punishing benchmarks, Crysis 3 needs no introduction. With Crysis 3, Crytek went back to trying to kill computers, and the game still holds the “most punishing shooter” title in our benchmark suite. Only a handful of setups can even run Crysis 3 at its highest (Very High) settings, and that’s still without AA. Crysis 1 was an excellent template for the kind of performance required to drive games for the next few years, and Crysis 3 looks to be much the same for 2015.

With GTX Titan X being based on the same iteration of the Maxwell architecture as the GTX 980 and its GM200 GPU essentially built as a GM204 + 50%, it comes as no surprise that the performance gains over GTX 980 are going to be rather consistent. In Crysis 3 the GTX Titan X holds a 35% performance lead at 4K, with that lead tapering slightly to 30% at 2560. Meanwhile the lead over the GK110 cards isn’t quite what we saw with BF4, dropping to around 45% and 55% for GTX 780 Ti and GTX Titan respectively.

As one of our most punishing games, this is also a good example of where even GTX Titan X will come up short at 4K. Even without MSAA and one step below Crysis 3’s Very High quality settings, the GTX Titan X can only muster 42fps. If you want to get to 60fps you will need to drop to Low quality, or drop the resolution to 1440p. The latter will get you 85.2fps at the same quality settings, which again highlights GTX Titan X’s second strength as a good card for driving high refresh rate 1440p displays.

Meanwhile this is another game where our multi-GPU cards still pull ahead, reminding us of the spoiler potential for the R9 295X2 and the GTX 980 SLI. In fact AMD gets some very good scaling here, and they need it as the GTX Titan X bests the R9 290XU by 56% at 4K High.

Our next benchmark is Monolith’s popular open-world action game, Middle Earth: Shadow of Mordor. One of our current-gen console multiplatform titles, Shadow of Mordor is plenty punishing on its own, and at Ultra settings it absolutely devours VRAM, showcasing the knock-on effect current-gen consoles have on VRAM requirements.

Once again even GTX Titan X won’t be enough for 60fps at 4K, but at 48.9fps it’s closer to 60fps than 30fps, representing a significant improvement in 4K performance in only a generation. Compared to the GTX 980 and NVIDIA’s other cards the GTX Titan X is once more in a comfortable lead, overtaking its smaller sibling by around 33% and the older GK110 cards at 45-60%.

Turning down the game’s quality settings to Very High does improve performance a bit, but at 54.1fps it’s still not quite enough for 60fps. The biggest advantage of Very High quality is alleviating some of the high VRAM requirements, something the GTX Titan cards don’t suffer from in the first place. Otherwise dropping to 1440p will give us a significant bump in performance, pushing framerates over 80fps once again.

Meanwhile the game’s minimum framerate further elaborates on the performance hit from the game’s high VRAM usage at Ultra quality. 3GB cards collapse here, leaving the 4GB cards and the 6GB original Titan much higher in our charts. Multi-GPU performance also struggles here, even with 4GB cards, reminding us that while multi-GPU setups can be potent, they do introduce performance consistency issues that single-GPU cards can avoid.

Shifting gears from action to strategy, we have Civilization: Beyond Earth, the latest in the Civilization series of strategy games. Civilization is not quite as GPU-demanding as some of our action games, but at Ultra quality it can still pose a challenge for even high-end video cards. Meanwhile as the first Mantle-enabled strategy title Civilization gives us an interesting look into low-level API performance on larger scale games, along with a look at developer Firaxis’s interesting use of split frame rendering with Mantle to reduce latency rather than improving framerates.

Though not as intricate as Crysis 3 or Shadow of Mordor, Civilization still requires a very powerful GPU to run it at 4K if you want to hit 60fps. In fact of our single-GPU configurations the GTX Titan X is the only card to crack 60fps, delivering 69fps at the game’s most extreme setting. This is once again well ahead of the GTX 980 – beating it by 31% at 4K – and 40%+ ahead of the GK110 cards. On the other hand this is the closest AMD’s R9 290XU will get, with the GTX Titan X only beating it by 23% at 4K.

Meanwhile at 1440p it’s entirely possible to play Civilization at 120fps, making it one of a few games where the GTX Titan X can keep up with high refresh rate 1440p monitors.

When it comes to minimum framerates the GTX Titan X doesn’t dominate quite like it does at average framerates, but it still handily takes the top spot. Even at its worst, the GTX Titan X can still deliver 44fps at 4K under Civilization.

Our RPG of choice for 2015 is Dragon Age: Inquisition, the latest game in the Dragon Age series of ARPGs. Offering an expansive world that can easily challenge even the best of our video cards, Dragon Age also offers us an alternative take on EA/DICE’s Frostbite 3 engine, which powers this game along with Battlefield 4.

Once again turning down Frostbite’s performance-crushing MSAA, what we find at 4K with Ultra quality is that the GTX Titan X is once more hitting framerates in the 40fps range. At 41.7fps the GTX Titan X is the only single-GPU card to average better than 30fps at these settings, with the next-closest card being the GTX 980 at exactly 30fps. Overall the GTX Titan X does particularly well at 4K Ultra, beating the GTX 980 by 39%, the GTX 780 Ti by 53%, and the R9 290XU by 44%.

Users looking for higher framerates can either turn down the quality setting one notch to High, which gets us 54.4fps from the GTX Titan X, or drop down to 1440p, which is good for 79.3fps. Meanwhile our multi-GPU configurations once again make their presence felt. At 4K High quality the GTX 980 SLI setup is over 60fps, however the GTX Titan X unexpectedly beats the R9 295X2 at 1440p.

Croteam’s first person puzzle and exploration game The Talos Principle may not involve much action, but the game’s lush environments still put even fast video cards to good use. Coupled with the use of 4x MSAA at Ultra quality, even a tranquil puzzle game like Talos can make a good case for more powerful video cards.

At 4K Ultra quality the GTX Titan X won’t quite break 60fps, but at 53.4fps it’s not too far off. Compared to the GTX 980 this is another 35% performance advantage, though the lead over the GK110 cards is a bit smaller than normal at 40% and 47% for the GTX 780 Ti and GTX Titan respectively.

Meanwhile since I haven’t had a chance yet to address how GTX Titan X compares to NVIDIA’s flagship Fermi card, GTX 580, this is a good time. GTX 580 actually holds up decently here, delivering 32fps at 1440p, however GTX Titan X offers 3 times the performance, and more still in VRAM limited situations, showcasing just how far Big Maxwell has pulled ahead of Big Fermi over 4 years later.

The next game in our 2015 GPU benchmark suite is Far Cry 4, Ubisoft’s Himalayan action game. A lot like Crysis 3, Far Cry 4 can be quite tough on GPUs, especially with Ultra settings thanks to the game’s expansive environments.

At 4K Ultra this happens to be another case where the GTX Titan X delivers framerates around 40fps, in this case coming in at 42.1fps. To get a single-GPU card up to 60fps we need to drop to Medium settings, which gets the GTX Titan X to 60.5fps at a fairly significant hit to image quality.

Compared to NVIDIA’s other high-end cards, Far Cry 4 puts the GTX Titan X in a very favorable light. Along with the customary 35% performance lead over the GTX 980 at 4K Ultra, the newest Titan beats the GTX 780 Ti and GTX Titan by 60% and 80% respectively, highlighting the architectural efficiency improvements in Maxwell. On the other hand the lead over the R9 290XU is only 29%, making it one of the smallest leads for the GTX Titan X and highlighting how as always AMD and NVIDIA’s relative performance shifts with the game in question.

Dropping down from 4K to 1440p, the GTX Titan X continues to do well, becoming the only single-GPU card to surpass 60fps even at this lower resolution.

The second strategy game in our benchmark suite, Total War: Attila is the latest game in the Total War franchise. Total War games have traditionally been a mix of CPU and GPU bottlenecks, so it takes a good system on both ends of the equation to do well here. In this case the game comes with a built-in benchmark that plays out over a large area with a fortress in the middle, making it a good GPU stress test.

In creating Attila, the developers at Creative Assembly sought to push the limits of current generation video cards, and nowhere is this more evident than at 4K Max Quality. At 23.5fps even the GTX Titan X is foiled here, never mind the GTX 980 and GK110 cards. To get single card performance above 30fps we have to drop a notch to the “Quality” setting, which gets the GTX Titan X up to 44.9fps. In any case, at these settings the GTX Titan X makes easy work of the single-GPU competition, beating everything else by 30-66%.

Alternatively we can drop from 4K to 1440p and still run Max Quality, in which case the GTX Titan X delivers a very similar 47.1fps.

The final game in our benchmark suite is also our racing entry, Codemasters’ GRID Autosport. Codemasters continues to set the bar for graphical fidelity in racing games, delivering realistic looking environments layered with additional graphical effects. Based on their in-house EGO engine, GRID Autosport includes a DirectCompute based advanced lighting system in its highest quality settings, which incurs a significant performance penalty on lower-end cards but does a good job of emulating more realistic lighting within the game world.

Even with everything cranked up to max, the GTX Titan X makes easy work of GRID at 4K, hitting 71.7fps at 4K Ultra and making it the only single-GPU card to crack 60fps. Even in GRID the GTX Titan X’s performance advantage over other cards continues to be substantial, beating the GTX 980 by 34%, the GTX 780 Ti by 49%, and the R9 290XU by 59%.

Otherwise racers looking for the 120fps experience can drop to 1440p, in which case the GTX Titan X comes within inches of the 120fps mark, delivering an average framerate of 117.5fps.

As always we’ll also take a quick look at synthetic performance. In the case of GTX Titan X and its GM200 GPU, what we should see here is a pretty straightforward 30-40% increase in performance, owing to GM200’s evenly scaled out Maxwell 2 design.

At over 300fps even with TessMark’s most strenuous test case, the GTX Titan X is unsurprisingly the top card at tessellation performance. Delivering 24 triangles/clock, theoretical geometry throughput stands at a staggering 24B triangles/second.

Meanwhile 3DMark’s fillrate tests reiterate Maxwell’s biggest and smallest improvements over Kepler. With a decrease in ALU:TEX ratios, overall texture throughput on the GTX Titan X is very similar to the GTX 780 Ti. On the other hand thanks to improved memory compression GTX Titan X has a pixel fillrate unlike anything else. This in turn is a big part of the reason NVIDIA is pushing that GTX Titan X be paired up with 4K monitors, as it offers the kind of fillrate necessary to drive such a high resolution.
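
For a sense of scale on the fillrate side, here is our own theoretical comparison using the ROP counts and boost clocks from the spec table earlier. Note that it ignores memory bandwidth limits and Maxwell’s memory compression, both of which shape the actual 3DMark results, so treat it as an illustration rather than a prediction.

```python
# Theoretical peak pixel fillrate from ROP count x boost clock.
# Our own arithmetic for illustration; real fillrate is also bound by memory bandwidth.

def gpixels_per_sec(rops, boost_mhz):
    return rops * boost_mhz * 1e6 / 1e9

print(f"GTX Titan X:     ~{gpixels_per_sec(96, 1075):.0f} Gpixels/sec")   # ~103
print(f"GTX Titan Black: ~{gpixels_per_sec(48, 980):.0f} Gpixels/sec")    # ~47
```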

Shifting gears, we have our look at compute performance.

As we outlined earlier, GTX Titan X is not the same kind of compute powerhouse that the original GTX Titan was. Make no mistake, at single precision (FP32) compute tasks it is still a very potent card, which for consumer level workloads is generally all that will matter. But for pro-level double precision (FP64) workloads the new Titan lacks the high FP64 performance of the old one.

Starting us off for our look at compute is LuxMark 3.0, the latest version of the official benchmark of LuxRender 2.0. LuxRender’s GPU-accelerated rendering mode is an OpenCL based ray tracer that forms a part of the larger LuxRender suite. Ray tracing has become a stronghold for GPUs in recent years as it maps well to GPU pipelines, allowing artists to render scenes much more quickly than with CPUs alone.

While in LuxMark 2.0 AMD and NVIDIA were fairly close post-Maxwell, the recently released LuxMark 3.0 finds NVIDIA trailing AMD once more. While GTX Titan X sees a better than average 41% performance increase over the GTX 980 (owing to its ability to stay at its max boost clock on this benchmark) it’s not enough to dethrone the Radeon R9 290X. Even though GTX Titan X packs a lot of performance on paper, and can more than deliver it in graphics workloads, as we can see compute workloads are still highly variable.

For our second set of compute benchmarks we have CompuBench 1.5, the successor to CLBenchmark. CompuBench offers a wide array of different practical compute workloads, and we’ve decided to focus on face detection, optical flow modeling, and particle simulations.

Although GTX Titan X struggled at LuxMark, the same cannot be said for CompuBench. Though the lead varies with the specific sub-benchmark, in every case the latest Titan comes out on top. Face detection in particular shows some massive gains, with GTX Titan X more than doubling the GK110 based GTX 780 Ti's performance.

Our 3rd compute benchmark is Sony Vegas Pro 13, an OpenGL and OpenCL video editing and authoring package. Vegas can use GPUs in a few different ways, the primary uses being to accelerate the video effects and compositing process itself, and in the video encoding step. With video encoding being increasingly offloaded to dedicated DSPs these days we’re focusing on the editing and compositing process, rendering to a low CPU overhead format (XDCAM EX). This specific test comes from Sony, and measures how long it takes to render a video.

Traditionally a benchmark that favors AMD, GTX Titan X closes the gap some. But it's still not enough to surpass the R9 290X.

Moving on, our 4th compute benchmark is FAHBench, the official Folding @ Home benchmark. Folding @ Home is the popular Stanford-backed research and distributed computing initiative that has work distributed to millions of volunteer computers over the internet, each of which is responsible for a tiny slice of a protein folding simulation. FAHBench can test both single precision and double precision floating point performance, with single precision being the most useful metric for most consumer cards due to their low double precision performance. Each precision has two modes, explicit and implicit, the difference being whether water atoms are included in the simulation, which adds quite a bit of work and overhead. This is another OpenCL test, utilizing the OpenCL path for FAHCore 17.

Folding @ Home’s single precision tests reiterate just how powerful GTX Titan X can be at FP32 workloads, even if it’s ostensibly a graphics GPU. With a 50-75% lead over the GTX 780 Ti, the GTX Titan X showcases some of the remarkable efficiency improvements that the Maxwell GPU architecture can offer in compute scenarios, and in the process shoots well past the AMD Radeon cards.

On the other hand with a native FP64 rate of 1/32, the GTX Titan X flounders at double precision. There is no better example of just how much the GTX Titan X and the original GTX Titan differ in their FP64 capabilities than this graph; the GTX Titan X can’t beat the GTX 580, never mind the chart-topping original GTX Titan. FP64 users looking for an entry level FP64 card would be well advised to stick with the GTX Titan Black for now. The new Titan is not the prosumer compute card that was the old Titan.

Wrapping things up, our final compute benchmark is an in-house project developed by our very own Dr. Ian Cutress. SystemCompute is our first C++ AMP benchmark, utilizing Microsoft’s simple C++ extensions to allow the easy use of GPU computing in C++ programs. SystemCompute in turn is a collection of benchmarks for several different fundamental compute algorithms, with the final score represented in points. DirectCompute is the compute backend for C++ AMP on Windows, so this forms our other DirectCompute test.

With the GTX 980 already performing well here, the GTX Titan X takes it home, improving on the GTX 980 by 31%. Whereas GTX 980 could only hold even with the Radeon R9 290X, the GTX Titan X takes a clear lead.

Overall then the new GTX Titan X can still be a force to be reckoned with in compute scenarios, but only when the workloads are FP32. Users accustomed to the original GTX Titan’s FP64 performance on the other hand will find that this is a very different card, one that doesn’t live up to the same standards.

As always, last but not least is our look at power, temperature, and noise. Next to price and performance of course, these are some of the most important aspects of a GPU, due in large part to the impact of noise. All things considered, a loud card is undesirable unless there’s a sufficiently good reason – or sufficiently good performance – to ignore the noise.

The GTX Titan X represents a very interesting intersection for NVIDIA, crossing Maxwell’s unparalleled power efficiency with GTX Titan’s flagship level performance goals and similarly high power allowance. The end result is that this gives us a chance to see how well Maxwell holds up when pushed to the limit; to see how well the architecture holds up in the form of a 601mm2 GPU with a 250W TDP.

GeForce GTX Titan X Voltages

| GTX Titan X Boost Voltage | GTX 980 Boost Voltage | GTX Titan X Idle Voltage |
|---|---|---|
| 1.162v | 1.225v | 0.849v |

Starting off with voltages, based on our samples we find that NVIDIA has been rather conservative in their voltage allowance, presumably to keep power consumption down. With the highest stock boost bin hitting a voltage of just 1.162v, GTX Titan X operates notably lower on the voltage curve than the GTX 980. This goes hand-in-hand with GTX Titan X’s stock clockspeeds, which are around 100MHz lower than GTX 980.

GeForce GTX Titan X Average Clockspeeds

| Game | GTX Titan X | GTX 980 |
|---|---|---|
| Max Boost Clock | 1215MHz | 1252MHz |
| Battlefield 4 | 1088MHz | 1227MHz |
| Crysis 3 | 1113MHz | 1177MHz |
| Mordor | 1126MHz | 1164MHz |
| Civilization: BE | 1088MHz | 1215MHz |
| Dragon Age | 1189MHz | 1215MHz |
| Talos Principle | 1126MHz | 1215MHz |
| Far Cry 4 | 1101MHz | 1164MHz |
| Total War: Attila | 1088MHz | 1177MHz |
| GRID Autosport | 1151MHz | 1190MHz |

Speaking of clockspeeds, taking a look at our average clockspeeds for GTX Titan X and GTX 980 showcases just why the 50% larger GM200 GPU only leads to an average performance advantage of 35% for the GTX Titan X. While the max boost bins of both cards are over 1.2GHz, the GTX Titan X has to back off far more often to stay within its power and thermal limits. The final clockspeed difference between the two cards depends on the game in question, but we’re looking at a real-world clockspeed deficit of 50-100MHz for GTX Titan X.

Starting off with idle power consumption, the GTX Titan X comes out strong as expected. Even at 8 billion transistors, NVIDIA is able to keep power consumption at idle very low, with all of our recent single-GPU NVIDIA cards coming in at 73-74W at the wall.

Meanwhile load power consumption for GTX Titan X is more or less exactly what we’d expect. With NVIDIA having nailed down their throttling mechanisms for Kepler and Maxwell, the GTX Titan X has a load power profile almost identical to the GTX 780 Ti, the closest equivalent GK110 card. Under Crysis 3 this manifests itself as a 20W increase in power consumption at the wall – generally attributable to the greater CPU load from GTX Titan X’s better GPU performance – while under FurMark the two cards are within 2W of each other.

Compared to the GTX 980 on the other hand, this is of course a sizable increase in power consumption. With a TDP difference on paper of 85W, the difference at the wall is an almost perfect match. GTX Titan X still offers Maxwell’s overall energy efficiency, delivering greatly superior performance for the power consumption, but this is a 250W card and it shows. Meanwhile the GTX Titan X’s power consumption also ends up being very close to the unrestricted R9 290X Uber, which in light of the Titan’s 44% 4K performance advantage further drives home the point about NVIDIA’s power efficiency lead at this time.

With the same Titan cooler and same idle power consumption, it should come as no surprise that the GTX Titan X offers the same idle temperatures as its GK110 predecessors: a relatively cool 32C.

Moving on to load temperatures, the GTX Titan X has a stock temperature limit of 83C, just like the GTX 780 Ti. Consequently this is exactly where we see the card top out at under both FurMark and Crysis 3. 83C does lead to the card temperature throttling in most cases, though as we’ve seen in our look at average clockspeeds it’s generally not a big drop.

Last but not least we have our noise results. With the Titan cooler backing it, the GTX Titan X has no problem keeping quiet at idle. At 37.0dB(A) it's technically the quietest card among our entire collection of high-end cards, and from a practical perspective it is close to silent.

Much like GTX Titan X’s power profile, GTX Titan X’s noise profile almost perfectly mirrors the GTX 780 Ti. With the card hitting 51.3dB(A) under Crysis 3 and 52.4dB(A) under FurMark, it is respectively only 0.4dB and 0.1dB off from the GTX 780 Ti. From a practical perspective what this means is that the GTX Titan X isn’t quite the hushed card that was the GTX 980 – nor with a 250W TDP would we expect it to be – but for its chart-topping gaming performance it delivers some very impressive acoustics. The Titan cooler continues to serve NVIDIA well, allowing them to dissipate 250W in a blower without making a lot of noise in the process.

Overall then, from a power/temp/noise perspective the GTX Titan X is every bit as impressive as the original GTX Titan and its GTX 780 Ti sibling. Thanks to the Maxwell architecture and Titan cooler, NVIDIA has been able to deliver a 50% increase in gaming performance over the GTX 780 Ti without an increase in power consumption or noise, leading to NVIDIA once again delivering a flagship video card that can top the performance charts without unnecessarily sacrificing power consumption or noise.

Finally, no review of a GTX Titan card would be complete without a look at overclocking performance.

From a design standpoint, GTX Titan X already ships close to its power limits. NVIDIA’s 250W TDP can only be raised another 10% – to 275W – meaning that in TDP limited scenarios there’s not much headroom to play with. On the other hand with the stock voltage being so low, in clockspeed limited scenarios there’s a lot of room for pushing the performance envelope through overvolting. And neither of these options addresses the most potent aspect of overclocking, which is pushing the entire clockspeed curve higher at the same voltages by increasing the clockspeed offsets.

GTX 980 ended up being a very capable overclocker, and as we’ll see it’s much the same story for the GTX Titan X.

GeForce GTX Titan X Overclocking

| | Stock | Overclocked |
|---|---|---|
| Core Clock | 1002MHz | 1202MHz |
| Boost Clock | 1076MHz | 1276MHz |
| Max Boost Clock | 1215MHz | 1452MHz |
| Memory Clock | 7GHz | 7.8GHz |
| Max Voltage | 1.162v | 1.218v |

Even when packing 8B transistors into a 601mm2 die, the GM200 GPU backing the GTX Titan X continues to offer the same kind of excellent overclocking headroom that we’ve come to see from the other Maxwell GPUs. Overall we have been able to increase our GPU clockspeed by 200MHz (20%) and the memory clockspeed by 800MHz (11%). At its peak this leads to the GTX Titan X pushing a maximum boost clock of 1.45GHz, and while TDP restrictions mean it can’t sustain this under most workloads, it’s still an impressive outcome for overclocking such a large GPU.
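
For reference, the percentages quoted above fall straight out of the clock offsets; a trivial sketch of our own:

```python
# Overclocking offsets applied to our GTX Titan X sample.
# Sample-specific results; headroom will vary from card to card.

stock_core_mhz, core_offset_mhz = 1002, 200
stock_mem_mhz, mem_offset_mhz = 7000, 800   # effective GDDR5 data rate

print(f"Core:   {stock_core_mhz + core_offset_mhz}MHz "
      f"(+{100 * core_offset_mhz / stock_core_mhz:.0f}%)")        # 1202MHz, +20%
print(f"Memory: {(stock_mem_mhz + mem_offset_mhz) / 1000:.1f}GHz "
      f"(+{100 * mem_offset_mhz / stock_mem_mhz:.0f}%)")          # 7.8GHz, +11%
```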

The performance gains from this overclock are a very consistent 16-19% across all 5 of our sample games at 4K, indicating that we're almost entirely GPU-bound as opposed to memory-bound. Though not quite enough to push the GTX Titan X above 60fps in Shadow of Mordor or Crysis 3, this puts it even closer than the GTX Titan X was at stock. Meanwhile we do crack 60fps on Battlefield 4 and The Talos Principle.

The tradeoff for this overclock is of course power and noise, both of which see significant increases. In fact the jump in power consumption with Crysis is a bit unexpected – further research shows that the GTX Titan X shifts from being temperature limited to TDP limited as a result of our overclocking efforts – while FurMark is in-line with the 25W increase in TDP. The 55dB noise levels that result, though not extreme, also mean that GTX Titan X is drifting farther away from being a quiet card. Ultimately it’s a pretty straightforward tradeoff for a further 16%+ increase in performance, but a tradeoff nonetheless.

When NVIDIA introduced the original GTX Titan in 2013 they set a new bar for performance, quality, and price for a high-end video card. The GTX Titan ended up being a major success for the company, a success that the company is keen to repeat. And now with their Maxwell architecture in hand, NVIDIA is in a position to do just that.

For as much of a legacy as the GTX Titan line can have at this point, it’s clear that the GTX Titan X is as worthy a successor as NVIDIA could hope for. NVIDIA has honed the already solid GTX Titan design and coupled it with their largest Maxwell GPU, in the process putting together a card that once again sets a new bar for performance and quality. That said, from a design perspective GTX Titan X is clearly evolutionary as opposed to the revolution that was the original GTX Titan, but it is nonetheless an impressive evolution.

Overall then it should come as no surprise that from a gaming performance standpoint the GTX Titan X stands alone. Delivering an average performance increase over the GTX 980 of 33%, GTX Titan X further builds on what was already a solid single-GPU performance lead for NVIDIA. Meanwhile compared to its immediate predecessors such as the GTX 780 Ti and the original GTX Titan, the GTX Titan X represents a significant, though perhaps not-quite-generational 50%-60% increase in performance. However perhaps most importantly, this performance improvement comes without any further increase in noise or power consumption as compared to NVIDIA’s previous generation flagship.
