2015-06-01



NVIDIA Increases Pressure On AMD, Releases GM200-Based GeForce GTX 980 Ti.

Nvidia’s GeForce GTX 980 already trumps the Radeon R9 290X in virtually every gaming benchmark—solidly in some tests, by a hair in others—while also running far cooler, quieter, and more efficiently than AMD’s aging flagship.


But with a new generation of powerful Radeons buoyed by cutting-edge high-bandwidth memory (HBM) and a rumored new Fiji GPU right around the corner, Nvidia couldn’t stand still. The Titan X already showed what the GM200 chip can do: yes, it carries an exorbitant price tag of $999, but it remains the most powerful single-GPU graphics card on the market, and it integrates nicely into everything from super towers to compact mini-ITX builds.

Virtual reality raises the stakes further. When Oculus VR revealed the recommended PC specs for the forthcoming consumer release of its highly anticipated Oculus Rift virtual reality headset, the graphics card requirements were shockingly reasonable. Sure, the GeForce GTX 970 and Radeon R9 290 are no slouches in the eye-candy department, but delivering high-resolution visuals to two displays at 90 frames per second takes a lot of firepower.
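How much firepower? A rough comparison of raw pixel throughput makes the point. The sketch below assumes the consumer Rift’s reported 2160×1200 resolution across its two 90Hz displays; real VR render targets are larger still, since engines render extra pixels to compensate for lens distortion, and that overhead varies by SDK.

```
#include <cstdio>

// Back-of-the-envelope pixel throughput: a 1080p/60 monitor versus the
// consumer Oculus Rift's reported two 1080x1200 panels refreshed at 90Hz.
int main() {
    const double monitor = 1920.0 * 1080.0 * 60.0; // pixels/second at 1080p60
    const double rift    = 2160.0 * 1200.0 * 90.0; // pixels/second, both eyes, 90Hz

    printf("1080p60 monitor: %.0f Mpix/s\n", monitor / 1e6);
    printf("Rift panels:     %.0f Mpix/s (%.1fx)\n", rift / 1e6, rift / monitor);
    return 0;
}
```

Even before distortion overhead, that’s nearly double the pixel rate of a 1080p monitor, sustained without ever dropping below 90 frames per second.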


Nvidia announced a suite of virtual reality-specific hardware and software optimizations called VR Direct to coincide with the launch of its Maxwell-based GTX 980 and 970 GPUs last September. Now it’s giving headset makers and game/app developers a way to leverage all of those features, as well as some new ones, in the form of the GameWorks VR software development kit (SDK). Announced earlier today alongside the GTX 980 Ti GPU (read our review here), GameWorks VR is basically a bundle of APIs, software libraries, and features. But how will developers create top-tier VR games that don’t require the latest and greatest graphics cards (like the newly announced GTX 980 Ti) to run at the blistering frame rates required to avoid the dreaded VR nausea?

Nvidia may have stumbled onto the answer with its multi-resolution shading (MRS) feature, a new GameWorks VR middleware technology available to developers. MRS takes advantage of a quirk in the way VR headsets render images to drastically reduce the graphics performance needed to create virtual scenes, which could effectively let VR games run on less powerful hardware. Normally, graphics cards render full-screen images as a straight-ahead, rectangular scene, applying the same resolution across the entire image; think of how PC games appear when you’re playing them.

MRS spends that effort unevenly instead. The center of the image, where your eyes primarily focus in a VR headset and where the image isn’t distorted, is rendered at full, native resolution. The edges of the screen, however, are rendered at a reduced quality to take advantage of VR’s necessary warping and distortion. “We’re going to cut down the resolution [at those edges], we’re going to cut down the scaling, and effectively use fewer pixels,” says Nvidia distinguished engineer Tom Peterson. The compressed image is rendered in parallel with the full-resolution center region on Nvidia’s Maxwell GPU architecture (and yes, Nvidia says a recent 900-series GeForce graphics card or a GTX 750 Ti is required for multi-resolution shading), then re-warped to appear through the VR headset’s lenses with no apparent loss of image fidelity. Peterson told PCWorld that this can mean “between 50 and 100 percent less pixel work.”
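Nvidia hasn’t published the exact viewport layout or scale factors, but a minimal sketch with assumed numbers shows why the savings are so large. Suppose a 3×3 viewport grid per eye in which the middle 60 percent of each axis keeps native resolution and the outer cells are shaded at half resolution per axis (both figures are illustrative guesses, not Nvidia’s parameters):

```
#include <cstdio>

// Illustrative multi-res shading arithmetic for one eye's view, split
// into a 3x3 viewport grid: the center cell keeps full resolution, the
// outer cells are shaded at a reduced per-axis scale. The grid fraction
// and scale factor are assumptions, not Nvidia's actual parameters.
int main() {
    const double width      = 1080.0; // per-eye panel width (consumer Rift)
    const double height     = 1200.0; // per-eye panel height
    const double centerFrac = 0.6;    // fraction of each axis kept at full res (guess)
    const double edgeScale  = 0.5;    // per-axis resolution scale for outer cells (guess)

    const double fullPixels   = width * height;
    const double centerPixels = (width * centerFrac) * (height * centerFrac);

    // Outer cells are scaled in both axes, so they cost edgeScale^2 of
    // their full-resolution pixel count.
    const double edgePixels = (fullPixels - centerPixels) * edgeScale * edgeScale;
    const double mrsPixels  = centerPixels + edgePixels;

    printf("full res:  %.0f pixels\n", fullPixels);
    printf("multi-res: %.0f pixels (%.0f%% of full)\n",
           mrsPixels, 100.0 * mrsPixels / fullPixels);
    return 0;
}
```

With those guesses, MRS shades roughly half the pixels per eye, in line with the low end of Peterson’s range.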

In a closed-room Nvidia demo on an Oculus Rift, Peterson let me compare a scene rendered with MRS and without it, enabling and disabling the feature on the fly. Even more insane: the reduced-quality edge regions truly aren’t noticeable in the final image unless the compression is cranked to extreme levels.

That’s big news for VR developers, and for gamers who want to get into the virtual reality experience without spending the equivalent of a college education on a graphics card. “So if you’re a game developer, this means that you can have higher quality games, or that you can have your games run on more GPUs,” says Peterson.

MRS isn’t GameWorks VR’s only trick, either. With Direct Mode, the Nvidia graphics driver recognizes the headset as a VR display rather than a standard desktop monitor, providing a more seamless user experience. Interested developers can now request access to an alpha version of the SDK, which the company says is already in the hands of big names like Oculus, Valve, Epic Games, HTC, and CCP Games. Virtual reality is one of the most exciting developments in the PC ecosystem in years, and if Nvidia’s performance claims for multi-resolution shading prove true, it could genuinely be a killer feature for the fledgling VR field.

As for the GTX 980 Ti itself, a few other interesting specifications: it packs 6GB of onboard GDDR5 memory running at an effective 7GHz on a 384-bit memory interface, exactly the same memory subsystem as the Titan X, and a wider bus than the 4GB GTX 980’s 256-bit interface. And while the base 12.0 version of DX12 in Windows 10 includes the low CPU overhead, asynchronous compute, and closer-to-the-metal control options everybody’s so excited about, supplementary feature levels add extra goodies, and the 980 Ti’s second-generation Maxwell silicon supports the highest of them, feature level 12_1.
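Those memory specs translate directly into bandwidth. The arithmetic, using only the numbers quoted above:

```
#include <cstdio>

// Peak memory bandwidth from the quoted specs: a 384-bit bus at an
// effective 7Gbps per pin. (GDDR5 moves four bits per pin per command
// clock, which is how a 1.75GHz clock becomes "7GHz effective.")
int main() {
    const double busBits    = 384.0; // memory interface width in bits
    const double gbpsPerPin = 7.0;   // effective per-pin data rate
    const double gbPerSec   = busBits / 8.0 * gbpsPerPin; // bandwidth in GB/s

    printf("Peak memory bandwidth: %.0f GB/s\n", gbPerSec); // prints 336 GB/s
    return 0;
}
```

That 336GB/s matches the Titan X and handily outpaces the GTX 980’s 224GB/s (a 256-bit bus at the same 7Gbps).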

Ultra-high resolutions consume far more memory than gaming at 1080p or even 2560×1440, especially as you ramp up anti-aliasing options, but 6GB of RAM should take everything you throw at it and come out smiling and still hungry. The reality is that 6GB remains a more than adequate frame buffer for medium- to high-quality 4K gaming (though expect that to become a baseline requirement in the next few years), so for the vast majority of enthusiasts this shouldn’t be a deal breaker.

The performance is there to match. Though the 980 Ti is a wee bit tamer than its big Titan X sibling, pretty much everything is, AMD’s beastly dual-GPU Radeon R9 295×2 aside in games with proper CrossFire support. For those keeping score, this is Nvidia’s fifth unanswered graphics card release, and its price and performance put the company in a strong position against the expected imminent launch of AMD’s Radeon 390X.

I detail the test system in full in PCWorld’s build guide, but here’s the CliffsNotes version: an Intel Core i7-5960X cooled by a Corsair Hydro Series H100i closed-loop water cooler, chosen to eliminate any potential for CPU bottlenecks affecting graphical benchmarks. Nvidia’s MFAA technology can help boost your frame rates at home, but I disabled it during testing to avoid giving GeForce cards an unfair advantage.

A note on GameWorks: it’s Nvidia-created middleware that adds features and technologies with performance optimized for GeForce graphics cards, but, naturally, not for AMD Radeon cards. That’s been the cause of much hand-wringing, most recently when The Witcher 3 launched with Nvidia’s HairWorks technology, allegedly (but not really) crippling performance on AMD hardware. (ExtremeTech has a superb overview of all the GameWorks concerns if you’re interested.) The threat of GameWorks-packing titles that run well on GeForce GPUs but not Radeons feels overblown for standard games; given the nascent state of virtual reality and the potential performance benefits of multi-resolution shading, though, VR developers specifically targeting GeForce cards seems like a very real possibility.

On to the games. One critically acclaimed title in our benchmark suite (though not acclaimed by us) offers an optional Ultra HD Texture pack that can murder your graphics card’s frame buffer at high resolutions. Speaking of hammering hardware, Sleeping Dogs: Definitive Edition may be a recent remake of an older (and surprisingly great) game, but it still chews up and spits out graphics cards for breakfast. Bioshock: Infinite is getting a bit long in the tooth, and virtually every graphics card available today handles it wonderfully, but it’s nevertheless a fine representative of the still-popular Unreal Engine 3. (UE4 can’t come fast enough, though.) Dragon Age: Inquisition is a gorgeous, massive game, one of the best of 2014, in fact.

It runs on the same Frostbite 3 engine that powers Battlefield 4, but despite the close ties EA’s technical team enjoys with AMD, and heavy AMD promotion for the game, DAI doesn’t appear to play nice with the R9 295×2’s dual GPUs, seemingly utilizing only one at a time whether you’re running AMD’s WHQL or beta drivers.

At 4K, you won’t be able to crank the eye candy to Ultra settings; stick to High, lest your game devolve into a slideshow. That’s perfectly acceptable for many gamers, but if it’s not for you, you could disable anti-aliasing entirely (smoothing out jaggies isn’t as necessary on such a pixel-packed screen, and AA comes with a sizeable performance hit) or invest in an Nvidia G-Sync-compatible monitor, which syncs the refresh rates of your GPU and your display to kill screen tearing and stuttering.

Or you could always turn to an SLI or CrossFire setup, if you’re willing to deal with the headaches inherent in a multi-card solution in exchange for more raw firepower.

The bottom line: this graphics card brushes up against the $1000 Titan X’s lofty performance for $350 less, with a free copy of Batman: Arkham Knight thrown in, essentially eliminating the practical need for PC gamers to consider the pricier card whatsoever. Beyond gaming, the Titan X’s lack of double-precision floating point performance severely limits its appeal for many GPU compute tasks, though that card still excels at single-precision work.

Sure, if you’re running a multi-monitor setup with several 4K displays for 8K or 12K gaming, you’re going to want the larger 12GB frame buffer of the Titan X, plus an SLI setup that essentially dedicates a Titan X to each screen.
