Size, Noise and Heat Output
The GTX 680 is a standard size for a high-end card at 25.5 cm long. It uses a double-decker heatsink system and the built-in fan is positioned off-centre, expelling hot air from the chassis.
The fan hardly makes any noise at all, largely because the chip keeps its heat output well under control. When idle, this card is very discreet, and any noise from the fan will be mostly covered up by the other components in your computer (hard drives, case fans, etc.).
The GTX 680 has an effective cooling system
AMD did a great job of pushing power use down in its Radeon HD 7000 series graphics cards. Nvidia too has made progress, particularly when its cards are idle: our test set-up drew 110 watts with a GeForce GTX 580 or GTX 570 installed, but just 89 watts with the GTX 680. That's quite a big saving, bringing Nvidia considerably closer to the 85 watts we measured with the Radeon HD 7970.
Nvidia is particularly proud of its new graphics cards!
Note that Nvidia hasn't equipped its Kepler series with a function similar to ZeroCore Power for pushing power use right down when the computer is on standby (with the screen off).
However, the Californian component-maker has come up with a way of reducing power use while gaming, by means of a utility supplied with the graphics card. The actual software supplied may vary depending on which version of the card you buy, but each card will come with something. Our test card came with Evga Precision X version 3.0.0b20, but Nvidia is apparently looking into offering some kind of generic utility for users to download.
This utility can be used to adjust various settings and, in particular, to set a user-defined cap on the maximum number of frames per second. Reducing the framerate means the graphics card no longer needs to use its full capabilities to display images in a game, which reduces power use proportionally, along with heat and noise output.
We tried limiting the framerate to 60 fps, and we found it made a clear difference to the card's power consumption:
Full FPS: no limit on framerate
60 FPS: framerate limited to 60 fps
Results given in watts of power used by the whole test computer
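The principle behind the framerate cap can be sketched with a minimal render-loop limiter. This is a generic illustration in Python, not the actual mechanism used by the supplied utility; `run_capped` and its parameters are our own invention:

```python
import time

def run_capped(render_frame, fps_cap=60, frames=120):
    """Run a render loop, sleeping as needed so it never exceeds fps_cap.

    When render_frame finishes early, the sleep keeps the GPU idle for the
    rest of each frame interval; that idle time is where the power,
    heat and noise savings come from.
    """
    frame_time = 1.0 / fps_cap
    start = time.perf_counter()
    for _ in range(frames):
        frame_start = time.perf_counter()
        render_frame()
        elapsed = time.perf_counter() - frame_start
        if elapsed < frame_time:
            time.sleep(frame_time - elapsed)
    # Effective framerate over the whole run, for checking the cap held
    return frames / (time.perf_counter() - start)
```

With a trivial `render_frame`, the loop settles at roughly the cap rather than running flat out, which is exactly the behaviour the power figures above illustrate.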
The GeForce GTX 680 is the first graphics card to use the new Kepler architecture, and comes loaded with a GK104 chip (1536 CUDA cores) clocked at 1 GHz and 2 GB of GDDR5 RAM at 1.5 GHz on a 256-bit bus. One slightly confusing thing about the GK104 is that, in the past, Nvidia has always used the number 4 (GK104) for mid-range graphics cards. Top-of-the-range models can usually be identified by a 10, like the GF110, used in the GeForce GTX 580 and 570. This seems to suggest that Nvidia could have another, higher-end Kepler-architecture chip up its sleeve for a later date.
Mark Rein (right), Vice President of Epic Games and co-creator of the Unreal series, was categorical at the product's presentation: one GeForce GTX 680 does as good a job as three GeForce GTX 580 cards ... at least in the Samaritan tech demo (Unreal Engine 3.9), which, we have to admit, looks pretty impressive.
So while the chip's name may not have been chosen all that wisely, performance levels are still sufficient to outstrip the AMD Radeon HD 7970 by just over 6%, with an average performance in our indexing system of 130. Note, however, that this is an average, and that the difference between the two cards is more noticeable in some games than others. Our full set of test results for the GTX 680 can be viewed and compared in the graphics card Face-Off.
Our performance index for the GTX 680 is 130, while the HD 7970 comes in at 122. This makes for an interesting improvement compared with the previous generation of products, as this new card gains 23% in performance compared with the GTX 580 (index 105) and 41% compared with the GTX 570 (index 92).
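The percentage gains quoted here are straightforward ratios of the index values, and can be checked in a few lines (the `gain` helper is ours, purely for illustration):

```python
# Performance indices from the review: GTX 680 = 130, HD 7970 = 122,
# GTX 580 = 105, GTX 570 = 92.
index = {"GTX 680": 130, "HD 7970": 122, "GTX 580": 105, "GTX 570": 92}

def gain(card, baseline):
    """Percentage performance gain of `card` over `baseline`."""
    return 100 * (index[card] / index[baseline] - 1)

print(round(gain("GTX 680", "HD 7970"), 1))  # 6.6, the "just over 6%"
print(round(gain("GTX 680", "GTX 580"), 1))  # 23.8, i.e. roughly 23%
print(round(gain("GTX 680", "GTX 570"), 1))  # 41.3, i.e. roughly 41%
```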
To achieve these performance levels, Nvidia uses (among other things) a function called GPU Boost. This is comparable to the Turbo modes seen in AMD and Intel processors, as GPU Boost increases the clock speed of the graphics chip by a varying amount in relation to workload and within the limit of the card's TDP (Thermal Design Power). This increase is automatic, happens on the fly and can't be disabled by users. GPU Boost doesn't rely on a function in the driver but instead uses a sensor built onto the chip.
In practice, in Metro 2033 the clock rate varies between 1058 and 1097 MHz, with a slight peak at 1110 MHz when loading up the test scene.
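A governor of this general kind can be modelled as a simple feedback loop. The following is a toy sketch, not Nvidia's actual algorithm: the 13 MHz step size is an assumption on our part, while 195 W is the GTX 680's official TDP and 1006 MHz its base clock:

```python
def boost_clock(power_draw_w, base_mhz=1006, max_mhz=1110, tdp_w=195, step=13):
    """Toy model of a GPU Boost-style governor.

    Raise the clock in small steps while the board's measured power draw
    stays under its TDP; step back down when it doesn't. The real chip
    does this on the fly from an on-die sensor, with no driver involvement.
    """
    clock = base_mhz
    for draw in power_draw_w:  # one power sample per control tick
        if draw < tdp_w and clock + step <= max_mhz:
            clock += step
        elif draw >= tdp_w and clock - step >= base_mhz:
            clock -= step
    return clock
```

Fed a run of samples comfortably below the TDP, a loop like this climbs to its ceiling and sits there, which matches the 1097 to 1110 MHz behaviour we observed; fed samples at or above the TDP, it stays pinned at the base clock.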
GPU Boost: Consistent Performance from Card to Card?
This is where things get a little complicated. First, whatever Nvidia may say, there is a cap on the maximum frequency: 1110 MHz. Our test card never exceeded this frequency, but almost always reached it, with the clock consistently varying between 1097 and 1110 MHz and a few drops to around 1050 MHz.
However, not all chips are identical. In fact, it's perfectly feasible that some chips won't manage to spend quite as much time at 1110 MHz (they may heat up more due to greater power loss or a less well-ventilated case, for example). Worse still, they may not get very far past 1058 MHz, which, after all, is the only frequency Nvidia actually guarantees.
So while our test card averaged performance 6% higher than the Radeon HD 7970, it's perfectly possible that some buyers will end up with cards that don't deliver quite the same level of performance. Performance may therefore be reduced ... but by how much?
GeForce GTX 680 Performance Without GPU Boost
To find out, we adjusted various parameters so as to hold the card between 1006 and 1012 MHz (since there's no way of switching GPU Boost off), whatever the game and load level. These minimum clock speeds let us test the worst-case performance of the GTX 680. In these conditions, performance is logically lower, and the average performance index drops from 130 to 127.
The difference in performance between the GTX 680 and the Radeon HD 7970 also drops from 6% to 4% on average. For our tests in 1920 x 1200 resolution with filters active, the 7% advantage in favour of the GTX 680 drops to 3%.
Results in frames per second.
Tests carried out in 1920 x 1200 pixels with anti-aliasing active.
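As a quick sanity check, the averages quoted above follow directly from the index values measured with and without GPU Boost in play:

```python
# Index values from the review: 130 (GTX 680 with GPU Boost working),
# 127 (GTX 680 held at its base clock), 122 (Radeon HD 7970).
gtx680_boost, gtx680_base, hd7970 = 130, 127, 122

lead_with_boost = 100 * (gtx680_boost / hd7970 - 1)  # lead over the 7970
lead_without = 100 * (gtx680_base / hd7970 - 1)      # worst-case lead

print(round(lead_with_boost, 1))  # 6.6, the ~6% average advantage
print(round(lead_without, 1))     # 4.1, the ~4% worst-case advantage
```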
Any users who are left feeling a little disappointed can always use the utility supplied with their graphics card (see above) to raise the TDP limit.
Evga Precision X: this kind of utility, supplied by each GTX 680 manufacturer, allows users to limit the framerate (see above), boost the TDP (Power Target) or overclock the card.
In the future, Nvidia may offer a generic utility that users can download.
DirectX 11.1, PCI-Express and Conclusion
DirectX 11.1 and PCI-Express 3.0 are both supported by the Nvidia 600 series. For DirectX 11.1, you'll have to wait for the release of Windows 8 to get your hands on the first games that make use of it. Don't expect miracles from PCI-E 3.0 either, as the difference between it and PCI-E 2.0 is minute on current generations of AMD and Nvidia cards. The GTX 680 is, however, backwards-compatible and works perfectly well with PCI-E 2.0 motherboards.
All in all, the GeForce GTX 680 brings several interesting improvements—power use has been reduced, noise levels are excellent and performance levels allow Nvidia to edge just ahead of AMD's Radeon HD 7970. There is, however, one major downside to this card, as it's impossible to guarantee performance levels from model to model. In fact, performance can vary by a maximum of 4% between different versions of the same card.
With prices from £420 to £450, this Nvidia GeForce GTX 680 card is on a par with AMD's Radeon HD 7970. Let's hope that prompts AMD to start a price war!
|View performance index table|