Official - NV News Reviews the GeForce GTX 580
Article by NV News Editor In Chief John "ragejg" Grabski
November 9, 2010
In the hobby/tech/enthusiast world we've become familiar with the term “refresh”. Often used to attach a new and positive connotation to a product or service that has previously underperformed in one way or another, this “re-do” of sorts has become a welcome sight with regard to certain technologies over the years. Successful refreshes of recent products include Ford's Mustang (a model-year 2011 refresh put a lustworthy and fuel-efficient standard 300 hp V6 under the hood, replacing an outdated 4.0L unit and garnering the V6 model more attention than it has ever seen)
... and Microsoft's Windows 7, an operating system where optimization of OS processes and resources, as well as user-interface tweaks, made this well-publicized refresh an attractive option versus much of the PC world's view of its predecessor, Windows Vista.
Both of these product refreshes obviously had a couple of things in common: an increase in efficiency and improvements to ease of use. GPU pioneer NVIDIA is no stranger to product refreshes. From the GeForce 256 DDR to the 7900 series, NVIDIA proved to the GPU enthusiast market that tweaks to existing technologies that improved efficiency and ease of use could make for marketable improvements in user experiences.
As of this writing in late 2010, NVIDIA has been marketing and selling its first generation of Fermi-based GPUs with limited success. The flagship model GTX 480, although a popular discussion subject on GPU-related forums, communities and review sites, has not exactly lit up the sales charts. Given that it is a “halo” GPU, it's not expected to be a volume seller anyway, but a few nagging issues such as noise, heat output and power consumption have made it an easy target for rival AMD, which markets its GPU products in a way that attempts to play off of Fermi's inherent weaknesses. That's not to say that NVIDIA can't do the same thing right back to AMD, as Fermi-based GPUs have proven to perform DirectX 11 features such as hardware tessellation more efficiently than comparable AMD GPUs.
NVIDIA is ready to introduce a new “halo” GPU which is a refresh of the original Fermi tech. Called the GTX 580, this improved chunk of silicon was engineered to increase efficiency and improve ease of use for high-end gamers and PC hardware enthusiasts. To quote NVIDIA, the GeForce GTX 580 is “designed for gamers who want to enjoy their games at the maximum graphics settings and screen resolutions, with high levels of AA enabled”. To me, that sounds like this GPU could likely run every game out there at 1920x1080 to 2560x1440 with at least 4x AA. Well, since the GTX 580 is coming to market priced at $499 USD, I sure hope it can do at least that, if not a little more.
Shown alongside the other Fermi-based GPUs with gaming capabilities, the GTX 580 appears at first glance, on a superficial level, to be simply a natural progression... evolution over revolution. The improvements seem to continue the lockstep of performance gains seen when going from GTS 450 to GTX 460/465 to GTX 470 and so on. It is worth noting, however, that the GF104-based GTX 460 cards seemed to offer “more efficient” performance than their GF100 counterparts: they ran cooler, consumed less power, and when overclocked properly, a GF104 GTX 460 could almost catch a GTX 470 in many games. So has NVIDIA taken notice of some of the efficiencies of GF104, and will they bolt some of that logic onto GF100 to make for a well-received refresh? And might there be other improvements, ones that don't show up on paper?
A look at NVIDIA's Geforce GTX 580
To be honest, on the surface the GTX 580 doesn't look too different from previous high-end NVIDIA GPUs, and that's not a bad thing. NVIDIA's reference card is an ominous black and green slab that looks like it means business, business, ... and possibly even more business.
The 10.5”-long GTX 580 (same board length as the GTX 480) has a two-slot design, features two SLI connectors for up to Tri-SLI capability, and receives auxiliary power from an 8-pin plus a 6-pin connector to satisfy the 244W TDP specification of the 512-SP GF110. Again, par for the course.
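As a sanity check on that connector layout, the standard PCIe per-source power limits can be summed against the quoted 244W TDP. A rough sketch (board partners may budget differently):

```python
# Standard PCIe power-delivery limits: x16 slot = 75 W,
# 6-pin connector = 75 W, 8-pin connector = 150 W.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

available_w = SLOT_W + SIX_PIN_W + EIGHT_PIN_W  # 300 W total budget
tdp_w = 244                                      # GTX 580 rated TDP

headroom_w = available_w - tdp_w  # margin left at rated TDP
```

By this accounting the 6+8-pin arrangement leaves roughly 56 W of headroom at stock, which helps explain why the card overclocks without exotic power cabling.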
It is worth noting that unlike the GTX 480, NVIDIA's GTX 580 lacks the PCB ventilation cut-out holes seen in the older card. This appears to signal NVIDIA's confidence in the heat output of the GTX 580. Furthermore, the power circuitry looks quite a bit different than what is present on the GTX 480. Since the newer GPU has a few new features concerning power/voltage regulation, this is understandable. As such, the GTX 580 PCB appears almost sparsely laid out compared to previous high-end graphics cards.
GTX 580 with fan shroud removed
Fan shrouds are usually unremarkable, and the GTX 580's is almost no different. However, it appears that more thought has been put into the design of this piece (more than what I saw in use on a GTX 465, anyway), as there seem to be some pieces serving a sort of “air guide” function that could better remove warm air from the card. NVIDIA has mentioned that the angled end of this piece has proved to help in removing warm air as well.
The board's non-GPU heatsink/mounting plate has a design that looks similar to previous GTX 4XX cards. Since the GTX 580 has a lower TDP than the GTX 480, it can be assumed that the thermal management supplied by this part will be more than adequate for cooling the power circuitry and the GDDR5 memory. The big story here is the fan. NVIDIA is proud of the new fan on the GTX 580, which is supposed to run at a lower pitch and with less vibration than previous GPU fans. Another benefit that will be covered later is a new fan control algorithm that is supposed to “smooth out” transitions between low and high fan speeds. NVIDIA measured noise levels on this new GPU while running Crysis Warhead at 1920x1200, 4xAA/16xAF using Release 260 drivers: the 580 registered 47 dBA, with the GTX 285, 280 and 480 splitting eardrums (hint: sarcasm) at 47.5, 48 and 52 dBA, respectively. Great acoustics for a high-end NVIDIA GPU? Where do I sign up?
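For context on those numbers, the decibel scale is logarithmic, so the 5 dBA gap between the GTX 580 and GTX 480 is larger than it looks. A quick sketch of the math, using NVIDIA's quoted figures:

```python
import math

# NVIDIA's quoted Crysis Warhead load-noise figures.
gtx580_dba = 47.0
gtx480_dba = 52.0

# Sound intensity ratio for a decibel difference: 10^(delta_dB / 10).
# A 5 dB gap corresponds to roughly 3.2x the sound intensity.
intensity_ratio = 10 ** ((gtx480_dba - gtx580_dba) / 10)
```

Perceived loudness doesn't track intensity exactly, but by the common intensity measure the GTX 480 is putting out about three times the acoustic energy of its successor.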
GPU Heat Sink
Wait, where are the heatpipes? What, does the GTX 580 run THAT cool? No, not quite. But look closer at the chip side and you'll see a vapor chamber in use. Hey, that's pretty cool (points for pun usage, please?)! ... And I had thought that new high-end GPUs would start sprouting heatpipes ON their heatpipes, with what amounts to a bundle of snakes or tentacles emerging from the board (and I shall dub thee... Cthulhu!). Nope. It looks like NVIDIA went the “simple, elegant, practical” route on this one. The vapor chamber makes use of an evaporator and a condenser to facilitate liquid return, and the air blowing over the heat sink takes care of business as all good heat sinks are supposed to. A preliminary thumbs-up to NVIDIA from the author is in order.
NVIDIA's vapor chamber cooling
More Information on GTX 580:
NVIDIA was very clear when describing some of the technical aspects of the new GTX 580. The GF110 may have started life as a GF100, but the end product is quite different. Here's their take on just what the GF110 is:
To improve performance/watt, GTX 580 was re-engineered down to the transistor level. We evaluated every block of the GPU, using lower-leakage transistors on less timing sensitive processing paths, and higher speed transistors on more critical processing paths. A very large percentage of transistors on the chip were modified. Through this redesign, we were able to achieve faster clocks with less power.
As a result of this effort, we managed to deliver a product that runs at higher clocks with 512 CUDA Cores and 16 SMs, yet still generates less power than GTX 480.
That sounds pretty good, but is it just PR-speak or might these advancements be able to couple with Fermi's already solid tessellation and geometry performance and raw power to deliver a GPU that pushes pixels but does not warm the room?
SLI, PhysX, 3D Vision & Surround Gaming:
NVIDIA is aware that most high-end GPU consumers already know about most of these features, and with the primary focus of the GF110 GPU being performance and power consumption/heat efficiency, I think that NVIDIA has not keyed quite so much on 3D Vision, Surround Gaming and PhysX. Obviously, with its advanced architecture, the GTX 580 should be able to run PhysX-enabled games in single-GPU mode with little drop in performance. NV News will have a more in-depth look at GF110 PhysX performance in a later review. As far as SLI is concerned, NVIDIA is proud enough of the GTX 580's SLI performance to toot their horns a little. They claim that the GTX 580 offers “tremendous” SLI scaling, and showed a graph charting a number of newer games' scaling performance relative to two HD 5870s in CrossFire. With the HD 5870's not-very-good CrossFire performance in NVIDIA's direct firing range, even with most of the listed games exhibiting SLI scaling around 1.4x to 1.6x (some below, some above), it still looked a lot more desirable than the AMD alternative.
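To put those scaling factors in perspective, a two-card scaling of 1.4x to 1.6x means each GPU is contributing 70 to 80 percent of its single-card performance. A quick sketch of that conversion:

```python
# Per-GPU efficiency implied by an N-way multi-GPU scaling factor.
# E.g. 1.5x scaling across two cards means each card delivers
# 75% of its single-card performance.
def sli_efficiency(scaling: float, num_gpus: int = 2) -> float:
    return scaling / num_gpus * 100

low = sli_efficiency(1.4)   # 70% per GPU at the low end of the claim
high = sli_efficiency(1.6)  # 80% per GPU at the high end
```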
For several generations, NVIDIA graphics cards have featured GPU throttling techniques to keep cards from overheating. With the GTX 580 there are a number of new features that expand on this idea in new ways. There is some new circuitry on the board designed to monitor and adjust current and voltage on each 12V rail of the card.
GTX 580 power monitoring circuitry
The graphics driver also plays a role, helping to dynamically adjust voltages in high-stress situations. Personally, I didn't see this feature coming, although I did recently spot some Chinese Radeon HD 5830s sporting a “Worry-Free” moniker and employing similar on-the-fly voltage protection. If this is a measure to increase stability and product life, that's great, I'm all for that. But if it can also help attain more stable overclocks, I'm doubly for that.
If GTX 480 is "The 2010 Tank", is the GTX 580 "The 2010 1/2 Tank"?
512 CUDA Cores. Finally. This (according to the jury that is called the Internet) is what Fermi was supposed to be, right? Along with the jump in CUDA cores comes an increase in SM/PolyMorph Engine count, up one to 16, and an increase of four texture filtering units, to 64, over the older GTX 480. Memory speed has increased by around 300 MHz (indicating improvements in the memory controller), and core/shader clock speeds have also increased (by 72 MHz over the GTX 480). Pixel fillrate is fast approaching 40 GP/s and texel fillrate is approaching 50 GT/s. This is an astounding amount of tessellation, pixel and geometry-processing horsepower, and appears to further build on the GTX series' 3D gaming and multi-monitor gaming capabilities, where raw power is needed to handle the super-high resolutions. So, YES. This is the Fermi that we wanted. So shouldn't this be heavier, bigger and hotter than a GTX 480? Maybe, but obviously NVIDIA has worked hard to take this 3-billion-transistor GPU and tweak/optimize where they could in order to deliver a Fermi refresh that is faster, yet consumes less power and runs cooler.
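Those fillrate figures fall straight out of the clock speed and unit counts. A back-of-the-envelope calculation, assuming the GTX 580's published 48 ROPs and 384-bit memory bus:

```python
core_mhz = 772   # GTX 580 core clock
rops = 48        # raster output units (published GTX 580 spec)
tmus = 64        # texture filtering units, per the text

# Theoretical peak fillrates: units times clock.
pixel_fill_gps = core_mhz * rops / 1000   # ~37.1 GP/s, "approaching 40"
texel_fill_gts = core_mhz * tmus / 1000   # ~49.4 GT/s, "approaching 50"

# Memory bandwidth: 4008 MHz effective GDDR5 on a 384-bit bus.
mem_bw_gbs = 4008 * 384 / 8 / 1000        # ~192.4 GB/s
```

The slightly round numbers in the article ("approaching 40" and "approaching 50") line up with these theoretical peaks.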
It's time to kick the tires and see what this “Tank with a Weapons Upgrade” GPU can do in some gaming and benchmarking scenarios.
The test system for the GTX 580 was previously designed around middle-of-the-pack GPUs like the GTS 450, GTX 460/465 and Radeon HD 4850/4870/4890 and 5830. As such, the system components do not represent the absolute ideal environment for a GTX 580. The Tuniq case is a bit small, the power supply is no hot-rodded Gold unit, and the LCD is a relatively pedestrian 24” 1920x1080 unit. BUT, there are plenty of potential GTX 580 purchasers out there with similar hardware. Some have not upgraded their case, power supply or monitor yet, and will likely run their new card with components similar to these for a while before they finish their upgrade cycle. At least CPU limitation won't be an issue with six AMD cores humming along full-time at 4 GHz.
AMD Phenom II x6 1055t, 4 GHz at 1.4625 vcore
KingWin XT-1264 CPU cooler with extra Cooler Master 120mm fan
Asus M4N98TD EVO nForce 980a SLI AM3 ATX motherboard
2x2 GB A-Data DDR3-1600 at 1510 MHz, 9.0, 9, 9, 24, 27, 1T
Antec EarthWatts EA650 650W power supply with one 120mm internal fan
Sunbeam Tuniq 3 ATX mid-tower case with 2 mounted 120mm fans and one 120mm fan blowing across the top of the graphics card and across the heatpiped northbridge
Mushkin Enhanced Callisto Deluxe (Sandforce SF-1222) 60 GB boot drive
2x Western Digital Caviar 16 SE WD2500KS 250 GB SATA2 drives in RAID 0
LG DVD/R/RW drive
Samsung SyncMaster B2430 24” 1920x1080 LCD
KingWin silver compact laptop-profile keyboard
Microsoft Sidewinder x5 mouse
Altec Lansing ADA885 4.1 THX-Certified speaker/sub setup
Windows 7 Ultimate SP1
Graphics card, driver, and driver settings:
NVIDIA Reference GeForce GTX 580 – 772 MHz Core / 1544 MHz Shader / 4008 MHz Memory (Effective)
Forceware Driver Version 262.99
60 Hz Refresh Rate
High Quality Texture Filtering
Gamma Correction Enabled
Ambient Occlusion Enabled - Quality
Antialiasing – Transparency: Multisample
Test system with the GTX 580 installed
Obviously it's a very tight fit for a large video card, but given that this case is adequately cooled and ventilated enough to help a 1.4625V, 4 GHz X6 chip idle at 36 C and peak at 55 C during Prime95 torture testing, everything should be kosher. So, this card will fit in mid-tower cases, but it'll be a little tight, and will likely require very good case cooling.
Metro 2033 Benchmark
Lost Planet 2 Benchmark Test B
Battlefield: Bad Company 2 Gameplay
HAWX 2 Benchmark
Just Cause 2 Benchmark
Far Cry 2 Benchmark
Unigine Heaven (Extreme Tessellation)
Stone Giant (High Tessellation)
3DMark Vantage (Performance Preset)
Benchmark results for both synthetic benchmarks and game tests are averages of multiple runs. Most gaming tests were performed at 1920x1080 with 4x Antialiasing and 16x Anisotropic Filtering enabled. Where applicable, higher AA modes are tested to assess their performance impact.
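The run aggregation described above can be sketched as a small helper (the fps numbers here are illustrative only, not results from this review):

```python
# Summarize repeated benchmark runs into min/max/average figures,
# the format used for results throughout this review.
def summarize(runs):
    return {
        "min": min(runs),
        "max": max(runs),
        "avg": sum(runs) / len(runs),
    }

# Hypothetical example: three runs of one test.
stats = summarize([46.2, 46.8, 46.5])
```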
Regarding Performance Testing:
We had planned on comparing the GTX 580 to a GTX 480 or other high-end card, but could not secure one in time. The only other cards in the author's possession at this time are a GTX 460 768MB and a Radeon HD 5830, and neither one of those can really serve as much of a performance comparison. If we obtain a 480 or similar card soon after this review is published, we will add to the review. What we did do in applicable cases, however, was investigate the performance hit of the various levels of Antialiasing that should be available to the user of a high-end GPU such as this. Suffice to say, the depth in this review might be in a different area than some of the other reviews out there, and it is our hope that you the reader find some benefit in it.
Metro 2033 (THQ)
This heavily system-intensive benchmark was tested using an average of three runs of the in-game benchmark in DX11 mode (with tessellation ON) at 1920x1080, 16x AF, with both available Antialiasing options (4xAA and AAA), in several different settings configurations. Note: GPU PhysX was set to auto-select.
Setting A: Quality: Very High; Advanced PhysX: Enabled; DOF: Enabled
Setting B: Quality: Very High; Advanced PhysX: Disabled; DOF: Enabled
Setting C: Quality: Very High; Advanced PhysX: Disabled; DOF: Disabled
Setting D: Quality: High; Advanced PhysX: Disabled; DOF: Disabled
With the NVIDIA Control Panel setting for Antialiasing set to Application select, Metro 2033 used its own AA method for the AAA setting, and this resulted in higher performance than with 4x Multisampling. Whatever the case, even the 25.67 fps average for setting A (all the bells and whistles, I mean ALL of them) at 1920x1080 with 4x AA is a huge result. In my opinion, the pacing of this game (a bit Halo-like) makes it more tolerable at some of the lower framerates, so these results for the most part indicate an almost across-the-board playability, although the real sweet spot is the Very High setting, PhysX disabled, DOF off (setting C), and AAA, resulting in a 46.5 fps average. This game looks amazing, and the GTX 580 delivers a breathtaking experience.
Lost Planet 2 (Capcom)
Utilizing the MT Framework engine, Lost Planet 2 has continued the cutting-edge graphical qualities introduced with the previous game in the series, Lost Planet: Extreme Condition. Previous NV News reviews have utilized other MT Framework titles (the aforementioned Lost Planet 1 and Resident Evil 5), but the DirectX 11 features of this new title make it a must for any modern DX11 GPU review.
Again, I have investigated several available AA modes. Global settings:
Benchmarking Test: Test B
Motion Blur: On
Shadow Detail: High
Texture Detail: High
Rendering Level: High
DirectX 11 Features: High
Anisotropic Filtering: 16x
While AA isn't quite free yet, it is worth noting that just 4x AA itself provides great image quality to start out with, and going all the way to 32x CSAA decreases performance by an almost insignificant level at this resolution. With 4x AA providing 51 fps and 32x CSAA providing 45.7, it is pretty much up to the user and their own preference of AA mode to decide what offers the best visual fidelity to them. Again, a stellar performance by the GTX 580 in a boundary-pushing title.
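To put that AA cost in percentage terms, a quick sketch using the quoted framerates:

```python
# Lost Planet 2 results quoted above: 4x AA vs 32x CSAA at 1920x1080.
fps_4x = 51.0
fps_32x_csaa = 45.7

# Relative performance cost of stepping from 4x AA up to 32x CSAA.
aa_cost_pct = (fps_4x - fps_32x_csaa) / fps_4x * 100  # roughly 10%
```

About a 10% framerate cost for the jump from 4x to 32x CSAA backs up the "almost insignificant" characterization.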
Battlefield: Bad Company 2 (DICE/EA)
This is somewhat of a first-generation DX11 title in that it doesn't use tessellation, but the usage of DX11 soft shadows is noteworthy. Again, I tested a multitude of AA settings to see what kind of performance hit is incurred in this popular title.
INI file tweak: FOV: 78
Level of Detail: High
Texture Quality: High
Shadow Quality: High
Effects Quality: High
Anisotropic Filtering: 16x
Unless you're an absolute frames-per-second junkie, it makes sense to employ the highest levels of AA possible with this title. Testing for disparities between SP and MP performance, I checked out a couple of full servers playing the Atacama Desert map with 32x CSAA selected. With a min/max/avg of 52/98/77.3, performance was similar between the two game modes. This is a great-looking title regardless of AA mode (as long as SOME form of AA is used), but the ability of the GTX 580 to provide more than playable framerates using 32x Coverage Sample Antialiasing is remarkable.
Tom Clancy's HAWX (UBI)
Though this isn't a newer title any more, it still offers compelling visuals and an exciting gaming experience. This DX10 title uses advanced SSAO (Screen-Space Ambient Occlusion) for advanced lighting, as well as Depth of Field, HDR and engine-heat effects.
Anisotropic Filtering: 16x
View Distance: High
Texture Quality: High
Engine Heat: On
Sun Shafts: High
SSAO: Very High
DirectX 10.1: On
I only evaluated two AA modes in this game, and likely should have used Application Override to see how the game looked using 32x AA, as it's easy to see that the GTX 580 eats this title for lunch with the lower-class AA options. The visuals of this title still hold up, and this new GPU proves again that it's more than up to the task.
Tom Clancy's HAWX 2 Benchmark (UBI)
Using some of the most advanced hardware-based tessellation techniques of any game, the eagerly anticipated sequel to HAWX places you back in the pilot seat for more aerial excitement. The tessellation applied to terrain gives this game a unique look, and despite the fact that the world goes by so quickly in a jet airplane, the sometimes-subtle improvement in visual fidelity adds up with other graphical enhancements to make this new title look quite a bit nicer than its predecessor.
With the HAWX 2 benchmark, I again tested various AA modes to evaluate their impact on a moderately-high resolution using a high-end GPU.
Anisotropic Filtering: 16x
Terrain Tessellation: On
View Distance: High
Texture Quality: High
Depth of Field: On
Particle Density: High
I went into this benchmark thinking that, with this being a sort of “second-generation DX11 title”, the GTX would have trouble keeping framerates high. I was surprised to find that this benchmark actually shows higher performance on the GTX 580 hardware than HAWX 1 provided. 120 fps with 32x CSAA was quite an amazing result. There is a little bit of controversy surrounding the use of this benchmark, with AMD asserting that the results on NVIDIA hardware are unrealistic. I will look into this further over the next couple of weeks, as I want to make sure that the performance figures I show for this advanced title are #1 representative of real gameplay and #2 not unnecessarily optimized. The NDA for the full version of HAWX 2 lifts late in the day November 9, 2010, and I will likely use the full version of the new game to evaluate real-life gameplay.
Just Cause 2 Demo (Eidos)
A TWIMTBP title, Just Cause 2 uses advanced graphical effects such as a Bokeh filter and enhanced water rendering. I am actually not clear whether this is a DX10 or DX11 title, but it is a new-ish and very good-looking one. Again, I investigated various AA modes.
Anisotropic Filtering: 16x
Texture Detail: High
Shadows Quality: High
Water Detail: Very High
Objects Detail: Very High
Soft Particles: On
Hi-Res Shadows: On
Point Light Specular: On
Bokeh Filter: On
Enhanced Water Detail: On
Motion Blur: On
I couldn't seem to locate the “GPU water” setting, but the game was still set to extremely high visual fidelity settings. If I can locate said setting then I will edit these results and include the “GPU water” results.
Again, it's almost like the AA IS free. Obviously, with all the bells and whistles turned on, the GTX 580 is stressed even at 1920x1080, but it holds its ground and still delivers 54.88 fps with 32x CSAA.
Far Cry 2 (UBI)
It seems like this game has been a benchmark staple forever. Although the game itself has received mixed reviews for its long, drawn-out sandbox gameplay, its visuals are still top-notch, and did prove to be fairly comparable to Crysis.
I utilized the Far Cry 2 benchmarking tool which helps create repeatable benchmark runs of different settings. I evaluated three different AA settings. In previous tests of Far Cry 2 I had specified an AF setting, but in reading the review guide, NVIDIA said that AF settings don't apply properly to the benchmarking tool, so it's best for AF to be set to Application Preference in the NVIDIA Control Panel. If you the reader have any feedback on my take on this, please discuss it in the NV News forums.
Fixed Time Step: No
Disable Artificial Intelligence: Yes
Overall Quality: Ultra High
Terrain: Ultra High; Geometry: Ultra High
It is very “pleasurable” to experience a GPU running a graphically-intensive game smoothly, and it is even nicer when you can see that something like 8x MSAA really doesn't impact performance all that much. Icing on the cake is the ~50 fps minimum even at 8x AA.
Unigine Heaven 2.1 DX11 Tessellation Benchmark
This benchmark has become a standard for DirectX 11 video card tests, as it features modern rendering effects such as tessellation and SSAO (screen-space ambient occlusion).
36.1 fps at this moderately high resolution, with extreme tessellation plus a healthy amount of AA and AF, signifies the raw power the GTX 580 is capable of. Losing only 6 fps when going to 8xQ MSAA pushes the point home further: GF110 is a geometry specialist.
Stone Giant DX11 Tessellation Benchmark
Stone Giant is a new DirectX 11 benchmark that utilizes heavy amounts of tessellation, and also requires that the video card rendering the scene can push large amounts of geometry.
With tessellation set to high, this test appeared to become SM-bound, as the results hardly changed at all going from no AA to 8xQ. With framerates in the eighties, however, this test was a breeze for the GTX 580, again indicating its prowess at handling heavy amounts of hardware tessellation.
3DMark Vantage (Futuremark)
3DMark Vantage is an industry-standard GPU benchmarking tool which tests the DirectX 9 and DirectX 10 capabilities of hardware. It also includes several CPU tests and PhysX support.
Although the relevance of this benchmarking tool has decreased dramatically recently, it is still worthwhile to include some results in this review. Using the Performance preset, the results are far greater than what I'm used to seeing (around 15k or so from cards like the HD 4870, HD 5830 and GTX 460/465), and further indicate the huge amounts of pixel-pushing and fillrate horsepower.
The GTX 580 is considered by some to be just an overclocked GTX 480. That assumption is largely untrue, given the number of enhancements in the new GF110 architecture and the fact that there are more CUDA cores, more SMs and more texture filtering units... but hey, what about an overclocked GTX 580? I mean, heck, if a GTX 580 is supposed to outperform a 480 by 15-20%, then how much value would an overclocked 580 have?
I haven't had a lot of time to mess with overclocking, but I did run a few tests, and at this point, in non-GPU PhysX-related scenarios, I am at what appears to be a conservative overclock of 840 MHz core / 1680 MHz shader clock, and 4500 MHz on the memory. I'll add more to this section of the review at a later time, but for now I've done a 3Dmark run, as well as a Stone Giant run. I feel that these two are a good choice, so you can see the pixel/texture fillrate improvements, as well as improvements in hardware-based tessellation capabilities.
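For reference, those overclock numbers work out to a single-digit core gain and a low-double-digit memory gain over the reference clocks. A quick sketch:

```python
# Reference vs. achieved clocks from this review (MHz).
stock_core, oc_core = 772, 840
stock_mem, oc_mem = 4008, 4500   # effective memory clock

core_oc_pct = (oc_core / stock_core - 1) * 100  # roughly +8.8% core
mem_oc_pct = (oc_mem / stock_mem - 1) * 100     # roughly +12.3% memory
```

The shader clock stays locked at twice the core clock on Fermi parts, so the 1680 MHz shader figure is the same +8.8% gain.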
So now this is a GTX 480 plus how many percent? I'm only sort of kidding. I really want to get a GTX 480 hooked up to this test system so I can do a proper comparison, but even just having this card, and already being pleased with its stock performance, the improvements experienced by the overclock are certainly welcomed. I should probably test Metro 2033 and see if I gain some playability.
Temperatures and Noise:
I expected a very hot GPU given the elevated clock speeds, additional CUDA cores, etc. The GTX 580 is a pleasant surprise when it comes to heat levels. Ok, you remember that picture of the GTX 580 in my case, right? It looked like a tight fit; something that would likely get pretty hot, right? Well, using the default fan profile with the case closed, at idle using 2D desktop applications the temperatures level out at 45 C. Running without a side panel drops that by 6 degrees. 39 to 45 C is a pretty nice 2D temperature, no? For what it's worth, LinusTechTips (YouTube) saw a 2D temp of 38 C on an open test bench setup. So if you have a large case that is cooled really well, you can likely expect a 2D temp somewhere around 38 to 42 C, which really isn't too bad at all. Moving on to 3D applications, the GPU did get pretty hot in some instances, floating around 84 degrees at peak while free-roving full screen in Stone Giant. Reportedly, NVIDIA is throttling clock speeds and/or voltages while running Furmark, and I'll report on this as I learn more. As for regular gaming, with the case closed up, temperatures topped out at 81 C, but NVIDIA's fan profile, in my opinion, was too gentle; I would have liked the fan speed to ramp up more to keep the temps down a little better. With the case window off, core temperature peaked at 78 C. So a range of 78 to 81 C really isn't all that bad if you consider how hot some other GPUs, such as the GTX 465 and HD 4850, get (up into the nineties).
These decent, if not exceptional, temperatures using the default fan profile show the benefits of the optimizations of the GF110 GPU and its vapor chamber cooler. If you want a Fermi card but are hesitant due to temperatures, the GTX 580 may change your mind. Definitely a big thumbs-up to NVIDIA for their improvements in this regard.
Regarding noise, the GTX 580 is noticeably quieter than the GTX 465 (GF100) I had, and honestly doesn't seem to make any more noise than my GTX 460 768MB. That's probably music to the ears of plenty of high-end enthusiasts out there hoping for a quieter gaming rig, whether in single, dual or Tri-SLI format.
On a related note, despite the fact that a 600W power supply is recommended, I kinda figured that my system with its 650W Antec unit hooked to a very-much overclocked motherboard/CPU combo might prove to be unstable. That has not been the case, and despite having a GPU with a fairly high TDP coupled with a high-TDP CPU, I'm able to overclock both. For some reason I don't think this would be possible with a GTX 480.
NVIDIA, this is what we wanted. We wanted 512 CUDA cores, monstrous performance in DX10 and DX11 games with loads of AA and AF, advanced tessellation capabilities, good temperatures and low noise. ... And you delivered.
Mind you, this is a $499 card, which is a bit pricey, and AMD is sure to counter soon with the HD 6970, but if you are a gamer or hardware enthusiast whose budget allows for a top-of-the-line GPU, the GTX 580 overshadows the older 480 not just for its likely higher performance, but also for the ease-of-use and efficiency improvements in this revised architecture.
This was a good refresh, for sure. This GPU has what it takes and then some for extreme-image-quality gaming at 1920x1080, and likely has what it takes to more than sufficiently handle 1920x1200 and 2560x1440 as well. The GTX 580 could make for a great SLI setup that runs cooler than a pair of GTX 480s, and is for certain the GPU to consider when looking into a 3D Vision or 3D Vision Surround setup.
GTX 580 cards are available now at leading e-tailers and retailers starting at $499 USD.
We'd like to thank NVIDIA for providing a review sample to us for testing purposes.
ADDITIONAL NOTE ON AVAILABILITY:
EVGA has informed NV News about their GTX 580 cards. They will have four models initially: a Standard version, a Superclocked version, a Call of Duty: Black Ops version, and a water-block version.
The Call of Duty versions, for what it's worth, do not include the game; the package instead consists of a special box, card sticker and poster. All of these EVGA cards are also bundled with a free key for 3DMark 11, which will be made available to the purchaser when the software is released. Thanks to EVGA for this information!
Feedback/questions? Navigate HERE (http://www.nvnews.net/vbulletin/showthread.php?p=2344746#post2344746) to the feedback thread.