Originally Posted by shadow001
If those figures are accurate, the extra 14% in shaders and 20% extra memory bandwidth the GTX 480 gets from its full 512 shaders and 384-bit bus simply aren't enough to go after the HD 5970 performance-wise and challenge it in any meaningful way.
If the GTX 480 really does draw close to 300 watts as the leaks suggest, which is the maximum the PCI-e spec allows, then a dual-GPU variant using GTX 480 chips is flat out of the question.
Even using GTX 470 GPUs it would already be pretty hard: the HD 5870 draws 188 watts as it is, and the dual-GPU variant comes in at 294 watts, but only after the clocks were lowered to 750/1000 (850/1200 is stock), and ATI uses cherry-picked Cypress chips running at 1.05 volts to make the HD 5970 possible.
Available information puts the GTX 470 at around 220 watts, so a dual-GPU card using those would need even more drastic measures than ATI took with the HD 5970 (disabling hardware inside the GPU), and at that point, would it still beat the HD 5970 performance-wise?
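The power math above can be sketched in a few lines. This is just back-of-the-envelope arithmetic using the rumored TDP figures quoted in this thread, not official specs:

```python
# Rough check of rumored dual-GPU power draw against the PCI-e
# 300 W board-power ceiling. All TDP numbers are the figures
# circulating in this thread and may well be inaccurate.
PCIE_LIMIT_W = 300

rumored_tdp_w = {
    "GTX 480": 300,
    "GTX 470": 220,
    "HD 5870": 188,
}

for name, tdp in rumored_tdp_w.items():
    # Naive dual-GPU estimate: two full chips, no downclocking
    # or voltage binning applied.
    naive_dual = 2 * tdp
    verdict = "fits" if naive_dual <= PCIE_LIMIT_W else "over"
    print(f"2x {name}: ~{naive_dual} W -> {verdict} the {PCIE_LIMIT_W} W limit")

# The HD 5970 shows how much has to be cut to fit: two Cypress
# chips at 294 W instead of a naive 2 x 188 = 376 W, i.e. roughly
# a 22% reduction via lower clocks (850 -> 750 MHz) and 1.05 V parts.
reduction = 1 - 294 / (2 * 188)
print(f"HD 5970: 294 W vs naive 376 W (~{reduction:.0%} cut)")
```

Even the GTX 470 numbers start at roughly 440 W naive, so the cut needed would be far deeper than the ~22% ATI managed with binned Cypress chips.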
My own opinion is that Fermi needs 28nm in a big way: to cut power consumption significantly, improve yields, potentially raise core clock speeds, and make a dual-GPU card technically possible while staying under that 300-watt limit. It's simply too big and power-hungry to allow that while still built at 40nm.
I'm no GPU-architect-engineering-guru, but...
I seem to recall the 7800 GTX was underwhelming, and the refresh (the 7900 GTX, though that one did also move from 110nm to 90nm) was leaps and bounds better. If nVidia managed it then, why couldn't they here? Who knows what's going on under the hood; maybe a lot of power leakage? There's a reason it's inefficient. I'd imagine they could fix some of this with a refresh? Maybe?