Originally Posted by Rollo
Like I said, please, ignore my lunatic ravings. Buy 7970 tri fire asap! Your 1.5 gtx580s were never meant for 76x14, you need those 7970s right away!
Actually, it'll be a quad-fire setup now. I've been looking through all the wiring inside the PC case and found a way to power four cards plus the extra power connectors on the motherboard with just one power supply (a Silverstone 1500 W Strider). So I'm just waiting on the water blocks from EK (next week, hopefully) and ready to roll. With luck, all four cards will hit 1200 MHz on the cores and 1.5 GHz on the memory (6 GHz effective, since it's GDDR5), meaning each card has nearly 300 GB/sec of memory bandwidth to play with....
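For what it's worth, that ~300 GB/sec figure checks out against the HD 7970's 384-bit bus; here's the quick math:

```python
# GDDR5 transfers data at four times the memory clock, so a 1.5 GHz
# clock gives a 6 Gbps effective data rate per pin.
bus_width_bits = 384        # HD 7970 memory bus width
effective_rate_gbps = 6.0   # 1.5 GHz GDDR5, overclocked

# Peak bandwidth = bus width in bytes x per-pin data rate
bandwidth_gb_s = bus_width_bits / 8 * effective_rate_gbps
print(f"{bandwidth_gb_s:.0f} GB/s per card")  # 288 GB/s
```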
And don't get me wrong here, the purchase of the HD 7970 was never in question once the reviews were out.... Add the total lack of information on NV's side about when they'll release their cards and what exactly comes out first, and that didn't exactly inspire confidence, but that's just me.
Here's my little crazy theory.... As has been standard operating procedure for Nvidia since the GTX 280 release, their high-end GPUs are going to feature a much larger transistor budget on a much bigger die. To beat the HD 7970 in memory-bandwidth-constrained scenarios, they also need a 512-bit memory bus, which is tricky to pull off with GDDR5 running at very high clock speeds, and there's no faster memory on the market since nobody is making Rambus's XDR2. All the while, they still have to beat the HD 7970 in raw fillrate (AMD redid the entire back end without adding more ROPs) and in GPGPU ability for single- and double-precision floating-point math, while trying not to exceed the HD 7970's power consumption under load by too much, which would make the card complicated to cool down quietly.
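A quick sketch of why the bus width matters so much here (the 512-bit card and its 5 Gbps clock are my own assumptions for illustration, not anything confirmed about Kepler):

```python
# Back-of-the-envelope bandwidth scaling with bus width: a wider bus
# reaches the same (or higher) bandwidth at lower, easier GDDR5 clocks.
def bandwidth_gb_s(bus_bits, rate_gbps):
    """Peak memory bandwidth: bus width in bytes times per-pin data rate."""
    return bus_bits / 8 * rate_gbps

print(bandwidth_gb_s(384, 6.0))  # HD 7970 overclocked: 288.0 GB/s
print(bandwidth_gb_s(512, 5.0))  # hypothetical 512-bit card at modest clocks: 320.0 GB/s
```

So on paper a 512-bit bus wins even at relaxed memory clocks; the catch is the extra PCB traces and routing complexity, not the arithmetic.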
Lastly, they don't need another Fermi scenario, where not everything could be enabled without hurting yields considerably; a fully enabled chip was only achieved on the GTX 580 six months later. And it's one thing to release a 4.3-billion-transistor GPU at 28 nm (AMD's choice), and another to release a ~500 mm² die at 28 nm right from the start, probably packing 6 billion transistors, with a more complicated PCB (a 512-bit memory bus means more traces on the PCB). As an Nvidia engineer put it at the time, "Designing a 3 billion transistor GPU at 40nm from the start is hard", so you can imagine one with twice as many transistors at 28 nm.
So I can understand why they may be taking the baby-step approach: releasing the midrange and low-end versions of Kepler first, getting to know the process and its specific requirements, and leaving the big high-end version for last if it's really that large and complex.... Just my crazy theory, though.