Originally posted by Hellbinder
Here is the single most important thing people overlook or forget imo.
The Nv30 *SHOULD* have the same level of performance in FP32 as the 9700 in FP24.
The 9700 operates at full speed in FP24 because it is designed that way through the entire chip. It only does FP24, and it does it with a grade of A+.
People will say that it is unfair that the Nv30 has to operate at FP32. Well, not if it was CORRECTLY DESIGNED. The Nv30 should be running at full speed 100% of the time. This is not the same thing as PS/VS instruction execution; the chip simply should execute full FP32 at full speed. It is not an Unfair Comparison to Force the Nv30 to Run at FP32.
Look at the evidence. M$'s minimum Requirement for Dx9 is FP24, Because they were making an EXCEPTION for ATi's 96-bit color. The ideal was FP32. It was clearly represented to everyone inside the industry that the Nv30 would perform at FP32 as its DEFAULT, just like the 9700pro is FP24 by default. Then everything would be dithered down to 16- and 32-bit color for older games. This, however, is not the truth, as we have all learned. Nvidia has, for some reason, either been lying to even M$ and the industry in general, OR the Nv30's hardware is simply a Disaster of Bugs.
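For reference, the precision gap being argued about can be sketched numerically. FP32's 1/8/23 sign/exponent/mantissa layout is IEEE 754 single precision; the 1/7/16 layout used here for R300's FP24 is the commonly cited breakdown and is treated as an assumption:

```python
# Sketch: relative precision of the pixel-shader formats discussed above.
# FP32 (1 sign / 8 exponent / 23 mantissa bits) is IEEE 754 single;
# the 1/7/16 split for FP24 is the commonly cited R300 layout (assumed).

def ulp_at_one(mantissa_bits: int) -> float:
    """Machine epsilon: spacing between 1.0 and the next representable value."""
    return 2.0 ** -mantissa_bits

formats = {
    "FP16": 10,  # 1/5/10, half precision
    "FP24": 16,  # 1/7/16, assumed R300 internal format
    "FP32": 23,  # 1/8/23, IEEE single precision
}

for name, m in formats.items():
    # ~m+1 significant bits thanks to the implicit leading mantissa bit
    print(f"{name}: ~{m + 1} bits of precision, eps = {ulp_at_one(m):.2e}")

# Four FP24 channels (RGBA) give the "96-bit color" mentioned above:
print("RGBA at FP24:", 4 * 24, "bits")
```

The point of the epsilon comparison: each extra mantissa bit halves the rounding error, so FP32 is roughly 128 times finer-grained than FP24, which is in turn 64 times finer than FP16.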
Take your pick. It's really hard to say. Personally, I think that it's a bug-ridden piece of hardware even in its current revision, with bugs that go far deeper than just Fog etc. The bugs also follow all the way through to the Nv31/34. If true, this is the single most Bug Ridden piece of hardware since the Savage 2000. It is also absurd to try and blame TSMC for all the problems. Some of this screams of simply BAD DESIGN at the deepest levels of the core.
Maybe ATi just took the easy route out? What happens after ARTX? Do ATi have the balls, the cash, and the technical expertise to better this generation? What if, with ARTX, they blew their load?
Maybe FP24 was simple, but does it actually push us any way forward? It's already in the spec, yesterday's news for developers. In the R350 they included the F-buffer; hmmm, where does that fit in? Or is it simply a way into the offline rendering market?
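On the F-buffer question: as I understand the idea (it comes out of published Stanford work on multi-pass rendering), it stores per-fragment intermediate results so a shader longer than the hardware's instruction limit can be split into multiple passes without losing data where fragments overlap. A minimal sketch of just the pass-splitting side; the instruction limit and the "shader as a list of op names" model are hypothetical:

```python
# Sketch of why an F-buffer matters: a shader longer than the hardware
# instruction limit gets chopped into multiple passes, and each pass's
# per-fragment intermediates must survive to the next pass. The limit
# and the shader representation here are hypothetical.

MAX_INSTRUCTIONS_PER_PASS = 64  # assumed hardware limit

def split_into_passes(shader: list[str], limit: int = MAX_INSTRUCTIONS_PER_PASS):
    """Greedily chop a long shader into hardware-sized passes."""
    return [shader[i:i + limit] for i in range(0, len(shader), limit)]

long_shader = [f"op{i}" for i in range(200)]
passes = split_into_passes(long_shader)
print(len(passes), [len(p) for p in passes])  # 4 passes: 64+64+64+8 ops

# A plain framebuffer stores one intermediate per *pixel*, so overlapping
# fragments (transparency, layered geometry) clobber each other between
# passes; the F-buffer stores intermediates per *fragment* in
# rasterization order, which is what makes the splitting safe.
```

That per-fragment storage is also why it reads like a play for long offline-rendering shaders rather than a gaming feature.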
Kudos to ATi, they took the simple and cheap way out and it works. But will it take us any further into a future whose vision isn't just M$'s... naaah. And no, I'm not knocking ATi; they made a smart decision. But was it future-proof? I don't know, and nobody does until it happens.