Originally posted by Joe DeFuria
You're just fogging the issue. Sure...with a high enough transistor budget (additional logic), bandwidth, etc., you may 'negate' a performance penalty.
Here's the way I think of it (as it applies to the GeForce4 vs. Radeon 9700). The GeForce4 can compute one degree of anisotropy per pixel pipeline per clock, and the Radeon 9700 can do the same. Therefore, there is no reason the Radeon 9700 cannot perform a similar calculation.
And, as I attempted to explain previously, the Radeon 9700's aniso degree selection appears to need a transistor count very similar to what the accurate calculation would require. I'm really beginning to think the decision had very little to do with transistor budgets, and was more an engineering time constraint, or something along those lines.
To state it another way: since the original GeForce, nVidia's hardware has been able to do what looks like the accurate aniso degree calculation for one texture per pixel pipeline per clock, so I see absolutely no reason why the Radeon 9700 cannot do the same for its one texture per pixel pipeline per clock.
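For reference, the "accurate" degree calculation I mean is essentially the textbook one from the OpenGL EXT_texture_filter_anisotropic extension: take the pixel's footprint in texel space, and the degree is the ratio of its longer axis to its shorter axis, clamped to the hardware maximum. A minimal sketch (the function name and parameters are mine, not any vendor's actual hardware logic):

```python
import math

def aniso_degree(dudx, dvdx, dudy, dvdy, max_aniso=16):
    """Sketch of the reference anisotropy-degree selection: the ratio of
    the longer to the shorter axis of the pixel footprint in texel space,
    rounded up and clamped to the hardware maximum."""
    px = math.hypot(dudx, dvdx)  # footprint extent along screen x
    py = math.hypot(dudy, dvdy)  # footprint extent along screen y
    p_max = max(px, py)
    p_min = min(px, py)
    if p_min == 0.0:
        # Degenerate footprint; clamp to the maximum supported degree.
        return max_aniso
    return min(math.ceil(p_max / p_min), max_aniso)
```

For example, a surface viewed at a grazing angle might stretch 4 texels in x for every 1 in y, giving `aniso_degree(4.0, 0.0, 0.0, 1.0) == 4`. The point is that this is a handful of multiplies, a divide (or reciprocal approximation), and a compare per texture lookup, which hardware has been doing per pipeline per clock for a while now.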
And, stated again, the NV30 with its eight pipelines should therefore be able to put out very similar performance figures with anisotropic filtering enabled (actually, it should be able to beat the 9700 without too much trouble, given its almost certainly higher core clock).
Doesn't the GeForce3 have that ability though?
I don't think so. I would assume that the GeForce3's anisotropic filtering is identical to the GeForce4's, but I don't currently have a GeForce3 to test.