Originally posted by StealthHawk
You also disregard some facts, such as FP24 being the minimum precision required by DX9. No one told nvidia to support FP32. No one made nvidia skip FP24 while supporting FP32. Dropping down to FP16 (which nvidia did in 3dmark03) means that nvidia is cheating, because they are no longer within the DX9 spec. Sorry, nvidia has no one to blame but themselves.
FP24 minimum for DX9? Please explain the following quote:
"DX9 and ARB_fragment_program assume 32 bit float operation, and ATI just converts everything to 24 bit."
Isn't that exactly what I was saying, namely that ATI is rendering at a lower precision than they are supposed to and than is "assumed by DX9"? So my question, whether ATI is cheating by using the faster FP24 mode while nvidia uses the slower but higher-precision FP32 mode, was not so wrong to ask after all.
In fact, I see a conspiracy theory arising here: it basically suggests that ATI deliberately added only an FP24 mode to their R3x0 core so that no one could force them to use another FP mode for DX9 compatibility. Nvidia, on the contrary, wanted to add flexibility to their nv3x architecture but is now forced by MS to default to FP32 unless the game/program requests otherwise. Of course, speed-wise FP32 can't keep up with FP24, even though it offers better IQ...
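To make the precision gap concrete, here's a rough Python sketch (my own illustration, nothing from the actual drivers or hardware) that rounds a value to a given number of explicit mantissa bits: 10 for FP16, 16 for ATI's FP24 (s1e7m16 is the commonly cited layout), and 23 for FP32. Real GPUs differ in rounding modes and exponent range, so treat this purely as a toy model of mantissa precision:

```python
import math

def quantize(value, mantissa_bits):
    """Round value to a float with the given number of explicit mantissa bits.

    Toy model: only the mantissa width is simulated, not the exponent
    range or the exact rounding behavior of any real GPU.
    """
    if value == 0.0:
        return 0.0
    m, e = math.frexp(value)              # value = m * 2**e, with 0.5 <= |m| < 1
    scale = 2 ** (mantissa_bits + 1)      # +1 for the implicit leading bit
    return math.ldexp(round(m * scale) / scale, e)

x = 1.0 / 3.0
for name, bits in [("FP16", 10), ("FP24", 16), ("FP32", 23)]:
    q = quantize(x, bits)
    print(f"{name}: {q!r}  error = {abs(q - x):.3e}")
```

Running this shows the error shrinking by several orders of magnitude at each step from FP16 to FP24 to FP32, which is why the IQ difference between the modes is visible in long shader chains where errors accumulate.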
BTW, the quote above is from none other than John Carmack himself, posted today at slashdot.org