StealthHawk (Guest) - 03-30-03, 08:45 PM, post #22

Quote:
Originally posted by ChrisRay
Errm, ATI cards fully support floating point 16-bit precision; using the r200 pathway, Doom 3 is run in 16-bit precision.

I dunno where the heck you learned that the r300 cannot do 16-bit precision.

And Vsync used to be part of the DX specification. Microsoft would not certify any drivers that allowed Vsync to be turned off until around 2001.
I don't know where you heard that the R300 can do FP16; I have never heard that stated, and it does not seem likely. People "in the know" always say that the R300 does not support FP16, and does not support true 128-bit color (FP32) either. We all know its max is FP24.

I'm not really sure where you heard that the R200 pathway uses 16-bit floating point precision; AFAIK Carmack has never explicitly stated such a thing.

Carmack said this in his .plan
Quote:
The reason for this is that ATI does everything at high precision all the time, while Nvidia internally supports three different precisions with different performances. To make it even more complicated, the exact precision that ATI uses is in between the floating point precisions offered by Nvidia, so when Nvidia runs fragment programs, they are at a higher precision than ATI's, which is some justification for the slower speed. Nvidia assures me that there is a lot of room for improving the fragment program performance with improved driver compiler technology.
He says that ATI does everything in "high precision all the time." He also says that Nvidia supports three formats, which we know to be INT12, FP16, and FP32.
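
For anyone who hasn't played with FP16 before, here's a quick C sketch of my own (not anything from the drivers, and it skips NaN/Inf/denormal handling and just truncates instead of rounding) that pushes a float through FP16's 10-bit mantissa so you can see how much gets thrown away:

Code:
/* Rough illustration only: round-trip a 32-bit float through an FP16
 * (s10e5) encoding to show what the half format keeps.  Corner cases
 * (NaN, Inf, denormals, overflow) are ignored for brevity. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

static float through_fp16(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);

    uint32_t sign = bits >> 31;
    int32_t  exp  = (int32_t)((bits >> 23) & 0xFF) - 127;  /* unbiased   */
    uint32_t man  = bits & 0x7FFFFF;                       /* 23 bits    */

    /* FP16 keeps a 10-bit mantissa and a 5-bit exponent (bias 15). */
    uint32_t man16 = man >> 13;          /* drop the low 13 mantissa bits */
    int32_t  exp16 = exp + 15;           /* rebias; assumed in range      */

    /* Rebuild a float from the truncated fields. */
    uint32_t out = (sign << 31)
                 | ((uint32_t)(exp16 - 15 + 127) << 23)
                 | (man16 << 13);
    float g;
    memcpy(&g, &out, sizeof g);
    return g;
}

int main(void)
{
    float x = 1.000244140625f;   /* 1 + 2^-12: finer than FP16 can hold */
    printf("original: %.12f\n", x);
    printf("via FP16: %.12f\n", through_fp16(x));  /* comes back as 1.0 */
    return 0;
}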

He then says that "the exact precision ATI uses is in between the floating point precisions offered by Nvidia." Again, Nvidia offers FP16 and FP32, so this suggests that the R300 can only use FP24.
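
To put rough numbers on "in between," here's another small sketch of mine. It just uses the commonly quoted bit layouts (FP16 = s10e5, R300's FP24 = s16e7, FP32 = s23e8), not anything out of ATI or Nvidia documentation, and compares the relative precision each format gives near 1.0:

Code:
/* Compare mantissa widths of the three formats being discussed.
 * The s10e5/s16e7/s23e8 layouts are the commonly cited ones; treat
 * them as assumptions, not vendor documentation. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    struct { const char *name; int mantissa_bits; } fmt[] = {
        { "FP16 (NV30 half)", 10 },   /* 1 sign, 5 exp, 10 mantissa */
        { "FP24 (R300)",      16 },   /* 1 sign, 7 exp, 16 mantissa */
        { "FP32 (NV30 full)", 23 },   /* 1 sign, 8 exp, 23 mantissa */
    };

    /* Relative precision near 1.0 is roughly 2^-mantissa_bits. */
    for (int i = 0; i < 3; i++)
        printf("%-18s ~%.2e relative error\n",
               fmt[i].name, ldexp(1.0, -fmt[i].mantissa_bits));
    return 0;
}

FP24's step size lands between the other two, which is exactly the "in between" Carmack is describing.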