
DirectX 9 benchmarking - important tests left out


Uttar
08-14-02, 02:11 PM
Hello everyone,

Anand compared the R300 to GF4, R250s, ...
That's all very nice but... how fast is it with higher color precision?

Higher color precision is a DX9 feature developers will be able to use very easily, and it's also something pretty useful.
However, each company will likely have a different system for it. Only time will tell whether nVidia or ATI has the better one.

Basically, what I'm saying is that you can't judge NV30/R300 performance at 32-bit color alone. It should be tested at higher precision as well if you get high FPS anyway - nobody needs 100 FPS, but higher color precision can be very nice.


I'm posting this message for two reasons:
1. I want to remind everyone that the current R300 benchies are nice, but we still have no higher color precision numbers (and that's important!)
2. I'd like to see sites like Anand do that kind of test once they get their hands on an R300 and, later, an NV30.


Thanks for reading, and if you have any feedback, feel free to express yourself!


Uttar

-=DVS=-
08-14-02, 02:33 PM
He said somewhere that at 128-bit precision it takes 4x more bandwidth than it does at 32-bit color, so we can assume it will get a 4 times lower score on super-duper-looking games, and that's still a very good score, since it easily scores over 200 in many current games :rolleyes:
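
Rough sketch of that math in C (made-up numbers, and it assumes the game is purely framebuffer-bandwidth limited):

/* Hypothetical figures: how fps would scale if a game were limited only
   by framebuffer bandwidth and color depth went from 32 to 128 bits per
   pixel. */
#include <stdio.h>

int main(void)
{
    const double base_fps  = 200.0;   /* assumed fps at 32-bit color     */
    const double base_bits = 32.0;    /* bits per pixel, current mode    */
    const double new_bits  = 128.0;   /* bits per pixel, FP framebuffer  */

    /* If every frame's cost were framebuffer traffic alone, fps would
       fall in proportion to the extra bits written per pixel. */
    double scaled_fps = base_fps * (base_bits / new_bits);

    printf("%.0f fps at 32 bpp -> about %.0f fps at 128 bpp\n",
           base_fps, scaled_fps);     /* 200 -> 50 under this assumption */
    return 0;
}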

Toastywheatman
08-14-02, 04:29 PM
The pipeline is through and through 24-bit-per-component float. It takes 32- or 16-bit RGBA data as input, expands it to 96 bits, manipulates it at 96 bits (writing intermediate results as 128 bits to preserve power-of-2 framebuffer alignment), then dithers as a final step back to 10:10:10:2 RGBA for output through the DAC, all with no performance differences.

This is the same idea as the Kyro I and II, which internally rendered everything at 32-bit, making their 16-bit output look better than the competition's equivalent.
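
A loose C sketch of that conversion path, just to make the formats concrete (the function names and the plain rounding standing in for the hardware's actual dithering are mine):

/* 8:8:8:8 RGBA in, per-channel float internally, 10:10:10:2 RGBA out.
   Simple rounding stands in for whatever dithering the hardware does. */
#include <stdio.h>
#include <stdint.h>

typedef struct { float r, g, b, a; } FloatPixel;

static FloatPixel expand_rgba8(uint32_t rgba)      /* 8-bit -> float */
{
    FloatPixel p;
    p.r = ((rgba >> 24) & 0xFF) / 255.0f;
    p.g = ((rgba >> 16) & 0xFF) / 255.0f;
    p.b = ((rgba >>  8) & 0xFF) / 255.0f;
    p.a = ( rgba        & 0xFF) / 255.0f;
    return p;
}

static uint32_t pack_1010102(FloatPixel p)         /* float -> 10:10:10:2 */
{
    uint32_t r = (uint32_t)(p.r * 1023.0f + 0.5f);
    uint32_t g = (uint32_t)(p.g * 1023.0f + 0.5f);
    uint32_t b = (uint32_t)(p.b * 1023.0f + 0.5f);
    uint32_t a = (uint32_t)(p.a *    3.0f + 0.5f);
    return (r << 22) | (g << 12) | (b << 2) | a;
}

int main(void)
{
    FloatPixel p = expand_rgba8(0x80FF40FFu);      /* arbitrary test color */
    /* ...shader math would happen here at float precision... */
    printf("packed 10:10:10:2 = 0x%08X\n", pack_1010102(p));
    return 0;
}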

sancheuz
08-14-02, 05:01 PM
So are you saying there won't be a performance hit?

StealthHawk
08-14-02, 09:37 PM
Originally posted by sancheuz
So are you saying there won't be a performance hit?

No, he's saying that the quality will be essentially the same.

edit: and there will probably be a minimal performance hit (minimal relative to, say, a 2x or 4x performance hit)

Kruno
08-15-02, 02:38 AM
Are we talking around 10%-30%, roughly?

Uttar
08-15-02, 04:05 AM
Originally posted by DVS
He said somewhere that at 128-bit precision it takes 4x more bandwidth than it does at 32-bit color, so we can assume it will get a 4 times lower score on super-duper-looking games, and that's still a very good score, since it easily scores over 200 in many current games :rolleyes:

Taking 4 times more bandwidth doesn't mean it's 4 times slower: not all games are bandwidth limited.

And what if you enabled higher-precision color and AA at the same time? Maybe nVidia/ATI have an optimization for that kind of situation. Who knows.
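
Toy model of that in C, with invented per-frame costs - the frame time is whichever of shading or framebuffer traffic takes longer, so 4x the framebuffer bits only means 4x lower fps if bandwidth already dominates:

/* Invented numbers: per-frame shading cost vs framebuffer-traffic cost. */
#include <stdio.h>

static double fps(double shader_ms, double bandwidth_ms)
{
    /* The slower of the two stages sets the frame time. */
    double frame_ms = (shader_ms > bandwidth_ms) ? shader_ms : bandwidth_ms;
    return 1000.0 / frame_ms;
}

int main(void)
{
    double shader_ms = 8.0;               /* shading cost, fixed           */
    double bw32_ms   = 4.0;               /* framebuffer traffic, 32-bit   */
    double bw128_ms  = bw32_ms * 4.0;     /* 4x the bits per pixel         */

    printf("32-bit color:  %.1f fps\n", fps(shader_ms, bw32_ms));   /* 125.0 */
    printf("128-bit color: %.1f fps\n", fps(shader_ms, bw128_ms));  /* 62.5: half, not a quarter */
    return 0;
}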


Uttar

Kruno
08-15-02, 04:29 AM
Actually, 32bpp takes no performance hit over 16bpp, with or without AA and AF. The resolution doesn't matter either.

StealthHawk
08-15-02, 06:00 AM
Originally posted by K.I.L.E.R
Actually, 32bpp takes no performance hit over 16bpp, with or without AA and AF. The resolution doesn't matter either.

With what card are we talking about? I can definitely see the difference in framerate in good old Counter-Strike with 2x FSAA and 8x AF between 16-bit and 32-bit color. When someone throws a smoke bomb in 32-bit color, the fps drops lower than it would in 16-bit.

I'm pretty sure you would see some differences with a GF3 at 1280x1024 once you throw FSAA into the mix, and probably even without FSAA (in video-card-limited situations). The only time 32-bit and 16-bit should be the same is in CPU-limited situations. Of course, with newer chips like the GF4 that handle bandwidth more efficiently, there is less and less of a hit, since the card is CPU limited in most situations.
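
Same idea as a quick C sketch, with invented numbers: the displayed fps is capped by whichever of the CPU or the card is slower, so the 16-bit vs 32-bit gap only shows up when the card, not the CPU, is the bottleneck:

/* Invented rates: what the CPU could feed vs what the card could draw. */
#include <stdio.h>

static double displayed_fps(double cpu_fps, double card_fps)
{
    return (cpu_fps < card_fps) ? cpu_fps : card_fps;
}

int main(void)
{
    double card_16bit = 240.0, card_32bit = 150.0;   /* card-limited rates */

    printf("slow CPU (100 fps): 16-bit %.0f, 32-bit %.0f\n",
           displayed_fps(100.0, card_16bit), displayed_fps(100.0, card_32bit));
    printf("fast CPU (300 fps): 16-bit %.0f, 32-bit %.0f\n",
           displayed_fps(300.0, card_16bit), displayed_fps(300.0, card_32bit));
    return 0;
}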

Kruno
08-15-02, 07:32 AM
My specs are in my sig. :p
I noticed no performance dip using 32bpp in the games I play.

Uttar
08-15-02, 07:55 AM
Originally posted by K.I.L.E.R
My specs are in my sig. :p
I noticed no performance dip using 32bpp in the games I play.

It most likely depends on the game you're playing! After all, some games aren't as bandwidth limited and are designed more around 32-bit. Or am I wrong here?


Uttar

StealthHawk
08-15-02, 09:49 PM
Originally posted by Uttar


It most likely depends on the game you're playing! After all, some games aren't as bandwidth limited and are designed more around 32-bit. Or am I wrong here?


Uttar

In this day and age, cards and games alike are more optimized for 32-bit color, so you're right.

SnakeEyes
08-16-02, 10:24 AM
Unless the card is for some reason more efficient in its output stage at 16-bit than at 32-bit, there shouldn't be a perceived performance difference between 16-bit and 32-bit modes (when the internal renderer always uses 32 bits). That's because the card isn't saving any bandwidth internally when doing its rendering: it's essentially doing the same operations in both modes, storing the same amount of data, and transporting that same data over the same bus.

This might even be a moot point for all games if the memory bandwidth is high enough that it never becomes a bottleneck, since in that situation only the GPU's ability to process the data is a limit (this is something nVidia is said to be claiming for NV30: memory bandwidth is no longer an issue, so the GPU's processing ability becomes the main performance factor, combined with the CPU's ability to feed it).

That's not to say there won't be a perceptible difference in output quality (after all, there are fewer shades available to display the final image in 16-bit mode), but the quality difference between 32-bit output/32-bit internal and 16-bit output/32-bit internal would be smaller than the difference between 32-bit output/32-bit internal and 16-bit output/16-bit internal (and it is; see the Kyro).
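
For the "fewer shades" arithmetic, a quick C snippet counting the representable colors of a 5:6:5 16-bit framebuffer against 8 bits per channel:

/* Levels per channel and total colors for common framebuffer formats. */
#include <stdio.h>

int main(void)
{
    int r5 = 1 << 5, g6 = 1 << 6, b5 = 1 << 5;   /* 32, 64, 32 levels      */
    int c8 = 1 << 8;                             /* 256 levels per channel */

    printf("16-bit 5:6:5 -> %d colors\n", r5 * g6 * b5);   /* 65,536     */
    printf("32-bit 8:8:8 -> %d colors\n", c8 * c8 * c8);   /* 16,777,216 */
    return 0;
}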