Old 11-14-03, 10:32 AM   #54
legion88

Originally posted by euan
I'm confused.

What is the difference between a synthetic test that does many various 3D operations (geometry, textures, shaders), and say a game fly-by or recorded demo?
In an ideal world, there would be no difference. But we don't live in an ideal world.

A properly designed test for graphics cards would not contain any vendor-specific code. That is, the test would not have branches like "if card = ATI, then run routine X; if card = NVIDIA, then run routine Y". Every card would run the same routines, so every card would be treated the same by the test.

In games, however, that is not the case. Back in the old days of 3dfx, game developers loved Glide for more reasons than its being simple to use. A developer could be confident that Glide code would run on any 3dfx-based card, from any of the various board manufacturers, because all of those cards used the same Glide drivers.

For Direct3D and OpenGL, that is not the case. A developer's OpenGL code is not guaranteed to run well on every OpenGL-capable card because different cards use different OpenGL drivers, and likewise D3D code is not guaranteed to run well on every D3D-capable card because the D3D drivers differ. It is not uncommon to see a D3D game crash often on one card while never crashing on another.

Out of necessity, developers had to put vendor-specific code into their games, either for performance reasons or to work around "show-stopping" bugs.

So game benchmarks do not measure video card performance alone. They also measure how well the developer fine-tuned the game code for specific vendors.

A properly designed "synthetic" benchmark doesn't have vendor-specific code. All the cards are treated the same.