Originally posted by Chalnoth
I think that Futuremark's "sticking with the standards to remain neutral" is flawed. They should optimize for all video cards, not the standards.
"All video cards" have not been produced yet, just as all games have not yet been produced. Nvidia's and ATi's offerings may differ greatly over the next year and a half...how good would their benchmark do if it were running code optimized for a certain make of card, then a future card received no such optimized code and performed poorly, even though it was more than capable of handling the challenge of future games? As far as I can tell, using a proprietary HLSL only colors the results in favor of one company, and at that, only the current generation of card offerings from said company. How the heck are they supposed to help people make a decision about a card purchase in 2004 using a benchmark that favors cards manufactured in November of 2002? Would they offer "updates" to the benchmark, as if all the games that were produced in the interim would have been cg-patched every time nvidia decided to release a new card?
And what do they say about the lower-precision modes the FX favors? Is it a valid comparison when the cards aren't producing the same visual output?
Regardless of whether or not it were true, Futuremark would risk being seen as favoring nvidia if it used Cg to develop their benchmark...and for what? Optimization for a card that isn't out yet and hasn't shown its true colors (the FX)? Or would you rather it show your Ti4600 in a better light (as if it could compete with the 9700)? What disservice are they doing consumers here?
And your suggestion that it use "the lowest shader version possible" seems backward-looking...isn't this supposed to be a forward-looking benchmark? We already have benchmarks for all the lower versions; do we need another one?