The point of the benchmark is to see how a card handles ALL of the data in the scene, not simply the part you can see. You mistake the point of the benchmark entirely. The visuals are simply eye candy to give you a subjective experience while the benchmark attempts to calculate an objective score. Cutting out the majority of the data passed to the card makes the obtained value irrelevant, since it merely reflects how well the card can render a portion of the data rather than the entire scene.
Apparently you don't understand that the intent of the benchmark is to push through all of the data and let the card's routines sort out how to handle and render it. The reason it always runs on the same "rail" is to provide a control factor in the test. Inserting static clip planes is NOT the same as other methods of hidden surface removal, because it happens BEFORE the HSR comes into play, effectively reducing the actual workload the card sees. That isn't optimizing the card; that's modifying the benchmark itself. Thus the benchmark the FX5900 runs and the one the R9800 runs are two wildly different benchmarks, and since the R9800 is running the benchmark as intended, one can conclude that scores obtained by the FX5900 are essentially meaningless except when compared against other FX5900s. Furthermore, these "optimizations" gain the user higher scores only in benchmarks, because nVidia can employ static clip planes only when the camera path is deterministic. Thus the score one receives in a benchmark with the FX5900 is incongruous with the actual performance one will achieve in gameplay.
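To make the distinction concrete, here is a minimal toy sketch of the idea (all function names, the scene data, and the plane are my own invention for illustration, not anything from nVidia's actual driver): a static clip plane tuned to a deterministic camera path discards geometry before the card's normal hidden-surface removal ever sees it, so the measured workload shrinks.

```python
def dot(a, b):
    """Dot product of two 3-vectors."""
    return sum(x * y for x, y in zip(a, b))

def static_clip_cull(triangles, plane_normal, plane_d):
    """Discard any triangle lying entirely behind the plane
    dot(normal, p) + d = 0. This runs BEFORE depth testing / HSR,
    so culled triangles never enter the card's normal pipeline."""
    kept = []
    for tri in triangles:
        if any(dot(plane_normal, v) + plane_d >= 0 for v in tri):
            kept.append(tri)
    return kept

# Toy scene: triangles at various depths along -z. In a deterministic
# benchmark the fixed camera rail never reveals anything past z = -10,
# so a driver could hard-code a plane there.
scene = [
    [(0, 0, -1),  (1, 0, -1),  (0, 1, -1)],   # on-screen
    [(0, 0, -5),  (1, 0, -5),  (0, 1, -5)],   # on-screen
    [(0, 0, -20), (1, 0, -20), (0, 1, -20)],  # behind the static plane
    [(0, 0, -30), (1, 0, -30), (0, 1, -30)],  # behind the static plane
]

# Static plane z = -10, normal facing the camera: keep only z >= -10.
reduced = static_clip_cull(scene, (0, 0, 1), 10)
print(len(scene), len(reduced))  # prints "4 2": half the geometry vanishes
```

The point of the sketch is the ordering: a legitimate HSR scheme would still receive all four triangles and decide occlusion itself, whereas the static plane removes two of them up front, so the "score" measures a lighter scene than the one the benchmark authors defined.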
So...*ahem*...that isn't cheating?