A benchmark should ask: "How fast does it do this task?"
3DMark, on the other hand, asks: "How fast does it run this particular piece of code, without the option to massage it for the hardware?"
Exactly right - and if the task is "run this piece of code, which I wrote in a standard programming language", then the driver/compiler/processor should do just that. It shouldn't decide that 16 or 12 bits of precision is enough. That's not what I asked it to do.
I have no problem with compiler technology. But having the compiler decide which parts of the task are 'important' is not a valid optimization, at least for a benchmark.
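To see why silently lowering precision is not "the same task, done faster", here is a minimal sketch (the `quantize` helper and the 12-bit figure are illustrative assumptions, standing in for a driver that rounds every intermediate value to a narrower format):

```python
import math

def quantize(x, mantissa_bits):
    """Round x to the given number of mantissa bits, mimicking a
    driver that silently computes at reduced precision."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)             # x = m * 2**e, with 0.5 <= |m| < 1
    scale = 2.0 ** mantissa_bits
    return math.ldexp(round(m * scale) / scale, e)

# Accumulate the same series twice: once at full double precision...
full = sum(1.0 / i for i in range(1, 10001))

# ...and once with every intermediate rounded to 12 mantissa bits.
reduced = 0.0
for i in range(1, 10001):
    reduced = quantize(reduced + quantize(1.0 / i, 12), 12)

# Same code, same inputs - but the reduced-precision run falls
# measurably short, because small terms round away entirely once
# the running total grows.
print(full, reduced)
```

The two runs execute the "same" program, yet produce visibly different answers; a benchmark score earned by the second run says nothing about how fast the hardware performs the task that was actually requested.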
If some enterprising hard drive manufacturer decided that the reads and writes to a temporary file during a benchmark didn't do any meaningful work (after all, nothing is left on the drive afterwards, is there?), and simply skipped the whole thing and reported "done", would we call it a cheat, or congratulate them on an aggressive optimization? If I benchmark MP3 encoding on a CPU, is encoding at 128 kbps instead of the requested 256 kbps acceptable as long as I can't hear the difference? Personally, I think not.
It's all about equal work. Who cares if the work being requested is inefficient? Just do the work (all of it) in the most efficient way you can. No sweeping things under the rug. No cutting corners. Just do it.
Is this too much to ask?