I'm not even sure that the final output is going to be all that different, since (as someone else pointed out earlier) the output media / devices can't really handle that much color (er, spectrum) anyway. The internal processing is where the extra bits matter, just to maintain accuracy as blends and such occur.
Also, too high an internal bit depth is just wasted bandwidth, imo. When the output pixel has to be fit into the more limited range available on, say, a computer monitor, going beyond a certain internal color depth won't produce any difference in the final adjusted output. E.g., 0.898 calculated internally vs. 0.878 internally, when the output device works in tenths: both end up as 0.9 on output. Maybe a dumb example, but I think the idea comes across (or I hope so, anyhow).
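To make that example concrete, here's a toy sketch (the function name is mine, not from any real graphics API) of two different internal results collapsing to the same value once the output device quantizes to tenths:

```python
# Toy sketch of the rounding argument above: once the output device
# quantizes to tenths, two distinct internal values become identical.
def to_device(value, decimals=1):
    """Quantize an internal color value to the output device's precision."""
    return round(value, decimals)

print(to_device(0.898))  # -> 0.9
print(to_device(0.878))  # -> 0.9, the extra internal precision is lost
```

Past the point where internal values land in the same output bucket, more internal bits buy nothing in the displayed result.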
I'm not necessarily saying that 128-bit color isn't useful or effective, because I don't really know the true limits of output devices or the current internal operations of the GPUs in question. The case for high internal precision has been discussed time and time again, so without rehashing it all: basically it comes down to rounding errors from bit-depth conversions that accumulate as more passes are made through the pipeline during processing.
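A minimal sketch of that accumulation idea, under assumptions of my own (a repeated 50/50 blend, round-to-nearest quantization to the working bit depth after every pass; the function name is hypothetical, not a real GPU operation):

```python
# Hypothetical sketch: blend a value toward another repeatedly, rounding
# to the working bit depth after each pass. Lower internal depth lets
# rounding error accumulate; higher depth tracks the exact result.
def blend_passes(value, other, bits, passes):
    """Average `value` with `other` each pass, quantized to `bits` of precision."""
    levels = (1 << bits) - 1  # number of representable steps per channel
    for _ in range(passes):
        value = round((value + other) / 2 * levels) / levels
    return value

high = blend_passes(0.3, 0.7, 32, 10)  # effectively full precision
low  = blend_passes(0.3, 0.7, 8, 10)   # 8 bits per channel
print(abs(high - low))  # small but nonzero drift after only 10 passes
```

The drift here is tiny, but a real pipeline can make many such passes per frame, which is the usual argument for keeping the intermediate math at a higher depth than the output.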
Last edited by SnakeEyes; 09-25-02 at 03:51 PM.