Originally posted by ChrisRay
Look buddy, I disagree with you on several levels here, so let me tackle them.
Improved programmability is never a bad thing, coming from a programmer.
And the situation where the R300 is "limited" in programmability compared to the R350 and GeForce FX is somewhat disappointing. Its inability to be programmed at multiple levels of precision makes it "under" DirectX 9.0 specifications and limits what can be done with DirectX 9.0. There's no argument: the R300 is DX 9.0 compliant in a way that leaves no room for imagination. Savvy?
OK, I'll tackle it this way. ALL THREE OF THESE CARDS ARE TOO SLOW WITH SHADERS EVEN APPROACHING THE LIMITS OF THE R300, MUCH LESS THE FX OR THE 9800. I don't believe I said that technically it wasn't less programmable, but there ARE other limiting factors, like that little thing called performance. Your "imagination" is already limited by the speed of these cards, at least in the gaming arena, which is what 95% of us do with these cards. The extra shaders are fine and dandy for non-real-time effect rendering and such, but that is pointless for most of us.
Irrelevant. Whether it gets used in DirectX 9.0 or DirectX 10, more programmability in the future is good. The R300 is the most limited DirectX 9.0 card available right now. If you don't buy for new tech, then what are you buying for? People claim the R300 is very future-proof; in actuality, it's not. Your argument basically reiterates that.
I'm buying it to play games right now and for the next year or maybe two. I'm NOT buying it for a feature that won't get used until this card is the equivalent of a GF2 MX, with the current games of that day running about as fast on it. I'll bet all those people with GF1s got a ton of games that pushed its T&L unit to its max in its lifetime... oh wait, they didn't. I'll bet all those people who got the Radeon for its third texture unit made heavy use of it in games while they had the card... oh wait, they didn't. Well, those people with the original GF3s got a whole ton of games pushing that programmable T&L unit to the max... oh wait, they didn't. Those people saying "future proof" are fooling themselves, because there really is no such thing. It should have a slightly longer useful life than most other cards did, simply because it was SO fast in games at its release (name another card that could run a current game at its release at 1600x1200 with 4xAA and 8x aniso). I'm glad I reiterated that it's not "future proof," because there is no such thing, nor did I ever claim it was.
What the hell are you talking about here? He specifically stated he would prefer 16-bit precision; he also stated 24-bit yielded no significant IQ improvement over the R200's 16-bit. He also stated that 32-bit had marginal image quality improvements over 24-bit.
If you don't believe me, all you gotta do is load up your browser, go to beyond3d.com, and check out their Carmack interviews. I'd point you there, but I am not gonna spoon-feed it.
The only time I specifically saw Carmack state he'd run into limits with the R300 was when he was talking about instruction count. Where I saw him mention FP precision was where he talked about the codepaths available to the cards: the FX had two modes (16-bit and 32-bit), whereas the 9700 had one (24-bit). The FX ran its mixed codepath slightly faster than the R300's 24-bit one most of the time, while the 9700's 24-bit path gave a marginal increase in quality for that marginal speed difference. The FX running its full 32-bit FP path did so at HALF the speed of the 9700's 24-bit path, with only a marginal quality boost. I don't remember him saying that he'd run into "limits" with it only doing 24-bit.

Anyway, the ONLY difference between running these different depths SHOULD be the bandwidth required to transmit the data, as the card should still do the same number of calculations in a given amount of time (i.e., if the FX can do one 16-bit op every 10 clock cycles, it SHOULD be able to do one 32-bit op every 10 cycles as long as there is enough bandwidth for both, and the FX benchmarks don't indicate this to be so). If we looked at the same interview at Beyond3D, I think YOU may need the spoon-feeding here, buddy.
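For anyone outside the thread, the precision gap being argued over is easy to see numerically. Here's a minimal Python sketch using NumPy; note NumPy has float16 and float32 but no FP24 type, so the R300's 24-bit format (16 mantissa bits) is only described in a comment:

```python
import numpy as np

# Mantissa widths of the formats under debate. FP16 carries 10 mantissa
# bits, FP32 carries 23; the R300's FP24 (16 mantissa bits) sits between
# the two but has no NumPy equivalent.
print(np.finfo(np.float16).nmant)  # 10
print(np.finfo(np.float32).nmant)  # 23

# A shader-constant-like value with more digits than FP16 can hold:
# FP32 keeps roughly 7 significant decimal digits, FP16 only about 3,
# so the stored FP16 value is visibly rounded.
v = 0.1234567
print(np.float32(v))
print(np.float16(v))
```

The hardware question in the thread is separate from this: whether wider values also cost throughput (not just storage and bandwidth) depends on the chip's ALU design, which a host-side sketch like this can't show.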
Uh, it's irrelevant because it's not actually outputting at that precision. How that's being done is irrelevant; it's the end result that matters.
Here is what you said:
"Since apparently it's just 24-bit downsampling to 16-bit (which I think is retarded for any given number of reasons)"
Either way, I think ATI's implementation of its floating-point precision leaves a little to be desired, especially when you consider the current DirectX 9.0 specification. As ATI's card is just the bare minimum for DX 9.0, I'm not quite sure why they chose to stick with strict 24-bit precision, DX 9.0 specifications be damned. Probably to save die space on their already crazily overloaded 0.15-micron process.
From a programmer's point of view, they leave little room for modification or tweaking, and that's always a bad thing. I can see why John Carmack stated he has become limited by the R300's programmability. Kinda disappointing to me. Oh well.
From a programmer's point of view there should be NO REASON to "downsample" something to 16-bit on the R300. The only reason to do so would be to cater to the NV30.
I find this quite strange, since people have been dogging Nvidia for using its Pixel Shader 2.0 to emulate Pixel Shader 1.4.
I think people are dogging it because PS 2.0 is pretty slow on the NV30 (right now), hence using it to emulate 1.4 is also slow.
Here's my clever comment
Last edited by Steppy; 04-01-03 at 02:40 PM.