Old 04-01-03, 02:59 PM   #72
ChrisRay
Registered User
 
 
Join Date: Mar 2003
Location: Tulsa
Posts: 5,101

Quote:
Originally posted by Steppy

Ok, I'll tackle it this way. ALL THREE OF THESE CARDS ARE TOO SLOW WITH SHADERS EVEN APPROACHING THE LIMITS OF THE R300, MUCH LESS THE FX OR THE 9800. I don't believe I said that technically it wasn't less programmable, but there ARE other limiting factors, you know that little thing called performance. Your "imagination" is already limited by the speed of these cards, at least in the gaming arena, which is what 95% of us do with these cards. The extra shaders are fine and dandy for non real time effect rendering and stuff, but that is pointless for most of us.
That is completely dependent on the program you're using and what you are trying to render. There are cases where the extra shading power is relevant, whether you're creating a nifty screensaver or developing a texture-embossing method for a PlayStation emulator.

Your mind is narrowed down to only the games we have right now, and as such you're only thinking about the games we have today.


Quote:
From a programmers point of view there should be NO REASON to "downsample" something to 16-bit on the R300. The only reason to do so would be to cater to the NV30's.
Uhh, from a programmer's point of view, 16-bit floating-point calculations would be useful. As I said, it's relevant to the subjective situation: in PSX emulation it would be preferable to do texture embossing in a 16-bit fashion, as there would be no benefit to using 24-bit.

In this case, the R300 cannot benefit from the extra speed given by the lower precision.

We're talking about subjective values to the specific coder.
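
To make that concrete, here's a rough sketch of my own (not from any interview or spec) using NumPy's float16 and float32 as stand-ins for a 16-bit shader path and a higher-precision one. With 8-bit texels, every intermediate value in a simple emboss pass is a small integer that half precision represents exactly, so both paths give byte-identical results and the extra bits buy you nothing.

Code:
# Hypothetical illustration: PSX-style emboss on 8-bit texels, run at two precisions.
import numpy as np

def emboss(texels, dtype):
    t = texels.astype(dtype)                            # 0..255 integers, exact even in fp16
    shifted = np.roll(np.roll(t, 1, axis=0), 1, axis=1) # diagonally offset copy
    out = np.clip((t - shifted) + dtype(128), 0, 255)   # difference re-biased to mid-grey
    return out.astype(np.uint8)

rng = np.random.default_rng(0)
texels = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# Prints True: the 16-bit result matches the higher-precision one bit for bit.
print(np.array_equal(emboss(texels, np.float16), emboss(texels, np.float32)))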


Quote:
The only time I specifically saw Carmack state he'd run into limits with the R300 was when he was talking about instruction count. Where I saw him mention FP precision was where he talked about the codepaths available to the cards: the GF3 had two modes (16-bit and 32-bit), whereas the 9700 had one (24-bit). The FX ran its mixed codepath slightly faster than the R300's 24-bit one most of the time; the 9700 had a marginal increase in quality, and the FX running its 32-bit FP ran at HALF the speed of the 9700's 24-bit path for a marginal quality boost. I don't remember him saying that he'd run into "limits" with it only doing 24-bit. Anyway, the ONLY difference between running these different depths SHOULD be the bandwidth required to transmit the data, as it should still do the same number of calculations in a given amount of time (i.e., if the FX can do one 16-bit op every 10 clock cycles, it SHOULD be able to do one 32-bit op every 10 cycles as long as there is plenty of bandwidth for both, and the FX benchmarks don't indicate this to be so). If we looked at the same interview at Beyond3D, I think YOU may need the spoonfeeding here, buddy.

I never said he ran into limits with its precision. As a matter of fact, I said he preferred 16-bit precision for rendering in Doom 3, savvy? The interview is there; it specifically says he would prefer 16-bit precision.

And the only reason he did some things at 24-bit is because that's the way the R300 is designed.
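
For anyone wondering how big the gap between these precisions actually is, here's a quick back-of-envelope calculation of my own (assuming the usual s10e5, s16e7 and s23e8 layouts for FP16, the R300's FP24 and FP32): the smallest step each format can resolve near 1.0, compared with the size of one 8-bit color step.

Code:
# Rough precision comparison; the mantissa widths below are the assumed layouts above.
for name, mantissa_bits in [("FP16", 10), ("FP24", 16), ("FP32", 23)]:
    step = 2.0 ** -mantissa_bits   # spacing between representable values near 1.0
    print(f"{name}: step near 1.0 = {step:.2e}  ({step * 255:.3g} of one 8-bit color step)")

Even FP16's step near 1.0 is about a quarter of one 8-bit color step, which is why the quality differences being argued about stay marginal once the result lands in an 8-bit framebuffer.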


Quote:
I'm buying it to play games right now and for the next year or maybe two. I'm NOT buying it for a feature that will not be used until this card is the equivalent of a GF2 MX, and that's how fast current games at that time will play. I'll bet all those people with GF1s got a ton of games that pushed its T&L unit to its max in its lifetime....oh wait, they didn't. I'll bet all those people who got the Radeon for its third texture unit made heavy use of it in games while they had the card....oh wait, they didn't. Well, those people with the original GF3s got a whole ton of games pushing that programmable T&L unit to the max....oh wait, they didn't. Those people who're saying "future proof" are fooling themselves, because there really is no such thing. It should have a slightly longer life than most other cards before it did, simply because it was SO fast on games at its release (name another card that could run a current game at its release at 1600x1200 with 4xAA and 8x aniso). I'm glad I reiterated that it's not "future proof", because there is no such thing, nor did I ever claim it was.

That's all fine and dandy for you, but a lot of people who buy a GeForce4 Ti 4400 or a Radeon 9500 Pro will not upgrade those cards for three or four years, so yes, it's relevant.

Future-proofing does exist, in a limited fashion, and most people who aren't hardware enthusiasts buy computers because they believe they are future-proof.


Quote:
I think people are dogging it because PS 2.0 is pretty slow on NV30(right now), hence using it to emulate 1.4 is also slow.
Using Pixel Shader 2.0 at 16-bit precision to emulate Pixel Shader 1.4 would not be the cause of its current slowdown, and there's nothing wrong with emulating 1.4 at 16-bit precision, as there's no benefit to using the higher precision with Pixel Shader 1.4.

Last edited by ChrisRay; 04-01-03 at 03:04 PM.