
What will happen to NV30 performance when WHQL drivers arrive?



PreservedSwine
03-17-03, 02:26 PM
With all the speculation about FP16 performance vs FP32 performance, and MS DX9 standards, what do you think will happen, and why?

Chalnoth
03-17-03, 03:03 PM
It currently appears that the GeForce FX has exactly the same raw processing power with FP16 and FP32.

The difference is that FP16 takes half the space, and thus has twice the temporary registers available.

This says to me that shaders which don't use very many temporary registers and don't read or write floating-point data will run at the same performance with FP32 as with FP16 (side note: I would expect most of the improvements in FP32 performance to come after a hardware revision...).
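To put some toy numbers on the register point (a quick Python sketch of my own; the 64-byte budget is made up purely for illustration, not an NV30 figure):

import numpy as np

# An FP16 value takes half the storage of an FP32 value.
print(np.dtype(np.float16).itemsize, np.dtype(np.float32).itemsize)  # 2 bytes vs 4 bytes

# So with a fixed amount of temporary-register storage per pixel,
# twice as many FP16 temporaries fit as FP32 ones.
register_bytes = 64                  # hypothetical per-pixel budget
print(register_bytes // 4)           # 16 FP32 temporaries
print(register_bytes // 2)           # 32 FP16 temporaries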

Anyway, I'd like to take a moment to gripe here. Microsoft should have just implemented specific data types to avoid this whole mess, with automatic casting to whatever the hardware actually supports (increase precision if possible, decrease if not...). Programmers have been dealing with data types since the beginning of time. Why do people seem so timid about them now?
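Something like this is all I mean, a toy sketch in Python (the pick_storage_type function and the SUPPORTED list are inventions of mine for illustration, not any real D3D interface):

import numpy as np

# Precisions the hypothetical hardware supports, smallest first.
SUPPORTED = [np.float32]   # an R300-style part would effectively list only its widest format, for example

def pick_storage_type(requested):
    # Promote to the next supported precision if possible, otherwise demote to the widest available.
    wider = [t for t in SUPPORTED if np.finfo(t).bits >= np.finfo(requested).bits]
    return wider[0] if wider else SUPPORTED[-1]

print(pick_storage_type(np.float16))   # -> float32: precision increased, since the hardware can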

But regardless, the NV30 profile benchmarks we've seen point to the GeForce FX having quite a bit of headroom in its PS 2.0 performance.

sebazve
03-17-03, 04:46 PM
It's gonna suck big time even more, I mean it's sucking pretty bad right now :lol2::lol2::lol2:

ChrisRay
03-17-03, 04:47 PM
I heard that Nvidia is talking with Microsoft to make FP 16 bit shaders viable for DX 9.0.

Any legitimacy behind this?

Would seem like a good idea. As 16 bit to 24 bit isn't a huge difference.

sebazve
03-17-03, 04:48 PM
Originally posted by ChrisRay
I heard that Nvidia is talking with Microsoft to make FP 16 bit shaders viable for DX 9.0.

Any legitimacy behind this?

Would seem like a good idea. As 16 bit to 24 bit isn't a huge difference.

I heard that 16 to 24 is quite a difference

ChrisRay
03-17-03, 04:51 PM
Originally posted by sebazve
I heard that 16 to 24 is quite a difference

I think that depends on whether the application is programmed for 24-bit/32-bit precision. Apparently Doom 3 will barely look any different in 16-bit precision as compared to higher modes of precision.

Just like there's very little difference between 16-bit and 32-bit graphics if the textures aren't optimized for 32-bit rendering.

sebazve
03-17-03, 04:57 PM
probably:D

digitalwanderer
03-17-03, 05:32 PM
Could it be it's going to......BOMB?!?!?! :lol:

I'm voting for a bit of a downgrade in performance.....

ChrisW
03-17-03, 05:38 PM
I think they will get Microsoft to change the specs to include FP 16. This means nVidia will be running all games with reduced image quality. At the same time, nVidia's PR machine will continually mention how their card does 128 bit precision and that is a big reason why people should purchase it over a Radeon 9xxx card (and they will fail to mention that all games are using much lower precision than the Radeon 9xxx cards).

Chalnoth
03-17-03, 06:38 PM
This is such BS.

How many of you actually think that 16-bit will provide any noticeable image quality deficit compared to 32-bit? The truth is that except in extreme circumstances, or with non-color data, there will be no visible difference.

The reason Microsoft has 24-bit as the "minimum" is for texturing operations. All texture ops are done at 32-bit with the NV3x.

The apparent image quality issues with 3DMark03 could NOT have possibly been due to 16-bit color. If anything, they were due to rendering with FX12.
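For what it's worth, here's a quick numpy check (my own back-of-the-envelope comparison, nothing official) of how FP16's precision stacks up against the 8-bit-per-channel framebuffer the final color gets quantized to anyway:

import numpy as np

fp16_step = np.finfo(np.float16).eps   # spacing between adjacent FP16 values near 1.0, ~0.00098
display_step = 1.0 / 255.0             # step size of an 8-bit-per-channel display color, ~0.00392

print(fp16_step < display_step)        # True: FP16 resolves finer steps than the screen can show

Error does accumulate over long shaders, and non-color data like positions is another story, but that's exactly the "extreme circumstances" caveat above.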

ChrisW
03-17-03, 07:38 PM
All I know is that the comparison pictures of the Mother Nature scene in 3DMark03 are dramatically different. Yes, this may be because it is being rendered in FX12 (or whatever it is). If that's the case, then WHQL-certified drivers will be even slower, even if Microsoft allows FP16! The whole point here is that nVidia is allowing people to believe their card is rendering scenes in 128-bit floating point color when it is not. NVidia's 128-bit precision is about half the speed of ATI's 96-bit precision. Now, I would expect future drivers to improve this, but does anyone really think they will double the speed?

digitalwanderer
03-17-03, 07:56 PM
Originally posted by Chalnoth
The apparent image quality issues with 3DMark03 could NOT have possibly been due to 16-bit color. If anything, they were due to rendering with FX12.

Ok, I'll bite and take my harshing for being a thicky....what is FX12? Please? :confused:

Chalnoth
03-17-03, 09:41 PM
12-bit integer
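To put numbers on it: FX12 is usually described as a 12-bit fixed-point format covering roughly the range -2 to +2 (that range is my understanding of the format, not something I can quote from an official spec), so a rough step-size comparison against FP16 looks like this:

# Assuming FX12 spans [-2, 2) with 2**12 levels (my reading of the format).
fx12_step = 4.0 / 2**12            # = 1/1024, uniform across the whole range
fp16_step_near_one = 2.0 ** -10    # FP16 spacing near 1.0; it gets finer as values approach zero

print(fx12_step, fp16_step_near_one)   # comparable near 1.0

The catch with fixed point is that it can't represent anything outside its range and keeps the same coarse step near zero, where a float format keeps gaining precision.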

tamattack
03-17-03, 11:47 PM
Originally posted by Chalnoth
This is such BS.

Take a look in the mirror... :rolleyes:

Originally posted by Chalnoth
How many of you actually think that 16-bit will provide any noticeable image quality deficit compared to 32-bit? The truth is that except in extreme circumstances, or with non-color data, there will be no visible difference.

In general, I do agree with you. But I expect the the difference will be noticeable once it is pointed out. Kind of like most people wouldn't notice the double "the" in my previous sentence, but once it's pointed out, it stands out like a sore thumb.

Originally posted by Chalnoth
The reason Microsoft has 24-bit as the "minimum" is for texturing operations. All texture ops are done at 32-bit with the NV3x.

If that's the case, then why has NV admitted (IIRC, Cass over at cgshaders.org) to rewriting the drivers to account for WHQL due to a "last second change" in the spec? How else can you explain the fact that we haven't seen any WHQL drivers since November 2002? Talk about BS...

Originally posted by Chalnoth
The apparent image quality issues with 3DMark03 could NOT have possibly been due to 16-bit color. If anything, they were due to rendering with FX12.

I do agree with you here, tho.

ntxawg
03-18-03, 02:07 AM
I don't know about you guys, but I can sure tell the difference between 16 and 32 bit

StealthHawk
03-18-03, 04:57 AM
Originally posted by ntxawg
I don't know about you guys, but I can sure tell the difference between 16 and 32 bit

We're talking about FP16 and FP32, i.e. 64-bit and 128-bit color.
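Spelled out, since the numbers seem to be causing confusion; it's just the per-component size multiplied over the four RGBA channels:

print(16 * 4)   # FP16 -> 64-bit color
print(32 * 4)   # FP32 -> 128-bit color
# The "16 bit vs 32 bit" setting in games is integer color (e.g. 8 bits per channel for 32-bit),
# which is a different thing entirely.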

Moose
03-18-03, 05:33 AM
From a recent Richard Huddy interview...

"NVIDIA have taken the DX9 spec and 'interpreted it' in a way that I believe was never intended. Instead of using full floating point precision throughout the graphics pipeline they've cut corners in an attempt to gain access to greater speed. But this approach has left them with a difficult message.

Their tag line is "Cinematic rendering" - but they can't offer that because if they try then their hardware is way too slow... And when they offer a 16 bit compromised solution it has so many artifacts that it clearly won't be acceptable to anyone who wants to use it for general DX9 gaming... That doesn't seem like a great choice does it? "Pretty but slow", or "fast and ugly".

It's like the mathematician's compromise between a car and a bicycle. Who wants a three-wheeler with one and a half wheels at the front, and one and a half wheels at the back?

One of the strongest characteristics of the 9700 family architecture is that it's not a compromise - it's just "right". It's "Both Pretty and fast"."

jbirney
03-18-03, 07:08 AM
What is funny is that the NV folks blasted ATI for only having 24 bits in their B3D FX review (http://www.beyond3d.com/previews/nvidia/gffxu/index.php?p=24) as well as other places, and yet now they lobby to get the spec lowered to 16 bits. :rolleyes: :rolleyes: :rolleyes:

Hanners
03-18-03, 07:15 AM
The question is - If FP16 performance is acceptable on the NV30, why would they make the drivers take it down to 12-bit integer? :confused:

tamattack
03-18-03, 08:56 AM
Because 12 bit int is faster than fp16?

Sazar
03-18-03, 10:07 AM
Originally posted by Chalnoth
This is such BS.

How many of you actually think that 16-bit will provide any noticeable image quality deficit compared to 32-bit? The truth is that except in extreme circumstances, or with non-color data, there will be no visible difference.

The reason Microsoft has 24-bit as the "minimum" is for texturing operations. All texture ops are done at 32-bit with the NV3x.

The apparent image quality issues with 3DMark03 could NOT have possibly been due to 16-bit color. If anything, they were due to rendering with FX12.

and I would tend to agree with you there...

because all the evidence dissected thus far does seem to point to fixed integer rather than floating point being used for the performance increase...

SurfMonkey
03-18-03, 10:32 AM
Originally posted by Hanners
The question is - If FP16 performance is acceptable on the NV30, why would they make the drivers take it down to 12-bit integer? :confused:

Because the single greatest ****-up that nvidia propagated was their dumb assed refusal to include PS1.4 in the GF4 when it would have been easy. They really, really suck for that.

Now they are stuck with a nice quick INT12 register combiner and pipeline that has to revert to using FP32 to do simple PS1.4 rendering. Reducing performance to a crawl. It's like using a sledge hammer to crack a peanut, in a vacuum.

They have FP16 which will produce output that 99% of users won't be able to differentiate from FP32, but it's not in spec anymore.

They have FP16 which is only twice as fast because you can work on two chunks at a time, or one FP32. The actual performance is the same.

On a lighter note, things do definitely sound a whole world better with the NV35 ;)

Chalnoth
03-18-03, 11:19 AM
Originally posted by SurfMonkey
Because the single greatest ****-up that nvidia propagated was their dumb assed refusal to include PS1.4 in the GF4 when it would have been easy. They really, really suck for that.
Um, no. It would have been quite a change.

Now they are stuck with a nice quick INT12 register combiner and pipeline that has to revert to using FP32 to do simple PS1.4 rendering. Reducing performance to a crawl. It's like using a sledge hammer to crack a peanut, in a vacuum.
Current drivers are just immature. As the OpenGL NV30-specific fragment program results have shown, the performance is often much higher than PS 2.0 performance.

They have FP16 which is only twice as fast because you can work on two chunks at a time, or one FP32. The actual performance is the same.
That didn't make any sense. It can store twice as much temporary data, not work on more at once.

SurfMonkey
03-18-03, 11:34 AM
Oh, I thought it could process 2xFP16 + 1xINT12 per clock or 1xFP32 + 1xINT12 per clock. That was where the speed gain came from. :confused:

And maybe including PS1.4 support in the GF4 would have been problematic (mostly because the design was already complete), but that didn't stop them putting support in the FX series??

And I can't believe that the speed difference between the ARB2 pathway and the NV30 pathway is down to just immature drivers. Admittedly, controlling the scheduling must be a complete nightmare, but still...

digitalwanderer
03-18-03, 12:43 PM
Thanks for the explanation Chalnoth...but then y'all went and stepped it up about 4 levels above me. :rolleyes: :confused:

I understand the FP16/32 thing pretty well, I think (although I'd really like a chance to see the visual difference myself rather than rely on screenshots...but I just ain't made of money :( ), but I don't get the "Because 12 bit int is faster than fp16" stuff.

How does 12-bit integer relate to FP16/32? Sorry again that I ain't up to speed, but I come to these places to learn more than to rant. (Believe it or not. ;) )