
View Poll Results: Will nvidia remove illegitimate optimizations from their drivers?
No, they won't remove "optimizations." 90 (92.78%)
Yes, they will remove "optimizations" in the future. 7 (7.22%)
Voters: 97

Old 07-13-03, 05:50 PM   #109
rokzy
 
Join Date: May 2003
Posts: 158
Default

Quote:
Originally posted by Ruined
Again though, according to digit life, Nvidia with its latest 44.67 drivers is doing the same thing in 3dmark and actually coming closer to the reference (i.e. more accurate) than ATI, which most assume isn't using optimizations - so is that really a clue?
that comparison is flawed: it basically says that nvidia's FP32 with cheats is closer to an FP32 reference without cheats than ATI's FP24 is. the differences for ATI are due to the different precision used, NOT a difference in rendering.

it's like nvidia's values are mostly okay, but some sections are way off because they use different calculations, whereas ATI's values are all slightly different, but the difference is in the 10th decimal place and is just a result of different precision, NOT different calculations.
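here's a rough sketch of what i mean in plain C (float/double standing in for FP24/FP32, and the "swapped" formula is just a made-up shortcut, not anything nvidia actually ships): the same math at lower precision only nudges the last digits, while different math moves the answer visibly.

Code:
/* same calculation at lower precision vs. a different calculation entirely.
   float/double stand in for FP24/FP32; "swapped" is a hypothetical shortcut. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 0.73;

    double full    = exp(-x * x);            /* reference calculation            */
    float  reduced = expf((float)(-x * x));  /* same calculation, less precision */
    double swapped = 1.0 - x * x;            /* different calculation            */

    printf("full precision : %.9f\n", full);            /* ~0.5869      */
    printf("lower precision: %.9f\n", (double)reduced); /* off by ~1e-7 */
    printf("swapped formula: %.9f\n", swapped);         /* off by ~0.12 */
    return 0;
}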




also, whether or not it's closer than ATI, HOW CAN REPLACING THE PROGRAMMER'S CODE WITH CODE THAT GIVES DIFFERENT RESULTS POSSIBLY BE WHAT THE PROGRAMMER INTENDED?!

sorry for shouting, but people are losing sight of the actual points in attempts to make one company seem better than another. just like... it doesn't matter who is faster or has better IQ, NVIDIA IS IGNORING THE SETTINGS USERS SELECT AND USING ITS OWN JUST TO LOOK BETTER IN BENCHMARKS

Last edited by rokzy; 07-13-03 at 05:59 PM.
rokzy is offline  
Old 07-13-03, 05:51 PM   #110
NickSpolec
 
Join Date: Aug 2002
Location: Burning Inferno, Arizona
Posts: 371
Default

I don't even know why FutureMark decided to use FP32 in the first place..
NickSpolec is offline  
Old 07-13-03, 05:53 PM   #111
extreme_dB
Registered User
 
Join Date: Jun 2003
Posts: 337
Default

Quote:
Originally posted by NickSpolec
Hey, that's Nvidia's fault. They should have included FP24 support (like the DX9 standard calls for), instead of playing the PR bullsh*t game and completely skipping FP24 by instead offering the "superior" FP32 (which isn't that superior, considering how poorly the GFFX series actually performs with it).
Yes, it's Nvidia's fault, just like it's Nvidia's fault that they don't have a card that gets 1000000 3dmarks. Those are just design decisions. If Nvidia's card can't perform well in 3dmark03 by following the rules, then that's the way it should stand.

If Nvidia can perform very well in DX9 games by only slightly reducing image quality, in contrast to its 3dmark03 results within the rules, that should be shown in reviews as well - but not by directly comparing against ATI in an unfair manner, as has been happening.
extreme_dB is offline  
Old 07-13-03, 06:04 PM   #112
Deathlike2
Driver Reinstall Addict
 
Join Date: Apr 2003
Location: Nowhere Near NVidia or NVNews
Posts: 336
Default

Honestly, it's hard to make apples-to-apples comparisons with the current generation of hardware... I feel it should be dealt with later...

Anything rendered in a way the user didn't intend (like lower quality when the user asked for high quality) is a pretty bad cheat (or "optimization").

What's currently being debated is the performance hit associated with the use of FP32 vs. FP24...

FP32 > FP24 - image quality (though how much of a difference it makes is questionable; rough numbers in the sketch below)

FP24 > FP32 - performance (in theory, the wider math FP32 requires incurs some performance loss)
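Rough numbers behind that, as a quick C sketch (10/16/23 are the commonly quoted mantissa widths for FP16, ATI's FP24 and FP32 - treat them as assumptions, not measurements):

Code:
/* relative precision implied by each format's mantissa width */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const struct { const char *name; int mantissa_bits; } fmt[] = {
        { "FP16", 10 },  /* s10e5               */
        { "FP24", 16 },  /* s16e7               */
        { "FP32", 23 },  /* s23e8 (IEEE single) */
    };

    for (int i = 0; i < 3; i++) {
        /* smallest relative step representable near 1.0 */
        double step = pow(2.0, -fmt[i].mantissa_bits);
        printf("%s: ~%.1e relative precision (about %d decimal digits)\n",
               fmt[i].name, step, (int)(fmt[i].mantissa_bits * 0.30103));
    }
    return 0;
}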

Then there are debates between the architectures... my understanding is that ATI's architecture forces FP24 no matter what (which may be good or bad in different respects).

NVidia's architecture doesn't seem well suited to using FP32 throughout... that would suggest a bigger performance hit just for having the option, and this may translate into it becoming "almost unusable".

The problem now is that NVidia has to cheat by using LOWER precision to gain a favorable score in a synthetic benchmark (in other words, lower quality favors the NV3X architecture)...

There really is NO reason to LOWER the quality to increase performance (the driver must do what the application and user request)...

You cannot make an apples-to-apples comparison if things are not done as intended...
__________________
PR = crap
War Against FUD
What is FUD? - http://www.geocities.com/SiliconValley/Hills/9267/fuddef.html
Deathlike2 is offline  
Old 07-13-03, 06:05 PM   #113
aapo
Registered User
 
Join Date: May 2003
Location: Finland
Posts: 273
Default

Quote:
Originally posted by Ruined
But again, most of the Nvidia optimizations aren't degrading quality in a way that can be seen without using Photoshop or some other external program to manipulate the image - and the ones that could be seen in 3dmark have been fixed in the 44.67 det drivers. If I can't see the change while I am playing, I don't really care about it. And in the case where it was a problem (3dmark03), Nvidia fixed it, and according to digit life is now closer to the 3dmark IQ reference than ATI despite 'optimizations.'

Maybe an advanced tab to disable such optimizations would be cool, but why on earth, if you owned an Nvidia card, would you want to disable optimizations that speed up performance with no noticeable impact on IQ?
There are at least two problems with your viewpoint:

1. Some users in some cases see the differences without Photoshop. For example, in another thread on this forum Eymar noticed a problem with the AF of the FX5900 (presumably forced mixed bi-/trilinear AF), where mipmap boundaries were visible in ST:EF2 (see the sketch at the end of this post). That would be fine if the user had a choice to get true full trilinear instead of the slightly lesser mode. Link: http://www.nvnews.net/vbulletin/show...5&pagenumber=2

2. What happens when FP shaders become everyday features in games? nVidia doesn't have the resources (or room in its drivers) to rewrite shaders for every game on earth. Only the most popular titles will be optimized, whereas 'lesser' games, demos, freeware and shareware programs might generally run badly. A lot of users might get pissed.

PS. I say might, because the situation is not necessarily as bad as it seems to me. But I'd bet it is.
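Here is a toy C sketch for point 1 (my own illustration, not from Eymar's thread): bilinear filtering snaps to the nearest mip level while trilinear blends the two nearest levels, which is why a quietly reduced filtering mode shows visible mip boundaries.

Code:
/* toy model of mip filtering: bilinear steps between levels, trilinear blends */
#include <stdio.h>

static double sample_mip(int level)        /* stand-in for "sample mip level N" */
{
    return 1.0 / (1 << level);
}

static double filter_bilinear(double lod)  /* nearest level only: abrupt steps */
{
    return sample_mip((int)(lod + 0.5));
}

static double filter_trilinear(double lod) /* blend of two levels: smooth */
{
    int    lo = (int)lod;
    double f  = lod - lo;
    return (1.0 - f) * sample_mip(lo) + f * sample_mip(lo + 1);
}

int main(void)
{
    for (double lod = 0.0; lod <= 2.01; lod += 0.25)
        printf("lod %.2f   bilinear %.3f   trilinear %.3f\n",
               lod, filter_bilinear(lod), filter_trilinear(lod));
    return 0;
}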
__________________
no sig.
aapo is offline  
Old 07-13-03, 06:10 PM   #114
NickSpolec
 
Join Date: Aug 2002
Location: Burning Inferno, Arizona
Posts: 371
Default

But it's Nvidia that dug the hole of "when FP24 is needed, either use FP32 and get correct image quality but slow performance, or use FP16 and sacrifice image quality for performance".

FP24 is the DX9 standard. It's the only precision standard in DX9. There will be games that specifically call for it.

But here is an interesting notion...

Why would Nvidia not include FP24 and instead include FP32 when they probably knew that FP32 would suck on their cards? Of course, because it's vendor hype. "Bigger" numbers. 32 is better than 24.

But still... why wouldn't they just add FP24? It wouldn't have been that hard...

Maybe because FP24 sucks on their hardware the same way FP32 does, and FP24 just so happens to be the DX9 standard (and hence the standard that will probably be used for DX9 apps).

Maybe they thought that FP32 would never actually be called for, and since they don't support the DX9 standard of FP24, anything that uses FP24 could just be downgraded to FP16 instead of upgraded. This could also be why they got so upset when 3DMark2k3 was being developed, because it would actually call for FP32 and Nvidia didn't like that (because their cards suck at FP32).

So basically, they could have their cake ("better" hardware features than ATI) and eat it too (never actually have to use it, because it blows on their hardware).

So, maybe FP32 is a feature that isn't actually supposed to be used. Maybe it's just for boasting.
NickSpolec is offline  
Old 07-13-03, 06:14 PM   #115
extreme_dB
Registered User
 
Join Date: Jun 2003
Posts: 337
Default

Quote:
Originally posted by Ruined
I agree, it would be cool to have a tab where you could enable or disable every possible feature of the card - though that tab would probably be large and formidable, and probably wouldn't be approved by the marketing dept... After all, the biggest market for videocards doesn't even read this board! As far as the FX5900 goes, though, I'd much rather have Nvidia decide when it's best to use FP32 and when it's best to use FP16 than just lose performance all the time with FP32 or lose precision all the time with FP16. Again, if ATI's and Nvidia's hardware were the same it would make benchmarking the two with identical IQ more plausible, but they aren't.
You do make sense there, Ruined. If you think Nvidia's optimizations are valid for benchmarking (although some are simply forcing options off secretively), then a reviewer should adjust ATI's settings accordingly to match the cards as closely as possible for a fair comparison. That's the bottom line.

At the moment, Nvidia's performance advantages can actually be totally misleading. You have to admit that.
extreme_dB is offline  
Old 07-13-03, 06:53 PM   #116
aapo
Registered User
 
Join Date: May 2003
Location: Finland
Posts: 273
Default

Quote:
Originally posted by NickSpolec
FP24 is the DX9 standard. It's the only precision standard in DX9. There will be games that specifically call for it.

But here is an interesting notion...

Why would Nvidia not include FP24 and instead include FP32 when they probably knew that FP32 would suck on their cards? Of course, because it's vendor hype. "Bigger" numbers. 32 is better than 24.

But still... why wouldn't they just add FP24? It wouldn't have been that hard...

Maybe because FP24 sucks on their hardware the same way FP32 does, and FP24 just so happens to be the DX9 standard (and hence the standard that will probably be used for DX9 apps).
It's not that simple; you're confusing things. The DX9 standard calls for an FP32 data type (i.e. the size of one pixel value is 4 bytes), but it allows pixel shader computations to be done at 24 bits (i.e. 3 bytes) internally in hardware. However, 32 bits is completely valid and even better; one could say FP32 is more of a standard, as vertex shaders have always been FP32, and in the future we'll be seeing unified shader architectures. A unified shader architecture means the shader program can combine vertex and pixel shader calculations, which is impossible at the moment but should allow some neat effects in the future. It's possible that nVidia is getting ready to implement unified shaders in some future architecture, and they are trying out FP32 shaders in advance in the current architecture.

Then there is an additional new PartialPrecision mode, where calculations are done at a minimum of 16 bits (i.e. 2 bytes), but it can be used only when the application gives permission to use it. It's not often mentioned, but I think it exists in the DX9 specification.

And another thing: it's been known for a while that the NV3x calculates FP32, FP24, FP16 and FP-whatever-below-32 at the same speed. The performance difference comes from a temporary register usage penalty - because FP32 is twice as big as FP16, it needs twice the temporary register space, which leads to (much) slower performance. However, if only one register is used, the FP16 and FP32 modes are equally uber-fast. It seems nVidia's optimizations concentrate on minimizing the number of FP16 and FP32 registers in their shaders. Remember, they can use both at the same time, and therefore they can (in some cases) get good precision with good speed. But sometimes there are catastrophic slowdowns with some shaders.
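To illustrate the register-pressure point, here is a sketch in plain C standing in for a pixel shader (half_t is only a placeholder for a 16-bit register, NOT a real NV3x or DirectX type, and the shading math is made up):

Code:
#include <stdio.h>

typedef float half_t;   /* placeholder for a 16-bit temporary */

/* naive version: four full-precision temporaries are declared, which in the
   framing above eats twice as many register slots as FP16 temporaries would */
float shade_naive(float n_dot_l, float n_dot_h, float shadow, float ambient)
{
    float diffuse  = n_dot_l * 0.8f;
    float specular = n_dot_h * n_dot_h * 0.5f;
    float lit      = (diffuse + specular) * shadow;
    float result   = lit + ambient;
    return result;
}

/* tuned version: the same math folded into one expression, with the
   low-impact inputs demoted to the 16-bit placeholder - same answer to
   within FP16-sized noise, far fewer temporaries kept around */
float shade_tuned(float n_dot_l, float n_dot_h, half_t shadow, half_t ambient)
{
    return (n_dot_l * 0.8f + n_dot_h * n_dot_h * 0.5f) * shadow + ambient;
}

int main(void)
{
    printf("naive %.6f   tuned %.6f\n",
           shade_naive(0.7f, 0.9f, 1.0f, 0.05f),
           shade_tuned(0.7f, 0.9f, 1.0f, 0.05f));
    return 0;
}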
__________________
no sig.
aapo is offline  

Old 07-13-03, 07:29 PM   #117
CaptNKILL
CUBE
 
Join Date: Jan 2003
Location: PA, USA
Posts: 18,844
Default

Do you guys realize that you have been saying the same things over and over since the first page of this thread?

And I don't see an end coming any time soon
__________________
---- Primary Rig ---- CoolerMaster 690 II Advance - Gigabyte GA-EP45-UD3P - Intel Core 2 Quad Q9550 @ 4.0Ghz + Thermalright Ultra 120 Extreme
6GB DDR2 @ 942Mhz 5-5-5-20 1.9v (2x1Gb Wintec AMPX PC2-8500 & 2x2Gb G.Skill PC2-6400) - EVGA Geforce GTX 470 @ 750/1500/1850 (1.050v)
Sparkle Geforce GTS 250 1Gb Low-Profile (Physx) - Crucial RealSSD C300 64Gb SSD - Seagate 7200.12 500Gb SATA - Seagate 7200.10 320Gb SATA
ASUS VW266H 25.5" LCD - OCZ GameXStream 700W PSU - ASUS Xonar DX - Logitech Z-5500 5.1 Surround - Windows 7 Professional x64
---- HTPC ---- Asus M3A78-EM 780G - AMD Athlon X2 5050e 45W @ 2.6Ghz - 2x2GB Kingston PC2-6400 DDR2 - Sparkle 350W PSU
Seagate 7200.10 320Gb SATA - Seagate 7200.10 250Gb SATA - Athenatech A100BB.350 MicroATX Desktop - Creative X-Fi XtremeMusic
CaptNKILL is offline  
Old 07-13-03, 07:39 PM   #118
The Baron
Guest
 
Posts: n/a
Default

Quote:
Do you guys realize that you have been saying the same things over and over since the first page of this thread?
Fair enough. I've grown tired of checking it for flames. How's about an agreement to disagree...
 