Old 11-13-03, 05:20 AM   #241
Deathlike2
Driver Reinstall Addict
 
Join Date: Apr 2003
Location: Nowhere Near NVidia or NVNews
Posts: 336
Default

Quote:
ChrisW said here that the score for NV3x now drops into the 4000+ range, not the 3xxx range like before. So I guess Futuremark approved some of the 'optimizations' by Nvidia. I assume Nvidia goes even further, something like what Gabe mentioned for the HL2 scenario.
This assumes you were previously in the 5000+ range or so...

Futuremark did not "approve" any optimizations from NVidia... they only disabled them, to allow a fair raw comparison between ATI and NVidia cards. Remember patch 330 for the 44.03 Dets? Patch 340 is for the 52.16 Dets (and previous Dets).


Quote:
The score that the NV38 produces using the latest drivers in the patched 3DMark03, which is in the 4xxx range, was achieved using FP16. I think Futuremark allowed it, and that's why we see a 4xxx-range score and not 3xxx.
That's incorrect; Futuremark hasn't allowed it (no PP [partial precision] hints were applied to any of the existing shaders).

Quote:
ANYWAYS, how come Futuremark isn't doing anything about this if it directly violates the white paper they released stating the guidelines? I mean, wtf... why even release it in the first place?
They have already defeated probably a good portion of the cheating in build 340 of 3DMark, which is why they have "reviewed and approved" the 52.16 Dets. There are no inexplicable semantics there.

Quote:
I'm glad the truth is finally unveiled, as the NV35, NV36, and NV38 do not have FX12 shader units.
That's correct. These are FP units (or so I understand them to be)... but they are rather undefined at the moment (they help accelerate some of the more basic operations; not all operations are supported)... NVidia hadn't spent enough time with its compiler to expose them... and since the creation of the Det 50s, they are exposing them (resulting in performance gains). This hardware was in the NV35 from the start... and was only just recently exposed...

You could only wish NVidia had been doing the right thing before... but they have done so many PR-related things that there isn't much left to believe anymore.
__________________
PR = crap
War Against FUD
What is FUD? - http://www.geocities.com/SiliconValley/Hills/9267/fuddef.html
Deathlike2 is offline   Reply With Quote
Old 11-13-03, 06:11 AM   #242
Deathlike2
Driver Reinstall Addict
 
Join Date: Apr 2003
Location: Nowhere Near NVidia or NVNews
Posts: 336
Default

Also... note that the scores are lower for FX cards NOT because Futuremark lowered them intentionally... it is because Futuremark has deemed certain "optimizations" invalid... which resulted in lower scores in some of the tests... which in turn lowered the overall score (derived from the 3DMark calculations).

The shader code has changed... but the shaders literally do THE SAME THING (simple variable renames are enough)... so, since the same work is being done as before, scores shouldn't be affected... hence ATI (and even Matrox!!!) scored the same. However, the same cannot be said of NVidia...
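
Here's a toy sketch of why that matters (completely made-up code, not from any actual driver): if a driver recognizes a benchmark shader by fingerprinting its exact text and swaps in a hand-tuned replacement, even a trivial register rename changes the fingerprint and the "optimization" silently stops firing.

Code:
// Hypothetical shader detection by fingerprint. Build 340 renamed
// registers / reordered instructions: the math is identical, but the
// hash no longer matches, so the hand-tuned swap never happens.
#include <functional>
#include <iostream>
#include <string>
#include <unordered_map>

int main() {
    std::hash<std::string> hasher;
    // Known shaders -> hand-tuned replacements, keyed by hash of the text.
    std::unordered_map<std::size_t, std::string> replacements;

    // The shader exactly as it appeared in build 330.
    std::string build330 = "mul r0, r1, c0\nadd r0, r0, c1";
    replacements[hasher(build330)] = "/* hand-tuned fast path */";

    // Build 340: same math, but r1 was renamed to r2.
    std::string build340 = "mul r0, r2, c0\nadd r0, r0, c1";

    for (const std::string& s : {build330, build340}) {
        if (replacements.count(hasher(s)))
            std::cout << "detected -> swap in hand-tuned shader (fast)\n";
        else
            std::cout << "not detected -> run shader as submitted (slow)\n";
    }
}

A real compiler doesn't care what the registers are called... which is exactly why ATI (and Matrox) didn't move an inch.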
__________________
PR = crap
War Against FUD
What is FUD? - http://www.geocities.com/SiliconValley/Hills/9267/fuddef.html
Deathlike2 is offline   Reply With Quote
Old 11-13-03, 06:49 AM   #243
NickSpolec
 
NickSpolec's Avatar
 
Join Date: Aug 2002
Location: Burning Inferno, Arizona
Posts: 371
Default

With this report by Xbitlabs, and Nvidia now running with the idea that the 340 patch disables the GFFX unified compiler, does anyone think Nvidia will call out the lawyers?


From http://www.xbitlabs.com/news/video/d...112181114.html

Quote:

After we published our story about a Gainward representative accusing 3DMark03 build 340 of disabling certain features of NVIDIA drivers, we received some additional clarifications from Futuremark Corporation, the developer of the 3DMark03 benchmark, and from NVIDIA, the developer of the GeForce FX hardware and software.

An official from NVIDIA Corporation confirmed Mr. Tismer’s accusation that “patch 340 disables the GPU compiler. The compiler has to run on the CPU instead resulting in code harder to digest and taking away 20% of the performance.” “Yes, that is actually the case with the new patch 340 that Futuremark posted,” said an NVIDIA spokesperson on Wednesday.

“Few weeks ago we released our 52.16 driver that includes our brand new unified compiler technology. With the new patch the benchmark, our unified compiler gets not used by the app so it goes to CPU and we are definitely slower,” Luciano Alibrandi, NVIDIA’s European Product PR Manager, added.

In response to these accusations, Tero Sarkkinen, Executive Vice President of Sales and Marketing for Futuremark, said: “Wolfram is totally wrong here because what he suggests is not even feasible technically. 3DMark03 does not talk to graphics driver, it talks to the DirectX API, which then talks to the driver. Thus, it is impossible for the application to disable GPU compiler.”

“One of the things the unified compiler does is to reinstruct the order of lines of code in a shader. By simply doing this the performance can increase dramatically since our technology is very sensitive to instruction order. So if this is not happening we have a performance penalty,” said NVIDIA’s representative today.

“The only change in build 340 is the order of some instructions in the shaders or the registers they use. This means that new shaders are mathematically equivalent with previous shaders. A GPU compiler should process the old and the new shader code basically with the same performance. Of course, if there are application specific optimizations in the driver that depend on identifying a shader or parts of it, then you might see performance differences because these optimizations will not work if the driver is not able to detect the shader, ” Tero Sarkkinen added.

“Our position is that our unified compiler delivers the best gaming experience possible, and as long as we produce the right image and simply do not accelerate just a benchmark then it is good to optimize and use a compiler,” the official from NVIDIA acknowledged.

“Let's also repeat that 3DMark specific driver optimizations are forbidden in our run rules because they invalidate the performance measurement and the resulting score is not comparable to other hardware. Thus, the right conclusion is that the new version of 3DMark03 is now very suitable for objective performance measurement between different hardware,” Futuremark’s Executive Vice President of Sales and Marketing summarized.

There seems to be another big misunderstanding between the two companies. One side says its Unified Compiler is disabled by Futuremark's 3DMark03, while the benchmark's developers tell us that this is technically impossible. Futuremark has never been caught disabling anything useful in graphics card drivers; NVIDIA, on the other hand, was accused of 3DMark03-specific optimisations earlier this year.

Stay tuned, as we are going to post more clarifications on the matter later.



If Nvidia does call out the lawyers, who thinks FutureMark will sissy-out like they did last time, and once again become Nvidia's b*tch?
__________________
Snake's System:

- AthlonXP Mobile 2500+ (@2.5GHz, 1.850V)
- Albatron KX18D PRO (NForce 2 Ultra @ 227FSB)
- 512MB OCZ Platinum PC3200 EL (@DDR454, CAS 2.5, 6-3-3)
- GeForce3 (@230c, 460m)
- Fortissimo III, Gamesurround 7.1
- POS Intel 56k Modem (soon to get high speed, though)
- Maxtor DiamondPlus 9 120GB, ATA133 (8MB)
- Samsung DVD/CD-RW Combo (52x32x52x16x)
- Lite-On LTR 16102B (16x8x40x)
NickSpolec is offline   Reply With Quote
Old 11-13-03, 06:57 AM   #244
AnteP
Nordic Nerd
 
AnteP's Avatar
 
Join Date: Dec 2002
Location: Sweden, Lund
Posts: 552
Send a message via ICQ to AnteP
Default

Quote:
Originally posted by NickSpolec
With this report by Xbitlabs, and Nvidia now running with the idea that the 340 patch disables the GFFX unified compiler, does anyone think Nvidia will call out the lawyers?


From http://www.xbitlabs.com/news/video/d...112181114.html


If Nvidia does call out the lawyers, who thinks FutureMark will sissy-out like they did last time, and once again become Nvidia's b*tch?
Since the comment about disabling the "unified compiler" is a blatant lie, especially the part about forcing stuff to run on the CPU, I don't think lawyers would get them anywhere.
AnteP is offline   Reply With Quote
Old 11-13-03, 07:24 AM   #245
Skuzzy
Bit Bumper
 
Join Date: Aug 2002
Location: Here
Posts: 782
Default

Kind of wish they would call in the lawyers. If only FM would stand their ground.

Pure PR BS is all this is. FM is quite right: a DirectX app has no control over the shader compiler at all. None, nada! I would like to see NVidia explain that in a courtroom.
The compiler runs wherever the driver loads it to run. However, it would be stupid to run the compiler directly on the GPU anyway: you would spend too much time passing information back and forth over the AGP bus.
The compiler runs on the host CPU and generates the code for the GPU, and that code is uploaded in one shot. It would be insane to run the compiler on the GPU.
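
For the curious, here is roughly the path FM is describing, as a plain Direct3D 9 sketch (error handling omitted, and the shader is a made-up trivial one). The app hands a token stream to the DirectX runtime, and that's it... everything after that point is the driver's business, on the CPU:

Code:
#include <cstring>
#include <d3d9.h>
#include <d3dx9.h>

IDirect3DPixelShader9* createShader(IDirect3DDevice9* device) {
    // Trivial ps_2_0 pixel shader: output the constant in c0.
    const char* src = "ps_2_0\nmov oC0, c0\n";

    // App side: assemble the source into a D3D shader token stream.
    ID3DXBuffer* tokens = nullptr;
    D3DXAssembleShader(src, (UINT)std::strlen(src),
                       nullptr, nullptr, 0, &tokens, nullptr);

    // API side: hand the tokens to DirectX. The runtime passes them to
    // the driver, which compiles them -- on the CPU -- into GPU-native
    // code that is uploaded in one shot. The app never touches any of
    // that, so it has no way to "disable" the driver's compiler.
    IDirect3DPixelShader9* shader = nullptr;
    device->CreatePixelShader(
        static_cast<const DWORD*>(tokens->GetBufferPointer()), &shader);
    tokens->Release();
    return shader;
}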
The PR BS spewing forth about this compiler is an insult to everyone's intelligence. NVidia must really think all consumers are complete and utter dolts.
__________________
Stuff Happenz!
Skuzzy is offline   Reply With Quote
Old 11-13-03, 07:45 AM   #246
NickSpolec
 
NickSpolec's Avatar
 
Join Date: Aug 2002
Location: Burning Inferno, Arizona
Posts: 371
Default

Quote:
NVidia must really think all consumers are complete and utter dolts.
Of course Nvidia does... Why else would they have released the NV30?
__________________
Snake's System:

- AthlonXP Mobile 2500+ (@2.5GHz, 1.850V)
- Albatron KX18D PRO (NForce 2 Ultra @ 227FSB)
- 512MB OCZ Platinum PC3200 EL (@DDR454, CAS 2.5, 6-3-3)
- GeForce3 (@230c, 460m)
- Fortissimo III, Gamesurround 7.1
- POS Intel 56k Modem (soon to get high speed, though)
- Maxtor DiamondPlus 9 120GB, ATA133 (8MB)
- Samsung DVD/CD-RW Combo (52x32x52x16x)
- Lite-On LTR 16102B (16x8x40x)
NickSpolec is offline   Reply With Quote
Old 11-13-03, 08:05 AM   #247
silence
 
silence's Avatar
 
Join Date: Jan 2003
Location: Zagreb, Croatia
Posts: 425
Default

Quote:
Originally posted by NickSpolec
Of course Nvidia does... Why else would they have released the NV30?
LOL.....
silence is offline   Reply With Quote
Old 11-13-03, 09:36 AM   #248
DMA
Zzzleepy
 
DMA's Avatar
 
Join Date: Apr 2003
Location: In bed
Posts: 997
Default

Quote:
Originally posted by AnteP
Why can't I laugh? I just love conspiracy theories.
Yours is especially funny.

To be honest, though, I assume your accusations are directed towards me, since I'm the only person right now who uses the combination of games and settings you mention.

To clarify how the games are chosen here we go:

I choose games on a few criteria:
I look at reseller listings (i.e. top sales lists)
I browse the internet to find which games seem to be popular at the moment
I try to find statistics on which online games are popular
I also try to find a good blend of different games (all the way from pre-TNL up to DX9, and both OGL and DX)
Sometimes I also choose games which I personally like very much (Mafia is the game I'm referring to here)

I also have some limitations when I make my selection:
I have to have access to a game to benchmark it (i.e. either the dev/publisher sent it to me or I go out and buy it)
If not, then at least there must be a demo that's fully representative of the full version of the game (something the UT2003 demo, for example, is not)
There must also be some easy way to benchmark the game: if not a benchmark mode, then at least some predictability so that I can use FRAPS

I also listen to ATI and nVIDIA, since they often have recommendations on which games to test and not to test.
Usually I'd rather listen to which games they'd prefer me not to test, and if their arguments are valid I'll exclude the mentioned titles.
This hasn't happened yet that I can remember, though.

As for the settings, I chose 4xAA/8xAF since I think, at 1280x1024, it gives a perfect blend of acceptable performance and very good image quality.
I could of course go with a higher res, but the fact is that many people still have monitors limited to lower resolutions, or at least they can't run 16x12 at a high enough refresh rate to make it a viable option.

I personally believe that a high-end 500 dollar video card should, in the winter of 2003, produce acceptable framerates in most games with these settings.

As for not testing more settings, it's simply a matter of weighing the options.
If I tested more settings, I'd either have to test fewer games or fewer boards, since there simply isn't enough time.
Also, I personally think that unless performance is way, way below par, there's simply no reason to test with low image quality settings.
Just as we don't see any sites benchmarking with just bilinear filtering, or 16bpp, nowadays.
It's evolution; the older stuff has to go sometime. I let it go a few months ago, since I think we've finally reached a point where basically ALL video cards available, except perhaps the sub-75 USD ones, can perform to some extent even with AA/AF.

There you have it. Feel free to throw more conspiracy theories spiced up with sarcasm.

Just to make you personally more content, I actually e-mailed nVidia's PR manager today and asked him what he thought of my test suite of 12 games and the settings I use.

And last but not least: no, 3DMark is not making a return to any review of mine, except perhaps when a new architecture is presented.
I use games, since games are what you use the board for.
Synthetic benchmarks have their place in exploring new architectures to find out their strengths and weaknesses in preliminary tests.

If you have any questions, please let me know and I'll be happy to answer them. And of course, if/when I get the reply from nVidia I'll let you know what they think about the test suite/settings.

Cheers.

EDIT:
As for the TWIMTBP games, I simply don't care. TWIMTBP is a marketing program, and I won't let such activities interfere with my work as an editor.
To be honest, I don't even keep track of which games are and are not part of that program. The ones I can mention off the top of my head are UT2003 and Tomb Raider. (As for Tomb Raider's inclusion in the suite, it's simple: it's the first commercially available DX9 game and is thus of interest, in my opinion.)
As for Quake 3, I thought it would be nice to have a legacy game in there. Besides, quite a few players still play Q3 and its mods (including my boss, hehe). But to be honest, I stuck with it because it's probably THE easiest game to benchmark ever: it takes like ten seconds, you get predictable results, and it's a one-click process if you use Q3Bench.

So there ya have it.
Actually, I just picked a few games that I know run like the wind on ATI cards. So no, nothing personal.

I don't know, I guess I just wanna see reviews with more than one setting, not only 4xAA/8xAF or whatever.
But who am I to bitch about this? I haven't even written a review, so I don't know how much time it takes.

And even though today's high-end cards should be able to deal with AA/AF with no problems whatsoever, I think it's important to post results with these settings disabled too, or at least mix it up a little.

I bet that over 90% of "review readers" just look at the graphs and make their own decisions. Kinda like "a picture is worth a thousand words".
And if the graphs show huge differences that don't really tell the whole story, well...
__________________
Gigabyte EP35-DS4 | E6850@3.8GHz |
8800GT@800/2000MHz | 4 GB Corsair 6400|
DMA is offline   Reply With Quote

Old 11-13-03, 10:11 AM   #249
False Christian
Registered User
 
False Christian's Avatar
 
Join Date: Oct 2002
Location: Oshawa, Ontario, Canada
Posts: 36
Smile

Does it really surprise us that nVidia is cheating with driver optimizations for 3DMark2003? It's just human nature; we'd all probably do the same thing, wouldn't we? ATI is the #1 high-end graphics maker right now, and nVidia is scrambling to save face, as it hasn't been in 2nd place since 3dfx's Voodoo2. When the NV40 comes out, nVidia won't need to cheat, and neither will ATI with the upcoming R400.

The ATI Radeon 9700 Pro/9800 Pro/9800 XT and the nVidia GeForce FX-5900 Ultra/5950 Ultra are all great cards and I'd be happy with any of them (I'm sure happy with my AiW Radeon 9700 Pro).

nVidia's cards rock for overclocking headroom and stability. May both ATI and nVidia have continued success for a long, long time...for the sake of our wallets.
__________________
-ABIT-NF7-S V1.2
-Athlon XP M 2600+ oc'ed to 2.6GHz (13x200) 1.85 volts
-Antec Aeroflow and a 5200 BTU Air-conditioner (3 C, 20 C)
-1GB GEIL/Corsair PC3200 DDR (8-3-3-2.0)
-256MB Sapphire Radeon X800 XT PE oc'ed to 552/1184
-SB Live! Audigy 2
-80 GB WD UATA100 (7200 rpms)
-40.9 GB Fujitsu UATA100 (5400 rpms)
-MSI 16x/48x DVD-ROM
-32x12x40 ASUS CDR/RW
-21" ViewSonic P810 (.25mm)
-Cable Internet (Rogers)
-Winblows XP PRO SP1
-3DMark2001SE (330): 22,850
-3DMark2003: 12,800
False Christian is offline   Reply With Quote
Old 11-13-03, 11:10 AM   #250
vampireuk
**** Holster
 
vampireuk's Avatar
 
Join Date: Mar 2001
Location: The armoury
Posts: 2,813
Send a message via AIM to vampireuk
Default

Aww I wish I could have banned that moron phial
__________________
I put children in microwaves.
vampireuk is offline   Reply With Quote
Old 11-13-03, 11:19 AM   #251
Uttar
Registered User
 
Uttar's Avatar
 
Join Date: Aug 2002
Posts: 1,354
Send a message via AIM to Uttar Send a message via Yahoo to Uttar
Default

---
Posted this on B3D, but I thought it might interest the ones of you who don't visit that forum
---

Wait, wait, wait...

Are you guys implying the compiler is nothing but fraud? Not AFAIK.

The real "unified compiler" can do the following thing:
- Reduce register usage by: reusing registers* and using swizzle**.
- Reorder instructions with the goal of exploiting parallelism ( Doing 2 TEX operations in a row as much as possible, for example )***.

*: Already done to an extent in the Detonator FX and 45 series.
**: Naive example of reducing register usage thanks to Swizzle: If, for two registers, you only access .xy - then it is possible to only use one register, by making the second register actually be the .zw part of the first register.
***: Fully introduced in the Det50s.
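
A toy illustration of that packing idea (made-up code, nothing from NVIDIA's driver, just to show the assignment): virtual registers that only touch .xy can be handed out in pairs, one living in the .xy half of a physical register and the next in its .zw half.

Code:
#include <cstdio>
#include <vector>

struct Assignment {
    int physReg;        // index of the physical register
    const char* lanes;  // which half of it the virtual register occupies
};

// Pack N two-component virtual registers into ceil(N/2) physical ones.
std::vector<Assignment> packXYRegisters(int virtualRegs) {
    std::vector<Assignment> out;
    for (int v = 0; v < virtualRegs; ++v)
        out.push_back({v / 2, (v % 2 == 0) ? ".xy" : ".zw"});
    return out;
}

int main() {
    // Four .xy-only temporaries collapse into two physical registers --
    // a big win on NV3x, where throughput drops as live registers grow.
    std::vector<Assignment> packed = packXYRegisters(4);
    for (int v = 0; v < (int)packed.size(); ++v)
        std::printf("virtual r%d -> physical r%d%s\n",
                    v, packed[v].physReg, packed[v].lanes);
}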

NVIDIA PR and Marketing, however, seem to have decided to include "hand-made shader replacements" in their definition of the "unified compiler" technology. I suspect that in cases where hand-made shaders are found to replace the standard shader, the real "unified compiler" technology is not used, as those shaders are already considered optimal.

This implies that:
1) NVIDIA will now describe their unified compiler technology as a mix of automatic and manual shader optimizations with "no IQ degradations".
2) In the Detonator 50s, many new techniques for automatic shader optimization were added to the compiler. Such a compiler already existed, in a very primitive form, in older driver sets. That is roughly similar to ATI having had a basic compiler in their drivers since the 3.6 release.
3) According to NVIDIA, any application preventing them from using "homemade shader replacements" disables their "Unified Compiler". In reality, it only disables part of it, and this part is exactly as FutureMark describes; it's pure and simple application detection, as prohibited by their guidelines.


Hope that makes it clearer!


Uttar
Uttar is offline   Reply With Quote
Old 11-13-03, 11:32 AM   #252
euan
Fully Qualified FanATIc
 
euan's Avatar
 
Join Date: Oct 2002
Location: Glasgow, Scotland.
Posts: 387
Default

Quote:
Originally posted by Uttar
Wait, wait, wait...

Are you guys implying the compiler is nothing but fraud? Not AFAIK.

[Uttar's full post, quoted just above, snipped]
Sounds even more like fraud now.

[edit]
Uttar,

Even if the "complier" is not as good as the hand crafted stuff, NVIDIA should just take it like mature grown ups and use the compiler. Then we can all stop having to fight about hand crafted shaders being cheating which is exactly what they are (application detection by shader).

If they had used the compiler (assuming it is real), then when FM changed their code, the results would have changed. And if FM changed the code again, the next results would again be slightly different (less or more optimal). That may be acceptable to some, i.e. those who accept shader replacement as valid. However, when the code is changed and all the results flatline, that appears to be cheating.

If the compiler has kicked in now that the hand-crafted shaders for 3DMark03 cannot be used, then it shows us clearly that the compiler is pretty much useless for real-world applications. Hence it is a POS.

Why does changing code disable a compiler??? Because it is not a compiler.
__________________
Sys.txt

Last edited by euan; 11-13-03 at 11:42 AM.
euan is offline   Reply With Quote