Old 06-02-03, 09:57 PM   #145
ChrisW
"I was wrong", said Chris
 
Join Date: Jul 2002
Location: standing in the corner!
Posts: 620
Default

Well, now ATI with their 1.9% increase will officially be labeled 'cheaters', while nVidia, with their 24% increase from special clipping planes, scene detection that skipped clearing the frame buffer, and pixel shader 2.0 swapped out for some weird combination of FP16 and int12, is officially 'optimizing'. Seems ATI gets screwed no matter what they do. The worst part is that almost nobody outside of these fansites will ever know any of this happened. I guess all the review sites will use 3DMark03 in their reviews now that nVidia is happy. You guys can start praising 3DMark03 as a benchmark again, since all you do is repeat whatever stance nVidia has at the moment.
ChrisW is offline   Reply With Quote
Old 06-02-03, 09:59 PM   #146
Neural
Registered User
 
Join Date: Jun 2003
Posts: 9
Default

Quote:
Originally posted by reever2
But in the realm of processors we KNOW what is being optimized and what is not. We also know that both processors support and can be optimized for SSE. In benchmarks, if one board can be optimized one way and the other one can't, it's not a fair benchmark; in games it's a different story.
So if nVidia published their optimizations, that would clear a lot of this up, wouldn't it? Hmm, I hadn't thought of that. I think it's something they should think about.

Quote:
Originally posted by Ady
Neural: Did you really mean to say "how nVidia could hide the clip planes in their drivers?" or how they put them there? Obviously hiding anything in drivers isn't really an issue; it's not like everyone searches through every file to discover such things. You wouldn't be able to find anything unless you had the source code and seriously knew what you were looking for anyway.

I don't really know much about Intel optimizing or whatever. It seems obvious, though, that if Intel decided to optimize just for a particular synthetic benchmark, that would be completely wrong, especially if it gave consumers the wrong idea of how a particular CPU performed. Obviously optimizing for an actual program is great. If it runs, it runs, and if it runs even faster, all the better.

It's different when you're talking graphics, though. A synthetic benchmark is the same: it shouldn't be specifically optimized for to improve its score at all. When it comes to games, we consumers deserve to see the game exactly as its developer intended it to be shown; it's not right to be cheated of that. Cheating in a game benchmark is the worst of all, because it gives us the wrong impression of how a video card will perform in that actual game. I know I sure don't want to be fooled.
It actually swings both ways. Intel makes sure most of the benchmarks are optimized, not because they care what we the consumers do with the benchmarks, but because many if not all of the manufacturers/distributors use them to decide what they will buy and sell. The catch is that SSE instructions only show up in the software we actually use months or even years after they appear in benchmarks, because mainstream software is written for today's hardware and goes obsolete sooner, whereas benchmarks are more forward looking. Part of me would like to see nVidia take the Intel approach: if their architecture is so different that it needs special instructions, be up front about it and allow other companies to license them. The other part of me is afraid of this, because of how Intel uses their instructions, and the other companies' inability to keep up with ever-changing SSE support, to bully the market. If ATi ends up in AMD's position, we could all be in trouble.
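
To make the SSE point concrete, here's a minimal sketch in C (the function names are just made up for illustration) of the difference between a plain scalar loop and the same loop hand-tuned with SSE intrinsics. A benchmark built only the second way will naturally flatter whichever CPU actually has SSE.

[code]
#include <xmmintrin.h>   /* SSE intrinsics (Pentium III and later, Athlon XP, etc.) */

/* Plain scalar version: runs on any x86 CPU. */
void add_arrays_scalar(const float *a, const float *b, float *out, int n)
{
    int i;
    for (i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}

/* SSE version: adds four floats per instruction, but only helps (or even
   runs) on CPUs that implement SSE.  For brevity, n is assumed to be a
   multiple of 4. */
void add_arrays_sse(const float *a, const float *b, float *out, int n)
{
    int i;
    for (i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
    }
}
[/code]

If a benchmark ships only the SSE path while the games you actually play ship only the scalar path, the benchmark score tells you very little about the games, which is basically the point being argued above.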
Neural is offline   Reply With Quote
Old 06-02-03, 09:59 PM   #147
StealthHawk
Guest
 
Posts: n/a
Default

Quote:
Originally posted by ChrisRay
Gotcha. Heh, it's really hard for me to form an opinion on this issue. I'm kinda torn between my hatred for 3DMark and the utter ludicrousness of the whole situation.
Futuremark caving in to nvidia should only increase your hatred for them.

I know I've lost a lot of respect for Futuremark, and if nvidia is able to use their cheating drivers to get higher scores, then 3DMark03 is now officially 100% useless (if it wasn't already) beyond the individual feature tests like fillrate, etc. Futuremark was forced to shoot themselves in the foot. Sad, really.
  Reply With Quote
Old 06-02-03, 10:01 PM   #148
Zenikase
Registered User
 
Join Date: May 2003
Posts: 99
Default

I've been a longtime nVidia fanboy (and I'll probably continue to be one for a long time), but this is pretty crappy.

I don't really care much for the politics of the whole situation. For me, what matters in the end is getting the best possible performance out of my product without compromising image quality and without altering the original intended result. But useless benchmark cheats (and that's apparently what these are) only add driver code bloat, and the end user loses, since you only get that kind of speed in that one specific program. Personally, I think the rule of thumb for video card companies should be to create drivers that are optimized for ALL GENERAL 3D APPLICATIONS. In my opinion, that's the only way to produce results that are comparable across products and actually reflect hardware and driver efficiency.

To touch on another subject, all this talk about NV3x's poor fragment shader performance is really unjustified. A quick glance at Carmack's last .plan update shows a comparison of ATi and nVidia's top dogs:

Quote:
The R200 path has a slight speed advantage over the ARB2 [DX9-optimized] path on the R300, but only by a small margin, so it defaults to using the ARB2 path for the quality improvements. The NV30 runs the ARB2 path MUCH slower than the NV30 [faster than NV20, but not DX9-optimized] path. Half the speed at the moment. This is unfortunate, because when you do an exact, apples-to-apples comparison using exactly the same API, the R300 looks twice as fast, but when you use the vendor-specific paths, the NV30 wins.

The reason for this is that ATI does everything at high precision all the time, while Nvidia internally supports three different precisions with different performances. To make it even more complicated, the exact precision that ATI uses is in between the floating point precisions offered by Nvidia, so when Nvidia runs fragment programs, they are at a higher precision than ATI's, which is some justification for the slower speed. Nvidia assures me that there is a lot of room for improving the fragment program performance with improved driver compiler technology.


This is the kind of stuff that comes with new technology, and you can't really compare the two side-by-side until some type of standard is set (which probably won't come for a while).
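
As a rough illustration of what those different precisions mean in practice, here's a small self-contained C sketch that prints the approximate relative precision of each floating-point format mentioned above. The mantissa widths it uses (10 bits for FP16, 16 bits for ATI's 24-bit format, 23 bits for FP32) are the commonly quoted ones and are my assumption for this sketch, not something stated in the .plan.

[code]
#include <stdio.h>
#include <math.h>

/* Approximate relative precision of each format, taken as 2^-mantissa_bits.
   Mantissa widths below are the commonly quoted ones (assumed here). */
int main(void)
{
    struct { const char *name; int mantissa_bits; } fmt[] = {
        { "FP16 (NV3x half precision)", 10 },
        { "FP24 (R300)",                16 },
        { "FP32 (NV3x full precision)", 23 },
    };
    int i;

    for (i = 0; i < 3; i++)
        printf("%-28s smallest relative step ~ %g\n",
               fmt[i].name, pow(2.0, -fmt[i].mantissa_bits));

    /* Int12/FX12 is fixed point rather than floating point: its step size
       is constant across its whole range instead of scaling with the value,
       so error builds up fastest on small intermediate results. */
    return 0;
}
[/code]

Half precision gives you roughly three decimal digits per operation while FP32 gives you about seven, which is the gap Carmack is pointing at when he notes that nVidia's fragment programs can run at a higher precision than ATI's and pay for it in speed.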
Zenikase is offline   Reply With Quote
Old 06-02-03, 10:04 PM   #149
muzz
 
muzz's Avatar
 
Join Date: Feb 2003
Posts: 816
Default

Yep.

The only problem is the "STANDARD" won't be good enough, or something else foolish like that... and all the benchies will be won by the almighty $.
__________________
muzz
muzz is offline   Reply With Quote
Old 06-02-03, 10:08 PM   #150
jbirney
Registered User
 
jbirney's Avatar
 
Join Date: Jul 2002
Posts: 1,430
Unhappy

This is a really sad day.

It goes to show you that you can lower IQ to cheat in benchmarks (notice they did not say anything about the clip planes or z-buffer errors) and get away with it, so long as you're the biggest company. What I find even sadder is that a future version of 3DMark may allow 12/16-bit precision when the DX9 spec calls for 24-bit. It went from bad to worse, and now to just downright pitiful.
jbirney is offline   Reply With Quote
Old 06-02-03, 10:10 PM   #151
Ady
...
 
Ady's Avatar
 
Join Date: Nov 2002
Location: Australia
Posts: 502
Default

Quote:
Originally posted by Neural
Part of me would like to see nVidia take the Intel approach: if their architecture is so different that it needs special instructions, be up front about it and allow other companies to license them.
Wouldn't this just be bad for us all round by slowing down the industry? I thought we needed some solid standards for technology to move along at an acceptable rate. I thought this was one of the negative aspects of the PS2.
__________________
Dying is not going to kill me.
Ady is offline   Reply With Quote
Old 06-02-03, 10:13 PM   #152
Zenikase
Registered User
 
Join Date: May 2003
Posts: 99
Default

The folks at Microsoft in charge of DirectX should've been more specific about how precise certain functions of the fragment shader need to be, instead of leaving it up in the air and consequently creating a huge mess with compatibility and performance.
Zenikase is offline   Reply With Quote

Old 06-02-03, 10:16 PM   #153
muzz
 
muzz's Avatar
 
Join Date: Feb 2003
Posts: 816
Default

They were too worried about folks who cheat with their software (aka piracy) to consider that an IHV would try to circumvent, or interpret to the best of its hardware's ability, what they laid out.

Heard the word "loopholes" before?
A lot of crooks get off because of those.
__________________
muzz
muzz is offline   Reply With Quote
Old 06-02-03, 10:17 PM   #154
Neural
Registered User
 
Join Date: Jun 2003
Posts: 9
Default

Quote:
Originally posted by Ady
Wouldn't this just be bad for us all round by slowing down the industry? I thought we needed some solid standards for technology to move along at an acceptable rate. I thought this was one of the negative aspects of the PS2.
Yes, it could potentially be bad. I said that earlier, in the part in bold >>

Quote:
Originally posted by Neural
So if nVidia published their optimizations, that would clear a lot of this up, wouldn't it? Hmm, I hadn't thought of that. I think it's something they should think about.

It actually swings both ways. Intel makes sure most of the benchmarks are optimized, not because they care what we the consumers do with the benchmarks, but because many if not all of the manufacturers/distributors use them to decide what they will buy and sell. The catch is that SSE instructions only show up in the software we actually use months or even years after they appear in benchmarks, because mainstream software is written for today's hardware and goes obsolete sooner, whereas benchmarks are more forward looking. Part of me would like to see nVidia take the Intel approach: if their architecture is so different that it needs special instructions, be up front about it and allow other companies to license them. The other part of me is afraid of this, because of how Intel uses their instructions, and the other companies' inability to keep up with ever-changing SSE support, to bully the market. If ATi ends up in AMD's position, we could all be in trouble.
Neural is offline   Reply With Quote
Old 06-02-03, 10:20 PM   #155
Ady
...
 
Ady's Avatar
 
Join Date: Nov 2002
Location: Australia
Posts: 502
Default

Quote:
Originally posted by Neural
Yes, it could potentially be bad. I said that earlier, in the part in bold >>
Yeah, I did realise that. Not trying to pick on you, mate. Just discussing.
__________________
Dying is not going to kill me.
Ady is offline   Reply With Quote
Old 06-02-03, 10:24 PM   #156
digitalwanderer
 
digitalwanderer's Avatar
 
Join Date: Jul 2002
Location: Highland, IN USA
Posts: 4,944
Thumbs down On re-reading, something else bothers me about the statement.

Quote:
Joint NVIDIA-Futuremark Statement
Both NVIDIA and Futuremark want to define clear rules with the industry about how benchmarks should be developed and how they should be used. We believe that common rules will prevent these types of unfortunate situations moving forward.
How many people here think these two companies should be defining the rules of benchmarking? Let's have a show of hands...
__________________
[SIZE=1][I]"It was very important to us that NVIDIA did not know exactly where to aim. As a result they seem to have over-engineered in some aspects creating a power-hungry monster which is going to be very expensive for them to manufacture. We have a beautifully balanced piece of hardware that beats them on pure performance, cost, scalability, future mobile relevance, etc. That's all because they didn't know what to aim at."
-R.Huddy[/I] [/SIZE]
digitalwanderer is offline   Reply With Quote