Old 05-25-03, 11:00 AM   #13
evilangel
Registered User
 
Join Date: May 2003
Posts: 15
Default

Quote:
Originally posted by silence
Well... my point was more like: did NV have in mind something like making 3DMark useless? Because if they did, then they'll stick with their story and they achieved their goal. It's bad PR and even worse business politics, but now you can't trust 3DMark scores, and that's something NV wants all the way.

That's actually a good point. If Nvidia proves it's easy to cheat but doesn't admit it, it may work the other way and hurt FutureMark. It may make enough people paranoid about what's what that they say screw it to synthetic benches altogether. Although FutureMark patched the bench, what's to say Nvidia won't just do it again in a smarter way?

Also this could be where Nvidia might want to create their own standards, like Microsoft. This is where people get bent out of shape. Some people say stick to the standards that are in place. If Nvidia wants to create a different way of doing things that benefits the games being played on their cards, doesn't bother me. If you don't like it, don't buy an Nvidia card.

Personally, I'm wondering if there's just bad blood between FutureMark and Nvidia at this point, simply because Nvidia didn't pay the couple of hundred thousand dollars to participate in the program.

I think new games coming out should just have a playable demo you can download a few months before release, one that incorporates native benchmarking, and that's that. This way you'd know how the game is going to run on your card before it comes out.
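Even a bare-bones timedemo mode would do the job; something like this rough sketch, where renderFrame() and the frame count are just placeholders standing in for the game's real demo playback:

#include <chrono>
#include <cstdio>

// Stand-in for the game's real per-frame work during demo playback.
void renderFrame()
{
    volatile double sink = 0.0;
    for (int i = 0; i < 100000; ++i)
        sink += i * 0.5;            // dummy workload so the loop takes time
}

int main()
{
    const int frames = 500;         // length of the timedemo run
    auto start = std::chrono::steady_clock::now();

    for (int i = 0; i < frames; ++i)
        renderFrame();

    std::chrono::duration<double> elapsed =
        std::chrono::steady_clock::now() - start;
    std::printf("%d frames in %.2f s = %.1f fps\n",
                frames, elapsed.count(), frames / elapsed.count());
    return 0;
}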

The whole 3DMark thing just never appealed to me; it seems like it's more of a sport now than what it was intended to be.

Last edited by evilangel; 05-27-03 at 04:24 PM.
evilangel is offline   Reply With Quote
Old 05-26-03, 09:37 AM   #14
Morrow
Atari STE 4-bit color
 
Morrow's Avatar
 
Join Date: May 2003
Posts: 798
Default

Quote:
Originally posted by evilangel
Although FutureMark patched the bench, what's to say Nvidia won't just do it again in a smarter way?
Why should they do it in a smarter way if they deliberately want the cheats to be found? Maybe this is really their goal: to screw FutureMark by continuously releasing cheats in their drivers to synthetically increase the score for their cards.

I mean, 3dmark03 is obviously an unfair benchmark. FM wants to bench standard performance, but games use optimized paths, which is where nvidia hardware shines (like with SSE2 optimizations on the P4: the P4 is unbeatable with SSE-optimized instructions, and that's the reason AMD was so interested in integrating SSE2 into their Hammer CPUs). The R3x0 has more raw power, which favors the Radeons in 3dmark03.
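To make the SSE2 comparison concrete, here is a rough sketch of the same loop written once as plain C++ and once with SSE2 intrinsics; the function names and array handling are just for illustration, not taken from any benchmark:

#include <emmintrin.h>   // SSE2 intrinsics
#include <cstdio>

// Plain version: one double-precision add per iteration.
void add_scalar(const double* a, const double* b, double* out, int n)
{
    for (int i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}

// SSE2 version: two adds per instruction (n assumed to be a multiple of 2).
void add_sse2(const double* a, const double* b, double* out, int n)
{
    for (int i = 0; i < n; i += 2) {
        __m128d va = _mm_loadu_pd(a + i);
        __m128d vb = _mm_loadu_pd(b + i);
        _mm_storeu_pd(out + i, _mm_add_pd(va, vb));
    }
}

int main()
{
    double a[4] = { 1, 2, 3, 4 }, b[4] = { 10, 20, 30, 40 }, out[4];
    add_sse2(a, b, out, 4);
    std::printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}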

Adding to that, the fact that the Radeons use the faster FP24 mode, in contrast to the much slower but higher-IQ FP32 mode of the GeForces, doesn't say anything positive about the fairness of this bench. When nvidia used FP16 for 3dmark03 they were called cheaters because they used lower precision. Now that nvidia is using FP32 and ATI is still using the faster FP24, because it's the only mode they support, does that make ATI cheaters too, since they are using a lower precision than nvidia?
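For reference, the bit layouts usually quoted for these formats are FP16 = 1 sign / 5 exponent / 10 mantissa bits, FP24 on the R3x0 = 1/7/16, and FP32 = 1/8/23; the relative step each one can resolve is roughly 2^-mantissa. A quick sketch, assuming those layouts:

#include <cstdio>
#include <cmath>

int main()
{
    // Mantissa widths commonly quoted for these shader formats.
    const struct { const char* name; int mantissaBits; } fmt[] = {
        { "FP16", 10 },   // 1 sign, 5 exponent, 10 mantissa
        { "FP24", 16 },   // 1 sign, 7 exponent, 16 mantissa (R3x0)
        { "FP32", 23 },   // 1 sign, 8 exponent, 23 mantissa (IEEE single)
    };
    for (const auto& f : fmt)
        std::printf("%s: relative step ~ 2^-%d = %g\n",
                    f.name, f.mantissaBits, std::pow(2.0, -f.mantissaBits));
    return 0;
}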

Hopefully you have noticed by now that 3dmark03 cannot possibly be a fair benchmark. The two biggest current architectures (R3x0 and nv3x) are simply too different for one to exist. Synthetic benchmarks end here...

Last edited by Morrow; 05-26-03 at 09:51 AM.
Morrow is offline   Reply With Quote
Old 05-26-03, 02:13 PM   #15
evilangel
Registered User
 
Join Date: May 2003
Posts: 15
Default

Quote:
Originally posted by Morrow
Why should they do it in a smarter way if they deliberately want the cheats to be found? Maybe this is really their goal: to screw FutureMark by continuously releasing cheats in their drivers to synthetically increase the score for their cards.

I mean, 3dmark03 is obviously an unfair benchmark. FM wants to bench standard performance, but games use optimized paths, which is where nvidia hardware shines (like with SSE2 optimizations on the P4: the P4 is unbeatable with SSE-optimized instructions, and that's the reason AMD was so interested in integrating SSE2 into their Hammer CPUs). The R3x0 has more raw power, which favors the Radeons in 3dmark03.

Adding to that, the fact that the Radeons use the faster FP24 mode, in contrast to the much slower but higher-IQ FP32 mode of the GeForces, doesn't say anything positive about the fairness of this bench. When nvidia used FP16 for 3dmark03 they were called cheaters because they used lower precision. Now that nvidia is using FP32 and ATI is still using the faster FP24, because it's the only mode they support, does that make ATI cheaters too, since they are using a lower precision than nvidia?

Hopefully you have noticed by now that 3dmark03 cannot possibly be a fair benchmark. The two biggest current architectures (R3x0 and nv3x) are simply too different for one to exist. Synthetic benchmarks end here...
Good points. We agree that we don't like 3DMark and that it's useless.
I think you will see ATI and Nvidia growing more apart and doing things their own way.

Last edited by evilangel; 05-27-03 at 04:25 PM.
evilangel is offline   Reply With Quote
Old 05-26-03, 02:47 PM   #16
SlyBoots
Registered User
 
SlyBoots's Avatar
 
Join Date: Jul 2002
Location: La Grande, OR
Posts: 339
Lightbulb

Quote:
Originally posted by evilangel
Good points. We agree that we don't like 3DMark and that it's useless.
I think you will see ATI and Nvidia growing more apart and doing things their own way.
In reference to Kyle's comment that 3dmark03 was useless, Kristof wrote:

"Note how he talks about the overall score of 3DMark, I think everybody agrees that the total score as reported by 3DMark has no real value - the detail scores however are very valuable and there is no way to possibly claim that they are not useful. Afterall when do you expect that a game will come out with the shader load available in 3DMark today and decent benchmarking functionality ? They keep talking about Quake3, which for todays graphics hardware is pretty much turning into a CPU test (how low can your driver overhead go ?).

In a real game, how is he going to check each shader? Can he run every game using the ref rast so he can judge whether the correct accuracy is being maintained and not some hacked, similar-looking shader?

All in all, 3DMark is a very valuable test set. The score is just a number which indicates that higher is better; it satisfies the most basic user who only cares to check whether his system is performing roughly as it should, and it also satisfies the tweak freaks so they can battle for the highest scores."
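Checking whether a card's output really matches the reference rasterizer, the way Kristof describes, comes down to a per-pixel comparison of two captures. A bare-bones sketch, assuming raw 8-bit RGB buffers of equal size and an arbitrarily chosen tolerance:

#include <cstdio>
#include <cstdlib>
#include <vector>

// Count pixels whose colour differs from the reference by more than 'tol'
// on any channel. 'ref' and 'test' are raw RGB8 buffers of the same size.
int countDifferingPixels(const std::vector<unsigned char>& ref,
                         const std::vector<unsigned char>& test,
                         int tol)
{
    int bad = 0;
    for (size_t i = 0; i + 2 < ref.size(); i += 3) {
        if (std::abs(int(ref[i])   - int(test[i]))   > tol ||
            std::abs(int(ref[i+1]) - int(test[i+1])) > tol ||
            std::abs(int(ref[i+2]) - int(test[i+2])) > tol)
            ++bad;
    }
    return bad;
}

int main()
{
    // Tiny stand-in images: in practice these would be screenshots from
    // the reference rasterizer and from the card under test.
    std::vector<unsigned char> ref  = { 10, 20, 30,  200, 200, 200 };
    std::vector<unsigned char> test = { 10, 21, 30,  180, 200, 200 };
    std::printf("%d of %zu pixels differ beyond tolerance\n",
                countDifferingPixels(ref, test, 4), ref.size() / 3);
    return 0;
}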
SlyBoots is offline   Reply With Quote
Old 05-26-03, 09:31 PM   #17
StealthHawk
Guest
 
Posts: n/a
Default

Quote:
Originally posted by Morrow
Why should they do it in a smarter way if they deliberately want the cheats to be found? Maybe this is really their goal: to screw FutureMark by continuously releasing cheats in their drivers to synthetically increase the score for their cards.
Along with making Futuremark and 3dmark03 look bad, nvidia has also raised the possibility of cheating in EVERY benchmark (including game benchmarks), as well as cheating in the shader programs used in games.

Are we really supposed to be reassured by this?

Quote:
Adding to that, the fact that the Radeons use the faster FP24 mode, in contrast to the much slower but higher-IQ FP32 mode of the GeForces, doesn't say anything positive about the fairness of this bench. When nvidia used FP16 for 3dmark03 they were called cheaters because they used lower precision. Now that nvidia is using FP32 and ATI is still using the faster FP24, because it's the only mode they support, does that make ATI cheaters too, since they are using a lower precision than nvidia?
Sigh. Look at a real-world example. John Carmack said that the NV30 ran the ARB2 path 50% slower than the R300 did. The NV30 is using FP32, and the R300 is using FP24. Now ask yourself why a 33% increase in precision would drop performance by 50%.

You also disregard some facts, such as FP24 being the minimum precision required by DX9. No one told nvidia to support FP32. No one made nvidia not support FP24 AND FP32. Dropping down to FP16 (which nvidia did in 3dmark03) means that nvidia is cheating, because they are no longer within the DX9 spec. Sorry, nvidia has no one to blame but themselves.
  Reply With Quote
Old 05-27-03, 01:23 AM   #18
StealthHawk
Guest
 
Posts: n/a
Default

Quote:
Originally posted by Morrow
I mean, 3dmark03 is obviously an unfair benchmark. FM wants to bench standard performance, but games use optimized paths, which is where nvidia hardware shines (like with SSE2 optimizations on the P4: the P4 is unbeatable with SSE-optimized instructions, and that's the reason AMD was so interested in integrating SSE2 into their Hammer CPUs). The R3x0 has more raw power, which favors the Radeons in 3dmark03.
And from a developer's perspective:
Quote:
I find it interesting that Futuremark works with ATI and Nvidia to make the benchmark run as fast as possible on specific hardware. Game developers don't have the luxury of doing this. Publishers want them to finish the game as quickly as possible, and most of them don't have months to spend tweaking code paths for specific cards, especially when new cards are coming out every few months.

I think Nvidia saying that Futuremark is out to get them is nonsense. An application should run faster on your hardware than on your competition's, regardless of how it was programmed. Game developers aren't going to design games that match your hardware's abilities exactly, in terms of the number of textures used, shader instructions, etc. Picking apart the techniques used in 3DMark is rather pointless. You might as well pick at every game and complain about how they didn't optimize for your card.

3DMark should be used by Nvidia to tell where their card needs improvement, not as an advertisement to make their cards look good.
http://www.beyond3d.com/forum/viewtopic.php?t=6087
  Reply With Quote
Old 05-27-03, 01:30 AM   #19
Sazar
Sayonara !!!
 
Sazar's Avatar
 
Join Date: Jan 2003
Location: Austin, Texas
Posts: 9,297
Default

Quote:
Originally posted by StealthHawk
Along with making Futuremark and 3dmark03 look bad, nvidia has also raised the possibility of cheating in EVERY benchmark (including game benchmarks), as well as cheating in the shader programs used in games.

Are we really supposed to be reassured by this?

Sigh. Look at a real-world example. John Carmack said that the NV30 ran the ARB2 path 50% slower than the R300 did. The NV30 is using FP32, and the R300 is using FP24. Now ask yourself why a 33% increase in precision would drop performance by 50%.

You also disregard some facts, such as FP24 being the minimum precision required by DX9. No one told nvidia to support FP32. No one made nvidia not support FP24 AND FP32. Dropping down to FP16 (which nvidia did in 3dmark03) means that nvidia is cheating, because they are no longer within the DX9 spec. Sorry, nvidia has no one to blame but themselves.
Those are pretty much my thoughts on this matter.

The pixel shader/vertex shader performance is lower than one should expect... and IMO it should be questioned and FIXED by nvidia before they decide to tape out other GPUs... better PS/VS performance == ATI will have to work harder as well, since they'd have to compete == consumers win...
Sazar is offline   Reply With Quote
Old 05-27-03, 04:05 AM   #20
Morrow
Atari STE 4-bit color
 
Morrow's Avatar
 
Join Date: May 2003
Posts: 798
Default

Quote:
Originally posted by StealthHawk
Sigh. Look at a real world example. John Carmack said that NV30 ran the ARB2 path 50% slower than an R300 did. NV30 is using FP32, and R300 is using FP24. Now ask yourself why a 33% increase in precision would drop performance by 50%?
Did you read my post?

I said optimizations are the area where nvidia hardware shines, not standard instructions (like ARB2)! We all know the nv3x is slow in ARB2, but we also all know that Doom3 won't use ARB2 to render the graphics on nv3x hardware. So what is the problem, besides 3dmark03 using standard code?
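Choosing a back end per vendor is conceptually as simple as the rough sketch below; the vendor substrings are the usual ones reported by the drivers, while the path names are made up for illustration and not taken from any actual engine:

#include <cstdio>
#include <cstring>

enum RenderPath { PATH_ARB2, PATH_NV3X, PATH_R200 };

// Pick a render back end from the vendor string. In a real engine the
// string would come from glGetString(GL_VENDOR) or the D3D adapter info.
RenderPath pickPath(const char* vendor)
{
    if (std::strstr(vendor, "NVIDIA"))
        return PATH_NV3X;   // vendor-specific path
    if (std::strstr(vendor, "ATI"))
        return PATH_R200;   // hypothetical ATI-specific path
    return PATH_ARB2;       // generic standard path
}

int main()
{
    std::printf("chosen path id: %d\n", pickPath("NVIDIA Corporation"));
    return 0;
}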

Isn't it obvious? The nv3x is faster in real-life games (except in Splinter Cell) and the Radeons are faster in 3dmark03. So what do you want to play: games, or 3dmark?

Anyway, if 16-bit color (65,536 colors) is twice the precision of 8-bit (256 colors), then yes, in that case FP32 is 33% more precision than FP24. Just why, then, does FP32 have 256 times more colors available for rendering than FP24? It's certainly only for marketing, like 32-bit colors in current games...
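Spelling that arithmetic out (and treating every extra bit as a doubling of the representable values, which glosses over how a floating-point format actually spends its bits on sign, exponent and mantissa):

#include <cstdio>

int main()
{
    // 24 -> 32 bits is a 33% increase in bit count...
    std::printf("extra bits: %.0f%%\n", (32.0 - 24.0) / 24.0 * 100.0);

    // ...but each extra bit doubles the number of representable values,
    // so 2^32 / 2^24 = 2^8 = 256 times as many values per channel.
    std::printf("value ratio: %.0f\n", 4294967296.0 / 16777216.0);
    return 0;
}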
Morrow is offline   Reply With Quote

Old 05-27-03, 08:36 AM   #21
Hanners
Elite Bastard
 
Hanners's Avatar
 
Join Date: Jan 2003
Posts: 984
Default

Quote:
Originally posted by Morrow
Did you read my post?

I said optimizations are the area where nvidia hardware shines, not standard instructions (like ARB2)! We all know the nv3x is slow in ARB2, but we also all know that Doom3 won't use ARB2 to render the graphics on nv3x hardware. So what is the problem, besides 3dmark03 using standard code?
I think StealthHawk's point is that most games aren't optimised for particular hardware because developers simply don't have the time to do so, as shown in the quote StealthHawk posted from the developer over at Beyond3D.
__________________
Owner / Editor-in-Chief - Elite Bastards
Hanners is offline   Reply With Quote
Old 05-27-03, 03:43 PM   #22
StealthHawk
Guest
 
Posts: n/a
Default

Quote:
Originally posted by Morrow
Did you read my post?

I said optimizations are the area where nvidia hardware shines, not standard instructions (like ARB2)! We all know the nv3x is slow in ARB2, but we also all know that Doom3 won't use ARB2 to render the graphics on nv3x hardware. So what is the problem, besides 3dmark03 using standard code?

Isn't it obvious? The nv3x is faster in real-life games (except in Splinter Cell) and the Radeons are faster in 3dmark03. So what do you want to play: games, or 3dmark?
No, that's not what you said at all. You said "games use optimized paths where nvidia hardware shines." You also bring up AMD vs. Intel, so you are implying that most games will have specific paths optimized for each card.

This is not true of most games. Sure, someone like Carmack has the ability, time, and money to finish a game engine to his liking. Most devs don't have that luxury.

Quote:
Just why does FP32 have 256 times more colors available for rendering than FP24?
I'm not sure if that number is correct, so I won't comment on it.

Quote:
It's certainly only for marketing like 32-bit colors in current games
Now you confuse me. Are you saying that 32-bit color in games is unnecessary? There is a vast difference between 16-bit and 32-bit color in games today. There wasn't back when 32-bit color was introduced, because the art was not tuned for it.
  Reply With Quote
Old 05-27-03, 04:28 PM   #23
evilangel
Registered User
 
Join Date: May 2003
Posts: 15
Default

I think you'll see more and more games optimized for either ATI or Nvidia in the future. Whoever sells more cards and has more market share will have more games optimized for them, just like with consoles. I think standards are going to go down the toilet.
evilangel is offline   Reply With Quote
Old 05-27-03, 05:52 PM   #24
Morrow
Atari STE 4-bit color
 
Morrow's Avatar
 
Join Date: May 2003
Posts: 798
Default

Quote:
Originally posted by StealthHawk
You also disregard some facts, such as FP24 being the minimum precision required by DX9. No one told nvidia to support FP32. No one made nvidia not support FP24 AND FP32. Dropping down to FP16 (which nvidia did in 3dmark03) means that nvidia is cheating, because they are no longer within the DX9 spec. Sorry, nvidia has no one to blame but themselves.
FP24 minimum for DX9? Please explain the following quote:

"DX9 and ARB_fragment_program assume 32 bit float operation, and ATI just converts everything to 24 bit."

Isn't that exactly what I was saying, namely that ATI is rendering at a lower precision than they are supposed to, the precision "assumed by DX9"? So my question whether ATI is also cheating, since they are using the faster FP24 mode while nvidia uses the slower but higher-precision FP32 mode, was not that wrong to ask after all.

In fact, I see a conspiracy arising here, which basically suggests that ATI deliberately added only the FP24 mode to their R3x0 core so no one could force them to use another FP mode for DX9 compatibility. Nvidia, on the contrary, wanted to add flexibility to their nv3x architecture but is now forced by MS to default to FP32 if not requested otherwise by the game/program. Of course, speed-wise FP32 can't keep up with FP24, even though it offers better IQ...
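What "converting everything to 24 bit" costs numerically can be sketched by chopping a 32-bit float's mantissa down to 16 bits; this assumes the usual 1 sign / 7 exponent / 16 mantissa layout quoted for FP24 and uses naive truncation, where real hardware would round and handle the exponent range differently:

#include <cstdio>
#include <cstring>
#include <cstdint>

// Crude simulation of FP24: keep 16 of the 23 mantissa bits of an IEEE
// single, zeroing the lowest 7. The exponent is left untouched.
float toFp24(float x)
{
    std::uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits);
    bits &= ~0x7Fu;                    // drop the low 7 mantissa bits
    std::memcpy(&x, &bits, sizeof bits);
    return x;
}

int main()
{
    float v = 0.123456789f;
    std::printf("FP32: %.9f\nFP24: %.9f\n", v, toFp24(v));
    return 0;
}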

BTW, the quote above is from none other than John Carmack himself, posted today at slashdot.org.

Last edited by Morrow; 05-27-03 at 06:02 PM.
Morrow is offline   Reply With Quote