Old 06-03-03, 01:01 PM   #265
SmuvMoney
Registered User
 
Join Date: Feb 2003
Location: Chicago, IL
Posts: 147
06/02/2003 - The Day The Benchmark Died...

Am I the only one feeling this way after reading the FM/nV statement?

June 2, 2003 - The Day The Benchmark Died...

IMHO, nVidia's lawyers said to FM's lawyers, "Remember 3DFX?" and it just went downhill from there. Like 3DFX, FM had (the potential for) a valid gripe/complaint/lawsuit. Unfortunately, FM didn't have enough cash to legally box with nV for years to come - I think they have a staff of 26 employees or something like that. I guess FM didn't want to go out like 3DFX, i.e., assimilation/destruction, so FM did what it could to keep itself alive. Unfortunately, FM's immediate, short-term goal of survival may have opened Pandora's Box when it comes to any and all benchmarks, be they synthetic or in-game.

I mean, how many game benchmarks are going to have dev tools like 3DMark03's? I'd rather have the devs working on the GAME ITSELF to improve the actual game experience. That being said, what will stop IHVs from cheating in any game benchmark, especially if the dev can't provide decent benchmark tools like FM did? Like FM or not, the benchmark attempted to create a level playing field for comparing video cards, even if its methods were not the absolute cleanest or best. With this statement, it looks like you can bend any benchmark to your will. Official/formal/review benchmarking as I see it now is completely boned...
__________________
Peace & God Bless,

$muvMoney
John 14:27 & Numbers 6:24

A64 3500+ 939 | Epox 9NDA3+ | 1 GB RAM | X800XT PE | Audigy2 ZS | 120 GB HD
3com 3C905C-TX | USR 56K | Thermaltake Tsumani | Silent Purepower 410W PSU
Old 06-03-03, 01:16 PM   #266
DivotMaker
 
Join Date: Jul 2002
Posts: 823

Quote:
Originally posted by BigBerthaEA
To my knowledge, ATI has complied with the DX9 standard. nVidia has not, at least from the shader perspective.
Quote:
Originally posted by Hellbinder
I want to point out again that FP32 is not any harder to do than FP24 *if* you have the hardware in place to handle the load.
I see your point; incorrect wording on my part. What I meant to say is that, for whatever reason, nVidia did not allow for FP24 in their hardware. I don't know the "why, when, where, or how" of this, but if nVidia did have FP24, then this entire issue might have been avoided. I think FP32 looked glamorous at the time it was implemented, but I don't think enough was known at the time about:

a) how much raw power was going to be required to make FP32 a true advantage over FP24

b) how competitive R3xx would be at FP24

From what I am hearing, FP24 and FP32 are overkill for the games coming up this year, and possibly even next year, with respect to the sheer horsepower needed to run those features as developers intend them to be run. I guess this can be chalked up to the incessant competition and one-upmanship of today's graphics market. At the end of the day, I feel this situation, as it stands, is far more confusing to the average consumer than it ever should be. Hell, if we don't know what to make of it, how is the average Joe Q. Public going to make an informed decision?

Last edited by BigBerthaEA; 06-03-03 at 01:19 PM.
Old 06-03-03, 01:18 PM   #267
DivotMaker
 
Join Date: Jul 2002
Posts: 823

Quote:
Originally posted by gordon151
Hellbinder's comments making sense and offering unbiased, useful information. Hrm, that's not good...
Actually it is VERY good...

I have to say that I respect what he has said and how he has handled himself here recently. He and I had a PM chat and I think we understand each other much better.
Old 06-03-03, 01:37 PM   #268
frenchy2k1
Registered User
 
Join Date: Aug 2002
Location: San Jose, CA
Posts: 449
Re: 06/02/2003 - The Day The Benchmark Died...

Quote:
Originally posted by SmuvMoney
Am I only one feeling this way after reading the FM/nV statement?

June 2, 2003 - The Day The Benchmark Died...
Actually, this is probably not such a bad thing. Now, instead of just spending our days running useless benchmarks to compete for bragging rights, we could return to playing games...

Seriously, as stated in the PR, all the other benchmarks use processor-specific optimisations. Graphics chips are getting closer and closer to general-purpose processors, with shaders used instead of fixed-function processing. Intel wins most benchmarks because they use their SSE2 extensions; do you see AMD complaining? AMD won in FP before thanks to their architecture. Graphics is just going the same way.
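To make that concrete, here is a minimal C sketch of what a processor-specific fast path looks like on the CPU side; the function names are invented for illustration, and it assumes a build with SSE2 enabled (e.g. gcc -msse2):

[code]
/* Minimal sketch of a processor-specific fast path: the same routine,
 * once in portable C and once using Intel's SSE2 intrinsics. */
#include <stdio.h>
#include <emmintrin.h>  /* SSE2 intrinsics; build with -msse2 */

/* Portable fallback that runs on any CPU. */
static void add_plain(const double *a, const double *b, double *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}

/* SSE2 path: processes two doubles per instruction. */
static void add_sse2(const double *a, const double *b, double *out, int n)
{
    int i = 0;
    for (; i + 2 <= n; i += 2)
        _mm_storeu_pd(out + i,
                      _mm_add_pd(_mm_loadu_pd(a + i), _mm_loadu_pd(b + i)));
    add_plain(a + i, b + i, out + i, n - i);  /* handle any odd element */
}

int main(void)
{
    double a[4] = { 1, 2, 3, 4 }, b[4] = { 4, 3, 2, 1 }, out[4];
    add_sse2(a, b, out, 4);  /* the "tuned for this vendor" path */
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}
[/code]

A benchmark built around the SSE2 path will naturally favour the CPUs that execute it well, which is exactly the dynamic described above.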

And don't talk about 3DFX. They were the first to optimize this way: does nobody remember the miniGL drivers? What do you think those were?

After all, if nVidia spends time optimizing their drivers for popular games, more power to them. Sure, they look better in the UT2K3 flyby, but the game benefits too. 3DMark is only there for bragging rights! (and so is Quake3 now, with its 300+ fps...)
__________________
As the universe is curved, there cannot be a straight answer...
Old 06-03-03, 01:38 PM   #269
Hellbinder
 
Join Date: Nov 2002
Location: CDA
Posts: 1,510

Quote:
From what I am hearing, FP24 and FP32 are overkill for the games coming up this year and possibly even next year with respect to the amount of sheer horsepower to run those features as developers intend for them to be run. I guess this can be chalked up to the incessant competition and "one-up-manship" of competing in today's graphics market. At the end of the day, I feel this situation, as it is today, is far more confusing to the average consumer than it ever should be. Hell, if we don't know what to make of it, how is the average Joe Q. Public going to make and informed decision?
This is the problem.

I hate to point this out, but precision has nothing to do with the horsepower required to run full DX9. I notice very often that people here have a slightly quirky view of what the real issues are, based mostly, imo, on Nvidia PR statements.

ATi can run every application, now or in the future, at full 24-bit float with no performance loss. Why? Because they have 8 fully supported FP24 pipelines, built from the ground up to use *only* FP24. No more, no less. Where slowdown comes in is in shader instruction processing and all the extra work that may require multiple passes. FP24 and FP32 color have literally no overhead cost except for transistors: you put the transistors on the chip to handle it, and that's all that is required. This is not a bandwidth issue or a fillrate issue.

Nvidia simply shot themselves in the foot with the number of 32-bit registers they have to work with. In fact, as I have been getting more informed lately on the Nvidia side of things, what they are really doing is not even dropping to FP16, as they have similar register support for FP16 and FP32. What they are really doing, in many cases, is using Cg to recompile the shaders to use FX12 (12-bit fixed point).
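To illustrate what dropping to FX12 costs, here is a small C sketch; it assumes FX12 is a signed 12-bit fixed-point format covering [-2, 2), a common description of the format, simplified rather than NVIDIA's exact hardware encoding:

[code]
/* Sketch: quantizing a full-precision value to a 12-bit fixed-point
 * format spanning [-2.0, 2.0), i.e. 4096 codes in steps of 1/1024.
 * An assumption for illustration, not the exact hardware encoding. */
#include <stdio.h>

#define FX12_STEP (1.0f / 1024.0f)  /* 4.0 of range / 4096 codes */

static float to_fx12(float x)
{
    if (x < -2.0f)             x = -2.0f;             /* clamp to range */
    if (x >  2.0f - FX12_STEP) x =  2.0f - FX12_STEP;
    return (float)(int)(x / FX12_STEP) * FX12_STEP;   /* keep 12 bits */
}

int main(void)
{
    float c = 0.7071068f;  /* a normalized-vector component, say */
    printf("full precision: %.7f\nFX12:           %.7f\n", c, to_fx12(c));
    return 0;
}
[/code]

The error looks tiny in isolation, but it compounds across shader instructions, which is why recompiling a shader to FX12 can change the rendered output while boosting speed.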
__________________
Overam Mirage 4700
3.2ghz P4 HT
SIS 748FX Chipset 800mhz FSB
1Gig DDR-400
60Gig 7200RPM HD
Radeon 9600M Turbo 128 (400/250)
Catalyst 4.2
Latest good read: [url]http://www.hardocp.com/article.html?art=NTc4LDE=[/url]
Old 06-03-03, 01:48 PM   #270
Zenikase
Registered User
 
Join Date: May 2003
Posts: 99

The fact is that even an unoptimized benchmark will always favor certain products over others, simply because of the way it is coded and compiled. Athlons and Pentium 4s have very different methods of fetching, decoding, organizing, and executing instructions. It's almost impossible to make an apples-to-apples comparison of the two given how their architectures are designed to work. Athlons feature fully-pipelined execution units and the EV6 bus, while the P4 is equipped with an extraordinarily long pipeline for scalability and the innovative trace cache, and all this without even mentioning the pipeline sequence and layout.

The same thing occurs in the video card industry. Manufacturers are continuously coming up with new ways to optimize bandwidth and efficiency, examples being the memory crossbar interface, hidden surface removal algorithms, depth/color/texture compression, and specialized caches for depth, vertices, primitives, textures, and pixels. And although both sides have been reluctant to give detailed information about their 3D pipelines, they most likely have many significant architectural differences. Thus, it is becoming more and more difficult to make fair comparisons with a single test, and if this continues, benchmarks will eventually lose all credibility.

The approach Futuremark wants to take is to create separate optimized builds or code paths for both camps (much like game developers do), each utilizing its platform's unique architecture to the fullest extent through standardized, non-vendor extensions (i.e., in OpenGL), thus allowing for more accurate, reliable comparisons. nVidia and ATi will continue to work closely with Futuremark so that both the benchmark and the hardware improve. This could potentially be a very good thing.
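A minimal C sketch of that per-vendor path selection might look like the following; in a real engine the vendor string would come from glGetString(GL_VENDOR) on an active OpenGL context, and the path names here are hypothetical:

[code]
/* Sketch: choosing a vendor-tuned rendering path the way game engines
 * of this era did. The vendor string is passed in so the sketch runs
 * standalone; a real engine would query glGetString(GL_VENDOR). */
#include <stdio.h>
#include <string.h>

typedef enum { PATH_ARB_GENERIC, PATH_NVIDIA, PATH_ATI } render_path_t;

static render_path_t pick_render_path(const char *vendor)
{
    if (vendor && strstr(vendor, "NVIDIA"))
        return PATH_NVIDIA;      /* path tuned with NV-specific extensions */
    if (vendor && strstr(vendor, "ATI"))
        return PATH_ATI;         /* path tuned with ATI-specific extensions */
    return PATH_ARB_GENERIC;     /* standardized ARB-only fallback */
}

int main(void)
{
    printf("%d\n", pick_render_path("NVIDIA Corporation"));    /* 1 */
    printf("%d\n", pick_render_path("ATI Technologies Inc.")); /* 2 */
    return 0;
}
[/code]

Of course, once each vendor gets its own path, the resulting numbers are only comparable if each path does equivalent work.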

Of course, there are some very major problems involved, the main one being the influence of money. Business is business, and the company that can give more cash will be able to throw its weight around more easily. The only way 3DMark will remain a valid benchmark is if they drop the BETA program's fee entirely, removing most fear and paranoia of sneaky corporate tactics (although this will probably never happen).

In addition, there will be endless arguments as to how precise certain functions of the benchmark should be carried out, especially when it can affect performance as much as it does now. The fact is that there is no current solution to this, because there are so many different methods of implementation and the performance range varies so wildly that there can be no "right way" of doing it. Again, the solution will come in time when we have rock solid standards that define a single precision for each type of register in the fragment shader and the manufacturers (ideally) comply.

Last edited by Zenikase; 06-04-03 at 01:05 AM.
Old 06-03-03, 02:00 PM   #271
ImaginaryFiend
Registered User
 
Join Date: Jun 2003
Location: Oregon
Posts: 9

I really don't understand why so many of you are determined to think Nvidia is evil. They're not "the man." Remember, they're not much more than a startup. They're less than one one-hundredth the size of Intel. They even have fewer employees than ATI.

Also, think about how much they've done for you. I guarantee that our game playing would not be anywhere near the quality it is without Nvidia, whether or not the card currently in your machine was made by them. They invented register combiners. They invented vertex programs. They invented cube maps in hardware. They are the only company that offers fragment programs good enough to do "cinematic shading". These are huge contributions.

As for ATI, they are just as big a company. They play all the same moves as Nvidia. They haven't contributed as much to the industry, and over the long term and for the near future, their products aren't as good (though they have been ahead lately).

Remember that ATI cheated on 3DMark, too. ATI cheats on Quake3. ATI cheats on all trilinear mipmapping and on aniso filtering.

As for PR fiascos, remember that ATI got in deep trouble with Id for leaking a Doom 3 prerelease last year. They got in big trouble with Apple for leaking specs of Apple's new machines before MacWorld. They got themselves into the Quake/Quack fiasco. They trashed the GeForce4 MX's name because it's a DirectX 7 part, and stated that ATI would always name their products clearly, with the first digit indicating the DX version. But then they misled us with the Radeon 9000 and 9200, which are DX8 parts.

I really think that neither Nvidia nor ATI are evil. They're just trying to make and sell as many of the best products they can, as expected.

Futuremark isn't evil either, but Nvidia's right that their benchmark isn't written the way games are written, making it not very useful for anyone. It makes perfect sense to me that Futuremark and Nvidia would want to resolve this dispute; it's to both their benefit and it doesn't mean that either party caved or that either party strong-armed the other. Disputes are supposed to be resolved. That's progress!

Imaginary Fiend.
Old 06-03-03, 02:09 PM   #272
Hellbinder
 
Join Date: Nov 2002
Location: CDA
Posts: 1,510

ATi did not leak Doom 3. Even Id publicly stated this.

Another case where Nvidia fansites continue to spread misinformation and at the same time try to claim that Nvidia is not evil..

I also have *factual* issues with nearly every sentence posted in the post above...

Example
Quote:
They invented register combiners. They invented vertex programs. They invented cube maps in hardware. They are the only company that offers fragment programs good enough to do "cinematic shading". These are huge contributions.
None of the above is true at all. And that's just to start.

And you guys wonder why i resort to flaming on occasion...
__________________
Overam Mirage 4700
3.2ghz P4 HT
SIS 748FX Chipset 800mhz FSB
1Gig DDR-400
60Gig 7200RPM HD
Radeon 9600M Turbo 128 (400/250)
Catalyst 4.2
Latest good read: [url]http://www.hardocp.com/article.html?art=NTc4LDE=[/url]

Old 06-03-03, 02:40 PM   #273
CtB
Registered User
 
Join Date: Jan 2003
Posts: 11

For all the people talking about Intel optimizing for its architecture and Nvidia doing the same, imagine if your CPU started doing only half the work sent to it...

Of course, most of the people with such arguments probably wouldn't know a CPU from a hard drive if it hit them in the face...
Old 06-03-03, 02:45 PM   #274
Zenikase
Registered User
 
Join Date: May 2003
Posts: 99

Intel can only optimize in a general-purpose way, since they themselves have no actual control over how well their CPUs will perform in a given application (unless they are collaborating with that application's developer to improve software support).

nVidia's case is different: because the shader compiler lives in their driver, between the application and the hardware, the driver can detect a specific application and modify the submitted code to reduce the workload.
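As a rough sketch of that mechanism (everything below is invented for illustration, not taken from any actual driver), the detection can be as simple as a string match:

[code]
/* Sketch of application detection inside a driver's shader compiler:
 * if the executable and submitted shader match a table entry, a
 * cheaper hand-written replacement is substituted silently.
 * All names and shader identifiers here are invented. */
#include <stdio.h>
#include <string.h>

struct substitution {
    const char *app_exe;      /* application to detect */
    const char *submitted;    /* shader the application sent */
    const char *replacement;  /* cheaper hand-tuned stand-in */
};

static const struct substitution table[] = {
    { "3dmark03.exe", "water_shader_full", "water_shader_cheap" },
};

static const char *driver_compile(const char *app, const char *shader)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(app, table[i].app_exe) == 0 &&
            strcmp(shader, table[i].submitted) == 0)
            return table[i].replacement;  /* detected: swap the workload */
    return shader;                        /* everyone else does real work */
}

int main(void)
{
    printf("benchmark gets: %s\n",
           driver_compile("3dmark03.exe", "water_shader_full"));
    printf("game gets:      %s\n",
           driver_compile("somegame.exe", "water_shader_full"));
    return 0;
}
[/code]

Detection along these lines is why tricks like renaming an executable, or moving a benchmark's camera off its scripted path, could make such "optimizations" vanish.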
Old 06-03-03, 03:02 PM   #275
ImaginaryFiend
Registered User
 
Join Date: Jun 2003
Location: Oregon
Posts: 9

Quote:
Originally posted by Hellbinder
ATi did not leak Doom 3. Even Id publicly stated this.

Another case where Nvidia fansites continue to spread misinformation and at the same time try to claim that Nvidia is not evil...

I also have *factual* issues with nearly every sentence posted in the post above...

Example
They invented register combiners. They invented vertex programs. They invented cube maps in hardware. They are the only company that offers fragment programs good enough to do "cinematic shading".

None of the above is True at all. And thats just to start.

And you guys wonder why i resort to flaming on occasion...


Sounds like you're right about the Id thing. I'd never heard that.

I don't know where you're getting your information, but here's one of Nvidia's several vertex program patents:

Quote:
Method, apparatus and article of manufacture for a sequencer in a transform/lighting module capable of processing multiple independent execution threads
Remember, vertex programs first came out in the GeForce3, six months before the Radeon 8500.

Here's Nvidia's register combiners patent. It was first used in the GeForce 256 in August '99. The Radeon didn't come out until May 2000.
Quote:
Graphics pipeline including combiner stages
Cube environment maps were in this same generation. They are in DX7 because Nvidia put them in hardware.

Here's Nvidia's first pixel shader patent. The reason I think they are the only ones capable of cinematic shading is that ATI can only do 64-instruction programs, while Nvidia can do at least 1024 instructions. Nvidia also has FP32, which cinematic renderers use. ATI also has severe restrictions on texture lookups, which is unacceptable for RenderMan-like shaders.

Quote:
System, method and article of manufacture for pixel shaders for programmable shading
Sweeping dismissals of detailed arguments just don't work. If you're going to disagree with me, it will only be effective if you provide information to back up your point of view. You can't just say I'm wrong and expect anyone to believe you.
Old 06-03-03, 03:11 PM   #276
Hanners
Elite Bastard
 
Join Date: Jan 2003
Posts: 984

Quote:
Originally posted by ImaginaryFiend
The reason I think they are the only ones capable of cinematic shading is that ATI can only do 64 instruction-long programs. Nvidia can do at least 1024 instructions.
R350 supports an infinite number of instructions (theoretically, of course).

I would go through all the other points you got wrong (particularly in your post before last) but it would take way, way too long.
__________________
Owner / Editor-in-Chief - Elite Bastards