Old 11-11-03, 05:02 PM   #97
Razor04
Registered User
 
Join Date: Jan 2003
Posts: 205
Default

Dave over at B3D commented on the 3.9s... he said there was no difference between them and the past few ATI releases.
Razor04 is offline   Reply With Quote
Old 11-11-03, 05:04 PM   #98
titan
Registered User
 
Join Date: Nov 2003
Posts: 10
Default

Quote:
Originally posted by nForceMan
Have you tried it with ForceWare 52.20 as well?

yes i've got the latest drivers
titan is offline   Reply With Quote
Old 11-11-03, 05:06 PM   #99
ChrisRay
Registered User
 
ChrisRay's Avatar
 
Join Date: Mar 2003
Location: Tulsa
Posts: 5,101
Default

Quote:
Originally posted by titan
yes i've got the latest drivers
Where?
__________________
|CPU: Intel I7 Lynnfield @ 3.0 Ghz|Mobo:Asus P7P55 WS Supercomputer |Memory:8 Gigs DDR3 1333|Video:Geforce GTX 295 Quad SLI|Monitor:Samsung Syncmaster 1680x1080 3D Vision / Olevia 27 Inch Widescreen HDTV 1920x1080

|CPU: AMD Phenom 9600 Black Edition @ 2.5 Ghz|Mobo:Asus M3n HT Deluxe Nforce 780A|Memory: 4 gigs DDR2 800| Video: Geforce GTX 280x2 SLI

Nzone
SLI Forum Administrator

NVIDIA User Group Members receive free software and/or hardware from NVIDIA from time to time to facilitate the evaluation of NVIDIA products. However, the opinions expressed are solely those of the members
ChrisRay is offline   Reply With Quote
Old 11-11-03, 05:07 PM   #100
Nutty
Sittin in the Sun
 
Nutty's Avatar
 
Join Date: Jul 2002
Location: United Kingdom
Posts: 1,835
Send a message via MSN to Nutty
Default

Quote:
Because unless you examine a game closely and make comparisons, you won't know when, where, and how IQ gets sacrificed, and you won't know what future areas it might break. What might get broken if the developer makes a core patch or introduces new maps, or modders make their own variants on an engine while unaware of different ways IHVs approach it? And considering how many ways IHVs could optimize only for game benchmarks--without compromising IQ--how can you trust the numbers OR the quality in comparison to the way the actual game will perform? FRAPS is not repeatable enough yet, and not every game allows one to record their own runthroughs to use FRAPS on, so for right now these kinds of worries are entirely valid. How do you get proper analysis of the future (something very important to consider for any smart consumer) off things we can only see now and can't trust their whole process--the important part--and can just see the visual end-results?
True, but if it breaks future things, then the IHV has only itself to blame. Future optimizations should all be done via in-driver compilers, and these will be extensively tested by simulation against virtually every possible outcome. That's what NV's huge rooms of servers are for.
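To make the distinction concrete, here is a toy sketch in Python (a made-up three-op IR, nothing resembling a real driver's compiler) of what a legitimate in-driver pass amounts to: the compiler rewrites the program, and the "simulation" step is just verifying that the rewritten version is output-identical to the original.

[code]
# Toy sketch of an in-driver optimization pass: rewrite the program,
# then verify the rewrite produces identical output on test inputs.
# (Hypothetical IR and pass name, purely illustrative.)

def run(program, x):
    """Interpret a tiny shader-like IR: a list of (op, operand) pairs."""
    for op, val in program:
        if op == "mul":
            x = x * val
        elif op == "add":
            x = x + val
    return x

def fold_muls(program):
    """Merge adjacent multiplies into a single instruction."""
    out = []
    for op, val in program:
        if out and op == "mul" and out[-1][0] == "mul":
            out[-1] = ("mul", out[-1][1] * val)   # exact here: power-of-two constants
        else:
            out.append((op, val))
    return out

original  = [("mul", 2.0), ("mul", 0.5), ("add", 1.0)]
optimized = fold_muls(original)

# The "rooms of servers" part, in miniature: check that nothing changed.
for x in (-3.0, 0.0, 0.25, 7.5):
    assert run(original, x) == run(optimized, x)
print(optimized)   # [('mul', 1.0), ('add', 1.0)]: fewer instructions, same result
[/code]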

Quote:
Basically, though the ABSOLUTE LAST step may not really matter in this case, all the intermediary ones can have undesirable side-effects, and would just seem to be the wrong paths to reward. It's fragmenting even more an area of the computer world that's hard enough to follow, and that we've spent many years trying to SOLIDIFY.
I don't think it's fragmenting; I think a lot of things are unifying. Vertex and fragment shaders, for example, are currently quite different. In the future they will have the same instruction sets, and will probably be executed on the same types of units. Doing this lets a single compiler optimize at both the vertex and the fragment level.

Change is inevitable in the computer industry, though, and especially in graphics, which seems to move faster than the other parts. A problem with moving this fast is that there aren't any standards in place to make sure we go in the right direction. Wasn't it only after NV's cut-down precision rendering that MS decided to implement a refrast comparison system?
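For anyone unfamiliar with the refrast idea, conceptually it is just this: render the same frame on the reference rasterizer and on the part under test, then require every pixel to land within some tolerance. A minimal sketch of that check follows (the tolerance value is made up, and this is nothing like the actual WHQL test):

[code]
import numpy as np

def within_tolerance(reference, hardware, tol=2):
    """Compare two HxWx3 uint8 frames; fail if any channel strays past tol."""
    diff = np.abs(reference.astype(int) - hardware.astype(int))
    return bool((diff <= tol).all())

ref = np.full((4, 4, 3), 128, dtype=np.uint8)   # stand-in "reference" frame
ok  = ref.copy(); ok[0, 0, 0] += 1              # rounding-level difference
bad = ref.copy(); bad[0, 0, 0] = 0              # a visibly wrong pixel

print(within_tolerance(ref, ok))    # True: within spec
print(within_tolerance(ref, bad))   # False: flagged
[/code]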
Nutty is offline   Reply With Quote
Old 11-11-03, 05:16 PM   #101
digitalwanderer
 
digitalwanderer's Avatar
 
Join Date: Jul 2002
Location: Highland, IN USA
Posts: 4,944
Default

Quote:
Originally posted by ChrisRay
Where?
"Psssst! They're playing with you Chris!", whispers the Dig non-chalantly.
__________________
[SIZE=1][I]"It was very important to us that NVIDIA did not know exactly where to aim. As a result they seem to have over-engineered in some aspects creating a power-hungry monster which is going to be very expensive for them to manufacture. We have a beautifully balanced piece of hardware that beats them on pure performance, cost, scalability, future mobile relevance, etc. That's all because they didn't know what to aim at."
-R.Huddy[/I] [/SIZE]
digitalwanderer is offline   Reply With Quote
Old 11-11-03, 05:16 PM   #102
titan
Registered User
 
Join Date: Nov 2003
Posts: 10
Default

Quote:
Originally posted by ChrisRay
Where?
oops, I thought 52.16 were the newest... are the 52.20 already there?
titan is offline   Reply With Quote
Old 11-11-03, 05:26 PM   #103
ChrisW
"I was wrong", said Chris
 
Join Date: Jul 2002
Location: standing in the corner!
Posts: 620
Default

Quote:
Originally posted by fallguy


800 down from 5300 is a pretty large percentage....
As I remember, scores were originally in the ~3500 range before.
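Taking the figures quoted in this thread at face value, the arithmetic works out to roughly:

[code]
# Figures as quoted in this thread; not independently verified.
baseline  = 3500                 # rough score before the optimizations
optimized = 5300                 # score with the disputed optimizations in place
drop      = 800                  # what the patch reportedly takes away

print(drop / optimized)              # ~0.15, about a 15% drop overall
print(drop / (optimized - baseline)) # ~0.44, nearly half of the optimization gain
[/code]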
__________________
AIW 9700 Pro | 1.3GHz CeleronT | 512MB PC133 SDRAM | ECS PCIPAT Mobo (Intel 815EP)
RadLinker/RadClocker
ChrisW is offline   Reply With Quote
Old 11-11-03, 05:30 PM   #104
Hellbinder
 
Hellbinder's Avatar
 
Join Date: Nov 2002
Location: CDA
Posts: 1,510
Default

Quote:
Originally posted by Skuzzy
Calm down HB, yer gonna blow a gasket.

Times are changing and next year, about now, should be pretty interesting. Devs are not idiots... well, most are not idiots. Quite a few devs are switching things around just to be able to properly code things and not spend painful days and nights wondering why things are not working like they should be.
Call for some support and you get told to use some proprietary path/feature in order to get around stuff.

Most people do not know they are not seeing everything that should be seen, HB. No sense in you getting all wound up. They honestly believe what they are saying and will continue to do so, regardless of what you or I, or anyone else, says. I am not saying that to be mean or sarcastic.

Hmmm... if you didn't know cars were made by other countries, you might just believe that Cadillac is the best luxury car on the planet and would express that.
Woah there Hooomie...

Who said I am overly upset about it...

I was simply trying to make a SOLID point. My days of being overly upset about anything like this are over.
__________________
Overam Mirage 4700
3.2ghz P4 HT
SIS 748FX Chipset 800mhz FSB
1Gig DDR-400
60Gig 7200RPM HD
Radeon 9600M Turbo 128 (400/250)
Catalyst 4.2
Latest good read: [url]http://www.hardocp.com/article.html?art=NTc4LDE=[/url]

Last edited by Hellbinder; 11-11-03 at 07:04 PM.
Hellbinder is offline   Reply With Quote

Old 11-11-03, 05:39 PM   #105
digitalwanderer
 
digitalwanderer's Avatar
 
Join Date: Jul 2002
Location: Highland, IN USA
Posts: 4,944
Default

Quote:
Originally posted by Hellbinder
I was simply trying to make a SOLID point. My days of being overly upset about anything like this are over.
A much healthier attitude; you'll live longer.

Trust me.
__________________
[SIZE=1][I]"It was very important to us that NVIDIA did not know exactly where to aim. As a result they seem to have over-engineered in some aspects creating a power-hungry monster which is going to be very expensive for them to manufacture. We have a beautifully balanced piece of hardware that beats them on pure performance, cost, scalability, future mobile relevance, etc. That's all because they didn't know what to aim at."
-R.Huddy[/I] [/SIZE]
digitalwanderer is offline   Reply With Quote
Old 11-11-03, 05:44 PM   #106
vandersl
Registered User
 
Join Date: Mar 2003
Location: NJ
Posts: 43
Default

Quote:
A benchmark should be: "How fast does it do this task?"
3DMark, on the other hand, is: "How fast does it do this bit of code, without the option to massage it for the hardware?"
Exactly right - and if the task is 'run this piece of code, which I wrote in a standard programming language' then the driver/compiler/processor should do just that. It shouldn't decide that 16 or 12 bits is enough precision. That's not what I asked it to do.

I have no problem with compiler technology. But having the compiler decide which parts of the task are 'important' is not a valid optimization, at least for a benchmark.
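To put a rough number on that precision point, here is a quick numpy sketch (a made-up chain of multiply-adds, not taken from any real shader) showing how a 16-bit result drifts away from the 32-bit one:

[code]
import numpy as np

def shade(x, dtype):
    """A made-up shader-style expression: a short chain of multiply-adds."""
    y = np.asarray(x, dtype=dtype)
    a = np.asarray(1.013, dtype=dtype)
    b = np.asarray(0.002, dtype=dtype)
    for _ in range(20):
        y = y * a + b
    return float(y)

full    = shade(0.5, np.float32)
reduced = shade(0.5, np.float16)
print(full, reduced, abs(full - reduced))   # the half-precision result drifts measurably
[/code]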

If some enterprising hard drive manufacturer noticed that all the reads/writes to a temporary file during a benchmark didn't do any meaningful work (after all, there's nothing left on the drive, is there?) and decided to just skip the whole thing and report 'done', would we call it a cheat, or congratulate them on an aggressive optimization? If I benchmark MP3 encoding on a CPU, is performing the encoding at 128kbps instead of the requested 256kbps acceptable if I can't hear the difference? Personally, I think not.

It's all about equal work. Who cares if the work being requested is inefficient? Just do the work (all of it) in the most efficient way you can. No sweeping things under the rug. No cutting corners. Just do it.

Is this too much to ask?
vandersl is offline   Reply With Quote
Old 11-11-03, 05:49 PM   #107
ChrisW
"I was wrong", said Chris
 
Join Date: Jul 2002
Location: standing in the corner!
Posts: 620
Default

You guys are assuming that ATI can't also gain performance by optimizing for 3DMark03. Do you really think they couldn't pick up speed by replacing shaders with ones that look "close enough", or through other tricks of that sort?
__________________
AIW 9700 Pro | 1.3GHz CeleronT | 512MB PC133 SDRAM | ECS PCIPAT Mobo (Intel 815EP)
RadLinker/RadClocker
ChrisW is offline   Reply With Quote
Old 11-11-03, 05:54 PM   #108
Skuzzy
Bit Bumper
 
Join Date: Aug 2002
Location: Here
Posts: 782
Default

Quote:
Originally posted by Hellbinder
Woah there Hoomie...

Who said I am overly upset about it...

I was simply trying to make a SOLID point. My days of being overly upset about anything like this are over.
LOL! I was doing a preemptive strike there HB. Ya know, adding more brick to the wall to keep the dam from bustin out all over.
__________________
Stuff Happenz!
Skuzzy is offline   Reply With Quote