Old 03-30-03, 01:50 PM   #13
Cotita
Nvidia God
 
Join Date: Jul 2002
Posts: 341
Default

I think it's acceptable that nvidia uses FP32 for application settings, an FP16/FP32 mix for quality settings, and FP16 for performance settings.
__________________
Sometimes I hate being right every time.
Cotita is offline   Reply With Quote
Old 03-30-03, 02:27 PM   #14
Nutty
Sittin in the Sun
 
Nutty's Avatar
 
Join Date: Jul 2002
Location: United Kingdom
Posts: 1,835
Send a message via MSN to Nutty
Default

Some things are FP32, even in the "hacked" drivers. Texture addressing, for one, is always FP32.

I don't think it's bad that nvidia will use FP16 instead of FP32 unless specifically asked.

It's just the same as using "int" in C/C++. You don't know the size of int, but when you use it you don't care; you just want it fast. On some systems it's 32-bit, on some it's 16-bit.

If you really need 32 bits, then you make sure you get 32 bits by choosing a more specific integer type. The compiler merely uses the most appropriate size for the hardware.
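Something like this, as a rough C++ sketch (just my own illustration, nothing driver-specific):

Code:
#include <cstdint>
#include <cstdio>

int main() {
    // Plain "int": the compiler picks whatever width suits the platform.
    // Fast, but you shouldn't assume a particular size.
    int whatever = 1000;

    // If you genuinely need at least 32 bits, ask for it explicitly.
    int32_t exactly32 = 1000;

    std::printf("sizeof(int) = %zu, sizeof(int32_t) = %zu\n",
                sizeof(whatever), sizeof(exactly32));
    return 0;
}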

Same thing here: if you don't specifically request that a certain thing _must_ be FP32, then the compiler in nvidia's drivers will optimize it to use FP16.

I can't see what the problem is, as long as the image looks right.
Nutty is offline   Reply With Quote
Old 03-30-03, 02:35 PM   #15
MuFu
should be at a lecture
 
MuFu's Avatar
 
Join Date: Jul 2002
Location: Loughborough Uni/Aylesbury, UK
Posts: 462
Default

They are probably swapping out GT4 shader code for CineFX-optimised routines compiled into the drivers. That'd explain the apparent geometry quirks and surprisingly good performance given the results of the synth tests.

MuFu.
MuFu is offline   Reply With Quote
Old 03-30-03, 03:04 PM   #16
ChrisRay
Registered User
 
ChrisRay's Avatar
 
Join Date: Mar 2003
Location: Tulsa
Posts: 5,101
Default

Quote:
Originally posted by Captain Beige
ATI cards don't support FP16, only FP24, since FP24 is part of the DX9 specification but FP16 is not and is therefore useless for a true DX9 card. This is not like vsync; vsync is an option, not part of a standard. nvidia cards using FP16 unless specifically asked for FP32 is ridiculous.

It would be like a company claiming to have an equal opportunities policy but discriminating against people unless you specifically told them not to be prejudiced against every possible kind of lifestyle, and if you accidentally left anyone out they'd bully them until they accepted lower pay, and then saying it was okay because you didn't say you wanted them to be treated fairly and boasting about how great they are at cutting costs.
Errm, ATI cards fully support floating-point 16-bit precision; using the R200 pathway, Doom 3 runs in 16-bit precision.

I dunno where the heck you learned that the R300 cannot do 16-bit precision.

And vsync used to be part of the DX specification. Microsoft would not certify any drivers that allowed vsync to be turned off until around 2001.
__________________
|CPU: Intel I7 Lynnfield @ 3.0 Ghz|Mobo:Asus P7P55 WS Supercomputer |Memory:8 Gigs DDR3 1333|Video:Geforce GTX 295 Quad SLI|Monitor:Samsung Syncmaster 1680x1080 3D Vision\/Olevia 27 Inch Widescreen HDTV 1920x1080

|CPU: AMD Phenom 9600 Black Edition @ 2.5 Ghz|Mobo:Asus M3n HT Deluxe Nforce 780A|Memory: 4 gigs DDR2 800| Video: Geforce GTX 280x2 SLI

Nzone
SLI Forum Administrator

NVIDIA User Group Members receive free software and/or hardware from NVIDIA from time to time to facilitate the evaluation of NVIDIA products. However, the opinions expressed are solely those of the members
ChrisRay is offline   Reply With Quote
Old 03-30-03, 03:13 PM   #17
digitalwanderer
 
digitalwanderer's Avatar
 
Join Date: Jul 2002
Location: Highland, IN USA
Posts: 4,944
Default To be honest....

....I don't really understand it all when you guys get all "techy" on me, I just hate to see nVidia weaseling their way out of this with a whimper and no roar!

(Yeah, this coming from the guy who's all excited he just bought a Gainward GF4 4400 GS. Yes, I am a hypocrite. )
__________________
[SIZE=1][I]"It was very important to us that NVIDIA did not know exactly where to aim. As a result they seem to have over-engineered in some aspects creating a power-hungry monster which is going to be very expensive for them to manufacture. We have a beautifully balanced piece of hardware that beats them on pure performance, cost, scalability, future mobile relevance, etc. That's all because they didn't know what to aim at."
-R.Huddy[/I] [/SIZE]
digitalwanderer is offline   Reply With Quote
Old 03-30-03, 03:23 PM   #18
walkndude
Guest
 
Posts: n/a
Default

Chris, the r300 is fixed at 24fp.

It will run any application that asks for 16fp at 24fp... not a good or a bad thing, just the way it's designed...
  Reply With Quote
Old 03-30-03, 03:24 PM   #19
Chalnoth
Registered User
 
Join Date: Jul 2002
Posts: 1,293
Default

I think the problem is that DirectX offers no way to expose FX12 functionality (12-bit integer).

Since the FX can execute FX12 and floating-point ops in serial, this is a major problem for the performance of the FX.
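Roughly what that FX12 format looks like numerically, as a quick C++ sketch (assuming the commonly cited 12-bit fixed-point layout: range [-2, 2) in steps of 1/1024):

Code:
#include <algorithm>
#include <cstdio>

// Snap a value to an FX12-style grid (assumed: 12-bit fixed point,
// range [-2, 2), step 1/1024). Purely illustrative.
float quantize_fx12(float x) {
    x = std::max(-2.0f, std::min(x, 2.0f - 1.0f / 1024.0f)); // clamp to range
    return static_cast<float>(static_cast<int>(x * 1024.0f)) / 1024.0f;
}

int main() {
    std::printf("%f\n", quantize_fx12(0.5003f)); // -> 0.500000
    std::printf("%f\n", quantize_fx12(1.7777f)); // -> 1.777344
    return 0;
}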
__________________
"Physics is like sex. Sure, it may give some practical results, but that's not why we do it." - Richard P. Feynman
Chalnoth is offline   Reply With Quote
Old 03-30-03, 03:36 PM   #20
ChrisRay
Registered User
 
ChrisRay's Avatar
 
Join Date: Mar 2003
Location: Tulsa
Posts: 5,101
Default

Quote:
Originally posted by walkndude
Chris, the r300 is fixed at 24fp.

It will run any application that asks for 16fp at 24fp... not a good or a bad thing, just the way it's designed...

Hmm, from what I have read, the Radeon 9700 Pro supports 64-bit floating-point frame buffers.

To me that actually seems very limited, as Doom 3 runs in 16-bit precision on the R200 pathway.

From my understanding of DirectX 9.0, that's actually overkill: forcing 24-bit all the time in situations where it's not necessary.

This is my understanding of the DirectX 9.0 specifications:

Hardware support:
ATI R3xx - offers FP16 and FP24 (32-bit formats must therefore be reduced) pixel shader precision
NVIDIA NV3x - offers FP16 and FP32 pixel shader precision

DX9 specification requirements (PS2.0 registers):
colour = 8-bit integer only
constant float = minimum 16-bit FP, but this limits the actual number of constants that can be used
input texture coordinate = minimum 24-bit FP, preferred 32-bit FP, 16-bit FP partial precision for dependent reads
sampler = minimum 16-bit FP to support 16-bit texture formats
temporary = minimum 16-bit FP to support anything taken from a 16-bit FP source
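To put rough numbers on those precisions, here's a quick back-of-the-envelope C++ sketch (assuming s10e5 for FP16, s16e7 for ATI's FP24 and s23e8 for FP32):

Code:
#include <cmath>
#include <cstdio>

// Relative precision of a floating-point format is set by its mantissa
// width: the step size around 1.0 is 2^-mantissa_bits.
void show(const char* name, int mantissa_bits) {
    double eps = std::ldexp(1.0, -mantissa_bits); // 2^-mantissa_bits
    std::printf("%s: epsilon ~ %g (~%.1f decimal digits)\n",
                name, eps, mantissa_bits * std::log10(2.0));
}

int main() {
    show("FP16 (s10e5)", 10);
    show("FP24 (s16e7)", 16);
    show("FP32 (s23e8)", 23);
    return 0;
}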
__________________
|CPU: Intel I7 Lynnfield @ 3.0 Ghz|Mobo:Asus P7P55 WS Supercomputer |Memory:8 Gigs DDR3 1333|Video:Geforce GTX 295 Quad SLI|Monitor:Samsung Syncmaster 1680x1080 3D Vision\/Olevia 27 Inch Widescreen HDTV 1920x1080

|CPU: AMD Phenom 9600 Black Edition @ 2.5 Ghz|Mobo:Asus M3n HT Deluxe Nforce 780A|Memory: 4 gigs DDR2 800| Video: Geforce GTX 280x2 SLI

Nzone
SLI Forum Administrator

NVIDIA User Group Members receive free software and/or hardware from NVIDIA from time to time to facilitate the evaluation of NVIDIA products. However, the opinions expressed are solely those of the members
ChrisRay is offline   Reply With Quote

Old 03-30-03, 08:21 PM   #21
StealthHawk
Guest
 
Posts: n/a
Default

Quote:
Originally posted by ChrisRay
Well I do know Game Test 1, 2, 3 do not require FP 24 bit precision.
This is true, because they do not use DX9.
  Reply With Quote
Old 03-30-03, 08:45 PM   #22
StealthHawk
Guest
 
Posts: n/a
Default

Quote:
Originally posted by ChrisRay
Errm, ATI cards fully support floating-point 16-bit precision; using the R200 pathway, Doom 3 runs in 16-bit precision.

I dunno where the heck you learned that the R300 cannot do 16-bit precision.

And vsync used to be part of the DX specification. Microsoft would not certify any drivers that allowed vsync to be turned off until around 2001.
I don't know where you heard R300 can do FP16; I have never heard that stated, and it does not seem likely. People "in the know" always say that R300 does not support FP16, and does not support true 128-bit color (FP32) either. We all know its max is FP24.

I'm not really sure where you heard the R200 pathway uses 16-bit floating-point precision; AFAIK Carmack has never explicitly stated such a thing.

Carmack said this in his .plan
Quote:
The reason for this is that ATI does everything at high precision all the time, while Nvidia internally supports three different precisions with different performances. To make it even more complicated, the exact precision that ATI uses is in between the floating point precisions offered by Nvidia, so when Nvidia runs fragment programs, they are at a higher precision than ATI's, which is some justification for the slower speed. Nvidia assures me that there is a lot of room for improving the fragment program performance with improved driver compiler technology.
He says that ATI does everything in "high precision all the time." He also says that nvidia supports three formats, which we know to be INT12, FP16, and FP32.

He then says that "the exact precision ATI uses is in between the floating point precisions offered by Nvidia." Again, nvidia offers FP16 and FP32, so this suggests that the R300 can only use FP24.
  Reply With Quote
Old 03-30-03, 09:37 PM   #23
ChrisRay
Registered User
 
ChrisRay's Avatar
 
Join Date: Mar 2003
Location: Tulsa
Posts: 5,101
Default

Quote:
I don't know where you heard R300 can do FP16, I have never heard that stated, and it does not seem likely. people "in the know" always say that R300 does not support FP16, and does not support true 128bit color either(FP32). we all know it's max is FP24.

i'm not really sure where you heard the R200 pathway uses 16bit floating point precision, AFAIK Carmack has never explicitly stated such a thing
Where I got that was from a misrepresentation of its specs by ATI.

Then again, ATI specifically states it supports 128-bit precision too.

According to their specifications it supports 64-bit floating-point frame buffers, but that's another thing.

Since apparently it's just 24-bit downsampling to 16-bit (which I think is retarded for any given number of reasons).

And yeah, Carmack does state the R200 uses 16-bit precision in its shading. It has to, to support R200 features. So the R300 is apparently downsampling its quality, which isn't exactly a good representation of performance.

Actually, Carmack openly supports 16-bit precision over 24-bit and 32-bit. This can be seen in his Beyond3D interview.

IMO, being strictly 24-bit is ATI's gift and its curse, because they won't benefit from good programming such as Carmack's precision modifiers in 16-bit, as 24-bit isn't really offering any IQ over 16-bit.

But they also have the benefit of being "just enough" for the specifications.

Either way, I think ATI's implementation of its floating-point precision kinda leaves a little bit to be desired, especially when you consider the current DirectX 9.0 specification. As ATI's card is just the bare minimum for DX 9.0, I'm not quite sure why they chose to stick with strict 24-bit precision, DX 9.0 specifications be damned. Probably to save die space on their already crazily overloaded 0.15 micron process.

From a programmer's point of view, they leave little room for modification or tweaking, and that's always a bad thing. I can see why John Carmack stated he has become limited by the R300's programmability. Kinda disappointing to me. Oh well.
__________________
|CPU: Intel I7 Lynnfield @ 3.0 Ghz|Mobo:Asus P7P55 WS Supercomputer |Memory:8 Gigs DDR3 1333|Video:Geforce GTX 295 Quad SLI|Monitor:Samsung Syncmaster 1680x1080 3D Vision\/Olevia 27 Inch Widescreen HDTV 1920x1080

|CPU: AMD Phenom 9600 Black Edition @ 2.5 Ghz|Mobo:Asus M3n HT Deluxe Nforce 780A|Memory: 4 gigs DDR2 800| Video: Geforce GTX 280x2 SLI

Nzone
SLI Forum Administrator

NVIDIA User Group Members receive free software and/or hardware from NVIDIA from time to time to facilitate the evaluation of NVIDIA products. However, the opinions expressed are solely those of the members
ChrisRay is offline   Reply With Quote
Old 03-30-03, 10:18 PM   #24
StealthHawk
Guest
 
Posts: n/a
Default

Quote:
Originally posted by ChrisRay
And yeah, Carmack does state the R200 uses 16-bit precision in its shading. It has to, to support R200 features. So the R300 is apparently downsampling its quality, which isn't exactly a good representation of performance.
Can you explain this? Why does the R200 path "have to" use FP16 for the R300?
  Reply With Quote