

Old 04-06-03, 06:30 AM   #109
StealthHawk

Quote:
Originally posted by sapient
Also, regarding your other comment about programmability, I think you are forgetting something: HLSL. It doesn't matter how "programmable" your GPU is running a proprietary/own coding language if the only coding language people are going to use is a 3rd-party language designed for a base minimum requirement defined by that 3rd party. Why do you think nVidia is pushing for Cg?
Um, of course it matters how programmable something is. Are you forgetting that DX9 does not have fixed specs? DX9 officially supports PS2.0, PS2.0+, and PS3.0, at least. All of these offer differing programmability, and obviously no one has PS3.0 support yet.

HLSL isn't some magic language that lets you do anything you want. It just makes programming easier.
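
To make that concrete, here is a minimal HLSL/Cg-style pixel shader sketch; the sampler name is assumed and the targets in the comments are only examples. The same high-level source still has to be compiled against a specific shader target, and each target keeps its own instruction and register limits, so the underlying hardware's programmability still matters.

[code]
// A trivial pixel shader; 'baseTex' is an assumed name.
// The same source can be compiled for different targets (ps_2_0, ps_2_x,
// ps_3_0, ...), but each target has its own instruction counts, register
// limits and features: the high-level language hides the assembly, not the
// hardware differences.
sampler2D baseTex;

float4 main(float2 uv : TEXCOORD0) : COLOR
{
    float4 c = tex2D(baseTex, uv);   // one texture read
    return c * 0.5;                  // simple arithmetic on the result
}
[/code]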
Old 04-06-03, 08:14 AM   #110
Uttar

Quote:
Originally posted by Chalnoth
I think nVidia is pushing Cg because it's the only realistic way to get optimum performance out of an NV3x part on the shader side (in OpenGL, at least...Microsoft's own HLSL compiler seems to work fairly well, though I think MS still should have some integer data types in PS 2.0).
nVidia would kill for having integer support in DX9. I doubt they'd even care if it was in PS2.0; PS2.0+ is sufficient for them.

INT12 is 3 or 4 times faster than FP16 on an NV3x, depending on the operation.

Link: http://www.beyond3d.com/forum/viewtopic.php?t=5150

IMO, that's one of the best threads about the NV3x. Ever.
If you are serious about understanding the NV3x, it's a must read.


Uttar
Old 04-06-03, 12:20 PM   #111
ChrisRay

Quote:
Originally posted by sapient
3-4 years?????
You do realize that at the rate CPUs are getting faster, 3-4 years means that your CPU is ancient. I think most people usually upgrade within 3 years, and since the graphics cards you are talking about (9500 & 5600) are mid-level cards, people will upgrade quicker, owing to the fact that new games will require faster cards.

I myself upgrade video cards every year, but I only buy mid-level cards. I bought a Radeon 7200 a year later (when the 8500 was introduced) and bought an 8500 when the 9700 was introduced (both times I paid less than $100 for a card that was selling for $399 a year earlier).

Also, regarding your other comment about programmability, I think you are forgetting something: HLSL. It doesn't matter how "programmable" your GPU is running a proprietary/own coding language if the only coding language people are going to use is a 3rd-party language designed for a base minimum requirement defined by that 3rd party. Why do you think nVidia is pushing for Cg?

Keeping in mind the point above, a developer is going to code in HLSL or OGL (both of which are supported by "both" companies), except the implementation is different. The R300 is said to run "non-proprietary", i.e. standard DX paths, faster than the NV30, so which one is better in your opinion now?

What good is a feature that is never used? ATI's TRUFORM and the NV30's longer shaders/dynamic looping (maybe I am mistyping) etc. look very good on paper, but are they really worth paying extra for when they are not being used?

The problem is, I wasn't comparing you and me. I was talking about most people, "non-enthusiast" people.

Heck, I was chatting with a friend online yesterday (she doesn't know much about computers), and she was telling me how her Pentium 4 1.4 GHz with a GeForce 3 card was top-of-the-line bread and butter and would last her another few years before she would have to upgrade.

The problem with enthusiasts is we make up such a small percentage of the computer market.
Old 04-06-03, 01:38 PM   #112
Chalnoth

Quote:
Originally posted by Uttar
nVidia would kill for having integer support in DX9. I doubt they'd even care if it was in PS2.0; PS2.0+ is sufficient for them.

INT12 is 3 or 4 times faster than FP16 on an NV3x, depending on the operation.
No, INT12 has twice the execution units, apparently. I still think more tests need to be done (the total number of execution units just doesn't add up...). By combining floating-point ops and integer ops, then, maximum performance can be achieved.

Just like with CPUs, I see no reason to believe that integer operations are pointless in a GPU.
Old 04-06-03, 02:39 PM   #113
Uttar

Quote:
Originally posted by Chalnoth
No, INT12 has twice the execution units, apparently. I still think more tests need to be done (the total number of execution units just doesn't add up...). By combining floating-point ops and integer ops, then, maximum performance can be achieved.

Just like with CPUs, I see no reason to believe that integer operations are pointless in a GPU.
Actually, it seems that there is one FP/Texture unit, and two INT units. But the FP unit can do INT stuff too - that's why it's up to 3 times faster.

The idea, obviously, would be to mix both FP and INT, and that would probably be about two times faster on average. Also, minimizing register usage is *critical* on the NV3x.

BTW, I'm also of the opinion that keeping integer hardware on GPUs is not useless. But if you don't have sufficient float power, it still isn't a good idea.
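
To make the mixing idea concrete, here is a minimal Cg-style fragment sketch for an NV3x-class profile, assuming the documented fixed (12-bit fixed point), half (FP16) and float (FP32) types; the texture and constant names are made up for illustration.

[code]
// Hypothetical fragment program keeping most of the work in low precision.
// 'fixed' is 12-bit fixed point, 'half' is FP16, 'float' is FP32.
fixed4 main(half2 uv : TEXCOORD0,
            uniform sampler2D diffuseMap,    // assumed texture
            uniform float4    tint) : COLOR  // assumed constant
{
    fixed4 base = tex2D(diffuseMap, uv);     // 8-bit texture data: fixed is plenty
    fixed4 lit  = base * (fixed4)tint;       // colour math in 12-bit fixed point
    return lit;                              // few live registers, few FP ops
}
[/code]

The point is simply that plain colour blending never needs FP32; the more of it that stays on the INT units, the more the FP/texture unit is left free for the math that actually needs it.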


Uttar
Old 04-06-03, 02:54 PM   #114
Chalnoth

Well, as I said, the number of execution units doesn't add up. I'm not sure the person who wrote those shaders tested adequately for parallelism; that is, the possibility of executing two FP instructions per clock is still open, provided those two instructions can be executed in parallel rather than serially, which would mean they have to operate on independent data.
Old 04-06-03, 10:54 PM   #115
pancakebunny

Well, I'm going 9800 Pro, but I just want to state that the NV30 is just garbage. Why? Because the design of the thing does not have DX9 in mind. They shouldn't have hot-wired features into the card that wouldn't work with DX9 gracefully. So now it needs special attention to work properly, which is BS imo. The R300 isn't as programmable as the FX, but the R350 is far more programmable than the FX. From the proof and examples I've seen, from both sides, there is just no convincing me that the NV30 is a great product.

Its design is flawed and did not go with what MS had called for in the DX specs. That's not MS's fault, and Nvidia has a lot of gall requesting that MS lower the precision requirements in DX9 when they KNEW what DX9 was going to call for. Cg might be useful, and then again Cg could just be Nvidia's way of fixing their mistake. I see Cg as another Glide 2.0 if it doesn't support other programmable GPU cards. And frankly, for the NV35, if it's the same GPU as the NV30 but with a 256-bit memory interface... big whoop then. Sure it'll be faster, but it will still be a crap design. Have to wait and see on that one.

I'm not an nvidiot nor an atidiot, but I know who's been delivering the goods and who's been honest. Both sides have screwed up before, but right now Nvidia is really in the corner with their own drivers and hardware, whereas ATi is trying to aim high in quality and performance. I've heard a lot of you guys argue with techniques and specs, but in the end, in just direct tests, both cards are fairly fast against one another; I'm just tired of the overhype. Nvidia burned me with their BS PR, and I certainly had enough of that with Mark Rein from Epic.

Uttar, I read that link, but since DX9 is its own API, of course it won't use FX12, which is Nvidia's own proprietary format. Again, a dumb move to support features that won't be called for in an API unless given special attention.
Old 04-06-03, 11:02 PM   #116
AngelGraves13

I only have one thing to say: "NV35".

Old 04-06-03, 11:19 PM   #117
Chalnoth

Quote:
Originally posted by pancakebunny
Well, I'm going 9800 Pro, but I just want to state that the NV30 is just garbage. Why? Because the design of the thing does not have DX9 in mind.
Now, why? DX9 was designed after nVidia had essentially finalized the specs for the NV30. nVidia got screwed because Microsoft doesn't believe in data types (apparently).

It's hardware first, software later. Microsoft designs the next DirectX release based upon upcoming hardware.

And the number of PS instructions available is only one part of programmability. The GeForce FX still has more instructions available in the pixel shader (DDX/DDY, for example, which are necessary for doing filtering in the pixel shader and can be used for a variety of purposes), and it has quite a bit over the R350 in vertex shader capabilities.
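
As a rough illustration of what DDX/DDY buy you, here is a small HLSL/Cg-style sketch, assuming a profile that exposes the derivative instructions; the texture and the stripe pattern are purely illustrative. The derivatives tell the shader how fast a value changes between neighbouring pixels, which is exactly what hand-rolled filtering needs.

[code]
// Sketch: screen-space derivatives used for filtering in the pixel shader.
float4 main(float2 uv : TEXCOORD0,
            uniform sampler2D tex) : COLOR       // assumed texture
{
    float2 du = ddx(uv);                         // uv change per pixel, x direction
    float2 dv = ddy(uv);                         // uv change per pixel, y direction
    float  footprint = max(length(du), length(dv));  // rough pixel footprint

    // Fade a procedural stripe pattern out once its features shrink below
    // a pixel, to cut down on aliasing:
    float stripes = frac(uv.x * 40.0);
    float fade    = saturate(1.0 - footprint * 40.0);

    return tex2D(tex, uv) * lerp(1.0, stripes, fade);
}
[/code]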

Quote:
That's not MS's fault, and Nvidia has a lot of gall requesting that MS lower the precision requirements in DX9 when they KNEW what DX9 was going to call for.
Hardware before software. The NV30 architecture is based upon the supposition that different types of processing require different precisions to work optimally. Different data types have been used in CPUs for quite some time; why should GPUs be any different?

Quote:
Cg might be useful, and then again Cg could just be Nvidia's way of fixing their mistake. I see Cg as another Glide 2.0 if it doesn't support other programmable GPU cards.
It does.

Quote:
And frankly, for the NV35, if it's the same GPU as the NV30 but with a 256-bit memory interface... big whoop then. Sure it'll be faster, but it will still be a crap design.
Aren't your complaints with the NV30 based upon speed? Do you see the problem with the above statement, then?
Old 04-06-03, 11:55 PM   #118
ChrisRay

I'm really confused as to why people think Cg is only for Nvidia.


There are a few problems with Cg, granted, like its current lack of Pixel Shader 1.4 support (though it's been suggested that 1.4 will be in a future release).

Cg also contains some Nvidia-only code, but it also contains the HLSL code and ARB code needed for compatibility across GPUs...
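
To illustrate the multi-target point, here is a sketch assuming the stock cgc command-line compiler and a made-up source file name; the same Cg source can be compiled to the vendor-neutral ARB profile or to an NV3x-specific one.

[code]
// simple.cg (file name assumed): a trivial pass-through fragment program.
// The profile chosen at compile time decides what comes out, e.g.:
//   cgc -profile arbfp1 simple.cg   (generic ARB_fragment_program output)
//   cgc -profile fp30   simple.cg   (NV3x-specific NV_fragment_program output)
float4 main(float2 uv : TEXCOORD0,
            uniform sampler2D tex) : COLOR
{
    return tex2D(tex, uv);           // same source, different back-ends
}
[/code]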
Old 04-07-03, 04:42 AM   #119
Uttar

Well, Cg is sometimes slower than HLSL, even on an NV3x. This was proven by pocketmoon over at Beyond3D. So on a non-NV3x, I wouldn't be too optimistic about performance.

Microsoft has a LOT of experience with compilers; nVidia doesn't. I guess it'll get better in the future, but for now, Cg is still slower than HLSL.


Uttar
Old 04-07-03, 07:18 AM   #120
Hanners

Quote:
Originally posted by ChrisRay
I'm really confused as to why people think Cg is only for Nvidia.
I think people tend to replace the phrase 'optimised for' with 'only for' for effect.