07-31-09, 07:46 AM   #17
ChrisRay
Registered User
 
Join Date: Mar 2003
Location: Tulsa
Posts: 5,101
Default Re: EQ2 Shader 3.0 upgrade

Quote:
Originally Posted by pakotlar
Yeah, it was just that there was no comedy in it, because it's based on a non-existent irony. It would have been funny if people all along had actually thought that the GeForce 6800 had a broken SM 3.0 implementation, and that EQ2 performed poorly because of that. But no one actually did. No one thought that deeply about its performance, beyond that the 6800 had generally weaker shader performance overall. The funny part is that you imagined a fiction, made an ironic connection to a fictitious revelation, and then vehemently defended your position. That was funny.
Except you are wrong. A lot of people did believe EQ2 was a DirectX 9.0c game using Shader Model 3.0. That's the entire reason I dumped the shader code: I wanted to disprove that. What it sadly did was lead a lot of people to assume the performance problems were related to EQ2 using 1.1 shaders rather than 2.0/3.0 shaders. There was no concrete information on EQ2's shader technology and code when the game was released, and even to this day Sony keeps that code close to its chest. In hindsight, I wish I hadn't dumped the code, because it created a ton of negative feedback regarding something most people did not seem to understand.

The only irony in this thread is people like you who still seem to believe that EQ2's stuttering problems and underwhelming performance were related to the shader implementation, when in fact it had nothing to do with that at all. As I said, anyone can load up a GeForce 6/7 card these days and get no stuttering at all. Even more amusing is your attempt to turn this into some ancient X800 versus GeForce 6 debate, like anyone gives a rat's colon anymore. I spent more time working on this issue with Nvidia than perhaps any other large software problem I have dealt with to date.

Now, if you want to pick at someone, I'll try picking at you for a bit.

Quote:
Now with PP (so 16-bit floating point vs 24-bit on ATI): Evident in the above benchmarks as well as here:
Partial precision on both GeForce 6 and GeForce 7 hardware did not give huge performance benefits; in fact, full FP32 only created minor deficits. All partial precision really did was lower register constraints on the GeForce 6 cards, which were able to handle Pixel Shader 2.0 as well as Shader 1.1. Full FP32 only reduced performance by about 10% at most on GeForce 6 hardware, as register pressure wasn't really that limiting thanks to the increased register space available. That performance impact shrank further when the GeForce 7 series increased shader efficiency.
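If you want to see what that actually looks like, here's a minimal sketch of my own (illustrative only, not EQ2's or anyone's shipping shaders; the sampler and function names are made up) of how a D3D9-era HLSL shader requests partial precision per value with the half type, which is exactly the register relief I'm describing:

Code:
// Illustrative sketch only. In D3D9 HLSL, `half` values emit the _pp
// instruction modifier; NV4x may run that math at FP16, and an FP16 temp
// occupies half the register space of FP32 -- the register-pressure
// relief described above. Hardware is still free to run it at FP32.
#include <d3dx9shader.h>   // legacy D3DX9 SDK, assumed installed
#include <cstring>

static const char* kShader =
    "sampler2D gTex;\n"
    "float4 main(float2 uv : TEXCOORD0) : COLOR {\n"
    "    half4 c = tex2D(gTex, uv);   // FP16 is enough for color math\n"
    "    float2 p = uv * 512.0f;      // coordinate math stays FP32\n"
    "    c.rg *= frac(p);\n"
    "    return c;\n"
    "}\n";

LPD3DXBUFFER CompileMixedPrecision()
{
    LPD3DXBUFFER code = NULL, errors = NULL;
    // ps_2_0 profile: the compiled asm carries _pp on the half ops.
    D3DXCompileShader(kShader, (UINT)strlen(kShader), NULL, NULL,
                      "main", "ps_2_0", 0, &code, &errors, NULL);
    return code;   // error handling trimmed for brevity
}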

http://www.nvnews.net/vbulletin/show...&highlight=Cry

Quote:
However, SM 3.0 offers more powerful dynamic branching support and helps Geforce 6 out quite a bit -->
Far Cry did not use dynamic branching. It used static branching. The only difference between the ATI and Nvidia pathways was that Nvidia got more shader instructions in a single pass, whereas ATI was limited because they couldn't exceed 128 instructions. Not every shader benefited from this; in fact, there were only a few places where these performance gains were even relevant. The gains were only seen in heavy lighting situations where the shaders were used to draw lighting. It's no coincidence that EQ2 is using its new SM 3.0 code to improve lighting and shadowing conditions.
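For anyone fuzzy on the distinction, here's a hypothetical sketch (my own code, not Far Cry's) of static branching: the condition is a boolean shader constant set once per draw call, so it compiles to constant-based flow control rather than a per-pixel branch:

Code:
// Hypothetical sketch, not Far Cry's shaders. The branch condition is a
// bool constant, uniform for the whole draw call, so this compiles to
// static flow control -- resolved once per draw, never per pixel. The
// higher ps_3_0 instruction limit is what let one pass fold in more
// lighting work where a ps_2_0 shader would run out of instruction slots.
#include <d3dx9shader.h>
#include <cstring>

static const char* kLighting =
    "bool gUseSpecular;      // set per draw via the bool constant registers\n"
    "float4 main(float3 n : TEXCOORD0, float3 h : TEXCOORD1) : COLOR {\n"
    "    float3 l = normalize(float3(0.3, 0.8, 0.5));   // fixed light dir\n"
    "    float4 c = float4(saturate(dot(n, l)).xxx, 1); // diffuse term\n"
    "    if (gUseSpecular)       // static branching on a bool constant\n"
    "        c.rgb += pow(saturate(dot(n, h)), 32);\n"
    "    return c;\n"
    "}\n";

LPD3DXBUFFER CompileStaticBranch()
{
    LPD3DXBUFFER code = NULL, errors = NULL;
    D3DXCompileShader(kLighting, (UINT)strlen(kLighting), NULL, NULL,
                      "main", "ps_3_0", 0, &code, &errors, NULL);
    return code;   // error handling trimmed
}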

Quote:
However, SM 3.0 offers more powerful dynamic branching support and helps Geforce 6 out quite a bit -->
Dynamic branching on the GeForce 6/7 cards was, and is, very slow. Even today it is not used on these cards because of the performance impact. I suggest you read Wikipedia to actually learn what the GeForce 6/7 cards did compared to their counterparts at the time. It wasn't until the GeForce 8 series that Nvidia increased their dynamic branching performance to the point where using it was beneficial. This isn't something that was made up; it's pretty common knowledge about the NV4x hardware.
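To illustrate, here's a contrived pair of shaders (again my own sketch, not from any shipping game): the first branches on a value sampled per pixel, which is the dynamic flow control that crawled on NV4x because a whole pixel batch pays for both sides whenever its pixels diverge; the second is the branch-free arithmetic select developers typically used instead on that hardware:

Code:
// Contrived example. kDynamic branches on a per-pixel value: dynamic
// flow control, the slow case on NV4x. kBranchFree computes the result
// and selects arithmetically with step(), the common NV4x-era workaround:
// more ALU work, but no divergence penalty.
static const char* kDynamic =
    "sampler2D gShadow, gLight;\n"
    "float4 main(float2 uv : TEXCOORD0) : COLOR {\n"
    "    float4 c = float4(0, 0, 0, 1);\n"
    "    if (tex2D(gShadow, uv).r > 0.5)    // per-pixel condition: dynamic\n"
    "        c.rgb = tex2D(gLight, uv).rgb; // the 'expensive' lit path\n"
    "    return c;\n"
    "}\n";

static const char* kBranchFree =
    "sampler2D gShadow, gLight;\n"
    "float4 main(float2 uv : TEXCOORD0) : COLOR {\n"
    "    float lit = step(0.5, tex2D(gShadow, uv).r); // 0 or 1, no branch\n"
    "    return float4(tex2D(gLight, uv).rgb * lit, 1);\n"
    "}\n";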

Quote:
The Geforce 6 had to use partial precision shaders [which is not DX9.0 spec]
Yes, PP is DirectX 9.0 spec. It's been a flag for every DirectX 9.0 compiler since its second iteration. For SM 3.0 compliance you must support a minimum of FP32; however, you are not required to use it for everything, and you can still use partial precision flags in your compiler.
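Concretely, the legacy D3DX9 compiler exposes exactly such a flag. A quick sketch (shader body is illustrative only) that forces an entire shader to partial precision at compile time, rather than per value:

Code:
// Sketch using the legacy D3DX9 compiler. D3DXSHADER_PARTIALPRECISION
// forces all computation in the compiled shader to partial precision --
// the whole-shader counterpart of per-value `half`. Under ps_3_0 the
// hardware must still *support* FP32; the flag merely marks the math as
// allowed to run at FP16.
#include <d3dx9shader.h>
#include <cstring>

static const char* kSrc =
    "float4 main(float2 uv : TEXCOORD0) : COLOR {\n"
    "    return float4(uv, 0, 1) * 0.5f;\n"
    "}\n";

LPD3DXBUFFER CompileAllPartialPrecision()
{
    LPD3DXBUFFER code = NULL, errors = NULL;
    D3DXCompileShader(kSrc, (UINT)strlen(kSrc), NULL, NULL, "main",
                      "ps_3_0", D3DXSHADER_PARTIALPRECISION,
                      &code, &errors, NULL);
    return code;   // error handling trimmed
}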

Quote:
A final example of X800/X850's superiority in SM2.0 heavy games (and Half Life 2 does not support PartialPrecision (PP) calls):
Half-Life 2 didn't use PP because there was no gain to be had. Using a DirectX-modifying program such as 3DAnalyze lets you force the entire game into FP16 partial precision, and the performance gain was minimal for GeForce 6 cards (and 7, consequently), very similar to my Far Cry post above. There was no performance benefit to be seen, and some of the shaders had rendering issues because they were designed around 24-bit precision. Arguably it could have been done in 16-bit, but the art team focused its efforts on 24-bit or higher. It was not worth the effort anyway, as the GeForce 6/7 hardware ran FP32 nearly as fast. The only hardware that would have seen tremendous speedups was the NV3x cards, but they were too slow at even FP16 rendering to bother with, let alone FP32.



Next time you want to "educate" me on the irony of game code and how "delusional" I am, at least understand what the hell you are talking about when you reference technology, because you don't seem to know a damn thing about it. Also, do me the favor of not quoting me out of context in this silly post war you are trying to have with me. Those posts are nearly five years old. Who cares? All they show is that I dedicated an enormous amount of my personal time to bug-testing EQ2 for Nvidia.

Quote:
I could give a crap about this whole debate. You mentioned a couple of things which were incorrect,
Except you were wrong the entire time. All you did was create an argument about archaic technology that I'd be surprised anyone even cares about anymore.
__________________
|CPU: Intel I7 Lynnfield @ 3.0 Ghz|Mobo:Asus P7P55 WS Supercomputer |Memory:8 Gigs DDR3 1333|Video:Geforce GTX 295 Quad SLI|Monitor:Samsung Syncmaster 1680x1080 3D Vision / Olevia 27 Inch Widescreen HDTV 1920x1080

|CPU: AMD Phenom 9600 Black Edition @ 2.5 Ghz|Mobo:Asus M3n HT Deluxe Nforce 780A|Memory: 4 gigs DDR2 800| Video: Geforce GTX 280x2 SLI

Nzone
SLI Forum Administrator

NVIDIA User Group Members receive free software and/or hardware from NVIDIA from time to time to facilitate the evaluation of NVIDIA products. However, the opinions expressed are solely those of the members