Originally Posted by ChrisRay
Except you are wrong. A lot of people did believe EQ2 was a DirectX 9.0c game using Shader Model 3.0. That's the entire reason I dumped the shader code: I wanted to disprove that. What it sadly did was lead a lot of people to assume the performance problems were related to EQ2 using 1.1 shaders rather than 2.0/3.0 shaders. There was no concrete information on EQ2's shader technology and code when the game was released, and even to this day Sony keeps that code close to their chest. In hindsight, I wish I hadn't dumped the code, because it created a ton of negative feedback about something most did not seem to understand.
The only irony in this thread is people like you who still seem to believe that EQ2's stuttering problems and underwhelming performance were related to the shader implementation, when in fact they had nothing to do with that at all. As I said, anyone can load up a Geforce 6/7 card these days and get no stuttering at all. Even more amusing is your attempt to turn this into some ancient X800 versus Geforce 6 debate, like anyone gives a rat's colon anymore. I spent more time working on this issue with Nvidia than perhaps any other large software problem I have dealt with to date.
Far Cry did not use dynamic branching; it used static branching. The only difference between the ATI and Nvidia pathways was that Nvidia got more shader instructions in a single pass, whereas ATI was limited because it couldn't exceed 128 instructions. Not every shader benefited from this. In fact, there were only a few places where these performance gains were even relevant: the gains were only seen in heavy lighting situations where the shaders were used to draw lighting. It's no coincidence that EQ2 is using its new SM 3.0 code to improve lighting and shadowing conditions.
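To make the single-pass instruction point concrete, here is a toy arithmetic sketch. The 128 figure is the limit the post above cites for the ATI path; the other numbers (the 300-instruction "heavy lighting" shader and the 512-instruction SM3.0-style limit) are illustrative assumptions, not measured figures. The idea is simply that a shader longer than the per-pass limit must be split and re-run, so a longer single-pass allowance means fewer passes in exactly the heavy-lighting cases described:

```python
import math

# Toy model: a shader that exceeds the per-pass instruction limit must be
# split into multiple rendering passes, each with its own overhead.
def passes_needed(shader_instructions, per_pass_limit):
    return math.ceil(shader_instructions / per_pass_limit)

heavy_lighting_shader = 300  # hypothetical instruction count (assumption)

print(passes_needed(heavy_lighting_shader, 128))  # 128-instruction path -> 3 passes
print(passes_needed(heavy_lighting_shader, 512))  # longer-program path  -> 1 pass
```

So for short shaders both paths behave the same, which matches the observation that only a few heavy-lighting shaders saw any gain.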
Dynamic branching on the Geforce 6/7 cards was, and is, very slow. Even today it is not used on these cards because of the performance impact. I suggest you read Wikipedia to actually learn what the Geforce 6/7 cards did compared to their counterparts at the time. It wasn't until the Geforce 8 series that Nvidia increased their dynamic branching performance to the point where it was beneficial to use. This isn't something that was made up; it's pretty common knowledge about the NV4x hardware.
Dynamic branching on Geforce 6 wasn't optimal, but it was useful. Once again, Chris, you're absolutely wrong about its performance impact. It could certainly be beneficial for performance, and it was faster than static branching on the hardware. Geforce 6's coarse branch granularity obviously hampered branching performance, but it wasn't "broken", just less useful than in future iterations. Let's see: http://ixbtlabs.com/articles2/gffx/nv43-p02.html
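The granularity point both sides are circling can be sketched numerically. This is a toy model, not real hardware behavior, and the batch sizes and costs are illustrative assumptions: on SIMD hardware, a batch of pixels must execute every branch side that at least one pixel in the batch takes, so coarser batches (as on NV4x) pay for the expensive side far more often than finer ones:

```python
# Toy model of SIMD dynamic branching cost vs. branch granularity.
# Batch sizes and instruction costs below are illustrative assumptions,
# not exact NV4x or later-hardware figures.

def shading_cost(branch_taken, batch_size, cost_if=10, cost_else=2):
    """Total work: each batch runs every branch side any of its pixels takes."""
    total = 0
    for start in range(0, len(branch_taken), batch_size):
        batch = branch_taken[start:start + batch_size]
        cost = 0
        if any(batch):        # some pixel takes the expensive 'if' side
            cost += cost_if
        if not all(batch):    # some pixel takes the cheap 'else' side
            cost += cost_else
        total += cost * len(batch)
    return total

# A screen region where only scattered pixels take the expensive branch
# (e.g. only a few pixels are lit): 4096 pixels, 1 in 64 lit.
pixels = [(i % 64 == 0) for i in range(4096)]

coarse = shading_cost(pixels, batch_size=1024)  # coarse batches: both sides nearly everywhere
fine = shading_cost(pixels, batch_size=32)      # finer batches: many skip the expensive side
print(coarse, fine)  # -> 49152 28672
```

With coarse batches every batch contains at least one lit pixel, so the expensive path runs for all 4096 pixels; with finer batches half of them skip it entirely. That is "hampered but not broken": the branch still saves work whenever a whole batch agrees.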
This is getting boring. Please for the love of god do some research.
About dynamic branching in Far Cry: I'll check your link out in a second. It does seem that most of the performance benefit came from the longer shader instruction allowance, and that Crytek chose not to use dynamic flow control, opting instead for static branches and unrolled loops. Great, but why is this important? It still stands that SM3.0 was fine on Geforce 6.
I never commented on EQ2 stuttering, just the lower overall performance.
Finally, once again: Chris, Geforce 6's slower performance with SM1.1 shader code OBVIOUSLY MUST AFFECT ITS PERFORMANCE WITH SM1.1 SHADERS IN-GAME. I hope you understand this. It's not the only factor with regard to EQ2, but given that this is probably the 6800 Ultra's weakest point compared to the X800 XT PE or X850 XT, it is certainly an obvious one. I'm not sure what percentage of run-time was spent executing shader code, but it was clearly significant. The only reason you're arguing this is because someone is challenging your authority. It's silly, my friend.
I'm completely sick of talking to you. I can admit when I don't know something: I was wrong about dynamic flow control in Far Cry, although that was an easy mistake to make, and it does not take away from my assertion that SM3.0 helped Geforce 6. I'm not sure what you're trying to argue either. One moment you say that people are wrong because they blamed EQ2's poor performance on Geforce 6's SM3.0 implementation, the next you're arguing that Geforce 6's performance on SM1.1-SM2.0 code is a non-factor in in-game performance, and then you're saying that the SM3.0 implementation is broken in Geforce 6 hardware. It's complete nonsense.
SM3.0 improved performance on Geforce 6; SM1.1-2.0 code was slower overall compared to the competition (sometimes dramatically, especially with SM1.1 shaders); and EQ2 was certainly slower on Geforce 6 in part because of its slower performance on shader model 1.1-2.0 code. One caveat here is that even without PP (partial precision) calls, Geforce 6 wasn't necessarily slower per clock on all shader code. It sometimes lost quite dramatically (once again, up to 2x slower with SM1.1 shaders), but sometimes the loss was simply incidental to its lower clock speed.
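The per-clock caveat is just arithmetic, so here is a quick sketch. The core clocks (400 MHz for the 6800 Ultra, 520 MHz for the X800 XT PE) are the cards' stock clocks; the fps figures are made-up placeholders, purely to show how a raw-fps gap can be entirely accounted for by clock speed rather than per-clock shader throughput:

```python
# Clock-normalized comparison: frames per second per MHz of core clock.
# Clocks are the stock core clocks; fps values are hypothetical placeholders.

def per_clock(fps, core_mhz):
    return fps / core_mhz

nv_fps, nv_mhz = 60.0, 400    # 6800 Ultra: hypothetical fps, stock clock
ati_fps, ati_mhz = 78.0, 520  # X800 XT PE: hypothetical fps, stock clock

print(per_clock(nv_fps, nv_mhz))    # 0.15 frames per MHz
print(per_clock(ati_fps, ati_mhz))  # 0.15 frames per MHz
```

In this constructed case the per-clock numbers are identical, so the whole 60-vs-78 fps gap is "incidental to its lower clock speed"; in the SM1.1 cases cited above, the per-clock number itself would also be lower.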