Old 07-31-09, 02:43 AM   #13
SH64
MAXIMUM TECH
 
 
Join Date: Jul 2003
Location: Indiana
Posts: 12,202
Default Re: EQ2 Shader 3.0 upgrade

WOW you guys are still arguing over this game?!
__________________


- "My name is RAM and my tank is full"

http://warhawk64nv.mybrute.com/ <-- pupils go thaarrr! Or,
http://silenthunter64.mybrute.com
SH64 is offline   Reply With Quote
Old 07-31-09, 03:26 AM   #14
ChrisRay
Registered User
 
 
Join Date: Mar 2003
Location: Tulsa
Posts: 5,101
Default Re: EQ2 Shader 3.0 upgrade

Definitely wasn't my intention. It was comical in hindsight. That's all.
__________________
|CPU: Intel I7 Lynnfield @ 3.0 Ghz|Mobo:Asus P7P55 WS Supercomputer |Memory:8 Gigs DDR3 1333|Video:Geforce GTX 295 Quad SLI|Monitor:Samsung Syncmaster 1680x1080 3D Vision\/Olevia 27 Inch Widescreen HDTV 1920x1080

|CPU: AMD Phenom 9600 Black Edition @ 2.5 Ghz|Mobo:Asus M3n HT Deluxe Nforce 780A|Memory: 4 gigs DDR2 800| Video: Geforce GTX 280x2 SLI

Nzone
SLI Forum Administrator

NVIDIA User Group Members receive free software and/or hardware from NVIDIA from time to time to facilitate the evaluation of NVIDIA products. However, the opinions expressed are solely those of the members
ChrisRay is offline   Reply With Quote
Old 07-31-09, 06:40 AM   #15
pakotlar
Guest
 
Posts: n/a
Default Re: EQ2 Shader 3.0 upgrade

Quote:
Originally Posted by ChrisRay View Post
I did not say that. I said this game was built to be backwards compatible with a wide range of hardware, that 1.1 was its baseline code, and that all hardware ran that baseline code. When a developer builds a game, they have to program for what they feel is the lowest baseline in use at the time. The engine is now 4 years old; if they were building it now they wouldn't even support DX 8.0 cards. All I said was that the shader implementation in EQ 2 had nothing to do with why EQ 2 performed badly at the time. All the reasons EQ 2 had performance issues on Geforce 6/7 cards are now gone in the latest build of EQ 2, and the shader model baseline has not changed at all.

Nope. I think I wasn't clear enough when I said I did not care about how the X800 performed in this game, or how the X800 compared to the Geforce 6, period. I could argue for hours about how the Geforce 6/7 series performed versus the X800 and the benefits/downsides of such a decision. But once again, I don't care, and it's a pointless discussion about hardware no one would buy today. So please, I am begging you, spare me the X800/Geforce 6 debate. I could give a crap about it.

Sorry, but if you felt duped by Sony or Nvidia's marketing here then you were a victim of wishful thinking. Sony never once advertised that EQ 2 was an SM 3.0 game. Sony called EQ 2 a "Next Generation MMORPG", and there are many ways that "next gen" could be interpreted. Sony simply said that the game was built with future hardware in mind. And hardware has progressed a lot, and it's definitely helped the game run better.
I could give a crap about this whole debate. You mentioned a couple of things that were incorrect, and I pointed them out. I really don't want to argue with you over whatever it is you're trying to say.

To be honest I'm not really interested in your views about what developers target when they make games. I was speaking specifically about your SM 3.0 and Geforce 6 comment, which was flat-out BS, and some of the things you posted after it. It's a public forum, so when you make an assertion based on nothing but your personal opinion you should expect to get called on it.

And I didn't feel duped at all. A lot of other people did, though, which you've mentioned yourself over and over again. I never wrote that Sony advertised EQ 2 as anything; I said Nvidia advertised it. Thanks, having this kind of debate with you is really pointless.

As for your thoughts on the matter, Chris, I'm getting the feeling that it's not just the stuff you posted back then that's meaningless.
  Reply With Quote
Old 07-31-09, 06:50 AM   #16
pakotlar
Guest
 
Posts: n/a
Default Re: EQ2 Shader 3.0 upgrade

Quote:
Originally Posted by ChrisRay View Post
Definitely wasn't my intention. It was comical in hindsight. That's all.
Yeah, it's just that there was no comedy in it, because it's based on a non-existent irony. It would have been funny if people all along had actually thought that the Geforce 6800 had a broken SM 3.0 implementation and that EQ2 performed poorly because of that. But no one actually did. No one thought that deeply about its performance, beyond the fact that the 6800 had generally weaker shader performance overall. The funny part is that you imagined a fiction, made an ironic connection to a fictitious revelation, and then vehemently defended your position. That was funny.
  Reply With Quote
Old 07-31-09, 08:46 AM   #17
ChrisRay
Registered User
 
 
Join Date: Mar 2003
Location: Tulsa
Posts: 5,101
Default Re: EQ2 Shader 3.0 upgrade

Quote:
Originally Posted by pakotlar View Post
Yeah, it's just that there was no comedy in it, because it's based on a non-existent irony. It would have been funny if people all along had actually thought that the Geforce 6800 had a broken SM 3.0 implementation and that EQ2 performed poorly because of that. But no one actually did. No one thought that deeply about its performance, beyond the fact that the 6800 had generally weaker shader performance overall. The funny part is that you imagined a fiction, made an ironic connection to a fictitious revelation, and then vehemently defended your position. That was funny.
Except you are wrong. A lot of people did believe EQ2 was a DirectX 9.0c game using Shader Model 3.0. That's the entire reason I dumped the shader code: I wanted to disprove that. What it sadly did was leave a lot of people assuming the performance problems were related to EQ 2 using 1.1 shaders rather than 2.0/3.0 shaders. There was no concrete information on EQ 2's shader technology and code when the game was released, and even to this day Sony keeps that code close to their chest. In hindsight, I wish I hadn't dumped the code, because it created a ton of negative feedback regarding something most people did not seem to understand.

The only irony in this thread is people like you who still seem to believe that EQ 2's stuttering problems and underwhelming performance were related to the shader implementation, when in fact it had nothing to do with that at all. As I said, anyone can load up a Geforce 6/7 card these days and get no stuttering at all. Even more amusing is your attempt to turn this into some ancient X800 versus Geforce 6 debate, as if anyone gives a rat's colon anymore. I spent more time working on this issue with Nvidia than perhaps any other large software problem I have dealt with to date.

Now, if you want to pick at someone, I'll try picking at you for a bit.

Quote:
Now with PP (so 16bit floating point vs 24 bit on ATI): Evident in the above benchmarks as well as here:
Partial precision on Geforce 6 and Geforce 7 hardware did not give huge performance benefits; in fact full FP32 only created minor deficits. All partial precision did was ease register constraints on the Geforce 6 cards, which were able to handle Pixel Shader 2.0 as well as Shader 1.1. Full FP32 only reduced performance by about 10% at most on Geforce 6 hardware, as it wasn't really that limiting thanks to the increased register space available. That impact shrank further when the Geforce 7 series improved shader efficiency.

http://www.nvnews.net/vbulletin/show...&highlight=Cry
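For anyone following along who never wrote D3D9-era shaders, here is a minimal HLSL sketch of what "partial precision" actually is (illustrative only; the shader and names are made up, not taken from EQ2 or Half-Life 2). Declaring intermediates as half lets the compiler emit _pp-flagged instructions, which NV3x/NV4x hardware can use to relieve register pressure; the math is otherwise the same as the float version.

Code:
// Hypothetical ps_2_0 shaders, shown only to illustrate the half/_pp idea.
sampler2D diffuseMap : register(s0);
float4    lightColor : register(c0);

// Full precision: every temporary occupies a full FP32 register.
float4 ps_full(float2 uv : TEXCOORD0, float3 n : TEXCOORD1,
               float3 l : TEXCOORD2) : COLOR
{
    float4 albedo = tex2D(diffuseMap, uv);
    float  ndotl  = saturate(dot(normalize(n), normalize(l)));
    return albedo * lightColor * ndotl;
}

// Partial precision: 'half' temporaries compile to _pp instructions, which
// eases register pressure on NV3x/NV4x. FP16 only has ~10 bits of mantissa,
// so precision-sensitive shaders were kept at full precision.
half4 ps_pp(float2 uv : TEXCOORD0, half3 n : TEXCOORD1,
            half3 l : TEXCOORD2) : COLOR
{
    half4 albedo = tex2D(diffuseMap, uv);
    half  ndotl  = saturate(dot(normalize(n), normalize(l)));
    return albedo * (half4)lightColor * ndotl;
}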

Quote:
However, SM 3.0 offers more powerful dynamic branching support and helps Geforce 6 out quite a bit -->
Far Cry did not use dynamic branching. It used static branching. The only difference between the ATI and Nvidia pathways was that Nvidia got more shader instructions in a single pass, whereas ATI was limited because they couldn't exceed 128 instructions. Not every shader benefited from this; in fact there were only a few places where these performance gains were even relevant. The gains were only seen in heavy lighting situations where the shaders were used to draw lighting. It's no coincidence that EQ 2 is using its new SM 3.0 code to improve lighting and shadowing.
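To make the static-versus-dynamic distinction concrete, here is a hedged HLSL sketch (illustrative only, not Far Cry's or EQ2's actual code; all names are invented). A static branch keys off a uniform bool set once per draw call and can be resolved before the pixels are shaded, while a dynamic branch depends on a per-pixel value and must be evaluated at runtime, which is the case that was slow on NV4x.

Code:
// Illustrative SM 3.0 pixel shaders; constants and semantics are made up.
float3 lightDir    : register(c0);
float3 lightColor  : register(c1);
float3 specColor   : register(c2);
bool   useSpecular;          // uniform, set once per draw call

float4 ps_static(float3 n : TEXCOORD0, float3 v : TEXCOORD1) : COLOR
{
    float3 c = lightColor * saturate(dot(n, lightDir));
    if (useSpecular)         // static branch: same outcome for every pixel,
    {                        // so the untaken path can be skipped wholesale
        c += specColor * pow(saturate(dot(reflect(-lightDir, n), v)), 16);
    }
    return float4(c, 1);
}

float4 ps_dynamic(float3 n : TEXCOORD0, float3 v : TEXCOORD1,
                  float shadow : TEXCOORD2) : COLOR
{
    float3 c = 0;
    if (shadow > 0.5)        // dynamic branch: varies per pixel; coarse branch
    {                        // granularity made this costly on Geforce 6/7
        c = lightColor * saturate(dot(n, lightDir));
    }
    return float4(c, 1);
}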

Quote:
However, SM 3.0 offers more powerful dynamic branching support and helps Geforce 6 out quite a bit -->
Dynamic branching on the Geforce 6/7 cards was, and is, very slow. Even today it is not used on these cards because of the performance impact. I suggest you read Wikipedia to actually learn what the Geforce 6/7 cards did compared to their counterparts at the time. It wasn't until the Geforce 8 series that Nvidia increased their dynamic branching performance to the point where it was beneficial to use. This isn't something that was made up; it's pretty common knowledge about the NV4x hardware.

Quote:
The Geforce 6 had to use partial precision shaders [which is not DX9.0 spec]
Yes, PP is DirectX 9.0 spec. It has been a flag in every DirectX 9.0 compiler since the second iteration of it. For SM 3.0 compliance you must support a minimum of FP32; however, you are not required to use it exclusively, and you can still use partial precision flags in your compiler.

Quote:
A final example of X800/X850's superiority in SM2.0 heavy games (and Half Life 2 does not support PartialPrecision (PP) calls):
Half Life 2 didn't use PP because there was no gain to be had. Any DirectX-modifying program such as 3DAnalyze lets you force the entire game to FP16 partial precision, and the performance gain was minimal for Geforce 6 cards (and 7, consequently), very similar to my Far Cry post above. The performance benefit simply wasn't there, and some of the shaders had rendering issues because they were designed around 24-bit precision. Arguably it could have been done in 16-bit, but the art team focused its efforts on 24-bit or higher, and it was not worth the effort since the Geforce 6/7 hardware ran FP32 nearly as fast. The only hardware that would have seen tremendous speedups was the NV3x cards, but they were too slow at even FP16 rendering to bother with, let alone FP32.



Next time you want to "educate" me on the irony of game code and how "delusional" I am, at least understand what the hell you are talking about when you reference the technology, because you don't seem to know a damn thing about it. Also do me the favor of not quoting me out of context in this silly post war you are trying to have with me. Those posts are nearly 5 years old. Who cares? All they show is that I dedicated an enormous amount of my personal time to bug testing EQ 2 for Nvidia.

Quote:
I could give a crap about this whole debate. You mentioned a couple of things that were incorrect,
Except you were wrong the entire time. All you did was start an argument about archaic technology that I'd be surprised anyone even cares about anymore.
ChrisRay is offline   Reply With Quote
Old 07-31-09, 09:43 AM   #18
ChrisRay
Registered User
 
 
Join Date: Mar 2003
Location: Tulsa
Posts: 5,101
Default Re: EQ2 Shader 3.0 upgrade

Quote:
To be honest I'm not really interested in your views about what developers target when they make games. I was speaking specifically about your SM 3.0 and Geforce 6 comment, which was flat-out BS, and some of the things you posted after it. It's a public forum, so when you make an assertion based on nothing but your personal opinion you should expect to get called on it.
If you don't care, then don't post. I worked directly with Nvidia's driver team on this, relaying feedback, experience, and data, which is where I learned that SM 3.0 was not the culprit for the low performance in EQ 2. It was in fact a driver/software problem that Sony and Nvidia had to work on together, and which, mind you, they did eventually fix. As I have pointed out numerous times, you can take a Geforce 6/7 into this game now and not have any of the early problems that users experienced. After Nvidia confirmed that the game was not being slowed down by any SM 3.0 implementation, I decided to investigate myself and dump the code for it. I did not make any of this up based on my "personal" opinion. Part of being an Nvidia User Group member is to collect data, work with Nvidia on hot topics and driver issues, and test new hardware before it becomes available to the press. So I have a pretty good understanding of the issues I am talking about.

I would love to post my e-mail correspondence with Nvidia on this issue. I can't, however, since I am restricted by NDA regarding most of my e-mail conversations with Nvidia, due to driver patents and various other NDA issues. Am I pissy about this? Yes. Particularly because you've taken the time to spit on something that I put a lot of time and effort into back when it was relevant to a lot of people. Needless to say, I'm done arguing with you. I've already lost my patience, and this is not worth losing my temper over. Hence I'm done.
ChrisRay is offline   Reply With Quote
Old 07-31-09, 09:47 AM   #19
pakotlar
Guest
 
Posts: n/a
Default Re: EQ2 Shader 3.0 upgrade

Quote:
Originally Posted by ChrisRay View Post
Yes, PP is DirectX 9.0 spec. It has been a flag in every DirectX 9.0 compiler since the second iteration of it. For SM 3.0 compliance you must support a minimum of FP32; however, you are not required to use it exclusively, and you can still use partial precision flags in your compiler.

Half Life 2 didn't use PP because there was no gain to be had. Any DirectX-modifying program such as 3DAnalyze lets you force the entire game to FP16 partial precision, and the performance gain was minimal for Geforce 6 cards (and 7, consequently), very similar to my Far Cry post above. The performance benefit simply wasn't there, and some of the shaders had rendering issues because they were designed around 24-bit precision. Arguably it could have been done in 16-bit, but the art team focused its efforts on 24-bit or higher, and it was not worth the effort since the Geforce 6/7 hardware ran FP32 nearly as fast. The only hardware that would have seen tremendous speedups was the NV3x cards, but they were too slow at even FP16 rendering to bother with, let alone FP32.

Next time you want to "educate" me on the irony of game code and how "delusional" I am, at least understand what the hell you are talking about when you reference the technology, because you don't seem to know a damn thing about it. Also do me the favor of not quoting me out of context in this silly post war you are trying to have with me. Those posts are nearly 5 years old. Who cares? All they show is that I dedicated an enormous amount of my personal time to bug testing EQ 2 for Nvidia.
First off, who claimed that the Geforce 6 wasn't FP32 throughout? Btw, before we continue, a quick trivia question: for hardware that is int32 throughout, are there any benefits to rendering at, say, int16 color depth? In other words, if we're talking about a card that renders internally at 32-bit integer precision, is there any benefit in lowering the game's color depth to 16 bits?

I'll hit the rest of your post a little later, but these portions are pretty laughable. The reason PP wasn't exposed in HL2 has NOTHING to do with performance benefits. Of course PP has performance benefits. The reason HL2 ran without explicit PP shaders is that PP was never considered to be full DX 9.0 spec, and FP16 shaders often compromised quality. Valve was never happy with nVidia's solution, so they didn't use it. That's it. Anyway, NV30/35 drivers "did" force pp in HL2, flagging certain shaders and replacing them with pp versions exposed through the drivers. Ex: http://techreport.com/articles.x/5642.

Like you just said yourself, PP was introduced in the 2nd revision of DX9.0. It was a shoehorn attempt by nvidia to give the NV30 a chance to compete. The reason it's called partial precision is that it quite literally is not the full precision required by DirectX 9.0 at a minimum. That's why you have to flag the shader. No need to educate me on things like this, Chris.

Here, maybe you should read up on this yourself before berating me with unsupported claims: http://www.beyond3d.com/content/interviews/23/

B3D: "Note the word "choices". DX9 specifies that for hardware to be DX9 "compliant", the minimum specification for floating point is 24-bit. 24-bit Floating Point is essentially equivalent to 96-bit colors"

About the 6800: I even provided the links (beyond3d evaluations in my first post). The 6800 was FP32 throughout the pipeline, of course. However, the use of PP alleviated some register pressure and freed up ~30% performance, depending on the shader (some received no benefit). The 6800 was slower in a vast number of shaders, 1.1 to 2.0, so using PP where possible was still preferable, although not nearly as critical as in the NV30/35's case. Still, with 1.1 shaders you could find examples where ATI's 2nd-gen DX 9.0 part performed twice as fast as the NV40. In some cases the PP version allowed the Geforce 6 to catch up, or even pull ahead of ATI, although of course in many cases register pressure wasn't a bottleneck and the 6 series received no performance gains. Again, all the information is out there.

That's why I'm saying discussing this with you isn't going anywhere. Let's just drop it, ok? No one is trying to "educate" you. I really don't care what you believe. It's just that certain things are true, and some of what you're claiming isn't, like the bits quoted above. Literally all of the empirical bits are wrong (minus your dedication to EQ2).

See unlike you, I enjoy facts! Btw, the fact that you beta tested drivers for nvidia really doesn't impress me. I think it'd be interesting to talk about in a different context. There is a reason students of debate are taught to immediately call out argument from authority (it's an obvious logical fallacy). You'll never see good logicians or politicians supporting empirical assertions with their authority on the subject matter; it gets immediately picked apart and you look like a fool (because really, what does your authority have to do with the validity of a fact? It is either true or not true). Anyway, I've already given you props for being involved in exposing EQ2 as a non-SM3.0 game, so there's no need to keep stuffing our faces with what we already give you credit for. Doing that achieves the opposite effect of what I know you're intending.
  Reply With Quote
Old 07-31-09, 09:54 AM   #20
pakotlar
Guest
 
Posts: n/a
Default Re: EQ2 Shader 3.0 upgrade

Quote:
Originally Posted by ChrisRay View Post
If you don't care, then don't post. I worked directly with Nvidia's driver team on this, relaying feedback, experience, and data, which is where I learned that SM 3.0 was not the culprit for the low performance in EQ 2. It was in fact a driver/software problem that Sony and Nvidia had to work on together, and which, mind you, they did eventually fix. As I have pointed out numerous times, you can take a Geforce 6/7 into this game now and not have any of the early problems that users experienced. After Nvidia confirmed that the game was not being slowed down by any SM 3.0 implementation, I decided to investigate myself and dump the code for it. I did not make any of this up based on my "personal" opinion. Part of being an Nvidia User Group member is to collect data, work with Nvidia on hot topics and driver issues, and test new hardware before it becomes available to the press. So I have a pretty good understanding of the issues I am talking about.

I would love to post my e-mail correspondence with Nvidia on this issue. I can't, however, since I am restricted by NDA regarding most of my e-mail conversations with Nvidia, due to driver patents and various other NDA issues. Am I pissy about this? Yes. Particularly because you've taken the time to spit on something that I put a lot of time and effort into back when it was relevant to a lot of people. Needless to say, I'm done arguing with you. I've already lost my patience, and this is not worth losing my temper over. Hence I'm done.
CHRIS, jeezus man, I NEVER ONCE CLAIMED the performance issues had ANYTHING to do with SM3.0. The game running a bit slower on nvidia hardware at the time was at least partially due to the 6800 nearly always executing shaders a bit slower than the X800/X850 (not slower per clock in every case, but after taking clock speed into account, nearly always slower). That's why SM3.0 was such a big deal for nvidia at the time. Well, that and HDR. Nvidia had no problems with its SM3.0 implementation. It was actually a big selling point in the cases where it was exposed (SC:CS was a great example). That made me want one more than anything else. ATI's solution always felt like it was just "getting by" with FP24 pixel shaders, despite the fact that it was often much faster.

Fine, so you're telling me that nvidia spent a bunch of time optimizing its hardware to run the game well? Great, I guess I'll take your word on it. It's what the company was doing from the NV30 up until the G71; it just had less to fix during the Geforce 6 days because the 6 series was a significantly better-engineered part. I'm not sure if the NV40 did any shader replacement, but I'm certain I could dig that up. Ok, so why are you telling me this?

Are we clear, buddy?
  Reply With Quote

Old 07-31-09, 10:05 AM   #21
ChrisRay
Registered User
 
 
Join Date: Mar 2003
Location: Tulsa
Posts: 5,101
Default Re: EQ2 Shader 3.0 upgrade

Quote:
I'll hit the rest of your post a little later, but these portions are pretty laughable. The reason PP wasn't exposed in HL2 has NOTHING to do with performance benefits. Of course PP has performance benefits.
Minor performance benefits, which created issues with the glass shaders. If you don't believe me, take tommti's 3D Analyzer software and force the "PP" flag on all shaders in Half Life 2. You will see a speedup, but it's roughly in the range of 5%, and it created rendering errors in places like the "glass" shaders.
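For what it's worth, the glass example is plausible because screen-space lookups are exactly where FP16 runs out of bits. Here is a hedged, made-up HLSL sketch of a refraction-style shader (not Valve's code; names are invented): with roughly 10 bits of mantissa, a half cannot address a 1024-pixel-wide framebuffer texture to sub-texel accuracy, so forcing _pp onto the absolute coordinate can produce exactly this kind of artifact.

Code:
// Illustrative refraction-style shader; identifiers are invented.
sampler2D sceneTex  : register(s0);   // copy of the framebuffer
sampler2D normalMap : register(s1);

float4 ps_refract(float2 uv : TEXCOORD0, float4 screenPos : TEXCOORD1) : COLOR
{
    // absolute screen-space coordinate of this pixel: keep full precision
    float2 screenUV = screenPos.xy / screenPos.w * 0.5 + 0.5;

    // small perturbation from a normal map: fine as half
    half2 bump = (tex2D(normalMap, uv).xy * 2 - 1) * 0.02;

    // if screenUV were demoted to half, the quantization would show up as
    // visible banding/swimming in the "glass"
    return tex2D(sceneTex, screenUV + bump);
}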

Quote:
Anyways NV35 "did" force pp in HL2
The NV35, aka the Geforce FX 5900, uses DirectX 8.1 as its default renderer for HL2. It always has. I wasn't talking about the Geforce 5 cards, was I?

Quote:
About the 6800: I even provided the links (beyond3d evaluations in my first post). The 6800 was FP32 throughout the pipeline, of course. However, the use of PP alleviated some register pressure and freed up ~30% performance, depending on the shader (some received no benefit)
Yes, in a pure throughput test where the primary bottleneck was register pressure, the Geforce 6 series could lose up to 30% performance using FP32. However, that is a synthetic test; it doesn't reflect how shaders actually run in modern games, where one shader passes through the pipeline and then another comes through. These specific tests are designed to run the same shader code through the GPU pipeline repeatedly as a benchmark. It's not a real-world result, which is why you never saw this kind of performance deficit using FP32 versus FP16. The performance benefits in Half Life 2 were not there.

Quote:
Here is a bit on DX9.0 and partial precision: http://www.beyond3d.com/content/interviews/23/
And your point? I was not arguing that partial precision was good or bad. I said that the performance impact on Geforce 6/7 hardware was not large when moving up from FP16, especially compared to the Geforce FX. Everyone knows the Geforce FX did not have the register space to run DirectX 9.0 shaders quickly in FP16, let alone FP32. The NV35, in comparison to the NV30, did replace two of its integer units with FP units, but it was still confined by register space, making it unable to cope with heavy register usage.

Quote:
Minimum Dx9.0 was always considered S16e7. Ok? PP is an nVidia addition to the spec, which Microsoft allowed.
That's not what you said. You said PP is not part of the DirectX 9.0 spec. That is wrong. Partial precision is in fact part of all recent DirectX 9.0 builds, and has been since the debut of the Geforce FX. Yes, it may not have originally been intended for the DirectX 9.0 specification; it was, however, updated to be included. I am not saying otherwise either.

Quote:
It was always clear this was an extension to MS spec at nvidia's request, and not orthodox Directx 9.0. Anyways, like I said, I don't particularly care what you believe.

See unlike you, I enjoy facts!
There is nothing "non-factual" about what I said. SM 3.0 was not initially part of the DirectX 9.0 spec either. The fact is, PP is and has been a supported standard for the majority of the time DX 9.0 accelerators have been available. Yes, the spec may have been changed, but that doesn't alter the fact that your initial comment about PP "not being DX spec" was wrong, because it is part of the baseline DirectX 9.0 spec: before SM 2.0a, before SM 2.0b, and before SM 3.0. You seem to suffer a major point of confusion regarding the Geforce 5, Geforce 6, Geforce 7, and Geforce 8 series, all of which behave very differently when executing FP16 or higher-precision calls. You do realise that all your links have done is reinforce my original point?

Quote:
Nvidia had no problems with its SM3.0 implementation.
This is wrong. There were some drawbacks to Nvidia's SM 3.0 implementation. Firstly, its dynamic branching performance was insufferably slow and unusable, hence why anything that used SM 3.0 was using static branching (see Far Cry's SM 3.0 implementation). ATI even ran a whole campaign for the X1900 XT in which they stated they did "SM 3.0 right", because their dynamic branching granularity was much better than Nvidia's, and they supported HDR with AA (though HDR was not specific to SM 3.0). It wasn't until the Geforce 8 that Nvidia actually supported AA + HDR via the ROPs.
ChrisRay is offline   Reply With Quote
Old 07-31-09, 10:11 AM   #22
ChrisRay
Registered User
 
 
Join Date: Mar 2003
Location: Tulsa
Posts: 5,101
Default Re: EQ2 Shader 3.0 upgrade

Quote:
The game running a bit slower on nvidia hardware at the time was at least partially due to the 6800 nearly always executing shaders a bit slower than the X800/X850
You have been trying to correlate the initial problems perceived with this game on Nvidia hardware (i.e. the major stuttering problems that many believed to be due to the SM 3.0 implementation) with the shader performance of the Nvidia cards. This is not the case. The performance problems experienced with early builds of Everquest 2 were not shader-related; they were an entirely different issue that Sony and Nvidia fixed together, which is why the same hardware can play the game today without the stuttering that was apparent in early builds.

You seem to keep operating under the assumption that this performance problem had anything at all to do with shader performance. It didn't. It was an entirely different issue. There are many functional aspects of a graphics card that go beyond its shaders, and this issue just happened to be one that wasn't related to shader performance.

As I said in a prior post, the only reason I dumped the EQ 2 shader code was to show that the performance problems were completely unrelated to SM 3.0, which a lot of people believed at the time. Performance differences between the X800 and Geforce 6, or any pieces of hardware, are not limited to the capability of their shader units. Pixel fillrate, texture fillrate, Z-fill, and AA cycles within the ROPs all play an integral role in the performance of a piece of hardware.

At this point I have gone past sheer annoyance, hence I am done responding to you. I will close with this: nothing I have said is factually incorrect, and everything you have linked has reinforced my posts. That being said, I am going back to the initial discussion.
ChrisRay is offline   Reply With Quote
Old 07-31-09, 10:32 AM   #23
ChrisRay
Registered User
 
 
Join Date: Mar 2003
Location: Tulsa
Posts: 5,101
Default Re: EQ2 Shader 3.0 upgrade

My original post was directed at the irony that a lot of people believed EQ 2 was using SM 3.0 because of Sony and Nvidia's heavy promotion of the game upon release, despite neither company claiming the game was SM 3.0. Part of my chuckle was related to what I remember going through four and a half years ago on this subject. As I said, in hindsight I regret dumping the shader code, though it probably would have come to light eventually. It gave a lot of people the wrong impression as to why EQ 2 was having performance issues with Geforce 6 cards, which were unrelated to the shader code.

For anyone who's actually interested in the topic at hand, namely EQ 2's use of SM 1.1 and how it has impacted the lighting model, one of the EQ2 devs has been kind enough to discuss it in rather good detail.

http://forums.station.sony.com/eq2/p...opic_id=454116

It's very interesting, because he talks about how EQ 2 was built from the ground up as an SM 1.1 game and how the engine is partially limited by that, and how the SM 3.0 upgrades will primarily affect lighting and light sources from your character. The beauty of such a change is that it will likely have no impact on a modern GPU's performance at a moderate resolution such as 1680x1050, because the pixel shaders are not the game's primary bottleneck. For most people this will be a quality gain at no performance cost, and on some hardware there may even be minor speedups.
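For readers who want a sense of why the shader model matters for lighting specifically, here is a hedged sketch (not EQ2's shaders; every name here is invented) of the sort of thing an SM 3.0 pixel shader can do in one pass that a ps_1_1 shader, with its 8-arithmetic-instruction limit and no flow control, simply cannot express: accumulating several per-pixel point lights around the character.

Code:
// Sketch only: multiple point lights accumulated per pixel in ps_3_0.
#define MAX_LIGHTS 4
float3 lightPos[MAX_LIGHTS]   : register(c0);
float3 lightColor[MAX_LIGHTS] : register(c4);
sampler2D diffuseMap          : register(s0);

float4 ps_lit(float2 uv : TEXCOORD0, float3 worldPos : TEXCOORD1,
              float3 n : TEXCOORD2) : COLOR
{
    float3 albedo = tex2D(diffuseMap, uv).rgb;
    float3 accum  = 0;
    for (int i = 0; i < MAX_LIGHTS; ++i)   // the compiler may unroll this or
    {                                      // emit ps_3_0 loop instructions
        float3 toLight = lightPos[i] - worldPos;
        float  atten   = 1.0 / (1.0 + dot(toLight, toLight));  // simple falloff
        accum += lightColor[i] * saturate(dot(n, normalize(toLight))) * atten;
    }
    return float4(albedo * accum, 1);
}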

He also goes into the possibility of introducing geometry instancing for future foliage, which could improve performance in some CPU-limited scenarios where foliage is a culprit for hurting performance. This is likely the biggest spot where you'll see performance benefits.
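As a rough illustration of what that would look like on D3D9 hardware (a sketch under assumptions, not EQ2's implementation; identifiers are invented), hardware instancing feeds a per-instance world transform through a second vertex stream, configured on the CPU side with IDirect3DDevice9::SetStreamSourceFreq, so that thousands of foliage meshes can be submitted in a single draw call instead of one call each.

Code:
// Illustrative vs_3_0 vertex shader for hardware instancing.
float4x4 viewProj : register(c0);

struct VSIn
{
    float4 pos  : POSITION;    // per-vertex data, stream 0
    float2 uv   : TEXCOORD0;
    float4 row0 : TEXCOORD4;   // per-instance world matrix rows, stream 1
    float4 row1 : TEXCOORD5;
    float4 row2 : TEXCOORD6;
    float4 row3 : TEXCOORD7;
};

struct VSOut
{
    float4 pos : POSITION;
    float2 uv  : TEXCOORD0;
};

VSOut vs_instanced(VSIn i)
{
    float4x4 world = float4x4(i.row0, i.row1, i.row2, i.row3);
    VSOut o;
    o.pos = mul(mul(i.pos, world), viewProj);   // object -> world -> clip space
    o.uv  = i.uv;
    return o;
}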
ChrisRay is offline   Reply With Quote
Old 07-31-09, 11:05 AM   #24
pakotlar
Guest
 
Posts: n/a
Default Re: EQ2 Shader 3.0 upgrade

Quote:
Originally Posted by ChrisRay View Post
Except you are wrong. A lot of people did believe EQ2 was a DirectX 9.0c game using Shader Model 3.0. That's the entire reason I dumped the shader code: I wanted to disprove that. What it sadly did was leave a lot of people assuming the performance problems were related to EQ 2 using 1.1 shaders rather than 2.0/3.0 shaders. There was no concrete information on EQ 2's shader technology and code when the game was released, and even to this day Sony keeps that code close to their chest. In hindsight, I wish I hadn't dumped the code, because it created a ton of negative feedback regarding something most people did not seem to understand.

The only irony in this thread is people like you who still seem to believe that EQ 2's stuttering problems and underwhelming performance were related to the shader implementation, when in fact it had nothing to do with that at all. As I said, anyone can load up a Geforce 6/7 card these days and get no stuttering at all. Even more amusing is your attempt to turn this into some ancient X800 versus Geforce 6 debate, as if anyone gives a rat's colon anymore. I spent more time working on this issue with Nvidia than perhaps any other large software problem I have dealt with to date.

Far Cry did not use dynamic branching. It used static branching. The only difference between the ATI and Nvidia pathways was that Nvidia got more shader instructions in a single pass, whereas ATI was limited because they couldn't exceed 128 instructions. Not every shader benefited from this; in fact there were only a few places where these performance gains were even relevant. The gains were only seen in heavy lighting situations where the shaders were used to draw lighting. It's no coincidence that EQ 2 is using its new SM 3.0 code to improve lighting and shadowing.


Dynamic branching on the Geforce 6/7 cards was, and is, very slow. Even today it is not used on these cards because of the performance impact. I suggest you read Wikipedia to actually learn what the Geforce 6/7 cards did compared to their counterparts at the time. It wasn't until the Geforce 8 series that Nvidia increased their dynamic branching performance to the point where it was beneficial to use. This isn't something that was made up; it's pretty common knowledge about the NV4x hardware.
Dynamic branching on the Geforce 6 wasn't optimal, but it was useful. Once again, Chris, you're absolutely wrong about its performance impact. It could certainly be beneficial for performance, and it was faster than static branching on that hardware. Geforce 6's coarse granularity obviously hampered branching performance, but it wasn't "broken", just less useful than in future iterations. Let's see: http://ixbtlabs.com/articles2/gffx/nv43-p02.html
http://www.behardware.com/articles/5...e-6600-gt.html
This is getting boring. Please for the love of god do some research.

About dynamic branching in Far Cry: I'll check your link out in a second. It does seem that most of the performance benefit came from the longer shader instruction allowance, and that Crytek chose static branches and unrolled loops in lieu of dynamic flow control. Great, but why is this important? It still stands that SM3.0 was fine on the Geforce 6.

I never commented on EQ2 stuttering, just the lower overall performance.

Finally, once again: Chris, the Geforce 6's slower performance with SM1.1 shader code OBVIOUSLY MUST AFFECT ITS PERFORMANCE WITH SM1.1 SHADERS IN-GAME. I hope you understand this. It's not the only factor with regard to EQ2, but given that this is probably the 6800 Ultra's weakest point compared to the X800 XT PE or X850 XT, it is certainly an obvious one. I'm not sure what percentage of run-time was spent executing shader code, but it was clearly significant. The only reason you're arguing this is that someone is challenging your authority. It's silly, my friend.

I'm completely sick of talking to you. I can admit when I don't know something. I was wrong about dynamic flow control in Far Cry, although that was an easy mistake to make, and it does not take away from my assertion that SM3.0 helped the Geforce 6. I'm not sure what you're trying to argue either. One moment you say that people are wrong because they blamed EQ2's poor performance on Geforce 6's SM3.0 implementation, the next you're arguing that Geforce 6 performance on SM1.1-SM2.0 code is a non-factor in in-game performance, and then you're saying that the SM3.0 implementation is broken in Geforce 6 hardware. It's complete nonsense.

SM3.0 improved performance on the Geforce 6, SM1.1-2.0 code was slower overall compared to the competition (sometimes dramatically, especially with SM1.1 shaders), and EQ2 was certainly slower on the Geforce 6 in part because of its slower performance on SM2.0-model code (1.1-2.0). One caveat here is that even without PP calls, the Geforce 6 wasn't necessarily slower per clock on all shader code. It sometimes lost quite dramatically (once again, up to 2x slower with SM1.1 shaders), but sometimes the loss was simply down to its lower clock speed.
  Reply With Quote