Old 07-31-09, 08:47 AM   #19
pakotlar
Guest
 
Posts: n/a
Default Re: EQ2 Shader 3.0 upgrade

Quote:
Originally Posted by ChrisRay
Yes PP is DirectX 9.0 Spec. It's been a flag for all DirectX 9.0 compiler since the second iteration of it. For SM 3.0 Compliance you must support a minimum of FP32. However you are not required to use and you can still use partial precision flags in your compiler.

Half Life 2 didn't use PP because there was no gain to be had. Using any kind of DirectX modifying program such as 3DAnalyze let you force the entire game to FP16 partial precision. And the performance gain was minimal for Geforce 6 cards ((and 7 consequently)) very similar to my Far Cry post above. The performance benefit was not to be seen and some of the shaders had rendering issues due to being designed around a 24 bit precision. Arguably it could have been done in 16 bit but the art team focused its efforts on the 24 bit or higher. However it was not worth the effort as the Geforce 6/7 hardware ran FP32 nearly as fast. The only hardware that would have seen tremendous speedups was the NV3x cards. But they were too slow at even FP16 rendering to even bother with. Let alone FP32.

Next time you want to "educate" me on the irony of game code and how "Delusional" I am. At least understand what the hell you are talking about when you reference technology. Because you dont seem to know a damn thing about what you are talking about. Also do me the favor of not quoting me out of context in this silly post war you are trying to have with me. Those posts are nearly 5 years old. Who cares? All they show is I dedicated an enormous amount of my personal time to bug testing EQ 2 for Nvidia.
First off, who claimed that the GeForce 6 wasn't FP32 throughout? Btw, before we continue, a quick trivia question: for a card that renders internally at 32-bit integer precision, is there any benefit to lowering the game's color depth to 16 bits?

I'll hit the rest of your post a little later, but these portions are pretty laughable. The reason PP wasn't exposed in HL2 has NOTHING to do with performance benefits. Of course PP has performance benefits. The reason HL2 shipped without explicit PP shaders is that PP was never considered full DX 9.0 spec, and FP16 shaders often compromised quality. Valve was never happy with nVidia's solution, so they didn't use it. That's it. Anyway, the NV30/35 drivers "did" force PP in HL2, flagging certain shaders and replacing them with PP versions exposed through the drivers. Ex: http://techreport.com/articles.x/5642.

Like you just said yourself, PP was introduced in the 2nd revision of DX 9.0. It was a shoehorn attempt by nVidia to give the NV30 a chance to compete. The reason it's called partial precision is that it quite literally is not the full precision DirectX 9.0 requires at a minimum. That's why you have to flag the shader. No need to educate me on things like this, Chris.

Here, maybe you should read up on this yourself before berating me with unsupported claims: http://www.beyond3d.com/content/interviews/23/

B3D: "Note the word "choices". DX9 specifies that for hardware to be DX9 "compliant", the minimum specification for floating point is 24-bit. 24-bit Floating Point is essentially equivalent to 96-bit colors"
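On the precision point, the gap between FP16 and the DX9 FP24 minimum is easy to demonstrate numerically. Here's a quick sketch using Python's stdlib `struct` module (its `'e'` format is IEEE 754 binary16, the same layout as GPU FP16; this is just an illustration, not shader code) showing how FP16's 10-bit mantissa collapses nearby values:

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a Python float through IEEE 754 binary16 (FP16)."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# FP16 carries a 10-bit mantissa: above 2048, consecutive integers can no
# longer be told apart, so adjacent values snap together.
print(to_fp16(2048.0))  # 2048.0 -- exactly representable
print(to_fp16(2049.0))  # 2048.0 -- rounded, precision lost
print(to_fp16(0.1))     # ~0.09998 -- coarse fraction
```

FP24 (16-bit mantissa) and FP32 (23-bit) keep far more of these values distinct, which is why shaders authored against 24-bit-or-better precision could band or break when forced down to FP16.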

About the 6800: I even provided the links (Beyond3D evaluations in my 1st post). The 6800 was FP32 throughout the pipeline, of course. However, using PP alleviated some register pressure and freed up ~30% performance, depending on the shader (some received no benefit). The 6800 was slower in a vast number of shaders, 1.1 to 2.0, so using PP where possible was still preferable, although not nearly as critical as in the NV30/35's case. Still, with 1.1 shaders you could find examples where ATI's 2nd-gen DX 9.0 part performed twice as fast as the NV40. In some cases the PP version allowed the GeForce 6 to catch up with ATI or even pull ahead, although in many cases register pressure wasn't a bottleneck and the 6 series saw no gains. Again, all the information is out there.
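For anyone wondering how precision affects speed on a part that runs FP32 math at full rate: it's largely register pressure. Pixel shader units share a fixed register file, and the fewer registers each in-flight pixel holds, the more pixels can be kept in flight to hide texture latency. A toy model (the numbers here are made up for illustration, not actual NV40 specs) of why packing values into half-precision helps:

```python
# Toy occupancy model. REGISTER_FILE_SLOTS and the packing ratio are
# hypothetical illustration values, not real NV40 hardware figures.
REGISTER_FILE_SLOTS = 256  # FP32-sized register slots shared per shader unit

def threads_in_flight(live_fp32_regs: int, use_partial_precision: bool) -> int:
    # With _pp hints, two FP16 values can share one FP32-sized slot,
    # halving each thread's register footprint.
    slots = live_fp32_regs / 2 if use_partial_precision else live_fp32_regs
    return int(REGISTER_FILE_SLOTS // slots)

print(threads_in_flight(8, False))  # 32 pixels in flight
print(threads_in_flight(8, True))   # 64 pixels in flight
```

Which is why the gains were shader-dependent: when a shader's live register count wasn't the bottleneck, halving it bought nothing, matching the "some received no benefit" cases above.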

That's why I'm saying discussing this with you isn't going anywhere. Let's just drop it, ok? No one is trying to "educate" you. I really don't care what you believe. It's just that certain things are true, and some of what you're claiming isn't, like in the quote above. Literally all of the empirical bits are wrong (minus your dedication to EQ2...).

See, unlike you, I enjoy facts! Btw, the fact that you beta tested drivers for nVidia really doesn't impress me. I think it'd be interesting to talk about in a different context, but there's a reason students of debate are taught to immediately call out argument from authority (it's an obvious logical fallacy). You'll never see good logicians or politicians supporting empirical assertions with their own authority in the subject matter; it gets immediately picked apart and you look like a fool (because really, what does your authority have to do with the validity of a fact? It is either true or not true). Anyway, I've already given you props for being involved in exposing EQ2 as a non-SM3.0 game, so there's no need to stuff our faces with what we already give you credit for. Doing that achieves the opposite effect of what I know you're intending.