Originally Posted by XMAN52373
I forget who over there, but someone posted a link to a 16-core PhysX run of a fluid sim that was about on par with a 9600GT. Sixteen cores to equal one GPU for a fluid sim. Some things are just better left to GPUs for now. And having looked at the PhysX SDK, it is the same one used for consoles, which somehow manage to work fine with multi-core CPUs. I'm more inclined to believe this is dev laziness than Nvidia somehow blocking CPU cores..
A: A benchmark designed on and for a GPU doesn't run well on a CPU. If it were re-coded to run better on a CPU, it would probably be a closer match. It would still lose, of course, but the gap would be narrower.
B: It's not about benchmarks like "fluid sim," which I'm guessing is FluidMark. It's about what is needed for physics in games. Yes, a GTX 280 could crush a CPU in some arbitrary physics benchmark, but is anyone going to use that much power in a game? No. Do they need that much power? No. If they had that much power on every system, could they even use it in games? I doubt it. My point was that we're at the point where a few CPU cores will be enough, and with 6-8 cores in the future, you will certainly have enough power on the CPU.
C: Last time I checked, FluidMark wasn't a game. Last time I checked, no game has effects even remotely similar to what FluidMark runs. So what does FluidMark have to do with games? Nothing. It's FurMark, but for physics.
D: Your final point is that devs are lazy and that's why PhysX barely uses multiple cores, right? So you're saying devs are lazy, so they spend extra time coding for a feature (PhysX) that most people won't use, but they won't spend extra time coding for something most people can use, i.e. multicore support. Is that right?
Edit: Upon further review, in the B3D thread you referenced, everyone seems to be arguing against you, and you got banned for it. Not sure you wanted to bring that up in support of your case...