PhysX Benchmarks for 280GTX



bent98
07-07-08, 07:41 PM
I found this article, which compares PPU/CPU/GPU benchmarks with the UT3 mod pack. It benches the 9800 GTX and shows that at higher resolutions, game performance is impacted when handling both 3D and PhysX on the same card. I am curious if anyone can provide the same benchmarks with the GTX 280?

http://techgage.com/article/nvidias_physx_performance_and_status_report/3

SH64
07-08-08, 01:43 AM
Hmm interesting results .. thanks for the link!!

Ninja Prime
07-08-08, 04:00 AM
This is fairly obvious, although fanboys don't like to admit it. If you use some of the GPU's power for something else, the performance in graphics suffers. You don't see this in Vantage because the PhysX test doesn't do anything major with graphics, so it can just go to town with the physics calculations.

When it comes to games, however, if you're using 15-20% of your GPU's power for PhysX, you're going to lose 15-20% of your graphics horsepower. I suppose as long as GPUs have way more floating-point performance than CPUs, and as long as physics takes a sizeable chunk of said calculations, then it makes sense to offload the calculations there rather than to the CPU. Don't fool yourself into thinking it's free, though.

Ninja Prime
07-08-08, 04:03 AM
10 FPS decrease is not that bad, at least you don't need an additional card anymore!

10 frames out of 60 is a ~17% loss in performance.

ASUSEN7900GTX
07-08-08, 04:22 AM
Seems a PhysX card is still the better choice. This means every time a GTX 280 owner plays games like GRAW 1 and 2, they lose some performance compared to someone using a separate PhysX card... or am I wrong?

Ninja Prime
07-08-08, 04:40 AM
Seems a PhysX card is still the better choice. This means every time a GTX 280 owner plays games like GRAW 1 and 2, they lose some performance compared to someone using a separate PhysX card... or am I wrong?

No, that's wrong actually. Only games that support the GPU version of PhysX work with GPUs, and the only game that does is Unreal Tournament 3. Even then, only three levels support it, and they aren't included with the game; you have to download them. Almost seems like they tried to make it hard, lol.

harl
07-08-08, 04:55 AM
10 frames out of 60 is a ~17% loss in performance.

Yes, and from 31 FPS (CPU) to 50 FPS (GPU) there is a ~61% increase...
(at 1680x1050)
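For reference, the two percentages being traded in this exchange work out as follows (a minimal sketch; 60 vs. 50 FPS and 31 vs. 50 FPS are the figures quoted in the thread):

```python
# Check the percentages quoted in the thread:
# a 10 FPS drop from 60, and the 31 -> 50 FPS CPU-to-GPU jump
# reported by Techgage at 1680x1050.

def pct_loss(before, after):
    """Percent drop going from `before` FPS to `after` FPS."""
    return (before - after) / before * 100

def pct_gain(before, after):
    """Percent increase going from `before` FPS to `after` FPS."""
    return (after - before) / before * 100

print(round(pct_loss(60, 50), 1))  # 60 -> 50 FPS: ~16.7% loss
print(round(pct_gain(31, 50), 1))  # 31 -> 50 FPS: ~61.3% gain
```

Note the asymmetry: a loss is measured against the higher baseline, a gain against the lower one, which is why the same 19 FPS gap reads as a much bigger percentage in one direction.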

bent98
07-08-08, 06:14 AM
Keep in mind this is all based on a 9800 GTX card. Of course there will be a performance hit with any card; my question is still what the numbers are with a faster card like a GTX 280. Are the PhysX benchmarks acceptable at high resolutions? Also, what is the impact when you have 2 or 3 cards in SLI? I would assume most games will be playable, except for Crysis.

With quad-core processors and 1000W PSU requirements for GPUs, it's amazing that OSes, video card drivers, and software developers can't optimize everything to take full advantage of all this power.

Ninja Prime
07-08-08, 06:15 AM
Yes, and from 31 FPS (CPU) to 50 FPS (GPU) there is a ~61% increase...
(at 1680x1050)

Check out the one below that: once the GPU is strained, the CPU version runs at the same rate.

Ninja Prime
07-08-08, 06:24 AM
So? 50 frames are still 50 frames, a very playable frame rate! I don't care about % values; frame rate values are the real deal.

I don't think you get the point...

17% is a lot. For some people, 17% might be the difference between choppy and not choppy. 17% might be the difference between 4x AA and 8x AA. 17% might be the difference between running at 1600x1200 or having to lower it to 1280x1024. 17% might mean the difference between high settings and very high settings. 17% can mean a lot.

Seems like GPU PhysX is horribly inefficient compared to a PhysX card's chip, but that's to be expected, though I didn't think it would be that much. However, if this was done on a 9800 GTX+, then the GTX 280 might only take half that hit, around 8-9%, since it has twice the floating-point power. I say might because I'm not sure if it scales like that, but it seems logical.
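The scaling argument above can be sketched as a naive frame-budget model (an illustrative assumption, not a measured result): if physics consumes a fixed fraction of the GPU's time per frame, and a GPU with double the FLOPS spends roughly half that fraction on the same physics load, the fps hit shrinks accordingly.

```python
# Naive frame-budget model (assumption for illustration only):
# the fps hit equals the fraction of GPU time given to physics.

def fps_with_physics(base_fps, physics_fraction):
    """FPS after spending `physics_fraction` of GPU time on PhysX."""
    return base_fps * (1.0 - physics_fraction)

# 9800 GTX-class card: assume ~17% of the frame goes to physics.
print(fps_with_physics(60, 0.17))       # ~49.8 FPS

# A GPU with twice the FLOPS would spend roughly half that
# fraction on the same physics load, if it scaled linearly.
print(fps_with_physics(60, 0.17 / 2))   # ~54.9 FPS
```

Real scaling is unlikely to be this clean, since physics and rendering contend for memory bandwidth as well as shader time, which is exactly the "I'm not sure if it scales like that" caveat above.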

spajdr
07-08-08, 06:55 AM
2560x1600 isn't a good resolution for a single 9800 GTX+ anyway, so whoever has a 30" display to run that resolution mostly also has the money to spend on something better than this card.

Sowk
07-08-08, 07:10 AM
2560x1600 isn't good for any single card... Not even a GTX280.

This is High End SLI territory... 8800 GTX SLI / 260 GTX SLI / 280 GTX SLI

And still not two 9800 GTXs, because of memory bus limitations.

SH64
07-08-08, 07:55 AM
at least you don't need an additional card anymore!
Agr33d.

duffy_chucky
07-08-08, 08:28 AM
Check out the one below that: once the GPU is strained, the CPU version runs at the same rate.

But the real question is: are the PhysX effects identical with GPU and CPU acceleration?
If the number of particles and objects is divided by 50, then it's not a fair performance comparison.

walterman
07-08-08, 09:10 AM
Well, I think nobody was expecting a higher framerate with PhysX on the GPU. If you spend shading power on the PhysX calculations, your framerate will be lower. Anyway, with the many-core architecture that Intel is developing, I would prefer to keep the physics on the CPU and the gfx tasks on the GPU. Divide et impera :)

SH64
07-08-08, 09:26 AM
I'd still prefer the all-in-one approach, specifically for this case (i.e. graphics + physics in one solution).

-=DVS=-
07-08-08, 01:19 PM
Like it or not, it's the future; it's not like Nvidia is going to start selling PhysX add-in cards.

Shocky
07-08-08, 01:26 PM
I'd like to see the same tests done with the GTX 280/260. I'm really not that surprised that a 9800 GTX can't handle both at the same speed as a dedicated PPU with high settings.

hell_of_doom227
07-08-08, 01:31 PM
Nvidia could sell a separate physics card for ATI users :D

SH64
07-08-08, 05:23 PM
Nvidia could sell a separate physics card for ATI users :D
That would be mean :o

Ninja Prime
07-08-08, 06:50 PM
Nvidia could sell a separate physics card for ATI users :D

Perhaps you missed the article where ATI's cards, with a PhysX driver conversion, are considerably faster than Nvidia's? Yeah, I thought so.

Ninja Prime
07-08-08, 06:56 PM
Well, I think nobody was expecting a higher framerate with PhysX on the GPU. If you spend shading power on the PhysX calculations, your framerate will be lower. Anyway, with the many-core architecture that Intel is developing, I would prefer to keep the physics on the CPU and the gfx tasks on the GPU. Divide et impera :)

I suspect GPU physics is only temporary. When a CPU comes out that can handle it just fine, they'll probably move back to the CPU to keep GPU performance up. I'd imagine that the successor to Nehalem, with 512-bit vector instructions, could do enough physics calculations on a single core, with easier programming than on a GPU. That's two years away though; with Larrabee, they might just have a single x86 code path for physics that can run on either the CPU or a Larrabee GPU as needed.

bent98
07-10-08, 01:59 PM
http://www.extremetech.com/article2/0,2845,2324322,00.asp

This was an interesting article about what the future may hold for how physics and multiple GPUs are handled. Seems like we are probably two years off from where we need to be: OpenCL, DX11, Intel's next-gen CPU offerings.