
View Full Version : I have seen the future....



shadow001
03-24-10, 05:08 PM
Posted the benchmarks above, but wanted to comment on this little snippet. DX11 is as worthless as physx until people of both camps can run it.


Hence why I bought the cards primarily for triple-display gaming, of which almost 30 games are officially supported, and the list keeps growing with every driver release.


I'm fully aware that the DX11 advantage is still a small one even in the few games that are starting to use some of its features, as the gains in overall graphics quality are minimal.... The one that sticks out more here is Aliens vs. Predator, with its use of higher levels of tessellation.

Toss3
03-24-10, 05:17 PM
Always thought nVidia simply offered PhysX to try to differentiate themselves from their competitors and to offer value for their customers. If one doesn't like it or can't find the right balance - they can disable it.
This is all well and fine as long as it doesn't affect owners of other brands. PhysX does; if a developer chooses to support GPU-accelerated PhysX, it means fewer effects for users not running Nvidia hardware, effects that are there in other games and don't require a dedicated physics processor. Take Mirror's Edge as an example: lots of objects are removed from the game when PhysX is turned off, like glass shards, flags, dust, etc. These effects were there in games like Red Faction and Max Payne a very long time ago. So how come they suddenly became too much to handle for a CPU? Batman suffers from the same adverse effect.

Or they can try to find the right balance if they choose to, with a single GPU, or invest in a discrete PhysX card. For someone like me, my dated 8800 GT, which was collecting dust as a paperweight, was reborn to offer a bit more gaming. Sure, I would like to see more compelling content and GPU PhysX ported to OpenCL, but while I wait for more maturity and idealism, I can enjoy some content now at no extra cost. I think it's neat to see a dated GPU be used to improve gaming; I sure wish I could do that with a dated CPU.

I had an 8800GTS prior to the 5850 I have now, but due to Nvidia's aggressive marketing it became nothing more than a paperweight. It's not like I'm trying to run PhysX on an ATI card; I'm trying to run it on a card that has PhysX support written on its box! Nowhere does it say that this support is limited to running only Nvidia hardware. A boneheaded move on Nvidia's part that only hurts the consumer, as did the move to drop support for Ageia's PPUs.

shadow001
03-24-10, 06:01 PM
Now that I've looked at Toss's Mirror's Edge results, it just doesn't add up overall. The Ageia card sits right in the middle between the results using a 9600GT for physics and those using a 9800GT for the same, and there's only about a 5 FPS difference among the top 3 setups in both minimum and average FPS, so that begs at least 2 questions:


1: Why drop support for Ageia's card if it's still performing very nicely overall, with more than high enough FPS values for smooth gameplay?

2: The 9600GT has 64 shaders available, which is what's used to calculate the physics, while the 9800GTX has 128 shaders, so it has much more floating-point power available to process the physics calculations, yet the overall FPS is only about 5 FPS faster than using the 9600GT for physics, and 2 FPS faster than using the supposedly outdated Ageia physics card, in both minimum and average FPS.


And switching to higher resolutions and graphics settings only puts more pressure on the graphics portion of the overall workload, not the physics calculations, which should remain the same regardless. So in the end, did Nvidia end the Ageia physics processor's lifespan much earlier than it really had to, at least going by the Mirror's Edge results? Does the physics workload need to be much higher than what's used in Mirror's Edge before the Ageia card would bog down overall performance too much and become unplayable? It looks like Nvidia terminated that product's usable lifespan way too early from here.
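The shader-count argument can be put in rough numbers. Here's a back-of-the-envelope sketch of peak shader throughput; the clock figures are assumptions taken from the public reference specs (1625 MHz shader clock for the 9600GT, 1688 MHz for the 9800GTX), counting 3 floating-point ops per shader per cycle (MAD + MUL) as commonly quoted for G9x parts:

```python
# Rough theoretical shader throughput for G9x-class cards.
# Clocks and flops-per-clock are assumptions from public spec sheets,
# not measurements from the thread.
def theoretical_gflops(shaders, shader_clock_mhz, flops_per_clock=3):
    """Peak single-precision GFLOPS = shaders * clock * ops per cycle."""
    return shaders * shader_clock_mhz * flops_per_clock / 1000.0

cards = {
    "9600GT  (64 SP @ 1625 MHz)": theoretical_gflops(64, 1625),
    "9800GTX (128 SP @ 1688 MHz)": theoretical_gflops(128, 1688),
}
for name, gflops in cards.items():
    print(f"{name}: ~{gflops:.0f} GFLOPS")
```

On those assumed clocks the 9800GTX has roughly double the peak FLOPS, which makes the ~5 FPS gap in the Mirror's Edge numbers look even more like the physics work isn't the bottleneck.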

Toss3
03-24-10, 06:09 PM
Now that I've looked at Toss's Mirror's Edge results, it just doesn't add up overall. The Ageia card sits right in the middle between the results using a 9600GT for physics and those using a 9800GT for the same, and there's only about a 2~3 FPS difference among the top 3 setups, so that begs at least 2 questions:


1: Why drop support for Ageia's card if it's still performing very nicely overall, with more than high enough FPS values for smooth gameplay?


Because this way they can sell additional cards to previous PPU owners.

2: The 9600GT has 64 shaders available, which is what's used to calculate the physics, while the 9800GT/GTX has 112 to 128 shaders, so it has more floating-point power available to process the physics calculations, yet the overall FPS is only about 5 FPS faster than using the 9600GT for physics, and 2 FPS faster than using the supposedly outdated Ageia physics card, in both minimum and average FPS.
I don't think you can derive how well GPU PhysX calculates physics from a benchmark limited by FPS.

And switching to higher resolutions and graphics settings only puts more pressure on the graphics portion of the overall workload, not the physics calculations, which should remain the same regardless. So in the end, did Nvidia end the Ageia physics processor's lifespan much earlier than it really had to, at least going by the Mirror's Edge results?.... It looks like it.

I don't get why they dropped support for it. It still works with the same hack as the one for ati+physx though.

XMAN52373
03-24-10, 06:20 PM
1: Why drop support for Ageia's card if it's still performing very nicely overall, with more than high enough FPS values for smooth gameplay?

2: The 9600GT has 64 shaders available, which is what's used to calculate the physics, while the 9800GTX has 128 shaders, so it has much more floating-point power available to process the physics calculations, yet the overall FPS is only about 5 FPS faster than using the 9600GT for physics, and 2 FPS faster than using the supposedly outdated Ageia physics card, in both minimum and average FPS.

1. Probably because they stopped making Ageia-based cards after they bought the company, and they're working towards increasing the load PhysX is put under in future games, which would render the Ageia PPU useless.

2. The 9600GT is a very efficiently designed GPU chip. Its performance in games is about equal to the 8800GTS (G80, 96 SP), and two in SLI are about equal to a GTX260, 128 SP vs. 192 or 216, take your pick.

shadow001
03-24-10, 06:25 PM
Because this way they can sell additional cards to previous PPU owners.


Yup... Good old-fashioned greed.


I don't think you can derive how well GPU PhysX calculates physics from a benchmark limited by FPS.

And what's limiting FPS in the first place?.... I mean, if we need to use a faster CPU to get even higher FPS results, so that there are more physics calculations to be handled regardless of the option you pick, it kind of defeats the purpose of using it in the first place, since one of the marketing angles used for GPU physics is not having to buy the fastest CPU possible, since the GPU is taking over the physics calculations, leaving less work for the CPU anyhow.


It's one of the grey areas regarding the issue to say the least.

shadow001
03-24-10, 06:29 PM
1. Probably because they stopped making Ageia-based cards after they bought the company, and they're working towards increasing the load PhysX is put under in future games, which would render the Ageia PPU useless.

2. The 9600GT is a very efficiently designed GPU chip. Its performance in games is about equal to the 8800GTS (G80, 96 SP), and two in SLI are about equal to a GTX260, 128 SP vs. 192 or 216, take your pick.


Still, they're comparing it to a 9800GTX+ card, which not only has 128 shaders, but they also run at a higher clock speed, so it's probably more than 2X the floating-point math ability regardless.


As for the future-game argument, with even heavier physics workloads that the Ageia chip couldn't handle, I'd say show me first once those games are released and let me decide for myself, thanks....

Toss3
03-24-10, 06:34 PM
And what's limiting FPS in the first place?.... I mean, if we need to use a faster CPU to get even higher FPS results, so that there are more physics calculations to be handled regardless of the option you pick, it kind of defeats the purpose of using it in the first place, since one of the marketing angles used for GPU physics is not having to buy the fastest CPU possible, since the GPU is taking over the physics calculations, leaving less work for the CPU anyhow.

It's one of the grey areas regarding the issue to say the least.

What I meant is that the GPU is limiting the FPS in that benchmark, not the CPU. A dedicated PhysX processor is always going to be limited by the GPU, not the other way around. You'd have to use a benchmark like FluidMark to properly compare the performance of the Ageia PPU, the 9600, and the 9800.
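The point about a GPU-limited benchmark can be illustrated with a toy frame-time model; every millisecond figure below is made up purely for illustration, not measured from any of these cards:

```python
# Toy model: per-frame cost = render time + physics time, run sequentially.
# All millisecond numbers are hypothetical illustrations.
def fps(render_ms, physics_ms):
    """Frames per second for a given per-frame render + physics cost."""
    return 1000.0 / (render_ms + physics_ms)

RENDER_MS = 14.0  # assumed per-frame render cost: this dominates

for name, physics_ms in [("Ageia PPU", 2.0),
                         ("9600GT",    1.5),
                         ("9800GTX",   1.0)]:
    print(f"{name}: {fps(RENDER_MS, physics_ms):.1f} FPS")
```

Even halving the physics time only nudges FPS by a few frames, because the render cost dominates the frame: the benchmark measures the bottleneck, not the physics processor. That's why an FPS-limited game benchmark can't rank physics hardware.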

lee63
03-24-10, 06:38 PM
I have a gut feeling this is going to be a frustrating launch and a lot of people are gonna get pissed off....I hope I'm wrong.

Razor1
03-24-10, 06:41 PM
I'm talking about the physics calculations alone, not entire programs here, and I'll look to see if there are any benchmark results comparing the Ageia card versus an Nvidia GPU on physics workloads exclusively.

It's not solely a problem of the application; it's parallelism in any instance. Hyperthreading was made to curtail that issue, but it's not an ideal solution by any means; we've had hyperthreading since the P4 days, and any good programmer knows it's better to run programs on two CPUs than to use hyperthreading. It's like 2 SIMDs in a GPU. Actually, I kind of made a mistake with the GPU: it can run hundreds of threads, but each SIMD works on an individual task, so the difference isn't in the hundreds; it's more like 5 or 6 times, unless things have changed since unification so that different work can be done within the same SIMD. That's the reason why GPUs are better for parallel tasks: they were made to handle them.

http://en.wikipedia.org/wiki/Parallel_computing

Optimally, the speed-up from parallelization would be linear—doubling the number of processing elements should halve the runtime, and doubling it a second time should again halve the runtime. However, very few parallel algorithms achieve optimal speed-up. Most of them have a near-linear speed-up for small numbers of processing elements, which flattens out into a constant value for large numbers of processing elements.
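The flattening that quote describes is Amdahl's law. A quick sketch shows the speed-up curve levelling off; the 10% serial fraction here is an arbitrary choice for illustration:

```python
# Amdahl's law: with a fixed serial fraction s, the speed-up on n
# processing elements is 1 / (s + (1 - s) / n).
def amdahl_speedup(serial_fraction, n_processors):
    """Speed-up over a single processor for a given serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

# With 10% of the work inherently serial, speed-up can never exceed
# 10x no matter how many processing elements you add.
for n in (1, 2, 4, 8, 64, 1024):
    print(f"{n:>4} processors: {amdahl_speedup(0.1, n):.2f}x")
```

Near-linear at first, then flat: exactly the behaviour described above, and why piling on shader cores (or CPU threads) stops helping once the serial part dominates.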


As for DX11 games, there's Dirt 2, Aliens vs. Predator, BattleForge, Battlefield: Bad Company 2, and a couple of others, but like I stated before, the main reason I bought them is the triple-display support in games: there are already close to 30 games in the supported list, it doesn't require developer support, and the cards have only been out 6 months on the market, compared to 2 years for GPU PhysX.

You were saying?...;):D

So that's good, right? You had a reason to buy it, and 4 games is still fewer than the number of games with PhysX. Just like nV's 3D Vision: people bought their cards for a reason.

Revs
03-24-10, 06:48 PM
I have a gut feeling this is going to be a frustrating launch and a lot of people are gonna get pissed off....I hope I'm wrong.

Shiiiit, I knew that by xmas :lol:

Toss3
03-24-10, 06:49 PM
So that's good, right? You had a reason to buy it, and 4 games is still fewer than the number of games with PhysX. Just like nV's 3D Vision: people bought their cards for a reason.

Please don't listen to him; the whole DX11 debacle isn't comparable to PhysX. 3D Vision is also a whole separate thing, as is Eyefinity.

Revs
03-24-10, 06:51 PM
I don't know why he can't just be happy with his cards and let it be.

Toss3
03-24-10, 06:53 PM
I have a gut feeling this is going to be a frustrating launch and a lot of people are gonna get pissed off....I hope I'm wrong.

Isn't that always the case when a new generation of cards is launched? I think it's part of the fun! It's like the Man U vs. Barcelona of the tech world. :)

Revs
03-24-10, 06:54 PM
:D Very true

shadow001
03-24-10, 07:08 PM
So that's good, right? You had a reason to buy it, and 4 games is still fewer than the number of games with PhysX. Just like nV's 3D Vision: people bought their cards for a reason.


DX11 has only been out a few months, and up until now only 1 GPU maker had DX11 cards on the market. DX11 is a unified standard, controlled by none other than Microsoft, so they have the final say if you want to play games on the Windows platform, not anyone else.


Unified standards always get more support and a faster rate of adoption than proprietary standards ever will, basically, no matter how much Nvidia complains about it, since it doesn't make their hardware any more special than that of competitors in the end, and they don't like that, if nothing else from a marketing standpoint.


It's also why ATI decided to support Eyefinity, since it has nothing to do with DirectX and doesn't require any special attention from developers at all, in current games already released or any future releases either. So it bypasses all the issues that Nvidia is facing by trying to push PhysX as hard as it can, and it enhances overall gameplay, like the examples I mentioned with Battlefield: Bad Company 2 making me a better player...

shadow001
03-24-10, 07:11 PM
I don't know why he can't just be happy with his cards and let it be.


I am extremely happy with my cards, btw, and the leaked numbers shown so far aren't that impressive, which makes me wonder why some waited 6 months for this.... Just my two cents.

Rollo
03-24-10, 07:15 PM
I am extremely happy with my cards, btw, and the leaked numbers shown so far aren't that impressive, which makes me wonder why some waited 6 months for this.... Just my two cents.

They don't work for ATi like you, so they think being able to see all the options available is a good thing rather than a possible layoff.......

Toss3
03-24-10, 07:17 PM
I am extremely happy with my cards, btw, and the leaked numbers shown so far aren't that impressive, which makes me wonder why some waited 6 months for this.... Just my two cents.

Because some people don't own crystal balls? :rolleyes: Besides, there weren't really that many games that needed the performance of the 58/970-series when they were launched, so many chose to stay with their previous-gen cards and see what the other camp had to offer before making a decision.

Revs
03-24-10, 07:20 PM
:banghead:

shadow001
03-24-10, 07:23 PM
They don't work for ATi like you, so they think being able to see all the options available is a good thing rather than a possible layoff.......


And I usually have no problem with that on principle, and I would wait too if the releases from both GPU makers were only weeks apart, heck, even up to 2 months apart, but waiting 6+ months... No way in hell, as life is too short.


After all this time, the only way to make up for it was if Fermi was consistently at least 25~30% faster across the board, not the much tighter race it looks like it's going to be.... Even you have to be realistic about this.


And again, for what seems like the 100th time already, I don't work for ATI.

Toss3
03-24-10, 07:24 PM
:banghead:

:lol:

Rollo, see what I mean with the arguments? I dislike his pro-ATi attitude as much as your pro-Nvidia one.

XMAN52373
03-24-10, 07:28 PM
Still, they're comparing it to a 9800GTX+ card, which not only has 128 shaders, but they also run at a higher clock speed, so it's probably more than 2X the floating-point math ability regardless.


As for the future-game argument, with even heavier physics workloads that the Ageia chip couldn't handle, I'd say show me first once those games are released and let me decide for myself, thanks....

You can have double the SPs all you want; it doesn't mean efficiency is going to improve by a lot. The G94 (the 9600GT chip) is of the G9x line but is a different design from the G92 (8800GT/S, 9800GT/X/+, GTS250), which is why, with its 64 SPs, it is on par with the G80 96 SP part and doesn't always lag behind certain G92s by a lot either. Also, depending on the 9600GT they used, it could have been a 700/1700 part (the XFX XXX edition comes clocked that way).

shadow001
03-24-10, 07:30 PM
Because some people don't own crystal balls? :rolleyes: Besides, there weren't really that many games that needed the performance of the 58/970-series when they were launched, so many chose to stay with their previous-gen cards and see what the other camp had to offer before making a decision.


There still aren't any games that need either one even now.... This isn't about what games need when you're considering buying the very highest-end cards on the market; it's about who gets the highest benchmark results, plain and simple.


One brand released their cards far earlier than the other, and they kick ass, plain and simple, so comparisons are bound to be made between both brands: the overall performance differences, and whether they were worth all the waiting.


It would be naive to think that this 6-month wait won't influence the final reviews on some hardware review sites.

shadow001
03-24-10, 07:33 PM
You can have double the SPs all you want; it doesn't mean efficiency is going to improve by a lot. The G94 (the 9600GT chip) is of the G9x line but is a different design from the G92 (8800GT/S, 9800GT/X/+, GTS250), which is why, with its 64 SPs, it is on par with the G80 96 SP part and doesn't always lag behind certain G92s by a lot either. Also, depending on the 9600GT they used, it could have been a 700/1700 part (the XFX XXX edition comes clocked that way).


It's the SPs alone that are responsible for the calculations in physics workloads; nothing else in the architecture gets used, as those other parts are designed to handle graphics anyhow (texture units, ROPs, etc.).
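The reason physics maps onto the shader cores alone is that the work is data-parallel: each particle's update is independent of every other's, which is exactly the shape of work stream processors are built for. A plain-Python stand-in for such a kernel (purely illustrative; not PhysX code, and the integrator choice is an assumption):

```python
# Each particle update depends only on that particle's own state, so the
# map below could run one particle per shader core in lockstep; texture
# units and ROPs never enter into it. Plain-Python stand-in for a GPU kernel.
def step_particle(pos, vel, dt=0.016, gravity=-9.81):
    """Advance one 2D particle by one frame (semi-implicit Euler)."""
    vx, vy = vel
    vy += gravity * dt                       # integrate velocity
    x, y = pos
    return (x + vx * dt, y + vy * dt), (vx, vy)

particles = [((0.0, 1.0), (1.0, 0.0)),       # moving right
             ((2.0, 5.0), (0.0, 0.0))]       # falling from rest

# On a GPU this map is launched across the shader cores in parallel.
particles = [step_particle(p, v) for p, v in particles]
print(particles)
```

The same independence is why a heavier physics scene scales with shader count (more particles per pass) rather than with any of the fixed-function graphics hardware.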