Originally Posted by Iruwen
That's another point where Fermi comes into play. It's not like Nvidia didn't do their homework in some areas; they put a lot of work into parallelization, which is why it's such a huge monster. AMD will surely go the same way with their next-generation hardware (in 28nm if possible, though).
This is not strictly Nvidia/ATI related: I think what we currently see in games is just the tip of the iceberg. GPU physics is used to blow things up, tessellation is used to make things round. And because the effect has to be noticeable, it doesn't look realistic at all. It's not impressive because it's been done over and over before. It's more of a design problem.
The next generations of games will use those advanced technologies to actually make the virtual world more realistic. GPU physics won't just be used for debris, but for smoke, cloth and fluid simulations. This is also the point where CPU-based physics simply won't be able to keep up, since CPUs are what they are: general-purpose hardware with a very limited number of cores and little simultaneous processing.
It's up to the massively parallel hundreds or even thousands of cores of GPUs to efficiently compute real-time particle physics. The same is true for tessellation: developers are just playing around with it in current implementations, and the real power of displacement-mapped subdivision surfaces will show in future generations of games and hardware. It allows extreme levels of detail while providing real LOD, it saves huge amounts of memory and bandwidth, it allows blending and morphing, and it integrates perfectly with physics.
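To illustrate why particle physics maps so well onto thousands of GPU cores, here's a minimal CPU-side sketch (hypothetical Python, not from any real physics engine): each particle's update reads and writes only its own state, so every iteration of the loop is independent and could run as its own GPU thread.

```python
# Minimal sketch of an embarrassingly parallel particle step
# (semi-implicit Euler). Names and structure are illustrative only.

GRAVITY = -9.81  # m/s^2, acting along the y axis

def step(particles, dt):
    """Advance every particle by one timestep.

    Each loop body touches only one particle's state, with no
    cross-particle dependencies -- on a GPU this loop disappears
    and the body becomes a kernel launched once per particle.
    """
    for p in particles:
        p["vy"] += GRAVITY * dt   # integrate velocity first...
        p["x"] += p["vx"] * dt    # ...then position from the new velocity
        p["y"] += p["vy"] * dt
    return particles
```

Smoke, cloth and fluid solvers add neighbor interactions on top of this, but the core per-element update stays independent enough that wide parallel hardware wins, while a CPU with a handful of cores has to churn through the same loop mostly serially.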
This won't happen with the current generation of hardware and games, but that's why I hope ATI will actually come up with their open GPU physics solution and real DX11 cards (which means they need a programmable tessellator) soon. I don't think they will, because their hardware isn't capable of it yet, but maybe next year.
It's possible, I guess, but I believe it's something to strive for over the next 10+ years, not right now, and for several reasons:
1: As it is, not everyone will own the latest cards as soon as they're released; most users out there try to use their hardware as long as possible. Only hardware enthusiasts change hardware every year, simply because something much faster is being released, not because the games demand that kind of power. I only did it because these latest cards support 3 displays; other than that, my old setup was handling things fine with 1 display. It wasn't for DX11 support in the least, as that will take time to become popular.
2: Games are taking ever longer to develop, sometimes as much as 4-5 years if the studio is developing its own graphics engine along with the game itself. It's only shorter than that if they're licensing an existing engine and building their game on top of it (think the Source engine, Unreal Engine 3, id Tech 4, etc.).
3: Given that long development time span, it's easy to imagine that the overall situation for developers isn't easy: in that period there could be as many as 3-4 generations of new hardware releases, each one supporting more features and delivering more performance than the last. Try being in developers' shoes and deciding what to support and what not to; it isn't easy in the least. I don't envy having to choose between something they'd like to have in their game that will take longer to implement, and lowering their vision to cater to the reality of the market and/or development budget constraints.
4: Most games are multiplatform these days, with the baseline being consoles, and the latest ones are still stuck with DX9-level GPUs while we're already at DX11 on PCs, so there's a huge gap right there in both raw speed and feature support. Only recently has there been a developer with the guts to release a game with no DX9 fallback whatsoever, requiring DX10 as the minimum, so it's basically restricted to PCs: it's from Futuremark (makers of 3DMark Vantage) and it's called Shattered Horizon.
5: I think it was someone at Nvidia who stated at a conference last year that the objective is to have GPUs with 20 teraflops of single-precision power on the market by 2015, just 5 years from now. The current Fermi chip has only 1.4 teraflops, so as crazy as it sounds, GPUs might get about 14 times faster in that respect within the next 5 years if they hit that target. My setup has 10 teraflops in that area right now, but that's split between 4 GPUs, and they're talking about a single chip packing twice that amount by 2015, so it's a crazy increase in performance to say the least.
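For what it's worth, the projected jump can be sanity-checked with the numbers above (the 1.4 and 20 teraflop figures are from the claims quoted here, not independently verified):

```python
# Quick arithmetic check on the projected single-precision speed-up.
fermi_tflops = 1.4    # current Fermi chip, single precision
target_tflops = 20.0  # Nvidia's claimed 2015 target
speedup = target_tflops / fermi_tflops
print(round(speedup, 1))  # prints 14.3
```

So the stated target works out to a roughly 14x increase over five years, or about a 70% compound improvement per year.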
The basic idea is that the hardware will reach the needed performance level a lot faster than the software that actually exploits that power and those features will appear. That's been the case for years now, and based on what we've seen over the past 10+ years, I don't think that's going to change.