Old 02-17-03, 07:23 AM   #107
Skuzzy
Bit Bumper
 
Join Date: Aug 2002
Location: Here
Posts: 782

As a developer, and I think most would concur, the baseline video card we program to today is the GF2 Ultra running in an 800MHz P3.

The more powerful CPUs certainly allow a lot more to be done today than yesterday. But we cannot ignore that the GPUs have also gotten more powerful.

To that end, programmers today are looking at better physics and better scene/object interaction.
The physics is virtually all done on the CPU, as is collision detection. In order for developers to step up to the next level in these areas, we need to push some graphics chores off to the GPU.
Why? Even if the GPU is slower than the CPU, we can take advantage of the parallel processing of both. This is just a balancing act. How much can we push off to the GPU? That is a bit of an unknown quantity.
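
A minimal sketch of that overlap (the function names are made-up stand-ins, not any real API): queue the frame's draw calls first so the card can start rendering, then run the CPU-side simulation while it works.

[code]
#include <cstdio>

// Hypothetical engine functions -- stubbed out here so the sketch compiles.
// In a real engine, SubmitDrawCalls would queue work for the driver/GPU and
// return quickly, and Present would block only if the card is still behind.
struct GameState { float time; };

static void SubmitDrawCalls(const GameState&) { /* queue GPU work */ }
static void RunPhysics(GameState& s)          { s.time += 0.016f; }
static void RunCollision(GameState&)          { /* CPU collision tests */ }
static void Present()                         { /* flip; may wait on the GPU */ }

// One frame of the loop: the GPU chews on the draw calls we just queued
// while the CPU runs the simulation for the next frame in parallel.
void RunFrame(GameState& state)
{
    SubmitDrawCalls(state);   // GPU starts rendering this frame
    RunPhysics(state);        // CPU works while the card renders
    RunCollision(state);
    Present();                // synchronize at the end of the frame
}

int main()
{
    GameState state = { 0.0f };
    for (int frame = 0; frame < 3; ++frame)
        RunFrame(state);
    std::printf("simulated time: %f\n", state.time);
    return 0;
}
[/code]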
With today's pixel shaders (1.4 and later), there are things that can be done far more efficiently on the video card than on the CPU. As time goes forward and the baseline video card is raised, more and more programmers will use those video card features.
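
For a concrete feel for why, here is a rough CPU-side version of a single per-pixel effect (plain diffuse lighting); the names and buffer layout are mine, not any particular engine's. A ps.1.4 shader does the same dot product with a single dp3 instruction per pixel, riding along inside the GPU pipeline instead of eating CPU cycles.

[code]
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// CPU-side stand-in for one per-pixel effect (diffuse N dot L lighting).
// Doing this on the CPU means touching every pixel, every frame.
struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// normals:   one (already normalized) surface normal per pixel
// lightDir:  normalized light direction
// intensity: output brightness, one byte per pixel
void LightBufferOnCPU(const std::vector<Vec3>& normals,
                      const Vec3& lightDir,
                      std::vector<uint8_t>& intensity)
{
    for (size_t i = 0; i < normals.size(); ++i)
    {
        float nDotL = std::max(0.0f, Dot(normals[i], lightDir));
        intensity[i] = static_cast<uint8_t>(nDotL * 255.0f);
    }
}

int main()
{
    // 1024x768 = 786,432 pixels; at 60 frames per second that is roughly
    // 47 million dot products per second before the CPU does anything else.
    const size_t pixels = 1024 * 768;
    std::vector<Vec3> normals(pixels, Vec3{ 0.0f, 0.0f, 1.0f });
    std::vector<uint8_t> intensity(pixels);

    LightBufferOnCPU(normals, Vec3{ 0.0f, 0.0f, 1.0f }, intensity);
    std::printf("first pixel intensity: %d\n", intensity[0]);
    return 0;
}
[/code]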

Programmers face a problem today. In the past, video card technology actually did not move that quickly. Yes, the clock speeds increased and the amount of video RAM increased, which helped by allowing more scene detail.
But today we are seeing video card technology jump forward by leaps and bounds in the feature sets they offer. Unfortunately, these new video cards will not become the baseline cards until 3 to 5 years down the road.
And even then, they may not be the true baseline. Many factors come into play here. Both ATI and NVidia ship video cards today that are pretty brain dead. One of the more popular lines of NVidia cards has no shaders worth mentioning (the MX line, I believe).

Searching for the holy grail of balancing the graphics speed with the CPU speed is what it is all about. In an ideal world, the scene would be finished rendering just as the next frame is ready to go. To get there, programmers cannot ignore what the GPU can do to help. If we do, then we are doing a disservice to the consumer and generally writing poor code.
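
One crude way to see where a frame landed on that balance, sketched with placeholder functions rather than any real API: time the CPU's share of the frame, then see how long the CPU sits waiting on the card before the flip.

[code]
#include <chrono>
#include <cstdio>

// cpuWork() stands in for simulation plus draw-call submission, and
// waitForGpu() for whatever blocks until the card finishes the frame
// (the present/flip). Both are placeholders.
double GpuWaitMs(void (*cpuWork)(), void (*waitForGpu)())
{
    using Clock = std::chrono::steady_clock;

    auto start     = Clock::now();
    cpuWork();                       // CPU-side share of the frame
    auto cpuDone   = Clock::now();
    waitForGpu();                    // time spent here means the GPU was the bottleneck
    auto frameDone = Clock::now();

    double cpuMs   = std::chrono::duration<double, std::milli>(cpuDone - start).count();
    double totalMs = std::chrono::duration<double, std::milli>(frameDone - start).count();

    // Near zero: the card kept pace and could take on more work.
    // Large: the CPU sat idle, so the scene detail (or the card) is the limit.
    return totalMs - cpuMs;
}

int main()
{
    double waitMs = GpuWaitMs([] { /* simulate + submit */ }, [] { /* wait on flip */ });
    std::printf("CPU waited on the GPU for %.2f ms\n", waitMs);
    return 0;
}
[/code]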

My two nickels.....