Originally Posted by ChrisRay
Memory bandwidth does *not* account for running out of memory. Once you run out of memory, your onboard GPU's bandwidth becomes almost meaningless, as you are then limited by the PCIe interface's bandwidth. You can attempt to allocate that memory differently, as I am sure ATI does, but you're still using system memory.
I'm curious, then: why do you think ATI chose to stay at 512MB of VRAM and instead focus on bandwidth by putting GDDR5 on the PCB?
The only tests where I'm really seeing this "512 meg" thing possibly playing a hand are those run at 2560x1600 with AA applied on a single GPU. I'd say maybe .01% of the population owns those displays or will own them in the near future, and those who do usually run multi-GPU. And the problem is remedied by going Crossfire, which often scales over 100% at that resolution with a second card enabled.
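For a rough sense of why 2560x1600 with 4xAA is the scenario where a 512MB card could run short, here's a back-of-envelope sketch (my own illustration, not from either vendor): a 4x multisampled color + depth target at that resolution, plus a resolved front/back buffer pair, already eats a sizable chunk of the frame buffer before any textures or geometry are loaded. The `framebuffer_mb` function and its byte-size assumptions (4 bytes per color sample, 4 bytes per depth sample) are hypothetical simplifications; real drivers allocate differently.

```python
def framebuffer_mb(width, height, samples, bpp_color=4, bpp_depth=4):
    """Rough render-target footprint in MB: multisampled color + depth,
    plus a resolved (non-AA) front/back buffer pair."""
    multisampled = width * height * samples * (bpp_color + bpp_depth)
    resolved = 2 * width * height * bpp_color
    return (multisampled + resolved) / 2**20

# 2560x1600 with 4xAA: roughly 156 MB of render targets alone,
# leaving ~356 MB of a 512 MB card for textures, geometry, etc.
print(round(framebuffer_mb(2560, 1600, 4)))  # → 156
```

Under these (simplified) assumptions, drop to 1680x1050 without AA and the render targets shrink to a few tens of MB, which is consistent with the problem only showing up at the very top resolution with AA enabled.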
Here you have Call of Duty 4 running faster on the GTX 260 at 2560x1600 with 4xAA than on the 4870, but once you go Crossfire, the 4870s are 9fps faster than GTX 280 SLI at the same settings. Crossfire scales 113% here despite the 512MB frame buffer:
Again at 2560x, 4870s are outrunning GTX280 SLI on The Witcher. No AA on this one, though:
Again at 2560x w/ 4xAA, this time in Oblivion, the 512MB 4870s in Crossfire are outgunning the 1GB GTX 280s by over 30%:
I know every game is different and results can vary; it just seems that, given these benchmarks, the "deficiency" is a bit overstated at this point. I don't see games of the near future dropping from over 100fps down to 30 or less because of it.