Originally Posted by ChrisRay
You can agree to disagree with me all you like; I'm disagreeing with you on grounds of testing and evidence. A 768 MB card can complete a test (despite typically being slower) while a 512 MB card cannot. Yet when you lower the resolution or AA to a manageable framebuffer load, the 512 MB card not only completes the tests but performs competitively with, or better than, the 768 MB card. Yes, they are different pieces of hardware, but the problem is blindingly obvious.
If you're buying one card and sticking with 4x MSAA at 1680x1050, you are unlikely to encounter many problems with most software, at least not within the given one-year time frame (let's assume one card purchase is a one-year investment). As for the 8800GTX: yes, it's a good card, but the 9800GTX+ is easily faster than it in settings that don't hamper its framebuffer. Once that framebuffer is exceeded, the 8800GTX pulls far ahead of the 9800GTX despite the 9800GTX's superior shading capabilities and bandwidth-saving color compression.
I know you have forgotten more about video cards than I will ever know, so I'm asking the following with complete respect. Hope you can explain.
I totally get what you're saying. But in real-world situations, where 1 GB is a good insurance policy for minimizing the chance of bad 512 MB performance, I have some questions.
In the following review you have a 768MB 8800GTX up against a 512MB 9800GTX (just one example of many such reviews). Looking at all of the heaviest tests, the 512MB card is doing better than the 768MB card. I know there are clock differences and other factors, but in at least one situation where my brain THOUGHT the 768MB card would win, it didn't.
COD4, 2560x1600, 4xAA, obviously maxed out: http://www.anandtech.com/video/showdoc.aspx?i=3340&p=3
The 512MB 9800GTX gets 40.7 fps
The 768MB 8800GTX gets 38.7 fps
Then I thought: frig that! 768MB has GOTTA beat 512MB. I know: Oblivion!
2560x1600, 4xAA, 16xAF, maxed: http://www.anandtech.com/video/showdoc.aspx?i=3340&p=5
512MB 9800GTX gets 24
768MB 8800GTX gets 20
In all other tests, regardless of settings and resolution, the 9800GTX appears to match or beat it.
Can you help me understand why? And at what point will the extra 256MB change this result?
And I'm just looking for an honest answer here: will a person notice more microstutter in that COD4 example even though the framerates are about the same?
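To put rough numbers on where 512MB might run out, here's a back-of-envelope sketch of how much VRAM just the render targets eat at a given resolution and MSAA level. The buffer layout and byte counts here are my own simplifying assumptions (32-bit color, 32-bit depth/stencil, one resolved buffer), not figures from either review; real drivers add padding, compression metadata, and extra surfaces, and textures/geometry come on top of this.

```python
# Back-of-envelope VRAM estimate for render targets at a given
# resolution and MSAA level. Assumptions (mine, not from the reviews):
# 4 bytes/pixel color, 4 bytes/pixel depth/stencil, one non-MSAA
# resolve buffer. Real usage is higher (textures, padding, etc.).

def render_target_mb(width, height, msaa, bytes_per_pixel=4):
    """Size in MiB of one surface with `msaa` samples per pixel."""
    return width * height * msaa * bytes_per_pixel / (1024 ** 2)

def framebuffer_estimate_mb(width, height, msaa):
    color = render_target_mb(width, height, msaa)   # MSAA color buffer
    depth = render_target_mb(width, height, msaa)   # MSAA depth/stencil
    resolve = render_target_mb(width, height, 1)    # resolved back buffer
    return color + depth + resolve

for (w, h), msaa in [((1680, 1050), 4), ((2560, 1600), 4)]:
    print(f"{w}x{h} {msaa}xAA: ~{framebuffer_estimate_mb(w, h, msaa):.0f} MiB of render targets")
```

Under these assumptions, 1680x1050 4xAA needs roughly 61 MiB of render targets while 2560x1600 4xAA needs roughly 141 MiB, so at the higher resolution a much bigger slice of a 512MB card is gone before a single texture is loaded.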
In Crysis, the 8800GTX wins at 2048x1536, 4xAA, 16xAF: http://www.techpowerup.com/reviews/L...800_GTX/6.html
8800GTX got 18.5
9800GTX got 13.3
Yet in the rest of the test suite ( http://www.techpowerup.com/reviews/L...00_GTX/15.html )
both cards perform almost exactly the same. I would have thought that with the extra 256MB of RAM, the wider memory bus, and 8 additional ROPs, the 768MB 8800GTX would have owned the 512MB 9800GTX or 4850 across the board once you were at 1920x1200 or higher with AA and AF applied.
And not only that, but the 512MB 4850 is winning across the board in everything but Crysis versus the 768MB 8800GTX?
Can you explain why this is? Because on paper, the 768MB beast should be tearing the 9800GTX and 4850 a new one at 1920x1200 4xAA and beyond. But it's not. WTF is the deal!
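For what it's worth, the wider bus doesn't translate into as big a raw-bandwidth edge as you might expect, because the 9800GTX runs its memory faster. A quick comparison from the published specs (my numbers; double-check them against the spec sheets):

```python
# Rough memory-bandwidth comparison from published specs:
# bandwidth = (bus width in bits / 8) * effective data rate.
# Clock figures below are the commonly listed reference specs.

def bandwidth_gbps(bus_bits, effective_mts):
    """Peak memory bandwidth in GB/s from bus width and MT/s."""
    return bus_bits / 8 * effective_mts / 1000

cards = {
    "8800GTX": (384, 1800),  # 384-bit bus, 900 MHz GDDR3 (1800 MT/s)
    "9800GTX": (256, 2200),  # 256-bit bus, 1100 MHz GDDR3 (2200 MT/s)
}
for name, (bus, rate) in cards.items():
    print(f"{name}: {bandwidth_gbps(bus, rate):.1f} GB/s")
```

That works out to roughly 86 GB/s versus 70 GB/s, a gap of around 20%, and the 9800GTX's color compression claws some of that back, so the bus alone was never going to produce a blowout.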
PS - I know where 1GB WILL make a difference, as you already stated: in multi-GPU situations where the GPUs are able to run unbound and free like the wind, they will need all the memory you can feed 'em. But only in extreme cases?
9800GTX SLI versus 8800GTX SLI would be a very interesting battle.