View Full Version : Will AGP 8X increase performance over AGP 4X?

07-28-02, 02:29 PM
I have a question. People are comparing AGP 8x to AGP 4x on the SiS Xabre card, and 8x does give a little boost. Why is that happening, though? Doesn't AGP 8x just mean there is more room for information passing between the CPU and the video card? Is this info traveling faster at 8x than at 4x?

07-28-02, 09:06 PM
Playing today's average game, you will see very little improvement going from 4x to 8x AGP. Most folks don't seem to understand under what circumstances you can expect higher AGP speeds to help you.

People get confused when they see that high resolutions, with lots of AF and FSAA, do not get a performance increase with higher AGP speed. That is because those functions are all about fill rate, the frame buffer, and the local memory bandwidth needed to access that frame buffer. Your video card's brute-force ability to render is what is being tested.

High polygon counts and massive textures will benefit from AGP. If you have a very high polygon count, you will send a lot of geometry information across the AGP bus to the T&L engine on the video card. There are games in existence that actually come pretty close to saturating the AGP bus in this fashion, and there are definitely professional applications which do so.

If your textures overflow the local texture memory on your card, you will see a big increase in speed going from 2x to 4x to 8x AGP. This doesn't happen very often. However, there is at least one game that I play, and perhaps more than one, that easily uses a few hundred megs of textures. Memory bandwidth and AGP bandwidth begin to get important when you are constantly swapping 300MB of textures in and out of a 64MB video card.
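To put some rough numbers on that swapping cost, here is a small sketch. The peak rates come from the AGP spec (66MHz x 32-bit base, doubled per mode); real-world throughput is lower, and the 300MB figure is just the example from the paragraph above.

```python
# Rough best-case time a texture swap ties up the AGP bus at each
# transfer mode. Peak rates per the AGP spec; sustained rates are lower.
AGP_PEAK_MB_S = {"1x": 266, "2x": 533, "4x": 1066, "8x": 2133}

def swap_time_ms(megabytes, mode):
    """Best-case milliseconds to move `megabytes` across the AGP bus."""
    return megabytes / AGP_PEAK_MB_S[mode] * 1000.0

for mode in ("2x", "4x", "8x"):
    print(f"300MB at AGP {mode}: {swap_time_ms(300, mode):.0f} ms")
```

Even at the 8x peak rate, shuffling a few hundred megs of textures eats well over a hundred milliseconds, which is why overflowing local video memory hurts so much.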

I hope that some folks will read this and understand that they are not going to get high 3DMarks because of AGP 8x, but when you are rendering millions of polygons per second and using 512MB of textures, you are going to notice that AGP 8x.

It is actually under these sorts of conditions that SBA and Fast Writes show their value. Under most conditions SBA and Fast Writes are not worth the headaches, but under severe stress on the AGP bus, they are. :)

Hope this will help folks with their "AGP decisions". :)


07-28-02, 09:32 PM
Every card and every motherboard will migrate to AGP 8x in the future anyway, but AGP has never been anything to lose sleep over.

Lou Natic
07-28-02, 10:24 PM
AGP 3.0 (8x) is basically just a stepping stone until PCI-X speeds hit primetime. Around 2004 most of the high-end gaming cards will probably migrate over to PCI-X due to its higher bandwidth capabilities.

07-28-02, 10:47 PM
By that time we will have AGP 12x or higher with all sorts of optimisations. In the end, will it be worth going to PCI-X?

07-28-02, 10:48 PM
Will these higher bandwidth solutions ever really come into play? Most games never saturate the interface, because none of these interfaces can compete with the bandwidth of the card's own local memory.

Once T&L becomes the baseline (the absolute bottom) and polygon counts rise, we MAY see some benefit, but I really am not sure of this. I mean, yes, we have seen some games that were designed to stress AGP, but the performance level is low either way, as I remember it. So low that such cases would be rare indeed. Although if DX9 cards are used to accelerate professional graphics work, stuff like AGP may become more important, as DJB said.

07-29-02, 04:54 AM
I don't see any increases in AGP having that much impact on games. As video cards tend to come with stacks of memory anyway, there isn't much need for the card to access system memory in order to grab textures etc.

The area in which it will have the most impact, IMO, is in digital media streaming technologies, where the bandwidth is required because you are displaying GB/sec of data.

Mind you, having said that, the new 128bit colour specs will require a phenomenal amount of bandwidth. At 1024x768, as close as I can make out using nVidia's method from the LMA docs, it would be:


and at higher resolutions it gets a bit insane. At 1600x1200:


Maybe super large textures will need the bandwidth that AGP can provide!?

By the way, at 32bit colour the requirement for a 1600x1200 display is 5.76GB/sec, which is pushing the envelope on the GF4 but gives plenty of space for the R300. Maybe that's why ATi wanted the preview tests run at 1600x1200? At lower resolutions and with the eye candy off, the difference isn't as impressive.

07-29-02, 07:57 AM
The NV30 has 32GB/sec of memory bandwidth, and that is (I think) without counting HSR.

07-29-02, 08:23 AM
How'd you get those numbers?

07-29-02, 10:24 AM
Originally posted by K.I.L.E.R
The NV30 has 32GB/sec of memory bandwidth, and that is (I think) without counting HSR.

That's only if they've gone with a 256bit bus and not 128bit. The NVMax data was based on a PR sheet prepared by nVidia, and quite a few other journos have got copies. I wouldn't hold my breath on that one; you may be disappointed. It would be cool if they have, though. Maybe that's another reason for the delay. Backup plan X :D

[Corporal Dan]
How'd you get those numbers?

I used the method that's in the LMA documentation.

(Width x Height x Average Complexity x Pixel Data) x 60 fps.

I don't know how they work out average complexity, but they have it set to 2.5!?
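That method can be sketched as a one-line calculation. Note that "Pixel Data" here is bytes touched per pixel; the 20 bytes/pixel below is my back-fit from the 5.76GB/sec figure quoted earlier in the thread for 1600x1200 at 32bit colour, not a number taken from the LMA docs themselves.

```python
# (Width x Height x Average Complexity x Pixel Data) x 60 fps,
# expressed in decimal GB/sec. complexity=2.5 is the LMA figure
# mentioned above; pixel_bytes=20 is inferred, not documented.
def fill_bandwidth_gb_s(width, height, complexity=2.5,
                        pixel_bytes=20, fps=60):
    """Estimated memory bandwidth in GB/sec for a given display mode."""
    return width * height * complexity * pixel_bytes * fps / 1e9

print(fill_bandwidth_gb_s(1600, 1200))  # 5.76, matching the thread
print(fill_bandwidth_gb_s(1024, 768))   # ~2.36
```

Plugging larger per-pixel byte counts in for 128bit colour is what makes the numbers "a bit insane" at high resolutions.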

07-29-02, 08:37 PM
Originally posted by [Corporal Dan]
howd you get those numbers

It is assumed that the NV30 will have memory clocked close to 900MHz. Using a 128bit bus, it would have 15-16GB/sec of bandwidth. NVMax says that nVidia has said it would be using a 256bit bus; just double the numbers.
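The arithmetic behind those numbers is just effective clock times bus width in bytes. A quick sketch, using the 900MHz effective clock assumed above:

```python
# Peak memory bandwidth = effective clock (transfers/sec) x bus bytes.
def peak_bandwidth_gb_s(effective_clock_mhz, bus_bits):
    """Theoretical peak bandwidth in decimal GB/sec."""
    return effective_clock_mhz * 1e6 * (bus_bits / 8) / 1e9

print(peak_bandwidth_gb_s(900, 128))  # 14.4 -> the "15-16GB" ballpark
print(peak_bandwidth_gb_s(900, 256))  # 28.8 -> roughly the 32GB claim
```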