5900U/9800Pro bandwidth


bkswaney
08-02-03, 10:23 PM
How do you figure it?
MHz = what? They both have a 256-bit memory bus, I know.

I do not get it. The 5900 has 27 GB/s. The 9800Pro only has 19 (I think).

The 5900 has 800 or 850 MHz memory on it. The 9800Pro has 680.

Now if I push my 9800 up to 750, do I still only have 19 or so GB/s of bandwidth?
I'm confused. :confused:

Help me out here. :)
How is the 5900 getting so much more?

MikeC
08-02-03, 10:54 PM
Memory Bandwidth (in MB/s) = (Memory Bus Width in bits * Effective Memory Speed in MHz) / 8.

Dividing by 8 converts bits to bytes.

Then divide the result by 1024 to convert to GB/s.


Radeon 9800 Pro = (256 * 680) / 8 = 21760 MB/s / 1024 = 21.25 GB/s

GeForce FX 5900 Ultra = (256 * 850) / 8 = 27200 MB/s / 1024 = 26.56 GB/s


Or you can calculate it this way:
http://www6.tomshardware.com/graphic/20021118/geforcefx-04.html
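
If you want to plug in your own clocks, here's a quick Python sketch of the same formula (the function name and the card list are just examples from this thread, nothing official):

```python
# Sketch of the bandwidth formula from the post above.
def memory_bandwidth(bus_width_bits, effective_mem_mhz):
    """Return (MB/s, GB/s) using the divide-by-1024 convention used above."""
    mb_per_s = (bus_width_bits * effective_mem_mhz) / 8  # divide by 8: bits -> bytes
    return mb_per_s, mb_per_s / 1024

for name, bus, mem in [("Radeon 9800 Pro", 256, 680),
                       ("GeForce FX 5900 Ultra", 256, 850)]:
    mb, gb = memory_bandwidth(bus, mem)
    print(f"{name}: {mb:.0f} MB/s = {gb:.2f} GB/s")
```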

bkswaney
08-03-03, 01:40 AM
So just guessing off the top of my head it would be somewhere around 23 GB/s. :)

Thanx. :)

Skuzzy
08-03-03, 08:26 AM
This seems as good a place to bring this up as any.

There is a bit more to calculating the raw bandwidth than using the clock rate of the chips.

Logically, the 5900U (at stock clocks) has higher bandwidth than the 9800Pro, yet in most tests there appears to be no real advantage from this higher bandwidth. Why?

Speculation mode: RAM/memory timing, as most enthusiasts are aware, involves more than just the clock rate: CAS latency, RAS delay, wait states and so on.
Again speculating, I wonder if NVidia is running more delay time on memory accesses to allow for higher clock rates, but effectively achieving about the same real-world rates as ATI's 9800Pro?

Just a thought. Kind of goes to marketing.

MikeC
08-03-03, 09:14 AM
Also, occlusion culling algorithms can increase effective memory bandwidth.

Skuzzy
08-03-03, 09:43 AM
But I would think that is something both ATI and NVidia have gotten down pat by now, MikeC.

I just keep looking at all the numbers and they do not add up. ATI's AA algorithm requires a bit more bandwidth than NVidia's does, yet ATI, running a slower core and slower memory, is able to produce better AA with better overall performance than NVidia.
It just does not add up. AA is bandwidth intensive, and with the simpler algorithm NVidia is using, along with higher clock rates, they should trounce ATI, but they don't.

I am going to see if I can get a high-speed logic analyzer, on loan, and check the memory timings. Something is whacked and it is bothering me.

Lfctony
08-03-03, 12:17 PM
750 MHz gives 23.4 GB/s.
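(Following MikeC's formula above: (256 * 750) / 8 = 24000 MB/s, and 24000 / 1024 = 23.4 GB/s.)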

Ady
08-03-03, 01:37 PM
Originally posted by Skuzzy
This seems as good a place to bring this up as any.

There is a bit more to calculating the raw bandwidth than using the clock rate of the chips.

Logically, the 5900U (at stock clocks) has higher bandwidth than the 9800Pro, yet in most tests there appears to be no real advantage from this higher bandwidth. Why?

Speculation mode: RAM/memory timing, as most enthusiasts are aware, involves more than just the clock rate: CAS latency, RAS delay, wait states and so on.
Again speculating, I wonder if NVidia is running more delay time on memory accesses to allow for higher clock rates, but effectively achieving about the same real-world rates as ATI's 9800Pro?

Just a thought. Kind of goes to marketing.

Interesting. Memory timings can be set in the BIOS, so it makes me wonder about the higher-clocked eVGA card that had some problems and got an official BIOS replacement.

I remember reading an article on different BIOSes for the 8500. The BIOS that had the best performance at stock clocks hardly overclocked at all, and the BIOS with the worst performance had the biggest overclock. The conclusion was that it was all down to different memory timings. I'll try and find the article again and post the link.

edit: here's the link (http://www.rage3d.com/articles/8500bios/)

Chalnoth
08-03-03, 02:11 PM
Originally posted by MikeC
Then divide the result by 1024 to convert to GB.
Side note:
I think bandwidth amounts are typically counted in increments of 1000. That is, 1000 Megabytes per Gigabyte.

This is contrary to storage, which is typically counted in increments of 1024.

Some people are trying to make this all a little bit less confusing by using MiB, GiB, etc. for counting in increments of 1024, but I doubt it will ever catch on.

Just remember:
Storage, most particularly RAM, will likely always prefer power-of-two amounts. Since bandwidth is just a bus width multiplied by a frequency, the resulting number is arbitrary, so there is no preference for power-of-two amounts there. The power-of-two preference in RAM makes it much easier to use power-of-two units (1024 is 2^10), so one says "128 KB" instead of "131.072 kB".
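
To make the difference concrete, here's a tiny Python sketch reusing the 5900 Ultra figure from earlier in the thread; the only thing that changes between the two results is the divisor:

```python
# Same raw number, two conventions. Figures reuse the 5900 Ultra example
# from earlier in the thread; nothing here is an official spec.
raw_mb_per_s = 256 * 850 / 8   # 27200 (bus width in bits * effective MHz / 8)

print(raw_mb_per_s / 1000)     # 27.2    -- 1000 MB per GB, the usual bandwidth convention
print(raw_mb_per_s / 1024)     # 26.5625 -- 1024 MB per GB, as in the calculation above
```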

StealthHawk
08-03-03, 03:42 PM
Originally posted by Skuzzy
But I would think that is something both ATI and NVidia have gotten down pat by now, MikeC.

I just keep looking at all the numbers and they do not add up. ATI's AA algorithm requires a bit more bandwidth than NVidia's does, yet ATI, running a slower core and slower memory, is able to produce better AA with better overall performance than NVidia.
It just does not add up. AA is bandwidth intensive, and with the simpler algorithm NVidia is using, along with higher clock rates, they should trounce ATI, but they don't.

I am going to see if I can get a high-speed logic analyzer, on loan, and check the memory timings. Something is whacked and it is bothering me.

It could be that NV3x is just way more broken than we ever imagined.

But I think the case just boils down to architectural differences.

The Baron
08-03-03, 03:48 PM
/me invokes the spirit of Pelly to impress us all with his electrical engineering knowledge ;)

howard stern
08-05-03, 01:31 PM
Originally posted by StealthHawk
It could be that NV3x is just way more broken than we ever imagined.

But I think the case just boils down to architectural differences.

Kind of like AMD and Intel: AMD was able to keep up with Intel at much lower clock speeds. It was the architectural differences that helped AMD, though not anymore.