PDA

View Full Version : G70 more efficient than R520?


Pages : [1] 2

anzak
10-10-05, 12:03 PM
Driver Heaven did a performance test with both cards at 16 pipelines and clocked at 450/1000. Who won? Check it out at http://www.driverheaven.net/articles/efficiency/

If this is in the wrong forum then please move it. Thanks.

Mr_LoL
10-10-05, 12:07 PM
Hmm very interesting. Now all Nvidia have to do is ramp up the clockspeed and open a can of whoopass on Ati.

Overall we would have to say that in the tests which really matter today the G70 is the most efficient design and therefore performance leader when both architectures are configured similarly. In more "future proofed" tests the balance swings again and when HDR is used more in games we may well see the R520 performing better overall when the same testing methods are applied.

anzak
10-10-05, 12:18 PM
Yep. If the refresh is indeed on the 90nm process then 550MHz seems reasonable. At that speed it would destroy the X1800XT.

Vagrant Zero
10-10-05, 12:32 PM
Don't the GTXs already come at 490 from some makers?

Skinner
10-10-05, 12:33 PM
That's the wrong approach. The X1800XT's are made with high clockspeed in mind: less parallel handling, but higher frequency. You can't clock a regular 7800GTX at 625MHz either. Btw the branching performance is way higher, even at the same clockspeeds.
The 7800GTX does have more raw (pixel) performance though.

anzak
10-10-05, 12:39 PM
Don't the GTXs already come at 490 from some makers?
Yeah, but they require bigger cooling solutions. On 90nm they could hit higher clock speeds while keeping the single-slot cooler.

EDIT:

The X1800XT's are made with high clockspeed in mind
True. This is a bad example, but it's like lowering the FX 5800 Ultra down to 9700 Pro speeds. The 5800 was designed for higher clock speeds.

FraGTastiK
10-10-05, 12:48 PM
I wonder if they used the same X1800XT 512MB they had for their initial X1x00 review; they don't mention what version of the X1800XT they are using for the efficiency test. It should be a 256MB version for a fair comparison.

XFX has a 7800 GTX clocked at 490 that has the same stock cooling as other 7800s.

btw nVIDIA's GF7 chick is Luna, not Nalu! :p DH forgot about that.

Vagrant Zero
10-10-05, 01:01 PM
Eh, I bet that if they clocked a GTX at 625/1500 it'd walk all over the 1800XT.

anzak
10-10-05, 01:13 PM
Eh, I bet that if they clocked a GTX at 625/1500 it'd walk all over the 1800XT.

I sure hope so! You're looking at a 24-pipeline part vs. a 16-pipeline part.

Both designs get the job done, but the 24-pipe GTX gets it done with less heat and power, making it a better choice IMHO.

SH64
10-10-05, 01:16 PM
Is it me or are there some mistakes in the 3DMark05 chart??

1) The difference in pixel shader performance was quite staggering. In this test the R520 again falls a significant amount behind the G70. 21% to be exact!
that's not right if you look at the chart:
Pixel Shader
7221.1 MTexels/s (G70)
7301.5 MTexels/s (R520)
& the difference is not 21%! .. it's a mere 1% or something :confused:

2) how come the PS test is measured in MTexels/s? Isn't it supposed to be MPixels/s?

3) Vertex shader figures are also interesting, as both cards have 8 vertex shader units and are clocked at 450/1000 we can really see which architecture processes vertex shaders more efficiently. In this test the result goes to the R520 when simple vertex shaders are used. However when more complex shaders are used the G70 just outperforms R520.
what I see is quite the opposite!
Vertex Shader - Simple
215.7 FPS (G70)
170.0 FPS (R520)

Vertex Shader - Complex
61.7 MVertices/s (G70)
63.2 MVertices/s (R520)

:confused:
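
The percentage claims are easy to sanity-check. A minimal sketch (the figures are the chart values quoted above; the function name is mine): the 7221.1 vs 7301.5 pair is only about a 1% gap, while the article's "21%" only matches the 215.7 vs 170.0 FPS pair.

```python
def pct_behind(leader: float, trailer: float) -> float:
    """Percentage by which `trailer` falls behind `leader`."""
    return (leader - trailer) / leader * 100

# Pixel Shader fill rates as charted: roughly a 1% gap, not 21%
print(round(pct_behind(7301.5, 7221.1), 1))  # 1.1

# The quoted 21% matches the 215.7 vs 170.0 FPS figures instead
print(round(pct_behind(215.7, 170.0), 1))  # 21.2
```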

anzak
10-10-05, 01:22 PM
Yeah, there are some mistakes SH64. Dunno if the wording was wrong or the chart.

The vertex performance is also off. Even the X850XTPE matches or beats the GTX in the simple vertex and complex vertex tests.

Ruthless
10-10-05, 02:28 PM
I emailed the guys because the data seemed to be in the wrong rows in the 3DMark results, all messed up. They have fixed it if you refresh the page.

fivefeet8
10-10-05, 02:41 PM
I think they mislabeled a few items in the chart. They talk about multitexturing performance, but it's mislabeled as Pixel Shader. The 21% pixel shader advantage comes from the mislabeled vertex shader column.

Ruthless
10-10-05, 03:00 PM
I think they mislabeled a few items in the chart. They talk about multitexturing performance, but it's mislabeled as Pixel Shader. The 21% pixel shader advantage comes from the mislabeled vertex shader column.

That was one of the things they fixed when I emailed them; you are seeing a cached page. Ctrl+F5 if you are on IE.

Pixel Shader
215.7 FPS NV
170.0 FPS ATI

"The difference in pixel shader performance was quite staggering. In this test the R520 again falls a significant amount behind the G70. 21% to be exact!"

Roliath
10-10-05, 03:27 PM
You can't clock a regular 7800GTX at 625MHz either.
With a volt-mod done right you can.
Nearly 600 on the core and 1500 on the memory seems to be the average for v-modded GTXs.

rohit
10-10-05, 03:45 PM
It's strange that DH accepted the G70 vs R520 efficiency results.

Ruthless
10-10-05, 04:18 PM
It's strange that DH accepted the G70 vs R520 efficiency results.

Yeah, I guess they are seen as an ATI fan site, but to be fair to them, if you look at their last 3 or 4 ATi and Nvidia articles they are really fair to Nvidia, and in the last Crossfire review I read they basically said to buy current Nvidia SLI hardware. Maybe it's a change internally or with management or something. I like their design, so I hope so :)

superklye
10-10-05, 04:36 PM
Yeah, but they require bigger cooling solutions. On 90nm they could do higher clock speeds while keeping the single slot cooler.
Not necessarily. The eVGA 7800 GTX KO edition has a single-slot cooler and runs at 490/1400, but I think that's the exception rather than the rule.

And as for efficiency, did they really have to do these tests? Just look at the stock speeds and compare the cards that are supposed to be compared. A 7800GTX standard is 430/1200 and a standard X1800XT is 625/1400, correct?

For all intents and purposes, the cards are equal. Let's just say they are. I know that in such and such game so and so's card is faster. But on the whole, we're going to assume they are equal.

Now, what is more efficient? The card with the lowest speeds and highest score would be the most efficient. That's the definition of the word. This is very similar to AMD/Intel. AMD gets equal or better performance than Intel using MUCH slower clock speeds. They're more efficient.

anzak
10-10-05, 04:41 PM
And as for efficiency, did they really have to do these tests? Just look at the stock speeds and compare the cards that are supposed to be compared. A 7800GTX standard is 430/1200 and a standard X1800XT is 625/1400, correct?

No, you're missing the point. They wanted to see which GPU could get more work done per clock if every other factor is the same.

In their test both cards had 16 pipelines, ran at 450MHz, and had the same bandwidth. The GTX turns out to do more work per clock even with only 16 pipelines.
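
The per-clock idea being described can be sketched in a few lines. All scores below are made up for illustration, not taken from the article: the point is only that a lower-clocked card can do more work per cycle even while posting a similar raw score.

```python
def work_per_clock(score: float, core_mhz: float) -> float:
    # Benchmark score divided by core clock: a rough proxy for work done per cycle
    return score / core_mhz

# Hypothetical stock-clock scores, for illustration only
gtx = work_per_clock(7800.0, 430)    # 7800 GTX at its stock 430 MHz
x1800 = work_per_clock(8000.0, 625)  # X1800XT at its stock 625 MHz

# Even if the X1800XT posts the higher raw score, the lower-clocked
# GTX does more work per cycle, i.e. it is the more "efficient" design
print(gtx > x1800)  # True
```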

superklye
10-10-05, 04:53 PM
No, you're missing the point. They wanted to see which GPU could get more work done per clock if every other factor is the same.

In their test both cards had 16 pipelines, ran at 450MHz, and had the same bandwidth. The GTX turns out to do more work per clock even with only 16 pipelines.
Ah...but still, it seems that this would be a moot thing, because just like Intel and AMD, ATI and NVIDIA have different cores and thereby different methods of executing instructions on each cycle right? It would be like having an A64 running at 2.0GHz and comparing it to a P4 running at 2.0GHz, all other factors being the same. The A64 would absolutely destroy the P4 because of the way each core handles instructions on any given cycle.

It just seems like this could be predicted just by looking at the generalities (ATI having much higher speeds to get the same/slightly better results than NVIDIA) and not having to set the cards at the same speeds. But again, maybe I'm looking at it too broadly. :)

Ruthless
10-10-05, 05:11 PM
Ah...but still, it seems that this would be a moot thing, because just like Intel and AMD, ATI and NVIDIA have different cores and thereby different methods of executing instructions on each cycle right?

But isn't that the whole point of the comparison: to test how effective the different architectures are at the same core and memory speed, with equal footing on the pipelines (16)? It shows the G70 architecture is more efficient.

superklye
10-10-05, 05:17 PM
But isn't that the whole point of the comparison: to test how effective the different architectures are at the same core and memory speed, with equal footing on the pipelines (16)? It shows the G70 architecture is more efficient.
Right, but I'm saying that we could already see that the 7800 is more efficient in the fact that ATI has to crank the speeds of the GPU and memory to extreme speeds just to, in some cases, match the speed of the much lower clocked GTX. If the GTX is getting roughly the same results as a card clocked much higher, doesn't that in itself prove that the GTX is more efficient?

Ruthless
10-10-05, 05:24 PM
Right, but I’m saying that we could already see that the 7800 is more efficient in the fact that ATI has to crank the speeds of the GPU and memory to extreme speeds just to, in some cases, match the speed of the much lower clocked GTX. If the GTX is getting roughly the same results as a card clocked much higher, doesn’t that in itself prove that the GTX is more efficient?

Yes, very true. I think most people already knew that, but it was mainly just guesswork; isn't it nice to have figures and a decent set of comparison results. I know it's a pretty useless exercise, something they say themselves in the conclusion, but for the sheer "interest" factor I found it very interesting that they:

a: used the newest betas on nzone, and the newest betas that ATI gave them.
b: downclocked the R520 to the same speeds as the G70.
c: reduced the pipes on the G70 to 16 to match the R520.

I mean the results are quite fascinating (well, to me anyway) as it's always something I wanted to see, and I'm surprised they published it as it doesn't exactly show the R520 in a wonderful light. However, we all know the R520 is kind of like the "Intel" in this test, needing higher frequencies to compete, so I know it's not a viable test of hardware so to speak, but interesting nonetheless.

I love the 7800GTX and the single slot design, NV really hit it on the head with these cards.

superklye
10-10-05, 05:30 PM
Yes, very true. I think most people already knew that, but it was mainly just guesswork; isn't it nice to have figures and a decent set of comparison results. I know it's a pretty useless exercise, something they say themselves in the conclusion, but for the sheer "interest" factor I found it very interesting that they:

a: used the newest betas on nzone, and the newest betas that ATI gave them.
b: downclocked the R520 to the same speeds as the G70.
c: reduced the pipes on the G70 to 16 to match the R520.

I mean the results are quite fascinating (well, to me anyway) as it's always something I wanted to see, and I'm surprised they published it as it doesn't exactly show the R520 in a wonderful light. However, we all know the R520 is kind of like the "Intel" in this test, needing higher frequencies to compete, so I know it's not a viable test of hardware so to speak, but interesting nonetheless.

I love the 7800GTX and the single slot design, NV really hit it on the head with these cards.
Oh, I definitely agree that it's very cool and quite interesting; I sure wish I was getting paid to do the tests. :)

Ruthless
10-10-05, 05:39 PM
Oh, I definitely agree that it’s very cool and quite interesting; I sure wish I was getting paid to do the tests. :)

hehe, me too, some people have all the luck, but I gotta say I was happy to read the findings; makes my purchase of the 7800GTX all the sweeter :afro: