
View Full Version : First benchmarks of GTX 470



AT1st
03-04-10, 12:43 PM
Hi,
Heise got their hands on the first benchmarks of a pre-production GTX 470 today at CeBIT.
The whole article (in German) can be found here: http://www.heise.de/newsticker/meldung/Nvidias-Fermi-Leistung-der-GeForce-GTX-470-enthuellt-946411.html

Summary:
3DMark Vantage (Extreme preset):
GeForce GTX 470: 7511 points
Radeon HD 5870: 8730 points
Radeon HD 5850: 6430 points
GeForce GTX 285: 6002 points

3DMark Vantage (Performance preset):
GeForce GTX 470: 17156 points
Radeon HD 5870: 17303 points
Radeon HD 5850: 14300 points

Unigine-Benchmarks (DirectX 11 & Tessellation):
GeForce GTX 470: 29 fps
Radeon HD 5870: 27 fps
Radeon HD 5850: 22 fps

with 8x AA:
GeForce GTX 470: 20 fps
Radeon HD 5870: 23 fps
Radeon HD 5850: 19 fps

Clockspeeds:
Shader: 1255 MHz
GDDR5 RAM: 1600 MHz (read/write clock)

So rather disappointing I guess...

Oh and I'm sorry, I just saw that those numbers were already posted in the gf100 preview thread. I missed the post when I looked there before making a new topic...

Redeemed
03-04-10, 12:46 PM
Hi,
Heise got their hands on the first benchmarks of a pre-production GTX 470 today at CeBIT.
The whole article (in German) can be found here: http://www.heise.de/newsticker/meldung/Nvidias-Fermi-Leistung-der-GeForce-GTX-470-enthuellt-946411.html

Summary:
3DMark Vantage (Extreme preset):
GeForce GTX 470: 7511 points
Radeon HD 5870: 8730 points
Radeon HD 5850: 6430 points
GeForce GTX 285: 6002 points

3DMark Vantage (Performance preset):
GeForce GTX 470: 17156 points
Radeon HD 5870: 17303 points
Radeon HD 5850: 14300 points

Unigine-Benchmarks (DirectX 11 & Tessellation):
GeForce GTX 470: 29 fps
Radeon HD 5870: 27 fps
Radeon HD 5850: 22 fps

with 8x AA:
GeForce GTX 470: 20 fps
Radeon HD 5870: 23 fps
Radeon HD 5850: 19 fps

Clockspeeds:
Shader: 1255 MHz
GDDR5 RAM: 1600 MHz (read/write clock)

So rather disappointing I guess...

If there's any truth to that, it means nVidia's 2nd-tier part will compete with ATi's top single-GPU card. The 480 will have to compete with the 5970. However, at that point nVidia isn't leaving themselves a lot of room for a dual-GPU part. Maybe they don't plan one this gen?

shadow001
03-04-10, 01:18 PM
If there's any truth to that, it means nVidia's 2nd-tier part will compete with ATi's top single-GPU card. The 480 will have to compete with the 5970. However, at that point nVidia isn't leaving themselves a lot of room for a dual-GPU part. Maybe they don't plan one this gen?


If those numbers are accurate, the extra ~14% in shader power and ~20% extra memory bandwidth for the GTX 480, from having the full 512 shaders and the 384-bit bus, simply isn't enough to go after the HD 5970 performance-wise and actually challenge it in a meaningful way.

If the GTX 480 really does use close to 300 watts as the leaks suggest, which is the maximum board power the PCIe spec allows, then making a dual-GPU variant using GTX 480 chips is flat out of the question.

Even using GTX 470 GPUs it would already be pretty hard: the HD 5870 uses 188 watts as it is, and the dual-GPU variant clocks in at 294 watts, but only after the clocks were lowered to 750/1000 (850/1200 is stock), and ATI uses cherry-picked Cypress chips running at 1.05 volts to make the HD 5970 possible.

Available information suggests the GTX 470 clocks in at 220 watts, so a dual-GPU card using those would need even more drastic measures than ATI took with the HD 5970 (disabling hardware inside the GPU), and at that point, would it still beat the HD 5970 performance-wise?

My own opinion is that Fermi needs 28nm in a big way: to cut power consumption significantly, increase yields, potentially raise core clock speeds, and make it technically possible to build a dual-GPU card and stay under that 300-watt limit... It's simply too big and power hungry to allow that while still built at 40nm.
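For what it's worth, the power arithmetic behind that argument can be sketched quickly. This is back-of-the-envelope math using the board-power figures quoted above, all of which are rumored or approximate:

```python
# Back-of-the-envelope dual-GPU power check, using the (rumored/approximate)
# board-power figures from the post above.
PCIE_LIMIT_W = 300           # maximum board power allowed by the PCIe spec

gtx470_board_w = 220         # rumored GTX 470 board power
hd5870_board_w = 188         # HD 5870 board power
hd5970_board_w = 294         # HD 5970 board power (downclocked, undervolted pair)

# Two chips at stock power, with no cuts at all:
naive_dual_470 = 2 * gtx470_board_w    # 440 W
naive_dual_5870 = 2 * hd5870_board_w   # 376 W

# Fraction of power each naive design would have to shed to fit the spec:
cut_needed_470 = 1 - PCIE_LIMIT_W / naive_dual_470     # ~32%
cut_needed_5870 = 1 - PCIE_LIMIT_W / naive_dual_5870   # ~20%

print(f"dual GTX 470: {naive_dual_470} W, needs a ~{cut_needed_470:.0%} cut")
print(f"dual HD 5870: {naive_dual_5870} W, needs a ~{cut_needed_5870:.0%} cut")
```

ATI only had to shave roughly 20% off a naive dual-5870 to land at the 5970's 294 W; a dual-470 card would have to shed half again as much, which is why it would take even more drastic cuts than the 5970 needed.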

Redeemed
03-04-10, 02:03 PM
If those numbers are accurate, the extra ~14% in shader power and ~20% extra memory bandwidth for the GTX 480, from having the full 512 shaders and the 384-bit bus, simply isn't enough to go after the HD 5970 performance-wise and actually challenge it in a meaningful way.

If the GTX 480 really does use close to 300 watts as the leaks suggest, which is the maximum board power the PCIe spec allows, then making a dual-GPU variant using GTX 480 chips is flat out of the question.

Even using GTX 470 GPUs it would already be pretty hard: the HD 5870 uses 188 watts as it is, and the dual-GPU variant clocks in at 294 watts, but only after the clocks were lowered to 750/1000 (850/1200 is stock), and ATI uses cherry-picked Cypress chips running at 1.05 volts to make the HD 5970 possible.

Available information suggests the GTX 470 clocks in at 220 watts, so a dual-GPU card using those would need even more drastic measures than ATI took with the HD 5970 (disabling hardware inside the GPU), and at that point, would it still beat the HD 5970 performance-wise?

My own opinion is that Fermi needs 28nm in a big way: to cut power consumption significantly, increase yields, potentially raise core clock speeds, and make it technically possible to build a dual-GPU card and stay under that 300-watt limit... It's simply too big and power hungry to allow that while still built at 40nm.

I'm no GPU-architect engineering guru, but...

It seems I recall the 7800 GTX was underwhelming, and its refresh on the same process, the 7900 GTX, was leaps and bounds better. If nVidia managed it with the 7900, why couldn't they here? Who knows what's going on under the hood; maybe a lot of power leakage? I mean, there's a reason it's inefficient. It's possible, I'd imagine, they could fix this with a refresh? Maybe? :o

shadow001
03-04-10, 02:26 PM
I'm no GPU-architect engineering guru, but...

It seems I recall the 7800 GTX was underwhelming, and its refresh on the same process, the 7900 GTX, was leaps and bounds better. If nVidia managed it with the 7900, why couldn't they here? Who knows what's going on under the hood; maybe a lot of power leakage? I mean, there's a reason it's inefficient. It's possible, I'd imagine, they could fix this with a refresh? Maybe? :o


Well for one, you're mentioning a GPU with about 350 million transistors, and those GPUs used a lot less power to begin with, so it was easier on the cooling front as well, while Fermi is a 3-billion-transistor processor after all, is much larger, and looks to use a lot more power.

The only chip that was close to Fermi's size in Nvidia's history was the original G80 GPU, when built at the 65 nm process, at about 576 mm², and it also used a 6 + 8 pin PCIe power connector arrangement, drawing about 225 watts. But even then, the G80 was about 780 million transistors, give or take.

Fermi is a 3-billion-transistor monster, and while the 40nm process reduces power consumption to a nice degree and makes the die small enough (relatively speaking) to build in volume, there are limits in the end, and there is an 850-million-transistor difference between it and ATI's Cypress chip on that very same TSMC process.

Regardless of how Fermi ends up in terms of gaming performance, that's still 850 million extra transistors being powered up and generating heat as they operate, so power consumption is higher, yields are lower (even if 100% of the chips were good), and so are the problems with keeping it running cool or trying to make a dual-GPU version of it.

Transitioning from 40nm down to 28nm would cut Fermi's die size to about half of what it is at 40nm, so we'd see it shrink to about 250 mm² (assuming Fermi is slightly over 500 mm² at 40nm), which is a large drop and allows options that simply aren't feasible at 40nm, even if no other changes are made to the architecture itself, which there will be for sure.
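That area estimate is just geometric scaling: die area goes with the square of the linear feature size. A quick sanity check, assuming an ideal shrink (real process transitions never scale quite this cleanly):

```python
# Ideal die-area scaling from 40 nm to 28 nm: area scales with the
# square of the linear dimension.
scale = (28 / 40) ** 2       # = 0.49, i.e. roughly half the area

fermi_40nm_mm2 = 530         # assumed: "slightly over 500 mm^2" per the post
fermi_28nm_mm2 = fermi_40nm_mm2 * scale

print(f"area scale factor: {scale:.2f}")
print(f"estimated Fermi die at 28 nm: ~{fermi_28nm_mm2:.0f} mm^2")
```

That lands right around the ~250 mm² figure mentioned above.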

Maverick123w
03-04-10, 02:28 PM
If it benches between the 5850 and the 5870 and comes in at $300, it will be a real winner. If it comes in at $400, it won't be.

hell_of_doom227
03-04-10, 02:36 PM
Excellent numbers. So the GTX 480 sits between the HD 5870 and HD 5870 X2, and considering what a joke Crossfire and the CCC crapola are, Nvidia owns them.

Can't wait to replace this garbage I currently own.

shadow001
03-04-10, 02:38 PM
If it benches between the 5850 and the 5870 and comes in at $300, it will be a real winner. If it comes in at $400, it won't be.


And since ATI's chips are smaller and only use a 256-bit bus, they can drop prices and still make money on each one they sell... Fermi, being that large and using a 384-bit bus, doesn't lend itself quite as well to price drops while still making money.

It's the HD 4870 versus GTX 280, or the HD 4890 versus GTX 285, fight all over again; only this time, unlike those previous examples, it seems Nvidia doesn't have a performance edge to charge a premium anymore.

Interesting times ahead, to say the least.

Ninja Prime
03-04-10, 02:38 PM
Excellent numbers. So the GTX 480 sits between the HD 5870 and HD 5870 X2, and considering what a joke Crossfire and the CCC crapola are, Nvidia owns them.

Can't wait to replace this garbage I currently own.

Edited by Muya: Ninja, can you stop with the personal attacks in this thread.

shadow001
03-04-10, 02:42 PM
Excellent numbers. So the GTX 480 sits between the HD 5870 and HD 5870 X2, and considering what a joke Crossfire and the CCC crapola are, Nvidia owns them.

Can't wait to replace this garbage I currently own.


Why am I not surprised you'd say that... :D :p

It would be more like slightly edging out the HD 5870 while still being far away from the HD 5970, and I get the impression that once Nvidia does eventually release a dual-GPU version of Fermi, you won't be saying that SLI is crap, or complaining about not having SLI profiles as soon as new games are released.

Just my $0.02, though...

hell_of_doom227
03-04-10, 02:47 PM
Why am I not surprised you'd say that... :D :p

It would be more like slightly edging out the HD 5870 while still being far away from the HD 5970, and I get the impression that once Nvidia does eventually release a dual-GPU version of Fermi, you won't be saying that SLI is crap, or complaining about not having SLI profiles as soon as new games are released.

Just my $0.02, though...

SLI owns Crossfire: scaling is much better, and I've never complained about it. And Nvidia is so good with driver releases that in most cases they have an SLI profile before the game is even released. I'm not saying the HD 5870 is bad hardware, but its support sucks balls, making it not as usable as Nvidia cards are. AMD simply fails big time with CCC. I miss the game profiles in Forceware, where I can nicely force AA and other settings for each profile. Nvidia is also really good with SLI profile update releases.

Fermi SLI, 2x480, here I come!!!!

Maverick123w
03-04-10, 02:51 PM
And since ATI's chips are smaller and only use a 256-bit bus, they can drop prices and still make money on each one they sell... Fermi, being that large and using a 384-bit bus, doesn't lend itself quite as well to price drops while still making money.


Yeah, it may be difficult for Nvidia if AMD really gets aggressive with the pricing. It's obvious ATI can really drop the prices on their cards and still be profitable.


You're such an idiot. I can't fathom how any being with braincells can be this stupid.

Truest statement in the history of teh NVNEWZ?

shadow001
03-04-10, 02:52 PM
SLI owns Crossfire: scaling is much better, and I've never complained about it. And Nvidia is so good with driver releases that in most cases they have an SLI profile before the game is even released. I'm not saying the HD 5870 is bad hardware, but its support sucks balls, making it not as usable as Nvidia cards are. AMD simply fails big time with CCC. I miss the game profiles in Forceware, where I can nicely force AA and other settings for each profile. Nvidia is also really good with SLI profile update releases.

Fermi SLI, 2x480, here I come!!!!


I replied to that in the other thread just now... read up on it.

kaptkarl
03-04-10, 03:01 PM
You're such an idiot. I can't fathom how any being with braincells can be this stupid.

+1 - Agreed. I'm glad I'm not the only one on this forum who thinks this guy is an idiot. Wish he would please go somewhere else to post his garbage. I've never seen a meaningful post from him.

Back on topic... if this is accurate, then it looks like Fermi will have a very difficult time competing with ATI's best. I'm more curious about pricing...

Kapt

Razor1
03-04-10, 03:51 PM
Yeah it may be difficult for Nvidia if AMD really gets aggressive with the pricing. It's obvious ATI can really drop the prices on their cards and still be profitable.


Truest statement in the history of teh NVNEWZ?

No, there isn't that much difference in the BOM between these cards. If Fermi's rumored ~500 mm² is true, the chip is only about 1.5x bigger than Cypress's 335 mm², unlike the GTX 280, where ATi had a 2x die-size advantage. You're looking at possibly ~$20 between the two GPUs.
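The ratios are easy to check. A quick sketch using the die sizes from the post, plus the commonly cited GT200-generation figures (576 mm² for GT200, 256 mm² for RV770; treat all of these as approximate):

```python
# Die-size ratios: current generation vs. the GTX 280 era.
fermi_mm2 = 500      # rumored Fermi die size
cypress_mm2 = 335    # Cypress (HD 5870) die size

gt200_mm2 = 576      # commonly cited GT200 (GTX 280) die size
rv770_mm2 = 256      # commonly cited RV770 (HD 4870) die size

fermi_ratio = fermi_mm2 / cypress_mm2    # ~1.49x
gt200_ratio = gt200_mm2 / rv770_mm2      # 2.25x

print(f"Fermi vs Cypress: {fermi_ratio:.2f}x")
print(f"GT200 vs RV770:  {gt200_ratio:.2f}x")
```

So the die-size gap really did shrink from roughly 2.25x last generation to about 1.5x this time.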

fivefeet8
03-04-10, 04:28 PM
The only chip that was close to Fermi's size in Nvidia's history was the original G80 GPU, when built at the 65 nm process, at about 576 mm², and it also used a 6 + 8 pin PCIe power connector arrangement, drawing about 225 watts. But even then, the G80 was about 780 million transistors, give or take.


The original G80 used two 6-pin power connectors, was built on the 90 nm process, and was about 680 million transistors. Nvidia moved to the 65 nm node with the introduction of the G9x series. In any case, whether or not a refresh could alleviate the unconfirmed issues with Fermi is anyone's guess.

Toss3
03-04-10, 04:42 PM
Excellent numbers. So the GTX 480 sits between the HD 5870 and HD 5870 X2, and considering what a joke Crossfire and the CCC crapola are, Nvidia owns them.

Can't wait to replace this garbage I currently own.
Why don't you wait a couple of weeks 'til we get some real performance numbers from actual games before calling this an instant win for the green team?

Based on those benchmarks, I'd say the 400 series is going to be a disappointment. :( I was hoping to see the GTX 470 outperforming the 5870, as the latter has A LOT of OC headroom. I'm guessing the 400 series won't be the best overclockers due to their size.

shadow001
03-04-10, 06:39 PM
No, there isn't that much difference in the BOM between these cards. If Fermi's rumored ~500 mm² is true, the chip is only about 1.5x bigger than Cypress's 335 mm², unlike the GTX 280, where ATi had a 2x die-size advantage. You're looking at possibly ~$20 between the two GPUs.


What about the 384-bit bus then, which makes the PCB more complex to design and more expensive?

Or the fact that the GTX 480 needs 12 GDDR5 memory modules on each card because of the 384-bit bus, while the HD 5870 makes do with 8 GDDR5 modules per card (256-bit bus)?
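The module counts follow directly from the bus widths, since each GDDR5 device presents a 32-bit interface:

```python
# GDDR5 devices needed for a given memory bus width: each chip
# contributes a 32-bit slice of the bus.
def gddr5_chip_count(bus_width_bits: int) -> int:
    assert bus_width_bits % 32 == 0, "bus width must be a multiple of 32 bits"
    return bus_width_bits // 32

print(gddr5_chip_count(384))   # GTX 480: 12 chips
print(gddr5_chip_count(256))   # HD 5870: 8 chips
```

Four extra memory chips per board, plus the wider PCB routing, is real BOM cost on every card sold.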

shadow001
03-04-10, 06:45 PM
The original G80 used two 6-pin power connectors, was built on the 90 nm process, and was about 680 million transistors. Nvidia moved to the 65 nm node with the introduction of the G9x series. In any case, whether or not a refresh could alleviate the unconfirmed issues with Fermi is anyone's guess.


My bad, I confused it with the GT200. Sorry.

In any case, it's been proven that with their last three major GPU releases, Nvidia likes to go with a large die (G80, GT200, and now Fermi)... That in itself isn't bad, as long as they're absolutely sure it'll deliver the performance needed to beat the competition, so they can charge a premium to offset the related production costs.

If it doesn't, and a smaller chip can actually put up a fight in performance, is cheaper to make, and also allows more options from the start (a dual-GPU card), then it screws up the overall plans Nvidia had for Fermi, at least in the early going for the GeForce variant. I'll be perfectly honest here and say I'm not sure the gaming version will ever be available in high volumes, as long as it's still more expensive to make than the competition.

Nvidia is in business to make money, and if that means the bulk of Fermi chips go towards the Tesla and Quadro markets, where the profit margins on each card they sell are much larger, then you can bet that's exactly what they'll do.

shadow001
03-04-10, 07:09 PM
More results for Fermi, this time with 3 monitors on Dirt 2, supposedly using two GTX 480s in SLI.

http://vr-zone.com/forums/572464/gtx-480-unigine-and-3d-surround-demo-by-nvidia.html

Here's my result for Unigine at those same settings:

http://i765.photobucket.com/albums/xx298/Superfly101_02/CrysisnoAA.jpg

Actually, mine is at 1920x1200, not 1920x1080, and I'm running with 4x AF, not 1x AF.

Kemo
03-04-10, 07:13 PM
Why am I not surprised that these new "Fermi" cards aren't as awesome as Nvidia made them out to be. Even if these numbers aren't accurate, I still don't think we'll see huge changes in technology (like the 8800 GTX was). The 8 series had a 384-bit bus, then the 9 series had 256-bit, and the 200 series had up to 512-bit, which was awesome. Now they're going backwards? I understand the faster memory and higher number of cores, but I have a feeling that Nvidia is going down the greed path and only cares about money, not quality.

shadow001
03-04-10, 07:34 PM
I did a benchmark run with the same settings shown at the link I posted in my previous post:

http://i765.photobucket.com/albums/xx298/Superfly101_02/Capture4.jpg

Since Nvidia seems to insist tessellation is the big thing with Fermi, I'm not exactly underpowered in that area, to say the least... :D

Bring on at least 3 Fermi cards in triple SLI if you want to beat that result.

Toss3
03-04-10, 08:02 PM
I did a benchmark run with the same settings shown at the link I posted in my previous post:

Since Nvidia seems to insist tessellation is the big thing with Fermi, I'm not exactly underpowered in that area, to say the least... :D

Bring on at least 3 Fermi cards in triple SLI if you want to beat that result.

Unless the card is going to be around $600, you really shouldn't compare it to a 5970 (we don't know anything about pricing yet). I'm guessing it's going to be priced right between the 5870 and 5970.
Wish ATi would get their butts into gear with 3D gaming, as that might make me switch over to the green team once again.

shadow001
03-04-10, 09:02 PM
Unless the card is going to be around $600, you really shouldn't compare it to a 5970 (we don't know anything about pricing yet). I'm guessing it's going to be priced right between the 5870 and 5970.
Wish ATi would get their butts into gear with 3D gaming, as that might make me switch over to the green team once again.


Put it this way: if they want $600 for it, not so much on the basis of performance but because it has CUDA, PhysX, and 3D glasses support, I feel they're going to have one hell of an uphill marketing battle trying to sell these to people who don't really care whether there's an Nvidia or ATI card in their system; they just want the fastest thing.

This has been the pattern for the past 10+ years... No company holds the performance lead forever; it comes and goes, regardless of what the marketing departments from either company say, since basically only ATI and Nvidia are left...

To be honest, I miss the older days when 3dfx was still around and Matrox also had gamer-oriented cards, as well as PowerVR, and 5-way video card comparisons between five different vendors were fun and completely unpredictable in terms of who was the fastest overall.

Iruwen
03-05-10, 03:38 AM
I did a benchmark run with the same settings shown at the link I posted in my previous post:

http://i765.photobucket.com/albums/xx298/Superfly101_02/Capture4.jpg

Since Nvidia seems to insist tessellation is the big thing with Fermi, I'm not exactly underpowered in that area, to say the least... :D

Bring on at least 3 Fermi cards in triple SLI if you want to beat that result.

And they probably benched with v1.1, which runs much smoother.