View Full Version : NVIDIA GF100 Previews

Razor1
03-12-10, 10:36 PM
Same here....

Ah, good times!

shadow001
03-12-10, 11:31 PM
Whereever ATi's viral minions seek to sell ATi's pipe dreams to the unsuspecting public, I'll be there. ;)


We can say the same for the lack of DX11 parts from Nvidia for the last 6 months too...;)


And here's another little tidbit you're likely to ignore... ATI has launched their entire lineup of DX11 cards, ranging from $60 ultra-budget parts to enthusiast-level cards like the HD5970, and did so in a little over 4 months... How long will people have to wait for DX11 mainstream and budget cards based on the Fermi architecture?


Oh that's right, Nvidia's CEO said their DX10.1 cards are good enough... LOL.

shadow001
03-12-10, 11:38 PM
no just sayin :headexplode:

See, we still don't know the exact size of Fermi. If it's ~500mm2 then its transistors are packed just as tight as AMD's, but you can't really extrapolate performance from transistor count, because there are quite a few different things on that front which you are just generalizing, sorry to say.

Let me generalize something for you, and I'm sure you will think this is out in left field: because of Fermi's coherent L2 cache we get 10% extra performance. That makes some sense if it were in a CPU instead of a GPU, but doesn't really make sense overall, right? ;) I drew a wrong conclusion by over-generalizing from previous knowledge.


Actually no. If we assume that the ratio between caches (which can be packed tighter) and actual logic circuits is roughly the same as with Cypress, then the 3 billion transistor budget comes to a die size of roughly 467mm² at 40nm.
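
Quick back-of-the-envelope for anyone who wants to check that figure, using Cypress's published numbers (~2.15 billion transistors in a ~334mm² die) and assuming Fermi packs transistors at the same average density....The real answer shifts if Fermi's cache-to-logic ratio differs, so treat it as a ballpark:

# Scale Cypress's 40nm transistor density up to Fermi's 3.0B budget.
# Assumes Fermi's average packing density matches Cypress's (an assumption;
# a different cache-to-logic ratio would change the density).
cypress_transistors = 2.15e9   # AMD's published Cypress figure
cypress_die_mm2     = 334.0    # roughly 334 mm^2 at TSMC 40nm
fermi_transistors   = 3.0e9    # Nvidia's stated Fermi transistor budget

density = cypress_transistors / cypress_die_mm2          # ~6.4M transistors per mm^2
fermi_die_mm2 = fermi_transistors / density
print("Estimated Fermi die: %.0f mm^2" % fermi_die_mm2)  # ~466 mm^2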


But we don't know the size of the Fermi die, so it's hard to postulate since it's a secret, just like its power requirements, final performance figures, and even the price of the cards, and there's 2 weeks left until the official launch... Is Nvidia being a touch paranoid so close to the launch? I think so.


Edit: In any case, if Fermi's performance lead isn't proportional to the actual transistor budget difference between Cypress and Fermi, there are only 2 logical possibilities that I can think of here, but feel free to add more:


1: Cypress is more efficient overall for the transistor budget it uses, in terms of delivered performance in a gaming environment.

2: The overall efficiency is the same between Fermi and Cypress in a gaming environment, but the extra transistor budget that Fermi uses is aimed at professional graphics environments (high geometry workloads) and GP-GPU situations (higher double-precision math throughput for scientific research), with ECC memory support and large coherent caches.

3: ?

shadow001
03-12-10, 11:44 PM
What's to mention? More ATi related vaporware you're hoping will come to market?

Poor ATi always the follower.


More like allowing users to choose whatever 3D solution they want and having it supported, for both movies and 3D gaming... That's worth the wait.

Razor1
03-13-10, 01:04 AM
Actually no. If we assume that the ratio between caches (which can be packed tighter) and actual logic circuits is roughly the same as with Cypress, then the 3 billion transistor budget comes to a die size of roughly 467mm² at 40nm.

But you aren't accounting for Fermi's non-parallel processing hardware, which will take up die space and transistor budget; something like that could be considerable.

But we don't know the size of the Fermi die, so it's hard to postulate since it's a secret, just like its power requirements, final performance figures, and even the price of the cards, and there's 2 weeks left until the official launch... Is Nvidia being a touch paranoid so close to the launch? I think so.

Yeah, just like they were with the G80 ;) Come on, they have given us more information about the Fermi architecture before launch than any other GPU in the past.

Edit: In any case, if Fermi's performance lead isn't proportional to the actual transistor budget difference between Cypress and Fermi, there are only 2 logical possibilities that I can think of here, but feel free to add more:

1: Cypress is more efficient overall for the transistor budget it uses, in terms of delivered performance in a gaming environment.

2: The overall efficiency is the same between Fermi and Cypress in a gaming environment, but the extra transistor budget that Fermi uses is aimed at professional graphics environments (high geometry workloads) and GP-GPU situations (higher double-precision math throughput for scientific research), with ECC memory support and large coherent caches.

3: ?


I'm not going to postulate on that since we don't know the performance of Fermi outside of one benchmark :beer:

Xion X2
03-13-10, 01:31 AM
But you aren't looking into the non parallel processing of Fermi, which will take up die space and transistor budget, something like that could be considerable.

+1

Fermi is meant to be more than a gaming GPU. With how strong GPGPU is becoming for Nvidia, it only makes sense to go this way.

I own two 5970s like Shadow, but I do like the all-purpose chip that Fermi appears to be and will likely purchase one for my renderbox. Can anyone tell me from experience how well Nvidia's GPGPU works with CAD programs? More specifically, 3DSMax and Solidworks? I'll also use it for video editing in Premiere... though I've heard Premiere is mostly CPU-dependent.

And Shadow, for god's sake man, just let it go. This is an Nvidia forum, and the guys over here have a right to be enthusiastic about an upcoming card without you trying to crap all over it left and right. I now see you in every single thread doing the same thing. I don't know what your motivation is for doing it, but I get as sick of seeing it as I have of Rollo/JethroBodine doing it over on Rage3D.

You shouldn't stoop to such a level. You should just be happy with your two 5970s and quit worrying so much about Fermi before you lose all your hair and give yourself an ulcer.

Muppet
03-13-10, 01:34 AM
More like allowing users to choose whatever 3D solution they want and having it supported, for both movies and 3D gaming... That's worth the wait.

And so will Fermi. :p

shadow001
03-13-10, 01:46 AM
But you aren't accounting for Fermi's non-parallel processing hardware, which will take up die space and transistor budget; something like that could be considerable.


No doubt, but non-parallel processing has nothing to do with graphics work as a general principle, since the basic approach over the years has been to add more parallelism to the architecture to speed up performance in graphics workloads... more rendering pipelines, more shaders, more texture units, more ROPs, wider memory buses.


Non-parallel work is handled by CPUs, so it's Nvidia trying to adapt the Fermi architecture to suit more CPU-oriented workloads overall.



Yeah, just like they were with the G80 ;) Come on, they have given us more information about the Fermi architecture before launch than any other GPU in the past.


Actually, on the G80 front, Nvidia gave false information, or at least implied it, since one of their main engineers at the time said that it was too early to unify shaders as was rumored to be the case with the R600, and that they'd rather keep vertex and pixel shaders as separate hardware units, at least for the time being.

Lo and behold, the G80 was released a few months after that interview with, you guessed it, unified shaders, which really caught everyone by surprise.


With Fermi though, they've discussed a lot about its features for both GP-GPU work and gaming, but nothing regarding clocks, power use or performance yet. Anandtech, HardOCP and Guru3D, even as late as this January, were starting to get fed up with the lack of information on those issues, if for nothing else than to make a chart of the theoretical maximums that each architecture can achieve for shader power, fillrate, texturing rate and memory bandwidth, and get a better idea of its potential.


Guru3D even mentioned in an article, about 2~3 weeks old now, from information he's got and without revealing the actual performance numbers, that Fermi isn't going to be the mind-blowing product that some here are still expecting it to be relative to the HD5870 cards, and Hilbert doesn't screw around here:


http://www.guru3d.com/article/nvidia-geforce-470-480/ .... I'll quote and bold the juicy parts here:


What I will tell you is that the clock frequencies on these boards surprised me, the GeForce 470 seems to be clocked at roughly 650 MHz, that's lower than I expected. And that indeed will have an effect on performance. I think it's safe to say that the GeForce 470 and 480 will be worthy competitors to the Radeon HD 5850 and 5870. Will it be a knock-out? I doubt it very much. But is it important for NVIDIA to deliver a knockout to the competition? Well, they would hope so, but no... not really, as the current performance levels that ATI for example offers simply are superb already. Being six months late to the market does pose an issue; ATI will already be respinning and binning their upcoming products, clocked higher, and they could match NVIDIA in either price or performance.


Back to reality. We found out (and verified), surprisingly enough, that the GPU is already at revision A3, that's the third revision of the GPU; the GF100 has already had three builds. So yes, something was wrong, very wrong, alongside the initial yield issues. But we know that the products right now are in volume production. Will it be many weeks before we see good availability? Sure it will. Maybe April, or indeed May, is where things will start to make a difference. Performance will be very good; however, with the clocks I have seen (and only if they are final) I do believe they will not be brilliant. NVIDIA has many trump cards though; they have an outstanding driver team which will drive performance upwards fast and soon.

And that's where I'd like to end this little opinionated article. NVIDIA's Fermi is not, as I read somewhere, "a sinking ship", or to quote, "hot, buggy and far too slow". I have no doubt it will be a good product series, but I'll agree on this: NVIDIA likely would have wanted to squeeze some more performance out of it, as it was likely the most difficult product they have ever gotten to market; it has been fighting them all the way.


Feel free to read the entire article, though it's pretty clear that Fermi doesn't seem to be at the performance level Nvidia wanted it to be, and that it's been a painful birth to say the least.

shadow001
03-13-10, 02:02 AM
+1

Fermi is meant to be more than a gaming GPU. With how strong GPGPU is becoming for Nvidia, it only makes sense to go this way.

I own two 5970s like Shadow, but I do like the all-purpose chip that Fermi appears to be and will likely purchase one for my renderbox. Can anyone tell me from experience how well Nvidia's GPGPU works with CAD programs? More specifically, 3DSMax and Solidworks? I'll also use it for video editing in Premiere... though I've heard Premiere is mostly CPU-dependent.

And Shadow, for god's sake man, just let it go. This is an Nvidia forum, and the guys over here have a right to be enthusiastic about an upcoming card without you trying to crap all over it left and right. I now see you in every single thread doing the same thing. I don't know what your motivation is for doing it, but I get as sick of seeing it as I have of Rollo/JethroBodine doing it over on Rage3D.

You shouldn't stoop to such a level. You should just be happy with your two 5970s and quit worrying so much about Fermi before you lose all your hair and give yourself an ulcer.


I agree with you on all points, and it seems GP-GPU is where Fermi is at its best, but the information I post is from verifiable sources (like my last post above), and the damn thing is late as hell; that's indisputable at this point.


Rollo is just being Rollo by posting what he does and simply doing a truckload of damage control for Nvidia no matter what.

Razor1
03-13-10, 02:02 AM
No doubt, but non-parallel processing has nothing to do with graphics work as a general principle, since the basic approach over the years has been to add more parallelism to the architecture to speed up performance... more rendering pipelines, more shaders, more texture units, more ROPs, wider memory buses.



It does have to do with gaming when it comes to physics though; sorry, I should have said parallel kernels.
Things like AI can now be offloaded to the GPU too. These are things that haven't been talked about much, but soon the CPU will just be used for office applications ;)

Actually, on the G80 front, Nvidia gave false information, or at least implied it, since one of their main engineers at the time said that it was too early to unify shaders as was rumored to be the case with the R600, and that they'd rather keep vertex and pixel shaders as separate hardware units, at least for the time being.

Lo and behold, the G80 was released a few months after that interview with, you guessed it, unified shaders, which really caught everyone by surprise.

In almost every generation nV sends out misinformation to mess with possible leaks; AMD/ATi started doing this too with the RV770 ;). You probably haven't been around long enough or really paid attention to what is real and what isn't, and yeah, these companies do this quite often.

With Fermi though, they've discussed a lot about its features for both GP-GPU work and gaming, but nothing regarding clocks, power use or performance yet. Anandtech, HardOCP and Guru3D, even as late as this January, were starting to get fed up with the lack of information on those issues, if for nothing else than to make a chart of the theoretical maximums that each architecture can achieve for shader power, fillrate, texturing rate and memory bandwidth, and get a better idea of its potential.

Who cares who is getting fed up? You know what, if TSMC didn't have all these problems, I'm sure nV would have had this card out sooner rather than in a couple of weeks.



Guru3D even mentioned in an article, about 2~3 weeks old now, from information he's got and without revealing the actual performance numbers, that Fermi isn't going to be the mind-blowing product that some here are still expecting it to be relative to the HD5870 cards, and Hilbert doesn't screw around here:

http://www.guru3d.com/article/nvidia-geforce-470-480/ .... I'll quote and bold the juicy parts here:

Feel free to read the entire article, though it's pretty clear that Fermi doesn't seem to be at the performance level Nvidia wanted it to be, and that it's been a painful birth to say the least.



The clocks of the cards weren't finalized; not only that, he most likely saw an A2 sample :) and of the 470 at that. Please read the article yourself. I'm very familiar with it (I don't even need to re-read it), and he doesn't know the performance; he is guessing based on the clocks of what he saw, since he thought the clocks would be higher. I'm not sure what to say at this point, but you are definitely skewing every single thing you read.

Muppet
03-13-10, 02:15 AM
I agree with you on all points, and it seems GP-GPU is where Fermi is at its best, but the information I post is from verifiable sources (like my last post above), and the damn thing is late as hell; that's indisputable at this point.


Rollo is just being Rollo by posting what he does and simply doing a truckload of damage control for Nvidia no matter what.

What damage control are you on about? Other than the cards being late, there is none. For all any of us know, it could very well be as fast as or faster than the ATI 5800 series. If so, there is no damage at all, except for all the unsubstantiated drivel that has been posted about Nvidia. I mean, for goodness' sake, how about being at least a little fair in your posts?

For instance, Nvidia has had a working 3D solution (and it works bloody well) for a while now. How about ragging on ATI? Where is their answer? Stop being such a hardcore fanboy and stop being so biased.

shadow001
03-13-10, 02:33 AM
It does have to do with gaming when it comes to physics though; sorry, I should have said parallel kernels.
Things like AI can now be offloaded to the GPU too. These are things that haven't been talked about much, but soon the CPU will just be used for office applications ;)


I'm sure that both Intel and AMD are going to love that, since they're already offering multi-core CPUs on the market, with Intel releasing their latest insanity, the 6-core/12-thread Gulftown processors for desktops.


We've got tons of CPU power not getting used as it is, and while I'm sure Nvidia would love to see AI and physics processing done on the GPU, the question is why?... Let's use all the leftover power sitting idle on CPU cores first, and if that's not enough and GPUs can do even more sophisticated effects with even better performance, then let them take over... One step at a time, fully maxing out what's already inside every system first: a multi-core CPU.


In almost every generation nV sends out misinformation to mess with possible leaks; AMD/ATi started doing this too with the RV770 ;). You probably haven't been around long enough or really paid attention to what is real and what isn't, and yeah, these companies do this quite often.



I don't doubt it at this point, but it's the information we have. If the releases of both products were reasonably close to each other, it would make sense to wait and see which is the best one and go from there, but they aren't even close release-date wise, and spending months waiting for it when there's something as good as what ATI released?... No way.




Who cares who is getting fed up? You know what, if TSMC didn't have all these problems, I'm sure nV would have had this card out sooner rather than in a couple of weeks.


So if TSMC's 40nm process is that screwed up, how come ATI managed to release much earlier? It is a 2+ billion transistor part, after all... Sorry, but I don't buy that; there had to be bugs to iron out within Fermi's architecture to require 3 respins, so laying the fault exclusively on TSMC doesn't fly here.



The clocks of the cards weren't finalized; not only that, he most likely saw an A2 sample :) and of the 470 at that. Please read the article yourself. I'm very familiar with it (I don't even need to re-read it), and he doesn't know the performance; he is guessing based on the clocks of what he saw, since he thought the clocks would be higher. I'm not sure what to say at this point, but you are definitely skewing every single thing you read.


It's not just that site either... The Tech Report article I linked to earlier in the thread estimated that Fermi would operate at 725MHz, and even at those clocks it's still down on raw shader power (single precision) and texturing ability, and about 20% faster on raw fillrate, compared to Cypress, remember?... The numbers don't seem to be there for a crushing victory.
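
If anyone wants to check the math on those theoretical maximums, here's a rough sketch assuming the rumored full GF100 configuration (512 shaders at twice the 725MHz core clock, 64 TMUs and 48 ROPs at core clock)....The exact ratios will obviously shift with whatever the final clocks and unit counts turn out to be:

# Theoretical peak throughput from rumored specs -- nothing here is confirmed by Nvidia.
def peaks(name, shaders, shader_mhz, tmus, rops, core_mhz):
    gflops  = shaders * 2 * shader_mhz / 1000.0  # 2 flops per shader per clock (MADD)
    gtexels = tmus * core_mhz / 1000.0           # bilinear texels per second
    gpixels = rops * core_mhz / 1000.0           # pixels written per second
    print("%s: %.0f GFLOPS, %.1f GTex/s, %.1f GPix/s" % (name, gflops, gtexels, gpixels))

peaks("GF100 (rumored)", 512, 1450, 64, 48, 725)  # ~1485 GFLOPS, 46.4 GTex/s, 34.8 GPix/s
peaks("Cypress HD5870", 1600, 850, 80, 32, 850)   # ~2720 GFLOPS, 68.0 GTex/s, 27.2 GPix/s

On paper that leaves Fermi behind on single-precision shader power and texturing, and ahead only on fillrate, which is roughly the picture the Tech Report numbers painted.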


At some point,Nvidia has to show the real performance,so playing it close to the chest like this only raises expectations from users,especially given the amount of time that has already passed.

shadow001
03-13-10, 02:39 AM
What damage control are you on about? Other than the cards being late, there is none. For all any of us know, it could very well be as fast as or faster than the ATI 5800 series. If so, there is no damage at all, except for all the unsubstantiated drivel that has been posted about Nvidia. I mean, for goodness' sake, how about being at least a little fair in your posts?

For instance, Nvidia has had a working 3D solution (and it works bloody well) for a while now. How about ragging on ATI? Where is their answer? Stop being such a hardcore fanboy and stop being so biased.


1: Disregarding that ATI has the fastest cards on the market.
2: That they were released 6 months ago.
3: That gaming across 3 screens doesn't count because you have to settle for playing in "inferior" 2D mode.
4: That playing with GPU physics support in the 10-odd games that actually use it is somehow a huge deal.
5: That ATI has an entire lineup of DX11 cards covering all price points on the market now.


Fine, Nvidia has a working 3D glasses solution, and that's been the case for a long time now; it's about the only thing where ATI still needs to get its ass in gear and have that as well.

Xion X2
03-13-10, 02:49 AM
and the damn thing is late as hell; that's indisputable at this point.

Dude, so what? That doesn't mean that you need to keep shouting it in people's ears constantly. Just what are you trying to accomplish by doing that?

Rollo is just being Rollo by posting what he does and simply doing a truckload of damage control for Nvidia no matter what.

True, but this is his backyard and so he's somewhat entitled to it. Doesn't mean he's right, but this forum is Nvidia-based, y'know. Look at it from the perspective of other Nvidia owners, though. You should respect the fact that they don't want to hear some ATI cheerleader go at it on their board 24/7 any time that any mention of Fermi is brought up. Let them have their space, and let them enjoy discussing it without you feeling the need to constantly put it down.

It's ok to discuss it occasionally and share your view, but what you're doing now is responding to EVERY thread and EVERY post and arguing against it. It's overbearing and disrespectful to the other members here. You criticize Rollo for it yet you're doing much the same things.

shadow001
03-13-10, 02:55 AM
Dude, so what? That doesn't mean that you need to keep shouting it in people's ears constantly. Just what are you trying to accomplish by doing that?



True, but this is his backyard and so he's somewhat entitled to it. Doesn't mean he's right, but this forum is Nvidia-based, y'know. Look at it from the perspective of other Nvidia owners, though. You should respect the fact that they don't want to hear some ATI cheerleader go at it on their board 24/7 any time that any mention of Fermi is brought up. Let them have their space, and let them enjoy discussing it without you feeling the need to constantly put it down.

It's ok to discuss it occasionally and share your view, but what you're doing now is responding to EVERY thread and EVERY post and arguing against it. It's overbearing and disrespectful to the other members here. You criticize Rollo for it yet you're doing much the same things.


Fair enough... I've said my opinion, and this will be my last reply in this thread... PM me from now on.


We'll see in 2 weeks... Off to bed.

Rollo
03-13-10, 07:55 AM
We can say the same for the lack of DX11 parts from Nvidia for the last 6 months too...;)


And here's another little tidbit you're likely to ignore... ATI has launched their entire lineup of DX11 cards, ranging from $60 ultra-budget parts to enthusiast-level cards like the HD5970, and did so in a little over 4 months... How long will people have to wait for DX11 mainstream and budget cards based on the Fermi architecture?


Oh that's right, Nvidia's CEO said their DX10.1 cards are good enough... LOL.

Apparently he's right, as ATi lost market share to NVIDIA last quarter. :)

Of course, you're ignoring the fact that delays of high-end parts don't equate to delays of scaled-down variants or refreshes, again (because it serves your ATi viral-marketer goals to do so).

K007
03-13-10, 08:07 AM
still much lol in this thread

NIGELG
03-13-10, 08:17 AM
Apparently he's right, as ATi lost market share to NVIDIA last quarter. :)

Of course, you're ignoring the fact that delays of high-end parts don't equate to delays of scaled-down variants or refreshes, again (because it serves your ATi viral-marketer goals to do so).

Why can't you guys state your facts and opinions without the personal accusations and name-calling?? This is the type of behaviour that does neither of you any good. Words like 'viral marketer' and 'shill' do nothing but cause combat. You can argue ATI vs. nVIDIA in a much more civilized way.

Rollo
03-13-10, 08:22 AM
Dude, so what? That doesn't mean that you need to keep shouting it in people's ears constantly. Just what are you trying to accomplish by doing that?



True, but this is his backyard and so he's somewhat entitled to it. Doesn't mean he's right, but this forum is Nvidia-based, y'know. Look at it from the perspective of other Nvidia owners, though. You should respect the fact that they don't want to hear some ATI cheerleader go at it on their board 24/7 any time that any mention of Fermi is brought up. Let them have their space, and let them enjoy discussing it without you feeling the need to constantly put it down.

It's ok to discuss it occasionally and share your view, but what you're doing now is responding to EVERY thread and EVERY post and arguing against it. It's overbearing and disrespectful to the other members here. You criticize Rollo for it yet you're doing much the same things.

+1,000,000

We have a winner. Shadow001 is obviously an undercover ATi viral marketer; as much as he likes to brag about his hardware, he would have just posted the receipt to smack me down if he had one.

The number of words he's posted in this thread alone likely surpasses most paperback novels, and for what reason?

To make sure people know the 5870s came out first, and to spread any possible FUD about Fermi? Uh huh. What sane person would spend the amount of effort he has for no payback other than the satisfaction of knowing he might have cost NVIDIA a sale?

volumes of FUD + highest possible ATi rig with no receipts = ATi viral marketer

Revs
03-13-10, 08:39 AM
Why can't you guys state your facts and opinions without the personal accusations and name calling??This is the type of behaviour that does does neither of you any good.Words like 'viral marketer' and 'shill' do nothing but cause combat.You can argue 'ATI VS nVIDIA' in a much more civilized way.

+1 That's why I'm mostly staying away from this thread, and every other GF100 thread here actually. Full of bitching and BS and nothing of interest.

Welcome to the forum, bud :lol:

Villa
03-13-10, 09:53 AM
+1,000,000

We have a winner. Shadow001 is obviously an undercover ATi viral marketer; as much as he likes to brag about his hardware, he would have just posted the receipt to smack me down if he had one.

The number of words he's posted in this thread alone likely surpasses most paperback novels, and for what reason?

To make sure people know the 5870s came out first, and to spread any possible FUD about Fermi? Uh huh. What sane person would spend the amount of effort he has for no payback other than the satisfaction of knowing he might have cost NVIDIA a sale?

volumes of FUD + highest possible ATi rig with no receipts = ATi viral marketer

Actually, weren't you the undercover viral marketer who banned people on the Anandtech forums if they said anything negative about Nvidia?

grey_1
03-13-10, 10:19 AM
+1 That's why I'm mostly staying away from this thread, and every other GF100 thread here actually. Full of bitching and BS and nothing of interest.

Welcome to the forum, bud :lol:

Exactly.

Rollo
03-13-10, 10:26 AM
Actually, weren't you the undercover viral marketer who banned people on the Anandtech forums if they said anything negative about Nvidia?

LOL, welcome to 2006. Maybe I should break this 4-year-old "news" on other forums as well; no one has ever heard of the days when the focus group members didn't disclose their affiliations in the gaming community! :rolleyes:

Can you think of someone other than me who is better qualified to spot a poster with an agenda after my years of experience moderating on NZONE and my years in the focus group in the pre-disclosure period?

Villa
03-13-10, 10:33 AM
LOL, welcome to 2006. Maybe I should break this 4-year-old "news" on other forums as well; no one has ever heard of the days when the focus group members didn't disclose their affiliations in the gaming community! :rolleyes:

Can you think of someone other than me who is better qualified to spot a poster with an agenda after my years of experience moderating on NZONE and my years in the focus group in the pre-disclosure period?

So now we're supposed to believe that you've changed, that you no longer lie in public forums? Sorry, I don't buy it. You would still be undercover if Geo and others hadn't called you all out.

NIGELG
03-13-10, 10:44 AM
LOL, welcome to 2006. Maybe I should break this 4-year-old "news" on other forums as well; no one has ever heard of the days when the focus group members didn't disclose their affiliations in the gaming community! :rolleyes:

Can you think of someone other than me who is better qualified to spot a poster with an agenda after my years of experience moderating on NZONE and my years in the focus group in the pre-disclosure period?

Pots and kettles?? Anyhow, the only NVIDIA card I'm interested in is the 470, and I would like to know the price and whether it's the same as or better than a 5850.