NVIDIA GF100 Previews

shadow001
03-11-10, 12:15 AM
We have SLI and CrossFire on the same motherboards now...it's only right that PhysX should be for everyone...seems kind of childish if you ask me.


Don't kid yourself there. The only reason we have both CrossFire and SLI support on X58 chipsets is that Intel didn't allow Nvidia to build chipsets for its i3/i5/i7 based systems, at least until the legal case gets argued in court, and Nvidia couldn't let ATI enjoy the advantage of multi-GPU on the fastest platforms currently on the market all to themselves.


But X48 and earlier chipsets that have 2 PCI-e x16 slots can support both CrossFire and SLI as well, since the only thing blocking it was in the BIOS....No special hardware is needed to make SLI work at the motherboard level, and this was another way for Nvidia to force SLI users onto motherboards with their chipsets.


If Nvidia does win the court case, I'd bet a week's paycheck that SLI gets locked down to their chipsets once again, as it was in the past.

XMAN52373
03-11-10, 01:16 AM
And here's the kicker: Nvidia actually blocks calculating that physics for people who might actually have the CPU for it (think i7 processors, using those 4 hyperthreaded cores to do it), or even using an ATI graphics card for rendering with an Nvidia one just to calculate the physics portion....Nope, they don't allow that either, so they want users to use Nvidia hardware for it exclusively, with no real technical hurdle making the other options impossible.

The devs of Metro 2033 would like to disagree with you greatly. They have been using the PhysX SDK since they started working on the game and have updated its library as it has come along. They have stated that PhysX DOES and WILL use multi-core, multi-thread capable CPUs for the PhysX calculations, and when available, the heavier physics work can and will be moved to the GPU while STILL taking full advantage of the CPU.

Now if they can ****ing do it, it tells me the other devs are just too damn ****ing lazy to implement it.
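To illustrate the idea, here's a rough Python sketch of that kind of CPU/GPU split (the task names, particle counts, and cutoff are all made up for illustration; the real PhysX SDK is a C++ library and its API looks nothing like this): cheap effects stay on a pool of CPU worker threads sized to the core count, and only the heavy batches get pushed to the GPU.

# Rough sketch of CPU/GPU physics load balancing (hypothetical task
# names and threshold; not the actual PhysX SDK API).
from concurrent.futures import ThreadPoolExecutor
import os

GPU_COST_THRESHOLD = 5000  # particles; assumed cutoff, purely illustrative

def solve_on_cpu(batch):
    # cheap effects (cloth flutter, small debris) stay on CPU cores
    return f"CPU solved {batch['name']} ({batch['particles']} particles)"

def solve_on_gpu(batch):
    # heavy effects (large particle/fluid systems) get offloaded
    return f"GPU solved {batch['name']} ({batch['particles']} particles)"

def dispatch(batches):
    # CPU pool sized to the machine, as a multi-core physics build might be
    with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
        futures = []
        for b in batches:
            solver = solve_on_gpu if b["particles"] > GPU_COST_THRESHOLD else solve_on_cpu
            futures.append(pool.submit(solver, b))
        return [f.result() for f in futures]

if __name__ == "__main__":
    work = [{"name": "cloth", "particles": 800},
            {"name": "debris", "particles": 1200},
            {"name": "smoke volume", "particles": 250000}]
    for line in dispatch(work):
        print(line)

The point is just that nothing about the dispatch itself requires one vendor's hardware; the routing decision is a policy choice.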

As to PhysX on Nvidia GPUs being disabled when paired with ATI cards, I'm with you on it, but ATI shouldn't be let off the hook. They have been talking up a storm about how OCL/DC/Brook+ and Stream will allow physics to be load-balanced across GPUs and CPU. And in the 3+ years they have been talking up this storm, they have produced a SINGLE ****ING DEMO 18 months ago and **** else. So to ATI and ATI users I say this: either ****ing put up and work with game devs to make this bear fruit, or SHUT THE **** UP and stop bitching because Nvidia users get to see the PhysX effects and you don't.

Iruwen
03-11-10, 03:02 AM
One area that AMD has Nvidia beat is in actual product availability Rollo :)

Availability > feature set.

The HD 5000 series has been a real paper launch. Even if Fermi availability is bad at launch, it can never be that bad.

Fotis
03-11-10, 03:16 AM
The HD 5000 series has been a real paper launch. Even if Fermi availability is bad at launch, it can never be that bad.

Actually it wasn't that bad. Got one in early October, and for only 300.

Iruwen
03-11-10, 05:41 AM
Lucky one.

onmikesline
03-11-10, 07:45 AM
The HD 5000 series has been a real paper launch. Even if Fermi availability is bad at launch, it can never be that bad.

You're joking, right? Paper launch? lol, I see a fanboy. I'm not saying I'm an ATI fan, because I've always had NV, so the 5 series was my first go, and guess what, it's great. My electric bill went down 30 bucks every month, and not just that, my office isn't as hot anymore, and it runs everything on max at 2560x1600.

Revs
03-11-10, 08:08 AM
Dual Geforce 400 to come later...

http://www.fudzilla.com/content/view/18038/65/

Iruwen
03-11-10, 09:42 AM
You're joking, right? Paper launch? lol, I see a fanboy. I'm not saying I'm an ATI fan, because I've always had NV, so the 5 series was my first go, and guess what, it's great. My electric bill went down 30 bucks every month, and not just that, my office isn't as hot anymore, and it runs everything on max at 2560x1600.

What the hell does this have to do with what I said?

lee63
03-11-10, 10:04 AM
I managed to get one on launch day and one a week later.....I bought a third to see how three will scale in games :D Who knows, I still might get a couple of Fermis.

I'm not loyal to any brand...I just love new hardware. I really don't understand why everyone gets all pissy and such XD

Razor1
03-11-10, 10:36 AM
All we're seeing so far are the PCBs for the GTX470 and GTX480 cards, with the latter using a 6 + 8 pin PCI-e power connector arrangement, which means a card that can be supplied with up to 300 watts if needed; the X2900XT card was actually the first to use that power configuration if you wanted to overclock it, and it did run hot and used a lot of power.


Only the GTX470 card, with its 6 + 6 pin power connectors, is basically limited to 225 watts according to the PCI-e specifications, so it'll be close to what Cypress uses, which clocks in at about 190 watts, while the GTX480 will go above that for sure, as its design says so right there if you know the PCI-e specifications....We just don't know where exactly in that 225 to 300 watt envelope.

Yeah, and the GTX 280 also had the same configuration, so what? It's not going to consume all 300 watts that it can actually draw. And I can say for a fact it won't draw 300 watts, because some people "know" things, kinda like "I see dead people".

It's mostly clocks while keeping within accepted power consumption and cooling limits, which are bounded by the electrical characteristics of the process itself obviously, and also by the actual transistor budget the new architecture will have. Hence it's logical that ATI try a simpler architecture that's well known to them on the new fabrication process, get to know what potential hurdles they might be facing, and adapt the design of whatever the new architecture is to suit the 28nm fabrication process....Stepping-stone approach.

And in the unconfirmed rumors dept, it seems that the higher volumes of Fermi that are going to ship starting in Q2 are actually B1 revision chips, a version that had new masks made for it and a different physical layout to counter the problems with TSMC's 40nm process.


I don't know why you only bring up ATi as the one that makes smaller chips on newer processes first. nV does too; what do you think the DX10.1 chips were all about?

No, when designing chips you don't really look at clocks and work yourself backwards. It's more like: this is the performance I want, this is the design of the chip, this is how many transistors it will take (which will have to fit into the transistor budget the process can handle), these are the clocks we will need to hit, then let's look at power and how much it consumes, is it in the thermal envelope, so on and so forth. Power characteristics of the chip depend on the design and the power characteristics of the silicon. What if the B1 stepping is the refresh high-end part? A GTX 485 maybe? I'm 99% sure nV won't have the availability problems that ATi had, at least not such a prolonged issue; maybe the first month availability will be an issue, but after that it should be just fine.

Your arguments tend to be very one-sided, and always about how what nV is doing is going to hurt them. Guess what: outside of Fermi and the nv30, nV executed perfectly.

Let's look at ATi: r200, r420, r520, r580, r600, r670; many more times ATi has had bad execution, from design issues, drivers, performance, marketing, availability.

edit: I know some things nV does that people don't like them for, but they are a company, and a damn good company, that works very tightly with every single department, and this is what makes them strong; they work as one unison voice, and this comes straight from their management. Jensen is a very smart guy and picked the right people to help him run nV; every single initiative they have done, from PR and marketing to engineering, drives toward strategic points which make sales, marketshare, and profitability. ATi really didn't have all this. AMD/ATi is a bit better, but AMD/ATi is up against two companies that are damn good at what they do, which means AMD/ATi now has to do it better than both these companies (Intel and nV) from an internal point of view to stay in the same competitive brackets. This is why we saw the rv670: they knew with the r600 there was no way they could compete, so they did the next best thing, cut down R&D by not designing a big chip and put two of them together, and the strategy worked with the rv770. Will it continue to work? From an execution point of view it worked with the rv870; now let's see what happens in the future.

Let's talk about advertising, PR, and marketing for a little bit. nV makes brands; they don't have singular things that just die out. Look at their naming conventions, look at how they advertise, look at how they target different regions, how they target developers; this is what makes branding. It's the holistic approach to marketing that drives sales and brand loyalty, and this is what ATi/AMD is missing. ATi can bitch all they want about PhysX, but nV makes it a point every single time ATi bitches, because you know ATi doesn't have anything they can show in the real world that can keep up. ATi can bitch all they want about the TWIMTBP program, but if they were capable (resource-wise) they would do it too. And when ATi bitches about these things it plays right into nV's hands, because ATi just doesn't have anything to counter nV with in the real world. It's great that ATi has or had plans, like GPU physics, which they started, but if you can't get those plans into reality then it doesn't mean anything. Did ATi drop the ball on physics on their GPUs? Yes and no. Yes, because they just didn't have the resources; no, because it's possible their GPUs might not be as good as nV's when it comes to physics, especially now that Fermi was made to excel at GPGPU. You have to understand the first iteration of anything won't be the best performer in all categories; it's more like a proof of concept which you build upon. The G80 was a damn good chip, with great gaming capabilities and many new features, one of those being GPGPU. ATi was first with GPGPU with Folding on the r520; what happened after the G80 came out?

Why do you think nV has so many more apps and demos on their dev rel website? It takes a lot of money to make all those. Why does nV make books like GPU Gems every year? These are things that look trivial to most, but they are the cornerstones of what makes nV's brands so strong.

fasedww
03-11-10, 10:40 AM
I just got another 5870 for TRI FIRE...I got my fix for now lol :D a week or two XD

Ya, that's cool. I've been thinking about going TRI FIRE myself, just would have to switch motherboards.:D

Rollo
03-11-10, 11:45 AM
Then if it's features you want, I can play Battlefield: Bad Company 2 right now on 3 monitors and enjoy a much wider and more immersive gameplay, especially if you're a helicopter pilot....Hell of an overall view of the map.


But I guess it's a useless feature that can be used with a single card (no need for SLI), and physics is way more important....Cough.


I've got 3 Samsung 2233RZs just waiting the last few weeks to do the same thing. Except I'll be doing it in 3d of course. If I were to do it in plain old 2d like you, I'd be gaming at 120Hz for smoother game play and higher framerates. ;)

BTW - if you asked ME for receipts, I'd be happy enough to post receipts for two of the monitors and say I got the other one free from NVIDIA. (but I don't have anything to hide <ahem>)

shadow001
03-11-10, 12:37 PM
Yeah, and the GTX 280 also had the same configuration, so what? It's not going to consume all 300 watts that it can actually draw. And I can say for a fact it won't draw 300 watts, because some people "know" things, kinda like "I see dead people".

I never said it would either; I said it would be somewhere between 225 and 300 watts....At a guess, given the extra 64 shaders, the extra memory controller, and 2 extra GDDR5 memory modules, I'm going for the halfway mark and saying 250~260 watts for the GTX480 cards.
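For reference, the spec math behind those ceilings is simple enough to check: the PCI-e spec allows 75W from the slot itself, 75W per 6-pin connector, and 150W per 8-pin connector. A trivial sanity-check snippet:

# PCI-e power ceilings per the spec: 75W from the slot,
# 75W per 6-pin connector, 150W per 8-pin connector.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

def board_power_limit(six_pins, eight_pins):
    return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

print("6+6 pin (GTX470):", board_power_limit(2, 0), "W")  # 225 W
print("6+8 pin (GTX480):", board_power_limit(1, 1), "W")  # 300 W

So the connector layout only tells you the ceiling, not the actual draw.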


And reviews will compare power use versus performance here, and the HD5870 does come in at 190 watts. If the performance of the GTX480 is only marginally better while using that extra power and an extra 850 million transistors, some are bound to ask difficult questions, such as whether Cypress is perhaps a more efficient architecture when it comes to gaming scenarios.



I don't know why you only bring up ATi as the one that makes smaller chips on newer processes first. nV does too; what do you think the DX10.1 chips were all about?

Simple: they were the first with 40nm chips on retail shelves, and those were released months before Nvidia released their DX10.1 chips, which, ironically, also suffered delays, since they were supposed to be released last summer too and were much simpler designs than Fermi overall, using the same 40nm TSMC process....It doesn't bode too well for Fermi, given its complexity, if Nvidia can't get simpler designs out on 40nm on time.


No, when designing chips you don't really look at clocks and work yourself backwards. It's more like: this is the performance I want, this is the design of the chip, this is how many transistors it will take (which will have to fit into the transistor budget the process can handle), these are the clocks we will need to hit, then let's look at power and how much it consumes, is it in the thermal envelope, so on and so forth. Power characteristics of the chip depend on the design and the power characteristics of the silicon. What if the B1 stepping is the refresh high-end part? A GTX 485 maybe? I'm 99% sure nV won't have the availability problems that ATi had, at least not such a prolonged issue; maybe the first month availability will be an issue, but after that it should be just fine.

Your arguments tend to be very one-sided, and always about how what nV is doing is going to hurt them. Guess what: outside of Fermi and the nv30, nV executed perfectly.


I fully agree with you there in principle, but how much die size and how large a transistor budget can a still-new 40nm process actually handle while even hoping for good yields? And while ATI does do the dual-GPU approach for the very highest end version, it also has the benefit of not pushing a brand new fab process to the very edge of its fabrication limits, as the GPUs are still reasonably sized overall and use a smaller transistor budget.


So did Nvidia simply aim too high with Fermi, going for large boosts to both GP-GPU and gaming performance in one chip, suffering delays from that decision, and also getting surprised by how fast Cypress would be performance-wise, at least in gaming scenarios?....I think it's obvious that it's a yes on both counts.


Bottom line is that eventually the 40nm process will be mature, and both ATI and Nvidia will get near 100% yields for both Cypress and Fermi, but since Cypress is a smaller chip, ATI gets more working chips out of each wafer, which can be installed and sold in more cards, and it looks like Fermi isn't much faster overall, so they can't charge a huge premium for the cards....The economic advantages of being smaller are obviously there; there's no way around that simple fact.



Let's look at ATi: r200, r420, r520, r580, r600, r670; many more times ATi has had bad execution, from design issues, drivers, performance, marketing, availability.


Yup, they did indeed, especially in execution, marketing and late availability, issues which have plagued the company on many occasions over the years; I won't doubt that. But it's also what made Nvidia's job a lot easier on many of those occasions, not having to do the absolute best job they could to still remain on top as having the best products on the market, or at the very least, showing up earlier.


But these last 2 years or so, it seems to me that ATI has greatly elevated their game, put a lot more pressure on Nvidia, held on to the performance crown a lot longer than ever before, and executed a lot better overall. So the pressure is definitely on Nvidia to deliver here, and it's starting to show quite obviously that they are indeed in for a hell of a fight.

shadow001
03-11-10, 12:42 PM
I've got 3 Samsung 2233RZs just waiting the last few weeks to do the same thing. Except I'll be doing it in 3d of course. If I were to do it in plain old 2d like you, I'd be gaming at 120Hz for smoother game play and higher framerates. ;)

BTW - if you asked ME for receipts, I'd be happy enough to post receipts for two of the monitors and say I got the other one free from NVIDIA. (but I don't have anything to hide <ahem>)

I use the 23.6" Acer ones, which also support 120Hz btw and 1920*1080 resolution, and can show Blu-ray at its native resolution...And in the end, you're still mentally imagining how the experience will be, and assuming you'll snag a pair of them from the very first day, while I'm actually playing it right now, even if only in pathetic 2D mode...;)


It's sort of like having GPU physics, but it's the other way around this time: the enemy company has it, and your favorite company doesn't.

Edit: Not to mention the little issue of me owning the highest performing cards on the market for quite a bit longer, it seems....Look at the article below, straight from Fuad.


http://www.fudzilla.com/content/view/18038/1/


No idea when a dual GPU variant might be released basically.

Razor1
03-11-10, 01:58 PM
I never said it would either; I said it would be somewhere between 225 and 300 watts....At a guess, given the extra 64 shaders, the extra memory controller, and 2 extra GDDR5 memory modules, I'm going for the halfway mark and saying 250~260 watts for the GTX480 cards.

It's actually right around what you stated, 250-260 watts, which isn't bad by any means; a bit higher than the GTX 280.


And reviews will compare power use versus performance here, and the HD5870 does come in at 190 watts. If the performance of the GTX480 is only marginally better while using that extra power and an extra 850 million transistors, some are bound to ask difficult questions, such as whether Cypress is perhaps a more efficient architecture when it comes to gaming scenarios.


You don't know (nor do I) what the performance of the top end Fermi is though ;). From what I've seen so far, the 470 is up against the HD 5870, and the leaked benchmarks aren't at finalized clocks and are possibly even A2 silicon, so I wouldn't put too much stock in anything we have seen so far.


Simple: they were the first with 40nm chips on retail shelves, and those were released months before Nvidia released their DX10.1 chips, which, ironically, also suffered delays, since they were supposed to be released last summer too and were much simpler designs than Fermi overall, using the same 40nm TSMC process....It doesn't bode too well for Fermi, given its complexity, if Nvidia can't get simpler designs out on 40nm on time.


ATi released their 40nm 2 months before, and had availability issues for 3 to 4 months as well. Then we saw what happened with the HD 5870, which also had availability issues for a few months after its release.


I fully agree with you there in principle, but how much die size and how large a transistor budget can a still-new 40nm process actually handle while even hoping for good yields? And while ATI does do the dual-GPU approach for the very highest end version, it also has the benefit of not pushing a brand new fab process to the very edge of its fabrication limits, as the GPUs are still reasonably sized overall.

Yeah, but it's still hard to get the HD5970. So what did ATi do? Can't remember who, but someone posted at B3D that the reason for that was ATi can make more money on 2 HD 5870s than on one HD 5970. That sounds like availability issues to me!


So did Nvidia simply aim too high with Fermi, going for large boosts to both GP-GPU and gaming performance in one chip, suffering delays from that decision, and also getting surprised by how fast Cypress would be performance-wise, at least in gaming scenarios?....I think it's obvious that it's a yes on both counts.

Design choices are just that; it's very hard to forecast manufacturing problems in a fabless company, since they don't have direct access to what the fab's problems are. And if TSMC wants to keep their business, they won't go around telling clients to come back in six months to a year once they get their problems sorted out.

Bottom line is that eventually the 40nm process will be mature, and both ATI and Nvidia will get near 100% yields for both Cypress and Fermi, but since Cypress is a smaller chip, ATI gets more working chips out of each wafer, which can be installed and sold in more cards, and it looks like Fermi isn't much faster overall, so they can't charge a huge premium for the cards....The economic advantages of being smaller are obviously there; there's no way around that simple fact.

They won't get near 100% yield; gt100 probably 60-70% at most, and rv870 10% more than that.

But these last 2 years or so, it seems to me that ATI has greatly elevated their game, put a lot more pressure on Nvidia, held on to the performance crown a lot longer than ever before, and executed a lot better overall. So the pressure is definitely on Nvidia to deliver here, and it's starting to show quite obviously that they are indeed in for a hell of a fight.

Pressure is a good thing. If ATi had never come out with the r300, guess what, we probably wouldn't have seen the dramatic increase in performance of these cards, since ATi wouldn't have been able to push nV.

shadow001
03-11-10, 02:40 PM
It's actually right around what you stated, 250-260 watts, which isn't bad by any means; a bit higher than the GTX 280.


You don't know (nor do I) what the performance of the top end Fermi is though ;). From what I've seen so far, the 470 is up against the HD 5870, and the leaked benchmarks aren't at finalized clocks and are possibly even A2 silicon, so I wouldn't put too much stock in anything we have seen so far.


But the difference is that, at least in the case of the GTX280, it usually had a consistent, across-the-board lead in gaming performance, usually ranging from 15~20% over the HD4870 cards. So we have to see if the GTX480 can pull a consistent, across-the-board performance lead of 15~20% over the HD5870....Not sure of that, given the relatively mild differences between the GTX470 and GTX480 cards.


And keep in mind I'm ignoring that ATI has had the time to develop higher clocked versions of Cypress by now, since their engineers haven't spent the last 6+ months doing nothing, waiting to see how Fermi performs before working on a refresh.



ATi released their 40nm 2 months before, and had availability issues for 3 to 4 months as well. Then we saw what happened with the HD 5870, which also had availability issues for a few months after its release.

http://en.wikipedia.org/wiki/Nvidia_GPUs

It says here that Nvidia's DX10.1 GPUs at 40nm were released between October and November of 2009 (GT218, GT216 and GT215 GPUs).

http://en.wikipedia.org/wiki/Comparison_of_AMD_graphics_processing_units

April 28th for the HD4770 using the RV740 GPU, and that was a retail launch....It's more like a 6 month gap between the two.


You'll have to scroll quite a bit on both articles; they contain charts for pretty much every chip both companies have ever released, including their release dates, fabrication processes and technical specifications.



Yeah, but it's still hard to get the HD5970. So what did ATi do? Can't remember who, but someone posted at B3D that the reason for that was ATi can make more money on 2 HD 5870s than on one HD 5970. That sounds like availability issues to me!


That's not an availability issue at all.....Availability issues are the inability to produce the cards even if you wanted to in the first place....The main reason is what you quoted, though, and it just plainly comes down to greed and making the maximum profit.


I pre-ordered my cards and got them 2 weeks later, even though NCIX didn't have any in stock and it was the first day the HD5970 cards hit retail availability (November 18th, 2009). So I'm guessing the current policy for vendors is to make just enough HD5970 cards for those who pre-order them, not to keep any on store shelves collecting dust. In the end, these are $700 cards that only hardware enthusiasts/hard-core gamers would be willing to pay that much for, so it's a small market in the larger scheme of things, and hardly the one that pays the bills for ATI or Nvidia.


It's more of a PR/halo product, where one company has the fastest graphics card on the market even if 95% of users would never spend that much on it, never mind 2 of them, but I'm just crazy that way.



Design choices are just that; it's very hard to forecast manufacturing problems in a fabless company, since they don't have direct access to what the fab's problems are. And if TSMC wants to keep their business, they won't go around telling clients to come back in six months to a year once they get their problems sorted out.


From what I've heard, the main reason Fermi is so powerful with geometry-based workloads isn't gaming environments in the least, but rather professional graphics applications (such as movies) that require huge polygon handling ability rather than actual shading power, and that may be at least one of the reasons why Fermi is designed the way it is....Workload priorities not being the same in both cases.


Edit: And in TSMC's case, I thought they supply all the required documentation on their 40nm process well before Fermi was anywhere near tapeout anyhow, so the blame can't rest exclusively on TSMC here.


They won't get near 100% yield; gt100 probably 60-70% at most, and rv870 10% more than that.


10% more is still an advantage when the competing chip is smaller to begin with, as you get more chips per wafer even if yields were the same for both Cypress and Fermi, and with the average 300mm silicon wafer costing thousands apiece, it definitely matters.
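A rough sketch of that wafer math (the die areas, yields, and the wafer price below are assumptions for illustration only; the dies-per-wafer formula is the usual edge-loss approximation):

import math

WAFER_DIAMETER_MM = 300
WAFER_COST_USD = 5000  # assumed; actual 40nm wafer pricing isn't public

def dies_per_wafer(die_area_mm2, diameter=WAFER_DIAMETER_MM):
    # standard approximation: wafer area over die area, minus edge losses
    return int(math.pi * (diameter / 2) ** 2 / die_area_mm2
               - math.pi * diameter / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(die_area_mm2, yield_rate):
    good = dies_per_wafer(die_area_mm2) * yield_rate
    return WAFER_COST_USD / good

# assumed die areas: Cypress ~334 mm^2, Fermi/GF100 ~530 mm^2
for name, area, y in [("Cypress", 334, 0.70), ("GF100", 530, 0.60)]:
    print(f"{name}: {dies_per_wafer(area)} candidates/wafer, "
          f"~${cost_per_good_die(area, y):.0f} per good die")

With those assumed numbers the smaller die ends up around half the cost per good chip, which is the whole economic argument in one line of output.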



Pressure is a good thing, if ATi never came out with the r300 guess what we probably wouldn't see the dramatic increase in performance of these cards, since ATi wouldn't have been able to push nV.

Looks like the pressure is on, and will remain on for the foreseeable future, and that can only benefit consumers in the end, with much faster products packing more features and costing less than would otherwise be the case, so everybody wins regardless of the brand chosen.

Razor1
03-11-10, 03:54 PM
But the difference is that, at least in the case of the GTX280, it usually had a consistent, across-the-board lead in gaming performance, usually ranging from 15~20% over the HD4870 cards. So we have to see if the GTX480 can pull a consistent, across-the-board performance lead of 15~20% over the HD5870....Not sure of that, given the relatively mild differences between the GTX470 and GTX480 cards.


Well, the shader count alone is only about 10% more, but bandwidth can be in the neighborhood of 50-60% more. I don't know if that will happen, but it's possible looking at the memory clocks: 1000MHz is the lowest-spec GDDR5 memory available (there were cards that came out with less, but spec-wise it's the lowest).


And keep in mind I'm ignoring that ATI has had the time to develop higher clocked versions of Cypress by now, since their engineers haven't spent the last 6+ months doing nothing, waiting to see how Fermi performs before working on a refresh.

So? Every company does that; shortly after an nV launch, their partners usually come out with overclocked cards too. I don't really care to talk about overclocked cards because there are too many parameters to go into; prices of overclocked cards are usually higher too.



http://en.wikipedia.org/wiki/Nvidia_GPUs

It says here that Nvidia's DX10.1 GPUs at 40nm were released between October and November of 2009 (GT218, GT216 and GT215 GPUs).

http://en.wikipedia.org/wiki/Comparison_of_AMD_graphics_processing_units

April 28th for the HD4770 using the RV740 GPU, and that was a retail launch....It's more like a 6 month gap between the two.

Hmm, ok, I was thinking of the rv770; you are correct about the dates.

http://www.pcworld.fr/2009/05/27/materiel/penurie-de-4770-amd-reagit-avec-la-4850-a-99-8364/333351/?utm_source=matbe&utm_medium=redirect


That's not availability issues at all.....Availability issues in the inability to produce the cards even if you wanted to in the first place....The main reason is what you quoted though,and it's just plainly comes down to greed and making the maximum profits.

Remember this? The translation pretty much stated that because of the shortages, ATi dropped prices on the 4850 to compensate.

No, it's not greed; if they could make both cards they would have both, and they would get more money that way, because some people don't want to purchase two cards.


I pre-ordered my cards and got them 2 weeks later, even though NCIX didn't have any in stock and it was the first day the HD5970 cards hit retail availability (November 18th, 2009). So I'm guessing the current policy for vendors is to make just enough HD5970 cards for those who pre-order them, not to keep any on store shelves collecting dust. In the end, these are $700 cards that only hardware enthusiasts/hard-core gamers would be willing to pay that much for, so it's a small market in the larger scheme of things, and hardly the one that pays the bills for ATI or Nvidia.

Were they in stock last month? They weren't; I can post links to articles about it if you like. Also, the HD5850 took one or two months after launch for availability, not to mention the HD5870, which was hard to find after its first 2 weeks of a hard launch.

It's more of a PR/halo product, where one company has the fastest graphics card on the market even if 95% of users would never spend that much on it, never mind 2 of them, but I'm just crazy that way.


You're an enthusiast, pretty simple ;)

From what I've heard, the main reason Fermi is so powerful with geometry-based workloads isn't gaming environments in the least, but rather professional graphics applications (such as movies) that require huge polygon handling ability rather than actual shading power, and that may be at least one of the reasons why Fermi is designed the way it is....Workload priorities not being the same in both cases.


Partly true, but from what I have heard, don't listen to anything you see on the web regarding gaming performance/benchmarks :)

10% more is still an advantage when the competing chip is smaller to begin with, as you get more chips per wafer even if yields were the same for both Cypress and Fermi, and with the average 300mm silicon wafer costing thousands apiece, it definitely matters.



They are getting ~60% yields now, so I don't think you need to worry about the cost of Fermi to nV.

shadow001
03-11-10, 05:19 PM
Well, the shader count alone is only about 10% more, but bandwidth can be in the neighborhood of 50-60% more. I don't know if that will happen, but it's possible looking at the memory clocks: 1000MHz is the lowest-spec GDDR5 memory available (there were cards that came out with less, but spec-wise it's the lowest).


Not quite following you here....The GTX470 has 448 shaders, while the GTX480 has the full 512 shaders, so it adds about 14% more shader power, with the variable being the actual clock speeds for the shaders.


For memory bandwidth, it comes to an extra 20% for the GTX480 with the extra memory controller enabled, with, again, the only variation being the actual clocks at which the memory will operate, and the fastest on the market is still 1200MHz GDDR5 anyhow.


So I don't really see where the 50~60% improvement fits in this particular situation....The GTX480 will be faster than the GTX470 for sure, but I'm not expecting a huge improvement here.
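Checking that arithmetic (unit counts as discussed in this thread; final clocks were unknown at the time, so equal clocks are assumed):

# GTX470 vs GTX480 scaling at equal clocks (unit counts as discussed
# in this thread; final clocks were unknown at the time)
shaders_470, shaders_480 = 448, 512
bus_470, bus_480 = 320, 384  # memory bus width in bits

print(f"shader delta:    +{(shaders_480 / shaders_470 - 1) * 100:.0f}%")  # ~14%
print(f"bandwidth delta: +{(bus_480 / bus_470 - 1) * 100:.0f}%")          # 20%

# bandwidth in GB/s for a given GDDR5 data rate (e.g. 1200 MHz base
# clock -> 4.8 Gbit/s effective per pin)
def bandwidth_gbps(bus_bits, data_rate_gbps):
    return bus_bits / 8 * data_rate_gbps

print(f"480 bus @ 4.8 Gbps: {bandwidth_gbps(384, 4.8):.1f} GB/s")  # 230.4 GB/s

At equal clocks the deltas really are ~14% and 20%, so any 50-60% claim has to come from clock differences, not unit counts.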


So? Every company does that; shortly after an nV launch, their partners usually come out with overclocked cards too. I don't really care to talk about overclocked cards because there are too many parameters to go into; prices of overclocked cards are usually higher too.

I'm not talking about higher clocked versions of the existing Cypress chip, but something along the lines of a refresh part that has internal changes allowing it to clock higher by default, sort of like RV770 to RV790, where a decoupling ring was added to the latter and allowed an extra 100MHz clock speed increase.

That's the sort of refresh I expect from ATI at the very least.



Hmm, ok, I was thinking of the rv770; you are correct about the dates.

http://www.pcworld.fr/2009/05/27/materiel/penurie-de-4770-amd-reagit-avec-la-4850-a-99-8364/333351/?utm_source=matbe&utm_medium=redirect



Remember this? The translation pretty much stated that because of the shortages, ATi dropped prices on the 4850 to compensate.

No, it's not greed; if they could make both cards they would have both, and they would get more money that way, because some people don't want to purchase two cards.


True, the available volume for 40nm in general isn't huge in global terms, and TSMC has stated that it'll open up a second factory for 40nm production by the end of the year. But if users really want an HD5970, just pre-order one, pay in full, and you'll get it within one or 2 weeks at most...They're only hard to find in stock on retail shelves.

And it's natural that a lot of the focus is on lower priced cards that use the same GPU as the HD5970 cards, like the HD5850s, since they're much cheaper and have much larger market appeal...They'll easily sell those in much higher volumes than the HD5970 cards, even if the market weren't supply constrained on 40nm products.


Were they in stock last month? They weren't; I can post links to articles about it if you like. Also, the HD5850 took one or two months after launch for availability, not to mention the HD5870, which was hard to find after its first 2 weeks of a hard launch.

Indeed, it's true, but it seems that at launch ATI had 300,000 cards shipped between the HD5770/5750 (250,000 cards there) and the Cypress-based HD5850 and HD5870 (50,000 cards), and they all sold out in a matter of a week once reviews were out; they were much better than previous cards, and even ATI was surprised how fast the stock had disappeared.


It took those 2 months to increase orders and receive them back from TSMC in much larger volume to start satisfying the demand, and except for the HD5970 cards, there's now no real problem getting the other models in the lineup...It seems ATI has been selling 300,000+ HD 5000 series cards a month, and totaled 2 million DX11 GPUs shipped by Xmas, and that was 2 1/2 months ago, so it's likely quite a bit higher now.



You're an enthusiast, pretty simple ;)

Indeed I am, and I make no excuses for it. With GPUs in general becoming this powerful, and going one step up in insanity by putting 4 of them to work together like this, it's a no-brainer that my CPU is literally going to be ****ting bricks trying to keep those 4 busy. Add that they are the first DX11 products on the market, arriving much earlier than Nvidia's, and it was a pretty simple decision in the end.


Those who go for multi-GPU setups with GTX470~480s will also experience the same thing, with the CPU also screaming bloody murder until the software we run becomes way more demanding on the GPU side of the equation, so we aren't really seeing the full performance of these GPUs either.


And the irony is that we'll only know for sure which company made the right technical decisions probably 2~3 years from now, when no one will care about either Cypress or Fermi, and there will be products on the market by then far more powerful than either one in all aspects anyhow.




Partly true, but from what I have heard, don't listen to anything you see on the web regarding gaming performance/benchmarks :)



They are getting ~60% yields now, so I don't think you need to worry about the cost of Fermi to nV.

We'll see in both cases when it comes to performance and product availability, though Nvidia's own CEO has already stated that high volume availability only really starts in Q2 for Fermi, so count yourself lucky if you get one from the initial batch when the card launches in 2 weeks time.

Razor1
03-11-10, 06:30 PM
Not quite following you here....The GTX470 has 448 shaders, while the GTX480 has the full 512 shaders, so it adds about 14% more shader power, with the variable being the actual clock speeds for the shaders.

Yes, but it all depends on where the bottlenecks lie on the chip, depending on the program.

For memory bandwidth, it comes to an extra 20% for the GTX480 with the extra memory controller enabled, with, again, the only variation being the actual clocks at which the memory will operate, and the fastest on the market is still 1200MHz GDDR5 anyhow.

http://www.xbitlabs.com/news/memory/display/20090212111407_Samsung_Begins_to_Produce_7GHz_GDDR5_Memory.html

Hmm, I don't think so; that was last year, and these products are listed on Samsung's page as in mass production.


So I don't really see where the 50~60% improvement fits in this particular situation....The GTX480 will be faster than the GTX470 for sure, but I'm not expecting a huge improvement here.

Q2 is April, which is one week away from launch ;)

I'm not talking about higher clocked versions of the existing Cypress chip, but something along the lines of a refresh part that has internal changes allowing it to clock higher by default, sort of like RV770 to RV790, where a decoupling ring was added to the latter and allowed an extra 100MHz clock speed increase.

Possibly; we will see in May.

It took those 2 months to increase orders and receive them back from TSMC in much larger volume to start satisfying the demand, and except for the HD5970 cards, there's now no real problem getting the other models in the lineup...It seems ATI has been selling 300,000+ HD 5000 series cards a month, and totaled 2 million DX11 GPUs shipped by Xmas, and that was 2 1/2 months ago, so it's likely quite a bit higher now.

Hmm, no, it took longer than that; when was the 5870 launched? January of this year was when availability got better.


We'll see in both cases when it comes to performance and product availability, though Nvidia's own CEO has already stated that high volume availability only really starts in Q2 for Fermi, so count yourself lucky if you get one from the initial batch when the card launches in 2 weeks time.

Q2 is April ;)

XMAN52373
03-11-10, 06:44 PM
Not quite following you here....The GTX470 has 448 shaders, while the GTX480 has the full 512 shaders, so it adds about 14% more shader power, with the variable being the actual clock speeds for the shaders.


For memory bandwidth, it comes to an extra 20% for the GTX480 with the extra memory controller enabled, with, again, the only variation being the actual clocks at which the memory will operate, and the fastest on the market is still 1200MHz GDDR5 anyhow.


So I don't really see where the 50~60% improvement fits in this particular situation....The GTX480 will be faster than the GTX470 for sure, but I'm not expecting a huge improvement here.

You are neglecting 2 key things, though: what Nvidia plans to run the memory at for the GTX480, and the clock speeds. If the GTX470 comes in at 550/650/1300/800 (core [40 ROPs] / half [TMUs] / hot [everything else] / mem [320-bit]) and the GTX480 comes in at 600/725/1450/1000 (48 ROPs / TMUs / everything else / 384-bit), you could very easily get to a 50-60% improvement in performance.
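Plugging those guessed clocks in shows where a 50-60% gap would have to come from (all numbers here are the unconfirmed guesses above, not real specs):

# GTX470 vs GTX480 using the guessed clocks above (not confirmed):
# (shader count, hot clock MHz, bus width bits, memory clock MHz)
gtx470 = (448, 1300, 320, 800)
gtx480 = (512, 1450, 384, 1000)

def shader_throughput(cfg):
    shaders, hot_clock, _, _ = cfg
    return shaders * hot_clock           # relative ALU throughput

def mem_bandwidth(cfg):
    _, _, bus_bits, mem_clock = cfg
    return bus_bits * mem_clock          # relative bandwidth

print(f"shader throughput: +{(shader_throughput(gtx480) / shader_throughput(gtx470) - 1) * 100:.0f}%")
print(f"memory bandwidth:  +{(mem_bandwidth(gtx480) / mem_bandwidth(gtx470) - 1) * 100:.0f}%")
# -> roughly +27% shader and +50% bandwidth: the 50-60% claim leans
#    almost entirely on the bandwidth side.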

As to the link you provided earlier, remember this: they may have launched much earlier, but they had severe supply constraints for months because of the issues with 40nm, which didn't seem to be cured when rv870 went through TSMC either, until just recently.

Xion X2
03-11-10, 07:07 PM
A 50-60% increase in performance is pie in the sky. I can't think of a prior generation of Nvidia high-end cards where that was the case.

7900GT --> 7900GTX
8800GTS --> 8800GTX
GTX260 --> GTX280

All of these were around a 15 - 20% jump. I expect the same with GTX470 and 480.

(Edit: perhaps the author meant to say "in certain scenarios" although I still find it unlikely)

Sazar
03-11-10, 07:10 PM
Yeah, and the GTX 280 also had the same configuration, so what? It's not going to consume all 300 watts that it can actually draw. And I can say for a fact it won't draw 300 watts, because some people "know" things, kinda like "I see dead people".

Related to the new cards and the potential to draw above spec: that may be what he is referring to.


Your arguments tend to be very one-sided, and always about how what nV is doing is going to hurt them. Guess what: outside of Fermi and the nv30, nV executed perfectly.

Let's look at ATi: r200, r420, r520, r580, r600, r670; many more times ATi has had bad execution, from design issues, drivers, performance, marketing, availability.

Both sides have had misses before. Suggesting that Nvidia only had 2 missteps is a little disingenuous when you are bringing up every potential issue for AMD above.

THE biggest issue for AMD has been the 2900 launch/performance/availability. Beyond that, they have been up and down a bit in terms of schedule and delivery, but performance has been decent. The 2900 and related family launches were terrible performers given the scope of the hardware, and this was not really rectified till the 4800 series debuted.

edit: I know some things nV does that people don't like them for, but they are a company, and a damn good company, that works very tightly with every single department, and this is what makes them strong; they work as one unison voice, and this comes straight from their management. Jensen is a very smart guy and picked the right people to help him run nV; every single initiative they have done, from PR and marketing to engineering, drives toward strategic points which make sales, marketshare, and profitability. ATi really didn't have all this. AMD/ATi is a bit better, but AMD/ATi is up against two companies that are damn good at what they do, which means AMD/ATi now has to do it better than both these companies (Intel and nV) from an internal point of view to stay in the same competitive brackets. This is why we saw the rv670: they knew with the r600 there was no way they could compete, so they did the next best thing, cut down R&D by not designing a big chip and put two of them together, and the strategy worked with the rv770. Will it continue to work? From an execution point of view it worked with the rv870; now let's see what happens in the future.

Recommend using paragraphs for readability purposes.

The thing about AMD's model for this and the previous generation is they are looking at scalable solutions that can be increased/decreased in complexity/performance, relatively easily. Nvidia's previous and upcoming model does not allow this flexibility, but they still typically deliver top to bottom market coverage.

This doesn't mean one is better than the other but AMD's appears to be working better over the past 2 years.

Let's talk about advertising, PR, and marketing for a little bit. nV makes brands; they don't have singular things that just die out. Look at their naming conventions, look at how they advertise, look at how they target different regions, how they target developers; this is what makes branding. It's the holistic approach to marketing that drives sales and brand loyalty, and this is what ATi/AMD is missing. ATi can bitch all they want about PhysX, but nV makes it a point every single time ATi bitches, because you know ATi doesn't have anything they can show in the real world that can keep up. ATi can bitch all they want about the TWIMTBP program, but if they were capable (resource-wise) they would do it too. And when ATi bitches about these things it plays right into nV's hands, because ATi just doesn't have anything to counter nV with in the real world. It's great that ATi has or had plans, like GPU physics, which they started, but if you can't get those plans into reality then it doesn't mean anything. Did ATi drop the ball on physics on their GPUs? Yes and no. Yes, because they just didn't have the resources; no, because it's possible their GPUs might not be as good as nV's when it comes to physics, especially now that Fermi was made to excel at GPGPU. You have to understand the first iteration of anything won't be the best performer in all categories; it's more like a proof of concept which you build upon. The G80 was a damn good chip, with great gaming capabilities and many new features, one of those being GPGPU. ATi was first with GPGPU with Folding on the r520; what happened after the G80 came out?

Again, please paragraphs.

Most of the items have already been discussed before, but I think your discussion of nomenclature is telling, because over the past 2 or 3 years Nvidia has been terrible, to the point of being blatantly misleading, in their naming conventions. Just look at all the retail names the G92 chip has received.

Why do you think nV has so many more apps and demos in their dev rel website? That takes alot of money to make all those. Why does nV make books like GPU Gems every year? These are things that look cursory to most, but these are the things that are the corner stones of what makes nV's brands so strong.

Nvidia is a good company and obviously makes good products. They have done so for many years in many segments.

shadow001
03-11-10, 07:59 PM
Yes, but it all depends on where the bottlenecks lie on the chip, depending on the program.


Of course, it all depends on how the program uses the resources in the end, but looking at it from the strong points of each architecture, it looks to me like this:


Games using a lot of geometry and memory bandwidth = Fermi wins.
Games using a lot of pixel shading and texturing speed = Cypress wins.


So it comes down to which GPU maker made the right call and how developers use those resources in the end, but it's not a knockdown victory for one GPU, as neither has the highest performing features for every situation....Both have their respective strong points.



http://www.xbitlabs.com/news/memory/display/20090212111407_Samsung_Begins_to_Produce_7GHz_GDDR5_Memory.html

Hmm, I don't think so; that was last year, and these products are listed on Samsung's page as in mass production.


I wasn't aware they had that memory type in actual mass volume yet; thanks for the link.



Q2 is April, which is one week away from launch ;)


And lasts until?....Yup, end of June.



Possibly; we will see in May.


I think it's more of a definitely at this point; the only question is what kind of refresh it will be, but the least Nvidia should expect is a 10~15% performance improvement over the current HD5870 cards.

shadow001
03-11-10, 08:04 PM
You are neglecting 2 key things, though: what Nvidia plans to run the memory at for the GTX480, and the clock speeds. If the GTX470 comes in at 550/650/1300/800 (core [40 ROPs] / half [TMUs] / hot [everything else] / mem [320-bit]) and the GTX480 comes in at 600/725/1450/1000 (48 ROPs / TMUs / everything else / 384-bit), you could very easily get to a 50-60% improvement in performance.

As to the link you provided earlier, remember this: they may have launched much earlier, but they had severe supply constraints for months because of the issues with 40nm, which didn't seem to be cured when rv870 went through TSMC either, until just recently.



And I did mention the clock speeds for both the GPU and memory as the main variable in my last post. And to be honest, when was the last time Nvidia had a card lineup where the top card was 50% faster than the second fastest card, with both products using the same GPU?


That's right....Never. It usually hovers more towards a 20~25% difference at most, and there are plenty of examples within Nvidia's previous product launches to prove it.

Rollo
03-11-10, 08:35 PM
I use the 23.6" Acer ones, which also support 120Hz btw and 1920*1080 resolution, and can show Blu-ray at its native resolution...And in the end, you're still mentally imagining how the experience will be, and assuming you'll snag a pair of them from the very first day, while I'm actually playing it right now, even if only in pathetic 2D mode...;)


It's sort of like having GPU physics, but it's the other way around this time: the enemy company has it, and your favorite company doesn't.

Edit: Not to mention the little issue of me owning the highest performing cards on the market for quite a bit longer, it seems....Look at the article below, straight from Fuad.


http://www.fudzilla.com/content/view/18038/1/


No idea when a dual GPU variant might be released basically.

Heh, why would I watch Blu-rays on a 23.6" screen? I'll give you better taste in monitors than video cards, but they're kind of wasted with the ATi cards, aren't they? :(