
View Full Version : NVIDIA GF100 Previews



lee63
03-07-10, 01:35 PM
Well explained...thanx, I might go for it then :D

Iruwen
03-08-10, 03:42 AM
GTX 400 Series at Cebit Information (http://translate.googleusercontent.com/translate_c?hl=de&ie=UTF-8&sl=de&tl=en&u=http://ht4u.net/reviews/2010/nvidia_geforce_gtx_470_480_cebit/&prev=_t&rurl=translate.google.de&twu=1&usg=ALkJrhiVBu92tA_ZYoCUotXF7WmgEWxOyg)

Ninja Prime
03-08-10, 12:28 PM
GTX 400 Series at Cebit Information (http://translate.googleusercontent.com/translate_c?hl=de&ie=UTF-8&sl=de&tl=en&u=http://ht4u.net/reviews/2010/nvidia_geforce_gtx_470_480_cebit/&prev=_t&rurl=translate.google.de&twu=1&usg=ALkJrhiVBu92tA_ZYoCUotXF7WmgEWxOyg)

That's all the same crap we've seen everywhere else... The card is due to hard launch in two and a half weeks and they still can't come out with specs and benches?

Ywap
03-08-10, 12:30 PM
From the article Iruwen linked to: (thx btw.)

"At the Fermi Vostell NVIDIA spoke of 512 shader units. Precisely this question must be current. Rumors surfaced on the show were saying that the GeForce GTX 480 will not have the full 512 shader units."

AdamK47
03-08-10, 06:25 PM
That's all the same crap we've seen everywhere else... The card is due to hard launch in two and a half weeks and they still can't come out with specs and benches?

Things that make you go hmmmmm.

It's becoming increasingly obvious just how the GTX 480 will stack up against the 5870... pretty much the same. Those hand-picked tessellation benchmarks, with nVidia showing only a fraction of the test, are another indicator.

Maverick123w
03-08-10, 07:21 PM
I've flip-flopped so many times in the last few months. I was certain I would buy a Fermi, then certain I would buy ATI, and back and forth again and again. The pendulum is swinging towards ATI atm.

Iruwen
03-09-10, 04:57 AM
GTX 280, 285, 470 cooling solutions compared:

http://www.imgbox.de/users/public/images/s50337j76.png

Iruwen
03-10-10, 07:05 AM
http://techpowerup.com/117181/GeForce_196.78_Beta_Driver_Runs_GeForce_GTX_470.html

Czech technology website PCTuning confirmed a few details about NVIDIA's upcoming performance graphics accelerator, the GeForce GTX 470. It was found that a beta NVIDIA driver, GeForce 196.78, supports GeForce 400 series accelerators and was able to run a qualification sample of the GeForce GTX 470. The card was using A3-revision GF100 silicon. The driver's System Information dialog revealed that the card indeed has 448 CUDA cores (SIMD units). Further, it has 1280 MB of memory and a 320-bit wide memory interface. NVIDIA also changed the way it represents memory clock speeds: since the card uses GDDR5 memory with an actual clock speed of 1000 MHz, the data rate (DDR speed) is given first, as 2000 MHz, and the "effective speed" next, which is 4000 MHz.

Given these speeds, at 1000 MHz GDDR5, the GPU has a memory bandwidth of 160 GB/s. Without compromising on looks and quality, NVIDIA kept the cooler design basic; it has a matte finish. Display outputs include two DVI-D and one mini-HDMI. It supports NVIDIA 3D Vision Surround (a technology competing with ATI Eyefinity that spans a display head across multiple physical displays), except that NVIDIA requires at least two accelerators in SLI to use it. NVIDIA's GeForce 400 series graphics accelerators will launch on the 26th of this month.
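
The bandwidth figure follows directly from those numbers. As a quick sketch of the arithmetic, using the article's figures as inputs:

```python
# Memory bandwidth from the article's figures: GDDR5 at a 1000 MHz actual
# clock runs a 2000 MHz data rate (DDR) and a 4000 MHz "effective" rate,
# feeding a 320-bit (40-byte) memory interface.

def gddr5_bandwidth_gb_s(actual_mhz, bus_bits):
    effective_mhz = actual_mhz * 4          # 1000 MHz -> 4000 MHz effective
    return effective_mhz * 1e6 * (bus_bits / 8) / 1e9

print(gddr5_bandwidth_gb_s(1000, 320))      # 160.0 GB/s, matching the article
```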

http://gathering.tweakers.net/forum/list_message/33608935#33608935
=> http://forum.beyond3d.com/showpost.php?p=1404925&postcount=3143

Availability for May should be about 6 to 7 times that of Rv770/Rv870 at launch

http://www.fudzilla.com/content/view/18016/65/

We redesigned GF100 from the ground up to deliver the best performance on DX11. This meant adding dedicated h/w engines in our GPU to accelerate key features like tessellation.

We also made changes on the compute side that specifically benefit gamers like interactive ray-tracing and faster physics performance through things like support for concurrent kernels.

Unfortunately all of these changes took longer than we originally anticipated and that’s why we are delayed.

Do we wish we had GF100 today? Yes. However, based on all the changes we made, will GF100 be the best gaming GPU ever built? Absolutely.

fasedww
03-10-10, 10:47 AM
I'm buying. Can't wait. :)

lee63
03-10-10, 10:51 AM
This is just pathetic, and why I will probably be sticking with what I have, or the refresh. It just seems like something's not right with Fermi.

http://www.fudzilla.com/content/view/18029/1/

onmikesline
03-10-10, 11:16 AM
Well, I hope they come back hard. For the time being my 5870 is running everything great at 2560x1600 maxed; even BFBC2 runs great.

shadow001
03-10-10, 12:32 PM
This is just pathetic, and why I will probably be sticking with what I have, or the refresh. It just seems like something's not right with Fermi.

http://www.fudzilla.com/content/view/18029/1/



It depends on how you look at it, really. The biggest issue by far is the delays Fermi suffered, and under no condition should that be ignored: ATI's engineers haven't been sitting around having a good time waiting for Fermi to be released before working on a refresh part, or even the next generation of hardware altogether. They've been at it for over six months now, ever since their cards were released.

So even assuming Fermi does become the fastest GPU on the market, Nvidia has fallen behind in terms of product releases, and we may well see ATI's next generation by the end of this year, which will of course be much higher performance than their current parts. Then what? Wait another 6+ months for Nvidia's reply to that product?

Nvidia basically has the added pressure of needing to do more, with less time to do it in, compared to ATI, whose product didn't suffer delays to begin with. And we all know how short-lived the X2900XT was in the end, which was also late, hot, power-hungry, and not the best-performing part overall... it shared similar traits with Fermi.

The X2900XT was released in May-June of 2007, and ATI only really got back in the game to dispute the speed crown, even in straight one-GPU-versus-one-GPU comparisons, with Cypress in September 2009. It takes a while to get back up after releasing a product that didn't live up to expectations... it's not solved in months, or even a year.

scubes
03-10-10, 01:28 PM
This is just pathetic, and why I will probably be sticking with what I have, or the refresh. It just seems like something's not right with Fermi.

http://www.fudzilla.com/content/view/18029/1/

WHAT HE SAID...

Lee has a point; it must suck BIGTIME for them to say that...

shadow001
03-10-10, 02:35 PM
Here's a neat picture that says it all:


http://dnenni.files.wordpress.com/2010/01/globalfoundries_28nm_32nm_6.jpg



http://danielnenni.com/2010/01/17/tsmc-versus-global-foundries-part-ii/


In fact the first production 28nm wafers by a foundry were displayed by GFI at the Consumer Electronics show in Las Vegas this month. At least one of the wafers contained AMD/ATI GPUs


GFI stands for GlobalFoundries, btw, and it seems ATI is already working on GPUs built at 28nm right now... Basically, they're not going to make it easy for Nvidia to catch up, period.

Redeemed
03-10-10, 02:39 PM
Here's a neat picture that says it all:


http://dnenni.files.wordpress.com/2010/01/globalfoundries_28nm_32nm_6.jpg



http://danielnenni.com/2010/01/17/tsmc-versus-global-foundries-part-ii/





GFI stands for GlobalFoundries, btw, and it seems ATI is already working on GPUs built at 28nm right now... Basically, they're not going to make it easy for Nvidia to catch up, period.

And I hope they succeed at this; nVidia has had a good lead for a good while now. It's nice to see the competition doing well for a change. :)

shadow001
03-10-10, 03:32 PM
And I hope they succeed at this; nVidia has had a good lead for a good while now. It's nice to see the competition doing well for a change. :)


Makes me wonder what they are, though.



They could be a much cheaper-to-produce Cypress GPU, one that could also be clocked at higher speeds with reduced power consumption and heat issues. It would also let ATI's engineers get to know the process and its electrical characteristics before committing a brand-new architecture to it directly.


Sort of like what they did with the RV740 chips last summer, which were the first ones on the market at 40nm. The main point is that ATI is already moving beyond the 40nm process right now, so it looks like they'll be keeping the pressure on Nvidia for the time being.

shadow001
03-10-10, 04:19 PM
Broke out the old calculator: a transition from 40nm to 28nm means roughly a 60% die-size reduction for the same transistor budget overall. Applying that to the next generation of GPUs, and keeping in mind that power consumption, heat output, and the ratio between caches and actual logic circuits aren't figured into the calculations, it looks something like this:


ATI's Cypress at 40nm is 334 mm²; at 28nm it would be just 133 mm², a GPU that could be moved into the value market with just a die shrink, basically. And if ATI maintains the current die size of Cypress but on the 28nm process, the transistor budget goes from 2.15 billion transistors at 40nm in Cypress to about 3.45 billion transistors for the next architecture at 28nm.



For Nvidia, I'll assume a 500 mm² die for Fermi, since we don't even know its exact size at 40nm; it would shrink to 200 mm² at 28nm while still using the same 3 billion transistors, so it would be much cheaper to make due to higher yields, and obviously able to clock faster. And if we apply the same rule, that the high-end version at 28nm still uses a single 500 mm² die, then the transistor budget increases from 3 billion in the current 40nm version of Fermi to about 4.8 billion transistors in the 28nm high-end version for that timeframe.



Both are obviously very large increases in transistor budget, which will allow a lot more hardware and features within each GPU. The crazy part is that both will likely be released sometime late this year or early next year, so our CPUs are going to be working even harder trying to keep those monsters busy, even more so in multi-GPU setups using up to four of them working together.
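
For reference, a pure optical shrink scales die area by the square of the feature-size ratio, (28/40)² ≈ 0.49, which is closer to a 50% reduction than the 60% assumed above; actual density gains vary by process. A quick sketch of both cases under that ideal-scaling assumption, using the post's figures:

```python
# Ideal 40nm -> 28nm scaling: for the same design, die area shrinks by
# (28/40)^2 ~= 0.49. Power, heat, cache-vs-logic ratios, and yield are
# all ignored here, as noted in the post above.

SCALE = (28.0 / 40.0) ** 2   # ~0.49

def shrunk_area(area_mm2):
    """Same chip on the smaller process: die area after an ideal shrink."""
    return area_mm2 * SCALE

def scaled_budget(transistors):
    """Same die area on the smaller process: transistors that now fit."""
    return transistors / SCALE

print(shrunk_area(334))        # Cypress: ~164 mm^2 (vs. the ~133 mm^2 / 60% estimate)
print(scaled_budget(2.15e9))   # ~4.4 billion transistors in a Cypress-sized die
print(shrunk_area(500))        # assumed 500 mm^2 Fermi die: ~245 mm^2
print(scaled_budget(3e9))      # ~6.1 billion transistors in a 500 mm^2 die
```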

Razor1
03-10-10, 04:38 PM
So even assuming Fermi does become the fastest GPU on the market, Nvidia has fallen behind in terms of product releases, and we may well see ATI's next generation by the end of this year, which will of course be much higher performance than their current parts. Then what? Wait another 6+ months for Nvidia's reply to that product?


Nvidia basically has the added pressure of needing to do more, with less time to do it in, compared to ATI, whose product didn't suffer delays to begin with. And we all know how short-lived the X2900XT was in the end, which was also late, hot, power-hungry, and not the best-performing part overall... it shared similar traits with Fermi.


Each individual product has its own timeline, so if one slides it won't affect the next. Look at the R600 and the RV670, or the NV30 and the NV35. You don't know whether Fermi is hot, power-hungry, or what have you. Actually, it most likely won't be that power-hungry at all. Hot, maybe, but a chip gives off heat equal to the wattage it consumes, so if it's not power-hungry, don't worry too much about the heat.

Razor1
03-10-10, 04:42 PM
ATI's Cypress at 40nm is 334 mm²; at 28nm it would be just 133 mm², a GPU that could be moved into the value market with just a die shrink, basically. And if ATI maintains the current die size of Cypress but on the 28nm process, the transistor budget goes from 2.15 billion transistors at 40nm in Cypress to about 3.45 billion transistors for the next architecture at 28nm.



For Nvidia, I'll assume a 500 mm² die for Fermi, since we don't even know its exact size at 40nm; it would shrink to 200 mm² at 28nm while still using the same 3 billion transistors, so it would be much cheaper to make due to higher yields, and obviously able to clock faster. And if we apply the same rule, that the high-end version at 28nm still uses a single 500 mm² die, then the transistor budget increases from 3 billion in the current 40nm version of Fermi to about 4.8 billion transistors in the 28nm high-end version for that timeframe.

Both are obviously very large increases in transistor budget, which will allow a lot more hardware and features within each GPU. The crazy part is that both will likely be released sometime late this year or early next year, so our CPUs are going to be working even harder trying to keep those monsters busy, even more so in multi-GPU setups using up to four of them working together.

It doesn't always work that way; we are talking about two different process nodes, and there could be unforeseen changes, like the density of the transistors, or the ability to pack transistors in, however you want to look at it. AMD's next GPU on 28nm most likely won't be an RV870 derivative either. And clocks don't go up just because you go down a process node.

shadow001
03-10-10, 05:07 PM
Each individual product has its own timeline, so if one slides it won't affect the next. Look at the R600 and the RV670, or the NV30 and the NV35. You don't know whether Fermi is hot, power-hungry, or what have you. Actually, it most likely won't be that power-hungry at all. Hot, maybe, but a chip gives off heat equal to the wattage it consumes, so if it's not power-hungry, don't worry too much about the heat.


All we're seeing so far are the PCBs for the GTX 470 and GTX 480 cards, with the latter using a 6+8-pin PCI-e power connector arrangement, which means a card that can be supplied with up to 300 watts if needed. The X2900XT was actually the first card to use that power configuration (if you wanted to overclock it), and it did run hot and used a lot of power.


Only the GTX 470, with its 6+6-pin power connectors, is basically limited to 225 watts according to the PCI-e specifications, so it'll be close to what Cypress uses, which clocks in at about 190 watts. The GTX 480 will go above that for sure; its design says so right there if you know the PCI-e specifications. We just don't know where exactly it falls in that 225-to-300-watt envelope.
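
Those envelope figures come straight from the PCI-e power budget: the x16 slot supplies up to 75 W, each 6-pin connector adds 75 W, and each 8-pin connector adds 150 W. A quick sketch:

```python
# Maximum board power from the PCI-e spec limits: 75 W from the x16 slot,
# plus 75 W per 6-pin plug and 150 W per 8-pin plug.

def max_board_power(six_pin=0, eight_pin=0):
    return 75 + six_pin * 75 + eight_pin * 150

print(max_board_power(six_pin=2))                # 6+6 pin (GTX 470): 225 W
print(max_board_power(six_pin=1, eight_pin=1))   # 6+8 pin (GTX 480): 300 W
```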

shadow001
03-10-10, 05:16 PM
Doesn't always work that way, we are talking about two different process nodes, there could be unforeseen changes, like the density of the transistors, or the ability to pack in transistors how ever you want to look at it. AMD's next GPU on 28nm most likely won't be a rv870 derivative as well. Clocks just don't go up because of going down a process node either.


It's mostly about clocks while staying within accepted power-consumption and cooling limits, which are set by the electrical characteristics of the process itself, obviously, and also by the actual transistor budget the new architecture will have. Hence it's logical for ATI to try a simpler, well-known architecture on the new fabrication process first, learn what hurdles they might be facing, and adapt the design of whatever the new architecture is to suit the 28nm process... a stepping-stone approach.


And in the unconfirmed-rumors department: it seems the higher volumes of Fermi shipping starting in Q2 are actually B1-revision chips. This version had new masks made for it, and the physical layout is different, to counter the problems with TSMC's 40nm process.

fasedww
03-10-10, 09:58 PM
I'm getting 3 of them for tri-SLI, putting them in rig #1 below in sig. :D Just wish I could get this over with so I can have some peace for a while. I'm a hardware junkie and need my fix. (lee)

lee63
03-10-10, 10:07 PM
I'm getting 3 of them for tri-SLI, putting them in rig #1 below in sig. :D Just wish I could get this over with so I can have some peace for a while. I'm a hardware junkie and need my fix. (lee)
I just got another 5870 for Tri-Fire... I got my fix for now lol :D for a week or two XD

JasonPC
03-10-10, 11:03 PM
Yes, I think it's utterly ridiculous that NVIDIA disables PhysX if Catalyst drivers are detected. Who cares if there are potential QA issues? If you want a real QA issue, try melting down GPUs on for size. Just have a disclaimer saying it's unsupported and untested, use at your own risk.

Really, I think PhysX needs a lot of work, because I've had issues getting it to work correctly even with all-NVIDIA cards. I had to downgrade to an older PhysX software version to get it to work in Batman: AA.

lee63
03-10-10, 11:31 PM
We have SLI and CrossFire on the same motherboards now... it's only right that PhysX should be for everyone. Seems kind of childish if you ask me.