Go Back   nV News Forums > Graphics Card Forums > NVIDIA GeForce 400/500 Series

Old 01-26-12, 05:36 AM   #157
K007
 
K007's Avatar
 
Join Date: Sep 2004
Location: Australia, Sydney
Posts: 9,406
Default Re: next gen kepler to support dx 11.1, also take a year to rollout all cards

what happened to GTX680?
K007 is offline   Reply With Quote
Old 01-26-12, 06:46 AM   #158
Logical
Registered User
 
Logical's Avatar
 
Join Date: Apr 2007
Location: UK
Posts: 2,523
Default Re: next gen kepler to support dx 11.1, also take a year to rollout all cards

Quote:
Originally Posted by K007 View Post
what happened to GTX680?
Apparently, 'according to sources' (that bit always makes me smile), Nvidia don't want to release Kepler under the 6-series name because AMD have their 7 series out, and they don't want to confuse the public with the lower number making it look less powerful, so they decided to name it 7 series, on par with what AMD have done..... True story, brah.
Logical is offline   Reply With Quote
Old 01-26-12, 07:27 AM   #159
Rollo
 
Join Date: Jul 2003
Posts: 1,719
Default Re: next gen kepler to support dx 11.1, also take a year to rollout all cards

Quote:
Originally Posted by shadow001 View Post
So by that logic, those that already bought HD7970's won't buy high end keplers since it'll also be a side grade right, since they won't be that much faster anyhow, just like the GTX580 wasn't that much faster than the HD6970....
For the most part, yes, that's exactly right.

I can tell you this:

If Kepler comes out 15-20% faster than my GTX580s, I wouldn't sell the 580s at a loss and buy Keplers for $550 either.

Just as I presume not many people with 6970s sold them at a loss to buy 580s, or people with 5870s sold them to buy 480s. Unless they wanted to use 3d, CUDA, or PhysX, why would they?

Of course you're assuming Keplers won't be much faster than 7970s, which we don't know yet.
__________________
Rig1:
intel 990X + 2 X EVGA 3GB GTX580 + 3 X Acer GD235Hz
3D Vision Surround

Rig 2:
intel 2500K + NVIDIA GTX590 + Dell 3007 WFPHC

NVIDIA Focus Group Member
NVIDIA Focus Group Members receive free software and/or hardware from NVIDIA from time to time to facilitate the evaluation of NVIDIA products. However, the opinions expressed are solely those of the Members.
Rollo is offline   Reply With Quote
Old 01-26-12, 07:57 AM   #160
Q
 
Join Date: Sep 2004
Posts: 7,808
Default

Quote:
Originally Posted by Rollo View Post
For the most part, yes, that's exactly right.

I can tell you this:

If Kepler comes out 15-20% faster than my GTX580s, I wouldn't sell the 580s at a loss and buy Keplers for $550 either.

Just as I presume not many people with 6970s sold them at a loss to buy 580s, or people with 5870s sold them to buy 480s. Unless they wanted to use 3d, CUDA, or PhysX, why would they?

Of course you're assuming Keplers won't be much faster than 7970s, which we don't know yet.
This is one of the most reasonable posts I've ever seen from you.

Sent from my ADR6300 using Tapatalk
Q is offline   Reply With Quote
Old 01-26-12, 10:27 AM   #161
ninelven
Registered User
 
Join Date: Jan 2003
Posts: 132
Default Re: next gen kepler to support dx 11.1, also take a year to rollout all cards

Quote:
Originally Posted by shadow001
I know that it's still a truckload of fillrate no matter what, but it isn't enough for the theoretical maximums the chips are rated for in terms of available memory bandwidth. And the FPS calculations you made aren't quite right, since there's this little thing called triple buffering: there are 2 extra frames already stored in memory ahead of the one being displayed on the LCD at that moment, so cut that by 3.
My calculations are correct (or my degree in mathematics is failing me). Buffering does not affect ROP throughput but it does add additional bandwidth and memory space overhead.

Quote:
Originally Posted by shadow001
Then add antialiasing when each frame has 12 megapixels across 3 screens to begin with, and let's go with SSAA (supersampling AA) to force the GPUs to render at a higher internal resolution than the display resolution and really push them to their limits fillrate-wise.
Even then, there is more than enough fillrate for 100 fps+.


Quote:
Originally Posted by shadow001
Yes, I like to torture hardware, and I enjoy finding the breaking point... For instance, let's try the Heaven benchmark at 7880×1440 and up to 2X antialiasing: it can still play back normally, though the FPS figures are pretty low through the entire benchmark (20~30 FPS).
The 3 most likely culprits for the performance are 1) being bandwidth bound, 2) being shader bound, and 3) being tessellation bound.


Quote:
Originally Posted by shadow001
Jack it up to 4X AA (and it's the MSAA variety to boot, not SSAA), and it displays a new frame every 30 seconds.... Nope, 3 water-cooled GTX 580s can no longer handle it.
It is very likely that 4X AA is causing you to exceed the available local memory of your card, which then spills over to system RAM, causing a nosedive in performance.
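As a rough sanity check on the memory-spill theory, here's a back-of-the-envelope estimate of render-target memory at the surround resolution from the post with 4x MSAA. The buffer formats (32-bit color and depth) and the triple-buffered swap chain are my assumptions, not measurements of the Heaven benchmark:

```python
# Rough estimate of render-target memory at a 3-screen surround
# resolution with 4x MSAA, to see why a 1.5 GB GTX 580 might spill
# into system RAM. Formats below are illustrative assumptions.

WIDTH, HEIGHT = 7880, 1440          # surround resolution from the post
PIXELS = WIDTH * HEIGHT             # ~11.3 million pixels
BYTES_COLOR = 4                     # assumed 32-bit RGBA
BYTES_DEPTH = 4                     # assumed 32-bit depth/stencil
MSAA = 4                            # 4x multisampling

# Multisampled color + depth surfaces scale with the sample count.
msaa_surfaces = PIXELS * MSAA * (BYTES_COLOR + BYTES_DEPTH)

# Resolved front/back/third buffer for triple buffering (1 sample each).
swap_chain = PIXELS * 3 * BYTES_COLOR

total_mb = (msaa_surfaces + swap_chain) / (1024 ** 2)
print(f"~{total_mb:.0f} MB before textures, geometry, or G-buffers")
```

That comes out to roughly 476 MB for render targets alone, about a third of a 1.5 GB GTX 580's local memory before any textures or geometry, so spilling over at this setting is entirely plausible.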

Quote:
Originally Posted by shadow001
My 4 cards just arrived earlier today
Sweet, pop one in, downclock to 776/4008, and run 3DMark Vantage.

Just in case someone reading this thread might be genuinely curious I looked at the numbers a little more closely:

HD6970: 8780 MP/sec with 176 GB/sec bandwidth
GTX580: 9750 MP/sec with 192.4 GB/sec bandwidth
HD7970: 13300 MP/sec with 264 GB/sec bandwidth

While it is impossible to tell how efficient the ROPs themselves are from this data, we may investigate how efficiently each chip uses its available bandwidth in this bandwidth limited test.

HD6970: 8780/176 = 49.89 MP per GB per sec
GTX580: 9750/192.4 = 50.68 MP per GB per sec
HD7970: 13300/264 = 50.38 MP per GB per sec

Thus, we see that the GTX580 is actually being the most efficient with the bandwidth available to it, while the 7970 and 6970 are not very far behind. In fact, I would say the numbers are close enough together that for all practical purposes the chips are equally efficient.

Now, we might ask ourselves: how much bandwidth do these chips actually need to take full advantage of their ROPs (so that bandwidth is no longer the bottleneck and the ROPs are)? Given the above data, this is not too difficult to calculate.

HD6970: 28,160 MP of Fillrate / 49.89 MP/GB/sec = 564.44 GB/sec of bandwidth required
GTX580: 24,832 MP of Fillrate / 50.68 MP/GB/sec = 489.98 GB/sec of bandwidth required
HD7970: 29,600 MP of Fillrate / 50.38 MP/GB/sec = 587.53 GB/sec of bandwidth required

That is how much bandwidth each chip would need to score the max its ROPs are capable of.

Here, we may ask, "if bandwidth is so great, then why didn't they design the above chips with all the bandwidth they needed to never be bottlenecked?" The answer is that additional memory channels are expensive both in terms of die space and board design. Additionally, the closer you get to the "ideal" bandwidth for the chip, the less additional bandwidth pays off because it is bandwidth limited less and less often. As an example, the HD7970 would still "only" have 528 GB/sec with a 768-bit memory interface (assuming 5.5 Gbps memory).
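The arithmetic above condenses to a few lines; the measured fill and bandwidth figures below are the same ones quoted earlier in this post:

```python
# Sketch of the fillrate-vs-bandwidth arithmetic above, using the
# measured fill numbers (MP/sec), board bandwidths (GB/sec), and
# theoretical peak fill quoted in the post.

cards = {
    # name: (measured fill MP/s, bandwidth GB/s, theoretical fill MP/s)
    "HD6970": (8780, 176.0, 28160),
    "GTX580": (9750, 192.4, 24832),
    "HD7970": (13300, 264.0, 29600),
}

for name, (measured, bw, peak) in cards.items():
    efficiency = measured / bw      # MP delivered per GB of bandwidth
    required = peak / efficiency    # GB/s needed to reach peak fill
    print(f"{name}: {efficiency:.2f} MP/GB -> needs {required:.0f} GB/s")
```

Running it reproduces the numbers above: about 49.89/50.68/50.38 MP per GB, and roughly 564, 490, and 588 GB/s of bandwidth required, respectively.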

By this point you may be thinking, "well, that sucks!" Yeah, pretty much. To see just how big a potential issue bandwidth is, you may want to read the following article: http://research.nvidia.com/sites/def...Micro_2011.pdf

*If you don't trust Nvidia engineers, let's ask some AMD ones by comparing the 6970 and 7970.

HD6970 vs 7970
Fillrate: +5%
Bandwidth: +50%
Texture Fill: +40%
FLOPs: +40%

So bandwidth got the single largest increase of anything in 7970 from 6970, and a 10x larger increase than pixel fill. I'm going to wager the AMD engineers had pretty good reasons for their design choices in this regard.
ninelven is offline   Reply With Quote
Old 01-26-12, 01:17 PM   #162
shadow001
Registered User
 
Join Date: Jul 2003
Posts: 1,526
Default Re: next gen kepler to support dx 11.1, also take a year to rollout all cards

Quote:
Originally Posted by Rollo View Post

Of course you're assuming Keplers won't be much faster than 7970s, which we don't know yet.

I'm just using as a baseline what's been happening for the last decade or so for cards of the same generation from both companies: regardless of who's on top or who released first, averaging the results across as many games/applications as possible, it's been about a 20% gap between the two, give or take a few percent.


No, I don't pay attention to specific games at specific settings where it might be higher than that as a general rule, and even assuming it will be 20% faster on average isn't enough to make me wait 3~5 months extra either.


When I bought the GTX 580s in November 2010, just days after their official release, AMD hadn't yet released the HD 6970 (it showed up a month later, give or take), which at the time was speculated to be the same speed for less money, or perhaps faster than the GTX 580. I didn't wait there either and got the GTX 580s, and that was just a month.


And now the situation is reversed: AMD got their part out first, Kepler may still be months away, and you expect people to wait months for its release?... Not me, as I'm completely hardware agnostic, and either option is stupidly fast, especially when using 4 cards like I am.
shadow001 is offline   Reply With Quote
Old 01-26-12, 01:29 PM   #163
shadow001
Registered User
 
Join Date: Jul 2003
Posts: 1,526
Default Re: next gen kepler to support dx 11.1, also take a year to rollout all cards

Quote:
Originally Posted by ninelven View Post
My calculations are correct (or my degree in mathematics is failing me). Buffering does not affect ROP throughput but it does add additional bandwidth and memory space overhead.

[...]

It is very likely that 4X AA is causing you to exceed the available local memory of your card, which then spills over to system RAM, causing a nosedive in performance.

[...]


I don't doubt your calculations at all, and your point about the Heaven demo overspilling to system memory at 4X AA and causing performance to drop off a cliff is very likely correct too. But it was also only running between 20~30 FPS at 2X AA, which is demanding enough as it is, so it's fair to assume that even if there were enough memory on the card, it would still be slower at 4X AA.


Lo and behold, here comes AMD with a card packing twice as much memory and more memory bandwidth than a base GTX 580, selling for $50 less than the 3GB GTX 580s (which weren't available when I bought mine anyhow), not to mention more performance overall in every aspect, and released much earlier than its intended competition.... SOLD!!!....
shadow001 is offline   Reply With Quote
Old 01-26-12, 06:27 PM   #164
ninelven
Registered User
 
Join Date: Jan 2003
Posts: 132
Default Re: next gen kepler to support dx 11.1, also take a year to rollout all cards

They are nice cards indeed, and the 3GB framebuffer + extra bandwidth should really help things at your resolution. I can't fathom powering / needing 4 of them though!
ninelven is offline   Reply With Quote

Old 01-26-12, 06:40 PM   #165
shadow001
Registered User
 
Join Date: Jul 2003
Posts: 1,526
Default Re: next gen kepler to support dx 11.1, also take a year to rollout all cards

Quote:
Originally Posted by ninelven View Post
They are nice cards indeed and the 3GB framebuffer + extra bandwidth should really help things at your resolution. I can't fathom powering / needing 4 of the them though!

A single 1500-watt SilverStone Strider power supply, and yes, the vast majority of that power rating will be used. Since there's a lot of heat released not just from the 4 cards but from both 6-core Xeon CPUs running at 4 GHz (yup, 12 cores and 24 threads) on an EVGA SR-2 motherboard, everything will be water cooled:


The beast in question with the GTX580's:





It can already pull 1200 watts at the wall outlet (measured with an inductive ammeter around the power cord), so the 15-amp circuit breaker that powers the outlet is already complaining (it can deliver about 1850 watts before shutting the circuit down). I'm expecting the 4 HD 7970s to get even closer to that shutdown limit, so even the electrical wiring in the house is working against me... Nope, environmentally friendly and low-carbon-footprint this isn't...
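For anyone curious how close that rig sits to the breaker's limit, here's a quick sketch of the arithmetic. The 120 V mains voltage is my assumption (the post doesn't state it); at exactly 120 V a 15 A breaker is nominally 1800 W, so the ~1850 W figure quoted would correspond to a slightly higher line voltage:

```python
# Back-of-the-envelope check of the wall-outlet numbers in the post,
# assuming a 120 V North American circuit (the mains voltage is an
# assumption; the post doesn't state it).

VOLTS = 120
BREAKER_AMPS = 15

measured_watts = 1200                    # clamp-meter reading from the post
breaker_watts = VOLTS * BREAKER_AMPS     # nominal limit: 1800 W at 120 V

amps_drawn = measured_watts / VOLTS
headroom = breaker_watts - measured_watts
print(f"{amps_drawn:.1f} A of {BREAKER_AMPS} A; {headroom} W of headroom")
```

At 1200 W that's 10 A on a 15 A circuit, leaving only a few hundred watts of headroom before the breaker trips, which is consistent with the concern in the post.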
shadow001 is offline   Reply With Quote
Old 01-27-12, 12:42 AM   #166
Redeemed
Registered User
 
Join Date: May 2005
Posts: 17,982
Default Re: next gen kepler to support dx 11.1, also take a year to rollout all cards

I'd have sex with your computer.
Redeemed is offline   Reply With Quote
Old 01-27-12, 01:15 AM   #167
Logical
Registered User
 
Logical's Avatar
 
Join Date: Apr 2007
Location: UK
Posts: 2,523
Default Re: next gen kepler to support dx 11.1, also take a year to rollout all cards

Quote:
Originally Posted by Redeemed View Post
I'd have sex with your computer.
I'd just want a game to be released that could justify the price and performance of such a machine. It must be like having a Ferrari with a speed limiter on it.... nice to look at and all, but the speed won't go above 30mph until the Fords and Vauxhalls are powerful enough to do the same. Yes, I referenced consoles as Fords and Vauxhalls.....

Surround gaming is nice, but how often do you look at the left and right monitors when gaming? When I tried my friend's surround setup with Dirt 3 and a Logitech wheel and pedals, I found my focus still only on the middle monitor.... Is it a gimmick?... Is it for the benefit of spectators, for more of a wow factor?... I have never tried surround gaming in 3D, but I can't imagine it being much different, with the focus being only on the middle screen.

I do hope some developers can release a game that will actually make use of the system you have there, shadow....
Logical is offline   Reply With Quote
Old 01-27-12, 02:20 AM   #168
shadow001
Registered User
 
Join Date: Jul 2003
Posts: 1,526
Default Re: next gen kepler to support dx 11.1, also take a year to rollout all cards

Quote:
Originally Posted by Logical View Post
Surround gaming is nice but how often do you look at the left and right monitors when gaming? ... I found my focus still only to be on the middle monitor... Is it a gimmick?

[...]

I actually play BF3 a lot, and the way I set up the displays, the side ones are heavily angled towards me so that I can use my peripheral vision and see almost the entire display surface without moving my eyes... It takes a while to get it just right, but suffice to say, on a few occasions where a member of the opposite team sees me from an angle he thinks I can't see and goes for a knife kill, just before he's close enough to pull it off, I switch to the handgun and put a bullet in his head.... Always puts a smile on my face...


It's a fringe benefit, but fun to watch when he's wondering how the hell I saw him coming.... Another good example from the same game is flying helicopters or jets, seeing enemies both on the ground and in the air, and getting on their tail before they even see me.... The game is gorgeous with such a wide field of view, no matter what you do.


Games using all 12 CPU cores could develop some serious artificial intelligence and real physics well beyond the usual ragdoll stuff, and the latter is particularly funny when you kill an NPC and it stays in a death position that would make a professional contortionist jealous... It's rare, but it does happen.
shadow001 is offline   Reply With Quote


Powered by vBulletin® Version 3.7.1
Copyright ©2000 - 2014, Jelsoft Enterprises Ltd.
Copyright 1998 - 2014, nV News.