
View Full Version : Nvidia may be in trouble according to this article



shadow001
02-20-10, 06:46 PM
Anyone who would want to run three monitors in 3D with one GPU is delusional anyway. Heck, I would want CrossFireX or SLI for more than two monitors even without 3D Surround. Keep in mind I am the guy who tried to get four 8800 GTXs running when I did the Skulltrail Waterbox project for Intel.


Crytek showed Crysis 2 running on a single HD5870 card across three 30" LCDs, and it was fully playable, though I have no doubt that if users really go crazy and pour on insane amounts of AA on top, more powerful hardware is needed, of course.

shadow001
02-20-10, 06:54 PM
And to be honest I really think that 3 screens would totally ruin the immersion of 3D with the 2 bezels showing. I would much rather have one large 40" - 50" screen.


there's always this option if you can stomach the price:


http://www.ostendotech.com/crvd/gallery/#thumbs


Look at the thumbnail picture at the far right and imagine that with a set of 3D glasses on top...:D

Muppet
02-20-10, 07:26 PM
there's always this option if you can stomach the price:


http://www.ostendotech.com/crvd/gallery/#thumbs


Look at the thumbnail picture at the far right and imagine that with a set of 3D glasses on top...:D

Something along that line would be pretty sweet. What res is that running at?

shadow001
02-20-10, 08:29 PM
Something along that line would be pretty sweet. What res is that running at?


2880*900 resolution for each display, so 8640*900 across all 3 displays, with a full 180-degree field of view thanks to the curve of each screen. But be prepared to pay $6,400 for each one, so about the price of an average car for 3 of those.
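
For a sense of how many pixels that actually works out to, here's a quick back-of-the-envelope sketch (plain host-side C; the 2560x1600 figure is just my reference point for a typical 30" panel):

#include <stdio.h>

int main(void)
{
    const long w = 2880, h = 900;          /* per-display resolution */
    const int  displays = 3;
    long per_panel = w * h;                /* 2,592,000 pixels */
    long total    = per_panel * displays;  /* 7,776,000 pixels */
    long panel30  = 2560L * 1600L;         /* typical 30" LCD: 4,096,000 */

    printf("per panel: %ld px, all three: %ld px\n", per_panel, total);
    printf("vs one 2560x1600 panel: %.1fx the pixels\n",
           (double)total / (double)panel30);
    return 0;
}

So roughly 1.9x the pixels of a single 30" screen, which is why driving it with one GPU is a tall order.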


Doesn't get more hardcore than that for now though.

shadow001
02-20-10, 10:10 PM
Well, here's an update from Charlie on the GTX 470 and GTX 480 cards, though without specific performance numbers so that his sources aren't uncovered... It's a pretty long article:


http://www.semiaccurate.com/2010/02/20/semiaccurate-gets-some-gtx480-scores/


In short, the GTX 480 is only about 5% faster on average in real-world gaming than the HD 5870, the GTX 470 is actually slower than the HD 5870, they run pretty hot even in 2D mode, and the card will have to sell for about the same price that HD 5870s are going for.


The main advantage seems to be tessellation, where it can actually beat an HD 5970, but that only shows in the Heaven tech demo when the shaders are handling nothing but tessellation calculations. He also claims Nvidia may be very picky about who gets the few review cards it has, to avoid a repeat of what happened with the GTS 250 and the AnandTech and HardOCP reviews, where the cards got slammed hard for simply being rebrands of older GPUs. Availability is extremely limited, period, and if they're selling these for the same price as HD 5870s, they're losing money on each one they make.


I'm hoping to hell this isn't accurate, to be honest; otherwise Nvidia is going to get roasted alive after all the delays Fermi has suffered so far and all the performance promises that it would easily beat the HD 5870.

lee63
02-20-10, 10:18 PM
I'm gonna refer back to this thread after the release....just to see how much this Charlie guy really knows.

My gut feeling has always been they would be 10 to 20% faster.

Madpistol
02-20-10, 10:28 PM
I'm gonna refer back to this thread after the release....just to see how much this Charlie guy really knows.

My gut feeling has always been they would be 10 to 20% faster.

Yep... and everyone thought the 2900 XT was going to beat the pants off of the 8800 GTX as well. Guess how that turned out...


major delays + manufacturing problems + lots of heat + lots of power = a slower card.


Seriously... that's always how it ends up. :p

lee63
02-20-10, 10:43 PM
I wonder if there is any truth to this.

http://vr-zone.com/forums/561963/nvidia-launching-its-first-directx-11-cards-on-monday-.html

shadow001
02-20-10, 10:45 PM
Yep... and everyone thought the 2900 XT was going to beat the pants off of the 8800 GTX as well. Guess how that turned out...


major delays + manufacturing problems + lots of heat + lots of power = a slower card.


Seriously... that's always how it ends up. :p


It was, wasn't it... The R600 was:

1: 6+ months late (like Fermi is).
2: Used a large die, the largest ATI had ever designed (Fermi is much larger than Cypress).
3: Used way too much power, period (just like Fermi, apparently).
4: Ran hot with noisy cooling (seems Fermi is the same on both counts).
5: Couldn't be overclocked much, largely because of the above issues (looks like Fermi is the same there).
6: The X2900 XT needed a 6 + 8 PCIe power connector configuration (the Fermi cards shown at CES in January had that too).


In short, the only unknown for Fermi was its actual performance in games, and if this article is correct, Nvidia had better get the PR spin machine running faster than it ever has before, because I can see the flames coming a mile away.

Xion X2
02-20-10, 10:45 PM
I agree Joe Public may not use half the stuff, but wouldn't you want it available even if you didn't?

Why would I want something available that I never use?

Anyway, out of your long list of things, there are only a few that I care about. Eyefinity and video editing/rendering.

In fact, Eyefinity was the sole reason that I went with ATI this time around. If Nvidia had been a little quicker to the game, then I may not have.

And I used to think that Nvidia had the market cornered for video/GPGPU, but it seems as if ATI is making some headway in that area, as well.


Final Thoughts

After reviewing all the benchmark data as well as the image quality screenshots, both GPGPU technologies had their pros and cons that could affect a consumer's decision to purchase hardware and software that utilizes ATI Stream and/or CUDA. While Stream's transcoding times were slightly better than CUDA in most of our performance tests, CUDA seemed to produce a higher quality image that evened things out a bit. Stream also seemed to be more efficient in using less of the CPU's resources for transcoding while also producing fast transcoding times.
http://www.pcper.com/article.php?aid=745&type=expert&pid=8

I agree that Nvidia currently has the best feature set (I'll give them that on the more flexible 3D), but you exaggerate this to the point of being blatantly misleading. ATI can do just about everything that Nvidia can. They might not be quite as flexible or user-friendly at this point, but they can do it. The only thing of significance that you can't do on ATI by comparison is run PhysX. And PhysX, being a proprietary technology, is never going to catch on with the majority of developers. Especially with how well ATI is doing with the 4xxx and 5xxx lines.

I think that Anandtech effectively summarizes the problem with proprietary technology like PhysX and CUDA in its review of CUDA vs. ATI Stream GPU Computing--

Meanwhile NVIDIA, and now AMD, want to push their proprietary GPU computing technologies as a reason end users should want their hardware. At best, Brook+ and CUDA as language technologies are stop gap short term solutions. Both will fail or fall out of use as standards replace them. Developers know this and will simply not adopt the technology if it doesn't provide the return on investment they need, and 9 times out of 10, in the consumer space, it just won't make sense to develop either a solution for only one IHV's GPUs or to develop the same application twice using two different languages and techniques.

In the consumer space, the real advantage that CUDA has over ATI Stream is PhysX. But this is barely even a real advantage at this point as PhysX suffers from the same fundamental problem that CUDA does: it only targets one hardware vendor's products. While there are a handful of PhysX titles out there, they aren't that compelling at this point either. We will have to start seeing some real innovation in physics with PhysX before it becomes a selling point.

http://www.anandtech.com/video/showdoc.aspx?i=3475

That's the long and short of it. So waving pom-poms about CUDA or PhysX has very little influence with me, and it should have little influence on anyone's purchasing decision at this point. Both are extremely limited proprietary solutions. They will NEVER gain significant developer attention and will eventually be replaced by standards that work on all hardware.
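
To make Anandtech's "develop the same application twice" point concrete, here's roughly what a trivial kernel looks like on the CUDA side. This is just a minimal sketch of mine, not code from either article; the point is that this source only builds with nvcc and only runs on Nvidia hardware, so an ATI Stream/Brook+ version means a separate rewrite with different syntax and tools:

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* SAXPY (y = a*x + y) as a CUDA kernel. Vendor-specific: there is no
   way to run this on an ATI card; Stream/Brook+ needs its own version. */
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    /* launch enough 256-thread blocks to cover n elements */
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f (expect 4.0)\n", hy[0]);
    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}

Multiply that by every kernel in a real application and you can see why most developers won't maintain two of everything just to cover both vendors.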

JasonPC
02-20-10, 11:07 PM
I really think Charlie is just being 100% sensationalist now but we'll soon find out.

shadow001
02-20-10, 11:28 PM
I really think Charlie is just being 100% sensationalist now but we'll soon find out.


True, but that's his style of writing really, although when you think about it, it could be the main reason benchmark numbers still haven't leaked at all. There have got to be at least some Fermi cards done by now, so the hardware is available, even if in limited numbers.


And keeping it secret so as not to alert ATI to just how fast Fermi is doesn't fly anymore, since the HD 5xxx cards are ancient history and due for a refresh in the next 2-3 months, so Fermi is going to face faster cards with more mature drivers by the time it's out in decent quantities anyhow.


The tech industry is brutal and unforgiving when your product is delayed this much....

JasonPC
02-20-10, 11:36 PM
Well I'm going to say it right now. If the GTX 480 is only up to 5% faster than the 5870 in most real world gaming benchmarks, I'm going to buy a 5870 as soon as I find this out :P

Heck I'll even say Charlie is a good man fighting for the good of graphics card enthusiasts ;)

CaptNKILL
02-21-10, 03:18 AM
And keeping it secret so as not to alert ATI to just how fast Fermi is doesn't fly anymore, since the HD 5xxx cards are ancient history and due for a refresh in the next 2-3 months, so Fermi is going to face faster cards with more mature drivers by the time it's out in decent quantities anyhow.


Actually, ATI being due for a refresh is a perfectly good reason for nvidia to keep performance numbers a secret.

As soon as we know how Fermi will perform, ATI will have something to set their sights on and will put their refresh cards out as soon as possible.

The longer nvidia keeps ATI in the dark, the less time ATI will have to match or beat their performance.

Enrico_be
02-21-10, 04:16 AM
edit: Oops, just saw that this article was already posted by shadow001, sorry.

New article from Charlie ...
-| SemiAccurate gets some GTX480 scores -> Hot, buggy and far too slow (http://www.semiaccurate.com/2010/02/20/semiaccurate-gets-some-gtx480-scores/)

The way the article is written is clearly anti-Nvidia, and I hope he is wrong! :(

A reply from Igor_Stanek (from Nvidia) on Twitter to this new article:
http://www.we-wi.be/Images/fermi4.png

I really don't know what to expect now from the GTX 480, but atm I'm hoping it's gonna kick Charlie's ass!!! If it's the other way around, my love for Nvidia will drop to a minimum, but I'm positive that's not gonna happen.

Rollo
02-21-10, 07:58 AM
Charlie gets what he wants by articles like that.... more clicks on his site.

And with each click he can tell his owners (ATi) that he served them well. :ass:

I don't have a Fermi card yet, but Charlie's article doesn't make me feel bad about the prospect even if every word is true.

Here's why:

5% faster overall than an HD5870: 5870s are very fast cards, 5% faster wouldn't suck. It would be approximately the speed of a GTX295, which would be in line with pretty much every other next-gen launch I can remember (a little faster in some games, a little slower in others than last-gen SLI).

Double the DX11 performance of a 5870: that speaks for itself, just amazing. If single chip NV offers double chip ATi DX11, imagine what double chip NV will offer? ZOMG

Hotter: Yawn. I'm a freaking enthusiast. I don't try to stuff my expensive parts into an old Compaq desktop case with one fan.

Power: Again yawn. For the same reasons I have good cases, I have good power supplies. I advise all people who want to run high end graphics and CPUs to do the same. ;)

Cost/losing money: As much as I like my buddies at NVIDIA, I won't lose sleep if they lose money selling consumer GTX480s at first. My guess is a. they'll figure a way to make money anyway, perhaps with those excellent Quadros that totally own the professional market b. they'll make more money with each revision c. the closer they are to ATi pricing, the more cards they'll sell as they are the market leader with the name recognition in the masses anyway.

The more things change, the more they stay the same. I bet if someone was REALLY bored they could go through my old posts and find me "predicting" that when Fermi launches, ATi guys will be back to saying "NVIDIA cards cost too much to make! NVIDIA cards are too hot!"

Guess I can see the future, just like Charlie. :rolleyes:

Enrico_be
02-21-10, 09:04 AM
I agree, Rollo. I just read Charlie's article again, and IF those are indeed the facts, they aren't bad at all :)

Johnny C
02-21-10, 10:07 AM
And with each click he can tell his owners (ATi) that he served them well. :ass:

I don't have a Fermi card yet, but Charlie's article doesn't make me feel bad about the prospect even if every word is true.

Here's why:

5% faster overall than an HD5870: 5870s are very fast cards, 5% faster wouldn't suck. It would be approximately the speed of a GTX295, which would be in line with pretty much every other next-gen launch I can remember (a little faster in some games, a little slower in others than last-gen SLI).

Double the DX11 performance of a 5870: that speaks for itself, just amazing. If single chip NV offers double chip ATi DX11, imagine what double chip NV will offer? ZOMG

Hotter: Yawn. I'm a freaking enthusiast. I don't try to stuff my expensive parts into an old Compaq desktop case with one fan.

Power: Again yawn. For the same reasons I have good cases, I have good power supplies. I advise all people who want to run high end graphics and CPUs to do the same. ;)

Cost/losing money: As much as I like my buddies at NVIDIA, I won't lose sleep if they lose money selling consumer GTX480s at first. My guess is a. they'll figure a way to make money anyway, perhaps with those excellent Quadros that totally own the professional market b. they'll make more money with each revision c. the closer they are to ATi pricing, the more cards they'll sell as they are the market leader with the name recognition in the masses anyway.

The more things change, the more they stay the same. I bet if someone was REALLY bored they could go through my old posts and find me "predicting" that when Fermi launches, ATi guys will be back to saying "NVIDIA cards cost too much to make! NVIDIA cards are too hot!"

Guess I can see the future, just like Charlie. :rolleyes:

Where the hell did you get double the DX11 performance.....


Pretty big jump to go from double performance in one tessellation-limited section of the Heaven demo... to broadly painting Fermi as having double the DX11 performance... which it won't have.

Charlie also indicates that the reason the entire benchmark wasn't shown was due to Fermi running the benchmark with "glitches" and that DX11 isn't "all there" yet.

Not saying Charlie is gospel... but whoever his sources @ nV are... he's been right about Fermi pretty much since last September...

Rollo
02-21-10, 10:17 AM
So you're saying it's possible NVIDIA's unreleased drivers for an unreleased new arch on a new version of DX aren't perfect, Johnny?!??!

<gasp>

I bet all of ATi's unreleased drivers are glitch free and WHQL to boot.

Johnny C
02-21-10, 10:39 AM
So you're saying it's possible NVIDIA's unreleased drivers for an unreleased new arch on a new version of DX aren't perfect, Johnny?!??!

<gasp>

I bet all of ATi's unreleased drivers are glitch free and WHQL to boot.

I know that ATi's cards can run the heaven demo glitch free.....and honestly....that's all that matters...

Do you have some sort of strange ADD or something?

You read Charlie's article... and then posted/skewed the info into "double DX11 performance".

You read my reply... which simply points out that nowhere does it state "double DX11 performance".

You then reply focusing on pre-release drivers, comparing nV to ATi, and blame the drivers for the issues.

I don't know what your profession is... but I'd like to suggest you take up lobbying... I think your line-by-line, take-everything-out-of-context information-skewing abilities would make you a very rich man. Someone could benefit a lot from your hard work. However, I'm betting that as of yet... you've scared more people off nV products than you've convinced to buy nV products. So I probably wouldn't hand in a resume over there...



It saddens me to see the same kinds of posts over and over...especially when you do post up good info and relevant stuff from time to time...

Xion X2
02-21-10, 11:03 AM
Where the hell did you get double the DX11 performance.....



I'm glad you spoke up, because I was wondering the same thing myself. :lol:

Revs
02-21-10, 11:30 AM
Happens every time :lol:

shadow001
02-21-10, 12:22 PM
Actually, ATI being due for a refresh is a perfectly good reason for nvidia to keep performance numbers a secret.

As soon as we know how Fermi will perform, ATI will have something to set their sights on and will put their refresh cards out as soon as possible.

The longer nvidia keeps ATI in the dark, the less time ATI will have to match or beat their performance.


I don't believe so, since whatever the refresh is, the specifications they aim to hit will have been decided upon long ago, regardless of when Fermi is actually released or how it performs... Basically, they'll be doing the best they can with what they have while still getting good yields and making money, which is the whole point of the exercise.


Even ATI's CEO has admitted that while he feels confident the company will hold on to the speed crown for most of 2010, given not only the products they have now but the ones coming in the pipeline, he has also gone on record saying that Nvidia might have the lead for a limited period during 2010.


Basically, he's already assuming the worst-case scenario, that Fermi will be the fastest single GPU on the market for a while, so there's not much to gain by keeping the Fermi benchmarks secret, and in fact a lot to lose, since it's so delayed to begin with and its potential reign as the fastest card is getting shorter by the week.

lee63
02-21-10, 12:22 PM
I just wanna see some #'s....either I get another 5870 or a couple of 480's :headexplode:

shadow001
02-21-10, 12:47 PM
And with each click he can tell his owners (ATi) that he served them well. :ass:

I don't have a Fermi card yet, but Charlie's article doesn't make me feel bad about the prospect even if every word is true.

Here's why:

5% faster overall than an HD5870: 5870s are very fast cards, 5% faster wouldn't suck. It would be approximately the speed of a GTX295, which would be in line with pretty much every other next-gen launch I can remember (a little faster in some games, a little slower in others than last-gen SLI).


So waiting 6+ months for a card that's only 5% faster on average in real-world gaming, while using 280 watts versus a competitor's card using 100 watts less, is considered a victory, is it?... Moving the goalposts much there?


Imagine enthusiasts like us going for SLI or triple-SLI setups with GTX 480s at 280 watts a pop; those 3 cards alone could be pulling 840 watts as a worst-case scenario, and still be beaten performance-wise by a quad-CrossFire setup using 2 HD 5970s drawing about 600 watts between them, maybe 700 watts once you overclock them pretty hard.



Double the DX11 performance of a 5870: that speaks for itself, just amazing. If single chip NV offers double chip ATi DX11, imagine what double chip NV will offer? ZOMG


That was only shown in tessellation, and it's cool that at least in that one aspect it's so much faster than ATI's cards, but by the time games actually use enough tessellation to show that difference, both companies will have released far more advanced GPUs anyhow, so no high-end enthusiast will care; they'll have moved on by then.


Hotter: Yawn. I'm a freaking enthusiast. I don't try to stuff my expensive parts into an old Compaq desktop case with one fan.

Then get started on water-cooling the cards even if overclocking them isn't part of the plan; imagine if you want to overclock them...


Power: Again yawn. For the same reasons I have good cases, I have good power supplies. I advise all people who want to run high end graphics and CPUs to do the same. ;)

Well, 840 watts for triple SLI, then add the power use of a highly overclocked CPU, the motherboard, the RAM, optical drives and hard drive/SSD setup, not to mention the pump and fans for the water cooling, and don't stop until you hit power supplies rated for 1500 watts... You know, the ones that cost $400 on their own.


And while you're at it, plug the PC into its own dedicated power circuit in the house, since most of those use a 15-amp breaker at 120 volts, good for 1800 watts max, and power supplies are never 100% efficient (usually around 85% these days), so that uber PC will suck most of the juice from that wall socket on its own (rough numbers sketched below).


No lights in the room, or large TVs, or audio/home theater system, unless you're also willing to hire a contractor to modify the electrical system in the house... ;)
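
To put some rough numbers on that, a quick sketch (plain host-side C; the 300 watts for everything besides the GPUs is my own guess, the 85% efficiency is from above):

#include <stdio.h>

int main(void)
{
    const double gpu_w          = 280.0;       /* rumored GTX 480 board power */
    const int    gpus           = 3;           /* triple SLI */
    const double rest_of_system = 300.0;       /* guess: OC'd CPU, board, drives, pump */
    const double efficiency     = 0.85;        /* typical PSU efficiency */
    const double circuit_w      = 15.0 * 120;  /* 15 A breaker at 120 V = 1800 W */

    double dc_load   = gpu_w * gpus + rest_of_system; /* 1140 W at the rails */
    double wall_draw = dc_load / efficiency;          /* ~1341 W from the socket */

    printf("DC load: %.0f W, wall draw: %.0f W (%.0f%% of a %.0f W circuit)\n",
           dc_load, wall_draw, 100.0 * wall_draw / circuit_w, circuit_w);
    return 0;
}

That's roughly three quarters of a dedicated 15-amp circuit for the PC alone, before you plug anything else into it.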


Cost/losing money: As much as I like my buddies at NVIDIA, I won't lose sleep if they lose money selling consumer GTX480s at first. My guess is a. they'll figure a way to make money anyway, perhaps with those excellent Quadros that totally own the professional market b. they'll make more money with each revision c. the closer they are to ATi pricing, the more cards they'll sell as they are the market leader with the name recognition in the masses anyway.

The more things change, the more they stay the same. I bet if someone was REALLY bored they could go through my old posts and find me "predicting" that when Fermi launches, ATi guys will be back to saying "NVIDIA cards cost too much to make! NVIDIA cards are too hot!"

Guess I can see the future, just like Charlie. :rolleyes:


If they do lose money on each one they sell, you can bet the volumes for the GeForce lineup will always be limited and most chips will go to the Quadro and Tesla markets, as Nvidia isn't stupid or suicidal enough to do gaming enthusiasts favors and lose money selling these at ~$400.


If the article is true, then this first-generation Fermi is basically dead in the water as a GeForce product; look for 2nd-generation products on 32 nm or even 28 nm soon after... ATI will make sure they need them, with the Northern Islands GPUs potentially being released by the end of the year.