
Nvidia chief goes ballistic over gimmick charge



***CENSORED***
05-18-04, 11:30 PM
http://www.theinquirer.net/?article=15984

The researcher asked Jen Hsun about the features crucial to the Unreal 3 engine, which will be finished late next year if we are lucky. The researcher described PS3.0 as a gimmick, but Jen Hsun lit up at the suggestion.

He told the researcher: "It's not a gimmick, and it cost us 60 million transistors." He was quite upset and doesn't like the fact that people compare ATI cards, which he called a "3 year old architecture", with Nvidia's PS3.0 part. The Nvidia CEO said "all consumers want to have the next gen architecture, not a three year old one".

Clay
05-18-04, 11:33 PM
Interesting. If SM3.0 is a gimmick, then it would be nonsensical for Microsoft to waste all that time and resources on a mere gimmick.

Jarred
05-18-04, 11:33 PM
http://www.theinquirer.net/?article=15984

yeah I read that, pretty funny.

Jarred
05-18-04, 11:34 PM
Interesting. If SM3.0 is a gimmick, then it would be nonsensical for Microsoft to waste all that time and resources on a mere gimmick.

indeed.

Skynet
05-19-04, 12:01 AM
gim·mick
n.

1.
   a. A device employed to cheat, deceive, or trick, especially a mechanism for the secret and dishonest control of gambling apparatus.
   b. An innovative or unusual mechanical contrivance; a gadget.

2.
   a. An innovative stratagem or scheme employed especially to promote a project: an advertising gimmick.
   b. A significant feature that is obscured, misrepresented, or not readily evident; a catch.

3. A small object whose name does not come readily to mind.

PS3.0 may actually fit the description, "An innovative or unusual mechanical contrivance; a gadget."

Not all gimmicks are bad, just the ones your competitor has and you don't :rolleyes:

Zeno
05-19-04, 12:04 AM
http://www.theinquirer.net/?article=15984

When I program, I always try to avoid gimmicky language features like loops and conditionals. Instead, I just write out separate code for each case and ship multiple programs to the clients. :rolleyes:

Sounds ridiculous, right? But that's basically what you have to do, in PS2.0, to get the best pixel shader performance for multiple cases (e.g. number of lights, fog on/off, lightmap on/off, etc). It's a huge pain. As a programmer, I'd like to thank NVIDIA for the new PS3.0 features. It's unfortunate that ATI is not following suit this round....they're holding back progress by keeping PS2.0 as the least common denominator for another generation.
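
To make the permutation problem concrete, here is a minimal C++ sketch of the bookkeeping described above. It is only an illustration, not code from the post or from any real engine: ShaderHandle, compile_variant(), and the three options (light count, fog, lightmap) are hypothetical names. Even with this tiny option set, the SM2.0-style path balloons to 20 separate shaders, while SM3.0-style loops and conditionals would let a single shader cover every case.

// Minimal sketch of the "shader permutation" bookkeeping described above.
// Assumptions: ShaderHandle, compile_variant(), and the three options are
// hypothetical stand-ins, not any real engine or Direct3D API.
#include <array>
#include <cstdio>
#include <string>

using ShaderHandle = std::string;   // stand-in for a compiled pixel shader

constexpr int kMaxLights = 4;

// SM2.0-style path: no dynamic branching in the pixel shader, so we
// pre-build one shader per combination of options and pick one per draw.
std::array<ShaderHandle, (kMaxLights + 1) * 2 * 2> g_variants;

ShaderHandle compile_variant(int lights, bool fog, bool lightmap) {
    // A real engine would feed #defines to the shader compiler here.
    return "ps20_lights" + std::to_string(lights) +
           (fog ? "_fog" : "") + (lightmap ? "_lm" : "");
}

int variant_index(int lights, bool fog, bool lightmap) {
    return (lights * 2 + (fog ? 1 : 0)) * 2 + (lightmap ? 1 : 0);
}

void build_all_variants() {
    for (int lights = 0; lights <= kMaxLights; ++lights)
        for (int fog = 0; fog < 2; ++fog)
            for (int lm = 0; lm < 2; ++lm)
                g_variants[variant_index(lights, fog != 0, lm != 0)] =
                    compile_variant(lights, fog != 0, lm != 0);
}

int main() {
    build_all_variants();   // 20 shaders for just three options
    std::printf("SM2.0-style path: %zu separate shaders\n", g_variants.size());
    // SM3.0-style path: with loops and per-pixel conditionals, one shader
    // can read the light count and the fog/lightmap flags as constants and
    // branch on them, so the whole table above collapses to a single program.
    std::printf("SM3.0-style path: 1 shader with runtime branches\n");
    return 0;
}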

Blacklash
05-19-04, 12:31 AM
When I program, I always try to avoid gimmicky language features like loops and conditionals. Instead, I just write out separate code for each case and ship multiple programs to the clients. :rolleyes:

Sounds ridiculous, right? But that's basically what you have to do, in PS2.0, to get the best pixel shader performance for multiple cases (e.g. number of lights, fog on/off, lightmap on/off, etc). It's a huge pain. As a programmer, I'd like to thank NVIDIA for this feature. It's unfortunate that ATI is not following suit this round....they're holding back progress by keeping PS2.0 as the least common denominator for another generation.

Indeed, SM 3.0 is part of DX9. SM 3.0 will cease to be a 'gimmick' to some when it is implemented by their favorite brand; then it will be the greatest thing since sliced bread. I understand, though: it would be unwise to increase the perceived value of something you cannot currently offer/deliver. Maybe next round...

-=DVS=-
05-19-04, 12:49 AM
The researcher asked Jen Hsun about the features crucial to the Unreal 3 engine, which will be finished late next year if we are lucky. The researcher described PS3.0 as a gimmick, but Jen Hsun lit up at the suggestion.

He told the researcher: "It's not a gimmick, and it cost us 60 million transistors." He was quite upset and doesn't like the fact that people compare ATI cards, which he called a "3 year old architecture", with Nvidia's PS3.0 part. The Nvidia CEO said "all consumers want to have the next gen architecture, not a three year old one".

Heh, he should not call ATI cards 3 year old tech, because if he does he makes his own products look bad: "hey look, 3 year old tech keeps up with and even surpasses our next gen" :screwy: But it's the Inq, so what can you expect...

Ruined
05-19-04, 12:57 AM
Heh, he should not call ATI cards 3 year old tech, because if he does he makes his own products look bad: "hey look, 3 year old tech keeps up with and even surpasses our next gen" :screwy: But it's the Inq, so what can you expect...

The Voodoo3 was as fast as the TNT2 and faster in some cases. Doesn't mean it wasn't old ass technology with its lack of 32-bit color and stencil shadowing :)

-=DVS=-
05-19-04, 12:58 AM
Having more features but being slower doesn't make one better ;)

Ruined
05-19-04, 01:04 AM
Having more features but being slower doesn't make one better ;)

The TNT2 and V3 were in the same league performance-wise, just like the 6800/x800. What do you get for your money with 9800 -> x800? More fps and 3Dc. What do you get for your money with 5900 -> 6800? More fps, Shader Model 3.0, HDRL, full 128-bit precision at playable framerates, and a hardware video encoder/decoder - all of which the ATI parts lack. Both are in the same league performance-wise, but not feature-wise. The 6800 is an evolution in graphics technology over its predecessor, while the x800 has just had a few cylinders added to the engine of the 9800XT. This is why I'd say the 6800 is a better buy - you simply get more for your money with similar performance. And you have more to play with :)

hovz
05-19-04, 01:14 AM
The TNT2 and V3 were in the same league performance wise just like the 6800/x800. What do you get for your money with 9800 -> x800? More fps and 3dc. What do you get for your money with 5900 -> 6800? More fps, Shader Model 3.0, HDRL, 128-bit precision at high framerates, and a hardware video encoder/decoder - things all of which ATI parts lack. Both are in the same league performance wise, but not feature wise. The 6800 is an evolution in graphics technology over its predecessor, while the x800 just has had a few cylinders added to the engine of the 9800xt. This is why I'd say the 6800 is a better buy - you simply get more for your money with similar performance. And you have more to play with :)

posts like that make it very hard to respect your opinion :rolleyes:

Ruined
05-19-04, 01:15 AM
posts like that make it very hard to respect your opinion :rolleyes:

Because... ?

Dazz
05-19-04, 01:28 AM
From what I can see 3Dc makes a big difference to IQ. Also, HDRL performance ain't that good on the GF 6800 Ultra, which is about 10% faster than a 9800XT, while the X800 PE is 110% faster than the 9800XT. As for 128-bit precision, it won't be taken full advantage of as developers still use 8-bit, 16-bit, and 24-bit modes, which give the best IQ needed without taxing the video card.

Ruined
05-19-04, 01:31 AM
From what I can see 3Dc makes a big difference to IQ. Also, HDRL performance ain't that good on the GF 6800 Ultra, which is about 10% faster than a 9800XT, while the X800 PE is 110% faster than the 9800XT. As for 128-bit precision, it won't be taken full advantage of as developers still use 8-bit, 16-bit, and 24-bit modes, which give the best IQ needed without taxing the video card.

32-bit precision and Shader Model 3.0 are both already going to be taken advantage of in the upcoming FarCry expansion, all the 2005 EA Sports games, Painkiller, Splinter Cell 3, etc. You would probably also use the 6800's hardware video encoder/decoder on a daily basis, because almost everyone uses their PC to either watch or create media. Your performance numbers for the 6800/x800 are also way off :). The only place the 6800 is slower is when using high levels of AF in full trilinear mode at high resolutions, due to ATI using AF optimizations instead of full trilinear.

My point is, the 6800 is forward-looking. The x800 is not. If you took a 9800XT, added 8 pipes, and clocked it to 550MHz, you would basically have an x800XT sans 3Dc. The 6800 is a totally different beast from the 5900 - no modification of the 5950 would bring you even remotely close to the 6800. Plus the 6800 offers a wealth of features for users to play around with, both for IQ/performance and for practical uses such as hardware video encode/decode, that neither its competitor nor the last-generation part offered. And that is important when you spend $500 for a friggin video card :)

hovz
05-19-04, 01:48 AM
Because... ?

because you're twisting the data and stating false info to twist people's views

MUYA
05-19-04, 01:53 AM
Guys, if you want to carry this on, take it to PM, thanks. Do not hijack the thread and its original topic.

If you have problems with each other then please PM and sort it out, thanks.

MUYA
05-19-04, 02:15 AM
Any off topic or venting against each other will be deleted/edited.

Clay
05-19-04, 08:30 AM
When I program, I always try to avoid gimmicky language features like loops and conditionals. Instead, I just write out separate code for each case and ship multiple programs to the clients. :rolleyes:

Sounds ridiculous, right? But that's basically what you have to do, in PS2.0, to get the best pixel shader performance for multiple cases (e.g. number of lights, fog on/off, lightmap on/off, etc). It's a huge pain. As a programmer, I'd like to thank NVIDIA for this feature. It's unfortunate that ATI is not following suit this round....they're holding back progress by keeping PS2.0 as the least common denominator for another generation.

Great points.

muzz
05-19-04, 07:32 PM
Indeed, SM 3.0 is part of DX9. SM 3.0 will cease to be a 'gimmick' to some when it is implemented by their favorite brand; then it will be the greatest thing since sliced bread. I understand, though: it would be unwise to increase the perceived value of something you cannot currently offer/deliver. Maybe next round...

I remember when the same was said about PS1.4 by NV fans..
SO?
Look where THAT went without NV following suit.
As I said, folks, I PERSONALLY am glad that NV exposed 3.0 to developers, but please let's not go overboard and try to make it look like ATi has never done this before.
ATi has stated numerous times why they did not implement 3.0, and IMO it makes sense whether I like it or not.
Don't get me wrong guys, I wish ATi had kept moving forward, but it sure seems like they haven't done much since R300.
I do believe that had the NV3x been a monster they would have had to rethink some things, but it was very late and never really made them do anything, so in a sense IMO (notice how I said IMO? Good), NV slowed down progress a bit themselves.

Let's try and keep this civil this time, please.

TY

m

MikeC
05-19-04, 08:16 PM
"The conference call ended with a confusing series of questions from a financial analyst, which eventually led to a somewhat heated discussion."

http://www.nvnews.net/cgi-bin/search_news.cgi?keyword=conference+call

The conference call was not as dramatic as The Inquirer suggests and I would not describe Jen Hsun Huang as "incandescent with rage." Although the Inquirer reports the conference call as being held last week, it was actually held on May 6th.

However, you can decide for yourself.

http://www.corporate-ir.net/ireye/ir_site.zhtml?ticker=NVDA&script=1100

Blacklash
05-19-04, 08:21 PM
I remember when the same was said about PS1.4 by NV fans..
SO?
Look where THAT went without NV following suit.
As I said, folks, I PERSONALLY am glad that NV exposed 3.0 to developers, but please let's not go overboard and try to make it look like ATi has never done this before.
ATi has stated numerous times why they did not implement 3.0, and IMO it makes sense whether I like it or not.
Don't get me wrong guys, I wish ATi had kept moving forward, but it sure seems like they haven't done much since R300.
I do believe that had the NV3x been a monster they would have had to rethink some things, but it was very late and never really made them do anything, so in a sense IMO (notice how I said IMO? Good), NV slowed down progress a bit themselves.

Let's try and keep this civil this time, please.

TY

m
I understand what you are saying Muzz,

I was highly critical of Nvidia for the mistakes they made with the FX series. For me the 6800U is close enough to the X800XT that I am willing to buy it, all things considered. And I get to see how SM 3.0 pans out on their hardware. It really does come down to personal preferences. I have said in other threads both cards are solid and worth an investment....

muzz
05-19-04, 08:34 PM
I agree they are both plenty powerful, and as I have stated the NV40 looks like a solid card, and I am glad they gave developers 3.0.

Let's put it this way: I am actually considering an NV40, and this is the FIRST time I have considered NV (I have my reasons folks, please don't bother pushing it - it was my $ so my choice, and I'll leave it at that).

I will be buying one of them; I am awaiting more info/availability and driver updates.

rms
05-20-04, 10:16 AM
I've been using a 9700pro for almost 2 years now, but fully agree with the posts laying out how the nvidia part has more features and is more forward-looking.

But one thing that I haven't seen talked about so much is driver maturity. The CEO's disparaging comment about a '3 year-old ATI part' also means that the ATI drivers have been maturing on essentially identical hardware for 3 years, while Nvidia has been in the position of re-writing its drivers 3 times in the last 3 years for completely different architectures.

The importance of this should not be underestimated. It will take many, many months to work out bugs and optimize the Nvidia drivers. Meanwhile, ATI's driver team is freed up to work on the R500. It still looks to me like Nvidia will be playing catch-up for some time to come.

rms

jimmyjames123
05-20-04, 10:23 AM
I agree with you that the NV drivers seem to be more raw at the moment, but don't you think you are oversimplifying things a bit? You need to take into account the fact that the R5xx generation of ATI hardware will be significantly different from the R3xx and R4xx generations, because they will need to add full support for SM 3.0, where full precision is FP32. The entire industry is moving towards SM 3.0. NV obviously has a large head start in creating hardware with full precision as specified by the upcoming DirectX 9.0c spec, and also in optimizing software for SM 3.0 capable hardware. Also, the ATI driver team will be busy over the next 6-12 months doing a re-write of their OpenGL code. At this point, it is tough to say which driver team has the upper hand; all we can really say is that they will both be very busy in such a competitive environment.