09-03-02, 03:37 PM   #5
jbirney
Registered User
Join Date: Jul 2002
Posts: 1,430

Good points but I see it differently:

Quote:
ATI is currently betting on the fact that they've got the performance crown to sell their Radeon 9000
Well, that's not gonna hurt them, but I don't see that as the major selling point. Their major selling point is that it's the first true "budget" card that features DX8.1. It also has a bunch of other features, and OEMs like features to advertise.


Quote:
Their opinion seems to be that DX8 should no longer be looked at - they're going to release the FASTEST DX8-only solution the world will ever see, the NV28, and that'll be truly sufficient for 95% of upcoming DX8 games.
There was some talk that the NV28 was the integrated graphics core of the nForce2, with a lower-cost GF3 built into it. That could explain the extra transistors over the GF4. I have no idea. I did find out today that one of the new parts is only a GF4 MX with AGP 8x. That is a huge mistake IMHO: we need more DX8 cards out there, not more DX7 parts!



Quote:
nVidia strategy with Cg is easy to understand: let the developers rapidly upgrade from DX8 to DX9, let them program great effects only working with the NV30 and stuff like that. In other words - let the developers have freedom to create things that make their cards sell.
Guys, pixel/vertex shaders are not that hard to write by hand. It's not the easiest thing to do, but there are things much worse. Don't take me wrong, as I like what Cg can do. It will help and shave a small amount of time off development cycles. But until the mass of people out there have hardware that can use these new features, it won't mean jack.

Again, look back at the original TNT: how long before games were 32-bit? Look at the TnL unit of the GF: how long before games that used TnL were the norm? Look at the vertex and pixel shaders made possible by the GF3, and 16 months later we still only have a handful of games that use these features. The reason is very simple: developers will always write their game to the base hardware specs. It does not matter what the high-end cards have, as very few if any of those features will be used. It's simple logic.
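To put "by hand" in perspective, here is a minimal sketch of the classic transform-plus-diffuse vertex shader, first in raw DX8 vs.1.1 assembly and then in rough Cg. The register layout (position in v0, diffuse in v5, the transform matrix rows in c0-c3) and the entry name are my assumptions for illustration, not from any particular game.

Code:
; DX8 vertex shader assembly (vs.1.1), written by hand
; assumes position bound to v0, diffuse color to v5,
; and the model-view-projection matrix rows in c0-c3
vs.1.1
dp4 oPos.x, v0, c0   ; dot position with each matrix row
dp4 oPos.y, v0, c1
dp4 oPos.z, v0, c2
dp4 oPos.w, v0, c3
mov oD0, v5          ; pass the diffuse color straight through

And the same thing in Cg, which the compiler boils down to more or less the assembly above:

Code:
// Rough Cg equivalent (struct and parameter names are illustrative)
struct vsout {
    float4 pos : POSITION;
    float4 col : COLOR0;
};

vsout main(float4 pos : POSITION,
           float4 col : COLOR0,
           uniform float4x4 mvp)  // model-view-projection matrix
{
    vsout OUT;
    OUT.pos = mul(mvp, pos);  // the four dp4s, written as one mul
    OUT.col = col;            // straight pass-through
    return OUT;
}

Neither version is rocket science, which is the point: Cg saves typing, not the need for hardware that can run the result.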

Quote:
So, nVidia most likely optimized the NV30 for DX9 - and it most likely won't be better at all in DX8 than the R300.
I can see it the other way. I think for the first set of DX8 titles, the extra TMU of the NV30 will give it a bit more muscle. But as more and more games use a fully DX8 pipeline, that extra TMU does not do much, so maybe it will shift? Who knows.

Quote:
ATI, on the other hand, has the vision of a market where DX9 games will take so much time to be ready that having great DX9 performance is lost time - better to have great DX8 performance to make everyone think DX9 performance will own.
The same is true for the NV30, as it's probably only a few months behind the R300.

Quote:
nVidia strategy is very simple - high DX8 performance with a DX8 card and high DX9 performance with a DX9 card - they don't think anyone cares that the performance of their DX9 card is so much better in DX8 than their DX8 card.

ATI, on the other hand, bets on the fact DX9 games will be nearly non-existent for months. And it isn't RenderMonkey which is gonna change that *grins* I'm sorry, but I prefer Cg with VC++ color coding.
Actually, RenderMonkey is an add-on for the popular design tools. It outputs the shader code so you don't have to write a single line. Funny thing is, in most cases you can use RenderMonkey to output Cg code, which can then be compiled. Imagine that.
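For what it's worth, if you wanted to take Cg output like that and build it yourself, the Cg toolkit ships a command-line compiler. A hypothetical invocation (the file names here are made up) would look something like:

Code:
cgc -profile vs_1_1 -entry main shader.cg -o shader.vsh

That spits out DX8 vertex shader assembly you can feed to the runtime the same as any hand-written shader.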