
nvnews elite - I have a question for you


serAph
09-19-03, 12:20 PM
People keep comparing the 5900 to the GeForce3 series, saying that after 6 months of crappy performance a driver was released with a significant speed increase.

What was that speed increase a result of? Why can't the NV3x be affected similarly by "proper" drivers?

I have an idea of the NV3x dilemma, but not in comparison to the GeForce3's...

saturnotaku
09-19-03, 12:36 PM
Originally posted by serAph
People keep comparing the 5900 to the GeForce3 series, saying that after 6 months of crappy performance a driver was released with a significant speed increase.


Where are you hearing those comparisons? Certainly not from here. And the GeForce3 was by no means slow in any test, even with older drivers. People expected the GF3 to be quick, and it was. The new drivers just made it faster because the architecture was solid and capable. The FX architecture is not, and that's the key difference.

serAph
09-19-03, 12:43 PM
Originally posted by saturnotaku
Where are you hearing those comparisons? Certainly not from here. And the GeForce3 was by no means slow in any test, even with older drivers. People expected the GF3 to be quick, and it was. The new drivers just made it faster because the architecture was solid and capable. The FX architecture is not, and that's the key difference.

heh yep, I've heard tales of the GeForce3's really high quality - and I've seen benchies in which they beat 9500s and 9600s in specific OpenGL tests... ...but I've been hearing about that driver speed increase both @ guru & nvnews - but not from anyone with very high post counts or anything...

digitalwanderer
09-19-03, 01:09 PM
Originally posted by serAph
heh yep, I've heard tales of the GeForce3's really high quality - and I've seen benchies in which they beat 9500s and 9600s in specific OpenGL tests... ...but I've been hearing about that driver speed increase both @ guru & nvnews - but not from anyone with very high post counts or anything...
The GF3 was a great card. I had a vanilla VisionTek 64MB GF3 with a Crystal Orb and some fins on her that did 240/530 for all the years I owned her...and I was her second owner. (And she's still going strong in a friend of mine's rig...an incredible card!)

The FX is an entirely different story, and samey-same as to its performance being fixed by a driver update: it's not going to happen.

Lemme repeat that just so there can be no uncertainty or doubt: it is NOT going to happen!

Yes, they'll be able to boost performance a bit, but only at a loss of image quality....and they will NEVER do DX9 well, ever. nVidia will either have to custom-code the shaders for an entire game to get decent performance, at the cost of a LOT of eye-candy/quality, or the games will perform horribly on them.....period.

There really isn't any room for doubt on this, honest. The cat is out of the bag about the FX and you will be hearing a whole lot of people saying a whole lot of things, but it simply will not change the fact that the FX is not a DX9 card nor will it ever be.

serAph
09-19-03, 01:31 PM
Originally posted by digitalwanderer
The GF3 was a great card. I had a vanilla VisionTek 64MB GF3 with a Crystal Orb and some fins on her that did 240/530 for all the years I owned her...and I was her second owner. (And she's still going strong in a friend of mine's rig...an incredible card!)

The FX is an entirely different story, and samey-same as to its performance being fixed by a driver update: it's not going to happen.

Lemme repeat that just so there can be no uncertainty or doubt: it is NOT going to happen!

Yes, they'll be able to boost performance a bit, but only at a loss of image quality....and they will NEVER do DX9 well, ever. nVidia will either have to custom-code the shaders for an entire game to get decent performance, at the cost of a LOT of eye-candy/quality, or the games will perform horribly on them.....period.

There really isn't any room for doubt on this, honest. The cat is out of the bag about the FX and you will be hearing a whole lot of people saying a whole lot of things, but it simply will not change the fact that the FX is not a DX9 card nor will it ever be.

woah woah, hold your nvbashin horses - I didn't say anywhere that the FX is decent or that I expect anything from it. I know better than that. And it doesn't bother me so much that it's not a DX9 card anyway - I hate Microsoft and their moneymaking schemes, and I'm displeased with how close ATi is to it - but I was just wondering what this GeForce3 thing was and how it was different from the FX situation - and I was hoping for something more technical than: "the GeForce3 was a good card, the FX isn't."

saturnotaku
09-19-03, 01:50 PM
Originally posted by serAph
I hate Microsoft and their moneymaking schemes, and I'm displeased with how close ATi is to it

This is a completely asinine statement. Microsoft put out the specs for DirectX 9, ATI followed them with its hardware design, NVIDIA didn't. Your displeasure is totally misguided.

serAph
09-19-03, 02:04 PM
Originally posted by saturnotaku
This is a completely asinine statement. Microsoft put out the specs for DirectX 9, ATI followed them with its hardware design, NVIDIA didn't. Your displeasure is totally misguided.

nah - OpenGL is a perfectly fine standard - but Microsoft has to put their greedy little hands on everything - so now we have competing standards, and because of it, I - as well as MANY MANY others - have gotten ripped off. I find it asinine how godly ATi card holders treat DirectX.

The Baron
09-19-03, 03:45 PM
Originally posted by serAph
nah - OpenGL is a perfectly fine standard - but Microsoft has to put their greedy little hands on everything - so now we have competing standards, and because of it, I - as well as MANY MANY others - have gotten ripped off. I find it asinine how godly ATi card holders treat DirectX.
The problem is though that it's MS's tech, like it or not. So, it's either MS's way or the highway. If you don't like it, well then, don't make a D3D driver (and suffer the inevitable consequences).

serAph
09-19-03, 03:48 PM
Originally posted by The Baron
The problem is though that it's MS's tech, like it or not. So, it's either MS's way or the highway. If you don't like it, well then, don't make a D3D driver (and suffer the inevitable consequences).

I'm not suggesting that anyone should leave out DirectX support - just that the world would be better if DirectX had never existed in the first place. I for one am VERY aware of the consequences of bad DirectX support.

The Baron
09-19-03, 04:01 PM
Originally posted by serAph
I'm not suggesting that anyone should leave out DirectX support - just that the world would be better if DirectX had never existed in the first place. I for one am VERY aware of the consequences of bad DirectX support.
were you dropped on the head as an infant?

"Bad DirectX support..." what are you talking about? DirectX is the reason that you don't have to manually configure sound card drivers and all that crap anymore, like it or not. Hell, even Carmack uses some of it (DirectSound, think he might use DirectPlay but I'm not sure).

DX, since it's controlled by a single company, also progresses at a much faster rate than OGL.

serAph
09-19-03, 04:09 PM
Originally posted by The Baron
were you dropped on the head as an infant?

"Bad DirectX support..." what are you talking about? DirectX is the reason that you don't have to manually configure sound card drivers and all that crap anymore, like it or not. Hell, even Carmack uses some of it (DirectSound, think he might use DirectPlay but I'm not sure).

DX, since it's controlled by a single company, also progresses at a much faster rate than OGL.

haha - my bad - I totally meant Direct3D :)

Deathlike2
09-19-03, 08:19 PM
and I was hoping for something more technical than: "the GeForce3 was a good card, the FX isn't."

Well... think about this...

The GF3 (NV20) was developed from the ground up with MS's input..

The Radeon 9700 (R300) went the same way.. (well, somewhat)

There have been some "conspiracy" theories on why MS ended up choosing FP24 as the standard.. and not FP16..

The NV30 was designed to "at least" simulate 8 pipelines of FP16 (this kind of info is shady at best)... and I'd suspect that MS decided to go for FP24 just for precision's sake (perhaps FP16 wasn't enough for them)..
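Just to put rough numbers on the formats being argued about, here's a tiny Python sketch. The FP24 split shown is the commonly cited R300 layout - treat the exact bit splits as an assumption rather than gospel:

import numpy as np  # not strictly needed here, just the usual numeric toolkit

# Rough bit layouts of the shader precisions under discussion.
# FP24 is the commonly cited R300 layout; treat the splits as assumptions.
formats = {
    "FP16": dict(sign=1, exponent=5, mantissa=10),   # NVIDIA partial precision
    "FP24": dict(sign=1, exponent=7, mantissa=16),   # DX9 minimum "full" precision
    "FP32": dict(sign=1, exponent=8, mantissa=23),   # NVIDIA full precision / IEEE single
}
for name, f in formats.items():
    print(f"{name}: {f}  ~{f['mantissa'] + 1} significant bits")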

It is a possibility that NVidia's arrogance came into play here... as they probably had a hand in the PP specification..

As for the NV20... it was "crippled" in the sense that the Xbox version had a superior GPU. The fact that the NV20 had one less pipeline for the VS (I think) made it sound "crippled"... I feel the NV25 pretty much resolved that issue by adding that pipeline back in, although an anisotropic filtering issue kicked in: the GF3 did aniso effectively with both texture units, while the GF4 (NV25) had only one texture unit working on it, reducing its aniso performance to GF3 levels.

As much as I can say that the GF3 card will be extensively programmed for on the Xbox and PC (for DX8)... the same can be said for the R300 (for DX9)

Deathlike2
09-19-03, 08:37 PM
I forgot to mention.. I'm not an NVNews Elite :P

Seriously.. the drivers NVidia has put out on an annual basis (Det 2, 3, Det 40s).. they were of sheer quality (well, mostly performance enhancements)...

Most of those drivers worked the architecture to its fullest (the original release driver probably wasn't really optimized, it probably added some basic support and new features for that specific hardware)...

You could "say" they are planning to do that with the FX... but with the current trend these drivers will be "a significant performance improvement".. but we all know what's going to go out the window.. and there's not much you can do with an architecture that doesn't truly conform with the standard (if the NV3X series had 8 FP32 pipelines, the whole series of issues would never have happened in the first place, in my opinion)

As far as the "optimized shader compiler" is concerned.. to fully make the NV3X efficient it has to run ALL FP24/32 code at FP16... it will LOOK different for sure.. but it may very well perform like the R3XX hardware (fast performance, lower IQ)
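A minimal numeric sketch of that FP16 demotion idea - Python/numpy float16 and float32 standing in for the GPU's partial and full precision; this is purely illustrative, not shader or driver code:

import numpy as np

# Toy "shader" term: a naive specular power, evaluated once in FP32 and once
# with every intermediate demoted to FP16, to show how forcing partial
# precision changes the numbers (and hence, potentially, the image).
def specular(n_dot_h, power, dtype):
    result = dtype(1.0)
    x = dtype(n_dot_h)
    for _ in range(power):
        result = dtype(result * x)   # rounding happens at every step in FP16
    return result

print(specular(0.999, 64, np.float32))   # ~0.938
print(specular(0.999, 64, np.float16))   # drifts from the FP32 value - error
                                         # that can show up as banding/artifacts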

StealthHawk
09-19-03, 10:02 PM
Originally posted by serAph
People keep comparing the 5900 to the GeForce3 series, saying that after 6 months of crappy performance a driver was released with a significant speed increase.

What was that speed increase a result of? Why can't the NV3x be affected similarly by "proper" drivers?

I have an idea of the NV3x dilemma, but not in comparison to the GeForce3's...

The gf3 did not have crappy performance. I don't know how this myth keeps getting perpetuated.

The gf3 was a little slower than the gf2Ultra in some situations (low resolution, 16-bit color) and faster in others (high resolution, 32-bit color, FSAA). This comes as no surprise because the gf2Ultra had more fillrate than the gf3. Both cards had the same raw bandwidth. The gf3 was clocked at 200/230; the gf2Ultra was clocked at 250/230.
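A quick back-of-the-envelope check of those numbers (assuming the commonly quoted configuration of 4 pixel pipelines with 2 TMUs each and a 128-bit DDR memory bus on both cards):

# Theoretical peaks, assuming 4 pixel pipes x 2 TMUs and a 128-bit DDR bus on both cards.
def peaks(core_mhz, mem_mhz, pipes=4, tmus_per_pipe=2, bus_bits=128):
    pixel_fill = core_mhz * pipes                      # Mpixels/s
    texel_fill = core_mhz * pipes * tmus_per_pipe      # Mtexels/s
    bandwidth  = mem_mhz * 2 * (bus_bits // 8) / 1000  # GB/s (DDR = 2 transfers per clock)
    return pixel_fill, texel_fill, bandwidth

print("gf2Ultra:", peaks(250, 230))   # (1000, 2000, 7.36) - more fillrate
print("gf3:     ", peaks(200, 230))   # (800, 1600, 7.36)  - same bandwidth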

Where the rumor originated: as you may remember, the gf3 was first paper-launched in February 2001. No performance numbers were allowed. Rumor has it that performance was bad and that is why NVIDIA did not allow benchmarks at that time. Not that it mattered much, since it was a paper launch and by the time the cards were available, benchmarks had been published.

Saying that the gf3 had crappy performance for 6 months (pre-Detonator4) is ludicrous. The mentality of "new products should always outperform old products" is fallacious at best. You need to look at a card's theoretical peak before making such announcements. And again, it is not surprising that in some situations the gf2Ultra could beat the gf3.

Compare this to the situation with NV3x. Whereas even the idiots bashing the NV20 agreed that its performance was "fixed" after the Fall refresh and new Detonators were released (Ti500 + Detonator4), we don't see that same situation at all with NV3x.

The NV30 refresh has been released. NV35 is out and DetonatorFX is out. We have preliminary Det50 results that don't improve things much. The two situations are totally incomparable. Some other differences should be noted: NVIDIA was the market leader and performance leader with the gf2Ultra and gf3. It was staving off the r8500 with the gf3Ti500 and Det4. Compare this to the launch of the gfFX5800Ultra and realize that NVIDIA has been in ATI's shadow since ATI launched the r9700. You'd better believe that NVIDIA has been doing everything in its power to make the gfFX look competitive and to try to get it to take the crown. We have already seen what that entails: PR and driver cheats.

poursoul
09-20-03, 08:50 AM
1 of 2 things here.

1. You are referring to when (some) people were disappointed with the small bump in performance the Ti500 gave to the GF3 series.

2. You are trying to create this rumor yourself.

BTW: EVERYONE POST QUICK! Posting in this thread makes you teh l337!

Deathlike2
09-20-03, 01:27 PM
The only differences between the GF3 and the GF3 Ti are the use of a lower process (lower power requirements and less heat produced), a fix for the 3D textures bug in the GF3.. and shadow buffers (however, these were already part of the GF3 hardware and just not previously exposed in the drivers).

StealthHawk
09-20-03, 07:46 PM
Originally posted by poursoul
1 of 2 things here.

1. You are referring to when (some) people were disappointed with the small bump in performance the Ti500 gave to the GF3 series.

2. You are trying to create this rumor yourself.

BTW: EVERYONE POST QUICK! Posting in this thread makes you teh l337!

Actually the only situation that makes any sense at all is comparing the gf3 to the gf2Ultra. After ~6 months Det4 was released, which made the gf3 quite a bit faster, allowing it to edge out the gf2Ultra in more situations than it did before.

He can't really be talking about the gf3 -> gf3Ti500 because there was no driver that improved performance very much 6 months after the gf3Ti500 was released.

Greg
09-21-03, 12:26 AM
Nothing elite about me either, but I have a few comments.

The Det 40s did give some speed improvement to the GF3 and quite a bit more to the GF4s, I believe. The most significant gains were in anti-aliasing, even helping the GF4 MX, but my memory is a bit hazy on that.

The optimisations (please, no discussion on cheats and opts) were in several places.

1) The software drivers.
a) The WindowsXP driver model is way superior to the Win9x one and allowed improvements in performance.
b) The optimisation of code, making use of SSE and 3DNow.
c) Improvements in algorithm and general code optimisations to use less CPU time.
d) Better synchronisation of resource management, textures, vertices, handling of dependency situations.

2) The hardware drivers.
a) The video driver updates microcode that runs on (or configures) the GPU, allowing for optimised and varying FSAA modes.
b) Software-modified shader code and state-setup microcode that allows performance to be improved through intimate knowledge of the GPU architecture, timings and expected use.
c) Software Replacement or optimisation of common application shaders to provide the same output at lower cost.

3) DirectX and OpenGL
a) Internal and external changes to the API, and the driver interface, allowing more optimal code paths. This is a continual evolution.
b) Exposing features as they are made, or discovered, some of which improve performance. (Discovered, because not all the ways hardware will be used, are known at the time of design or early use.)

All these optimisations occurred due to the highly programmable nature of the GPU, and of course the continual revision of PC driver code.
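To make point 2c concrete, here's a purely hypothetical sketch of driver-side shader replacement: a lookup from a fingerprint of a known application shader to a hand-tuned equivalent. None of the names here come from any real driver, and this isn't how any vendor's code is actually written - it's just the shape of the idea.

import hashlib

# Hypothetical substitution table: fingerprint of an application's shader
# source -> a hand-tuned replacement that produces the same output cheaper.
HAND_TUNED = {
    # "9e107d9d372bb6826bd81d3542a419d6": "optimized shader text would go here",
}

def generic_compile(source: str) -> str:
    return source  # stand-in for the normal compiler back end

def compile_shader(source: str) -> str:
    key = hashlib.md5(source.encode()).hexdigest()
    # Recognized shader? Swap in the tuned version; otherwise compile normally.
    return HAND_TUNED.get(key, generic_compile(source))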

I would personally suggest (again, without particular inside knowledge) that the GeforceFX series have MORE potential for optimisation than earlier generations, but I'd also suggest that they are HARDER to optimise and get right.

Instead of comparing the GFFX to the GF3, I'd compare it to a PlayStation 2. The PS2 has some reasonably powered DSP-style vector processors and other processors, which statistically can do great things, but their power is only realised when they work together in a very precise way, preventing stalls or under-utilisation. So a carefully crafted PS2 game can outperform an XBox under certain conditions, but even then, after many years of effort and great improvements in performance, it only uses perhaps 30% of its potential. It is so hard to get right that it takes lots of extra, and highly skilled, programming effort to squeeze good performance out of it. The XBox on the other hand is a custom Geforce3/4-style hybrid, and the DirectX implementation allows the average programmer to realize perhaps 80-90% of its power immediately. So you end up with one system that could 'potentially be faster, and is in some situations' and another that 'proves to be faster in most situations with little effort'. What would you rather have?

So will we see significant performance increases in future Dets? I'd say definitely 'maybe'.

I'd also say that nVidia purposely made the decision that 16- and 32-bit precision be used in this way:
16-bit precision for almost all operations, 32-bit when precision is required for things like 'world space' calculations in fragment programs... a very rare, and arguably rarely necessary, thing. The truth is that 16-bit float precision is SO FAR above what existed before, and only falls apart when mixing numbers in different ranges (eg. thousands with thousandths - there's a tiny numeric sketch of this right after the list below). The obvious problems I see for nVidia at the moment (on this line of discussion) are:
a) DirectX9 doesn't easily expose the different precision options that perhaps Cg and OpenGL do.
b) Confusion, misinformation, and complexity prevent users from understanding what they are really buying, and what features they should really care about. Remember a while back when things like TruForm (ATi) and displacement mapping (Matrox) were all the rage? You weren't a DX8 card if you didn't support displacement mapping!
c) The NV3x hardware taking too much time to optimise for, when more products, with perhaps different architecture are in development.
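The range-mixing point above, in two lines - numpy's float16/float32 used here as stand-ins for the GPU formats, purely for illustration:

import numpy as np

# FP16 has only a 10-bit mantissa, so near 1000 its spacing is 0.5 and a
# tiny addend simply disappears; FP32 keeps it.
print(np.float16(1000.0) + np.float16(0.001))   # 1000.0 - the 0.001 vanishes
print(np.float32(1000.0) + np.float32(0.001))   # ~1000.001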

I've probably made a mistake or two in that rant, but don't bother correcting unless it's important to the topic. I didn't check up on the things I've said to make sure they are totally correct, but you can have some confidence in what I've said. Regulars may have noticed I haven't entered the latest NV vs ATI and HL2 debate and threads at all. Sometime I might write a thread on 'some truths about the video game industry', which will leave many people shaking their heads, fists or crying out loud.

StealthHawk
09-21-03, 02:50 AM
It is true that the gf4 has managed to gain speed here and there since its release. The gf3 has not been that lucky. I and others have not seen any raw speed improvements since 23.11, except for new games and some AF speed improvements. I have not ever seen any FSAA improvements on my gf3 (i.e., I have never seen raw speed stay the same but FSAA speed go up). gf4s have had some FSAA speed improvements, but they do not appear to be global, only in some games. I think either Quake3 or UT2003 had improved 4x FSAA speed (not sure how NVIDIA managed this and why it only seems to affect one game).

But yeah, the gf4 has seen some good improvements with new drivers. The gf3 has seen nothing since 28.90 (which improved OGL AF speed) besides speed-ups for new games; nothing global.

aapo
09-21-03, 03:48 AM
Originally posted by serAph
What was that speed increase a result of? Why can't the NV3x be affected similarly by "proper" drivers?

The problem: the NV3x are very fast cards, except that the pixel shader speeds suck.

The cure: There is no cure, because the PS paths are very much hardwired into the silicon. With the GF3 the problems were elsewhere, and there is much more flexibility in the other parts of the graphics rendering. There is no point in optimizing parts of the FX architecture other than the pixel shaders, because the other parts are already blazing fast.

In theory it would be possible to optimize the fragment program compiler (they are trying this with the Det50 series), but the R3x0 is much faster even with fully nVidia-optimized code. (Remember Dawn?) So there is hope for faster NV3x cards, but no hope for them to be faster than the R3x0.

EDIT: For elite answers, go to B3D. But be sure to understand them... ;)

serAph
09-21-03, 04:34 PM
Greg, I'm speechless.

That's the kinda answer I like - alllll info, NO opinion. What do you do for a living? It sounds like you're in the biz!

And what's your opinion on this entire situation anyway? Should I be as angry as I am (very much so) at my 5900?

~ serAph

saturnotaku
09-21-03, 06:33 PM
Originally posted by Greg
Instead of comparing the GFFX to the GF3, I'd compare it to a PlayStation 2. The PS2 has some reasonably powered DSP-style vector processors and other processors, which statistically can do great things, but their power is only realised when they work together in a very precise way, preventing stalls or under-utilisation. So a carefully crafted PS2 game can outperform an XBox under certain conditions, but even then, after many years of effort and great improvements in performance, it only uses perhaps 30% of its potential. It is so hard to get right that it takes lots of extra, and highly skilled, programming effort to squeeze good performance out of it. The XBox on the other hand is a custom Geforce3/4-style hybrid, and the DirectX implementation allows the average programmer to realize perhaps 80-90% of its power immediately. So you end up with one system that could 'potentially be faster, and is in some situations' and another that 'proves to be faster in most situations with little effort'. What would you rather have?


This is an interesting comparison and probably spot on. Speaking from a business perspective, I'd rather develop for the Xbox if what you've said above about the programmability of both consoles is true. The Xbox (theoretically) would allow me to put out a better-looking game in less time, especially if I'm developing the game in tandem with a PC port. That makes for a happy publisher and potentially more revenue for me.

You know, now that I think about it, this comparison makes even more sense and allows for an explanation of why games released on all platforms (PC, Xbox, GC and PS2) typically look the worst on the PS2. Might you agree that games would look just as good, if not better, on the PS2 if developers would take more time to understand the software's relationship with the hardware?

Greg
09-22-03, 06:04 AM
Originally posted by saturnotaku
.....Might you agree that games would look just as good, if not better, on the PS2 if developers would take more time to understand the software's relationship with the hardware?

I won't start a discussion about the PS2, but this is another strengths-and-weaknesses comparison. The PS2 could never compete with the texture size and resolution on the XBox, whereas a carefully coded PS2 could process more vertices for a high-res mesh, so long as it was lit very simply. Getting that performance from the PS2 would have cost you perhaps one man-year of work (hence the amount of middleware available for the PS2), compared to a tiny amount of tool and tweaking work to get optimal performance from the XBox.

I just thought the PS2 vs XBox was a reasonable analogy, since the manufacturers could both produce benchmarks and stats proving each was better than the other, but a game player trying lots of (non-exclusive) games on each system should be able to decide (but god help a newcomer). I also hinted at a future article describing things such as being asked to make sure a particular SKU didn't look as good on another platform...one of many, many true stories.