NVIDIA claims free FSAA on GeForce FX


AGP64
01-31-03, 05:03 AM
What's more, the GeForce FX's innovative new architecture includes an advanced and completely transparent form of lossless depth Z-buffer and color compression technology. The result? Essentially all modes of antialiasing are available at all resolutions without any performance hit. Greatly improved image quality, with no drop in frame rate!

Source: http://www.nvidia.com/view.asp?IO=feature_intellisample

Thanks to Someotherdude on rage3d for pointing this out.

Clearly all reviews claim otherwise.

Hanners
01-31-03, 05:08 AM
Maybe by 'transparent form of Antialiasing', they mean you can't actually spot that it's doing anything? :p

Humus
01-31-03, 05:35 AM
LOL :D

ZoinKs!
01-31-03, 05:43 AM
What this has me wondering...

Did they deliberately misstate the issue? Was it a mistake from marketing not understanding what's going on? Or did they really expect the FX to provide free antialiasing, only to fail to meet this goal?

volt
01-31-03, 06:37 AM
The "free" AA had been rumored ever since we first heard about the new chip. Then NVIDIA stated it officially at the conference.

Uttar
01-31-03, 10:35 AM
4x AA is free in a lot of games.
If you enable VSync :P

Damn, I love marketing BS :D


Uttar

DaveW
01-31-03, 12:02 PM
Maybe it's just something the drivers don't support yet. I hope that's all it is.

ReDeeMeR
01-31-03, 12:05 PM
Shows how desperate they are.

Chalnoth
01-31-03, 12:09 PM
Originally posted by ZoinKs!
What this has me wondering...

Did they deliberately misstate the issue? Was it a mistake from marketing not understanding what's going on? Or did they really expect the FX to provide free antialiasing, only to fail to meet this goal?
It was probably a term a tech person gave to a marketing person that was misunderstood or taken out of context.

In a perfectly ideal case, where the FX has plenty of memory to spare (e.g. one quad drawn over the entire screen), antialiasing should take almost no performance hit. The only hit may come from memory granularity (writing only one out of every two or four pixels). Actually, the 3DMark fillrate tests are reasonably close to this, so it may be a good idea to look at the FSAA hit in those.
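
(A rough back-of-envelope of that ideal case, in Python. The edge/interior split is an assumption for illustration: with lossless color compression, only pixels whose samples actually differ, roughly the quad's edges, should pay for the extra samples. None of these numbers are measured on real hardware.)

[code]
# Ideal case from the post above: one quad drawn over the entire screen,
# 4x AA with lossless color compression. Interior pixels have four
# identical samples (assumed to compress perfectly); only edge pixels
# carry distinct samples.
width, height = 1600, 1200

total_pixels = width * height
edge_pixels = 2 * (width + height)   # rough perimeter estimate

print(f"pixels paying for extra samples: {edge_pixels / total_pixels:.2%}")
# ~0.29% -- which is why AA could be nearly free in this contrived case
[/code]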

gokickrocks
01-31-03, 02:11 PM
1 full black screen w/ 8xS = 0 performance hit :D

Uttar
01-31-03, 02:52 PM
More seriously...

I believe the GFFX should be able to give nearly free 4X AA under the following conditions:
*Huge* VS/PS programs, 8X Aniso

Problem is, no freaking game will use "huge" shading programs in the foreseeable future. Oh well!


Uttar

YeuEmMaiMai
01-31-03, 03:16 PM
nah,

WRONG ANSWER

Please try again. Fact is, their design leaves a lot to be desired.

Originally posted by Chalnoth
It was probably a term a tech person gave to a marketing person that was misunderstood or taken out of context.

In a perfectly ideal case, where the FX has plenty of memory to spare (e.g. one quad drawn over the entire screen), antialiasing should take almost no performance hit. The only hit may come from memory granularity (writing only one out of every two or four pixels). Actually, the 3DMark fillrate tests are reasonably close to this, so it may be a good idea to look at the FSAA hit in those.

Chalnoth
01-31-03, 03:43 PM
Originally posted by YeuEmMaiMai
nah,

WRONG ANSWER

Please try again. Fact is, their design leaves a lot to be desired.
Do you have any reason why? Or do you just not like it?

YeuEmMaiMai
01-31-03, 03:52 PM
Simple:

Why would Nvidia release the card knowing that 128MB kills FSAA performance at high res? In that case they would have equipped the card with 256MB from the get-go...

Advantages:

1. More RAM than the R300
2. FSAA NOT impacted

Those are two very powerful marketing points.

SavagePaladin
01-31-03, 03:59 PM
Because that would cost more AND produce more heat?

YeuEmMaiMai
01-31-03, 04:06 PM
Given the situation the NV30 is currently in, I doubt the added heat would outweigh getting free FSAA.

Make the board 1" longer with a slightly larger heat pipe... not a big deal considering the benefit gained.

Filburt
01-31-03, 07:03 PM
Uh... Yem, I don't really follow your argument on this one. What kills their FSAA performance is the memory bus being too narrow, and thus too little memory bandwidth; the 128MB frame buffer is large enough for FSAA operations. I've seen some odd reasons for bashing the NV30 design so far... but that one is one of the strangest. If anything, had they equipped it with 256MB at a higher price, I would have thought even less of it.

Anyhow, if I were nVidia I wouldn't be concerned about the NV30 vs. the R300 or even the R350. I'd be far more concerned about the NV31/34 vs. the R300, because the mainstream market is where the money is made.

Chalnoth
01-31-03, 07:35 PM
Originally posted by YeuEmMaiMai
simple,

Why would Nvidia release the card knowing that 128MB kills FSAA performance at high res? In that case they would have equipped the card with 256MB from the get-go...
The high-density DDR2 memories required for 256MB are, apparently, not yet available.

Chalnoth
01-31-03, 07:38 PM
Originally posted by Filburt
Uh... Yem, I don't really follow your argument on this one. What kills their FSAA performance is the memory bus being too narrow, and thus too little memory bandwidth; the 128MB frame buffer is large enough for FSAA operations.
Try this little calculation on for size:

1600x1200 resolution
x4 bytes per pixel
x4 sample FSAA
x3 for double-buffering + z-buffer

...makes about 90MB (well, 88MB to be precise). Since it appears that the GeForce FX does not downsample until scanout, this can definitely have a significant impact on modern games that easily use more than 40MB of textures (particularly UT2k3).
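
(Chalnoth's arithmetic, reproduced as a quick Python check. The only assumption carried over from the post is three full-size surfaces, since the FX appears not to downsample until scanout.)

[code]
# 1600x1200, 4 bytes/pixel, 4x FSAA, three full-size surfaces
# (front buffer, back buffer, Z-buffer).
width, height = 1600, 1200
bytes_per_pixel = 4
samples = 4
surfaces = 3   # double-buffering + Z-buffer

total = width * height * bytes_per_pixel * samples * surfaces
print(total / 2**20, "MB")   # ~87.9 MB of a 128 MB card, before any textures
[/code]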

noko
01-31-03, 07:54 PM
Doesn't 4:1 color compression and Z-buffer compression lower that somewhat?

Joe DeFuria
01-31-03, 08:01 PM
No. Color and Z compression only impact bandwidth. The GeForce (and the Radeon, for that matter) must reserve the entire space in memory as if it were uncompressed.

The reason is that they use lossless compression, and therefore cannot guarantee any compression at all.

Chalnoth
01-31-03, 08:04 PM
Right, and while it may be feasible to guarantee some level of compression on average over the entire scene, it is impossible (with current techniques) to guarantee that level of compression locally. Since variable-size framebuffers are unfeasible, the possibility of a local area being incompressible rules out reducing the framebuffer size...
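
(A minimal sketch of that point. The tile size and compression ratios here are invented for illustration: allocation has to assume the worst case for every tile, so compression can shrink bandwidth but never the footprint.)

[code]
TILE_BYTES = 256   # hypothetical uncompressed tile size

def reserved_bytes(num_tiles):
    # Memory footprint: every tile's slot is reserved at full size,
    # because any tile may turn out to be incompressible.
    return num_tiles * TILE_BYTES

def transferred_bytes(tile_ratios):
    # Bandwidth: each tile costs only what it actually compressed to.
    return sum(TILE_BYTES // r for r in tile_ratios)

ratios = [4, 4, 4, 1]                 # three tiles compress 4:1, one doesn't
print(reserved_bytes(len(ratios)))    # 1024 -- footprint unchanged
print(transferred_bytes(ratios))      # 448  -- only bandwidth shrinks
[/code]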

Filburt
01-31-03, 08:17 PM
Originally posted by Chalnoth
Try this little calculation on for size:

1600x1200 resolution
x4 bytes per pixel
x4 sample FSAA
x3 for double-buffering + z-buffer

...makes about 90MB (well, 88MB to be precise). Since it appears that the GeForce FX does not downsample until scanout, this can definitely have a significant impact on modern games that easily use more than 40MB of textures (particularly UT2k3).

Eh, my bad I guess! :p

noko
01-31-03, 08:17 PM
I know ATI has that limitation and must reserve the whole space, but I'm not sure about Nvidia's method. If you remember the Amiga days, HAM mode could hold 12 bits of information with only 7 bits of data, or 18 bits of information with 9 bits, with a turn-on bit for the adjacent pixel; that was a consistent compression method. Nvidia claims a constant 4:1 compression ratio, not up to 4:1. If true, that would mean the frame buffer would indeed be smaller.
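
(For contrast, a toy hold-and-modify decoder in the spirit of the HAM example; the palette and bit layout are invented here, not the actual Amiga format. The constant rate is only achievable because each pixel's reachable colors are constrained by its neighbor, a loophole a lossless framebuffer compressor doesn't have.)

[code]
# Toy HAM-style decoder: each code is (ctrl, val) -- 2 control bits plus
# 4 value bits -- describing a 12-bit color in 6 bits at a constant rate.
palette = [(0, 0, 0), (15, 15, 15), (15, 0, 0), (0, 0, 15)]   # hypothetical

def decode(codes):
    prev = (0, 0, 0)
    out = []
    for ctrl, val in codes:
        if ctrl == 0:
            prev = palette[val % len(palette)]    # set from palette
        elif ctrl == 1:
            prev = (val, prev[1], prev[2])        # hold G,B; modify R
        elif ctrl == 2:
            prev = (prev[0], val, prev[2])        # hold R,B; modify G
        else:
            prev = (prev[0], prev[1], val)        # hold R,G; modify B
        out.append(prev)
    return out

print(decode([(0, 1), (3, 7), (1, 3)]))   # [(15,15,15), (15,15,7), (3,15,7)]
[/code]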

Bigus Dickus
02-01-03, 12:42 AM
Originally posted by noko
I know ATI has that limitation and must reserve the whole space, but I'm not sure about Nvidia's method. If you remember the Amiga days, HAM mode could hold 12 bits of information with only 7 bits of data, or 18 bits of information with 9 bits, with a turn-on bit for the adjacent pixel; that was a consistent compression method. Nvidia claims a constant 4:1 compression ratio, not up to 4:1. If true, that would mean the frame buffer would indeed be smaller.

You simply can't guarantee a consistent compression ratio for all data cases. For any compression algorithm, there are certain data sets that will result in no compression, or even expansion of the original data.

Since you can't guarantee that you won't run into a scene containing such a data set, you must reserve the maximum possible. Actually, I've often wondered whether it could be shown that the data sets yielding little or no compression are so far outside the realm of probability that, for all practical purposes, you could guarantee a compression ratio. Dunno.
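
(The counting behind that claim, as a quick check; no specific codec is assumed.)

[code]
# Pigeonhole: there are 2**n distinct n-bit tiles but only 2**n - 1 bit
# strings shorter than n bits, so any lossless code that shortens some
# input must leave another at full length or longer.
n = 8
inputs = 2 ** n                                   # 256 possible tiles
shorter_outputs = sum(2 ** k for k in range(n))   # 255 shorter codewords

print(inputs, shorter_outputs)   # 256 > 255
[/code]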