Creative Labs Responds

December 14, 1999



After reading a recent question-and-answer session with 3dfx's Brian Burke (Public Relations), I approached NVIDIA and Creative Labs and asked if they wanted to offer any feedback on the subject matter.

While NVIDIA declined to comment, Steve Mosher, Vice President of Graphics Business at Creative Labs, took the time to give us his thoughts on 3dfx's responses.  The original questions, along with 3dfx's responses, are repeated in this article and are followed by Steve's comments.


Question: In numerous interviews with 3dfx, we always see you say that you do not support hardware T&L because not enough games support T&L yet, and that when your next-generation card (post V4/V5) comes out you will support it.  However, with the new V4/V5 cards you also introduce your own new technology (T-Buffer/depth-of-field blur) that even fewer companies support at this time.  The past logic of a lack of game support seems not to apply to your own technology.  Can you explain why you are including new features (arguably less useful features) that currently have even less support than T&L?

3dfx Response: Valid question, but a little misquoted.  We have stated that we will support T&L in future products and that T&L is an important technology for the future.  It is not that T&L is unimportant; it is just not the most important.  We have also said that when T&L games are prevalent, the GeForce will not be the card to use because it does not have the fill rate to go with the number of triangles it can produce, making it unbalanced.  GeForce is a better product for what it stands for than for how it executes.

We feel that there are more important problems to solve immediately than T&L to make the biggest impact on the gaming experience when playing current and soon-to-be-released titles.  Fill rate is still the biggest problem with 3D graphics.  Putting triangles on the screen has never been a problem, but filling those triangles has been a problem since the inception of the 3D graphics market.  Tests have shown that it continues to be a problem with the GeForce.
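
To put this fill-rate claim in perspective, here is a rough, fill-limited budget sketch in Python; the overdraw factor of 3 is an assumption for illustration, not a measured figure.

```python
# Back-of-the-envelope fill-rate budget. Overdraw (how many times
# the average pixel is written per frame) is assumed to be 3.

def required_fill_rate(width, height, overdraw, fps):
    """Pixels per second the card must fill to sustain a target fps."""
    return width * height * overdraw * fps

print(f"{required_fill_rate(1024, 768, 3, 60) / 1e6:.0f} Mpixels/s")   # ~142
print(f"{required_fill_rate(1600, 1200, 3, 60) / 1e6:.0f} Mpixels/s")  # ~346
```

On these assumptions, high resolutions quickly consume the fill rate of any 1999-era card, which is the heart of 3dfx's argument.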

There is no double standard for the T-buffer.  There has not been a product that can touch the out-of-box experience the Voodoo5 will give the end user.  The fill rate is the highest available.  It will also provide full-scene AA in every game you play under D3D, OpenGL and Glide.  That takes power, and it is a feature that every 3D card manufacturer has been striving for.

The other cinematic effects will come later, much like T&L support.  With the Voodoo5, you get a big performance step over current technology and a huge image-quality boost immediately, plus cinematic effects to look forward to later.  With the GeForce, you get a minor performance boost over current technologies and no out-of-box impact, just the promise of something later.

Steve's Response: In this answer, I think you see the characteristic difference between 3dfx's approach of accelerating the past and NVIDIA's approach of accelerating the past AND providing a platform for the future.  Up until recently, the processor has not had a problem pushing triangles to the screen.  So if you spend your time focusing on the past, on yesterday's titles, you would surely come to the conclusion that T&L wasn't needed.

Thankfully, NVIDIA's designers saw the need for T&L before it became a huge problem.  One reason for this difference of vision is schedule.  The VSA-100 was supposed to be a product some time ago, like last July (note that it has TNT2 Ultra-like performance in a single-chip configuration: 166MHz, 333 megatexels).  In the time frame when they originally planned to launch it, T&L was not so critical.
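
The quoted single-chip figure checks out arithmetically if the VSA-100 is assumed to fill two texels per clock; a quick sketch under that assumption:

```python
clock_mhz = 166        # quoted core clock
texels_per_clock = 2   # assumption: two pipelines, one texel each

print(f"{clock_mhz * texels_per_clock} MTexels/s")  # 332, rounded to the quoted 333
```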

For those of us who have seen the titles in the pipeline with T&L support, there is no second-guessing.  There is no waffling on its importance.  If you buy a card with T&L support now, you get a great fill rate for the legacy apps, and you get great protection for the future apps.  How long are you going to own your video card?  6 months, 1 year, 18 months?  If you buy T&L today, you have the peace of mind that you have bought the fastest card you can own today, and you have the security that as more and more T&L titles ship, your experience won't suffer because your processor can't keep up.  If you wait 6 months to buy a card that doesn't support T&L, you are actually in greater danger of the card being outdated by new content.  Now wouldn't that frost you?
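
To see why a software transform pipeline can saturate a late-1990s CPU, here is a rough cost model; the 28-flop figure is an assumption covering a single 4x4 matrix transform per vertex, ignoring lighting and clipping entirely.

```python
FLOPS_PER_VERTEX = 28  # 16 multiplies + 12 adds for one 4x4 transform (assumed)

def cpu_transform_load(vertices_per_frame, fps):
    """Floating-point ops per second spent transforming vertices in software."""
    return vertices_per_frame * fps * FLOPS_PER_VERTEX

print(f"{cpu_transform_load(100_000, 60) / 1e6:.0f} Mflops")  # ~168, before any lighting
```

That is a substantial fraction of what a circa-1999 CPU can sustain, which is the headroom hardware T&L buys back.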

The other thing you see here is the division of the T-buffer effects.  In short, there is AA and there is the other stuff.  Let me address motion blur and depth of field first.  If you read 3dfx's white papers, you will see that both of these effects are described as CAMERA ARTIFACTS.  That's right, these are artifacts.

A camera produces motion blur when it's not fast enough to resolve the image in a single frame.  So why would you want to introduce artifacts (image degradation) into a game?  Would it help you aim better, see the textures on the wall better, or identify an enemy aircraft better?  No.  It will do little to enhance the illusion of reality.  It's a cinematic effect, a cinematic artifact used to convey the illusion of speed (makes me think of The Six Million Dollar Man).

If I want cinematic effects, I will rent a DVD.  In a game, the experience is not passive, it's active, and when you are actively involved in a reality you don't sense or perceive motion blur.  In fact, the presence of motion blur will clue in the brain that this "reality" in front of me is not so real after all.  As far as check boxes go, I think the motion blur off check box will get a lot of hits.

Depth of field is even worse.  Again, this is a camera artifact.  When we are watching a movie, typically one part of the scene will be in focus while other parts will be out of focus.  Naturally, the director changes the focal plane to draw the viewer's attention to the object of interest (except in Repo Man, where all the cool stuff is in the background...Plate of Shrimp!).  Now, this works fine in movies, which are passive experiences.  In life, and in games that try to create a sense of reality, or a believable reality, there is no depth of field.  You direct your attention where you want.  Point your fovea, and your brain focuses on the part of reality you are interested in.  Everything has to be in focus because ANYTHING could be the object of your attention.

I am not saying that depth of field and motion blur cannot be integrated into games.  I am saying that these artifacts come from movies, where they have a particular use and logic that is tied to that medium.  Part of the joy of playing games, as opposed to watching movies, is the amount of control you have.  No director can use the camera to force my focus to particular things.  Now, depth of field could find some use in rail shooters, like House of the Dead, where you are directed through a maze, but that genre has limited appeal on the PC.  But here too I don't think depth of field would add a lot to the inherently limited appeal and psychological depth of rail shooters.

The only T-buffer effect worth its weight in silicon is AA.  Here too you have to consider the trade-offs.  AA is not a panacea.  Removing artifacts (like motion blur and depth of field) is critical to creating a reality the user can believe in.  If I could get AA for free, with no decrease in performance, I would take it.  If I have to pay $600 and take a hit in frame rate, I will pass on it.  Since the increase in visual quality is content-dependent and subjective (some people can't see the jaggies unless you point them out), the value of AA is inherently subjective and unmeasurable.  To put it another way, in some games I will live with the jaggies on the staircase because I'm not looking at the staircase.  I'm looking at the bot who is going to blow me to smithereens, and I need every millisecond to get a round off before it does.

The bottom line is which do you find more distracting: a slower frame rate or a jaggie on the staircase you're not even looking at?  And more importantly, which will you find more compelling: a realistic opponent rendered with 10K polygons or a blocky bot rendered with motion blur?

Question: Back when 3dfx didn't support 32-bit color, you claimed that you did not do so because the performance hit was too great (which led to 3dfx's philosophy of "speed is king").  According to all early reports, using your full-scene hardware anti-aliasing will dramatically cut into the speed at which you can run games.  Is this the case?  Will there be a dramatic cut in speed when using the optional FSAA at, say, resolutions above 800x600?  If there is a cut in performance, what happened to the speed-is-king philosophy you had when you refused to even include optional 32-bit color support because of the hit?  Has 3dfx gone from "speed is king" to "image quality is king"?

3dfx Response: "Speed is king" has never been a 3dfx philosophy.  3dfx cards are designed by gamers.  We design for the ultimate gaming experience, which means balancing image quality and speed.  Gamers know that a sustained frame rate of 60fps is needed for optimum performance.  3dfx does not believe in adding features just for the sake of hype.  Without getting into the 32-bit vs. 16-bit debate again, 32-bit in the last generation of products looks real nice in screen shots, but is not useful when playing a game.  Even the GeForce-based cards that have been reviewed have trouble with that.

We have been very upfront that the T-buffer effects will mean a performance hit.  The real question concerning gamers is whether they can run their favorite titles at real-time frame rates, like 60fps, with features turned on.  So with the Voodoo5, applications will run at real-time frame rates at 32bpp with FSAA turned on, and they will run even faster with FSAA turned off.  We will also have a check box in the properties page so users can run with no AA, two-pass AA or four-pass AA.
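
Assuming each AA "pass" is a full supersample of the frame (an assumption; 3dfx does not detail the mechanism here) and a two-chip board with roughly 667 megapixels/s of fill, the fill-limited ceiling for each setting works out like this:

```python
def fill_limited_fps(fill_rate_mpix, width, height, overdraw, aa_passes):
    """Upper bound on frame rate when fill rate is the only limit."""
    pixels_per_frame = width * height * overdraw * aa_passes
    return fill_rate_mpix * 1e6 / pixels_per_frame

for passes in (1, 2, 4):
    print(f"{passes}-pass AA: ~{fill_limited_fps(667, 800, 600, 3, passes):.0f} fps")
# 1-pass: ~463, 2-pass: ~232, 4-pass: ~116 -- ceilings only; real rates are lower
```

Even the 4-pass ceiling clears 60fps at 800x600 on these assumptions, which is consistent with 3dfx's claim; the cost is that the ceiling drops fourfold.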

Steve's Response: I seem to recall 3dfx saying that speed was king and that 16, err...22-bit was all you needed, but I will leave it up to the journalists to hold them accountable.  The problem with much of this is that it's not factually accurate.  Let's start with the frame rate nonsense.  There is no such thing as an optimal frame rate.  The frame rate required by a simulation is a function of many different things: the human visual system, the visual content and the "gain" of the task being performed.  Depending on the spectra of the visual stimulus, the human visual system can detect flicker at various frequencies.

I refer everyone interested to study the work done on the critical flicker frequency (CFF).  For some visual conditions and for some human subjects, frame rates below 60 do not impede performance.  For other individuals in other lighting conditions, more than 60 is required.  Equally important is the task being performed.  My favorite example is an oil tanker versus a jet aircraft.  Flying a jet is a high-gain task.  You must have fast system response.  Steering an oil tanker is totally different.  Now, most games approach the jet aircraft side of things rather than the oil tanker, but I use the example to make a point about the overly simplified claim that "60 hertz is optimal."

Theoretically, some games could require less, others could require more.  Long ago and far away, in Voodoo 1 time, I recall seeing a mass of 3dfx marketing materials that claimed 640x480 at 30Hz was the sweet spot for games.  However, I wouldn't take this as an argument for slower frame rates.  My point is that different genres are going to have different requirements.  There are definitely diminishing returns for going faster than 60fps.
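
The diminishing-returns point is easiest to see in frame time rather than frame rate:

```python
for fps in (30, 60, 100, 120):
    print(f"{fps:3d} fps = {1000 / fps:4.1f} ms per frame")
# Going from 30 to 60 fps saves 16.7 ms per frame; 60 to 120 saves only 8.3 ms.
```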

I just object to the oversimplification and dumbing down of the discussion.  I think gamers are sophisticated enough to realize that Quake needs a higher frame rate than Deer Hunter.  I think they are sophisticated enough to understand that 32-bit is better than 22-bit, that sometimes image quality is more important than speed, that sometimes speed is more important than image quality, and that sometimes the two need to be traded off.  I think when they see the realism created by 100K polygons in a scene, they understand that a real-time or near-real-time Myst is right around the corner.  Moreover, they understand that no amount of fill power can create this kind of complex reality.

It's all about balance.  With the GeForce there is no sacrifice: you get fill AND T&L.  It's not a one-dimensional product, because games are not one-dimensional.  I play EverQuest and Quake.  I'm happy with the frame rates and resolutions I get in Quake today.  Speed me up 10fps or bump me to 1600x1200 and I won't be significantly happier.  But quadruple the polygons in EverQuest, bring that world of wonder a notch closer to psychological realism, and I will be a lot happier.  3dfx has to say that all games need fill and no games need T&L because all they have is fill -- at least when you buy 4 chips.  The truth is, some games could use more fill, others could use more T&L.  All games are not created equal, and to accelerate the broad spectrum of content, of today's and tomorrow's, you need a GPU, not a rasterizer.
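
A quick sketch of why polygon counts shift the bottleneck (overdraw of 3 again assumed for illustration): as triangle counts climb, each triangle covers fewer pixels, so per-triangle transform and setup cost grows relative to per-pixel fill cost.

```python
def pixels_per_triangle(width, height, overdraw, triangles):
    """Average screen pixels covered by each triangle in a frame."""
    return width * height * overdraw / triangles

for tris in (10_000, 100_000):
    print(f"{tris:7,d} triangles: ~{pixels_per_triangle(1024, 768, 3, tris):.0f} pixels each")
# 10,000 triangles: ~236 pixels each; 100,000: ~24 pixels each
```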


End of Interview


Last Updated on December 14, 1999

All trademarks used are properties of their respective owners.