Old 09-25-02, 03:23 AM   #25
Nv40
Agent-Fx
 
Nv40's Avatar
 
Join Date: Aug 2002
Location: everywhere
Posts: 2,216
Default

Quote:
Originally posted by Bigus Dickus
A picture is worth a thousand words. Show me a pair of pictures where one was rendered at 96 bit precision, and the other at 128 bit precision, and show me the visual difference. Go ahead... I'm waiting. I don't think Pixar even renders movies at 128 bit precision, but 96 and downsamples to 64 bit for frame storage after rendering.

That table from tech-report is from August 9th, and is simply wrong. NVIDIA supplied their "best guess" as to the R300's capabilities, and it turned out that the R300 was much more powerful/flexible than NVIDIA had believed. Why don't you find some source from, say, the last month or so?
The table from Tech Report is not wrong. The info posted there for the Radeon 9700 and NV30 comes directly from ATI and NVIDIA respectively. The author, who did a terrific, professional, and unbiased review, asked ATI and NVIDIA directly, and the spec table outlines what each company told him. That's much better info than what gets posted at Beyondfans3d, where everyone knows which company most of the members and reviewers are biased toward..

A quote from Tech Report:
Quote:
As I said before, I've read up on both chips and talked to folks from both NVIDIA and ATI in an attempt to understand the similarities and differences between these chip designs. Both designs look very good, and somewhat to my surprise, I've found very few weaknesses in the ATI design, despite the fact it's hitting the market well before NVIDIA's chip. There are some differences between the chips, however, and they point to different approaches taken by the two companies. Most of my attention here is focused on the pixel pipeline, and the pixel shaders in particular, because that's where the key differences seem to be


Quote:
A picture is worth a thousand words. Show me a pair of pictures where one was rendered at 96 bit precision, and the other at 128 bit precision, and show me the visual difference.

I agree, a picture is worth a million words, but you will need to wait for the NV30 demos to see what kind of quality and precision you can create with its more capable CineFX pixel shader technology.

As I said, ATI's demos were 64-bit! Very weird, right?
Probably because they were short on time when designing the demos, or because the Radeon 9700 only supports 64 bits and not the 96 bits they claim, or because the card doesn't have enough power to push a real-time demo beyond 64 bits. The last one seems to be what really happened, because it makes no sense to advertise a card with 96-bit color and then show 64-bit demos.

But just for reference, let's see what ATI has done in their 64-bit demos:

http://www.tech-report.com/etc/2002q...s/index.x?pg=3

And now what NVIDIA claims the NV30 can do:

http://www.tech-report.com/etc/2002q...s/index.x?pg=6

Wow! Just say it... impressive? Hehe. I tell you, if NVIDIA backs its claims up and demonstrates them on real hardware, showing a demo with that kind of quality, there won't be a single person in the professional 3D industry, whether CAD engineers, animators, or game developers, who won't RUN to buy an NV30. Hehe, and even gamers won't resist the NV30, just to have the most powerful video card on the planet.

If the NV30 is much, much faster than the Radeon 9700 Pro in DirectX 9 (which I believe) and has the kind of quality NVIDIA is claiming, I have no doubt that it will be possible to see true cinematic quality, in real time, for the first time in the computer industry!


Pixar uses 64 bits in their productions, but movies are very different from games. The fact is that cinematic quality in real time needs more precision, more colors, and more accuracy than movies, which are not real time, see? They are prerendered shots. That's why John Carmack asked for 64 bits of color, see?

A still image at 32 bits can look as good as one at 128 bits. Compare this 32-bit prerendered picture on your Radeon 2

http://www.insidecg.com/feature.php?id=105&page=4

with the sports car demo ATI made on the Radeon 9700 at 64 bits.

64 bits is more than enough for Pixar's prerendered movie shots, because the pictures you see in the movie are not real time, see? The quality of Hollywood computer graphics movies is hand-tweaked with paint programs like Photoshop or post-processing programs like Shake. You would be amazed at the poor quality and graphics errors the original computer-rendered shots sometimes have compared with the final shots you see in the movie, see?

But for cinematic-quality games or cinematic real-time demos, you will need no less than the precision of NVIDIA's CineFX pixel shaders and vertex shaders. Hopefully ATI will do something in the future (R350?) to match the NV30's CineFX quality and its possibly higher performance.

i predict that Nv30 will be able to show Final fantasy as close as 9/10 of the real thing in real time which will be impressive ,
not because of image quality because the Nv30 has no loss in image quality but because the huge performance needed to do that..
but surely an in 10/10 in the future with a more powerfull Gpu ,like Nv35!

Like the interview with NVIDIA's CEO, where the interviewer asked: how much difference will we see between NV30 cinematic quality and what we have today?

-> It will be the difference between what we see in movies and what we see in games.

Last edited by Nv40; 09-25-02 at 03:32 AM.
Nv40 is offline   Reply With Quote
Old 09-25-02, 10:29 AM   #26
jbirney
Registered User
 
jbirney's Avatar
 
Join Date: Jul 2002
Posts: 1,430
Default

Quote:
I predict the NV30 will be able to show Final Fantasy at something like 9/10 of the real thing in real time, which will be impressive
You really don't have a clue how complicated movie scenes are if you make a statement like that. Taken from:

http://arstechnica.com/wankerdesk/01...terview-1.html

Here are some of the stats on the FF movie:

Number of Sequences = 36
Number of Shots = 1,336
Number of Layers = 24,606
Number of Final Renders (assuming everything was rendered once) = 2,989,318
Number of Frames in the Movie = 149,246
Average number of shots per sequence = 37.11
Average number of rendered layers per shot = 18.42
Average number of frames per shot = 111.71
Estimated average number of render revisions = 5
Estimated average render time per frame = 90 min
Shot with the most layers = (498 layers)
Shot with the most frames = (1899 frames)
Shot with the most renders [layers * frames] = (60160 renders)
Sequence with the most shots = (162 shots)
Sequence with the most layers = AIR (4353 layers)
Sequence with the most frames = (13576 frames)
Using the raw data (not the averages) it all adds up to 934,162 days of render time on one processor. Keep in mind that we had a render farm with approximately 1,200 procs.
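
For scale, here's a quick back-of-the-envelope sketch in Python (using only the estimated averages quoted above, so treat the totals as approximate) of how those figures combine into the 934,162-day number:

Code:
# Rough check of the render-time figure above, assuming every layer of
# every shot was rendered ~5 times at ~90 minutes each (the estimated
# averages from the interview).
final_renders = 2_989_318        # layers * frames, each rendered once
revisions = 5                    # estimated average render revisions
minutes_per_render = 90          # estimated average render time per frame

total_minutes = final_renders * revisions * minutes_per_render
single_cpu_days = total_minutes / (60 * 24)
farm_days = single_cpu_days / 1200           # ~1,200-processor render farm

print(round(single_cpu_days))    # ~934,162 days on one processor
print(round(farm_days))          # still ~778 days of wall-clock time on the farm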

And this is going to run even close to real time on the nV30? Rrrriiiiigggghhhhtttttt
jbirney is offline   Reply With Quote
Old 09-25-02, 11:16 AM   #27
Uttar
Registered User
 
Uttar's Avatar
 
Join Date: Aug 2002
Posts: 1,354
Send a message via AIM to Uttar Send a message via Yahoo to Uttar
Default

For once, I really gotta agree with you, jbirney. Final Fantasy in real time isn't happening tomorrow.

But I personally think that if you hand-tweaked what gets rendered to play to what the NV30 is much faster at, maybe we'd get real time at something like 3/10 or more. But then again, I wouldn't want to bet on that.


Uttar
Uttar is offline   Reply With Quote
Old 09-25-02, 12:51 PM   #28
SnakeEyes
Registered User
 
SnakeEyes's Avatar
 
Join Date: Jul 2002
Location: Lake Zurich, IL
Posts: 502
Send a message via ICQ to SnakeEyes
Question

What's yer point, jb? My RagePro from a couple years ago could render something that simple. Not.

Seriously, thanks for the stats. It's interesting to see what's involved with production-quality movie rendering right now, isn't it?
__________________
Snake-Eyes
SnakeEyes is offline   Reply With Quote
Old 09-25-02, 01:31 PM   #29
jbirney
Registered User
 
jbirney's Avatar
 
Join Date: Jul 2002
Posts: 1,430
Default

Yea, sorry to come off like an A$$, but some people are going to take every shred of PR babble and claim it's the second coming... jeez

The NV30 will be a great card and pave the way for many great things to come. But it's also not going to do this overnight. So yea, I am going to get one, but I am going to get one because of what it can do for me today and not because of any CineFX features. After seeing what the R9700 can do, it will be an incremental upgrade. Nothing to get in a tizzy about....


BTW, I can find a few more specs if you want. I wish we could have translated those into more PC-like specs (like number of polys, fill rate, texture rate, etc.).
jbirney is offline   Reply With Quote
Old 09-25-02, 01:38 PM   #30
Uttar
Registered User
 
Uttar's Avatar
 
Join Date: Aug 2002
Posts: 1,354
Send a message via AIM to Uttar Send a message via Yahoo to Uttar
Default

Just found something interesting.

No website (including your Beyond3D review, Bigus Dickus) seems to say the R300 supports dynamic branching.

I'd love to explain why dynamic branching is a lot more useful than static branching (which is what nVidia claims the R300 has), but I'd make a complete fool of myself if the R300 did in fact have it too.

So, anyone got a spec of the R300 showing dynamic branching or the same branching power as the NV30?

I'm not talking about loops or loop counts here - I'm talking about branching. Can't seem to find much info on it for the R300.


Uttar
Uttar is offline   Reply With Quote
Old 09-25-02, 02:45 PM   #31
Bigus Dickus
GF7 FX Ti 12800 SE Ultra
 
Join Date: Jul 2002
Posts: 651
Default

Quote:
Originally posted by Nv40


The table from Tech Report is not wrong. The info posted there for the Radeon 9700 and NV30 comes directly from ATI and NVIDIA respectively. The author, who did a terrific, professional, and unbiased review, asked ATI and NVIDIA directly, and the spec table outlines what each company told him. That's much better info than what gets posted at Beyondfans3d, where everyone knows which company most of the members and reviewers are biased toward..
Um... no. The NVIDIA numbers in that table were from the CineFX paper, which also included the "R300" numbers. Those "R300" numbers NVIDIA used in the CineFX papers were simply the DX9 requirements, which, it was later found, ATI had surpassed just as NVIDIA had. The numbers are wrong, period.

Quote:
As I said, ATI's demos were 64-bit! Very weird, right? Probably because they were short on time when designing the demos, or because the Radeon 9700 only supports 64 bits and not the 96 bits they claim, or because the card doesn't have enough power to push a real-time demo beyond 64 bits
Or, probably, because there is little if any visual difference in final image quality between 64 bit internal precision and 96/128 bit internal precision for the number of shader ops used in that car demo? Another reason why the 128 bit vs. 96 bit argument is rather pointless. Little if any visible difference, at least for any application that will be run during the life of the cards. Besides, I believe that the texture read stage where the 9700 does use 128 bit is a key point in the process where accuracy degrades.
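
If anyone wants a feel for how large that difference actually is, here's a minimal numpy sketch. It's nothing like the real car demo shader, just a short chain of blend-style ops run at 16 bits per channel ("64-bit" color) and again at 32 bits per channel (standing in for 96/128-bit, since numpy has no 24-bit float), with both results quantized down to the 8 bits per channel a monitor can actually show:

Code:
import numpy as np

rng = np.random.default_rng(0)
color = rng.random(100_000)              # arbitrary pixel channel values
lights = rng.random((8, 100_000))        # inputs for ~8 "shader ops"

def shade(c, dtype):
    c = c.astype(dtype)
    for l in lights:                     # a short chain of multiply-adds
        c = c * dtype(0.9) + l.astype(dtype) * dtype(0.1)
    return c

lo = shade(color, np.float16)            # 16 bits per channel
hi = shade(color, np.float32)            # 32 bits per channel

print("max difference before output:", np.abs(lo.astype(np.float32) - hi).max())
print("8-bit outputs that differ:",
      100 * np.mean(np.round(lo.astype(np.float32) * 255) != np.round(hi * 255)), "%")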

Quote:
But just for reference, let's see what ATI has done in their 64-bit demos:

http://www.tech-report.com/etc/2002q...s/index.x?pg=3

And now what NVIDIA claims the NV30 can do:

http://www.tech-report.com/etc/2002q...s/index.x?pg=6
So you're taking a screenshot that was actually rendered on one piece of hardware and comparing it against an offline-rendered image that NV claims they might come close to... if the NV30 ever shows up, that is.

Quote:
pixar use 64bits in their productions , but movies are very diferent than Games .the fact is that cinematic quality ->in realtime needs more precision more colors and more acurracy than Movies ,
which are not ->real time!! see ? they are prerendered shots ,
thats why john carmack asked for 64bits of colors.. see ?
OMG, I don't even know how to address that statement. How about this: when a movie is viewed, the frames are presented in real time to the viewer, regardless of whether the calculation of each frame was done in the interval occupied by each frame or not. No, that's probably too complex for you. How about this: games and movies alike present the viewer with a sequence of frames, and (assuming the framerate is the same) the depth of color precision will have an identical effect as perceived by the viewer for both. No, that's probably too complex as well.

How about this: yOu R teh retarted... DUH!!!

Quote:
the pictures you see in the movie are not real time, see?
Yes, uOy R teh retetarted. When is the last time you watched a movie frame by frame... slowly? How about a game? It's the SAME DAMN CONCEPT.

Quote:
But for cinematic-quality games or cinematic real-time demos, you will need no less than the precision of NVIDIA's CineFX pixel shaders and vertex shaders.
Says who? Oh, you're an expert on 3D rendering techniques... right? Please. Try to wrap your head around this: if the original movie scene was rendered by Pixar at 64-bit precision, then it will look identical when rendered by a Turing-equivalent 64-bit-capable graphics card. IDENTICAL.
Bigus Dickus is offline   Reply With Quote
Old 09-25-02, 03:45 PM   #32
SnakeEyes
Registered User
 
SnakeEyes's Avatar
 
Join Date: Jul 2002
Location: Lake Zurich, IL
Posts: 502
Send a message via ICQ to SnakeEyes
Post

I'm not even sure that the final output is going to be too different, since (as someone else pointed out earlier) the output medium / devices can't really handle that much color (er, spectrum) anyway. The internal processing is where the number of bits used is important, just to maintain accuracy as blends and such occur.

Also, too high an internal bit depth is just wasted bandwidth, imo. When the output pixel has to be fit into the more limited range available on, say, a computer monitor, going beyond a certain color depth internally won't produce any difference in the final adjusted output. E.g., 0.898 calculated internally vs. 0.878 internally, when the output device works in tenths, results in the output for both becoming 0.9. Maybe a dumb example, but I think the idea comes across (or I hope so, anyhow).

I'm not necessarily agreeing that 128-bit color isn't useful or effective, because I don't really know the true limits of output devices or the current internal operations of the GPUs in question. Without having to explain it all again, since it's been discussed time and time again, basically the concern is rounding errors from bit-depth conversions that accumulate as more passes are made through the pipeline during processing.
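
Since that accumulation effect keeps coming up, here's a toy Python sketch of it (it assumes nothing about either chip's actual pipeline): the same chain of blend passes is run once keeping full precision between passes and once rounding to 8 bits per channel after every pass, and only at the end is each result quantized for the display:

Code:
import numpy as np

rng = np.random.default_rng(1)
result_hi = rng.random(100_000)          # starting channel values
result_lo = result_hi.copy()
layers = rng.random((16, 100_000))       # 16 blend passes

def to_8bit(x):
    return np.round(x * 255) / 255       # round to the 8-bit output range

for layer in layers:
    result_hi = 0.5 * result_hi + 0.5 * layer                # keep precision
    result_lo = to_8bit(0.5 * result_lo + 0.5 * layer)       # round each pass

out_hi = np.round(result_hi * 255)
out_lo = np.round(result_lo * 255)
print("final 8-bit outputs that differ:", 100 * np.mean(out_hi != out_lo), "%")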
__________________
Snake-Eyes

Last edited by SnakeEyes; 09-25-02 at 03:51 PM.
SnakeEyes is offline   Reply With Quote

Old 09-25-02, 03:53 PM   #33
Nv40
Agent-Fx
 
Nv40's Avatar
 
Join Date: Aug 2002
Location: everywhere
Posts: 2,216
Default

To Mr. Biggus
----------------
The table in Tech Report, as the author says, comes directly from what NVIDIA and ATI told him. What part of this don't you understand? It's not as if the author were so stupid that he only asked NVIDIA what the Radeon 9700 really is. Stay with the Beyondfans3d gurus, where they claim the NV30 has a 128-bit bus and will be delayed until February just because they say so..
The fact is you don't like the superiority of NV30 CineFX, but that doesn't change the truth... period


Quote:
Originally posted by jbirney


You really don't have a clue how complicated movie scenes are if you make a statement like that. Taken from:

http://arstechnica.com/wankerdesk/01...terview-1.html

Here are some of the stats on the FF movie:

Number of Sequences = 36
Number of Shots = 1,336
Number of Layers = 24,606
Number of Final Renders (assuming everything was rendered once) = 2,989,318
Number of Frames in the Movie = 149,246
Average number of shots per sequence = 37.11
Average number of rendered layers per shot = 18.42
Average number of frames per shot = 111.71
Estimated average number of render revisions = 5
Estimated average render time per frame = 90 min
Shot with the most layers = (498 layers)
Shot with the most frames = (1899 frames)
Shot with the most renders [layers * frames] = (60160 renders)
Sequence with the most shots = (162 shots)
Sequence with the most layers = AIR (4353 layers)
Sequence with the most frames = (13576 frames)
Using the raw data (not the averages) it all adds up to 934,162 days of render time on one processor. Keep in mind that we had a render farm with approximately 1,200 procs.
And this is going to run even close to real time on the nV30? Rrrriiiiigggghhhhtttttt

CPUs are hundreds of times slower at computing graphics than video cards, and with the NV30 it could be thousands of times. Have you ever used your computer for anything other than gaming? Something like 3D graphics? With Maya (a 3D rendering package) I can preview some scenes in real time on my GeForce4 at nearly the same quality (sometimes better!) as the final SOFTWARE-RENDERED image, in much, much less time: 20 frames per second (GeForce4) vs. 1 frame per minute (Athlon XP 1900+). That's a huge difference...

(When heavy reflections are not used) it's funny to see in some scenes how great my scenes look on my GeForce4 in hardware rendering mode in real time (24 images per second) vs. software rendering on my Athlon XP 1900+, which takes minutes to render just ONE IMAGE!

Render farms costing millions of dollars in CPUs vs. just one video card? Sounds crazy, right? But this is something that is going to happen in a couple more years. CPU software rendering will be obsolete for graphics effects; video cards alone will do it.

You missed my point. I'm not saying the NV30 will do the full 1h 30min movie in real time; that would require a lot more than one computer + video card, plus millions of hours of NVIDIA Cg programmers recreating the movie's shaders on the NV30.

What I'm saying is that the NV30 will be able to recreate very short cut scenes from the movie Final Fantasy: The Spirits Within in real time (like they have already done on a GeForce4, see?), like ATI has done with the Lord of the Rings version shown on the Radeon 9700, but rather than 1/10 the quality of the movie, it will be 9/10 on the NV30, thanks to its more powerful pixel shaders, color accuracy, and most importantly performance.

The NV30 and Radeon 9700 have enough precision to render a still shot of Toy Story 2 (64 bits), see? The color precision IS THERE; what is not there is the performance needed to render a 2+ million polygon scene with that kind of quality on just one computer with one video card, see?

NV30 CineFX is well known to have even more pixel shader precision, enough to render ANYTHING! Read that: any still shot from any CG movie ever made that you can name, see? What is not there is the performance to move a big 10/10 scene from a movie like Final Fantasy, at least not in real time, which is 24 fps..

To move a 10/10 scene from a CG movie like Final Fantasy in real time, we are as close as the end of 2003. (Judging by JC's comments) it should be possible by the end of next year with the latest CineFX-class hardware available from NVIDIA and probably ATI, which will be the NV35 or R350 in multi-chip configurations for professional boards.. see?

Take your time to read this article and see for yourself how close the video card industry is to being at the level of the latest Hollywood CG movies.

http://www.tech-report.com/etc/2002q...s/index.x?pg=1

Quote:
So ATI's R300 and NVIDIA's NV30 will comprise the first generation of dedicated graphics chips capable of cinematic quality shading. They won't be capable of rendering all of the best effects seen in recent movies with all of the detail in each scene in real time, but they should be able to deliver some exceptionally compelling graphics in real time. Gamers had better hold on to their seats once games that use these chips arrive. And these chips will challenge entire banks of servers by rendering production-quality frames at near-real-time speeds. Graphics guru John Carmack's recent Slashdot post on the subject anticipates replacing entire render farms with graphics cards within 12 months:
Video cards can already match some shots done by Pixar:
http://www.tech-report.com/etc/2002q...s/index.x?pg=2

The good thing is that NVIDIA's NV30 CineFX already has the image quality needed to RECREATE any still picture from any computer graphics movie, ANY!, without any loss in IQ, unlike the R300.

However, when you are going to show a demo animation that runs in real time (24 fps), obviously if you don't have the performance to move that much detail at decent frame rates, you will need to lower the image quality to show off the power of your video card in real-time animations.. see?

Conclusion on NV30 superiority..
For gamers it only means higher performance in today's games, drivers that work flawlessly out of the box, and the best video card for Doom 3, like JC has said..

But for the professional computer graphics artist, like 3D animators, it means support for much longer pixel shaders per pass + 128-bit color at full precision all the time + greater performance with solid, stable drivers, and it will mean much, much more for top game developers, who have already stated their next projects will use the NV30 as the standard

But like a wise man once said, a picture is worth a thousand words. NVIDIA's NV30 demos alone will clearly show the superiority of its hardware technology, like the title says.. MUCH more than the R300

Last edited by Nv40; 09-25-02 at 05:03 PM.
Nv40 is offline   Reply With Quote
Old 09-25-02, 08:15 PM   #34
StealthHawk
Guest
 
Posts: n/a
Default

Quote:
Originally posted by jbirney
Yea, sorry to come off like an A$$, but some people are going to take every shred of PR babble and claim it's the second coming... jeez
Seriously, I thought we had gone over this a million times already: the current-gen R300 and NV30 won't be able to do true cinematic-quality rendering. But the fanboys keep bringing it up.
  Reply With Quote
Old 09-26-02, 03:17 AM   #35
nutball
International God of Sex
 
Join Date: Jul 2002
Location: en.gb.uk
Posts: 655
Default

Quote:
Originally posted by StealthHawk


Seriously, I thought we had gone over this a million times already: the current-gen R300 and NV30 won't be able to do true cinematic-quality rendering. But the fanboys keep bringing it up.
You're basically right, though I think you've missed some important words out of that assertion.

The NV30/R300 won't be able to do true cinematic-quality rendering in real time (or anything like real time).

Both parts have pretty much all the functionality in the pipeline necessary to render frames with the same quality as a Pixar movie[*]. All you have to do is spend the time programming them.

What they won't do is do it sufficiently fast for interactive use. OK, so say Final Fantasy takes 9,000 or 90,000 hours to render on the R300/NV30, rather than 900,000 hours on CPUs. That's not real time, is it? That's what the fanboys seem to be closing their eyes, ears, and brains to.
[*] Actually there is one very important piece of functionality which is missing from the R300, and which, from what I've heard from NVIDIA people, I suspect is missing from the NV30 as well: frame-buffer read-back into fragment programs. The lack of this makes blending into floating-point render buffers a very tricky operation. Basically it makes multi-pass rendering into floating-point buffers effectively impractical. This could well be a killer; I'm not sure how much Pixar-style renderers rely on this sort of functionality.

From what I understand frame-buffer read-back will very likely be a requirement in the OpenGL 2.0 spec (called for by the software developers, strongly resisted by the hardware developers). I presume it will be required for DX10, but that's pure speculation. If true, neither R300 nor NV30 would be OpenGL 2.0-capable. Interesting.
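
For what it's worth, here's a toy Python model of the ping-pong workaround that missing read-back forces on you; it's only an illustration of the data flow (two floating-point buffers, sample the previous one as a texture, write the other, swap every pass), not real graphics API code:

Code:
import numpy as np

height = width = 4
buf_a = np.zeros((height, width, 3), dtype=np.float32)   # FP render target A
buf_b = np.zeros_like(buf_a)                              # FP render target B

def fragment_program(prev_color, layer_color, alpha=0.25):
    # The "blend" has to happen inside the fragment program, because the
    # hardware blend unit cannot read back a floating-point destination.
    return layer_color * alpha + prev_color * (1.0 - alpha)

read, write = buf_a, buf_b
for layer in range(8):                                    # 8 rendering passes
    layer_color = np.full_like(read, layer / 8.0)         # stand-in geometry
    write[:] = fragment_program(read, layer_color)        # sample A, write B
    read, write = write, read                             # swap targets

print(read[0, 0])      # accumulated result after the final pass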
__________________
Got nuts?
nutball is offline   Reply With Quote
Old 09-26-02, 04:50 AM   #36
StealthHawk
Guest
 
Posts: n/a
Default

Yes, in real time. That's what I meant to say.
  Reply With Quote