
View Full Version : NVIDIA: Permanent Member of the ARB



StealthHawk
10-07-03, 10:06 PM
That's right, folks, you heard it right: NVIDIA is now a permanent voting member of the ARB.

The Baron
10-07-03, 10:08 PM
Thanks for stealing my stuff, Stealthy.

SANTA CLARA, CA—OCTOBER 8, 2003—NVIDIA Corporation (Nasdaq: NVDA), the worldwide leader in visual processing solutions, is pleased to announce that it has been elected by its peers in the technology industry as a permanent voting member of the OpenGL Architecture Review Board (ARB). Formed in 1992, the OpenGL® ARB is an independent consortium that governs the OpenGL specification. Composed of many of the industry's leading graphics vendors, the ARB defines conformance tests and approves new OpenGL features and extensions. The ARB has nine permanent voting members, including industry luminaries such as IBM, HP, Apple, and its founder, SGI.
The ARB’s unanimous vote comes as a result of NVIDIA’s continuous dedication and commitment to the evolution of OpenGL. For many years, NVIDIA has been an active participant in the OpenGL ARB, and has contributed resources, technologies, and support to many working groups.
“We are very proud to have been elected a permanent voting member of the OpenGL ARB,” said Kurt Akeley, co-inventor of OpenGL and a 3D graphics architect at NVIDIA. “NVIDIA is committed to the OpenGL specification and we look forward to helping the ARB respond quickly and flexibly to evolutionary changes in computer graphics technology.”
OpenGL has become the industry's most widely used and supported 2D and 3D graphics application programming interface (API), bringing thousands of applications to a wide variety of computer platforms. OpenGL fosters innovation and speeds application development by incorporating a broad set of rendering, texture mapping, special effects, and other powerful visualization functions.
“Promotion from auxiliary to permanent member status recognizes NVIDIA’s major contributions to OpenGL 1.3, 1.4, and 1.5 as well as its efforts at leading or participating in the development of many ARB-approved OpenGL extensions,” said Jon Leech, OpenGL ARB secretary at Silicon Graphics, Inc. “Silicon Graphics welcomes NVIDIA as a permanent member in the OpenGL Architectural Review Board.”

OpenGL enables visual computing applications, from markets such as computer-aided design and digital content creation, to exploit modern graphics hardware. This capability allows developers catering to sectors including auto manufacturing, medical imaging, and film production to create compelling graphics. NVIDIA has been driving several of the key new features of the upcoming OpenGL® 1.5 release such as vertex buffer objects and occlusion queries. NVIDIA was also one of the primary contributors to the new OpenGL® Shading Language extension. In addition, NVIDIA contributed technology and expertise toward the development of multi-texturing, vertex and fragment programming, cube mapping, point sprites, and non-power-of-two textures to OpenGL.

Enjoy the press release goodness.

micronX
10-07-03, 11:03 PM
Cool, now we'll get more NV_EXTENSIONS :dance:

-=DVS=-
10-08-03, 03:40 AM
:kill:

StealthHawk
10-08-03, 04:05 AM
Originally posted by micronX
Cool, now we'll get more NV_EXTENSIONS :dance:

Ok, maybe I'm totally off base here, but couldn't NVIDIA already make NV extensions? What you're saying is that the ARB is responsible for allowing NVIDIA to create NV extensions?

note: I still don't understand your premise since NVIDIA was already a voting member of the ARB.

sxotty
10-08-03, 06:47 AM
I am sooooooooooooooooooooooooooooooooooooooooooo happy about this, seriously. It has been ridiculous that they have not already become a permanent member. I realized after whining about it that they still had basically the same amount of authority, but it was downright childish of the other members not to have already allowed Nvidia on as a permanent member.

Joe DeFuria
10-08-03, 06:51 AM
Originally posted by sxotty
I am sooooooooooooooooooooooooooooooooooooooooooo happy about this, seriously. It has been ridiculous that they have not already become a permanent member. I realized after whining about it that they still had basically the same amount of authority, but it was downright childish of the other members not to have already allowed Nvidia on as a permanent member.

Actually, it wasn't childish at all.

nvidia was for a time trying the "make everyone pay to license our crap" game with OpenGL extensions. I have to assume that denying nVidia permanent membership was their "punishment" for trying to strong-arm the other ARB members.

Assuming nVidia has learned their lesson, then it is indeed "right" to admit them as a permanent member at this time. But that doesn't make their past exclusion wrong. ;)

sxotty
10-08-03, 08:02 AM
Well Joe, I have no real knowledge or idea about that. All I know is that conspiracy theorists will lose a little more support for the idea that the OpenGL board screws over nVidia to punish them, similar to the idea that Microsoft punished them by picking FP24.

Rampant CL
10-08-03, 04:52 PM
Doesn't this mean that they can 'push' specifications into new versions of OpenGL which they might just be planning to implement in the next range of their cards?

JohnsonLKD
10-08-03, 06:35 PM
Ati + MS DX team > nVIDIA + OGL team.

It's too late nVIDIA.:D

Hanners
10-09-03, 05:04 AM
Originally posted by Rampant CL
Doesn't this mean that they can 'push' specifications into new versions of OpenGL which they might just be planning to implement in the next range of their cards?

Not really, because everything will still go to a vote.

theultimo
10-09-03, 02:14 PM
Originally posted by JohnsonLKD
Ati + MS DX team > nVIDIA + OGL team.

It's too late nVIDIA.:D

But ATi has been a member of the ARB for a lot longer than nVidia....

Nutty
10-09-03, 02:54 PM
But ATi has been a member of the ARB for a lot longer than nVidia....

And nvidia still made more interesting technology available than ATI. :)

Humus
10-09-03, 04:20 PM
I will have to disagree with that.

The way I see it, nVidia has been the faster one for most of the time, while ATI has had more features and cool stuff to play around with as a developer. In the last generation, ATI wins both the features and the performance game.

Steppy
10-09-03, 05:02 PM
Originally posted by Nutty
And nvidia still made more interesting technology available than ATI. :)

Other than T&L on die, what features did NV introduce?

3dfx had multitexturing and FSAA (first to market with usable FSAA),
S3 had texture compression,
ATI had programmable shaders, or at least the beginnings of them,
Matrox had EMBM.

What feature did Nvidia introduce? They always had the speed to run things others thought up (and some things they outright stole), but they were never laden with new features.

astroguy
10-09-03, 05:16 PM
lol ATI had programmable shaders first? WTF are you smoking?
geforce3 came out like nearly a year before

NVIDIA had the first full hardware GPU... how is that not a revolutionary feature?

NVIDIA was the first to have per-pixel lighting, in the GeForce2 GTS.

*sigh* some ati fanboys are just so ignorant

theultimo
10-09-03, 05:25 PM
Originally posted by Steppy
Other than T&L on die, what features did NV introduce?

3dfx had multitexturing and FSAA (first to market with usable FSAA),
S3 had texture compression,
ATI had programmable shaders, or at least the beginnings of them,
Matrox had EMBM.

What feature did Nvidia introduce? They always had the speed to run things others thought up (and some things they outright stole), but they were never laden with new features.

Not sure about ATi and programmable shaders, as the GF2 GTS had the same "shaders" as the Radeon too... But don't forget PowerVR and TBDR... or The Baron might get upset..... :lol:

astroguy
10-09-03, 05:33 PM
geforce3 was the first card to have programmable shaders...

radeon did NOT have programmable shaders...

the radeon 8500 was the very first ati card to have programmable shaders...

gmontem
10-09-03, 05:49 PM
Originally posted by Steppy
Matrox was EMBM.
TriTech/Bitboys had EMBM first (Pyramid3D), not Matrox.

theultimo
10-09-03, 08:09 PM
Originally posted by gmontem
TriTech/Bitboys had EMBM first (Pyramid3D), not Matrox.

And followed up by the Glaze3D, which is where XBA got its start :headbang:

Nutty
10-10-03, 03:48 AM
In the last generation ATI wins both the features and performance game.


Do you mean the R3XX and NV3X generation? Yes, ATI might have won the performance round, but features? No way. Nvidia offers far more features than ATI.

True dynamic branching in vertex shaders.
Conditional branching in fragment shaders.
Asynchronous read pixels. (ATI's framebuffer reading is still crap from what I've heard)
Ultra Shadow stencil shadow accelerator.

And yes, NV were the first to expose hardware vertex programs and DX8-equivalent pixel shaders in OpenGL, way before ATI did, or even before DX8 shipped.
They were first to ship a consumer-level hardware T&L chip too.

What cool features does ATI have over NV? I haven't seen that many ATI only extensions.

Hanners
10-10-03, 05:57 AM
Originally posted by Nutty
What cool features does ATI have over NV? I haven't seen that many ATI only extensions.

F-Buffer and Truform are the two that spring to mind.

Humus
10-10-03, 06:42 AM
Originally posted by astroguy
lol ATI had programmable shaders first? WTF are you smoking?
geforce3 came out like nearly a year before

NVIDIA had the first full hardware GPU... how is that not a revolutionary feature?

NVIDIA was the first to have per-pixel lighting in the geforce2gts.

*sigh* some ati fanboys are just so ignorant

Shaders are a fuzzy area when it comes to who came first. You could argue that even a GF1/GF2 and the Radeon had programmable shaders, just with a very limited instruction set and a very low instruction count, if you'd prefer. When does it change from configurable to programmable? Hard to say. I'm not sure if one would call nVidia's register combiners programmable or configurable.
On the vertex shader side, though, it's pretty clear that the GF3 was first.

Humus
10-10-03, 06:59 AM
Originally posted by Nutty
Do you mean the R3XX and NV3X generation? Yes, ATI might have won the performance round, but features? No way. Nvidia offers far more features than ATI.

True dynamic branching in vertex shaders.
Conditional branching in fragment shaders.
Asynchronous read pixels. (ATI's framebuffer reading is still crap from what I've heard)
Ultra Shadow stencil shadow accelerator.

And yes, NV were the first to expose hardware vertex programs and DX8-equivalent pixel shaders in OpenGL, way before ATI did, or even before DX8 shipped.
They were first to ship a consumer-level hardware T&L chip too.

What cool features does ATI have over NV? I haven't seen that many ATI only extensions.

First of all, there's no conditional branching in the fragment shaders. It's conditional assignment, and that's a completely different thing, and not a very interesting feature either; it's only a performance thing.
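To illustrate that distinction with a hypothetical sketch in plain C (not actual shader code): conditional assignment, like a CMP-style fragment shader instruction, computes both candidate results and then selects one, while a true dynamic branch skips the untaken path entirely.

```c
/* Conditional assignment (predication): both candidate results are
   always computed, then one is selected.  This mirrors what a
   CMP-style fragment shader instruction does -- no work is skipped. */
float shade_predicated(float x)
{
    float cheap  = x * 0.5f;  /* always evaluated */
    float costly = x * x;     /* stand-in for an expensive path; also always evaluated */
    return (x > 1.0f) ? costly : cheap;  /* just select one result */
}

/* True dynamic branching: only the taken path executes, so the
   expensive computation is skipped when it isn't needed. */
float shade_branched(float x)
{
    if (x > 1.0f)
        return x * x;         /* evaluated only when the branch is taken */
    return x * 0.5f;
}
```

Both functions return the same values, which is the point: with conditional assignment alone, the "costly" side still burns cycles on every fragment, so it buys performance nothing and flexibility nothing over a real branch.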

Asynchronous reads are to me fairly uninteresting, but there's nothing keeping ATI from implementing it. That's not a hardware thing.

Ultra shadow is cool for those who use stencil shadows, but that's not the best shadowing technique IMO. Even with all these kinds of techniques applied it's still slower and less flexible than shadow mapping.

As for fragment shaders, which IMO are the most important feature: the GF3/4 model is largely inferior to the Radeon 8500's. And when the 9700 came around, it was waaaay ahead of the GF4. With the GFFX you have a fatter version of the floating point shaders, but most of the extra stuff is fairly uninteresting, like multiple precisions, conditional assignment (which is very peripheral if you work in high level languages), etc. The only really good thing is the additional instruction count, though with the F-buffer that crown goes back to ATI again. And the slowass performance makes the whole thing utterly unpleasant to work with on the GFFX.

NV still doesn't support floating point textures in DX, and in OGL their support is very limited: only rectangle textures, not 1D, 2D, 3D, or cube maps, which are the important targets.

ATI also supports multiple render targets.

Then you have occlusion culling and occlusion query. The Radeon was first with the hardware, though nVidia exposed the query first; it's supported even on the original Radeon.

Comparing the old Radeon to the GF2 of that time, you have EMBM, three texture units, and volumetric textures. At least the latter two, and especially volumetric textures, are very important features.

ATI was also first with a sensible buffer object extension, and was first with double-sided stencil (not a very important feature IMO, though).

etc....

sxotty
10-10-03, 08:08 AM
There is not really much worth arguing about, because everyone can have their own opinion of which things are the big cool features. Me, I think the GeForce with T&L was one of the most important features ever, but it matters little. The point is that this is good, and if NV can get some things into OpenGL that their upcoming product supports, I fail to see that as bad. That just means what? They won't have an NV_ extension, just a regular OpenGL extension...