NVIDIA Vertex Pipelines Efficiency from TNT2 to 6800U


Graphicmaniac
10-27-04, 05:55 PM
I have always been interested in the number of polygons a chip can do, ever since the PlayStation days :)

Lately most of the attention has gone to the fillrate power of GPUs, but I think it is still interesting to look at the efficiency of the vertex pipelines of our beloved GPUs and how they have improved over the years :)


Model             Million polygons/sec   Vertex pipelines   Core speed   Vertex Shader version (introduced with GF3)

TNT2 Ultra        9                      1                  150 MHz      -
GeForce 256       15                     1                  120 MHz      -
GeForce2 GTS      25                     1                  200 MHz      -
GeForce3          60                     1                  200 MHz      1.1
GeForce4 4600     136                    2                  300 MHz      1.3
GeForce5 5800U    200                    3                  500 MHz      2.0
GeForce5 5950U    356                    3                  475 MHz      2.0
GeForce6 6800U    600                    6                  400 MHz      3.0


Pipeline efficiency here is the number of polygons a single vertex pipeline can process per MHz of core clock:

E = (polygons/sec / number of vertex pipelines) / core speed in MHz

For example, the TNT2 Ultra: (9,000,000 / 1) / 150 = 60,000 polygons/sec per MHz, per pipeline. The small Python sketch after the table below reproduces all of these numbers.

Model             Pipeline efficiency                       Improvement
                  (polygons/sec per MHz, per pipeline)

TNT2 Ultra        60,000                                    -
GeForce 256       125,000                                   +108%
GeForce2 GTS      125,000                                   +0%
GeForce3          300,000                                   +140%
GeForce4 4600     226,667                                   -24%
GeForce5 5800U    133,333                                   -41%
GeForce5 5950U    249,825                                   +87%
GeForce6 6800U    250,000                                   +0%
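
If you want to double-check the arithmetic, here is a quick Python sketch that just recomputes the efficiency and improvement columns straight from the figures in the first table:

# Recompute the pipeline-efficiency table from the figures above.
# Data: (model, million polygons/sec, vertex pipelines, core clock in MHz)
gpus = [
    ("TNT2 Ultra",     9,   1, 150),
    ("GeForce 256",    15,  1, 120),
    ("GeForce2 GTS",   25,  1, 200),
    ("GeForce3",       60,  1, 200),
    ("GeForce4 4600",  136, 2, 300),
    ("GeForce5 5800U", 200, 3, 500),
    ("GeForce5 5950U", 356, 3, 475),
    ("GeForce6 6800U", 600, 6, 400),
]

prev = None
for model, mpolys, pipes, mhz in gpus:
    # E = (polygons/sec / number of vertex pipelines) / core speed in MHz
    e = (mpolys * 1_000_000 / pipes) / mhz
    change = "" if prev is None else f"{(e / prev - 1) * 100:+.0f}%"
    print(f"{model:<16} {e:>10,.0f}   {change}")
    prev = e

Running it prints the same efficiency figures and percentage changes as the table above (rounded to whole percent).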



As you can see, there have always been periods where pipeline efficiency rose sharply, followed by periods where it stayed flat.

I think the GF4's negative number comes from the introduction of the second vertex pipeline: two pipelines don't scale as neatly as a simple 1+1 would suggest.

It seems the big jumps in progress come every two GeForce series. The missing jump coincides with the GeForce 5800 disaster.

If NVIDIA keeps following this pattern, NV50 will be the next huge jump. I didn't say NV47 because it belongs to the same family as NV40, and the cycle for a new architecture has stretched from the old six months to a year.



Hope I've done some nice work :) If anyone finds this interesting, I can do the same for ATI's GPUs and VPUs tomorrow or later :)

ChrisRay
10-27-04, 06:03 PM

Pixel processing power has been the focus of major GPU designs. I really don't expect a triangle-count increase that substantial in the next two years; we should expect to see strides in pixel processing, however.