Gainward Ultra/750 Golden Sample GeForce4 Ti4600 Review
By: Pakman - April 7, 2002

The GeForce4 Ti Series

So just what is different about the GeForce4 Ti series of cards? Is it just a souped-up GeForce3 running at higher clock speeds, or is there genuinely something new here? Below I'll bullet the key features of the GeForce4 Ti series. First, though, a word of warning to unwary buyers about the "GeForce4" MX series of cards: despite the name, they are really a step below the GeForce3 if you ask me, as they do no programmable pixel or vertex shading whatsoever. If you are a 3D gamer, you will be disappointed with that purchase. You may save a few dollars, but you will end up with a lackluster 3D gaming card compared to the GeForce4 Ti series.

1) The NV25 chip - Built on a .15 micron process, it packs 63 million transistors. That's 6 million more than the GeForce3 chip and 8 million more than the Pentium 4. It also adds a second vertex shader, letting it push programmable shading in DirectX 8.1 games at double the throughput of a GeForce3.


NV25 chip layout
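To see why that second vertex shader matters, here is a minimal sketch of the throughput arithmetic. The cycle counts are hypothetical (real NV25 scheduling is far more involved); the point is simply that splitting a vertex batch across two identical units halves the time to finish it.

```python
# Hedged sketch: why a second vertex shader roughly doubles throughput.
# The cycles-per-vertex figure is illustrative, not an NV25 spec.

def cycles_needed(num_vertices, cycles_per_vertex, num_units):
    """Cycles to transform a batch when work is split evenly across units."""
    per_unit = -(-num_vertices // num_units)  # ceiling division
    return per_unit * cycles_per_vertex

batch = 10000
one_unit = cycles_needed(batch, 4, 1)   # GeForce3-style single vertex unit
two_units = cycles_needed(batch, 4, 2)  # GeForce4 Ti's dual vertex units
print(one_unit / two_units)  # -> 2.0: twice the vertex throughput
```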


2) BGA memory modules - The GeForce4 cards incorporate a new package design for their memory modules. A ball-grid array (BGA) uses small balls of solder to connect the chip to pads on the card. Older NVIDIA cards used TSOP (thin small outline package) chips, which connect to the card via leads that were each soldered to the PCB, creating dozens of possible failure points. BGA offers higher speeds with fewer errors. The GeForce4 Ti4600 uses top-of-the-line memory, and plenty of it.


The DDR RAM is in a smaller package now


The GeForce4's new memory has really got balls! Solder balls, that is. Each one mates with a corresponding pad on the card, resulting in faster memory with fewer errors.

The GeForce4 Ti4600 cards come with 128MB of 2.8ns DDR RAM. That's twice the memory that came with a typical GeForce3 card, and it's rated about 41% quicker than the GeForce3's memory. BGA modules are smaller and pack more of a punch. The memory on the GeForce4 cards is a big plus over what came with the GeForce3, and the GeForce4 GPU's architecture also uses that memory far more efficiently than the GeForce3 did. The next bulleted item explains how.
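The numbers above are easy to check with some back-of-the-envelope arithmetic. The 325 MHz memory clock and 128-bit bus are the Ti4600's reference-board specs (assumed here, not stated above); a 2.8ns cycle-time rating caps the chips at roughly 357 MHz, and the shipping clock on a DDR 128-bit bus works out to the 10.4GB/sec figure quoted later in this review.

```python
# Back-of-the-envelope numbers for the Ti4600's memory.
# Assumed reference specs: 325 MHz DDR on a 128-bit bus.

rating_ns = 2.8
max_clock_mhz = 1000 / rating_ns      # ~357 MHz: what the 2.8ns chips are rated for
actual_clock_mhz = 325                # shipping clock; DDR makes it 650 MHz effective
bus_width_bytes = 128 / 8             # 128-bit bus = 16 bytes per transfer

bandwidth_gb_s = actual_clock_mhz * 2 * 1e6 * bus_width_bytes / 1e9
print(round(bandwidth_gb_s, 1))  # -> 10.4, matching NVIDIA's quoted figure
```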


3) Lightspeed Memory Architecture II - LMA II adds new enhancements to the GPU's crossbar memory technology. By drawing only those pixels that will actually be seen, compressing Z-buffer data, and changing the way data is written to and read from the memory banks, the GeForce4 reduces the amount of traffic traveling over the memory bus. Together these features attack the memory bottleneck and give the GeForce4 a theoretical 10.4GB/sec of bandwidth. The additional bandwidth makes FSAA a genuinely usable feature, without the massive performance hit it took on the GeForce3 cards. This is what NVIDIA says about LMA II:

"A crossbar-based memory controller: Ensures that every aspect of the memory system is balanced and that all memory requests by the graphics processor are handled properly. Under complex loads, LMA II’s memory crossbar architecture delivers 2-4 times the memory bandwidth of other standard architectures. 

A Quad Cache memory caching subsystem: High-speed access buffers that store small amounts of data and operate at tremendously high bandwidth, ensuring that data is queued and ready to be written to the memory. These caches are individually optimized for the specific information they deal with, resulting in almost instantaneous retrieval of key data. 

Lossless Z-buffer compression: Reduces Z-buffer traffic—one of the largest consumers of memory bandwidth in a graphics subsystem—by a factor of four, without any reduction in image quality or precision. 

A visibility subsystem: Determines whether or not a pixel will be visible in a scene. If it determines a pixel will not be visible, the pixel is not rendered, saving valuable frame buffer bandwidth. 

Fast Z-clear technology: Minimizes the time it takes to clear the old data in the Z-buffer, boosting frame rates up to 10% without compromising image quality. "

This is probably one of the most important changes in the GeForce4 over the GeForce3's technology: memory is handled very differently, and much better, on the GeForce4. Note that with the visibility subsystem, NVIDIA is now showing signs of the Gigapixel/3dfx hidden-surface-removal influence. The Quad Cache, fast Z-clear, and lossless Z-buffer compression are really great features as well. It all adds up to a genuinely new architecture in the GeForce4, and underscores that this is not just a juiced-up GeForce3 card.
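The idea behind the visibility subsystem can be sketched in a few lines. This is a toy model (the fragment format and bandwidth units are invented for illustration, not NVIDIA's hardware algorithm): by testing depth before shading, a pixel that loses the depth test never pays its texture and frame-buffer cost.

```python
# Toy model of early visibility rejection: occluded pixels skip the
# expensive shading work, saving memory traffic. Illustrative only.

def render(fragments, zbuffer):
    """fragments: list of (x, z, shade_cost); returns bandwidth units spent."""
    traffic = 0
    for x, z, cost in fragments:
        traffic += 1                              # every fragment pays one Z read
        if z < zbuffer.get(x, float("inf")):      # visible: closer than stored depth
            zbuffer[x] = z
            traffic += cost                       # only visible pixels pay full cost
    return traffic

# Two triangles covering the same pixel: far one first, then near one.
frags = [(0, 0.9, 10), (0, 0.2, 10)]
print(render(frags, {}))          # -> 22: both get shaded (worst-case order)
near_first = sorted(frags, key=lambda f: f[1])
print(render(near_first, {}))     # -> 12: the occluded pixel is rejected early
```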

 


4) Accuview FSAA - The GeForce4 Ti cards use a unique multi-sampling technique for rendering FSAA called "Accuview". It renders the extra samples at less of a performance hit because the AA subsystem has been outfitted with wider internal data paths designed to accommodate the additional texture information. In addition, samples are now taken at slightly offset, semi-random positions rather than the same exact spots every time. The continually shifting positions produce, believe it or not, a more mathematically correct FSAA effect and do away with the FSAA halo effect. So with the GeForce4 Ti cards you get a whole new way of handling FSAA: it looks better and performs much better than it did on the GeForce3 cards.
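The core of any multi-sampling scheme is simple to illustrate: each pixel's final color is a blend weighted by how many sub-pixel sample points a triangle covers. The sample offsets below are made up for illustration; Accuview's actual sample positions are NVIDIA's own pattern.

```python
# Sketch of multisample AA coverage. A vertical triangle edge at edge_x
# crosses a pixel; the fraction of sample points it covers decides the blend.
# Sample positions are illustrative, not the Accuview pattern.

def coverage(edge_x, samples):
    """Fraction of sub-pixel samples left of a vertical edge at edge_x."""
    inside = sum(1 for sx, sy in samples if sx < edge_x)
    return inside / len(samples)

# Four samples per pixel at offset (not dead-center) positions:
samples = [(0.375, 0.125), (0.875, 0.375), (0.125, 0.625), (0.625, 0.875)]
print(coverage(0.5, samples))  # -> 0.5: an edge through the middle blends 50/50
print(coverage(1.0, samples))  # -> 1.0: fully covered pixel, no blending
```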


5) Multi-Monitor Support - The GeForce4 cards come equipped with two monitor outputs, at least one of which is a DVI output for use with flat-panel displays. With NVIDIA's nView feature, you can run two displays at once. Not every company's card offers this, so check for it before buying if you need it.

 

So to sum things up: the GeForce4 uses a whole new architecture, greatly improved over what was found on a GeForce3 card. It has more and faster memory that is used more efficiently, a better implementation of FSAA, and two programmable vertex shaders for DX 8.1 games. If you want the newest and best technology in a 3D video card, the GeForce4 Ti is clearly the card to get.

 

Next Page: Overclocking & Performance


Last Updated on April 7, 2002

Copyright © 1998-2004. All rights reserved.
Reproduction in whole or in part in any form or medium
without written permission of the site's owners is prohibited.
All trademarks are properties of their respective owners.
