Re: Pixel Clock
First, a little theory which will hopefully clarify some things.
When using a normal CRT screen you have an electron gun which has to illuminate every pixel on the screen each refresh. The gun draws the image line by line, and at the end of each line it has to move back to the start, which takes a certain amount of time during which the gun does 'nothing'.
An LCD screen works differently and, for instance, doesn't have to 'idle' while moving to the next line. Because it doesn't have to idle (among other related reasons), an LCD screen can get by with a lower pixel clock than a CRT at the same resolution and refresh rate. In general CRT modelines work fine on LCD screens. You run into problems when you want to use CRT modelines for resolutions like 1600x1200 or 1680x1050, as those can require pixel clocks higher than what your card supports. To work around this there's a trick called 'reduced blanking', which shortens those 'move back' times (the blanking intervals) and so allows you to use lower pixel clocks.
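To make this concrete, here's a small sketch of the arithmetic: the pixel clock is simply the total pixels per frame (including the blanking) times the refresh rate. The standard (VESA DMT) totals for 1600x1200@60 are well known; the reduced-blanking totals below are approximate CVT-RB figures, so treat them as illustrative:

```python
def pixel_clock_hz(htotal, vtotal, refresh_hz):
    """Pixel clock = total pixels per frame (incl. blanking) x refresh rate."""
    return htotal * vtotal * refresh_hz

# 1600x1200@60 with standard (VESA DMT) blanking: 2160x1250 total
standard = pixel_clock_hz(2160, 1250, 60)   # 162.0 MHz

# Same resolution with reduced blanking: roughly 1760x1233 total (approximate)
reduced = pixel_clock_hz(1760, 1233, 60)    # around 130 MHz

print(standard / 1e6, reduced / 1e6)
```

As you can see, shrinking the blanking intervals drops the required pixel clock by roughly 30MHz for this mode, which can be the difference between a mode working or not on an older DVI output.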
The first nvidia video cards that shipped with DVI contained a TMDS output with a max pixel clock of somewhere around 140MHz (perhaps 150MHz or 130MHz, I don't know the exact figure). It was designed to at least allow 1280x1024, which at the time was the maximum available resolution for LCD screens. (Look for good modelines on this forum, as more people have this issue.)
On more modern GeForce cards the max pixel clock of the DVI output is somewhere between 150MHz and 165MHz (depending on the card). This should be good enough for 1600x1200 or a little higher.
GeForce 7800 cards, Quadro cards and some GeForce 6 boards (at least the 6800 card from Apple, and perhaps others) contain a dual-link DVI interface which supports twice the pixel clock, up to a little more than 300MHz. This is mainly useful for 30" LCD screens running at 2560x1600, which can't be driven over single link.
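A quick sanity check of why 2560x1600 needs dual link, using the same pixel-clock arithmetic as above (the frame totals here are approximate reduced-blanking figures, not exact spec values):

```python
def pixel_clock_mhz(htotal, vtotal, refresh_hz):
    # Every pixel in the frame, blanking included, is sent refresh_hz
    # times per second; divide by 1e6 to get MHz.
    return htotal * vtotal * refresh_hz / 1e6

# 2560x1600@60 with reduced blanking, approximate totals of 2720x1646
clock = pixel_clock_mhz(2720, 1646, 60)
single_link_max = 165  # MHz, the single-link DVI ceiling mentioned above

print(f"{clock:.1f} MHz, exceeds single link: {clock > single_link_max}")
```

Even with reduced blanking the mode lands well above 165MHz, so a single TMDS link can't carry it; dual link splits the pixels across two links, halving the clock each one has to run at.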