View Full Version : FX 5900 overclock rates (& oddities)

07-14-03, 06:43 PM
Over the weekend, I did some experiments with my eVGA FX5900u. When I use NVIDIA's autodetect, I get clock rates of 490/932 - oddly, a core clock rate less than the 500MHz the performance BIOS uses. My friend with an identical setup (except for the case) reported similar frequencies - less than 500MHz core, a bit more than 900MHz memory.

I decided to use Artifact Tester 5 to try to verify the limits on the memory clock. However, experiments showed I can tolerate higher memory clock rates if I use a lower core clock. In particular, if I left the 2D core rate at 300MHz, the memory clock could hit 1000MHz without a problem. However, at a 450MHz core, I still got artifacts at 950MHz.

Seems like the autodetect is doing a fairly decent job of determining the limits. What I'm wondering is whether it's better to trade some memory speed for a higher core clock, or vice versa.

07-19-03, 08:40 PM
The autodetect on my BFG 5900 Ultra also chooses 490/930, and after extensive testing I found that is exactly the sweet spot for the highest framerates with no artifacting. So I'd say use that autodetect button; it seems to know what it's doing! :)

07-20-03, 08:11 AM
Artifact Tester 5 is woefully inadequate as an overclocking tester. By the time that program detects problems, you are well past your limits, even on a GeForce3. The program simply doesn't stress the card enough to show errors. For my GeForce3, I used to use the NVIDIA WolfMan Demo: when the sparkles on the screen disappeared and didn't show up even once every 30-60sec, the memory wasn't too high; if it didn't freeze or crash, the core was okay.

The auto-detect feature on the FX cards is nice, but also very inaccurate. I ran it 3 times and got very different results. Presumably it searches high and low until it finds stable speeds; however, it often settles on values that are not good. E.g., run 1 = 450 core / 1100 mem, run 2 = 499 core / 999 mem. But to get my GeForce FX 5800 to overclock stably (running 3DMark 2003 at 40 reps per test without freezing or artifacts) I had to go as low as 480 core / 980 mem. Although it could run quite a bit higher, it would freeze if I tested it for 10hrs straight.
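Editor's aside: the "search high and low until it finds stable speeds" behaviour described above can be pictured as a bisection over clock values. This is purely an illustrative sketch of that idea, not NVIDIA's actual algorithm; `find_max_clock` and its `is_stable` callback are hypothetical names.

```python
# Hypothetical sketch of a bisection-style clock search, the kind of
# procedure an auto-detect button might run. is_stable() stands in for
# a real stress/artifact check on the card at a given clock.

def find_max_clock(lo, hi, is_stable, step=1):
    """Binary-search the highest clock (MHz) in [lo, hi] that passes is_stable()."""
    best = lo
    while lo <= hi:
        mid = (lo + hi) // 2
        if is_stable(mid):
            best = mid          # mid passed; try higher
            lo = mid + step
        else:
            hi = mid - step     # mid failed; back off
    return best

# Example with a fake stability cutoff at 930 MHz:
print(find_max_clock(800, 1100, lambda mhz: mhz <= 930))  # -> 930
```

Note that if the real stability check is noisy (a marginal clock sometimes passes, sometimes fails), such a search will return different answers on different runs, which matches the varying auto-detect results reported in this thread.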

Finally, when you do find a speed that is fast and stable, do you account for the change in seasons? Summer here is up to 20degC warmer than winter. You could lose 10MHz to instability if your temperature went up 10-15degC.

What's the point of all that speed if it ain't stable?

07-20-03, 08:27 AM
I would think temperature changes could have a significant effect on the overclockability of the card, so the results would vary from one instance to another. Try maintaining a constant temperature - the cooler the better.

07-20-03, 06:16 PM
I dunno - Artifact Tester is definitely showing problems sooner than other tests I've been doing. I'm not convinced its test patterns are the best stress tests possible, but I haven't had time to roll better images yet. It certainly did a better job finding the max point on my GF4200 than any other tests I ran. Games wouldn't noticeably fail until the memory clock was 10-20MHz higher.

However, I am still curious if others have seen the max frequencies on memory vary depending on the core speed & vice versa.

Heat definitely makes a difference - for the safest results, it's probably best to test in a thermally unfriendly environment. However, if one wants stability, one can also just run at stock speeds.

07-26-03, 01:40 PM
Every time I choose Autodetect I get values like 490/930, 490/931, 491/929, 492/934, so it seems pretty accurate, as it always hovers right around 490/930.

07-26-03, 01:51 PM
Autodetect works really poorly on mine. It usually detects 449/448/450 core and 947/948/949 memory. If I set the memory to that and run a 3D app, it seems OK, but as soon as I return to the desktop and load up my web browser, I start noticing artifacts. I don't see the artifacts in 3D, though. The artifacts go away at 900MHz on the RAM.

Carbon Unit
07-26-03, 08:00 PM
I have found that depending on your GPU's temps you will get different results with auto detect, so if you run a bench or play a game and then select auto detect, you might get a lower setting.....

Also, I have noticed that I get my highest benchmarks at the auto detect setting; any overclock above that setting yields lower scores.

07-26-03, 09:46 PM
Since this thread is still alive...

I tried Artifact Tester again for laughs. I monitored the GPU temperature during the tests. I also made sure the GPU was using the overclocked 3D speeds and hadn't dropped to the Windows (2D) speeds, because I am not sure what mechanism the driver uses to switch between them.

GPU temperature:
Idle: 40degC
ArtifactTester5 (Hardcore): 41degC

So you can see that program doesn't stress the GPU at all. If I run other benchmarks and games, I can get the GPU over 60degC, and if I clock the card to Ultra speeds, up to 90degC. During those intense tests, the GPU and memory begin to fail, but may only fail once in a while. For example, I ran a test with extreme overclocking for this card, and it crashed once in 2hrs. Repeating the test twice, it didn't crash again in 2x 2hr sessions. Dropping the speed 10MHz, it never failed in 12hrs of intensive testing. That is what I call a stable overclock. You know it ain't going to crash in the middle of playing your favourite game.
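Editor's aside: the point above, that a marginal overclock failing "once in a while" needs long repeated testing to catch, can be put in numbers. If one stress session has a small failure probability p, the chance of observing at least one failure in n sessions is 1 - (1 - p)^n. The figures below are illustrative only, not measured from any card.

```python
# Illustrative: probability of catching a rare, intermittent overclock
# failure across repeated stress-test sessions.

def detect_chance(p_fail_per_run, runs):
    """Chance of seeing at least one failure in `runs` independent sessions."""
    return 1 - (1 - p_fail_per_run) ** runs

# e.g. assume a 5% chance of crashing in any single 2-hour session:
print(round(detect_chance(0.05, 1), 2))   # one session  -> 0.05
print(round(detect_chance(0.05, 12), 2))  # 12 sessions  -> 0.46
```

Even a dozen sessions catches such a fault less than half the time, which is why a single clean benchmark run says little about long-term stability.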

07-26-03, 10:15 PM
I'd recommend the X-Isle demo for monitoring temps. It does a pretty good job of stressing GPUs, and since it can run windowed, you can watch the temp level in real time.

07-29-03, 09:33 AM
For my Leadtek GeForce FX5900 A350 TDH 128MB,
Auto Detect uses 460/970,
but now I am running it at 500/985.
Those are for 3D mode.

PS: I was wondering why, when I set the memory over 985, test it, and then apply it, it still jumps back to 985?? I was able to get to 988 without problems, but 1MHz more than 988 and it jumps back.
The reason I set my memory to 985 now is coz it looks better :eek: