10-21-07, 03:59 PM   #1
andy753421
memory leak with nvidia-settings

I have been using 'nvidia-settings -q GPUCoreTemp' in a script to read the temperature from my graphics card (Quadro FX Go1400), but every time I run the command, X seems to eat up a small amount of memory. It's not really noticeable after a single run, but after a few hours it starts to take a toll.

If you run it in a loop, X's memory usage seems to grow by about 1 MB per second on my machine:
Code:
# repeatedly query the GPU temperature; X's memory grows with each run
while true; do
	nvidia-settings -q GPUCoreTemp > /dev/null
done
The same problem occurs with 'nvidia-settings <anything>', even just 'nvidia-settings -h'.
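
In case anyone wants to reproduce this, here is a rough sketch of how I watch the X server's memory while the loop above runs. It assumes the server process is literally named "X" (on some setups it may be "Xorg") and only uses standard ps/pgrep options:
Code:
# sample the X server's resident set size (in KB) once per second;
# assumes the server process is named "X" (might be "Xorg" instead)
XPID=$(pgrep -x X | head -n1)
while true; do
	ps -o rss= -p "$XPID"
	sleep 1
done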

Does anyone know of a better way to check the temperature? It doesn't seem to be supported by nvclock, and I haven't been able to find anything else, such as a /proc file I could read from.
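
One possible fallback, though it's not GPU-specific: many laptops expose generic ACPI thermal zones under /proc. Whether any of these zones tracks the GPU at all depends entirely on the machine, so treat this as a sketch rather than a real answer:
Code:
# list generic ACPI thermal zones; not GPU-specific, and the paths
# (and whether they exist at all) vary from machine to machine
for z in /proc/acpi/thermal_zone/*/temperature; do
	echo "$z: $(cat "$z")"
done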

My system specs are:
Card: Quadro FX Go1400
Driver: 100.14.19
X: xorg-server 1.4.0
Kernel: 2.6.22
Distro: Gentoo

If anyone can reproduce this, or if there is any other information I can provide, please let me know.