nV News Forums (http://www.nvnews.net/vbulletin/index.php)
-   NVIDIA Linux (http://www.nvnews.net/vbulletin/forumdisplay.php?f=14)
-   -   memory leak with nvidia-settings (http://www.nvnews.net/vbulletin/showthread.php?t=100747)

andy753421 10-21-07 02:59 PM

memory leak with nvidia-settings
 
I have been using 'nvidia-settings -q GPUCoreTemp' in a script to get the temperature from my graphics card (Quadro FX Go1400), but it seems that every time I run the command, X eats up a small amount of my memory. It's not really noticeable after a single run, but after a few hours it starts to take a toll.
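For context, all the script does is pull the number out of the query output, something like this (a rough sketch; the -t/--terse flag and the exact format of the verbose output are assumptions about this version of nvidia-settings):
Code:

# terse mode prints only the value, e.g. "52" (assumes -t/--terse
# is supported by this nvidia-settings version)
temp=$(nvidia-settings -t -q GPUCoreTemp)

# fallback: parse the verbose output, which is assumed to look like
#   Attribute 'GPUCoreTemp' (hostname:0.0): 52.
temp=$(nvidia-settings -q GPUCoreTemp | sed -n 's/.*GPUCoreTemp.*: \([0-9]*\)\..*/\1/p')
echo "$temp"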

If you run it in a loop, X's memory usage seems to grow by about 1 MB per second on my machine.
Code:

# hammer the query; X's memory usage climbs while this runs
while true; do
        nvidia-settings -q GPUCoreTemp > /dev/null
done
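To watch the leak from the X server's side, something like this can run alongside the loop above (a sketch; it assumes the server process is named Xorg and that your ps supports -C and -o rss):
Code:

# print Xorg's resident set size (in KB) once a second; the number
# climbs steadily while the query loop is running
while true; do
        ps -C Xorg -o rss=
        sleep 1
done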

It seems to have the same problem running 'nvidia-settings <anything>', even just 'nvidia-settings -h'.

Does anyone know of a better way to check the temperature? It doesn't seem to be supported by nvclock, and I haven't been able to find anything else, like a /proc file that I could read from.

My system specs are:
Card: Quadro FX Go1400
Driver: 100.14.19
X: xorg-server 1.4.0
Kernel: 2.6.22
Distro: Gentoo

If anyone can reproduce this, or if there is any other information I can add, let me know.

logan 10-21-07 08:23 PM

Re: memory leak with nvidia-settings
 
I've been monitoring CPU/GPU temps for weeks on end after installing some new cooling on both and haven't noticed anything, nor can I reproduce this with your while loop.

6600GT AGP, 100.14.19, xorg 1.4.0, 2.6.23.1, Debian unstable

'nvclock -i' shows GPU temps for my card.
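If you want a path that avoids the X server entirely, polling nvclock in the same sort of loop should leave X's memory flat (a sketch; it assumes nvclock supports temperature readings on your card, which it apparently doesn't on the Quadro FX Go1400):
Code:

# poll the temperature via nvclock; it reads the hardware directly
# rather than going through the X server, so X's memory should stay flat
while true; do
        nvclock -i > /dev/null
        sleep 1
done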

