memory leak with nvidia-settings
I have been using 'nvidia-settings -q GPUCoreTemp' in a script to get the temperature from my graphics card (Quadro FX Go1400), but it seems like every time I run the command, X eats up a small amount of my memory. It's not really noticeable from a single run, but after a few hours it starts to take a toll.
If you run it in a loop, it eats up about 1 MB per second on my machine.
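For reference, this is roughly the loop I mean (one query per second; the exact growth will obviously vary by system):

    while true; do
        nvidia-settings -q GPUCoreTemp
        sleep 1
    done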
Does anyone know of a better way to check the temperature? The card doesn't seem to be supported by nvclock, and I haven't been able to find anything else, like a /proc file I could read from.
My system specs are:
Card: Quadro FX Go1400
X: xorg-server 1.4.0
If anyone can reproduce this, or if there's any other information I can add, let me know.
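One way to see the growth is to watch the X server's resident memory while the loop runs; a rough sketch, assuming the server process is named Xorg:

    while true; do
        nvidia-settings -q GPUCoreTemp > /dev/null
        ps -C Xorg -o rss=   # resident set size in KB, climbs steadily here
        sleep 1
    done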
Re: memory leak with nvidia-settings
I've been monitoring CPU/GPU temps for weeks on end after installing some new cooling on both and haven't noticed anything, nor can I reproduce this with your while loop.
6600GT AGP, driver 100.14.19, xorg 1.4.0, Debian unstable
nvclock -i shows GPU temps for my card
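If your card were supported, something like this would pull out just the temperature line (assuming nvclock labels it with 'temp'):

    nvclock -i | grep -i temp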