10-21-07, 02:59 PM | #1
Join Date: Oct 2007
memory leak with nvidia-settings
I have been using 'nvidia-settings -q GPUCoreTemp' in a script to get the temperature from my graphics card (Quadro FX Go1400), but it seems like every time I run the command, X's memory usage grows a little. It's not really noticeable after a single run, but after a few hours it starts to take a toll.
If you run it in a loop like this, it eats about 1 MB per second on my machine:

while true; do nvidia-settings -q GPUCoreTemp > /dev/null; done
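For now I'm just polling less often so the leak accumulates more slowly. A rough sketch of what I'm running (I'm assuming the -t/--terse flag, which prints just the value instead of the verbose description, exists in this driver's nvidia-settings; drop it if your build doesn't have it):

while true; do
    # -t (terse) prints only the number, e.g. "62" -- an assumption for this driver version
    nvidia-settings -q GPUCoreTemp -t >> /tmp/gputemp.log
    sleep 60  # one query per minute instead of a tight loop
done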
Does anyone know of a better way to check the temperature? My card doesn't seem to be supported by nvclock, and I haven't been able to find anything else, like a /proc file I could read the temperature from.
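For reference, the only /proc interface I know of is /proc/driver/nvidia/, and at least on my driver it doesn't expose the temperature:

ls /proc/driver/nvidia/           # on my system: agp/, cards/, version, warnings/
cat /proc/driver/nvidia/version   # driver and compiler version string, but no temp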
My system specs are:
Card: Quadro FX Go1400
X: xorg-server 1.4.0
If anyone can reproduce this, or if I can add any other information, let me know.
10-21-07, 08:23 PM | #2
Join Date: Aug 2002
Re: memory leak with nvidia-settings
I've been monitoring CPU/GPU temps for weeks on end after installing some new cooling on both and haven't noticed anything, nor can I reproduce this with your while loop.
6600GT AGP, 100.14.19, xorg 1.4.0, Debian unstable
nvclock -i shows GPU temps for my card
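If you just want the number for a script, something like this works here (a sketch only; nvclock's -T/--temperature flag and its exact output format can differ between nvclock versions and cards, so adjust the grep/awk to match yours):

# print just the temperature value from nvclock's output
nvclock -T | grep -i temperature | awk '{ print $NF }'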