I have been using 'nvidia-settings -q GPUCoreTemp' in a script to get the temperature from my graphics card (Quadro FX Go1400), but every time I run the command, the X server's memory usage grows slightly. It's not noticeable after a single run, but after a few hours it starts to take a toll.
Running it in a loop leaks about 1 MB per second on my machine:

while true; do
    nvidia-settings -q GPUCoreTemp > /dev/null
done
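For reference, this is roughly how I measured the growth: sampling the X server's resident set size (RSS) from /proc before and after the loop. The process name "Xorg" is an assumption; adjust it for your setup.

```shell
#!/bin/sh
# Print the resident set size (RSS, in kB) of the process with the
# given PID, read from /proc/<pid>/status.
rss_kb() {
    awk '/^VmRSS:/ {print $2}' "/proc/$1/status"
}

# Usage against the X server (PID lookup via pidof is an assumption):
#   before=$(rss_kb "$(pidof Xorg)")
#   ... run nvidia-settings in a loop for a while ...
#   after=$(rss_kb "$(pidof Xorg)")
#   echo "X grew by $((after - before)) kB"

# Demo on this shell's own PID, so the function works on any machine:
rss_kb $$
```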
It seems to be the same with any 'nvidia-settings <anything>' invocation, even just 'nvidia-settings -h'.
Does anyone know of a better way to check the temperature? It doesn't seem to be supported by nvclock, and I haven't found anything else, like a /proc file I could read from.
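One thing I'd consider, if it applies here: some driver packages ship nvidia-smi, which reports the temperature without going through the X server. A sketch, assuming the --query-gpu flags exist in the driver version (they do in recent drivers, but possibly not in ones old enough for a Quadro FX Go1400):

```shell
#!/bin/sh
# Try to read the GPU core temperature via nvidia-smi, bypassing X.
# The query flags are an assumption about the installed driver version.
get_gpu_temp() {
    if command -v nvidia-smi >/dev/null 2>&1; then
        nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader
    else
        echo "nvidia-smi not available" >&2
        return 1
    fi
}

# Fall back to a placeholder so the script still exits cleanly on
# machines without the tool:
get_gpu_temp 2>/dev/null || echo "n/a"
```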
My system specs are:
Card: Quadro FX Go1400
X: xorg-server 1.4.0
If anyone can reproduce this, or if I can add any other information, let me know.