Old 08-10-10, 01:12 AM   #1
peter_ga
Registered User
 
Join Date: Dec 2009
Posts: 8
Default Massive memory leak, twinview, xinerama

Hi,
I am getting a memory leak running the 190.42 driver with two cards, PCI IDs 0640 rev A1 and 0645 rev A1. One card has two monitors attached and uses TwinView, with Xinerama merging the other card into the virtual space. The X11 server is home-built from source, version 1.6.1. Kernel version is 2.6.31.5.

My application occasionally behaves well, with no leak. When I look at /proc/PID/maps, there are, say, 66 entries for /dev/nvidia0 and 54 entries for /dev/nvidia1. When the leak occurs, the entries for /dev/nvidia1 grow rapidly until the kernel kills the application for running out of memory. This takes about two minutes, with the application size approaching roughly 1.6 GB. The entry count for /dev/nvidia0 in the maps file is stable, as is that for "00:00", i.e. the RAM.
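
For reference, this is roughly how I watch the mapping counts while the app runs (a minimal sketch; the PID is taken from the command line, and the device paths are the ones from my maps file):

[code]
/* Count /dev/nvidia0 and /dev/nvidia1 mappings in /proc/<pid>/maps. */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    char path[64], line[512];
    int n0 = 0, n1 = 0;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    snprintf(path, sizeof(path), "/proc/%s/maps", argv[1]);

    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);
        return 1;
    }
    while (fgets(line, sizeof(line), f)) {
        if (strstr(line, "/dev/nvidia0"))
            n0++;
        else if (strstr(line, "/dev/nvidia1"))
            n1++;
    }
    fclose(f);

    printf("nvidia0: %d entries, nvidia1: %d entries\n", n0, n1);
    return 0;
}
[/code]

When the leak is active, the nvidia1 count printed by this keeps climbing while the nvidia0 count stays flat.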
Windows are positioned so that each window lies entirely on a single card.

Is it possible, in a multi-card system, to ensure that the buffer allocated for a texture ends up on the correct card? I notice that glXCreateContext refuses to return a context for a non-zero screen number, yet the app usually behaves well with the window on screen 1.
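
For what it's worth, this is the pattern I use when trying to get a context on the second card's screen (a cut-down sketch; the attribute list is just an example, not my real one):

[code]
#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }

    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 16, None };
    int screen = 1;   /* the screen that should sit on the second card */

    XVisualInfo *vi = glXChooseVisual(dpy, screen, attribs);
    if (!vi) {
        fprintf(stderr, "no suitable visual on screen %d\n", screen);
        return 1;
    }

    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
    if (!ctx) {
        fprintf(stderr, "glXCreateContext failed for screen %d\n", screen);
        return 1;
    }

    printf("context on screen %d, direct rendering: %s\n",
           screen, glXIsDirect(dpy, ctx) ? "yes" : "no");
    return 0;
}
[/code]

It is the glXCreateContext call here that gives me nothing back when the screen number is non-zero.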

The heavy leak could be caused by the video output path, where glTexSubImage2D may be allocating an excessive amount of memory for each frame. Can I somehow force the device a texture ID is allocated on? However, even if there are no calls to glTexSubImage2D, the leak still occurs, only more slowly.
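
My per-frame upload looks roughly like this (a stripped-down sketch; the dimensions and RGB format are placeholders, not what the real app uses). The texture storage is allocated once up front and only glTexSubImage2D is called per frame, so I would not expect new memory to be needed for each frame:

[code]
#include <GL/gl.h>

#define FRAME_W 720   /* placeholder video size */
#define FRAME_H 576

static GLuint video_tex;

/* Called once, with the GLX context current: allocate texture storage. */
void video_init(void)
{
    glGenTextures(1, &video_tex);
    glBindTexture(GL_TEXTURE_2D, video_tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, FRAME_W, FRAME_H, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, NULL);
}

/* Called once per frame: update the existing storage in place. */
void video_frame(const unsigned char *pixels)
{
    glBindTexture(GL_TEXTURE_2D, video_tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, FRAME_W, FRAME_H,
                    GL_RGB, GL_UNSIGNED_BYTE, pixels);
}
[/code]

The texture is created while the context for the relevant screen is current, which I had assumed would pin it to that card.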

Any advice along these lines would be appreciated.