Measuring card memory usage
We use the NVIDIA OpenGL driver on NVIDIA cards, typically a GT 220 with 1073 MB of
graphics memory. The supported applications have been growing in size, and we are
experiencing stability and performance problems as a result.
Reducing the size of the loaded graphics is effective at alleviating these problems.
To do that systematically, however, we need to estimate the size of the loaded resources
under a variety of graphics loads, and to know what the on-card limit actually is.
So far we have tried two approaches: intercepting the malloc/free calls made by the
NVIDIA shared object and keeping a running total of the allocated memory, and probing
from a CUDA application to see how much memory can still be allocated on the card.
Sampling these two figures at one-second intervals while an application loads, the
available card memory drops to zero at a particular point; then, when the screens are
mapped and rendering commences, much more card memory becomes available again, possibly
because the CUDA allocation measurement is itself intrusive. We would like to set
an effective limit so that all graphics resources remain on-card during continuous
operation. Would it be reasonable to take that limit to be the amount of dynamic memory
allocated to the NVIDIA shared object at the point where the available card
memory drops to zero?
Is there any better way of measuring the available card memory resources?
For the system configuration we are using, that point appears to be about 880 MB of memory dynamically allocated by the NVIDIA shared object.