Old 08-06-09, 11:17 AM   #145
Re: VDPAU testing tool

I'm using driver 190.18 with my 8800 GTS 512 (G92) on Fedora 11 x86_64.
I have tried plain X with nothing else running (just an xterm); it does not make any difference.
I changed the GPU and memory clock settings with nvidia-settings. That seems to work; at least glxgears performance drops when I reduce the clock speeds.

GPU@650/972 (GPU Clock/Memory Clock)
Intel(R) Core(TM)2 Duo CPU     E6750  @ 2.66GHz
26:20 NVIDIA(0): NVIDIA GPU GeForce 8800 GTS 512 (G92) at PCI:1:0:0 (GPU-0)

VDPAU API version : 0
VDPAU implementation : NVIDIA VDPAU Driver Shared Library  190.18  Wed Jul 22 16:37:05 PDT 2009


MPEG DECODING (1920x1080): 79 frames/s
MPEG DECODING (1280x720): 157 frames/s
H264 DECODING (1920x1080): 45 frames/s
H264 DECODING (1280x720): 99 frames/s
VC1 DECODING (1440x1080): 123 frames/s

MIXER WEAVE (1920x1080): 2409 frames/s
MIXER BOB (1920x1080): 4362 fields/s
MIXER TEMPORAL (1920x1080): 946 fields/s
MIXER TEMPORAL + SKIP_CHROMA (1920x1080): 1250 fields/s
MIXER TEMPORAL_SPATIAL (1920x1080): 405 fields/s
MIXER TEMPORAL_SPATIAL + SKIP_CHROMA (1920x1080): 452 fields/s

MIXER TEMPORAL_SPATIAL (720x576 video to 1920x1080 display): 1404 fields/s

SURFACE GET/SET BITS seems to be mainly limited by the CPU (100% usage, scales almost linearly with CPU frequency) and only very little by the GPU clock (GET drops to 885 M/s at GPU@162/430).

MIXER scales nearly linearly with the GPU/memory frequency, but is almost independent of the CPU clock.

Decoding shows only a very slight dependence on the GPU/memory clock:
GPU@162/430 (GPU Clock/Memory Clock)
MPEG DECODING (1920x1080): 74 frames/s
MPEG DECODING (1280x720): 144 frames/s
H264 DECODING (1920x1080): 37 frames/s
H264 DECODING (1280x720): 79 frames/s
VC1 DECODING (1440x1080): 111 frames/s
CPU usage is about 30% during the tests, and performance barely changes if I throttle the CPU down just a little (75% throttle, 2 GHz):
MPEG DECODING (1920x1080): 75 frames/s
MPEG DECODING (1280x720): 156 frames/s
H264 DECODING (1920x1080): 45 frames/s
H264 DECODING (1280x720): 99 frames/s
VC1 DECODING (1440x1080): 115 frames/s
If I throttle the CPU to 10% and limit the CPU clock to 2 GHz, I get:
MPEG DECODING (1920x1080): 15 frames/s
MPEG DECODING (1280x720): 50 frames/s
H264 DECODING (1920x1080): 45 frames/s
H264 DECODING (1280x720): 96 frames/s
VC1 DECODING (1440x1080): 14 frames/s
MPEG and VC1 decoding are mainly dependent on the CPU, but when the CPU runs at full power it is no longer the main limiting factor. H264, on the other hand, is independent of the CPU and depends only slightly on the GPU clock; so what is the limiting factor there?

And some feedback on the test:
- It would be nice if the program exited when I press Esc. That is useful mainly when running in plain X without a window manager.
- There should be command-line options to run only individual tests.
- The get/put rates have too many digits that are not statistically significant; I think everything after the decimal point can be dropped without any loss of precision.
Lysius