11-13-09, 01:32 PM   #4
uau
Re: VDPAU API and implementation issues

Quote:
Originally Posted by Stephen Warren View Post
It sounds like you're attempting to use a model where you calculate/predict when a future VSYNC will occur, and adjust future scheduling of frames in terms of that.
Yes, that's what I'm doing, and I think it gives a better end result. If you implement a frame rate limiting mechanism anyway, you may as well synchronize it with the real display updates to get the most accurate results possible.

Besides the lag caused by the display's frame rate limit, there's another kind of undesirable behavior that explicit synchronization with display refreshes can fix: jitter caused by queued times randomly falling a bit before or a bit after a vsync boundary. This occurs, for example, when playing 24 FPS content on a 72 Hz display. Ideally each frame would be shown for 3 display refreshes, but if the queued timestamps happen to land near vsync boundaries, and VDPAU shows each frame at the next vsync after the timestamp it was queued with, then "random" variation in timing can produce alternating 2- and 4-refresh frames. With vsync-aware timing this can be fixed by adjusting the timestamps so they can't flip from just-after-vsync to just-before-vsync or vice versa (see the sketch below). In principle VDPAU could make similar adjustments internally by taking previously queued timestamps into account, but I'm not sure such "smart" adjustments would really be appropriate at this level.
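
As a minimal sketch of the kind of adjustment I mean, assuming the refresh period is known and a recent vsync time has been read back from the surface status functions (snap_to_vsync, last_vsync and refresh_period are just illustrative names, not anything in the VDPAU API):

Code:
#include <stdint.h>
#include <vdpau/vdpau.h>

/* Snap a nominal presentation time onto the vsync grid defined by a recently
 * observed vsync (last_vsync) and the refresh period in nanoseconds, then
 * back it off by half a period.  The frame is still shown at the intended
 * vsync, but jitter of up to half a refresh in the nominal time can no
 * longer flip it across a vsync boundary.  Assumes nominal_pts >= last_vsync. */
static VdpTime snap_to_vsync(VdpTime nominal_pts, VdpTime last_vsync,
                             VdpTime refresh_period)
{
    uint64_t n = (nominal_pts - last_vsync + refresh_period / 2) / refresh_period;
    VdpTime target_vsync = last_vsync + n * refresh_period;
    return target_vsync - refresh_period / 2;
}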

Quote:
Unfortunately, I'm not sure if it would be possible to implement a "tell me when the/a most recent VSYNC occurred" API. I'll file a feature request to investigate this, although I certainly am not committing to implementing it. You may be able to simulate this by presenting a dummy surface and querying when it gets presented.
This is not a major issue; any inaccuracies caused by not having the correct timing from the start are unlikely to be really visible. Just nice to have.
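
For what it's worth, the dummy-surface workaround could look roughly like this. It's only a sketch, assuming the vdp_* function pointers have already been resolved through VdpGetProcAddress and that a spare output surface is available; error handling and polling delays are omitted:

Code:
#include <vdpau/vdpau.h>

/* Assumed to have been resolved earlier via VdpGetProcAddress(). */
static VdpPresentationQueueDisplay            *vdp_presentation_queue_display;
static VdpPresentationQueueQuerySurfaceStatus *vdp_presentation_queue_query_surface_status;

/* Queue a spare surface for immediate display and poll until it is actually
 * on screen; the reported first_presentation_time is then a recent vsync. */
static VdpTime probe_recent_vsync(VdpPresentationQueue queue, VdpOutputSurface dummy)
{
    VdpTime vsync_time = 0;
    VdpPresentationQueueStatus status;

    /* earliest_presentation_time = 0: display at the next vsync;
     * clip_width/clip_height = 0: use the whole surface. */
    vdp_presentation_queue_display(queue, dummy, 0, 0, 0);

    do {   /* a real implementation would sleep briefly between polls */
        vdp_presentation_queue_query_surface_status(queue, dummy,
                                                    &status, &vsync_time);
    } while (status == VDP_PRESENTATION_QUEUE_STATUS_QUEUED);

    return vsync_time;
}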

Quote:
One other thing to note: In the NVIDIA implementation, the clock used to scan out pixels is not locked to the presentation queue timestamp clock; they may slowly drift (hopefully very slowly). This is another argument to use a feedback mode of operation, based on actual rather than pre-calculated VSYNC times.
If by "drift" you only mean that intervals between VSYNCs are not necessarily a constant multiple of some queue timestamp amount then that's not a problem. I don't extrapolate timestamps arbitrarily far into the future from a single VSYNC time, but rather always use the latest available VSYNC time from the surface status functions as the base. So as long as there's no significant deviation in the interval from the last queried surface to the next frame being processed to be queued there's no problem.

Quote:
BlockUntilSurfaceIdle: no non-blocking way to wait for event:

We envisaged applications that needed this functionality would use multiple threads. The thread that blocks inside BlockUntilSurfaceIdle could itself signal back to the main thread using a pipe/select mechanism. Would that work for you?
Yes, it should work, though adding thread management does make things clumsier.
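
Something along these lines is what I'd end up with (a sketch with made-up structure names; the vdp_* pointer is again assumed to have been resolved via VdpGetProcAddress, and error handling is omitted):

Code:
#include <pthread.h>
#include <unistd.h>
#include <vdpau/vdpau.h>

/* Assumed resolved via VdpGetProcAddress(). */
static VdpPresentationQueueBlockUntilSurfaceIdle *vdp_presentation_queue_block_until_surface_idle;

struct idle_waiter {
    VdpPresentationQueue queue;
    VdpOutputSurface     surface;
    int                  notify_fd;    /* write end of a pipe() */
};

/* Helper thread: block until the surface is idle, then poke the pipe so the
 * main thread's select()/poll() loop wakes up. */
static void *wait_for_idle(void *arg)
{
    struct idle_waiter *w = arg;
    VdpTime first_presentation_time;
    char byte = 1;

    vdp_presentation_queue_block_until_surface_idle(w->queue, w->surface,
                                                    &first_presentation_time);
    write(w->notify_fd, &byte, 1);
    return NULL;
}

The main thread would create the pipe, start the helper with pthread_create(), and add the read end of the pipe to the descriptor set it already select()s or poll()s on.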

Quote:
Inability to queue more than a couple surfaces into the presentation queue while performing other operations:

Did this occur in both the overlay- and blit-based presentation queues?
I initially noticed it with overlay enabled, but IIRC I did test that disabling overlay made no difference.

Quote:
To investigate this, it'd be easiest if we could reproduce the issue using your code. Can you provide an application that reproduces this? Thanks.
I initially noticed the significantly increased CPU use in a test version of MPlayer that queued multiple frames ahead, and then tested with a version that queued frames at now + 20 seconds while manually stepping through one frame at a time. I don't have any of the test code left, though; I can recreate a version later.

Quote:
Unfortunately, the timestamps will jitter a lot more in the blit-based presentation queue. Yes, this may be affected by CPU/GPU load. It's unlikely this will change in the near term, or possibly even long term.
Couldn't the driver "fake" better values? Even if the real time at which things get processed varies, it could calculate the ideal vsync time and report that as the timestamp.