Re: VDPAU API and implementation issues
Functions to get vsync interval and time:
Philosophically, the presentation queue is intended to be used in a "feedback mode" rather than a pre-calculated mode. I think that's where most of our disconnect is.
In other words, we intended applications to queue up N frames at the start of presenting a stream, then monitor the actual display times of those frames as they get displayed/idled in the presentation queue, and then decide whether to skip the display of future frames based on whether the display of previous frames lagged at all.
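As a concrete illustration of that feedback decision, here is a minimal sketch. The function name and the half-frame-period threshold are my own illustrative choices, not part of VDPAU; only the VdpTime nanosecond representation comes from vdpau.h.

```c
#include <stdint.h>

/* VdpTime is a 64-bit timestamp in nanoseconds, as in vdpau.h. */
typedef uint64_t VdpTime;

/* Feedback decision sketch: after the presentation queue reports the
 * actual display time of a frame, compare it against the time we had
 * requested. If the frame appeared more than half a frame period late,
 * treat the stream as lagging and skip the next frame. The half-period
 * threshold is an illustrative policy, not anything VDPAU mandates. */
static int should_skip_next_frame(VdpTime requested, VdpTime actual,
                                  VdpTime frame_period)
{
    if (actual <= requested)
        return 0;                      /* displayed on time or early */
    return (actual - requested) > frame_period / 2;
}
```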
It sounds like you're attempting to use a model where you calculate/predict when a future VSYNC will occur, and adjust future scheduling of frames in terms of that.
For the issue you mention of knowing at what rate to queue frames when pre-loading the presentation queue at the start of a stream, I think that XF86VidMode should give a close enough approximation of the refresh rate that there will be no issue.
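For reference, XF86VidModeGetModeLine() reports the current mode's dot clock (in kHz) together with its htotal and vtotal timing values, and the refresh period follows from those. Only the arithmetic is sketched below; the helper name is mine, and the actual X call would be XF86VidModeGetModeLine(dpy, screen, &dotclock, &modeline).

```c
#include <stdint.h>

/* Refresh period from XF86VidMode timing data:
 * pixels per frame / pixels per second, converted to nanoseconds.
 * dotclock_khz * 1000 is the pixel rate in Hz. */
static uint64_t refresh_period_ns(int dotclock_khz, int htotal, int vtotal)
{
    return (uint64_t)htotal * (uint64_t)vtotal * 1000000ULL
           / (uint64_t)dotclock_khz;
}
```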
Unfortunately, I'm not sure if it would be possible to implement a "tell me when the/a most recent VSYNC occurred" API. I'll file a feature request to investigate this, although I certainly am not committing to implementing it. You may be able to simulate this by presenting a dummy surface and querying when it gets presented.
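If the dummy-surface approach works, the sequence would look roughly like the sketch below. It uses the standard vdpau.h function types (the pointers would come from VdpGetProcAddress); the helper name is mine, and details such as exactly when the dummy surface goes idle would need experimentation on real hardware.

```c
#include <vdpau/vdpau.h>

/* Hypothetical sketch of the dummy-surface trick: display a surface
 * "as soon as possible" (earliest_presentation_time = 0), then block
 * until it has been presented; the returned first-presentation
 * timestamp approximates the most recent VSYNC. Assumes the queue,
 * surface, and function pointers have already been set up. */
static VdpTime approximate_last_vsync(
    VdpPresentationQueueDisplay *display,
    VdpPresentationQueueBlockUntilSurfaceIdle *block_until_idle,
    VdpPresentationQueue queue,
    VdpOutputSurface dummy_surface)
{
    VdpTime presented = 0;
    /* clip_width/clip_height of 0 mean "use the whole surface" */
    display(queue, dummy_surface, 0, 0, /* earliest_presentation_time */ 0);
    block_until_idle(queue, dummy_surface, &presented);
    return presented;
}
```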
One other thing to note: In the NVIDIA implementation, the clock used to scan out pixels is not locked to the presentation queue timestamp clock; they may slowly drift (hopefully very slowly). This is another argument to use a feedback mode of operation, based on actual rather than pre-calculated VSYNC times.
BlockUntilSurfaceIdle: no non-blocking way to wait for event:
We envisaged applications that needed this functionality would use multiple threads. The thread that blocks inside BlockUntilSurfaceIdle could itself signal back to the main thread using a pipe/select mechanism. Would that work for you?
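A minimal sketch of that pattern follows: the helper thread blocks, then writes one byte to a pipe whose read end the main thread folds into its existing select() loop. The struct and function names are mine, and usleep() stands in for the real vdp_presentation_queue_block_until_surface_idle() call, which needs a live VDPAU device and surface.

```c
#include <pthread.h>
#include <unistd.h>
#include <sys/select.h>

/* Notification channel from the blocking helper thread back to the
 * main thread's select() loop. */
struct idle_waiter {
    int write_fd;   /* write end of the notification pipe */
};

static void *wait_for_idle(void *arg)
{
    struct idle_waiter *w = arg;
    /* Real code would call:
     *   vdp_presentation_queue_block_until_surface_idle(
     *       queue, surface, &first_presentation_time);
     * usleep() stands in for that blocking call here. */
    usleep(10000);
    char token = 1;
    (void)write(w->write_fd, &token, 1);   /* wake the main thread */
    return NULL;
}
```

The main thread creates the pipe, spawns the thread, and simply adds the pipe's read end to the fd_set it already passes to select(); a readable byte means the surface went idle.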
Inability to queue more than a couple surfaces into the presentation queue while performing other operations:
Did this occur with both the overlay- and blit-based presentation queues? To investigate this, it would be easiest if we could reproduce the issue using your code. Can you provide an application that reproduces it? Thanks.
Presentation timestamps jitter:
Unfortunately, the timestamps will jitter a lot more in the blit-based presentation queue. Yes, this may be affected by CPU/GPU load. It's unlikely this will change in the near term, or possibly even long term.
De-interlacing algorithm performance:
The most obvious difference between bob and the better algorithms should be increased vertical resolution in the output. In some cases, whether this is noticeable will depend on the exact image being displayed.
Various documentation issues:
I'll add a few more notes to vdpau.h that should help.