So it could be a long time before we get the new Linux driver. Could we work around this roadblock with a different design?
I thought about it for a while. Here is what I guess one could try:
We have two streams, A.264 and B.264. We create two buffers, bufferA and bufferB, one for each stream, and two decoding contexts associated with them. But there is just one VdpDecoder. We can use a lock to make sure that only one stream uses the decoder at a time.
I don't know whether the driver allows such context switching. My assumption is that decoding is slice-independent: every time you feed a NAL unit plus the context structure to the hardware, the YUV data is rendered to a video surface. The connection among the NAL sequence is kept entirely in the context; no temporal information is kept by the hardware.
Is the decoding process in the hardware an atomic operation for each and every NAL unit?
Looking forward to your reply~