nV News Forums (http://www.nvnews.net/vbulletin/index.php)
-   NVIDIA Linux (http://www.nvnews.net/vbulletin/forumdisplay.php?f=14)
-   -   GLX Pixmap destroy issue (http://www.nvnews.net/vbulletin/showthread.php?t=82124)

kwindla 12-10-06 06:12 PM

GLX Pixmap destroy issue
 
Hi,

I'm using GLX Pixmaps and the texture_from_pixmap extension to render
textures in one process and use them from another. What I'm doing is
similar to what folks writing compositors are doing, except that I'm
not concerned about drawing from X Windows, only about textures I
create myself across multiple processes using GL.

I've got two -- I think related -- problems that appear to me to be
driver bugs:

1) When I "wrap" an X Pixmap in two different GLX Pixmaps,
destruction of either GLX Pixmap makes the other one into a Bad
Drawable and renders the underlying X Pixmap unusable. This is
almost a show-stopper for me, although it's just barely possible
to work around this problem with a lot of interprocess
communication <laugh>.

2) Destroying a GLX Pixmap that "shares" an underlying X Pixmap --
either explicitly or when a process terminates -- leaks memory in
the X server. This is a show-stopper, as I haven't been able to
figure out a work-around (other than re-starting the X server
<laugh>).

I'm attaching a stripped-down, single-process test case that
demonstrates both these issues (at least on my GeForce 6600 with
1.0-9631).
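In outline, the problem-1 path is just this (a condensed sketch, not the
attached test case itself; the function name and the already-set-up
root/config/ctx parameters are illustrative):

#include <X11/Xlib.h>
#include <GL/glx.h>

/* dpy/root/config/ctx are assumed to be set up by the caller; the
 * attached pixbuff-destroy-test.c has the full setup. The pixmap depth
 * (24 here) must match the chosen GLXFBConfig. */
void repro_shared_pixmap_destroy(Display *dpy, Window root,
                                 GLXFBConfig config, GLXContext ctx)
{
    const int attrs[] = { None };

    /* One X Pixmap, wrapped by two GLX Pixmaps. */
    Pixmap xpix = XCreatePixmap(dpy, root, 256, 256, 24);
    GLXPixmap a = glXCreatePixmap(dpy, config, xpix, attrs);
    GLXPixmap b = glXCreatePixmap(dpy, config, xpix, attrs);

    /* Destroying one should leave the other usable... */
    glXDestroyPixmap(dpy, a);
    XSync(dpy, False);

    /* ...but on my setup this generates Bad Drawable, and the underlying
     * xpix is unusable from then on as well. */
    glXMakeCurrent(dpy, b, ctx);
}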

Some further notes ...

- My core work-flow looks like so (a sketch of the compositing side's
  bind step follows this list):

  Rendering Process:
  - create an X Pixmap
  - create a GL Context and a GLX Pixmap
  - make the Context/GLX Pixmap pair current
  - draw a bunch of stuff
  - [... loop, drawing periodically]

  Compositing Process:
  - create an X Window
  - create a GL Context
  - create a GLX Pixmap that wraps an X Pixmap created by a rendering
    process
  - make the Context/Window pair current
  - call glXBindTexImageEXT to use the X Pixmap as a texture
  - draw that texture to the Window
  - [... loop, drawing that and other textures]
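Here is a rough sketch of that compositing-side bind step, assuming a
context already current on the compositing window, an fbconfig that
supports GLX_EXT_texture_from_pixmap, and the two extension entry points
already fetched with glXGetProcAddress (names here are illustrative):

#include <X11/Xlib.h>
#include <GL/gl.h>
#include <GL/glx.h>
#include <GL/glxext.h>

/* One compositing pass over a shared X Pixmap. In real code you would
 * create the GLXPixmap and texture once and reuse them each frame. */
void composite_texture_pass(Display *dpy, GLXFBConfig config,
                            Pixmap shared_xpix,
                            PFNGLXBINDTEXIMAGEEXTPROC BindTexImageEXT,
                            PFNGLXRELEASETEXIMAGEEXTPROC ReleaseTexImageEXT)
{
    const int attrs[] = {
        GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
        GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGB_EXT,
        None
    };
    GLXPixmap glxpix = glXCreatePixmap(dpy, config, shared_xpix, attrs);

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* Attach the pixmap's contents to the bound texture object. */
    BindTexImageEXT(dpy, glxpix, GLX_FRONT_LEFT_EXT, NULL);

    /* ... draw a quad textured with `tex` into the window here ... */

    ReleaseTexImageEXT(dpy, glxpix, GLX_FRONT_LEFT_EXT);
    glXDestroyPixmap(dpy, glxpix);
    glDeleteTextures(1, &tex);
}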



- I need the speed of direct rendering both for creating the textures
in the rendering process and for drawing with them in the
compositing process. I may be running many rendering processes in
parallel, and so will be using a relatively large number of textures
across processes.

So my assumption has been that using this texture_from_pixmap
approach is the best way to go. Suggestions for alternative ways to
accomplish my goals are very welcome.

- I know that my graphics card is a bit on the elderly side. I'd be
happy to hear that my card is the problem. I'm porting from another
platform and using a cobbled-together Linux box for testing. When I
get things working, I'll buy a spiffy new PCI Express machine and
card (or, actually, several of each!).

Kwin

Attachments:
http://g-speak.com/nvidia/nvidia-bug-report.log
http://g-speak.com/nvidia/pixbuff-destroy-test.c

AaronP 12-10-06 07:10 PM

Re: GLX Pixmap destroy issue
 
A coworker recently explained this to me. He said that the intent of the GLX spec is that, as with GLXWindows, only one GLXPixmap should be allowed to be created for a given Pixmap, and that subsequent allocations should fail with BadAlloc. However, a bug in the driver allows multiple GLXPixmaps to be created, which confuses the deallocation code, which expects there to be only one. The second allocation will fail in future driver releases.
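In practice that means client code should treat a second glXCreatePixmap on the same Pixmap as fallible; a sketch of catching the coming BadAlloc with a temporary X error handler (illustrative names, not driver API):

#include <X11/Xlib.h>
#include <GL/glx.h>

static int saw_bad_alloc;

static int count_bad_alloc(Display *dpy, XErrorEvent *err)
{
    if (err->error_code == BadAlloc)
        saw_bad_alloc = 1;
    return 0;
}

/* Returns None if the server refused another GLXPixmap for this Pixmap. */
GLXPixmap try_wrap_pixmap(Display *dpy, GLXFBConfig config, Pixmap xpix)
{
    const int attrs[] = { None };
    int (*old_handler)(Display *, XErrorEvent *);

    saw_bad_alloc = 0;
    old_handler = XSetErrorHandler(count_bad_alloc);

    GLXPixmap p = glXCreatePixmap(dpy, config, xpix, attrs);
    XSync(dpy, False);              /* force any error to arrive now */

    XSetErrorHandler(old_handler);
    return saw_bad_alloc ? None : p;
}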

kwindla 12-11-06 09:40 AM

Re: GLX Pixmap destroy issue
 
Hi Aaron,

Thanks for your explanation. Of course, I have a follow-up!

I must be missing something, but my reading of the texture_from_pixmap spec made me think that extension was designed to enable exactly what I'm trying to do: "share" an X Pixmap between two processes (one writing and one reading) by wrapping that X Pixmap in two different GLX Pixmaps.

If I can't do that, how should I share textures across processes? I need direct rendering both when drawing to the texture and when using the texture as an element in later drawing.

Kwin

jamesjones 12-11-06 04:35 PM

Re: GLX Pixmap destroy issue
 
Hi kwindla,

Aaron asked me to comment here. I'm the other employee he was referring to :-)

First of all, this is not what texture_from_pixmap was intended for at all. When you wish to knowingly render to a texture with OpenGL, the correct solution is to use FBO. Search for EXT_framebuffer_object on Google for all the details and specifications.
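For reference, a minimal render-to-texture setup with EXT_framebuffer_object looks roughly like this (a sketch, not production code; on Linux the EXT entry points should be resolved with glXGetProcAddress):

#include <GL/gl.h>
#include <GL/glext.h>

/* Render-to-texture with EXT_framebuffer_object: draw into *fbo_out, then
 * use the returned texture from any context that shares objects with this
 * one (e.g. another thread's context created with a share list). */
GLuint make_render_target(int width, int height, GLuint *fbo_out)
{
    GLuint tex, fbo;

    /* The texture that will receive the rendering. */
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    /* An FBO with that texture as its color attachment. */
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);

    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)
            != GL_FRAMEBUFFER_COMPLETE_EXT)
        return 0;

    /* Draw while the FBO is bound; unbind to render to the window again. */
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    *fbo_out = fbo;
    return tex;
}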

However, this won't work if you want to share the object between processes AND use direct rendering, as you cannot share objects between GLX contexts in different address spaces. For this reason, your idea of using texture sharing with texture_from_pixmap also won't work.

I think the best way to accomplish what you're trying to do is by using multiple threads and the FBO extension rather than multiple separate processes and the texture from pixmap extension. If this is not possible in your situation, could you explain why?

Now, assuming you can't make this work with FBO, there is a way to use a GLX drawable in two different processes. Rather than creating two GLX drawables, just use the same GLX drawable in both processes. You're already sharing the X pixmap ID; sharing the GLX pixmap ID should also work, since X resources are not process-specific. I THINK this is supposed to work even with direct rendering according to the spec, but I wouldn't be surprised if it is broken right now in our driver. I'll look into it, but please try the FBO/multiple-threads approach if at all possible.
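In sketch form (how you hand the XID across processes is up to you; recv_glxpixmap_xid below is a hypothetical placeholder for that IPC):

#include <X11/Xlib.h>
#include <GL/glx.h>

extern GLXPixmap recv_glxpixmap_xid(void);  /* hypothetical IPC hand-off */

void attach_shared_drawable(Display *dpy, GLXContext ctx)
{
    /* Reuse the renderer's GLXPixmap XID; do NOT call glXCreatePixmap on
     * the shared X Pixmap a second time. */
    GLXPixmap shared = recv_glxpixmap_xid();

    if (!glXMakeCurrent(dpy, shared, ctx)) {
        /* Handle failure; this is the path that may be broken today. */
    }
}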

kwindla 12-11-06 06:04 PM

Re: GLX Pixmap destroy issue
 
Hi James,

Thank you very much for your detailed response.

I appreciate the suggestion to use FBOs -- I started down that path originally. Unfortunately, I need true process separation (threads aren't enough).

I have long-running "application" processes that need to be fully independent, each of which renders output to one or more textures. These textures are then drawn to the screen by yet another process. I'm doing more or less what a window manager does (but with a lot of fancy GL in both the app and window-manager processes), but using as little X infrastructure as possible. (All of this is in the context of high-end, multi-machine simulation and visualization frameworks. We move a lot of pixels around, try for high framerates, and grovel through a lot of data every frame. We also have to build everything in very defined, modular ways -- hence the "apps and window manager" architecture.)

I did also try sharing GLX Pixmaps across processes, but passing the GLX Pixmap XID from one process into glXMakeCurrent() in another gave me a Bad Drawable error. I could well have done something else wrong, though, so I'll try again.

Again, thank you for your advice. Your help is much appreciated. I must admit that I'm a little surprised that sharing GL resources across processes is so difficult <laugh>. I'm relatively new to PC graphics programming (I come from a big-systems development background), and it's very tempting to design multi-process/distributed systems that take advantage of the really impressive power in the new generation of graphics cards.

Yours,
Kwin

jamesjones 12-11-06 09:17 PM

Re: GLX Pixmap destroy issue
 
kwindla,

After reviewing the spec further, it looks like all GLX objects are covered by the address-space rule, not just share lists. This means my suggestion was incorrect: it is not possible to share a GLX drawable among direct rendering contexts in different processes. Please review section 2.3 of the GLX 1.4 spec for details.

If you cannot rework your application to use FBO, you'll have to add a copy step: retrieve the contents of the pixmap with X commands, then download it to a texture in the compositing process. The download and upload should be accelerated, so assuming your bottleneck is the rendering itself, this should have only a minor effect on performance.
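Such a copy step could look roughly like this (a sketch assuming a 32-bpp ZPixmap whose byte layout matches GL_BGRA / GL_UNSIGNED_INT_8_8_8_8_REV, which is typical for little-endian 24/32-bit visuals; verify against yours):

#include <X11/Xlib.h>
#include <GL/gl.h>
#include <GL/glext.h>

/* Pull the pixmap's contents back through the X protocol and upload them
 * into an already-allocated width x height RGBA texture. */
void copy_pixmap_to_texture(Display *dpy, Pixmap xpix, GLuint tex,
                            unsigned int width, unsigned int height)
{
    XImage *img = XGetImage(dpy, xpix, 0, 0, width, height,
                            AllPlanes, ZPixmap);
    if (!img)
        return;

    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, img->data);

    XDestroyImage(img);
}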

Alternatively, you could make your compositing process an actual composite manager and have your rendering apps render to redirected GLX windows instead of GLX pixmaps. This would probably only be practical if the machine is dedicated to running this one application.
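The redirection itself is only a couple of Composite extension calls (a sketch; requires libXcomposite, and note that the named pixmap must be re-fetched if the window is resized):

#include <X11/Xlib.h>
#include <X11/extensions/Xcomposite.h>

/* Keep app_win's contents in an offscreen pixmap and get an XID for it;
 * that Pixmap can then be wrapped with glXCreatePixmap and bound with
 * glXBindTexImageEXT, as in a normal composite manager. */
Pixmap redirect_and_name(Display *dpy, Window app_win)
{
    XCompositeRedirectWindow(dpy, app_win, CompositeRedirectManual);
    return XCompositeNameWindowPixmap(dpy, app_win);
}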

If the above solutions are not sufficient, I encourage you to get in touch with our developer relations department by becoming a registered developer on the NVIDIA web site. They can work with you to optimize your application's flow to get the best performance on our driver.

tmack 08-06-12 12:14 AM

Re: GLX Pixmap destroy issue
 
I'm trying to do something similar to what the OP described.

Can you use something like XCopyArea to copy the underlying pixmaps between processes or do you need to actually read back the remote pixmap data and then upload it to a texture in the local application?

Sorry about necroing this thread.

