Old 12-10-06, 07:12 PM   #1
kwindla
Registered User
 
Join Date: Dec 2006
Posts: 6
Default GLX Pixmap destroy issue

Hi,

I'm using GLX Pixmaps and the texture_from_pixmap extension to render
textures in one process and use them from another. What I'm doing is
similar to what folks writing compositors are doing, except that I'm
not concerned about drawing from X Windows, only about textures I
create myself across multiple processes using GL.

I've got two -- I think related -- problems that appear to me to be
driver bugs:

1) When I "wrap" an X Pixmap in two different GLX Pixmaps, destroying
either GLX Pixmap turns the other one into a Bad Drawable and renders
the underlying X Pixmap unusable (a minimal sketch of the failing
sequence follows after item 2 below). This is almost a show-stopper
for me, although it's just barely possible to work around with a lot
of interprocess communication <laugh>.

2) A memory leak is created in the X server when a GLX Pixmap that
"shares" an underlying X Pixmap is destroyed -- either explicitly or
when its process terminates. This is a show-stopper, as I haven't
been able to figure out a work-around (other than re-starting the X
server <laugh>).
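
In code, the failing sequence looks roughly like this (a hedged
sketch only: error checking and the FBConfig/attribute setup are
omitted, and the variable names are illustrative):

    /* One X Pixmap wrapped by two GLX Pixmaps. */
    Pixmap xpix = XCreatePixmap(dpy, DefaultRootWindow(dpy), 256, 256,
                                DefaultDepth(dpy, DefaultScreen(dpy)));
    GLXPixmap glxpix_a = glXCreatePixmap(dpy, fbconfig, xpix, attribs);
    GLXPixmap glxpix_b = glXCreatePixmap(dpy, fbconfig, xpix, attribs);

    /* Problem 1: after this call, glxpix_b (and xpix itself) become
     * unusable -- any further use gives Bad Drawable. */
    glXDestroyPixmap(dpy, glxpix_a);

    /* Problem 2: destroying the remaining wrapper (or letting the
     * process exit) leaves memory behind in the X server. */
    glXDestroyPixmap(dpy, glxpix_b);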

I'm attaching a stripped-down, single-process test case that
demonstrates both these issues (at least on my GeForce 6600 with
1.0-9631).

Some further notes ...

- My core work-flow looks like so (a rough code sketch of both sides
  follows this list):

  Rendering Process:
    - create an X Pixmap
    - create a GL Context and a GLX Pixmap
    - make Context/GLX Pixmap current
    - draw a bunch of stuff
    - [... loop, drawing periodically]

  Compositing Process:
    - create an X Window
    - create a GL Context
    - create a GLX Pixmap that wraps an X Pixmap created by a
      rendering process
    - make Context/Window current
    - call glXBindTexImageEXT to use the X Pixmap as a texture
    - draw that texture to the Window
    - [... loop, drawing that and other textures]
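
In code, the two sides look roughly like this (a hedged sketch: error
handling, choosing an FBConfig with GLX_BIND_TO_TEXTURE_RGB_EXT, and
resolving the *_EXT entry points via glXGetProcAddress are all
omitted, and how the Pixmap XID travels between the processes is left
open):

    /* Rendering process: draw into a GLX Pixmap that wraps an X Pixmap. */
    static const int pixmap_attribs[] = {
        GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
        GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGB_EXT,
        None
    };

    Pixmap xpix = XCreatePixmap(dpy, DefaultRootWindow(dpy), width, height,
                                DefaultDepth(dpy, DefaultScreen(dpy)));
    GLXPixmap glxpix = glXCreatePixmap(dpy, fbconfig, xpix, pixmap_attribs);
    GLXContext ctx = glXCreateNewContext(dpy, fbconfig, GLX_RGBA_TYPE, NULL, True);

    glXMakeCurrent(dpy, glxpix, ctx);
    /* ... draw a bunch of stuff ... */
    glFinish();   /* make sure the drawing has landed in the pixmap */
    /* ... hand the xpix XID to the compositing process somehow ... */

    /* Compositing process: wrap the shared X Pixmap and bind it as a
     * texture. 'shared_xpix' is the XID received from a rendering process. */
    GLXPixmap comp_glxpix = glXCreatePixmap(dpy, fbconfig, shared_xpix,
                                            pixmap_attribs);
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); /* no mipmaps */
    glXBindTexImageEXT(dpy, comp_glxpix, GLX_FRONT_LEFT_EXT, NULL);
    /* ... draw a quad textured with 'tex' into the window ... */
    glXReleaseTexImageEXT(dpy, comp_glxpix, GLX_FRONT_LEFT_EXT);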



- I need the speed of direct rendering both for creating the textures
in the rendering process and for drawing with them in the
compositing process. I may be running many rendering processes in
parallel, and so using a relatively large number of textures across
processes.

So my assumption has been that using this texture_from_pixmap
approach is the best way to go. Suggestions for alternative ways to
accomplish my goals are very welcome.

- I know that my graphics card is a bit on the elderly side. I'd be
happy to hear that my card is the problem. I'm porting from another
platform and using a cobbled-together Linux box for testing. When I
get things working, I'll buy a spiffy new PCI-X machine and card
(or, actually, several of each!).

Kwin

Attachments:


http://g-speak.com/nvidia/nvidia-bug-report.log
http://g-speak.com/nvidia/pixbuff-destroy-test.c
Old 12-10-06, 08:10 PM   #2
AaronP
NVIDIA Corporation
 
 
Join Date: Mar 2005
Posts: 2,487
Default Re: GLX Pixmap destroy issue

A coworker recently explained this to me. He said that the intent of the GLX spec is that, like GLXWindows, only one GLXPixmap should be allowed to be created for a given Pixmap, and that subsequent allocations should fail with BadAlloc. However, a bug in the driver allows multiple GLXPixmaps to be created, which confuses the dealloc code, which expects there to be only one. The second allocation will fail in future driver releases.
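
In other words (an illustrative fragment only; 'fbconfig' and 'attribs' stand in for whatever setup your code already does), the second call here is the one that will start failing:

    Pixmap xpix = XCreatePixmap(dpy, DefaultRootWindow(dpy), w, h,
                                DefaultDepth(dpy, DefaultScreen(dpy)));
    GLXPixmap a = glXCreatePixmap(dpy, fbconfig, xpix, attribs);  /* OK */
    GLXPixmap b = glXCreatePixmap(dpy, fbconfig, xpix, attribs);  /* will raise BadAlloc */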
Old 12-11-06, 10:40 AM   #3
kwindla
Registered User
 
Join Date: Dec 2006
Posts: 6
Default Re: GLX Pixmap destroy issue

Hi Aaron,

Thanks for your explanation. Of course, I have a follow-up!

I must be missing something, but my reading of the texture_from_pixmap spec made me think that extension was designed to enable exactly what I'm trying to do: "share" an X Pixmap between two processes (one writing and one reading) by wrapping that X Pixmap in two different GLX Pixmaps.

If I can't do that, how should I share textures across processes? I need direct rendering both when drawing to the texture and when using the texture as an element in later drawing.

Kwin
Old 12-11-06, 05:35 PM   #4
jamesjones
NVIDIA Corporation
 
 
Join Date: Feb 2005
Location: San Jose
Posts: 37
Default Re: GLX Pixmap destroy issue

Hi kwindla,

Aaron asked me to comment here. I'm the other employee he was referring to :-)

First of all, this is not what texture_from_pixmap was intended for at all. When you knowingly want to render to a texture with OpenGL, the correct solution is to use FBO. Search for EXT_framebuffer_object on Google for all the details and specifications.
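
A minimal render-to-texture sketch with EXT_framebuffer_object looks something like this (hedged: the EXT entry points have to be resolved with glXGetProcAddress on Linux, and error checking is left out):

    GLuint tex, fbo;

    /* The texture that will receive the rendering. */
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    /* Attach it to a framebuffer object and render into it. */
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);

    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) ==
        GL_FRAMEBUFFER_COMPLETE_EXT) {
        glViewport(0, 0, width, height);
        /* ... draw; the result ends up in 'tex' ... */
    }

    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);  /* back to the window */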

However, this won't work if you want to share the object between processes AND use direct rendering, as you cannot share objects between GLX contexts in different address spaces. For this reason, your idea of using texture sharing with texture from pixmap also won't work.

I think the best way to accomplish what you're trying to do is by using multiple threads and the FBO extension rather than multiple separate processes and the texture from pixmap extension. If this is not possible in your situation, could you explain why?

Now, assuming you can't make this work with FBO, there is a way to use a GLX drawable in two different processes. Rather than create 2 GLX drawables, just use the same GLX drawable in both processes. You're already sharing the X pixmap ID; sharing the GLX pixmap ID should also work. X resources should not be process-specific. I THINK this is supposed to work even with direct rendering according to the spec, but I wouldn't be surprised if it is broken right now in our driver. I'll look into it, but please try the FBO/multiple threads approach if at all possible.
Old 12-11-06, 07:04 PM   #5
kwindla
Registered User
 
Join Date: Dec 2006
Posts: 6
Default Re: GLX Pixmap destroy issue

Hi James,

Thank you very much for your detailed response.

I appreciate the suggestion to use FBOs -- I started down that path originally. Unfortunately, I need true process separation (threads aren't enough).

I have long-running "application" processes that need to be fully independent, each of which renders output to one or more textures. These textures are then drawn to the screen by yet another process. I'm doing more or less what a window manager does (but with a lot of fancy GL in both the app and window-manager processes), but using as little X infrastructure as possible. (All of this is in the context of high-end, multi-machine simulation and visualization frameworks. We move a lot of pixels around, try for high framerates, and grovel through a lot of data every frame. We also have to build everything in very defined, modular ways -- hence the "apps and window manager" architecture.)

I did also try sharing GLX Pixmaps across processes, but passing the GLX Pixmap XID from one process into glXMakeCurrent() in another gave me a Bad Drawable error. I could well have done something else wrong, though, so I'll try again.

Again, thank you for your advice. Your help is much appreciated. I must admit that I'm a little surprised that sharing GL resources across processes is so difficult <laugh>. I'm relatively new to PC graphics programming (I come from a big-systems development background), and it's very tempting to design multi-process/distributed systems that take advantage of the really impressive power in the new generation of graphics cards.

Yours,
Kwin
Old 12-11-06, 10:17 PM   #6
jamesjones
NVIDIA Corporation
 
 
Join Date: Feb 2005
Location: San Jose
Posts: 37
Default Re: GLX Pixmap destroy issue

kwindla,

After reviewing the spec further, it looks like all GLX objects are covered by the address-space rule, not just share lists. This means my suggestion was incorrect: it is not possible to share a GLX drawable among direct rendering contexts in different processes. Please review section 2.3 of the GLX 1.4 spec for details.

If you can not rework your application to use FBO, you'll have to add a copy step to retrieve the contents of the pixmap with X commands, and then download it to a texture in the compositing process. The download and upload should be accelerated, so assuming your bottleneck is the rendering itself, this should have only a minor effect on performance.
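
One way to read that (a hedged sketch in the compositing process, using a server-side XCopyArea rather than a true readback; a readback with XGetImage plus glTexSubImage2D would be the other option; 'shared_xpix', 'local_xpix', 'local_glxpix', and 'tex' are illustrative names, with the local pixmap assumed to match the shared one in size and depth):

    GC gc = XCreateGC(dpy, local_xpix, 0, NULL);

    /* Server-side copy of the latest frame into a pixmap the compositor owns. */
    XCopyArea(dpy, shared_xpix, local_xpix, gc, 0, 0, width, height, 0, 0);
    XSync(dpy, False);  /* make sure the copy has landed before binding */

    /* Bind the local copy as a texture via texture_from_pixmap. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glXBindTexImageEXT(dpy, local_glxpix, GLX_FRONT_LEFT_EXT, NULL);
    /* ... draw with 'tex' ... */
    glXReleaseTexImageEXT(dpy, local_glxpix, GLX_FRONT_LEFT_EXT);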

Alternatively, you could make your compositing process an actual composite manager and have your rendering apps render to redirected GLX windows instead of GLX pixmaps. This would probably only be practical if the machine is dedicated to running this one application.

If the above solutions are not sufficient, I encourage you to get in touch with our developer relations department by becoming a registered developer on the NVIDIA web site. They can work with you to optimize your application flow to get the best performance from our driver.
Old 08-06-12, 01:14 AM   #7
tmack
Registered User
 
Join Date: Nov 2006
Posts: 35
Default Re: GLX Pixmap destroy issue

I'm trying to do something similar to what the OP was doing.

Can you use something like XCopyArea to copy the underlying pixmaps between processes or do you need to actually read back the remote pixmap data and then upload it to a texture in the local application?

Sorry about necroing this thread.