We're running a 3-node 3000G graphics cluster with frame lock and swap lock enabled. We've gotten everything running pretty well except for one last annoyance: occasionally the center channel will get ahead of the left and right channels for a frame or two. You really only see it if you're looking really closely (as we are, of course :)
Anyway, these graphics nodes are all connected by gigabit Ethernet on their own switch, but despite that, we figure the viewpoint position we're rendering into the buffer differs between the nodes.
We tried adding a sort of software-based lock on the rendered position as well, but that killed our framerate pretty fast :)
Short of buying reflective memory cards to ensure that all nodes have the exact same position all the time, has anyone run into this problem and solved it?
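To make the idea concrete, here's a minimal sketch of the kind of per-frame pose broadcast we mean (the names `PosePacket`, `packPose`, and `unpackPose` are just illustrative, not anything from OSG): the master stamps its frame counter and view matrix into a small buffer each frame and broadcasts it over UDP; a slave only draws a frame once it holds the packet whose frame number matches its upcoming swap.

```cpp
#include <cstdint>
#include <cstring>
#include <cstddef>

// Hypothetical per-frame sync packet: frame counter plus the 4x4 view
// matrix. The master fills one of these right before cull/draw and
// broadcasts it; slaves apply the matrix for the matching frame number.
struct PosePacket {
    uint32_t frame;     // master's frame counter
    double   view[16];  // row-major 4x4 view matrix
};

// Serialize into a wire buffer. This assumes identical endianness and
// double layout across the cluster, which holds for an all-Xeon rack.
size_t packPose(const PosePacket& p, unsigned char* buf) {
    std::memcpy(buf, &p.frame, sizeof p.frame);
    std::memcpy(buf + sizeof p.frame, p.view, sizeof p.view);
    return sizeof p.frame + sizeof p.view;
}

// Deserialize from the wire buffer; returns the number of bytes consumed.
size_t unpackPose(const unsigned char* buf, PosePacket& p) {
    std::memcpy(&p.frame, buf, sizeof p.frame);
    std::memcpy(p.view, buf + sizeof p.frame, sizeof p.view);
    return sizeof p.frame + sizeof p.view;
}
```

The packet is only 132 bytes, so one UDP broadcast per frame is nothing on a dedicated gigabit switch; the key point is tagging the pose with a frame number so a slave can detect (and wait out) a stale packet instead of rendering one frame behind.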
The application is the out the window scene for a flight sim.
Other useful info:
We're using OpenSceneGraph and DDS textures. All computers with 3000G cards are dual-processor Xeons (hyperthreading is off, obviously), but we're currently running on one CPU at NVIDIA's request while we wait for the next driver release (yes, it did help). They're running Red Hat 8.0, but with a vanilla 2.4.26 kernel. And last but not least, NVIDIA driver version 5341.
Any help would be greatly appreciated!!!
Re: 3000G issues...
Sorry to bump this, but we're still having the problem and haven't heard anything yet...
Re: 3000G issues...
This is the kind of question that should get answered here, rather than which readme to read. :) Unfortunately I can't help, so I'll just bump it for you.
Two similar threads, although you've already got these initial issues figured out:
(It might help to edit your topic title with a bit more detail, to increase exposure to people with this kind of background.)
Copyright ©1998 - 2014, nV News.