ScoobyDoo - 11-10-02, 07:50 AM (post #7)

Well, yes, Windows does a lot of crap, but at the very least it can run my app smooth as silk in < 3% CPU. The proof of the pudding is in the eating.

Well, I don't want to get too much into why I need OpenGL, but it is the technology of choice as far as I am concerned. Modern windowing systems make use of what Apple would call a compositing layer, and from what I have read, Apple expects Microsoft to move to a similar approach (DirectX, no doubt) for their windowing system over the next five years. As usual, Linux/FreeBSD, and X11 for that matter, will be way behind, because most of their coders are kernel hackers who don't grasp these concepts or the need for certain driver features.

I pasted the poll() code out of order. In my application I open /dev/nvidia0 in a separate function, and I pasted it in the wrong place in the original post. The code I have once worked; reread that, I mentioned it several times. In fact I had a reply from Nvidia today, and they said:

Quote:
Yes, at one point the driver did poll rather than busy wait, though there were various problems with this (there were some scheduling issues, if I remember correctly). Nvidia.
So this is not a problem with the poll code I was using.

The guy from Nvidia didn't explicitly say what the scheduling problem was, but I heard elsewhere that it could often miss vsyncs, as the Linux scheduler could not get back to the application in time (in < 1/100th of a second).
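
For reference, here is a minimal sketch of the kind of poll()-based wait I am describing. Whether the driver flags vblank readiness as POLLIN or POLLPRI on /dev/nvidia0 is an assumption on my part, so treat it as an outline rather than working driver code:

Code:
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <poll.h>

int main(void)
{
    struct pollfd pfd;
    int fd, ret;

    fd = open("/dev/nvidia0", O_RDONLY);
    if (fd < 0) {
        perror("open /dev/nvidia0");
        return 1;
    }

    pfd.fd = fd;
    pfd.events = POLLIN | POLLPRI;

    /* Block until the driver reports an event (ideally the vertical
     * retrace) instead of busy waiting and burning CPU. */
    ret = poll(&pfd, 1, 1000 /* ms timeout */);
    if (ret < 0)
        perror("poll");
    else if (ret == 0)
        printf("poll timed out - no event within 1000 ms\n");
    else
        printf("poll returned, revents = 0x%x\n", pfd.revents);

    close(fd);
    return 0;
}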

Quote:
As it should have!
No, no, no. It shouldn't! This is my whole point. It should still run at full speed with only a couple of percent of CPU time. This isn't a difficult concept to grasp, now is it? If the drivers were perfect, glxgears should have used 3% of the CPU with that nice value, the a.out program should have had 97%, and glxgears should still have gotten priority. Anything else shows the margin of error (about 90% wasted CPU time).
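
For anyone wanting to reproduce the test, the competing a.out is nothing more than a trivial CPU hog; mine was along these lines (a sketch, not my exact program):

Code:
int main(void)
{
    volatile unsigned long counter = 0;

    /* Spin forever, soaking up whatever CPU time the scheduler hands
     * out.  With glxgears given priority, this loop should only get
     * the leftovers. */
    for (;;)
        counter++;

    return 0; /* never reached */
}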

Any option using usleep() is prone to problems, as this call is very much at the mercy of the Linux scheduler. The average latency of the Linux kernel scheduler is 10,000 microseconds, and it can be as high as 100,000 microseconds under load. So if you call usleep() there is a good chance the scheduler never gets back to you in time (60 Hz == 16,667 microseconds) and you will miss the vsync (which is why Nvidia didn't slip a usleep() into their glXSwapBuffers() function themselves).
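
You can see the problem for yourself with a small test that asks usleep() for 1 millisecond and measures what it actually gets. On a kernel with a 100 Hz timer I would expect this to print something in the region of 10,000 to 20,000 microseconds, though the exact figures will vary by machine and load:

Code:
#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>

int main(void)
{
    struct timeval before, after;
    long elapsed_us;

    gettimeofday(&before, NULL);
    usleep(1000);                 /* request a 1 ms sleep */
    gettimeofday(&after, NULL);

    elapsed_us = (after.tv_sec - before.tv_sec) * 1000000L
               + (after.tv_usec - before.tv_usec);

    /* Anything over 16,667 us means a 60 Hz vsync has already been
     * missed. */
    printf("asked for 1000 us, slept for %ld us\n", elapsed_us);
    return 0;
}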

I was encouraged by Nvidia's response, however! And I am thankful they took the time to read and reply to my email (the first portion of this thread). They replied:

Quote:
Consider your request received; we'll investigate better solutions
for a future release.
Regards,

Jamie.