07-20-10, 03:14 PM   #15
kRogue
Registered User
 
Join Date: Jan 2005
Posts: 19
Re: is the nvidia driver falling behind?

Quote:
Might be that other DEs do not use advanced graphics, and these advanced operations are poorly implemented by the driver.
GNOME+COMPIZ looks pretty snazzy: wobbly windows (if you go for that kind of thing), transparency, etc., and it performs well. My brutal opinion is that it comes down to Qt. Try to find a Linux GUI app/environment that does not use Qt and still performs badly. Firefox's pixmap issue is covered below, but funny thing, Chrome seems plenty fast.

Quote:
Are you referring to the GL backends that are still marked as experimental and which the folks at Qt strongly discourage people from using?
Where in the Qt docs is it marked "don't use"? You will use the GL backends whenever you make a QGLWidget and draw to it via QPainter, or for that matter use a QGLFramebufferObject; then you are using the GL (usually GL2) backend. The GL backends have been around since QGLWidget, which has been around for a very, very long time.

To give you an idea of how bad the GL backends are and have been, let's talk about clipping. In Qt 4.5.x (both the GL1 and GL2 backends) and in the GL1 backend of every Qt version, clipping is implemented, and this is rich, as an array of screen-aligned rectangles. Worse, they used the depth buffer to do the clipping (essentially reducing a 24-bit buffer to 1 bit). Even worse, to set the clipping, Qt draws the rectangles not by correctly setting the projection matrices and the depth test, but as, cough, a glClear for each rectangle. As soon as you have something rotated that has clipping, we are talking hundreds of glClear's. No one with a single clue about GL would do that. The more recent GL2 backend (and only GL2), starting in Qt 4.6, correctly uses the stencil buffer and "calculates the clipping zone" via stencil test and ops on the GPU, but that is how it should have been done back when the GL1 backend was written (clipping via stencil test/op has worked just fine since the TNT)... and getting into rotated text... sighs... really ugly and horribly slow.
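For the curious, here is a minimal hand-written sketch of what stencil-based clipping looks like in raw GL. This is my own illustration, not Qt's code; drawClipShape() and drawScene() just stand in for whatever geometry the caller has.

Code:
#include <GL/gl.h>

// Mark an arbitrary clip region in the stencil buffer once, then let the GPU
// reject fragments outside it. Works on anything since the TNT.
void drawClipShape();   // placeholder: draws the clip region geometry
void drawScene();       // placeholder: draws the actual content

void drawClipped()
{
    glEnable(GL_STENCIL_TEST);

    // Pass 1: write 1 into the stencil buffer wherever the clip shape covers,
    // without touching the color buffer. One clear, not one per rectangle.
    glClearStencil(0);
    glClear(GL_STENCIL_BUFFER_BIT);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    drawClipShape();    // can be rotated, curved, whatever

    // Pass 2: draw the real content; only fragments inside the region pass.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawScene();

    glDisable(GL_STENCIL_TEST);
}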

Since I am going into Qt's failings anyway, and I have to deal with them so much and so often, let me share some more, which is moderately off topic now:

Qt has some wrappers over the GL API. Let me list some of their failings:
(1) The buffer object wrapper has no support for GL3-style flags when mapping; also, where to bind a buffer object is handled as an enum, and last time I checked they were missing all the GL3-specific enums.

(2) EGL. Qt managed to make a mess of this. When you build Qt you must choose "which" GL Qt will use: GL, GL ES1 or GL ES2. Under EGL, one can create, within the same program, GL, GL ES1 and GL ES2 contexts, and even have them share data (a minimal sketch of this follows the list). Qt lost this functionality, and there are plenty of devices on the market now that support both GL ES1 and GL ES2.

(3) QGLFramebufferObject has had a bug since forever, in GL ES (1 and 2) environments, in getting a stencil buffer; the reason the bug is still there is that they are too lazy to take the two days to test the fixes. Let's not even talk about MRT for that matter, OK?

(4) The last time I looked into their shader "pipeline" they had munged it: most filtering algorithms require fetching neighboring texels, and the correct way to do this is to specify all the texture coordinates in the vertex shader (see the sketch after this list). Their infrastructure does not support this, so you need to calculate the coordinates in the fragment shader, which is not only worse performance-wise; once we get into non-orthogonal projections the calculation itself gets worse.

(5) For QML, the performance of Qt's GL backend was so bad that drawing QML elements is handled as follows: content is drawn to a QImage (or QPixmap), which is then uploaded to a GL texture and drawn... dynamic QML content then murders bandwidth (the worst sinner is animated text). Can you imagine what happens on portable devices?

(6) Rotated text is dog slow and looks horrible with a GL backend unless the QGLWidget (or FBO) is created with MSAA; with MSAA it looks better but is even slower, and MSAA is heck of expensive. The horrible thing is that drawing rotated text is a joke in GL, and keeping it reasonably anti-aliased is too: just blit the FreeType-rendered glyphs as a texture... but not Qt... they got it into their heads that text must be rendered as "a path" whenever it is rotated, which is ugly and slow nine times out of ten... even if the rotation is 90 degrees.
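
To make (2) and (4) concrete, here are two hand-written sketches. They are mine, not Qt's code, and all the names are made up. First, EGL happily hands one process both a GL ES1 and a GL ES2 context on the same display (error checking omitted):

Code:
#include <EGL/egl.h>

// Sketch of point (2): nothing in EGL stops a single program from creating
// both a GL ES1 and a GL ES2 context, and even sharing data between them.
void createBothContexts()
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, 0, 0);
    eglBindAPI(EGL_OPENGL_ES_API);

    const EGLint cfg_attribs[] = {
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES_BIT | EGL_OPENGL_ES2_BIT,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint n;
    eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &n);

    const EGLint es1_attribs[] = { EGL_CONTEXT_CLIENT_VERSION, 1, EGL_NONE };
    const EGLint es2_attribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext es1 = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, es1_attribs);
    EGLContext es2 = eglCreateContext(dpy, cfg, es1, es2_attribs); // shares with es1
    (void)es2;
}

And the vertex-shader side of (4), a hypothetical 3-tap filter that computes the neighboring texture coordinates once per vertex and lets interpolation carry them, instead of recomputing them in the fragment shader:

Code:
// Sketch of point (4): precompute the neighbor texel coordinates per vertex.
static const char *filter_vertex_shader =
    "attribute vec4 a_position;\n"
    "attribute vec2 a_texcoord;\n"
    "uniform vec2 u_texel;        // 1.0 / texture dimensions\n"
    "varying vec2 v_tex[3];\n"
    "void main()\n"
    "{\n"
    "    gl_Position = a_position;\n"
    "    v_tex[0] = a_texcoord - vec2(u_texel.x, 0.0);\n"
    "    v_tex[1] = a_texcoord;\n"
    "    v_tex[2] = a_texcoord + vec2(u_texel.x, 0.0);\n"
    "}\n";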

And there are more! But enough is enough, I think.

Quote:
Except that the problems only occur on Nvidia drivers, and they are not unique to KDE or Qt (Firefox has been having a lot of the same problems on Nvidia with Linux as KDE/Qt).
The Firefox issue is really a Firefox bug, and here is why: an application can query how big a pixmap can be and still be hardware accelerated. For webpages with lots of pixel area, Firefox renders them to one big pixmap, ignoring what the hardware tells it: make it too big and you go down the slow path.
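I do not remember off-hand which exact query Firefox is ignoring, but the idea is the same as asking GL for its limits and backing off when the content is bigger; a sketch by analogy only, with made-up helper names:

Code:
#include <GL/gl.h>

// Analogy: ask the implementation for its limit, then tile or take the slow
// path when the page is bigger, instead of pretending the limit is not there.
void render_in_tiles(int w, int h, int tile);   // hypothetical helper
void render_whole(int w, int h);                // hypothetical helper

void renderPage(int page_width, int page_height)
{
    GLint max_size = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_size);

    if (page_width > max_size || page_height > max_size)
        render_in_tiles(page_width, page_height, max_size); // stay on the fast path
    else
        render_whole(page_width, page_height);
}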

There is something people need to realize: on Linux, NVIDIA has by far the best GL support of anyone in their driver. None of the open source drivers come even close to a fraction of what NVIDIA offers there. In spite of the fact that ATI/AMD released the majority of the hardware specs for their cards, the open source drivers for that hardware give horribly underwhelming GL... the proprietary GL drivers for ATI/AMD have come a long way since AMD bought ATI, but they are still no match for NVIDIA. Don't get me started on the crappiness of DRI and DRI2 either. Just so you know, the NVIDIA driver bypasses them completely, and for a really good reason; the why is technical and I am spitting enough bile already.

Quote:
Except the windows 7 interface, if I recall correctly, supports user-level settings that are remembers across sessions (as do standard xrandr GUI's in Linux). That means you can set your display configuration for each user independently without administrator access and still have them when you reboot.
I have had nothing but grief from Windows 7's interface for this laptop... I want to clone between the laptop panel and a monitor; the Windows 7 interface is a pain, the Linux interface is heck-a-easy. Windows attempts to auto-guess the correct thing, and for me a great deal of the time it is the wrong thing. I don't even want to get into what happens when I hook up VR goggles. There is a way to do per-user settings with nvidia-settings without needing root access, shown below.
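The one-liner, assuming you have already saved a configuration (nvidia-settings writes it to ~/.nvidia-settings-rc); running it from your session's autostart script at login is one reasonable place for it:

Code:
# load the per-user display settings, no root needed
nvidia-settings --config=~/.nvidia-settings-rc --load-config-only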

Edit: while I was typing my bile, it looks like lots of people chimed in with the same wisdom about nvidia-settings.