Old 02-22-06, 08:53 AM   #1
SNoiraud
Registered User
 
Join Date: Jun 2005
Posts: 19
Real time kernel and NVIDIA: I'm using the illegal and broken nvidia driver!

Hi,

I'm trying to use a real-time kernel (currently 2.6.15 with the -rt17 patch). I always see the same traces, and I'm sure the problem is the nvidia driver:
it is calling kernel functions from a non-preemptible context:
os_acquire_sema, os_release_sema, rm_set_interrupts ...
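
To show what I mean, here is a minimal sketch (invented names, not the actual nvidia sources) of the pattern all the traces below share: a tasklet bottom half taking and releasing a semaphore. Under the -rt patch, ordinary semaphores become sleeping locks, so doing this from softirq/tasklet context triggers the "scheduling while atomic" and lock-accounting messages:

Code:
/*
 * Illustrative sketch only -- the names are made up, this is not the
 * nvidia driver code.  Written against a 2.6.15-era kernel API.
 */
#include <linux/module.h>
#include <linux/interrupt.h>
#include <asm/semaphore.h>        /* semaphore header in 2.6.15-era kernels */

static DECLARE_MUTEX(demo_sema);  /* semaphore initialised to 1 */

static void demo_isr_bh(unsigned long data)
{
        down(&demo_sema);         /* may sleep under -rt: not allowed in a tasklet */
        /* ... touch shared device state ... */
        up(&demo_sema);
}

static DECLARE_TASKLET(demo_tasklet, demo_isr_bh, 0);

static int __init demo_init(void)
{
        tasklet_schedule(&demo_tasklet);
        return 0;
}

static void __exit demo_exit(void)
{
        tasklet_kill(&demo_tasklet);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");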

On the kernel mailing list the answer was:
"You're using the illegal and broken nvidia driver, and as usual it's doing
stupid things. NVidia customer support is that way ---->"
How can we get real support here?
You seem to be a trash can we throw our questions into!
With a lot of luck we might get an answer, but not a solution!

So this is probably the last question before we switch to ATI.

The traces are:
softirq-tasklet/8[CPU#0]: BUG in debug_deadlocks_up_mutex at kernel/rt.c:842
[<c01047ec>] dump_stack+0x1c/0x20 (20)
[<c0121d98>] __WARN_ON+0x48/0x60 (40)
[<c013b3f1>] debug_deadlocks_up_mutex+0x91/0x120 (28)
[<c013ba30>] release_lock+0x20/0x100 (16)
[<c013c267>] ____up_mutex+0x37/0x1a0 (28)
[<c03cfdf7>] ___up_mutex_nosavestate+0x17/0x20 (20)
[<c013cb9e>] rt_up+0x3e/0x80 (24)
[<f900cdd7>] os_release_sema+0x3b/0x47 [nvidia] (16)
[<f8de1902>] _nv002251rm+0x12/0x18 [nvidia] (32)
[<f8de51bc>] _nv001646rm+0x9c/0xa8 [nvidia] (64)
[<f8de963a>] rm_isr_bh+0x62/0x74 [nvidia] (48)
[<f900a76a>] nv_kern_isr_bh+0x1a/0x1f [nvidia] (24)
[<c0126d37>] __tasklet_action+0x57/0x110 (24)
[<c0126e37>] tasklet_action+0x47/0x60 (28)
[<c0127074>] ksoftirqd+0x124/0x1d0 (32)
[<c0136258>] kthread+0x98/0xd0 (40)
[<c0101359>] kernel_thread_helper+0x5/0xc (1000988700)
---------------------------
| preempt count: 00000003 ]
| 3-level deep critical section nesting:
----------------------------------------
.. [<c013cb82>] .... rt_up+0x22/0x80
.....[<f900cdd7>] .. ( <= os_release_sema+0x3b/0x47 [nvidia])
.. [<c0141dc4>] .... add_preempt_count+0x14/0x20
.....[<c013c37a>] .. ( <= ____up_mutex+0x14a/0x1a0)
.. [<c0141dc4>] .... add_preempt_count+0x14/0x20
.....[<c0121d63>] .. ( <= __WARN_ON+0x13/0x60)

BUG: softirq-tasklet/8: lock count underflow!
[<c01047ec>] dump_stack+0x1c/0x20 (20)
[<c013a41e>] account_mutex_owner_up+0x6e/0xb0 (24)
[<c013c278>] ____up_mutex+0x48/0x1a0 (24)
[<c03cfdf7>] ___up_mutex_nosavestate+0x17/0x20 (20)
[<c013cb9e>] rt_up+0x3e/0x80 (24)
[<f900cdd7>] os_release_sema+0x3b/0x47 [nvidia] (16)
[<f8de1902>] _nv002251rm+0x12/0x18 [nvidia] (32)
[<f8de51bc>] _nv001646rm+0x9c/0xa8 [nvidia] (64)
[<f8de963a>] rm_isr_bh+0x62/0x74 [nvidia] (48)
[<f900a76a>] nv_kern_isr_bh+0x1a/0x1f [nvidia] (24)
[<c0126d37>] __tasklet_action+0x57/0x110 (24)
[<c0126e37>] tasklet_action+0x47/0x60 (28)
[<c0127074>] ksoftirqd+0x124/0x1d0 (32)
[<c0136258>] kthread+0x98/0xd0 (40)
[<c0101359>] kernel_thread_helper+0x5/0xc (1000988700)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [<c013cb82>] .... rt_up+0x22/0x80
.....[<f900cdd7>] .. ( <= os_release_sema+0x3b/0x47 [nvidia])

BUG: softirq-tasklet/8: lock count underflow!
[<c01047ec>] dump_stack+0x1c/0x20 (20)
[<c013a41e>] account_mutex_owner_up+0x6e/0xb0 (24)
[<c013c278>] ____up_mutex+0x48/0x1a0 (24)
[<c03cfdf7>] ___up_mutex_nosavestate+0x17/0x20 (20)
[<c013cb9e>] rt_up+0x3e/0x80 (24)
[<f900cdd7>] os_release_sema+0x3b/0x47 [nvidia] (16)
[<f8de1902>] _nv002251rm+0x12/0x18 [nvidia] (32)
[<f8de51bc>] _nv001646rm+0x9c/0xa8 [nvidia] (64)
[<f8de963a>] rm_isr_bh+0x62/0x74 [nvidia] (48)
[<f900a76a>] nv_kern_isr_bh+0x1a/0x1f [nvidia] (24)
[<c0126d37>] __tasklet_action+0x57/0x110 (24)
[<c0126e37>] tasklet_action+0x47/0x60 (28)
[<c0127074>] ksoftirqd+0x124/0x1d0 (32)
[<c0136258>] kthread+0x98/0xd0 (40)
[<c0101359>] kernel_thread_helper+0x5/0xc (1000988700)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [<c013cb82>] .... rt_up+0x22/0x80
.....[<f900cdd7>] .. ( <= os_release_sema+0x3b/0x47 [nvidia])

BUG: scheduling while atomic: softirq-tasklet/0x00000001/8
caller is schedule+0x43/0x120
[<c01047ec>] dump_stack+0x1c/0x20 (20)
[<c03ce5ff>] __schedule+0x7bf/0xa60 (76)
[<c03ce8e3>] schedule+0x43/0x120 (12)
[<c013bdd3>] ____down_mutex+0x283/0x4b0 (92)
[<c03cfd55>] ___down_mutex+0x15/0x20 (16)
[<c03d041c>] _spin_lock_irqsave+0x1c/0x50 (24)
[<c021886d>] pci_bus_read_config_word+0x2d/0x70 (24)
[<f9008909>] nv_verify_pci_config+0x3a/0xd2 [nvidia] (40)
[<f8de932c>] rm_set_interrupts+0x114/0x144 [nvidia] (48)
[<f900cdc0>] os_release_sema+0x24/0x47 [nvidia] (20)
[<f8de1902>] _nv002251rm+0x12/0x18 [nvidia] (32)
[<f8de51bc>] _nv001646rm+0x9c/0xa8 [nvidia] (64)
[<f8de963a>] rm_isr_bh+0x62/0x74 [nvidia] (48)
[<f900a76a>] nv_kern_isr_bh+0x1a/0x1f [nvidia] (24)
[<c0126d37>] __tasklet_action+0x57/0x110 (24)
[<c0126e37>] tasklet_action+0x47/0x60 (28)
[<c0127074>] ksoftirqd+0x124/0x1d0 (32)
[<c0136258>] kthread+0x98/0xd0 (40)
[<c0101359>] kernel_thread_helper+0x5/0xc (1000988700)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [<c0141dc4>] .... add_preempt_count+0x14/0x20
.....[<f900cdb1>] .. ( <= os_release_sema+0x15/0x47 [nvidia])

BUG: scheduling while atomic: softirq-tasklet/0x00000001/8
caller is schedule+0x43/0x120
[<c01047ec>] dump_stack+0x1c/0x20 (20)
[<c03ce5ff>] __schedule+0x7bf/0xa60 (76)
[<c03ce8e3>] schedule+0x43/0x120 (12)
[<c013bdd3>] ____down_mutex+0x283/0x4b0 (92)
[<c03cfd55>] ___down_mutex+0x15/0x20 (16)
[<c03d041c>] _spin_lock_irqsave+0x1c/0x50 (24)
[<c021886d>] pci_bus_read_config_word+0x2d/0x70 (24)
[<f9008909>] nv_verify_pci_config+0x3a/0xd2 [nvidia] (40)
[<f8de932c>] rm_set_interrupts+0x114/0x144 [nvidia] (48)
[<f900cdc0>] os_release_sema+0x24/0x47 [nvidia] (20)
[<f8de1902>] _nv002251rm+0x12/0x18 [nvidia] (32)
[<f8de51bc>] _nv001646rm+0x9c/0xa8 [nvidia] (64)
[<f8de963a>] rm_isr_bh+0x62/0x74 [nvidia] (48)
[<f900a76a>] nv_kern_isr_bh+0x1a/0x1f [nvidia] (24)
[<c0126d37>] __tasklet_action+0x57/0x110 (24)
[<c0126e37>] tasklet_action+0x47/0x60 (28)
[<c0127074>] ksoftirqd+0x124/0x1d0 (32)
[<c0136258>] kthread+0x98/0xd0 (40)
[<c0101359>] kernel_thread_helper+0x5/0xc (1000988700)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [<c0141dc4>] .... add_preempt_count+0x14/0x20
.....[<f900cdb1>] .. ( <= os_release_sema+0x15/0x47 [nvidia])

BUG: softirq-tasklet/8: lock count underflow!
[<c01047ec>] dump_stack+0x1c/0x20 (20)
[<c013a41e>] account_mutex_owner_up+0x6e/0xb0 (24)
[<c013c278>] ____up_mutex+0x48/0x1a0 (24)
[<c03cfdf7>] ___up_mutex_nosavestate+0x17/0x20 (20)
[<c013cb9e>] rt_up+0x3e/0x80 (24)
[<f900cdd7>] os_release_sema+0x3b/0x47 [nvidia] (16)
[<f8de1902>] _nv002251rm+0x12/0x18 [nvidia] (32)
[<f8de51bc>] _nv001646rm+0x9c/0xa8 [nvidia] (64)
[<f8de963a>] rm_isr_bh+0x62/0x74 [nvidia] (48)
[<f900a76a>] nv_kern_isr_bh+0x1a/0x1f [nvidia] (24)
[<c0126d37>] __tasklet_action+0x57/0x110 (24)
[<c0126e37>] tasklet_action+0x47/0x60 (28)
[<c0127074>] ksoftirqd+0x124/0x1d0 (32)
[<c0136258>] kthread+0x98/0xd0 (40)
[<c0101359>] kernel_thread_helper+0x5/0xc (1000988700)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [<c013cb82>] .... rt_up+0x22/0x80
.....[<f900cdd7>] .. ( <= os_release_sema+0x3b/0x47 [nvidia])

BUG: scheduling while atomic: IRQ 137/0x00000001/6471
caller is schedule+0x43/0x120
[<c01047ec>] dump_stack+0x1c/0x20 (20)
[<c03ce5ff>] __schedule+0x7bf/0xa60 (76)
[<c03ce8e3>] schedule+0x43/0x120 (12)
[<c013bdd3>] ____down_mutex+0x283/0x4b0 (92)
[<c03cfd55>] ___down_mutex+0x15/0x20 (16)
[<c03d041c>] _spin_lock_irqsave+0x1c/0x50 (24)
[<c021886d>] pci_bus_read_config_word+0x2d/0x70 (24)
[<f9008909>] nv_verify_pci_config+0x3a/0xd2 [nvidia] (40)
[<f8de932c>] rm_set_interrupts+0x114/0x144 [nvidia] (48)
[<f900cd7b>] os_cond_acquire_sema+0x3a/0x5b [nvidia] (24)
[<f8de18ea>] _nv002338rm+0x12/0x18 [nvidia] (32)
[<f8de5355>] _nv001650rm+0x18d/0x250 [nvidia] (80)
[<f8de5101>] _nv001647rm+0x51/0x70 [nvidia] (48)
[<f8de95c7>] rm_isr+0x3b/0x4c [nvidia] (48)
[<f900a714>] nv_kern_isr+0x29/0x65 [nvidia] (36)
[<c014b8d3>] handle_IRQ_event+0x73/0x110 (52)
[<c014c779>] thread_edge_irq+0x59/0xf0 (24)
[<c014c846>] do_hardirq+0x36/0x80 (20)
[<c014c98a>] do_irqd+0xfa/0x1c0 (28)
[<c0136258>] kthread+0x98/0xd0 (40)
[<c0101359>] kernel_thread_helper+0x5/0xc (364462108)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [<c0141dc4>] .... add_preempt_count+0x14/0x20
.....[<f900cd59>] .. ( <= os_cond_acquire_sema+0x18/0x5b [nvidia])

BUG: softirq-tasklet/8: lock count underflow!
[<c01047ec>] dump_stack+0x1c/0x20 (20)
[<c013a41e>] account_mutex_owner_up+0x6e/0xb0 (24)
[<c013c278>] ____up_mutex+0x48/0x1a0 (24)
[<c03cfdf7>] ___up_mutex_nosavestate+0x17/0x20 (20)
[<c013cb9e>] rt_up+0x3e/0x80 (24)
[<f900cdd7>] os_release_sema+0x3b/0x47 [nvidia] (16)
[<f8de1902>] _nv002251rm+0x12/0x18 [nvidia] (32)
[<f8de51bc>] _nv001646rm+0x9c/0xa8 [nvidia] (64)
[<f8de963a>] rm_isr_bh+0x62/0x74 [nvidia] (48)
[<f900a76a>] nv_kern_isr_bh+0x1a/0x1f [nvidia] (24)
[<c0126d37>] __tasklet_action+0x57/0x110 (24)
[<c0126e37>] tasklet_action+0x47/0x60 (28)
[<c0127074>] ksoftirqd+0x124/0x1d0 (32)
[<c0136258>] kthread+0x98/0xd0 (40)
[<c0101359>] kernel_thread_helper+0x5/0xc (1000988700)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [<c013cb82>] .... rt_up+0x22/0x80
.....[<f900cdd7>] .. ( <= os_release_sema+0x3b/0x47 [nvidia])

BUG: softirq-tasklet/8: lock count underflow!
[<c01047ec>] dump_stack+0x1c/0x20 (20)
[<c013a41e>] account_mutex_owner_up+0x6e/0xb0 (24)
[<c013c278>] ____up_mutex+0x48/0x1a0 (24)
[<c03cfdf7>] ___up_mutex_nosavestate+0x17/0x20 (20)
[<c013cb9e>] rt_up+0x3e/0x80 (24)
[<f900cdd7>] os_release_sema+0x3b/0x47 [nvidia] (16)
[<f8de1902>] _nv002251rm+0x12/0x18 [nvidia] (32)
[<f8de51bc>] _nv001646rm+0x9c/0xa8 [nvidia] (64)
[<f8de963a>] rm_isr_bh+0x62/0x74 [nvidia] (48)
[<f900a76a>] nv_kern_isr_bh+0x1a/0x1f [nvidia] (24)
[<c0126d37>] __tasklet_action+0x57/0x110 (24)
[<c0126e37>] tasklet_action+0x47/0x60 (28)
[<c0127074>] ksoftirqd+0x124/0x1d0 (32)
[<c0136258>] kthread+0x98/0xd0 (40)
[<c0101359>] kernel_thread_helper+0x5/0xc (1000988700)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [<c013cb82>] .... rt_up+0x22/0x80
.....[<f900cdd7>] .. ( <= os_release_sema+0x3b/0x47 [nvidia])

BUG: softirq-tasklet/8: lock count underflow!
[<c01047ec>] dump_stack+0x1c/0x20 (20)
[<c013a41e>] account_mutex_owner_up+0x6e/0xb0 (24)
[<c013c278>] ____up_mutex+0x48/0x1a0 (24)
[<c03cfdf7>] ___up_mutex_nosavestate+0x17/0x20 (20)
[<c013cb9e>] rt_up+0x3e/0x80 (24)
[<f900cdd7>] os_release_sema+0x3b/0x47 [nvidia] (16)
[<f8de1902>] _nv002251rm+0x12/0x18 [nvidia] (32)
[<f8de51bc>] _nv001646rm+0x9c/0xa8 [nvidia] (64)
[<f8de963a>] rm_isr_bh+0x62/0x74 [nvidia] (48)
[<f900a76a>] nv_kern_isr_bh+0x1a/0x1f [nvidia] (24)
[<c0126d37>] __tasklet_action+0x57/0x110 (24)
[<c0126e37>] tasklet_action+0x47/0x60 (28)
[<c0127074>] ksoftirqd+0x124/0x1d0 (32)
[<c0136258>] kthread+0x98/0xd0 (40)
[<c0101359>] kernel_thread_helper+0x5/0xc (1000988700)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [<c013cb82>] .... rt_up+0x22/0x80
.....[<f900cdd7>] .. ( <= os_release_sema+0x3b/0x47 [nvidia])

BUG: softirq-tasklet/8: lock count underflow!
[<c01047ec>] dump_stack+0x1c/0x20 (20)
[<c013a41e>] account_mutex_owner_up+0x6e/0xb0 (24)
[<c013c278>] ____up_mutex+0x48/0x1a0 (24)
[<c03cfdf7>] ___up_mutex_nosavestate+0x17/0x20 (20)
[<c013cb9e>] rt_up+0x3e/0x80 (24)
[<f900cdd7>] os_release_sema+0x3b/0x47 [nvidia] (16)
[<f8de1902>] _nv002251rm+0x12/0x18 [nvidia] (32)
[<f8de51bc>] _nv001646rm+0x9c/0xa8 [nvidia] (64)
[<f8de963a>] rm_isr_bh+0x62/0x74 [nvidia] (48)
[<f900a76a>] nv_kern_isr_bh+0x1a/0x1f [nvidia] (24)
[<c0126d37>] __tasklet_action+0x57/0x110 (24)
[<c0126e37>] tasklet_action+0x47/0x60 (28)
[<c0127074>] ksoftirqd+0x124/0x1d0 (32)
[<c0136258>] kthread+0x98/0xd0 (40)
[<c0101359>] kernel_thread_helper+0x5/0xc (1000988700)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [<c013cb82>] .... rt_up+0x22/0x80
.....[<f900cdd7>] .. ( <= os_release_sema+0x3b/0x47 [nvidia])

BUG: softirq-tasklet/8: lock count underflow!
[<c01047ec>] dump_stack+0x1c/0x20 (20)
[<c013a41e>] account_mutex_owner_up+0x6e/0xb0 (24)
[<c013c278>] ____up_mutex+0x48/0x1a0 (24)
[<c03cfdf7>] ___up_mutex_nosavestate+0x17/0x20 (20)
[<c013cb9e>] rt_up+0x3e/0x80 (24)
[<f900cdd7>] os_release_sema+0x3b/0x47 [nvidia] (16)
[<f8de1902>] _nv002251rm+0x12/0x18 [nvidia] (32)
[<f8de51bc>] _nv001646rm+0x9c/0xa8 [nvidia] (64)
[<f8de963a>] rm_isr_bh+0x62/0x74 [nvidia] (48)
[<f900a76a>] nv_kern_isr_bh+0x1a/0x1f [nvidia] (24)
[<c0126d37>] __tasklet_action+0x57/0x110 (24)
[<c0126e37>] tasklet_action+0x47/0x60 (28)
[<c0127074>] ksoftirqd+0x124/0x1d0 (32)
[<c0136258>] kthread+0x98/0xd0 (40)
[<c0101359>] kernel_thread_helper+0x5/0xc (1000988700)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [<c013cb82>] .... rt_up+0x22/0x80
.....[<f900cdd7>] .. ( <= os_release_sema+0x3b/0x47 [nvidia])

... Message too long: I cut the rest here

| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [<c0141dc4>] .... add_preempt_count+0x14/0x20
.....[<f900ccf9>] .. ( <= os_acquire_sema+0x18/0x60 [nvidia])
================== Here the system freezes =====================
Old 02-22-06, 09:10 AM   #2
JaXXoN
Registered User
 
Join Date: Jul 2005
Location: Munich
Posts: 910
Re: Real time kernel and NVIDIA: I'm using the illegal and broken nvidia driver!

Quote:
Originally Posted by SNoiraud
os_acquire_sema, os_release_sema, rm_set_interrupts ...
Hi!

Did you check the following URL and try the patches posted there?
http://www.nvnews.net/vbulletin/showthread.php?t=60619

The patches are for 1.0-7676 and won't apply directly to 1.0-8178.

Because of general problems with kernels > 2.6.13.5 on my specific
setup, I haven't yet dived deeply into the kernel glue sources of 1.0-8178,
but they look pretty similar to the old driver's.

Concerning ATI: have you tried out -rt with ATI hardware/software yet?
I'm just curious whether that works any better. Unfortunately, ATI doesn't yet
support their latest X1000 family under Linux. From what I know,
the X850 is the fastest chip you can get for now.

If FPGA cards weren't so expensive, it would be
interesting to port Mesa to VHDL or Verilog as a free alternative
to both ATI and nvidia :-) Just kidding!

regards

Bernhard
Old 02-22-06, 10:44 AM   #3
SNoiraud
Registered User
 
Join Date: Jun 2005
Posts: 19
Re: Real time kernel and NVIDIA: I'm using the illegal and broken nvidia driver!

Quote:
Originally Posted by JaXXoN
Hi!

Did you check the following URL and try the patches posted there?
http://www.nvnews.net/vbulletin/showthread.php?t=60619

The patches are for 1.0-7676 and won't apply directly to 1.0-8178.

...

regards

Bernhard
I didn't know about that one; I had used an older patch for -rt4.
The patch applies fine to 8178 + U012206.
Tested on 2.6.15-rt17: the result is better. The system no longer freezes, but I still get the following messages.

When installing the module:
PCI: Setting latency timer of device 0000:40:00.0 to 64
NVRM: loading NVIDIA Linux x86 NVIDIA Kernel Module 1.0-8178 Wed Dec 14 16:22:51 PST 2005
BUG: nonzero lock count 1 at exit time?
modprobe: 1280 [f7dee750, 120]
[<c01047ec>] dump_stack+0x1c/0x20 (20)
[<c013affe>] check_no_held_locks+0x23e/0x330 (48)
[<c0123cb7>] do_exit+0x2a7/0x4d0 (40)
[<c0123f77>] do_group_exit+0x37/0xc0 (28)
[<c0124016>] sys_exit_group+0x16/0x20 (12)
[<c01031b8>] sysenter_past_esp+0x61/0x89 (-8116)
---------------------------
| preempt count: 00000000 ]
| 0-level deep critical section nesting:
----------------------------------------

------------------------------
| showing all locks held by: | (modprobe/1280 [f7dee750, 120]):
------------------------------

#001: [c47c4104] {(struct semaphore *)(&os_sema->wait)}
... acquired at: os_alloc_sema+0x37/0x5b [nvidia]

BUG: modprobe/1280, lock held at task exit time!
[c47c4104] {(struct semaphore *)(&os_sema->wait)}
.. ->owner: f7dee750
.. held by: modprobe: 1280 [f7dee750, 120]
... acquired at: os_alloc_sema+0x37/0x5b [nvidia]


I think there is something wrong in the semaphore management.
All the traces seem to originate from a semaphore call.
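
My guess (a rough sketch with invented names, not the actual nvidia code) is that the semaphore behind os_acquire_sema()/os_release_sema() is taken in one context and released in another. The rt-mutex that replaces it under -rt records an owner task, so when the interrupt bottom half releases a lock taken by, say, the ioctl path, the per-task accounting in kernel/rt.c goes negative ("lock count underflow") and the acquiring task is still listed as a lock holder when it exits:

Code:
/* Illustrative sketch only -- not the nvidia sources. */
#include <linux/module.h>
#include <linux/interrupt.h>
#include <asm/semaphore.h>

static DECLARE_MUTEX(demo_os_sema);

static void demo_isr_bh(unsigned long data)
{
        up(&demo_os_sema);         /* released by the softirq, not by the recorded owner */
}

static DECLARE_TASKLET(demo_bh_tasklet, demo_isr_bh, 0);

static int __init demo_init(void)
{
        /* process context, e.g. an ioctl handler */
        down(&demo_os_sema);       /* this task is recorded as the owner */
        tasklet_schedule(&demo_bh_tasklet);  /* completion is signalled from the bottom half */
        return 0;
}

module_init(demo_init);
MODULE_LICENSE("GPL");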

Then, when X starts and I switch from VT1 to VT7, I get:



softirq-tasklet/8[CPU#0]: BUG in debug_deadlocks_up_mutex at kernel/rt.c:842
[<c01047ec>] dump_stack+0x1c/0x20 (20)
[<c0121d98>] __WARN_ON+0x48/0x60 (40)
[<c013b3f1>] debug_deadlocks_up_mutex+0x91/0x120 (28)
[<c013ba30>] release_lock+0x20/0x100 (16)
[<c013c267>] ____up_mutex+0x37/0x1a0 (28)
[<c03cfdf7>] ___up_mutex_nosavestate+0x17/0x20 (20)
[<c013cb9e>] rt_up+0x3e/0x80 (24)
[<f900cde4>] os_release_sema+0x4a/0x57 [nvidia] (20)
[<f8de1902>] _nv002251rm+0x12/0x18 [nvidia] (32)
[<f8de51bc>] _nv001646rm+0x9c/0xa8 [nvidia] (64)
[<f8de963a>] rm_isr_bh+0x62/0x74 [nvidia] (48)
[<f900a776>] nv_kern_isr_bh+0x1a/0x1f [nvidia] (24)
[<c0126d37>] __tasklet_action+0x57/0x110 (24)
[<c0126e37>] tasklet_action+0x47/0x60 (28)
[<c0127074>] ksoftirqd+0x124/0x1d0 (32)
[<c0136258>] kthread+0x98/0xd0 (40)
[<c0101359>] kernel_thread_helper+0x5/0xc (1000988700)
---------------------------
| preempt count: 00000003 ]
| 3-level deep critical section nesting:
----------------------------------------
.. [<c013cb82>] .... rt_up+0x22/0x80
.....[<f900cde4>] .. ( <= os_release_sema+0x4a/0x57 [nvidia])
.. [<c0141dc4>] .... add_preempt_count+0x14/0x20
.....[<c013c37a>] .. ( <= ____up_mutex+0x14a/0x1a0)
.. [<c0141dc4>] .... add_preempt_count+0x14/0x20
.....[<c0121d63>] .. ( <= __WARN_ON+0x13/0x60)

softirq-tasklet/8[CPU#0]: BUG in debug_deadlocks_up_mutex at kernel/rt.c:843
[<c01047ec>] dump_stack+0x1c/0x20 (20)
[<c0121d98>] __WARN_ON+0x48/0x60 (40)
[<c013b44d>] debug_deadlocks_up_mutex+0xed/0x120 (28)
[<c013ba30>] release_lock+0x20/0x100 (16)
[<c013c267>] ____up_mutex+0x37/0x1a0 (28)
[<c03cfdf7>] ___up_mutex_nosavestate+0x17/0x20 (20)
[<c013cb9e>] rt_up+0x3e/0x80 (24)
[<f900cde4>] os_release_sema+0x4a/0x57 [nvidia] (20)
[<f8de1902>] _nv002251rm+0x12/0x18 [nvidia] (32)
[<f8de51bc>] _nv001646rm+0x9c/0xa8 [nvidia] (64)
[<f8de963a>] rm_isr_bh+0x62/0x74 [nvidia] (48)
[<f900a776>] nv_kern_isr_bh+0x1a/0x1f [nvidia] (24)
[<c0126d37>] __tasklet_action+0x57/0x110 (24)
[<c0126e37>] tasklet_action+0x47/0x60 (28)
[<c0127074>] ksoftirqd+0x124/0x1d0 (32)
[<c0136258>] kthread+0x98/0xd0 (40)
[<c0101359>] kernel_thread_helper+0x5/0xc (1000988700)
---------------------------
| preempt count: 00000003 ]
| 3-level deep critical section nesting:
----------------------------------------
.. [<c013cb82>] .... rt_up+0x22/0x80
.....[<f900cde4>] .. ( <= os_release_sema+0x4a/0x57 [nvidia])
.. [<c0141dc4>] .... add_preempt_count+0x14/0x20
.....[<c013c37a>] .. ( <= ____up_mutex+0x14a/0x1a0)
.. [<c0141dc4>] .... add_preempt_count+0x14/0x20
.....[<c0121d63>] .. ( <= __WARN_ON+0x13/0x60)

BUG: softirq-tasklet/8: lock count underflow!
[<c01047ec>] dump_stack+0x1c/0x20 (20)
[<c013a41e>] account_mutex_owner_up+0x6e/0xb0 (24)
[<c013c278>] ____up_mutex+0x48/0x1a0 (24)
[<c03cfdf7>] ___up_mutex_nosavestate+0x17/0x20 (20)
[<c013cb9e>] rt_up+0x3e/0x80 (24)
[<f900cde4>] os_release_sema+0x4a/0x57 [nvidia] (20)
[<f8de1902>] _nv002251rm+0x12/0x18 [nvidia] (32)
[<f8de51bc>] _nv001646rm+0x9c/0xa8 [nvidia] (64)
[<f8de963a>] rm_isr_bh+0x62/0x74 [nvidia] (48)
[<f900a776>] nv_kern_isr_bh+0x1a/0x1f [nvidia] (24)
[<c0126d37>] __tasklet_action+0x57/0x110 (24)
[<c0126e37>] tasklet_action+0x47/0x60 (28)
[<c0127074>] ksoftirqd+0x124/0x1d0 (32)
[<c0136258>] kthread+0x98/0xd0 (40)
[<c0101359>] kernel_thread_helper+0x5/0xc (1000988700)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [<c013cb82>] .... rt_up+0x22/0x80
.....[<f900cde4>] .. ( <= os_release_sema+0x4a/0x57 [nvidia])

BUG: scheduling while atomic: softirq-tasklet/0x00000001/8
caller is schedule+0x43/0x120
[<c01047ec>] dump_stack+0x1c/0x20 (20)
[<c03ce5ff>] __schedule+0x7bf/0xa60 (76)
[<c03ce8e3>] schedule+0x43/0x120 (12)
[<c013bdd3>] ____down_mutex+0x283/0x4b0 (92)
[<c03cfd55>] ___down_mutex+0x15/0x20 (16)
[<c03d041c>] _spin_lock_irqsave+0x1c/0x50 (24)
[<c021886d>] pci_bus_read_config_word+0x2d/0x70 (24)
[<f9008909>] nv_verify_pci_config+0x3a/0xd2 [nvidia] (40)
[<f8de932c>] rm_set_interrupts+0x114/0x144 [nvidia] (48)
[<f900cdc9>] os_release_sema+0x2f/0x57 [nvidia] (24)
[<f8de1902>] _nv002251rm+0x12/0x18 [nvidia] (32)
[<f8de51bc>] _nv001646rm+0x9c/0xa8 [nvidia] (64)
[<f8de963a>] rm_isr_bh+0x62/0x74 [nvidia] (48)
[<f900a776>] nv_kern_isr_bh+0x1a/0x1f [nvidia] (24)
[<c0126d37>] __tasklet_action+0x57/0x110 (24)
[<c0126e37>] tasklet_action+0x47/0x60 (28)
[<c0127074>] ksoftirqd+0x124/0x1d0 (32)
[<c0136258>] kthread+0x98/0xd0 (40)
[<c0101359>] kernel_thread_helper+0x5/0xc (1000988700)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [<c0141dc4>] .... add_preempt_count+0x14/0x20
.....[<f900cdb0>] .. ( <= os_release_sema+0x16/0x57 [nvidia])

BUG: scheduling while atomic: softirq-tasklet/0x00000001/8
caller is schedule+0x43/0x120
[<c01047ec>] dump_stack+0x1c/0x20 (20)
[<c03ce5ff>] __schedule+0x7bf/0xa60 (76)
[<c03ce8e3>] schedule+0x43/0x120 (12)
[<c013bdd3>] ____down_mutex+0x283/0x4b0 (92)
[<c03cfd55>] ___down_mutex+0x15/0x20 (16)
[<c03d041c>] _spin_lock_irqsave+0x1c/0x50 (24)
[<c021886d>] pci_bus_read_config_word+0x2d/0x70 (24)
[<f9008909>] nv_verify_pci_config+0x3a/0xd2 [nvidia] (40)
[<f8de932c>] rm_set_interrupts+0x114/0x144 [nvidia] (48)
[<f900cdc9>] os_release_sema+0x2f/0x57 [nvidia] (24)
[<f8de1902>] _nv002251rm+0x12/0x18 [nvidia] (32)
[<f8de51bc>] _nv001646rm+0x9c/0xa8 [nvidia] (64)
[<f8de963a>] rm_isr_bh+0x62/0x74 [nvidia] (48)
[<f900a776>] nv_kern_isr_bh+0x1a/0x1f [nvidia] (24)
[<c0126d37>] __tasklet_action+0x57/0x110 (24)
[<c0126e37>] tasklet_action+0x47/0x60 (28)
[<c0127074>] ksoftirqd+0x124/0x1d0 (32)
[<c0136258>] kthread+0x98/0xd0 (40)
[<c0101359>] kernel_thread_helper+0x5/0xc (1000988700)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [<c0141dc4>] .... add_preempt_count+0x14/0x20
.....[<f900cdb0>] .. ( <= os_release_sema+0x16/0x57 [nvidia])

BUG: scheduling while atomic: IRQ 137/0x00000001/6429
caller is schedule+0x43/0x120
[<c01047ec>] dump_stack+0x1c/0x20 (20)
[<c03ce5ff>] __schedule+0x7bf/0xa60 (76)
[<c03ce8e3>] schedule+0x43/0x120 (12)
[<c013bdd3>] ____down_mutex+0x283/0x4b0 (92)
[<c03cfd55>] ___down_mutex+0x15/0x20 (16)
[<c03d041c>] _spin_lock_irqsave+0x1c/0x50 (24)
[<c021886d>] pci_bus_read_config_word+0x2d/0x70 (24)
[<f9008909>] nv_verify_pci_config+0x3a/0xd2 [nvidia] (40)
[<f8de932c>] rm_set_interrupts+0x114/0x144 [nvidia] (48)
[<f900cd80>] os_cond_acquire_sema+0x3a/0x54 [nvidia] (24)
[<f8de18ea>] _nv002338rm+0x12/0x18 [nvidia] (32)
[<f8de5355>] _nv001650rm+0x18d/0x250 [nvidia] (80)
[<f8de5101>] _nv001647rm+0x51/0x70 [nvidia] (48)
[<f8de95c7>] rm_isr+0x3b/0x4c [nvidia] (48)
[<f900a720>] nv_kern_isr+0x29/0x65 [nvidia] (36)
[<c014b8d3>] handle_IRQ_event+0x73/0x110 (52)
[<c014c779>] thread_edge_irq+0x59/0xf0 (24)
[<c014c846>] do_hardirq+0x36/0x80 (20)
[<c014c98a>] do_irqd+0xfa/0x1c0 (28)
[<c0136258>] kthread+0x98/0xd0 (40)
[<c0101359>] kernel_thread_helper+0x5/0xc (185417756)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [<c0141dc4>] .... add_preempt_count+0x14/0x20
.....[<f900cd5e>] .. ( <= os_cond_acquire_sema+0x18/0x54 [nvidia])

BUG: softirq-tasklet/8: lock count underflow!
[<c01047ec>] dump_stack+0x1c/0x20 (20)
[<c013a41e>] account_mutex_owner_up+0x6e/0xb0 (24)
[<c013c278>] ____up_mutex+0x48/0x1a0 (24)
[<c03cfdf7>] ___up_mutex_nosavestate+0x17/0x20 (20)
[<c013cb9e>] rt_up+0x3e/0x80 (24)
[<f900cde4>] os_release_sema+0x4a/0x57 [nvidia] (20)
[<f8de1902>] _nv002251rm+0x12/0x18 [nvidia] (32)
[<f8de51bc>] _nv001646rm+0x9c/0xa8 [nvidia] (64)
[<f8de963a>] rm_isr_bh+0x62/0x74 [nvidia] (48)
[<f900a776>] nv_kern_isr_bh+0x1a/0x1f [nvidia] (24)
[<c0126d37>] __tasklet_action+0x57/0x110 (24)
[<c0126e37>] tasklet_action+0x47/0x60 (28)
[<c0127074>] ksoftirqd+0x124/0x1d0 (32)
[<c0136258>] kthread+0x98/0xd0 (40)
[<c0101359>] kernel_thread_helper+0x5/0xc (1000988700)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [<c013cb82>] .... rt_up+0x22/0x80
.....[<f900cde4>] .. ( <= os_release_sema+0x4a/0x57 [nvidia])

BUG: scheduling while atomic: IRQ 137/0x00000001/6429
caller is schedule+0x43/0x120
[<c01047ec>] dump_stack+0x1c/0x20 (20)
[<c03ce5ff>] __schedule+0x7bf/0xa60 (76)
[<c03ce8e3>] schedule+0x43/0x120 (12)
[<c013bdd3>] ____down_mutex+0x283/0x4b0 (92)
[<c03cfd55>] ___down_mutex+0x15/0x20 (16)
[<c03d041c>] _spin_lock_irqsave+0x1c/0x50 (24)
[<c021886d>] pci_bus_read_config_word+0x2d/0x70 (24)
[<f9008909>] nv_verify_pci_config+0x3a/0xd2 [nvidia] (40)
[<f8de932c>] rm_set_interrupts+0x114/0x144 [nvidia] (48)
[<f900cd80>] os_cond_acquire_sema+0x3a/0x54 [nvidia] (24)
[<f8de18ea>] _nv002338rm+0x12/0x18 [nvidia] (32)
[<f8de5355>] _nv001650rm+0x18d/0x250 [nvidia] (80)
[<f8de5101>] _nv001647rm+0x51/0x70 [nvidia] (48)
[<f8de95c7>] rm_isr+0x3b/0x4c [nvidia] (48)
[<f900a720>] nv_kern_isr+0x29/0x65 [nvidia] (36)
[<c014b8d3>] handle_IRQ_event+0x73/0x110 (52)
[<c014c779>] thread_edge_irq+0x59/0xf0 (24)
[<c014c846>] do_hardirq+0x36/0x80 (20)
[<c014c98a>] do_irqd+0xfa/0x1c0 (28)
[<c0136258>] kthread+0x98/0xd0 (40)
[<c0101359>] kernel_thread_helper+0x5/0xc (185417756)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [<c0141dc4>] .... add_preempt_count+0x14/0x20
.....[<f900cd5e>] .. ( <= os_cond_acquire_sema+0x18/0x54 [nvidia])

BUG: scheduling while atomic: X/0x00000001/6405
caller is schedule+0x43/0x120
[<c01047ec>] dump_stack+0x1c/0x20 (20)
[<c03ce5ff>] __schedule+0x7bf/0xa60 (76)
[<c03ce8e3>] schedule+0x43/0x120 (12)
[<c013bdd3>] ____down_mutex+0x283/0x4b0 (92)
[<c03cfd55>] ___down_mutex+0x15/0x20 (16)
[<c03d041c>] _spin_lock_irqsave+0x1c/0x50 (24)
[<c021886d>] pci_bus_read_config_word+0x2d/0x70 (24)
[<f9008909>] nv_verify_pci_config+0x3a/0xd2 [nvidia] (40)
[<f8de932c>] rm_set_interrupts+0x114/0x144 [nvidia] (48)
[<f900cdc9>] os_release_sema+0x2f/0x57 [nvidia] (24)
[<f8de1902>] _nv002251rm+0x12/0x18 [nvidia] (32)
[<f8dc53a3>] _nv003185rm+0x73/0x90 [nvidia] (64)
[<f8dea67d>] _nv001649rm+0x7d/0x5d0 [nvidia] (64)
[<f8de966f>] rm_ioctl+0x23/0x38 [nvidia] (48)
[<f900a690>] nv_kern_ioctl+0x2a6/0x2ee [nvidia] (48)
[<c0181b3d>] do_ioctl+0x4d/0x80 (36)
[<c0181ce8>] vfs_ioctl+0x58/0x1c0 (40)
[<c0181eaf>] sys_ioctl+0x5f/0x70 (40)
[<c01031b8>] sysenter_past_esp+0x61/0x89 (-8116)

---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [<c0141dc4>] .... add_preempt_count+0x14/0x20
.....[<f900cdb0>] .. ( <= os_release_sema+0x16/0x57 [nvidia])

BUG: softirq-tasklet/8: lock count underflow!
[<c01047ec>] dump_stack+0x1c/0x20 (20)
[<c013a41e>] account_mutex_owner_up+0x6e/0xb0 (24)
[<c013c278>] ____up_mutex+0x48/0x1a0 (24)
[<c03cfdf7>] ___up_mutex_nosavestate+0x17/0x20 (20)
[<c013cb9e>] rt_up+0x3e/0x80 (24)
[<f900cde4>] os_release_sema+0x4a/0x57 [nvidia] (20)
[<f8de1902>] _nv002251rm+0x12/0x18 [nvidia] (32)
[<f8de51bc>] _nv001646rm+0x9c/0xa8 [nvidia] (64)
[<f8de963a>] rm_isr_bh+0x62/0x74 [nvidia] (48)
[<f900a776>] nv_kern_isr_bh+0x1a/0x1f [nvidia] (24)
[<c0126d37>] __tasklet_action+0x57/0x110 (24)
[<c0126e37>] tasklet_action+0x47/0x60 (28)
[<c0127074>] ksoftirqd+0x124/0x1d0 (32)
[<c0136258>] kthread+0x98/0xd0 (40)
[<c0101359>] kernel_thread_helper+0x5/0xc (1000988700)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [<c013cb82>] .... rt_up+0x22/0x80
.....[<f900cde4>] .. ( <= os_release_sema+0x4a/0x57 [nvidia])
Old 02-22-06, 10:55 AM   #4
dmetz99
Registered User
 
Join Date: Mar 2005
Posts: 84
Re: Real time kernel and NVIDIA: I'm using the illegal and broken nvidia driver!

Well, I couldn't resist making a few comments about this one. I saw the reply that Serge got on the LKML. My answer to those folks is: performance. Show me an open-source driver for nvidia hardware that comes anywhere close to the proprietary drivers and I'll gladly switch! I think we're all basically interested in getting our jobs done, and we use whatever software gets the job done the way we want. (I'd challenge any kernel developer to get some serious 3D modelling or visualization work done with Mesa's OpenGL capabilities.)
On the other hand, I'd like to see a bit more concern (on Nvidia's part) about using their driver with high-performance preemptible and RT kernels. What's the point of having high-performance video hardware and software if you're forced to use it with a kernel basically oriented towards server-type tasks (a non-preemptible kernel) for stability's sake? I'd like to see Linux mature as a multimedia-capable OS, not because I'm an open-source bigot, but because I personally like the design and logic of *nix-based OSes and would like to have an alternative to win32.
I see real issues with both sides of this (apparently) growing rift, and in the long run it's only going to drive people away from both Linux and Nvidia, or at least back towards the Win32 camp.

Sorry for the mildly-frustrated-user rant, folks
Old 07-05-06, 10:42 PM   #5
lukemacneil92
Registered User
 
Join Date: Jul 2006
Posts: 2
Re: Real time kernel and NVIDIA: I'm using the illegal and broken nvidia driver!

I am experiencing the same issue. I do audio processing on my box and use Ingo's patch. Since applying it, I've noticed this, and I'm also unable to run VMware. I'm not a huge nvidia supporter, but this is an issue with two sides.


Don't just go screaming at Nvidia. We're working with open source. There are many hands that will have to work together in order to make everything work right.

Report the bug to nvidia, report the bug to Ingo or the kernel maintainers, and wait.
You're paying nvidia, but you're not paying Ingo; other things may be more important to him at the moment. I'm sure that if you're patient, the problem will eventually be resolved. Until then, choose: preempt or the NVIDIA drivers.

Or... Learn to program and start looking into the code yourself.


Me, I'm too lazy to learn to code, so I'm going to lay off the patch for a while until I hear that VMWare and NVIDIA are playing nice with it.

www.lukemacneil.com
Old 07-06-06, 05:24 AM   #6
JaXXoN
Registered User
 
Join Date: Jul 2005
Location: Munich
Posts: 910
Re: Real time kernel and NVIDIA: I'm using the illegal and broken nvidia driver!

Quote:
Originally Posted by lukemacneil92
Since applying it, I've noticed this, and also am unable to run vmware.
What exactly are the symptoms? (I haven't tried VMware with the realtime preempt patch applied yet.)

Quote:
Originally Posted by lukemacneil92
choose: preempt or the NVIDIA drivers.
Just to make sure: did you patch the nvidia driver? See
http://www.nvnews.net/vbulletin/showthread.php?t=70776

regards

Bernhard
Old 07-06-06, 08:23 AM   #7
JaXXoN
Registered User
 
Join Date: Jul 2005
Location: Munich
Posts: 910
Re: Real time kernel and NVIDIA: I'm using the illegal and broken nvidia driver!

Quote:
Originally Posted by lukemacneil92
Since applying it, I've noticed this, and also am unable to run vmware.
Hi!

I just tried it out - here's what I needed to do in order to get VMware
running on Fedora Core 4 with 2.6.17-rt7:

1. Apply an (unofficial?) update for vmware:

Code:
wget http://ftp.cvut.cz/vmware/vmware-any-any-update101.tar.gz
tar -xzf vmware-any-any-update101.tar.gz 
cd vmware-any-any-update101
./runme.pl
2. Fake GPL for vmmon:

Code:
cd /usr/lib/vmware/modules/source
cp vmmon.tar vmmon.tar.orig
tar -xf vmmon.tar
cd vmmon-only/linux
echo "MODULE_LICENSE(\"GPL\");" >> driver.c
cd ../..
tar -cf vmmon.tar vmmon-only
The second step basically raises a legal issue, so alternatively,
you can export "rt_mutex_lock" in kernel/rtmutex.c using
EXPORT_SYMBOL instead of EXPORT_SYMBOL_GPL (it is the GPL-only
export that prevents the unmodified vmmon kernel module from loading).
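
A sketch of that alternative (the surrounding code in kernel/rtmutex.c may differ between -rt revisions; only the export macro for rt_mutex_lock changes):

Code:
/* kernel/rtmutex.c -- illustrative sketch only */

/* before: GPL-only export, so the unmodified (non-GPL) vmmon module can't use it */
EXPORT_SYMBOL_GPL(rt_mutex_lock);

/* after: plain export, with the licensing caveat mentioned above */
EXPORT_SYMBOL(rt_mutex_lock);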

regards

Bernhard

Last edited by JaXXoN; 07-06-06 at 10:30 AM.