nV News Forums > Linux Support Forums > NVIDIA Linux

Old 07-15-11, 11:10 AM   #1
Xevious
Registered User
 
Join Date: Aug 2002
Posts: 291
Hard lockup NVRM: os_schedule: Attempted to yield the CPU while in atomic or interrupt context

I have this problem with both of the GTX 460s in my workstation at work, as well as with the GTX 470 I have at home. I'm pretty sure I had the same problem with a GTX 260 as well.

For me it is reproducible by switching between X sessions/virtual terminals with Ctrl+Alt+F7/F8.
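Since the trigger is VT switching, the repro can be stress-tested by scripting the switch with chvt(1) instead of the keyboard. A minimal sketch (hypothetical, not from the original report): chvt needs root on a real console, so it defaults to a dry run that only prints the commands.

```shell
#!/bin/sh
# stress_vt A B N: bounce between virtual terminals A and B, N times.
# Needs root on a real console; RUN=echo (the default) only previews.
RUN="${RUN:-echo}"

stress_vt() {
    vt_a=$1; vt_b=$2; count=$3
    i=0
    while [ "$i" -lt "$count" ]; do
        $RUN chvt "$vt_a"    # e.g. the first X session on VT7
        sleep 1
        $RUN chvt "$vt_b"    # e.g. the second X session on VT8
        sleep 1
        i=$((i + 1))
    done
}

stress_vt 7 8 1    # dry run: prints the two chvt commands
```

Clearing RUN and raising the count turns this into an actual stress loop for reproducing the 1-in-20 lockup described below.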

It seems far more common on my workstation at work, which has two GTX 460s outputting on all four DVI ports (no SLI). I have seen many threads about this, but NVIDIA still seems to be clueless on the issue.

I never have this problem except when switching X sessions (it never happens at any other time). It occurs maybe 1 out of 20 switches (sometimes more, sometimes less). Preventing the video cards from dropping into their low power mode seems to help a little, but I have not verified that conclusively.
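For reference, the "stay out of low power mode" workaround is usually done by pinning PowerMizer to its maximum performance level in xorg.conf. A sketch using the RegistryDwords option that 270.x-era drivers accepted; the exact key names/values (PowerMizerEnable, PerfLevelSrc) and the "Videocard0" identifier are assumptions drawn from forum lore of that era, not from this report:

```
Section "Device"
    Identifier  "Videocard0"
    Driver      "nvidia"
    # Assumed registry keys: pin PowerMizer to the highest performance
    # level on both power sources (0x2222 = fixed maximum for AC/battery).
    Option      "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x2222"
EndSection
```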


Here is my bug report (I also uploaded it via the applet):

http://box.houkouonchi.jp/nvidia-bug-report-sigoto.txt


I generally get:

NVRM: os_pci_init_handle: invalid context!


or:

NVRM: os_schedule: Attempted to yield the CPU while in atomic or interrupt context

repeated multiple times. When it happens, my machine usually becomes totally unresponsive. Sometimes I can still SSH in and kill X (like today); other times the machine runs so slowly that it only periodically responds to pings (let alone anything else), I can't even SSH in, and I have to do a hard reset.
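For anyone else hitting the recoverable variant, the sequence above (SSH in, kill X, reload the module) can be scripted. A hypothetical sketch, again defaulting to a dry run since the real commands need root, and with no guarantee the module unloads cleanly in this state:

```shell
#!/bin/sh
# Hypothetical remote-recovery steps after the lockup, run over ssh.
# RUN=echo (the default) only previews the commands; clear it to execute.
RUN="${RUN:-echo}"

$RUN pkill -9 Xorg      # kill the wedged X server
$RUN rmmod nvidia       # unload the module (fails if something still holds it)
$RUN modprobe nvidia    # reload it
$RUN startx             # or restart your display manager instead
```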

When I was able to get back on the system, I couldn't start X back up even after unloading and reloading the nvidia module. X would just use 100% CPU and half-initialize (my monitors went blank). I have also seen this behavior on my home machine. In this case dmesg showed the following:


Code:
NVRM: os_schedule: Attempted to yield the CPU while in atomic or interrupt context
NVRM: os_schedule: Attempted to yield the CPU while in atomic or interrupt context
NVRM: os_schedule: Attempted to yield the CPU while in atomic or interrupt context
nvidia 0000:84:00.0: PCI INT A disabled
nvidia 0000:86:00.0: PCI INT A disabled
nvidia 0000:84:00.0: PCI INT A -> GSI 50 (level, low) -> IRQ 50
nvidia 0000:84:00.0: setting latency timer to 64
vgaarb: device changed decodes: PCI:0000:84:00.0,olddecodes=none,decodes=none:owns=none
nvidia 0000:86:00.0: PCI INT A -> GSI 56 (level, low) -> IRQ 56
nvidia 0000:86:00.0: setting latency timer to 64
vgaarb: device changed decodes: PCI:0000:86:00.0,olddecodes=none,decodes=none:owns=none
NVRM: loading NVIDIA UNIX x86_64 Kernel Module  270.41.06  Mon Apr 18 14:53:56 PDT 2011
NVRM: failed to enable MSI,
using PCI-E virtual-wire interrupts!
ioremap error for 0xbf790000-0xbf791000, requested 0x10, got 0x0
NVRM: failed to enable MSI,
using PCI-E virtual-wire interrupts!
BUG: unable to handle kernel NULL pointer dereference at           (null)
IP: [<ffffffffa18c5887>] _nv015958rm+0x6e1/0x9b2 [nvidia]
PGD 623e82067 PUD 623f3d067 PMD 0
Oops: 0002 [#1] PREEMPT SMP
last sysfs file: /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.1/net/eth1/carrier
CPU 2
Modules linked in: nvidia(P) snd_seq snd_pcm_oss snd_mixer_oss snd_emu10k1 snd_rawmidi snd_ac97_codec ac97_bus snd_seq_device snd_util_mem snd_hwdep ipv6 dm_mod snd_hda_codec_hdmi snd_hda_intel snd_hda_codec snd_ctxfi snd_pcm snd_timer snd i2c_i801 snd_page_alloc [last unloaded: nvidia]

Pid: 12101, comm: Xorg Tainted: P            2.6.38-web100 #2 Supermicro X8DTH-i/6/iF/6F/X8DTH
RIP: 0010:[<ffffffffa18c5887>]  [<ffffffffa18c5887>] _nv015958rm+0x6e1/0x9b2 [nvidia]
RSP: 0018:ffff8806252a5ad8  EFLAGS: 00010256
RAX: 0000000000000000 RBX: 0000000000008000 RCX: 0000000000000001
RDX: 0000000000000000 RSI: 0000000000011401 RDI: ffff8805f9524828
RBP: ffff8805fd76b3a0 R08: 0000000000000020 R09: 0000000000000000
R10: ffff8805fc853000 R11: ffffffff81316762 R12: ffff8805fb77c000
R13: 0000000000000001 R14: 0000000000000000 R15: 0000000000008000
FS:  00007fa68dd786f0(0000) GS:ffff8800bee40000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 0000000000000000 CR3: 00000005fb160000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process Xorg (pid: 12101, threadinfo ffff8806252a4000, task ffff88061faa18d0)
Stack:
 ffff8805fb77c000 ffff8805fd76b3f8 ffff8805fae73000 0000000000000000
 ffff8805f9d5a740 ffffffffa18c5ba6 0000000000000000 ffff8805fae73000
 ffff8805fb77c000 ffffffffa18ce542 ffff8805fb77c000 ffff8805fae73000
Call Trace:
 [<ffffffffa18c5ba6>] ? _nv015970rm+0x4e/0x78 [nvidia]
 [<ffffffffa18ce542>] ? _nv015971rm+0x12/0x38 [nvidia]
 [<ffffffffa18868ed>] ? _nv016320rm+0x49/0x6a [nvidia]
 [<ffffffffa1869f4b>] ? _nv014862rm+0xb8/0x55b [nvidia]
 [<ffffffffa1868cf2>] ? _nv015122rm+0xe3/0x45a [nvidia]
 [<ffffffffa153e88e>] ? _nv015300rm+0xd/0x12 [nvidia]
 [<ffffffffa1a43c09>] ? _nv002400rm+0x1e7/0x28a [nvidia]
 [<ffffffffa1a44a0f>] ? _nv002394rm+0x4a7/0x685 [nvidia]
 [<ffffffffa1a4a51d>] ? rm_init_adapter+0x9d/0x111 [nvidia]
 [<ffffffffa1a6791d>] ? nv_kern_open+0x49b/0x5ce [nvidia]
 [<ffffffff810c3455>] ? chrdev_open+0x1d9/0x1f8
 [<ffffffff810c327c>] ? chrdev_open+0x0/0x1f8
 [<ffffffff810bf34f>] ? __dentry_open+0x1b3/0x2af
 [<ffffffff810ca46d>] ? finish_open+0x91/0x141
 [<ffffffff810cc243>] ? do_filp_open+0x178/0x5ad
 [<ffffffff8102f5c9>] ? get_parent_ip+0x9/0x1b
 [<ffffffff8102f5c9>] ? get_parent_ip+0x9/0x1b
 [<ffffffff8159daee>] ? _raw_spin_unlock+0x10/0x2e
 [<ffffffff810d5813>] ? alloc_fd+0x112/0x123
 [<ffffffff810bf0e5>] ? do_sys_open+0x51/0xdd
 [<ffffffff81001f7b>] ? system_call_fastpath+0x16/0x1b
Code: b3 4c 46 50 00 ba 0f 00 00 00 4c 89 e7 41 ff 94 24 28 11 00 00 48 8b 55 38 48 8b 82 10 03 00 00 44 89 f2 4a 8b 04 e8 48 c1 e2 04 <c7> 04 02 00 00 00 00 48 8b 4d 38 48 8b 81 10 03 00 00 4a 8b 04
RIP  [<ffffffffa18c5887>] _nv015958rm+0x6e1/0x9b2 [nvidia]
 RSP <ffff8806252a5ad8>
CR2: 0000000000000000
---[ end trace d734671180180aa1 ]---
So is NVIDIA doing anything about this? It is extremely annoying.
Attached Files
File Type: gz nvidia-bug-report.log.gz (115.7 KB, 48 views)
Xevious is offline
Old 02-12-12, 04:05 AM   #2
elsifaka
Registered User
 
Join Date: Jan 2012
Location: Antananarivo Madagascar
Posts: 2
Re: Hard lockup NVRM: os_schedule: Attempted to yield the CPU while in atomic or interrupt context

I also have this issue on my GTX 460M. I don't know which driver version should work, or whether this error is related to the xorg/kernel version in use.

I run an up-to-date Arch Linux as of 12/02/2012.
Attached Files
File Type: gz nvidia-bug-report.log.gz (57.8 KB, 23 views)
elsifaka is offline
Old 02-12-12, 09:07 AM   #3
Licaon
Registered User
 
Licaon's Avatar
 
Join Date: Nov 2004
Location: Between the keyboard and the chair.
Posts: 490
Re: Hard lockup NVRM: os_schedule: Attempted to yield the CPU while in atomic or interrupt context

Did you get a chance to test 295.17?
Licaon is offline