nV News Forums (http://www.nvnews.net/vbulletin/index.php)
-   NVIDIA Linux (http://www.nvnews.net/vbulletin/forumdisplay.php?f=14)
-   -   What better, PCI-X MSI or Wired IRQ? [177.80] (http://www.nvnews.net/vbulletin/showthread.php?t=120710)

pavlinux 10-07-08 06:09 PM

What better, PCI-X MSI or Wired IRQ? [177.80]
 
Which is better for performance and stability, PCI-X MSI or a wired IRQ?

NVreg_EnableMSI=0 or 1 ?

Drone4four 10-07-08 09:39 PM

Re: What better, PCI-X MSI or Wired IRQ? [177.80]
 
What is PCI-X MSI, anyway? Googling "PCI X MSI" brings up results about Micro-Star International's GPU products, which I think is different from how MSI is being used in this context. AaronP mentioned it too in the release notes for the 177.80 driver.

SilentLexx 10-07-08 11:52 PM

Re: What better, PCI-X MSI or Wired IRQ? [177.80]
 
I'm testing MSI now. Very interesting... Performance is good; as for stability, we'll see ;) I now have 2 devices using MSI interrupts:

cat /proc/interrupts | grep -i msi
221: 37100 202 PCI-MSI-edge eth1
222: 51351 77 PCI-MSI-edge nvidia


PS:
Message Signaled Interrupts (MSI and MSI-X) (PCI_MSI)

This allows device drivers to enable MSI (Message Signaled
Interrupts). Message Signaled Interrupts enable a device to
generate an interrupt using an inbound Memory Write on its
PCI bus instead of asserting a device IRQ pin.

Use of PCI MSI interrupts can be disabled at kernel boot time
by using the 'pci=nomsi' option. This disables MSI for the
entire system.
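For anyone wondering how to actually set this: NVreg_EnableMSI is a module parameter, so one way is a modprobe fragment. A minimal sketch (the file name below is made up; check your distro's modprobe.d conventions):

```shell
# /etc/modprobe.d/nvidia-msi.conf  (hypothetical file name; any *.conf
# under /etc/modprobe.d/ works on most distros)
# 1 = ask the nvidia driver to request an MSI vector,
# 0 = keep the wired IRQ (the driver default at the time)
options nvidia NVreg_EnableMSI=1
```

The change takes effect the next time the module is loaded. As the kernel help text above says, booting with pci=nomsi overrides this and disables MSI system-wide.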

hyfans 10-08-08 12:45 AM

Re: What better, PCI-X MSI or Wired IRQ? [177.80]
 
Quote:

Originally Posted by SilentLexx (Post 1801752)
I'm testing MSI now. Very interesting... Performance is good; as for stability, we'll see ;) I now have 2 devices using MSI interrupts:

cat /proc/interrupts | grep -i msi
221: 37100 202 PCI-MSI-edge eth1
222: 51351 77 PCI-MSI-edge nvidia


PS:
Message Signaled Interrupts (MSI and MSI-X) (PCI_MSI)

This allows device drivers to enable MSI (Message Signaled
Interrupts). Message Signaled Interrupts enable a device to
generate an interrupt using an inbound Memory Write on its
PCI bus instead of asserting a device IRQ pin.

Use of PCI MSI interrupts can be disabled at kernel boot time
by using the 'pci=nomsi' option. This disables MSI for the
entire system.

Looks interesting. Does it need any extra configuration on my Debian sid?

I'm using a customized 2.6.26 kernel.

zcat /proc/config.gz | grep MSI

CONFIG_ARCH_SUPPORTS_MSI=y
CONFIG_PCI_MSI=y


Here is the nvidia-related output of lspci -v. From this I can tell that MSI is disabled at the moment, am I right?


02:00.0 VGA compatible controller: nVidia Corporation GeForce 8500 GT (rev a1)
Subsystem: ASUSTeK Computer Inc. Device 8245
Flags: bus master, fast devsel, latency 0, IRQ 16
Memory at fa000000 (32-bit, non-prefetchable) [size=16M]
Memory at e0000000 (64-bit, prefetchable) [size=256M]
Memory at f8000000 (64-bit, non-prefetchable) [size=32M]
I/O ports at bc00 [size=128]
[virtual] Expansion ROM at fb000000 [disabled] [size=128K]
Capabilities: [60] Power Management version 2
Capabilities: [68] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable-
Capabilities: [78] Express Endpoint, MSI 00
Capabilities: [100] Virtual Channel <?>
Capabilities: [128] Power Budgeting <?>
Capabilities: [600] Vendor Specific Information <?>
Kernel driver in use: nvidia
Kernel modules: nvidia
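The trailing "Enable-" on the Message Signalled Interrupts line is the flag to look at: '+' means the device is currently generating MSIs, '-' means it is still on a wired IRQ. A small sketch that reads that flag from lspci -vv output on stdin (the msi_state name is made up):

```shell
#!/bin/sh
# Report the MSI state from 'lspci -vv' output read on stdin.
# 'Enable+' on the Message Signalled Interrupts capability line means
# the device is generating MSIs; 'Enable-' means a wired IRQ is in use.
msi_state() {
    if grep -q 'Message Signal.*Enable+' ; then
        echo "MSI enabled"
    else
        echo "MSI disabled (or capability absent)"
    fi
}

# In real use: lspci -vv -s 02:00.0 | msi_state
```

Run against the output above, it should report MSI disabled, matching the Enable- flag.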

JaXXoN 10-08-08 02:52 AM

Re: What better, PCI-X MSI or Wired IRQ? [177.80]
 
Hi!

Message Signaled Interrupts (MSI) have the advantage that you don't
need to share physical signal lines on the mainboard, because the
PCI/PCIe/PCI-X device issues an "interrupt command" on the bus to the
chipset instead. For the OS, that means it can assign an individual
interrupt vector to each device, rather than polling the ISRs of all
drivers registered on the same IRQ line.

Example: in my configuration, "cat /proc/interrupts" shows (excerpt):
Code:

          CPU0      CPU1     
 16:    207031          0  IO-APIC-fasteoi  arcmsr, EMU10K1, nvidia
220:      1608    165138  PCI-MSI-edge      eth0

This means that the nvidia card, the sound card and the RAID controller
are all sharing the same (physical or logical) interrupt line. If IRQ #16
fires, the OS needs to call all three ISRs, because it can't directly
determine which of the devices actually raised the interrupt (indeed,
all three devices may have raised one at the same time). Typically one
interrupt fires and one ISR actively handles it; the other two ISRs
only check that they are not in charge and return early (but
nevertheless consume some CPU cycles).

Sharing interrupts is always a good source of problems, e.g. race conditions.
And if one driver goes mad and disables the interrupt line, the other two
drivers are gone, too: when the nvidia driver crashes in my setup (e.g.
because of overheating), the sound and the access to the hard disks are also
gone. With MSI, instead, the OS needs to call only one ISR, and it is less
likely that a driver problem leads to a complete system failure.
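This kind of sharing is easy to spot on any box. A sketch that flags shared lines (the shared_irqs name is made up; it keys off the comma-separated handler list in /proc/interrupts, assuming commas appear nowhere else on the line):

```shell
#!/bin/sh
# Print every interrupt line registered to more than one handler,
# prefixed with the handler count. In /proc/interrupts the handlers
# sharing a line are comma-separated, so a comma means "shared".
shared_irqs() {
    awk -F',' '/,/ { printf "%d handlers: %s\n", NF, $0 }'
}

# In real use: shared_irqs < /proc/interrupts
```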

regards

Bernhard

kgroombr 10-08-08 06:12 AM

Re: What better, PCI-X MSI or Wired IRQ? [177.80]
 
PCI-X != PCIe

PCI-X is an evolution of PCI that still uses parallel transfers, as opposed to PCIe, which uses serial transfers.

Ken

JaXXoN 10-08-08 07:08 AM

Re: What better, PCI-X MSI or Wired IRQ? [177.80]
 
Quote:

Originally Posted by kgroombr (Post 1801990)
PCI-X is an evolution of PCI that still uses parallel transfers, as opposed to PCIe, which uses serial transfers.

True, but as far as the MSI protocol on top of the physical PCI[-X|e] bus
is concerned, this doesn't make any difference :-)

regards

Bernhard

logan 10-08-08 04:25 PM

Re: What better, PCI-X MSI or Wired IRQ? [177.80]
 
In my dmesg, I see:
Code:

[    0.661199] PCI: Setting latency timer of device 0000:00:01.0 to 64
[    0.661199] assign_interrupt_mode Found MSI capability
[    0.661199] Allocate Port Service[0000:00:01.0:pcie00]
[    0.661199] Allocate Port Service[0000:00:01.0:pcie03]
[    0.661199] PCI: Setting latency timer of device 0000:00:1c.0 to 64
[    0.661199] assign_interrupt_mode Found MSI capability
[    0.661199] Allocate Port Service[0000:00:1c.0:pcie00]
[    0.661199] Allocate Port Service[0000:00:1c.0:pcie02]
[    0.661199] Allocate Port Service[0000:00:1c.0:pcie03]
[    0.661199] PCI: Setting latency timer of device 0000:00:1c.4 to 64
[    0.661199] assign_interrupt_mode Found MSI capability
[    0.661199] Allocate Port Service[0000:00:1c.4:pcie00]
[    0.661199] Allocate Port Service[0000:00:1c.4:pcie02]
[    0.661199] Allocate Port Service[0000:00:1c.4:pcie03]
[    0.661199] PCI: Setting latency timer of device 0000:00:1c.5 to 64
[    0.661199] assign_interrupt_mode Found MSI capability
[    0.661199] Allocate Port Service[0000:00:1c.5:pcie00]
[    0.661199] Allocate Port Service[0000:00:1c.5:pcie02]
[    0.661199] Allocate Port Service[0000:00:1c.5:pcie03]

and lspci -s for those device IDs shows:
Code:

00:01.0 PCI bridge: Intel Corporation 82G33/G31/P35/P31 Express PCI Express Root Port (rev 02)
00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 02)
00:1c.4 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 (rev 02)
00:1c.5 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 6 (rev 02)

Does this mean that only my PCIe ports support MSI? What if my video card is sharing with things that aren't MSI-aware/enabled? Can I still make use of EnableMSI=1?
Code:

 16:    6789643    6791704  IO-APIC-fasteoi  uhci_hcd:usb1, uhci_hcd:usb7, EMU10K1, nvidia

My system (8800GT) has been stable with the 177.x drivers and I'm afraid to change anything, but if it's beneficial... :P

I see that ajw1980 just posted about problems resuming with EnableMSI=1. Is anyone else using this? How's it so far?

alan242 10-08-08 06:47 PM

Re: What better, PCI-X MSI or Wired IRQ? [177.80]
 
Quote:

Originally Posted by logan (Post 1802759)
Does this mean that only my PCIe ports support MSI?

It depends. It's been part of the PCI spec since version 2.2. I have a QStor PCI SATA card that supports MSI. You might want to run a command like
Code:

lspci -vvv | grep -E '[0-9]:[0-9][0-9]\.[0-9]|Message|Address'

and look for bridges and other devices that have the Message Signalled Interrupts capability.
I think there are also some chips on a blacklist, so you should check the boot log for any messages that indicate MSI is being disabled.
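For the blacklist check, something like this works (a sketch; the msi_quirks name is made up, and the patterns are deliberately loose because the kernel's wording varies between versions):

```shell
#!/bin/sh
# Scan kernel log text on stdin for messages suggesting the kernel
# disabled MSI via a chipset quirk. Patterns are loose on purpose,
# since the exact wording differs between kernel versions.
msi_quirks() {
    grep -i 'msi' | grep -iE 'disabl|quirk|not supported'
}

# In real use: dmesg | msi_quirks
```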

Quote:

Originally Posted by logan (Post 1802759)
What if my video card is sharing with things that aren't MSI-aware/enabled? Can I still make use of EnableMSI=1?

Yes. When MSI is enabled, the kernel assigns a new IRQ number/route to the device, and the device is removed from its initial IRQ route. The boot log still shows the initial wired IRQ while the driver is loading:
Code:

[ 7.578952] nvidia 0000:01:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18

Then, after the driver switches to MSI, /proc/interrupts looks like:

Code:

          CPU0      CPU1     
  0:    8869393        244  IO-APIC-edge      timer
  1:        13          9  IO-APIC-edge      i8042
  8:        51          1  IO-APIC-edge      rtc0
  9:          0          4  IO-APIC-fasteoi  acpi
 14:          0          0  IO-APIC-edge      pata_atiixp
 15:          0          0  IO-APIC-edge      pata_atiixp
 16:    104814          1  IO-APIC-fasteoi  ohci_hcd:usb1, HDA Intel
 17:          1          1  IO-APIC-fasteoi  ohci_hcd:usb2, ohci_hcd:usb4
 18:          1          1  IO-APIC-fasteoi  ohci_hcd:usb3, ohci_hcd:usb5
 21:          2          1  IO-APIC-fasteoi  ohci1394
 22:    530004        28  IO-APIC-fasteoi  ahci, ohci_hcd:usb6
218:        291      74047  PCI-MSI-edge      nvidia
219:      1339    566655  PCI-MSI-edge      eth0
220:          0          4  PCI-MSI-edge      sata_qstor
221:      1272    253824  PCI-MSI-edge      ehci_hcd:usb7

Quote:

Originally Posted by logan (Post 1802759)
I see that ajw1980 just posted about problems resuming with EnableMSI=1. Is anyone else using this? How's it so far?

YMMV.

On one system I have (AMD 770 / 7600GS), it works quite well.

On another system (MCP51 / 6150PV), it works, but Bad Things(TM) happen under heavy load. I haven't had time to figure out what's up with that one yet: it's a MythTV box, most of the IRQs are unshared, and the box is usually busy.

Alan

ledoc 10-08-08 07:00 PM

Re: What better, PCI-X MSI or Wired IRQ? [177.80]
 
Switching to MSI works here, but it won't resume from suspend (to RAM) any more (T61p, 2.6.26.5 vanilla, 177.80).

logan 10-08-08 07:23 PM

Re: What better, PCI-X MSI or Wired IRQ? [177.80]
 
Thanks Alan, that's very helpful.

Looks like it's all PCI Express on my MSI P35 PLATINUM: (onboard) network and now video. The only PCI card I have installed is a SoundBlaster Live, and that's probably too old (~1998). So far so good, but I haven't done much beyond a quick game of NWN and testing VTTY changes in X after setting NVreg_UseVBios=0.

philipl 10-08-08 10:10 PM

Re: What better, PCI-X MSI or Wired IRQ? [177.80]
 
FWIW, on my Dell XPS M1330 laptop, it fails to resume from suspend if I turn MSI on.


All times are GMT -5. The time now is 05:53 AM.

Powered by vBulletin® Version 3.7.1
Copyright ©2000 - 2014, Jelsoft Enterprises Ltd.
Copyright 1998 - 2014, nV News.