Old 05-03-03, 10:12 PM   #1
module nvnet not working


I got a brand new Shuttle, the SN41G2, with an nForce2 chipset inside, and everything works nicely under Linux except the network. I'm running Debian, and here is what I get:

With the bare 2.4.18-bf kernel shipped with the Debian system, I get the message "PCI: Setting latency timer of device 00:04.0 to 64" when I modprobe nvnet. Then, if I try to bring the device up with ifconfig eth0 up, the kernel panics inside the ifconfig syscall. Any further call to ifconfig, or to rmmod nvnet, freezes.
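For reference, here is the exact sequence that triggers the panic (eth0 is assumed to be the name the nvnet adapter gets; substitute yours if it differs):

```shell
# Stock Debian 2.4.18-bf kernel:
modprobe nvnet       # prints: PCI: Setting latency timer of device 00:04.0 to 64
ifconfig eth0 up     # kernel panics here, inside the ifconfig syscall

# After the panic, both of these hang:
ifconfig eth0
rmmod nvnet
```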

Then I tried compiling both a 2.4.20 and a 2.4.21-pre7 kernel. There the module loads perfectly and I can bring the nvnet adapter up with ifconfig eth0 up, but it simply doesn't work. If I try to send packets through the interface, the little LED at the back of the machine doesn't blink. ifconfig shows that packets are emitted, but no packets at all are received. Ethereal confirms this: running it on another computer linked to the Shuttle, nothing ever arrives from the interface; running it on the Shuttle itself, I can see the outgoing packets, so the kernel clearly believes it is sending them.
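In case it helps, this is roughly how I checked the traffic on 2.4.20 / 2.4.21-pre7 (the IP addresses are just examples from my test setup):

```shell
# Bring the interface up -- this works fine on 2.4.20 and 2.4.21-pre7:
ifconfig eth0 192.168.0.2 up

# Generate some traffic (192.168.0.1 is the other test machine):
ping -c 3 192.168.0.1

# TX counters increase, RX stays at zero:
ifconfig eth0 | grep packets

# Capturing on the Shuttle itself shows the outgoing pings, so the
# kernel thinks it sends them; capturing on the other machine on the
# same switch shows nothing arriving, and the link LED never blinks.
```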

So, well, what should I do? It seems this bug has not been reported yet (I went through the forum and didn't see any message like mine).

For now I'm running the Shuttle under Linux with a third-party PCI network card, which is not a good solution for me since the Shuttle has only one PCI slot. So I would really like the nvnet driver to work.

Last edited by pixelxeno; 05-03-03 at 10:17 PM.