05-12-04, 04:24 PM   #19
deadmeat

Re: 2.6.6 kernel breaks NVIDIA driver?

Quote:
Originally Posted by hppnq
It always pisses me off to no end when people install a test release, expect it to work perfectly, and then complain constantly the moment there is a problem.

Yes, that is annoying; fortunately most of the users on this forum are quite polite. I fail to see how your post relates to the rest of this thread.

Just because FC is using 4K stacks does not mean that it is a wise decision. And if you had read further in the thread on lkml, you would have seen that there are a lot of open questions about it.
So it is not even certain that 4K stacks will appear in vanilla 2.6; why are you trying to panic people?


What do you mean? 8K stacks are almost out the window (you do read LKML, don't you?). Wouldn't you want to know, if you had to use Nvidia's current drivers?!

For the people who just can't seem to research things before they open their mouths, here is the relevant entry from the 2.6.6 release changelog, with a few sketches of my own interleaved to illustrate:

"<akpm@osdl.org>
[PATCH] ia32: 4Kb stacks (and irqstacks) patch

From: Arjan van de Ven <arjanv@redhat.com>

Below is a patch to enable 4Kb stacks for x86. The goal of this is to

1) Reduce footprint per thread so that systems can run many more threads
(for the java people)

2) Reduce the pressure on the VM from order > 0 allocations. We see real-life
workloads (granted, with 2.4, but the fundamental fragmentation issue isn't
solved in 2.6 and isn't solvable in theory) where this can be a problem.
In addition, order > 0 allocations can make the VM "stutter" and add
latency due to having to do much more work trying to defragment.
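
To unpack point 2: on x86 a thread's kernel stack must be physically contiguous, so an 8Kb stack requires an order-1 allocation (two adjacent free pages), while a 4Kb stack fits in a single order-0 page. Here is a rough sketch of the idea; __get_free_pages() is the real interface, but alloc_thread_stack() and THREAD_STACK_ORDER are placeholder names of mine, not the kernel's:

#include <linux/gfp.h>

/* Sketch (simplified, not the actual 2.6 allocator code): a thread
 * stack must be physically contiguous, so its size dictates the
 * allocation order. Order 0 is one 4Kb page; order 1 is two
 * contiguous pages (8Kb). On a fragmented box, finding two adjacent
 * free pages can be slow or fail; finding one almost never does. */
#ifdef CONFIG_4KSTACKS
#define THREAD_STACK_ORDER 0    /* 4Kb: a single page */
#else
#define THREAD_STACK_ORDER 1    /* 8Kb: two contiguous pages */
#endif

static unsigned long alloc_thread_stack(void)
{
        /* order > 0 requests are the ones that stress the VM */
        return __get_free_pages(GFP_KERNEL, THREAD_STACK_ORDER);
}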

The first two bits of the patch actually affect compiler options in a generic
way: I propose disabling gcc's -funit-at-a-time feature. With it enabled
(and it is the default with -O2), gcc will very aggressively inline
functions, which is nice and all for userspace, but for the kernel this makes
us suffer a gcc deficiency more: gcc is extremely bad at sharing stack slots,
for example in a situation like this:

if (some_condition)
        function_A();
else
        function_B();

with -funit-at-a-time, both function_A() and function_B() might get inlined;
however, the stack usage of the parent function then grows by the stack usage
of both functions COMBINED, instead of the maximum of the two. Even with the
normal 8Kb stacks this is a danger, since we see some functions grow 3Kb to
4Kb of stack use this way. With 4Kb stacks, 4Kb of stack usage growth
obviously is deadly ;-( but even with 8Kb stacks it's a pure lottery.
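
To make that concrete, here is a small example of my own (not from the patch) of the kind of code the changelog is describing; whether the two buffers actually end up sharing a stack slot depends on your gcc version and flags:

#include <string.h>

/* Two mutually exclusive branches, each with a 1Kb local buffer.
 * If gcc inlines both callees into parent() and fails to share
 * their stack slots, parent()'s frame is sized for buf_a AND
 * buf_b (~2Kb), even though only one is live at run time. */
static void function_A(void)
{
        char buf_a[1024];
        memset(buf_a, 0xAA, sizeof(buf_a));
}

static void function_B(void)
{
        char buf_b[1024];
        memset(buf_b, 0xBB, sizeof(buf_b));
}

void parent(int some_condition)
{
        if (some_condition)
                function_A();   /* may be inlined */
        else
                function_B();   /* may be inlined */
}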
Disabling -funit-at-a-time also exposes another thing in the -mm tree: the
always_inline attribute is considered harmful by the gcc folks, in that when
gcc decides NOT to inline a function marked this way, it throws an error.
Disabling -funit-at-a-time disables some of the aggressive inlining (e.g. of
large functions that come later in the .c file), so this would make your tree
not compile.
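
For reference, this is how the always_inline attribute is spelled in gcc; the kernel wraps it in an __always_inline macro (a sketch from memory of the 2.6 headers, so treat the exact definition as approximate):

/* gcc's always_inline attribute: if gcc cannot inline the function
 * (for example because its body hasn't been seen yet and unit-at-a-time
 * is disabled), the build fails with an error instead of silently
 * emitting an out-of-line call. */
#define __always_inline inline __attribute__((always_inline))

static __always_inline int add_one(int x)
{
        return x + 1;
}

int caller(int v)
{
        return add_one(v);      /* inlined, or a compile error */
}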

The 4k stackness of the kernel is included in modversions, so people don't
load 4k-stack modules into 8k-stack kernels.
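
If you're wondering how that check is enforced: as far as I can tell from the 2.6 sources, the 4K-stack flag becomes part of the module's version-magic string, and the module loader refuses a module whose string doesn't match the running kernel's. Roughly (a sketch; the real header may differ):

/* Sketch modeled on include/linux/vermagic.h: kernels built with
 * CONFIG_4KSTACKS add a "4KSTACKS " tag to the vermagic string that
 * is compiled into every module, so an 8k-stack module fails to load
 * on a 4k-stack kernel and vice versa. */
#ifdef CONFIG_4KSTACKS
#define MODULE_VERMAGIC_4KSTACKS "4KSTACKS "
#else
#define MODULE_VERMAGIC_4KSTACKS ""
#endif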

At present 4k stacks are selectable in config. When the feature has settled
in we should remove the 8k option. This will break the nvidia modules. But
Fedora uses 4k stacks so a new nvidia driver is expected soon"