RE: [Xen-devel] Re: Reproducible data corruption on xen-unstable
> On Sun, 6 Feb 2005, I wrote:
> > A syscall was made (connect). Immediately before the syscall, the
> > floating-point stack was empty; immediately after the syscall, the
> > floating-point stack was nonempty, and the TS flag (Task Switch) was
> > _cleared_.
>
> I now have an "easier" way to reproduce this problem. Apply the patch
> below to a xen0 kernel; it checks the FPU state against TS. What it
> basically does is:
>
>     if (TS == 0 && fpu_stack_size > 0) panic("Corrupt FPU");
>
> An equivalent patch against a non-Xen kernel yields no problems that I
> can detect, but patching a xen0 kernel with it causes a panic and
> reboot as soon as the machine hits the graphical login manager (in my
> case, kdm). (Of course, it might be specific to kdm, or my hardware,
> or who knows what.)

The fact that the bug is triggered when the X server starts makes me
suspect that the vm86 system call may have something to do with this.
Please can you find out whether your X server is using the vm86 bios or
vesa modules? Also instrument the vm86 syscall in Linux just to make
sure. It may be possible to get the X server to run without those
modules -- you could try moving them off the module search path.

Also, what CPU type do you compile your kernel for? I'm wondering
whether this is an AMD-specific issue.

Another place to look is fpu_kernel_begin()/end(), to see whether
they're correct.

Ian
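For concreteness, here is a minimal sketch of the kind of TS-versus-FPU-stack
check being described, assuming a native i386 kernel where CR0 can be read
directly (a paravirtualized xenlinux kernel would need to consult its shadow
of TS instead). fpu_stack_depth() and check_fpu_vs_ts() are invented names;
this is not the poster's actual patch:

#include <linux/kernel.h>               /* panic() */

/* Count the occupied x87 register-stack slots via the tag word. */
static int fpu_stack_depth(void)
{
        struct { char bytes[28]; } env; /* 32-bit fnstenv image */
        unsigned short twd;
        int i, depth = 0;

        /* fnstenv masks all FPU exceptions as a side effect, so reload
         * the environment immediately to leave the state untouched. */
        __asm__ __volatile__("fnstenv %0\n\tfldenv %0" : "+m" (env));

        twd = *(unsigned short *)(env.bytes + 8); /* tag word, offset 8 */
        for (i = 0; i < 8; i++)
                if (((twd >> (2 * i)) & 3) != 3)  /* tag 11b == empty */
                        depth++;
        return depth;
}

/* TS clear means "this task owns the FPU"; a non-empty register stack
 * at a point where it must be empty is the corruption described above.
 * && short-circuits, so fnstenv only runs while TS is clear and cannot
 * raise a device-not-available fault. */
static void check_fpu_vs_ts(void)
{
        unsigned long cr0;

        __asm__ __volatile__("movl %%cr0, %0" : "=r" (cr0));
        if (!(cr0 & 0x8 /* CR0.TS */) && fpu_stack_depth() > 0)
                panic("Corrupt FPU");
}

Calling check_fpu_vs_ts() on syscall entry and exit would localize the
first point at which the inconsistency appears.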
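The vm86 instrumentation Ian asks for could be as simple as the following
hypothetical helper (trace_vm86_entry() is an invented name), called first
thing from sys_vm86old() and sys_vm86() in arch/i386/kernel/vm86.c:

#include <linux/kernel.h>       /* printk() */
#include <linux/sched.h>        /* current */

/* Log every entry into the vm86 syscall so the FPU corruption can be
 * correlated with vm86 use by the X server's bios/vesa modules. */
static inline void trace_vm86_entry(const char *which)
{
        printk(KERN_DEBUG "vm86: %s entered by %s (pid %d)\n",
               which, current->comm, current->pid);
}

If the "Corrupt FPU" panic always follows one of these messages, that
points squarely at the vm86 path.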
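On the fpu_kernel_begin()/end() question: the bodies below are only an
illustration of the invariant those routines must maintain, not the xenlinux
source, and they assume the 2.6-era i386 clts()/stts()/read_cr0() helpers
from asm/system.h. Kernel FPU use has to clear TS for its own duration and
then restore both TS and the user's FPU context exactly as it found them;
getting the restore half wrong leaves a later task with TS clear and a stale
register stack -- the reported symptom:

#include <linux/preempt.h>      /* preempt_disable()/preempt_enable() */
#include <asm/system.h>         /* clts(), stts(), read_cr0() */

static unsigned long saved_ts;  /* illustration only; per-CPU in real code */

static void fpu_kernel_begin_sketch(void)
{
        preempt_disable();              /* FPU state is per-CPU */
        saved_ts = read_cr0() & 0x8;    /* remember CR0.TS */
        clts();                         /* let the kernel use the FPU */
        /* ...save the user's live FPU context here, if one is loaded... */
}

static void fpu_kernel_end_sketch(void)
{
        /* ...restore the user's FPU context here... */
        if (saved_ts)
                stts();                 /* put TS back as we found it */
        preempt_enable();
}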