
Re: [Xen-devel] (XEN) d0:v0: unhandled page fault (ec=0009)



On Wed, Aug 04, 2010 at 10:12:02PM +0200, Stefan Kuhne wrote:
> Am 04.08.2010 16:09, schrieb Konrad Rzeszutek Wilk:
> > On Wed, Aug 04, 2010 at 10:45:20AM +0200, Stefan Kuhne wrote:
> 
> Hello Konrad,
> 
> >> with 2.6.32.x i get the same failure.
> > 
> > And what .config are you using? Are you basing it from Pasi's
> > known ones?
> > 
> my .config from second try is at [1].

OK, that is not what Pasi has:
http://pasik.reaktio.net/xen/pv_ops-dom0-debug/config-2.6.32.10-pvops-dom0-xen-stable-x86_32

Well, I can take a look at this and spin off a kernel, but I won't be
able to get to it until after Aug 16th (LinuxCon is next week).

I would suggest you read over:
http://wiki.xensource.com/xenwiki/XenParavirtOps

and read the section titled "Getting the current development version" in the
meantime.

Also, if you are feeling adventurous, you could try to launch a debugger
and try to dissect which instruction is the offending one. Folks here on
the mailing list would be more than happy to help you when you show them
assembler code. Here are some threads (read the whole thing) on how it is
done:

http://lists.xensource.com/archives/html/xen-devel/2010-04/msg00424.html
http://www.mailinglistarchive.com/xen-devel@xxxxxxxxxxxxxxxxxxx/msg61385.html
http://copilotco.com/mail-archives/xen.2009/msg08734.html

Make sure your kernel is compiled with CONFIG_DEBUG_INFO=y
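A rough sketch of the first steps, assuming a finished build tree (KDIR
and the example address below are placeholders, not values from your
actual crash report):

```shell
#!/bin/sh
# Check whether a build tree was configured with CONFIG_DEBUG_INFO=y:
check_debug_info() {
    if grep -q '^CONFIG_DEBUG_INFO=y' "${1:-.}/.config" 2>/dev/null; then
        echo "debug info: yes"
    else
        echo "debug info: no - rebuild with CONFIG_DEBUG_INFO=y first"
    fi
}

check_debug_info "${KDIR:-.}"

# With debug info present, map the faulting EIP from the Xen page-fault
# message back to a source line, then disassemble around it to see the
# offending instruction (c01234ab is a made-up example address):
#   addr2line -e "$KDIR/vmlinux" c01234ab
#   objdump -d "$KDIR/vmlinux" --start-address=0xc0123480 \
#           --stop-address=0xc01234d0
```

The threads linked above walk through reading the resulting assembler.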
> 
> Regards,
> Stefan Kuhne
> 
> -- 
> [1]: http://skweb.buetow.org/Linux/EisXen/Dom0-Kernel/testing/2.6.32.x-xen/
> 



> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

