
RE: [Xen-devel] Could someone help to explain the code under /xen/arch/x86/hvm



Thanks, George.
 
What is EXIT_REASON_EPT_VIOLATION for in hvm_vmx_exit_reason_name?
Does it refer to a guest page fault?
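 
For context, here is my rough mental model of the dispatch (a
simplified sketch with illustrative handler names; the real code
seems to be vmx_vmexit_handler() in xen/arch/x86/hvm/vmx/vmx.c):

    /* Sketch of a VMX exit dispatcher: illustrative, not Xen's
     * exact code.  EXIT_REASON_* are the constants from the VMX
     * headers; the handle_*() helpers are made up for this
     * example. */
    static void vmexit_dispatch(unsigned int exit_reason)
    {
        switch ( exit_reason )
        {
        case EXIT_REASON_EPT_VIOLATION:
            /* The guest-physical to host-physical (EPT/p2m)
             * translation failed, so the hypervisor's paging code
             * has to fix it up.  An ordinary guest page fault is
             * resolved inside the guest without a VM exit. */
            handle_ept_violation();
            break;
        case EXIT_REASON_IO_INSTRUCTION:
            /* Port IO: emulate in the hypervisor or forward to
             * qemu. */
            handle_pio();
            break;
        default:
            break;
        }
    }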
 
> Date: Fri, 18 Feb 2011 10:28:41 +0000
> Subject: Re: [Xen-devel] Could someone help to explain the code under /xen/arch/x86/hvm
> From: George.Dunlap@xxxxxxxxxxxxx
> To: tinnycloud@xxxxxxxxxxx
> CC: xen-devel@xxxxxxxxxxxxxxxxxxx
>
> As the name suggests, it contains code related to hardware-assisted
> virtualization (HVM). hvm/vmx/* have to do specifically with Intel's
> HVM technology, and hvm/svm/* have to do with AMD's HVM technology.
> Overall activities:
> * Dealing with virtualizing privileged instructions (pagetables, CR3,
> LDTs, IDTs, traps, interrupts, &c)
> * Serving as an interface to pagetable functionality, either using HAP
> (hardware-assisted paging) or shadow pagetables
> * Dealing with device IO: PIO and MMIO. Most of these are passed back
> to qemu, but some are handled in the hypervisor (see the sketch after
> this list).
> * Implementing in-hypervisor devices
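>
> A rough sketch of how an IO exit might be handed to qemu (the field
> and function names are illustrative, not Xen's exact ioreq ABI):
>
>     #include <stdint.h>
>
>     /* IO request passed to qemu via a shared page. */
>     struct io_request {
>         uint64_t addr;    /* port number or MMIO address */
>         uint64_t data;    /* value written, or space for a read */
>         uint32_t size;    /* access width in bytes */
>         uint8_t  dir;     /* 0 = write, 1 = read */
>         uint8_t  is_mmio; /* MMIO access vs. port IO */
>     };
>
>     static void forward_io(const struct io_request *req)
>     {
>         /* Copy *req into the shared page, notify qemu over an
>          * event channel, and pause the vcpu until the reply
>          * arrives. */
>         (void)req;
>     }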
>
> The instruction emulation routines are abstracted such that they can
> be shared between shadow pagetable code and MMIO/PIO.
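>
> The emulator can be parameterised over callbacks, so each caller
> supplies its own memory accessors. A generic sketch (the names are
> illustrative, not Xen's actual interface, which is built around
> x86_emulate()):
>
>     struct emulate_ops {
>         /* Read or write the memory behind an emulated access. */
>         int (*read)(unsigned long addr, void *val,
>                     unsigned int bytes);
>         int (*write)(unsigned long addr, const void *val,
>                      unsigned int bytes);
>     };
>
>     int emulate_insn(struct cpu_regs *regs,
>                      const struct emulate_ops *ops);
>
> Shadow pagetable code passes ops that touch real guest memory;
> MMIO/PIO code passes ops that turn the access into an IO request
> for qemu.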
>
> Hopefully that gives you an idea as you explore on your own. :-)
>
> -George
>
> On Fri, Feb 18, 2011 at 9:10 AM, tinnycloud <tinnycloud@xxxxxxxxxxx> wrote:
> > Hi:
> >
> >        I am trying to understand more of the Xen code.
> >
> >        Regarding /xen/arch/x86/hvm, I would like to know what the
> > code is for.
> >
> >    What is it trying to emulate? It looks like it is related to IO;
> > what is the logic?
> >
> >        Many thanks.
> >
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@xxxxxxxxxxxxxxxxxxx
> > http://lists.xensource.com/xen-devel
> >
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

