
Re: [Xen-devel] [PATCH RFC] x86/hvm: unify HVM and PVH hypercall tables.



On May 15, 2014 6:30 AM, Tim Deegan <tim@xxxxxxx> wrote:
>
> At 14:39 -0400 on 08 May (1399556383), Konrad Rzeszutek Wilk wrote: 
> > On Thu, May 08, 2014 at 04:31:30PM +0100, Tim Deegan wrote: 
> > > Stage one of many in merging PVH and HVM code in the hypervisor. 
> > > 
> > > This exposes a few new hypercalls to HVM guests, all of which were 
> > > already available to PVH ones: 
> > > 
> > >  - XENMEM_memory_map / XENMEM_machine_memory_map / 
> > >    XENMEM_machphys_mapping: 
> > >    These are basically harmless, if a bit useless to plain HVM. 
> > > 
> > >  - VCPUOP_send_nmi / VCPUOP_initialise / VCPUOP[_is]_up / VCPUOP_down 
> > >    This will eventually let HVM guests bring up APs the way PVH ones do. 
> > >    For now, the VCPUOP_initialise paths are still gated on is_pvh. 
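
For illustration, a minimal guest-side sketch of the VCPUOP_send_nmi path,
assuming the Linux HYPERVISOR_vcpu_op() hypercall wrapper and the public
vcpu.h interface header; the helper name is hypothetical:

    #include <xen/interface/vcpu.h>   /* VCPUOP_send_nmi */
    #include <asm/xen/hypercall.h>    /* HYPERVISOR_vcpu_op() */

    /* Ask Xen to inject an NMI into one of the calling domain's own
     * vcpus.  Returns 0 on success or a negative errno from Xen. */
    static int send_nmi_to_vcpu(unsigned int vcpu)
    {
            /* VCPUOP_send_nmi takes no extra argument, hence NULL. */
            return HYPERVISOR_vcpu_op(VCPUOP_send_nmi, vcpu, NULL);
    }

On PVH this is the only way to signal an NMI to another vcpu; on HVM the
same effect can also be had through the emulated local APIC.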
> > 
> > I had a similar patch to enable this under HVM, and found that 
> > if the guest issues VCPUOP_send_nmi we get the following in Linux: 
> > 
> > [    3.611742] Corrupted low memory at c000fffc (fffc phys) = 00029b00 
> > [    2.386785] Corrupted low memory at ffff88000000fff8 (fff8 phys) = 2990000000000 
> > 
> > http://mid.gmane.org/20140422183443.GA6817@xxxxxxxxxxxxxxxxxxx 
>
> Right, thanks. Do you think that's likely to be a hypervisor bug, or 
> just a "don't do that then"?

It is a bug, but I don't know where, and I have not had a chance to
investigate it further.

My feeling is that it is in the APIC emulation, but I might be quite off.
>
> AFAICT PVH domains need this as they have no other way of sending 
> NMIs. 
>

Perhaps. The vAPIC that Boris had been looking at could make this work via the 
APIC path.
> Tim. 
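
The "APIC path" here would deliver the NMI through the (virtual) local
APIC's interrupt command register instead of a hypercall. A rough
bare-metal-style sketch, assuming the default xAPIC MMIO base is mapped
at its physical address; the offsets are the architectural ICR registers:

    #include <stdint.h>

    #define APIC_BASE   0xfee00000u  /* default xAPIC MMIO base */
    #define APIC_ICR_LO 0x300u       /* interrupt command reg, low half */
    #define APIC_ICR_HI 0x310u       /* destination APIC ID, bits 24-31 */
    #define APIC_DM_NMI (4u << 8)    /* delivery mode 100b = NMI */

    static inline void apic_write(uint32_t reg, uint32_t val)
    {
            *(volatile uint32_t *)(uintptr_t)(APIC_BASE + reg) = val;
    }

    /* Send an NMI to the CPU with the given physical APIC ID.
     * The write to ICR_LO triggers delivery; the vector field is
     * ignored for NMI delivery mode. */
    static void apic_send_nmi(uint8_t apicid)
    {
            apic_write(APIC_ICR_HI, (uint32_t)apicid << 24);
            apic_write(APIC_ICR_LO, APIC_DM_NMI);
    }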

 

