Re: [Xen-devel] [PATCH RFC] x86/hvm: unify HVM and PVH hypercall tables.
On Thu, May 15, 2014 at 04:35:57PM -0700, Mukesh Rathor wrote:
> On Thu, 15 May 2014 10:32:24 -0400
> Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx> wrote:
>
> > On May 15, 2014 6:30 AM, Tim Deegan <tim@xxxxxxx> wrote:
> > >
> > > At 14:39 -0400 on 08 May (1399556383), Konrad Rzeszutek Wilk wrote:
> > > > On Thu, May 08, 2014 at 04:31:30PM +0100, Tim Deegan wrote:
> > > > > Stage one of many in merging PVH and HVM code in the
> > > > > hypervisor.
> > > > >
> > > > > This exposes a few new hypercalls to HVM guests, all of which
> > > > > were already available to PVH ones:
> > > > >
> > > > >  - XENMEM_memory_map / XENMEM_machine_memory_map /
> > > > >    XENMEM_machphys_mapping: These are basically harmless, if a
> > > > >    bit useless to plain HVM.
> > > > >
> > > > >  - VCPUOP_send_nmi / VCPUOP_initialise / VCPUOP[_is]_up /
> > > > >    VCPUOP_down: This will eventually let HVM guests bring up APs
> > > > >    the way PVH ones do.  For now, the VCPUOP_initialise paths are
> > > > >    still gated on is_pvh.
> > > >
> > > > I had a similar patch to enable this under HVM and found out that
> > > > if the guest issues VCPUOP_send_nmi we get in Linux:
> > > >
> > > > [    3.611742] Corrupted low memory at c000fffc (fffc phys) = 00029b00
> > > > [    2.386785] Corrupted low memory at ffff88000000fff8 (fff8 phys) = 2990000000000
> > > >
> > > > http://mid.gmane.org/20140422183443.GA6817@xxxxxxxxxxxxxxxxxxx
> > >
> > > Right, thanks.  Do you think that's likely to be a hypervisor bug,
> > > or just a "don't do that then"?
> >
> > It is a bug. But I don't know where and have not had a chance to
> > investigate this further.
> >
> > My feeling is that it is APIC emulation but I might be quite off.
> >
> > > AFAICT PVH domains need this as they have no other way of sending
> > > NMIs.
> >
> > Perhaps. The vAPIC that Boris had been looking at could make this
> > work via the APIC path.
>
> But, VCPUOP_send_nmi works for PVH right, and only HVM has problem?

I did not test PVH - just tried with HVM. It might be that PVH won't
have the issue.

> mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel