
Re: [Xen-devel] [PATCH] x86/HVM: Merge HVM and PVH hypercall tables



On 12/10/2015 07:30 AM, Jan Beulich wrote:
> On 08.12.15 at 15:20, <boris.ostrovsky@xxxxxxxxxx> wrote:
>> The tables are almost identical and therefore there is little reason to
>> keep both sets.
>>
>> PVH needs 3 extra hypercalls:
>> * mmuext_op. PVH uses MMUEXT_TLB_FLUSH_MULTI and MMUEXT_INVLPG_MULTI to
>>    optimize TLB flushing. Since HVMlite guests may decide to use them as
>>    well, we can allow these two commands for all guests in an HVM container.
> I must be missing something here: Especially for the INVLPG variant
> I can't see what use it could be for a PVH guest, as it necessarily
> would act on a different address space (the other one may have at
> least some effect due to hvm_flush_guest_tlbs()).

This is done out of xen_flush_tlb_others(), which is what PVH guests use.
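Roughly, that path boils down to the following (a simplified sketch of the Linux side only --- the real code batches the op through the multicall machinery and clears the current CPU out of the mask; the function name below is made up):

#include <linux/cpumask.h>
#include <linux/types.h>
#include <xen/interface/xen.h>      /* struct mmuext_op, MMUEXT_*, DOMID_SELF */
#include <asm/xen/hypercall.h>      /* HYPERVISOR_mmuext_op()                 */

/* Ask Xen to flush the TLB on the given set of vCPUs, either for a
 * single linear address (INVLPG) or entirely. */
static void flush_tlb_on(const struct cpumask *cpus, unsigned long addr,
                         bool single_page)
{
    struct {
        struct mmuext_op op;
        DECLARE_BITMAP(mask, NR_CPUS);
    } args;

    /* arg2 carries the vCPU mask for the _MULTI variants. */
    cpumask_copy(to_cpumask(args.mask), cpus);
    args.op.arg2.vcpumask = to_cpumask(args.mask);

    if (single_page) {
        args.op.cmd = MMUEXT_INVLPG_MULTI;        /* flush one address */
        args.op.arg1.linear_addr = addr;
    } else {
        args.op.cmd = MMUEXT_TLB_FLUSH_MULTI;     /* flush everything  */
    }

    /* One op, no success count needed, acting on ourselves. */
    HYPERVISOR_mmuext_op(&args.op, 1, NULL, DOMID_SELF);
}

So from the guest's point of view only the two _MULTI commands are ever issued on this path.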

And yes --- there indeed seems to be little reason to do that. But it is already there, so I am not sure we can stop it from working for existing PVH guests.


> And then, if those two really are meant to be enabled, why would
> their _LOCAL and _ALL counterparts not be? And similarly,
> MMUEXT_FLUSH_CACHE{,_GLOBAL} may then be valid to expose.

This is only used by PVH guests as an optimization (see the comment in xen_init_mmu_ops()), so there is no need to make a hypercall for the LOCAL operations. As for ALL/GLOBAL --- maybe we should allow those too, even though they are not currently used (in Linux).

(In principle we could allow the LOCAL ones too, assuming any of this is needed at all.)
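If we do extend the set, I'd expect the hypervisor-side check to end up as little more than a subcommand filter in front of do_mmuext_op(), something along these lines (entirely hypothetical --- the function name and the exact list are made up for illustration, this is not the actual patch):

/* Hypothetical filter: which MMUEXT subcommands a guest running in an
 * HVM container may issue before the request is passed on to
 * do_mmuext_op(). */
static bool hvm_mmuext_cmd_allowed(unsigned int cmd)
{
    switch ( cmd )
    {
        /* Used today by Linux PVH via xen_flush_tlb_others(). */
    case MMUEXT_TLB_FLUSH_MULTI:
    case MMUEXT_INVLPG_MULTI:
        /* Arguably harmless to permit as well, per the discussion above. */
    case MMUEXT_TLB_FLUSH_LOCAL:
    case MMUEXT_INVLPG_LOCAL:
    case MMUEXT_TLB_FLUSH_ALL:
    case MMUEXT_INVLPG_ALL:
    case MMUEXT_FLUSH_CACHE:
    case MMUEXT_FLUSH_CACHE_GLOBAL:
        return true;
    default:
        return false;
    }
}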


> Wasn't it much rather that PVH Dom0 needed e.g. MMUEXT_PIN_Ln_TABLE
> to deal with foreign guests' page tables?

That I haven't considered.

Especially given that PVH dom0 is not booting for me, as I just found out:

...
(XEN) d0v0 EPT violation 0x1aa (-w-/r-x) gpa 0x000000c0008116 mfn 0xc0008 type 5
(XEN) d0v0 Walking EPT tables for GFN c0008:
(XEN) d0v0  epte 800000082bf50007
(XEN) d0v0  epte 800000082bf19007
(XEN) d0v0  epte 800000043c6f9007
(XEN) d0v0  epte 80500000c0008805
(XEN) d0v0  --- GLA 0xffffc90020008116
(XEN) domain_crash called from vmx.c:2816
(XEN) Domain 0 (vcpu#0) crashed on cpu#0:
(XEN) ----[ Xen-4.7-unstable  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    0
(XEN) RIP:    0010:[<ffffffff816150dc>]
(XEN) RFLAGS: 0000000000010046   CONTEXT: hvm guest (d0v0)
(XEN) rax: 000000000000001d   rbx: 0000000000000000   rcx: ffff88014700f9b8
(XEN) rdx: 00000000000000ff   rsi: 0000000000000000   rdi: 0000000000000000
(XEN) rbp: ffff88014700fa18   rsp: ffff88014700f9e8   r8: ffff88014700f9c0
(XEN) r9:  000000000000001d   r10: ffffffff8189c7f0   r11: 0000000000000000
(XEN) r12: ffffc90020008000   r13: ffffc90020008116   r14: 0000000000000002
(XEN) r15: 000000000000001d   cr0: 0000000080050033   cr4: 00000000000406f0
(XEN) cr3: 0000000001c0e000   cr2: 0000000000000000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: 0010
(XEN) Guest stack trace from rsp=ffff88014700f9e8:
(XEN)   Fault while accessing guest memory.
(XEN) Hardware Dom0 crashed: rebooting machine in 5 seconds.
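For anyone reading along: per the SDM's EPT-violation exit-qualification bits, 0x1aa decodes as a data write hitting a GFN whose EPT entry is read/execute only, which is what the "(-w-/r-x)" above is showing (and type 5 should be p2m_mmio_direct, if I'm reading p2m_type_t right). A trivial standalone decoder, nothing Xen-specific about it:

#include <stdio.h>
#include <stdint.h>

/* Decode an EPT-violation exit qualification into the same
 * access/permission triples as the "(-w-/r-x)" in the log above. */
static void decode_ept_qual(uint64_t q)
{
    printf("access: %c%c%c  permissions: %c%c%c  GLA valid: %s\n",
           (q & (1ull << 0)) ? 'r' : '-',   /* data read         */
           (q & (1ull << 1)) ? 'w' : '-',   /* data write        */
           (q & (1ull << 2)) ? 'x' : '-',   /* instruction fetch */
           (q & (1ull << 3)) ? 'r' : '-',   /* entry readable    */
           (q & (1ull << 4)) ? 'w' : '-',   /* entry writable    */
           (q & (1ull << 5)) ? 'x' : '-',   /* entry executable  */
           (q & (1ull << 7)) ? "yes" : "no");
}

int main(void)
{
    decode_ept_qual(0x1aa);   /* -> access: -w-  permissions: r-x  GLA valid: yes */
    return 0;
}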


We haven't been running regression tests for PVH dom0, so I don't know how long this has been broken.

-boris

