
Re: [Xen-devel] [RFC PATCH 00/16] xenhost support



On 2019-06-07 9:21 a.m., Juergen Gross wrote:
> On 07.06.19 17:22, Joao Martins wrote:
>> On 6/7/19 3:51 PM, Juergen Gross wrote:
>>> On 09.05.19 19:25, Ankur Arora wrote:
>>>> Hi all,
>>>>
>>>> This is an RFC for xenhost support, outlined by Juergen here:
>>>> https://lkml.org/lkml/2019/4/8/67.

>>> First: thanks for all the effort you've put into this series!

>>>> The high-level idea is to provide an abstraction of the Xen
>>>> communication interface, as a xenhost_t.
>>>>
>>>> xenhost_t exposes ops for communication between the guest and Xen
>>>> (hypercall, cpuid, shared_info/vcpu_info, evtchn, grant-table and,
>>>> on top of those, xenbus and ballooning), and these can differ based
>>>> on the kind of underlying Xen: regular, local, and nested.
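
For concreteness, the shape of the abstraction is roughly the
following; this is an illustrative sketch only, and all names below
are invented rather than taken from the patches:

/* Illustrative sketch of the xenhost abstraction; all names invented. */
#include <stdint.h>	/* in-kernel this would come from <linux/types.h> */

typedef enum {
	XENHOST_REGULAR,	/* ordinary guest on a single Xen */
	XENHOST_LOCAL,		/* the local (L1) hypervisor */
	XENHOST_NESTED,		/* the remote L0 hypervisor, reached via L1 */
} xenhost_type_t;

typedef struct xenhost xenhost_t;

struct xenhost_ops {
	/* Issue a hypercall against this particular Xen instance. */
	long (*hypercall)(xenhost_t *xh, unsigned int op, void *arg);

	/* cpuid/shared_info/vcpu_info setup for this instance. */
	uint32_t (*cpuid_base)(xenhost_t *xh);
	int (*setup_shared_info)(xenhost_t *xh);

	/* evtchn and grant-table ops; xenbus and ballooning sit on top. */
	int (*evtchn_send)(xenhost_t *xh, uint32_t port);
	int (*grant_map)(xenhost_t *xh, void *map_ops, unsigned int count);
	int (*grant_unmap)(xenhost_t *xh, void *unmap_ops, unsigned int count);
};

struct xenhost {
	xenhost_type_t type;
	const struct xenhost_ops *ops;
};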

>>> I'm not sure we need to abstract away hypercalls and cpuid. I believe
>>> that in the case of nested Xen, all contact with the L0 hypervisor
>>> should be done via the L1 hypervisor. So we might need to issue some
>>> kind of passthrough hypercall when e.g. granting a page to L0 dom0,
>>> but this should be handled via the grant abstraction (events should
>>> be similar).
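
If I read this right, the proxying would be hidden behind the grant
op, along these lines. HYPERVISOR_xenhost_op() and
XENHOSTOP_grant_proxy are invented names for the sketch (no such
interface exists today); only HYPERVISOR_grant_table_op() and
GNTTABOP_map_grant_ref below are real:

#include <xen/interface/grant_table.h>	/* GNTTABOP_map_grant_ref */
#include <asm/xen/hypercall.h>		/* HYPERVISOR_grant_table_op() */

/* Nested (L1-on-L0) case: we cannot talk to L0 directly, so ask L1 to
 * forward the batch of map ops to L0 and relay the result back.
 * HYPERVISOR_xenhost_op()/XENHOSTOP_grant_proxy are hypothetical. */
static int nested_grant_map(xenhost_t *xh, void *map_ops, unsigned int count)
{
	return HYPERVISOR_xenhost_op(XENHOSTOP_grant_proxy, map_ops, count);
}

/* Regular case: a plain grant-table hypercall to the (only) Xen. */
static int regular_grant_map(xenhost_t *xh, void *map_ops, unsigned int count)
{
	return HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref,
					 map_ops, count);
}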

>> Just to be clear: by "kind of passthrough hypercall" do you mean that
>> (e.g. for every access/modify of the grant table frames) you would
>> proxy the hypercall to L0 Xen via L1 Xen?

> It might be possible to spare some hypercalls by directly writing to
> grant frames mapped into L1 dom0, but in general you are right.
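
Right -- with v1 grant entries, granting access is just a memory write
into the mapped grant frame, so if L0's grant frames are mapped into
L1 dom0, the per-grant hypercall goes away. Something like the
following, modeled on the kernel's gnttab_update_entry_v1():

#include <xen/interface/grant_table.h>	/* struct grant_entry_v1, GTF_* */
#include <asm/barrier.h>		/* wmb() */

/* Grant @gfn to @domid by writing a v1 entry directly into a mapped
 * grant frame; no hypercall involved. @shared points at the mapped
 * frames (gnttab_shared.v1 in the existing grant-table code). */
static void grant_v1_update(struct grant_entry_v1 *shared, grant_ref_t ref,
			    domid_t domid, unsigned long gfn, int readonly)
{
	shared[ref].domid = domid;
	shared[ref].frame = gfn;
	/* Publish domid/frame before the entry is flagged valid. */
	wmb();
	shared[ref].flags = GTF_permit_access |
			    (readonly ? GTF_readonly : 0);
}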
Wouldn't we still need map/unmap_grant_ref?
AFAICS, both the xenhost_direct and the xenhost_indirect cases should be
very similar (apart from the need to proxy in the indirect case).
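
I.e. the call site would be identical in both cases; only the callback
behind xh->ops differs (a direct hypercall vs. the proxied variant
sketched above):

/* Same caller for both cases; xenhost_direct wires grant_map to the
 * plain hypercall, xenhost_indirect to the L1-proxied variant. */
static int xenhost_map_grant_refs(xenhost_t *xh,
				  struct gnttab_map_grant_ref *map_ops,
				  unsigned int count)
{
	return xh->ops->grant_map(xh, map_ops, count);
}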

Ankur



> Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

