
Re: [Xen-devel] HVMlite ABI specification DRAFT A



At 18:48 +0100 on 04 Feb (1454611694), Roger Pau Monné wrote:
> Hello,
> 
> I've Cced a bunch of people who have expressed interest in the HVMlite
> design/implementation, from both a Xen and an OS point of view. If you
> would like to be removed, please say so and I will remove you from
> future iterations. The same applies if you want to be added to the Cc.
> 
> This is an initial draft on the HVMlite design and implementation. I've
> mixed certain aspects of the design with the implementation, because I
> think we are quite constrained by what is feasible to implement in
> certain areas, so not discussing it would make the document incomplete.
> I might be wrong on that, so feel free to say so if you would prefer a
> different approach. At least this should get the conversation started
> on a couple of pending items regarding HVMlite. I don't want
> to spoil the fun, but IMHO they are:
> 
>  - Local APIC: should we _always_ provide a local APIC to HVMlite
>    guests?
>  - HVMlite hardware domain: can we get rid of the PHYSDEV ops and PIRQ
>    event channels?
>  - HVMlite PCI-passthrough: can we get rid of pciback/pcifront?
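
For reference, the PIRQ plumbing the second bullet refers to looks roughly
like this from the hardware domain's side today. This is only a minimal
guest-kernel C sketch against Xen's public physdev.h and event_channel.h
interfaces (route_gsi_to_evtchn() is a made-up helper, error handling is
trimmed, and the returned event-channel port still has to be wired up to a
handler):

/*
 * Minimal sketch (not HVMlite-specific): how a PV or classic-HVM
 * hardware domain routes a legacy interrupt today.  route_gsi_to_evtchn()
 * is a hypothetical helper; the hypercalls and structures are from Xen's
 * public interface as exposed to a Linux guest kernel.
 */
#include <xen/interface/physdev.h>
#include <xen/interface/event_channel.h>
#include <asm/xen/hypercall.h>

static int route_gsi_to_evtchn(int gsi)
{
        struct physdev_map_pirq map = {
                .domid = DOMID_SELF,
                .type  = MAP_PIRQ_TYPE_GSI,
                .index = gsi,
                .pirq  = -1,    /* let Xen pick the PIRQ number */
        };
        struct evtchn_bind_pirq bind = { .flags = BIND_PIRQ__WILL_SHARE };
        int rc;

        /* PHYSDEVOP_map_pirq: turn the physical GSI into a PIRQ. */
        rc = HYPERVISOR_physdev_op(PHYSDEVOP_map_pirq, &map);
        if (rc)
                return rc;

        /*
         * EVTCHNOP_bind_pirq: deliver that PIRQ as an event channel
         * rather than through an (emulated) IO-APIC/local APIC.
         */
        bind.pirq = map.pirq;
        rc = HYPERVISOR_event_channel_op(EVTCHNOP_bind_pirq, &bind);
        if (rc)
                return rc;

        return bind.port;       /* caller hooks this port up to its handler */
}

The question in that bullet is whether an HVMlite hardware domain should keep
using this path, or should instead take GSIs/MSIs through its local APIC the
way native hardware would.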

FWIW, I think we should err on the side of _not_ emulating hardware or
providing ACPI; if the hypervisor interfaces are insufficient/unpleasant
we should make them better.

I understand that PCI passthrough is difficult because the hardware
design is so awkward to retrofit isolation onto.  But I'm very
uncomfortable with the idea of faking out things like PCI root
complexes inside the hypervisor -- as a way of getting rid of qemu,
it's laughable.  I'd be much happier saying that PCI passthrough
requires PV or legacy HVM until a better plan can be found
(e.g. depriv helpers).

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

