
Re: [Xen-devel] RFC: marking the PVH 64bit ABI as stable



On 06/02/2015 12:51 PM, Stefano Stabellini wrote:
> On Tue, 2 Jun 2015, Jan Beulich wrote:
>> On 02.06.15 at 17:11, <roger.pau@xxxxxxxxxx> wrote:
>>> Hello,
>>>
>>> The document describing the PVH interface was committed 9 months ago
>>> [1], and since then there hasn't been any change regarding the
>>> interface. PVH is still missing features needed for feature parity
>>> with pure PV, mainly:
>>>
>>>   - DomU migration support.
>>>   - PCI passthrough support.
>>>   - 32bit support.
>>>   - AMD support.
>>>
>>> AFAICT, however, none of these features are going to change the
>>> current ABI.

>> This is your guess; I don't think there's any guarantee.

> Let's make it a guarantee.


>> All the more so since that talk was about making PVH uniformly enter
>> the kernel in 32-bit mode.

> What talk? IRL talks are irrelevant in this context. If it is not on
> the list, it doesn't exist.


>>> PCI passthrough might expand it by adding new hypercalls, but I
>>> don't think this should prevent us from marking the current ABI as
>>> stable. ARM, for example, doesn't have PCI passthrough or migration
>>> support yet, but its ABI has been marked as stable.
>>>
>>> To that end, I would like to request that the 64bit PVH ABI be
>>> marked as stable for DomUs. This is needed so that external projects
>>> (like PVH support for grub2) can progress.

>> Understandable, but no, not before all the FIXMEs in the tree have
>> been dealt with.

> What is your timeline for that? In fact, does anybody have any
> timelines for it?
>
> We need to have a clear idea of what exactly needs to happen. We also
> need to have confidence that it is going to happen in a reasonable
> time frame. At the moment we have various mumblings about things, but
> we don't have a clear breakdown of the outstanding work and names
> associated with each work item.
>
> Is anybody going to volunteer to write that todo list?
>
> Are we going to be able to find enough volunteers with the right
> skills to be confident that PVH is going to be out of "experimental"
> within a reasonable time frame? It is clear that some of the clean-ups
> require a hypervisor expert.
>
> If not, I suggest we rethink our priorities and consider dropping PVH
> entirely. I don't think it is fair to expect Roger or anybody else to
> keep their efforts up on PVH when we don't know if we'll be able to
> land it.



Roger, Tim, Elena, Konrad and I had a conversation a few months ago and at that time we came up with a (somewhat informal) list of what we knew was broken:

  - 32-bit guests cannot boot.
  - Does not work on AMD hardware.
  - Migration.
  - PCI passthrough.
  - Memory ballooning.
  - Multiple VBDs, NICs, etc.
  - CPUID filtering. There is no filtering done at all, which means
that certain CPUID flags are exposed to the guest (see the sketch after
this list).
  - The x2APIC will cause a crash if the NMI handler is invoked.
  - The APERF MSR, being exposed, will cause inferior scheduling
decisions.
  - Working with shadow code (which is what we use when migrating HVM
guests). The nice side-benefit of fixing this is that we could then run
PVH on machines without VMX or SVM support.
  - TSC modes are broken.
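
To make the CPUID item concrete, here is a minimal sketch, in plain C
with made-up names (this is not actual Xen code), of the kind of
masking that is currently missing. The x2APIC and APERF/MPERF bits
serve as examples since both appear in the list above; note that hiding
a bit only stops advertising the feature, so the corresponding MSR
accesses would still need separate handling.

#include <stdint.h>

#define CPUID1_ECX_X2APIC      (1u << 21)  /* leaf 1, ECX bit 21 */
#define CPUID6_ECX_APERFMPERF  (1u << 0)   /* leaf 6, ECX bit 0  */

static void pvh_filter_cpuid(uint32_t leaf, uint32_t *eax,
                             uint32_t *ebx, uint32_t *ecx,
                             uint32_t *edx)
{
    switch (leaf) {
    case 1:
        *ecx &= ~CPUID1_ECX_X2APIC;      /* hide x2APIC */
        break;
    case 6:
        *ecx &= ~CPUID6_ECX_APERFMPERF;  /* hide APERF/MPERF feedback */
        break;
    default:
        break;  /* everything else passes through unfiltered */
    }
}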

Obviously some things in this list are large(er) projects and some are simply bugs.

I picked 32-bit support, Elena is looking into AMD, and Roger agreed to look at migration and passthrough for now. Plus, at some point we will probably need to think about how to move PVH to the "feature-flag" model that Tim proposed at a hackathon last year (where we don't have HVM/PV/PVH guest types, but rather guests with various features enabled).
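
A rough sketch of that idea, with invented names throughout (nothing
here exists in Xen today): a guest would carry a set of orthogonal
capability flags rather than a fixed type, and today's guest types
become particular flag combinations.

#include <stdbool.h>
#include <stdint.h>

#define XGF_HAP            (1u << 0)  /* hardware-assisted paging    */
#define XGF_PV_HYPERCALLS  (1u << 1)  /* PV hypercall interface      */
#define XGF_EMUL_PLATFORM  (1u << 2)  /* emulated platform devices   */
#define XGF_PV_ENTRY       (1u << 3)  /* PV-style kernel entry point */

/* Today's guest types reduce to particular flag combinations... */
static const uint32_t guest_hvm = XGF_HAP | XGF_EMUL_PLATFORM;
static const uint32_t guest_pvh = XGF_HAP | XGF_PV_HYPERCALLS
                                  | XGF_PV_ENTRY;

/* ...and code asks about individual features, not guest types. */
static bool guest_has(uint32_t guest_flags, uint32_t feature)
{
    return (guest_flags & feature) == feature;
}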

(Incidentally, I finally booted a 32-bit PVH guest yesterday. UP only for now.)

So there is more than one person working on it (for a specific definition of the word "working", since we are all constantly preempted by other things that are themselves preemptable by more important things).

-boris



> Maybe we could focus on improving PV on HVM and its security. Maybe we
> could resurrect Intel's HVM Dom0 project
> (http://events.linuxfoundation.org/sites/events/files/slides/HVM%20Dom0.pdf).
> Think of how much farther along we would be if we hadn't started PVH
> in the first place.
>
> P.S.
> This message is not addressed to Jan in particular, but to the larger
> Xen community.





 

