
Re: [Xen-devel] [PATCH v14 00/20] Introduce PVH domU support



On Mon, Nov 04, 2013 at 12:14:49PM +0000, George Dunlap wrote:
> Updates:
>  - Fixed bugs in v14:
>    Zombie domains, FreeBSD crash, Crash at 4GiB, HVM crash
>    (Thank you to Roger Pau Monné for fixes to the last 3)
>  - Completely eliminated PV emulation codepath


Odd, you dropped Mukesh's email from the patch series - so he can't
jump in to answer questions right away.

> 
> == RFC ==
> 
> We had talked about accepting the patch series as-is once I had the
> known bugs fixed; but I couldn't help making an attempt at using the
> HVM IO emulation codepaths so that we could completely eliminate
> having to use the PV emulation code, in turn eliminating some of the
> uglier "support" patches required to make the PV emulation code
> capable of running on a PVH guest.  The idea for "admin" pio ranges
> would be that we would use the vmx hardware to allow the guest direct
> access, rather than the "re-execute with guest GPRs" trick that PV
> uses.  (This functionality is not implemented by this patch series, so
> we would need to make sure it was sorted for the dom0 series.)
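For reference, the "direct access" above would presumably come from the
VT-x I/O bitmaps: a port whose bit is clear in the bitmap is not
intercepted at all.  A rough sketch of what granting an "admin" pio
range could look like - the function and parameter names here are
illustrative, not the actual Xen interface:

    #include <stdint.h>

    /*
     * Sketch only: in VT-x, ports 0x0000-0x7fff are covered by I/O
     * bitmap A and ports 0x8000-0xffff by I/O bitmap B.  A set bit
     * forces a vmexit on access; a clear bit lets the guest touch the
     * port directly.  Granting an "admin" pio range would amount to
     * clearing its bits.
     */
    static void allow_direct_pio(uint8_t *io_bitmap_a, uint8_t *io_bitmap_b,
                                 uint16_t start, uint16_t end)
    {
        for ( uint32_t port = start; port <= end; port++ )
        {
            uint8_t *bitmap = (port < 0x8000) ? io_bitmap_a : io_bitmap_b;
            uint32_t bit = port & 0x7fff;

            bitmap[bit / 8] &= (uint8_t)~(1u << (bit % 8));
        }
    }

Which ranges get whitelisted, and where that decision lives, is exactly
the sort of thing that would need sorting out for the dom0 series.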
> 
> The result looks somewhat cleaner to me.  On the other hand, because
> string in & out instructions use the full emulation code, it means
> opening up an extra 6k lines of code to PVH guests, including all the
> complexity of the ioreq path.  (It doesn't actually send ioreqs, but
> since it shares much of the path, it shares much of the complexity.)
> Additionally, I'm not sure I've done it entirely correctly: the guest
> boots and the io instructions it executes seem to be handled
> correctly, but it may not be exercising the corner cases.

The case I think Mukesh was hitting was the 'speaker_io' path. But
perhaps I am misremembering it?

> 
> This also means no support for "legacy" forced invalid ops -- only native
> cpuid is supported in this series.

OK.
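For anyone not following the history: the "forced invalid op" is the
classic PV trick of prefixing cpuid with an invalid opcode plus a "xen"
marker, so the hypervisor traps the #UD and emulates the leaf; a PVH
guest with this series has to use plain cpuid, which vmexits natively.
A sketch of the two forms, modelled on the XEN_EMULATE_PREFIX
convention used by Linux PV guests (an illustration, not copied from
any particular tree):

    #include <stdint.h>

    /* Native form: PVH/HVM guests just execute cpuid; the instruction
     * vmexits and Xen fills in its leaves (0x40000000 and up). */
    static inline void xen_cpuid_native(uint32_t leaf, uint32_t *a,
                                        uint32_t *b, uint32_t *c, uint32_t *d)
    {
        asm volatile ( "cpuid"
                       : "=a" (*a), "=b" (*b), "=c" (*c), "=d" (*d)
                       : "0" (leaf), "2" (0) );
    }

    /* Legacy PV form: ud2a plus the "xen" marker makes the hypervisor
     * trap the invalid opcode and emulate the following cpuid.  This
     * is the path the series no longer supports for PVH. */
    static inline void xen_cpuid_forced(uint32_t leaf, uint32_t *a,
                                        uint32_t *b, uint32_t *c, uint32_t *d)
    {
        asm volatile ( "ud2a ; .ascii \"xen\" ; cpuid"
                       : "=a" (*a), "=b" (*b), "=c" (*c), "=d" (*d)
                       : "0" (leaf), "2" (0) );
    }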
> 
> I have the fixes in another series, if people think it would be better
> to check in exactly what we had with bug fixes ASAP.
> 
> Other "open issues" on the design (which need not stop the series
> going in) include:
> 
>  - Whether a completely separate mode is necessary, or whether just
> having HVM mode with some flags to disable / change certain
> functionality would be better
> 
>  - Interface-wise: Right now PVH is special-cased for bringing up
> CPUs.  Is this what we want to do going forward, or would it be better
> to try to make it more like PV (which was tried before and is hard), or more
> like HVM (which would involve having emulated APICs, &c &c).

How is it hard? From the Linux standpoint it is just a hypercall?
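For the PV case, "just a hypercall" means VCPUOP_initialise followed by
VCPUOP_up: the guest fills in a vcpu_guest_context (entry point, stack,
page tables) and asks Xen to start the vcpu.  A simplified sketch, with
the context set-up and error paths omitted (the header names follow the
Linux guest side and are an assumption here, not a quote of the actual
code):

    #include <xen/interface/vcpu.h>   /* VCPUOP_initialise, VCPUOP_up */
    #include <asm/xen/hypercall.h>    /* HYPERVISOR_vcpu_op() */

    /* Bring up a secondary vcpu the PV way: hand Xen a prepared
     * vcpu_guest_context, then unpause the vcpu. */
    static int bring_up_vcpu(int cpu, struct vcpu_guest_context *ctxt)
    {
        int rc = HYPERVISOR_vcpu_op(VCPUOP_initialise, cpu, ctxt);

        if ( rc )
            return rc;

        return HYPERVISOR_vcpu_op(VCPUOP_up, cpu, NULL);
    }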

> 
> == Summary ==
> 
> This patch series is a reworking of a series developed by Mukesh
> Rathor at Oracle.  The entirety of the design and development was done
> by him; I have only reworked, reorganized, and simplified things in a
> way that I think makes more sense.  The vast majority of the credit
> for this effort therefore goes to him.  This version is labelled v14
> because it is based on his most recent series, v11.
> 
> Because this is based on his work, I retain the "Signed-off-by" in
> patches which are based on his code.  This is not meant to imply that
> he supports the modified version, only that he is involved in
> certifying that the origin of the code for copyright purposes.
> 
> This patch series is broken down into several broad strokes:
> * Miscellaneous fixes or tweaks
> * Code motion, so future patches are simpler
> * Introduction of the "hvm_container" concept, which will form the
> basis for sharing codepaths between hvm and pvh
> * Start with PVH as an HVM container
> * Disable unneeded HVM functionality
> * Enable PV functionality
> * Disable not-yet-implemented functionality
> * Enable toolstack changes required to make PVH guests
> 
> This patch series can also be pulled from this git tree:
>  git://xenbits.xen.org/people/gdunlap/xen.git out/pvh-v14
> 
> The kernel code for PVH guests can be found here:
>  git://oss.oracle.com/git/mrathor/linux.git pvh.v9-muk-1
> (That repo/branch also contains a config file, pvh-config-file)
> 
> Changes in v14 can be found inline; major changes since v13 include:
> 
> * Various bug fixes
> 
> * Use HVM emulation for IO instructions
> 
> * ...thus removing many of the changes required to allow the PV
>   emulation codepath to work for PVH guests
> 
> Changes in v13 can be found inline; major changes since v12 include:
> 
> * Include Mukesh's toolstack patches (v4)
> 
> * Allocate hvm_param struct for PVH domains; remove patch disabling
>   memevents
> 
> For those who have been following the series as it develops, here is a
> summary of the major changes from Mukesh's series (v11->v12):
> 
> * Introduction of "has_hvm_container_*()" macros, rather than using
>   "!is_pv_*".  The patch which introduces this also does the vast
>   majority of the "heavy lifting" in terms of defining PVH.  (A
>   sketch of such a predicate appears after this list.)
> 
> * Effort is made to use as much common code as possible.  No separate
>   vmcs constructor, no separate vmexit handlers.  More of a "start
>   with everything and disable if necessary" approach rather than
>   "start with nothing and enable as needed" approach.
> 
> * One exception is arch_set_info_guest(), where a small amount of code
>   duplication meant a lot fewer "if(!is_pvh_domain())"s in awkward
>   places
> 
> * I rely on things being disabled at a higher level and passed down.
>   For instance, I no longer explicitly disable rdtsc exiting in
>   construct_vmcs(), since that will happen automatically when we're in
>   NEVER_EMULATE mode (which is currently enforced for PVH).  Similarly
>   for nested vmx and things relating to HAP mode.
> 
> * I have also done a slightly more extensive audit of is_pv_* and
>   is_hvm_* and tried to tighten the restrictions.
> 
> * I changed the "enable PVH by setting PV + HAP" approach, replacing it
>   with a separate flag, just like the HVM case, since it makes sense
>   to plan on using shadow in the future (although it is 
> 
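On the has_hvm_container_*() point above: a minimal sketch of what such
a predicate looks like conceptually, assuming a three-way guest_type
field on the domain (illustrative definitions only, not the actual ones
from the series):

    /* Illustrative only: a three-way guest type, with PVH and HVM both
     * counting as "HVM containers" and only classic PV excluded. */
    enum guest_type { guest_type_pv, guest_type_pvh, guest_type_hvm };

    struct domain_example {            /* stand-in for struct domain */
        enum guest_type guest_type;
        /* ... */
    };

    #define is_pv_domain(d)             ((d)->guest_type == guest_type_pv)
    #define is_pvh_domain(d)            ((d)->guest_type == guest_type_pvh)
    #define is_hvm_domain(d)            ((d)->guest_type == guest_type_hvm)
    #define has_hvm_container_domain(d) ((d)->guest_type != guest_type_pv)

The point of spelling it this way rather than "!is_pv" is that code
guarded by has_hvm_container_*() runs for both HVM and PVH, which is
what lets the shared codepaths stay shared.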
> Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
> CC: Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
> CC: Jan Beulich <beulich@xxxxxxxx>
> CC: Tim Deegan <tim@xxxxxxx>
> CC: Keir Fraser <keir@xxxxxxx>
> CC: Ian Jackson <ian.jackson@xxxxxxxxxx>
> CC: Ian Campbell <ian.campbell@xxxxxxxxxx>
> 
