
Re: [Xen-devel] [PATCH v15 00/18] Introduce PVH domU support



Acked-by: Eddie Dong <eddie.dong@xxxxxxxxx>

-----Original Message-----
From: George Dunlap [mailto:george.dunlap@xxxxxxxxxxxxx] 
Sent: Monday, November 11, 2013 10:57 PM
To: xen-devel@xxxxxxxxxxxxx
Cc: George Dunlap; Mukesh Rathor; Jan Beulich; Tim Deegan; Keir Fraser; Ian 
Jackson; Ian Campbell; Nakajima, Jun; Dong, Eddie
Subject: [PATCH v15 00/18] Introduce PVH domU support

== Status ==

01   a  Allow vmx_update_debug_state to be called when v!=current
02   A  libxc: Move temporary grant table mapping to end of memory
03   a  pvh prep: code motion
04 n    pvh: Tolerate HVM guests having no ioreq page
05   a  Introduce pv guest type and has_hvm_container macros
06 *    pvh: Introduce PVH guest type
07      pvh: Disable unneeded features of HVM containers
08 *  ! pvh: vmx-specific changes
09  ra  pvh: Do not allow PVH guests to change paging modes
10   a  pvh: PVH access to hypercalls
11  Ra  pvh: Use PV e820
12 *  ! pvh: Set up more PV stuff in set_info_guest
13 *  ! pvh: PV cpuid
14 *  ! pvh: Use PV handlers for IO
15   A  pvh: Disable 32-bit guest support for now
16   a  pvh: Restrict tsc_mode to NEVER_EMULATE for now
17   a  pvh: Documentation
18   a  PVH xen tools: libxc changes to build a PVH guest.
19   a  PVH xen tools: libxl changes to create a PVH guest.

Key
  *: Non-trivial changes in v15.
  n: New in v15
a/r: acked / reviewed (lowercase for 1, capitals for >1)
  !: Still missing any review from the necessary maintainers (VMX maintainers)

This series addresses review comments from versions 13 and 14.

Additionally, it generalizes the PVH IO path from the previous series by
allowing HVM guests in general to tolerate not having a backing device model,
and then registering a PIO handler for PVH guests that calls into the PV IO
handlers.
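
As a rough illustration of the shape this takes (the helper names
guest_io_read/guest_io_write and register_portio_handler follow Xen's
existing PV and HVM IO code, but the signatures below are a sketch, not a
quote from the series):

    /* Sketch only: a PIO intercept that forwards to the PV IO emulation. */
    static int handle_pvh_io(int dir, uint32_t port, uint32_t bytes,
                             uint32_t *val)
    {
        struct vcpu *curr = current;
        struct cpu_user_regs *regs = guest_cpu_user_regs();

        if ( dir == IOREQ_WRITE )
            guest_io_write(port, bytes, *val, curr, regs);
        else
            *val = guest_io_read(port, bytes, curr, regs);

        return X86EMUL_OKAY;
    }

    /* Registered once for PVH guests at domain initialisation, e.g.:   */
    /*     register_portio_handler(d, 0, 0x10000, handle_pvh_io);       */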

Major "to-do" or "open issues" (which need not stop the series going
in) include:

 - Get rid of the extra mode, and make PVH just HVM with some flags

 - Implement full PV set_info_guest, to make the cpu bring-up code the same

 - Whether to support forced invalid ops.  At the moment the only effect
   of not having this is that xen-detect claims to be in an HVM Xen guest
   rather than a PV Xen guest (see the sketch just after this list).
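
For context, the "forced invalid op" is the forced-emulation prefix (ud2a
followed by the ASCII tag "xen") that tools such as xen-detect prepend to
cpuid so that a PV-aware Xen traps and emulates it.  A minimal sketch of
such a probe, assuming the conventional prefix, looks roughly like the
snippet below; if the hypervisor does not emulate it for PVH guests, the
ud2a is delivered as an invalid-opcode fault instead, and xen-detect falls
back to the HVM-style CPUID check:

    /* Sketch of a PV-style CPUID probe using the forced-emulation prefix. */
    #define XEN_EMULATE_PREFIX "ud2a ; .ascii \"xen\" ; "

    static void pv_cpuid_probe(uint32_t leaf, uint32_t regs[4])
    {
        asm volatile ( XEN_EMULATE_PREFIX "cpuid"
                       : "=a" (regs[0]), "=b" (regs[1]),
                         "=c" (regs[2]), "=d" (regs[3])
                       : "0" (leaf), "1" (0), "2" (0) );
    }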

== Summary ==

This patch series is a reworking of a series developed by Mukesh Rathor at 
Oracle.  The entirety of the design and development was done by him; I have 
only reworked, reorganized, and simplified things in a way that I think makes 
more sense.  The vast majority of the credit for this effort therefore goes to 
him.  This version is labelled v15 because it is based on his most recent 
series, v11.

Because this is based on his work, I retain the "Signed-off-by" in patches 
which are based on his code.  This is not meant to imply that he supports the 
modified version, only that he is involved in certifying the origin of the 
code for copyright purposes.

This patch series is broken down into several broad strokes:
* Miscellaneous fixes or tweaks
* Code motion, so future patches are simpler
* Introduction of the "hvm_container" concept, which will form the basis for 
sharing code paths between HVM and PVH
* Start with PVH as an HVM container
* Disable unneeded HVM functionality
* Enable PV functionality
* Disable not-yet-implemented functionality
* Enable toolstack changes required to make PVH guests

This patch series can also be pulled from this git tree:
 git://xenbits.xen.org/people/gdunlap/xen.git out/pvh-v15

The kernel code for PVH guests can be found here:
 git://oss.oracle.com/git/mrathor/linux.git pvh.v9-muk-1 (That repo/branch also 
contains a config file, pvh-config-file)

Changes in v14 can be found inline; major changes since v13 include:

* Various bug fixes

* Use HVM emulation for IO instructions

* ...thus removing many of the changes required to allow the PV
  emulation codepath to work for PVH guests

Changes in v13 can be found inline; major changes since v12 include:

* Include Mukesh's toolstack patches (v4)

* Allocate hvm_param struct for PVH domains; remove patch disabling
  memevents

For those who have been following the series as it develops, here is a summary 
of the major changes from Mukesh's series (v11->v12):

* Introduction of "has_hvm_container_*()" macros, rather than using
  "!is_pv_*".  The patch which introduces this also does the vast
  majority of the "heavy lifting" in terms of defining PVH.
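
  A sketch of what this looks like in the headers (the names below follow
  the spirit of the patch rather than quoting it):

    /* Sketch: a three-way guest type; PVH and HVM share the            */
    /* "has an HVM container" predicate, while PV remains distinct.     */
    enum guest_type {
        guest_type_pv, guest_type_pvh, guest_type_hvm
    };

    #define is_pv_domain(d)             ((d)->guest_type == guest_type_pv)
    #define is_pvh_domain(d)            ((d)->guest_type == guest_type_pvh)
    #define is_hvm_domain(d)            ((d)->guest_type == guest_type_hvm)
    #define has_hvm_container_domain(d) ((d)->guest_type != guest_type_pv)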

* Effort is made to use as much common code as possible.  No separate
  vmcs constructor, no separate vmexit handlers.  More of a "start
  with everything and disable if necessary" approach than a "start
  with nothing and enable as needed" approach.

* One exception is arch_set_info_guest(), where a small amount of code
  duplication meant a lot fewer "if(!is_pvh_domain())"s in awkward
  places.

* I rely on things being disabled at a higher level and passed down.
  For instance, I no longer explicitly disable rdtsc exiting in
  construct_vmcs(), since that will happen automatically when we're in
  NEVER_EMULATE mode (which is currently enforced for PVH).  Similarly
  for nested vmx and things relating to HAP mode.
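
  As an illustration of that principle (the field and flag names follow
  Xen's existing VMX and time code, but this is a sketch rather than an
  excerpt from the series): with tsc_mode forced to NEVER_EMULATE, the
  generic "virtualise the TSC" flag is already clear, so VMCS construction
  needs no PVH-specific case for rdtsc exiting:

    /* Sketch: rdtsc exiting just follows the generic vtsc flag, which  */
    /* NEVER_EMULATE mode leaves clear -- no is_pvh_domain() needed.    */
    if ( d->arch.vtsc )
        v->arch.hvm_vmx.exec_control |= CPU_BASED_RDTSC_EXITING;
    else
        v->arch.hvm_vmx.exec_control &= ~CPU_BASED_RDTSC_EXITING;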

* I have also done a slightly more extensive audit of is_pv_* and
  is_hvm_* and tried to apply more restrictions.

* I changed the "enable PVH by setting PV + HAP" approach, replacing it
  instead with a separate flag, just like the HVM case, since it makes
  sense to plan on using shadow in the future (although it is not yet
  supported).
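
  To illustrate "a separate flag, just like the HVM case" (the DOMCRF_pvh
  name below is a placeholder; only the pattern of mapping a domain-creation
  flag to a guest type is the point):

    /* Sketch: pick the guest type from explicit creation flags rather   */
    /* than inferring PVH from "PV + HAP".  DOMCRF_pvh is hypothetical.  */
    if ( domcr_flags & DOMCRF_hvm )
        d->guest_type = guest_type_hvm;
    else if ( domcr_flags & DOMCRF_pvh )
        d->guest_type = guest_type_pvh;
    else
        d->guest_type = guest_type_pv;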

Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
CC: Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
CC: Jan Beulich <beulich@xxxxxxxx>
CC: Tim Deegan <tim@xxxxxxx>
CC: Keir Fraser <keir@xxxxxxx>
CC: Ian Jackson <ian.jackson@xxxxxxxxxx>
CC: Ian Campbell <ian.campbell@xxxxxxxxxx>
CC: Jun Nakajima <jun.nakajima@xxxxxxxxx>
CC: Eddie Dong <eddie.dong@xxxxxxxxx>


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

