
[PATCH v2 0/6] tools/libs: add missing support of linear p2m_list, cleanup



There are still some corners which don't support the linear p2m list
of pv guests, which was introduced in Linux kernel 3.19 and has been
mandatory for non-legacy versions of Xen since kernel 4.14.

This series adds support for the linear p2m list where it is missing
(COLO support and "xl dump-core").
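
For reference, whether a guest provides the linear p2m list can be seen
from the shared info page. A minimal sketch, assuming the
arch_shared_info layout from xen/include/public/arch-x86/xen.h (the
helper itself is made up for illustration):

#include <stdbool.h>
#include <xen/arch-x86/xen.h>   /* struct arch_shared_info */

/*
 * Illustrative only: a guest advertises the linear (virtually
 * contiguous) p2m list by storing a non-zero page table root in
 * arch.p2m_cr3, with p2m_vaddr holding the virtual address of the list
 * and max_pfn its number of entries. With p2m_cr3 being zero only the
 * legacy 3-level pfn_to_mfn_frame_list_list tree is available.
 */
static bool guest_has_linear_p2m(const struct arch_shared_info *arch)
{
    return arch->p2m_cr3 != 0;
}

The mapping code then has to walk the guest's page tables starting at
p2m_cr3 in order to find the frames backing the list.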

In theory it should be possible to merge the p2m list mapping code of
migration handling and core dump handling, but quite some cleanup is
needed before that becomes possible.

The first three patches of this series fix real problems, so I've put
them at the start of the series in order to make backports easier.

The other three patches are only the first steps of the cleanup. The
main work done here is to concentrate all p2m mapping code in
libxenguest instead of having one implementation each in libxenguest
and libxenctrl.

Merging the two implementations should be rather easy, but it will
require touching many lines of code: the migration handling variant
seems to be the more mature one, but it makes heavy use of the
migration stream specific structures. So I'd like to have some
confirmation that my way of cleaning this up is the right one.

My idea would be to add the data needed for p2m mapping to struct
domain_info_context and to replace the related fields in struct
xc_sr_context with a struct domain_info_context. Modifying the
interface of xc_core_arch_map_p2m() to take most of its current
parameters via struct domain_info_context would then enable the
migration code to use xc_core_arch_map_p2m() for mapping the p2m.
Afterwards xc_core_arch_map_p2m() should look basically like the
current migration p2m mapping code.
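
To illustrate the direction, a rough sketch (the new field names are
invented here, the existing fields and the types are as in
xc_private.h resp. xenctrl.h, and the prototype is not meant to be
final):

struct domain_info_context {
    unsigned int guest_width;        /* bytes per guest unsigned long */
    unsigned long p2m_size;          /* number of p2m entries */

    /* New: state needed for mapping the p2m list. */
    unsigned long p2m_frames;        /* number of frames backing the list */
    xen_pfn_t *live_p2m;             /* the mapped p2m list */
    shared_info_any_t *live_shinfo;  /* mapped shared info page */
};

/*
 * Sketch of the resulting interface: nearly all parameters are taken
 * from resp. returned via the context, so the migration code could
 * embed a struct domain_info_context in struct xc_sr_context and call
 * this function for mapping the p2m, too.
 */
int xc_core_arch_map_p2m(xc_interface *xch,
                         struct domain_info_context *dinfo,
                         xc_dominfo_t *info);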

Any comments on that plan?

Changes in V2:
- added missing #include in ocaml stub

Juergen Gross (6):
  tools/libs/guest: fix max_pfn setting in map_p2m()
  tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m
    table
  tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
  tools/libs: move xc_resume.c to libxenguest
  tools/libs: move xc_core* from libxenctrl to libxenguest
  tools/libs/guest: make some definitions private to libxenguest

 tools/include/xenctrl.h                       |  63 ---
 tools/include/xenguest.h                      |  63 +++
 tools/libs/ctrl/Makefile                      |   4 -
 tools/libs/ctrl/xc_core_x86.c                 | 223 ----------
 tools/libs/ctrl/xc_domain.c                   |   2 -
 tools/libs/ctrl/xc_private.h                  |  43 +-
 tools/libs/guest/Makefile                     |   4 +
 .../libs/{ctrl/xc_core.c => guest/xg_core.c}  |   7 +-
 .../libs/{ctrl/xc_core.h => guest/xg_core.h}  |  15 +-
 .../xc_core_arm.c => guest/xg_core_arm.c}     |  31 +-
 .../xc_core_arm.h => guest/xg_core_arm.h}     |   0
 tools/libs/guest/xg_core_x86.c                | 399 ++++++++++++++++++
 .../xc_core_x86.h => guest/xg_core_x86.h}     |   0
 tools/libs/guest/xg_dom_boot.c                |   2 +-
 tools/libs/guest/xg_domain.c                  |  19 +-
 tools/libs/guest/xg_offline_page.c            |   2 +-
 tools/libs/guest/xg_private.h                 |  16 +-
 .../{ctrl/xc_resume.c => guest/xg_resume.c}   |  69 +--
 tools/libs/guest/xg_sr_save_x86_pv.c          |   2 +-
 tools/ocaml/libs/xc/xenctrl_stubs.c           |   1 +
 20 files changed, 545 insertions(+), 420 deletions(-)
 delete mode 100644 tools/libs/ctrl/xc_core_x86.c
 rename tools/libs/{ctrl/xc_core.c => guest/xg_core.c} (99%)
 rename tools/libs/{ctrl/xc_core.h => guest/xg_core.h} (92%)
 rename tools/libs/{ctrl/xc_core_arm.c => guest/xg_core_arm.c} (72%)
 rename tools/libs/{ctrl/xc_core_arm.h => guest/xg_core_arm.h} (100%)
 create mode 100644 tools/libs/guest/xg_core_x86.c
 rename tools/libs/{ctrl/xc_core_x86.h => guest/xg_core_x86.h} (100%)
 rename tools/libs/{ctrl/xc_resume.c => guest/xg_resume.c} (80%)

-- 
2.26.2