
[Xen-devel] [PATCH v10 0/3] tools/libxc: use superpages



Using superpages on the receiving dom0 avoids performance regressions in the restored guest.

Olaf

v10:
 coding style in xc_sr_bitmap API
 reset bitmap size on free
 check for empty bitmap in xc_sr_bitmap API
 add comment to struct x86_hvm_sp, keep the short name
 style and type changes in x86_hvm_punch_hole
 do not mark VGA hole as busy in x86_hvm_setup
 call decrease_reservation once for all pfns
 rename variable in x86_hvm_populate_pfns
 call decrease_reservation in 2MB chunks if possible

v9:
 update hole checking in x86_hvm_populate_pfns
 add out of bounds check to xc_sr_test_and_set/clear_bit
v8:
 remove double check of 1G/2M idx in x86_hvm_populate_pfns
v7:
 cover holes that span multiple superpages
v6:
 handle freeing of partly populated superpages correctly
 more DPRINTFs
v5:
 send correct version, rebase was not fully finished
v4:
 restore trailing "_bit" in bitmap function names
 keep track of gaps between previous and current batch
 split alloc functionality in x86_hvm_allocate_pfn
v3:
 clear pointer in xc_sr_bitmap_free
 some coding style changes
 use getdomaininfo.max_pages to avoid Over-allocation check
 trim bitmap function names, drop trailing "_bit"
 add some comments
v2:
 split into individual commits

based on staging c39cf093fc ("x86/asm: add .file directives")


Olaf Hering (3):
  tools/libxc: move SUPERPAGE macros to common header
  tools/libxc: add API for bitmap access for restore
  tools/libxc: use superpages during restore of HVM guest

 tools/libxc/xc_dom_x86.c            |   5 -
 tools/libxc/xc_private.h            |   5 +
 tools/libxc/xc_sr_common.c          |  41 +++
 tools/libxc/xc_sr_common.h          | 103 ++++++-
 tools/libxc/xc_sr_restore.c         | 141 +---------
 tools/libxc/xc_sr_restore_x86_hvm.c | 536 ++++++++++++++++++++++++++++++++++++
 tools/libxc/xc_sr_restore_x86_pv.c  |  72 ++++-
 7 files changed, 755 insertions(+), 148 deletions(-)


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
