
[Xen-devel] [PATCH 3/3] tools: introduce parameter max_ranges.



A new parameter, max_ranges, is added to set the upper limit on the
number of ranges that can be tracked inside one ioreq server rangeset.

An ioreq server uses a group of rangesets to track the I/O or memory
resources to be emulated. The default limit is 256 ranges per
rangeset. Yet there are circumstances under which the limit should
exceed the default. E.g. in XenGT, when tracking the per-process
graphics translation tables on Intel Broadwell platforms, the number
of page tables concerned can reach several thousand (in this case,
8192 is normally a large enough value). Users who set this item
explicitly are expected to know the specific scenarios that
necessitate this configuration.
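
For illustration, a guest whose device model needs to track that many
ranges could set the limit in its xl configuration file; the value
8192 below is only an assumed example for such a XenGT-style workload:

    max_ranges = 8192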

Signed-off-by: Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
---
 docs/man/xl.cfg.pod.5           | 17 +++++++++++++++++
 tools/libxl/libxl_dom.c         |  3 +++
 tools/libxl/libxl_types.idl     |  1 +
 tools/libxl/xl_cmdimpl.c        |  4 ++++
 xen/arch/x86/hvm/hvm.c          |  7 ++++++-
 xen/include/public/hvm/params.h |  5 ++++-
 6 files changed, 35 insertions(+), 2 deletions(-)

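As a usage sketch outside this patch: a toolstack component driving
libxc directly could set the new parameter the same way the
libxl_dom.c hunk below does. The helper name, the domid handling and
the value 8192 are assumptions chosen only for illustration:

    #include <xenctrl.h>
    #include <xen/hvm/params.h>

    /* Raise the per-rangeset tracking limit before the guest runs. */
    static int set_max_ranges(xc_interface *xch, uint32_t domid)
    {
        /* 8192 is an assumed value for a XenGT-style workload. */
        return xc_hvm_param_set(xch, domid, HVM_PARAM_MAX_RANGES, 8192);
    }
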
diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index 8899f75..562563d 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -962,6 +962,23 @@ FIFO-based event channel ABI support up to 131,071 event channels.
 Other guests are limited to 4095 (64-bit x86 and ARM) or 1023 (32-bit
 x86).
 
+=item B<max_ranges=N>
+
+Limit the maximum number of ranges that can be tracked inside one
+ioreq server rangeset.
+
+An ioreq server uses a group of rangesets to track the I/O or
+memory resources to be emulated. By default, this item is not set.
+Leaving it unset, or setting it to 0, results in the upper limit
+taking its default value of 256. Yet there are circumstances under
+which the upper limit inside one rangeset should exceed the
+default. E.g. in XenGT, when tracking the per-process graphics
+translation tables on Intel Broadwell platforms, the number of
+page tables concerned can reach several thousand (in this case,
+8192 is normally a large enough value). Users who set this item
+explicitly are expected to know the specific scenarios that
+necessitate this configuration.
+
 =back
 
 =head2 Paravirtualised (PV) Guest Specific Options
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 47971a9..607b0c4 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -288,6 +288,9 @@ static void hvm_set_conf_params(xc_interface *handle, uint32_t domid,
                     libxl_defbool_val(info->u.hvm.nested_hvm));
     xc_hvm_param_set(handle, domid, HVM_PARAM_ALTP2M,
                     libxl_defbool_val(info->u.hvm.altp2m));
+    if (info->u.hvm.max_ranges > 0)
+        xc_hvm_param_set(handle, domid, HVM_PARAM_MAX_RANGES,
+                        info->u.hvm.max_ranges);
 }
 
 int libxl__build_pre(libxl__gc *gc, uint32_t domid,
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 9ad7eba..c936265 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -518,6 +518,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
                                        ("serial_list",      libxl_string_list),
                                        ("rdm", libxl_rdm_reserve),
                                        ("rdm_mem_boundary_memkb", MemKB),
+                                       ("max_ranges", uint32),
                                        ])),
                  ("pv", Struct(None, [("kernel", string),
                                       ("slack_memkb", MemKB),
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 25507c7..9359de7 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -1626,6 +1626,10 @@ static void parse_config_data(const char *config_source,
 
         if (!xlu_cfg_get_long (config, "rdm_mem_boundary", &l, 0))
             b_info->u.hvm.rdm_mem_boundary_memkb = l * 1024;
+
+        if (!xlu_cfg_get_long (config, "max_ranges", &l, 0))
+            b_info->u.hvm.max_ranges = l;
+
         break;
     case LIBXL_DOMAIN_TYPE_PV:
     {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index d59e7bc..2f85089 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -943,6 +943,10 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
 {
     unsigned int i;
     int rc;
+    unsigned int max_ranges =
+        ( s->domain->arch.hvm_domain.params[HVM_PARAM_MAX_RANGES] > 0 ) ?
+        s->domain->arch.hvm_domain.params[HVM_PARAM_MAX_RANGES] :
+        MAX_NR_IO_RANGES;
 
     if ( is_default )
         goto done;
@@ -965,7 +969,7 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
         if ( !s->range[i] )
             goto fail;
 
-        rangeset_limit(s->range[i], MAX_NR_IO_RANGES);
+        rangeset_limit(s->range[i], max_ranges);
     }
 
  done:
@@ -6012,6 +6016,7 @@ static int hvm_allow_set_param(struct domain *d,
     case HVM_PARAM_IOREQ_SERVER_PFN:
     case HVM_PARAM_NR_IOREQ_SERVER_PAGES:
     case HVM_PARAM_ALTP2M:
+    case HVM_PARAM_MAX_RANGES:
         if ( value != 0 && a->value != value )
             rc = -EEXIST;
         break;
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index 81f9451..7732087 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -210,6 +210,9 @@
 /* Boolean: Enable altp2m */
 #define HVM_PARAM_ALTP2M       35
 
-#define HVM_NR_PARAMS          36
+/* Maximum ranges to be tracked in one rangeset by ioreq server */
+#define HVM_PARAM_MAX_RANGES  36
+
+#define HVM_NR_PARAMS          37
 
 #endif /* __XEN_PUBLIC_HVM_PARAMS_H__ */
-- 
1.9.1

