Re: [Xen-devel] [PATCH v1][RFC] xen balloon driver numa support, libxl interface
On Mon, 2013-08-12 at 21:18 +0800, Yechen Li wrote:
> ---
>
>   This small patch implements a numa support of memory operation for libxl
>   The command is: xl mem-set-numa [-e] vmid memorysize nodeid
>   To pass the parameters to balloon driver in kernel, I add a file of
>   xen-store
>   as /local/domain/(id)/memory/target_nid, hoping this is ok....

It might be OK if you document it in docs/misc/xenstore-paths.markdown.

>   It's my first time submitting a patch, please point out the problems so that
>   I could work better in future, thanks very much!

Please see http://wiki.xen.org/wiki/Submitting_Xen_Patches, in
particular the bit about Signed-off-by.

>
>  tools/libxl/libxl.c       | 14 ++++++++++++--
>  tools/libxl/libxl.h       |  1 +
>  tools/libxl/xl.h          |  1 +
>  tools/libxl/xl_cmdimpl.c  | 45 +++++++++++++++++++++++++++++++++++++++++++++
>  tools/libxl/xl_cmdtable.c |  7 +++++++
>  5 files changed, 66 insertions(+), 2 deletions(-)
>
> diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> index 81785df..f027d59 100644
> --- a/tools/libxl/libxl.c
> +++ b/tools/libxl/libxl.c
> @@ -3642,10 +3642,17 @@ retry:
>      }
>      return 0;
>  }
> -
>  int libxl_set_memory_target(libxl_ctx *ctx, uint32_t domid,
>      int32_t target_memkb, int relative, int enforce)
>  {
> +    return libxl_set_memory_target_numa(ctx, domid, target_memkb, relative,
> +                                        enforce, -1, 0);
> +}
> +
> +int libxl_set_memory_target_numa(libxl_ctx *ctx, uint32_t domid,
> +    int32_t target_memkb, int relative, int enforce,
> +    int node_specify, bool nodeexact)
> +{
>      GC_INIT(ctx);
>      int rc = 1, abort_transaction = 0;
>      uint32_t memorykb = 0, videoram = 0;
> @@ -3754,7 +3761,10 @@ retry_transaction:
>          abort_transaction = 1;
>          goto out;
>      }
> -
> +    //lcc:

Please don't leave debugging droppings in place.
> +    LIBXL__LOG(ctx, LIBXL__LOG_DEBUG, "target_nid = %d focus= %d",
> +               node_specify, (int) nodeexact);
> +    libxl__xs_write(gc, t, libxl__sprintf(gc, "%s/memory/target_nid",
> +                    dompath), "%"PRId32" %"PRId32, node_specify, (int)nodeexact);
>      libxl__xs_write(gc, t, libxl__sprintf(gc, "%s/memory/target",
>                      dompath), "%"PRIu32, new_target_memkb);
>      rc = xc_domain_getinfolist(ctx->xch, domid, 1, &info);
> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> index be19bf5..e21d8c3 100644
> --- a/tools/libxl/libxl.h
> +++ b/tools/libxl/libxl.h
> @@ -628,6 +628,7 @@ int libxl_domain_core_dump(libxl_ctx *ctx, uint32_t domid,
>
>  int libxl_domain_setmaxmem(libxl_ctx *ctx, uint32_t domid, uint32_t target_memkb);
>  int libxl_set_memory_target(libxl_ctx *ctx, uint32_t domid, int32_t target_memkb, int relative, int enforce);
> +int libxl_set_memory_target_numa(libxl_ctx *ctx, uint32_t domid, int32_t target_memkb, int relative, int enforce, int node_specify, bool nodeexact);

This needs a LIBXL_HAVE style declaration.

I'm unsure about adding another function as opposed to extending the
current ABI using the LIBXL_API_VERSION compatibility provisions.

> diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
> index 326a660..ab918c0 100644
> --- a/tools/libxl/xl_cmdtable.c
> +++ b/tools/libxl/xl_cmdtable.c
> @@ -199,6 +199,13 @@ struct cmd_spec cmd_table[] = {
>        "Set the current memory usage for a domain",
>        "<Domain> <MemMB['b'[bytes]|'k'[KB]|'m'[MB]|'g'[GB]|'t'[TB]]>",
>      },
> +    { "mem-set-numa",

Perhaps instead of adding a new function the existing mem-set should
take a -n <node> parameter?
> +      &main_memset_numa, 0, 1,
> +      "Set the current memory usage for a domain, with numa node specified",
> +      "[-e] <Domain> <MemMB['b'[bytes]|'k'[KB]|'m'[MB]|'g'[GB]|'t'[TB]]> <nid>",
> +      "-e, --exact: operatrion will force on this node exactly"

"operation"

> +      "nid: the machine(physical) node id\n"
> +    },
>      { "button-press",
>        &main_button_press, 0, 1,
>        "Indicate an ACPI button press to the domain",

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel