
Re: [Xen-devel] [PATCH for 4.6 v4 2/3] xl/libxl: disallow saving a guest with vNUMA configured



On Fri, 2015-09-11 at 15:31 +0100, Wei Liu wrote:
> On Fri, Sep 11, 2015 at 03:24:21PM +0100, Ian Campbell wrote:
> > On Fri, 2015-09-11 at 15:14 +0100, Wei Liu wrote:
> > > @@ -1636,6 +1638,20 @@ void libxl__domain_save(libxl__egc *egc, libxl__domain_suspend_state *dss)
> > >            | (debug ? XCFLAGS_DEBUG : 0)
> > >            | (dss->hvm ? XCFLAGS_HVM : 0);
> > >  
> > > +    /* Disallow saving a guest with vNUMA configured because migration
> > > +     * stream does not preserve node information.
> > > +     *
> > > +     * Do not differentiate "no vnuma configuration" from "empty vnuma
> > > +     * configuration".
> > > +     */
> > > +    rc = xc_domain_getvnuma(CTX->xch, domid, &nr_vnodes, &nr_vmemranges,
> > > +                            &nr_vcpus, NULL, NULL, NULL);
> > 
> > Sorry for not noticing this before, but this is putting a non-libxl
> > error code in a variable named rc, which is verboten in coding style.
> > 
> 
> My bad. Should have noticed that earlier.
> 
> > Not least because I think it is now possible to get through this
> > function successfully without changing it from the rc == -1 which
> > might be assigned here (in the case where xs_suspend_evtchn_port
> > returns < 0).
> > 
> > Ian.
> 
> Add a new variable called ret to store the return value from the xc
> function call. Here is the patch.
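
For anyone following along: the libxl convention is that rc holds only
libxl error codes such as ERROR_FAIL, while raw libxc return values
belong in a separate variable (ret here). As a minimal sketch of the
resulting check, rather than the exact hunk, and assuming
xc_domain_getvnuma fails with errno == EOPNOTSUPP when the domain has
no vnuma information at all:

    uint32_t nr_vnodes = 0, nr_vmemranges = 0, nr_vcpus = 0;
    int rc;   /* libxl error codes (ERROR_FAIL, ...) only */
    int ret;  /* raw return values from libxc calls */

    ret = xc_domain_getvnuma(CTX->xch, domid, &nr_vnodes, &nr_vmemranges,
                             &nr_vcpus, NULL, NULL, NULL);
    if (ret != -1 || errno != EOPNOTSUPP) {
        /* Success, or failure for any other reason, means the domain
         * has vnuma information, so refuse to save it. */
        LOG(ERROR, "cannot save a guest with vNUMA configured");
        rc = ERROR_FAIL;
        goto out;
    }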
> 
> ---8<---
> From c2e9567fa0c5a00405d3759321c9eefb8ec049fc Mon Sep 17 00:00:00 2001
> From: Wei Liu <wei.liu2@xxxxxxxxxx>
> Date: Wed, 9 Sep 2015 17:11:24 +0100
> Subject: [PATCH] xl/libxl: disallow saving a guest with vNUMA configured
> 
> This is because the migration stream does not preserve node information.
> 
> Note this is not a regression for migration v2 vs legacy migration
> because neither of them preserves node information.
> 
> Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>

Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>

> ---
> Cc: andrew.cooper3@xxxxxxxxxx
> 
> v4:
> 1. Don't differentiate "no vnuma" from "empty vnuma".
> 2. Use ret to store xc function call return value.
> 
> v3:
> 1. Update manpage, code comment and commit message.
> 2. *Don't* check if nomigrate is set.
> ---
>  docs/man/xl.cfg.pod.5   |  2 ++
>  tools/libxl/libxl_dom.c | 18 +++++++++++++++++-
>  2 files changed, 19 insertions(+), 1 deletion(-)
> 
> diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
> index c6345b8..157c855 100644
> --- a/docs/man/xl.cfg.pod.5
> +++ b/docs/man/xl.cfg.pod.5
> @@ -263,6 +263,8 @@ virtual node.
>  
>  Note that virtual NUMA for PV guest is not yet supported, because
>  there is an issue with cpuid handling that affects PV virtual NUMA.
> +Further more, guest with virtual NUMA cannot be saved or migrated

I _think_ (but am not 100% sure) that in the sense you mean it is
"Furthermore". I don't think "Further more," actually means anything.

I can fix as I commit.
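
For context, virtual NUMA is what a guest gets via the vnuma option in
its xl configuration, along these lines (an illustrative sketch of the
xl.cfg(5) list syntax, values invented):

    # Two virtual NUMA nodes, 512M and two vcpus each
    vnuma = [ [ "pnode=0", "size=512", "vcpus=0-1", "vdistances=10,20" ],
              [ "pnode=1", "size=512", "vcpus=2-3", "vdistances=20,10" ] ]

Any guest configured like that is what this patch now refuses to save
or migrate.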

> @@ -1636,6 +1638,20 @@ void libxl__domain_save(libxl__egc *egc, libxl__domain_suspend_state *dss)
>            | (debug ? XCFLAGS_DEBUG : 0)
>            | (dss->hvm ? XCFLAGS_HVM : 0);
>  
> +    /* Disallow saving a guest with vNUMA configured because migration
> +     * stream does not preserve node information.
> +     *
> +     * Do not differentiate "no vnuma configuration" from "empty vnuma
> +     * configuration".

Actually, we do differentiate, since we are checking for one explicitly.
What we are not differentiating is "vnuma enabled and configured" vs
"vnuma enabled but not configured", or something.

How about:

* Reject any domain which has vnuma enabled, even if the configuration is 
* empty. Only domains which have no vnuma configuration at all are 
* supported.
*/

as the second paragraph of the comment?

I can do that on commit too.
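
So with that change the full comment would read:

    /* Disallow saving a guest with vNUMA configured because migration
     * stream does not preserve node information.
     *
     * Reject any domain which has vnuma enabled, even if the
     * configuration is empty. Only domains which have no vnuma
     * configuration at all are supported.
     */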

Ian.




 

