
Re: [Xen-devel] [PATCH v6 04/10] libxc: Introduce xc_domain_setvnuma to set vNUMA



On Fri, 2014-07-18 at 01:50 -0400, Elena Ufimtseva wrote:
> With the introduction of the XEN_DOMCTL_setvnumainfo
> in patch titled: "xen: vnuma topology and subop hypercalls"
> we put in the plumbing here to use from the toolstack. The user
> is allowed to call this multiple times if they wish so.
> It will error out if the nr_vnodes or nr_vcpus is zero.
> 
> Signed-off-by: Elena Ufimtseva <ufimtseva@xxxxxxxxx>
> ---
>  tools/libxc/xc_domain.c |   63 +++++++++++++++++++++++++++++++++++++++++++++++
>  tools/libxc/xenctrl.h   |    9 +++++++
>  2 files changed, 72 insertions(+)
> 
> diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> index 0230c6c..a5625b5 100644
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -2123,6 +2123,69 @@ int xc_domain_set_max_evtchn(xc_interface *xch, uint32_t domid,
>      return do_domctl(xch, &domctl);
>  }
>  
> +/* Plumbing Xen with vNUMA topology */
> +int xc_domain_setvnuma(xc_interface *xch,
> +                        uint32_t domid,
> +                        uint16_t nr_vnodes,
> +                        uint16_t nr_vcpus,
> +                        vmemrange_t *vmemrange,
> +                        unsigned int *vdistance,
> +                        unsigned int *vcpu_to_vnode,
> +                        unsigned int *vnode_to_pnode)
> +{
> +    int rc;
> +    DECLARE_DOMCTL;
> +    DECLARE_HYPERCALL_BOUNCE(vmemrange, sizeof(*vmemrange) * nr_vnodes,
> +                                    XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
> +    DECLARE_HYPERCALL_BOUNCE(vdistance, sizeof(*vdistance) *
> +                                    nr_vnodes * nr_vnodes,
> +                                    XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
> +    DECLARE_HYPERCALL_BOUNCE(vcpu_to_vnode, sizeof(*vcpu_to_vnode) * nr_vcpus,
> +                                    XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
> +    DECLARE_HYPERCALL_BOUNCE(vnode_to_pnode, sizeof(*vnode_to_pnode) *
> +                                    nr_vnodes,
> +                                    XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
> +    if ( nr_vnodes == 0 ) {

{'s on the next line please (throughout)

> +        errno = EINVAL;
> +        return -1;
> +    }
> +
> +    if ( !vdistance || !vcpu_to_vnode || !vmemrange || !vnode_to_pnode ) {
> +        PERROR("%s: Cant set vnuma without initializing topology", __func__);
> +        errno = EINVAL;
> +        return -1;
> +    }
> +
> +    if ( xc_hypercall_bounce_pre(xch, vmemrange)      ||
> +         xc_hypercall_bounce_pre(xch, vdistance)      ||
> +         xc_hypercall_bounce_pre(xch, vcpu_to_vnode)  ||
> +         xc_hypercall_bounce_pre(xch, vnode_to_pnode) ) {
> +        PERROR("%s: Could not bounce buffers!", __func__);
> +        errno = EFAULT;

You will leak whichever of these succeeded before the failure. You
can set rc and goto an out label on the exit path, which already does
the cleanup by calling bounce_post.

> +        return -1;
> +    }
> +
> +    set_xen_guest_handle(domctl.u.vnuma.vmemrange, vmemrange);
> +    set_xen_guest_handle(domctl.u.vnuma.vdistance, vdistance);
> +    set_xen_guest_handle(domctl.u.vnuma.vcpu_to_vnode, vcpu_to_vnode);
> +    set_xen_guest_handle(domctl.u.vnuma.vnode_to_pnode, vnode_to_pnode);
> +
> +    domctl.cmd = XEN_DOMCTL_setvnumainfo;
> +    domctl.domain = (domid_t)domid;
> +    domctl.u.vnuma.nr_vnodes = nr_vnodes;
> +
> +    rc = do_domctl(xch, &domctl);
> +
> +    xc_hypercall_bounce_post(xch, vmemrange);
> +    xc_hypercall_bounce_post(xch, vdistance);
> +    xc_hypercall_bounce_post(xch, vcpu_to_vnode);
> +    xc_hypercall_bounce_post(xch, vnode_to_pnode);
> +
> +    if ( rc )
> +        errno = EFAULT;

Why override the errno from do_domctl? Surely there are other failure
modes than EFAULT which can occur?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

