
Re: [Xen-devel] [PATCH v2 1/7] xen: vNUMA support for guests.



>>> On 14.11.13 at 04:26, Elena Ufimtseva <ufimtseva@xxxxxxxxx> wrote:
> +    case XEN_DOMCTL_setvnumainfo:
> +    {
> +        unsigned int i = 0, dist_size;
> +        uint nr_vnodes;

"unsigned int" or "uint" consistently please (preferably the former).

Also the initializer for "i" seems pointless.
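
I.e. presumably just

        unsigned int i, dist_size;
        unsigned int nr_vnodes;

(with "i" getting assigned right before its first use further down).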

> +        ret = -EFAULT;
> +
> +        /* Already set? */
> +        if ( d->vnuma.nr_vnodes > 0 )
> +            return 0;

This surely needs to return some -E error value (or else how would
the caller know that the function didn't do what was asked for?).
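
Something like (just a sketch; -EEXIST seems the natural choice, but
any suitable error code would do):

        /* Already set? */
        if ( d->vnuma.nr_vnodes > 0 )
            return -EEXIST;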

> +
> +        nr_vnodes = op->u.vnuma.nr_vnodes;
> +        
> +        if ( nr_vnodes == 0 )
> +            return ret;
> +        if ( nr_vnodes * nr_vnodes > UINT_MAX )
> +            return ret;

Neither of these is an -EFAULT situation (and I'm pretty certain I
commented on the need to use suitable error indicators before).

Further, the last check is insufficient: the multiplication is done
in unsigned int, so the product gets truncated and can never compare
greater than UINT_MAX even when it mathematically overflows. You
really need to compare one of the values against the quotient of
UINT_MAX and the other.
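
I.e. perhaps something along these lines (untested sketch, using
-EINVAL for both sanity checks):

        if ( nr_vnodes == 0 )
            return -EINVAL;
        /* Make sure nr_vnodes * nr_vnodes distance entries won't overflow. */
        if ( nr_vnodes > UINT_MAX / nr_vnodes )
            return -EINVAL;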

> +        /* Everything is good, lets set the number of vnodes */
> +        d->vnuma.nr_vnodes = nr_vnodes;
> +        ret = 0;
> +err_dom:

Labels indented by one space please (to make diff's -p option not
pick them up as context for a hunk).
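
I.e. (note the single leading space):

 err_dom: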

> +        if ( ret != 0 )
> +        {
> +            d->vnuma.nr_vnodes = 0;

Now that you (properly) set the field only once everything else
was successful, this seems unnecessary.

> +    case XENMEM_get_vnuma_info:
> +    {
> +        vnuma_topology_info_t mtopology;
> +        struct domain *d;
> +
> +        rc = -EFAULT;
> +        if ( copy_from_guest(&mtopology, arg, 1) )
> +            return -EFAULT;
> +        if ( (d = rcu_lock_domain_by_any_id(mtopology.domid)) == NULL )
> +            return -ESRCH;
> +        
> +        if ( (d->vnuma.nr_vnodes == 0) || (d->vnuma.nr_vnodes > d->max_vcpus) )
> +            return EOPNOTSUPP;

-EOPNOTSUPP. And you need to "goto vnumaout" here.
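
I.e. something like (assuming "vnumaout" is where the domain gets
unlocked):

        if ( (d->vnuma.nr_vnodes == 0) ||
             (d->vnuma.nr_vnodes > d->max_vcpus) )
        {
            rc = -EOPNOTSUPP;
            goto vnumaout;
        }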

> +       
> +        if ( __copy_to_guest(mtopology.vmemrange,
> +                                d->vnuma.vmemrange,
> +                                d->vnuma.nr_vnodes) != 0 )
> +            goto vnumaout;
> +        if ( __copy_to_guest(mtopology.vdistance,
> +                                d->vnuma.vdistance,
> +                                d->vnuma.nr_vnodes * d->vnuma.nr_vnodes) != 0 )
> +            goto vnumaout;
> +        if ( __copy_to_guest(mtopology.vcpu_to_vnode,
> +                                d->vnuma.vcpu_to_vnode,
> +                                d->max_vcpus) != 0 )
> +            goto vnumaout;

You can't use __copy_* without previously having validated the
passed in address range (or you risk corrupting hypervisor
memory).
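
I.e. either use the checking variants, e.g. (sketch)

        if ( copy_to_guest(mtopology.vmemrange,
                           d->vnuma.vmemrange,
                           d->vnuma.nr_vnodes) != 0 )
            goto vnumaout;

(and similarly for the other two), or validate the ranges up front
with guest_handle_okay() before doing any of the __copy_* calls.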

> +        
> +        if ( __copy_to_guest(mtopology.nr_vnodes, &d->vnuma.nr_vnodes, 1) != 0 )

I think you could use __copy_field_to_guest() here.
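
I.e. (assuming the public structure ends up having nr_vnodes as a
plain field rather than a handle)

        mtopology.nr_vnodes = d->vnuma.nr_vnodes;
        if ( __copy_field_to_guest(arg, &mtopology, nr_vnodes) )
            goto vnumaout;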

> --- a/xen/include/public/memory.h
> +++ b/xen/include/public/memory.h
> @@ -459,6 +459,14 @@ DEFINE_XEN_GUEST_HANDLE(xen_mem_sharing_op_t);
>   * The zero value is appropiate.
>   */
>  
> +/*
> + * XENMEM_get_vnuma_info used by caller to retrieve
> + * vNUMA topology constructed for particular domain.
> + *
> + * The data exchanged is presented by vnuma_topology_info. 
> + */
> +#define XENMEM_get_vnuma_info               25
> +
>  #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */

Isn't that call supposed to be used by guests? If so, it must not sit
in a hypervisor-and-tools-only section.
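
I.e. something like (a sketch; the definition just moves past the
guarded block, keeping its number):

#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */

/*
 * XENMEM_get_vnuma_info is used by a guest to retrieve
 * the vNUMA topology constructed for it.
 */
#define XENMEM_get_vnuma_info               25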

> --- /dev/null
> +++ b/xen/include/public/vnuma.h
> @@ -0,0 +1,44 @@
> +#ifndef _XEN_PUBLIC_VNUMA_H
> +#define _XEN_PUBLIC_VNUMA_H
> +#include "memory.h"

This seems backwards - if anything, I'd expect memory.h to
include vnuma.h (if we need a new header here at all).

> +#include "xen.h"
> +
> +/*
> + * Following structures are used to represent vNUMA 
> + * topology to guest if requested.
> + */
> +
> +/* 
> + * Memory ranges can be used to define
> + * vNUMA memory node boundaries by the 
> + * linked list. As of now, only one range
> + * per domain is suported.
> + */
> +
> +struct vmemrange {
> +    uint64_t start, end;
> +    struct vmemrange *next;

ISTR having commented on this before: No pointers in public headers.
Only guest handles are permitted here.
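
I.e. drop the pointer (or make it a guest handle), e.g. (sketch; the
handle definition follows the usual public header pattern):

struct vmemrange {
    uint64_t start, end;
};
typedef struct vmemrange vmemrange_t;
DEFINE_XEN_GUEST_HANDLE(vmemrange_t);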

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

