
Re: [Xen-devel] [PATCH v9 1/9] xen: vnuma topology and subop hypercalls



>>> On 29.08.14 at 05:04, <ufimtseva@xxxxxxxxx> wrote:
> Define the interface, structures and hypercalls for the toolstack to
> build a vNUMA topology and for guests that wish to retrieve it.
> Two subop hypercalls are introduced by this patch:
> XEN_DOMCTL_setvnumainfo to define the vNUMA topology of a domain,
> and XENMEM_get_vnumainfo for a guest to retrieve that topology.
> 
> Signed-off-by: Elena Ufimtseva <ufimtseva@xxxxxxxxx>
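
For context, guest-side retrieval of this topology would look roughly
like the sketch below (pieced together from the fields used in the
hunks that follow; HYPERVISOR_memory_op and set_xen_guest_handle are
the usual guest-side helpers, the three arrays and their size
variables are placeholders, and error handling is elided):

    /*
     * nr_vnodes / nr_vcpus / nr_vmemranges and the three arrays are
     * assumed to have been sized and allocated by the caller.
     */
    struct vnuma_topology_info topology = {
        .domid = DOMID_SELF,
        .nr_vnodes = nr_vnodes,
        .nr_vcpus = nr_vcpus,
        .nr_vmemranges = nr_vmemranges,
    };
    int rc;

    set_xen_guest_handle(topology.vdistance.h, vdistance);
    set_xen_guest_handle(topology.vcpu_to_vnode.h, vcpu_to_vnode);
    set_xen_guest_handle(topology.vmemrange.h, vmemrange);

    rc = HYPERVISOR_memory_op(XENMEM_get_vnumainfo, &topology);
    if ( rc == -ENOBUFS )
    {
        /*
         * The hypervisor copied the required sizes back into
         * topology; reallocate the buffers accordingly and retry.
         */
    }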



> +static struct vnuma_info *vnuma_init(const struct xen_domctl_vnuma *uinfo,
> +                                     const struct domain *d)
> +{
> +    unsigned int nr_vnodes;
> +    int i, ret = -EINVAL;
> +    struct vnuma_info *info;
> +
> +    nr_vnodes = uinfo->nr_vnodes;
> +
> +    if ( nr_vnodes == 0 || nr_vnodes > uinfo->nr_vmemranges ||

Is that really a necessary check? I.e. does code elsewhere rely on
that? I ask because memory-less nodes are possible on real
hardware.
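
If memory-less nodes were to be allowed here, the clause could simply
be dropped, e.g. (a sketch only; the elided remainder of the condition
and the error label stand in for whatever the patch actually does):

    /*
     * Hypothetical relaxation: no longer insist on nr_vmemranges >=
     * nr_vnodes, so memory-less virtual nodes remain expressible,
     * as they are on real hardware.
     */
    if ( nr_vnodes == 0 /* || ...remaining checks as in the patch... */ )
        goto error;   /* placeholder for the patch's actual error path */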

> +    case XEN_DOMCTL_setvnumainfo:
> +    {
> +        struct vnuma_info *vnuma = NULL;

Now this initializer has become pointless too.
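
I.e. a plain declaration would do, since the pointer is assigned on
every path before it is used (sketch):

    struct vnuma_info *vnuma;  /* no NULL initializer needed any more */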

> +    case XENMEM_get_vnumainfo:
> +    {
> +        struct vnuma_topology_info topology;
> +        struct domain *d;
> +        unsigned int dom_vnodes, dom_vranges, dom_vcpus;
> +        struct vnuma_info tmp;
> +
> +        /*
> +         * The guest passes nr_vnodes, nr_vmemranges and nr_vcpus, so
> +         * we know how much memory the guest has allocated for each of
> +         * the arrays.
> +         */
> +        if ( copy_from_guest(&topology, arg, 1) )
> +            return -EFAULT;
> +
> +        if ( (d = rcu_lock_domain_by_any_id(topology.domid)) == NULL )
> +            return -ESRCH;
> +
> +        read_lock(&d->vnuma_rwlock);
> +
> +        if ( d->vnuma == NULL )
> +        {
> +            read_unlock(&d->vnuma_rwlock);
> +            rcu_unlock_domain(d);
> +            return -EOPNOTSUPP;
> +        }
> +
> +        dom_vnodes = d->vnuma->nr_vnodes;
> +        dom_vranges = d->vnuma->nr_vmemranges;
> +        dom_vcpus = d->max_vcpus;
> +
> +        /*
> +         * The values copied from the guest may differ from the domain's
> +         * vNUMA configuration.  Check the guest-supplied sizes here to
> +         * make sure we don't overflow the guest's buffers.
> +         */
> +        if ( topology.nr_vnodes < dom_vnodes  ||
> +             topology.nr_vcpus < dom_vcpus    ||
> +             topology.nr_vmemranges < dom_vranges )
> +        {
> +            read_unlock(&d->vnuma_rwlock);
> +            rcu_unlock_domain(d);
> +
> +            topology.nr_vnodes = dom_vnodes;
> +            topology.nr_vcpus = dom_vcpus;
> +            topology.nr_vmemranges = dom_vranges;
> +
> +            /* Copy back needed values. */
> +            __copy_to_guest(arg, &topology, 1);
> +
> +            return -ENOBUFS;
> +        }
> +
> +        read_unlock(&d->vnuma_rwlock);
> +
> +        tmp.vdistance = xmalloc_array(unsigned int, dom_vnodes * dom_vnodes);
> +        tmp.vmemrange = xmalloc_array(vmemrange_t, dom_vranges);
> +        tmp.vcpu_to_vnode = xmalloc_array(unsigned int, dom_vcpus);
> +
> +        if ( tmp.vdistance == NULL || tmp.vmemrange == NULL ||
> +             tmp.vcpu_to_vnode == NULL )
> +        {
> +            rc = -ENOMEM;
> +            goto vnumainfo_out;
> +        }
> +
> +        /* Check if vnuma info has changed. */
> +        read_lock(&d->vnuma_rwlock);
> +
> +        if ( dom_vnodes != d->vnuma->nr_vnodes ||
> +             dom_vranges != d->vnuma->nr_vmemranges ||
> +             dom_vcpus != d->max_vcpus )
> +        {
> +            read_unlock(&d->vnuma_rwlock);
> +            rc = -EAGAIN;
> +            goto vnumainfo_out;

So you're pushing the burden of retrying onto the caller. That's
probably okay (and should be rather unlikely anyway). One thing,
however, could further improve behavior: if the arrays you allocated
turn out to be too large, proceeding would be fine (you'd only have
to update the local size variables).
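
Something along these lines, perhaps (a sketch of that relaxation,
reusing the names from the patch):

    /* Bail out only if the topology grew beyond what was allocated. */
    if ( dom_vnodes < d->vnuma->nr_vnodes ||
         dom_vcpus < d->max_vcpus ||
         dom_vranges < d->vnuma->nr_vmemranges )
    {
        read_unlock(&d->vnuma_rwlock);
        rc = -EAGAIN;
        goto vnumainfo_out;
    }

    /*
     * A shrunk (or unchanged) topology still fits into the buffers
     * already allocated; just refresh the local size variables and
     * carry on.
     */
    dom_vnodes = d->vnuma->nr_vnodes;
    dom_vcpus = d->max_vcpus;
    dom_vranges = d->vnuma->nr_vmemranges;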

> +
> +        }

Stray blank line above.

> +
> +        memcpy(tmp.vmemrange, d->vnuma->vmemrange,
> +               sizeof(*d->vnuma->vmemrange) * dom_vranges);
> +        memcpy(tmp.vdistance, d->vnuma->vdistance,
> +               sizeof(*d->vnuma->vdistance) * dom_vnodes * dom_vnodes);
> +        memcpy(tmp.vcpu_to_vnode, d->vnuma->vcpu_to_vnode,
> +               sizeof(*d->vnuma->vcpu_to_vnode) * dom_vcpus);
> +
> +        read_unlock(&d->vnuma_rwlock);
> +
> +        rc = -EFAULT;
> +
> +        if ( copy_to_guest(topology.vmemrange.h, tmp.vmemrange,
> +                           dom_vranges) != 0 )
> +            goto vnumainfo_out;
> +
> +        if ( copy_to_guest(topology.vdistance.h, tmp.vdistance,
> +                           dom_vnodes * dom_vnodes) != 0 )
> +            goto vnumainfo_out;
> +
> +        if ( copy_to_guest(topology.vcpu_to_vnode.h, tmp.vcpu_to_vnode,
> +                           dom_vcpus) != 0 )
> +            goto vnumainfo_out;
> +
> +        topology.nr_vnodes = dom_vnodes;
> +        topology.nr_vcpus = dom_vcpus;
> +        topology.nr_vmemranges = dom_vranges;
> +        topology.pad = 0;

Why? You'd better check it got passed in as zero.
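
I.e. something like this, right after the copy-in (sketch):

    if ( copy_from_guest(&topology, arg, 1) )
        return -EFAULT;

    /* Insist on zero padding so the field can be given meaning later. */
    if ( topology.pad != 0 )
        return -EINVAL;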

> +
> +        rc = -EFAULT;
> +
> +        if ( __copy_to_guest(arg, &topology, 1) != 0 )
> +            goto vnumainfo_out;
> +
> +        rc = 0;

Could I talk you into using the conditional operator here, making this
a single line without another goto?
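
I.e.:

    rc = __copy_to_guest(arg, &topology, 1) ? -EFAULT : 0;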

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel