
Re: [Xen-devel] [PATCH v9 1/9] xen: vnuma topology and subop hypercalls



On Fri, Aug 29, 2014 at 6:37 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>> On 29.08.14 at 05:04, <ufimtseva@xxxxxxxxx> wrote:
>> Define interface, structures and hypercalls for toolstack to
>> build vnuma topology and for guests that wish to retrieve it.
>> Two subop hypercalls introduced by patch:
>> XEN_DOMCTL_setvnumainfo to define vNUMA domain topology per domain
>> and XENMEM_get_vnumainfo to retrieve that topology by guest.
>>
>> Signed-off-by: Elena Ufimtseva <ufimtseva@xxxxxxxxx>
>
>
>
>> +static struct vnuma_info *vnuma_init(const struct xen_domctl_vnuma *uinfo,
>> +                                     const struct domain *d)
>> +{
>> +    unsigned int nr_vnodes;
>> +    int i, ret = -EINVAL;
>> +    struct vnuma_info *info;
>> +
>> +    nr_vnodes = uinfo->nr_vnodes;
>> +
>> +    if ( nr_vnodes == 0 || nr_vnodes > uinfo->nr_vmemranges ||
>
> Is that really a necessary check? I.e. does code elsewhere rely on
> that? I ask because memory-less nodes are possible on real
> hardware.

That is true. But given that vNUMA has no bus support yet, a node
with neither memory nor buses seems useless. vNUMA can mimic hardware
NUMA as closely as possible, but I think the degree to which it does
is pretty much our choice. Once vNUMA is extended to cover buses,
this check will naturally disappear.

>
>> +    case XEN_DOMCTL_setvnumainfo:
>> +    {
>> +        struct vnuma_info *vnuma = NULL;
>
> Now this initializer has become pointless too.
>
>> +    case XENMEM_get_vnumainfo:
>> +    {
>> +        struct vnuma_topology_info topology;
>> +        struct domain *d;
>> +        unsigned int dom_vnodes, dom_vranges, dom_vcpus;
>> +        struct vnuma_info tmp;
>> +
>> +        /*
>> +         * The guest passes nr_vnodes, nr_vmemranges and nr_vcpus, so
>> +         * we know how much memory the guest has allocated.
>> +         */
>> +        if ( copy_from_guest(&topology, arg, 1) )
>> +            return -EFAULT;
>> +
>> +        if ( (d = rcu_lock_domain_by_any_id(topology.domid)) == NULL )
>> +            return -ESRCH;
>> +
>> +        read_lock(&d->vnuma_rwlock);
>> +
>> +        if ( d->vnuma == NULL )
>> +        {
>> +            read_unlock(&d->vnuma_rwlock);
>> +            rcu_unlock_domain(d);
>> +            return -EOPNOTSUPP;
>> +        }
>> +
>> +        dom_vnodes = d->vnuma->nr_vnodes;
>> +        dom_vranges = d->vnuma->nr_vmemranges;
>> +        dom_vcpus = d->max_vcpus;
>> +
>> +        /*
>> +         * Copied from guest values may differ from domain vnuma config.
>> +         * Check the guest parameters here to make sure we don't overflow.
>> +         */
>> +        if ( topology.nr_vnodes < dom_vnodes  ||
>> +             topology.nr_vcpus < dom_vcpus    ||
>> +             topology.nr_vmemranges < dom_vranges )
>> +        {
>> +            read_unlock(&d->vnuma_rwlock);
>> +            rcu_unlock_domain(d);
>> +
>> +            topology.nr_vnodes = dom_vnodes;
>> +            topology.nr_vcpus = dom_vcpus;
>> +            topology.nr_vmemranges = dom_vranges;
>> +
>> +            /* Copy back needed values. */
>> +            __copy_to_guest(arg, &topology, 1);
>> +
>> +            return -ENOBUFS;
>> +        }
>> +
>> +        read_unlock(&d->vnuma_rwlock);
>> +
>> +        tmp.vdistance = xmalloc_array(unsigned int, dom_vnodes * dom_vnodes);
>> +        tmp.vmemrange = xmalloc_array(vmemrange_t, dom_vranges);
>> +        tmp.vcpu_to_vnode = xmalloc_array(unsigned int, dom_vcpus);
>> +
>> +        if ( tmp.vdistance == NULL || tmp.vmemrange == NULL ||
>> +             tmp.vcpu_to_vnode == NULL )
>> +        {
>> +            rc = -ENOMEM;
>> +            goto vnumainfo_out;
>> +        }
>> +
>> +        /* Check if vnuma info has changed. */
>> +        read_lock(&d->vnuma_rwlock);
>> +
>> +        if ( dom_vnodes != d->vnuma->nr_vnodes ||
>> +             dom_vranges != d->vnuma->nr_vmemranges ||
>> +             dom_vcpus != d->max_vcpus )
>> +        {
>> +            read_unlock(&d->vnuma_rwlock);
>> +            rc = -EAGAIN;
>> +            goto vnumainfo_out;
>
> So you're pushing the burden of retrying on the caller. Probably
> okay (and should be rather unlikely anyway). One thing, however,
> could further improve behavior: If you allocated too large arrays,
> proceeding would be fine (and you'd only have to update the
> local variables).

Thanks Jan, will change this.
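
Something along these lines for the re-check, I suppose (untested
sketch; -EAGAIN only when the topology grew past our allocations):

    /* Re-check under the lock.  If the topology shrank, the arrays
     * above are merely larger than needed, so carry on; only bail
     * if it grew. */
    if ( dom_vnodes < d->vnuma->nr_vnodes ||
         dom_vranges < d->vnuma->nr_vmemranges ||
         dom_vcpus < d->max_vcpus )
    {
        read_unlock(&d->vnuma_rwlock);
        rc = -EAGAIN;
        goto vnumainfo_out;
    }

    /* Shrink the local counts to what is actually copied out. */
    dom_vnodes = d->vnuma->nr_vnodes;
    dom_vranges = d->vnuma->nr_vmemranges;
    dom_vcpus = d->max_vcpus;
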
>
>> +
>> +        }
>
> Stray blank line above.
>
>> +
>> +        memcpy(tmp.vmemrange, d->vnuma->vmemrange,
>> +               sizeof(*d->vnuma->vmemrange) * dom_vranges);
>> +        memcpy(tmp.vdistance, d->vnuma->vdistance,
>> +               sizeof(*d->vnuma->vdistance) * dom_vnodes * dom_vnodes);
>> +        memcpy(tmp.vcpu_to_vnode, d->vnuma->vcpu_to_vnode,
>> +               sizeof(*d->vnuma->vcpu_to_vnode) * dom_vcpus);
>> +
>> +        read_unlock(&d->vnuma_rwlock);
>> +
>> +        if ( copy_to_guest(topology.vmemrange.h, tmp.vmemrange,
>> +                           dom_vranges) != 0 )
>> +            goto vnumainfo_out;
>> +
>> +        if ( copy_to_guest(topology.vdistance.h, tmp.vdistance,
>> +                           dom_vnodes * dom_vnodes) != 0 )
>> +            goto vnumainfo_out;
>> +
>> +        if ( copy_to_guest(topology.vcpu_to_vnode.h, tmp.vcpu_to_vnode,
>> +                           dom_vcpus) != 0 )
>> +            goto vnumainfo_out;
>> +
>> +        topology.nr_vnodes = dom_vnodes;
>> +        topology.nr_vcpus = dom_vcpus;
>> +        topology.nr_vmemranges = dom_vranges;
>> +        topology.pad = 0;
>
> Why? You'd better check it got passed in as zero.
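
That would be a check right after the copy_from_guest() at the top
of the handler, e.g. (untested sketch):

    if ( copy_from_guest(&topology, arg, 1) )
        return -EFAULT;

    if ( topology.pad != 0 )
        return -EINVAL;
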
>
>> +
>> +        rc = -EFAULT;
>> +
>> +        if ( __copy_to_guest(arg, &topology, 1) != 0 )
>> +            goto vnumainfo_out;
>> +
>> +        rc = 0;
>
> Could I talk you into using the conditional operator here, making this
> a single line without another goto?

Absolutely! )
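
I.e. (sketch):

    rc = __copy_to_guest(arg, &topology, 1) ? -EFAULT : 0;
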
>
> Jan



-- 
Elena
