
Re: [Xen-devel] [PATCH v3 09/14] xen: sched: DOMCTL_*vcpuaffinity works with hard and soft affinity



>>> On 18.11.13 at 19:17, Dario Faggioli <dario.faggioli@xxxxxxxxxx> wrote:
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -617,19 +617,65 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>          if ( op->cmd == XEN_DOMCTL_setvcpuaffinity )
>          {
>              cpumask_var_t new_affinity;
> +            cpumask_t *online;
>  
>              ret = xenctl_bitmap_to_cpumask(
>                  &new_affinity, &op->u.vcpuaffinity.cpumap);
> -            if ( !ret )
> +            if ( ret )
> +                break;
> +
> +            ret = -EINVAL;
> +            if ( op->u.vcpuaffinity.flags & XEN_VCPUAFFINITY_HARD )
> +                ret = vcpu_set_hard_affinity(v, new_affinity);
> +            if ( op->u.vcpuaffinity.flags & XEN_VCPUAFFINITY_SOFT )
> +                ret = vcpu_set_soft_affinity(v, new_affinity);

When both flags are set, you're discarding any error returned by
vcpu_set_hard_affinity() here, as the subsequent
vcpu_set_soft_affinity() call overwrites ret.
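
E.g. (untested sketch, reusing the names from the patch): check the
flags up front and only attempt the soft update if the hard one
succeeded or wasn't requested:

            if ( !(op->u.vcpuaffinity.flags &
                   (XEN_VCPUAFFINITY_HARD | XEN_VCPUAFFINITY_SOFT)) )
            {
                ret = -EINVAL;
                goto setvcpuaffinity_out;
            }

            ret = 0;
            if ( op->u.vcpuaffinity.flags & XEN_VCPUAFFINITY_HARD )
                ret = vcpu_set_hard_affinity(v, new_affinity);
            if ( !ret && (op->u.vcpuaffinity.flags & XEN_VCPUAFFINITY_SOFT) )
                ret = vcpu_set_soft_affinity(v, new_affinity);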

> +
> +            if ( ret )
> +                goto setvcpuaffinity_out;

Considering that you're going to return an error here, the caller
may expect that the call did nothing, yet if
vcpu_set_hard_affinity() succeeded and only vcpu_set_soft_affinity()
failed, the hard affinity has already been changed. I know this is
ugly to handle...
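
One (untested) way, assuming both flags are set, would be to
snapshot the current hard affinity and try to put it back if the
soft update fails:

            cpumask_var_t old_hard;

            if ( !alloc_cpumask_var(&old_hard) )
            {
                ret = -ENOMEM;
                goto setvcpuaffinity_out;
            }
            cpumask_copy(old_hard, v->cpu_hard_affinity);

            ret = vcpu_set_hard_affinity(v, new_affinity);
            if ( !ret && (op->u.vcpuaffinity.flags & XEN_VCPUAFFINITY_SOFT) )
            {
                ret = vcpu_set_soft_affinity(v, new_affinity);
                if ( ret )
                    /* Best effort: restore the previous hard affinity. */
                    vcpu_set_hard_affinity(v, old_hard);
            }

            free_cpumask_var(old_hard);

Whether the restore itself can fail is a separate question, hence
"best effort".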

> +
> +            /*
> +             * Report back to the caller what the "effective affinity", that
> +             * is the intersection of cpupool's pcpus, the (new?) hard
> +             * affinity and the (new?) soft-affinity.
> +             */
> +            if ( !guest_handle_is_null(op->u.vcpuaffinity.eff_cpumap.bitmap) )
>              {
> -                ret = vcpu_set_affinity(v, new_affinity);
> -                free_cpumask_var(new_affinity);
> +                online = cpupool_online_cpumask(v->domain->cpupool);
> +                cpumask_and(new_affinity, online, v->cpu_hard_affinity);
> +                if ( op->u.vcpuaffinity.flags & XEN_VCPUAFFINITY_SOFT)
> +                    cpumask_and(new_affinity, new_affinity,
> +                                v->cpu_soft_affinity);
> +
> +                ret = cpumask_to_xenctl_bitmap(
> +                    &op->u.vcpuaffinity.eff_cpumap, new_affinity);

Considering that you have two bitmaps available from the caller
(cpumap and eff_cpumap), can't you just return both the effective
hard and the effective soft affinity when both flags are set?
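
I.e. something along these lines (just to illustrate the idea;
whether cpumap/eff_cpumap remain the right names for that is another
question):

            online = cpupool_online_cpumask(v->domain->cpupool);

            cpumask_and(new_affinity, online, v->cpu_hard_affinity);
            ret = cpumask_to_xenctl_bitmap(&op->u.vcpuaffinity.cpumap,
                                           new_affinity);
            if ( !ret )
            {
                cpumask_and(new_affinity, online, v->cpu_soft_affinity);
                ret = cpumask_to_xenctl_bitmap(
                    &op->u.vcpuaffinity.eff_cpumap, new_affinity);
            }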

>          else
>          {
> +            cpumask_var_t affinity;
> +
> +            /*
> +             * If the caller asks for both _HARD and _SOFT, what we return
> +             * is the intersection of hard and soft affinity for the vcpu.
> +             */
> +            if ( !alloc_cpumask_var(&affinity) ) {

Coding style.

> +                ret = -EFAULT;

-ENOMEM
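
Taking both remarks together, i.e.:

            if ( !alloc_cpumask_var(&affinity) )
            {
                ret = -ENOMEM;
                break;
            }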

> +                break;
> +            }
> +            cpumask_setall(affinity);
> +
> +            if ( op->u.vcpuaffinity.flags & XEN_VCPUAFFINITY_HARD )
> +                cpumask_copy(affinity, v->cpu_hard_affinity);
> +            if ( op->u.vcpuaffinity.flags & XEN_VCPUAFFINITY_SOFT )
> +                cpumask_and(affinity, affinity, v->cpu_soft_affinity);

Just like in the set case, you should fail when neither bit is set,
and you could easily copy out both masks when both bits are set
(otherwise, using the same interface structure for the get case is
sort of bogus).
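
E.g., replacing the body of the else branch (untested; this also
does away with the local "affinity" mask altogether):

            if ( !(op->u.vcpuaffinity.flags &
                   (XEN_VCPUAFFINITY_HARD | XEN_VCPUAFFINITY_SOFT)) )
            {
                ret = -EINVAL;
                break;
            }

            ret = 0;
            if ( op->u.vcpuaffinity.flags & XEN_VCPUAFFINITY_HARD )
                ret = cpumask_to_xenctl_bitmap(&op->u.vcpuaffinity.cpumap,
                                               v->cpu_hard_affinity);
            if ( !ret && (op->u.vcpuaffinity.flags & XEN_VCPUAFFINITY_SOFT) )
                ret = cpumask_to_xenctl_bitmap(
                    &op->u.vcpuaffinity.eff_cpumap, v->cpu_soft_affinity);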

> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -654,22 +654,14 @@ void sched_set_node_affinity(struct domain *d, nodemask_t *mask)
>      SCHED_OP(DOM2OP(d), set_node_affinity, d, mask);
>  }
>  
> -int vcpu_set_affinity(struct vcpu *v, const cpumask_t *affinity)
> +static int vcpu_set_affinity(
> +    struct vcpu *v, const cpumask_t *affinity, cpumask_t **which)

I don't think there's a need for the double * on "which".
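
I.e. presumably just

    static int vcpu_set_affinity(
        struct vcpu *v, const cpumask_t *affinity, cpumask_t *which)

with the callers passing v->cpu_hard_affinity / v->cpu_soft_affinity
directly.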

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

