
Re: [Xen-devel] [PATCH RFC V2 24/45] xen: let vcpu_create() select processor



>>> On 16.05.19 at 14:46, <jgross@xxxxxxxx> wrote:
> On 16/05/2019 14:20, Jan Beulich wrote:
>>>>> On 06.05.19 at 08:56, <jgross@xxxxxxxx> wrote:
>>> --- a/xen/common/schedule.c
>>> +++ b/xen/common/schedule.c
>>> @@ -314,14 +314,42 @@ static struct sched_item *sched_alloc_item(struct vcpu *v)
>>>      return NULL;
>>>  }
>>>  
>>> -int sched_init_vcpu(struct vcpu *v, unsigned int processor)
>>> +static unsigned int sched_select_initial_cpu(struct vcpu *v)
>>> +{
>>> +    struct domain *d = v->domain;
>>> +    nodeid_t node;
>>> +    cpumask_t cpus;
>> 
>> To be honest, I'm not happy to see new on-stack instances of
>> cpumask_t appear. Seeing ...
>> 
>>> +    cpumask_clear(&cpus);
>>> +    for_each_node_mask ( node, d->node_affinity )
>>> +        cpumask_or(&cpus, &cpus, &node_to_cpumask(node));
>>> +    cpumask_and(&cpus, &cpus, cpupool_domain_cpumask(d));
>>> +    if ( cpumask_empty(&cpus) )
>>> +        cpumask_copy(&cpus, cpupool_domain_cpumask(d));
>> 
>> ... this fallback you use anyway, is there any issue with it also
>> serving the case where zalloc_cpumask_var() fails?
> 
> Either that, or:
> 
> - just fail to create the vcpu in that case, as chances are rather
>   high that e.g. the following arch_vcpu_create() will fail anyway

Ah, right, this is for vCPU creation only anyway.

> - take the scheduling lock and use cpumask_scratch
> - (ab)use one of the available cpumasks in struct sched_unit which
>   are not in use yet
> 
> My preference would be using cpumask_scratch.
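
For reference, a minimal sketch of the cpumask_scratch variant (only an
illustration, not the actual patch; it assumes the usual
pcpu_schedule_lock_irqsave() / pcpu_schedule_unlock_irqrestore() helpers
and simply picks the first suitable CPU):

static unsigned int sched_select_initial_cpu(struct vcpu *v)
{
    struct domain *d = v->domain;
    nodeid_t node;
    spinlock_t *lock;
    unsigned long flags;
    unsigned int cpu_ret, cpu = smp_processor_id();
    cpumask_t *cpus = cpumask_scratch_cpu(cpu);

    /* cpumask_scratch may only be used with the pCPU's scheduler lock held. */
    lock = pcpu_schedule_lock_irqsave(cpu, &flags);

    cpumask_clear(cpus);
    for_each_node_mask ( node, d->node_affinity )
        cpumask_or(cpus, cpus, &node_to_cpumask(node));
    cpumask_and(cpus, cpus, cpupool_domain_cpumask(d));
    if ( cpumask_empty(cpus) )
        cpumask_copy(cpus, cpupool_domain_cpumask(d));

    /* How the CPU gets picked from the mask is kept simple here. */
    cpu_ret = cpumask_first(cpus);

    pcpu_schedule_unlock_irqrestore(lock, flags, cpu);

    return cpu_ret;
}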

I'm actually fine with any of the variants, including that of simply
returning -ENOMEM.
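
For completeness, the zalloc_cpumask_var() route might look roughly like
the following (again just a sketch, with allocation failure falling back
to the cpupool mask; turning that into -ENOMEM in the caller would work
just as well):

static unsigned int sched_select_initial_cpu(struct vcpu *v)
{
    struct domain *d = v->domain;
    nodeid_t node;
    unsigned int cpu;
    cpumask_var_t cpus;

    /* Fall back to the cpupool mask if the allocation fails. */
    if ( !zalloc_cpumask_var(&cpus) )
        return cpumask_first(cpupool_domain_cpumask(d));

    /* zalloc_cpumask_var() already returned a cleared mask. */
    for_each_node_mask ( node, d->node_affinity )
        cpumask_or(cpus, cpus, &node_to_cpumask(node));
    cpumask_and(cpus, cpus, cpupool_domain_cpumask(d));
    if ( cpumask_empty(cpus) )
        cpumask_copy(cpus, cpupool_domain_cpumask(d));

    /* Picking the first suitable CPU, just for illustration. */
    cpu = cpumask_first(cpus);

    free_cpumask_var(cpus);

    return cpu;
}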

Jan


