Re: [Xen-devel] [PATCH v3 5/5] x86: add accessors for scratch cpu mask
On 27.02.2020 18:54, Roger Pau Monné wrote:
> On Wed, Feb 26, 2020 at 11:36:52AM +0100, Jan Beulich wrote:
>> On 24.02.2020 11:46, Roger Pau Monne wrote:
>>> --- a/xen/arch/x86/irq.c
>>> +++ b/xen/arch/x86/irq.c
>>> @@ -196,7 +196,7 @@ static void _clear_irq_vector(struct irq_desc *desc)
>>>  {
>>>      unsigned int cpu, old_vector, irq = desc->irq;
>>>      unsigned int vector = desc->arch.vector;
>>> -    cpumask_t *tmp_mask = this_cpu(scratch_cpumask);
>>> +    cpumask_t *tmp_mask = get_scratch_cpumask();
>>>
>>>      BUG_ON(!valid_irq_vector(vector));
>>>
>>> @@ -223,7 +223,10 @@ static void _clear_irq_vector(struct irq_desc *desc)
>>>      trace_irq_mask(TRC_HW_IRQ_CLEAR_VECTOR, irq, vector, tmp_mask);
>>>
>>>      if ( likely(!desc->arch.move_in_progress) )
>>> +    {
>>> +        put_scratch_cpumask();
>>>          return;
>>> +    }
>>
>> I think such error path adjustments would better be avoided if
>> possible. And this seems feasible here: There are two entirely
>> independent uses of the scratch mask in this function. You could
>> therefore put the mask obtained above right here, and get it again
>> further down. Or you could leverage a property of the current
>> implementation, plus the fact that the 2nd use doesn't involve
>> any "real" function calls, and avoid a 2nd get/put altogether.
>
> No, it's very easy to add function calls later on and forget to use
> get_scratch_cpumask.
Well, yes, such a deliberate omission would of course require a
bold comment.
>> Of course another question then is whether it is a good property
>> of the current model, i.e. whether it wouldn't be better for
>> "put" to actually zap the pointer, to prevent subsequent use.
>
> So that put_scratch_cpumask takes the pointer as a parameter and
> writes NULL to it?
For example, yes.
>>> @@ -3645,12 +3647,17 @@ long do_mmuext_op(
>>>                                                  mask)) )
>>>              rc = -EINVAL;
>>>          if ( unlikely(rc) )
>>> +        {
>>> +            put_scratch_cpumask();
>>>              break;
>>> +        }
>>
>> Again, instead of adjusting an error path, how about making this
>> have an empty statement (i.e. dropping the break) and ...
>>
>>> if ( op.cmd == MMUEXT_TLB_FLUSH_MULTI )
>>
>> ... having this become "else if()"?
>>
>>> @@ -4384,6 +4393,9 @@ static int __do_update_va_mapping(
>>>          break;
>>>      }
>>>
>>> +    if ( mask && mask != d->dirty_cpumask )
>>> +        put_scratch_cpumask();
>>
>> The right side of the && here makes things feel a little fragile for
>> me.
>
> What about using:
>
> switch ( flags & ~UVMF_FLUSHTYPE_MASK )
> {
> case UVMF_LOCAL:
> case UVMF_ALL:
>     break;
>
> default:
>     put_scratch_cpumask();
> }
Fine with me.
> I could also use an if, but I think it's clearer with a switch.
Agreed.
>>> --- a/xen/arch/x86/msi.c
>>> +++ b/xen/arch/x86/msi.c
>>> @@ -159,13 +159,15 @@ void msi_compose_msg(unsigned vector, const cpumask_t *cpu_mask, struct msi_msg
>>>
>>>      if ( cpu_mask )
>>>      {
>>> -        cpumask_t *mask = this_cpu(scratch_cpumask);
>>> +        cpumask_t *mask;
>>>
>>>          if ( !cpumask_intersects(cpu_mask, &cpu_online_map) )
>>>              return;
>>>
>>> +        mask = get_scratch_cpumask();
>>>          cpumask_and(mask, cpu_mask, &cpu_online_map);
>>>          msg->dest32 = cpu_mask_to_apicid(mask);
>>> +        put_scratch_cpumask();
>>>      }
>>
>> This, I think, could do with a little more changing:
>>
>> if ( cpu_mask )
>> {
>>     cpumask_t *mask = get_scratch_cpumask();
>>
>>     cpumask_and(mask, cpu_mask, &cpu_online_map);
>>     if ( !cpumask_empty(mask) )
>>         msg->dest32 = cpu_mask_to_apicid(mask);
>>     put_scratch_cpumask();
>> }
>>
>> This way, instead of two operations each looking at two cpumask_t
>> instances, the 2nd one involves just one. Thoughts?
>
> LGTM.
>
> Note however that this won't exit early when the masks don't intersect,
> and will then set up the address field with whatever stale value is in
> msg->dest32.
Oh, I should have noticed this. No, the early exit has to remain
one way or another. I guess I'm fine then with the original
variant.
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel