
Re: [Xen-devel] [PATCH v8 1/9] x86/xpti: avoid copying L4 page table contents when possible



On 18/04/18 18:12, Jan Beulich wrote:
>>>> On 18.04.18 at 10:30, <jgross@xxxxxxxx> wrote:
>> @@ -160,5 +161,20 @@ unsigned int flush_area_local(const void *va, unsigned int flags)
>>  
>>      local_irq_restore(irqfl);
>>  
>> +    if ( flags & FLUSH_ROOT_PGTBL )
>> +        get_cpu_info()->root_pgt_changed = true;
>> +
>>      return flags;
>>  }
>> +
>> +void flush_root_pgt_mask(cpumask_t *mask)
> 
> const
> 
>> +{
>> +    int cpu;
> 
> unsigned int
> 
>> +    struct cpu_info *cpu_info;
> 
> Declaration in most narrow possible scope please.
> 
>> +    for_each_cpu(cpu, mask)
>> +    {
>> +        cpu_info = (struct cpu_info *)(stack_base[cpu] + STACK_SIZE) - 1;
>> +        cpu_info->root_pgt_changed = true;
>> +    }
>> +}
> 
> So why is this not sending an IPI to trigger the FLUSH_ROOT_PGTBL processing
> visible above?

It is not needed for the case this function was introduced for. Maybe I
should rename it to flush_root_pgt_mask_lazy()?
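Roughly like this, as a sketch only (with the rename and your const/unsigned
int remarks folded in), just to make the intended lazy semantics explicit;
the stack_base[cpu] race you point out further down is still unaddressed here:

/*
 * Lazy variant (sketch): only mark the root page tables of the CPUs in
 * @mask as changed.  Each CPU acts on root_pgt_changed itself the next
 * time it writes CR3, so no IPI is sent on behalf of this caller.
 */
void flush_root_pgt_mask_lazy(const cpumask_t *mask)
{
    unsigned int cpu;

    for_each_cpu(cpu, mask)
    {
        struct cpu_info *cpu_info =
            (struct cpu_info *)(stack_base[cpu] + STACK_SIZE) - 1;

        cpu_info->root_pgt_changed = true;
    }
}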

> Especially the cast you need here looks quite undesirable.

get_cpu_info() does it the same way. I can add a common helper
function to current.h.
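Something along these lines, purely as a sketch (the name
get_cpu_info_from_stack() is only illustrative); it computes the pointer the
same way get_cpu_info() derives it from %rsp:

/* Return the struct cpu_info sitting at the top of the given stack. */
static inline struct cpu_info *get_cpu_info_from_stack(unsigned long sp)
{
    return (struct cpu_info *)((sp | (STACK_SIZE - 1)) + 1) - 1;
}

flush_root_pgt_mask() could then call
get_cpu_info_from_stack((unsigned long)stack_base[cpu]) instead of
open-coding the cast.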

> Plus I think this is racy (even if quite unlikely to trigger) with a CPU
> being brought down: What if stack_base[cpu] becomes NULL after you've found
> the respective bit in mask set? Especially when Xen runs itself virtualized,
> (almost) arbitrarily long periods of time may pass between any two
> instructions.

Right.

So either I add some kind of locking/RCU, or I switch to using IPIs and
access root_pgt_changed only on the local CPU.

Do you have any preference?
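For reference, the IPI variant could probably just reuse the existing flush
machinery, along these lines (sketch only):

/*
 * IPI variant (sketch): deliver FLUSH_ROOT_PGTBL via the normal flush
 * IPI, so each CPU sets its own root_pgt_changed in flush_area_local()
 * and no remote stack needs to be touched.
 */
#define flush_root_pgt_mask(mask) flush_mask(mask, FLUSH_ROOT_PGTBL)

with flush_mask() being the existing wrapper around flush_area_mask().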


Juergen


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel