
Re: [Xen-devel] [PATCH v8 1/9] x86/xpti: avoid copying L4 page table contents when possible



>>> On 19.04.18 at 08:19, <jgross@xxxxxxxx> wrote:
> On 18/04/18 18:12, Jan Beulich wrote:
>>>>> On 18.04.18 at 10:30, <jgross@xxxxxxxx> wrote:
>>> @@ -160,5 +161,20 @@ unsigned int flush_area_local(const void *va, unsigned int flags)
>>>  
>>>      local_irq_restore(irqfl);
>>>  
>>> +    if ( flags & FLUSH_ROOT_PGTBL )
>>> +        get_cpu_info()->root_pgt_changed = true;
>>> +
>>>      return flags;
>>>  }
>>> +
>>> +void flush_root_pgt_mask(cpumask_t *mask)
>> 
>> const
>> 
>>> +{
>>> +    int cpu;
>> 
>> unsigned int
>> 
>>> +    struct cpu_info *cpu_info;
>> 
>> Declaration in most narrow possible scope please.
>> 
>>> +    for_each_cpu(cpu, mask)
>>> +    {
>>> +        cpu_info = (struct cpu_info *)(stack_base[cpu] + STACK_SIZE) - 1;
>>> +        cpu_info->root_pgt_changed = true;
>>> +    }
>>> +}
>> 
>> So why is this not sending an IPI to trigger the FLUSH_ROOT_PGTBL processing
>> visible above?
> 
> It is not needed for the case this function was introduced for. Maybe I
> should rename it to flush_root_pgt_mask_lazy()?
> 
>> Especially the cast you need here looks quite undesirable.
> 
> In get_cpu_info() it is done the same way. I can add a common helper
> function to current.h.

While, as I understand it, this is moot given the discussion below, a
common helper would indeed be the way to go if we really needed such
cross-CPU access.
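Something along these lines in current.h would do (a sketch only; the
name is illustrative, and it assumes stack_base[] entries are
STACK_SIZE-aligned, which get_cpu_info() already relies on):

    /* Map an arbitrary stack pointer to the cpu_info at that stack's top. */
    static inline struct cpu_info *get_cpu_info_from_stack(unsigned long sp)
    {
        return (struct cpu_info *)((sp | (STACK_SIZE - 1)) + 1) - 1;
    }

get_cpu_info() could then simply pass in the local stack pointer, and
the cast above would become
get_cpu_info_from_stack((unsigned long)stack_base[cpu]).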

>> Plus I think this is racy (even if quite unlikely to trigger) with a CPU
>> being brought down: What if stack_base[cpu] becomes NULL after you've
>> found the respective bit in mask set? Especially when Xen runs itself
>> virtualized, (almost) arbitrarily long periods of time may pass between
>> any two instructions.
> 
> Right.
> 
> So either I'm adding some kind of locking/RCU, or I'm switching to using
> IPIs and accessing root_pgt_changed only locally.
> 
> Do you have any preference?

Since issuing an IPI is just a single call, I'd prefer not to have new
(locking, RCU, or whatever else) logic added here. Unless of course
someone, in particular Tim, thinks sending an IPI here is a rather bad idea.
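To be concrete (a sketch, assuming FLUSH_ROOT_PGTBL gets carried through
the existing flush IPI path into the flush_area_local() handling quoted
at the top), the whole loop could collapse to:

    /* Have each CPU in mask set its own root_pgt_changed via the flush IPI. */
    flush_mask(mask, FLUSH_ROOT_PGTBL);

That funnels everything through the FLUSH_ROOT_PGTBL handling in
flush_area_local(), avoiding both the cross-stack cast and the CPU
offlining race.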

Jan


