
Re: [Xen-devel] [PATCH for-4.12 v2 16/17] xen/arm: Implement Set/Way operations

On 07/12/2018 21:29, Stefano Stabellini wrote:
CC'ing Dario

Dario, please give a look at the preemption question below.

On Fri, 7 Dec 2018, Julien Grall wrote:
On 06/12/2018 23:32, Stefano Stabellini wrote:
On Tue, 4 Dec 2018, Julien Grall wrote:
So you may not execute them before returning to the guest, introducing a
long delay. That's why we execute the rest of the code with interrupts
masked. If softirq_pending() returns 0, then you know there were no
more softirqs pending to handle. Any new one will be signaled via
an interrupt, which can only come up when IRQs are unmasked.

The one before executing vCPU work can potentially be avoided. The reason I
added it is that it can take some time before p2m_flush_vm() will call softirq. As
we do this on the return to guest, we may already have been executing for some time
in the hypervisor. So this gives us a chance to preempt if the vCPU has consumed
its slice.

This one is difficult to tell whether it is important or if it would be
best avoided.

For Dario: basically we have a long-running operation to perform, and we
thought that the best place for it would be on the path returning to the
guest (leave_hypervisor_tail). The operation can interrupt itself by
checking softirq_pending() once in a while to avoid blocking the pCPU
for too long.

The question is: is it better to check softirq_pending() even before
starting? Or is checking every so often during the operation good enough? Does it
even matter?
I am not sure I understand your concern here. Checking softirq_pending() often is not an issue. The issue is when we happen to not check it. At the moment, I would prefer to be over-cautious until we figure out whether this is a real issue.

If you are concerned about the performance impact, this is only called when a guest is using set/way.


Julien Grall
