
Re: [Xen-devel] [PATCH 4/7] xen/arm: When CPU dies, free percpu area immediatelly

On 18/04/18 23:52, Stefano Stabellini wrote:
On Tue, 17 Apr 2018, Julien Grall wrote:
On 17/04/18 11:52, Mirela Simonovic wrote:
Hi Julien,

Hi Mirela,

On Mon, Apr 16, 2018 at 5:21 PM, Julien Grall <julien.grall@xxxxxxx> wrote:

On 16/04/18 14:41, Mirela Simonovic wrote:

On Mon, Apr 16, 2018 at 3:14 PM, Julien Grall <julien.grall@xxxxxxx>

On 12/04/18 22:31, Stefano Stabellini wrote:

On Thu, 12 Apr 2018, Julien Grall wrote:

On 12/04/18 00:46, Stefano Stabellini wrote:

On Wed, 11 Apr 2018, Julien Grall wrote:

On 11/04/18 14:19, Mirela Simonovic wrote:

I guess the rcu_barrier() is in the function handling suspend/resume, so that doesn't cover the hotplug case. Looking at x86 for the hotplug case: there is an rcu_barrier() in cpu_{up,down}_helper, but it is only present in the case where cpu_{up,down} failed. I am not sure how this is handled on x86 otherwise.

Andrew, Jan, do you know when the percpu area will be freed on hotplug? It seems to be done via call_rcu(...), but I am not sure when this is going to be executed.

AFAIK disable/enable_nonboot_cpus() is the only way to do hotplug,
and rcu_barrier() is not included in that flow.

That's not the only way. I clearly specified one in my previous answer
(cpu_{up,down}_helper) and there are other places (look for cpu_up).

I've looked at cpu_{up,down}_helper and cpu_up, and I'm convinced now
that adding rcu_barrier() prior to calling enable_nonboot_cpus() is
the right approach.

cpu_{up,down}_helper functions exist only for x86.

They have nothing very x86-specific AFAICT, so they could potentially be used
for Arm when XEN_SYSCTL_hotplug is implemented.

However, cpu_up_helper() does call rcu_barrier() prior to calling cpu_up().

That's not true. Below is the code for cpu_up_helper():

     int ret = cpu_up(cpu);     <- first call
     if ( ret == -EBUSY )
     {
         rcu_barrier();         <- RCU barrier
         ret = cpu_up(cpu);     <- second call
     }
     return ret;

So the rcu_barrier() is only called after cpu_up(), in the case where it returns -EBUSY.

So calling rcu_barrier() is expected to be done prior to calling
cpu_up() (or enable_nonboot_cpus(), which is just a wrapper for cpu_up()).

I believe this is the right way to do it, because cpu_up() is used for
enabling non-boot CPUs in both the boot and suspend/hotplug scenarios,
while rcu_barrier() is not required in the boot scenario.
Therefore, I'll add rcu_barrier() prior to calling
enable_nonboot_cpus(). If I missed something please let me know.
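Sketched against the description above, the proposed placement would look roughly like this (the surrounding suspend routine is a placeholder, not actual Xen code):

```c
/* Placeholder suspend routine -- a sketch of where the barrier would go,
 * not actual Xen code. */
static long system_suspend(void *data)
{
    disable_nonboot_cpus();       /* CPUs die; each percpu area free is
                                   * queued via call_rcu() */

    /* ... platform enters and leaves the suspended state ... */

    rcu_barrier();                /* flush pending callbacks, so the stale
                                   * percpu areas are actually freed ... */
    return enable_nonboot_cpus(); /* ... and cpu_up() won't see -EBUSY */
}
```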

See above, this is exactly why I asked for Andrew's and Jan's input on how RCU
work is flushed when using cpu_up_helper/cpu_down_helper. Because I don't
understand whether it is meant to work.

So I would like to see whether it would make sense to put the rcu_barrier()
somewhere else to cover every call of cpu_up().

I thought that for the specific problem we are talking about, we only
need an rcu_barrier() after a CPU has been brought offline (and before
it is brought online again). I don't understand what other use cases you
are thinking about that might require an rcu_barrier().

Is this the question you are asking Andrew and Jan?

There are 2 use cases:
        - suspend/resume
        - XEN_SYSCTL_hotplug (not implemented yet on Arm)

Mirela's suggestion only applies to suspend/resume. However, there is clearly something undocumented in the code about the expectations on calling cpu_up()/cpu_down().

I would like to understand where the rcu_barrier() is when the CPU is offlined using the hypercall XEN_SYSCTL_hotplug. As I pointed out, the only rcu_barrier() call I can find in that path is when cpu_down() fails.

The answer may be: "It is missing and would need to be fixed on x86" and that would be fine by me.

In any case, we should probably document the expectation if we rely on the caller of cpu_{up,down} to call rcu_barrier(). Otherwise, this is yet another way to rediscover the problem with a different path.
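If the expectation does end up resting on the callers, a doc comment along these lines (a sketch, not existing Xen documentation) would make it explicit:

```c
/*
 * Bring a previously offlined CPU back online.
 *
 * NB: offlining a CPU frees its percpu area via call_rcu(), so that free
 * may still be pending on the next cpu_up(), which then fails with
 * -EBUSY.  Callers that offline and later re-online CPUs (suspend/resume,
 * XEN_SYSCTL_hotplug) must either call rcu_barrier() first or retry on
 * -EBUSY, as cpu_up_helper() does.
 */
int cpu_up(unsigned int cpu);
```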


Julien Grall

Xen-devel mailing list


