
Re: [Xen-devel] [PATCH] Rendezvous selected cpus


  • To: "Tian, Kevin" <kevin.tian@xxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
  • Date: Mon, 11 Feb 2008 16:01:44 +0000
  • Delivery-date: Mon, 11 Feb 2008 08:02:19 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Achlcy9d/rIcXbNERJmDq57BBZNC3QBGTqlDAA+IkJAADVOV/AAEgPSQAW1h0T4=
  • Thread-topic: [Xen-devel] [PATCH] Rendezvous selected cpus

Applied, but I put the patch on a diet first. You'll want to check it still
does what you expect... :-)

 -- Keir

On 4/2/08 09:51, "Tian, Kevin" <kevin.tian@xxxxxxxxx> wrote:

> Attached is the updated version per your comments, with the Linux
> stop_machine semantics kept in common code. It is build-tested for
> x86 and should ideally also work on the other architectures.
> 
> One small issue is the locking interaction with cpu hotplug.
> For a cpu hotplug request from the control tools, there is a
> deadlock race between the pure cpu hotplug path and stop_machine
> if we still use a spinlock as the guard. For example, stop_machine
> may hold the lock on one cpu and wait for a response from another
> cpu which happens to be spinning on that same lock to service the
> hotplug request. In that case the other cpu never gets a chance to
> handle the softirq.
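A minimal sketch of the deadlock described above, assuming both paths take the same guard lock; the lock name and the two helpers are illustrative, not the actual Xen code paths:

    /* Illustrative only: one shared guard, two CPUs, one deadlock. */
    static DEFINE_SPINLOCK(hotplug_lock);

    /* CPU A: stop_machine initiator (e.g. called from the S3 path). */
    static void stop_machine_initiator(void)
    {
        spin_lock(&hotplug_lock);      /* A wins the lock first...        */
        signal_other_cpus();           /* ...then waits for every other   */
        wait_for_rendezvous();         /* CPU to park itself (hypothetical
                                        * helpers); this never completes. */
        spin_unlock(&hotplug_lock);
    }

    /* CPU B: pure cpu hotplug request coming from the control tools. */
    static void hotplug_request(void)
    {
        spin_lock(&hotplug_lock);      /* B spins here; while spinning it
                                        * never services the stop-machine
                                        * softirq, so A's wait above never
                                        * ends: the two CPUs deadlock.    */
        spin_unlock(&hotplug_lock);
    }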
> 
> We may be able to use trylock for cpu_up/down; however, that also
> affects stop_machine when it is called from, e.g., the S3 path, for
> which try-and-give-up is too heavy-handed and unexpected. Maybe some
> new cpu_tryup/trydown could be introduced. Not sure...
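A hedged sketch of the trylock idea mentioned above; the lock name, the take_cpu_down() helper and the exact return convention are assumptions, but the shape shows how the hotplug path would back off instead of spinning:

    /* Hypothetical sketch, not the interface actually adopted. */
    int cpu_down(unsigned int cpu)
    {
        int err;

        if ( !spin_trylock(&hotplug_lock) )
            return -EBUSY;             /* give up rather than spin; the
                                        * control tools must retry later */

        err = take_cpu_down(cpu);      /* hypothetical helper doing the
                                        * real offline work              */
        spin_unlock(&hotplug_lock);
        return err;
    }

The cost pointed out in the paragraph above is that a caller such as the S3 path would then also see -EBUSY failures it cannot sensibly retry.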
> 
> But anyway, there is still some way to go for full cpu hotplug
> support, and that doesn't prevent this feature from going in based
> on a spinlock at this stage. :-)
> 
> More comments?
> 
> Thanks,
> Kevin
> 
>> From: Keir Fraser [mailto:Keir.Fraser@xxxxxxxxxxxx]
>> Sent: 4 February 2008 15:31
>> 
>> On 4/2/08 02:02, "Tian, Kevin" <kevin.tian@xxxxxxxxx> wrote:
>> 
>>> All of the above made me reluctant to follow the Linux semantics,
>>> and further thought led me to ask: why not let cpu_i conduct the
>>> stop process directly, and then let the concerned cpus call (*fn)
>>> by adding a new action such as ***_INVOKE? That way, cpu_i does
>>> not need to be cut out of the current flow, and once stop_machine
>>> returns, all the work that needs to be done in a stopped
>>> environment has been completed.
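A rough sketch of the extra action described in the quoted paragraph; the state names and the handler are illustrative rather than the code that was finally applied. The point is that every rendezvoused CPU walks the same state machine, and only the CPU selected for (*fn) acts on the new INVOKE state:

    /* Illustrative state machine, not the applied patch. */
    enum stopmachine_state {
        STOPMACHINE_START,
        STOPMACHINE_DISABLE_IRQ,
        STOPMACHINE_INVOKE,        /* new: the chosen CPU calls (*fn) here */
        STOPMACHINE_EXIT
    };

    /* Runs on each rendezvoused CPU (softirq context in the real code). */
    static void stopmachine_action(enum stopmachine_state state,
                                   int (*fn)(void *), void *data,
                                   unsigned int fn_cpu)
    {
        switch ( state )
        {
        case STOPMACHINE_DISABLE_IRQ:
            local_irq_disable();
            break;
        case STOPMACHINE_INVOKE:
            if ( smp_processor_id() == fn_cpu )
                (*fn)(data);       /* only the selected CPU runs fn */
            break;
        default:
            break;
        }
    }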
>> 
>> Yes, I saw that, but it doesn't require a modified interface. The
>> semantics of the call are still:
>> 1. Synchronize/rendezvous all CPUs (the caller is assumed already to
>>    be at a safe point and just needs to disable IRQs at the right
>>    time).
>> 2. Run the (*fn) on one designated CPU (via your new bit of
>>    mechanism).
>> 3. All done.
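A hedged usage sketch of those three steps from a caller's point of view; the callback name, its body and the choice of CPU 0 are made up for illustration:

    /* Work that must run while every other CPU is quiesced. */
    static int __prepare_suspend(void *data)   /* hypothetical callback */
    {
        /* ... e.g. the final steps of entering S3 ... */
        return 0;
    }

    static int do_suspend(void)
    {
        /* 1. rendezvous all CPUs, 2. run fn on CPU 0, 3. resume all. */
        return stop_machine_run(__prepare_suspend, NULL, 0);
    }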
>> 
>> This fits fine within the stop_machine_run(fn, data, cpu) interface,
>> albeit that the underlying implementation is somewhat different (and
>> you already have that pretty much correct!).
>> 
>> Really all you need to do is put your implementation in a different
>> file, do some simple renaming of stuff, push some of your caller
>> code (the bits that create the cpumasks) into your stop_machine_run()
>> implementation, and give stop_machine_run() the simple Linux
>> interface.
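A sketch of the reshaped entry point described above: the prototype matches the Linux-style stop_machine_run(fn, data, cpu), and the cpumask creation that used to live in the caller is pushed inside. The body and the internal helper are assumptions, not the code that was committed:

    int stop_machine_run(int (*fn)(void *), void *data, unsigned int cpu)
    {
        cpumask_t allbutself;

        /* The caller no longer builds the mask: rendezvous every online
         * CPU except the initiator, and record 'cpu' as the one that
         * will invoke fn once everyone has parked with IRQs disabled.  */
        allbutself = cpu_online_map;
        cpu_clear(smp_processor_id(), allbutself);

        return __stop_machine_run(fn, data, cpu, &allbutself); /* hypothetical */
    }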
>> 
>> -- Keir
>> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

