
Re: [Xen-devel] [PATCH] Rendezvous selected cpus


  • To: "Tian, Kevin" <kevin.tian@xxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
  • Date: Mon, 04 Feb 2008 07:30:45 +0000
  • Delivery-date: Sun, 03 Feb 2008 23:30:37 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Achlcy9d/rIcXbNERJmDq57BBZNC3QBGTqlDAA+IkJAADVOV/A==
  • Thread-topic: [Xen-devel] [PATCH] Rendezvous selected cpus

On 4/2/08 02:02, "Tian, Kevin" <kevin.tian@xxxxxxxxx> wrote:

> All of the above made me reluctant to follow the Linux semantics, and
> further thought led me to ask: why not let cpu_i conduct the stop
> process directly, and let the concerned cpus call (*fn) via a new
> action such as ***_INVOKE? That way, cpu_i doesn't need to be cut
> off from the current flow, and once stop_machine returns, all the
> work that must be handled in a stopped environment is done.

Yes, I saw that, but it doesn't require a modified interface. The semantics
of the call are still:
 1. Synchronize/rendezvous all CPUs (the caller is assumed already to be at
a safe point and just needs to disable IRQs at the right time).
 2. Run the (*fn) on one designated CPU (via your new bit of mechanism).
 3. All done (sketched below).
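
To make that flow concrete, here is a minimal user-space model of the
three-step rendezvous, using pthreads and C11 atomics in place of real
CPUs, IPIs, and softirq context. Everything here — the state names, the
SM_INVOKE step standing in for ***_INVOKE, NR_CPUS, and the helper
functions — is an illustrative assumption, not the actual Xen or Linux
code:

/*
 * Sketch only: a user-space model of the rendezvous protocol.
 * Threads stand in for CPUs; the atomic state machine stands in
 * for the IPI-driven one.  All names are assumptions.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS 4

enum stopmachine_state { SM_PREPARE, SM_DISABLE_IRQ, SM_INVOKE, SM_EXIT };

static _Atomic enum stopmachine_state state = SM_PREPARE;
static atomic_int done_count;

static int (*stop_fn)(void *);
static void *stop_data;
static unsigned int stop_cpu;   /* the CPU designated to run (*fn) */
static int stop_result;

/* Advance the state machine, then wait until every "CPU" has acked. */
static void set_state(enum stopmachine_state s, int nr)
{
    atomic_store(&done_count, 0);
    atomic_store(&state, s);
    while (atomic_load(&done_count) != nr)
        ;   /* spin, as real CPUs would with IRQs disabled */
}

/* What every rendezvoused CPU runs; only stop_cpu executes (*fn). */
static void *stopmachine_cpu(void *arg)
{
    unsigned int cpu = (unsigned int)(uintptr_t)arg;
    enum stopmachine_state seen = SM_PREPARE;

    for (;;) {
        enum stopmachine_state s = atomic_load(&state);
        if (s == seen)
            continue;
        seen = s;
        if (s == SM_INVOKE && cpu == stop_cpu)
            stop_result = stop_fn(stop_data);   /* the INVOKE action */
        atomic_fetch_add(&done_count, 1);       /* ack this state */
        if (s == SM_EXIT)
            return NULL;
    }
}

/*
 * Linux-style entry point: rendezvous everyone, run fn(data) on the
 * designated cpu, release everyone, return fn's result.  The caller
 * (cpu_i) drives the state machine directly, as proposed above.
 */
static int stop_machine_run(int (*fn)(void *), void *data, unsigned int cpu)
{
    pthread_t tid[NR_CPUS];

    stop_fn = fn; stop_data = data; stop_cpu = cpu;

    for (unsigned int i = 0; i < NR_CPUS; i++)
        pthread_create(&tid[i], NULL, stopmachine_cpu,
                       (void *)(uintptr_t)i);

    set_state(SM_DISABLE_IRQ, NR_CPUS);   /* 1. everyone quiesces */
    set_state(SM_INVOKE, NR_CPUS);        /* 2. designated CPU runs fn */
    set_state(SM_EXIT, NR_CPUS);          /* 3. all done, resume */

    for (unsigned int i = 0; i < NR_CPUS; i++)
        pthread_join(tid[i], NULL);

    return stop_result;
}

static int hello(void *data)
{
    printf("fn on designated cpu: %s\n", (const char *)data);
    return 0;
}

int main(void)
{
    return stop_machine_run(hello, "world", 2);
}

The point of the model is that the initiating context conducts the stop
process itself while only the designated CPU ever calls (*fn) — which is
exactly the proposed mechanism, wearing the unmodified interface.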

This fits fine within the stop_machine_run(fn, data, cpu) interface, even
though the underlying implementation is somewhat different (and you already
have that pretty much correct!).

Really, all you need to do is put your implementation in a separate file,
do some simple renaming, push some of your caller code (the bits that
create the cpumasks) into your stop_machine_run() implementation, and give
stop_machine_run() the simple Linux interface, as sketched below.
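
In sketch form (assumed names, grossly simplified types — the real Xen
cpumask_t and the internal entry point will differ), that wrapper might
look like:

typedef unsigned long cpumask_t;          /* stand-in for Xen's cpumask_t */
#define CPU_MASK_ALL ((cpumask_t)~0UL)    /* "every online CPU", simplified */

/* Hypothetical internal entry point that still takes an explicit mask. */
static int __stop_machine_run(int (*fn)(void *), void *data,
                              unsigned int cpu, cpumask_t allowed)
{
    (void)cpu;
    (void)allowed;
    /* ... rendezvous loop as modelled above ... */
    return fn(data);    /* placeholder for the real INVOKE step */
}

/* The simple Linux-style interface: callers no longer build cpumasks. */
int stop_machine_run(int (*fn)(void *), void *data, unsigned int cpu)
{
    cpumask_t allowed = CPU_MASK_ALL;   /* formerly built by each caller */
    return __stop_machine_run(fn, data, cpu, allowed);
}

The win is that callers only state intent — run fn on this CPU with
everyone else quiesced — and the policy of which CPUs to rendezvous lives
in one place.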

 -- Keir
 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

