
Re: [Xen-devel] [PATCH] Simplify IO event handling since it's now only used for IO done notification.


  • To: "Li, Xin B" <xin.b.li@xxxxxxxxx>
  • From: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
  • Date: Fri, 10 Nov 2006 07:54:21 +0000
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 09 Nov 2006 23:54:47 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AccEGeyAxUYCtUelSqeyOM8p7TlblwAWJ3RQAAq4/uc=
  • Thread-topic: [Xen-devel] [PATCH] Simplify IO event handling since it's now only used for IO done notification.

On 10/11/06 3:15 am, "Li, Xin B" <xin.b.li@xxxxxxxxx> wrote:

> In the old code we needed it because an interrupt notification from
> qemu could wake up a vcpu that was waiting for an IO done notification
> from qemu. Now, however, the interrupt notification logic is separated
> and uses a hypercall, so the target vcpu is only woken up after qemu
> changes the IO slot state to STATE_IORESP_READY.  Why do we still need
> wait_on_xen_event_channel now?
> BTW, prepare_wait_on_xen_event_channel and wait_on_xen_event_channel
> don't need the port parameter.

I simply want to keep the usual event-channel semantics even in this limited
scenario, which is that you may get spurious wakeups, so you need to
re-check the condition you blocked on. Also, specifying the port parameter
keeps a clean interface in case there are multiple Xen event channels in
future (unlikely, but still). The fact that the implementation of blocking
is currently quite simplistic is an implementation detail hidden behind the
two xen_event_channel macros. So, basically, I'm keeping the event-channel
'look and feel' since it doesn't actually cost us much to do so. Your patch
was correct though, and a bit less code.
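
To make the spurious-wakeup point concrete, here is a minimal C sketch of the
caller-side pattern being described: block on the event channel, then re-check
the condition you blocked on before treating the IO as complete. The stub macro
bodies, the hvm_wait_for_io() helper and the simplified signatures are
illustrative assumptions, not the actual Xen source; only the macro names and
the ioreq state names come from this thread.

/*
 * Sketch only (not the real Xen code): block on a Xen event channel,
 * but always re-check the blocked-on condition, since wakeups may be
 * spurious.
 */

typedef struct ioreq {
    volatile int state;             /* written by qemu, read by Xen */
} ioreq_t;

#define STATE_IOREQ_NONE    0
#define STATE_IORESP_READY  3

/* Stand-ins for the real macros; note they still take the port, even
 * though the current blocking implementation could get by without it. */
#define prepare_wait_on_xen_event_channel(port) ((void)(port)) /* mark vcpu blocked-in-Xen   */
#define wait_on_xen_event_channel(port)         ((void)(port)) /* yield until an event fires */

static void hvm_wait_for_io(ioreq_t *p, int port)
{
    /*
     * A single wait is not enough under event-channel semantics: a
     * wakeup may arrive for reasons other than IO completion, so the
     * blocked-on condition is re-evaluated on every iteration.
     */
    while ( p->state != STATE_IORESP_READY )
    {
        prepare_wait_on_xen_event_channel(port);
        if ( p->state == STATE_IORESP_READY )   /* close the wakeup race */
            break;
        wait_on_xen_event_channel(port);
    }

    p->state = STATE_IOREQ_NONE;                /* consume the response */
}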

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

