
[Xen-devel] evtchn_upcall_mask for PV-on-HVM


  • To: <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
  • Date: Thu, 30 Nov 2006 15:05:03 +0800
  • Cc: "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx>
  • Delivery-date: Wed, 29 Nov 2006 23:05:15 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AccUTduru7Rk1b5TSXG5odbpD2pVlg==
  • Thread-topic: evtchn_upcall_mask for PV-on-HVM

We seem to have found an interesting bug, though we are not sure.

Each time before xen returns to an hvm domain, it checks 
local_events_need_delivery to see whether any events are pending 
and, if so, injects an event via callback_irq into the virtual 
interrupt controller.
However, the interesting point is that we could not find any place, 
either in xen or in the PV drivers, where evtchn_upcall_mask is ever 
cleared. Its initial value is 1, which makes 
local_events_need_delivery always return 0.
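For reference, the check boils down to something like the sketch 
below (simplified, not the literal xen source; field names follow 
the public vcpu_info ABI and the per-vcpu shared-area lookup is 
abbreviated):

    /*
     * Simplified sketch of local_events_need_delivery() for the
     * current vcpu.
     */
    static inline int local_events_need_delivery(struct vcpu *v)
    {
        struct vcpu_info *vi = v->vcpu_info;  /* per-vcpu shared area */

        /*
         * An event is deliverable only if one is pending AND the
         * per-vcpu upcall mask is clear.  With evtchn_upcall_mask
         * stuck at its initial value of 1, this always returns 0.
         */
        return vi->evtchn_upcall_pending && !vi->evtchn_upcall_mask;
    }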

One possible reason why the PV drivers still work in HVM is that the 
platform pci device happens to share an irq line with another pci 
device within qemu. In that case, an irq from the other pci device 
may still kick the PV drivers into checking for pending events. But 
can this sharing always be guaranteed?

Anyway, we'd like to know whether this is a real bug. If it is, it 
may have some performance impact, and the fix could be to clear 
evtchn_upcall_mask either at domain creation time or when 
callback_irq is set. In any case, this field serves no purpose for 
an HVM domain, since the guest already has rflags.if for the same 
purpose.
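If callback_irq setup turns out to be the right place, the change 
might look something like this sketch (hypothetical placement inside 
the HVM set-param handler; the exact hook point and any locking 
would need review):

    case HVM_PARAM_CALLBACK_IRQ:
        /* ... existing code that records the callback irq ... */

        /*
         * Hypothetical fix: unmask per-vcpu upcalls once the guest
         * has registered a callback irq, so that
         * local_events_need_delivery() can start returning non-zero.
         */
        for_each_vcpu ( d, v )
            vcpu_info(v, evtchn_upcall_mask) = 0;
        break;

Clearing the mask at domain creation time would be even simpler; 
doing it at callback_irq setup merely keeps event delivery off until 
the guest has actually asked for it.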

Thanks,
Kevin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

