RE: [Xen-devel] [PATCH] Reuse irq number for virq/ipi after vcpu unplug/plug
>From: Keir Fraser [mailto:Keir.Fraser@xxxxxxxxxxxx]
>Sent: 4 February 2007 12:52
>
>On 3/2/07 04:49, "Tian, Kevin" <kevin.tian@xxxxxxxxx> wrote:
>
>> The irq number for a per-vcpu event (virq/ipi) needs to be kept
>> across vcpu plug/unplug, once allocated. We just reuse this irq
>> number and bind it to a new event port. Otherwise /proc/interrupts
>> exports messed-up statistics like:
>
>No one cares about absolute /proc/interrupts numbers, only the rate
>of change. If pushed, I would argue that the stats should be zeroed
>when an interrupt line is freed (since the interrupt then stops
>appearing in /proc/interrupts, which logically implies that the stats
>lifetime has ended, and so a reuse of that interrupt is a new lifetime
>starting from zero). After all, all irq-evtchn bindings are dynamic
>when running on Xen: a Linux irq may theoretically get used for all
>sorts of different devices during the lifetime of the Linux guest.
>Should all these uses get aggregated over time?
>
>Zeroing the stats would potentially be a patch for lkml, or we could
>do it for dynirqs ourselves in unbind_from_irq(). That's a patch I
>would accept.
>
> -- Keir

Basically I agree with you, but I'm not sure about the usage model for
such stats. Isn't it up to the application to decide? An app may decide
to take some action (like balancing) based on the rate of change, or
based on the aggregated value. If both usage models exist, do we need
to consider both for compatibility with the app's assumptions?

Another point I'm not sure about is save/restore and migration. From
the user's perspective, he has no knowledge that cpu1...N are unplugged
and then plugged back in along the way. So will anyone find it strange
that all cpus except cpu0 get their per-vcpu interrupt stats emptied
after being restored or migrated? That doesn't seem like an explicit
lifetime restart... So I still think virq/ipi are a bit special, and
it's reasonable to zero stats for the rest. :-)

Thanks,
Kevin
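Keir's suggestion above, zeroing the dynirq stats in unbind_from_irq(),
amounts to clearing the per-CPU counters that back /proc/interrupts when a
dynamic irq is released. Below is a minimal sketch of what that could look
like, assuming the 2.6-era kstat_cpu(cpu).irqs[] accounting; the helper name
zero_irq_stats is illustrative and not taken from the actual patch.

    /*
     * Hedged sketch only: clear the /proc/interrupts counters for a
     * dynamic irq when it is unbound, roughly what "do it for dynirqs
     * ourselves in unbind_from_irq()" could look like.  Assumes the
     * 2.6-era kstat_cpu(cpu).irqs[] counters; names are illustrative.
     */
    #include <linux/kernel_stat.h>
    #include <linux/cpumask.h>

    static void zero_irq_stats(int irq)
    {
            int cpu;

            /*
             * show_interrupts() sums kstat_cpu(cpu).irqs[irq] over all
             * CPUs, so clearing each per-CPU counter makes a later reuse
             * of this irq number start a fresh stats lifetime from zero.
             */
            for_each_possible_cpu(cpu)
                    kstat_cpu(cpu).irqs[irq] = 0;
    }

In the dynirq case this would presumably be called from unbind_from_irq()
just before the irq number is returned for reuse; per Kevin's point, a
per-vcpu virq/ipi irq that is kept across vcpu unplug/plug would skip the
zeroing so its counts survive save/restore and migration.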