RE: [Xen-devel][PV-ops][PATCH 0/2] VNIF: Using smart polling instead of event notification.
> One of the VNIF driver's scalability issues is the high event
> channel frequency. It's highly related to the physical NIC's interrupt
> frequency in dom0, which can reach 20 kHz in some situations. The
> high-frequency event channel notification keeps guest and dom0
> CPU utilization high, especially in multi-VM cases.
> The following two patches use a smart polling mechanism to
> replace event notification and reduce CPU utilization.
>
> Signed-off-by: Dongxiao Xu <dongxiao.xu@xxxxxxxxx>

I really think that this problem would be better solved in the backend.

In its simplest form, the backend simply needs to know the maximum
acceptable packet latency between putting a packet on the rx ring and
notifying the frontend (in addition to the existing event notification
mechanism). The algorithm is something like:

Send Packet to frontend:
. put packet on ring
. if prod crosses event then notify as normal
. if event is > prod and there is no outstanding timer set, then set the
  timer to now + 1ms (or whatever the maximum latency is set to)

On Timer:
. notify the frontend

A more complicated form could also:
. define the maximum amount of 'work' per notify (e.g. always notify when
  there is 256k of data or more, regardless of timers or other criteria)
. define 'max time since last packet' vs 'max time since first packet'
  timers, to allow more packets to build up if they are arriving in a
  steady stream

A question, though: a lot of hardware adapters already support interrupt
moderation, which would result in 'bursty' traffic in the backend. Does
that affect this sort of optimization?

I have some comments on your patches too; I'll follow up in a separate
email.

James
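The delayed-notification algorithm above is small enough to sketch in code.
Below is a minimal, self-contained C illustration of the idea; the struct,
the field names (prod, event), the notify_frontend() stub and the 1 ms
constant are assumptions chosen to mirror the wording of the algorithm, not
the real netback code or Xen's ring API.

/*
 * Standalone sketch of the delayed-notify idea described above -- not
 * netback code.  All names and the 1 ms value are illustrative
 * assumptions, not Xen's actual API.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_LATENCY_NS 1000000ULL   /* 1 ms: maximum added packet latency */

struct rx_ring_state {
    uint32_t prod;            /* backend producer index                     */
    uint32_t event;           /* index at which the frontend wants an event */
    bool     timer_armed;     /* delayed-notify timer outstanding?          */
    uint64_t timer_deadline;  /* absolute deadline for that timer (ns)      */
};

/* Stand-in for the real event-channel notification. */
static void notify_frontend(struct rx_ring_state *r)
{
    printf("notify frontend, prod=%u\n", r->prod);
}

/* "Send Packet to frontend" step from the algorithm above. */
static void send_packet_to_frontend(struct rx_ring_state *r, uint64_t now_ns)
{
    uint32_t old_prod = r->prod;

    r->prod++;                                   /* put packet on ring */

    /* Did prod cross the event index?  (wrap-safe unsigned compare) */
    if ((uint32_t)(r->prod - r->event) < (uint32_t)(r->prod - old_prod)) {
        notify_frontend(r);                      /* notify as normal */
        r->timer_armed = false;
    } else if (!r->timer_armed) {
        /* event not crossed and no timer outstanding: bound the latency;
         * a real backend would fire on_timer() once now >= timer_deadline */
        r->timer_armed = true;
        r->timer_deadline = now_ns + MAX_LATENCY_NS;
    }
}

/* "On Timer" step: fired when timer_deadline expires. */
static void on_timer(struct rx_ring_state *r)
{
    r->timer_armed = false;
    notify_frontend(r);
}

int main(void)
{
    struct rx_ring_state r = { .prod = 0, .event = 1 };

    send_packet_to_frontend(&r, 0);    /* crosses event index: notify now  */

    r.event = r.prod + 3;              /* pretend the frontend now wants a
                                          notification only after 3 more   */

    send_packet_to_frontend(&r, 1000); /* event > prod: arm the 1 ms timer */
    send_packet_to_frontend(&r, 2000); /* timer already armed: nothing     */
    on_timer(&r);                      /* deadline reached: late notify    */
    return 0;
}

The 'more complicated form' would hang off the same two functions: count
bytes queued since the last notify and call notify_frontend() immediately
once a work threshold (e.g. 256k) is crossed, and keep both a
since-first-packet and a since-last-packet deadline so a steady stream can
batch further before the timer fires.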