Re: [Xen-devel] [PATCH v2 09/27] ARM: GICv3: introduce separate pending_irq structs for LPIs
Hi,

On 27/03/17 10:02, Andre Przywara wrote:
> On 24/03/17 17:26, Stefano Stabellini wrote:
>> On Fri, 24 Mar 2017, Andre Przywara wrote:
>
> I am afraid that this would lead to situations where we needlessly
> allocate and deallocate pending_irqs. Under normal load I'd expect to
> have something like zero to three LPIs pending at any given point in
> time (mostly zero, to be honest). So this will lead to a situation
> where *every* LPI that becomes pending triggers a memory allocation -
> in the hot path. That's why the pool idea.
> So if we are going to shrink the pool, I'd stop at something like five
> entries, to not penalize the common case. Does that sound useful?

Not answering the question directly here; I will summarize the face-to-face
discussion I had with Andre this morning.

Allocating the pending_irq in the IRQ path is not a solution, because memory
allocation must not happen in IRQ context - see the ASSERT(!in_irq()) in
_xmalloc. Regardless of the ASSERT, it would also increase the time needed
to handle and forward an interrupt whenever no free pending_irq is
available, because a new one would have to be allocated. Lastly, we have no
way to tell the guest "try again" if Xen runs out of memory.

The outcome of the discussion is to pre-allocate the pending_irq structs
when a device is assigned to a domain. We know the maximum number of events
supported by a device, and 1 event = 1 LPI. This may use more memory (a
pending_irq is 56 bytes), but at least we avoid allocating on the fly and
can report errors at assignment time.

One could argue that we could allocate on MAPTI instead, to limit the
allocation. However, as we are not yet able to rate-limit or defer the
execution of the command queue, a guest could potentially flood it with
MAPTI commands and monopolize the pCPU for a long time.

Cheers,

--
Julien Grall
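For illustration, here is a minimal, self-contained C sketch of the
pre-allocation scheme described above: one pending_irq per event, allocated
once at device-assignment time and merely looked up in the LPI handling
path. The struct layout and the helper names (its_device, its_device_assign,
lpi_to_pending) are assumptions made for this sketch and do not correspond
to the actual Xen code; Xen would use its own allocator (e.g. xzalloc)
rather than calloc.

/*
 * Minimal sketch of the pre-allocation scheme, with simplified types.
 * The struct layout and names below are made up for illustration.
 */
#include <stdlib.h>

struct pending_irq {
    unsigned int irq;       /* virtual LPI number */
    unsigned long status;   /* pending/enabled state bits */
    /* list heads, priority, ... omitted - roughly 56 bytes in total */
};

struct its_device {
    unsigned int nr_events;         /* events this device can generate */
    struct pending_irq *pend_irqs;  /* one entry per event (1 event = 1 LPI) */
};

/*
 * Called once when the device is assigned to a domain. This runs outside
 * IRQ context, so the allocation may fail gracefully and the error can be
 * reported back to the caller.
 */
static int its_device_assign(struct its_device *dev, unsigned int nr_events)
{
    dev->pend_irqs = calloc(nr_events, sizeof(*dev->pend_irqs));
    if (!dev->pend_irqs)
        return -1;          /* would be -ENOMEM in Xen */
    dev->nr_events = nr_events;
    return 0;
}

/*
 * In the LPI hot path only the pre-allocated entry is looked up; no memory
 * allocation happens here.
 */
static struct pending_irq *lpi_to_pending(struct its_device *dev,
                                          unsigned int event)
{
    return (event < dev->nr_events) ? &dev->pend_irqs[event] : NULL;
}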