Re: [PATCH for-4.16] Revert "domctl: improve locking during domain destruction"
Hi,

On 09/11/2021 14:55, Roger Pau Monné wrote:
> On Tue, Nov 09, 2021 at 02:42:58PM +0000, Julien Grall wrote:
> > Hi Roger,
> >
> > On 09/11/2021 14:31, Roger Pau Monne wrote:
> > > This reverts commit 228ab9992ffb1d8f9d2475f2581e68b2913acb88.
> > >
> > > Performance analysis has shown that dropping the domctl lock during
> > > domain destruction greatly increases the contention in the heap_lock,
> > > thus making parallel destruction of domains slower.
> > >
> > > The following lockperf data shows the difference between the current
> > > code and the reverted one:
> > >
> > > lock: 3342357(2.268295505s), block: 3263853(18.556650797s)
> > > lock: 2788704(0.362311723s), block:  222681( 0.091152276s)
> >
> > Thanks for the numbers, this is already an improvement over the
> > reverted commit. Can you also please provide some details on the setup
> > that was used to get the numbers? (e.g. how many guests, amount of
> > memory...)
>
> Those are from Dmitry, and are gathered after destroying 5 guests in
> parallel. Given his previous emails he seems to use 2GB HVM guests for
> other tests, so I would assume that's also the case for the lock profile
> data (albeit it's not said explicitly):
>
> https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg01515.html
>
> I'm not sure it's worth adding this explicitly, as it's not a very
> complex test case. Probably any attempt to destroy a minimal number of
> guests in parallel (5?) will already show the lock contention in the
> profiling.

In this case, I am not too concerned about not being able to reproduce it.
However, I think it is good practice to always post the setup along with
the numbers. This makes it easier to understand the context of the patch
and avoids spending time digging into the archives to find the original
report.

Anyway, you already wrote everything above. So this is just a matter of
adding your first paragraph to the commit message + maybe a link to the
original discussion(s).

Cheers,

--
Julien Grall
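[Editor's note: the following is an illustrative sketch, not Xen code. It only mimics the contention pattern discussed above -- several guests torn down in parallel, every freed page taking a shared heap lock -- to show why holding a coarse outer lock (playing the role of the domctl lock) for the whole teardown keeps the workers from piling up on the inner lock. All names (domctl_lock, heap_lock, free_domain_pages, NR_DOMAINS, PAGES_PER_DOM) are invented for the example.]

/*
 * Illustrative sketch only -- NOT Xen code.
 *
 * Build: gcc -O2 -pthread contention_sketch.c -o contention_sketch
 * Run:   ./contention_sketch coarse    (outer lock held, teardowns serialised)
 *        ./contention_sketch fine      (no outer lock, heavy heap_lock contention)
 */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define NR_DOMAINS     5        /* parallel destructions, as in the reported test */
#define PAGES_PER_DOM  100000   /* stand-in for a 2GB guest's page count */

static pthread_mutex_t domctl_lock = PTHREAD_MUTEX_INITIALIZER; /* coarse outer lock */
static pthread_mutex_t heap_lock   = PTHREAD_MUTEX_INITIALIZER; /* shared inner lock */
static int use_coarse_lock;

static void free_domain_pages(void)
{
    /* Every page returned to the heap takes the shared heap_lock. */
    for (int i = 0; i < PAGES_PER_DOM; i++) {
        pthread_mutex_lock(&heap_lock);
        /* ... return one page to the free lists ... */
        pthread_mutex_unlock(&heap_lock);
    }
}

static void *destroy_domain(void *arg)
{
    (void)arg;

    if (use_coarse_lock)
        pthread_mutex_lock(&domctl_lock);   /* serialise the whole teardown */

    free_domain_pages();

    if (use_coarse_lock)
        pthread_mutex_unlock(&domctl_lock);

    return NULL;
}

int main(int argc, char **argv)
{
    pthread_t t[NR_DOMAINS];

    use_coarse_lock = (argc > 1 && strcmp(argv[1], "coarse") == 0);

    for (int i = 0; i < NR_DOMAINS; i++)
        pthread_create(&t[i], NULL, destroy_domain, NULL);
    for (int i = 0; i < NR_DOMAINS; i++)
        pthread_join(t[i], NULL);

    printf("done (%s locking)\n", use_coarse_lock ? "coarse" : "fine-grained");
    return 0;
}

In the "fine" mode all five workers hammer heap_lock concurrently, which is the blocking-time blow-up the lockperf numbers above illustrate; in the "coarse" mode the outer lock serialises them, trading parallelism for far less contention on the inner lock.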