Re: [Xen-devel] Ideas Re: [PATCH v14 1/2] vmx: VT-d posted-interrupt core logic handling
On Mon, Mar 07, 2016 at 05:19:59PM +0100, Dario Faggioli wrote:
> On Mon, 2016-03-07 at 10:53 -0500, Konrad Rzeszutek Wilk wrote:
> > On Mon, Mar 07, 2016 at 11:21:33AM +0000, George Dunlap wrote:
> > >
> > > > <handwaving>
> > > > Would it be perhaps possible to have an anti-affinity flag to
> > > > deter the scheduler from this? That is, whichever struct vcpu has
> > > > the 'anti-affinity' flag set - the scheduler will try as much as
> > > > it can _not to_ schedule that 'struct vcpu' on this pCPU if the
> > > > previous 'struct vcpu' had this flag as well?
> >
> That can also be seen as a step in the direction of (supporting) gang
> scheduling, which we've already said would be something interesting
> to look at, although difficult to implement and even more difficult to
> figure out whether it is actually a good thing for most workloads.
>
> In any case, I see where this comes from, and am up for thinking about
> it, although my fear is that it would complicate the code by quite a
> bit, so I agree with George that profiling work is necessary to try to
> assess whether it could be really useful (as well as, once
> implemented/drafted, whether it is really good and does not cause perf
> regressions).
>
> > > On the whole it seems unlikely that having two vcpus on a single
> > > pcpu is a "stable" situation -- it's likely to be pretty transient,
> > > and thus not have a major impact on performance.
> >
> > Except that we are concerned with it - in fact we are disabling this
> > feature because it may happen.
>
> I'm sorry, I'm not getting it: what feature are you disabling?

It is already disabled in the code:

/*
 * In the current implementation of VT-d posted interrupts, in some extreme
 * cases, the per cpu list which saves the blocked vCPU will be very long,
 * and this will affect the interrupt latency, so let this feature off by
 * default until we find a good solution to resolve it.
 */
bool_t __read_mostly iommu_intpost;

> > > But I think some profiling is in order before anyone does serious
> > > work on this.
> >
> > I appreciate your response being 'profiling' instead of 'Are you
> > NUTS!?' :-)
>
> That's only because everyone knows you're nuts, there's no need to
> state it all the time! :-P :-P

<laughs> Glad that you have the _right_ expectations of me :)
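For what it's worth, a minimal, self-contained sketch of the anti-affinity
idea floated above. This is not Xen code: the toy_vcpu/toy_pcpu structs, the
anti_affinity flag and the pick_pcpu() helper are hypothetical names invented
for illustration, and the heuristic is simply "prefer a pCPU whose previously
scheduled vcpu did not carry the flag, falling back to any pCPU rather than
refusing to schedule".

/*
 * Toy model of the "anti-affinity flag" idea from the thread above.
 * NOT Xen code: structs and helpers are made-up illustrations of the
 * heuristic "avoid putting a flagged vcpu on a pCPU whose previously
 * scheduled vcpu was also flagged".
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_vcpu {
    int id;
    bool anti_affinity;      /* hypothetical per-vcpu flag */
};

struct toy_pcpu {
    int id;
    bool prev_had_flag;      /* did the last vcpu run here carry the flag? */
};

/*
 * Pick a pCPU for 'v': prefer one whose previous vcpu did not have the
 * flag; fall back to the first pCPU if no such candidate exists, since
 * the flag is only a hint, not a hard constraint.
 */
static struct toy_pcpu *pick_pcpu(const struct toy_vcpu *v,
                                  struct toy_pcpu *pcpus, int nr)
{
    int i;

    for ( i = 0; i < nr; i++ )
    {
        if ( !v->anti_affinity || !pcpus[i].prev_had_flag )
            return &pcpus[i];
    }
    return &pcpus[0];
}

int main(void)
{
    struct toy_pcpu pcpus[2] = { { 0, true }, { 1, false } };
    struct toy_vcpu v = { 7, true };
    struct toy_pcpu *p = pick_pcpu(&v, pcpus, 2);

    /* Expect pCPU 1: pCPU 0's previous vcpu also carried the flag. */
    printf("vcpu %d -> pcpu %d\n", v.id, p->id);
    p->prev_had_flag = v.anti_affinity;   /* remember for the next pick */
    return 0;
}

Whether such a hint is worth the extra bookkeeping in the real schedulers is
exactly what the profiling George asks for would have to show.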