Re: [Xen-devel] [PATCH v7 15/17] vmx: VT-d posted-interrupt core logic handling
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: Thursday, September 24, 2015 3:52 PM
> To: Wu, Feng
> Cc: Andrew Cooper; Dario Faggioli; George Dunlap; George Dunlap; Tian, Kevin;
> xen-devel@xxxxxxxxxxxxx; Keir Fraser
> Subject: RE: [Xen-devel] [PATCH v7 15/17] vmx: VT-d posted-interrupt core
> logic handling
>
> >>> On 24.09.15 at 03:50, <feng.wu@xxxxxxxxx> wrote:
> > One issue is that the number of vmexits is far, far bigger than the number
> > of context switches. I tested it for quite a short time and it shows there
> > were 2910043 vmexits and 71733 context switches (only counting the number
> > in __context_switch(), since we only change the PI state in this function).
> > If we change the PI state on each vmexit/vmentry, I am afraid this will
> > hurt performance.
>
> Note that George has already asked whether the updating of the
> PI descriptor is expensive, without you answering.

Updating the PI descriptor needs to be atomic, so I think it should be a
little expensive.

> If this is basically
> just a memory or VMCS field write, I don't think it really matters in
> which code path it sits, regardless of the frequency of either path
> being used. Also note that whatever measuring you do in an area
> like this, it'll only be an example,

I DON'T think it is just an example; the number of vmexits is definitely far
larger than the number of context switches.

> unlikely to be representative of anything.

I don't think so!

> Plus with the tendency to eliminate VMEXITs with newer
> hardware, the penalty of this sitting in the VMEXIT path ought to go
> down.

Broadwell is really very new hardware, and even there the number of VMEXITs
and the number of context switches are not in the same order of magnitude.

Thanks,
Feng

> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
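For context on "updating the PI descriptor needs to be atomic": the VT-d
posted-interrupt descriptor is updated concurrently by the IOMMU hardware, so
software must change its control bits with a locked read-modify-write. Below
is a minimal sketch of the kind of cmpxchg loop involved; it is not Xen's
actual code, and the struct layout, the PI_CTRL_* names, the helper name
pi_set_sn, and the use of C11 atomics are all illustrative assumptions
modelled on the descriptor format in the VT-d specification.

    /*
     * Sketch only, not Xen's implementation.  Layout follows the VT-d
     * posted-interrupt descriptor: a 256-bit pending-interrupt bitmap
     * followed by a 64-bit control word holding ON, SN, the notification
     * vector and the notification destination.
     */
    #include <stdint.h>
    #include <stdatomic.h>

    struct pi_desc {
        uint32_t pir[8];            /* posted-interrupt requests, 256 bits */
        _Atomic uint64_t control;   /* ON (bit 0), SN (bit 1), NV, NDST */
        uint64_t rsvd[3];           /* pad descriptor to 64 bytes */
    };

    #define PI_CTRL_ON  (1ULL << 0) /* outstanding notification */
    #define PI_CTRL_SN  (1ULL << 1) /* suppress notification */

    /*
     * Atomically set SN so no notification interrupt is sent while the
     * vCPU is not running.  Each loop iteration is a locked
     * compare-exchange (LOCK CMPXCHG on x86).
     */
    static void pi_set_sn(struct pi_desc *pi)
    {
        uint64_t old = atomic_load(&pi->control);

        while ( !atomic_compare_exchange_weak(&pi->control, &old,
                                              old | PI_CTRL_SN) )
            ;   /* old is refreshed by the failed compare-exchange */
    }

This is the crux of the trade-off being argued above: a locked
read-modify-write is cheap in isolation, but in Feng's measurement vmexits
outnumbered context switches by roughly 40 to 1 (2910043 vs 71733), so
placing the update on the vmexit/vmentry path executes it about 40 times
more often than placing it in __context_switch().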