
Re: [Xen-devel] [PATCH v8 00/10] qspinlock: a 4-byte queue spinlock with PV support



On Fri, Apr 04, 2014 at 01:58:15PM -0400, Konrad Rzeszutek Wilk wrote:
> On Fri, Apr 04, 2014 at 01:13:17PM -0400, Waiman Long wrote:
> > On 04/04/2014 12:55 PM, Konrad Rzeszutek Wilk wrote:
> > >On Thu, Apr 03, 2014 at 10:57:18PM -0400, Waiman Long wrote:
> > >>On 04/03/2014 01:23 PM, Konrad Rzeszutek Wilk wrote:
> > >>>On Wed, Apr 02, 2014 at 10:10:17PM -0400, Waiman Long wrote:
> > >>>>On 04/02/2014 04:35 PM, Waiman Long wrote:
> > >>>>>On 04/02/2014 10:32 AM, Konrad Rzeszutek Wilk wrote:
> > >>>>>>On Wed, Apr 02, 2014 at 09:27:29AM -0400, Waiman Long wrote:
> > >>>>>>>N.B. Sorry for the duplicate. This patch series was resent as the
> > >>>>>>>      original one was rejected by the vger.kernel.org list server
> > >>>>>>>      due to a long header. There is no change in content.
> > >>>>>>>
> > >>>>>>>v7->v8:
> > >>>>>>>   - Remove one unneeded atomic operation from the slowpath, thus
> > >>>>>>>     improving performance.
> > >>>>>>>   - Simplify some of the code and add more comments.
> > >>>>>>>   - Test for the X86_FEATURE_HYPERVISOR CPU feature bit to
> > >>>>>>>     enable/disable the unfair lock.
> > >>>>>>>   - Reduce unfair lock slowpath lock stealing frequency depending
> > >>>>>>>     on its distance from the queue head.
> > >>>>>>>   - Add performance data for IvyBridge-EX CPU.
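
(As a side note on the X86_FEATURE_HYPERVISOR item above: a minimal sketch of
what such a feature-bit gate typically looks like on x86 is below. The names
qspinlock_unfair_enabled and qspinlock_virt_init are hypothetical and not
taken from the actual patch.)

/*
 * Illustrative sketch only: enable the unfair-lock behaviour only when
 * the kernel has detected that it is running as a guest, which is what
 * the X86_FEATURE_HYPERVISOR bit indicates.
 */
#include <linux/init.h>
#include <asm/cpufeature.h>

static bool qspinlock_unfair_enabled;

static __init int qspinlock_virt_init(void)
{
        /* Unfair lock stealing only makes sense when virtualized. */
        if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
                qspinlock_unfair_enabled = true;

        return 0;
}
early_initcall(qspinlock_virt_init);
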
> > >>>>>>FYI, your v7 patch with 32 VCPUs (on a 32-CPU socket machine) in an
> > >>>>>>HVM guest under Xen stops working after a while. The workload
> > >>>>>>is doing 'make -j32' on the Linux kernel.
> > >>>>>>
> > >>>>>>Completely unresponsive. Thoughts?
> > >>>>>>
> > >>>>>Thanks for reporting that. I haven't done that much testing on Xen.
> > >>>>>My focus was on KVM. I will perform more tests on Xen to see if I
> > >>>>>can reproduce the problem.
> > >>>>>
> > >>>>BTW, does the halting and IPI-sending mechanism work in HVM? I saw
> > >>>Yes.
> > >>>>that in RHEL7, PV spinlock was explicitly disabled when in HVM mode.
> > >>>>However, this piece of code isn't in the upstream code, so I wonder if
> > >>>>there is a problem with that.
> > >>>The PV ticketlock fixed it for HVM. It was disabled before because
> > >>>the PV guests were using bytelocks while the HVM guests were using
> > >>>ticketlocks, and you couldn't swap in PV bytelocks for ticketlocks
> > >>>during startup.
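
(For reference, the halt/IPI-kick protocol being discussed boils down to
roughly the sketch below. This is a generic illustration, not the actual Xen
or KVM implementation; hv_halt_self() and hv_kick_cpu() are placeholders for
the hypervisor-specific halt and IPI primitives.)

#include <linux/percpu.h>
#include <linux/cpumask.h>
#include <asm/spinlock_types.h>

/* Hypothetical hypervisor primitives: block this vCPU / wake another one. */
extern void hv_halt_self(void);
extern void hv_kick_cpu(int cpu);

struct pv_waiter {
        struct arch_spinlock *lock;
        __ticket_t want;                /* ticket this CPU is waiting for */
};
static DEFINE_PER_CPU(struct pv_waiter, pv_waiter);

/* Slowpath side: record what we are waiting for, then halt the vCPU. */
static void pv_wait_ticket(struct arch_spinlock *lock, __ticket_t want)
{
        this_cpu_write(pv_waiter.lock, lock);
        this_cpu_write(pv_waiter.want, want);
        hv_halt_self();                 /* blocks until kicked or interrupted */
        this_cpu_write(pv_waiter.lock, NULL);
}

/* Unlock side: find the CPU waiting for the next ticket and kick it. */
static void pv_kick_ticket(struct arch_spinlock *lock, __ticket_t next)
{
        int cpu;

        for_each_possible_cpu(cpu) {
                struct pv_waiter *w = &per_cpu(pv_waiter, cpu);

                if (w->lock == lock && w->want == next) {
                        hv_kick_cpu(cpu);       /* wakeup IPI */
                        break;
                }
        }
}
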
> > >>The RHEL7 code already uses PV ticketlocks. RHEL7 uses a single
> > >>kernel for all configurations, so PV ticketlocks as well as Xen and
> > >>KVM support are compiled in. I think booting the kernel on bare
> > >>metal will cause the Xen code to work in HVM mode, thus activating
> > >>the PV spinlock code, which has a negative impact on performance.
> > >Huh? -EPARSE
> > >
> > >>That may be why it was disabled, so that the bare metal performance
> > >>would not be impacted.
> > >I am not following you.
> > 
> > What I am saying is that when Xen and PV spinlock support are compiled
> > into the current upstream kernel, the PV spinlock jump label is turned on
> > even when booted on bare metal. In other words, the PV spinlock code is
> 
> How does it turn it on? I see that the jump labels are only turned
> on when the kernel detects that it is running under Xen or KVM.
> They won't be turned on on bare metal.

Well, it seems that it does get turned on on bare metal, which is a stupid mistake.

Sending a patch shortly.
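
For illustration, the shape of that kind of fix is roughly the following; this
is a sketch under the assumption that the bug is an early initcall flipping
the jump label unconditionally, not the actual patch:

/*
 * Sketch only: the initcall that enables the PV ticketlock jump label
 * must bail out unless the kernel is really running as a Xen guest, so
 * that the PV slowpath stays disabled on bare metal.
 */
#include <linux/init.h>
#include <linux/jump_label.h>
#include <xen/xen.h>

extern struct static_key paravirt_ticketlocks_enabled;

static __init int xen_init_spinlocks_jump(void)
{
        if (!xen_domain())              /* bare metal: leave the key off */
                return 0;

        static_key_slow_inc(&paravirt_ticketlocks_enabled);
        return 0;
}
early_initcall(xen_init_spinlocks_jump);
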
> 
> > active even when it is not needed and actually slows things down in
> > that situation. This is a problem, and we need to find a way to make
> > sure that the PV spinlock code won't be activated on bare metal.
> 
> Could you explain to me which piece of code enables the jump labels
> on bare metal, please?
> > 
> > -Longman
