
Re: [Xen-devel] [PATCH v2] xen/arm: Trap and yield on WFE instructions



Hi Anup and Ian,

On 07/16/2014 03:49 PM, Ian Campbell wrote:
> 
> On Wed, 2014-07-16 at 16:02 +0530, Anup Patel wrote:
>> If we have a Guest/DomU with two or more of its VCPUs running
>> on same host CPU then it can quite likely happen that these
>> VCPUs fight for same spinlock and one of them will waste CPU
>> cycles in WFE instruction. This patch makes WFE instruction
>> trap for VCPU and forces VCPU to yield its timeslice.
>>
>> The KVM ARM/ARM64 also does similar thing for handling WFE
>> instructions. (Please refer,
>> https://lists.cs.columbia.edu/pipermail/kvmarm/2013-November/006259.html)
>>
>> In general, this patch is more of an optimization for an
>> oversubscribed system having number of VCPUs more than
>> underlying host CPUs.
>>
>> Changes since V1:
>>  - Added separate member in union hsr for decoding WFI/WFE
>>    related info.
>>
>> Signed-off-by: Anup Patel <anup.patel@xxxxxxxxxx>
>> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@xxxxxxxxxx>
>> Tested-by: Pranavkumar Sawargaonkar <pranavkumar@xxxxxxxxxx>
> 
> Acked + applied. There was a conflict with "[PATCH v4 1/2] xen/arm :
> Adding helper function for WFI" which I just applied before it. I fixed
> it up and the result is below, please check it is ok.
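(For context, the behaviour described in the commit message boils down to trapping WFE/WFI to the hypervisor, yielding the VCPU on WFE and blocking it on WFI. A rough sketch of that dispatch follows; the field and helper names, e.g. hsr.wfi_wfe.ti and vcpu_block_unless_event_pending(), are my recollection of the v2 patch plus the separate WFI helper patch, not the exact committed code.)

/*
 * Fragment of the stage-2 trap dispatch, assuming the guest runs with
 * HCR.TWI and HCR.TWE set.  Names are an approximation, see above.
 */
case HSR_EC_WFI_WFE:
    if ( hsr.wfi_wfe.ti )           /* TI bit: 1 = WFE, 0 = WFI */
    {
        /*
         * WFE: the guest is most likely spinning on a lock held by
         * another VCPU, so give up the timeslice instead of burning it.
         */
        vcpu_yield();
    }
    else
    {
        /* WFI: block the VCPU until an interrupt/event is pending. */
        vcpu_block_unless_event_pending(current);
    }
    advance_pc(regs, hsr);          /* skip past the trapped instruction */
    break;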

This patch makes Xen unstable on some ARM32 platforms: hackbench crashes
after a few minutes in the guest.

hackbench output in the guest:
Running in threaded mode with 10 groups using 40 file descriptors each
(== 400 tasks)
Each sender will pass 100 messages of 100 bytes
Time: 1.135
Running in process mode with 10 groups using 40 file descriptors each
(== 400 tasks)
Each sender will pass 100 messages of 100 bytes
Time: 1.056
Running in threaded mode with 10 groups using 40 file descriptors each
(== 400 tasks)
Each sender will pass 10000 messages of 100 bytes
Time: 105.583
Running in process mode with 10 groups using 40 file descriptors each
(== 400 tasks)
Each sender will pass 10000 messages of 100 bytes
*** Error in `/usr/bin/hackbench': free(): invalid pointer: 0x016831c0 ***
*** Error in `/usr/bin/hackbench': free(): invalid pointer: 0x01682fe0 ***
SENDER: write (error: Connection reset by peer)
SENDER: write (error: Broken pipe)
SENDER: write (error: Broken pipe)
SENDER: write (error: Broken pipe)

This has been tested on Midway via the Linaro CI loop (failing run [1],
working run [2]). For the moment I'm not able to reproduce it locally on
my Midway node.

I checked the Linux ARM32 KVM code and it has no specific differences
from the ARM64 implementation in this area.
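
For reference, the KVM handling this patch mirrors looks roughly like
this (simplified from memory of arch/arm/kvm/handle_exit.c around that
time; guards and tracepoints omitted):

/* KVM's handler for trapped WFI/WFE, simplified from memory. */
static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
    if (kvm_vcpu_get_hsr(vcpu) & HSR_WFI_IS_WFE)
        kvm_vcpu_on_spin(vcpu);     /* WFE: let another VCPU run */
    else
        kvm_vcpu_block(vcpu);       /* WFI: sleep until an interrupt arrives */

    return 1;
}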

As far as I understand the commit message, this patch doesn't fix any
issue but only improves performance. Indeed, from Anup's mail on V1:

"I did not try any benchmarks myself but hackbench shows good
improvement for KVM hence it is a worthy optimization for Xen too.

I found this change missing for Xen hence this patch."

So nobody has tried to exercise this patch via a benchmark or anything
else on Xen ARM{64,32}... Unless someone has an idea how to fix the
memory corruption quickly, I request that this patch be reverted. I
prefer a stable Xen to a fast one.

Regards,

[1] based on commit af82c49
https://validation.linaro.org/dashboard/permalink/bundle/ca2c130a5ccdfbb00bb1dec51f50bfdd870588e2/

[2] based on commit c047211
https://validation.linaro.org/dashboard/permalink/bundle/d7fef09bb3b7cbdcfc546c0759b209831bde0cb7/

-- 
Julien Grall

