
Re: [Xen-devel] [PATCH v7] xSplice v1 design and implementation.

On 04/11/2016 04:43 PM, Konrad Rzeszutek Wilk wrote:
On Mon, Apr 11, 2016 at 10:32:20AM -0400, Konrad Rzeszutek Wilk wrote:
*Hypervisor Maintainers*

Jan, the hypervisor patches #2, #5-#17, #21-#23 need your Ack.

s/Ack./Ack please./
*Are there any TODOs left from v5 or v6 reviews?*

One I hope can be deferred: xensyms_read, which we use in
  "[PATCH v7 12/24] xsplice,symbols: Implement symbol name resolution on
  address.", is not the fastest. It will need some tweaking (or a new
function will have to be written), and I hope that this can be done in v4.8.

Let me correct myself. I am looking at it right now - so I may have it
ready soonish, but there are also bugs to work on.

The other is to test this on an 8-socket machine with tons of CPUs.
Somebody else is using this beast right now. The impact is whether the default
timeout of 30ms to quiesce the CPUs should be increased on those beasts.

A 240-CPU box that is idle had no trouble with the default 30ms. I
will be putting some load on it and see how that goes later today.

100 guests (each with 2 vCPUs) doing a CPU-intensive workload, and the
default 30ms timeout worked out just fine.

Yeah, that works fine. The problem comes (on any machine) if your CPUs are busy in Xen code; then it's possible to fail a 30ms timeout. One way to reproduce this is to use several large guests with several vCPUs each, simultaneously localhost migrate them, then try to apply a patch. It's possible for this to trip a 5s watchdog timeout, let alone a 30ms timeout. So what is important is not really the number of pCPUs but the kind of workload that is being run.

Ross Lagerwall

Xen-devel mailing list


