
Re: [Xen-users] query memory allocation per NUMA node




On 01/17/2017 12:23 AM, Dario Faggioli wrote:
> On Mon, 2017-01-16 at 13:18 +0100, Eike Waldt wrote:
>> On 01/12/2017 01:45 AM, Dario Faggioli wrote:
>>> On Mon, 2017-01-09 at 15:47 +0100, Eike Waldt wrote:
>>>> Only doing soft-pinning is way worse for the overall performance
>>>> than hard-pinning (according to my first tests).
>>>>
>>> Can you elaborate on this? I'm curious (what tests, what do the
>>> numbers look like in the two cases, etc.).
>>>
>> setup:
>> - 144 vCPUs on a server with 4 NUMA nodes
>> - pinning Dom0's vCPUs (to pCPUs 0-15)
>> - running 60 DomUs (40 Linux (para), 20 Windows (HVM))
>> - generating roughly 2/3 CPU load with stressapptest (CPU, RAM) and
>> one fio (write I/O) thread in all Linux VMs
>>
> Ok. You didn't say how many vCPUs each VM has. I'm assuming 1?
> 
The VMs come in different t-shirt sizes (varying vCPU counts).

> Also, how are you "pinning Dom0 CPUs", and why?
> 
According to the wiki page "Tuning_Xen_for_Performance", pinning Dom0's
vCPUs helps performance because "Dom0 doesn't have to schedule out".

Xen boot parameters: dom0_max_vcpus=16 dom0_vcpus_pin
+ we keep DomU vCPUs off Dom0's pCPUs (via a custom script).
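For illustration only (this is not the actual script, and the pCPU
range 16-143 is an assumption, taken from the 16-vCPU Dom0 above on a
144-CPU box), the effect is to give every DomU an affinity that
excludes Dom0's pCPUs, e.g. in the DomU's xl config:

  cpus = "16-143"

or at runtime:

  # xl vcpu-pin <domU> all 16-143
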
>> soft-pinning whole NUMA nodes per DomU (depending on NUMA Node memory
>> placement):
>> The load on Dom0 is about 200,
>> the i/o wait is about 30 and
>> the cpu steal time for each vCPU in Dom0 is about 50!
>> Dom0 and DomUs respond very slow.
>>
>> hard-pinning whole NUMA nodes per DomU (depending on NUMA Node memory
>> placement):
>> The load on Dom0 is about 90,
>> the i/o wait is about 30 and
>> the cpu steal time is about 2!
>> Dom0 and DomUs respond ok.
>>
> Mmm... If possible, I'd like to see the output of the following
> commands, with all the domains created (it's not important that they
> run a benchmark, they just need to be live).
> 
> # xl info -n
> # xl list -n
> # xl vcpu-list
> # xl debug-key u ; xl dmesg
> 
> And this is for both of the configurations you say you've tried above.
> 
>> This simple test tells me that soft-pinning is way worse than
>> hard-pinning.
>>
> That may well be. But it sounds strange. I'd be inclined to think that
> there is something else going on... Or maybe I'm just not understanding
> what you mean by "pinning whole NUMA nodes per DomU" (and that's why
> I'm asking for the commands' output :-)).
> 
I simply mean that you always pin ALL DomU vCPUs to a whole NUMA node
(or more) and not single vCPUs.
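
In xl terms that is roughly the following (just a sketch; the domain
name "domU" and NUMA node 1 are made-up examples):

  hard-pin all vCPUs of the DomU to NUMA node 1:
  # xl vcpu-pin domU all node:1

  soft-pin only (leave the hard affinity alone):
  # xl vcpu-pin domU all - node:1

or, equivalently, in the domain's config file:

  cpus      = "node:1"    # hard affinity
  cpus_soft = "node:1"    # soft affinity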

One detail worth mentioning is that all DomU filesystems live on NFS
storage mounted in the Dom0.
Another interesting fact is that (as said above) we're running fio
write tests. These go to the NFS filesystems, and the write speed is
about 1000 MB/s (8000 Mbit/s) in the hard-pinning scenario but only
100 MB/s in the soft-pinning scenario.
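
Just to give an idea of the kind of test: an fio write job like the
ones mentioned above could look roughly like this (the mount point,
block size and file size are assumptions, not our exact job definition):

  # fio --name=nfswrite --directory=/mnt/nfs --rw=write --bs=1M \
        --size=4G --numjobs=1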

I'll send you some outputs.

>> It may be a corner case though, and nobody might ever have tested it
>> in this "dimension" ;)
>>
> Actually, we tested it even for higher "dimensions"! But true, corner
> cases will always exist. :-)
> 
> Regards,
> Dario
> 

-- 
Eike Waldt
Linux Consultant
Tel.: +49-175-7241189
Mail: waldt@xxxxxxxxxxxxx

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537



 

