
Re: [Xen-users] query memory allocation per NUMA node



On Thu, 2017-01-19 at 17:48 +0100, Eike Waldt wrote:
> On 01/18/2017 07:25 PM, Dario Faggioli wrote:
> > To achieve this, I think you should get rid of dom0_vcpus_pin, keep
> > dom0_max_vcpus=16 and add dom0_nodes=0,relaxed (or something like
> > that). This will probably set the vcpu-affinity of dom0 to 'all/0-
> > 35',
> > which you can change to 'all/0-15' after boot.
> I got rid of "dom0_vcpus_pin" and did some tests...
> all/0-15, 0-15/all or all/all for Dom0 does not make a difference
> in the soft-pinning case, according to my tests.
> I suppose that is because the CPUs 0-15 are assigned anyhow.
> 
Well, yes, it looks like having dom0 isolated helps a lot in this case
of yours.

Considering that, I certainly wouldn't have expected this setup to work
as well as the hard-pinned (with isolated dom0) one. It's a bit strange
that you don't see much difference, but, hey...
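
Just as a sketch of what I mean (the GRUB variable name below is only
an example and differs between distros, and "Domain-0" is just dom0's
default name):

  # Xen boot parameters, e.g. in GRUB_CMDLINE_XEN in /etc/default/grub:
  # drop dom0_vcpus_pin, keep the vCPU count, restrict dom0 to node 0
  dom0_max_vcpus=16 dom0_nodes=0,relaxed

  # After boot, narrow dom0's soft affinity to pCPUs 0-15, leaving the
  # hard affinity at 'all' (the last argument is the soft affinity):
  xl vcpu-pin Domain-0 all all 0-15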

> The "dom0_nodes=0,relaxed"...
> Checked it out and it does exactly what you (and the manpage) said:
> relaxed --> all / 0-35
> strict  --> 0-35 / 0-35
> 
> It is interesting that "xl debug-keys u; xl dmesg" still shows memory
> pages on NUMA Node3, even though the manpage says "dom0_nodes
> [..]
> Defaults for vCPU-s created and memory assigned to Dom0 [..]."
> There should be enough free pages on Node0 (there is no other DomU
> running directly after startup).
> 
Yeah. If it's just a few pages (few in a relative sense, i.e., as
compared to the total number of pages dom0 has), it's a known issue,
which is proving a bit difficult to track down, as it only manifests
on some systems.
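
If you want to keep an eye on how many of dom0's pages actually end up
on each node, the debug key dump you already used is the way to do it;
the grep below is just a convenience, and the exact heading may differ
a bit between Xen versions:

  # Dump per-domain NUMA memory info to the hypervisor console,
  # then read it back from the console ring buffer:
  xl debug-keys u
  xl dmesg | grep -i -A 8 'memory location of each domain'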

> > 2) properly isolate dom0, even in the soft-affinity case. That
> > would
> > mean keeping dom0 affinity as you already have it, but change
> > **all**
> > the other domains' affinity from 'all/xx-yy' (where xx and yy vary
> > from
> > domain to domain) to '16-143/xx-yy'.
> That was a very good hint!
> I did not realize that before, thank you so much!
> The "issues" with stealing and bad NFS performance are gone now.
> 
Ah, great to hear! :-)
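
In xl config file terms, that would be something along these lines for
each guest (xx-yy is a placeholder for the soft affinity that
particular domain already uses, i.e. the pCPUs of "its" node; the
runtime variant works on already running guests):

  # In each DomU's config file:
  cpus      = "16-143"   # hard affinity: stay off dom0's pCPUs 0-15
  cpus_soft = "xx-yy"    # soft affinity: prefer the domain's own node

  # Or at runtime, without editing configs or rebooting the guests:
  xl vcpu-pin <domU> all 16-143 xx-yy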

> > Let me say again that I'm not at all saying that I'm sure that
> > either 1
> > or 2 will certainly perform better than the hard pinning case. This
> > is
> > impossible to tell without trying.
> > 
> > But, like this, it's a more fair --and hence more interesting--
> > comparison, and IMO it's worth a try.
> > 
> When I isolate the Dom0 properly in the soft-pinning scenario, I
> could not see any performance difference compared to hard-pinning
> everything.
> But this is very hard to measure, I think.
> 
Yep, and this now makes a lot more sense. :-)
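
And, just to be sure both scenarios really are configured the way we
think before drawing conclusions from the numbers, it's worth
eyeballing the affinities (and the per-node memory, as above) in both
cases:

  # Shows, for every domain, each vCPU's hard and soft affinity:
  xl vcpu-list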

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
