
Re: [Xen-users] best practice NUMA config for dom0 ?

On 18 May 2016 at 18:23, Dario Faggioli wrote:
> On Wed, 2016-05-18 at 16:29 +0100, Wei Liu wrote:
>> CC Dario who implemented numa placement.
> Thanks Wei.
>> On Tue, May 10, 2016 at 05:40:44PM +0200, Håkon Alstadheim wrote:
>>> It has been my understanding, without any documentation to back it
>>> up,
> that on a Xen server the hypervisor does all the NUMA handling,
>>> and
>>> that linux in dom0 or domU should keep its hands off. 
> About the lack of documentation, there is something but I agree it is
> not sufficient. I'll find some time to improve things.
> About dom0 and domU keeping their hands off from NUMA handling, well, that's 
> more than just true, as neither of them _can_, right now, do much about the 
> NUMA-ness of the server.
> The only thing that dom0 is in control of, wrt NUMA placement of guests, is 
> whether or not to specify one in the guests' config files (e.g., by specifying a 
> hard or soft vcpu affinity). But that's it.
> If no such hint is provided, xl and libxl will figure out an automatic 
> placement for the new domain on the host NUMA nodes.
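As an illustration of the kind of hint Dario describes, a guest's xl config file can set vcpu affinity explicitly. A sketch (the cpu range here is made up, chosen to match one node of a two-node host like mine):

```
# Hypothetical fragment of an xl guest config file.
# Hard affinity: these vcpus may only run on pcpus 0-5.
cpus = "0-5"
# Soft affinity: the scheduler prefers pcpus 0-5 but may run the
# vcpus elsewhere when those are busy.
cpus_soft = "0-5"
```

With neither option set, xl/libxl falls back to the automatic placement described above.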
I have not been using cpu pinning on any domU; only dom0 is
pinned, with 4 out of the 24 virtual cpus assigned to dom0.
The host has 2 dies with 6 physical cores each, 24 hyper-threads altogether.
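For reference, a dom0 setup like this is typically arranged on the Xen command line (e.g. in the grub entry); a sketch matching the numbers above:

```
# Xen hypervisor command line options (values match the host described):
# dom0_max_vcpus limits dom0 to 4 vcpus; dom0_vcpus_pin pins each
# dom0 vcpu to its own physical cpu.
dom0_max_vcpus=4 dom0_vcpus_pin
```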
>>> Now I notice that
>>> xl is complaining:
>>> libxl: notice: libxl_numa.c:499:libxl__get_numa_candidate: NUMA
>>> placement failed, performance might be affected
> Mmm.. well, the first thing to figure out is why placement is failing.
> If you use `xl -vvv create ...` it should tell you more.
> Also, it would be helpful if also you tell us the characteristics of
> both the host and the guests, such as:
>  - number of pCPUs
>  - number of NUMA nodes
>  - amount of RAM per node
>  - amount of free RAM in each node, as reported by `xl info -n'
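The host-side numbers in that list can all be gathered with xl itself; something along these lines (the config file path is hypothetical, and output format varies by Xen version):

```
# Verbose domain creation, to see why NUMA placement failed:
xl -vvv create /etc/xen/mydomain.cfg

# Host topology and per-node free memory:
xl info -n

# Vcpu-to-pcpu affinity of all running domains:
xl vcpu-list
```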
The amount of RAM might have played a part. If I remember correctly, I had one
rather large VM running in addition to what I usually run when I noticed
the error. It happened intermittently seven times over two days about
a week ago, with two different VMs. I will investigate further.
>  - number of vCPUs of the guest
>  - amount of RAM of the guest
I seem to be unable to reproduce the errors at present. Since I reported
the errors I have enabled some basic NUMA awareness in dom0, but that
should have no bearing on Xen's behaviour, and no benefit for dom0,
if I understand you correctly.

I will provide better info if I am able to reproduce at a later date.
For now I'm satisfied that this was not a misunderstanding on my part
regarding NUMA config.

>>> So, given that dom0 is running a fairly recent kernel, and Xen is
>>> the
>>> latest stable (4.6.1), how should I configure the linux kernel for
>>> best
>>> numa handling ? I have two cpu-dies with ram that will benefit from
>>> NUMA
>>> aware allocation.
> No special configuration is necessary. As a matter of fact, for now,
> since we don't fully support virtual NUMA topology for dom0 and domUs,
> you may well disable NUMA from both the dom0 and domU kernels.
> But even if you don't (i.e., you leave it enabled/compiled in), that
> should not hurt, and I don't think is in any way what is responsible
> for the placement issue you're facing.
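If one does want to follow that advice and disable NUMA in a dom0 or domU kernel, the standard Linux mechanisms apply (nothing Xen-specific is assumed here):

```
# At build time: leave CONFIG_NUMA unset in the kernel config; or
# at boot time: pass the standard parameter on the Linux kernel
# command line to disable NUMA handling:
numa=off
```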
Thank you; my concern that I might have misconfigured something has
been put to rest. I will gather a better report if the error message
comes back.

Xen-users mailing list


