
Re: [Xen-devel] Problems with merlot* AMD Opteron 6376 systems (Was Re: stable trees (was: [xen-4.2-testing test] 58584: regressions))



>>> On 24.06.15 at 15:15, <dario.faggioli@xxxxxxxxxx> wrote:
> [Moving most people to Bcc, as this is indeed unrelated to the original
> topic]
> 
> On Wed, 2015-06-24 at 13:41 +0100, Jan Beulich wrote:
>> >>> On 24.06.15 at 14:29, <dario.faggioli@xxxxxxxxxx> wrote:
>> > On Wed, 2015-06-24 at 10:38 +0100, Ian Campbell wrote:
>> >> The memory info
>> >> Jun 23 15:56:27.749008 (XEN) Memory location of each domain:
>> >> Jun 23 15:56:27.756965 (XEN) Domain 0 (total: 131072):
>> >> Jun 23 15:56:27.756983 (XEN)     Node 0: 126905
>> >> Jun 23 15:56:27.756998 (XEN)     Node 1: 0
>> >> Jun 23 15:56:27.764952 (XEN)     Node 2: 4167
>> >> Jun 23 15:56:27.764969 (XEN)     Node 3: 0
>> >> suggests at least a small amount of cross-node memory allocation (16M
>> >> out of dom0's 512M total). That's probably small enough to be OK.
>> >> 
>> > Yeah, that is in line with what you usually get with dom0_nodes. Most of
>> > the memory, as you noted, comes from the proper node. We're just not
>> > (yet?) at the point where _all_ of it can come from there.
>> 
>> Actually as long as there is enough memory on the requested node
>> (minus any amount set aside for the DMA pool), this shouldn't
>> happen (and I had seen this to be clean in my own testing). 
>>
> ISTR some allocation not being 'converted'. Perhaps I'm misremembering.

Quite possible that I overlooked some.
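For reference, the usual pattern is that an allocation done with
MEMF_node() but without MEMF_exact_node will quietly fall back to
other nodes once the preferred one runs short (or when a path was
never converted to pass a node at all). A toy standalone model of
that fallback, just to illustrate the behaviour (not the actual
page_alloc.c logic; node counts made up):

    /* Toy model of preferred-node allocation with fallback.
     * Illustration only -- not Xen's page_alloc.c. */
    #include <stdio.h>

    #define NR_NODES 2

    static unsigned long free_pages[NR_NODES] = { 1000, 1000 };

    /* Try the preferred node first; fall back to any other node
     * unless 'exact' is set (cf. MEMF_exact_node). */
    static int alloc_on_node(unsigned int pref, unsigned long count,
                             int exact)
    {
        unsigned int node;

        if ( free_pages[pref] >= count )
        {
            free_pages[pref] -= count;
            return pref;
        }

        if ( exact )
            return -1;

        for ( node = 0; node < NR_NODES; node++ )
            if ( node != pref && free_pages[node] >= count )
            {
                free_pages[node] -= count;
                return node;
            }

        return -1;
    }

    int main(void)
    {
        /* Exhaust most of node 0, then ask for more than remains
         * there: the second request silently comes from node 1. */
        alloc_on_node(0, 900, 0);
        printf("fallback alloc came from node %d\n",
               alloc_on_node(0, 200, 0));
        return 0;
    }

Any caller still on the non-exact path could produce the kind of
small off-node remainder seen in the logs above.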

>> There
>> being 8GB per node, I see no immediate reason why memory from
>> node 2 would be handed out. Still I wouldn't suspect this to matter
>> here.
>> 
> On my 2 nodes test box with the following configuration:
> (XEN) SRAT: Node 1 PXM 1 0-dc000000
> (XEN) SRAT: Node 1 PXM 1 100000000-1a4000000
> (XEN) SRAT: Node 0 PXM 0 1a4000000-324000000
> 
> with 'dom0_nodes=0', I see this:
> (XEN) Memory location of each domain:
> (XEN) Domain 0 (total: 131072):
> (XEN)     Node 0: 114664
> (XEN)     Node 1: 16408
> 
> while with 'dom0_nodes=1', this:
> (XEN) Memory location of each domain:
> (XEN) Domain 0 (total: 131072):
> (XEN)     Node 0: 7749
> (XEN)     Node 1: 123323

In the latter case I'm not surprised, apart from the odd number: the
SWIOTLB would (except on very small systems, which normally
wouldn't be NUMA anyway) always live on node 0.
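
To sanity-check where such a low-memory buffer lands on a particular
box, the SRAT ranges quoted above can simply be replayed (toy code;
ranges hard-coded from the output earlier in this thread, and the
probe address is arbitrary):

    /* Toy lookup: which node owns a given physical address,
     * using the SRAT ranges quoted above. */
    #include <stdio.h>

    struct srat_entry {
        unsigned int node;
        unsigned long long start, end;
    };

    static const struct srat_entry srat[] = {
        { 1, 0x000000000ULL, 0x0dc000000ULL },
        { 1, 0x100000000ULL, 0x1a4000000ULL },
        { 0, 0x1a4000000ULL, 0x324000000ULL },
    };

    static int addr_to_node(unsigned long long addr)
    {
        unsigned int i;

        for ( i = 0; i < sizeof(srat) / sizeof(srat[0]); i++ )
            if ( addr >= srat[i].start && addr < srat[i].end )
                return srat[i].node;

        return -1;
    }

    int main(void)
    {
        /* A bounce buffer allocated below 4GB lands on whichever
         * node owns the low physical range -- node 1 on this box. */
        printf("addr 0x1000000 -> node %d\n",
               addr_to_node(0x1000000ULL));
        return 0;
    }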

But overall it looks like there's something here that needs fixing.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

