Re: [Xen-devel] [PATCH] fix zone-over-node preference when allocating memory
You have no guarantee that the DMA pool memory belongs to the allocating node either (although it happens to be the case in the scenario you are trying to fix). Instead, I suggest that the default dma_bitsize should depend on the NUMA characteristics of the system. For example, we could specify that dma_bitsize should not cover more than 25% of the memory of any one NUMA node. In your example this would give you dma_bitsize=30.

I was going to suggest we get rid of dma_bitsize now that we have the per-bitwidth zones, but it probably is still needed, specifically for NUMA systems. If we have one NUMA node with all of its memory below 4GB, we would probably like allocations to fall back to other nodes before that node exhausts all of the available below-4GB memory.

 -- Keir

On 21/12/07 22:27, "Andre Przywara" <andre.przywara@xxxxxxx> wrote:

> When Xen allocates a guest's memory, it first tries to use non-DMA-able
> zones (probably because they are less precious). If no such pages are
> available on a given node, Xen falls back to allocating low pages from
> another node, thus ignoring the node preference. This patch fixes that
> by first checking whether non-DMA pages are available on the node, and
> falling back to DMA-able pages on the same node if not. This fixes
> incorrect NUMA memory allocation on nodes whose memory lies entirely
> below the DMA boundary (4GB on x86-64; this affects, for instance,
> dual-node machines with 4GB on each node).
>
> Andre.
>
> P.S. This fix was already part of my NUMA guest patches back in August;
> this is just an extract of those.
>
> Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel