
[xen master] xen: introduce an arch helper for default dma zone status



commit 15e64b8a099eb9d37485fdc2046ac769cc6a1628
Author:     Wei Chen <wei.chen@xxxxxxx>
AuthorDate: Fri Jun 10 13:53:11 2022 +0800
Commit:     Julien Grall <jgrall@xxxxxxxxxx>
CommitDate: Fri Jun 17 09:36:12 2022 +0100

    xen: introduce an arch helper for default dma zone status
    
    Currently, when Xen runs on a multi-node NUMA system,
    it sets dma_bitsize in end_boot_allocator to reserve
    some low-address memory as a DMA zone.
    
    This behaviour carries some x86-specific implications.
    On x86, memory starts at address 0, so on a multi-node
    NUMA system a single node may contain most or all of the
    DMA-capable memory. In that case x86 prefers to serve
    allocations from non-local nodes rather than exhaust the
    DMA ranges, and therefore uses dma_bitsize to set aside
    a largely arbitrary amount of memory as a DMA zone.
    Allocations fall back to the DMA zone only after the
    memory of all other nodes has been exhausted.
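The zone boundary implied by dma_bitsize can be pictured with a small standalone sketch (hypothetical, not Xen code; the 4KiB page size and the name page_in_dma_zone are illustrative assumptions):

```c
#include <stdbool.h>

#define PAGE_SHIFT 12 /* assume 4KiB pages */

/* Pages whose address lies below (1 << dma_bitsize) belong to the
 * DMA zone; everything above it is ordinary heap memory that the
 * allocator prefers to hand out first. */
static bool page_in_dma_zone(unsigned long pfn, unsigned int dma_bitsize)
{
    return pfn < (1UL << (dma_bitsize - PAGE_SHIFT));
}
```

For example, with dma_bitsize = 32 every page frame below pfn 1<<20 (i.e. below the 4GB address boundary) falls in the DMA zone.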
    
    These implications are not shared by all architectures,
    however. Arm likewise cannot guarantee the availability
    of memory below a certain boundary for devices with
    limited DMA capability, but it currently has no need for
    a reserved DMA zone in Xen: Xen itself has no DMA
    devices, and among guests only Dom0 is allowed DMA
    operations without an IOMMU. For Dom0, Xen tries at boot
    time to allocate memory below 4GB, or within the range
    limited by dma_bitsize. For a DomU, even though Xen can
    pass through devices without an IOMMU, Xen on Arm does
    not guarantee their DMA operations. Hence Xen on Arm
    does not need a reserved DMA zone to provide DMA memory
    for guests.
    
    This patch introduces an arch_want_default_dmazone
    helper that lets each architecture decide whether
    dma_bitsize should be set for DMA zone reservation.
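Under this scheme the helper reduces to a per-architecture predicate, as a minimal standalone sketch shows (num_online_nodes() is stubbed here, and the _x86/_arm suffixes are illustrative only; the real code defines one unsuffixed macro per architecture header):

```c
#include <stdbool.h>

/* Stub standing in for Xen's real num_online_nodes(). */
static unsigned int nr_online_nodes = 1;
static unsigned int num_online_nodes(void) { return nr_online_nodes; }

/* x86: want a default DMA zone only on multi-node systems. */
#define arch_want_default_dmazone_x86() (num_online_nodes() > 1)

/* Arm: never reserve a default DMA zone. */
#define arch_want_default_dmazone_arm() (false)
```

end_boot_allocator then consults the predicate instead of comparing num_online_nodes() against 1 directly, keeping the x86 behaviour unchanged while letting Arm opt out.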
    
    In addition, an x86 Xen built with CONFIG_PV=n could
    probably leverage this new helper to avoid triggering
    the DMA zone reservation altogether.
    
    Signed-off-by: Wei Chen <wei.chen@xxxxxxx>
    Tested-by: Jiamei Xie <jiamei.xie@xxxxxxx>
    Acked-by: Jan Beulich <jbeulich@xxxxxxxx>
---
 xen/arch/arm/include/asm/numa.h | 1 +
 xen/arch/x86/include/asm/numa.h | 1 +
 xen/common/page_alloc.c         | 2 +-
 3 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 31a6de4e23..e4c4d89192 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -24,6 +24,7 @@ extern mfn_t first_valid_mfn;
 #define node_spanned_pages(nid) (max_page - mfn_x(first_valid_mfn))
 #define node_start_pfn(nid) (mfn_x(first_valid_mfn))
 #define __node_distance(a, b) (20)
+#define arch_want_default_dmazone() (false)
 
 #endif /* __ARCH_ARM_NUMA_H */
 /*
diff --git a/xen/arch/x86/include/asm/numa.h b/xen/arch/x86/include/asm/numa.h
index bada2c0bb9..5d8385f2e1 100644
--- a/xen/arch/x86/include/asm/numa.h
+++ b/xen/arch/x86/include/asm/numa.h
@@ -74,6 +74,7 @@ static inline __attribute__((pure)) nodeid_t phys_to_nid(paddr_t addr)
 #define node_spanned_pages(nid)        (NODE_DATA(nid)->node_spanned_pages)
 #define node_end_pfn(nid)       (NODE_DATA(nid)->node_start_pfn + \
                                 NODE_DATA(nid)->node_spanned_pages)
+#define arch_want_default_dmazone() (num_online_nodes() > 1)
 
 extern int valid_numa_range(u64 start, u64 end, nodeid_t node);
 
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index ea59cd1a4a..000ae6b972 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1889,7 +1889,7 @@ void __init end_boot_allocator(void)
     }
     nr_bootmem_regions = 0;
 
-    if ( !dma_bitsize && (num_online_nodes() > 1) )
+    if ( !dma_bitsize && arch_want_default_dmazone() )
         dma_bitsize = arch_get_dma_bitsize();
 
     printk("Domain heap initialised");
--
generated by git-patchbot for /home/xen/git/xen.git#master
