Re: [PATCH 04/16] x86/srat: vmap the pages for acpi_slit
On 30.11.2020 19:11, Hongyan Xia wrote:
> On Mon, 2020-11-30 at 11:16 +0100, Jan Beulich wrote:
>> On 30.04.2020 22:44, Hongyan Xia wrote:
>>> --- a/xen/arch/x86/srat.c
>>> +++ b/xen/arch/x86/srat.c
>>> @@ -196,7 +196,8 @@ void __init acpi_numa_slit_init(struct acpi_table_slit *slit)
>>>  		return;
>>>  	}
>>>  	mfn = alloc_boot_pages(PFN_UP(slit->header.length), 1);
>>> -	acpi_slit = mfn_to_virt(mfn_x(mfn));
>>> +	acpi_slit = vmap_boot_pages(mfn, PFN_UP(slit->header.length));
>>> +	BUG_ON(!acpi_slit);
>>>  	memcpy(acpi_slit, slit, slit->header.length);
>>>  }
>>
>> I'm not sure to what extent this series is still to be considered
>> active / pending; I still have it in my inbox as something to
>> look at in any event. If it is, then I think by this patch at the
>> latest it becomes clear that we either want to make vmalloc()
>> boot-allocator capable, or introduce e.g. vmalloc_boot().
>> Having this recurring pattern including the somewhat odd
>> vmap_boot_pages() is imo not the best way forward. It would
>> then also no longer be necessary to allocate contiguous pages,
>> as none of the users up to here appear to have such a need.
>
> This series is blocked on the PTE domheap conversion series, so I will
> definitely come back here after that series is merged.
>
> vmap_boot_pages() (poorly named, there is nothing "boot" about it) is
> actually useful in other patches as well, especially when there is no
> direct map but we need to map a contiguous range, since
> map_domain_page() can only handle a single page. So I would say there
> will be a need for this function (maybe call it vmap_contig_pages()?)
> even if for this patch a boot-capable vmalloc can do the job.

The question is in how many cases contiguous allocations are actually
needed. I suspect there aren't many, and hence vmalloc() (or a boot
time clone of it, if need be) may again be the better choice.

Jan
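For context, the helper being debated is essentially a thin wrapper
around Xen's existing __vmap(). A minimal sketch, assuming the
vmap_contig_pages() name floated above (hypothetical, not in-tree)
and the current __vmap() interface in xen/common/vmap.c:

    #include <xen/vmap.h>

    /*
     * Sketch only: map nr_pages physically contiguous frames starting
     * at mfn into vmap space. Passing a single MFN entry with a
     * granularity of nr_pages tells __vmap() the frames are
     * contiguous, so no MFN array needs to be built.
     */
    void *vmap_contig_pages(mfn_t mfn, unsigned int nr_pages)
    {
        return __vmap(&mfn, nr_pages, 1, 1, PAGE_HYPERVISOR,
                      VMAP_DEFAULT);
    }

With such a helper, the call site quoted in the patch above would
become acpi_slit = vmap_contig_pages(mfn, PFN_UP(slit->header.length)),
i.e. unchanged modulo the rename. A boot-capable vmalloc(), as Jan
suggests, would instead fold the alloc_boot_pages() call into the
allocator and drop the contiguity requirement altogether.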