Re: [Xen-devel] [PATCH v2 2/2] x86/dom0: improve paging memory usage calculations
>>> On 05.12.18 at 15:55, <roger.pau@xxxxxxxxxx> wrote:
> +unsigned long __init dom0_hap_pages(const struct domain *d,
> +                                    unsigned long nr_pages)
> +{
> +    /*
> +     * Attempt to account for at least some of the MMIO regions by adding
> +     * the size of the holes in the memory map to the amount of pages to
> +     * map. Note this will obviously not account for MMIO regions that
> +     * are past the last RAM range in the memory map.
> +     */
> +    nr_pages += max_page - total_pages;
> +    /*
> +     * Approximate the memory required for the HAP/IOMMU page tables by
> +     * pessimistically assuming each page will consume an 8 byte page
> +     * table entry.
> +     */
> +    return DIV_ROUND_UP(nr_pages * 8, PAGE_SIZE << PAGE_ORDER_4K);

With enough memory handed to Dom0 the memory needed for L2 and higher
page tables will matter as well. I'm anyway having difficulty seeing why
HAP and shadow would have to use different calculations, the more that
shadow relies on the same P2M code that HAP uses in the AMD/SVM case.
Plus, as iirc was said by someone else already, I don't think we can
(continue to) neglect the MMIO space needs for MMCFG and PCI devices,
especially with devices having multi-GB BARs.

> +}
> +
> +

No double blank lines please.

> @@ -324,8 +342,13 @@ unsigned long __init dom0_compute_nr_pages(
>          if ( !need_paging )
>              break;
>
> -        /* Reserve memory for shadow or HAP. */
> -        avail -= dom0_shadow_pages(d, nr_pages);
> +        /* Reserve memory for CPU and IOMMU page tables. */
> +        if ( paging_mode_hap(d) )
> +            avail -= dom0_hap_pages(d, nr_pages) *
> +                     (iommu_hap_pt_share ? 1 : 2);

Use "<< !iommu_hap_pt_share" instead?

> +        else
> +            avail -= dom0_shadow_pages(d, nr_pages) +
> +                     dom0_hap_pages(d, nr_pages);
>      }

Doesn't dom0_shadow_pages() (mean to) already include the amount
needed for the P2M?

Jan
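
To illustrate the point about L2 and higher page tables: with 512
8-byte entries per 4K page-table page, each level needs roughly 1/512
of the pages of the level below it. A minimal sketch of such a
levelled calculation (hypothetical helper, assuming 4-level paging;
not what the patch implements):

    static unsigned long __init paging_pages_all_levels(unsigned long nr_pages)
    {
        unsigned long total = 0, entries = nr_pages;
        unsigned int level;

        /* L1..L4: each page-table page holds 512 8-byte entries. */
        for ( level = 0; level < 4; level++ )
        {
            entries = DIV_ROUND_UP(entries, 512);
            total += entries;
        }

        return total;
    }

For 1M guest pages (4GB) this gives 2048 L1 + 4 L2 + 1 L3 + 1 L4 =
2054 page-table pages; the higher levels grow in proportion with
Dom0's memory, which the leaf-only DIV_ROUND_UP() above ignores.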
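
On the "<< !iommu_hap_pt_share" suggestion: !iommu_hap_pt_share
evaluates to 0 when the CPU and IOMMU page tables are shared (shift by
0, factor 1) and to 1 when they are not (shift by 1, factor 2). A
sketch of the equivalent form, reusing the identifiers from the hunk
above:

    /* (iommu_hap_pt_share ? 1 : 2) == (1 << !iommu_hap_pt_share) */
    avail -= dom0_hap_pages(d, nr_pages) << !iommu_hap_pt_share;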