Re: [Xen-devel] [PATCH v2 1/2] x86/dom0: rename paging function
>>> On 12.12.18 at 16:56, <roger.pau@xxxxxxxxxx> wrote:
> On Wed, Dec 12, 2018 at 03:32:53AM -0700, Jan Beulich wrote:
>> >>> On 12.12.18 at 11:04, <roger.pau@xxxxxxxxxx> wrote:
>> > You mentioned there's some code (for PV?) to calculate the size of the
>> > page tables but I'm having trouble finding it (mainly because I'm not
>> > that familiar with PV), could you point me to it?
>>
>> In dom0_construct_pv() you'll find a loop starting with
>> "for ( nr_pt_pages = 2; ; nr_pt_pages++ )". It's not the neatest,
>> but at least we've never had reports of failure.
>
> That seems quite complicated, what about using the formula below:
>
> /*
>  * Approximate the memory required for the HAP/IOMMU page tables by
>  * pessimistically assuming every guest page will use a p2m page table
>  * entry.
>  */
> return DIV_ROUND_UP((
>     /* Account for one entry in the L1 per page. */
>     nr_pages +
>     /* Account for one entry in the L2 per 512 pages. */
>     DIV_ROUND_UP(nr_pages, 512) +
>     /* Account for one entry in the L3 per 512^2 pages. */
>     DIV_ROUND_UP(nr_pages, 512 * 512) +
>     /* Account for one entry in the L4 per 512^3 pages. */
>     DIV_ROUND_UP(nr_pages, 512 * 512 * 512)
>     ) * 8, PAGE_SIZE << PAGE_ORDER_4K);
>
> That takes into account higher level page table structures.

That's a fair approximation without 2M and 1G pages available. I'm
unconvinced we want to over-estimate this heavily in the more common
case of large page mappings being available. Otoh this provides enough
resources to later also deal with shattering of large pages. The MMIO
side of things of course still remains unclear.

What I don't understand in any case though is "PAGE_SIZE <<
PAGE_ORDER_4K". This is x86 code - why not just PAGE_SIZE?

Jan
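
For illustration only (this sketch is not part of the thread): Jan's point
about over-estimation can be made concrete by redoing the same entry-counting
approximation under the assumption that the whole p2m is mapped with 2M
superpages, so no L1 tables are needed at all. The function name
hap_table_pages_2m() and the local PAGE_SIZE / DIV_ROUND_UP definitions are
stand-ins for the corresponding Xen macros, chosen only so the snippet
compiles on its own.

#include <stdio.h>

#define PAGE_SIZE 4096UL
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/*
 * Same entry-counting approximation as the quoted formula, but assuming
 * every p2m mapping can use a 2M superpage: no L1 tables, one L2 entry
 * per 512 guest pages, one L3 entry per 512^2 pages and one L4 entry per
 * 512^3 pages.  Shattering a 2M mapping later would need the pessimistic
 * per-4K-entry amount instead.
 */
static unsigned long hap_table_pages_2m(unsigned long nr_pages)
{
    unsigned long entries = DIV_ROUND_UP(nr_pages, 512UL) +            /* L2 */
                            DIV_ROUND_UP(nr_pages, 512UL * 512) +      /* L3 */
                            DIV_ROUND_UP(nr_pages, 512UL * 512 * 512); /* L4 */

    /* 8 bytes per entry, rounded up to whole 4K page-table pages. */
    return DIV_ROUND_UP(entries * 8, PAGE_SIZE);
}

int main(void)
{
    unsigned long nr_pages = 1048576UL;  /* a 4GiB dom0 in 4K pages */

    printf("2M-superpage estimate: %lu page(s) of page tables\n",
           hap_table_pages_2m(nr_pages));
    return 0;
}

For a 4GiB dom0 (nr_pages = 1048576), the pessimistic per-4K-entry formula
quoted above gives roughly (1048576 + 2048 + 4 + 1) * 8 bytes, about 2053
pages (~8MiB), whereas the 2M-superpage variant needs (2048 + 4 + 1) * 8
bytes, about 5 pages. That gap is the over-estimate Jan refers to; the price
of the smaller figure is that it leaves no headroom for later shattering of
the large mappings.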