Re: [PATCH v5 08/15] IOMMU/x86: prefill newly allocate page tables
On Fri, May 27, 2022 at 01:17:35PM +0200, Jan Beulich wrote:
> Page tables are used for two purposes after allocation: They either
> start out all empty, or they are filled to replace a superpage.
> Subsequently, to replace all empty or fully contiguous page tables,
> contiguous sub-regions will be recorded within individual page tables.
> Install the initial set of markers immediately after allocation. Make
> sure to retain these markers when further populating a page table in
> preparation for it to replace a superpage.
>
> The markers are simply 4-bit fields holding the order value of
> contiguous entries. To demonstrate this, if a page table had just 16
> entries, this would be the initial (fully contiguous) set of markers:
>
> index  0 1 2 3 4 5 6 7 8 9 A B C D E F
> marker 4 0 1 0 2 0 1 0 3 0 1 0 2 0 1 0
>
> "Contiguous" here means not only present entries with successively
> increasing MFNs, each one suitably aligned for its slot, and identical
> attributes, but also a respective number of all non-present (zero
> except for the markers) entries.
>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> Reviewed-by: Kevin Tian <kevin.tian@xxxxxxxxx>

Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>

> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -26,6 +26,7 @@
>  #include <asm/hvm/io.h>
>  #include <asm/io_apic.h>
>  #include <asm/mem_paging.h>
> +#include <asm/pt-contig-markers.h>
>  #include <asm/setup.h>
>  
>  const struct iommu_init_ops *__initdata iommu_init_ops;
> @@ -538,11 +539,12 @@ int iommu_free_pgtables(struct domain *d
>      return 0;
>  }
>  
> -struct page_info *iommu_alloc_pgtable(struct domain_iommu *hd)
> +struct page_info *iommu_alloc_pgtable(struct domain_iommu *hd,
> +                                      uint64_t contig_mask)
>  {
>      unsigned int memflags = 0;
>      struct page_info *pg;
> -    void *p;
> +    uint64_t *p;
>  
>  #ifdef CONFIG_NUMA
>      if ( hd->node != NUMA_NO_NODE )
> @@ -554,7 +556,29 @@ struct page_info *iommu_alloc_pgtable(st
>          return NULL;
>  
>      p = __map_domain_page(pg);
> -    clear_page(p);
> +
> +    if ( contig_mask )
> +    {
> +        /* See pt-contig-markers.h for a description of the marker scheme. */
> +        unsigned int i, shift = find_first_set_bit(contig_mask);
> +
> +        ASSERT((CONTIG_LEVEL_SHIFT & (contig_mask >> shift)) ==
> +               CONTIG_LEVEL_SHIFT);
> +
> +        p[0] = (CONTIG_LEVEL_SHIFT + 0ull) << shift;
> +        p[1] = 0;
> +        p[2] = 1ull << shift;
> +        p[3] = 0;
> +
> +        for ( i = 4; i < PAGE_SIZE / 8; i += 4 )

FWIW, you could also use sizeof(*p) instead of hardcoding 8.

Thanks, Roger.
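To make the marker scheme easier to follow, here is a minimal, self-contained sketch. It is not part of the patch: the marker() helper, the ENTRIES/ORDER constants and the use of GCC's __builtin_ctz in place of Xen's find_first_set_bit are illustrative assumptions. It merely reproduces the initial marker layout for the 16-entry example table from the quoted commit message:

#include <stdio.h>

#define ENTRIES 16   /* table size used in the commit message example */
#define ORDER    4   /* log2(ENTRIES): order of the whole table */

/*
 * Marker for a given slot: slot 0 describes the whole (fully contiguous)
 * table, every other slot describes the aligned block it heads, whose
 * order equals the number of trailing zero bits of the index.
 */
static unsigned int marker(unsigned int idx)
{
    return idx ? (unsigned int)__builtin_ctz(idx) : ORDER;
}

int main(void)
{
    unsigned int i;

    printf("index ");
    for ( i = 0; i < ENTRIES; i++ )
        printf(" %X", i);
    printf("\nmarker");
    for ( i = 0; i < ENTRIES; i++ )
        printf(" %u", marker(i));
    printf("\n");

    return 0;
}

Built with a recent GCC or Clang this prints the same "4 0 1 0 2 0 1 0 3 0 1 0 2 0 1 0" pattern shown in the quoted table, which may help when reading the initialisation done in the hunk above.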