Re: [PATCH v8 03/15] x86/mm: rewrite virt_to_xen_l*e

On Mon, 2020-07-27 at 15:21 +0100, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@xxxxxxxxxx>
>
> Rewrite those functions to use the new APIs. Modify their callers to
> unmap the pointer returned. Since alloc_xen_pagetable_new() is almost
> never useful unless accompanied by page clearing and a mapping,
> introduce a helper alloc_map_clear_xen_pt() for this sequence.
>
> Note that the change of virt_to_xen_l1e() also requires vmap_to_mfn()
> to unmap the page, which requires the domain_page.h header in vmap.
>
> Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
> Signed-off-by: Hongyan Xia <hongyxia@xxxxxxxxxx>
> Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>

I believe the vmap part can be removed, since x86 now handles
superpages.

> @@ -5085,8 +5117,8 @@ int map_pages_to_xen(
>                  unsigned int flags)
>  {
>      bool locking = system_state > SYS_STATE_boot;
> -    l3_pgentry_t *pl3e, ol3e;
> -    l2_pgentry_t *pl2e, ol2e;
> +    l3_pgentry_t *pl3e = NULL, ol3e;
> +    l2_pgentry_t *pl2e = NULL, ol2e;
>      l1_pgentry_t *pl1e, ol1e;
>      unsigned int i;
>      int rc = -ENOMEM;
> @@ -5107,6 +5139,10 @@ int map_pages_to_xen(
>
>      while ( nr_mfns != 0 )
>      {
> +        /* Clean up mappings mapped in the previous iteration. */
> +        UNMAP_DOMAIN_PAGE(pl3e);
> +        UNMAP_DOMAIN_PAGE(pl2e);
> +
>          pl3e = virt_to_xen_l3e(virt);
>
>          if ( !pl3e )

While rebasing, I found another issue. XSA-345 now locks the L3 table
with L3T_LOCK(virt_to_page(pl3e)), but with this series we cannot call
virt_to_page() here. We could call domain_page_map_to_mfn() on pl3e to
get the MFN back, which should be fine: this function is rarely used
outside boot, so the overhead should be low. Alternatively, we could
pass an mfn pointer in as an additional argument, but do we then also
want to change virt_to_xen_l[21]e() for consistency (even though they
don't need the MFN)?

I might also need to drop the R-b because of this non-trivial change.

Thoughts?

Hongyan
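
For illustration, a minimal sketch of the domain_page_map_to_mfn()
option ("l3mfn" is a made-up local name, and this assumes the
post-XSA-345 L3T_LOCK() takes a struct page_info *, as the
virt_to_page() usage above implies):

    /*
     * Sketch only: inside map_pages_to_xen(), after virt_to_xen_l3e()
     * has mapped pl3e. Recover the MFN backing the mapping instead of
     * calling virt_to_page() on a domain-page VA.
     */
    mfn_t l3mfn = domain_page_map_to_mfn(pl3e);

    L3T_LOCK(mfn_to_page(l3mfn));

The extra-argument alternative would amount to a signature change
along these lines (again just a sketch, not what the patch currently
does):

    /* Hypothetical: have the helper hand the L3 table's MFN back. */
    static l3_pgentry_t *virt_to_xen_l3e(unsigned long v, mfn_t *mfn);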