Re: [PATCH v4 5/9] x86/mem-sharing: copy GADDR based shared guest areas
On Fri, Sep 29, 2023 at 08:31:58AM -0400, Tamas K Lengyel wrote:
> On Thu, Sep 28, 2023 at 10:08 AM Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> >
> > On Thu, Sep 28, 2023 at 08:57:04AM -0400, Tamas K Lengyel wrote:
> > > On Thu, Sep 28, 2023 at 7:08 AM Roger Pau Monné <roger.pau@xxxxxxxxxx>
> > > wrote:
> > > >
> > > > On Thu, Sep 28, 2023 at 12:11:02PM +0200, Jan Beulich wrote:
> > > > > On 28.09.2023 11:51, Roger Pau Monné wrote:
> > > > > > On Thu, Sep 28, 2023 at 09:16:20AM +0200, Jan Beulich wrote:
> > > > > >> --- a/xen/arch/x86/mm/mem_sharing.c
> > > > > >> +++ b/xen/arch/x86/mm/mem_sharing.c
> > > > > >> @@ -1641,6 +1641,68 @@ static void copy_vcpu_nonreg_state(struc
> > > > > >>      hvm_set_nonreg_state(cd_vcpu, &nrs);
> > > > > >>  }
> > > > > >>
> > > > > >> +static int copy_guest_area(struct guest_area *cd_area,
> > > > > >> +                           const struct guest_area *d_area,
> > > > > >> +                           struct vcpu *cd_vcpu,
> > > > > >> +                           const struct domain *d)
> > > > > >> +{
> > > > > >> +    mfn_t d_mfn, cd_mfn;
> > > > > >> +
> > > > > >> +    if ( !d_area->pg )
> > > > > >> +        return 0;
> > > > > >> +
> > > > > >> +    d_mfn = page_to_mfn(d_area->pg);
> > > > > >> +
> > > > > >> +    /* Allocate & map a page for the area if it hasn't been already. */
> > > > > >> +    if ( !cd_area->pg )
> > > > > >> +    {
> > > > > >> +        gfn_t gfn = mfn_to_gfn(d, d_mfn);
> > > > > >> +        struct p2m_domain *p2m = p2m_get_hostp2m(cd_vcpu->domain);
> > > > > >> +        p2m_type_t p2mt;
> > > > > >> +        p2m_access_t p2ma;
> > > > > >> +        unsigned int offset;
> > > > > >> +        int ret;
> > > > > >> +
> > > > > >> +        cd_mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
> > > > > >> +        if ( mfn_eq(cd_mfn, INVALID_MFN) )
> > > > > >> +        {
> > > > > >> +            struct page_info *pg = alloc_domheap_page(cd_vcpu->domain, 0);
> > > > > >> +
> > > > > >> +            if ( !pg )
> > > > > >> +                return -ENOMEM;
> > > > > >> +
> > > > > >> +            cd_mfn = page_to_mfn(pg);
> > > > > >> +            set_gpfn_from_mfn(mfn_x(cd_mfn), gfn_x(gfn));
> > > > > >> +
> > > > > >> +            ret = p2m->set_entry(p2m, gfn, cd_mfn, PAGE_ORDER_4K, p2m_ram_rw,
> > > > > >> +                                 p2m->default_access, -1);
> > > > > >> +            if ( ret )
> > > > > >> +                return ret;
> > > > > >> +        }
> > > > > >
> > > > > > I'm still unsure why map_guest_area() shouldn't be able to deal with a
> > > > > > forked child needing the page to be mapped.  What happens when a
> > > > > > forked child executes the hypercall to map such areas against not yet
> > > > > > populated gfns?
> > > > > >
> > > > > > Shouldn't map_guest_area() be capable of handling those calls and
> > > > > > populating on demand?
> > > > >
> > > > > Funny you should use this wording: P2M_ALLOC will deal with populating
> > > > > PoD slots, yes, but aiui forks don't fill their p2m with all PoD slots,
> > > > > but rather leave them empty.  Yet again I need to leave it to Tamas to
> > > > > confirm or prove me wrong.
> > > >
> > > > If the child p2m populating is only triggered by guest accesses, then a
> > > > lot of hypercalls are likely to not work correctly on children.
> > >
> > > That's not what's happening.  As I said before, ALL access paths, be
> > > that from the guest, dom0 or Xen, trigger page population.  If there is
> > > a hole and P2M_ALLOC is set, we do the following in
> > > p2m_get_gfn_type_access:
> > >
> > >     /* Check if we need to fork the page */
> > >     if ( (q & P2M_ALLOC) && p2m_is_hole(*t) &&
> > >          !mem_sharing_fork_page(p2m->domain, gfn, q & P2M_UNSHARE) )
> > >         mfn = p2m->get_entry(p2m, gfn, t, a, q, page_order, NULL);
> > >
> > > Depending on the type of access we either populate the hole with a
> > > shared memory entry or a copy.
> >
> > Then the code here is redundant, as the call to get_page_from_gfn(...,
> > P2M_UNSHARE) in map_vcpu_info() will already take care of populating
> > the child p2m and copying the parent's page contents?
>
> Reading the code now, yes, calling map_vcpu_info() would take care of
> populating the page for us as well, so we can remove that code from
> mem_sharing.

Thanks for confirming.  Will try to prepare a patch next week to get rid
of the explicit child p2m populate for the vcpu_info page, and hopefully
simplify the code here also as a result.

Roger.