Re: [PATCH v2 4/8] x86/mem-sharing: copy GADDR based shared guest areas
On Mon, Jan 23, 2023 at 9:55 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>
> In preparation of the introduction of new vCPU operations allowing to
> register the respective areas (one of the two is x86-specific) by
> guest-physical address, add the necessary fork handling (with the
> backing function yet to be filled in).
>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -1653,6 +1653,65 @@ static void copy_vcpu_nonreg_state(struc
>      hvm_set_nonreg_state(cd_vcpu, &nrs);
>  }
>
> +static int copy_guest_area(struct guest_area *cd_area,
> +                           const struct guest_area *d_area,
> +                           struct vcpu *cd_vcpu,
> +                           const struct domain *d)
> +{
> +    mfn_t d_mfn, cd_mfn;
> +
> +    if ( !d_area->pg )
> +        return 0;
> +
> +    d_mfn = page_to_mfn(d_area->pg);
> +
> +    /* Allocate & map a page for the area if it hasn't been already. */
> +    if ( !cd_area->pg )
> +    {
> +        gfn_t gfn = mfn_to_gfn(d, d_mfn);
> +        struct p2m_domain *p2m = p2m_get_hostp2m(cd_vcpu->domain);
> +        p2m_type_t p2mt;
> +        p2m_access_t p2ma;
> +        unsigned int offset;
> +        int ret;
> +
> +        cd_mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
> +        if ( mfn_eq(cd_mfn, INVALID_MFN) )
> +        {
> +            struct page_info *pg = alloc_domheap_page(cd_vcpu->domain, 0);
> +
> +            if ( !pg )
> +                return -ENOMEM;
> +
> +            cd_mfn = page_to_mfn(pg);
> +            set_gpfn_from_mfn(mfn_x(cd_mfn), gfn_x(gfn));
> +
> +            ret = p2m->set_entry(p2m, gfn, cd_mfn, PAGE_ORDER_4K, p2m_ram_rw,
> +                                 p2m->default_access, -1);
> +            if ( ret )
> +                return ret;
> +        }
> +        else if ( p2mt != p2m_ram_rw )
> +            return -EBUSY;
> +
> +        /*
> +         * Simply specify the entire range up to the end of the page. All the
> +         * function uses it for is a check for not crossing page boundaries.
> +         */
> +        offset = PAGE_OFFSET(d_area->map);
> +        ret = map_guest_area(cd_vcpu, gfn_to_gaddr(gfn) + offset,
> +                             PAGE_SIZE - offset, cd_area, NULL);
> +        if ( ret )
> +            return ret;
> +    }
> +    else
> +        cd_mfn = page_to_mfn(cd_area->pg);

Everything to this point seems to be non-mem-sharing/forking related. Could
these live somewhere else? There must be some other place where allocating
these areas already happens for non-fork VMs, so it would make sense to just
refactor that code to be callable from here (see the sketch at the end of
this mail).

> +    copy_domain_page(cd_mfn, d_mfn);
> +
> +    return 0;
> +}
> +
>  static int copy_vpmu(struct vcpu *d_vcpu, struct vcpu *cd_vcpu)
>  {
>      struct vpmu_struct *d_vpmu = vcpu_vpmu(d_vcpu);
> @@ -1745,6 +1804,16 @@ static int copy_vcpu_settings(struct dom
>          copy_domain_page(new_vcpu_info_mfn, vcpu_info_mfn);
>      }
>
> +    /* Same for the (physically registered) runstate and time info areas. */
> +    ret = copy_guest_area(&cd_vcpu->runstate_guest_area,
> +                          &d_vcpu->runstate_guest_area, cd_vcpu, d);
> +    if ( ret )
> +        return ret;
> +    ret = copy_guest_area(&cd_vcpu->arch.time_guest_area,
> +                          &d_vcpu->arch.time_guest_area, cd_vcpu, d);
> +    if ( ret )
> +        return ret;
> +
>      ret = copy_vpmu(d_vcpu, cd_vcpu);
>      if ( ret )
>          return ret;
> @@ -1987,7 +2056,10 @@ int mem_sharing_fork_reset(struct domain
>
>  state:
>      if ( reset_state )
> +    {
>          rc = copy_settings(d, pd);
> +        /* TBD: What to do here with -ERESTART? */

Where is -ERESTART coming from?
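To make the refactoring suggestion above concrete, here is a rough, untested
sketch of what the factored-out helper could look like. The name
ensure_ram_page() and its exact signature are invented for illustration; the
body is simply the p2m lookup/allocate logic lifted out of copy_guest_area()
above, unchanged in behavior:

/*
 * Illustrative only: make sure a RAM page is present at @gfn in @d's p2m,
 * allocating and inserting one if nothing is mapped there yet.  Both the
 * fork path above and the regular (non-fork) area registration path could
 * in principle share a helper along these lines.
 */
static int ensure_ram_page(struct domain *d, gfn_t gfn, mfn_t *mfn)
{
    struct p2m_domain *p2m = p2m_get_hostp2m(d);
    p2m_type_t p2mt;
    p2m_access_t p2ma;

    /* Look up the current p2m entry for @gfn. */
    *mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
    if ( mfn_eq(*mfn, INVALID_MFN) )
    {
        struct page_info *pg = alloc_domheap_page(d, 0);
        int ret;

        if ( !pg )
            return -ENOMEM;

        *mfn = page_to_mfn(pg);
        set_gpfn_from_mfn(mfn_x(*mfn), gfn_x(gfn));

        /* Insert the fresh page as ordinary RAM. */
        ret = p2m->set_entry(p2m, gfn, *mfn, PAGE_ORDER_4K, p2m_ram_rw,
                             p2m->default_access, -1);
        if ( ret )
            return ret;
    }
    else if ( p2mt != p2m_ram_rw )
        return -EBUSY;

    return 0;
}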
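With a helper along those lines (and assuming the non-fork registration path
can be adjusted to call it as well), the allocation branch of
copy_guest_area() would reduce to roughly:

    if ( !cd_area->pg )
    {
        gfn_t gfn = mfn_to_gfn(d, d_mfn);
        unsigned int offset;
        int ret;

        ret = ensure_ram_page(cd_vcpu->domain, gfn, &cd_mfn);
        if ( ret )
            return ret;

        offset = PAGE_OFFSET(d_area->map);
        ret = map_guest_area(cd_vcpu, gfn_to_gaddr(gfn) + offset,
                             PAGE_SIZE - offset, cd_area, NULL);
        if ( ret )
            return ret;
    }

leaving only the fork-specific glue in mem_sharing.c.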