Re: [Xen-devel] [PATCH 2/2 for-4.12] xen: implement VCPUOP_register_runstate_phys_memory_area
>>> On 05.03.19 at 14:14, <andrii.anisov@xxxxxxxxx> wrote:
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -305,6 +305,26 @@ static void update_runstate_area(struct vcpu *v)
>                                  1);
>          }
>      }
> +    else if ( v->runstate_guest_type == RUNSTATE_PADDR )
> +    {
> +        if ( VM_ASSIST(v->domain, runstate_update_flag) )
> +        {
> +            runstate_guest(v).p->state_entry_time |= XEN_RUNSTATE_UPDATE;
> +            smp_wmb();
> +            v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
> +        }
> +
> +        memcpy(runstate_guest(v).p, &v->runstate, sizeof(v->runstate));
> +
> +        if ( VM_ASSIST(v->domain, runstate_update_flag) )
> +        {
> +            runstate_guest(v).p->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
> +            smp_wmb();
> +            v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
> +        }
> +    }
> +    else
> +    { /* No actions required */ }

See my respective comment on patch 1.

> @@ -1648,9 +1648,37 @@ bool update_runstate_area(struct vcpu *v)
>                                    (void *)(&v->runstate.state_entry_time + 1)
>                                    - 1, 1);
>          }
>      }
> -    else
> +    else if ( v->runstate_guest_type == RUNSTATE_PADDR )
>      {
> -        rc = true;
> +        if ( VM_ASSIST(v->domain, runstate_update_flag) )
> +        {
> +            v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
> +            if ( has_32bit_shinfo((v)->domain) )

Stray inner parentheses (at least one more instance further down).

> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -738,7 +738,14 @@ int domain_kill(struct domain *d)
>          if ( cpupool_move_domain(d, cpupool0) )
>              return -ERESTART;
>          for_each_vcpu ( d, v )
> +        {
> +            if ( v->runstate_guest_type == RUNSTATE_VADDR )
> +                set_xen_guest_handle(runstate_guest(v), NULL);
> +            else
> +                unmap_runstate_area(v);

Since this recurs further down, can't the VADDR case also be taken care
of by unmap_runstate_area()?
> @@ -1333,6 +1344,65 @@ void unmap_vcpu_info(struct vcpu *v)
>          put_page_and_type(mfn_to_page(mfn));
>  }
>
> +int map_runstate_area(struct vcpu *v,
> +                      struct vcpu_register_runstate_memory_area *area)

Neither this nor the unmap function look to be used outside this file,
so should be static. It also looks as if the second parameter could be
constified.

> +{
> +    unsigned long offset = area->addr.p & ~PAGE_MASK;
> +    gfn_t gfn = gaddr_to_gfn(area->addr.p);
> +    struct domain *d = v->domain;
> +    void *mapping;
> +    struct page_info *page;
> +    size_t size = sizeof (struct vcpu_runstate_info );

Stray blanks.

> +    ASSERT(v->runstate_guest_type == RUNSTATE_PADDR );

Another one.

> +    if ( offset > (PAGE_SIZE - size) )
> +        return -EINVAL;
> +
> +    page = get_page_from_gfn(d, gfn_x(gfn), NULL, P2M_ALLOC);

Don't you also need P2M_UNSHARE? And is the p2m type of the page really
of no interest here?

> +void unmap_runstate_area(struct vcpu *v)
> +{
> +    mfn_t mfn;
> +
> +    if ( v->runstate_guest_type != RUNSTATE_PADDR )
> +        return;
> +
> +    if ( guest_handle_is_null(runstate_guest(v)) )
> +        return;
> +
> +    mfn = _mfn(virt_to_mfn(runstate_guest(v).p));

The pointer is the result of __map_domain_page_global() - I don't think
you can legitimately use virt_to_mfn() on it, at least not on x86;
domain_page_map_to_mfn() is what you want to use here.

> @@ -1532,13 +1602,29 @@ long do_vcpu_op(int cmd, unsigned int vcpuid,
>                  XEN_GUEST_HANDLE_PARAM(void) arg)
>              vcpu_runstate_get(v, &runstate);
>              __copy_to_guest(runstate_guest(v), &runstate, 1);
>          }
> -
>          break;
>      }

Undue removal of a blank line.
>      case VCPUOP_register_runstate_phys_memory_area:
>      {
> -        rc = -ENOSYS;
> +        struct vcpu_register_runstate_memory_area area;
> +
> +        rc = -EFAULT;
> +        if ( copy_from_guest(&area, arg, 1) )
> +            break;
> +
> +        unmap_runstate_area(v);
> +        v->runstate_guest_type = RUNSTATE_PADDR;
> +        rc = map_runstate_area(v, &area);
> +
> +        if ( rc )
> +        {
> +            v->runstate_guest_type = RUNSTATE_NONE;

All of this updating of type and mapping is racy, and I guess that's
also what Julien has already pointed out. As long as the mapping can be
updated by remote CPUs, you have to make sure that the consuming side
can't see partially updated values. One option _might_ be to switch to
NONE first, and to PADDR upon success (with suitable barriers in
between). The logic then of course needs mirroring to the VADDR path.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel