Re: [PATCH v3 8/8] common: convert vCPU info area registration
On Thu, Sep 28, 2023 at 11:53:36AM +0200, Jan Beulich wrote:
> On 28.09.2023 10:24, Roger Pau Monné wrote:
> > On Wed, May 03, 2023 at 05:58:30PM +0200, Jan Beulich wrote:
> >> Switch to using map_guest_area(). Noteworthy differences from
> >> map_vcpu_info():
> >> - remote vCPU-s are paused rather than checked for being down (which in
> >> principle can change right after the check),
> >> - the domain lock is taken for a much smaller region,
> >> - the error code for an attempt to re-register the area is now -EBUSY,
> >> - we could in principle permit de-registration when no area was
> >> previously registered (which would permit "probing", if necessary for
> >> anything).
> >>
> >> Note that this eliminates a bug in copy_vcpu_settings(): The function
> >> did allocate a new page regardless of the GFN already having a mapping,
> >> thus in particular breaking the case of two vCPU-s having their info
> >> areas on the same page.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
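As a side note on the pausing point above, a minimal sketch (not the actual
patch; the surrounding shape is hypothetical, only the helpers are the usual
ones) of the difference between checking a remote vCPU for being down and
pausing it around the update:

    /* Old approach: racy - the vCPU may come back up right after the check. */
    if ( v != current && !test_bit(_VPF_down, &v->pause_flags) )
        return -EINVAL;

    /* New approach: pause the remote vCPU for the duration of the update. */
    if ( v != current )
        vcpu_pause(v);

    /* ... map the new area and swap the pointers ... */

    if ( v != current )
        vcpu_unpause(v);

Pausing closes the window in which the remote vCPU could come back up right
after the check.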
> >
> > Some minor comments below:
> >
> > Acked-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
>
> Thanks.
>
> >> --- a/xen/common/domain.c
> >> +++ b/xen/common/domain.c
> >> @@ -127,10 +127,10 @@ static void vcpu_info_reset(struct vcpu
> >> {
> >> struct domain *d = v->domain;
> >
> > d could likely be made const?
>
> Probably, but this is just context here.
>
> >> @@ -1633,14 +1528,44 @@ int map_guest_area(struct vcpu *v, paddr
> >>
> >> domain_lock(d);
> >>
> >> - if ( map )
> >> - populate(map, v);
> >> + /* No re-registration of the vCPU info area. */
> >> + if ( area != &v->vcpu_info_area || !area->pg )
> >
> > It would be nice if this check could be done earlier, as to avoid
> > having to fetch and map the page just to discard it. That would
> > however require taking the domain lock earlier.
>
> In order to kind of balance the conflicting goals, there is an unlocked
> early check in the caller. See also the related RFC remark; I might
> interpret your remark as "yes, keep the early check".
Oh, yes, do keep the check.
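To spell out the trade-off, roughly (a sketch of the pattern, not the exact
code):

    /* Caller: cheap, unlocked pre-check; purely an optimisation. */
    if ( v->vcpu_info_area.pg )
        return -EBUSY;

    /* map_guest_area() then fetches and maps the page first ... */

    /* ... and repeats the check that actually matters under the lock. */
    domain_lock(d);
    if ( area == &v->vcpu_info_area && area->pg )
        rc = -EBUSY;
    domain_unlock(d);

The unlocked check avoids the map/unmap round trip in the common error case,
while the locked one keeps the result race-free.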
> >> + {
> >> + if ( map )
> >> + populate(map, v);
> >>
> >> - SWAP(area->pg, pg);
> >> - SWAP(area->map, map);
> >> + SWAP(area->pg, pg);
> >> + SWAP(area->map, map);
> >> + }
> >> + else
> >> + rc = -EBUSY;
> >>
> >> domain_unlock(d);
> >>
> >> + /* Set pending flags /after/ new vcpu_info pointer was set. */
> >> + if ( area == &v->vcpu_info_area && !rc )
> >> + {
> >> + /*
> >> + * Mark everything as being pending just to make sure nothing gets
> >> + * lost. The domain will get a spurious event, but it can cope.
> >> + */
> >> +#ifdef CONFIG_COMPAT
> >> + if ( !has_32bit_shinfo(d) )
> >> + {
> >> + vcpu_info_t *info = area->map;
> >> +
> >> + /* For VCPUOP_register_vcpu_info handling in common_vcpu_op(). */
> >> + BUILD_BUG_ON(sizeof(*info) != sizeof(info->compat));
> >> + write_atomic(&info->native.evtchn_pending_sel, ~0);
> >> + }
> >> + else
> >> +#endif
> >> + write_atomic(&vcpu_info(v, evtchn_pending_sel), ~0);
> >
> > Can't the setting of evtchn_pending_sel be done in
> > vcpu_info_populate()?
>
> No, see the comment ahead of the outer if(). populate() needs calling
> ahead of updating the pointer.
I'm afraid I don't quite see why evtchn_pending_sel can't be set
before v->vcpu_info is updated. It will end up being ~0 anyway,
regardless of the order of operations, and we do force an event
channel injection. There's something I'm clearly missing.
vcpu_mark_events_pending() and force_update_vcpu_system_time()
obviously cannot be moved to the populate hook.
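Just to make sure we're talking about the same thing, my reading of the
resulting flow, condensed (comments are mine):

    domain_lock(d);
    populate(map, v);          /* new area prepared while still unreachable */
    SWAP(area->pg, pg);
    SWAP(area->map, map);      /* v->vcpu_info_area.map now points at it */
    domain_unlock(d);

    /* "Set pending flags /after/ new vcpu_info pointer was set." */
    write_atomic(&vcpu_info(v, evtchn_pending_sel), ~0);
    vcpu_mark_events_pending(v);          /* clearly has to stay here */
    force_update_vcpu_system_time(v);     /* likewise */

My question is only about the write_atomic() line; the last two have to stay
where they are, as said.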
> >> @@ -1801,6 +1729,27 @@ bool update_runstate_area(struct vcpu *v
> >> return rc;
> >> }
> >>
> >> +/*
> >> + * This makes sure that the vcpu_info is always pointing at a valid piece of
> >> + * memory, and it sets a pending event to make sure that a pending event
> >> + * doesn't get missed.
> >> + */
> >> +static void cf_check
> >> +vcpu_info_populate(void *map, struct vcpu *v)
> >> +{
> >> + vcpu_info_t *info = map;
> >> +
> >> + if ( v->vcpu_info_area.map == &dummy_vcpu_info )
> >> + {
> >> + memset(info, 0, sizeof(*info));
> >> +#ifdef XEN_HAVE_PV_UPCALL_MASK
> >> + __vcpu_info(v, info, evtchn_upcall_mask) = 1;
> >> +#endif
> >
> > I'm not sure about the point of those guards; this will always be 1,
> > as we always build the hypervisor with the headers in xen/public?
>
> That's the case on x86, but this is common code, and hence the build would
> break on other architectures if the guard wasn't there.
Ah, I see, sorry for the noise.
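For completeness, the way I now read it (the exact header location is from
memory): the define only exists on x86, so common code has to carry the guard.

    /* public/arch-x86/xen.h (x86 only): */
    #define XEN_HAVE_PV_UPCALL_MASK 1

    /*
     * Common code therefore has to guard the access, or the build breaks on
     * architectures whose vcpu_info has no evtchn_upcall_mask field.
     */
    #ifdef XEN_HAVE_PV_UPCALL_MASK
        __vcpu_info(v, info, evtchn_upcall_mask) = 1;
    #endif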
Thanks, Roger.