
Re: [PATCH 1/2] xen/arm: Convert runstate address during hypcall



On Fri, 12 Jun 2020, Bertrand Marquis wrote:
> > On 12 Jun 2020, at 02:09, Stefano Stabellini <sstabellini@xxxxxxxxxx> wrote:
> > 
> > On Thu, 11 Jun 2020, Julien Grall wrote:
> >> Hi Stefano,
> >> 
> >> On 11/06/2020 19:50, Stefano Stabellini wrote:
> >>> On Thu, 11 Jun 2020, Julien Grall wrote:
> >>>>>> +        return -EINVAL;
> >>>>>>      }
> >>>>>> 
> >>>>>> -    __copy_to_guest(runstate_guest(v), &runstate, 1);
> >>>>>> +    v->arch.runstate_guest.page = page;
> >>>>>> +    v->arch.runstate_guest.offset = offset;
> >>>>>> +
> >>>>>> +    spin_unlock(&v->arch.runstate_guest.lock);
> >>>>>> +
> >>>>>> +    return 0;
> >>>>>> +}
> >>>>>> +
> >>>>>> +
> >>>>>> +/* Update per-VCPU guest runstate shared memory area (if registered). */
> >>>>>> +static void update_runstate_area(struct vcpu *v)
> >>>>>> +{
> >>>>>> +    struct vcpu_runstate_info *guest_runstate;
> >>>>>> +    void *p;
> >>>>>> +
> >>>>>> +    spin_lock(&v->arch.runstate_guest.lock);
> >>>>>> 
> >>>>>> -    if ( guest_handle )
> >>>>>> +    if ( v->arch.runstate_guest.page )
> >>>>>>      {
> >>>>>> -        runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
> >>>>>> +        p = __map_domain_page(v->arch.runstate_guest.page);
> >>>>>> +        guest_runstate = p + v->arch.runstate_guest.offset;
> >>>>>> +
> >>>>>> +        if ( VM_ASSIST(v->domain, runstate_update_flag) )
> >>>>>> +        {
> >>>>>> +            v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
> >>>>>> +            guest_runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
> >>>>> 
> >>>>> I think that this write to guest_runstate should use write_atomic or
> >>>>> another atomic write operation.
> >>>> 
> >>>> I thought about suggesting the same, but guest_copy_* helpers may not
> >>>> do a single memory write to state_entry_time.
> >>>> What are you trying to prevent with the write_atomic()?
> >>> 
> >>> I am thinking that without using an atomic write, it would be (at least
> >>> theoretically) possible for a guest to see a partial write to
> >>> state_entry_time, which is not good. 
> >> 
> >> It is already the case with the existing implementation, as Xen may write
> >> byte by byte. So are you suggesting the existing code is also buggy?
> > 
> > Writing byte by byte is a different case. That is OK. In that case, the
> > guest could see the state after 3 bytes written and it would be fine and
> > consistent. If this hadn't been the case, then yes, the existing code
> > would also be buggy.
> > 
> > So if we did the write with a memcpy, it would be fine, no need for
> > atomics:
> > 
> >  memcpy(&guest_runstate->state_entry_time,
> >         &v->runstate.state_entry_time,
> >         XXX);
> > 
> > 
> > The |= case is different: GCC could implement it in any way it likes,
> > including going through a zero-write to any of the bytes in the word, or
> > doing an addition then a subtraction. GCC doesn't make any guarantees.
> > If we want guarantees we need to use atomics.
> 
> Wouldn’t that require all accesses to state_entry_time to also use atomic
> operations?
> In that case we could not propagate the changes to a guest without changing
> the interface itself.
> 
> As the copy itself needs to be protected, the write barriers are there to
> make sure that the bit is set during the copy and that the copy is done by
> the time we unset it.
> For this purpose I added a barrier after the memcpy, to make sure that when/if
> we unset the bit the copy has already been done.

As you say, we have a flag to mark the transitioning period:
XEN_RUNSTATE_UPDATE. So I think it is OK if we don't use atomics during
the transitioning period, but we need to make sure to use atomics for the
update of the XEN_RUNSTATE_UPDATE flag itself.
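
Something along these lines, just a rough sketch on top of your patch to
show what I mean (untested, and assuming write_atomic() can actually be
used on a 64-bit field on both arm32 and arm64, which is worth
double-checking):

static void update_runstate_area(struct vcpu *v)
{
    struct vcpu_runstate_info *guest_runstate;
    void *p;

    spin_lock(&v->arch.runstate_guest.lock);

    if ( v->arch.runstate_guest.page )
    {
        p = __map_domain_page(v->arch.runstate_guest.page);
        guest_runstate = p + v->arch.runstate_guest.offset;

        if ( VM_ASSIST(v->domain, runstate_update_flag) )
        {
            v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
            /* single store so the guest can never see a torn flag */
            write_atomic(&guest_runstate->state_entry_time,
                         v->runstate.state_entry_time);
            smp_wmb();
        }

        /* a plain copy is fine here, the flag marks the window */
        memcpy(guest_runstate, &v->runstate, sizeof(v->runstate));

        if ( VM_ASSIST(v->domain, runstate_update_flag) )
        {
            smp_wmb();
            v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
            write_atomic(&guest_runstate->state_entry_time,
                         v->runstate.state_entry_time);
        }

        unmap_domain_page(p);
    }

    spin_unlock(&v->arch.runstate_guest.lock);
}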

Does it make sense? Or maybe I misunderstood some of the things you
wrote?
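
For reference, this is roughly the retry loop I would expect a guest to use
when reading the area (only an illustration, with read_atomic()/smp_rmb()
standing in for whatever primitives the guest OS has, and runstate being a
pointer to the registered area):

    struct vcpu_runstate_info snapshot;
    uint64_t t;

    do {
        t = read_atomic(&runstate->state_entry_time);
        smp_rmb();
        snapshot = *runstate;
        smp_rmb();
    } while ( (t & XEN_RUNSTATE_UPDATE) ||
              t != read_atomic(&runstate->state_entry_time) );

If the flag write from Xen can be torn, the guest may miss the update window
and end up using a half-copied snapshot, which is why I'd like the flag
itself to be written atomically.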

 

