
Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with per-channel lock held



On Mon, Dec 7, 2020 at 12:30 PM Julien Grall <julien@xxxxxxx> wrote:
>
> Hi Jan,
>
> On 07/12/2020 15:28, Jan Beulich wrote:
> > On 04.12.2020 20:15, Tamas K Lengyel wrote:
> >> On Fri, Dec 4, 2020 at 10:29 AM Julien Grall <julien@xxxxxxx> wrote:
> >>> On 04/12/2020 15:21, Tamas K Lengyel wrote:
> >>>> On Fri, Dec 4, 2020 at 6:29 AM Julien Grall <julien@xxxxxxx> wrote:
> >>>>> On 03/12/2020 10:09, Jan Beulich wrote:
> >>>>>> On 02.12.2020 22:10, Julien Grall wrote:
> >>>>>>> On 23/11/2020 13:30, Jan Beulich wrote:
> >>>>>>>> While there don't look to be any problems with this right now, the
> >>>>>>>> lock order implications from holding the lock can be very difficult
> >>>>>>>> to follow (and may be easy to violate unknowingly). The present
> >>>>>>>> callbacks don't (and no such callback should) have any need for the
> >>>>>>>> lock to be held.
> >>>>>>>>
> >>>>>>>> However, vm_event_disable() frees the structures used by respective
> >>>>>>>> callbacks and isn't otherwise synchronized with invocations of these
> >>>>>>>> callbacks, so maintain a count of in-progress calls, for
> >>>>>>>> evtchn_close() to wait to drop to zero before freeing the port (and
> >>>>>>>> dropping the lock).
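
[Editorial aside: the counting scheme described above boils down to roughly the
sketch below. This is an illustration only, not the actual patch; the field
name callback_busy and the helper/callback names are hypothetical.]

    /*
     * Sketch of the idea, not the real patch code: count callback
     * invocations that are in flight with the per-channel lock dropped,
     * and make the teardown path wait for that count to reach zero
     * before freeing the port.
     */
    static void invoke_xen_consumer(struct evtchn *chn, struct vcpu *v)
    {
        /* Counter bumped while the per-channel lock is still held... */
        atomic_inc(&chn->callback_busy);        /* hypothetical field */
        spin_unlock(&chn->lock);

        /* ...so the callback itself runs without the lock. */
        chn->xen_consumer_fn(v, chn->port);     /* hypothetical callback pointer */

        atomic_dec(&chn->callback_busy);
    }

    /* evtchn_close() then waits for in-flight callbacks before freeing: */
    while ( atomic_read(&chn->callback_busy) )
        cpu_relax();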
> >>>>>>>
> >>>>>>> AFAICT, this callback is not the only place where the synchronization
> >>>>>>> is missing in the VM event code.
> >>>>>>>
> >>>>>>> For instance, vm_event_put_request() can also race against
> >>>>>>> vm_event_disable().
> >>>>>>>
> >>>>>>> So shouldn't we handle this issue properly in VM event?
> >>>>>>
> >>>>>> I suppose that's a question for the VM event folks rather than me?
> >>>>>
> >>>>> Yes. From my understanding of Tamas's e-mail, they are relying on the
> >>>>> monitoring software to do the right thing.
> >>>>>
> >>>>> I will refrain from commenting on this approach. However, given the race is
> >>>>> much wider than the event channel, I would recommend not adding more
> >>>>> code in the event channel to deal with such a problem.
> >>>>>
> >>>>> Instead, this should be fixed in the VM event code when someone has time
> >>>>> to harden the subsystem.
> >>>>
> >>>> I double-checked and the disable route is actually more robust: we
> >>>> don't just rely on the toolstack doing the right thing. The domain
> >>>> gets paused before any calls to vm_event_disable(). So I don't think
> >>>> there is really a race condition here.
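
[Editorial aside: the ordering Tamas describes is roughly the following.
Simplified shape only, not literal Xen source.]

    /*
     * Pausing the monitored domain first means its vCPUs cannot generate
     * new vm_event requests while the ring and the associated event
     * channel are being torn down.
     */
    domain_pause(d);                 /* no further requests from d's vCPUs */
    rc = vm_event_disable(d, &ved);  /* frees the ring / event channel */
    domain_unpause(d);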
> >>>
> >>> The code will *only* pause the monitored domain. I can see two issues:
> >>>      1) The toolstack is still sending events while the destroy is happening.
> >>> This is the race discussed here.
> >>>      2) The implementation of vm_event_put_request() suggests that it can be
> >>> called with a not-current domain.
> >>>
> >>> I don't see how just pausing the monitored domain is enough here.
> >>
> >> Requests only get generated by the monitored domain. So if the domain
> >> is not running you won't get more of them. The toolstack can only send
> >> replies.
> >
> > Julien,
> >
> > does this change your view on the refcounting added by the patch
> > at the root of this sub-thread?
>
> I still think the code is at best fragile. One example I can find is:
>
>    -> guest_remove_page()
>      -> p2m_mem_paging_drop_page()
>       -> vm_event_put_request()
>
> guest_remove_page() is not always called on the current domain, so there
> is a possibility for vm_event_put_request() to happen on a foreign domain,
> in which case it wouldn't be protected by the current hypercall.
>
> Anyway, I don't think the refcounting should be part of the event
> channel without any idea of how this would fit into fixing the VM event race.
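
[Editorial aside: one concrete way such a foreign call could arise, based on a
reading of the code paths; take the exact chain with a grain of salt.]

    A hypercall issued from domain A (e.g. the toolstack) against the
    monitored domain d:

       do_memory_op(XENMEM_decrease_reservation, ...)   /* current is not d */
         -> guest_remove_page(d, gfn)
           -> p2m_mem_paging_drop_page(d, gfn, p2mt)
             -> vm_event_put_request(d, d->vm_event_paging, &req)

    Pausing d does not pause domain A, so this request can still be issued
    while vm_event_disable() is freeing d->vm_event_paging.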

If the problematic patterns only appear with mem_paging, I would
suggest just removing the mem_paging code completely. It has been
abandoned for several years now.

Tamas



 

