
Re: [Xen-devel] [PATCHv2 3/5] evtchn: use a per-event channel lock for sending events



On 16/06/15 17:19, Jan Beulich wrote:
>>>> On 16.06.15 at 17:58, <david.vrabel@xxxxxxxxxx> wrote:
>> On 16/06/15 16:19, David Vrabel wrote:
>>>>> @@ -1221,6 +1277,8 @@ void notify_via_xen_event_channel(struct domain
>>>>> *ld, int lport)
>>>>>          evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
>>>>>      }
>>>>>  
>>>>> +    spin_unlock(&lchn->lock);
>>>>> +
>>>>>      spin_unlock(&ld->event_lock);
>>>>>  }
>>>>
>>>> Again I think the event lock can be dropped earlier now.
>>>
>>> Ditto.
>>
>> Uh, no. This is notify.  I've kept the locking like this because of the
>> ld->is_dying check.  I think we need ld->event_lock in case
>> ld->is_dying is set and evtchn_destroy(ld) is called.
> 
> Right, but if evtchn_destroy() was a concern, then this wouldn't
> apply just here, but also in the sending path you are relaxing.
> Afaict, due to the channel lock being taken in __evtchn_close(),
> you can drop the event lock here at the latest after you acquire
> the channel one (I haven't been able to convince myself yet that
> dropping it even before that would be okay).

But in the evtchn_send() case, we're in a hypercall, so we know
ld->is_dying is false and thus cannot be racing with evtchn_destroy(ld).

It would be good to remove event_lock from notify_via_xen_event_channel()
as well, since this path is heavily used for ioreqs and vm events.  Let
me have a more careful look.
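
For reference, the ordering under discussion looks roughly like this
(a hand-written sketch reconstructed from the quoted diff, not the
actual tree; helper names and the surrounding structure are assumed):

```c
/* Sketch only -- mirrors the quoted patch, not the committed code. */
void notify_via_xen_event_channel(struct domain *ld, int lport)
{
    struct evtchn *lchn, *rchn;
    struct domain *rd;

    spin_lock(&ld->event_lock);      /* protects against evtchn_destroy(ld) */

    if ( unlikely(ld->is_dying) )
    {
        spin_unlock(&ld->event_lock);
        return;
    }

    lchn = evtchn_from_port(ld, lport);
    spin_lock(&lchn->lock);          /* per-channel lock from this series */

    /*
     * Jan's point: once lchn->lock is held, __evtchn_close() would also
     * have to take it, so ld->event_lock could arguably be dropped here
     * rather than held across the delivery below.
     */

    if ( likely(lchn->state == ECS_INTERDOMAIN) )
    {
        rd = lchn->u.interdomain.remote_dom;
        rchn = evtchn_from_port(rd, lchn->u.interdomain.remote_port);
        evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
    }

    spin_unlock(&lchn->lock);
    spin_unlock(&ld->event_lock);    /* currently released last, as in v2 */
}
```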

David

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

