
Re: [PATCH] evtchn/Flask: pre-allocate node on send path



On 25.09.2020 16:57, Jürgen Groß wrote:
> On 25.09.20 14:21, Jan Beulich wrote:
>> On 25.09.2020 12:34, Julien Grall wrote:
>>> On 24/09/2020 11:53, Jan Beulich wrote:
>>>> xmalloc() & Co may not be called with IRQs off, or else check_lock()
>>>> will have its assertion trigger about locks getting acquired
>>>> inconsistently. Re-arranging the locking in evtchn_send() doesn't seem
>>>> very reasonable, especially since the per-channel lock was introduced to
>>>> avoid acquiring the per-domain event lock on the send paths. Issue a
>>>> second call to xsm_evtchn_send() instead, before acquiring the lock, to
>>>> give XSM / Flask a chance to pre-allocate whatever it may need.
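
To illustrate what this amounts to, the shape of the change is roughly
the below - a simplified sketch, not the literal patch: port validation,
error handling, and the actual send logic are omitted.

/*
 * Simplified sketch only, not the actual patch.  The point is merely the
 * ordering: let the XSM hook run (and hence Flask allocate whatever it
 * needs) while IRQs are still enabled, before taking the per-channel
 * lock, which is acquired with IRQs off.
 */
int evtchn_send(struct domain *ld, evtchn_port_t lport)
{
    struct evtchn *lchn = evtchn_from_port(ld, lport);
    unsigned long flags;
    int ret;

    /* 1st invocation, IRQs on: XSM / Flask may xmalloc() here. */
    ret = xsm_evtchn_send(XSM_HOOK, ld, lchn);
    if ( ret )
        return ret;

    spin_lock_irqsave(&lchn->lock, flags);

    /* 2nd invocation, IRQs off: expected to be satisfied without allocating. */
    ret = xsm_evtchn_send(XSM_HOOK, ld, lchn);
    if ( !ret )
    {
        /* ... actual send logic ... */
    }

    spin_unlock_irqrestore(&lchn->lock, flags);

    return ret;
}
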
>>>
>>> This is the sort of fall-out I was expecting when we decided to turn off
>>> interrupts for a big chunk of code. I couldn't find any at the time
>>> though...
>>>
>>> Can you remind me which callers of send_guest_{global,vcpu}_virq() will call
>>> them with interrupts off?
>>
>> I don't recall which one of the two it was that I hit; we wanted
>> both to use the lock anyway. send_guest_pirq() very clearly also
>> gets called with IRQs off.
>>
>>> Would it be possible to consider deferring the call to a softirq
>>> tasklet? If so, this would allow us to turn interrupts back on.
>>
>> Of course this is in principle possible; the question is how
>> involved this is going to get. However, on x86, oprofile's call to
>> send_guest_vcpu_virq() can't easily be replaced - it's dangerous
>> enough already that it involves locks in NMI context. I don't
>> fancy seeing it use more commonly used ones.
> 
> Is it really so hard to avoid calling send_guest_vcpu_virq() in NMI
> context?
> 
> Today it is called only if the NMI happened inside the guest, so the
> main Xen stack is unused at this time. It should be rather straightforward
> to mimic a stack frame on the main stack and iret to a special
> handler from NMI context. This handler would then call
> send_guest_vcpu_virq() and return to the guest.

Quite possibly it's not overly difficult to arrange for. But even
with this out of the way, I don't really view this softirq tasklet
route as viable; I could be proven wrong by a demonstration that it's
sufficiently straightforward.
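
For completeness, the kind of thing such a demonstration would have to
flesh out is roughly the below - a sketch only, with the tasklet
interface quoted from memory and the per-vCPU bookkeeping (the
pending_virq field) invented purely for illustration. Note that
tasklet_schedule() itself takes a global lock, which is exactly what
makes the NMI-context caller awkward.

/*
 * Sketch only: interface details from memory, and the pending_virq field
 * is hypothetical - real code would need proper per-vCPU bookkeeping and
 * one-time tasklet initialisation.
 */
static DEFINE_PER_CPU(struct tasklet, virq_tasklet);

static void virq_deferred_send(unsigned long arg)
{
    struct vcpu *v = (struct vcpu *)arg;

    /* Softirq context, IRQs enabled: taking the event locks is fine here. */
    send_guest_vcpu_virq(v, v->pending_virq /* hypothetical field */);
}

/* Would be called from the IRQs-off path instead of sending directly. */
static void defer_vcpu_virq(struct vcpu *v, uint32_t virq)
{
    struct tasklet *t = &this_cpu(virq_tasklet);

    v->pending_virq = virq;                 /* hypothetical field */
    softirq_tasklet_init(t, virq_deferred_send, (unsigned long)v);
    /* Caveat: tasklet_schedule() acquires a commonly used spin lock. */
    tasklet_schedule(t);
}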

Jan
