
Re: [PATCH-for-4.14] ioreq: handle pending emulation racing with ioreq server destruction



On 08.06.2020 13:04, Paul Durrant wrote:
>> From: Jan Beulich <jbeulich@xxxxxxxx>
>> Sent: 08 June 2020 11:58
>>
>> On 08.06.2020 11:46, Paul Durrant wrote:
>>> --- a/xen/arch/x86/hvm/ioreq.c
>>> +++ b/xen/arch/x86/hvm/ioreq.c
>>> @@ -109,15 +109,7 @@ static void hvm_io_assist(struct hvm_ioreq_vcpu *sv, uint64_t data)
>>>      ioreq_t *ioreq = &v->arch.hvm.hvm_io.io_req;
>>>
>>>      if ( hvm_ioreq_needs_completion(ioreq) )
>>> -    {
>>> -        ioreq->state = STATE_IORESP_READY;
>>>          ioreq->data = data;
>>> -    }
>>> -    else
>>> -        ioreq->state = STATE_IOREQ_NONE;
>>> -
>>> -    msix_write_completion(v);
>>> -    vcpu_end_shutdown_deferral(v);
>>>
>>>      sv->pending = false;
>>>  }
>>
>> With this, is the function worth keeping at all?
> 
> I did think about that, but it is called in more than one place. So,
> in the interest of trying to make back-porting straightforward, I
> thought it best to keep it simple.
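
(For reference, with the hunk above applied the function body reduces to
roughly the following. This is only a sketch pieced together from the quoted
context; the declaration of v sits above the hunk and is assumed unchanged.)

    static void hvm_io_assist(struct hvm_ioreq_vcpu *sv, uint64_t data)
    {
        struct vcpu *v = sv->vcpu;   /* assumed; not part of the quoted hunk */
        ioreq_t *ioreq = &v->arch.hvm.hvm_io.io_req;

        if ( hvm_ioreq_needs_completion(ioreq) )
            ioreq->data = data;

        sv->pending = false;
    }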

Fair enough, but going forward I still think it would be nice to get
rid of the function. To do this sufficiently cleanly, the main
question I have is: Why is hvm_wait_for_io() implemented as a loop?
Hasn't this become pointless with the XSA-262 fix? Switching this to
a do {} while(false) construct (seeing that the only caller has
already checked sv->pending) would seem to allow moving what is now
in hvm_io_assist() to immediately after this construct, at the expense
of a local variable holding ~0ul initially and, in the common case,
p->data afterwards.
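
In rough outline (a sketch only, not a tested patch; the existing state
machine and wait_on_xen_event_channel() handling inside the braces are
elided, and the exact signature is recalled from the current code rather
than taken from this thread):

    static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
    {
        struct vcpu *v = sv->vcpu;
        uint64_t data = ~0ul;    /* default if no response gets picked up */

        do {
            /*
             * ... existing state checks and event channel wait, unchanged
             * apart from the surrounding construct ...
             */
            if ( p->state == STATE_IORESP_READY )
            {
                p->state = STATE_IOREQ_NONE;
                data = p->data;  /* common case */
            }
        } while ( false );       /* sv->pending already checked by the caller */

        /* Formerly hvm_io_assist(): */
        if ( hvm_ioreq_needs_completion(&v->arch.hvm.hvm_io.io_req) )
            v->arch.hvm.hvm_io.io_req.data = data;

        sv->pending = false;

        return true;
    }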

Thoughts? (I'll be happy to make such a patch; I'm just checking
whether I'm overlooking something crucial.)

Jan



 

