
Re: [Xen-devel] [PATCH] pci: clear host_maskall field on assign



On 08.10.2019 11:23, Roger Pau Monné wrote:
> On Wed, Oct 02, 2019 at 03:33:43PM +0200, Jan Beulich wrote:
>> On 02.10.2019 12:49, Roger Pau Monne wrote:
>>> The current implementation of host_maskall makes it sticky across
>>> assign and deassign calls, which means that once a guest forces Xen to
>>> set host_maskall the maskall bit is not going to be cleared until a
>>> call to PHYSDEVOP_prepare_msix is performed. Such a call, however,
>>> shouldn't be part of the normal flow when doing PCI passthrough, and
>>> hence the flag needs to be cleared when assigning in order to prevent
>>> host_maskall being carried over from previous assignments.
>>>
>>> Note that other mask fields, like guest_masked or the entry maskbit,
>>> are already reset when the msix capability is initialized.
>>
>> I take it you mean a guest action here, as PHYSDEVOP_prepare_msix is
>> specifically about setting up the actual hardware's maskall bit?
> 
> Right, or any path that calls into msix_capability_init (i.e. QEMU
> requesting to map a PIRQ to an MSIX entry will also call into
> msix_capability_init).
> 
>> This happens
>> quite a bit later though, i.e. ->guest_maskall may need to be set
>> explicitly at the same time as you clear ->host_maskall. Furthermore ...
>>> --- a/xen/drivers/passthrough/pci.c
>>> +++ b/xen/drivers/passthrough/pci.c
>>> @@ -1504,7 +1504,10 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
>>>      }
>>>  
>>>      if ( pdev->msix )
>>> +    {
>>>          msixtbl_init(d);
>>> +        pdev->msix->host_maskall = false;
>>> +    }
>>
>> ... doing just this would violate an assumed property: It ought to
>> be fine to assert at every entry or exit point that the physical
>> maskall bit of an MSI-X-enabled device is the logical OR of
>> ->host_maskall and ->guest_maskall.
> 
> Is this still valid at this point, even without my patch?
> 
> The hardware domain should have performed a reset of the device, so
> the state of the maskall hardware bit should be true, regardless of
> the previous state of either the guest_maskall or the host_maskall
> bits.

But a reset _clears_ the hardware maskall bit (alongside the
enable one).
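
To put that property in code form (a rough sketch only -
msix_get_maskall() is a hypothetical accessor for the physical bit,
not an existing helper):

    static void msix_check_maskall(const struct pci_dev *pdev)
    {
        /*
         * Physical maskall bit == logical OR of the two software-
         * tracked bits. msix_get_maskall() stands in for reading the
         * bit from the device's MSI-X control word.
         */
        ASSERT(msix_get_maskall(pdev) ==
               (pdev->msix->host_maskall || pdev->msix->guest_maskall));
    }

I.e. right after a reset such an assertion would trigger as long as
either of the two software-tracked bits is still set.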

>> I.e. I see the following
>> options:
>>
>> 1) your variant accompanied by updating of the hardware bit,
>>
>> 2)
>>
>>         pdev->msix->guest_maskall = pdev->msix->host_maskall;
>>         pdev->msix->host_maskall = false;
>>
>> leaving the hardware bit alone, as the above transformation
>> wouldn't change what it's supposed to be set to,
>>
>> 3)
>>
>>         pdev->msix->guest_maskall = true;
>>         pdev->msix->host_maskall = false;
>>
>> alongside setting the bit in hardware (if not already set),
> 
> That seems like the best option IMO, since it's the reset state of the
> device, and should be the expected one when assigning a device to a
> guest.

As per above - no, it's not. We mask interrupts in hardware
(through individual mask bits iirc) because pci_prepare_msix()
gets invoked by Dom0 ahead of giving the device to the guest,
which involves setting the enable bit (and hence unmasked
interrupts could trigger).
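
For reference, masking an individual entry amounts to setting bit 0
of its Vector Control dword, along the lines of (illustrative sketch
only, not the exact code we have in msi.c):

    /* Mask one MSI-X table entry; "entry" points at its base. */
    static void msix_mask_entry(void __iomem *entry)
    {
        u32 ctrl = readl(entry + PCI_MSIX_ENTRY_VECTOR_CTRL_OFFSET);

        writel(ctrl | PCI_MSIX_VECTOR_BITMASK,
               entry + PCI_MSIX_ENTRY_VECTOR_CTRL_OFFSET);
        /* Read back to make sure the write has reached the device. */
        readl(entry + PCI_MSIX_ENTRY_VECTOR_CTRL_OFFSET);
    }

With all entries masked this way, setting the enable bit is safe even
with the maskall bit clear.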

>> 4)
>>
>>         pdev->msix->guest_maskall = false;
>>         pdev->msix->host_maskall = false;
>>
>> alongside clearing the bit in hardware (if not already clear),
>> relying on all entries being individually masked (which ought
>> to be the state after the initial msix_capability_init()).
>>
>> In all cases the operation would likely better be done by
>> calling a function to be put in x86/msi.c.
> 
> Maybe name it pci_reset_msix_state?

The common naming pattern in the file seems to be to start with
msi_ or msix_, so perhaps better msix_reset_state()? I could
live with your suggested name though.
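
For concreteness, option 2 wrapped into such a function might look
like this (rough sketch, untested):

    /* x86/msi.c: reset maskall tracking when (re)assigning a device. */
    void msix_reset_state(struct pci_dev *pdev)
    {
        /*
         * Fold the sticky host_maskall into guest_maskall; what the
         * hardware maskall bit is supposed to be set to doesn't
         * change, so the hardware can be left alone.
         */
        pdev->msix->guest_maskall = pdev->msix->host_maskall;
        pdev->msix->host_maskall = false;
    }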

Jan
