 
	
Re: [PATCH v2.1 2/2] vpci/msix: fix PBA accesses
 On 01.03.2022 10:08, Roger Pau Monné wrote:
> On Tue, Mar 01, 2022 at 09:46:13AM +0100, Jan Beulich wrote:
>> On 26.02.2022 11:05, Roger Pau Monne wrote:
>>> --- a/xen/drivers/vpci/msix.c
>>> +++ b/xen/drivers/vpci/msix.c
>>> @@ -198,8 +198,13 @@ static int cf_check msix_read(
>>>      if ( !access_allowed(msix->pdev, addr, len) )
>>>          return X86EMUL_OKAY;
>>>  
>>> +    spin_lock(&msix->pdev->vpci->lock);
>>>      if ( VMSIX_ADDR_IN_RANGE(addr, msix->pdev->vpci, VPCI_MSIX_PBA) )
>>>      {
>>> +        struct vpci *vpci = msix->pdev->vpci;
>>> +        paddr_t base = vmsix_table_addr(vpci, VPCI_MSIX_PBA);
>>> +        unsigned int idx = addr - base;
>>> +
>>>          /*
>>>           * Access to PBA.
>>>           *
>>> @@ -207,25 +212,43 @@ static int cf_check msix_read(
>>>           * guest address space. If this changes the address will need to be
>>>           * translated.
>>>           */
>>> +
>>> +        if ( !msix->pba )
>>> +        {
>>> +            msix->pba = ioremap(base, vmsix_table_size(vpci, VPCI_MSIX_PBA));
>>> +            if ( !msix->pba )
>>> +            {
>>> +                /*
>>> +                 * If unable to map the PBA return all 1s (all pending): it's
>>> +                 * likely better to trigger spurious events than drop them.
>>> +                 */
>>> +                spin_unlock(&vpci->lock);
>>> +                gprintk(XENLOG_WARNING,
>>> +                        "%pp: unable to map MSI-X PBA, report all pending\n",
>>> +                        msix->pdev);
>>> +                return X86EMUL_OKAY;
>>
>> Hmm, this may report more set bits than there are vectors. Which is
>> probably fine, but the comment may want adjusting a little to make
>> clear this is understood and meant to be that way.
> 
> Yes, it could return more set bits than there are vectors, but that
> trailing area is still part of the PBA (as its end is aligned to 8
> bytes). I will adjust the comment.
> 
>>> +            }
>>> +        }
>>
>> Imo it would make sense to limit the locked region to around just this
>> check-and-map logic. There's no need for ...
>>
>>>          switch ( len )
>>>          {
>>>          case 4:
>>> -            *data = readl(addr);
>>> +            *data = readl(msix->pba + idx);
>>>              break;
>>>  
>>>          case 8:
>>> -            *data = readq(addr);
>>> +            *data = readq(msix->pba + idx);
>>>              break;
>>>  
>>>          default:
>>>              ASSERT_UNREACHABLE();
>>>              break;
>>>          }
>>> +        spin_unlock(&vpci->lock);
>>
>> ... the actual access to happen under lock, as you remove the mapping
>> only when the device is being removed. I'm inclined to suggest making
>> a helper function, which does an unlocked check, then the ioremap(),
>> then takes the lock and re-checks whether the field's still NULL, and
>> either installs the mapping or (after unlocking) iounmap()s it.
> 
> I'm fine with dropping the lock earlier, but I'm not sure there's much
> point in placing this in a separate helper, as it's the mapping of at
> most 2 pages (the PBA is at most 2048 bits, i.e. 256 bytes, and 64-bit
> aligned, so it can straddle at most one page boundary).
> 
> I guess you are suggesting this in preparation for adding support to
> access the non-PBA area falling into the same page(s)?
Not just that. The write path wants to use the same logic, and with it
becoming a little more involved, I think it would be better to have it
in just one place.
Jan
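
To make the suggested shape concrete, here is a minimal sketch of such a
check-and-map helper: an unlocked fast path, ioremap() performed outside
the lock, then a locked re-check that either publishes the mapping or
drops the redundant one. The name get_pba() and the use of the
read_atomic()/write_atomic() accessors are assumptions for illustration,
not necessarily what the eventual patch does:

    static void *get_pba(struct vpci *vpci)
    {
        struct vpci_msix *msix = vpci->msix;
        /*
         * Unlocked fast path: once installed, the mapping only goes away
         * together with the device, so no lock is needed to consume it.
         */
        void *pba = read_atomic(&msix->pba);

        if ( pba )
            return pba;

        pba = ioremap(vmsix_table_addr(vpci, VPCI_MSIX_PBA),
                      vmsix_table_size(vpci, VPCI_MSIX_PBA));
        if ( !pba )
            return NULL;

        spin_lock(&vpci->lock);
        if ( !msix->pba )
        {
            /* We won the race: publish our mapping. */
            write_atomic(&msix->pba, pba);
            spin_unlock(&vpci->lock);
        }
        else
        {
            /* Lost the race: drop the redundant mapping after unlocking. */
            spin_unlock(&vpci->lock);
            iounmap(pba);
            pba = read_atomic(&msix->pba);
        }

        return pba;
    }

Both the read and the write handler could then call this and bail out
(reporting all bits pending on reads, as in the hunk above) when it
returns NULL.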
 
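As for the PBA tail discussed above: the PBA holds one pending bit per
vector, with its size rounded up to an 8-byte boundary, so the last
qword can contain bits with no corresponding vector. A sketch of that
arithmetic, using Xen's DIV_ROUND_UP()/ROUNDUP() macros (the helper
name here is illustrative only):

    static unsigned int pba_size(unsigned int max_entries)
    {
        /* One pending bit per vector, rounded up to a qword boundary. */
        return ROUNDUP(DIV_ROUND_UP(max_entries, 8), 8);
    }

With e.g. 10 vectors this yields 8 bytes, so a qword read returns bits
10-63, which belong to the PBA but map to no vector; with the
architectural maximum of 2048 vectors it yields 256 bytes.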