Re: [PATCH v4] vpci/msix: fix PBA accesses
On Tue, Mar 08, 2022 at 11:46:20AM +0100, Jan Beulich wrote:
> On 08.03.2022 10:05, Roger Pau Monné wrote:
> > On Tue, Mar 08, 2022 at 09:31:34AM +0100, Jan Beulich wrote:
> >> On 07.03.2022 17:37, Roger Pau Monne wrote:
> >>> Map the PBA in order to access it from the MSI-X read and write
> >>> handlers. Note that previously the handlers would pass the physical
> >>> host address into the {read,write}{l,q} handlers, which is wrong as
> >>> those expect a linear address.
> >>>
> >>> Map the PBA using ioremap when the first access is performed. Note
> >>> that 32bit arches might want to abstract the call to ioremap into a
> >>> vPCI arch handler, so they can use a fixmap range to map the PBA.
> >>>
> >>> Reported-by: Jan Beulich <jbeulich@xxxxxxxx>
> >>> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> >>
> >> Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
> >>
> >>> Cc: Alex Olson <this.is.a0lson@xxxxxxxxx>
> >>
> >> I'll wait a little with committing, in the hope for Alex to re-provide
> >> a Tested-by.
> >>
> >>> --- a/xen/drivers/vpci/msix.c
> >>> +++ b/xen/drivers/vpci/msix.c
> >>> @@ -182,6 +182,38 @@ static struct vpci_msix_entry *get_entry(struct vpci_msix *msix,
> >>>      return &msix->entries[(addr - start) / PCI_MSIX_ENTRY_SIZE];
> >>>  }
> >>>  
> >>> +static void __iomem *get_pba(struct vpci *vpci)
> >>> +{
> >>> +    struct vpci_msix *msix = vpci->msix;
> >>> +    /*
> >>> +     * PBA will only be unmapped when the device is deassigned, so access it
> >>> +     * without holding the vpci lock.
> >>> +     */
> >>> +    void __iomem *pba = read_atomic(&msix->pba);
> >>> +
> >>> +    if ( likely(pba) )
> >>> +        return pba;
> >>> +
> >>> +    pba = ioremap(vmsix_table_addr(vpci, VPCI_MSIX_PBA),
> >>> +                  vmsix_table_size(vpci, VPCI_MSIX_PBA));
> >>> +    if ( !pba )
> >>> +        return read_atomic(&msix->pba);
> >>> +
> >>> +    spin_lock(&vpci->lock);
> >>> +    if ( !msix->pba )
> >>> +    {
> >>> +        write_atomic(&msix->pba, pba);
> >>> +        spin_unlock(&vpci->lock);
> >>> +    }
> >>> +    else
> >>> +    {
> >>> +        spin_unlock(&vpci->lock);
> >>> +        iounmap(pba);
> >>> +    }
> >>
> >> TBH I had been hoping for just a single spin_unlock(), but you're
> >> the maintainer of this code ...
> >
> > Would you prefer something like:
> >
> > spin_lock(&vpci->lock);
> > if ( !msix->pba )
> >     write_atomic(&msix->pba, pba);
> > spin_unlock(&vpci->lock);
> > 
> > if ( read_atomic(&msix->pba) != pba )
> >     iounmap(pba);
>
> This or (to avoid the re-read)
>
> spin_lock(&vpci->lock);
> if ( !msix->pba )
> {
>     write_atomic(&msix->pba, pba);
>     pba = NULL;
> }
> spin_unlock(&vpci->lock);
> 
> if ( pba )
>     iounmap(pba);
>
> Iirc we have similar constructs elsewhere in a few places.
I don't have a strong opinion. I agree the duplicated spin_unlock() call
is not nice, but I think the logic is easier to follow with a single
if ... else ... construct.
Feel free to adjust at commit, or else I can resend if you prefer.
Thanks, Roger.
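
For illustration, here is a minimal sketch of how the helper could look with
the single-unlock shape Jan suggests folded into the patch's get_pba(). This
only assembles the snippets quoted above; the trailing return and the comments
are assumptions added for readability, not part of the quoted patch or of
whatever was eventually committed.

static void __iomem *get_pba(struct vpci *vpci)
{
    struct vpci_msix *msix = vpci->msix;
    /*
     * PBA will only be unmapped when the device is deassigned, so access it
     * without holding the vpci lock.
     */
    void __iomem *pba = read_atomic(&msix->pba);

    if ( likely(pba) )
        return pba;

    pba = ioremap(vmsix_table_addr(vpci, VPCI_MSIX_PBA),
                  vmsix_table_size(vpci, VPCI_MSIX_PBA));
    if ( !pba )
        return read_atomic(&msix->pba);

    spin_lock(&vpci->lock);
    if ( !msix->pba )
    {
        /* We won the race: publish our mapping and keep it. */
        write_atomic(&msix->pba, pba);
        pba = NULL;
    }
    spin_unlock(&vpci->lock);

    if ( pba )
        /* Another CPU already installed a mapping: drop the redundant one. */
        iounmap(pba);

    return read_atomic(&msix->pba);
}

The difference from the patch as posted is purely structural: the unlock is
written once, and the later iounmap() is driven by whether pba was consumed
under the lock.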