
Re: [Xen-devel] [PATCH 2/2 V2] iommu/amd: Workaround for erratum 787



At 16:03 +0100 on 10 Jun (1370880211), Jan Beulich wrote:
> >>> On 10.06.13 at 15:55, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
> >>>> On 10.06.13 at 14:43, Tim Deegan <tim@xxxxxxx> wrote:
> >> How about:
> >>  write-to-clear status.pending
> >>  process the log
> >>  if (status.pending)
> >>     reschedule the tasklet
> >>  else
> >>     unmask the interrupt.
> > 
> > Actually, I don't think that's right: Clearing the pending bit with
> > the respective interrupt source disabled doesn't allow the
> > pending bit to become set again upon arrival of a new log entry,

From my reading of the IOMMU spec, the pending bit gets set whether the
enable bit is set or not:

  PPRInt: peripheral page service request interrupt. Revision 1:
  RO. Reset 0b. Reserved.  Revision 2: RW1C. Reset 0b. 1=PPR request
  entry written to the PPR log by the IOMMU. 0=No PPR entry written to
  the PPR log by the IOMMU. An interrupt is generated when PPRInt=1b and
  MMIO Offset 0018h[PPRIntEn]=1b.

and

  EventLogInt: event log interrupt. RW1C. Reset 0b. 1=Event entry
  written to the event log by the IOMMU. 0=No event entry written to the
  event log by the IOMMU. An interrupt is generated when EventLogInt=1b
  and MMIO Offset 0018h[EventIntEn]=1b.

so we should be OK without a second pass if the pending bit is still
clear after unmasking the interrupt.

> So with this done I now realized that all of these transformations
> don't really belong in this erratum workaround patch.

Agreed.  I think this reshuffle to avoid lost entries should be its own
patch, and the erratum 787 one should follow it -- unless the logic we
end up with handles erratum 787 as a side-effect. :)

> static void iommu_check_event_log(struct amd_iommu *iommu)
> {
>     u32 entry;
>     unsigned long flags;
> 
>     for ( ; ; )
>     {
>         /* RW1C interrupt status bit */
>         writel(IOMMU_STATUS_EVENT_LOG_INT_MASK,
>                iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
> 
>         iommu_read_log(iommu, &iommu->event_log,
>                        sizeof(event_entry_t), parse_event_log_entry);
> 
>         spin_lock_irqsave(&iommu->lock, flags);
> 
>         /* Check event overflow. */
>         entry = readl(iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
>         if ( entry & IOMMU_STATUS_EVENT_OVERFLOW_MASK )
>             iommu_reset_log(iommu, &iommu->event_log,
>                             set_iommu_event_log_control);
>         else
>         {
>             entry = readl(iommu->mmio_base + IOMMU_CONTROL_MMIO_OFFSET);
>             if ( !(entry & IOMMU_CONTROL_EVENT_LOG_INT_MASK) )
>             {
>                 entry |= IOMMU_CONTROL_EVENT_LOG_INT_MASK;
>                 writel(entry, iommu->mmio_base + IOMMU_CONTROL_MMIO_OFFSET);
>                 spin_unlock_irqrestore(&iommu->lock, flags);
>                 continue;
>             }
>         }
> 
>         break;
>     }
> 
>     /*
>      * Workaround for erratum 787:
>      * Re-check to make sure the bit has been cleared.
>      */
>     entry = readl(iommu->mmio_base + IOMMU_STATUS_MMIO_OFFSET);
>     if ( entry & IOMMU_STATUS_EVENT_LOG_INT_MASK )
>         tasklet_schedule(&amd_iommu_irq_tasklet);
> 
>     spin_unlock_irqrestore(&iommu->lock, flags);
> }
> 
> You'll note that even here we're having a loop again, which you
> presumably won't like much.

Well, this time the event handling is inside the loop, so it ought to
catch and disable bad passthrough devices.  I'd still prefer having the
tasklet schedule itself and terminate, but I'm happy to defer to your
taste.
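
That would look something like the sketch below -- only a shape, assuming
a for_each_amd_iommu-style iterator over the IOMMUs exists, and reusing
the placeholder drain function from above:

/* Placeholder sketch: tasklet body that does one pass per IOMMU. */
static void amd_iommu_irq_tasklet_fn(void *unused)
{
    struct amd_iommu *iommu;

    /*
     * Each pass drains the log once and either unmasks the interrupt
     * or reschedules the tasklet, so a noisy device costs us repeated
     * tasklet runs rather than an unbounded loop inside a single run.
     */
    for_each_amd_iommu ( iommu )
        iommu_drain_event_log(iommu);
}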

> The only alternative I see is to run
> iommu_read_log() with iommu->lock held, which has the caveat of
> the function itself taking a lock (albeit - without having done any
> proving yet - I think that lock is taken completely pointlessly).
> 
> In any case - the erratum workaround is really just the comment
> and three following lines. Everything else belongs elsewhere imo.

Yep.

Tim.
