Re: [Xen-devel] [PATCH 2/2 V2] iommu/amd: Workaround for erratum 787
>>> On 10.06.13 at 11:35, Tim Deegan <tim@xxxxxxx> wrote:
> At 00:05 -0500 on 10 Jun (1370822751), suravee.suthikulpanit@xxxxxxx wrote:
>> From: Suravee Suthikulpanit <suravee.suthikulpanit@xxxxxxx>
>>
>> The IOMMU interrupt handling in the bottom half must clear the PPR log
>> interrupt and event log interrupt bits to re-enable the interrupt. This
>> is done by writing 1 to the memory mapped register to clear the bit.
>> Due to a hardware bug, if the driver tries to clear this bit while the
>> IOMMU hardware is also setting this bit, the conflict will result in
>> the bit remaining set. If the interrupt handling code does not make
>> sure to clear this bit, subsequent changes in the event/PPR logs will
>> no longer generate interrupts, which would result in buffer overflow.
>> After clearing the bits, the driver must read back the register to
>> verify.
>
> Is there a risk of livelock here? That is, if some device is causing a
> lot of IOMMU faults, a CPU could get stuck in this loop re-enabling
> interrupts as fast as they are raised.
>
> The solution suggested in the erratum seems better: if the bit is set
> after clearing, process the interrupts again (i.e. run/schedule the
> top-half handler). That way the bottom-half handler will definitely
> terminate and the system can make some progress.

That's what's being done really: the actual interrupt handler disables
the interrupt sources, and the tasklet re-enables them (or at least is
supposed to do so - patch 1 isn't really correct in that respect).

The only thing that I think is wrong (but again already in patch 1) is
that the status bit should get cleared before an interrupt source gets
re-enabled.

I started cleaning up patch 1 anyway, so I'll post a v3 once done.

Jan
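For reference, below is a minimal C sketch (not the actual Xen code) of the
handling discussed in this thread. The status register offset (0x2020) and
the EventLogInt/PPRLogInt bit positions follow the AMD IOMMU specification,
but the constant names, the log-processing helpers and the tasklet-reschedule
helper are placeholders standing in for the real driver code.

#include <stdint.h>

#define IOMMU_MMIO_STATUS_OFFSET  0x2020
#define STATUS_EVENT_LOG_INT      (1u << 1)
#define STATUS_PPR_LOG_INT        (1u << 6)

static inline uint32_t mmio_read32(volatile void *base, unsigned long off)
{
    return *(volatile uint32_t *)((volatile char *)base + off);
}

static inline void mmio_write32(volatile void *base, unsigned long off,
                                uint32_t val)
{
    *(volatile uint32_t *)((volatile char *)base + off) = val;
}

/* Placeholders: the real driver consumes the event/PPR ring buffers here. */
static void process_event_log(volatile void *mmio_base) { (void)mmio_base; }
static void process_ppr_log(volatile void *mmio_base)   { (void)mmio_base; }

/* Placeholder standing in for scheduling the IOMMU tasklet again. */
static void reschedule_iommu_tasklet(void) { }

/*
 * Bottom-half handler: drain the logs, clear the write-1-to-clear
 * interrupt status bits, then read the status back.  Per erratum 787 the
 * clear can be lost if it races with the hardware setting the same bit;
 * if a bit is still set, queue another pass instead of spinning on the
 * register, so the handler always terminates and the system can make
 * progress.
 */
void iommu_bottom_half(volatile void *mmio_base)
{
    uint32_t status;

    process_event_log(mmio_base);
    process_ppr_log(mmio_base);

    /* Write 1 to clear both interrupt status bits. */
    mmio_write32(mmio_base, IOMMU_MMIO_STATUS_OFFSET,
                 STATUS_EVENT_LOG_INT | STATUS_PPR_LOG_INT);

    status = mmio_read32(mmio_base, IOMMU_MMIO_STATUS_OFFSET);
    if (status & (STATUS_EVENT_LOG_INT | STATUS_PPR_LOG_INT)) {
        /* The clear was lost: schedule another bottom-half run. */
        reschedule_iommu_tasklet();
        return;
    }

    /*
     * Only once the bits are verifiably clear should the interrupt
     * sources be re-enabled (control register writes omitted here).
     */
}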