Re: [Xen-devel] [PATCH 2/3] VT-d: correct dma_msi_set_affinity()
>>> On 13.12.16 at 06:23, <kevin.tian@xxxxxxxxx> wrote:
>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> Sent: Friday, December 09, 2016 4:47 PM
>>
>> >>> On 08.12.16 at 18:33, <andrew.cooper3@xxxxxxxxxx> wrote:
>> > On 08/12/16 16:01, Jan Beulich wrote:
>> >> That commit ("VT-d: use msi_compose_msg()") together with 15aa6c6748
>> >
>> > Which commit?
>>
>> Oops - initially I had intended the title to include the hash: 83cd2038fe.
>> I've adjusted the text.
>>
>> >> ("amd iommu: use base platform MSI implementation") introducing the use
>> >> of a per-CPU scratch CPU mask went too far: dma_msi_set_affinity() may,
>> >> at least in theory, be called in interrupt context, and hence the use
>> >> of that scratch variable is not correct.
>> >>
>> >> Since the function overwrites the destination information anyway,
>> >> allow msi_compose_msg() to be called with a NULL CPU mask, avoiding the
>> >> use of that scratch variable.
>> >
>> > Which function overwrites what? I can't see dma_msi_set_affinity()
>> > doing anything to clobber msg.dest32, so I don't understand why this
>> > change is correct.
>>
>> msg.dest32 simply isn't being used. msg is local to that function, so
>> all that matters is which fields the function consumes. It uses only
>> address and data, and updates address to properly specify the
>> intended destination. To guard against stale data (in
>> iommu->msi.msg), it may be reasonable to nevertheless set dest32
>> before storing msg into that field.
>
> So do you plan to send v2 or stick with current version?

Well, so far I haven't heard back from Andrew, and hence didn't plan
on sending a v2 yet. If that addition is going to be the only
adjustment, I'm also not sure sending a v2 is actually necessary.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
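
For illustration, a minimal sketch of the shape dma_msi_set_affinity()
takes under the approach described in the exchange above. This is not
the actual Xen source: set_desc_affinity(), BAD_APICID,
MSI_ADDR_DEST_ID_MASK and MSI_ADDR_DEST_ID() are assumed to exist as in
Xen's x86 MSI code, and the surrounding details are simplified.

    /*
     * Hypothetical sketch based on the discussion above; not the
     * actual Xen implementation.
     */
    static void dma_msi_set_affinity(struct irq_desc *desc,
                                     const cpumask_t *mask)
    {
        struct msi_msg msg;
        unsigned int dest = set_desc_affinity(desc, mask);
        struct iommu *iommu = desc->action->dev_id;

        if ( dest == BAD_APICID )
            return;

        /*
         * NULL mask: don't let msi_compose_msg() compute a destination
         * (which would require the per-CPU scratch mask - unsafe here,
         * as this may run in interrupt context); the destination is
         * overwritten below anyway.
         */
        msi_compose_msg(desc->arch.vector, NULL, &msg);

        /* Overwrite the destination ID with the one computed above. */
        msg.address_lo &= ~MSI_ADDR_DEST_ID_MASK;
        msg.address_lo |= MSI_ADDR_DEST_ID(dest);

        /*
         * Only address and data are consumed here, but keeping dest32
         * in sync guards against stale data once msg is stored below.
         */
        msg.dest32 = dest;

        iommu->msi.msg = msg;
        /* ...program the IOMMU fault-event registers from msg... */
    }

The key point Jan makes is visible in the last few statements: because
the function rewrites the destination bits of msg.address_lo itself,
the destination msi_compose_msg() would have encoded is dead data, so
passing NULL there loses nothing; setting dest32 as well is merely a
belt-and-braces measure for the copy kept in iommu->msi.msg.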