
Re: [Xen-devel] [PATCH] VT-d: make flush-all actually flush all

>>> On 10.12.15 at 04:06, <feng.wu@xxxxxxxxx> wrote:

>> -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> Sent: Wednesday, December 9, 2015 10:53 PM
>> To: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
>> Cc: Wu, Feng <feng.wu@xxxxxxxxx>; Tian, Kevin <kevin.tian@xxxxxxxxx>
>> Subject: [PATCH] VT-d: make flush-all actually flush all
>> VT-d: make flush-all actually flush all
>> Passing gfn=0 and page_count=0 actually avoids the
>> iommu_flush_iotlb_dsi() and results in page-specific invalidation
>> instead.
>> Reported-by: "åæ" <zhangzhi2014@xxxxxxx>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> --- a/xen/drivers/passthrough/vtd/iommu.c
>> +++ b/xen/drivers/passthrough/vtd/iommu.c
>> @@ -583,7 +583,7 @@ static void __intel_iommu_iotlb_flush(st
>>          if ( iommu_domid == -1 )
>>              continue;
>> -        if ( page_count > 1 || gfn == -1 )
>> +        if ( page_count != 1 || gfn == INVALID_GFN )
> This patch looks good to me, but I think using 'page_count' to decide
> whether to use DSI or PSI is not a good idea, since PSI should also support
> invalidating multiple pages per the VT-d spec. (It seems Xen doesn't support that?)

I'm fine with this getting improved in a subsequent patch, but I
don't see it being done here - what you propose is an
enhancement, while here I'm fixing a latent bug (which originally
got reported to security@, and we were able to discard security
concerns only because the sole intel_iommu_iotlb_flush_all()
caller sits on a code path reachable only through an XSA-77
covered domctl). All the more so since there currently is no caller
passing in anything other than 0 or 1.

In an ideal world, my expectation really would have been for
you to ack this change (of course unless you see anything
actively wrong with it) and immediately follow it up with the
described improvement (with the caveat that - see above -
you'd have difficulty actually testing such a change).


Xen-devel mailing list


