
Re: [Xen-devel] [PATCH v12 1/6] IOMMU: add a timeout parameter for device IOTLB invalidation



>>> On 27.06.16 at 10:19, <quan.xu@xxxxxxxxx> wrote:
> On June 27, 2016 4:03 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>> >>> On 24.06.16 at 07:51, <quan.xu@xxxxxxxxx> wrote:
>> > From: Quan Xu <quan.xu@xxxxxxxxx>
>> >
>> > The parameter 'iommu_dev_iotlb_timeout' specifies the timeout of
>> > device IOTLB invalidation in milliseconds. The default is 1000
>> > milliseconds, which can be changed at boot time.
>> >
>> > We also confirmed with the VT-d hardware team that 1 millisecond is
>> > large enough for the VT-d IOMMU's internal invalidation.
>> >
>> > The existing panic() is eliminated and we bubble up the timeout of
>> > device IOTLB invalidation for further processing, as the PCIe Address
>> > Translation Services (ATS) specification mandates a timeout of
>> > 60 seconds for device IOTLB invalidation. Obviously we can't spin for
>> > 60 seconds, or the Xen hypervisor would hang.
>> >
>> > Add a __must_check annotation. The follow-up patch titled 'VT-d
>> > IOTLB/Context/IEC flush issue' addresses the __must_check: the
>> > other callers of this routine (two or three levels up) currently
>> > ignore the return code. This patch does not address that; the
>> > follow-up patch does.
>> 
>> The patch itself looks okay,
> 
> Jan, thanks for your review.
> 
>> but I'm confused by this paragraph:
>> There's no patch with the named title later in this series. And having gone
>> through this patch I also don't see what remains to be addressed wrt the
>> __must_check-s getting added here.
> 
> This paragraph was added a few rounds ago. I will drop it in the next 
> version.

Well, if dropping this paragraph is all that's needed, I can do this
while committing: Patches 1-3 appear to be ready to go in.

Jan
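
For context, a minimal sketch (not the committed patch) of how such a
boot-time timeout parameter and a bounded, non-panicking invalidation
wait might look in Xen. integer_param(), NOW(), MILLISECS() and
cpu_relax() are existing Xen helpers; invalidation_completed() is a
hypothetical stand-in for the hardware completion check:

    /* Sketch only: timeout for device IOTLB invalidation, in
     * milliseconds, overridable on the Xen command line. */
    unsigned int __read_mostly iommu_dev_iotlb_timeout = 1000;
    integer_param("iommu_dev_iotlb_timeout", iommu_dev_iotlb_timeout);

    /* Wait for a device IOTLB invalidation to complete, bounded by the
     * configurable timeout; return -ETIMEDOUT instead of panicking so
     * that callers (annotated __must_check) can handle the failure. */
    static int __must_check dev_invalidate_sync(struct iommu *iommu)
    {
        s_time_t deadline = NOW() + MILLISECS(iommu_dev_iotlb_timeout);

        while ( !invalidation_completed(iommu) ) /* hypothetical helper */
        {
            if ( NOW() > deadline )
                return -ETIMEDOUT;
            cpu_relax();
        }

        return 0;
    }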



 

