
Re: [Xen-devel] [PATCH v9 2/3] VT-d: wrap a _sync version for all VT-d flush interfaces



On April 11, 2016 2:53pm, Tian, Kevin wrote:
> > From: Xu, Quan
> > Sent: Thursday, April 07, 2016 3:45 PM
> >
> > On April 05, 2016 5:35pm, Jan Beulich <JBeulich@xxxxxxxx> wrote:
> > > >>> On 01.04.16 at 16:47, <quan.xu@xxxxxxxxx> wrote:
> > > > dev_invalidate_iotlb() scans the ats_devices list to flush ATS
> > > > devices, and invalidate_sync() is called after
> > > > dev_invalidate_iotlb() to synchronize with hardware on the flush
> > > > status. If we assign multiple ATS devices to a domain, the flush
> > > > status covers all of these devices, so once the flush timeout
> > > > expires, we cannot tell which one is the buggy ATS device.
> > >
> > > Is that true? Or is that just a limitation of our implementation?
> > >
> >
> > IMO, both.
> > I hope the VT-d maintainers can help me double-check this.
> 
> Just a limitation of our implementation. Currently, dev_invalidate_iotlb()
> puts all of the IOTLB flush requests onto the queue, and then
> invalidate_sync() pushes a wait descriptor w/ a timeout to detect errors.
> The VT-d spec says that one or more descriptors may be fetched together by
> hardware, so when a timeout is triggered, we cannot tell which flush
> request actually has a problem by reading the queue head register. If we
> changed the implementation to one-invalidation-sync-per-request, then we
> could tell. I discussed this with Quan, though, and we agreed not to take
> on that complexity.
> 

Thanks for your correction!
I will enhance the commit log and send out a new version later.
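
Just to illustrate the alternative (not proposing it for this series), a
one-invalidation-sync-per-request flow could look roughly like the sketch
below. All helper names, signatures and struct fields here are illustrative
assumptions, not the actual VT-d code:

/*
 * Illustrative sketch only -- helper names and signatures are assumed.
 * Instead of queueing the IOTLB flushes for all ATS devices and issuing
 * a single wait descriptor at the end, we synchronize after each
 * device's flush, so a timeout can be attributed to the device whose
 * request was just queued.
 */
static int dev_invalidate_iotlb_sync_each(struct iommu *iommu, u16 did,
                                          u64 addr, unsigned int size_order)
{
    struct ats_dev *dev;          /* assumed ATS device bookkeeping type */
    int rc = 0;

    list_for_each_entry( dev, &ats_devices, list )
    {
        /* Queue the device-IOTLB flush for this one device. */
        rc = queue_invalidate_device_iotlb(iommu, dev, did, addr,
                                           size_order);
        if ( rc )
            return rc;

        /* Push a wait descriptor and wait on it before touching the
         * next device: if this times out, 'dev' is the buggy one. */
        rc = invalidate_sync(iommu);
        if ( rc == -ETIMEDOUT )
        {
            dprintk(XENLOG_WARNING VTDPREFIX,
                    "IOTLB flush timeout on ATS device %04x:%02x:%02x.%u\n",
                    dev->seg, dev->bus, PCI_SLOT(dev->devfn),
                    PCI_FUNC(dev->devfn));
            return rc;
        }
    }

    return rc;
}

The cost is one wait descriptor (and one wait) per ATS device on every
flush, rather than a single one per batch -- the extra complexity we agreed
not to take on for now.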

Quan