Re: [Xen-devel] [PATCH v10 1/3] vt-d: add a timeout parameter for Queued Invalidation
On May 19, 2016 8:33 AM, Tian, Kevin <kevin.tian@xxxxxxxxx> wrote:
> > From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> > Sent: Wednesday, May 18, 2016 11:05 PM
> >
> > >>> On 18.05.16 at 14:53, <quan.xu@xxxxxxxxx> wrote:
> > > On May 17, 2016 3:48 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
> > >> >>> On 17.05.16 at 05:19, <kevin.tian@xxxxxxxxx> wrote:
> > >> >> From: Xu, Quan
> > >> >> Sent: Monday, May 16, 2016 11:26 PM
> > >> >>
> > >> >> On May 13, 2016 11:28 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
> > >> >> > >>> On 22.04.16 at 12:54, <quan.xu@xxxxxxxxx> wrote:
> > >> >> > > --- a/docs/misc/xen-command-line.markdown
> > >> >> > > +++ b/docs/misc/xen-command-line.markdown
> > >> >> > > @@ -1532,6 +1532,16 @@ Note that if **watchdog** option is also specified vpmu will be turned off.
> > >> >> > > As the virtualisation is not 100% safe, don't use the vpmu
> > >> >> > > flag on production systems (see
> > >> >> > > http://xenbits.xen.org/xsa/advisory-163.html)!
> > >> >> > >
> > >> >> > > +### vtd\_qi\_timeout (VT-d)
> > >> >> > > +> `= <integer>`
> > >> >> > > +
> > >> >> > > +> Default: `1`
> > >> >> > > +
> > >> >> > > +Specify the timeout of the VT-d Queued Invalidation in milliseconds.
> > >> >> > > +
> > >> >> > > +By default, the timeout is 1ms. When you see error 'Queue
> > >> >> > > +invalidate wait descriptor timed out', try increasing this value.
> > >> >> >
> > >> >> > So when someone enables ATS, will the 1ms timeout apply to the
> > >> >> > dev iotlb invalidations too?
> > >> >>
> > >> >> Yes,
> > >> >> The timeout is the same for IOTLB, Context, IEC and Device-TLB invalidation.
> > >> >>
> > >> >> > If so, that's surely too short, and would ideally be adjusted
> > >> >> > automatically, but the need for a higher timeout in that case
> > >> >> > should in any event be mentioned here.
> > >> >>
> > >> >> I can try to use 1ms for IOTLB, Context and IEC invalidation.
> > >> >> As mentioned, 1 ms is enough for IOTLB, Context and IEC invalidation.
> > >> >> What about 10 ms for Device-TLB (10 ms is just a higher timeout, no specific meaning)?
> > >> >
> > >> > I remember in earlier discussion we agreed to use 1ms as the
> > >> > default for both IOMMU-side and device-side flushes. For
> > >> > device-side flushes, we checked internal HW team that 1ms is a
> > >> > reasonable threshold for integrated devices. It's likely
> > >> > insufficient for discrete devices. We may check any automatic
> > >> > adjustment method later when it becomes a real problem. For now,
> > >> > please elaborate above information in the text.
> > >>
> > >> Well, taking care of automation later is fine with me, but tying
> > >> everything to a single timeout, when device iotlb invalidation may
> > >> require a much larger value, isn't.
> > >>
> > >
> > > A little bit confused. Check it -- could I leave patch 1/3 as is?
> >
> > The patch can imo remain as is only if the new default timeout is
> > large enough for all possible cases (including those users who are
> > adventurous enough to turn on ATS).
>
> Jan, I only have an ATS device (MYRICOM Inc. Myri-10G Dual-Protocol NIC).
> 1 ms is large enough for invalidation so far. Any suggestion for this new
> default timeout?
>
> A single default value for both IOMMU-side and device-side is anyway not
> optimal. What about introducing a new knob e.g. vtd_qi_device_timeout
> specifically for device-side flush while using vtd_qi_timeout for other
> places? If device-side timeout is not specified, it is then default to
> vtd_qi_timeout.

IMO, we had better introduce only one variable for VT-d invalidation. Users may
not be interested in such detailed VT-d knowledge, and a single knob also keeps
the command-line options consistent. :)

Quan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
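
For context, the knob under discussion is a plain boot-time integer parameter
that the VT-d queued-invalidation wait path consults while polling for
completion; per the documentation hunk above it is given on the hypervisor
command line as, e.g., vtd_qi_timeout=10 (value illustrative). The sketch
below shows the usual shape of such wiring, not the code from this patch:
wait_for_qi_completion(), its poll_slot argument and the exact loop are
assumptions, while integer_param(), NOW(), MILLISECS() and cpu_relax() are
existing Xen primitives.

/*
 * Illustrative sketch only -- not the code from this patch.  A boot-time
 * millisecond knob feeds a polling wait loop that gives up once the
 * configured timeout has elapsed.
 */
#include <xen/cache.h>
#include <xen/errno.h>
#include <xen/init.h>
#include <xen/time.h>
#include <xen/types.h>
#include <asm/processor.h>

/* Timeout for a Queued Invalidation wait descriptor, in milliseconds. */
static unsigned int __read_mostly vtd_qi_timeout = 1;
integer_param("vtd_qi_timeout", vtd_qi_timeout);

/* Convert the millisecond knob into the s_time_t units returned by NOW(). */
#define IOMMU_QI_TIMEOUT (vtd_qi_timeout * MILLISECS(1))

/* poll_slot stands in for the status location the wait descriptor targets. */
static int wait_for_qi_completion(volatile uint32_t *poll_slot,
                                  uint32_t done_value)
{
    s_time_t start = NOW();

    /* Spin until hardware writes the completion status back ... */
    while ( *poll_slot != done_value )
    {
        /* ... or until the configurable timeout expires. */
        if ( NOW() > start + IOMMU_QI_TIMEOUT )
            return -ETIMEDOUT;
        cpu_relax();
    }

    return 0;
}

Adding a separate device-side knob along the lines Kevin suggests would then
amount to a second integer_param() whose value is used only in the Device-TLB
flush path, falling back to vtd_qi_timeout when unset.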