
Re: [Xen-devel] [PATCH v8 7/7] vtd: add lookup_page method to iommu_ops



>>> On 14.09.18 at 10:11, <Paul.Durrant@xxxxxxxxxx> wrote:
>>  -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> Sent: 14 September 2018 08:59
>> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
>> Cc: George Dunlap <George.Dunlap@xxxxxxxxxx>; Wei Liu
>> <wei.liu2@xxxxxxxxxx>; Kevin Tian <kevin.tian@xxxxxxxxx>; xen-devel <xen-
>> devel@xxxxxxxxxxxxxxxxxxxx>
>> Subject: RE: [PATCH v8 7/7] vtd: add lookup_page method to iommu_ops
>> 
>> >>> On 13.09.18 at 16:53, <Paul.Durrant@xxxxxxxxxx> wrote:
>> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> >> Sent: 13 September 2018 15:51
>> >>
>> >> >>> On 13.09.18 at 16:04, <Paul.Durrant@xxxxxxxxxx> wrote:
>> >> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> >> >> Sent: 13 September 2018 14:57
>> >> >>
>> >> >> >>> On 13.09.18 at 15:50, <Paul.Durrant@xxxxxxxxxx> wrote:
>> >> >> > Ok. I'll spell it out in the header if you think it is non-obvious.
>> >> >>
>> >> >> Obvious or not - do we _have_ any such outer locking in place right now?
>> >> >>
>> >> >
>> >> > Yes. The locking is either via the P2M or grant table locks for all
>> >> > current uses of map and unmap that I can see.
>> >>
>> >> But two different locks still means no guarantees at all.
>> >>
>> >
>> > So, are you saying the current implementation is not fit for purpose? Do you
>> > want me to add locking at the wrapper level?
>> 
>> Well, I haven't looked closely enough to be certain, but I'm afraid there
>> might be an issue, and if so I certainly think it needs taking care of
>> before widening the problem by exposing (more of) it to guests. Of course
>> the addition of another locking layer may well have adverse effects on
>> existing code and/or your additions.
>> 
> 
> Given that all uses of map and unmap are under the grant or p2m locks, there
> should only be an issue if a race occurs between a grant table op and
> something modifying the p2m, and I doubt such an issue would be limited to
> leaving just the IOMMU in an incorrect state. This patch does nothing to rule
> out such a race, but nor does it make things any worse; it is completely
> orthogonal, as it introduces a brand new function with (as yet) no callers.
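[Editor's note: Jan's earlier point, "two different locks still means no guarantees at all", can be illustrated with a hypothetical userspace sketch; this is not the actual Xen code, and the names `grant_path`, `p2m_path` and `iommu_entry` are invented stand-ins. If each path takes only its own lock before updating shared IOMMU state, holding one lock does nothing to exclude the other path:]

```c
#include <pthread.h>

/* Hypothetical stand-ins for the two independent outer locks
 * (pthread mutexes in place of Xen spinlocks). */
static pthread_mutex_t grant_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t p2m_lock   = PTHREAD_MUTEX_INITIALIZER;

/* Shared state both paths update, e.g. an IOMMU mapping entry. */
static unsigned long iommu_entry;

static void grant_path(unsigned long mfn)
{
    pthread_mutex_lock(&grant_lock);
    iommu_entry = mfn;              /* not protected against p2m_path() */
    pthread_mutex_unlock(&grant_lock);
}

static void p2m_path(unsigned long mfn)
{
    pthread_mutex_lock(&p2m_lock);
    iommu_entry = mfn;              /* not protected against grant_path() */
    pthread_mutex_unlock(&p2m_lock);
}

/* Returns 1 if holding grant_lock excludes the p2m path, 0 if not. */
static int locks_exclude_each_other(void)
{
    int excluded;

    pthread_mutex_lock(&grant_lock);
    /* trylock succeeding means the p2m path could run concurrently. */
    excluded = (pthread_mutex_trylock(&p2m_lock) != 0);
    if (!excluded)
        pthread_mutex_unlock(&p2m_lock);
    pthread_mutex_unlock(&grant_lock);
    return excluded;
}
```

Here `locks_exclude_each_other()` returns 0: the trylock on `p2m_lock` succeeds even while `grant_lock` is held, so the two update paths can interleave and leave `iommu_entry` in whichever state the last writer set.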
> 
> This new lookup function is not intended for use by any of the existing code
> executed under the grant table or p2m lock, though. Once PV-IOMMU becomes
> operational, all of that automatic map/unmap code will cease to be
> operational. As you pointed out in review of another patch, I have completely
> neglected to add suitable locking to the PV-IOMMU code, but once I add that,
> map, unmap and lookup operations coming via hypercall will be protected
> from each other.
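[Editor's note: the serialization Paul describes can be sketched as follows; this is a hypothetical userspace model, not the actual Xen implementation — `struct pv_iommu`, the toy `dfn_to_mfn` array and a pthread mutex standing in for a per-domain spinlock are all invented for illustration. The point is simply that map, unmap and the new lookup all take the same lock, so a lookup can never observe a half-completed map or unmap:]

```c
#include <pthread.h>
#include <stdint.h>

/* Hypothetical, simplified per-domain PV-IOMMU state. */
struct pv_iommu {
    pthread_mutex_t lock;     /* serializes map, unmap and lookup */
    uint64_t dfn_to_mfn[16];  /* toy IOMMU page table: dfn -> mfn, 0 = unmapped */
};

static int iommu_map(struct pv_iommu *ctx, unsigned int dfn, uint64_t mfn)
{
    if (dfn >= 16)
        return -1;
    pthread_mutex_lock(&ctx->lock);
    ctx->dfn_to_mfn[dfn] = mfn;
    pthread_mutex_unlock(&ctx->lock);
    return 0;
}

static int iommu_unmap(struct pv_iommu *ctx, unsigned int dfn)
{
    if (dfn >= 16)
        return -1;
    pthread_mutex_lock(&ctx->lock);
    ctx->dfn_to_mfn[dfn] = 0;
    pthread_mutex_unlock(&ctx->lock);
    return 0;
}

/* The new lookup operation: safe because it holds the same lock
 * as the map and unmap paths. */
static int iommu_lookup_page(struct pv_iommu *ctx, unsigned int dfn,
                             uint64_t *mfn)
{
    int rc = 0;

    if (dfn >= 16)
        return -1;
    pthread_mutex_lock(&ctx->lock);
    if (ctx->dfn_to_mfn[dfn])
        *mfn = ctx->dfn_to_mfn[dfn];
    else
        rc = -1;  /* would be -ENOENT in real code */
    pthread_mutex_unlock(&ctx->lock);
    return rc;
}
```

Usage follows the hypercall pattern: a map of dfn 3 followed by a lookup of dfn 3 returns the mapped mfn, and after an unmap the lookup fails; none of the three operations can interleave with another.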

Well, okay - I agree (following the second paragraph) that the situation indeed
doesn't become worse with your additions. Hence I further agree that it's
perhaps not really appropriate to make fixing the original issue a prereq. But
that leaves us in a situation where we will continue to live with an unfixed
(suspected) bug (I don't anticipate having the time to look into addressing
this any time soon). My personal approach in a situation like this is to try
to fix issues found on the road to making a (larger) change in a certain area
(a good recent example is "x86/HVM: correct hvmemul_map_linear_addr()
for multi-page case", which I noticed to be an issue while putting together
"x86/HVM: split page straddling emulated accesses in more cases").

Bottom line - I'm not going to insist on you taking care of this unrelated
problem, especially since it would likely be a larger (and possibly intrusive)
piece of work.

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
