
Re: [Xen-devel] [PATCH] x86: fix paging_log_dirty_op to work with paging guests



On Thu, Dec 13, 2018 at 08:53:22AM -0700, Jan Beulich wrote:
> >>> On 13.12.18 at 16:34, <roger.pau@xxxxxxxxxx> wrote:
> > On Thu, Dec 13, 2018 at 07:52:16AM -0700, Jan Beulich wrote:
> >> >>> On 13.12.18 at 15:14, <roger.pau@xxxxxxxxxx> wrote:
> >> > On Thu, Dec 13, 2018 at 05:51:51AM -0700, Jan Beulich wrote:
> >> >> >>> On 13.12.18 at 12:39, <roger.pau@xxxxxxxxxx> wrote:
> >> >> > Well, just keeping the correct order between each domain's
> >> >> > locks should be enough?
> >> >> > 
> >> >> > I.e. exactly what Xen currently does, but on a per-domain
> >> >> > basis. This is feasible, but each CPU would need to store the lock
> >> >> > order of each possible domain:
> >> >> > 
> >> >> > DECLARE_PER_CPU(uint8_t, mm_lock_level[DOMID_FIRST_RESERVED]);
> >> >> > 
> >> >> > This would consume ~32KB per CPU, which is not that much but seems a
> >> >> > waste when most of the time a single entry will be used.
> >> >> 
> >> >> Well, tracking by domain ID wouldn't help you - the controlling
> >> >> domain may well have a higher ID than the being controlled one,
> >> >> i.e. the nesting you want needs to be independent of domain ID.
> >> > 
> >> > It's not tracking the domain ID, but rather tracking the lock level of
> >> > each different domain, hence the need for the array in the pcpu
> >> > structure. The lock checker would take a domain id and a level, and
> >> > perform the check as:
> >> > 
> >> > if ( mm_lock_level[domid] > level )
> >> >     panic
> >> 
> >> But this would open things up for deadlocks because of intermixed
> >> lock usage between the calling domain's and the subject one's.
> >> There needs to be a linear sequence of locks (of all involved
> >> domains) describing the one and only order in which they may be
> >> acquired.
> > 
> > Well, my plan was to only check for deadlocks between the locks of the
> > same domain, without taking into account intermixed domain locking.
> > 
> > I guess at this point I will need some input from Tim and George about
> > how to proceed, because I'm not sure how to weight locks when using
> > intermixed domain locks, nor what the correct order is. The order
> > in paging_log_dirty_op looks like a valid order that we want to
> > support, but are there any others?
> > 
> > Is it possible to have multiple valid interdomain lock orders that
> > cannot be expressed using the current weighted lock ordering?
> 
> Well, first of all I'm afraid I didn't look closely enough at your
> original mail: We're not talking about the paging lock of two
> domains here, but about the paging lock of the subject domain
> and dom0's p2m lock.
> 
> Second I then notice that
> 
> (XEN) mm locking order violation: 64 > 16
> 
> indicates that it might not have complained when two similar
> locks of different domains were acquired in a nested fashion,
> which I'd call a shortcoming that would be nice to eliminate at
> this same occasion.

Yes, that's a current shortcoming, but then I'm not sure whether such
a case would be a violation of the lock ordering if the locks belong
to different domains and arbitrary interdomain locking is allowed.

> And third, to answer your question, I can't see anything
> conceptually wrong with an arbitrary intermix of locks from
> two different domains, as long as their intra-domain ordering is
> correct. E.g. a Dom0 hypercall may find a need to acquire
> - the subject domain's p2m lock
> - its own p2m lock
> - the subject domain's PoD lock
> - its own paging lock

OK, so if the plan is to allow an arbitrary intermix of locks from
different domains, then per-domain lock-level tracking seems like the
best option, along the lines of what I was proposing earlier:

if ( mm_lock_level[domid] > level )
    panic
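The check above can be sketched as a standalone C model. This is a hypothetical illustration, not Xen code: `assert()` stands in for `panic()`, a plain array stands in for the per-CPU variable, and the helper names (`mm_lock_check`, `mm_unlock_restore`) are assumptions made up for this sketch.

```c
/*
 * Standalone model of the proposed per-domain lock-level check.
 * Hypothetical sketch only: assert() replaces panic(), and a single
 * array replaces the per-CPU mm_lock_level variable.
 */
#include <assert.h>

#define DOMID_FIRST_RESERVED 0x7ff0U  /* ~32K entries, 1 byte each: ~32KB */

static unsigned char mm_lock_level[DOMID_FIRST_RESERVED];

/*
 * Check and record the level of a lock about to be acquired on behalf
 * of domain 'domid'.  Returns the previous level so that the unlock
 * path can restore it.
 */
static int mm_lock_check(unsigned int domid, unsigned char level)
{
    int prev = mm_lock_level[domid];

    /* Equivalent of: if ( mm_lock_level[domid] > level ) panic(); */
    assert(prev <= level);
    mm_lock_level[domid] = level;
    return prev;
}

static void mm_unlock_restore(unsigned int domid, int prev)
{
    mm_lock_level[domid] = prev;
}
```

Note that with this scheme the levels of different domains are tracked independently, so nesting across domains is never flagged, which is exactly the "arbitrary intermix" behaviour discussed above.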

> Of course it may be possible to determine that "own" locks
> are not supposed to be acquired outside of any "subject
> domain" ones, in which case we'd have a workable hierarchy
> (along the lines of what you had earlier suggested).

I expect the interdomain locking introduced by a paging caller domain
to be restricted to the caller domain's p2m lock, taken by the copy
to/from helpers.

Maybe the least intrusive change would be to just allow taking the
caller's p2m lock (and only that lock) regardless of the subject
domain's lock level?
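That special case could look something like the sketch below. The level constants, the `current_domid` parameter and the `lock_allowed` helper are all assumptions invented for illustration; the real checker in Xen's mm-locks.h is structured differently.

```c
/*
 * Hypothetical sketch of the "least intrusive" variant: the caller
 * domain's p2m lock is exempt from the level check against the
 * subject domain's locks.  Constants and names are assumptions for
 * illustration, not Xen's actual API.
 */
#define MM_LOCK_LEVEL_P2M    16
#define MM_LOCK_LEVEL_PAGING 64

/*
 * Returns 1 if acquiring (domid, level) is allowed while already
 * holding subject-domain locks up to 'subject_level_held'.
 */
static int lock_allowed(int subject_level_held, unsigned int domid,
                        int level, unsigned int current_domid)
{
    /* The caller's own p2m lock (and only that lock) may be taken
     * regardless of the subject domain's current lock level. */
    if ( domid == current_domid && level == MM_LOCK_LEVEL_P2M )
        return 1;

    return level >= subject_level_held;
}
```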

> But I'm
> not sure how expensive (in terms of code auditing) such
> determination is going to be, which is why for the moment I'm
> trying to think of a solution (ordering criteria) for the general
> case.

Yes, I think auditing the current interdomain locking (if there are
more cases besides the paging logdirty hypercall) will be expensive.

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
