
Re: [Xen-devel] [PATCH] x86: fix paging_log_dirty_op to work with paging guests



>>> On 13.12.18 at 15:14, <roger.pau@xxxxxxxxxx> wrote:
> On Thu, Dec 13, 2018 at 05:51:51AM -0700, Jan Beulich wrote:
>> >>> On 13.12.18 at 12:39, <roger.pau@xxxxxxxxxx> wrote:
>> > Well, just keeping the correct order between each domain's locks should
>> > be enough?
>> > 
>> > I.e.: exactly the same as Xen currently does, but on a per-domain
>> > basis. This is feasible, but each CPU would need to store the lock
>> > order of each possible domain:
>> > 
>> > DECLARE_PER_CPU(uint8_t, mm_lock_level[DOMID_FIRST_RESERVED]);
>> > 
>> > This would consume ~32KB per CPU, which is not that much, but seems a
>> > waste when most of the time only a single entry will be used.
>> 
>> Well, tracking by domain ID wouldn't help you - the controlling
>> domain may well have a higher ID than the one being controlled,
>> i.e. the nesting you want needs to be independent of domain ID.
> 
> It's not tracking the domain ID, but rather tracking the lock level of
> each different domain, hence the need for the array in the pcpu
> structure. The lock checker would take a domain id and a level, and
> perform the check as:
> 
> if ( mm_lock_level[domid] > level )
>     panic
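
For concreteness, a minimal sketch of the checker described above, under the
assumption that it is called from each mm lock wrapper before the lock is
taken; the helper name mm_lock_level_check() is hypothetical and the
unlock/unwind path is omitted (this is not code from the thread or the Xen
tree):

#include <xen/percpu.h>
#include <xen/sched.h>

/* One lock level slot per possible domain, on each CPU (~32KB per CPU). */
static DEFINE_PER_CPU(uint8_t, mm_lock_level[DOMID_FIRST_RESERVED]);

/* Enforce that locks of a given domain are taken in increasing level order. */
static void mm_lock_level_check(const struct domain *d, unsigned int level)
{
    uint8_t *levels = this_cpu(mm_lock_level);

    if ( levels[d->domain_id] > level )
        panic("mm lock order violation for d%d: %u held, acquiring %u\n",
              d->domain_id, (unsigned int)levels[d->domain_id], level);

    levels[d->domain_id] = level;
}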

But this would open things up for deadlocks because of intermixed
lock usage between the calling domain's locks and the subject one's.
There needs to be a linear sequence of locks (of all involved
domains) describing the one and only order in which they may be
acquired.
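
For illustration, one way to get such a single linear sequence is to fold each
(domain, level) pair into one global order, e.g. by biasing the subject
domain's lock levels so they always nest after the calling domain's own locks.
This is only a sketch under that assumption; MM_LOCK_LEVEL_MAX, mm_lock_order()
and mm_lock_level_check() are hypothetical names, not existing Xen code:

#include <xen/percpu.h>
#include <xen/sched.h>
#include <asm/current.h>

#define MM_LOCK_LEVEL_MAX 64 /* illustrative bound on per-domain lock levels */

static DEFINE_PER_CPU(unsigned int, mm_lock_level);

/* Map (domain, per-domain lock level) onto a single global order. */
static unsigned int mm_lock_order(const struct domain *d, unsigned int level)
{
    /* Locks of a subject (foreign) domain order after all of the caller's. */
    return (d == current->domain ? 0 : MM_LOCK_LEVEL_MAX) + level;
}

static void mm_lock_level_check(const struct domain *d, unsigned int level)
{
    unsigned int order = mm_lock_order(d, level);

    if ( this_cpu(mm_lock_level) > order )
        panic("mm lock order violation: %u held, acquiring %u\n",
              this_cpu(mm_lock_level), order);

    this_cpu(mm_lock_level) = order;
}

A debug build could then call mm_lock_level_check() from each mm lock wrapper,
keeping a single per-CPU counter rather than one slot per domain.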

Jan


