
Re: [Xen-devel] [PATCH 2/2] RFC: x86/mm: conditionally check page_lock/page_unlock ownership



On Fri, Apr 12, 2019 at 8:39 AM Jan Beulich <JBeulich@xxxxxxxx> wrote:
>
> >>> On 12.04.19 at 15:55, <tamas@xxxxxxxxxxxxx> wrote:
> > On Fri, Apr 12, 2019 at 6:01 AM Jan Beulich <JBeulich@xxxxxxxx> wrote:
> >> >>> On 12.04.19 at 06:29, <tamas@xxxxxxxxxxxxx> wrote:
> >> > Patch cf4b30dca0a "Add debug code to detect illegal page_lock and
> >> > put_page_type ordering" added extra sanity checking to
> >> > page_lock/page_unlock for debug builds with the assumption that no
> >> > hypervisor path ever locks two pages at once.
> >> >
> >> > This assumption doesn't hold during memory sharing.
> >>
> >> A fundamental question is - doesn't mem-sharing abuse page_lock()?
> >> I've never understood why it uses it, and back when it was introduced
> >> the function clearly wasn't meant to be used outside of PV memory
> >> management code.
> >
> > I have a limited understanding of page_lock's limitations, but here
> > is the reasoning for taking the lock on two pages.
> > Mem_sharing takes the pages it wants to be shareable and assigns them
> > to dom_cow with at least one reference held (since we just took
> > these pages from a live domain). Those pages may then be assigned to
> > several actual domains at the same time, and each domain could be
> > destroyed at any given point. There is no requirement that the
> > original domain still be alive, since the owner of these pages is now
> > dom_cow. The backing pages owned by dom_cow only get dropped when the
> > reference count hits zero, i.e. when no domain actually uses those
> > pages anymore. This can happen if all domains unshare the gfns that
> > use this page as backing, or if all domains that use them for
> > mem_sharing get destroyed. When we share pages, we want to make sure
> > those don't get dropped under our feet while we are in the middle of
> > this function. So we lock both pages.
>
> Thanks for the explanation, but I'm afraid this doesn't address
> my remark: You need _some_ kind of lock for this, but I was
> questioning whether using page_lock() is the right choice. From
> what you say, obtaining proper references to the pages would
> seem to be what you actually want, to avoid them disappearing
> behind your back.

Yes, but besides the references ensuring the pages don't get dropped,
there is also a bunch of book-keeping in mem_sharing (the rmap bits)
that looks like it needs the locking to ensure no concurrent updates
happen for the same page. I guess we could introduce a new lock, but
it would also need to be per-page, so to me it looks like it would
just duplicate what page_lock does already.
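
To illustrate the kind of per-page serialization I have in mind, here
is a minimal standalone sketch (made-up names, pthreads standing in
for Xen's actual lock primitives, so it's illustrative only, not Xen
code): a lock embedded in each page guards that page's rmap
book-keeping, with no global lock needed.

    #include <pthread.h>
    #include <stdlib.h>

    struct rmap_entry {
        unsigned long gfn;          /* guest frame mapping this page */
        struct rmap_entry *next;
    };

    struct page_info {
        pthread_mutex_t lock;       /* stand-in for page_lock()'s lock bit */
        struct rmap_entry *rmap;    /* reverse map: who uses this page */
    };

    /* Add an rmap entry; the per-page lock serializes concurrent
     * updates to the same page's list. */
    int rmap_add(struct page_info *pg, unsigned long gfn)
    {
        struct rmap_entry *e = malloc(sizeof(*e));

        if ( !e )
            return -1;
        e->gfn = gfn;

        pthread_mutex_lock(&pg->lock);
        e->next = pg->rmap;
        pg->rmap = e;
        pthread_mutex_unlock(&pg->lock);
        return 0;
    }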

> page_lock() is used on PV page table pages to exclude multiple
> updates to the same page table happening at the same time.
> In a !PV build it shouldn't even be available.

I don't see page_lock being guarded by CONFIG_PV blocks, so that
should be fine.

> > Also, I don't see why it would be fundamentally wrong to hold the
> > locks on two pages at once. Is this a problem with the lock
> > implementation itself?
>
> No, the implementation itself should be fine (but I also didn't say
> there's a problem locking two pages). Lock order might be an issue,
> but you seem to be dealing with that fine (considering mem_sharing.c
> alone).

Could you expand on the lock ordering requirement? I noticed
mem_sharing acquires the locks on the pages in increasing mfn order
and tried to decipher why that is done, but I couldn't really make
heads or tails of it.
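
The closest I can come to a rationale is the classic two-lock deadlock
argument: if one path locks (A, B) while another locks (B, A), each
can end up holding one page while waiting forever on the other.
Sorting by mfn gives every path the same global acquisition order,
which rules that out. Is that the requirement you mean? A standalone
sketch of the idea (illustrative names and pthreads, not the actual
Xen/mem_sharing code):

    #include <pthread.h>

    struct page_info {
        unsigned long mfn;
        pthread_mutex_t lock;
    };

    void lock_two_pages(struct page_info *a, struct page_info *b)
    {
        /* Swap so the lower-mfn page is always locked first; every
         * caller then agrees on a single global acquisition order. */
        if ( a->mfn > b->mfn )
        {
            struct page_info *t = a;
            a = b;
            b = t;
        }
        pthread_mutex_lock(&a->lock);
        if ( a != b )           /* don't lock the same page twice */
            pthread_mutex_lock(&b->lock);
    }

    void unlock_two_pages(struct page_info *a, struct page_info *b)
    {
        pthread_mutex_unlock(&a->lock);
        if ( a != b )           /* avoid a double unlock */
            pthread_mutex_unlock(&b->lock);
    }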

Thanks,
Tamas

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

