
Re: [Xen-devel] [PATCH v3 2/4] x86/mem_sharing: introduce and use page_lock_memshr instead of page_lock



On Tue, Apr 30, 2019 at 8:43 AM George Dunlap <george.dunlap@xxxxxxxxxx> wrote:
>
> On 4/30/19 9:44 AM, Jan Beulich wrote:
> >>>> On 30.04.19 at 10:28, <tamas@xxxxxxxxxxxxx> wrote:
> >> On Tue, Apr 30, 2019 at 1:15 AM Jan Beulich <JBeulich@xxxxxxxx> wrote:
> >>>
> >>>>>> On 29.04.19 at 18:35, <tamas@xxxxxxxxxxxxx> wrote:
> >>>> On Mon, Apr 29, 2019 at 9:18 AM Jan Beulich <JBeulich@xxxxxxxx> wrote:
> >>>>>>>> On 26.04.19 at 19:21, <tamas@xxxxxxxxxxxxx> wrote:
> >>>>>> --- a/xen/arch/x86/mm.c
> >>>>>> +++ b/xen/arch/x86/mm.c
> >>>>>> @@ -2030,12 +2030,11 @@ static inline bool current_locked_page_ne_check(struct page_info *page) {
> >>>>>>  #define current_locked_page_ne_check(x) true
> >>>>>>  #endif
> >>>>>>
> >>>>>> -int page_lock(struct page_info *page)
> >>>>>> +#if defined(CONFIG_PV) || defined(CONFIG_HAS_MEM_SHARING)
> >>>>>> +static int _page_lock(struct page_info *page)
> >>>>>
> >>>>> As per above, personally I'm against introducing
> >>>>> page_{,un}lock_memshr(), as that makes the abuse look even
> >>>>> more like proper use. But if this was to be kept this way, may I
> >>>>> ask that you switch int -> bool in the return types on this occasion?
> >>>>
> >>>> Switching them to bool would be fine. Replacing them with something
> >>>> saner is unfortunately out of scope at the moment, unless someone
> >>>> has a specific solution that can be put in place; I don't have one.
> >>>
> >>> I've outlined a solution already: Make a mem-sharing private variant
> >>> of page_{,un}lock(), derived from the PV ones (but with the pieces
> >>> you don't want/need dropped).
> >>
> >> Well, that's what I already did here in this patch. No?
> >
> > No - you've retained a shared _page_{,un}lock(), whereas my
> > suggestion was to have a completely independent pair of
> > functions in mem_sharing.c. The only thing needed by both PV
> > and HVM would then be the PGT_locked flag.
>
> But it wasn't obvious to me how the implementations of the actual lock
> function would be different.  And there's no point in having two
> identical implementations; in fact, it would be harmful.

I also think it's wasteful and an invitation for future breakage. But
right now I just want the functions to work without them intentionally
crashing the hypervisor under me - which is what they do today.
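
For concreteness, an independent pair in mem_sharing.c along the lines
suggested would end up a near-verbatim copy of the PV code - something
like the sketch below (hypothetical names, derived from the current
page_lock()/page_unlock() with the PV-only PGT_validated check dropped;
PGT_locked would then be the only thing shared with PV):

static bool mem_sharing_page_lock(struct page_info *page)
{
    unsigned long x, nx;

    do {
        /*
         * Wait for PGT_locked to clear, then try to take a type
         * reference and the lock bit in one atomic update.
         */
        while ( (x = page->u.inuse.type_info) & PGT_locked )
            cpu_relax();
        nx = x + (1 | PGT_locked);
        if ( !(x & PGT_count_mask) || !(nx & PGT_count_mask) )
            return false;
    } while ( cmpxchg(&page->u.inuse.type_info, x, nx) != x );

    return true;
}

static void mem_sharing_page_unlock(struct page_info *page)
{
    unsigned long x, nx, y = page->u.inuse.type_info;

    do {
        x = y;
        ASSERT((x & PGT_count_mask) && (x & PGT_locked));
        nx = x - (1 | PGT_locked);
        /* We must not drop the last type reference here. */
        ASSERT(nx & PGT_count_mask);
    } while ( (y = cmpxchg(&page->u.inuse.type_info, x, nx)) != x );
}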

Tamas
