
Re: [Xen-devel] [PATCH 2/4] x86: correct instances of PGC_allocated clearing



On Tue, Nov 20, 2018 at 10:12 AM Jan Beulich <JBeulich@xxxxxxxx> wrote:
>
> >>> On 20.11.18 at 17:59, <andrew.cooper3@xxxxxxxxxx> wrote:
> > On 20/11/2018 16:18, Jan Beulich wrote:
> >> For domain heap pages assigned to a domain dropping the page reference
> >> tied to PGC_allocated may not drop the last reference, as otherwise the
> >> test_and_clear_bit() might already act on an unowned page.
> >>
> >> Work around this where possible, but the need to acquire extra page
> >> references is a fair hint that references should have been acquired in
> >> other places instead.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> >> ---
> >> Compile tested only, as I have neither a mem-sharing nor a mem-paging
> >> environment set up ready to be used for such testing.
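
To make the hazard concrete, the pattern being fixed reads to me
roughly like the below (a sketch only, not the actual patch hunks;
the wrapper function is made up):

/* Sketch, using Xen's get_page()/put_page() and _PGC_allocated. */
static void drop_allocated_ref(struct domain *d, struct page_info *pg)
{
    /*
     * Take our own reference first.  If the PGC_allocated reference
     * could be the last one, a racing path dropping it would free the
     * page, and the test_and_clear_bit() below would then act on an
     * unowned page.
     */
    if ( !get_page(pg, d) )
        return; /* page already freed or owner changed */

    /* Drop the reference tied to PGC_allocated, if the bit was set. */
    if ( test_and_clear_bit(_PGC_allocated, &pg->count_info) )
        put_page(pg);

    /* Drop our temporary reference; this one may be the last. */
    put_page(pg);
}

With the extra reference held, the put_page() paired with clearing
PGC_allocated can never drop the final reference, which is exactly the
property the description asks for.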

This is how I test mem-sharing: save a VM such that it's kept paused
(xl save -p), copy its config file but change the domain name, then
restore the saved image with this new config file, again keeping the
VM paused (xl restore -p). You can then use tools/tests/mem-sharing
on the two VMs:

./memshrtool enable <first_vm> && ./memshrtool enable <second_vm> &&
./memshrtool range <first_vm> <second_vm> 0 1000

Unpausing the second VM afterwards will exercise mem-sharing.
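
In script form the whole sequence is roughly the below (domain, config
and save-file names are placeholders, the sed assumes the domain name
appears literally in the config file, and if memshrtool wants numeric
domids rather than names, xl domid can translate):

#!/bin/sh
# Clone a paused VM and deduplicate its pages with the clone.
xl save -p first_vm first_vm.save          # save, leaving first_vm paused
sed 's/first_vm/second_vm/' first_vm.cfg > second_vm.cfg
xl restore -p second_vm.cfg first_vm.save  # restore the clone, also paused
./memshrtool enable first_vm               # enable sharing on both domains
./memshrtool enable second_vm
./memshrtool range first_vm second_vm 0 1000  # share this gfn range
xl unpause second_vm                       # writes to shared pages now unshare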

> >
> > Perhaps we should compile them out by default?  It's clear there are no
> > production users, given the quality of the code and how many security
> > issues we spot accidentally.
>
> Yeah, well - if we're going to have a perhaps much wider set of
> config options, then these two surely should become "default n"
> until they've been brought out of their sorry state.

+1

Code also looks OK to me:

Acked-by: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>


 

