
Re: [Xen-devel] [PATCH for-4.8] altp2m: don't attempt to unshare pages during change_altp2m_gfn op



On Oct 20, 2016 18:40, "George Dunlap" <george.dunlap@xxxxxxxxxx> wrote:
>
> On 20/10/16 17:29, Tamas K Lengyel wrote:
> > On Oct 20, 2016 18:18, "George Dunlap" <george.dunlap@xxxxxxxxxx> wrote:
> >>
> >> On 14/10/16 01:00, Tamas K Lengyel wrote:
> >>> Attempting to change gfn mappings with altp2m on a memory-shared page
> >>> results in a lock-order violation (mm locking order violation: 282 > 254),
> >>> which crashes the hypervisor. Don't attempt to automatically unshare such
> >>> pages and just fall back to failing the op if the page type is not correct.
> >>>
> >>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@xxxxxxxxxxxx>
> >>
> >> It would be nice to try to untangle this so that you can reasonably
> >> unshare a page in this circumstance; but given the point in the release
> >> cycle, making it return an error instead of crashing is probably the
> >> right thing to do.
> >
> > You can unshare these pages; you just have to do it in a separate op so
> > the locks are taken in the right order (memshare before altp2m). Reversing
> > the lock order is not possible, because then the automatic unsharing and
> > propagation at runtime would run into the lock-order problem with no
> > possibility of recovery. This way the user has the option to handle it
> > gracefully here.
>
> Yay locks. :-)
>
> It would probably be helpful to have a comment there explaining the
> situation, so that people in the future don't need to re-discover this
> issue.
>
> Do you want to toss together a patch adding such a comment, or shall I?
>

Please do so if you can; I'm traveling at the moment, so it would be a couple of days before I could send a patch for that. In the meantime, a rough sketch of the idea behind the fix is below for reference.
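
To be clear about what the op now does: the gfn's type is queried without any unshare side effect, and the op fails for anything that isn't plain r/w RAM, rather than trying to unshare under the altp2m lock. This is a minimal, self-contained model, not the actual patch: p2m_ram_rw and p2m_ram_shared are real Xen p2m types, but everything else here (query_gfn_type, the fake mapping data) is a simplified stand-in for illustration.

#include <errno.h>
#include <stdio.h>

/* Toy stand-ins for two real Xen p2m types (p2m_ram_rw, p2m_ram_shared);
 * the rest of this file is simplified for illustration only. */
typedef enum {
    p2m_ram_rw,      /* ordinary read/write RAM                   */
    p2m_ram_shared,  /* page deduplicated by the mem_sharing code */
} p2m_type_t;

/* Hypothetical type-only lookup: note there is no "unshare" side effect
 * here, so no mem_sharing lock is ever taken under the altp2m lock. */
static p2m_type_t query_gfn_type(unsigned long gfn)
{
    return (gfn & 1) ? p2m_ram_shared : p2m_ram_rw;  /* fake mapping data */
}

static int change_altp2m_gfn(unsigned long old_gfn, unsigned long new_gfn)
{
    /*
     * Refuse shared pages instead of auto-unsharing: unsharing from this
     * context would take the mem_sharing lock after the altp2m lock,
     * i.e. in the wrong order (the "282 > 254" violation).
     */
    if ( query_gfn_type(old_gfn) != p2m_ram_rw )
        return -EINVAL;

    printf("remapping gfn %#lx -> %#lx in the active altp2m view\n",
           old_gfn, new_gfn);
    return 0;
}

int main(void)
{
    printf("shared page:   %d\n", change_altp2m_gfn(0x1001, 0x2000)); /* -EINVAL */
    printf("ordinary page: %d\n", change_altp2m_gfn(0x1000, 0x2000)); /* 0 */
    return 0;
}

A caller that gets the error back on a shared page can unshare it first via a separate memory-sharing op (so memshare's lock is taken before, not under, altp2m's) and then retry.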

Thanks,
Tamas
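
P.S. For the comment itself, something along these lines would capture it. The wording is only a suggestion, and the lock-level numbers are just the ones from the crash message; the authoritative ordering lives in mm-locks.h:

/*
 * Do not try to unshare old_gfn here.  At this point the altp2m lock is
 * already held, and the mem_sharing lock is ordered *before* it (see
 * mm-locks.h), so unsharing from this context is a lock-order violation
 * (it used to manifest as "mm locking order violation: 282 > 254" and a
 * host crash).  A caller that needs to remap a shared gfn must first
 * unshare it via a separate memory-sharing op, then retry this op.
 */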
