
Re: [PATCH v2 for-4.14] x86/vmx: use P2M_ALLOC in vmx_load_pdptrs instead of P2M_UNSHARE



On Thu, Jun 18, 2020 at 12:31 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>
> On 17.06.2020 18:19, Tamas K Lengyel wrote:
> > While forking VMs running a small RTOS system (Zephyr) a Xen crash has been
> > observed due to a mm-lock order violation while copying the HVM CPU context
> > from the parent. This issue has been identified to be due to
> > hap_update_paging_modes first getting a lock on the gfn using get_gfn. This
> > call also creates a shared entry in the fork's memory map for the cr3
> > gfn. The function later calls hap_update_cr3 while holding the
> > paging_lock, which results in the lock-order violation in
> > vmx_load_pdptrs when it tries to unshare the above entry while grabbing
> > the page with the P2M_UNSHARE flag set.
> >
> > Since vmx_load_pdptrs only reads from the page, its use of P2M_UNSHARE
> > was unnecessary to start with. P2M_ALLOC is the appropriate flag to
> > ensure the p2m is properly populated while avoiding the lock-order
> > violation we observed.
>
> Using P2M_ALLOC is not going to address the original problem though
> afaict: You may hit the mem_sharing_fork_page() path that way, and
> via nominate_page() => __grab_shared_page() => mem_sharing_page_lock()
> you'd run into a lock order violation again.
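
(For context, the change under discussion is the flag passed to
get_page_from_gfn() in vmx_load_pdptrs(); roughly, against staging:

    -    page = get_page_from_gfn(v->domain, cr3 >> PAGE_SHIFT, &p2mt, P2M_UNSHARE);
    +    page = get_page_from_gfn(v->domain, cr3 >> PAGE_SHIFT, &p2mt, P2M_ALLOC);

P2M_ALLOC still populates the entry if the gfn is a hole but skips the
unshare step, which is sufficient here: the function only maps the page
to read the four PAE PDPTEs into the VMCS and never writes through the
mapping.)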

Note that the nominate_page you see in that path is called for the
parent VM. The paging lock is never taken for the parent VM, so
nominate_page succeeds without issue whenever fork_page is called. No
nominate_page is called for the client domain, as there is nothing to
nominate when plugging a hole; see the trace below.
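
To spell out the lock contexts in the path you describe (simplified
call trace stitched together from the functions named above, not
verbatim source):

    hap_update_paging_modes(client)       /* paging_lock(client) taken */
      hap_update_cr3(client)
        vmx_load_pdptrs(client)
          get_page_from_gfn(client, cr3 gfn, P2M_ALLOC)
            mem_sharing_fork_page(client)  /* client p2m has a hole */
              nominate_page(parent)        /* operates on the parent */
                __grab_shared_page()
                  mem_sharing_page_lock()  /* parent's page */

The parent's paging_lock is not held anywhere on this chain, which is
why nominate_page succeeds every time fork_page is called.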

Tamas