
RE: [PATCH v3 for-4.14] x86/vmx: use P2M_ALLOC in vmx_load_pdptrs instead of P2M_UNSHARE


  • To: "Lengyel, Tamas" <tamas.lengyel@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
  • Date: Fri, 19 Jun 2020 01:27:43 +0000
  • Cc: "Nakajima, Jun" <jun.nakajima@xxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Paul Durrant <paul@xxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Fri, 19 Jun 2020 01:27:50 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

> From: Lengyel, Tamas <tamas.lengyel@xxxxxxxxx>
> Sent: Thursday, June 18, 2020 10:39 PM
> 
> While forking VMs running a small RTOS system (Zephyr) a Xen crash has
> been observed due to a mm-lock order violation while copying the HVM
> CPU context from the parent. This issue has been identified to be due
> to hap_update_paging_modes first getting a lock on the gfn using
> get_gfn. This call also creates a shared entry in the fork's memory
> map for the cr3 gfn. The function later calls hap_update_cr3 while
> holding the paging_lock, which results in the lock-order violation in
> vmx_load_pdptrs when it tries to unshare the above entry when it grabs
> the page with the P2M_UNSHARE flag set.
> 
> Since vmx_load_pdptrs only reads from the page, its usage of
> P2M_UNSHARE was unnecessary to start with. Using P2M_ALLOC is the
> appropriate flag to ensure the p2m is properly populated.
> 
> Note that the lock order violation is avoided because before the
> paging_lock is taken a lookup is performed with P2M_ALLOC that forks
> the page, thus the second lookup in vmx_load_pdptrs succeeds without
> having to perform the fork. We keep P2M_ALLOC in vmx_load_pdptrs
> because there are code-paths leading up to it which don't take the
> paging_lock and that have no previous lookup. Currently no other
> code-path exists leading there with the paging_lock taken, thus no
> further adjustments are necessary.
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@xxxxxxxxx>

Reviewed-by: Kevin Tian <kevin.tian@xxxxxxxxx>
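
As a rough sketch of the sequence the quoted commit message describes
(the function names are the real Xen symbols it mentions; the bodies
are illustrative pseudocode, not the actual implementation):

    /* Simplified picture of the faulting path while forking: */
    hap_update_paging_modes(v)
    {
        get_gfn(d, cr3_gfn, ...);   /* creates a *shared* entry for the
                                     * cr3 gfn in the fork's p2m */
        paging_lock(d);             /* paging_lock now held */
        hap_update_cr3(v, ...);     /* eventually reaches ... */
    }

    vmx_load_pdptrs(v)              /* ... this, still under paging_lock */
    {
        /* With P2M_UNSHARE the lookup tried to break the sharing of the
         * entry created above, taking mm locks that are ordered before
         * the paging_lock -> lock-order violation.  With P2M_ALLOC it
         * only populates the entry, which the earlier P2M_ALLOC lookup
         * has already done, so no earlier-ordered lock is needed. */
        page = get_page_from_gfn(v->domain, cr3 >> PAGE_SHIFT, &p2mt,
                                 P2M_ALLOC);
    }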

> ---
> v3: expand commit message to explain why there is no lock-order violation
> ---
>  xen/arch/x86/hvm/vmx/vmx.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index ab19d9424e..cc6d4ece22 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -1325,7 +1325,7 @@ static void vmx_load_pdptrs(struct vcpu *v)
>      if ( (cr3 & 0x1fUL) && !hvm_pcid_enabled(v) )
>          goto crash;
> 
> -    page = get_page_from_gfn(v->domain, cr3 >> PAGE_SHIFT, &p2mt, P2M_UNSHARE);
> +    page = get_page_from_gfn(v->domain, cr3 >> PAGE_SHIFT, &p2mt, P2M_ALLOC);
>      if ( !page )
>      {
>          /* Ideally you don't want to crash but rather go into a wait
> --
> 2.25.1
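
For context, a minimal sketch of how the function uses the page after
this change (condensed from the shape of vmx_load_pdptrs; validation of
the PDPTEs and the VMCS writes are elided):

    /* The PDPTEs are only read out of the guest page, so populating
     * the p2m entry (P2M_ALLOC) is sufficient; breaking CoW sharing
     * (P2M_UNSHARE) is only needed when the page will be written. */
    page = get_page_from_gfn(v->domain, cr3 >> PAGE_SHIFT, &p2mt, P2M_ALLOC);
    if ( !page )
        goto crash;

    p = __map_domain_page(page);    /* map the guest page ... */
    /* ... read the four PDPTEs from p[] into the VMCS ... */
    unmap_domain_page(p);
    put_page(page);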
