
Re: [PATCH] x86/PoD: defer nested P2M flushes


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Tue, 19 Oct 2021 10:09:29 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "Andrew Cooper" <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>
  • Delivery-date: Tue, 19 Oct 2021 08:10:10 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Mon, Oct 11, 2021 at 10:17:08AM +0200, Jan Beulich wrote:
> With NPT or shadow in use, the p2m_set_entry() -> p2m_pt_set_entry() ->
> write_p2m_entry() -> p2m_flush_nestedp2m() call sequence triggers a lock
> order violation when the PoD lock is held around it. Hence such flushing
> needs to be deferred. Steal the approach from p2m_change_type_range().
> 
> Similarly for EPT I think ept_set_entry() -> ept_sync_domain() ->
> ept_sync_domain_prepare() -> p2m_flush_nestedp2m() is affected.
> 
> Reported-by: Elliott Mitchell <ehem+xen@xxxxxxx>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> 
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -24,6 +24,7 @@
>  #include <xen/mm.h>
>  #include <xen/sched.h>
>  #include <xen/trace.h>
> +#include <asm/hvm/nestedhvm.h>
>  #include <asm/page.h>
>  #include <asm/paging.h>
>  #include <asm/p2m.h>
> @@ -494,6 +495,13 @@ p2m_pod_offline_or_broken_replace(struct
>  static int
>  p2m_pod_zero_check_superpage(struct p2m_domain *p2m, gfn_t gfn);
>  
> +static void pod_unlock_and_flush(struct p2m_domain *p2m)
> +{
> +    pod_unlock(p2m);
> +    p2m->defer_nested_flush = false;
> +    if ( nestedhvm_enabled(p2m->domain) )
> +        p2m_flush_nestedp2m(p2m->domain);
> +}
>  
>  /*
>   * This function is needed for two reasons:
> @@ -514,6 +522,7 @@ p2m_pod_decrease_reservation(struct doma
>  
>      gfn_lock(p2m, gfn, order);
>      pod_lock(p2m);
> +    p2m->defer_nested_flush = true;
>  
>      /*
>       * If we don't have any outstanding PoD entries, let things take their
> @@ -665,7 +674,7 @@ out_entry_check:
>      }
>  
>  out_unlock:
> -    pod_unlock(p2m);
> +    pod_unlock_and_flush(p2m);
>      gfn_unlock(p2m, gfn, order);
>      return ret;
>  }
> @@ -1144,8 +1153,10 @@ p2m_pod_demand_populate(struct p2m_domai
>       * won't start until we're done.
>       */
>      if ( unlikely(d->is_dying) )
> -        goto out_fail;
> -
> +    {
> +        pod_unlock(p2m);
> +        return false;
> +    }
>  
>      /*
>       * Because PoD does not have cache list for 1GB pages, it has to remap
> @@ -1167,6 +1178,8 @@ p2m_pod_demand_populate(struct p2m_domai
>                                p2m_populate_on_demand, p2m->default_access);
>      }
>  
> +    p2m->defer_nested_flush = true;
> +
>      /* Only reclaim if we're in actual need of more cache. */
>      if ( p2m->pod.entry_count > p2m->pod.count )
>          pod_eager_reclaim(p2m);
> @@ -1229,8 +1242,9 @@ p2m_pod_demand_populate(struct p2m_domai
>          __trace_var(TRC_MEM_POD_POPULATE, 0, sizeof(t), &t);
>      }
>  
> -    pod_unlock(p2m);
> +    pod_unlock_and_flush(p2m);
>      return true;
> +
>  out_of_memory:
>      pod_unlock(p2m);

Don't you need to set defer_nested_flush = false in the out_of_memory
path? (as you don't call pod_unlock_and_flush(), which would do it)
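Something along these lines, perhaps (untested sketch, just to
illustrate; it mirrors the ordering the new helper uses, and calling
pod_unlock_and_flush() there instead would presumably also work):

 out_of_memory:
     pod_unlock(p2m);
+    p2m->defer_nested_flush = false;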

Thanks, Roger.