
Re: [Xen-devel] [PATCH] Partially revert "x86/mm: Clean IOMMU flags from p2m-pt code"



On 28.08.2019 15:32, Roger Pau Monne wrote:
> This partially reverts commit
> 854a49a7486a02edae5b3e53617bace526e9c1b1 by re-adding the logic that
> propagates changes to the domain physmap done by p2m_pt_set_entry into
> the iommu page tables. Without this logic, changes to the guest physmap
> are not propagated to the iommu, either leaving stale iommu entries that
> can leak data or failing to add new entries.

Oh, indeed - how did I miss this while reviewing?
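
For context, the re-added hunk further down goes through the generic
legacy map/unmap wrappers. Roughly (a sketch of their declarations as in
xen/include/xen/iommu.h around this time, so treat the exact signatures
as approximate):

    /* Map 2^page_order pages at dfn -> mfn into the IOMMU page tables,
     * flushing as needed; returns 0 on success. */
    int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
                         unsigned int page_order, unsigned int flags);

    /* Remove the IOMMU mappings covering the same range, with a flush. */
    int iommu_legacy_unmap(struct domain *d, dfn_t dfn,
                           unsigned int page_order);

Whichever of the two gets called, its return value is what ends up in rc
in the hunk below.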

> --- a/xen/arch/x86/mm/p2m-pt.c
> +++ b/xen/arch/x86/mm/p2m-pt.c
> @@ -35,6 +35,7 @@
>  #include <asm/p2m.h>
>  #include <asm/mem_sharing.h>
>  #include <asm/hvm/nestedhvm.h>
> +#include <asm/hvm/svm/amd-iommu-proto.h>

I guess you don't really need to re-add this, as ...

> @@ -640,9 +671,24 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
>           && (gfn + (1UL << page_order) - 1 > p2m->max_mapped_pfn) )
>          p2m->max_mapped_pfn = gfn + (1UL << page_order) - 1;
>  
> +    if ( iommu_enabled && (iommu_old_flags != iommu_pte_flags ||
> +                           old_mfn != mfn_x(mfn)) )
> +    {
> +        ASSERT(rc == 0);
> +
> +        if ( need_iommu_pt_sync(p2m->domain) )
> +            rc = iommu_pte_flags ?
> +                iommu_legacy_map(d, _dfn(gfn), mfn, page_order,
> +                                 iommu_pte_flags) :
> +                iommu_legacy_unmap(d, _dfn(gfn), page_order);
> +        else if ( iommu_use_hap_pt(d) && iommu_old_flags )
> +            amd_iommu_flush_pages(p2m->domain, gfn, page_order);

... I don't think the "else if()" needs restoring: with page tables not
being shared for AMD/SVM, iommu_use_hap_pt(d) can't be true here anyway.
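
That is, a minimal sketch of the trimmed hunk, assuming only the branch
above (and with it the amd-iommu-proto.h include) is dropped and the rest
of the patch stays as posted:

    /* Propagate the physmap change to the IOMMU only when the
     * effective IOMMU flags or the backing MFN actually changed. */
    if ( iommu_enabled && (iommu_old_flags != iommu_pte_flags ||
                           old_mfn != mfn_x(mfn)) )
    {
        ASSERT(rc == 0);

        if ( need_iommu_pt_sync(p2m->domain) )
            rc = iommu_pte_flags ?
                iommu_legacy_map(d, _dfn(gfn), mfn, page_order,
                                 iommu_pte_flags) :
                iommu_legacy_unmap(d, _dfn(gfn), page_order);
    }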

Jan
