
Re: [Xen-devel] [PATCH] xen-swiotlb: use actually allocated size on check physical contiguous



Below module would help people reproduce the issue to understand the symptom:

https://github.com/finallyjustice/patchset/blob/master/xen-swiotlb-panic.c

In addition, on the xen hypervisor side, memory_exchange() does not check
whether the gfns belonging to the same input extent are contiguous pages of one
buffer in the guest domain. As a result, the wrong page can be stolen from the
guest domain.

Can we assume it is fine for the xen hypervisor not to check whether the gfns
of an extent are contiguous?

I think it is OK, as the guest domain is responsible for maintaining its own
memory pages. xen is happy to serve any operation from a guest domain, unless
that operation is harmful to the xen hypervisor or to other domains. In the
worst case, dom0 (or a domU with passthrough) would only panic itself.

 496 static long memory_exchange(...
... ...
 609         /* Steal a chunk's worth of input pages from the domain. */
 610         for ( j = 0; j < (1UL << in_chunk_order); j++ )
 611         {
 612             if ( unlikely(__copy_from_guest_offset(
 613                 &gmfn, exch.in.extent_start, (i<<in_chunk_order)+j, 1)) )
 614             {
 615                 rc = -EFAULT;
 616                 goto fail;
 617             }
 618

 -------> the 'for loop' below never checks whether the gfns
gmfn+0 ... gmfn+(2^extent_order - 1) of each extent are contiguous pages of
the same buffer in the guest domain.

 619             for ( k = 0; k < (1UL << exch.in.extent_order); k++ )
 620             {
 621 #ifdef CONFIG_X86
 622                 p2m_type_t p2mt;
 623
 624                 /* Shared pages cannot be exchanged */
 625                 mfn = get_gfn_unshare(d, gmfn + k, &p2mt);
 626                 if ( p2m_is_shared(p2mt) )
 627                 {
 628                     put_gfn(d, gmfn + k);
 629                     rc = -ENOMEM;
 630                     goto fail;
 631                 }
 632 #else /* !CONFIG_X86 */
 633                 mfn = gfn_to_mfn(d, _gfn(gmfn + k));
 634 #endif
 635                 if ( unlikely(!mfn_valid(mfn)) )
 636                 {
 637                     put_gfn(d, gmfn + k);
 638                     rc = -EINVAL;
 639                     goto fail;
 640                 }
 641
 642                 page = mfn_to_page(mfn);
 643

-----> As a result, the wrong page is stolen.

 644                 rc = steal_page(d, page, MEMF_no_refcount);
 645                 if ( unlikely(rc) )
 646                 {
 647                     put_gfn(d, gmfn + k);
 648                     goto fail;
 649                 }
 650
 651                 page_list_add(page, &in_chunk_list);
 652                 put_gfn(d, gmfn + k);
 653             }
 654         }


Dongli Zhang



On 09/05/2018 02:16 AM, Joe Jin wrote:
> xen_swiotlb_{alloc,free}_coherent() actually allocate/free the size by
> order, but use the requested size to check whether the address is
> physically contiguous. If the first pages are physically contiguous and
> also pass the range_straddles_page_boundary() check while the remaining
> pages are not, it will lead to a kernel panic.
> 
> Signed-off-by: Joe Jin <joe.jin@xxxxxxxxxx>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
> ---
>  drivers/xen/swiotlb-xen.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index a6f9ba85dc4b..aa081f806728 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -303,6 +303,9 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
>       */
>       flags &= ~(__GFP_DMA | __GFP_HIGHMEM);
>  
> +     /* Convert the size to actually allocated. */
> +     size = 1UL << (order + XEN_PAGE_SHIFT);
> +
>       /* On ARM this function returns an ioremap'ped virtual address for
>        * which virt_to_phys doesn't return the corresponding physical
>        * address. In fact on ARM virt_to_phys only works for kernel direct
> @@ -351,6 +354,9 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>        * physical address */
>       phys = xen_bus_to_phys(dev_addr);
>  
> +     /* Convert the size to actually allocated. */
> +     size = 1UL << (order + XEN_PAGE_SHIFT);
> +
>       if (((dev_addr + size - 1 <= dma_mask)) ||
>           range_straddles_page_boundary(phys, size))
>               xen_destroy_contiguous_region(phys, order);
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
