Re: [Xen-devel] [PATCH] xen-swiotlb: exchange memory with Xen only when pages are contiguous
On 10/24/18 10:43 AM, Joe Jin wrote:
> On 10/24/18 6:57 AM, Boris Ostrovsky wrote:
>> On 10/24/18 9:02 AM, Konrad Rzeszutek Wilk wrote:
>>> On Tue, Oct 23, 2018 at 08:09:04PM -0700, Joe Jin wrote:
>>>> Commit 4855c92dbb7 "xen-swiotlb: fix the check condition for
>>>> xen_swiotlb_free_coherent" only fixed memory address check condition
>>>> on xen_swiotlb_free_coherent(), when memory was not physically
>>>> contiguous and tried to exchanged with Xen via
>>>> xen_destroy_contiguous_region it will lead kernel panic.
>>>
>>> s/it will lead/which lead to/?
>>>
>>>> The correct check condition should be memory is in DMA area and
>>>> physically contiguous.
>>>
>>> "The correct check condition to make Xen hypercall to revert the
>>> memory back from its 32-bit pool is if it is:
>>>  1) Above its DMA bit mask (for example 32-bit devices can only address
>>>     up to 4GB, and we may want 4GB+2K), and
>>
>> Is this "and" or "or"?
>>
>>>  2) If it is not physically contiguous
>>>
>>> N.B. The logic in the code is inverted, which leads to all sorts of
>>> confusions."
>>
>> I would, in fact, suggest to make the logic the same in both
>> xen_swiotlb_alloc_coherent() and xen_swiotlb_free_coherent() to avoid
>> this. This will involve swapping the if and else branches in the former.
>>
>>> Does that sound correct?
>>>
>>>> Thank you Boris for pointing it out.
>>>
>>> Fixes: 4855c92dbb7 ("xen-sw..") ?
>>>
>>>> Signed-off-by: Joe Jin <joe.jin@xxxxxxxxxx>
>>>> Cc: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
>>>> Cc: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
>>>
>>> Reported-by: Boris Ostrovs... ?
>>>
>>>> Cc: Christoph Helwig <hch@xxxxxx>
>>>> Cc: Dongli Zhang <dongli.zhang@xxxxxxxxxx>
>>>> Cc: John Sobecki <john.sobecki@xxxxxxxxxx>
>>>> ---
>>>>  drivers/xen/swiotlb-xen.c | 4 ++--
>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
>>>> index f5c1af4ce9ab..aed92fa019f9 100644
>>>> --- a/drivers/xen/swiotlb-xen.c
>>>> +++ b/drivers/xen/swiotlb-xen.c
>>>> @@ -357,8 +357,8 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>>>>  	/* Convert the size to actually allocated. */
>>>>  	size = 1UL << (order + XEN_PAGE_SHIFT);
>>>>
>>>> -	if (((dev_addr + size - 1 <= dma_mask)) ||
>>>> -	    range_straddles_page_boundary(phys, size))
>>>> +	if ((dev_addr + size - 1 <= dma_mask) &&
>>>> +	    !range_straddles_page_boundary(phys, size))
>>>>  		xen_destroy_contiguous_region(phys, order);
>>
>> I don't think this is right.
>>
>> if ((dev_addr + size - 1 > dma_mask) ||
>>     range_straddles_page_boundary(phys, size))
>>
>> No?
>
> No, this is not correct.
>
> On allocation, the memory is first allocated from Dom0/the guest, and then
> checked for being DMA-addressable and physically contiguous; if it is not,
> it is exchanged with the hypervisor, as in the code below:
>
> 326         phys = *dma_handle;
> 327         dev_addr = xen_phys_to_bus(phys);
> 328         if (((dev_addr + size - 1 <= dma_mask)) &&
> 329             !range_straddles_page_boundary(phys, size))
> 330                 *dma_handle = dev_addr;
> 331         else {
> 332                 if (xen_create_contiguous_region(phys, order,
> 333                                                  fls64(dma_mask), dma_handle) != 0) {
> 334                         xen_free_coherent_pages(hwdev, size, ret, (dma_addr_t)phys, attrs);
> 335                         return NULL;
> 336                 }
> 337         }
>
> On freeing, the memory needs to be returned to Xen, otherwise DMA memory
> will be used up (this is the issue the patch intends to fix), so when the
> memory is DMAable and contiguous, xen_destroy_contiguous_region() is called
> to return the DMA memory to Xen.
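[To make the pairing described above easier to follow, here is a condensed
sketch, not the actual driver code, of the two paths with the patch applied.
The helper name dma_region_ok() is hypothetical and only stands in for the
shared test quoted above; the real functions open-code it.]

        /* Hypothetical helper, not in the driver: the test both paths share. */
        static bool dma_region_ok(dma_addr_t dev_addr, phys_addr_t phys,
                                  size_t size, u64 dma_mask)
        {
                return (dev_addr + size - 1 <= dma_mask) &&
                       !range_straddles_page_boundary(phys, size);
        }

        /* Alloc path (xen_swiotlb_alloc_coherent): keep the buffer if it is
         * already usable, otherwise exchange it with Xen for a contiguous
         * region below the DMA mask. */
        if (dma_region_ok(dev_addr, phys, size, dma_mask))
                *dma_handle = dev_addr;
        else if (xen_create_contiguous_region(phys, order,
                                              fls64(dma_mask), dma_handle) != 0) {
                xen_free_coherent_pages(hwdev, size, ret, (dma_addr_t)phys, attrs);
                return NULL;
        }

        /* Free path (xen_swiotlb_free_coherent), as patched: hand the region
         * back to Xen when it is DMA-addressable and contiguous. */
        if (dma_region_ok(dev_addr, phys, size, dma_mask))
                xen_destroy_contiguous_region(phys, order);

[Note that even written this way the two decisions are taken on different
values of phys/dev_addr, and the values tested at free time do not record
whether an exchange actually happened at allocation time, which is what the
follow-up question below is about.]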
So if you want to allocate 1 byte at address 0 (and dev_addr=phys),
xen_create_contiguous_region() will not be called. And yet you will call
xen_destroy_contiguous_region() in the free path. Is this the expected
behavior?

-boris
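[For concreteness, a small self-contained sketch of the case Boris describes.
This is plain userspace C, not driver code; the page-straddle stub and the
variable names are illustrative stand-ins, and the 32-bit mask, address 0 and
1-byte size are just the values from his example.]

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define XEN_PAGE_SIZE 4096ULL

        /* Simplified stand-in for the kernel's range_straddles_page_boundary():
         * true if [phys, phys + size) crosses a Xen page boundary. */
        static bool straddles_page_boundary(uint64_t phys, uint64_t size)
        {
                return (phys & (XEN_PAGE_SIZE - 1)) + size > XEN_PAGE_SIZE;
        }

        int main(void)
        {
                uint64_t dma_mask = 0xffffffffULL;  /* a 32-bit device */
                uint64_t phys = 0, dev_addr = 0;    /* Boris's example: address 0 */
                uint64_t size = 1;                  /* 1 byte */

                /* Alloc path: the buffer is fine as-is, so no exchange with Xen. */
                bool exchanged = !((dev_addr + size - 1 <= dma_mask) &&
                                   !straddles_page_boundary(phys, size));

                /* Free path, as patched: the destroy call is made for exactly
                 * this same case. */
                bool destroyed = (dev_addr + size - 1 <= dma_mask) &&
                                 !straddles_page_boundary(phys, size);

                /* Prints "exchanged at alloc: 0, destroyed at free: 1". */
                printf("exchanged at alloc: %d, destroyed at free: %d\n",
                       exchanged, destroyed);
                return 0;
        }

[In other words, with the patched condition the free path would call
xen_destroy_contiguous_region() for a buffer that was never exchanged at
allocation time, which is the mismatch the question above points at.]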