Re: [Xen-devel] Question regarding swiotlb-xen in Linux kernel
On 4/18/19 2:09 PM, Boris Ostrovsky wrote:
> On 4/18/19 3:36 AM, Juergen Gross wrote:
>> I'm currently investigating a problem related to swiotlb-xen. With a
>> specific driver a customer is able to trigger a situation where an
>> MFN is mapped to multiple dom0 PFNs at the same time. There is no
>> guest involved, so this is not related to grants.
>>
>> With a debug kernel I have managed to track the inconsistency to a
>> call of xen_destroy_contiguous_region() from xen_swiotlb_free_coherent()
>> where the region was obviously not contiguous.
>>
>> xen_swiotlb_free_coherent() contains:
>>
>> 	if (((dev_addr + size - 1 <= dma_mask)) ||
>> 	    range_straddles_page_boundary(phys, size))
>> 		xen_destroy_contiguous_region(phys, order);
>>
>> Shouldn't it be either:
>>
>> 	if (((dev_addr + size - 1 <= dma_mask)) &&
>> 	    !range_straddles_page_boundary(phys, size))
>> 		xen_destroy_contiguous_region(phys, order);
>
> +Joe
>
> https://lists.xenproject.org/archives/html/xen-devel/2018-10/msg01920.html
>
> (The discussion happened in
> https://lists.xenproject.org/archives/html/xen-devel/2018-10/msg01814.html)
>
> And it looks like we dropped it. Or was there a reason we haven't
> picked it up?

I remember the concern was whether the memory had come from Xen or not.

Thanks,
Joe

>
> -boris
>
>>
>> or:
>>
>> 	if (dev_addr + size - 1 <= dma_mask) {
>> 		BUG_ON(range_straddles_page_boundary(phys, size));
>> 		xen_destroy_contiguous_region(phys, order);
>> 	}
>>
>> as calling xen_destroy_contiguous_region() with a non-contiguous memory
>> region is a perfect recipe for a latent crash?
>>
>> The remaining question is why the driver is calling
>> xen_swiotlb_free_coherent() for a non-contiguous region, of course.
>>
>>
>> Juergen
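
For readers following the thread, below is a minimal userspace sketch of the predicate under discussion. It assumes 4 KiB Xen pages and a toy pfn_to_bfn() lookup table standing in for the real P2M; the helper names deliberately mirror the kernel's, but this is illustrative code, not the kernel implementation. It reproduces the failure mode Juergen describes: with the original '||' test, xen_destroy_contiguous_region() would be invoked on a buffer whose machine frames are not contiguous, while the proposed '&&' test skips it.

	/* Userspace model of the free-path test in xen_swiotlb_free_coherent().
	 * Assumptions: 4 KiB Xen pages; toy_p2m[] is a hypothetical stand-in
	 * for the real pfn-to-machine-frame translation. */
	#include <assert.h>
	#include <stdbool.h>
	#include <stddef.h>
	#include <stdint.h>
	#include <stdio.h>

	#define XEN_PAGE_SHIFT 12
	#define XEN_PAGE_SIZE  (1u << XEN_PAGE_SHIFT)

	/* Toy P2M: dom0 pfns 0..7 map to scattered machine frames. */
	static const uint64_t toy_p2m[] = { 100, 101, 200, 201, 202, 300, 301, 302 };

	static uint64_t pfn_to_bfn(uint64_t pfn)
	{
		return toy_p2m[pfn];
	}

	/* Modeled after the kernel's range_straddles_page_boundary():
	 * true if the buffer spans machine frames that are not consecutive. */
	static bool range_straddles_page_boundary(uint64_t phys, size_t size)
	{
		uint64_t pfn = phys >> XEN_PAGE_SHIFT;
		uint64_t first_bfn = pfn_to_bfn(pfn);
		size_t nr_pages = ((phys & (XEN_PAGE_SIZE - 1)) + size
				   + XEN_PAGE_SIZE - 1) >> XEN_PAGE_SHIFT;

		for (size_t i = 1; i < nr_pages; i++)
			if (pfn_to_bfn(pfn + i) != first_bfn + i)
				return true;
		return false;
	}

	int main(void)
	{
		uint64_t dma_mask = 0xffffffffu;	/* 32-bit capable device */
		uint64_t phys = 1 << XEN_PAGE_SHIFT;	/* pfn 1 -> bfn 101 */
		uint64_t dev_addr = pfn_to_bfn(1) << XEN_PAGE_SHIFT;
		size_t size = 2 * XEN_PAGE_SIZE;	/* spans pfns 1 and 2 */

		/* pfn 1 -> bfn 101 but pfn 2 -> bfn 200: the machine frames
		 * are not consecutive, so the buffer straddles a boundary. */
		bool straddles = range_straddles_page_boundary(phys, size);
		bool fits = dev_addr + size - 1 <= dma_mask;

		/* Original test: '||' destroys a region that is known to be
		 * non-contiguous in machine address space. */
		bool buggy_destroy = fits || straddles;

		/* Proposed test: destroy only a region that is contiguous. */
		bool fixed_destroy = fits && !straddles;

		printf("straddles=%d buggy_destroy=%d fixed_destroy=%d\n",
		       straddles, buggy_destroy, fixed_destroy);
		assert(!fixed_destroy || !straddles);
		return 0;
	}

With the values above, buggy_destroy is true even though the region straddles, which is exactly the inconsistency the debug kernel caught; fixed_destroy is false, so the non-contiguous region is left alone.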