Re: i915 "GPU HANG", bisected to a2daa27c0c61 "swiotlb: simplify swiotlb_max_segment"
- To: Christoph Hellwig <hch@xxxxxx>
- From: Jan Beulich <jbeulich@xxxxxxxx>
- Date: Tue, 18 Oct 2022 10:57:37 +0200
- Cc: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>, Anshuman Khandual <anshuman.khandual@xxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>, regressions@xxxxxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxxx, iommu@xxxxxxxxxxxxxxx, Robert Beckett <bob.beckett@xxxxxxxxxxxxx>, Jani Nikula <jani.nikula@xxxxxxxxxxxxxxx>, Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>, Rodrigo Vivi <rodrigo.vivi@xxxxxxxxx>, Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxxxxxxxx>, Matthew Auld <matthew.auld@xxxxxxxxx>, intel-gfx@xxxxxxxxxxxxxxxxxxxxx, dri-devel@xxxxxxxxxxxxxxxxxxxxx, Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>
- Delivery-date: Tue, 18 Oct 2022 08:57:53 +0000
- List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
On 18.10.2022 10:24, Christoph Hellwig wrote:
> @@ -127,19 +128,22 @@ static inline unsigned int i915_sg_dma_sizes(struct scatterlist *sg)
> return page_sizes;
> }
>
> -static inline unsigned int i915_sg_segment_size(void)
> +static inline unsigned int i915_sg_segment_size(struct device *dev)
> {
> - unsigned int size = swiotlb_max_segment();
> -
> - if (size == 0)
> - size = UINT_MAX;
> -
> - size = rounddown(size, PAGE_SIZE);
> - /* swiotlb_max_segment_size can return 1 byte when it means one page. */
> - if (size < PAGE_SIZE)
> - size = PAGE_SIZE;
> -
> - return size;
> + size_t max = min_t(size_t, UINT_MAX, dma_max_mapping_size(dev));
> +
> + /*
> + * Xen on x86 can reshuffle pages under us. The DMA API takes
> + * care of that both in dma_alloc_* (by calling into the hypervisor
> + * to make the pages contiguous) and in dma_map_* (by bounce buffering).
> + * But i915 ignores the coherency aspects of the DMA API and
> + * thus can't cope with bounce buffering actually happening, so add
> + * a hack here to force small allocations and mapping when running on
> + * Xen. (good luck with TDX, btw --hch)
> + */
> + if (IS_ENABLED(CONFIG_X86) && xen_domain())
> + max = PAGE_SIZE;
> + return round_down(max, PAGE_SIZE);
> }
Shouldn't you then use xen_pv_domain() here, and - if you really want
IS_ENABLED() as well - CONFIG_XEN_PV?
Jan