[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

[PATCH 03/12] swiotlb-xen: maintain slab count properly


  • To: Juergen Gross <jgross@xxxxxxxx>, Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 7 Sep 2021 14:05:12 +0200
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, lkml <linux-kernel@xxxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 07 Sep 2021 12:05:24 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Generic swiotlb code makes sure to keep the slab count a multiple of the
number of slabs per segment. Yet even without checking whether any such
assumption is made elsewhere, it is easy to see that xen_swiotlb_fixup()
might alter unrelated memory when calling xen_create_contiguous_region()
for the last segment, when that's not a full one - the function acts on
full order-N regions, not individual pages.
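
For concreteness, a minimal userspace sketch (not part of the patch) of
that over-reach, assuming the usual x86 values PAGE_SHIFT = 12,
IO_TLB_SHIFT = 11 (2 KiB slabs) and IO_TLB_SEGSIZE = 128; get_order()
is re-implemented here only for illustration:

#include <stdio.h>

#define PAGE_SHIFT      12
#define IO_TLB_SHIFT    11
#define IO_TLB_SEGSIZE  128

/* Illustrative stand-in for the kernel's get_order(). */
static unsigned int get_order(unsigned long size)
{
        unsigned int order = 0;

        size = (size - 1) >> PAGE_SHIFT;
        while (size) {
                order++;
                size >>= 1;
        }
        return order;
}

int main(void)
{
        /* Hypothetical partial last segment of 65 slabs. */
        unsigned long slabs = 65;
        unsigned long wanted = slabs << IO_TLB_SHIFT;
        unsigned long acted = 1UL << (get_order(wanted) + PAGE_SHIFT);

        /* 133120 bytes wanted, but a full order-6 (262144 byte) region
         * gets exchanged - 129024 bytes beyond the buffer's end. */
        printf("wanted %lu, acted on %lu, overshoot %lu\n",
               wanted, acted, acted - wanted);
        return 0;
}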

Align the slab count suitably when halving it for a retry. Add a
build-time check and a runtime one. Replace the no-longer-useful local
variable "slabs" with an "order" variable calculated just once, outside
of the loop. Reuse "order" when calculating "dma_bits", and change the
types of both "dma_bits" and "i" while touching this code anyway.
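
As an illustration of the retry change only (not part of the patch):
with a hypothetical nslabs of 17 * IO_TLB_SEGSIZE, plain halving yields
1088 slabs (8.5 segments), while the ALIGN()ed variant rounds that back
up to a whole number of segments. ALIGN() and max() are open-coded here
for a standalone build:

#include <stdio.h>

#define IO_TLB_SEGSIZE  128UL
#define ALIGN(x, a)     (((x) + (a) - 1) & ~((a) - 1))
#define MAX(a, b)       ((a) > (b) ? (a) : (b))

int main(void)
{
        unsigned long nslabs = 17 * IO_TLB_SEGSIZE;     /* 2176 */

        /* Old: 2176 >> 1 = 1088, i.e. 8.5 segments. */
        printf("old retry: %lu\n", MAX(1024UL, nslabs >> 1));
        /* New: ALIGN(1088, 128) = 1152, i.e. 9 full segments. */
        printf("new retry: %lu\n",
               MAX(1024UL, ALIGN(nslabs >> 1, IO_TLB_SEGSIZE)));
        return 0;
}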

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -106,27 +106,26 @@ static int is_xen_swiotlb_buffer(struct
 
 static int xen_swiotlb_fixup(void *buf, unsigned long nslabs)
 {
-       int i, rc;
-       int dma_bits;
+       int rc;
+       unsigned int order = get_order(IO_TLB_SEGSIZE << IO_TLB_SHIFT);
+       unsigned int i, dma_bits = order + PAGE_SHIFT;
        dma_addr_t dma_handle;
        phys_addr_t p = virt_to_phys(buf);
 
-       dma_bits = get_order(IO_TLB_SEGSIZE << IO_TLB_SHIFT) + PAGE_SHIFT;
+       BUILD_BUG_ON(IO_TLB_SEGSIZE & (IO_TLB_SEGSIZE - 1));
+       BUG_ON(nslabs % IO_TLB_SEGSIZE);
 
        i = 0;
        do {
-               int slabs = min(nslabs - i, (unsigned long)IO_TLB_SEGSIZE);
-
                do {
                        rc = xen_create_contiguous_region(
-                               p + (i << IO_TLB_SHIFT),
-                               get_order(slabs << IO_TLB_SHIFT),
+                               p + (i << IO_TLB_SHIFT), order,
                                dma_bits, &dma_handle);
                } while (rc && dma_bits++ < MAX_DMA_BITS);
                if (rc)
                        return rc;
 
-               i += slabs;
+               i += IO_TLB_SEGSIZE;
        } while (i < nslabs);
        return 0;
 }
@@ -210,7 +209,7 @@ retry:
 error:
        if (repeat--) {
                /* Min is 2MB */
-               nslabs = max(1024UL, (nslabs >> 1));
+               nslabs = max(1024UL, ALIGN(nslabs >> 1, IO_TLB_SEGSIZE));
                bytes = nslabs << IO_TLB_SHIFT;
                pr_info("Lowering to %luMB\n", bytes >> 20);
                goto retry;
@@ -245,7 +244,7 @@ retry:
                memblock_free(__pa(start), PAGE_ALIGN(bytes));
                if (repeat--) {
                        /* Min is 2MB */
-                       nslabs = max(1024UL, (nslabs >> 1));
+                       nslabs = max(1024UL, ALIGN(nslabs >> 1, IO_TLB_SEGSIZE));
                        bytes = nslabs << IO_TLB_SHIFT;
                        pr_info("Lowering to %luMB\n", bytes >> 20);
                        goto retry;
