
Re: v5.4.289 failed to boot with error megasas_build_io_fusion 3219 sge_count (-12) is out of range


  • To: Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • From: Jürgen Groß <jgross@xxxxxxxx>
  • Date: Fri, 31 Jan 2025 07:38:53 +0100
  • Cc: Harshvardhan Jha <harshvardhan.j.jha@xxxxxxxxxx>, Greg KH <gregkh@xxxxxxxxxxxxxxxxxxx>, Konrad Wilk <konrad.wilk@xxxxxxxxxx>, Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "linux-kernel@xxxxxxxxxxxxxxx" <linux-kernel@xxxxxxxxxxxxxxx>, Harshit Mogalapalli <harshit.m.mogalapalli@xxxxxxxxxx>, stable@xxxxxxxxxxxxxxx
  • Delivery-date: Fri, 31 Jan 2025 06:39:00 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 30.01.25 21:28, Stefano Stabellini wrote:
On Thu, 30 Jan 2025, Jürgen Groß wrote:
Can you try the attached patch, please? I don't have a system at hand
showing the problem.

From cff43e997f79a95dc44e02debaeafe5f127f40bb Mon Sep 17 00:00:00 2001
From: Juergen Gross <jgross@xxxxxxxx>
Date: Thu, 30 Jan 2025 09:56:57 +0100
Subject: [PATCH] x86/xen: allow larger contiguous memory regions in PV guests

Today a PV guest (including dom0) can create contiguous memory
regions for DMA buffers of at most 2MB. This has led to problems at
least with the megaraid_sas driver, which wants to allocate a 2.3MB
DMA buffer.

The limiting factor is the frame array used for the hypercall that
makes the memory contiguous: it is a static array in mmu_pv.c with
just 512 entries.
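
For reference, the arithmetic behind these limits (4kB pages on x86):

	512 entries * 4kB per frame = 2MB  (order 9)
	a 2.3MB buffer rounds up to order 10, i.e. 1024 frames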

If a contiguous memory area larger than the initially supported 2MB
is requested, allocate a larger buffer for the frame list. Note that
such an allocation is attempted only after memory management has been
fully initialized, which is tested via the early_boot_irqs_disabled
flag.

Fixes: 9f40ec84a797 ("xen/swiotlb: add alignment check for dma buffers")
Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
---
Note that the "Fixes:" tag is not really correct, as that patch didn't
introduce the problem, but rather made it visible. OTOH it is the best
indicator we have to identify kernel versions this patch should be
backported to.
---
  arch/x86/xen/mmu_pv.c | 44 ++++++++++++++++++++++++++++++++++++-------
  1 file changed, 37 insertions(+), 7 deletions(-)

diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index 55a4996d0c04..62aec29b8174 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -2200,8 +2200,10 @@ void __init xen_init_mmu_ops(void)
 }
 
 /* Protected by xen_reservation_lock. */
-#define MAX_CONTIG_ORDER 9 /* 2MB */
-static unsigned long discontig_frames[1<<MAX_CONTIG_ORDER];
+#define MIN_CONTIG_ORDER 9 /* 2MB */
+static unsigned int discontig_frames_order = MIN_CONTIG_ORDER;
+static unsigned long discontig_frames_early[1UL << MIN_CONTIG_ORDER];
+static unsigned long *discontig_frames = discontig_frames_early;
 
 #define VOID_PTE (mfn_pte(0, __pgprot(0)))
 
 static void xen_zap_pfn_range(unsigned long vaddr, unsigned int order,
@@ -2319,18 +2321,44 @@ int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
 				 unsigned int address_bits,
 				 dma_addr_t *dma_handle)
 {
-	unsigned long *in_frames = discontig_frames, out_frame;
+	unsigned long *in_frames, out_frame;
+	unsigned long *new_array, *old_array;
 	unsigned long  flags;
 	int            success;
 	unsigned long vstart = (unsigned long)phys_to_virt(pstart);
 
-	if (unlikely(order > MAX_CONTIG_ORDER))
-		return -ENOMEM;
+	if (unlikely(order > discontig_frames_order)) {
+		if (early_boot_irqs_disabled)
+			return -ENOMEM;
+
+		new_array = vmalloc(sizeof(unsigned long) * (1UL << order));
+
+		if (!new_array)
+			return -ENOMEM;
+
+		spin_lock_irqsave(&xen_reservation_lock, flags);
+
+		if (order > discontig_frames_order) {


This second if check should not be needed because it is the same as the
outer if check.

It is needed: inside the locked region I have to verify that no
concurrent call has already updated the buffer, possibly with an even
larger size.
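
The pattern is the usual double-checked update: allocate outside the
lock, re-check under the lock, and free whichever array lost the race
after the lock is dropped. A sketch in the same style (illustrative
only; the quoted hunk above is cut off, so this is not the actual
patch continuation):

	spin_lock_irqsave(&xen_reservation_lock, flags);

	if (order > discontig_frames_order) {
		/* We won the race: publish the larger array, remember
		 * the old one unless it is the static early array. */
		old_array = (discontig_frames == discontig_frames_early)
			    ? NULL : discontig_frames;
		discontig_frames = new_array;
		discontig_frames_order = order;
	} else {
		/* A concurrent caller already installed an array that
		 * is large enough; ours is surplus. */
		old_array = new_array;
	}

	spin_unlock_irqrestore(&xen_reservation_lock, flags);

	vfree(old_array);	/* vfree(NULL) is a no-op */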


Juergen

