Re: GNTTABOP_setup_table yields -1 PFNs
On 08.07.2024 11:09, Jan Beulich wrote:
> On 06.07.2024 04:22, Taylor R Campbell wrote:
>> On a Xen 4.14 host (with extraversion=.0.88.g1d1d1f53), with version 1
>> grant tables, where GNTTABOP_query_size initially returns nr_frames=32
>> max_nr_frames=64, a NetBSD guest repeatedly queries
>> GNTTABOP_setup_table for successively larger nr_frames from 1 up.
>
> First question: Is there some earlier GNTTABOP_setup_table that you invoke?
> I'd expect (and also observe) nr_frames=1 initially.
>
> Second: The version you name is pretty unclear from an upstream perspective.
> Leaving aside that 4.14 is out of support, it's entirely unclear whether you
> at least have all the bug fixes in place that we have upstream (4.14.6).
> Without those it's hard to see what you're asking for.
>
>> The guest initially gets arrays of valid-looking PFNs. But then at
>> nr_frames=33, the PFNs [0] through [31] in the resulting array are
>> valid, but PFN [32] is -1, i.e. all bits set.
>>
>> GNTTABOP_setup_table returned 0 and op.status = GNTST_okay, so it
>> didn't fail -- it just returned an invalid PFN. And _after_
>> GNTTABOP_setup_table yields -1 as a PFN, GNTTABOP_query_size returns
>> nr_frames=33 max_nr_frames=64, so the host thinks it has successfully
>> allocated more frames.
>>
>> What could cause the host to return a PFN of -1? Is there anything the
>> guest does that could provoke this? Are there any diagnostics that
>> the guest could print to help track this down? (I don't control the
>> host.) Should a guest just check for -1 and stop as if it had hit
>> max_nr_frames?
>
> I'm afraid that, for the moment, from just the information provided, I can't
> reproduce this using a simple patch on top of XTF's self-test (see below).
> Neither with a 64-bit PV guest nor with a 32-bit one. I've been doing
> this with a pretty recent 4.19 Xen, though.

Doesn't reproduce for me with 4.14.6 either.

Jan
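[Editorial note: for readers following along, below is a minimal, untested
sketch of the probing loop Taylor describes, including the fallback he asks
about (treating an all-ones entry as if max_nr_frames had been reached). It
uses Linux-style guest wrappers and header paths; NetBSD's differ.
probe_grant_frames(), MAX_FRAMES, and the frames[] array are illustrative
names, not code from this thread.]

/*
 * Hedged sketch only: Linux-style guest C.  Not code from this thread.
 */
#include <xen/interface/xen.h>
#include <xen/interface/grant_table.h>
#include <asm/xen/hypercall.h>

#define MAX_FRAMES 64   /* the max_nr_frames reported by GNTTABOP_query_size */

static xen_pfn_t frames[MAX_FRAMES];

/*
 * Ask for successively larger grant-table sizes, from 1 up, as in the
 * report above.  Returns the number of frames with valid-looking PFNs,
 * or -1 if the hypercall itself fails.
 */
static int probe_grant_frames(void)
{
    uint32_t nr, i;

    for (nr = 1; nr <= MAX_FRAMES; nr++) {
        struct gnttab_setup_table setup = {
            .dom = DOMID_SELF,
            .nr_frames = nr,
        };
        set_xen_guest_handle(setup.frame_list, frames);

        if (HYPERVISOR_grant_table_op(GNTTABOP_setup_table, &setup, 1) ||
            setup.status != GNTST_okay)
            return -1;

        /*
         * The anomaly under discussion: the hypercall returns 0 and
         * status is GNTST_okay, yet an entry comes back as all bits
         * set.  The fallback asked about above is to stop here, as if
         * max_nr_frames had been reached, and use only the i frames
         * seen so far.
         */
        for (i = 0; i < nr; i++)
            if (frames[i] == ~(xen_pfn_t)0)
                return i;
    }
    return MAX_FRAMES;
}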