Re: hvmloader - allow_memory_relocate overlaps
On 2024-02-07 16:02, Jan Beulich wrote:
> On 04.01.2024 14:16, Jan Beulich wrote:
> > On 22.12.2023 16:49, Neowutran wrote:
> >> Full logs without my patch to set allow-memory-relocate
> >> (https://github.com/neowutran/qubes-vmm-xen/blob/allowmemoryrelocate/ALLOWMEMORYRELOCATE.patch)
> >> https://pastebin.com/gQGg55WZ
> >> (GPU passthrough doesn't work, hvmloader overlaps with guest memory)
> >
> > So there are oddities, but I can't spot any overlaps. What's odd is that
> > the two blocks already above 4Gb are accounted for (and later relocated)
> > when calculating total MMIO size. BARs of size 2Gb and more shouldn't be
> > accounted for at all when deciding whether low RAM needs relocating, as
> > those can't live below 4Gb anyway. I vaguely recall pointing this out
> > years ago, but it was thought we'd get away for a fair while. What's
> > further odd is where the two blocks are moved to: F800000 moves (down)
> > to C00000, while the smaller FC00000 moves further up to FC80000.
> >
> > I'll try to get to addressing at least the first oddity; if I can figure
> > out why the second one occurs, I may try to address that as well.
>
> Could you give the patch below a try? I don't have a device with large
> enough a BAR that I could sensibly pass through to a guest, so I was
> only able to build-test the change.

Hi, and thanks. I just tested it; it indeed works well when the GPU has
a BAR > 1 GB.

------------
Setup: I removed my patch (

--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -2431,6 +2431,10 @@ void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state *sdss)
                                         libxl__xs_get_dompath(gc, guest_domid)),
                          "%s",
                          libxl_bios_type_to_string(guest_config->b_info.u.hvm.bios));
+        libxl__xs_printf(gc, XBT_NULL,
+                         libxl__sprintf(gc, "%s/hvmloader/allow-memory-relocate", libxl__xs_get_dompath(gc, guest_domid)),
+                         "%d",
+                         0);
     }
     ret = xc_domain_set_target(ctx->xch, dm_domid, guest_domid);
     if (ret<0) {

) and applied your suggested "skip huge BARs" patch.

My GPU: Nvidia 4080
-------------
When the option "Resizable BAR support" is activated in my BIOS, the
BAR1 size of my GPU is reported to be 16GB. With this patch, GPU
passthrough works.

When the option "Resizable BAR support" is deactivated in my BIOS, the
BAR1 size of my GPU is reported to be 256MB. With this patch, GPU
passthrough doesn't work (same crash as before).

(Note: the option "Resizable BAR support" may or may not exist
depending on the motherboard model; on some boards it is 'hardcoded'
to activated, on others 'hardcoded' to deactivated.)

> Jan
>
> hvmloader/PCI: skip huge BARs in certain calculations
>
> BARs of size 2Gb and up can't possibly fit below 4Gb: Both the bottom of
> the lower 2Gb range and the top of the higher 2Gb range have special
> purpose. Don't even have them influence whether to (perhaps) relocate
> low RAM.
>
> Reported-by: Neowutran <xen@xxxxxxxxxxxxx>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> ---
> If we wanted to fit e.g. multiple 1Gb BARs, it would likely be prudent
> to similarly avoid low RAM relocation in the first place. Yet accounting
> for things differently depending on how many large BARs there are would
> require more intrusive code changes.
>
> That said, I'm open to further lowering of the threshold. That'll
> require different justification then, though.
>
> --- a/tools/firmware/hvmloader/pci.c
> +++ b/tools/firmware/hvmloader/pci.c
> @@ -33,6 +33,13 @@ uint32_t pci_mem_start = HVM_BELOW_4G_MM
>  const uint32_t pci_mem_end = RESERVED_MEMBASE;
>  uint64_t pci_hi_mem_start = 0, pci_hi_mem_end = 0;
>  
> +/*
> + * BARs larger than this value are put in 64-bit space unconditionally. That
> + * is, such BARs also don't play into the determination of how big the lowmem
> + * MMIO hole needs to be.
> + */
> +#define HUGE_BAR_THRESH GB(1)
> +
>  enum virtual_vga virtual_vga = VGA_none;
>  unsigned long igd_opregion_pgbase = 0;
>  
> @@ -286,9 +293,11 @@ void pci_setup(void)
>          bars[i].bar_reg = bar_reg;
>          bars[i].bar_sz  = bar_sz;
>  
> -        if ( ((bar_data & PCI_BASE_ADDRESS_SPACE) ==
> -              PCI_BASE_ADDRESS_SPACE_MEMORY) ||
> -             (bar_reg == PCI_ROM_ADDRESS) )
> +        if ( is_64bar && bar_sz > HUGE_BAR_THRESH )
> +            bar64_relocate = 1;
> +        else if ( ((bar_data & PCI_BASE_ADDRESS_SPACE) ==
> +                   PCI_BASE_ADDRESS_SPACE_MEMORY) ||
> +                  (bar_reg == PCI_ROM_ADDRESS) )
>              mmio_total += bar_sz;
>  
>          nr_bars++;
> @@ -367,7 +376,7 @@ void pci_setup(void)
>          pci_mem_start = hvm_info->low_mem_pgend << PAGE_SHIFT;
>      }
>  
> -    if ( mmio_total > (pci_mem_end - pci_mem_start) )
> +    if ( mmio_total > (pci_mem_end - pci_mem_start) || bar64_relocate )
>      {
>          printf("Low MMIO hole not large enough for all devices,"
>                 " relocating some BARs to 64-bit\n");
> @@ -446,8 +455,9 @@ void pci_setup(void)
>           * the code here assumes it to be.)
>           * Should either of those two conditions change, this code will break.
>           */
> -        using_64bar = bars[i].is_64bar && bar64_relocate
> -            && (mmio_total > (mem_resource.max - mem_resource.base));
> +        using_64bar = bars[i].is_64bar && bar64_relocate &&
> +                      (mmio_total > (mem_resource.max - mem_resource.base) ||
> +                       bar_sz > HUGE_BAR_THRESH);
>          bar_data = pci_readl(devfn, bar_reg);
>  
>          if ( (bar_data & PCI_BASE_ADDRESS_SPACE) ==
> @@ -467,7 +477,8 @@ void pci_setup(void)
>                  resource = &mem_resource;
>                  bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
>              }
> -            mmio_total -= bar_sz;
> +            if ( bar_sz <= HUGE_BAR_THRESH )
> +                mmio_total -= bar_sz;
>          }
>          else
>          {
>
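
To make the new accounting rule concrete, here is a small standalone
sketch (illustrative code only, not hvmloader code: account_bar() is a
made-up helper mirroring the patched branch of the BAR scan, and it
drops the memory-space vs. I/O-space check the real code also
performs). The two BAR1 sizes are the ones reported above:

/*
 * Illustrative sketch: 64-bit BARs larger than HUGE_BAR_THRESH get
 * flagged for 64-bit relocation and are NOT added to mmio_total, so
 * they no longer influence whether low RAM gets relocated.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define GB(n)           ((uint64_t)(n) << 30)
#define HUGE_BAR_THRESH GB(1)

static void account_bar(bool is_64bar, uint64_t bar_sz,
                        uint64_t *mmio_total, bool *bar64_relocate)
{
    if ( is_64bar && bar_sz > HUGE_BAR_THRESH )
        *bar64_relocate = true;   /* force above 4Gb, low hole untouched */
    else
        *mmio_total += bar_sz;    /* counted against the low MMIO hole */
}

int main(void)
{
    uint64_t mmio_total = 0;
    bool relocate = false;

    /* Resizable BAR on: BAR1 reports 16GB -> huge, relocated. */
    account_bar(true, GB(16), &mmio_total, &relocate);
    printf("16GB BAR1:  mmio_total=%" PRIu64 ", relocate64=%d\n",
           mmio_total, relocate);

    mmio_total = 0;
    relocate = false;

    /*
     * Resizable BAR off: BAR1 reports 256MB -> below the threshold,
     * still accounted in the low hole; this path is unchanged by the
     * patch, matching the "same crash as before" observation.
     */
    account_bar(true, 256ULL << 20, &mmio_total, &relocate);
    printf("256MB BAR1: mmio_total=%" PRIu64 ", relocate64=%d\n",
           mmio_total, relocate);

    return 0;
}

The 16GB case leaves mmio_total at zero (the huge BAR no longer presses
on the low hole), while the 256MB case still contributes its full size,
which is why the patch helps only with "Resizable BAR support" enabled.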
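
For completeness: the removed libxl patch above simply wrote "0" into
the per-domain hvmloader/allow-memory-relocate xenstore key. A
hypothetical equivalent (my sketch, not something from this thread)
would be to write the key from dom0 with the public libxenstore API
before hvmloader runs, e.g. against a domain created paused; whether
that timing is achievable depends on the toolstack setup:

/*
 * Hypothetical dom0 helper: set the "allow-memory-relocate" override
 * for a given domid via libxenstore -- the same key the libxl patch
 * above wrote. Must run before hvmloader reads the key.
 */
#include <stdio.h>
#include <string.h>
#include <xenstore.h>

int main(int argc, char *argv[])
{
    char path[64];
    struct xs_handle *xsh;
    const char *val = "0";          /* 0 = forbid low-RAM relocation */

    if ( argc != 2 )
    {
        fprintf(stderr, "usage: %s <domid>\n", argv[0]);
        return 1;
    }

    snprintf(path, sizeof(path),
             "/local/domain/%s/hvmloader/allow-memory-relocate", argv[1]);

    xsh = xs_open(0);               /* connect to xenstored */
    if ( !xsh )
    {
        perror("xs_open");
        return 1;
    }

    if ( !xs_write(xsh, XBT_NULL, path, val, strlen(val)) )
    {
        perror("xs_write");
        xs_close(xsh);
        return 1;
    }

    xs_close(xsh);
    return 0;
}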