Re: [Xen-devel] [PATCH v3 8/8] libxl, hvmloader: Don't relocate memory for MMIO hole
>>> On 20.06.13 at 18:33, George Dunlap <george.dunlap@xxxxxxxxxxxxx> wrote:
> @@ -57,6 +59,32 @@ void pci_setup(void)
>      } *bars = (struct bars *)scratch_start;
>      unsigned int i, nr_bars = 0;
> 
> +    const char *s;
> +    /*
> +     * Do we allow hvmloader to relocate guest memory in order to
> +     * increase the size of the lowmem MMIO hole? Defaulting to 1
> +     * here will mean that non-libxl toolstacks (including xend and
> +     * home-grown ones) will experience this series as "no change".
> +     * It does mean that those using qemu-xen will still experience
> +     * the bug (described below); but it also means that those using
> +     * qemu-traditional will *not* experience any change; and it also
> +     * means that there is a work-around for those using qemu-xen,
> +     * namely switching to qemu-traditional.
> +     *
> +     * If we defaulted to 0, and failing to resize the hole caused any
> +     * problems with qemu-traditional, then there is no work-around.
> +     *
> +     * Since xend can't talk to qemu-traditional, I think this is the

qemu-xen?

> +     * option that will have the least impact.
> +     */
> +    bool allow_memory_relocate = 1;
> +
> +    s = xenstore_read(HVM_XS_ALLOW_MEMORY_RELOCATE, NULL);
> +    if ( s )
> +        allow_memory_relocate = strtoll(s, NULL, 0);
> +    printf("Relocating guest memory for lowmem MMIO space %s\n",
> +           allow_memory_relocate?"enabled":"disabled");
> +
>      /* Program PCI-ISA bridge with appropriate link routes. */
>      isa_irq = 0;
>      for ( link = 0; link < 4; link++ )
> @@ -208,8 +236,25 @@ void pci_setup(void)
>          pci_writew(devfn, PCI_COMMAND, cmd);
>      }
> 
> -    while ( (mmio_total > (pci_mem_end - pci_mem_start)) &&
> -            ((pci_mem_start << 1) != 0) )
> +    /*
> +     * At the moment qemu-xen can't deal with relocated memory regions.
> +     * It's too close to the release to make a proper fix; for now,
> +     * only allow the MMIO hole to grow large enough to move guest memory
> +     * if we're running qemu-traditional. Items that don't fit will be
> +     * relocated into the 64-bit address space.
> +     *
> +     * This loop now does the following:
> +     * - If allow_memory_relocate, increase the MMIO hole until it's
> +     *   big enough, or until it's 2GiB
> +     * - If !allow_memory_relocate, increase the MMIO hole until it's
> +     *   big enough, or until it's 2GiB, or until it overlaps guest
> +     *   memory
> +     */
> +    while ( (mmio_total > (pci_mem_end - pci_mem_start))
> +            && ((pci_mem_start << 1) != 0)
> +            && (allow_memory_relocate
> +                || (((pci_mem_start << 1) >> PAGE_SHIFT)
> +                    >= hvm_info->low_mem_pgend)) )
>          pci_mem_start <<= 1;
> 
>      if ( mmio_total > (pci_mem_end - pci_mem_start) ) {
> diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
> index ac1f90e..342d2ce 100644
> --- a/tools/libxl/libxl_dm.c
> +++ b/tools/libxl/libxl_dm.c
> @@ -1154,6 +1154,12 @@ void libxl__spawn_local_dm(libxl__egc *egc, libxl__dm_spawn_state *dmss)
>          libxl__xs_write(gc, XBT_NULL,
>                          libxl__sprintf(gc, "%s/hvmloader/bios", path),
>                          "%s", libxl_bios_type_to_string(b_info->u.hvm.bios));
> +        /* Disable relocating memory to make the MMIO hole larger
> +         * unless we're running qemu-traditional */
> +        libxl__xs_write(gc, XBT_NULL,
> +                        libxl__sprintf(gc, "%s/hvmloader/allow-memory-relocate", path),
> +                        "%d",
> +                        b_info->device_model_version==LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL);
>          free(path);
>      }
> 
> diff --git a/xen/include/public/hvm/hvm_xs_strings.h b/xen/include/public/hvm/hvm_xs_strings.h
> index 9042303..4de5881 100644
> --- a/xen/include/public/hvm/hvm_xs_strings.h
> +++ b/xen/include/public/hvm/hvm_xs_strings.h
> @@ -28,6 +28,7 @@
>  #define HVM_XS_HVMLOADER               "hvmloader"
>  #define HVM_XS_BIOS                    "hvmloader/bios"
>  #define HVM_XS_GENERATION_ID_ADDRESS   "hvmloader/generation-id-address"
> +#define HVM_XS_ALLOW_MEMORY_RELOCATE   "hvmloader/allow-memory-relocate"
> 
>  /* The following values allow additional ACPI tables to be added to the
>   * virtual ACPI BIOS that hvmloader constructs. The values specify the guest
> --
> 1.7.9.5

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
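A minimal, self-contained sketch of the hole-sizing loop quoted above, so the two behaviours can be compared outside hvmloader. Only the loop condition is taken from the patch; the starting address, hole bound, MMIO requirement and low_mem_pgend value below are illustrative assumptions, not hvmloader's real configuration.

```c
/* Sketch: simulate the MMIO-hole sizing loop from the patch above.
 * All input values are made-up examples. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12

static uint32_t grow_hole(uint32_t pci_mem_start, uint32_t pci_mem_end,
                          uint64_t mmio_total, uint64_t low_mem_pgend,
                          bool allow_memory_relocate)
{
    /*
     * Keep doubling the hole (moving its start down) while more MMIO space
     * is needed.  "(pci_mem_start << 1) != 0" stops at a 2GiB start, since
     * the shift wraps to zero in 32-bit arithmetic (hence uint32_t here).
     * Without allow_memory_relocate, also stop once the next doubling would
     * put the start below the end of low guest memory.
     */
    while ( (mmio_total > (pci_mem_end - pci_mem_start))
            && ((pci_mem_start << 1) != 0)
            && (allow_memory_relocate
                || (((pci_mem_start << 1) >> PAGE_SHIFT)
                    >= low_mem_pgend)) )
        pci_mem_start <<= 1;

    return pci_mem_start;
}

int main(void)
{
    /* Example guest: hole initially [0xf0000000, 0xfc000000), low memory
     * ending at page 0xe0000 (i.e. 0xe0000000), BARs needing 512MiB. */
    uint32_t s;

    s = grow_hole(0xf0000000u, 0xfc000000u, 512u << 20, 0xe0000, true);
    printf("allow-memory-relocate=1: hole starts at 0x%08x\n", (unsigned)s);

    s = grow_hole(0xf0000000u, 0xfc000000u, 512u << 20, 0xe0000, false);
    printf("allow-memory-relocate=0: hole starts at 0x%08x\n", (unsigned)s);

    return 0;
}
```

With relocation disallowed the hole stops growing at 0xe0000000 in this example (still short of the 512MiB needed, so the remaining BARs would go to the 64-bit window), while with relocation allowed it continues down to 0xc0000000 and any guest memory in the way would have to be moved.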
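The override itself is just a xenstore key under the guest's domain path, read by hvmloader via xenstore_read() before BAR placement. As a hedged illustration of how a non-libxl toolstack could set it from dom0, assuming the libxenstore API below (the domid, path buffer size and minimal error handling are my own example choices, not taken from the patch):

```c
/* Illustrative sketch: set hvmloader/allow-memory-relocate for a guest from
 * dom0 using libxenstore, before the guest's hvmloader runs.  The domid is
 * an example value; a real toolstack would do this during domain build. */
#include <stdio.h>
#include <string.h>
#include <xenstore.h>

int main(void)
{
    unsigned int domid = 1;        /* example guest domain ID */
    char path[64];
    struct xs_handle *xsh = xs_open(0);

    if ( !xsh )
        return 1;

    /* hvmloader reads HVM_XS_ALLOW_MEMORY_RELOCATE relative to the guest's
     * xenstore home, i.e. this absolute path. */
    snprintf(path, sizeof(path),
             "/local/domain/%u/hvmloader/allow-memory-relocate", domid);

    /* "0" keeps guest memory where it is; "1" lets hvmloader relocate it. */
    if ( !xs_write(xsh, XBT_NULL, path, "0", strlen("0")) )
        fprintf(stderr, "failed to write %s\n", path);

    xs_close(xsh);
    return 0;
}
```

The same key can also be written with the xenstore-write command-line tool; with the libxl hunk above, xl sets it automatically based on the device model in use.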