Re: [Xen-devel] [PATCH v3 2/8] hvmloader: Make the printfs more informative

On Fri, 21 Jun 2013, George Dunlap wrote:
> On 21/06/13 10:35, George Dunlap wrote:
> > On 20/06/13 18:12, Stefano Stabellini wrote:
> > > On Thu, 20 Jun 2013, George Dunlap wrote:
> > > > * Warn that you're relocating some BARs to 64-bit
> > > >
> > > > * Warn that you're relocating guest pages, and how many
> > > >
> > > > * Include upper 32-bits of the base register when printing the bar
> > > >   placement info
> > > >
> > > > Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
> > > > CC: Ian Campbell <ian.campbell@xxxxxxxxxx>
> > > > CC: Ian Jackson <ian.jackson@xxxxxxxxxx>
> > > > CC: Stefano Stabellini <stefano.stabellini@xxxxxxxxxx>
> > > > CC: Hanweidong <hanweidong@xxxxxxxxxx>
> > > > CC: Keir Fraser <keir@xxxxxxx>
> > > > ---
> > > >  tools/firmware/hvmloader/pci.c |   13 ++++++++++---
> > > >  1 file changed, 10 insertions(+), 3 deletions(-)
> > > >
> > > > diff --git a/tools/firmware/hvmloader/pci.c b/tools/firmware/hvmloader/pci.c
> > > > index aa54bc1..d8592b0 100644
> > > > --- a/tools/firmware/hvmloader/pci.c
> > > > +++ b/tools/firmware/hvmloader/pci.c
> > > > @@ -213,10 +213,17 @@ void pci_setup(void)
> > > >              ((pci_mem_start << 1) != 0) )
> > > >          pci_mem_start <<= 1;
> > > > -    if ( (pci_mem_start << 1) != 0 )
> > > > +    if ( (pci_mem_start << 1) != 0 ) {
> > > > +        printf("Low MMIO hole not large enough for all devices,"
> > > > +               " relocating some BARs to 64-bit\n");
> > > >          bar64_relocate = 1;
> > > > +    }
> > > >      /* Relocate RAM that overlaps PCI space (in 64k-page chunks). */
> > > > +    if ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
> > > > +        printf("Relocating 0x%lx pages to highmem for lowmem MMIO hole\n",
> > > > +               hvm_info->low_mem_pgend - (pci_mem_start >> PAGE_SHIFT));
> > >
> > > Shouldn't this be:
> > >
> > >     min_t(unsigned int,
> > >           hvm_info->low_mem_pgend - (pci_mem_start >> PAGE_SHIFT),
> > >           (1u << 16) - 1);
> > >
> > > to match exactly what we do in the relocation code?
> >
> > No; the relocation is done in a loop which will run until the condition in
> > the if above is satisfied.
> >
> > We could, I suppose, do the printf on each iteration of the loop; if I'm
> > doing the math right*, the maximum iterations around the loop should be 8,
> > and a typical number would be just 1 or 2.
> >
> > * Maximum MMIO size: 2GiB == 1<<31.  In 4-k pages, that's 1<<(31-12) ==
> > 1<<19.  This will do a batch of 1<<16 pages at a time, leaving 1<<3
> > iterations maximum, or 8.  (1<<16 pages is 1<<(16+12) or 1<<28 bytes, or
> > 1<<8 == 256 megabytes moved at a time.)
> >
> > > Regarding the message, what about:
> > >
> > > printf("Relocating 0x%lx pages from 0x%lx to 0x%lx%lx to make room for a larger MMIO hole\n",
> > >        min_t(unsigned int,
> > >              hvm_info->low_mem_pgend - (pci_mem_start >> PAGE_SHIFT),
> > >              (1u << 16) - 1),
> > >        hvm_info->low_mem_pgend,
> > >        hvm_info->high_mem_pgend);
> >
> > Lemme see how it looks...
> >
> If you have a 2GiB MMIO hole, it looks like this:
> (XEN) HVM42: Relocating 0xffff pages from 0e0001000 to 20f800000 for lowmem MMIO hole
> (XEN) HVM42: Relocating 0xffff pages from 0d0002000 to 21f7ff000 for lowmem MMIO hole
> (XEN) HVM42: Relocating 0xffff pages from 0c0003000 to 22f7fe000 for lowmem MMIO hole
> (XEN) HVM42: Relocating 0xffff pages from 0b0004000 to 23f7fd000 for lowmem MMIO hole
> (XEN) HVM42: Relocating 0xffff pages from 0a0005000 to 24f7fc000 for lowmem MMIO hole
> (XEN) HVM42: Relocating 0xffff pages from 090006000 to 25f7fb000 for lowmem MMIO hole
> (XEN) HVM42: Relocating 0xffff pages from 080007000 to 26f7fa000 for lowmem MMIO hole
> (XEN) HVM42: Relocating 0x7 pages from 080000000 to 27f7f9000 for lowmem MMIO hole
>
> Kind of ugly; I think I liked it better with just one printf.

I agree
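
As a cross-check of the arithmetic in the thread, here is a minimal standalone C sketch. It is not hvmloader code: the start and end values are chosen to match the example log above, PAGE_SHIFT is assumed to be 12 (4k pages), and the per-pass cap of (1u << 16) - 1 pages mirrors the min_t() expression quoted from the relocation loop.

    /*
     * Standalone sketch, NOT hvmloader code: it only reproduces the batch
     * arithmetic discussed in the thread.  RAM overlapping the low MMIO
     * hole is moved to highmem in chunks of at most (1u << 16) - 1 pages
     * per pass, so a worst-case 2GiB hole (1 << 19 pages) takes on the
     * order of 8 passes.
     */
    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SHIFT 12

    int main(void)
    {
        /* Values chosen to match the example log above. */
        uint64_t pci_mem_start = 0x80000000ULL;               /* hole starts at 2GiB */
        uint64_t low_mem_pgend = 0xf0000000ULL >> PAGE_SHIFT; /* low RAM end, in pages */

        uint64_t remaining = low_mem_pgend - (pci_mem_start >> PAGE_SHIFT);
        unsigned iterations = 0;

        while ( remaining )
        {
            /* Same per-pass cap as the quoted min_t(): at most 0xffff pages. */
            uint64_t batch = (1u << 16) - 1;
            uint64_t nr = remaining < batch ? remaining : batch;

            printf("Relocating 0x%llx pages for lowmem MMIO hole\n",
                   (unsigned long long)nr);

            remaining -= nr;
            iterations++;
        }

        /* With these numbers: seven passes of 0xffff pages plus one of 0x7. */
        printf("%u iterations\n", iterations);
        return 0;
    }

Run as-is, this prints eight relocation lines (seven of 0xffff pages and a final one of 0x7 pages), matching the eight log lines shown above and George's estimate of at most about 8 iterations for a 2GiB hole.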