Re: [Xen-devel] [BUG 1747] Guest couldn't find bootable device with memory more than 3600M
>>> On 11.06.13 at 19:26, Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx> wrote:
> I went through the code that maps the PCI MMIO regions in hvmloader
> (tools/firmware/hvmloader/pci.c:pci_setup) and it looks like it already
> maps the PCI region to high memory if the PCI bar is 64-bit and the MMIO
> region is larger than 512MB.
>
> Maybe we could just relax this condition and map the device memory to
> high memory no matter the size of the MMIO region if the PCI bar is
> 64-bit?

I can only recommend not to: For one, guests not using PAE or PSE-36 can't
map such space at all (and older OSes may not properly deal with 64-bit BARs
at all). And then one would generally expect this allocation to be done top
down (to minimize the risk of running into RAM), and doing so is going to
present further risks of incompatibilities with guest OSes (Linux, for
example, learned only in 2.6.36 that PFNs in ioremap() can exceed 32 bits,
but even in 3.10-rc5 ioremap_pte_range(), while using "u64 pfn", passes the
PFN to pfn_pte(), the respective parameter of which is "unsigned long").

I think this ought to be done in an iterative process - if all MMIO regions
together don't fit below 4G, the biggest one should be moved up beyond 4G
first, followed by the next biggest one, etc. And, just like many BIOSes
have, there ought to be a guest (config) controlled option to shrink the RAM
portion below 4G, allowing more MMIO blocks to fit.

Finally, we shouldn't forget the option of not doing any assignment at all
in the BIOS, allowing/forcing the OS to use suitable address ranges. Of
course any OS is permitted to re-assign resources, but I think they will
frequently prefer to avoid re-assignment if it was already done by the BIOS.

Jan
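[Editor's note: the iterative placement Jan describes can be illustrated with
a minimal, self-contained C sketch. This is not hvmloader code; the struct
bar type, the place_bars() helper, and the MMIO hole size are hypothetical
names and figures chosen only to show the "move the biggest 64-bit-capable
BAR above 4G, then re-check" loop.]

    /*
     * Sketch of iterative BAR placement (hypothetical, not hvmloader code):
     * start with everything below 4G; while the low MMIO hole overflows,
     * relocate the largest remaining 64-bit-capable BAR above 4G.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct bar {
        uint64_t size;      /* BAR size in bytes, power of two */
        bool     is64;      /* 64-bit memory BAR? */
        bool     above_4g;  /* placement decision */
    };

    /* Space available for MMIO below 4G (illustrative figure: 256MB). */
    #define MMIO_HOLE_SIZE (0x100000000ULL - 0xf0000000ULL)

    static void place_bars(struct bar *bars, int n)
    {
        uint64_t below = 0;
        int i;

        for (i = 0; i < n; i++) {
            bars[i].above_4g = false;
            below += bars[i].size;
        }

        while (below > MMIO_HOLE_SIZE) {
            int biggest = -1;

            /* Find the largest BAR still below 4G that may be moved high. */
            for (i = 0; i < n; i++)
                if (bars[i].is64 && !bars[i].above_4g &&
                    (biggest < 0 || bars[i].size > bars[biggest].size))
                    biggest = i;

            if (biggest < 0)
                break;      /* nothing left that is allowed to go high */

            bars[biggest].above_4g = true;
            below -= bars[biggest].size;
        }
    }

    int main(void)
    {
        struct bar bars[] = {
            { 512u << 20, true,  false },   /* 512MB 64-bit BAR */
            { 256u << 20, true,  false },   /* 256MB 64-bit BAR */
            {  16u << 20, false, false },   /* 16MB 32-bit BAR  */
        };
        int i, n = sizeof(bars) / sizeof(bars[0]);

        place_bars(bars, n);
        for (i = 0; i < n; i++)
            printf("BAR %d: %lluMB -> %s\n", i,
                   (unsigned long long)(bars[i].size >> 20),
                   bars[i].above_4g ? "above 4G" : "below 4G");
        return 0;
    }

[With the sample sizes above, the two largest 64-bit BARs end up above 4G
and the 16MB 32-bit BAR stays below, which matches the top-down, largest-first
ordering suggested in the mail.]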