Re: [Xen-devel] [PATCH v3 24/35] OvmfPkg/XenPlatformPei: Rework memory detection
On Tue, Jul 23, 2019 at 11:42:07AM +0200, Roger Pau Monné wrote:
> On Mon, Jul 22, 2019 at 03:53:19PM +0100, Anthony PERARD wrote:
> > On Mon, Jul 15, 2019 at 04:15:21PM +0200, Roger Pau Monné wrote:
> > > On Thu, Jul 04, 2019 at 03:42:22PM +0100, Anthony PERARD wrote:
> > > You could maybe initialize this as a global to avoid having to issue
> > > a hypercall each time you need to get something from the memory map.
> >
> > That function does that; it only makes the hypercall once. (The
> > hypercall can only be made once anyway, the second time Xen doesn't
> > return the map.)
>
> Why? I'm looking at the implementation in Xen of XENMEM_memory_map and
> I'm not sure I see how/why the hypercall can only be made once. AFAICT
> you should be able to call XENMEM_memory_map multiple times without
> issues, or else it's a bug somewhere.

:-(, I probably made a mistake when testing that. I tried again and
calling the hypercall several times gave the same result. Sorry for the
noise.

> > > > +}
> > > >
> > > >  UINT32
> > > >  GetSystemMemorySizeBelow4gb (
> > > > @@ -105,6 +146,19 @@ GetSystemMemorySizeBelow4gb (
> > > >    UINT8 Cmos0x34;
> > > >    UINT8 Cmos0x35;
> > > >
> > > > +  //
> > > > +  // In PVH case, there is no CMOS, we have to calculate the memory size
> > > > +  // from parsing the E820
> > > > +  //
> > > > +  if (XenPvhDetected ()) {
> > >
> > > IIRC on HVM you can also get the memory map from the hypercall, in
> > > which case you could use the same code path for both HVM and PVH.
> >
> > I think that wouldn't work because, in my experiment, the hypercall
> > would only return the map the first time (at least on PVH). hvmloader
> > already makes the hypercall, so OVMF can't.
>
> OK, I'm not sure the reason for this, as I said above I think this is
> a bug somewhere. You should be able to call XENMEM_memory_map multiple
> times.
>
> > On the other hand, XenGetE820Map() returns an E820 map; it doesn't
> > matter if it's the one passed by hvmloader, or the one we've got
> > directly from Xen. So I guess we could ignore what hvmloader has
> > written in the CMOS.
>
> Hm, I'm not sure hvmloader uploads a new memory map to Xen (using
> XENMEM_set_memory_map) if it does any modifications to it. It should
> certainly do it, so that the guest OS gets the same memory map from
> the hypercall or from the firmware.

hvmloader doesn't call XENMEM_set_memory_map (I can't find that string
in the source code). Also, I've tested again calling the
XENMEM_memory_map hypercall in HVM guests, and the e820 from hvmloader
is different from the one from the hypercall:

from hvmloader:
  Type Mem - 00000000 -> 000A0000
  Type Res - 000F0000 -> 00100000
  Type Mem - 00100000 -> 3F6B3000
  Type Res - FC000000 -> 100000000
from Xen:
  Type Mem - 00100000 -> 3F800000

> > > > +    switch (Entry->Type) {
> > > > +    case EfiAcpiAddressRangeMemory:
> > > > +      AddMemoryRangeHob (Base, End);
> > > > +      break;
> > > > +    case EfiAcpiAddressRangeACPI:
> > > > +      //
> > > > +      // Ignore, OVMF should read the ACPI tables and provide them to
> > > > +      // linux from a different location.
> > >
> > > Will OVMF also parse dynamic tables to check for references there?
> >
> > I haven't looked at what OVMF does with the ACPI tables, but Linux
> > seems fine. I've compared the boot output of Linux running as PVH vs
> > booted via OVMF. Besides the location of the tables being different,
> > the number of tables was the same; I don't remember any other
> > differences.
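
[For reference, a minimal sketch of a XenGetE820Map() that issues
XENMEM_memory_map only once and serves a cached copy afterwards. It
assumes OvmfPkg's XenHypercallLib and the imported Xen public headers;
the cache variables and MAX_E820_ENTRIES are illustrative, not the
exact code from the patch.]

  #include <IndustryStandard/E820.h>
  #include <IndustryStandard/Xen/memory.h>
  #include <Library/XenHypercallLib.h>

  #define MAX_E820_ENTRIES  128

  STATIC EFI_E820_ENTRY64  mE820Entries[MAX_E820_ENTRIES];
  STATIC UINT32            mE820EntriesCount = 0;

  EFI_STATUS
  XenGetE820Map (
    OUT EFI_E820_ENTRY64  **Entries,
    OUT UINT32            *Count
    )
  {
    INTN              ReturnCode;
    xen_memory_map_t  Parameters;

    if (mE820EntriesCount == 0) {
      //
      // First call: fetch the map from Xen. EFI_E820_ENTRY64 is laid
      // out like Xen's e820entry, so the buffer can be used directly.
      //
      Parameters.nr_entries = MAX_E820_ENTRIES;
      set_xen_guest_handle (Parameters.buffer, mE820Entries);

      ReturnCode = XenHypercallMemoryOp (XENMEM_memory_map, &Parameters);
      if (ReturnCode != 0) {
        return EFI_NOT_FOUND;
      }
      mE820EntriesCount = Parameters.nr_entries;
    }

    *Entries = mE820Entries;
    *Count   = mE820EntriesCount;
    return EFI_SUCCESS;
  }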
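
[Building on that, the PVH branch of GetSystemMemorySizeBelow4gb()
being discussed could plausibly walk the cached map as below; a sketch
under the same assumptions, the exact patch may differ.]

  UINT32
  XenGetSystemMemorySizeBelow4gb (
    VOID
    )
  {
    EFI_E820_ENTRY64  *E820Map;
    UINT32            E820EntriesCount;
    UINT64            Highest;
    UINT32            Index;
    EFI_STATUS        Status;

    Status = XenGetE820Map (&E820Map, &E820EntriesCount);
    ASSERT_EFI_ERROR (Status);

    //
    // Find the highest end address of plain RAM below 4GB. The MMIO
    // hole keeps RAM well under the 4GB boundary, so the result fits
    // in a UINT32, matching the CMOS-based code path.
    //
    Highest = 0;
    for (Index = 0; Index < E820EntriesCount; Index++) {
      UINT64 End = E820Map[Index].BaseAddr + E820Map[Index].Length;

      if (E820Map[Index].Type != EfiAcpiAddressRangeMemory ||
          E820Map[Index].BaseAddr >= SIZE_4GB) {
        continue;
      }
      if (End > SIZE_4GB) {
        End = SIZE_4GB;
      }
      if (End > Highest) {
        Highest = End;
      }
    }
    return (UINT32) Highest;
  }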
>
> OK, what I find weird is that you seem to discard quite a lot of stuff
> from the original memory map, and then reconstruct it afterwards I
> assume?
>
> It would seem safer to not discard regions from the memory map
> provided to OVMF, and instead just build on top of it. I would expect,
> for example, that OVMF will use some of the RAM regions on the memory
> map, and it should likely turn those areas from RAM into reserved
> regions.

OK, I'll add the EfiAcpiAddressRangeACPI regions back into the reserved
regions.

> > > > +    // error message: CpuDxe: IntersectMemoryDescriptor:
> > > > +    //  desc [FC000000, 100000000) type 1 cap 8700000000026001
> > > > +    //  conflicts with aperture [FEE00000, FEE01000) cap 1
> > > >      //
> > > > -    if (Entry->Type != EfiAcpiAddressRangeMemory) {
> > > > -      continue;
> > > > +    if (!XenHvmloaderDetected ()) {
> > > > +      AddReservedMemoryBaseSizeHob (Base, End - Base, FALSE);
> > >
> > > This special casing for PVH looks weird, ideally we would like to
> > > use the same code path, or else it should be explicitly mentioned
> > > why PVH has diverging behaviour.
> >
> > I think hvmloader is the issue rather than PVH. Here is part of the
> > "memory map" as found in hvmloader/config.h:
> >
> > /* Special BIOS mappings, etc. are allocated from here upwards... */
> > #define RESERVED_MEMBASE              0xFC000000
> > /* NB. ACPI_INFO_PHYSICAL_ADDRESS *MUST* match definition in acpi/dsdt.asl! */
> > #define ACPI_INFO_PHYSICAL_ADDRESS    0xFC000000
> > #define RESERVED_MEMORY_DYNAMIC_START 0xFC001000
> > #define RESERVED_MEMORY_DYNAMIC_END   0xFE000000
> >
> > and hvmloader simply creates a single e820 reserved entry, from
> > RESERVED_MEMBASE to the top of 4GB. It's probably too much.
>
> But isn't this kind of dangerous? How can you assure future versions
> of hvmloader won't use this space?
>
> > If hvmloader only reserved
> > ACPI_INFO_PHYSICAL_ADDRESS-RESERVED_MEMORY_DYNAMIC_END, I might not
> > have to special case hvmloader.
>
> Could we look into getting this fixed in hvmloader then?
>
> I think it's dangerous for OVMF to play such tricks with the memory
> map.

It's not worse than before :), there's a lot more to clean up :(. I've
left an hvmloader-specific call
    AddReservedMemoryBaseSizeHob (0xFC000000, 0x1000000, FALSE);
which is outside of that XenPublishRamRegions() function. See:
https://lists.xenproject.org/archives/html/xen-devel/2019-07/msg00360.html

I probably need to remove that now and see if I can simply use
hvmloader's e820.

--
Anthony PERARD
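
[For illustration, the E820-to-HOB walk the hunks above belong to could
look roughly as follows. This is a sketch, not the exact patch: the
helper names are the OvmfPkg/XenPlatformPei ones quoted in the thread,
and folding EfiAcpiAddressRangeACPI into the reserved default follows
the conclusion reached above.]

  VOID
  XenPublishRamRegionsSketch (
    VOID
    )
  {
    EFI_E820_ENTRY64  *E820Map;
    UINT32            E820EntriesCount;
    UINT32            Index;
    EFI_STATUS        Status;

    Status = XenGetE820Map (&E820Map, &E820EntriesCount);
    ASSERT_EFI_ERROR (Status);

    for (Index = 0; Index < E820EntriesCount; Index++) {
      EFI_PHYSICAL_ADDRESS  Base = E820Map[Index].BaseAddr;
      EFI_PHYSICAL_ADDRESS  End  = Base + E820Map[Index].Length;

      switch (E820Map[Index].Type) {
      case EfiAcpiAddressRangeMemory:
        //
        // Plain RAM is published as a memory HOB.
        //
        AddMemoryRangeHob (Base, End);
        break;
      case EfiAcpiAddressRangeACPI:
        //
        // Kept reserved rather than discarded, as agreed above.
        // Falls through to the reserved default.
        //
      default:
        //
        // hvmloader reserves FC000000 up to 4GB in one entry, which
        // would conflict with the LAPIC aperture published by CpuDxe,
        // hence the special case.
        //
        if (!XenHvmloaderDetected ()) {
          AddReservedMemoryBaseSizeHob (Base, End - Base, FALSE);
        }
        break;
      }
    }
  }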