Re: [PATCH] OvmfPkg/XenPlatformPei: Grab 64-bit PCI MMIO hole size from OVMF info table
On Mon, Jan 11, 2021 at 03:45:18AM +0000, Igor Druzhinin wrote:
> diff --git a/OvmfPkg/XenPlatformPei/MemDetect.c b/OvmfPkg/XenPlatformPei/MemDetect.c
> index 1f81eee..4175a2f 100644
> --- a/OvmfPkg/XenPlatformPei/MemDetect.c
> +++ b/OvmfPkg/XenPlatformPei/MemDetect.c
> @@ -227,6 +227,7 @@ GetFirstNonAddress (
>    UINT64 FirstNonAddress;
>    UINT64 Pci64Base, Pci64Size;
>    RETURN_STATUS PcdStatus;
> +  EFI_STATUS Status;
>
>    FirstNonAddress = BASE_4GB + GetSystemMemorySizeAbove4gb ();
>
> @@ -245,7 +246,10 @@ GetFirstNonAddress (
>    // Otherwise, in order to calculate the highest address plus one, we must
>    // consider the 64-bit PCI host aperture too. Fetch the default size.
>    //
> -  Pci64Size = PcdGet64 (PcdPciMmio64Size);
> +  Status = XenGetPciMmioInfo (NULL, NULL, &Pci64Base, &Pci64Size);

Pci64Base is overridden later (25 lines below) by the value derived from
FirstNonAddress; shouldn't this be avoided?

    Pci64Base = ALIGN_VALUE (FirstNonAddress, (UINT64)SIZE_1GB);

> diff --git a/OvmfPkg/XenPlatformPei/Xen.h b/OvmfPkg/XenPlatformPei/Xen.h
> index 2605481..c6e5fbb 100644
> --- a/OvmfPkg/XenPlatformPei/Xen.h
> +++ b/OvmfPkg/XenPlatformPei/Xen.h
> @@ -34,6 +34,16 @@ typedef struct {
>    EFI_PHYSICAL_ADDRESS E820;
>    UINT32 E820EntriesCount;
> } EFI_XEN_OVMF_INFO;
> +
> +// This extra table gives layout of PCI apertures in a Xen guest
> +#define OVMF_INFO_PCI_TABLE 0
> +
> +typedef struct {
> +  EFI_PHYSICAL_ADDRESS LowStart;
> +  EFI_PHYSICAL_ADDRESS LowEnd;
> +  EFI_PHYSICAL_ADDRESS HiStart;
> +  EFI_PHYSICAL_ADDRESS HiEnd;

In the hvmloader patch, these fields are uint64_t. It doesn't seem like a
good idea to use the type EFI_PHYSICAL_ADDRESS here; could you change them
to UINT64? (Even if, in the source code, EFI_PHYSICAL_ADDRESS always seems
to be UINT64.)

> +} EFI_XEN_OVMF_PCI_INFO;
> #pragma pack()
>
> #endif /* __XEN_H__ */

Thanks,

-- 
Anthony PERARD