Re: [Xen-devel] [edk2-devel] [PATCH v2 22/31] OvmfPkg/XenPlatformPei: Rework memory detection
On Fri, Apr 12, 2019 at 01:15:48PM +0200, Laszlo Ersek wrote:
> On 04/09/19 13:08, Anthony PERARD wrote:
> > Rework XenPublishRamRegions for PVH, handle the Reserve type and explain
> > about the ACPI type. MTRR settings aren't modified anymore, on HVM, it's
> > already done by hvmloader, on PVH it is supposed to have sane default.
> > ---
> > Notes:
> >     About MTRR, should we redo the setting in OVMF? Even if in both case of
> >     PVH and HVM, something would have setup the default type to write back
> >     and handle a few other ranges like PCI hole, hvmloader for HVM or and
> >     libxc I think for PVH.
>
> This patch is *exactly* the kind of change that I want to keep as far as
> possible away from code that runs (even if in part) on QEMU. Every time
> I need to touch OvmfPkg/PlatformPei/MemDetect.c, and in particular go
> near the MTRR setup or the physical memory layout (resource descriptor
> HOBs, CPU address width, etc), I start convulsing.

Sorry to have caused you fear. I wasn't suggesting making changes to code
that can run on anything other than Xen. That note was really about code
that only runs on Xen, because the patch removes one thing that OVMF does
on Xen: a MtrrSetMemoryAttribute() call.

Also, I'll need the Xen folks to answer that question.

> If, under "OvmfPkg/PlatformPei/", you could limit your changes to
> "Xen.c", I'd be OK with that. Otherwise, please don't go near that code.

Changes would be limited to XenPlatformPei; I wouldn't need to change the
code that runs on QEMU.

> Again, this is *the* kind of change why we have the platform split /
> duplication.

I'm glad it's useful to have a separate XenPlatformPei module.

Thanks,

-- 
Anthony PERARD
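
(For readers following the thread: below is a minimal sketch of the kind of
E820 walk a Xen-only platform PEIM could do when publishing RAM regions, to
illustrate the "handle the Reserve type" part being discussed. It is not the
actual patch. XenGetE820Map(), AddMemoryBaseSizeHob() and the EFI_E820_ENTRY64
layout are assumed to come from OvmfPkg headers, and the Reserved/ACPI
handling shown is purely illustrative.)

    #include <PiPei.h>
    #include <IndustryStandard/E820.h>
    #include <Library/DebugLib.h>
    #include <Library/HobLib.h>

    //
    // Assumed to be declared by the PEIM's own Platform.h (as in
    // OvmfPkg/PlatformPei); listed here only to keep the sketch readable.
    //
    EFI_STATUS XenGetE820Map (EFI_E820_ENTRY64 **Entries, UINT32 *Count);
    VOID AddMemoryBaseSizeHob (EFI_PHYSICAL_ADDRESS MemoryBase, UINT64 MemorySize);

    STATIC
    VOID
    PublishXenE820Map (
      VOID
      )
    {
      EFI_E820_ENTRY64  *E820Map;
      UINT32            E820EntriesCount;
      EFI_STATUS        Status;
      UINT32            Index;

      Status = XenGetE820Map (&E820Map, &E820EntriesCount);
      ASSERT_EFI_ERROR (Status);

      for (Index = 0; Index < E820EntriesCount; Index++) {
        EFI_E820_ENTRY64  *Entry = &E820Map[Index];

        switch (Entry->Type) {
        case EfiAcpiAddressRangeMemory:
          //
          // Usable RAM: publish it as system memory.
          //
          AddMemoryBaseSizeHob (Entry->BaseAddr, Entry->Length);
          break;

        case EfiAcpiAddressRangeReserved:
          //
          // Reserved ranges (e.g. Xen special pages) must not be handed out
          // as RAM; publish them as reserved so the DXE memory map keeps
          // them away from the allocator.
          //
          BuildResourceDescriptorHob (
            EFI_RESOURCE_MEMORY_RESERVED,
            EFI_RESOURCE_ATTRIBUTE_PRESENT | EFI_RESOURCE_ATTRIBUTE_INITIALIZED,
            Entry->BaseAddr,
            Entry->Length
            );
          break;

        case EfiAcpiAddressRangeACPI:
          //
          // ACPI tables: left alone in this sketch, on the assumption that
          // platform code deals with them separately later.
          //
          break;

        default:
          break;
        }
      }
    }

Note that nothing in this sketch touches the MTRRs; that is the point of the
question above, i.e. whether the guest can rely on hvmloader (HVM) or the
toolstack (PVH) having already set a sane default memory type.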