Re: [Xen-devel] [PATCH 2/2] efi/boot: Don't free ebmalloc area at all
On Tue, Feb 28, 2017 at 10:24:36AM -0700, Jan Beulich wrote:
> >>> On 28.02.17 at 18:09, <daniel.kiper@xxxxxxxxxx> wrote:
> > On Tue, Feb 28, 2017 at 09:14:24AM -0700, Jan Beulich wrote:
> >> >>> On 28.02.17 at 17:08, <andrew.cooper3@xxxxxxxxxx> wrote:
> >> > On 28/02/17 16:03, Jan Beulich wrote:
> >> >>>>> On 28.02.17 at 16:20, <andrew.cooper3@xxxxxxxxxx> wrote:
> >> >>> Freeing part of the BSS back for general use proves to be problematic. It
> >> >>> is not accounted for in xen_in_range(), causing errors when constructing
> >> >>> the IOMMU tables, resulting in a failure to boot.
> >> >>>
> >> >>> Other smaller issues are that tboot treats the entire BSS as hypervisor
> >> >>> data, creating and checking a MAC of it on S3, and that, by being 1MB in
> >> >>> size, freeing it guarantees to shatter the hypervisor superpage mappings.
> >> >>>
> >> >>> Judging by the content stored in it, 1MB is overkill on size. Drop it to a
> >> >>> more-reasonable 32kB and keep the entire buffer around after boot.
> >> >> Well, that's just because right now there's only a single user. The
> >> >> reason I refused Daniel making it smaller than its predecessor is
> >> >> that we can't really give a good estimate of how much data may
> >> >> need storing there: the memory map can have hundreds of entries,
> >> >> and command lines for modules may also be almost arbitrarily long.
> >> >>
> >> >> What I don't recall, Daniel: why was it that we can't use EFI boot
> >> >> services allocations here?
> >> >
> >> > From the original commit message:
> >> >
> >> >     1) We could use native EFI allocation functions (e.g. AllocatePool()
> >> >        or AllocatePages()) to get a memory chunk. However, later (somewhere
> >> >        in __start_xen()) we must copy its contents to a safe place, or
> >> >        reserve it in the e820 memory map and map it in the Xen virtual
> >> >        address space.
> >>
> >> Reading this again, I have to admit that I don't understand why any
> >> copying or reserving would need to be done. We'd need to do runtime
> >> allocations, sure, but I would have thought this goes without saying.
> >
> > We discussed this once, but I do not remember the thread. In general, Xen
> > EFI boot code should allocate memory as EfiLoaderData. However, later, in
> > efi_arch_process_memory_map(), we treat this memory type as free. This
> > means that it can be overwritten. That is why I mentioned copying or
> > reservation.
>
> There's nothing disallowing runtime data allocations, afaik.

Do you mean EfiRuntimeServicesData? Probably possible, but not nice IMO.
AIUI, it is not intended to be used that way.

> >> >        This means that the code referring to the Xen command line,
> >> >        loaded modules and the EFI memory map, mostly in __start_xen(),
> >> >        will be further complicated and will diverge from the legacy
> >> >        BIOS case. Additionally, both former things have to be placed
> >> >        below 4 GiB because their addresses are stored in the
> >> >        multiboot_info_t structure, which has 32-bit relevant members.
> >> >
> >> > One way or another, if we don't want to shatter superpages, we either
> >> > must keep the entire allocation, or copy the content out later into a
> >> > smaller location once other heap facilities are available.
> >> >
> >> > If we are copying data out, we might as well use EFI heap facilities
> >> > rather than rolling our own.
> >>
> >> Well, copying data later won't work, as there are pointers stored to
> >> the original allocation.
> >
> > This is not impossible. It just requires fiddling with the mbi struct.
>
> That would cover this one specific use only, and even then how
> do you know there weren't any derived pointers stored elsewhere?

Right, we should avoid it. It is too fragile an approach.

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel