Re: [PATCH v2 3/3] x86/PVH: Support relocatable dom0 kernels
On Thu, Mar 14, 2024 at 09:51:22AM -0400, Jason Andryuk wrote:
> On 2024-03-14 05:48, Roger Pau Monné wrote:
> > On Wed, Mar 13, 2024 at 03:30:21PM -0400, Jason Andryuk wrote:
> > > Xen tries to load a PVH dom0 kernel at the fixed guest physical address
> > > from the elf headers.  For Linux, this defaults to 0x1000000 (16MB), but
> > > it can be configured.
> > >
> > > Unfortunately there exist firmwares that have reserved regions at this
> > > address, so Xen fails to load the dom0 kernel since it's not RAM.
> > >
> > > The PVH entry code is not relocatable - it loads from absolute
> > > addresses, which fail when the kernel is loaded at a different address.
> > > With a suitably modified kernel, a relocatable entry point is possible.
> > >
> > > Add XEN_ELFNOTE_PVH_RELOCATION which specifies the minimum, maximum and
> > > alignment needed for the kernel.  The presence of the NOTE indicates the
> > > kernel supports a relocatable entry path.
> > >
> > > Change the loading to check for an acceptable load address.  If the
> > > kernel is relocatable, support finding an alternate load address.
> > >
> > > Link: https://gitlab.com/xen-project/xen/-/issues/180
> > > Signed-off-by: Jason Andryuk <jason.andryuk@xxxxxxx>
> > > ---
> > > ELF Note printing looks like:
> > > (XEN) ELF: note: PVH_RELOCATION = min: 0x1000000 max: 0xffffffff align: 0x200000
> > >
> > > v2:
> > > Use elfnote for min, max & align - use 64bit values.
> > > Print original and relocated memory addresses
> > > Use check_and_adjust_load_address() name
> > > Return relocated base instead of offset
> > > Use PAGE_ALIGN
> > > Don't load above max_phys (expected to be 4GB in kernel elf note)
> > > Use single line comments
> > > Exit check_load_address loop earlier
> > > Add __init to find_kernel_memory()
> > > ---
> > >  xen/arch/x86/hvm/dom0_build.c      | 108 +++++++++++++++++++++++++++++
> > >  xen/common/libelf/libelf-dominfo.c |  13 ++++
> > >  xen/include/public/elfnote.h       |  11 +++
> > >  xen/include/xen/libelf.h           |   3 +
> > >  4 files changed, 135 insertions(+)
> > >
> > > diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
> > > index 0ceda4140b..5c6c0d2db3 100644
> > > --- a/xen/arch/x86/hvm/dom0_build.c
> > > +++ b/xen/arch/x86/hvm/dom0_build.c
> > > @@ -537,6 +537,108 @@ static paddr_t __init find_memory(
> > >      return INVALID_PADDR;
> > >  }
> > >
> > > +static bool __init check_load_address(
> > > +    const struct domain *d, const struct elf_binary *elf)
> > > +{
> > > +    paddr_t kernel_start = (paddr_t)elf->dest_base & PAGE_MASK;
> >
> > Are you sure this is correct?  If a program header specifies a non-4K
> > aligned load address we should still try to honor it.  I think this is
> > very unlikely, but still we shouldn't apply non-requested alignments
> > to addresses coming from the ELF headers.
>
> I think it's correct in terms of checking the e820 table.  Since the memory
> map is limited to 4k granularity, the bounds need to be rounded accordingly.

That's for populating the p2m, but I don't see why the kernel load area
should be limited by this?

There's AFAICT no issue from a kernel requesting that its start load
address is not page aligned (granted that's very unlikely), but I don't
see why we would impose an unneeded restriction here.

The kernel load area doesn't affect how the p2m is populated, that's
mandated by the e820.
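[Editor's note: for readers following the discussion, the containment check being reviewed can be sketched as a standalone function. This is a simplified illustration, not Xen code: `struct e820entry`, `E820_RAM` and the flat-array interface stand in for `d->arch.e820`, and the page rounding debated above is left to the caller.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t paddr_t;

#define E820_RAM 1

struct e820entry {
    paddr_t addr;
    paddr_t size;
    uint32_t type;
};

/*
 * Return true if [kernel_start, kernel_end) lies entirely inside a
 * single RAM region of a sorted e820 map.  Mirrors the early-exit
 * structure discussed in the thread: once a region starts at or past
 * kernel_end, no later region can cover the kernel.
 */
static bool check_load_address(const struct e820entry *map, unsigned int nr,
                               paddr_t kernel_start, paddr_t kernel_end)
{
    unsigned int i;

    for ( i = 0; i < nr; i++ )
    {
        paddr_t start = map[i].addr;
        paddr_t end = map[i].addr + map[i].size;

        /* Map is sorted: nothing further along can help. */
        if ( start >= kernel_end )
            return false;

        if ( map[i].type == E820_RAM &&
             start <= kernel_start && end >= kernel_end )
            return true;
    }

    return false;
}
```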
> > > +    paddr_t kernel_end = PAGE_ALIGN((paddr_t)elf->dest_base +
> > > +                                    elf->dest_size);
> > > +    unsigned int i;
> > > +
> > > +    /*
> > > +     * The memory map is sorted and all RAM region starts and sizes are
> > > +     * aligned to page boundaries.
> >
> > Relying on sizes to be page aligned seems fragile: it might work now
> > because of the order in which pvh_setup_vmx_realmode_helpers() first
> > reserves memory for the TSS and afterwards for the identity page
> > tables, but it's not a property this code should assume.
>
> That can be removed.  It would just eliminate the early exit...
>
> > > +     */
> > > +    for ( i = 0; i < d->arch.nr_e820; i++ )
> > > +    {
> > > +        paddr_t start = d->arch.e820[i].addr;
> > > +        paddr_t end = d->arch.e820[i].addr + d->arch.e820[i].size;
> > > +
> > > +        if ( start >= kernel_end )
> > > +            return false;

... here.

I think the sorted aspect is fine, the aligned part is the one I'm
complaining about, so the check above can stay.

> > > +    const struct elf_dom_parms *parms)
> > > +{
> > > +    paddr_t kernel_start = (paddr_t)elf->dest_base & PAGE_MASK;
> > > +    paddr_t kernel_end = PAGE_ALIGN((paddr_t)elf->dest_base +
> > > +                                    elf->dest_size);
> > > +    paddr_t kernel_size = kernel_end - kernel_start;
> >
> > Hm, I'm again unsure about the alignments applied here.
>
> Same as above regarding 4k granularity.
>
> > I think if anything we want to assert that dest_base is aligned to
> > phys_align.
>
> That would indicate the kernel image is inconsistent.

Indeed.  I think doing that sanity check would be worthwhile.
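[Editor's note: the relocation side of the patch — finding an alternate load address when the ELF-specified one is unusable — can be sketched similarly. This is an illustrative standalone version under the same simplified e820 representation as above; `find_kernel_memory` here only approximates the patch's helper of the same name.]

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t paddr_t;

#define INVALID_PADDR (~(paddr_t)0)
#define E820_RAM 1

struct e820entry {
    paddr_t addr;
    paddr_t size;
    uint32_t type;
};

/* Round v up to the next multiple of the power-of-two alignment a. */
#define ROUNDUP(v, a) (((v) + (a) - 1) & ~((paddr_t)(a) - 1))
#define MAX(a, b) ((a) > (b) ? (a) : (b))
#define MIN(a, b) ((a) < (b) ? (a) : (b))

/*
 * Scan a sorted e820 map for the first RAM region able to hold `size`
 * bytes aligned to `align` within [phys_min, phys_max), the constraints
 * carried by the kernel's relocation note.
 */
static paddr_t find_kernel_memory(const struct e820entry *map,
                                  unsigned int nr, paddr_t size,
                                  paddr_t align, paddr_t phys_min,
                                  paddr_t phys_max)
{
    unsigned int i;

    for ( i = 0; i < nr; i++ )
    {
        paddr_t start, end;

        if ( map[i].type != E820_RAM )
            continue;

        /* Clamp the candidate range to the note's constraints. */
        start = ROUNDUP(MAX(map[i].addr, phys_min), align);
        end = MIN(map[i].addr + map[i].size, phys_max);

        if ( start < end && end - start >= size )
            return start;
    }

    return INVALID_PADDR;
}
```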
> > > diff --git a/xen/common/libelf/libelf-dominfo.c b/xen/common/libelf/libelf-dominfo.c
> > > index 7cc7b18a51..837a1b0f21 100644
> > > --- a/xen/common/libelf/libelf-dominfo.c
> > > +++ b/xen/common/libelf/libelf-dominfo.c
> > > @@ -125,6 +125,7 @@ elf_errorstatus elf_xen_parse_note(struct elf_binary *elf,
> > >          [XEN_ELFNOTE_SUSPEND_CANCEL] = { "SUSPEND_CANCEL", ELFNOTE_INT },
> > >          [XEN_ELFNOTE_MOD_START_PFN] = { "MOD_START_PFN", ELFNOTE_INT },
> > >          [XEN_ELFNOTE_PHYS32_ENTRY] = { "PHYS32_ENTRY", ELFNOTE_INT },
> > > +        [XEN_ELFNOTE_PVH_RELOCATION] = { "PVH_RELOCATION", ELFNOTE_OTHER },
> > >      };
> > >      /* *INDENT-ON* */
> > >
> > > @@ -234,6 +235,17 @@ elf_errorstatus elf_xen_parse_note(struct elf_binary *elf,
> > >                  elf_note_numeric_array(elf, note, 8, 0),
> > >                  elf_note_numeric_array(elf, note, 8, 1));
> > >          break;
> > > +
> > > +    case XEN_ELFNOTE_PVH_RELOCATION:
> > > +        if ( elf_uval(elf, note, descsz) != 3 * sizeof(uint64_t) )
> > > +            return -1;
> > > +
> > > +        parms->phys_min = elf_note_numeric_array(elf, note, 8, 0);
> > > +        parms->phys_max = elf_note_numeric_array(elf, note, 8, 1);
> > > +        parms->phys_align = elf_note_numeric_array(elf, note, 8, 2);
> >
> > Size for those needs to be 4 (32bits) as the entry point is in 32bit
> > mode?  I don't see how we can start past the 4GB boundary.
>
> I specified the note as 3x 64bit values.  It seemed simpler than trying to
> support both 32bit and 64bit depending on the kernel arch.  Also, just using
> 64bit provides room in case it is needed in the future.

Why do you say depending on the kernel arch?  PVH doesn't know the
bitness of the kernel, as the kernel entry point is always started in
protected 32bit mode.

We should just support 32bit values, regardless of the kernel bitness,
because that's the only range that's suitable in order to jump into the
entry point.  Note how XEN_ELFNOTE_PHYS32_ENTRY is also unconditionally
a 32bit integer.

> Do you want the note to be changed to 3x 32bit values?
Unless anyone objects, yes, that would be my preference.

> > > +        elf_msg(elf, "min: %#"PRIx64" max: %#"PRIx64" align: %#"PRIx64"\n",
> > > +                parms->phys_min, parms->phys_max, parms->phys_align);
> > > +        break;
> > >      }
> > >      return 0;
> > >  }
> > >
> > > @@ -545,6 +557,7 @@ elf_errorstatus elf_xen_parse(struct elf_binary *elf,
> > >      parms->p2m_base = UNSET_ADDR;
> > >      parms->elf_paddr_offset = UNSET_ADDR;
> > >      parms->phys_entry = UNSET_ADDR32;
> > > +    parms->phys_align = UNSET_ADDR;
> >
> > For correctness I would also init phys_{min,max}.
>
> There is a memset() out of context above to zero the structure.  I thought
> leaving them both 0 would be fine.

0 would be a valid value, hence it's best to use UNSET_ADDR to clearly
notice when a value has been provided by the parsed binary or not.

Thanks, Roger.
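[Editor's note: the two points settled at the end of the thread — a strict descriptor-size check, and UNSET_ADDR sentinels so a legitimate value of 0 is distinguishable from "note absent" — can be illustrated with a standalone sketch. The descriptor layout follows the v2 patch (3x 64-bit fields; the review leans toward 32-bit), and `note_u64`/`parse_pvh_relocation` are hypothetical stand-ins for the libelf accessors, assuming a little-endian host.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Sentinel meaning "not provided by the parsed binary", as in libelf. */
#define UNSET_ADDR (~0ULL)

struct dom_parms {
    uint64_t phys_min, phys_max, phys_align;
};

/* Hypothetical helper: i-th little-endian 64-bit field of a descriptor. */
static uint64_t note_u64(const uint8_t *desc, unsigned int i)
{
    uint64_t v;

    memcpy(&v, desc + i * sizeof(v), sizeof(v)); /* assumes an LE host */
    return v;
}

/*
 * Parse a PVH_RELOCATION-style descriptor of exactly three 64-bit
 * fields.  On rejection the parms keep their UNSET_ADDR sentinels, so
 * callers can tell "note absent or malformed" apart from a parsed 0.
 */
static int parse_pvh_relocation(const uint8_t *desc, size_t descsz,
                                struct dom_parms *parms)
{
    if ( descsz != 3 * sizeof(uint64_t) )
        return -1;

    parms->phys_min = note_u64(desc, 0);
    parms->phys_max = note_u64(desc, 1);
    parms->phys_align = note_u64(desc, 2);
    return 0;
}
```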