Re: [Xen-devel] [PATCH 22/22] vixen: dom0 builder support
On Sun, Jan 7, 2018 at 1:02 AM, Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> On Sat, Jan 06, 2018 at 02:54:37PM -0800, Anthony Liguori wrote:
>> From: Anthony Liguori <aliguori@xxxxxxxxxx>
>>
>> The dom0 builder requires a number of modifications in order to be
>> able to launch unprivileged guests.  The console and store pages
>> must be mapped in a specific location within the guest's initial
>> page table.
>>
>> We also have to set up the start info to be what's expected for
>> unprivileged guests and suppress the normal logic that gives dom0
>> increased permissions.
>>
>> We have to pass around the console and store pages, which involves
>> touching a number of places including the PVH builder.
>
> AFAICT you are missing a fix for the position of the m2p mapping in
> the hypervisor virtual memory hole for 32bit PV guests [0].
>
> Without this fix the 32bit DomU ABI is broken, which mandates the m2p
> to always be mapped at virt_hv_start_low, and some early Linux pvops
> kernels will fail to boot (IIRC from 2.6.32-2.6.36, because they don't
> have XENMEM_machphys_mapping implemented).
>
> [0] http://xenbits.xen.org/gitweb/?p=people/liuw/xen.git;a=commit;h=28b2108b362e8976676a96c90eee058605427b57

Thanks!  Will pick this up and do a bit of testing.

>> @@ -276,7 +277,9 @@ int __init dom0_construct_pv(struct domain *d,
>>                               unsigned long image_headroom,
>>                               module_t *initrd,
>>                               void *(*bootstrap_map)(const module_t *),
>> -                             char *cmdline)
>> +                             char *cmdline,
>> +                             xen_pfn_t store_mfn, uint32_t store_evtchn,
>> +                             xen_pfn_t console_mfn, uint32_t console_evtchn)
>>  {
>>      int i, cpu, rc, compatible, compat32, order, machine;
>>      struct cpu_user_regs *regs;
>> @@ -299,6 +302,7 @@ int __init dom0_construct_pv(struct domain *d,
>>      l3_pgentry_t *l3tab = NULL, *l3start = NULL;
>>      l2_pgentry_t *l2tab = NULL, *l2start = NULL;
>>      l1_pgentry_t *l1tab = NULL, *l1start = NULL;
>> +    xen_pfn_t saved_pfn = ~0UL;
>>
>>      /*
>>       * This fully describes the memory layout of the initial domain.  All
>> @@ -441,8 +445,24 @@ int __init dom0_construct_pv(struct domain *d,
>>      vphysmap_end = vphysmap_start;
>>      vstartinfo_start = round_pgup(vphysmap_end);
>>      vstartinfo_end   = (vstartinfo_start +
>> -                        sizeof(struct start_info) +
>> -                        sizeof(struct dom0_vga_console_info));
>> +                        sizeof(struct start_info));
>> +    if ( !is_vixen() )
>> +        vstartinfo_end += sizeof(struct dom0_vga_console_info);
>> +    vstartinfo_end = round_pgup(vstartinfo_end);
>> +
>> +    if ( is_vixen() ) {
>> +        struct page_info *pg;
>> +
>> +        saved_pfn = (vstartinfo_end - v_start) / PAGE_SIZE;
>> +
>> +        pg = mfn_to_page(store_mfn);
>> +        share_xen_page_with_guest(pg, d, XENSHARE_writable);
>> +        vstartinfo_end += PAGE_SIZE;
>> +
>> +        pg = mfn_to_page(console_mfn);
>> +        share_xen_page_with_guest(pg, d, XENSHARE_writable);
>> +        vstartinfo_end += PAGE_SIZE;
>> +    }
>>
>>      vpt_start        = round_pgup(vstartinfo_end);
>>      for ( nr_pt_pages = 2; ; nr_pt_pages++ )
>> @@ -634,7 +654,13 @@ int __init dom0_construct_pv(struct domain *d,
>>              *l2tab = l2e_from_paddr(__pa(l1start), L2_PROT);
>>              l2tab++;
>>          }
>> -        if ( count < initrd_pfn || count >= initrd_pfn + PFN_UP(initrd_len) )
>> +        if ( count == saved_pfn ) {
>> +            mfn = store_mfn;
>> +            pfn++;
>> +        } else if ( count == saved_pfn + 1 ) {
>> +            mfn = console_mfn;
>> +            pfn++;
>> +        } else if ( count < initrd_pfn || count >= initrd_pfn + PFN_UP(initrd_len) )
>>              mfn = pfn++;
>
> IMHO it's easier to do this fixup afterwards [1] instead of having to
> modify the Dom0 build process in different places (the Dom0 PV
> building code is already messy enough).
>
> [1] http://xenbits.xen.org/gitweb/?p=people/liuw/xen.git;a=commit;h=a38ce82113223e3c5119590c520cc30c8462e709

I'm indifferent on approach but agree that the code is a mess :-D

I'll look at that commit a bit more.
>>          else
>>              mfn = initrd_mfn++;
>> @@ -737,7 +763,8 @@ int __init dom0_construct_pv(struct domain *d,
>>
>>      si->shared_info = virt_to_maddr(d->shared_info);
>>
>> -    si->flags    = SIF_PRIVILEGED | SIF_INITDOMAIN;
>> +    si->flags    = is_vixen() ? 0 : (SIF_PRIVILEGED | SIF_INITDOMAIN);
>> +
>>      if ( !vinitrd_start && initrd_len )
>>          si->flags |= SIF_MOD_START_PFN;
>>      si->flags |= (xen_processor_pmbits << 8) & SIF_PM_MASK;
>> @@ -818,6 +845,32 @@ int __init dom0_construct_pv(struct domain *d,
>>          }
>>      }
>>
>> +    if ( is_vixen() )
>> +    {
>> +        dom0_update_physmap(d, saved_pfn, store_mfn, vphysmap_start);
>> +        dom0_update_physmap(d, saved_pfn + 1, console_mfn, vphysmap_start);
>> +
>> +        rc = evtchn_alloc_proxy(d, store_evtchn, ECS_INTERDOMAIN);
>> +        if ( rc )
>> +        {
>> +            printk("Vixen: failed to reserve Xenstore event channel %d => %d\n",
>> +                   store_evtchn, rc);
>> +            goto out;
>> +        }
>> +        rc = evtchn_alloc_proxy(d, console_evtchn, ECS_INTERDOMAIN);
>> +        if ( rc )
>> +        {
>> +            printk("Vixen: failed to reserve Console event channel %d => %d\n",
>> +                   console_evtchn, rc);
>> +            goto out;
>> +        }
>
> IMHO you could just panic here.  Nothing useful is going to happen
> after dom0_construct_pv failing and you avoid the goto.

Ack.

>> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
>> index 1b89844..c49eeea 100644
>> --- a/xen/arch/x86/setup.c
>> +++ b/xen/arch/x86/setup.c
>> @@ -663,6 +663,8 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>>          .stop_bits = 1
>>      };
>>      struct xen_arch_domainconfig config = { .emulation_flags = 0 };
>> +    xen_pfn_t store_mfn = 0, console_mfn = 0;
>> +    uint32_t store_evtchn = 0, console_evtchn = 0;
>>
>>      /* Critical region without IDT or TSS.  Any fault is deadly! */
>> @@ -1595,6 +1597,9 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>>          config.emulation_flags = XEN_X86_EMU_LAPIC|XEN_X86_EMU_IOAPIC;
>>      }
>>
>> +    if ( is_vixen() )
>> +        config.emulation_flags = XEN_X86_EMU_PIT;
>
> DomUs should not have an emulated PIT, that's only for Dom0 PV.

Unfortunately, they do need an emulated PIT.  Until PVH got merged, an
emulated PIT was always present for DomUs, and 2.6.21-era kernels use
the PIT for TSC calibration.  If a PIT isn't present, they hang during
early boot.

Regards,

Anthony Liguori

> Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel