Re: [PATCH 1/2] x86: drop unnecessary page table walking in compat r/o M2P handling
On 15.04.2020 11:59, Hongyan Xia wrote:
> On Wed, 2020-04-15 at 10:23 +0200, Jan Beulich wrote:
>> @@ -627,16 +612,10 @@ void __init paging_init(void)
>>  #undef MFN
>>  
>>      /* Create user-accessible L2 directory to map the MPT for compat guests. */
>> -    BUILD_BUG_ON(l4_table_offset(RDWR_MPT_VIRT_START) !=
>> -                 l4_table_offset(HIRO_COMPAT_MPT_VIRT_START));
>> -    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(
>> -                               HIRO_COMPAT_MPT_VIRT_START)]);
>>      if ( (l2_ro_mpt = alloc_xen_pagetable()) == NULL )
>>          goto nomem;
>>      compat_idle_pg_table_l2 = l2_ro_mpt;
>>      clear_page(l2_ro_mpt);
>> -    l3e_write(&l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)],
>> -              l3e_from_paddr(__pa(l2_ro_mpt), __PAGE_HYPERVISOR_RO));
>>      l2_ro_mpt += l2_table_offset(HIRO_COMPAT_MPT_VIRT_START);
>>      /* Allocate and map the compatibility mode machine-to-phys table. */
>>      mpt_size = (mpt_size >> 1) + (1UL << (L2_PAGETABLE_SHIFT - 1));
>
> The code around here, I am wondering if there is a reason to put it in
> this patch. If we bisect, we can end up in a commit which says the
> address range of compat is still there in config.h, but actually it's
> gone, so this probably belongs to the 2nd patch.

It could be done either way, I guess. Note though how patch 2's
description starts with "Now that we don't properly hook things up
into the page tables anymore". I.e. after this patch the address
range continues to exist solely for calculation purposes.

> Apart from that,
> Reviewed-by: Hongyan Xia <hongyxia@xxxxxxxxxx>

Thanks.

> I would like to drop relevant map/unmap patches and replace them with
> the new clean-up ones (and place them at the beginning of the series),
> if there is no objection with that.

Depending on turnaround, I'd much rather see this go in before you
re-post. But if you feel like making it part of your series, go
ahead.

Jan
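
[Editorial sketch, to illustrate the "calculation purposes" point
above: a minimal, self-contained C example of how a virtual-address
constant can feed a page-table index computation without any mapping
existing at that address. The shift/entry constants mirror x86-64
paging parameters; the address value is a placeholder, not the real
HIRO_COMPAT_MPT_VIRT_START.]

#include <stdint.h>
#include <stdio.h>

#define L2_PAGETABLE_SHIFT   21   /* each L2 entry covers 2MiB */
#define L2_PAGETABLE_ENTRIES 512

/* Same shape as Xen's l2_table_offset(): extract the L2 index bits. */
#define l2_table_offset(va) \
    (((va) >> L2_PAGETABLE_SHIFT) & (L2_PAGETABLE_ENTRIES - 1))

int main(void)
{
    /* Placeholder address standing in for HIRO_COMPAT_MPT_VIRT_START. */
    uint64_t va = 0xffff828800000000ULL;

    /*
     * Nothing is walked or mapped here; the constant is used purely to
     * derive an offset, which is all the remaining use
     * (l2_ro_mpt += l2_table_offset(...)) amounts to once the patch
     * drops the l3e_write() hookup.
     */
    printf("L2 index: %lu\n", (unsigned long)l2_table_offset(va));
    return 0;
}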