
Re: [Xen-devel] [PATCH] Fix performance problems with mprotect()

On the Xen patch:

Both added case blocks in do_mmu_update() have a nested switch
statement that seems redundant: the outer switch already handles
only PGT_l1_page_table pages, yet the inner switch checks this
same value again.

The same two case blocks access addresses obtained from
map_domain_page_with_cache() through __copy_from_user(). Is there
any particular reason not to use direct accesses here? (Note that
mod_l1_entry() has to use __copy_from_user(), as it may be called from
do_update_va_mapping().) Likewise, I would think there's no strict
need for update_intpte_sync() to use paging_cmpxchg_guest_entry(),
but here I would agree that it's easier to re-use the existing function
than to create and use a new one.

An additional concern regards the bit assignments of
MMU_FLAG_RANGE_UPDATE's val parameter (Keir, maybe you need to
comment on this one): the whole mmu_update interface, being
defined in public/xen.h, is supposed to be sufficiently architecture
neutral, which it no longer would be with the modification as it
currently stands. But maybe I'm mistaken and the interface's declaration is
just badly placed (would apply to the mmuext interface then, too)?

In the Linux patch, I'd just like to see the abstraction be less
Xen-specific, i.e. something like

(perhaps in include/asm-generic/pgtable.h)

#ifndef arch_change_pte_range
# define arch_change_pte_range(mm, pmd, addr, end, newprot) 0
#endif

and then in change_pmd_range():

                if (pmd_none_or_clear_bad(pmd))
                        continue;
                if (arch_change_pte_range(mm, pmd, addr, next, newprot))
                        continue;
                change_pte_range(mm, pmd, addr, next, newprot);
        } while (pmd++, addr = next, addr != end);

The BUG() (which really can be BUG_ON() here) would go into the actual
function then.

