[PATCH 1/2] x86/shadow: re-use variables in shadow_get_page_from_l1e()
There's little point in doing multiple mfn_to_page() or page_get_owner()
all on the same MFN. Calculate them once at the start of the function.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

--- a/xen/arch/x86/mm/shadow/set.c
+++ b/xen/arch/x86/mm/shadow/set.c
@@ -89,25 +89,27 @@ shadow_get_page_from_l1e(shadow_l1e_t sl
 {
     int res;
     mfn_t mfn;
-    struct domain *owner;
+    const struct page_info *pg = NULL;
+    struct domain *owner = NULL;
 
     ASSERT(!sh_l1e_is_magic(sl1e));
     ASSERT(shadow_mode_refcounts(d));
 
+    if ( mfn_valid(mfn = shadow_l1e_get_mfn(sl1e)) )
+    {
+        pg = mfn_to_page(mfn);
+        owner = page_get_owner(pg);
+    }
+
     /*
      * VMX'es APIC access MFN is just a surrogate page. It doesn't actually
      * get accessed, and hence there's no need to refcount it (and refcounting
      * would fail, due to the page having no owner).
      */
-    if ( mfn_valid(mfn = shadow_l1e_get_mfn(sl1e)) )
+    if ( pg && !owner && (pg->count_info & PGC_extra) )
     {
-        const struct page_info *pg = mfn_to_page(mfn);
-
-        if ( !page_get_owner(pg) && (pg->count_info & PGC_extra) )
-        {
-            ASSERT(type == p2m_mmio_direct);
-            return 0;
-        }
+        ASSERT(type == p2m_mmio_direct);
+        return 0;
     }
 
     res = get_page_from_l1e(sl1e, d, d);
@@ -118,9 +120,7 @@ shadow_get_page_from_l1e(shadow_l1e_t sl
      */
     if ( unlikely(res < 0) &&
          !shadow_mode_translate(d) &&
-         mfn_valid(mfn = shadow_l1e_get_mfn(sl1e)) &&
-         (owner = page_get_owner(mfn_to_page(mfn))) &&
-         (d != owner) )
+         owner && (d != owner) )
     {
         res = xsm_priv_mapping(XSM_TARGET, d, owner);
         if ( !res )
@@ -143,9 +143,8 @@ shadow_get_page_from_l1e(shadow_l1e_t sl
          * already have checked that we're supposed to have access, so
          * we can just grab a reference directly.
          */
-        mfn = shadow_l1e_get_mfn(sl1e);
-        if ( mfn_valid(mfn) )
-            res = get_page_from_l1e(sl1e, d, page_get_owner(mfn_to_page(mfn)));
+        if ( owner )
+            res = get_page_from_l1e(sl1e, d, owner);
     }
 
     if ( unlikely(res < 0) )