Re: [Xen-devel] [PATCH v2] x86/PoD: shorten certain operations on higher order ranges
On 28/09/15 15:30, Jan Beulich wrote:
> Now that p2m->get_entry() always returns a valid order, utilize this
> to accelerate some of the operations in PoD code. (There are two uses
> of p2m->get_entry() left which don't easily lend themselves to this
> optimization.)
>
> Also adjust a few types as needed and remove stale comments from
> p2m_pod_cache_add() (to avoid duplicating them yet another time).
>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> ---
> v2: Add a code comment in p2m_pod_decrease_reservation().
>
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -119,20 +119,23 @@ p2m_pod_cache_add(struct p2m_domain *p2m
>
>      unlock_page_alloc(p2m);
>
> -    /* Then add the first one to the appropriate populate-on-demand list */
> -    switch(order)
> +    /* Then add to the appropriate populate-on-demand list. */
> +    switch ( order )
>      {
> +    case PAGE_ORDER_1G:
> +        for ( i = 0; i < (1UL << PAGE_ORDER_1G); i += 1UL << PAGE_ORDER_2M )
> +            page_list_add_tail(page + i, &p2m->pod.super);
> +        break;
>      case PAGE_ORDER_2M:
> -        page_list_add_tail(page, &p2m->pod.super); /* lock: page_alloc */
> -        p2m->pod.count += 1 << order;
> +        page_list_add_tail(page, &p2m->pod.super);
>          break;
>      case PAGE_ORDER_4K:
> -        page_list_add_tail(page, &p2m->pod.single); /* lock: page_alloc */
> -        p2m->pod.count += 1;
> +        page_list_add_tail(page, &p2m->pod.single);
>          break;
>      default:
>          BUG();
>      }
> +    p2m->pod.count += 1 << order;
>
>      return 0;
>  }
> @@ -502,11 +505,10 @@ p2m_pod_decrease_reservation(struct doma
>                               unsigned int order)
>  {
>      int ret=0;
> -    int i;
> +    unsigned long i, n;
>      struct p2m_domain *p2m = p2m_get_hostp2m(d);
> -
> -    int steal_for_cache;
> -    int pod, nonpod, ram;
> +    bool_t steal_for_cache;
> +    long pod, nonpod, ram;
>
>      gfn_lock(p2m, gpfn, order);
>      pod_lock(p2m);
> @@ -525,21 +527,21 @@ recount:
>      /* Figure out if we need to steal some freed memory for our cache */
>      steal_for_cache = ( p2m->pod.entry_count > p2m->pod.count );
>
> -    /* FIXME: Add contiguous; query for PSE entries? */
> -    for ( i=0; i<(1<<order); i++)
> +    for ( i = 0; i < (1UL << order); i += n )
>      {
>          p2m_access_t a;
>          p2m_type_t t;
> +        unsigned int cur_order;
>
> -        (void)p2m->get_entry(p2m, gpfn + i, &t, &a, 0, NULL, NULL);
> -
> +        p2m->get_entry(p2m, gpfn + i, &t, &a, 0, &cur_order, NULL);
> +        n = 1UL << min(order, cur_order);
>          if ( t == p2m_populate_on_demand )
> -            pod++;
> +            pod += n;
>          else
>          {
> -            nonpod++;
> +            nonpod += n;
>              if ( p2m_is_ram(t) )
> -                ram++;
> +                ram += n;
>          }
>      }
>
> @@ -574,41 +576,53 @@ recount:
>       *   + There are PoD entries to handle, or
>       *   + There is ram left, and we want to steal it
>       */
> -    for ( i=0;
> -          i<(1<<order) && (pod>0 || (steal_for_cache && ram > 0));
> -          i++)
> +    for ( i = 0;
> +          i < (1UL << order) && (pod > 0 || (steal_for_cache && ram > 0));
> +          i += n )
>      {
>          mfn_t mfn;
>          p2m_type_t t;
>          p2m_access_t a;
> +        unsigned int cur_order;
>
> -        mfn = p2m->get_entry(p2m, gpfn + i, &t, &a, 0, NULL, NULL);
> +        mfn = p2m->get_entry(p2m, gpfn + i, &t, &a, 0, &cur_order, NULL);
> +        if ( order < cur_order )
> +            cur_order = order;
> +        n = 1UL << cur_order;
>          if ( t == p2m_populate_on_demand )
>          {
> -            p2m_set_entry(p2m, gpfn + i, _mfn(INVALID_MFN), 0, p2m_invalid,
> -                          p2m->default_access);
> -            p2m->pod.entry_count--;
> +            p2m_set_entry(p2m, gpfn + i, _mfn(INVALID_MFN), cur_order,
> +                          p2m_invalid, p2m->default_access);
> +            p2m->pod.entry_count -= n;
>              BUG_ON(p2m->pod.entry_count < 0);
> -            pod--;
> +            pod -= n;
>          }
>          else if ( steal_for_cache && p2m_is_ram(t) )
>          {
> +            /*
> +             * If we need less than 1 << cur_order, we may end up stealing
> +             * more memory here than we actually need. This will be rectified
> +             * below, however; and stealing too much and then freeing what we
> +             * need may allow us to free smaller pages from the cache, and
> +             * avoid breaking up superpages.
> +             */
>              struct page_info *page;
> +            unsigned int j;
>
>              ASSERT(mfn_valid(mfn));
>
>              page = mfn_to_page(mfn);
>
> -            p2m_set_entry(p2m, gpfn + i, _mfn(INVALID_MFN), 0, p2m_invalid,
> -                          p2m->default_access);
> -            set_gpfn_from_mfn(mfn_x(mfn), INVALID_M2P_ENTRY);
> -
> -            p2m_pod_cache_add(p2m, page, 0);
> +            p2m_set_entry(p2m, gpfn + i, _mfn(INVALID_MFN), cur_order,
> +                          p2m_invalid, p2m->default_access);
> +            for ( j = 0; j < n; ++j )
> +                set_gpfn_from_mfn(mfn_x(mfn), INVALID_M2P_ENTRY);
> +            p2m_pod_cache_add(p2m, page, cur_order);
>
>              steal_for_cache = ( p2m->pod.entry_count > p2m->pod.count );
>
> -            nonpod--;
> -            ram--;
> +            nonpod -= n;
> +            ram -= n;
>          }
>      }
>
> @@ -649,7 +656,8 @@ p2m_pod_zero_check_superpage(struct p2m_
>      p2m_type_t type, type0 = 0;
>      unsigned long * map = NULL;
>      int ret=0, reset = 0;
> -    int i, j;
> +    unsigned long i, n;
> +    unsigned int j;
>      int max_ref = 1;
>      struct domain *d = p2m->domain;
>
> @@ -668,10 +676,13 @@ p2m_pod_zero_check_superpage(struct p2m_
>
>      /* Look up the mfns, checking to make sure they're the same mfn
>       * and aligned, and mapping them. */
> -    for ( i=0; i<SUPERPAGE_PAGES; i++ )
> +    for ( i = 0; i < SUPERPAGE_PAGES; i += n )
>      {
>          p2m_access_t a;
> -        mfn = p2m->get_entry(p2m, gfn + i, &type, &a, 0, NULL, NULL);
> +        unsigned int cur_order;
> +
> +        mfn = p2m->get_entry(p2m, gfn + i, &type, &a, 0, &cur_order, NULL);
> +        n = 1UL << min(cur_order, SUPERPAGE_ORDER + 0U);

Again just thinking out loud a bit here -- I wonder how often it will be
the case that we manage to find a contiguous superpage that is not
actually set as a superpage in the p2m.
The only reason this loop is here in the first place is because when I
wrote it there was as yet no way to get the order of a p2m entry. I'm
now thinking it might be better to just read the entry and bail if the
order is less than SUPERPAGE_ORDER (see the sketch below).

But in any case, this code is correct and doesn't change the end-to-end
functionality AFAICT, so:

Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>
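A rough sketch of the "read the entry and bail" idea above, for illustration
only: it is not part of the quoted patch, the helper name is invented here,
and p2m->get_entry() is called with the same signature as in the patch
context.

    /*
     * Hypothetical helper: does the p2m already map this 2M-aligned gfn
     * range with a single superpage entry?  One lookup replaces the loop
     * over all SUPERPAGE_PAGES gfns.
     */
    static bool_t pod_gfn_is_superpage(struct p2m_domain *p2m,
                                       unsigned long gfn)
    {
        p2m_access_t a;
        p2m_type_t type;
        unsigned int cur_order;

        /* Look up the entry only for its order; mfn/type are not needed here. */
        p2m->get_entry(p2m, gfn, &type, &a, 0, &cur_order, NULL);

        return cur_order >= SUPERPAGE_ORDER;
    }

p2m_pod_zero_check_superpage() could then bail out early when this returns
false; in the remaining case a single get_entry() call already yields the mfn
and type for the whole 2M range, with no per-gfn walk.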