Re: [PATCH v7 5/7] xen: re-define assign_pages and introduce a new function assign_page
On 10.09.2021 04:52, Penny Zheng wrote:
> In order to deal with the trouble of count-to-order conversion when page
> number is not in a power-of-two, this commit re-define assign_pages for
> nr pages and assign_page for original page with a single order.
>
> Backporting confusion could be helped by altering the order of
> assign_pages parameters, such that the compiler would point out that
> adjustments at call sites are needed.

Thanks, this now takes care of my primary concern. However, I (now?)
have another (I thought I would have mentioned this before):

> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -2259,9 +2259,9 @@ void init_domheap_pages(paddr_t ps, paddr_t pe)
>
>
>  int assign_pages(
> -    struct domain *d,
>      struct page_info *pg,
> -    unsigned int order,
> +    unsigned long nr,

If this really is to be "unsigned long" (and not "unsigned int"),
then...

> @@ -2281,7 +2281,7 @@ int assign_pages(
>      {
>          unsigned int extra_pages = 0;
>
> -        for ( i = 0; i < (1ul << order); i++ )
> +        for ( i = 0; i < nr; i++ )

... you will want to (a) show that there is a need for this in the
remaining patches of this series, and (b) prove that, despite the
remaining patches potentially using this (albeit at boot/init time
only), there isn't any problem from the loops of 4 billion (or more)
iterations that would then result. On x86 at least I'm sure this would
be an issue. Otherwise I would request that in the subsequent patches
requests be suitably broken up, with process_pending_softirqs() invoked
in between. Which would get me back to my feeling that the original
assign_pages() is quite good enough, as your new code would need to
split requests now anyway (and avoiding the need for that splitting was
the primary argument in favor of the change here).

> @@ -2290,18 +2290,18 @@ int assign_pages(
>
>          ASSERT(!extra_pages ||
>                 ((memflags & MEMF_no_refcount) &&
> -                extra_pages == 1u << order));
> +                extra_pages == nr));
>      }
>  #endif
>
>      if ( pg[0].count_info & PGC_extra )
>      {
> -        d->extra_pages += 1u << order;
> +        d->extra_pages += nr;
>          memflags &= ~MEMF_no_refcount;
>      }
>      else if ( !(memflags & MEMF_no_refcount) )
>      {
> -        unsigned int tot_pages = domain_tot_pages(d) + (1 << order);
> +        unsigned int tot_pages = domain_tot_pages(d) + nr;
>
>          if ( unlikely(tot_pages > d->max_pages) )
>          {
> @@ -2313,10 +2313,10 @@ int assign_pages(
>      }
>
>      if ( !(memflags & MEMF_no_refcount) &&
> -         unlikely(domain_adjust_tot_pages(d, 1 << order) == (1 << order)) )
> +         unlikely(domain_adjust_tot_pages(d, nr) == nr) )
>          get_knownalive_domain(d);
>
> -    for ( i = 0; i < (1 << order); i++ )
> +    for ( i = 0; i < nr; i++ )
>      {
>          ASSERT(page_get_owner(&pg[i]) == NULL);
>          page_set_owner(&pg[i], d);
> @@ -2331,6 +2331,11 @@ int assign_pages(
>      return rc;
>  }
>
> +int assign_page(struct page_info *pg, unsigned int order, struct domain *d,
> +               unsigned int memflags)
> +{
> +    return assign_pages(pg, 1UL << order, d, memflags);
> +}
>
>  struct page_info *alloc_domheap_pages(
>      struct domain *d, unsigned int order, unsigned int memflags)
> @@ -2373,7 +2378,7 @@ struct page_info *alloc_domheap_pages(
>              pg[i].count_info = PGC_extra;
>          }
>      }
> -    if ( assign_pages(d, pg, order, memflags) )
> +    if ( assign_page(pg, order, d, memflags) )

Note that here, for example, order is necessarily no larger than
MAX_ORDER.

Jan
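
For illustration, the kind of request splitting Jan asks for -- bounded
batches with process_pending_softirqs() run between them -- might look
roughly like the sketch below. The helper name assign_pages_chunked and
the CHUNK_PAGES batch size are hypothetical, not part of the posted
series; the sketch assumes the reordered assign_pages(pg, nr, d,
memflags) signature that this patch introduces.

#include <xen/mm.h>
#include <xen/sched.h>
#include <xen/softirq.h>

/* Hypothetical batch size: 1M pages per call into assign_pages(). */
#define CHUNK_PAGES (1UL << 20)

/*
 * Illustrative sketch only. Assign a large, possibly non-power-of-two
 * page range in bounded batches, letting pending softirqs run between
 * batches so a loop of 4 billion (or more) iterations cannot
 * monopolize the CPU at boot/init time.
 */
static int assign_pages_chunked(struct page_info *pg, unsigned long nr,
                                struct domain *d, unsigned int memflags)
{
    while ( nr )
    {
        unsigned long todo = min(nr, CHUNK_PAGES);
        int rc = assign_pages(pg, todo, d, memflags);

        if ( rc )
            return rc;

        pg += todo;
        nr -= todo;

        /* Give pending softirqs a chance to run between batches. */
        process_pending_softirqs();
    }

    return 0;
}

With splitting of this shape in place, each individual call stays
within a bounded iteration count, which is the substance of Jan's
point that the original order-based assign_pages() would then suffice.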