RE: [PATCH v3 6/6] xen: retrieve reserved pages on populate_physmap
Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@xxxxxxxx>
> Sent: Wednesday, May 4, 2022 9:45 PM
> To: Penny Zheng <Penny.Zheng@xxxxxxx>
> Cc: Wei Chen <Wei.Chen@xxxxxxx>; Henry Wang <Henry.Wang@xxxxxxx>;
> Andrew Cooper <andrew.cooper3@xxxxxxxxxx>; George Dunlap
> <george.dunlap@xxxxxxxxxx>; Julien Grall <julien@xxxxxxx>; Stefano
> Stabellini <sstabellini@xxxxxxxxxx>; Wei Liu <wl@xxxxxxx>;
> xen-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [PATCH v3 6/6] xen: retrieve reserved pages on populate_physmap
>
> On 27.04.2022 11:27, Penny Zheng wrote:
> > --- a/xen/common/memory.c
> > +++ b/xen/common/memory.c
> > @@ -245,6 +245,29 @@ static void populate_physmap(struct memop_args *a)
> >
> >              mfn = _mfn(gpfn);
> >          }
> > +        else if ( is_domain_using_staticmem(d) )
> > +        {
> > +            /*
> > +             * No easy way to guarantee the retreived pages are contiguous,
>
> Nit: retrieved
>
> > +             * so forbid non-zero-order requests here.
> > +             */
> > +            if ( a->extent_order != 0 )
> > +            {
> > +                gdprintk(XENLOG_INFO,
> > +                         "Could not allocate non-zero-order pages for static %pd.\n.",
>
> Nit: "Could not" reads as if an attempt was made, so maybe better "Cannot"?
> I'd also pull "static" ahead of "non-zero-order" and, to help observers of
> the message associate it with a call site, actually log the order (i.e.
> "order-%u" instead of "non-zero-order").
>
> Also please omit full stops in log messages. They serve no purpose but
> consume space.
>
> Finally, here as well as below: Is "info" log level really appropriate?
> You're logging error conditions after all, so imo these want to be at least
> "warn" level. An alternative would be to omit logging of messages here
> altogether.
>
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -2769,12 +2769,50 @@ int __init acquire_domstatic_pages(struct domain *d, mfn_t smfn,
> >
> >      return 0;
> >  }
> > +
> > +/*
> > + * Acquire a page from reserved page list(resv_page_list), when populating
> > + * memory for static domain on runtime.
> > + */
> > +mfn_t acquire_reserved_page(struct domain *d, unsigned int memflags)
> > +{
> > +    struct page_info *page;
> > +    mfn_t smfn;
> > +
> > +    /* Acquire a page from reserved page list(resv_page_list). */
> > +    page = page_list_remove_head(&d->resv_page_list);
> > +    if ( unlikely(!page) )
> > +    {
> > +        printk(XENLOG_ERR
> > +               "%pd: failed to acquire a reserved page from resv_page_list.\n",
> > +               d);
>
> A gdprintk() in the caller is acceptable. Two log messages isn't imo, and a
> XENLOG_ERR message which a guest can trigger is a security concern (log
> spam) anyway.
>
> > +        return INVALID_MFN;
> > +    }
> > +
> > +    smfn = page_to_mfn(page);
> > +
> > +    if ( acquire_domstatic_pages(d, smfn, 1, memflags) )
> > +        return INVALID_MFN;
>
> Don't you want to add the page back to the reserved list in case of error?
>

Oh, thanks for pointing that out.
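
A minimal sketch (not taken from the v3 patch) of how the error path could
hand the page back to resv_page_list, with the XENLOG_ERR printk dropped as
suggested above; whether page_list_add_tail() is the right re-insertion
primitive here is an assumption on my part:

mfn_t acquire_reserved_page(struct domain *d, unsigned int memflags)
{
    struct page_info *page;
    mfn_t smfn;

    /* Take one page off the head of the reserved page list. */
    page = page_list_remove_head(&d->resv_page_list);
    if ( unlikely(!page) )
        return INVALID_MFN;

    smfn = page_to_mfn(page);

    if ( acquire_domstatic_pages(d, smfn, 1, memflags) )
    {
        /* Hand the page back so a later request can still find it. */
        page_list_add_tail(page, &d->resv_page_list);
        return INVALID_MFN;
    }

    return smfn;
}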
> > +    return smfn;
> > +}
> >  #else
> >  void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
> >                            bool need_scrub)
> >  {
> >      ASSERT_UNREACHABLE();
> >  }
> > +
> > +int __init acquire_domstatic_pages(struct domain *d, mfn_t smfn,
> > +                                   unsigned int nr_mfns, unsigned int memflags)
> > +{
> > +    ASSERT_UNREACHABLE();
> > +}
>
> I can't spot a caller of this one outside of suitable #ifdef. Also the
> __init here looks wrong and you look to have missed dropping it from the
> real function.
>
> > +mfn_t acquire_reserved_page(struct domain *d, unsigned int memflags)
> > +{
> > +    ASSERT_UNREACHABLE();
> > +}
> >  #endif
>
> For this one I'd again expect CSE to leave no callers, just like in the
> earlier patch. Or am I overlooking anything?
>

In acquire_reserved_page, I've used a few CONFIG_STATIC_MEMORY-only variables,
like d->resv_page_list, so I'd expect acquire_reserved_page to be guarded by
CONFIG_STATIC_MEMORY too and to provide the stub function here, to avoid a
compilation error when !CONFIG_STATIC_MEMORY.

> Jan
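
For reference, a sketch (again not from the v3 patch) of what the guarded stub
could look like if it is kept; the explicit return value is an addition here,
since without it a non-debug build (where, if I read lib.h right,
ASSERT_UNREACHABLE() expands to nothing) would trip over the missing return
statement:

#else /* !CONFIG_STATIC_MEMORY */

mfn_t acquire_reserved_page(struct domain *d, unsigned int memflags)
{
    ASSERT_UNREACHABLE();
    return INVALID_MFN;
}

#endif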