RE: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xxxxxxx>
> Sent: Tuesday, May 18, 2021 6:15 PM
> To: Penny Zheng <Penny.Zheng@xxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx;
> sstabellini@xxxxxxxxxx
> Cc: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>; Wei Chen
> <Wei.Chen@xxxxxxx>; nd <nd@xxxxxxx>
> Subject: Re: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
>
> Hi Penny,
>
> On 18/05/2021 06:21, Penny Zheng wrote:
> > alloc_staticmem_pages is designed to allocate nr_pfns contiguous
> > pages of static memory. It is the equivalent of alloc_heap_pages
> > for static memory.
> > This commit only covers allocating at a specified starting address.
> >
> > For each page, it shall check whether the page is reserved
> > (PGC_reserved) and free. It shall also do a set of necessary
> > initialisations, which are mostly the same as in alloc_heap_pages,
> > like following the same cache-coherency policy and turning the page
> > state into PGC_state_inuse, etc.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@xxxxxxx>
> > ---
> >  xen/common/page_alloc.c | 64 +++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 64 insertions(+)
> >
> > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> > index 58b53c6ac2..adf2889e76 100644
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -1068,6 +1068,70 @@ static struct page_info *alloc_heap_pages(
> >      return pg;
> >  }
> >
> > +/*
> > + * Allocate nr_pfns contiguous pages, starting at #start, of static memory.
> > + * It is the equivalent of alloc_heap_pages for static memory.
> > + */
> > +static struct page_info *alloc_staticmem_pages(unsigned long nr_pfns,
>
> This wants to be nr_mfns.
>
> > +                                               paddr_t start,
>
> I would prefer if this helper takes an mfn_t in parameter.
>

Sure, I will change both.
> > +                                               unsigned int memflags)
> > +{
> > +    bool need_tlbflush = false;
> > +    uint32_t tlbflush_timestamp = 0;
> > +    unsigned int i;
> > +    struct page_info *pg;
> > +    mfn_t s_mfn;
> > +
> > +    /* For now, it only supports allocating at a specified address. */
> > +    s_mfn = maddr_to_mfn(start);
> > +    pg = mfn_to_page(s_mfn);
>
> We should avoid making the assumption that the start address will be
> valid. So you want to call mfn_valid() first.
>
> At the same time, there is no guarantee that if the first page is valid,
> then the next nr_pfns will be. So the check should be performed for all
> of them.
>

Ok. I'll do the validation check on both of them.

> > +    if ( !pg )
> > +        return NULL;
> > +
> > +    for ( i = 0; i < nr_pfns; i++ )
> > +    {
> > +        /*
> > +         * Reference count must continuously be zero for free pages
> > +         * of static memory (PGC_reserved).
> > +         */
> > +        ASSERT(pg[i].count_info & PGC_reserved);
> > +        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
> > +        {
> > +            printk(XENLOG_ERR
> > +                   "Reference count must continuously be zero for free pages "
> > +                   "pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
> > +                   i, mfn_x(page_to_mfn(pg + i)),
> > +                   pg[i].count_info, pg[i].tlbflush_timestamp);
> > +            BUG();
>
> So we would crash Xen if the caller passes a wrong range. Is that what
> we want?
>
> Also, who is going to prevent concurrent access?
>

Sure. To fix the concurrency issue, I may need to add a spinlock, like
`static DEFINE_SPINLOCK(staticmem_lock);`.

The current alloc_heap_pages does a similar check: pages in the free
state MUST have a zero reference count. I guess, if the condition is not
met, there is no need to proceed.

> > +        }
> > +
> > +        if ( !(memflags & MEMF_no_tlbflush) )
> > +            accumulate_tlbflush(&need_tlbflush, &pg[i],
> > +                                &tlbflush_timestamp);
> > +
> > +        /*
> > +         * Reserve flag PGC_reserved and change page state
> > +         * to PGC_state_inuse.
> > +         */
> > +        pg[i].count_info = (pg[i].count_info & PGC_reserved) | PGC_state_inuse;
> > +        /* Initialise fields which have other uses for free pages. */
> > +        pg[i].u.inuse.type_info = 0;
> > +        page_set_owner(&pg[i], NULL);
> > +
> > +        /*
> > +         * Ensure cache and RAM are consistent for platforms where the
> > +         * guest can control its own visibility of/through the cache.
> > +         */
> > +        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
> > +                          !(memflags & MEMF_no_icache_flush));
> > +    }
> > +
> > +    if ( need_tlbflush )
> > +        filtered_flush_tlb_mask(tlbflush_timestamp);
> > +
> > +    return pg;
> > +}
> > +
> >  /* Remove any offlined page in the buddy pointed to by head. */
> >  static int reserve_offlined_page(struct page_info *head)
> >  {
>
> Cheers,
>
> --
> Julien Grall

Cheers,
Penny Zheng