
RE: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages


  • To: Julien Grall <julien@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>
  • From: Penny Zheng <Penny.Zheng@xxxxxxx>
  • Date: Mon, 24 May 2021 10:10:00 +0000
  • Cc: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>, Wei Chen <Wei.Chen@xxxxxxx>, nd <nd@xxxxxxx>
  • Delivery-date: Mon, 24 May 2021 10:10:29 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages

Hi Julien,

> -----Original Message-----
> From: Penny Zheng
> Sent: Wednesday, May 19, 2021 1:24 PM
> To: Julien Grall <julien@xxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx;
> sstabellini@xxxxxxxxxx
> Cc: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>; Wei Chen
> <Wei.Chen@xxxxxxx>; nd <nd@xxxxxxx>
> Subject: RE: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
> 
> Hi Julien
> 
> > -----Original Message-----
> > From: Julien Grall <julien@xxxxxxx>
> > Sent: Tuesday, May 18, 2021 6:15 PM
> > To: Penny Zheng <Penny.Zheng@xxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx;
> > sstabellini@xxxxxxxxxx
> > Cc: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>; Wei Chen
> > <Wei.Chen@xxxxxxx>; nd <nd@xxxxxxx>
> > Subject: Re: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
> >
> > Hi Penny,
> >
> > On 18/05/2021 06:21, Penny Zheng wrote:
> > > alloc_staticmem_pages is designated to allocate nr_pfns contiguous
> > > pages of static memory. And it is the equivalent of alloc_heap_pages
> > > for static memory.
> > > This commit only covers allocating at specified starting address.
> > >
> > > For each page, it shall check if the page is reserved
> > > (PGC_reserved) and free. It shall also do a set of necessary
> > > initialization, which are mostly the same ones in alloc_heap_pages,
> > > like, following the same cache-coherency policy and turning page
> > > status into PGC_state_used, etc.
> > >
> > > Signed-off-by: Penny Zheng <penny.zheng@xxxxxxx>
> > > ---
> > >   xen/common/page_alloc.c | 64 +++++++++++++++++++++++++++++++++++++++++
> > >   1 file changed, 64 insertions(+)
> > >
> > > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> > > index 58b53c6ac2..adf2889e76 100644
> > > --- a/xen/common/page_alloc.c
> > > +++ b/xen/common/page_alloc.c
> > > @@ -1068,6 +1068,70 @@ static struct page_info *alloc_heap_pages(
> > >       return pg;
> > >   }
> > >
> > > +/*
> > > + * Allocate nr_pfns contiguous pages, starting at #start, of static memory.
> > > + * It is the equivalent of alloc_heap_pages for static memory  */
> > > +static struct page_info *alloc_staticmem_pages(unsigned long nr_pfns,
> >
> > This wants to be nr_mfns.
> >
> > > +                                                paddr_t start,
> >
> > I would prefer if this helper takes an mfn_t in parameter.
> >
> 
> Sure, I will change both.
> 
> > > +                                                unsigned int memflags)
> > > +{
> > > +    bool need_tlbflush = false;
> > > +    uint32_t tlbflush_timestamp = 0;
> > > +    unsigned int i;
> > > +    struct page_info *pg;
> > > +    mfn_t s_mfn;
> > > +
> > > +    /* For now, it only supports allocating at specified address. */
> > > +    s_mfn = maddr_to_mfn(start);
> > > +    pg = mfn_to_page(s_mfn);
> >
> > We should avoid making the assumption that the start address will be valid.
> > So you want to call mfn_valid() first.
> >
> > At the same time, there is no guarantee that if the first page is
> > valid, then the next nr_pfns will be. So the check should be performed
> > for all of them.
> >
> 
> Ok. I'll do validation check on both of them.
> 
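To make that concrete for v2, the range check could look roughly like the
sketch below (assuming the parameters get renamed to nr_mfns/smfn as you
suggested; this is only an illustration, not final code):

    /* Sketch only: validate every MFN in the range before touching page_info. */
    for ( i = 0; i < nr_mfns; i++ )
        if ( !mfn_valid(mfn_add(smfn, i)) )
            return NULL;

    pg = mfn_to_page(smfn);
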
> > > +    if ( !pg )
> > > +        return NULL;
> > > +
> > > +    for ( i = 0; i < nr_pfns; i++)
> > > +    {
> > > +        /*
> > > +         * Reference count must continuously be zero for free pages
> > > +         * of static memory(PGC_reserved).
> > > +         */
> > > +        ASSERT(pg[i].count_info & PGC_reserved);
> > > +        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
> > > +        {
> > > +            printk(XENLOG_ERR
> > > +                    "Reference count must continuously be zero for free pages"
> > > +                    "pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
> > > +                    i, mfn_x(page_to_mfn(pg + i)),
> > > +                    pg[i].count_info, pg[i].tlbflush_timestamp);
> > > +            BUG();
> >
> > So we would crash Xen if the caller passes a wrong range. Is that what we want?
> >
> > Also, who is going to prevent concurrent access?
> >
> 
> Sure, to fix the concurrency issue, I may need to add a spinlock such as
> `static DEFINE_SPINLOCK(staticmem_lock);`.
> 
> The current alloc_heap_pages does a similar check: pages in the free state
> MUST have a zero reference count. I guess that if the condition is not met,
> there is no need to proceed.
> 
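If we agree that crashing Xen is too harsh here, the check could simply fail
the allocation instead. A rough sketch of what I have in mind for v2 (not
final code):

        /* Sketch: report the inconsistency and fail instead of BUG(). */
        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
        {
            printk(XENLOG_ERR
                   "pg[%u] MFN %"PRI_mfn" unexpectedly not free: c=%#lx\n",
                   i, mfn_x(page_to_mfn(pg + i)), pg[i].count_info);
            return NULL;
        }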

Another thought on the concurrency problem while putting together patch v2:
do we need to consider concurrency here at all? heap_lock guards concurrent
allocation from the shared heap, but static memory is always reserved for a
single, specific domain.
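
If we do decide a lock is needed, the simplest option I can see is a dedicated
lock around the per-page loop, roughly like the sketch below (staticmem_lock
is just a name I am using for this discussion, nothing existing):

    static DEFINE_SPINLOCK(staticmem_lock);

    ...
    spin_lock(&staticmem_lock);
    for ( i = 0; i < nr_mfns; i++ )
    {
        /* per-page state checks and initialisation, as in the patch */
    }
    spin_unlock(&staticmem_lock);

If static memory is only ever allocated while its one domain is being
constructed, this may well be unnecessary, which is why I am asking.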

> > > +        }
> > > +
> > > +        if ( !(memflags & MEMF_no_tlbflush) )
> > > +            accumulate_tlbflush(&need_tlbflush, &pg[i],
> > > +                                &tlbflush_timestamp);
> > > +
> > > +        /*
> > > +         * Reserve flag PGC_reserved and change page state
> > > +         * to PGC_state_inuse.
> > > +         */
> > > +        pg[i].count_info = (pg[i].count_info & PGC_reserved) | PGC_state_inuse;
> > > +        /* Initialise fields which have other uses for free pages. */
> > > +        pg[i].u.inuse.type_info = 0;
> > > +        page_set_owner(&pg[i], NULL);
> > > +
> > > +        /*
> > > +         * Ensure cache and RAM are consistent for platforms where the
> > > +         * guest can control its own visibility of/through the cache.
> > > +         */
> > > +        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
> > > +                            !(memflags & MEMF_no_icache_flush));
> > > +    }
> > > +
> > > +    if ( need_tlbflush )
> > > +        filtered_flush_tlb_mask(tlbflush_timestamp);
> > > +
> > > +    return pg;
> > > +}
> > > +
> > >   /* Remove any offlined page in the buddy pointed to by head. */
> > >   static int reserve_offlined_page(struct page_info *head)
> > >   {
> > >
> >
> > Cheers,
> >
> > --
> > Julien Grall
> 
> Cheers,
> 
> Penny Zheng

Cheers

Penny

 

