
RE: [PATCH 6/9] xen/arm: introduce alloc_staticmem_pages and alloc_domstatic_pages


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Penny Zheng <Penny.Zheng@xxxxxxx>
  • Date: Thu, 8 Jul 2021 11:07:18 +0000
  • Accept-language: en-US
  • Cc: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Wei Chen <Wei.Chen@xxxxxxx>
  • Delivery-date: Thu, 08 Jul 2021 11:07:42 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH 6/9] xen/arm: introduce alloc_staticmem_pages and alloc_domstatic_pages

Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@xxxxxxxx>
> Sent: Thursday, July 8, 2021 6:06 PM
> To: Penny Zheng <Penny.Zheng@xxxxxxx>
> Cc: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>; xen-
> devel@xxxxxxxxxxxxxxxxxxxx; sstabellini@xxxxxxxxxx; Julien Grall
> <julien@xxxxxxx>; Wei Chen <Wei.Chen@xxxxxxx>
> Subject: Re: [PATCH 6/9] xen/arm: introduce alloc_staticmem_pages and
> alloc_domstatic_pages
> 
> On 08.07.2021 11:09, Penny Zheng wrote:
> > Hi Jan
> >
> >> -----Original Message-----
> >> From: Jan Beulich <jbeulich@xxxxxxxx>
> >> Sent: Tuesday, July 6, 2021 2:54 PM
> >> To: Penny Zheng <Penny.Zheng@xxxxxxx>
> >> Cc: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>; Wei Chen
> >> <Wei.Chen@xxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx;
> >> sstabellini@xxxxxxxxxx; Julien Grall <julien@xxxxxxx>
> >> Subject: Re: [PATCH 6/9] xen/arm: introduce alloc_staticmem_pages and
> >> alloc_domstatic_pages
> >>
> >> On 06.07.2021 07:58, Penny Zheng wrote:
> >>>> From: Jan Beulich <jbeulich@xxxxxxxx>
> >>>> Sent: Thursday, June 10, 2021 6:23 PM
> >>>>
> >>>> On 07.06.2021 04:43, Penny Zheng wrote:
> >>>>> --- a/xen/common/page_alloc.c
> >>>>> +++ b/xen/common/page_alloc.c
> >>>>> @@ -1065,6 +1065,75 @@ static struct page_info *alloc_heap_pages(
> >>>>>      return pg;
> >>>>>  }
> >>>>>
> >>>>> +#ifdef CONFIG_STATIC_ALLOCATION
> >>>>> +/*
> >>>>> + * Allocate nr_mfns contiguous pages, starting at #smfn, of static memory.
> >>>>> + * It is the equivalent of alloc_heap_pages for static memory.
> >>>>> + */
> >>>>> +static struct page_info *alloc_staticmem_pages(unsigned long nr_mfns,
> >>>>> +                                               mfn_t smfn,
> >>>>> +                                               unsigned int memflags)
> >>>>> +{
> >>>>> +    bool need_tlbflush = false;
> >>>>> +    uint32_t tlbflush_timestamp = 0;
> >>>>> +    unsigned long i;
> >>>>> +    struct page_info *pg;
> >>>>> +
> >>>>> +    /* For now, it only supports allocating at specified address. */
> >>>>> +    if ( !mfn_valid(smfn) || !nr_mfns )
> >>>>> +    {
> >>>>> +        printk(XENLOG_ERR
> >>>>> +               "Invalid %lu static memory starting at %"PRI_mfn"\n",
> >>>>
> >>>> Reading a log containing e.g. "Invalid 0 static memory starting at
> >>>> ..." I don't think I would recognize that the "0" is the count of pages.
> >>>
> >>> Sure. How about "try to allocate out of range page %"PRI_mfn"\n"?
> >>
> >> This still doesn't convey _both_ parts of the if() that would cause
> >> the log message to be issued.
> >>
> >
> > Sorry. How about
> > "
> >         printk(XENLOG_ERR
> >                "Either out-of-range static memory starting at %"PRI_mfn" "
> >                "or invalid number of pages: %lu\n",
> >                mfn_x(smfn), nr_mfns);
> > "
> 
> I'm sorry - while now you convey both aspects, the message has become too
> verbose. What's wrong with "Invalid static memory request: ... pages at ..."?
> But I wonder anyway if a log message is appropriate here in the first place.
> 

Sorry for my poor English writing. :/
I just have a habit (maybe not a good one) of printing error messages. If you
think the log itself is verbose here, I'll remove it. ;)

> >>>>> @@ -2434,6 +2512,57 @@ struct page_info *alloc_domheap_pages(
> >>>>>      return pg;
> >>>>>  }
> >>>>>
> >>>>> +#ifdef CONFIG_STATIC_ALLOCATION
> >>>>> +/*
> >>>>> + * Allocate nr_mfns contiguous pages, starting at #smfn, of static memory,
> >>>>> + * then assign them to one specific domain #d.
> >>>>> + * It is the equivalent of alloc_domheap_pages for static memory.
> >>>>> + */
> >>>>> +struct page_info *alloc_domstatic_pages(
> >>>>> +        struct domain *d, unsigned long nr_mfns, mfn_t smfn,
> >>>>> +        unsigned int memflags)
> >>>>> +{
> >>>>> +    struct page_info *pg = NULL;
> >>>>> +    unsigned long dma_size;
> >>>>> +
> >>>>> +    ASSERT(!in_irq());
> >>>>> +
> >>>>> +    if ( !dma_bitsize )
> >>>>> +        memflags &= ~MEMF_no_dma;
> >>>>> +    else
> >>>>> +    {
> >>>>> +        if ( (dma_bitsize - PAGE_SHIFT) > 0 )
> >>>>> +        {
> >>>>> +            dma_size = 1ul << (dma_bitsize - PAGE_SHIFT);
> >>>>> +            /* Starting address shall meet the DMA limitation. */
> >>>>> +            if ( mfn_x(smfn) < dma_size )
> >>>>> +                return NULL;
> >>>>
> >>>> I think I did ask this on v1 already: Why the first page? Static
> >>>> memory regions, unlike buddy allocator zones, can cross power-of-2
> >>>> address boundaries. Hence it ought to be the last page that gets
> >>>> checked for fitting address width restriction requirements.
> >>>>
> >>>> And then - is this necessary at all? Shouldn't "pre-defined by
> >>>> configuration using physical address ranges" imply the memory
> >>>> designated for a guest fits its DMA needs?
> >>>>
> >>>
> >>> Hmmm, in my understanding, here is the DMA restriction when using the
> >>> buddy allocator:
> >>>     else if ( (dma_zone = bits_to_zone(dma_bitsize)) < zone_hi )
> >>>         pg = alloc_heap_pages(dma_zone + 1, zone_hi, order, memflags, d);
> >>> dma_zone restricts the starting buddy allocator zone, so I am thinking
> >>> that here it shall restrict the first page.
> >>>
> >>> imo, if we let the user define it, the DMA restriction could also be
> >>> missed?
> >>
> >> Did you read my earlier reply? Again: The difference is that ordinary
> >> allocations (buddies) can't cross zone boundaries. Hence it is
> >> irrelevant if you check DMA properties on the first or last page -
> >> both will have the same number of significant bits. The same is -
> >> afaict - not true for static allocation ranges.
> >>
> >
> > True.
> >
> > Ordinary allocations (buddies) can't cross zone boundaries. So I
> > understand that, following the logic in "alloc_heap_pages(dma_zone + 1,
> > zone_hi, order, memflags, d);", pages at the smallest address shall be
> > allocated from "dma_zone + 1". Like you said, it is irrelevant whether
> > you check DMA properties on the first or last page, but imo, no matter
> > first or last page, both shall be larger than 2^(dma_zone + 1).
> >
> > Taking 32 as dma_bitsize, the memory allocated under this DMA
> > restriction by "alloc_heap_pages(dma_zone + 1, zone_hi, order, memflags,
> > d);" shall be above 4G.
> 
> DMA restrictions are always "needs to be no more than N bits".
> 
> > That's why I kept comparing the first page of the static allocation:
> > I was following that "more than" logic here.
> >
> > But you're right. I did a little investigation into the ARM DMA
> > limitation. Still taking dma_bitsize=32 as an example, we want the
> > actually allocated memory to be below 4G, not above.
> > So I think the logic behind the code line "alloc_heap_pages(dma_zone
> > + 1, zone_hi, order, memflags, d);" is not right for ARM, and it shall
> > be changed to "alloc_heap_pages(zone_lo, dma_zone + 1, order, memflags,
> > d);" as a correction.
> 
> But this step is to _avoid_ the DMA-reserved part of the heap.

Thanks for the explanation!!!

I totally misunderstood the usage of dma_bitsize. It turns out to mark the
DMA-reserved part of the heap. Putting the DMA limitation on zone_lo is
absolutely right on ARM as well.

> The caller requests address-restricted memory by passing suitable memflags.
> If the request doesn't require access to the DMA-reserved part of the heap
> (dma_zone < zone_hi) we first try to get memory from there. Only if that
> fails will we fall back and try taking memory from the lower region. IOW the
> problem with your code is more fundamental: You use dma_bitsize when really
> you ought to extract the caller's requested restriction (if any) from
> memflags. I would assume that dma_bitsize is orthogonal to static memory,
> i.e. you don't need to try to preserve low memory.
> 

True, there is no need to preserve low memory for static memory.

> Jan

Penny Zheng


 

