
Re: [PATCH 6/9] xen/arm: introduce alloc_staticmem_pages and alloc_domstatic_pages


  • To: Penny Zheng <Penny.Zheng@xxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 6 Jul 2021 08:53:49 +0200
  • Cc: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>, Wei Chen <Wei.Chen@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>
  • Delivery-date: Tue, 06 Jul 2021 06:54:00 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 06.07.2021 07:58, Penny Zheng wrote:
>> From: Jan Beulich <jbeulich@xxxxxxxx>
>> Sent: Thursday, June 10, 2021 6:23 PM
>>
>> On 07.06.2021 04:43, Penny Zheng wrote:
>>> --- a/xen/common/page_alloc.c
>>> +++ b/xen/common/page_alloc.c
>>> @@ -1065,6 +1065,75 @@ static struct page_info *alloc_heap_pages(
>>>      return pg;
>>>  }
>>>
>>> +#ifdef CONFIG_STATIC_ALLOCATION
>>> +/*
>>> + * Allocate nr_mfns contiguous pages, starting at #smfn, of static memory.
>>> + * It is the equivalent of alloc_heap_pages for static memory
>>> + */
>>> +static struct page_info *alloc_staticmem_pages(unsigned long nr_mfns,
>>> +                                               mfn_t smfn,
>>> +                                               unsigned int memflags)
>>> +{
>>> +    bool need_tlbflush = false;
>>> +    uint32_t tlbflush_timestamp = 0;
>>> +    unsigned long i;
>>> +    struct page_info *pg;
>>> +
>>> +    /* For now, it only supports allocating at specified address. */
>>> +    if ( !mfn_valid(smfn) || !nr_mfns )
>>> +    {
>>> +        printk(XENLOG_ERR
>>> +               "Invalid %lu static memory starting at %"PRI_mfn"\n",
>>
>> Reading a log containing e.g. "Invalid 0 static memory starting at ..." I
>> don't think I would recognize that the "0" is the count of pages.
> 
> Sure. How about "try to allocate out of range page %"PRI_mfn"\n"?

This still doesn't convey _both_ parts of the if() that would cause
the log message to be issued.

>>> +               nr_mfns, mfn_x(smfn));
>>> +        return NULL;
>>> +    }
>>> +    pg = mfn_to_page(smfn);
>>> +
>>> +    for ( i = 0; i < nr_mfns; i++ )
>>> +    {
>>> +        /*
>>> +         * Reference count must continuously be zero for free pages
>>> +         * of static memory(PGC_reserved).
>>> +         */
>>> +        ASSERT(pg[i].count_info & PGC_reserved);
>>
>> What logic elsewhere guarantees that this will hold? ASSERT()s are to verify
>> that assumptions are met. But I don't think you can sensibly assume the 
>> caller
>> knows the range is reserved (and free), or else you could get away without 
>> any
>> allocation function.
> 
> The caller shall only call alloc_staticmem_pages when it knows the range is
> reserved; for now, alloc_staticmem_pages is only called in the context of
> alloc_domstatic_pages.

If the caller knows the static ranges, this isn't really "allocation".
I.e. I then question the need for "allocating" in the first place.

>>> +        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
>>> +        {
>>> +            printk(XENLOG_ERR
>>> +                   "Reference count must continuously be zero for free pages"
>>> +                   "pg[%lu] MFN %"PRI_mfn" c=%#lx t=%#x\n",
>>> +                   i, mfn_x(page_to_mfn(pg + i)),
>>> +                   pg[i].count_info, pg[i].tlbflush_timestamp);
>>> +            BUG();
>>> +        }
>>
>> The same applies here at least until proper locking gets added, which I
>> guess is happening in the next patch when really it would need to happen
>> right here.
>>
> 
> Ok, I will combine the two commits together and add locking here.
> I thought the content of this commit was already a bit too much, which is
> why I had planned to add the proper locking in a separate patch. 😉
>  
>> Furthermore I don't see why you don't fold ASSERT() and if into
>>
>>         if ( pg[i].count_info != (PGC_state_free | PGC_reserved) )
>>
>> After all PGC_reserved is not similar to PGC_need_scrub, which
>> alloc_heap_pages() masks out the way you also have it here.
>>
> 
> I understand that you prefer the if() condition to be phrased as follows:
>       if ( pg[i].count_info != (PGC_state_free | PGC_reserved) )
> Agreed that PGC_reserved shall have the same position as PGC_state_free.
> Hmmm, as for why I don't fold the ASSERT(), do you mean
> ASSERT(pg[i].count_info == (PGC_state_free | PGC_reserved))?

No. By converting to the suggested construct the ASSERT() disappears
by way of folding _into_ the if().
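As a sketch of that folded form (not the code that was eventually committed), and keeping only the fields Penny's patch already logs, the per-page check could look like the following; per the earlier exchange it would also need to run under whatever lock the series adds for static pages:

    for ( i = 0; i < nr_mfns; i++ )
    {
        /* A free static page is expected to be PGC_state_free | PGC_reserved. */
        if ( pg[i].count_info != (PGC_state_free | PGC_reserved) )
        {
            printk(XENLOG_ERR
                   "pg[%lu] static MFN %"PRI_mfn" c=%#lx t=%#x\n",
                   i, mfn_x(page_to_mfn(pg + i)),
                   pg[i].count_info, pg[i].tlbflush_timestamp);
            BUG();
        }

        /* Remainder of the per-page initialisation as in the patch. */
    }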

>> As to the printk() - the extra verbosity compared to the original isn't
>> helpful or necessary imo. The message needs to be distinguishable from the
>> other one, yes, so it would better mention "static" in some way. But the
>> prefix you have is too long for my taste (and lacks a separating blank
>> anyway).
>>
> 
> If you don't like the extra verbosity, maybe just
> " Static pg[%lu] MFN %"PRI_mfn" c=%#lx t=%#x.\n"?

Something along these lines, yes, but I wonder how difficult it is
to take the original message and insert "static" at a suitable place.
Any part you omit would again want justifying. Personally I'd go with
"pg[%u] static MFN %"PRI_mfn" c=%#lx o=%u v=%#lx t=%#x\n" unless any
of the parts are provably pointless to log for static pages.

>>> @@ -2434,6 +2512,57 @@ struct page_info *alloc_domheap_pages(
>>>      return pg;
>>>  }
>>>
>>> +#ifdef CONFIG_STATIC_ALLOCATION
>>> +/*
>>> + * Allocate nr_mfns contiguous pages, starting at #smfn, of static memory,
>>> + * then assign them to one specific domain #d.
>>> + * It is the equivalent of alloc_domheap_pages for static memory.
>>> + */
>>> +struct page_info *alloc_domstatic_pages(
>>> +        struct domain *d, unsigned long nr_mfns, mfn_t smfn,
>>> +        unsigned int memflags)
>>> +{
>>> +    struct page_info *pg = NULL;
>>> +    unsigned long dma_size;
>>> +
>>> +    ASSERT(!in_irq());
>>> +
>>> +    if ( !dma_bitsize )
>>> +        memflags &= ~MEMF_no_dma;
>>> +    else
>>> +    {
>>> +        if ( (dma_bitsize - PAGE_SHIFT) > 0 )
>>> +        {
>>> +            dma_size = 1ul << (dma_bitsize - PAGE_SHIFT);
>>> +            /* Starting address shall meet the DMA limitation. */
>>> +            if ( mfn_x(smfn) < dma_size )
>>> +                return NULL;
>>
>> I think I did ask this on v1 already: Why the first page? Static memory
>> regions, unlike buddy allocator zones, can cross power-of-2 address
>> boundaries. Hence it ought to be the last page that gets checked for
>> fitting address width restriction requirements.
>>
>> And then - is this necessary at all? Shouldn't "pre-defined by configuration
>> using physical address ranges" imply the memory designated for a guest fits
>> its DMA needs?
>>
> 
> Hmmm, in my understanding, this is the DMA restriction when using the buddy
> allocator:
>     else if ( (dma_zone = bits_to_zone(dma_bitsize)) < zone_hi )
>         pg = alloc_heap_pages(dma_zone + 1, zone_hi, order, memflags, d);
> dma_zone is restricting the starting buddy allocator zone, so I am thinking
> that here, it shall likewise restrict the first page.
> 
> imo, if we leave this to the user's configuration, the DMA restriction could
> also end up missing?

Did you read my earlier reply? Again: The difference is that ordinary
allocations (buddies) can't cross zone boundaries. Hence it is
irrelevant if you check DMA properties on the first or last page - both
will have the same number of significant bits. The same is - afaict -
not true for static allocation ranges.
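To make the significant-bits point concrete, a purely illustrative fragment (hypothetical MFN values; flsl(), _mfn(), mfn_add() and mfn_x() are existing Xen helpers): a buddy never spans a zone boundary, so its first and last MFN need the same number of address bits, while a static range crossing a power-of-2 boundary does not, and only its last page reflects the address-width requirement of the whole range.

    mfn_t first = _mfn(0xffffe);                    /* hypothetical start of a static range */
    mfn_t last  = mfn_add(first, 3);                /* 0x100001: crosses a power-of-2 boundary */

    unsigned int bits_first = flsl(mfn_x(first));   /* 20 significant bits */
    unsigned int bits_last  = flsl(mfn_x(last));    /* 21 significant bits */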

Of course, as expressed before, a question is whether DMA suitability
needs checking in the first place for static allocations: I'd view it
as mis-configuration if a domain was provided memory it can't really
use properly.

Jan
