
Re: [PATCH V3 08/10] xen/arm: introduce acquire_staticmem_pages and acquire_domstatic_pages


  • To: Penny Zheng <penny.zheng@xxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Mon, 19 Jul 2021 11:26:04 +0200
  • Cc: Bertrand.Marquis@xxxxxxx, Wei.Chen@xxxxxxx, nd@xxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxxx, sstabellini@xxxxxxxxxx, julien@xxxxxxx
  • Delivery-date: Mon, 19 Jul 2021 09:26:18 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 15.07.2021 07:18, Penny Zheng wrote:
> @@ -1065,6 +1069,73 @@ static struct page_info *alloc_heap_pages(
>      return pg;
>  }
>  
> +#ifdef CONFIG_STATIC_MEMORY
> +/*
> + * Acquire nr_mfns contiguous reserved pages, starting at #smfn, of
> + * static memory.
> + */
> +static struct page_info *acquire_staticmem_pages(unsigned long nr_mfns,
> +                                                 mfn_t smfn,
> +                                                 unsigned int memflags)

With this and the other function not being __init, and with there being
neither any caller nor any freeing yet, the question is whether __init
wants adding; patch 10 suggests so. The lack of freeing in particular
means that domains created using static memory won't be restartable,
requiring a host reboot instead.
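
To illustrate the freeing side, here's a minimal sketch of what a
counterpart might look like (the name free_staticmem_pages() and its
shape are invented here for illustration, mirroring the hunk above;
nothing like it exists in the patch):

static void free_staticmem_pages(struct page_info *pg,
                                 unsigned long nr_mfns, bool need_scrub)
{
    unsigned long i;

    spin_lock(&heap_lock);

    for ( i = 0; i < nr_mfns; i++ )
    {
        if ( need_scrub )
            scrub_one_page(&pg[i]);

        /*
         * Put the page back into the state acquire_staticmem_pages()
         * checks for: free, with PGC_reserved preserved.
         */
        pg[i].count_info = PGC_state_free | PGC_reserved;
    }

    spin_unlock(&heap_lock);
}

Something along these lines (plus un-assigning the pages from the dying
domain at the call site) would be needed for restart to work.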

> +{
> +    bool need_tlbflush = false;
> +    uint32_t tlbflush_timestamp = 0;
> +    unsigned long i;
> +    struct page_info *pg;
> +
> +    /* For now, it only supports pre-configured static memory. */
> +    if ( !mfn_valid(smfn) || !nr_mfns )
> +        return NULL;
> +
> +    spin_lock(&heap_lock);
> +
> +    pg = mfn_to_page(smfn);
> +
> +    for ( i = 0; i < nr_mfns; i++ )
> +    {
> +        /*
> +         * Reference count must continuously be zero for free pages
> +         * of static memory(PGC_reserved).
> +         */
> +        if ( pg[i].count_info != (PGC_state_free | PGC_reserved) )
> +        {
> +            printk(XENLOG_ERR
> +                   "pg[%lu] Static MFN %"PRI_mfn" c=%#lx t=%#x\n",
> +                   i, mfn_x(page_to_mfn(pg + i)),
> +                   pg[i].count_info, pg[i].tlbflush_timestamp);
> +            BUG();
> +        }
> +
> +        if ( !(memflags & MEMF_no_tlbflush) )
> +            accumulate_tlbflush(&need_tlbflush, &pg[i],
> +                                &tlbflush_timestamp);
> +
> +        /*
> +         * Preserve flag PGC_reserved and change page state
> +         * to PGC_state_inuse.
> +         */
> +        pg[i].count_info = (PGC_reserved | PGC_state_inuse);
> +        /* Initialise fields which have other uses for free pages. */
> +        pg[i].u.inuse.type_info = 0;
> +        page_set_owner(&pg[i], NULL);
> +
> +        /*
> +         * Ensure cache and RAM are consistent for platforms where the
> +         * guest can control its own visibility of/through the cache.
> +         */
> +        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
> +                            !(memflags & MEMF_no_icache_flush));

Indentation. (The second argument of flush_page_to_ram() isn't aligned
with the first.)

> +    }
> +
> +    if ( need_tlbflush )
> +        filtered_flush_tlb_mask(tlbflush_timestamp);
> +
> +    spin_unlock(&heap_lock);

I'm pretty sure I commented before on the flush_page_to_ram() here
being inside the locked region. While XSA-364 doesn't apply here,
because you don't defer scrubbing (yet), the flushing may still take
too long to happen with the lock held. Of course there's a dependency
on when exactly the call(s) to this code will happen. In particular, if
other DomU-s can already be running at that point, the hold-off may not
be tolerable by some of them. Yet if this is intentional and deemed not
a problem, then I'd have expected the description to mention this
difference from alloc_heap_pages(), which you say this is an
equivalent of.
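
Concretely, the shape the XSA-364 fix gave alloc_heap_pages() would
translate to roughly the following here (a sketch only, re-using the
variables from the hunk above):

    if ( need_tlbflush )
        filtered_flush_tlb_mask(tlbflush_timestamp);

    spin_unlock(&heap_lock);

    /*
     * Do the (potentially long-running) cache maintenance only after
     * dropping heap_lock, so other CPUs aren't kept out of the
     * allocator for the duration of the flush.
     */
    for ( i = 0; i < nr_mfns; i++ )
        flush_page_to_ram(mfn_x(smfn) + i,
                          !(memflags & MEMF_no_icache_flush));

I.e. the per-page flush_page_to_ram() would move out of the loop that's
run under the lock, at the price of a second loop over the pages.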

> @@ -2411,6 +2483,42 @@ struct page_info *alloc_domheap_pages(
>      return pg;
>  }
>  
> +#ifdef CONFIG_STATIC_MEMORY
> +/*
> + * Acquire nr_mfns contiguous pages, starting at #smfn, of static memory,
> + * then assign them to one specific domain #d.
> + */
> +struct page_info *acquire_domstatic_pages(
> +        struct domain *d, unsigned long nr_mfns, mfn_t smfn,
> +        unsigned int memflags)
> +{
> +    struct page_info *pg = NULL;
> +
> +    ASSERT(!in_irq());
> +
> +    pg = acquire_staticmem_pages(nr_mfns, smfn, memflags);
> +    if ( !pg )
> +        return NULL;
> +
> +    /* Right now, MEMF_no_owner case is meaningless here. */
> +    ASSERT(d);
> +    if ( memflags & MEMF_no_refcount )
> +    {
> +        unsigned long i;
> +
> +        for ( i = 0; i < nr_mfns; i++ )
> +            pg[i].count_info |= PGC_extra;
> +    }

With MEMF_no_owner now called out as meaningless here, what about
MEMF_no_refcount?
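
If it is equally meaningless, rejecting both flags up front might be
cleaner than half-honouring one of them; purely as an illustration:

    /*
     * Illustrative only: neither MEMF_no_owner nor MEMF_no_refcount
     * would make sense for statically configured memory.
     */
    if ( memflags & (MEMF_no_owner | MEMF_no_refcount) )
    {
        ASSERT_UNREACHABLE();
        return NULL;
    }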

Jan