
Re: [PATCH v9 2/8] xen: do not free reserved memory into heap



Hi Penny,

On 20/07/2022 06:46, Penny Zheng wrote:
Pages used as guest RAM for a static domain shall be reserved to that
domain only. So, to keep reserved pages from being reused for any other
purpose, users shall not free them back to the heap, even when the last
reference is dropped.

This commit introduces a new helper, free_domstatic_page, to free a
static page at runtime. free_staticmem_pages will now be called by it
at runtime, so drop the __init flag.

Signed-off-by: Penny Zheng <penny.zheng@xxxxxxx>

With a couple of comments below:

Reviewed-by: Julien Grall <jgrall@xxxxxxxxxx>
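
To help readers of the archive: the idea is that whichever path drops the
last reference on a static page hands it to free_domstatic_page() rather
than free_heap_pages(). A rough sketch of such a dispatch (names and
placement are illustrative, not from this series):

    /* Illustrative only: dispatch on PGC_static when freeing a page. */
    static void free_page(struct page_info *pg, bool need_scrub)
    {
        if ( pg->count_info & PGC_static )
            /* Reserved for a static domain: never returned to the heap. */
            free_domstatic_page(pg);
        else
            free_heap_pages(pg, 0, need_scrub);
    }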

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index ed56379b96..a12622e921 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -151,10 +151,6 @@
  #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
  #endif
-#ifndef PGC_static
-#define PGC_static 0
-#endif
-
  /*
   * Comma-separated list of hexadecimal page numbers containing bad bytes.
   * e.g. 'badpage=0x3f45,0x8a321'.
@@ -2636,12 +2632,14 @@ struct domain *get_pg_owner(domid_t domid)
#ifdef CONFIG_STATIC_MEMORY
  /* Equivalent of free_heap_pages to free nr_mfns pages of static memory. */
-void __init free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
-                                 bool need_scrub)
+void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
+                          bool need_scrub)
  {
      mfn_t mfn = page_to_mfn(pg);
      unsigned long i;
+    spin_lock(&heap_lock);
+
      for ( i = 0; i < nr_mfns; i++ )
      {
          mark_page_free(&pg[i], mfn_add(mfn, i));
@@ -2652,9 +2650,34 @@ void __init free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
              scrub_one_page(pg);
          }
-        /* In case initializing page of static memory, mark it PGC_static. */
          pg[i].count_info |= PGC_static;
      }
+
+    spin_unlock(&heap_lock);
+}
+
+void free_domstatic_page(struct page_info *page)
+{
+    struct domain *d = page_get_owner(page);
+    bool drop_dom_ref;
+
+    ASSERT(d);

I saw Jan commenting on this. I agree with him that this should be
switched to:

if ( !d )
{
    ASSERT_UNREACHABLE();
    return;
}

I would even go further and add a printk() to log the problem in production builds.
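
Something along those lines, as a sketch (the exact message is
illustrative, not from the thread):

    if ( unlikely(!d) )
    {
        printk(XENLOG_ERR "%s: page %"PRI_mfn" has no owner\n",
               __func__, mfn_x(page_to_mfn(page)));
        ASSERT_UNREACHABLE();
        return;
    }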

+
+    ASSERT_ALLOC_CONTEXT();
+
+    /* NB. May recursively lock from relinquish_memory(). */
+    spin_lock_recursive(&d->page_alloc_lock);
+
+    arch_free_heap_page(d, page);
+
+    drop_dom_ref = !domain_adjust_tot_pages(d, -1);
+
+    spin_unlock_recursive(&d->page_alloc_lock);
+
+    free_staticmem_pages(page, 1, scrub_debug);
+
+    if ( drop_dom_ref )
+        put_domain(d);
  }
/*
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 3be754da92..f1a7d5c991 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -85,13 +85,12 @@ bool scrub_free_pages(void);
  } while ( false )
  #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
-#ifdef CONFIG_STATIC_MEMORY
  /* These functions are for static memory */
  void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
                            bool need_scrub);
+void free_domstatic_page(struct page_info *page);
  int acquire_domstatic_pages(struct domain *d, mfn_t smfn, unsigned int nr_mfns,
                              unsigned int memflags);
-#endif

NIT: The removal of the #ifdef seems unrelated to this patch. If you plan to send a v10, then I would suggest mentioning it in the commit message.

/* Map machine page range in Xen virtual address space. */
  int map_pages_to_xen(
@@ -212,6 +211,10 @@ extern struct domain *dom_cow;
 
 #include <asm/mm.h>
 
+#ifndef PGC_static
+#define PGC_static 0
+#endif

I saw Jan commenting on this change. So, FYI, I am OK either way.
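
For completeness, the reason the fallback works: with PGC_static defined
to 0, the flag handling in common code degenerates to no-ops, e.g.:

    pg->count_info |= PGC_static;      /* OR with 0: no effect */
    if ( pg->count_info & PGC_static ) /* AND with 0: always false */
        ;                              /* branch becomes dead code */

so architectures without static memory support pay no cost for it.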

+
  static inline bool is_special_page(const struct page_info *page)
  {
      return is_xen_heap_page(page) || (page->count_info & PGC_extra);

--
Julien Grall



 

