
[Xen-changelog] [xen staging] x86/shadow: don't enable shadow mode with too small a shadow allocation



commit 2634b997afabfdc5a972e07e536dfbc6febb4385
Author:     Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Fri Nov 30 12:10:39 2018 +0100
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Fri Nov 30 12:10:39 2018 +0100

    x86/shadow: don't enable shadow mode with too small a shadow allocation
    
    We've had more than one report of host crashes after a failed migration,
    and in at least one case the evidence pointed to a shadow allocation
    pool that had been shrunk too far. Instead of merely checking that the
    pool is non-empty, check whether it is smaller than the minimum that
    shadow_set_allocation() would bump it to if it were invoked in the
    first place.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Acked-by: Tim Deegan <tim@xxxxxxx>
---
 xen/arch/x86/mm/shadow/common.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 61304d739d..c49aeb5e60 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -977,7 +977,7 @@ const u8 sh_type_to_size[] = {
  * allow for more than ninety allocated pages per vcpu.  We round that
  * up to 128 pages, or half a megabyte per vcpu, and add 1 more vcpu's
  * worth to make sure we never return zero. */
-static unsigned int shadow_min_acceptable_pages(struct domain *d)
+static unsigned int shadow_min_acceptable_pages(const struct domain *d)
 {
     return (d->max_vcpus + 1) * 128;
 }
@@ -1369,6 +1369,15 @@ shadow_free_p2m_page(struct domain *d, struct page_info *pg)
     paging_unlock(d);
 }
 
+static unsigned int sh_min_allocation(const struct domain *d)
+{
+    /*
+     * Don't allocate less than the minimum acceptable, plus one page per
+     * megabyte of RAM (for the p2m table).
+     */
+    return shadow_min_acceptable_pages(d) + (d->tot_pages / 256);
+}
+
 int shadow_set_allocation(struct domain *d, unsigned int pages, bool *preempted)
 {
     struct page_info *sp;
@@ -1384,9 +1393,7 @@ int shadow_set_allocation(struct domain *d, unsigned int pages, bool *preempted)
         else
             pages -= d->arch.paging.shadow.p2m_pages;
 
-        /* Don't allocate less than the minimum acceptable, plus one page per
-         * megabyte of RAM (for the p2m table) */
-        lower_bound = shadow_min_acceptable_pages(d) + (d->tot_pages / 256);
+        lower_bound = sh_min_allocation(d);
         if ( pages < lower_bound )
             pages = lower_bound;
     }
@@ -2712,7 +2719,7 @@ int shadow_enable(struct domain *d, u32 mode)
 
     /* Init the shadow memory allocation if the user hasn't done so */
     old_pages = d->arch.paging.shadow.total_pages;
-    if ( old_pages == 0 )
+    if ( old_pages < sh_min_allocation(d) + d->arch.paging.shadow.p2m_pages )
     {
         paging_lock(d);
         rv = shadow_set_allocation(d, 1024, NULL); /* Use at least 4MB */
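
For illustration, here is a small standalone sketch (not Xen code) of the
arithmetic behind the new sh_min_allocation() helper and the tightened check
in shadow_enable(). The function and variable names below, and the example
numbers, are invented for this sketch; only the formulas mirror the patch.

/*
 * Standalone sketch: the minimum shadow pool is 128 pages (512kB) per
 * vcpu plus one extra vcpu's worth, plus one page per megabyte of guest
 * RAM (i.e. tot_pages / 256) for the p2m table.
 */
#include <stdio.h>

/* Mirrors shadow_min_acceptable_pages(): 128 pages per vcpu, plus one
 * vcpu's worth so the result is never zero. */
static unsigned int min_acceptable_pages(unsigned int max_vcpus)
{
    return (max_vcpus + 1) * 128;
}

/* Mirrors the new sh_min_allocation(): add one page per MB of RAM. */
static unsigned long min_allocation(unsigned int max_vcpus,
                                    unsigned long tot_pages)
{
    return min_acceptable_pages(max_vcpus) + tot_pages / 256;
}

int main(void)
{
    unsigned int max_vcpus = 4;           /* example domain */
    unsigned long tot_pages = 1UL << 18;  /* 1GB of RAM in 4k pages */
    unsigned long p2m_pages = 1024;       /* pages already used for the p2m */
    unsigned long total_pages = 512;      /* pool left after a failed migration */

    unsigned long lower_bound = min_allocation(max_vcpus, tot_pages);

    printf("minimum shadow pool: %lu pages\n", lower_bound);

    /*
     * Old check in shadow_enable(): only (re)size the pool when it was
     * completely empty.  New check: also (re)size it when it is smaller
     * than what shadow_set_allocation() would enforce anyway.
     */
    if ( total_pages < lower_bound + p2m_pages )
        printf("pool of %lu pages is too small -- would be grown\n",
               total_pages);

    return 0;
}

With these example numbers the lower bound works out to
(4 + 1) * 128 + 262144 / 256 = 640 + 1024 = 1664 pages, so a leftover pool of
512 pages would now be grown before shadow mode is enabled, rather than being
accepted merely because it happens to be non-empty.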
--
generated by git-patchbot for /home/xen/git/xen.git#staging

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/xen-changelog

 

