
[Xen-changelog] [xen-unstable] x86: run timers when populating Dom0's P2M table



# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1251097328 -3600
# Node ID 7e194320394244bc5028881b498d2e01574086cd
# Parent  9189afa1f1e6939fcda5525e225843cfd2325c42
x86: run timers when populating Dom0's P2M table

When booting Dom0 with huge amounts of memory and/or with memory
accesses being sufficiently slow (due to NUMA effects), and with the
ACPI PM timer or a high-frequency HPET being used as the platform
timer, the time it takes to populate the P2M table may significantly
exceed the overflow period of the platform timer, disrupting time
management to the point where Dom0 boot fails.
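
The fix is simply to service pending timers at intervals from within
the long-running construction loops. Below is a minimal, standalone C
sketch of that pattern; it is not Xen source, and service_timers() and
populate_p2m() are hypothetical stand-ins for Xen's
process_pending_timers() and the P2M-population loops in
construct_dom0(). The 0xfffff mask matches the patch, i.e. the timer
work runs once every 2^20 iterations.

/*
 * Standalone illustration (plain C) of periodically servicing timers
 * inside a very long loop, so a wrapping platform timer is read often
 * enough during early boot work.
 */
#include <stdio.h>

static void service_timers(void)
{
    /* Stand-in: in Xen this is where pending soft timers would run,
     * which keeps platform-timer overflow handling alive. */
}

static void populate_p2m(unsigned long nr_pfns)
{
    unsigned long pfn;

    for ( pfn = 0; pfn < nr_pfns; pfn++ )
    {
        /* ... per-page P2M/M2P setup work would go here ... */

        /* Every 2^20 pages, give pending timers a chance to run so
         * this loop does not starve timekeeping. */
        if ( !(pfn & 0xfffff) )
            service_timers();
    }
}

int main(void)
{
    /* 1 << 24 pages of 4KiB each corresponds to a 64GiB Dom0. */
    populate_p2m(1UL << 24);
    printf("done\n");
    return 0;
}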

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxxxx>
---
 xen/arch/x86/domain_build.c |    6 ++++++
 1 files changed, 6 insertions(+)

diff -r 9189afa1f1e6 -r 7e1943203942 xen/arch/x86/domain_build.c
--- a/xen/arch/x86/domain_build.c       Fri Aug 21 17:14:35 2009 +0100
+++ b/xen/arch/x86/domain_build.c       Mon Aug 24 08:02:08 2009 +0100
@@ -927,6 +927,8 @@ int __init construct_dom0(
         else
             ((unsigned int *)vphysmap_start)[pfn] = mfn;
         set_gpfn_from_mfn(mfn, pfn);
+        if (!(pfn & 0xfffff))
+            process_pending_timers();
     }
     si->first_p2m_pfn = pfn;
     si->nr_p2m_frames = d->tot_pages - count;
@@ -945,6 +947,8 @@ int __init construct_dom0(
 #ifndef NDEBUG
             ++alloc_epfn;
 #endif
+            if (!(pfn & 0xfffff))
+                process_pending_timers();
         }
     }
     BUG_ON(pfn != d->tot_pages);
@@ -965,6 +969,8 @@ int __init construct_dom0(
             set_gpfn_from_mfn(mfn, pfn);
 #undef pfn
             page++; pfn++;
+            if (!(pfn & 0xfffff))
+                process_pending_timers();
         }
     }
 

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog


 

