
[Xen-changelog] [xen-unstable] x86_64: widen bit width usable for struct domain allocation



# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1265966658 0
# Node ID 3bb163b7467362e1d39eaf44365e54c0cbfad927
# Parent  a948403c8f99013cd7bfd5c441e84c41a0e4009e
x86_64: widen bit width usable for struct domain allocation

Since it is a PDX (rather than a PFN) that gets stored where only a 32-bit
quantity is available, the allocation should also account for the bits
removed during PFN-to-PDX conversion.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxxxx>
---
 xen/arch/x86/domain.c |   10 +++++++---
 1 files changed, 7 insertions(+), 3 deletions(-)

diff -r a948403c8f99 -r 3bb163b74673 xen/arch/x86/domain.c
--- a/xen/arch/x86/domain.c     Fri Feb 12 09:23:10 2010 +0000
+++ b/xen/arch/x86/domain.c     Fri Feb 12 09:24:18 2010 +0000
@@ -174,11 +174,15 @@ struct domain *alloc_domain_struct(void)
 {
     struct domain *d;
     /*
-     * We pack the MFN of the domain structure into a 32-bit field within
+     * We pack the PDX of the domain structure into a 32-bit field within
      * the page_info structure. Hence the MEMF_bits() restriction.
      */
-    d = alloc_xenheap_pages(
-        get_order_from_bytes(sizeof(*d)), MEMF_bits(32 + PAGE_SHIFT));
+    unsigned int bits = 32 + PAGE_SHIFT;
+
+#ifdef __x86_64__
+    bits += pfn_pdx_hole_shift;
+#endif
+    d = alloc_xenheap_pages(get_order_from_bytes(sizeof(*d)), MEMF_bits(bits));
     if ( d != NULL )
         memset(d, 0, sizeof(*d));
     return d;
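
To illustrate the reasoning above, here is a minimal, self-contained C sketch
of the PFN<->PDX conversion. The mask and shift values are invented for the
example (in Xen they are derived from the host memory map at boot), and the
helpers are simplified stand-ins for Xen's pfn_to_pdx()/pdx_to_pfn(); the
point is only that squeezing pfn_pdx_hole_shift unused bits out of every PFN
lets a 32-bit PDX field cover a PFN range that is pfn_pdx_hole_shift bits
wider, hence MEMF_bits(32 + PAGE_SHIFT + pfn_pdx_hole_shift).

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Invented example values: in Xen, the hole shift and masks are computed
 * at boot from the machine's memory map, not fixed constants.
 */
#define PAGE_SHIFT           12
#define PFN_PDX_HOLE_SHIFT   6                            /* bits squeezed out of every PFN */
#define PFN_PDX_BOTTOM_MASK  ((UINT64_C(1) << 30) - 1)    /* PFN bits below the hole        */
#define PFN_TOP_MASK         (~((UINT64_C(1) << 36) - 1)) /* PFN bits above the hole        */

/* Simplified stand-in for Xen's pfn_to_pdx(): drop the unused bit run. */
static uint64_t pfn_to_pdx(uint64_t pfn)
{
    return (pfn & PFN_PDX_BOTTOM_MASK) |
           ((pfn & PFN_TOP_MASK) >> PFN_PDX_HOLE_SHIFT);
}

/* Simplified stand-in for Xen's pdx_to_pfn(): re-insert the unused bit run. */
static uint64_t pdx_to_pfn(uint64_t pdx)
{
    return (pdx & PFN_PDX_BOTTOM_MASK) |
           ((pdx & (PFN_TOP_MASK >> PFN_PDX_HOLE_SHIFT)) << PFN_PDX_HOLE_SHIFT);
}

int main(void)
{
    /* The largest value that fits a 32-bit PDX field. */
    uint64_t pdx = UINT64_C(0xffffffff);
    uint64_t pfn = pdx_to_pfn(pdx);

    /* The corresponding machine address needs 32 + PAGE_SHIFT + hole bits. */
    printf("pdx 0x%" PRIx64 " -> pfn 0x%" PRIx64 " -> address < 2^%d\n",
           pdx, pfn, 32 + PAGE_SHIFT + PFN_PDX_HOLE_SHIFT);
    printf("round trip: pfn_to_pdx(pfn) = 0x%" PRIx64 "\n", pfn_to_pdx(pfn));
    return 0;
}

With the illustrative 6-bit hole, the 32-bit PDX 0xffffffff maps to a 38-bit
PFN, i.e. a machine address just below 2^(32 + PAGE_SHIFT + hole_shift); on a
machine without a hole the bound degenerates to the old 32 + PAGE_SHIFT.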
