
[Xen-changelog] [xen-3.4-testing] PoD: Scrub pages before adding to the cache

# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1254409710 -3600
# Node ID 2519769ba3be0a647c3fcd87c3cd632ce5c4a60f
# Parent  e34a589a1bc896fb9582a1387f99cda5a1624807
PoD: Scrub pages before adding to the cache

Neither memory from the allocator nor memory from
the balloon driver is guaranteed to be zero.  Scrub it
before adding it to the cache.

Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
xen-unstable changeset:   20191:3deb2bd7aade
xen-unstable date:        Tue Sep 15 09:08:36 2009 +0100

PoD: Fix debug build.

Signed-off-by: Keir Fraser <keir.fraser@xxxxxxxxxx>
xen-unstable changeset:   20193:973f4bbf4723
xen-unstable date:        Tue Sep 15 09:13:01 2009 +0100
---
 xen/arch/x86/mm/p2m.c |   12 ++++++++++++
 1 files changed, 12 insertions(+)

diff -r e34a589a1bc8 -r 2519769ba3be xen/arch/x86/mm/p2m.c
--- a/xen/arch/x86/mm/p2m.c     Thu Oct 01 16:05:38 2009 +0100
+++ b/xen/arch/x86/mm/p2m.c     Thu Oct 01 16:08:30 2009 +0100
@@ -296,6 +296,18 @@ p2m_pod_cache_add(struct domain *d,
         }
     }
 #endif
+
+    /*
+     * Pages from domain_alloc and returned by the balloon driver aren't
+     * guaranteed to be zero; but by reclaiming zero pages, we implicitly
+     * promise to provide zero pages. So we scrub pages before using.
+     */
+    for ( i = 0; i < (1 << order); i++ )
+    {
+        char *b = map_domain_page(mfn_x(page_to_mfn(page)) + i);
+        clear_page(b);
+        unmap_domain_page(b);
+    }
 
     spin_lock(&d->page_alloc_lock);
 

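For readers following the hunk above: an order-`order` allocation covers
1 << order contiguous 4KiB machine frames, and each frame is mapped,
zeroed, and unmapped individually. The annotated sketch below restates
that scrub loop outside the diff. It uses only the helpers already
visible in the patch (map_domain_page, clear_page, unmap_domain_page,
page_to_mfn, mfn_x); the wrapper name scrub_pod_pages is hypothetical
and does not exist in p2m.c.

    /*
     * Illustrative sketch only: assumes Xen-internal headers and
     * in-tree compilation; scrub_pod_pages is a made-up name, not a
     * function in p2m.c.
     */
    static void scrub_pod_pages(struct page_info *page, unsigned int order)
    {
        unsigned long i;

        /* An order-N allocation spans 2^N contiguous machine frames. */
        for ( i = 0; i < (1UL << order); i++ )
        {
            /* Map one machine frame at a time into Xen's address space. */
            char *b = map_domain_page(mfn_x(page_to_mfn(page)) + i);

            /* Zero the full 4KiB page before it can be handed to a guest. */
            clear_page(b);

            /* Release the mapping; map slots are a limited resource. */
            unmap_domain_page(b);
        }
    }

Mapping and unmapping per frame keeps the loop valid even on builds
where not all machine memory is permanently mapped (the situation
map_domain_page exists to handle); where a direct map is available the
per-frame map/unmap is cheap, so the same loop works unchanged.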
_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog