
[Xen-devel] [PATCH for-4.6] xen/mm: populate_physmap: correctly validate the gfn for direct mapped domains



A direct mapped domain already has its memory allocated 1:1, so we
directly use the gfn as the mfn to map the RAM in the guest.

While we validate that the page associated with the first mfn belongs to
the domain, the subsequent MFNs are not validated when the extent_order
is > 0: for example, with an extent_order of 9 (a 2MB extent made of 512
4KB pages), only the first of the 512 MFNs is checked.

This may result in mapping memory regions (MMIO, RAM) which don't belong
to the domain.

However, only DOM0 on ARM uses direct memory mapping, so this doesn't
affect any guest (at least on the upstream version) or x86.

Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>

---
Cc: Ian Campbell <ian.campbell@xxxxxxxxxx>
Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Keir Fraser <keir@xxxxxxx>
Cc: Tim Deegan <tim@xxxxxxx>

    This patch is a candidate for Xen 4.6 and for backport to Xen 4.5 (and
    maybe 4.4).

    The hypercall is used by the ballooning code, and only DOM0 on ARM
    is a direct mapped domain.

    The code validating the GFN passed to the populate physmap
    hypercall has been moved into a for loop so that every GFN is
    validated when the extent order is > 0.

    Without it, a direct mapped domain may be able to map memory regions
    which don't belong to it.
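
    To make the change easier to follow, here is a minimal sketch of the
    before and after behaviour (not the actual Xen code; check_frame() is
    a made-up stand-in for the mfn_valid()/get_page() checks on a single
    frame):

        /* An extent of order N covers (1 << N) consecutive frames.   */
        /* Before: only the first frame of the extent was validated.  */
        if ( !check_frame(d, gpfn) )
            goto out;

        /* After: every frame of the extent is validated before the   */
        /* 1:1 mapping is established.                                 */
        for ( j = 0; j < (1UL << a->extent_order); j++ )
            if ( !check_frame(d, gpfn + j) )
                goto out;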

    I think the switch to the for loop is pretty safe and contained. The
    worst thing that could happen is denying a mapping more often than we
    used to. In any case, IIRC Linux mostly maps pages one by one
    (i.e. extent order = 0).
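
    For reference, the extent order is passed explicitly by the guest; a
    rough sketch of how a ballooning caller might fill the hypercall
    argument (field names from xen/include/public/memory.h; nr_frames and
    the frame_list array are assumed to be set up by the caller):

        struct xen_memory_reservation reservation = {
            .nr_extents   = nr_frames,  /* number of extents requested  */
            .extent_order = 0,          /* each extent is a single page */
            .domid        = DOMID_SELF,
        };

        set_xen_guest_handle(reservation.extent_start, frame_list);
        rc = HYPERVISOR_memory_op(XENMEM_populate_physmap, &reservation);
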
---
 xen/common/memory.c | 30 ++++++++++++++++++------------
 1 file changed, 18 insertions(+), 12 deletions(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index 61bb94c..7cc8af4 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -126,22 +126,28 @@ static void populate_physmap(struct memop_args *a)
             if ( is_domain_direct_mapped(d) )
             {
                 mfn = gpfn;
-                if ( !mfn_valid(mfn) )
+
+                for ( j = 0; j < (1 << a->extent_order); j++, mfn++ )
                 {
-                    gdprintk(XENLOG_INFO, "Invalid mfn %#"PRI_xen_pfn"\n",
+                    if ( !mfn_valid(mfn) )
+                    {
+                        gdprintk(XENLOG_INFO, "Invalid mfn %#"PRI_xen_pfn"\n",
                              mfn);
-                    goto out;
+                        goto out;
+                    }
+
+                    page = mfn_to_page(mfn);
+                    if ( !get_page(page, d) )
+                    {
+                        gdprintk(XENLOG_INFO,
+                                 "mfn %#"PRI_xen_pfn" doesn't belong to the"
+                                 " domain\n", mfn);
+                        goto out;
+                    }
+                    put_page(page);
                 }
 
-                page = mfn_to_page(mfn);
-                if ( !get_page(page, d) )
-                {
-                    gdprintk(XENLOG_INFO,
-                             "mfn %#"PRI_xen_pfn" doesn't belong to the"
-                             " domain\n", mfn);
-                    goto out;
-                }
-                put_page(page);
+                page = mfn_to_page(gpfn);
             }
             else
                 page = alloc_domheap_pages(d, a->extent_order, a->memflags);
-- 
2.1.4

