
[Xen-changelog] [xen staging-4.11] x86/hvm/ioreq: use ref-counted target-assigned shared pages



commit b88ccb3ae79decfa495ae965c02aeedc8fda2bcb
Author:     Paul Durrant <paul.durrant@xxxxxxxxxx>
AuthorDate: Tue Nov 20 15:31:48 2018 +0100
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Tue Nov 20 15:31:48 2018 +0100

    x86/hvm/ioreq: use ref-counted target-assigned shared pages
    
    Passing MEMF_no_refcount to alloc_domheap_pages() will allocate, as
    expected, a page that is assigned to the specified domain but is not
    accounted for in tot_pages. Unfortunately there is no logic for tracking
    such allocations and avoiding any adjustment to tot_pages when the page
    is freed.
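    
    As a simplified sketch (illustrative pseudo-C, not the actual
    allocator code), the asymmetry is:
    
        struct domain *d;       /* the domain being charged */
        struct page_info *pg;
    
        /* MEMF_no_refcount: pg is assigned to d, but d->tot_pages is
         * deliberately not incremented. */
        pg = alloc_domheap_page(d, MEMF_no_refcount);
    
        /* Freeing an owned page, however, always decrements tot_pages,
         * so the count can drift below the number of pages d really
         * holds, weakening the max_pages check on later allocations. */
        free_domheap_page(pg);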
    
    The only caller of alloc_domheap_pages() that passes MEMF_no_refcount is
    hvm_alloc_ioreq_mfn(), so this patch removes use of the flag from that
    call-site to avoid the possibility of a domain using an ioreq server as
    a means to adjust its tot_pages and hence allocate more memory than it
    should be able to.
    
    However, the reason for using the flag in the first place was to avoid
    the allocation failing if the emulator domain is already at its maximum
    memory limit. Hence this patch switches to allocating memory from the
    target domain instead of the emulator domain. There is already an extra
    memory allowance of 2MB (LIBXL_HVM_EXTRA_MEMORY) applied to HVM guests,
    which is sufficient to cover the pages required by the supported
    configuration of a single IOREQ server for QEMU. (Stub-domains do not,
    so far, use resource mapping). It is also the case that QEMU will
    have mapped the IOREQ server pages before the guest boots, so it is
    not possible for the guest to inflate its balloon to consume these
    pages.
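    
    For scale (an informal check, assuming 4KiB pages): a single IOREQ
    server uses at most two shared pages, one synchronous ioreq page and
    one buffered ioreq page, i.e. 8KiB, while the 2MB allowance covers
    512 such pages.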
    
    Reported-by: Julien Grall <julien.grall@xxxxxxx>
    Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
    master commit: e862e6ceb1fd971d755a0c57d6a0f3b8065187dc
    master date: 2018-11-20 14:57:38 +0100
---
 xen/arch/x86/hvm/ioreq.c | 12 ++----------
 xen/arch/x86/mm.c        |  6 ------
 2 files changed, 2 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index bdc2687014..fd10ee6146 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -342,20 +342,12 @@ static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
         return 0;
     }
 
-    /*
-     * Allocated IOREQ server pages are assigned to the emulating
-     * domain, not the target domain. This is safe because the emulating
-     * domain cannot be destroyed until the ioreq server is destroyed.
-     * Also we must use MEMF_no_refcount otherwise page allocation
-     * could fail if the emulating domain has already reached its
-     * maximum allocation.
-     */
-    page = alloc_domheap_page(s->emulator, MEMF_no_refcount);
+    page = alloc_domheap_page(s->target, 0);
 
     if ( !page )
         return -ENOMEM;
 
-    if ( !get_page_and_type(page, s->emulator, PGT_writable_page) )
+    if ( !get_page_and_type(page, s->target, PGT_writable_page) )
     {
         /*
          * The domain can't possibly know about this page yet, so failure
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 608ff2495f..9d29f3127d 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4396,12 +4396,6 @@ int arch_acquire_resource(struct domain *d, unsigned int type,
 
             mfn_list[i] = mfn_x(mfn);
         }
-
-        /*
-         * The frames will have been assigned to the domain that created
-         * the ioreq server.
-         */
-        *flags |= XENMEM_rsrc_acq_caller_owned;
         break;
     }
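
A note on the mm.c hunk above: with the pages now assigned to the
target domain rather than the caller, XENMEM_rsrc_acq_caller_owned is
no longer reported, but the emulator still maps the pages through the
same resource-mapping path. A minimal caller-side sketch (using the
stable libxenforeignmemory API; the target domid and ioreq server id
are placeholders, and error handling is trimmed):

    #include <sys/mman.h>         /* PROT_READ, PROT_WRITE */
    #include <xenforeignmemory.h>
    #include <xen/memory.h>       /* XENMEM_resource_ioreq_server */
    #include <xen/hvm/ioreq.h>    /* ..._frame_ioreq()/_frame_bufioreq */

    int map_ioreq_page(domid_t target_domid /* placeholder */,
                       unsigned int id      /* placeholder server id */)
    {
        xenforeignmemory_handle *fmem = xenforeignmemory_open(NULL, 0);
        xenforeignmemory_resource_handle *res;
        void *addr = NULL;

        if ( !fmem )
            return -1;

        /* Map the synchronous ioreq page; frame 0 would be the
         * buffered (bufioreq) page. */
        res = xenforeignmemory_map_resource(
            fmem, target_domid, XENMEM_resource_ioreq_server, id,
            XENMEM_resource_ioreq_server_frame_ioreq(0), 1,
            &addr, PROT_READ | PROT_WRITE, 0);
        if ( !res )
        {
            xenforeignmemory_close(fmem);
            return -1;
        }

        /* ... use the shared page at addr ... */

        xenforeignmemory_unmap_resource(fmem, res);
        xenforeignmemory_close(fmem);
        return 0;
    }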
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.11


 

