
[Xen-changelog] [IA64] get_pfn_list workaround



# HG changeset patch
# User awilliam@xxxxxxxxxxx
# Node ID 90813b66c3cf444af80fc7974595cc8741a49a3a
# Parent  0a7e619a248fd77eabf4468d20bd05f1c6b353a5
[IA64] get_pfn_list workaround

As we know, the mechanism for the hypervisor to pass parameters through
pointers is not complete. The hypervisor uses the copy_from/to_user
functions to copy parameters into the hypervisor and to copy results back
to user space; if a TLB miss happens during the copy, the hypervisor
cannot handle it and the hypercall fails. There is no mechanism to recover
from such a failure, which may crash the domain. The get_pfn_list
hypercall copies a large amount of data from the hypervisor to user space,
so it is easy to trigger this issue when creating a VTI domain.

When it fails, get_pfn_list returns the number of pfn entries copied so
far. This patch then performs a dummy access to the parameter memory block
so that the TLB mapping is tracked by the hypervisor, and continues the
get_pfn_list from where it left off.

This is a workaround until we design a new mechanism for passing
parameters through pointers.
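
For illustration only, below is a minimal sketch of the retry idea in the
patch. The helper getmemlist_once() is hypothetical and stands in for one
DOM0_GETMEMLIST hypercall that reports how many pfn entries it managed to
copy; mlock handling and error paths are omitted, so see the diff below
for the real code.

    #include <stdint.h>

    /* Hypothetical helper (not part of the patch): issues one
     * DOM0_GETMEMLIST hypercall and returns how many pfn entries were
     * actually copied into buf (possibly fewer than nr_pages if the
     * hypervisor's put_user faulted part way through). */
    unsigned int getmemlist_once(int xc_handle, uint32_t domid,
                                 unsigned long *buf,
                                 unsigned int start_page,
                                 unsigned int nr_pages);

    static int get_pfn_list_sketch(int xc_handle, uint32_t domid,
                                   unsigned long *pfn_buf,
                                   unsigned int start_page,
                                   unsigned int nr_pages)
    {
        unsigned int done = 0;

        while (done < nr_pages) {
            done += getmemlist_once(xc_handle, domid, pfn_buf + done,
                                    start_page + done, nr_pages - done);
            if (done < nr_pages)
                /* Dummy write: fault this part of the buffer in so the
                 * hypervisor tracks its mapping, then retry the rest. */
                pfn_buf[done] = 0;
        }
        return nr_pages;
    }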

Signed-off-by: Anthony Xu <anthony.xu@xxxxxxxxx>

diff -r 0a7e619a248f -r 90813b66c3cf tools/libxc/xc_ia64_stubs.c
--- a/tools/libxc/xc_ia64_stubs.c       Mon Apr 10 14:54:35 2006 -0600
+++ b/tools/libxc/xc_ia64_stubs.c       Mon Apr 10 15:13:42 2006 -0600
@@ -48,6 +48,12 @@ xc_plan9_build(int xc_handle,
     PERROR("xc_plan9_build not implemented\n");
     return -1;
 }
+/*
+    The VMM uses put_user to copy the pfn list into the guest buffer; this
+    may fail, and the VMM does not handle such a failure yet.
+    This function touches the guest buffer to make sure the buffer's
+    mapping is tracked by the VMM.
+  */
 
 int xc_ia64_get_pfn_list(int xc_handle,
                          uint32_t domid, 
@@ -56,27 +62,48 @@ int xc_ia64_get_pfn_list(int xc_handle,
                          unsigned int nr_pages)
 {
     dom0_op_t op;
-    int ret;
-    unsigned long max_pfns = ((unsigned long)start_page << 32) | nr_pages;
-
-    op.cmd = DOM0_GETMEMLIST;
-    op.u.getmemlist.domain   = (domid_t)domid;
-    op.u.getmemlist.max_pfns = max_pfns;
-    op.u.getmemlist.buffer   = pfn_buf;
-
-    if ( (max_pfns != -1UL)
-               && mlock(pfn_buf, nr_pages * sizeof(unsigned long)) != 0 )
-    {
-        PERROR("Could not lock pfn list buffer");
-        return -1;
+    int num_pfns,ret;
+    unsigned int __start_page, __nr_pages;
+    unsigned long max_pfns;
+    unsigned long *__pfn_buf;
+    __start_page = start_page;
+    __nr_pages = nr_pages;
+    __pfn_buf = pfn_buf;
+  
+    while(__nr_pages){
+        max_pfns = ((unsigned long)__start_page << 32) | __nr_pages;
+        op.cmd = DOM0_GETMEMLIST;
+        op.u.getmemlist.domain   = (domid_t)domid;
+        op.u.getmemlist.max_pfns = max_pfns;
+        op.u.getmemlist.buffer   = __pfn_buf;
+
+        if ( (max_pfns != -1UL)
+                   && mlock(__pfn_buf, __nr_pages * sizeof(unsigned long)) != 0 )
+        {
+            PERROR("Could not lock pfn list buffer");
+            return -1;
+        }    
+
+        ret = do_dom0_op(xc_handle, &op);
+
+        if (max_pfns != -1UL)
+               (void)munlock(__pfn_buf, __nr_pages * sizeof(unsigned long));
+
+        if (max_pfns == -1UL)
+            return 0;
+        
+        num_pfns = op.u.getmemlist.num_pfns;
+        __start_page += num_pfns;
+        __nr_pages -= num_pfns;
+        __pfn_buf += num_pfns;
+
+        if (ret < 0) 
+            // dummy write to make sure this tlb mapping is tracked by VMM 
+            *__pfn_buf = 0;
+        else 
+            return nr_pages;    
     }    
-
-    ret = do_dom0_op(xc_handle, &op);
-
-    if (max_pfns != -1UL)
-       (void)munlock(pfn_buf, nr_pages * sizeof(unsigned long));
-
-    return (ret < 0) ? -1 : op.u.getmemlist.num_pfns;
+    return nr_pages;
 }
 
 long xc_get_max_pages(int xc_handle, uint32_t domid)
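
As a usage illustration only (not part of the changeset), a caller might
invoke the function roughly as below. The prototype is reconstructed from
the diff above (parameter order inferred), and the helper name
dump_pfn_count is made up for this example.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Prototype reconstructed from the diff above. */
    int xc_ia64_get_pfn_list(int xc_handle, uint32_t domid,
                             unsigned long *pfn_buf,
                             unsigned int start_page,
                             unsigned int nr_pages);

    /* Illustrative caller: fetch up to nr_pages pfn entries for a domain,
     * starting at page 0, and report how many came back. */
    static int dump_pfn_count(int xc_handle, uint32_t domid,
                              unsigned int nr_pages)
    {
        unsigned long *pfns = malloc(nr_pages * sizeof(unsigned long));
        int ret;

        if (pfns == NULL)
            return -1;

        ret = xc_ia64_get_pfn_list(xc_handle, domid, pfns, 0, nr_pages);
        if (ret < 0)
            fprintf(stderr, "xc_ia64_get_pfn_list failed\n");
        else
            printf("got %d pfn entries\n", ret);

        free(pfns);
        return ret;
    }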

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog


 

