
[Xen-devel] Re: Changeset 20209 causes an issue in xen_in_range()



>>> Keir Fraser <keir.fraser@xxxxxxxxxxxxx> 14.01.10 11:04 >>>
>On 14/01/2010 08:38, "Cui, Dexuan" <dexuan.cui@xxxxxxxxx> wrote:
>
>> Currently PERCPU_SIZE is two 4K pages, but only one page is actually used.
>> 
>> From xen 20209 onward, the second, unused page is freed and returned to the
>> domheap in debug=n builds (MEMORY_GUARD is not defined):
>> percpu_free_unused_areas() -> free_xen_data() -> init_xenheap_pages().
>> Later the returned pages can be allocated to dom0, and dom0 can use them as
>> DMA buffers.
>> 
>> However, in iommu_set_dom0_mapping(), xen_in_range() still returns true for
>> the pages freed above, so devices in Dom0 can hit DMA faults.
>
>Should be fixed by c/s 20803.

Do you really think so? Masking "start" with PERCPU_SIZE-1 doesn't
make sense, as it may lie arbitrarily far before __per_cpu_start and
need not be aligned. Below is my take on it.

Jan

--- 2010-01-06.orig/xen/arch/x86/setup.c        2010-01-05 13:29:13.000000000 +0100
+++ 2010-01-06/xen/arch/x86/setup.c     2010-01-14 11:03:10.000000000 +0100
@@ -230,7 +230,7 @@ static void __init percpu_free_unused_ar
     /* Free all unused per-cpu data areas. */
     free_xen_data(&__per_cpu_start[first_unused << PERCPU_SHIFT], __bss_start);
 
-    data_size = (data_size + PAGE_SIZE + 1) & PAGE_MASK;
+    BUG_ON(data_size & ~PAGE_MASK);
     if ( data_size != PERCPU_SIZE )
         for ( i = 0; i < first_unused; i++ )
             free_xen_data(&__per_cpu_start[(i << PERCPU_SHIFT) + data_size],
@@ -1200,7 +1200,7 @@ int xen_in_range(paddr_t start, paddr_t 
     int i;
     static struct {
         paddr_t s, e;
-    } xen_regions[4];
+    } xen_regions[3];
 
     /* initialize first time */
     if ( !xen_regions[0].s )
@@ -1211,10 +1211,6 @@ int xen_in_range(paddr_t start, paddr_t 
         /* hypervisor code + data */
         xen_regions[1].s =__pa(&_stext);
         xen_regions[1].e = __pa(&__init_begin);
-        /* per-cpu data */
-        xen_regions[2].s = __pa(&__per_cpu_start);
-        xen_regions[2].e = xen_regions[2].s +
-            (((paddr_t)last_cpu(cpu_possible_map) + 1) << PERCPU_SHIFT);
         /* bss */
-        xen_regions[3].s = __pa(&__bss_start);
-        xen_regions[3].e = __pa(&_end);
+        xen_regions[2].s = __pa(&__bss_start);
+        xen_regions[2].e = __pa(&_end);
@@ -1226,6 +1222,14 @@ int xen_in_range(paddr_t start, paddr_t 
             return 1;
     }
 
+    /* per-cpu data */
+    for_each_possible_cpu(i)
+    {
+        if ( (start < __pa(&__per_cpu_data_end[i << PERCPU_SHIFT])) &&
+             (end > __pa(&__per_cpu_start[i << PERCPU_SHIFT])) )
+            return 1;
+    }
+
     return 0;
 }
 
--- 2010-01-06.orig/xen/arch/x86/xen.lds.S      2009-10-15 11:42:12.000000000 +0200
+++ 2010-01-06/xen/arch/x86/xen.lds.S   2010-01-14 11:01:19.000000000 +0100
@@ -104,10 +104,10 @@ SECTIONS
        *(.data.percpu)
        . = ALIGN(SMP_CACHE_BYTES);
        *(.data.percpu.read_mostly)
+       . = ALIGN(PAGE_SIZE);
        __per_cpu_data_end = .;
   } :text
   . = __per_cpu_start + (NR_CPUS << PERCPU_SHIFT);
-  . = ALIGN(PAGE_SIZE);
 
   /*
    * Do not insert anything here - the unused portion of .data.percpu



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

