
Re: [Xen-devel] pvh dom0: memory leak from iomem map



On 05/06/14 01:32, Mukesh Rathor wrote:
> On Wed, 04 Jun 2014 08:33:59 +0100
> "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
> 
>>>>> On 04.06.14 at 03:29, <mukesh.rathor@xxxxxxxxxx> wrote:
>>> Hi Tim,
>>>
>>> When building a dom0 pvh, we populate the p2m with 0..N pfns
>>> upfront. Then in pvh_map_all_iomem, we walk the e820 and map all
>>> iomem 1:1. As such any iomem range below N would cause those ram
>>> frames to be silently dropped. 
>>>
>>> Since the holes could be pretty big, I am concerned this could
>>> result in significant loss of frames. 
>>>
>>> In my very early patches I had:
>>>
>>> set_typed_p2m_entry():
>>> ...
>>>     else if ( p2m_is_ram(ot) )
>>>     {
>>>          if ( is_pvh_domain(d) )                    <---
>>>              free_domheap_page(mfn_to_page(omfn));  <---
>>>
>>>          ASSERT(mfn_valid(omfn));
>>>          set_gpfn_from_mfn(mfn_x(omfn), INVALID_M2P_ENTRY);
>>> ..
>>>
>>> I'd like you to reconsider it. Since there is a dislike of using
>>> is_pvh, I suppose one alternative could be 'if ( gfn_p2mt ==
>>> p2m_mmio_direct )'.
>>>
>>> If you have any other suggestions, I'm open to them. LMK your
>>> thoughts..
>>
>> Isn't Roger's af06d66e ("x86: fix setup of PVH Dom0 memory map")
>> already taking care of this?
> 
> Not quite. He is adding N pages from the domheap (d->page_list) to the end
> of the memory map, where N is the number of pages freed while walking the holes.
> When walking the holes, I call set_mmio_p2m_entry to do the 1:1 mapping. In that
> path I don't see the old RAM page being put back on the domheap.

I'm quite sure I'm missing something here, but I don't see where those 
pages are removed from the domheap page list (d->page_list). In fact, 
I've written a small debug patch to show that the pages replaced by the 
MMIO mappings are still on the domheap list:

---
diff --git a/xen/arch/x86/domain_build.c b/xen/arch/x86/domain_build.c
index ba42fc9..d54929c 100644
--- a/xen/arch/x86/domain_build.c
+++ b/xen/arch/x86/domain_build.c
@@ -312,13 +312,29 @@ static __init void pvh_add_mem_mapping(struct domain *d, unsigned long gfn,
                                         unsigned long mfn, unsigned long nr_mfns)
 {
     unsigned long i;
+    unsigned long omfn;
+    struct page_info *page, *pg;
+    p2m_type_t t;
     int rc;
 
     for ( i = 0; i < nr_mfns; i++ )
     {
+        page = NULL;
+        omfn = mfn_x(get_gfn_query_unlocked(d, gfn + i, &t));
+        if ( mfn_valid(omfn) )
+            page = mfn_to_page(omfn);
         if ( (rc = set_mmio_p2m_entry(d, gfn + i, _mfn(mfn + i))) )
             panic("pvh_add_mem_mapping: gfn:%lx mfn:%lx i:%ld rc:%d\n",
                   gfn, mfn, i, rc);
+        if ( !page )
+            goto done;
+        page_list_for_each( pg, &d->page_list )
+        {
+            if ( pg == page )
+                goto done;
+        }
+        panic("Unable to find page: %p in d->page_list\n", page);
+ done:
         if ( !(i & 0xfffff) )
                 process_pending_softirqs();
     }
---

What I tried to do in af06d66e is to reuse the pages whose p2m entries 
were replaced by the MMIO holes to populate the end of the 
memory map.
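
Roughly, the idea looks like the sketch below. This is only a sketch, not
the code from af06d66e: the helper name, next_gpfn/nr_holes accounting and
the skipped e820 adjustment are all made up here for illustration.

/*
 * Sketch only: after the 1:1 MMIO map has replaced some low p2m
 * entries, walk d->page_list and give every page that no longer has
 * a valid gpfn a new slot past the end of the populated memory map.
 */
static void __init pvh_reuse_hole_pages(struct domain *d,
                                        unsigned long next_gpfn,
                                        unsigned long nr_holes)
{
    struct page_info *pg;

    page_list_for_each ( pg, &d->page_list )
    {
        unsigned long mfn = page_to_mfn(pg);

        if ( !nr_holes )
            break;

        /* Pages still reachable through the p2m keep their gpfn. */
        if ( get_gpfn_from_mfn(mfn) != INVALID_M2P_ENTRY )
            continue;

        /* Re-add the orphaned page at the end of the memory map. */
        guest_physmap_add_page(d, next_gpfn++, mfn, PAGE_ORDER_4K);
        nr_holes--;
    }
}

This relies on the replaced mfn being left with an invalid m2p entry,
which is what the set_gpfn_from_mfn(..., INVALID_M2P_ENTRY) in
set_typed_p2m_entry quoted above already does.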

Roger.
