[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] Populate-on-demand memory problem


  • To: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
  • From: Dietmar Hahn <dietmar.hahn@xxxxxxxxxxxxxx>
  • Date: Wed, 28 Jul 2010 10:05:52 +0200
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 28 Jul 2010 01:06:53 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

On 27.07.2010, George Dunlap wrote:
> Hmm, looks like I neglected to push a fix upstream.  Can you test it
> with the attached patch, and tell me if that fixes your problem?
With this patch everything works fine again and the counters look right.
Please add it to xen-unstable.
Many thanks.
Dietmar.

> 
>  -George
> 
> On Tue, Jul 27, 2010 at 8:48 AM, Dietmar Hahn
> <dietmar.hahn@xxxxxxxxxxxxxx> wrote:
> > Hi list,
> >
> > we ported our system from Novell SLES11 using xen-3.3 to SLES11 SP1 using
> > xen-4.0 and ran into some trouble with the PoD (populate-on-demand) code.
> > We have a HVM guest and already used target_mem < max_mem on startup of
> > the guest.
> > With the new xen version we get
> > (XEN) p2m_pod_demand_populate: Out of populate-on-demand memory! tot_pages 792792 pod_entries 800
> > I did some code review and looked at the PoD patches
> > (http://lists.xensource.com/archives/html/xen-devel/2008-12/msg01030.html)
> > to understand the behavior. We use the following configuration:
> > maxmem = 4096
> > memory = 3096
> > What I see is:
> >  - our guest boots with e820 map showing maxmem.
> >  - reading xenstore memory/target returns '3170304', i.e. 3096MB (792576 pages)
> > Now our guest uses the target memory and returns 1000MB to the hypervisor
> > via the XENMEM_decrease_reservation hypercall.
> >
> > Later I try to map the complete domU memory into dom0 kernel space, and
> > here I get the 'Out of populate-on-demand memory' crash.
> >
> > As far as I understand (ignoring the p2m_pod_emergency_sweep)
> > - on populating a page
> >   - the page is taken from the pod cache
> >   - p2md->pod.count--
> >   - p2md->pod.entry_count--
> >   - page gets type p2m_ram_rw
> > - decreasing a page
> >   - p2md->pod.entry_count--
> >   - page gets type p2m_invalid
> >
> > So if the guest uses all the target memory and gave back all
> > the (maxmem-target) memory, p2md->pod.count and p2md->pod.entry_count
> > should both be zero.
> > I added some tracing in the hypervisor and see on start of the guest:
> > p2m_pod_set_cache_target: p2md->pod.count: 791264 tot_pages: 791744
> > This pod.count is lower than the target seen in the guest!
> > On the first call of p2m_pod_demand_populate() I can see
> > p2m_pod_demand_populate: p2md->pod.entry_count: 1048064 p2md->pod.count: 791264 tot_pages: 792792
> > So pod.entry_count=1048064 (4096MB) corresponds to maxmem, but
> > pod.count=791264 is lower than the target memory in xenstore.
> >
> > Any help is welcome!
> > Thanks.
> > Dietmar.
> >
-- 
Company details: http://ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

