
Re: [Xen-devel] [PATCH] auto balloon initial domain and fix dom0_mem=X inconsistencies (v5).



On 01/05/12 17:37, Konrad Rzeszutek Wilk wrote:
> On Mon, Apr 16, 2012 at 01:15:31PM -0400, Konrad Rzeszutek Wilk wrote:
>> Changelog v5 [since v4]:
>>  - used populate_physmap, fixed bugs.
>> [v2-v4: not posted]
>>  - reworked the code in setup.c to work properly.
>> [v1: https://lkml.org/lkml/2012/3/30/492]
>>  - initial patchset
> 
> One bug I found was that with 'dom0_mem=max:1G' (with and without these
> patches) I would get a bunch of
> 
> (XEN) page_alloc.c:1148:d0 Over-allocation for domain 0: 2097153 > 2097152
> (XEN) memory.c:133:d0 Could not allocate order=0 extent: id=0 memflags=0 (0 of 17)
> 
> where the (0 of X) was sometimes 1, 2, 3, 4 or 17, depending on the
> machine I ran it on. I figured out that the difference was in the ACPI
> tables that are allocated - and that those regions, even though they
> are returned back to the hypervisor, cannot be repopulated. I can't
> find the exact piece of code in the hypervisor to pin-point and say
> "Aha".

It was tricky to track down what is going on here, but I think I see
what's happening.

The problem pages (on the system I looked at) were located just before
the ISA memory region (so PFN < 0xa0), so they are mapped in the
bootstrap page tables and hold an additional reference, which means
they are not immediately freed when the page is released.  They do get
freed later on, presumably when the page tables are swapped over.

I think the mapping needs to be removed with
HYPERVISOR_update_va_mapping() before releasing the page.  This is
already done for the ISA region in xen_ident_map_ISA().
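
Something like this (untested; the max_pfn_mapped check and clearing
the PTE with __pte_ma(0) are my guesses at what is needed) before the
low pages are handed back with XENMEM_decrease_reservation:

    /* Untested sketch: drop the bootstrap page table mapping for a
     * low PFN before releasing it, so the extra reference goes away
     * and the page is freed immediately. */
    if (pfn < max_pfn_mapped &&
        HYPERVISOR_update_va_mapping(
                (unsigned long)__va(pfn << PAGE_SHIFT),
                __pte_ma(0), UVMF_INVLPG))
            BUG();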

It may be easier to avoid doing anything with the PFNs < 0x100 and
accept the minimal loss of memory.
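
For example (again untested, just to illustrate; the 0x100 cutoff
covers everything below 1MB), the release loop could simply do:

    /* Alternative sketch: leave low PFNs alone and accept losing up
     * to 1MB of memory. */
    if (pfn < 0x100)
            continue;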

David

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

