
Re: [Xen-devel] Claim mode and HVM PoD interact badly



On Fri, Jan 10, 2014 at 04:03:51PM +0000, Wei Liu wrote:
> On Fri, Jan 10, 2014 at 10:41:05AM -0500, Konrad Rzeszutek Wilk wrote:
> > On Fri, Jan 10, 2014 at 03:10:48PM +0000, Wei Liu wrote:
> > > On Fri, Jan 10, 2014 at 09:58:07AM -0500, Konrad Rzeszutek Wilk wrote:
> > > > On Fri, Jan 10, 2014 at 11:59:42AM +0000, Ian Campbell wrote:
> > > > > create ^
> > > > > owner Wei Liu <wei.liu2@xxxxxxxxxx>
> > > > > thanks
> > > > > 
> > > > > On Fri, 2014-01-10 at 11:56 +0000, Wei Liu wrote:
> > > > > > When I have following configuration in HVM config file:
> > > > > >   memory=128
> > > > > >   maxmem=256
> > > > > > and have claim_mode=1 in /etc/xen/xl.conf, xl create fails with
> > > > > > 
> > > > > > xc: error: Could not allocate memory for HVM guest as we cannot 
> > > > > > claim memory! (22 = Invalid argument): Internal error
> > > > > > libxl: error: libxl_dom.c:647:libxl__build_hvm: hvm building failed
> > > > > > libxl: error: libxl_create.c:1000:domcreate_rebuild_done: cannot 
> > > > > > (re-)build domain: -3
> > > > > > libxl: error: libxl_dm.c:1467:kill_device_model: unable to find 
> > > > > > device model pid in /local/domain/82/image/device-model-pid
> > > > > > libxl: error: libxl.c:1425:libxl__destroy_domid: 
> > > > > > libxl__destroy_device_model failed for 82
> > > > > > 
> > > > > > With claim_mode=0, I can successfully create HVM guest.
> > > > > 
> > > > > Is it trying to claim 256M instead of 128M? (although the likelihood
> > > > 
> > > > No. 128MB actually.
> > > > 
> > > 
> > > Huh? My debug message says otherwise. It tried to claim 248MB (256MB -
> > > 8MB video ram). Did I misread your message...
> > 
> > The 'claim' being the hypercall to set the 'clamp' on how much memory
> > the guest can allocate. This is based on:
> > 
> > 242     unsigned long i, nr_pages = args->mem_size >> PAGE_SHIFT;
> 
> This is in fact initialized to 'maxmem' in guest's config file and
> 
> 243     unsigned long target_pages = args->mem_target >> PAGE_SHIFT;
> 
> This is in fact 'memory' in guest's config file.
> 
> So when you try to claim "maxmem" and the current limit is "memory" it
> would not work.
> 
> So guest should only claim target_pages sans 0x20 pages if PoD enabled.
> Oh this is what your initial patch did. I don't know whether this is
> conceptually correct though. :-P

Heh.
> 
> Furthermore, should the guest only be allowed to claim target_pages, regardless

No.
> whether PoD is enabled? When only "memory" is specified, "maxmem"

That is indeed happening at some point. When you modify the 'target_pages'
(so 'xl mem-set' or 'xl mem-max') you will move the ceiling and allow
the guest (via ballooning) to increase or decrease tot_pages.

You don't need the 'claim' at that point as the hypervisor is the
one that deals with many concurrent guests competing for memory.
And it has the proper locking mechanics to tell guests to buzz
off if there is not enough memory.

But keep in mind that the 'claim' (or outstanding pages) is more
of a reservation. Or a lock. Or a stick in the ground.
It says: "To allocate this guest I need X pages" - and if that
amount cannot be guaranteed then -ENOMEM right away. Which it did.

And said 'X' pages is incorrect for PoD guests. The patch I posted
sets the ceiling to the 'maxmem'.

Please also note that the claim hypercall, or reservation, is cancelled
right after the guests' memory has been allocated:

530     /* ensure no unclaimed pages are left unused */
531     xc_domain_claim_pages(xch, dom, 0 /* cancels the claim */);

It is a very short lived 'lock' on the memory - all contained
within 'setup_guest' for HVM and 'arch_setup_meminit' for PV.


> equals "memory". So conceptually what we really care about is
> "memory", not "maxmem".

Uh, at the start of the life of the guest - sure. During its
build-up - well, we seem to have a spike of memory usage while
PoD allocates and frees memory.

The time-flow seems to be:

 memory ... maxmem ... memory.. [start of guest]


That actually seems a bit silly - we could just as well check
how much free memory the hypervisor has and return -ENOMEM
if there is not enough. But I am very likely misreading the
early setup of the PoD code or misunderstanding the implications
of PoD allocating its cache and freeing it.

> 
> Wei.
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

