
Re: [Xen-devel] Xen 4.0.1 "xc_map_foreign_batch: mmap failed: Cannot allocate memory"



On Thu, 16 Dec 2010, Keir Fraser wrote:
> On 16/12/2010 20:44, "Charles Arnold" <carnold@xxxxxxxxxx> wrote:
> 
> > >>> On 12/16/2010 at 01:33 PM, in message <C9302813.2966F%keir@xxxxxxx>,
> > >>> Keir Fraser <keir@xxxxxxx> wrote:
> >> On 16/12/2010 19:23, "Charles Arnold" <carnold@xxxxxxxxxx> wrote:
> >> 
> >>> The bug is that qemu-dm seems to make the assumption that it can mmap
> >>> from dom0 all the memory with which the guest has been defined, instead
> >>> of the memory that is actually available on the host.
> >> 
> >> 32-bit dom0? Hm, I thought the qemu mapcache was supposed to limit the
> >> total amount of guest memory mapped at one time, for a 32-bit qemu. For
> >> 64-bit qemu I wouldn't expect to find a limit as low as 3.25G.
> > 
> > Sorry, I should have specified that it is a 64-bit dom0 / hypervisor.
> 
> Okay, well I'm not sure what limit qemu-dm is hitting then. Mapping 3.25G of
> guest memory will only require a few megabytes of pagetables for the qemu
> process in dom0. Perhaps there is a ulimit or something set on the qemu
> process?
> 
> If we can work out and detect this limit, perhaps 64-bit qemu-dm could have
> a mapping cache similar to 32-bit qemu-dm, limited to some fraction of the
> detected mapping limit. And/or, on mapping failure, we could reclaim
> resources by simply zapping the existing cached mappings. Seems there's a
> few options. I don't really maintain qemu-dm myself -- you might get some
> help from Ian Jackson, Stefano, or Anthony Perard if you need more advice.
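
On the ulimit suggestion above: that is easy to rule in or out by checking
what address-space and mapping-count limits the qemu-dm process is actually
running under, e.g. via /proc/<pid>/limits, or with a quick diagnostic along
the lines of the one below. This is only a sketch using standard Linux
interfaces (getrlimit() and the vm.max_map_count sysctl); nothing in it is
Xen-specific.

/* Diagnostic sketch: print the per-process limits that could cap how
 * much guest memory a 64-bit qemu-dm can mmap from dom0. */
#include <stdio.h>
#include <sys/resource.h>

static void print_limit(const char *name, int resource)
{
    struct rlimit rl;

    if (getrlimit(resource, &rl) != 0) {
        perror(name);
        return;
    }
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("%s: unlimited\n", name);
    else
        printf("%s: %llu bytes\n", name, (unsigned long long)rl.rlim_cur);
}

int main(void)
{
    /* A low RLIMIT_AS would make mmap() fail with ENOMEM long before
     * the pagetable overhead mentioned above becomes relevant. */
    print_limit("RLIMIT_AS (virtual address space)", RLIMIT_AS);
    print_limit("RLIMIT_DATA (data segment)", RLIMIT_DATA);

    /* Each foreign-mapping call can also consume VMAs; the number of
     * mappings per process is capped by the vm.max_map_count sysctl. */
    FILE *f = fopen("/proc/sys/vm/max_map_count", "r");
    if (f) {
        long max_map_count;
        if (fscanf(f, "%ld", &max_map_count) == 1)
            printf("vm.max_map_count: %ld\n", max_map_count);
        fclose(f);
    }
    return 0;
}

Checking the qemu-dm process itself (rather than a shell or test program) is
what actually matters, since xend may start it with its own limits.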

The mapcache size limit should be 64GB on a 64-bit qemu-dm.
Any interesting error messages in the qemu logs?
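
On the reclaim-on-failure idea above, the shape of it could be roughly the
following. This is only a sketch: xc_map_foreign_batch() is the Xen 4.0 libxc
call from the error in the subject line, but flush_all_cached_mappings() is
just a stand-in for whatever would drop the mapcache's existing buckets -- it
is not an existing qemu-dm function.

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <xenctrl.h>

/* Placeholder only: in qemu-dm this would munmap() the mapcache's
 * cached buckets so their address space can be reused. */
static void flush_all_cached_mappings(void)
{
}

/* Map a batch of guest frames; on ENOMEM, reclaim cached mappings and
 * retry once before reporting the failure. */
static void *map_guest_frames(int xc_handle, uint32_t domid,
                              xen_pfn_t *pfns, int nr_pfns)
{
    void *vaddr;

    vaddr = xc_map_foreign_batch(xc_handle, domid, PROT_READ | PROT_WRITE,
                                 pfns, nr_pfns);
    if (vaddr == NULL && errno == ENOMEM) {
        flush_all_cached_mappings();
        vaddr = xc_map_foreign_batch(xc_handle, domid,
                                     PROT_READ | PROT_WRITE,
                                     pfns, nr_pfns);
    }
    if (vaddr == NULL)
        fprintf(stderr, "map_guest_frames: %s\n", strerror(errno));
    return vaddr;
}

Whether that retry belongs in qemu-dm's mapcache or somewhere in libxc is a
separate question; the sketch is only meant to illustrate the fallback, not
where it should live.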
