
Re: [Xen-devel] [hybrid] : mmap pfn space...



On Wed, 2012-04-18 at 02:20 +0100, Mukesh Rathor wrote:
> On Mon, 16 Apr 2012 17:22:14 +0100
> Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx> wrote:
> 
> > On Mon, 16 Apr 2012, Ian Campbell wrote:
> > > > In a nutshell, I am still trying to figure out how to allocate
> > > > reserved PFNs for privcmd without writing a slab allocator.
> > > 
> > > Can't you just use the core get_page function (or
> > > alloc_xenballooned_pages) and move the associated mfn aside
> > > temporarily (or not if using alloc_xenballooned_pages)?
> > 
> > I think that is a good suggestion: if we are trying to get in
> > something that works but might not be the best solution, then using
> > alloc_xenballooned_pages to get some pages and then changing the p2m
> > is the best option. It wastes a non-trivial amount of memory in dom0,
> > but at least it is known to work well and it wouldn't be a "hack".
> > 
> > Have a look at gntdev_alloc_map, gnttab_map_refs and m2p_add_override
> > for an example.
> 
> 
> Ok. I changed to using alloc_xenballooned_pages. In future, if we run
> into problems, we can look into alternatives. In the past we've had
> problems with hitting limits when ballooning down, since we run with a
> small dom0.
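
For reference, here is a minimal sketch of the alloc_xenballooned_pages
plus p2m-update approach discussed above. It is an illustration rather
than code from this thread: reserve_pfns_for_privcmd() and the
foreign_mfns parameter are hypothetical names, the three-argument
alloc_xenballooned_pages() signature is assumed from kernels of this
era, and set_phys_to_machine() stands in here for whatever p2m update
privcmd actually needs (cf. m2p_add_override). Error handling and
locking are simplified.

/*
 * Sketch only: reserve nr PFNs from the balloon and point them at
 * foreign MFNs. Function name and foreign_mfns are hypothetical.
 */
#include <linux/mm.h>
#include <linux/errno.h>
#include <xen/balloon.h>
#include <asm/xen/page.h>

static int reserve_pfns_for_privcmd(int nr, struct page **pages,
                                    const unsigned long *foreign_mfns)
{
        int i, rc;

        /* Ballooned-out pages have PFNs with no MFN behind them. */
        rc = alloc_xenballooned_pages(nr, pages, false /* lowmem */);
        if (rc)
                return rc;

        for (i = 0; i < nr; i++) {
                unsigned long pfn = page_to_pfn(pages[i]);

                /* Point the reserved PFN at the foreign frame. */
                if (!set_phys_to_machine(pfn, foreign_mfns[i]))
                        goto undo;
        }
        return 0;

undo:
        /* Clear the entries set so far and give the pages back. */
        while (i--)
                set_phys_to_machine(page_to_pfn(pages[i]),
                                    INVALID_P2M_ENTRY);
        free_xenballooned_pages(nr, pages);
        return -ENOMEM;
}

As noted above, each PFN reserved this way consumes part of dom0's
balloon allowance, which is why the size of the balloon matters below.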

You don't really need to increase the size of dom0, just the size of the
balloon. For example, if you run dom0_mem=512M,max:1024M then you get a
dom0 with 512M of RAM but a total PFN space of 1024M, which means you have
512M of balloon available for alloc_xenballooned_pages.

If you just do dom0_mem=512M then I believe you get 512M of RAM but a PFN
space sized for the entire host, which is going to give you more than
enough balloon space on any typical host (but there are obviously
downsides if the host has lots of RAM relative to 512M!).
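
As a plain illustration of the two variants (annotations only; these are
just the Xen boot options, not a complete boot entry):

  dom0_mem=512M,max:1024M   # 512M of RAM, 1024M of PFN space,
                            # so 512M of balloon headroom
  dom0_mem=512M             # 512M of RAM, PFN space sized for
                            # the whole host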

Ian.


