
Re: [Xen-devel] One question about the hypercall to translate gfn to mfn.



> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: Monday, December 15, 2014 5:23 PM
> 
> >>> On 15.12.14 at 10:05, <kevin.tian@xxxxxxxxx> wrote:
> > yes, definitely host RAM is the upper limit, and what concerns me here
> > is how to reserve (at boot time) or allocate (on demand) such a large
> > PFN resource without colliding with other PFN reservation usages
> > (ballooning should be fine, since it operates on existing RAM ranges
> > in the dom0 E820 table).
> 
> I don't think ballooning is restricted to the regions named RAM in
> Dom0's E820 table (at least it shouldn't be, and wasn't in the
> classic Xen kernels).

Well, nice to know that.

> 
> > Maybe we could reserve a big enough region in dom0's E820 table at
> > boot time for all PFN reservation usages, and then allocate PFNs
> > from it on demand for specific usages?
> 
> What would "big enough" here mean (i.e. how would one determine
> the needed size up front)? Plus any form of allocation would need a
> reasonable approach to avoid fragmentation. And anyway I'm not
> getting what position you're on: Do you expect to be able to fit
> everything that needs mapping into the available mapping space (as
> your reply above seems to imply) or do you think there won't be
> enough mapping space (as earlier replies of yours appeared to
> indicate)?
> 
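
On the fragmentation question: since every user would be carving
ranges out of a single reserved window, even a simple first-fit
bitmap allocator might be good enough as a starting point. Below is
only a rough sketch; all names and types are made up for
illustration, and none of this is existing Xen or dom0 kernel code:

#include <limits.h>

#define WINDOW_PFNS   4096        /* size of the reserved PFN window */
#define BITS_PER_WORD (sizeof(unsigned long) * CHAR_BIT)

static unsigned long pfn_base;    /* first PFN of the reserved window */
static unsigned long bitmap[WINDOW_PFNS / BITS_PER_WORD];

static int bit_is_set(unsigned long i)
{
    return (bitmap[i / BITS_PER_WORD] >> (i % BITS_PER_WORD)) & 1;
}

static void set_bits(unsigned long start, unsigned long n, int val)
{
    unsigned long i;

    for (i = start; i < start + n; i++) {
        if (val)
            bitmap[i / BITS_PER_WORD] |= 1UL << (i % BITS_PER_WORD);
        else
            bitmap[i / BITS_PER_WORD] &= ~(1UL << (i % BITS_PER_WORD));
    }
}

/*
 * First-fit search for n contiguous free PFNs; returns 0 on failure
 * (assuming pfn_base is non-zero, so 0 is never a valid result).
 */
unsigned long pfn_alloc(unsigned long n)
{
    unsigned long i, run = 0;

    for (i = 0; i < WINDOW_PFNS; i++) {
        run = bit_is_set(i) ? 0 : run + 1;
        if (run == n) {
            unsigned long start = i - n + 1;
            set_bits(start, n, 1);
            return pfn_base + start;
        }
    }
    return 0;    /* window exhausted or too fragmented */
}

void pfn_free(unsigned long pfn, unsigned long n)
{
    set_bits(pfn - pfn_base, n, 0);
}

If fragmentation from variable-sized allocations turns out to be a
real problem, a buddy-style allocator over the same window would be
the obvious next step.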

I expect to have everything mapped into the available mapping space,
and am asking for suggestions on the best way to find and reserve
available PFNs without conflicting with other usages (either
virtualization features like the ballooning you mentioned, or bare
metal features like PCI hotplug and memory hotplug).
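
To make the question concrete, what I have in mind for picking a
window is roughly the following. Again this is only a sketch, under
the assumption that we can walk a flat copy of dom0's E820 map; the
struct and names are invented for illustration:

#include <stdint.h>

#define PAGE_SHIFT 12

struct e820ent {
    uint64_t addr;    /* start of the range, in bytes */
    uint64_t size;    /* length of the range, in bytes */
    uint32_t type;    /* RAM, reserved, ACPI, ... */
};

/*
 * Return the first PFN above every E820 entry of any type, i.e. a
 * base that cannot collide with anything currently in the map.
 */
uint64_t find_pfn_window(const struct e820ent *map, unsigned int nr)
{
    uint64_t max_end = 0;
    unsigned int i;

    for (i = 0; i < nr; i++) {
        uint64_t end = map[i].addr + map[i].size;
        if (end > max_end)
            max_end = end;
    }

    /* Round up to a page boundary and convert to a PFN. */
    return (max_end + (1ULL << PAGE_SHIFT) - 1) >> PAGE_SHIFT;
}

Note this still doesn't protect against memory hotplug, which may add
RAM above max_end later, and that is exactly the part I'm unsure
about.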

Thanks,
Kevin
