
RE: [Xen-devel] sparse M2P table




Keir Fraser wrote:
> On 17/09/2009 06:42, "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx> wrote:
> 
>> I'm working on similar stuff as well, mainly because memory hotplug
>> requires such support. Maybe I can base my work on your work :-)
>> 
>> 256G is not enough if memory hotplug is enabled. On some platforms,
>> the user can set the start address for hot-plugged memory, and the
>> default value is 1024G. That means 4M of memory will be used for the
>> L3 table entries (1024 * 4K).
> 
> Jan doesn't mean that the M2P only addresses up to 256G; he means that
> the M2P itself can be up to 256G in size! I.e., it will be able to
> address up to 128TB :-)
> 
> The only structure in Xen that I think doesn't just work by expanding
> its virtual memory allocation and sparse-mapping is the '1:1 memory
> mapping', because to address such large sparse memory maps, the
> virtual memory allocation would be too large. So I'm guessing the '1:1
> memory map' area will end up divided into, say, 1TB strips, with
> phys_to_virt() executing a radix-tree lookup to map physical address
> ranges onto those dynamically-allocated strips.

Yes, and the strip size may be dynamically determined.
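A minimal sketch of the strip-based phys_to_virt() Keir describes, with a flat lookup table standing in for the radix tree and 1TB strips; all names and sizes here are hypothetical, not Xen's actual interfaces:

```c
#include <stddef.h>
#include <stdint.h>

#define STRIP_SHIFT 40          /* hypothetical 1TB strips */
#define MAX_STRIPS  128         /* covers 128TB of machine address space */

/* Flat table standing in for the radix tree: strip index -> mapped VA
 * base. NULL means the strip is not populated (sparse or not-yet
 * hot-added memory). */
static void *strip_va[MAX_STRIPS];

/* Hypothetical phys_to_virt(): find the dynamically-allocated strip
 * covering maddr, then add the offset within that strip. */
static void *phys_to_virt(uint64_t maddr)
{
    uint64_t strip = maddr >> STRIP_SHIFT;

    if (strip >= MAX_STRIPS || strip_va[strip] == NULL)
        return NULL;            /* address falls in an unmapped hole */

    return (char *)strip_va[strip] + (maddr & ((1ULL << STRIP_SHIFT) - 1));
}
```

A real implementation would replace the flat array with a radix tree once the strip count grows, and would populate strip_va[] entries when memory is hot-added.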

Also, will we always keep the 1:1 memory mapping for all memory? Currently we 
have at most 5TB of virtual address space for the 1:1 mapping; hopefully that 
will work for most systems for now.

Some other things also need changing for hot-add and sparse memory, for 
example phys_to_nid/memnodemap, etc.

--jyh

> 
> -- Keir
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

