
RE: [Xen-devel] sparse M2P table




Jan Beulich wrote:
>>>> Keir Fraser <keir.fraser@xxxxxxxxxxxxx> 17.09.09 08:19 >>>
>> The only structure in Xen that I think doesn't just work with
>> expanding its virtual memory allocation and sparse-mapping is the
>> '1:1 memory mapping'. 
> 
> The frame_table really also needs compression - a 256G M2P would imply
> 2T of frame table, which isn't reasonable. I'm therefore using the
> same indexing for 1:1 mapping and frame table.

Hmm, originally I thought it was OK to have holes in the frame_table, but yes, 
it seems we can't have holes in it. So either we squeeze the holes out, or we 
waste a lot of memory.

> 
>> Because to address such large sparse memory maps, the virtual memory
>> allocation would be too large. So I'm guessing the '1:1 memory map'
>> area will end up divided into say 1TB strips with phys_to_virt()
>> executing a radix-tree lookup to map physical address ranges onto
>> those dynamically-allocated strips.
> 
> Actually, I considered anything but a simple address transformation as
> too expensive (for a first cut at least), and I'm thus not using any
> sort of lookup, but rather determine bits below the most significant
> one that aren't used in any (valid) mfn. Thus the transformation is
> two and-s, a shift, and an or.

Can you elaborate on that a bit? For example, considering a system with the 
following memory layout: 1G~3G, 1024G~1028G, 1056G~1060G, I didn't catch your 
algorithm. :$

> 
> A more involved translation (including some sort of lookup) can imo be
> used as replacement if this simple mechanism turns out insufficient.
> 
> Btw., I also decided against filling the holes in the M2P
> table mapping -
> for debuggability purposes, I definitely want to keep the holes in the
> writeable copy of the table (so that invalid accesses crash rather
> than causing data corruption). Instead, I now fill the holes only in
> the XENMEM_machphys_mfn_list handler (and I'm intentionally using the
> most recently stored mfn in favor of the first one to reduce the
> chance of reference count overflows when these get passed back
> to mmu_update - if the holes turn out still too large, this might need
> further tuning, but otoh in such setups [as said in an earlier reply]
> I think the tools should avoid mapping the whole M2P in a single
> chunk, and hence immediately recurring mfn-s can serve as a good
> indication to the tools that this ought to happen).
> 
> Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

