
eliminating 166G limit (was Re: [Xen-devel] Problem with nr_nodes on large memory NUMA machine)



>> We saw this issue on our boxes too.
>> http://lists.xensource.com/archives/html/xen-devel/2007-08/msg00479.html 
>> I am trying to figure out how to write the copy-to-low-memory path.
>> Keir, could you give me some suggestions?
>
>In gnttab_transfer(), if the foreign domain (e) is 32-on-64 and the page
>being stolen from the local domain (d) is above 166GB, then allocate another
>domheap page for e, copy the stolen page contents to it. Then free the
>stolen page and the new page takes its place.
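For reference, that suggestion amounts to roughly the following in
gnttab_transfer() - a sketch only, with illustrative helper names, and
assuming MEMF_bits() is used to get an address-restricted allocation:

    /* Sketch of the allocate-and-copy path in gnttab_transfer();
     * helper names are illustrative, not verbatim Xen code. */
    if ( is_pv_32on64_domain(e) &&
         mfn_above_compat_limit(page_to_mfn(page)) )    /* above ~166GB */
    {
        struct page_info *new_page;

        /* MEMF_bits(37) restricts the allocation to below 128GB,
         * which the 32-on-64 guest can always address. */
        new_page = alloc_domheap_pages(e, 0, MEMF_bits(37));
        if ( new_page == NULL )
            goto fail;    /* no success guarantee - see below */

        /* Copy the stolen page's contents into the low page, free
         * the stolen page, and let the new page take its place. */
        copy_domain_page(page_to_mfn(new_page), page_to_mfn(page));
        free_domheap_page(page);
        page = new_page;
    }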

I think page allocation in this path isn't nice, at least not without a success
guarantee (not least because netback doesn't check return values). I would
therefore rather see a solution that places the burden of ensuring
accessibility on the producer (netback) of the page, and fails the transfer
if the destination domain can't access the page (whether to be nice and try
an allocate-and-copy operation here is a secondary matter).
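In gnttab_transfer() itself that check would then be a simple fail path,
something like this (a sketch; the limit helper is made up, and a new
status code along the lines of GNTST_address_too_big would be needed to
report the failure):

    /* Sketch: fail the transfer instead of allocating when the
     * destination domain can't address the page. */
    if ( page_to_mfn(page) > domain_max_addressable_mfn(e) )
    {
        gop.status = GNTST_address_too_big;    /* assumed new status */
        goto copyback;
    }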

Netback would then need to determine the address size of netfront's domain
(just like blkback and blktap do, except that HVM domains should also be
treated as not requiring address restriction), and have two pools of pages
for use in transfers - one unrestricted and one limited to 37 address bits
(the two could be folded for resource efficiency if the machine has less
than 128GB of memory). Besides that, netback would also have to start
checking the return values of the individual multicall pieces.
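By way of illustration, the netback side could be structured along these
lines (a sketch only - the pool structure, the addr_bits determination,
and the helper names are all assumptions, not existing code):

    /* Sketch of two transfer-page pools in netback. */
    struct xnb_page_pool {
        struct page **pages;
        unsigned int nr_pages;
    };

    static struct xnb_page_pool pool_any;    /* no address restriction */
    static struct xnb_page_pool pool_low;    /* below 2^37 (128GB)     */

    /* Select a pool from the frontend's address size, treating HVM
     * frontends as unrestricted; on a machine with less than 128GB
     * the two pools could simply alias each other. */
    static struct xnb_page_pool *pool_for(unsigned int addr_bits,
                                          int frontend_is_hvm)
    {
        return (frontend_is_hvm || addr_bits > 37) ? &pool_any : &pool_low;
    }

    /* Check the results of the transfer multicall instead of assuming
     * success; mcl[] is the multicall array netback already builds. */
    static int check_transfer_results(struct multicall_entry *mcl,
                                      unsigned int nr)
    {
        unsigned int i;

        for (i = 0; i < nr; i++)
            if (mcl[i].result != 0)
                return -1;    /* caller unwinds the affected transfer */

        return 0;
    }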

Jan

