
Re: eliminating 166G limit (was Re: [Xen-devel] Problem with nr_nodes on large memory NUMA machine)


  • To: Jan Beulich <jbeulich@xxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
  • Date: Tue, 27 Nov 2007 08:56:27 +0000
  • Delivery-date: Tue, 27 Nov 2007 00:51:00 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Acgw02Tzo16GQJzGEdyEaQAWy6hiGQ==
  • Thread-topic: eliminating 166G limit (was Re: [Xen-devel] Problem with nr_nodes on large memory NUMA machine)

On 27/11/07 08:43, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:

> I think page allocation in this path isn't nice, at least not without a
> success guarantee (not least because netback doesn't check return
> values). I would therefore rather see a solution that places the burden of
> ensuring accessibility on the producer (netback) of the page, and fails the
> transfer if the destination domain can't access the page (whether to be
> nice and try an allocate-and-copy operation here is a secondary thing).
> 
> Netback would then need to determine the address size of netfront's domain
> (just as blkback and blktap do, except that HVM domains should also be
> treated as not requiring address restriction), and have two pools of pages
> for use in transfers - one unrestricted and one limited to 37 address bits
> (the two could be folded for resource efficiency if the machine has less
> than 128G). Besides that, netback would also start checking the return
> values of the multicall pieces.
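
[For concreteness, here is a minimal standalone C sketch of the pool-selection
rule the quoted paragraph describes. It is not actual netback code; every
identifier is hypothetical. The 37-bit / 128G figures come from the mail
itself, and HVM domains are treated as unrestricted, as suggested.]

    /*
     * Hypothetical sketch of netback's two-pool selection, not real code.
     * 2^37 bytes == 128G, the restriction boundary named in the mail.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define RESTRICTED_BITS 37

    struct transfer_pool {
        const char *name;
        unsigned int max_addr_bits;   /* pages guaranteed below 2^bits */
    };

    static struct transfer_pool pool_restricted   = { "restricted",   RESTRICTED_BITS };
    static struct transfer_pool pool_unrestricted = { "unrestricted", 64 };

    /* What netback would learn about the frontend, analogous to the way
     * blkback/blktap probe the frontend's address size. */
    struct frontend_info {
        bool is_hvm;
        unsigned int addr_bits;       /* guest-reachable address width */
    };

    static struct transfer_pool *
    pick_transfer_pool(const struct frontend_info *fe, uint64_t machine_bytes)
    {
        /* Fold the two pools when the whole machine already sits below
         * the restriction boundary, as the mail notes. */
        if (machine_bytes <= (1ULL << RESTRICTED_BITS))
            return &pool_unrestricted;

        /* HVM frontends need no address restriction; wide PV ones too. */
        if (fe->is_hvm || fe->addr_bits > RESTRICTED_BITS)
            return &pool_unrestricted;

        return &pool_restricted;
    }

    int main(void)
    {
        struct frontend_info pv  = { .is_hvm = false, .addr_bits = 37 };
        struct frontend_info hvm = { .is_hvm = true,  .addr_bits = 37 };
        uint64_t machine = 512ULL << 30;   /* a 512G box: restriction matters */

        printf("pv  -> %s pool\n", pick_transfer_pool(&pv, machine)->name);
        printf("hvm -> %s pool\n", pick_transfer_pool(&hvm, machine)->name);
        return 0;
    }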

I don't get how your netback approach works. The pages we transfer do not
originate from netback, so it has little control over them. And, even if it
did, when we allocate pages for network receive we do not know which
domain's packet will end up in each buffer.

Personally I think doing it in Xen is perfectly good enough for supporting
this very out-of-date network receive mechanism.
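
[For comparison, a standalone sketch of the per-page decision the in-Xen
approach implies: hand the page over if the destination can address it,
otherwise fall back to allocate-and-copy, failing explicitly when that
allocation is impossible (the case Jan objects to). Again, every name is
hypothetical; this is not hypervisor code.]

    /* Hypothetical sketch of the in-Xen transfer-time decision. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096ULL

    struct domain {
        unsigned int max_addr_bits;   /* widest maddr the guest can map */
    };

    static bool domain_can_address(const struct domain *d, uint64_t maddr)
    {
        return maddr + PAGE_SIZE <= (1ULL << d->max_addr_bits);
    }

    enum xfer_action {
        XFER_HANDOVER,                /* destination can map the page */
        XFER_COPY,                    /* allocate a low page and copy */
        XFER_FAIL                     /* no low memory left: must fail */
    };

    static enum xfer_action
    classify_transfer(const struct domain *dst, uint64_t page_maddr,
                      bool low_alloc_possible)
    {
        if (domain_can_address(dst, page_maddr))
            return XFER_HANDOVER;
        /* The allocation here can fail, which is exactly Jan's worry;
         * callers must propagate XFER_FAIL rather than ignore it. */
        return low_alloc_possible ? XFER_COPY : XFER_FAIL;
    }

    int main(void)
    {
        struct domain pae = { .max_addr_bits = 37 };
        uint64_t high_page = 200ULL << 30;   /* a page at 200G, above 128G */

        printf("action = %d\n", classify_transfer(&pae, high_page, true));
        return 0;
    }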

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

