
RE: [Xen-devel] [RFC] Xen NUMA strategy




>-----Original Message-----
>From: Ian Pratt [mailto:Ian.Pratt@xxxxxxxxxxxx]
>Sent: Tuesday, September 18, 2007 4:43 PM
>To: Xu, Anthony; Akio Takebe; Andre Przywara;
>xen-devel@xxxxxxxxxxxxxxxxxxx
>Cc: ian.pratt@xxxxxxxxxxxx
>Subject: RE: [Xen-devel] [RFC] Xen NUMA strategy
>
>> >We may need to write something about guest NUMA in the guest
>> >configuration file.
>> >For example, in guest configuration file;
>> >vnode = <a number of guest node>
>> >vcpu = [<vcpus# pinned into the node: machine node#>, ...]
>> >memory = [<amount of memory per node: machine node#>, ...]
>> >
>> >e.g.
>> >vnode = 2
>> >vcpu = [0-1:0, 2-3:1]
>> >memory = [128:0, 128:1]
>> >
>> >If we set vnode=1, old OSes should work fine.
>
>We need to think carefully about NUMA use cases before implementing a
>bunch of mechanism.

Agreed, that's why we posted this thread; we hope to get enough input.
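To make the proposed syntax above concrete, here is a minimal sketch (in Python, purely illustrative; the function names and the exact tool that would consume these values are assumptions, not part of the proposal) of how the `vcpu = [0-1:0, 2-3:1]` and `memory = [128:0, 128:1]` entries could be parsed into vCPU-to-node and node-to-memory mappings:

```python
# Hypothetical parser for the proposed guest-config NUMA syntax.
# "0-1:0" means vCPUs 0..1 are pinned to machine node 0;
# "128:0" means 128 MB is allocated from machine node 0.

def parse_vcpu_map(entries):
    """Map each vCPU number to the machine node it is pinned to."""
    mapping = {}
    for entry in entries:
        vcpus, node = entry.split(":")
        if "-" in vcpus:
            lo, hi = (int(x) for x in vcpus.split("-"))
        else:
            lo = hi = int(vcpus)
        for v in range(lo, hi + 1):
            mapping[v] = int(node)
    return mapping

def parse_memory_map(entries):
    """Map each machine node to the amount of memory (MB) taken from it."""
    return {int(node): int(mb)
            for mb, node in (e.split(":") for e in entries)}

print(parse_vcpu_map(["0-1:0", "2-3:1"]))    # {0: 0, 1: 0, 2: 1, 3: 1}
print(parse_memory_map(["128:0", "128:1"]))  # {0: 128, 1: 128}
```

With vnode=2 this yields two guest nodes, each backed by one machine node; with vnode=1 the whole map collapses to a single node and an unmodified OS sees no topology at all.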



>
>The way I see it, in most situations it will not make sense for guests
>to span NUMA nodes: you'll have a number of guests with relatively
>small numbers of vCPUs, and it probably makes sense to allow the
>guests to be pinned to nodes. What we have in Xen today works pretty
>well for this case, but we could make configuration easier by looking
>at more sophisticated mechanisms for specifying CPU groups rather than
>just pinning. Migration between nodes could be handled with a
>localhost migrate, but we could obviously come up with something more
>time/space efficient (particularly for HVM guests) if required.
>
>There may be some usage scenarios where having a large SMP guest that
>spans multiple nodes would be desirable. However, there's a bunch of
>scalability work that's required in Xen before this will really make
>sense, and all of this is much higher priority (and more generally
>useful) than figuring out how to expose NUMA topology to guests. I'd
>definitely encourage looking at the guest scalability issues first.


        What you have said may be true: many guests have small numbers
of vCPUs. In that situation, we need to pin each guest to a node for
good performance. However, pinning guests to nodes may lead to
imbalance after guests are created and destroyed over time, so we also
need to handle rebalancing. Better host NUMA support is needed for
this.
        
        Even if we don't have big guests, we may still need to let a
guest span NUMA nodes. For example, when we create a guest with a
large memory requirement, no single NUMA node may be able to satisfy
the request, so the guest has to span nodes. In that case we need to
provide the guest with NUMA topology information.

        There are also very small NUMA nodes, perhaps with only one
CPU per node. If a guest has two vCPUs on such a machine, we need to
provide the guest with NUMA information; otherwise performance will
suffer badly.


- Anthony

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

