Re: [Xen-devel] [PATCH v6 02/23] xen: move NUMA_NO_NODE to public memory.h as XEN_NUMA_NO_NODE
On 02/03/15 17:34, David Vrabel wrote:
> On 02/03/15 17:26, Jan Beulich wrote:
>>>>> On 02.03.15 at 18:01, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> On 02/03/15 16:50, Jan Beulich wrote:
>>>>>>> On 02.03.15 at 17:39, <wei.liu2@xxxxxxxxxx> wrote:
>>>>> On Mon, Mar 02, 2015 at 04:27:25PM +0000, Jan Beulich wrote:
>>>>>>>>> On 02.03.15 at 17:08, <wei.liu2@xxxxxxxxxx> wrote:
>>>>>>> On Mon, Mar 02, 2015 at 03:51:37PM +0000, Jan Beulich wrote:
>>>>>>>>>>> On 02.03.15 at 16:38, <wei.liu2@xxxxxxxxxx> wrote:
>>>>>>>>> On Mon, Mar 02, 2015 at 03:30:21PM +0000, Ian Campbell wrote:
>>>>>>>>>> On Mon, 2015-03-02 at 07:04 +0000, Jan Beulich wrote:
>>>>>>>>>>>>>> Andrew Cooper <andrew.cooper3@xxxxxxxxxx> 02/27/15 5:58 PM >>>
>>>>>>>>>>>> On 27/02/15 16:51, Wei Liu wrote:
>>>>>>>>>>>>> During the last round of review, Andrew wanted me to move this
>>>>>>>>>>>>> to a Xen public header to avoid reinventing it in libxc. Now
>>>>>>>>>>>>> this value is used in the libxc patch.
>>>>>>>>>>>>>
>>>>>>>>>>>>> But I don't particularly mind whether we move it or not; it's
>>>>>>>>>>>>> up to you maintainers to decide.
>>>>>>>>>>>> It is a sentinel value used in the public ABI. It should
>>>>>>>>>>>> therefore appear in the public API.
>>>>>>>>>>> Which it already does, as XENMEMF_get_node(0). I don't think it
>>>>>>>>>>> needs particular naming as a new constant, even more so as it
>>>>>>>>>>> isn't intended to be used explicitly in any of the memops.
>>>>>>>>>> IMHO the named constant does seem to make the tools code at least
>>>>>>>>>> more readable, but without Wei having said where this is to be
>>>>>>>>>> used I'm not sure where it should live. In particular I'm unsure
>>>>>>>>>> if/how/where this value gets passed to a hypercall, as opposed to
>>>>>>>>>> perhaps being used as a
>>>>>>>>> This is used to fill in the vnode_to_pnode array. That array gets
>>>>>>>>> subsequently passed down to the hypervisor.
>>>>>>>> Do we really accept NUMA_NO_NODE to be passed that way?
>>>>>>>>
>>>>>>> public/domctl.h:struct xen_domctl_vnuma has a vnode_to_pnode array.
>>>>>> That wasn't my concern - I was rather wondering why we would accept
>>>>>> any of this array's fields to be set to "no node".
>>>>>>
>>>>> If you want to have a NUMA topology exposed to the guest but don't
>>>>> care about the underlying memory affinity?
>>>> Is this useful for anything in reality?
>>> Yes. If there is insufficient memory on real NUMA nodes, the memory
>>> could be striped and the guest told that its memory really is scattered
>>> all over.
>> In which case it could as well not be handed any vNUMA info at all.
> A guest that previously had 2 vNUMA nodes is migrated to a host with
> only 1 pNUMA node. It should still have 2 vNUMA nodes.

A natural consequence of vNUMA is that the guest must expect the vNUMA
layout to change across suspend/resume. The toolstack cannot guarantee
that it can construct a similar vNUMA layout after a migration. This
includes the toolstack indicating that it was unable to make any useful
NUMA affinity with the memory ranges.

~Andrew
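For context on the XENMEMF_get_node(0) point above, here is a minimal,
standalone sketch of how the "no node" sentinel falls out of the memory-op
flag encoding. It assumes the macro definitions found in Xen's public
memory.h of this era, and it assumes 0xff as the value the proposed
XEN_NUMA_NO_NODE constant would carry; exact definitions may differ
between Xen versions.

#include <stdio.h>

/*
 * Sketch only: flag encoding as in Xen's public memory.h of this era,
 * where the node is stored as (node + 1) in bits 8-15 of the memop flags.
 */
#define XENMEMF_node(x)     (((x) + 1) << 8)
#define XENMEMF_get_node(x) ((((x) >> 8) - 1) & 0xffu)

/* Assumed value for the constant proposed by this patch. */
#define XEN_NUMA_NO_NODE    0xffu

int main(void)
{
    unsigned int memflags = 0;          /* no node encoded in the flags */
    unsigned int vnode_to_pnode[2];

    /* Decoding flags that carry no node yields 0xff ... */
    printf("XENMEMF_get_node(0) = %#x\n", XENMEMF_get_node(memflags));

    /* ... i.e. the same value a named "no node" sentinel would have. */
    printf("equals XEN_NUMA_NO_NODE: %s\n",
           XENMEMF_get_node(memflags) == XEN_NUMA_NO_NODE ? "yes" : "no");

    /*
     * Toolstack-style use (hypothetical): vnode 0 is backed by pnode 0,
     * vnode 1 has no useful physical affinity.
     */
    vnode_to_pnode[0] = XENMEMF_get_node(XENMEMF_node(0));  /* decodes to 0 */
    vnode_to_pnode[1] = XEN_NUMA_NO_NODE;
    printf("vnode_to_pnode = { %u, %#x }\n",
           vnode_to_pnode[0], vnode_to_pnode[1]);

    return 0;
}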