
Re: [Xen-users] query memory allocation per NUMA node



On Thu, 2017-01-12 at 01:37 +0000, Kun Cheng wrote:
> Hello Dario,
> 
Hi! :-)

One thing: is it possible for you to avoid HTML emails? They're bad
for many reasons on mailing lists (always?); the main one, for me, is
that it is very hard to distinguish between what you are quoting and
what is actually new.

Thanks in advance.

> On Thu, Jan 12, 2017 at 8:33 AM Dario Faggioli
> <dario.faggioli@xxxxxxxxxx> wrote:
> > I lost you. As you say, first of all, the placement algorithm
> > determines a set of NUMA nodes. It may be one or more nodes,
> > depending on the actual situation.
> > 
> > Then, memory is distributed roughly evenly among the nodes that
> > are part of that set.
> > 
> > That's what happens.
> > 
> > > As having 800MB on node 0 is pretty much the same as 900MB on
> > > node 0 if your VM requires 1GB, both will have a similar
> > > performance impact on your VM.
> > >
> > Lost you again. 800 or 900 MB on node 0, and where's the rest?
> > What was the output of the automatic placement?
> 
> OK. What I wanted to say was to consider an example where we have a
> new VM requiring 1GB of memory, but Xen couldn't find a suitable
> node due to heavy load. Then perhaps the hypervisor would allocate
> the memory among two or more nodes (let's say nodes 0 & 1 here).
>
Yes, sure. If there is more than 1GB available in the system, but only
oddly/randomly spread among the various nodes, Xen would successfully
build the domain using that memory.

It's going to be suboptimal, but better than not creating the domain at
all. :-)
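
BTW, if you want to actually see how a domain's memory ended up being
spread among the nodes, asking Xen to dump its NUMA info should work.
Something like this (I'm going from memory here, so the exact output
format may well differ between Xen versions):

  # Ask the hypervisor to dump its NUMA info ('u' debug key),
  # then read the dump back from the Xen console ring:
  xl debug-keys u
  xl dmesg | tail -40

  # The dump should contain, for each domain, a per-node page
  # count, e.g. (the numbers are made up):
  #   (XEN) Memory location of each domain:
  #   (XEN) Domain 3 (total: 262144):
  #   (XEN)     Node 0: 131072
  #   (XEN)     Node 1: 131072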

>  In such a case, the performance impact of having 800 or 900MB
> allocated on node 0 for that VM was almost the same. As far as I
> understand, both would cause a performance drop compared to placing
> that VM on one node; it's just a matter of how big the drop is.
>
Well, it's really hard to tell. In general, the idea is that either
proper placement is possible, or we just fall back to best effort
(after warning the user).

What the actual impact will be depends on many things, such as, what
memory is allocated where, on what pCPUs the vCPUs accessing that
memory run (and for how long and how frequently).
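
You can check the "on what pCPUs" part with xl vcpu-list, which shows
where each vCPU is currently running, plus its hard and soft affinity
(the column layout varies a bit between versions, and the domain name
below is just an example):

  xl vcpu-list mydomain
  # Name      ID  VCPU   CPU State   Time(s) Affinity (Hard / Soft)
  # mydomain   3     0     2   -b-      12.4  all / 0-7
  # mydomain   3     1     9   r--      10.1  all / 0-7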

> I just wanted to use this example to indicate that once you
> distribute memory across multiple nodes, it will cause a performance
> drop no matter how you optimize the distribution.
>
Distributing memory (evenly, ideally) on, say, 2 nodes, and soft-
pinning the vCPUs to those two nodes, should not perform too badly.
And in fact, the placement algorithm considers this solution (and
even solutions with more nodes), if it finds it impossible to use
only 1.

But sure, more than 1 node is worse than just 1 node.
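
For example, this is how you would soft-pin a guest's vCPUs to the
pCPUs of nodes 0 and 1 from the config file (as far as I remember the
xl.cfg syntax, so double-check it against your version):

  # In the domain's xl config file:
  vcpus     = 4
  memory    = 1024
  # Soft affinity: the scheduler will prefer (but is not forced)
  # to run the vCPUs on the pCPUs of NUMA nodes 0 and 1; memory
  # is then striped over those nodes by the placement logic.
  cpus_soft = "nodes:0-1"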

> > > Second, a VM can be migrated to other nodes due to load
> > > balancing, which may make it harder to count how much memory has
> > > been allocated for a certain VM on each node.
> > >
> > No, it can't. And if it could, updating the counters of how many
> > pages are moved between nodes wouldn't be difficult at all (while,
> > unfortunately, other things are, which is why, as I said, that's
> > not possible yet).
> 
> I remember that, with credit2, Xen would only migrate the vCPUs
> rather than the memory allocated for a VM.
>
It's not a credit1 or credit2 thing. It's Xen that does not move the
memory, because it's not capable of doing so, no matter which
scheduler is used.

> I mixed it up with the previous optimization I wanted to do after I
> wrote to you a year ago (I thought that could lead to a situation
> where vCPUs and memory are on different nodes). At that time I
> wanted to migrate the memory together with the vCPUs in an elegant
> way (not just moving the memory, or the hot memory, immediately
> after each vCPU migration).
>
Yep, and this is what's tricky, from a load balancing perspective. But
we want Xen to be able to move the memory before starting to think
about a policy for this. :-O

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
