
Re: [Xen-devel] [DOC RFC] Heterogeneous Multi Processing Support in Xen



On Thu, 2016-12-08 at 11:38 +0100, Juergen Gross wrote:
> So you really solved the following problem in credit2?
> 
> You have three domains with 2 vcpus each and different weights. Run
> them
> on 3 physical cpus with following pinning:
> 
> dom1: pcpu 1 and 2
> dom2: pcpu 2 and 3
> dom3: pcpu 1 and 3
> 
> How do you decide which vcpu to run on which pcpu for how long?
> 
Ok, back to this (sorry, a bit later than I'd hoped). So, I tried to
think a bit about the described scenario, but could not figure out
what you are hinting at.

There are missing pieces of information, such as what the vcpus do,
and what exactly the weights are (besides being different).

Therefore, I decided to put together a quick experiment. I created the
domains, set up all their vcpus to run cpu-hog tasks, picked a
configuration of my choice for the weights, and ran them under both
Credit1 and Credit2.

It's a very simple test, but it will hopefully be helpful in
understanding the situation better.
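For reference, the setup boils down to something like the commands
below (vm1..vm3 are the domain names used throughout; the pinning, in
effect for the pinned runs [2],[3],[5],[6], matches your scenario; the
cpu-hogs are just busy loops inside the guests, one per vcpu; the
CPU(%) figures are as reported by, e.g., xentop):

 # pin both vcpus of each domain to the overlapping pCPU pairs
 xl vcpu-pin vm1 all 1,2
 xl vcpu-pin vm2 all 2,3
 xl vcpu-pin vm3 all 1,3
 # inside each guest: one cpu-hog per vcpu
 for i in 1 2; do yes > /dev/null & done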

Here are the results.

On Credit1, equal weights, unpinned (i.e., plenty of pCPUs available):
 NAME  CPU(%) [1]
 vm1   199.9
 vm2   199.9
 vm3   199.9

Pinning as you suggest (i.e., to 3 pCPUs):
 NAME  CPU(%) [2]
 vm1   149.0
 vm2    66.2
 vm3    84.8

Changing the weights:
 Name  ID Weight  Cap [3]
 vm1   8    256    0
 vm2   9    512    0
 vm3   6   1024    0
 NAME  CPU(%)
 vm1   100.0
 vm2   100.0
 vm3   100.0
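(The weights in [3] were set via the standard Credit1 interface, i.e.,
something like:

 xl sched-credit -d vm1 -w 256
 xl sched-credit -d vm2 -w 512
 xl sched-credit -d vm3 -w 1024

and the Name/ID/Weight/Cap table above is basically the corresponding
`xl sched-credit' output.)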

So, here in Credit1, things are ok when there's no pinning in place
[1]. As soon as we pin, _even_without_ touching the weights [2],
things become *crazy*. In fact, there's absolutely no reason why the
CPU% numbers should look the way they do in [2].

This does not surprise me much, though. Credit1's load balancer
basically moves vcpus around in a pseudo-random fashion, and having to
enforce pinning constraints makes things even more unpredictable.

Then comes the amusing part. At this point, I wonder whether I have
done something wrong in setting up the experiments... because things
really look too funny. :-O
In fact, for some reason, changing the weights as shown in [3] causes
the CPU% numbers to fluctuate a bit (not visible above) and then
stabilize at 100% each. That may look like an improvement, but it
certainly does not reflect the chosen set of weights: with a 1:2:4
ratio over 300% worth of CPU, one would expect roughly 43%, 86% and
171%, respectively.

So, I'd say you were right. Or, actually, things are even worse than
what you said: in Credit1, it's not only that pinning and weights do
not play well together, it's that even pinning alone works pretty
badly.


Now, on Credit2, equal weights, unpinned (i.e., plenty of pCPUs
available):
 NAME  CPU(%) [4]
 vm1   199.9
 vm2   199.9
 vm3   199.9

Pinning as you suggest (i.e., to 3 pCPUs):
 NAME  CPU(%) [5]
 vm1   100.0
 vm2   100.1
 vm3   100.0

Changing the weights:
 Name  ID Weight [6]
 vm1   2    256
 vm2   3    512
 vm3   6   1024
 NAME  CPU(%)
 vm1    44.1
 vm2    87.2
 vm3   168.7
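(Same as before: the Credit2 weights in [6] were set analogously,
e.g.:

 xl sched-credit2 -d vm1 -w 256
 xl sched-credit2 -d vm2 -w 512
 xl sched-credit2 -d vm3 -w 1024
)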

The resulting numbers look nearly *perfect* to me. :-)

In fact, with no constraints [4], each VM gets the 200% share it's
asking for.

When only 3 pCPUs can be used, by means of pinning [5], each VM gets
its fair share of 100% (three pCPUs means 300% worth of CPU, split
evenly among three equal-weight domains).

When setting up the weights in such a way that vm2 should get 2x the
CPU time of vm1, and vm3 should get 2x the CPU time of vm2 [6], things
look, well, exactly like that! :-P
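In fact, doing the math, with 300% worth of CPU distributed
proportionally to the weights:

 expected(vm_i) = 300% * weight_i / (256 + 512 + 1024)
  vm1: 300 *  256/1792 ~=  42.9%   (measured:  44.1%)
  vm2: 300 *  512/1792 ~=  85.7%   (measured:  87.2%)
  vm3: 300 * 1024/1792 ~= 171.4%   (measured: 168.7%)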

So, since I did not fully understand the problem, I'm not sure whether
this really answers your question, but it looks to me like it actually
could! :-D

For sure, it puts Credit2 in rather a good light :-P.

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
