
Re: [Xen-devel] Notes on stubdoms and latency on ARM



On Fri, 2017-07-07 at 10:03 -0700, Volodymyr Babchuk wrote:
> On 7 July 2017 at 09:41, Dario Faggioli <dario.faggioli@xxxxxxxxxx>
> wrote:
> > 
> > Also, are you sure (e.g., because of how the Linux driver is done)
> > that this always happens on one vCPU?
> 
> No, I can't guarantee that. The Linux driver is single threaded, but
> I did nothing to pin it to a certain CPU.
> 
Ok, it was just to understand.
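
BTW, if you want to experiment with pinning it, that would be
something like this (just a sketch: "DomU", vCPU 0 and pCPU 2 are
placeholders for the actual domain name, the vCPU doing the SMC calls
and the target pCPU):

# xl vcpu-pin DomU 0 2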

> > 
> > > - In total there are 6 vcpus active
> > > 
> > > I ran the test in DomU:
> > > real 113.08
> > > user 0.00
> > > sys 113.04
> > > 
> > 
> > Ok, so there's contention for pCPUs. Dom0's vCPUs are CPU hogs,
> > while, if my assumption above is correct, the "SMC vCPU" of the
> > DomU is I/O bound, in the sense that it blocks on an operation
> > --which turns out to be an SMC call to MiniOS-- then resumes and
> > blocks again almost immediately.
> > 
> > Since you are using Credit, can you try to disable context switch
> > rate limiting? Something like:
> > 
> > # xl sched-credit -s -r 0
> > 
> > should work.
> 
> Yep. You are right. In the environment described above (Case 2) I now
> get much better results:
> 
> real 1.85
> user 0.00
> sys 1.85
> 
Ok, glad to hear it worked! :-)

> > This looks to me like one of those typical scenarios where rate
> > limiting is counterproductive. In fact, every time your SMC vCPU
> > is woken up, despite being boosted, it finds all the pCPUs busy,
> > and it can't preempt any of the vCPUs that are running there,
> > until the rate limit expires.
> > 
> > That means it has to wait an interval of time that varies between
> > 0 and 1ms. This happens 100000 times, and 1ms*100000 is 100
> > seconds... which is roughly how long the test takes, in the
> > overcommitted case.
> 
> Yes, looks like that was the case. Does this mean that rate limiting
> should be disabled for any domain that is backed by a device model?
> AFAIK, device models work in exactly the same way.
> 
Rate limiting is a scheduler-wide thing. If it's on, the context
switching rate of all domains is limited. If it's off, no domain's is.
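
(You can check the value currently in effect by running the command
without -r:

# xl sched-credit -s

which prints the scheduler-wide parameters, rate limit included.)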

We'll have to see once we have something that is less of a proof-of-
concept, but it is very likely that, for your use case, rate limiting
should just be kept disabled (you can do that with a Xen boot time
parameter, so that you don't have to issue the command every time).
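
For reference, the boot time parameter in question is
sched_ratelimit_us, so that would mean booting Xen with:

sched_ratelimit_us=0

on its command line.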

> > Yes, but it again makes sense. In fact, now there are 3 CPUs in
> > Pool-0, and all are kept busy all the time by the 3 DomU vCPUs
> > running endless loops. So, when the DomU's SMC vCPU wakes up, it
> > again has to wait for the rate limit to expire on one of them.
> 
> Yes, as this was caused by the rate limit, this makes perfect sense.
> Thank you.
> 
> I tried a number of different cases. Now execution time depends
> linearly on the number of over-committed vCPUs (about +200ms for
> every busy vCPU). That is what I expected.
>
Is this the case even when MiniOS is in its own cpupool? If yes, it
means that the slowdown is caused by contention between the vCPU that
is doing the SMC calls and the other vCPUs (of either the same or
other domains).
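
In case it's useful, here is a minimal sketch of such a setup, with
"smc-pool" and "minios" as placeholder names, and with pCPU 3 as the
CPU being dedicated to the pool:

# cat smc-pool.cfg
name = "smc-pool"
sched = "credit"
cpus = ["3"]

# xl cpupool-cpu-remove Pool-0 3
# xl cpupool-create smc-pool.cfg
# xl cpupool-migrate minios smc-pool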

That should not really happen in this case (or, at least, the slowdown
should not grow linearly), since you are on Credit1, where the SMC
vCPU should pretty much always be boosted, and hence get scheduled
almost immediately, no matter how many CPU hogs there are around.

Depending on the specific details of your use case/product, we can try
assigning different weights to the various domains... but I need to
think a bit more about this...
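
Just as a sketch, assuming the domain doing the SMC calls is called
"DomU", giving it twice the default weight (256, in Credit) would be:

# xl sched-credit -d DomU -w 512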

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
