
Re: [Xen-devel] Xen optimization



Hi,
sorry, my explanation wasn't precise and I missed the point.
I added vCPU pinning on top of sched=null "just in case", since it doesn't hurt.

Yes, PetaLinux domain is dom0.

I tested with the Credit scheduler before (it was just the LED-blink
application, but still), and it produces bigger jitter than the null
scheduler. For example, with the Credit scheduler the LED blinking shows
approximately 3 us of jitter, whereas with the null scheduler there is no
jitter. vwfi=native was causing the domain-destruction problem that you
fixed with the patch you sent me approximately two weeks ago, if you
recall :) but I still haven't tested its impact on performance. I will do
that ASAP and share the results (I suspect that without vwfi=native the
jitter will be the same or even bigger).
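
For reference, these knobs are all Xen boot parameters; a sketch of the
command line I'm describing is below (the dom0 memory/vCPU arguments are
placeholders and will differ per setup):

```
# Xen boot arguments (e.g. the xen,xen-bootargs property set by
# ImageBuilder / U-Boot on Zynq UltraScale+):
#   sched=null    - static 1:1 vCPU-to-pCPU assignment, no scheduler overhead
#   vwfi=native   - WFI/WFE execute natively instead of trapping to Xen,
#                   at the cost of the vCPU never yielding its pCPU
# dom0_mem / dom0_max_vcpus values here are illustrative placeholders.
sched=null vwfi=native dom0_mem=1G dom0_max_vcpus=1 console=dtuart
```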

When I say "without Xen", yes, I mean without any OS: just the hardware
and this bare-metal app. I do expect latency to be higher in the Xen
case, and I'm curious exactly how much higher (which is the point of my
work and also of my master's thesis :D).

Now, the point is that when my application only blinks the LED (without
the timer) there is no jitter (in the Xen case), but when I add the
timer, which generates an interrupt every 1 us, a jitter of 3 us appears.
The timer I use is the Zynq UltraScale+ triple timer counter (TTC). I
suspect the timer interrupt is creating that jitter.

For interrupts I use passthrough in the bare-metal application's
configuration file (which works for the GPIO LED, since there is no
jitter there; the interrupt can "freely go" from the guest domain
directly to the GPIO LED).
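
For context, the passthrough setup in the guest configuration file looks
roughly like the sketch below; the IRQ numbers, MMIO range, and file
names are placeholders, not my actual values (the real file is attached):

```
# Hypothetical xl config for the bare-metal guest:
# one vCPU pinned to pCPU1, with the GPIO and TTC interrupts
# and the GPIO controller's MMIO page passed through.
name   = "baremetal"
kernel = "bm-app.bin"        # placeholder image name
memory = 8
vcpus  = 1
cpus   = "1"                 # pin the single vCPU to pCPU1
irqs   = [ 48, 68 ]          # placeholder SPIs: GPIO and TTC
iomem  = [ "0xff0a0,1" ]     # placeholder: GPIO controller page
```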

Also, when I create the guest domain (which runs this bare-metal
application) I get these messages:

(XEN) printk: 54 messages suppressed.
(XEN) d2v0 No valid vCPU found for vIRQ32 in the target list (0x2). Skip it
(XEN) d2v0 No valid vCPU found for vIRQ33 in the target list (0x2). Skip it
(XEN) d2v0 No valid vCPU found for vIRQ34 in the target list (0x2). Skip it
(XEN) d2v0 No valid vCPU found for vIRQ35 in the target list (0x2). Skip it
(XEN) d2v0 No valid vCPU found for vIRQ36 in the target list (0x2). Skip it
(XEN) d2v0 No valid vCPU found for vIRQ37 in the target list (0x2). Skip it
(XEN) d2v0 No valid vCPU found for vIRQ38 in the target list (0x2). Skip it
(XEN) d2v0 No valid vCPU found for vIRQ39 in the target list (0x2). Skip it
(XEN) d2v0 No valid vCPU found for vIRQ40 in the target list (0x2). Skip it
(XEN) d2v0 No valid vCPU found for vIRQ41 in the target list (0x2). Skip it

I have attached dmesg, xl dmesg, and the bare-metal application's
configuration file.

Thanks in advance, Milan Boberic.






On Tue, Oct 9, 2018 at 6:46 PM Dario Faggioli <dfaggioli@xxxxxxxx> wrote:
>
> On Tue, 2018-10-09 at 12:59 +0200, Milan Boberic wrote:
> > Hi,
> >
> Hi Milan,
>
> > I'm testing Xen Hypervisor 4.10 performance on UltraZed-EG board with
> > carrier card.
> > I created bare-metal application in Xilinx SDK.
> > In bm application I:
> >            - start triple timer counter (ttc) which generates
> > interrupt every 1us
> >            - turn on PS LED
> >            - call function 100 times in for loop (function that sets
> > some values)
> >            - turn off LED
> >            - stop triple timer counter
> >            - reset counter value
> >
> Ok, I'm adding Stefano, Julien, and a couple of other people interested
> in RT/lowlat on Xen.
>
> > I ran this bare-metal application under Xen Hypervisor with following
> > settings:
> >     - used null scheduler (sched=null) and vwfi=native
> >     - bare-metal application have one vCPU and it is pinned for pCPU1
> >     - domain which is PetaLinux also have one vCPU pinned for pCPU0,
> > other pCPUs are unused.
> > Under Xen Hypervisor I can see 3us jitter on oscilloscope.
> >
> So, this is probably me not being familiar with Xen on Xilinx (and with
> Xen on ARM as a whole), but there's a few things I'm not sure I
> understand:
> - you say you use sched=null _and_ pinning? That should not be
>   necessary (although, it shouldn't hurt either)
> - "domain which is PetaLinux", is that dom0?
>
> IAC, if it's not terribly hard to run this kind of test, I'd say, try
> without 'vwfi=native', and also with another scheduler, like Credit,
> (but then do make sure you use pinning).
>
> > When I ran same bm application with JTAG from Xilinx SDK (without Xen
> > Hypervisor, directly on the board) there is no jitter.
> >
> Here, when you say "without Xen", do you also mean without any
> baremetal OS at all?
>
> > I'm curious what causes this 3us jitter in Xen (which isn't small
> > jitter at all) and is there any way of decreasing it?
> >
> Right. So, I'm not sure I've understood the test scenario either. But
> yeah, 3us jitter seems significant. Still, if we're comparing with
> bare-hw, without even an OS at all, I think it could have been expected
> for latency and jitter to be higher in the Xen case.
>
> Anyway, I am not sure anyone has done a kind of analysis that could
> help us identify accurately from where things like that come, and in
> what proportions.
>
> It would be really awesome to have something like that, so do go ahead
> if you feel like it. :-)
>
> I think tracing could help a little (although we don't have a super-
> sophisticated tracing infrastructure like Linux's perf and such), but
> sadly enough, that's still not available on ARM, I think. :-/
>
> Regards,
> Dario
> --
> <<This happens because I choose it to happen!>> (Raistlin Majere)
> -----------------------------------------------------------------
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Software Engineer @ SUSE https://www.suse.com/

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

