
[Xen-devel] Xen on ARM IRQ latency and scheduler overhead



Hi all,

I have run some IRQ latency measurements on Xen on ARM on a Xilinx
ZynqMP board (four Cortex-A53 cores, GICv2).

Dom0 has 1 vcpu pinned to cpu0, DomU has 1 vcpu pinned to cpu2.
Dom0 is Ubuntu. DomU is an ad-hoc baremetal app to measure interrupt
latency: https://github.com/edgarigl/tbm

I modified the app to use the phys_timer instead of the virt_timer.  You
can build it with:

make CFG=configs/xen-guest-irq-latency.cfg 

I modified Xen to export the phys_timer to guests; see the very hacky
patch attached. This way, the phys_timer interrupt should behave like
any conventional device interrupt assigned to a guest.
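
For context, exporting the physical timer boils down to two things: stop
trapping the guest's CNTP_* accesses (the CNTHCTL_EL2.EL1PCTEN/EL1PCEN
controls) and let the guest receive the physical timer PPI instead of
having Xen's vtimer emulation handle it. A hand-wavy sketch of the first
half, not the actual attached patch (the macro and bit names below are
assumptions based on the ARMv8 manual):

    /*
     * Let EL1 access the physical counter and timer registers directly
     * instead of trapping to Xen (ARMv8 CNTHCTL_EL2, non-VHE layout).
     */
    uint32_t cnthctl = READ_SYSREG32(CNTHCTL_EL2);
    cnthctl |= (1u << 0);   /* EL1PCTEN: don't trap the physical counter */
    cnthctl |= (1u << 1);   /* EL1PCEN:  don't trap the physical timer   */
    WRITE_SYSREG32(cnthctl, CNTHCTL_EL2);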

These are the results, in nanoseconds:

                        AVG     MIN     MAX     WARM MAX

NODEBUG no WFI          1890    1800    3170    2070
NODEBUG WFI             4850    4810    7030    4980
NODEBUG no WFI credit2  2217    2090    3420    2650
NODEBUG WFI credit2     8080    7890    10320   8300

DEBUG no WFI            2252    2080    3320    2650
DEBUG WFI               6500    6140    8520    8130
DEBUG WFI, credit2      8050    7870    10680   8450

DEBUG means a Xen debug build.
WARM MAX is the maximum latency after discarding the first few
interrupts, which only serve to warm the caches.
WFI is the ARM and ARM64 "wait for interrupt" instruction; Xen traps it
and emulates it by calling vcpu_block.
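
For reference, the WFI/WFE trap handling looks roughly like this (a
simplified sketch from memory of xen/arch/arm/traps.c; the exact checks
and helper names may differ):

    case HSR_EC_WFI_WFE:
        if ( hsr.wfi_wfe.ti )
        {
            /* WFE: just give up the pcpu */
            vcpu_yield();
        }
        else
        {
            /* WFI: block the vcpu until an event is pending for it */
            vcpu_block();
            if ( local_events_need_delivery_nomask() )
                vcpu_unblock(current);
        }
        advance_pc(regs, hsr);
        break;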

As you can see, depending on whether the guest issues a WFI or not while
waiting for interrupts, the results change significantly. Interestingly,
credit2 does worse than credit1 in this area.

Trying to figure out where the 3000-4000ns difference between the WFI
and non-WFI cases comes from, I wrote a patch to zero the latency
introduced by xen/arch/arm/domain.c:schedule_tail. That saves about
1000ns. There are no other arch-specific context-switch functions worth
optimizing.

We are down to 2000-3000ns. Next, I started investigating the scheduler.
I measured how long it takes to run vcpu_unblock (see the sketch below
for the kind of instrumentation): about 1050ns, which is significant. I
don't know what is causing the remaining 1000-2000ns, but my bet is on
another scheduler function. Do you have any suggestions on which one to
look at?
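
One straightforward way to take that kind of measurement is to bracket
the call with reads of the generic timer counter; something along these
lines (a sketch, not the exact instrumentation; the macro names may
differ from what is in the tree):

    /* Convert counter ticks to nanoseconds via the CNTFRQ_EL0 frequency. */
    uint64_t freq = READ_SYSREG64(CNTFRQ_EL0);
    uint64_t t0, t1;

    isb();
    t0 = READ_SYSREG64(CNTPCT_EL0);
    vcpu_unblock(v);
    isb();
    t1 = READ_SYSREG64(CNTPCT_EL0);
    printk("vcpu_unblock: %"PRIu64" ns\n", (t1 - t0) * 1000000000ULL / freq);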


Assuming that the problem is indeed the scheduler, one workaround that
we could introduce today would be to avoid blocking on guest WFI and
call vcpu_yield instead, so that no vcpu_unblock is needed on the
wake-up path. This change (sketched after the numbers below) makes
things significantly better:

                                     AVG     MIN     MAX     WARM MAX
DEBUG WFI (yield, no block)          2900    2190    5130    5130
DEBUG WFI (yield, no block) credit2  3514    2280    6180    5430
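
Concretely, the workaround amounts to replacing the blocking leg of the
WFI handling sketched earlier; roughly (again a sketch, not the exact
patch):

        else
        {
            /*
             * WFI workaround: give up the pcpu but leave the vcpu
             * runnable, so no wake-up work is needed when the interrupt
             * fires. The downside is that the pcpu never goes idle.
             */
            vcpu_yield();
        }
        advance_pc(regs, hsr);
        break;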

Is that a reasonable change to make? Would it cause significantly more
power consumption in Xen (because the pcpu would keep spinning instead
of entering xen/arch/arm/domain.c:idle_loop, where it would actually go
to sleep)?


If we wanted to zero the difference between the WFI and non-WFI cases,
would we need a new scheduler? A simple "noop scheduler" that statically
assigns vcpus to pcpus, one by one, and returns an error once the pcpus
run out? Or do we need more extensive modifications to
xen/common/schedule.c? Any other ideas?
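
To make the "noop scheduler" idea concrete, here is a toy, self-contained
illustration of the assignment policy (plain userspace C, not Xen code;
all names are made up): every vcpu is bound to the first free pcpu at
creation time, picking what to run on a pcpu is a constant-time lookup
with nothing to account, and creation fails once the pcpus are exhausted.

    #include <stdio.h>

    #define NR_PCPUS 4

    static int pcpu_to_vcpu[NR_PCPUS];   /* -1 means the pcpu is free */

    /* Bind a vcpu to the first free pcpu; return the pcpu, or -1 if none left. */
    static int noop_assign(int vcpu_id)
    {
        for (int c = 0; c < NR_PCPUS; c++) {
            if (pcpu_to_vcpu[c] == -1) {
                pcpu_to_vcpu[c] = vcpu_id;
                return c;
            }
        }
        return -1;
    }

    /* The "what do I run now?" decision: nothing to decide, nothing to account. */
    static int noop_schedule(int pcpu)
    {
        return pcpu_to_vcpu[pcpu];
    }

    int main(void)
    {
        for (int c = 0; c < NR_PCPUS; c++)
            pcpu_to_vcpu[c] = -1;

        for (int v = 0; v < 6; v++) {
            int c = noop_assign(v);
            if (c < 0)
                printf("vcpu%d: no free pcpu, creation would fail\n", v);
            else
                printf("vcpu%d -> pcpu%d (noop_schedule(%d) = vcpu%d)\n",
                       v, c, c, noop_schedule(c));
        }
        return 0;
    }

With a static one-to-one assignment there is essentially nothing for the
wake-up path to do, which is exactly the cost I am trying to eliminate.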


Thanks,

Stefano

Attachment: time
Description: Text document
