
[Xen-users] Performance drop after enabling hardware IOMMU



Hi,

We have a standard environment that we deploy nightly on our servers. We are currently upgrading to Xen 4.8.2 and at the same time trying to enable the hardware IOMMU. However, when we do this the deployment takes about 50% longer (~65 mins vs ~45 mins). We are using a driver domain which has the disk controller passed through. I have tried booting 4.8.2 with the hardware IOMMU disabled, but the driver domain crashes in that configuration, whereas with 4.4.3 it did not. The kernel for both dom0 and the domU is 4.1.44. Is there an obvious explanation for this performance drop-off? Is there any other information about the environment that would be useful?
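
For reference, this is roughly how we have things set up; the BDF below is a placeholder for our actual disk controller rather than the real value:

  # Xen command line (GRUB):
  iommu=1

  # dom0 kernel command line (pciback built in), hiding the controller from dom0:
  xen-pciback.hide=(0000:03:00.0)

  # driver domain xl config, passing the controller through:
  pci = [ '0000:03:00.0' ]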

What I do see happening is that the IRQ for the disk controller changes from 93 to 16 once it is bound to pciback, which results in it being shared with a network card that is passed through to another driver domain. I also get the stack trace below in dom0 shortly after the disk driver domain boots (but definitely after the card has been initialised and is performing I/O). Booting with the irqpoll option as suggested reduces performance further.
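
In case it is useful, this is roughly how I am looking at the sharing from dom0 (the BDF is again a placeholder for the disk controller):

  # see which handlers ended up on IRQ 16
  grep ' 16:' /proc/interrupts

  # check whether the controller advertises MSI/MSI-X and whether it is enabled
  lspci -vv -s 0000:03:00.0 | grep -i msi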

Thanks for any insights,
James

[  415.321675] irq 16: nobody cared (try booting with the "irqpoll" option)
[  415.322098] CPU: 0 PID: 0 Comm: swapper/0 Tainted: P           OE   4.1.44-040144-zdom0 #201709190942
[  415.322099] Hardware name: HP ProLiant EC200a/ProLiant EC200a, BIOS U26 11/09/2016
[  415.322101]  0000000000000000 ffff880287e03d78 ffffffff817e2122 ffff880281c0e000
[  415.322104]  ffff880281c0e0b4 ffff880287e03da8 ffffffff810cf0a6 ffff880287e03dc8
[  415.322106]  ffff880281c0e000 0000000000000010 0000000000000000 ffff880287e03df8
[  415.322109] Call Trace:
[  415.322110]  <IRQ>  [<ffffffff817e2122>] dump_stack+0x63/0x81
[  415.322122]  [<ffffffff810cf0a6>] __report_bad_irq+0x36/0xd0
[  415.322124]  [<ffffffff810cf60c>] note_interrupt+0x24c/0x2a0
[  415.322129]  [<ffffffff814f076a>] ? add_interrupt_randomness+0x3a/0x1e0
[  415.322132]  [<ffffffff810cc9ce>] handle_irq_event_percpu+0xbe/0x1f0
[  415.322134]  [<ffffffff810ccb4a>] handle_irq_event+0x4a/0x70
[  415.322136]  [<ffffffff810cfabe>] handle_fasteoi_irq+0x9e/0x170
[  415.322140]  [<ffffffff817e9a9c>] ? _raw_spin_lock_irq+0xc/0x60
[  415.322142]  [<ffffffff810cbefb>] generic_handle_irq+0x2b/0x40
[  415.322145]  [<ffffffff814a2bd2>] evtchn_fifo_handle_events+0x162/0x170
[  415.322149]  [<ffffffff8149fa2f>] __xen_evtchn_do_upcall+0x4f/0x90
[  415.322150]  [<ffffffff814a1834>] xen_evtchn_do_upcall+0x34/0x50
[  415.322153]  [<ffffffff817eb89e>] xen_do_hypervisor_callback+0x1e/0x40
[  415.322154]  <EOI>  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
[  415.322159]  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
[  415.322163]  [<ffffffff8100b1a0>] ? xen_safe_halt+0x10/0x20
[  415.322166]  [<ffffffff81020ade>] ? default_idle+0x1e/0x100
[  415.322169]  [<ffffffff810216ef>] ? arch_cpu_idle+0xf/0x20
[  415.322173]  [<ffffffff810bd86c>] ? cpu_startup_entry+0x32c/0x3d0
[  415.322175]  [<ffffffff817d275c>] ? rest_init+0x7c/0x80
[  415.322179]  [<ffffffff81d510f9>] ? start_kernel+0x497/0x4a4
[  415.322181]  [<ffffffff81d50a52>] ? set_init_arg+0x55/0x55
[  415.322183]  [<ffffffff81d505ee>] ? x86_64_start_reservations+0x2a/0x2c
[  415.322186]  [<ffffffff81d5470b>] ? xen_start_kernel+0x518/0x524
[  415.322187] handlers:
[  415.322318] [<ffffffffc05e8a80>] xen_pcibk_guest_interrupt [xen_pciback]
[  415.322788] Disabling IRQ #16


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
https://lists.xen.org/xen-users

 

