Re: [Xen-devel] [PATCH 0/4] xen: events: improve event channel IRQ allocation strategy.
On Tue, 2011-01-11 at 20:40 +0000, Konrad Rzeszutek Wilk wrote:
> On Tue, Jan 11, 2011 at 07:25:18PM +0000, Ian Campbell wrote:
> > On Tue, 2011-01-11 at 18:34 +0000, Konrad Rzeszutek Wilk wrote:
> > >
> > > > The series has been tested as:
> > > > * PV guest with a PCI passthrough device.
> > >
> > > Which type of domain0? A 2.6.37 + Xen PCI backend + your patches?
> > > Or the older 2.6.32?
> >
> > The 2.6.32 variant. Same for the HVM test. Dom 0 was 2.6.37 + patches.
>
> Ok. Could you test the PV guest with a PCI passthrough device
> on a PV domain 0 that is 2.6.37 based? It _should_ work
> but it never hurts to try.
>
> For your convenience I've put up a merge tree with stable/* patches,
> devel/irq.rework (has mine and these patches you have posted),
> devel/xen-pciback, devel/gntdev (Stefano's last posting)
>
> It is devel/next-2.6.37

I tested using this kernel as dom0 and my previous 2.6.37+patches as the
PV domU, and PCI passthrough seemed to work ok. In particular:

[    0.380253] uhci_hcd 0000:00:00.0: enabling device (0000 -> 0001)
[    0.380606] uhci_hcd 0000:00:00.0: Xen PCI mapped GSI20 to IRQ44
[    0.380669] uhci_hcd 0000:00:00.0: enabling bus mastering
[    0.380794] uhci_hcd 0000:00:00.0: setting latency timer to 64
[    0.380844] uhci_hcd 0000:00:00.0: UHCI Host Controller
[    0.381369] uhci_hcd 0000:00:00.0: new USB bus registered, assigned bus number 1

It didn't stop me testing, but I got a bunch of spew: roughly two hundred
of these, early in the boot of devel/next-2.6.37 as dom0:

WARNING: at /local/scratch/ianc/devel/kernels/linux-2.6/arch/x86/xen/multicalls.c:182 xen_mc_flush+0x293/0x2a0()
Hardware name: PowerEdge 860
Modules linked in:
Pid: 0, comm: swapper Tainted: G        W   2.6.37-x86_32p-xen0-00105-g4abcf5c #99
Call Trace:
 [<c1003c33>] ? xen_mc_flush+0x293/0x2a0
 [<c1003c33>] ? xen_mc_flush+0x293/0x2a0
 [<c103ef7c>] warn_slowpath_common+0x6c/0xa0
 [<c1003c33>] ? xen_mc_flush+0x293/0x2a0
 [<c103efcd>] warn_slowpath_null+0x1d/0x20
 [<c1003c33>] xen_mc_flush+0x293/0x2a0
 [<c1006597>] ? xen_set_domain_pte+0x57/0x100
 [<c10065df>] xen_set_domain_pte+0x9f/0x100
 [<c1003e00>] ? __raw_callee_save_xen_pte_val+0x0/0x8
 [<c1006716>] xen_set_pte+0x86/0x90
 [<c1003e00>] ? __raw_callee_save_xen_pte_val+0x0/0x8
 [<c1562e8b>] xen_set_pte_init+0x8a/0x96
 [<c1571825>] kernel_physical_mapping_init+0x2d3/0x3de
 [<c138117e>] init_memory_mapping+0x27e/0x4f0
 [<c156461e>] setup_arch+0x74d/0xcc4
 [<c139fa7f>] ? _raw_spin_unlock_irqrestore+0x3f/0x70
 [<c103fd2d>] ? vprintk+0x2ad/0x470
 [<c1069c28>] ? trace_hardirqs_off_caller+0xa8/0x140
 [<c1006990>] ? __raw_callee_save_xen_save_fl+0x0/0x8
 [<c1006998>] ? __raw_callee_save_xen_restore_fl+0x0/0x8
 [<c1069c28>] ? trace_hardirqs_off_caller+0xa8/0x140
 [<c1175712>] ? __raw_spin_lock_init+0x32/0x60
 [<c1002640>] ? xen_cpuid+0x0/0xa0
 [<c1002640>] ? xen_cpuid+0x0/0xa0
 [<c155e72c>] start_kernel+0x8b/0x381
 [<c155e0b3>] i386_start_kernel+0xa2/0xde
 [<c156175a>] xen_start_kernel+0x5fa/0x6b0
---[ end trace 4eaa2a86a8e4bf7c ]---

Full bootlog is attached.

Attachment: bootlog.gz