
Re: [Xen-devel] Is it OK to route peripheral IRQs to any of Domain0's vCPUs on Xen ARM 4.5.x?



On Tue, 21 Apr 2015, ììì wrote:
> Thanks for your reply!
>
> I think I found an interrupt mechanism problem in Xen ARM 4.5, and I
> fixed it simply.
>
> After fixing it, my IRQ routing code works well on Xen ARM 4.5 too.
>
> I will start a new thread about that problem.
>
> Please confirm.

Confirm what? You are welcome to start a new thread about any interrupt
bugs you might have found.


>
> Thanks
>
>
> -----Original Message-----
> From: "Stefano Stabellini"<stefano.stabellini@xxxxxxxxxxxxx>
> To: "ììì"<supsup5642@xxxxxxxxx>;
> Cc: "Stefano Stabellini"<stefano.stabellini@xxxxxxxxxxxxx>; "Ian 
> Campbell"<ian.campbell@xxxxxxxxxx>; <xen-devel@xxxxxxxxxxxxx>;
> Sent: 2015-04-21 (Tue) 19:13:54
> Subject: Re: [Xen-devel] Is it OK to route peripheral IRQs to any of
> Domain0's vCPUs on Xen ARM 4.5.x?
>
> On Tue, 21 Apr 2015, ììì wrote:
> > I have one more question.
> >
> > In Xen ARM 4.5, all SPIs are routed to the pcpu that runs Domain0's vcpu0:
> >
> > if Domain0's vcpu0 runs on pcpu0, all SPIs are routed to pcpu0;
> >
> > if Domain0's vcpu0 runs on pcpu1, all SPIs are routed to pcpu1.
>
> That is correct.
>
>
> > Does this mean that Xen ARM 4.5 can inject an SPI only to Domain0's vcpu0,
> >
> > and cannot inject an SPI to Domain0's vcpu1?
> >
> > Right?
>
> No, that is wrong. If the guest requests the SPIs to be routed to
> another vcpu, by writing the appropriate values to the virtual GICD,
> then Xen will route the SPIs to the pcpu running the requested vcpu.
>
> So if your guest is Linux and you
>
> echo 2 > /proc/irq/SPI_NUMBER/smp_affinity
>
> then you should see that Xen will start injecting the SPI to vcpu1.
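>
> (smp_affinity takes a hexadecimal CPU bitmask, so "2" selects CPU1,
> i.e. vcpu1. Very roughly, the mechanism is that Xen traps the guest's
> write to the virtual GICD_ITARGETSR and makes the physical SPI follow
> the new target vcpu. The following is an illustrative sketch only,
> not the actual Xen source: apart from d->vcpu[] and v->processor,
> every helper name in it is hypothetical.
>
>     /* Sketch: the guest wrote a new target for an SPI in the vGICD. */
>     static void sketch_vgicd_itargetsr_write(struct domain *d,
>                                              unsigned int irq,
>                                              uint8_t target_mask)
>     {
>         /* Lowest set bit picks the target vcpu (assume mask != 0). */
>         unsigned int vcpu_id = 0;
>         while ( !(target_mask & (1u << vcpu_id)) )
>             vcpu_id++;
>
>         struct vcpu *v = d->vcpu[vcpu_id];
>
>         sketch_set_virq_target(d, irq, v);            /* hypothetical */
>         sketch_route_spi_to_pcpu(irq, v->processor);  /* hypothetical */
>     }
>
> So the next time the SPI fires, it arrives on the pcpu running the
> chosen vcpu and can be injected there directly.)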
>
>
>
> > And is this the reason Xen ARM 4.5 doesn't use the maintenance interrupt?
>
> No, that is just a performance optimization.
>
>
> >
> > Thanks
> >
> >
> > -----Original Message-----
> > From: "Stefano Stabellini"<stefano.stabellini@xxxxxxxxxxxxx>
> > To: "ììì"<supsup5642@xxxxxxxxx>;
> > Cc: "Stefano Stabellini"<stefano.stabellini@xxxxxxxxxxxxx>; "Ian 
> > Campbell"<ian.campbell@xxxxxxxxxx>; <xen-devel@xxxxxxxxxxxxx>;
> > Sent: 2015-04-21 (Tue) 02:25:09
> > Subject: Re: [Xen-devel] Is it OK to route peripheral IRQs to any of
> > Domain0's vCPUs on Xen ARM 4.5.x?
> >
> > On Mon, 20 Apr 2015, ììì wrote:
> > > Thanks for your reply, but sorry, I can't fully understand your
> > > explanation.
> > >
> > > I don't want to change the GICD setting. I only want to change which of
> > > Domain0's vCPUs (vcpu0 or vcpu1) the SPI is injected to.
> > >
> > > My understanding is as follows:
> > >
> > > in Xen 4.4, vgic_vcpu_inject_irq() can inject an SPI to any of Domain0's
> > > vcpus from any pcpu,
> > >
> > > but in Xen 4.5, vgic_vcpu_inject_irq() can inject an SPI only on the pcpu
> > > that receives the SPI from the GICD.
> > >
> > > Right?
> >
> > Yes, if you meant the virtual GICD (not the physical GICD).
> >
> > I'll repeat:
> >
> > In Xen 4.5, vgic_vcpu_inject_irq can inject a given SPI only to the pcpu
> > that is set to run the vcpu that should receive the interrupt, as per
> > the vGICD configuration.
> >
> > So if you
> >
> > echo VCPU_BITMASK > /proc/irq/IRQ_NUMBER/smp_affinity
> >
> > (where VCPU_BITMASK is a hex CPU mask, e.g. 2 for vcpu1) in the guest,
> > it should work and it should have a concrete effect on the delivery of
> > the physical interrupt.
> >
> >
> > >
> > >
> > > -----Original Message-----
> > > From: "Stefano Stabellini"<stefano.stabellini@xxxxxxxxxxxxx>
> > > To: "ììì"<supsup5642@xxxxxxxxx>;
> > > Cc: "Ian Campbell"<ian.campbell@xxxxxxxxxx>; <xen-devel@xxxxxxxxxxxxx>; 
> > > "Stefano
> Stabellini"<Stefano.Stabellini@xxxxxxxxxxxxx>;
> > > Sent: 2015-04-20 (ì) 19:49:50
> > > Subject: Re: [Xen-devel] Is it ok to routing periperal irq to any 
> > > Domain0's vCPU on Xen ARM 4.5.x?
> > > Â
> > >
> > > In Xen 4.5 we rely on the fact that the physical irq is routed to the
> > > physical cpu running the vcpu of the domain that needs to receive the
> > > corresponding virq.
> > >
> > > So if you want to inject IRQ 100 on CPU 1 while Dom0 is set to receive
> > > vIRQ 100 (the virtual irq corresponding to IRQ 100) on vcpu0, running on
> > > CPU 0, that won't work.
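> > >
> > > (In other words, the interrupt handler injects the virq on the local
> > > pcpu and assumes the local pcpu is where the target vcpu runs. Here is
> > > a minimal sketch of that assumption -- illustrative only, not the
> > > actual Xen source; only vgic_get_target_vcpu() and
> > > vgic_vcpu_inject_irq() are names that appear later in this thread:
> > >
> > >     /* Sketch: physical SPI "irq" has just fired on this pcpu. */
> > >     static void sketch_forward_spi(struct domain *d, unsigned int irq)
> > >     {
> > >         /* The vcpu the guest's vGICD currently targets for this irq. */
> > >         struct vcpu *v = vgic_get_target_vcpu(d->vcpu[0], irq);
> > >
> > >         /*
> > >          * Xen 4.5 relies on the physical SPI having been routed to
> > >          * the pcpu that runs v, so the virq can be injected locally.
> > >          * If v actually runs on another pcpu, delivery breaks.
> > >          */
> > >         vgic_vcpu_inject_irq(v, irq);
> > >     }
> > >
> > > So injecting to an arbitrary vcpu at interrupt time, without also
> > > moving the physical IRQ, violates this assumption.)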
> > >
> > >
> > > On Sat, 18 Apr 2015, ììì wrote:
> > > > No.
> > > >
> > > > "Peripheral IRQ routing" here means that
> > > >
> > > > Xen itself selects one of Domain0's vCPUs to inject the peripheral IRQ
> > > > into.
> > > >
> > > > So the simple peripheral IRQ routing code below is an example of
> > > > peripheral IRQ routing:
> > > >
> > > > the peripheral IRQ is injected to Domain0's vcpu0 or vcpu1 without
> > > > consulting the vGIC information.
> > > >
> > > > I know that a peripheral IRQ can be processed on any CPU in Linux,
> > > >
> > > > so any of Domain0's vcpus can process a peripheral IRQ injected by Xen.
> > > >
> > > > On Xen 4.4.1 my simple peripheral IRQ routing code works well (below),
> > > >
> > > > but on Xen 4.5.0 it doesn't.
> > > >
> > > > -----Original Message-----
> > > > From: "Ian Campbell"<ian.campbell@xxxxxxxxxx>
> > > > To: "ììì"<supsup5642@xxxxxxxxx>;
> > > > Cc: <xen-devel@xxxxxxxxxxxxx>; "Stefano 
> > > > Stabellini"<Stefano.Stabellini@xxxxxxxxxxxxx>;
> > > > Sent: 2015-04-17 (Fri) 18:49:39
> > > > Subject: Re: [Xen-devel] Is it OK to route peripheral IRQs to any of
> > > > Domain0's vCPUs on Xen ARM 4.5.x?
> > > >
> > > > On Fri, 2015-04-17 at 11:36 +0900, ììì wrote:
> > > > >
> > > > >
> > > > > I'm studying peripheral IRQ routing to Domain0's vCPUs.
> > > >
> > > > What do you mean by "peripheral irq routing"? Do you mean supporting the
> > > > guest writing to GICD_ITARGETSR to cause an interrupt to be injected to a
> > > > specific vcpu?
> > > >
> > > > I thought that was supposed to work, Stefano?
> > > >
> > > > >
> > > > >
> > > > >
> > > > > I'm testing on an Arndale board, and Domain 0 has 2 vCPUs,
> > > > >
> > > > > so Xen can select vcpu0 or vcpu1 to inject a peripheral IRQ into.
> > > > >
> > > > >
> > > > >
> > > > > I tested peripheral routing on Xen 4.4.1 and it works well,
> > > > >
> > > > > but when I tested peripheral routing on Xen 4.5.0, the IRQs don't
> > > > > work properly.
> > > > >
> > > > >
> > > > >
> > > > > So I tested very simple peripheral routing code like this
> > > > > ('flag' is a global variable):
> > > > >
> > > > >
> > > > >
> > > > > * In "do_IRQ" function on Xen 4.4.1
> > > > >
> > > > > -----------------------------------------------------
> > > > >
> > > > > - from
> > > > >
> > > > > if ( desc->status & IRQ_GUEST )
> > > > >
> > > > > {
> > > > >
> > > > > struct domain *d = action->dev_id;
> > > > >
> > > > >
> > > > >
> > > > > desc->handler->end(desc);
> > > > >
> > > > >
> > > > >
> > > > > desc->status = IRQ_INPROGRESS;
> > > > >
> > > > > desc->arch.eoi_cpu = smp_processor_id();
> > > > >
> > > > >
> > > > >
> > > > > /* XXX: inject irq into all guest vcpus */
> > > > >
> > > > > vgic_vcpu_inject_irq(d->vcpu[0], irq, 0);
> > > > >
> > > > > goto out_no_end;
> > > > >
> > > > > }
> > > > >
> > > > > -to if ( desc->status & IRQ_GUEST ) {
> > > > >
> > > > > struct domain *d = action->dev_id;
> > > > >
> > > > >
> > > > >
> > > > > desc->handler->end(desc);
> > > > >
> > > > >
> > > > >
> > > > > desc->status = IRQ_INPROGRESS;
> > > > >
> > > > > desc->arch.eoi_cpu = smp_processor_id();
> > > > >
> > > > >
> > > > >
> > > > > /* XXX: inject irq into all guest vcpus */
> > > > >
> > > > > vgic_vcpu_inject_irq(d->vcpu[++flag % 2], irq, 0);
> > > > >
> > > > > goto out_no_end;
> > > > >
> > > > > }
> > > > >
> > > > > -----------------------------------------------------
> > > > >
> > > > >
> > > > >
> > > > > * In "vgic_vcpu_inject_spi" function on Xen 4.5.0
> > > > >
> > > > > -----------------------------------------------------
> > > > >
> > > > > -from
> > > > >
> > > > > void vgic_vcpu_inject_spi(struct domain *d, unsigned int irq)
> > > > >
> > > > > {
> > > > >
> > > > > struct vcpu *v;
> > > > >
> > > > >
> > > > >
> > > > > /* the IRQ needs to be an SPI */
> > > > >
> > > > > ASSERT(irq >= 32 && irq <= gic_number_lines());
> > > > >
> > > > >
> > > > >
> > > > > v = vgic_get_target_vcpu(d->vcpu[0], irq);
> > > > >
> > > > > vgic_vcpu_inject_irq(v, irq);
> > > > >
> > > > > }
> > > > >
> > > > > -tovoid vgic_vcpu_inject_spi(struct domain *d, unsigned int irq)
> > > > >
> > > > > {
> > > > >
> > > > > struct vcpu *v;
> > > > >
> > > > >
> > > > >
> > > > > /* the IRQ needs to be an SPI */
> > > > >
> > > > > ASSERT(irq >= 32 && irq <= gic_number_lines());
> > > > >
> > > > >
> > > > >
> > > > > vgic_vcpu_inject_irq(d->vcpu[++flag % 2], irq);
> > > > >
> > > > > }
> > > > >
> > > > > -----------------------------------------------------
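> > > > >
> > > > > (Note: strictly speaking, '++flag' on a plain global is racy, since
> > > > > the injection path can run concurrently on both pcpus. A minimal
> > > > > sketch of a safer counter, assuming Xen's atomic_t helpers are
> > > > > available here; this does not change the routing behaviour, only
> > > > > makes the vcpu choice well-defined under concurrency:
> > > > >
> > > > >     static atomic_t flag = ATOMIC_INIT(0);
> > > > >
> > > > >     /* ... then, at the injection site: */
> > > > >     vgic_vcpu_inject_irq(d->vcpu[(unsigned)atomic_inc_return(&flag) % 2],
> > > > >                          irq);
> > > > > )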
> > > > >
> > > > > so the peripheral IRQ is injected to Domain0's vCPU0 or vCPU1.
> > > > >
> > > > >
> > > > >
> > > > > On Xen 4.4.1 this works well, and I can confirm that the peripheral
> > > > > IRQs are routed to vcpu0 or vcpu1 using the cat /proc/interrupts
> > > > > command (the roughly even CPU0/CPU1 counts, e.g. for IRQ 109 below,
> > > > > show the round-robin taking effect).
> > > > >
> > > > >
> > > > >
> > > > > * cat /proc/interrupts output on Xen 4.4.1
> > > > >
> > > > > --------------------------------------------------
> > > > >            CPU0       CPU1
> > > > >   27:      8690       8558   GIC  27  arch_timer
> > > > >   31:        34          1   GIC  31  events
> > > > >   65:         0          0   GIC  65  10800000.mdma
> > > > >   66:         0          0   GIC  66  121a0000.pdma
> > > > >   67:         0          0   GIC  67  121b0000.pdma
> > > > >   74:         0          0   GIC  74  101d0000.watchdog
> > > > >   75:         0          0   GIC  75  s3c2410-rtc alarm
> > > > >   76:         0          0   GIC  76  s3c2410-rtc tick
> > > > >   77:         0          0   GIC  77  13400000.pinctrl
> > > > >   78:         0          0   GIC  78  11400000.pinctrl
> > > > >   79:         0          0   GIC  79  3860000.pinctrl
> > > > >   82:         0          0   GIC  82  10d10000.pinctrl
> > > > >   88:       229        233   GIC  88  12c60000.i2c
> > > > >   90:         0          0   GIC  90  12c80000.i2c
> > > > >   91:         0          0   GIC  91  12c90000.i2c
> > > > >   96:         0          0   GIC  96  12ce0000.i2c
> > > > >   97:         0          0   GIC  97  10060000.tmu
> > > > >  103:       257        246   GIC 103  ehci_hcd:usb3, ohci_hcd:usb4
> > > > >  104:         0          0   GIC 104  xhci-hcd:usb1
> > > > >  107:       710        710   GIC 107  dw-mci
> > > > >  109:      9602       9610   GIC 109  dw-mci
> > > > >  156:         0          0   GIC 156  11c10000.mdma
> > > > >  160:         0          0   xen-dyn-event  xenbus
> > > > >  183:         1          0   exynos_wkup_irq_chip  2  s5m8767
> > > > >  184:        33          0   xen-percpu-virq  hvc_console
> > > > >  185:         0          0   s5m8767  12  rtc-alarm0
> > > > >  186:         0          0   exynos_wkup_irq_chip  4  SW-TACT2
> > > > >  187:         0          0   exynos_wkup_irq_chip  5  SW-TACT3
> > > > >  188:         0          0   exynos_wkup_irq_chip  6  SW-TACT4
> > > > >  189:         0          0   exynos_wkup_irq_chip  7  SW-TACT5
> > > > >  190:         0          0   exynos_wkup_irq_chip  0  SW-TACT6
> > > > >  191:         0          0   exynos_wkup_irq_chip  1  SW-TACT7
> > > > > IPI0:         0          0   CPU wakeup interrupts
> > > > > IPI1:         0          0   Timer broadcast interrupts
> > > > > IPI2:      6660       6920   Rescheduling interrupts
> > > > > IPI3:         0          0   Function call interrupts
> > > > > IPI4:         9          3   Single function call interrupts
> > > > > IPI5:         0          0   CPU stop interrupts
> > > > > IPI6:         0          0   IRQ work interrupts
> > > > > IPI7:         0          0   completion interrupts
> > > > >  Err:         0
> > > > > --------------------------------------------------
> > > > >
> > > > >
> > > > >
> > > > > But on Xen 4.5.0, Dom0 cannot finish booting.
> > > > >
> > > > > Below is Domain0's boot log on Xen 4.5.0.
> > > > >
> > > > >
> > > > >
> > > > > * Domain0's boot log on Xen 4.5.0
> > > > >
> > > > > -----------------------------------------------------
> > > > > [    3.900830] usb 3-3.2: new high-speed USB device number 3 using exynos-ehci
> > > > > [    4.012184] usb 3-3.2: New USB device found, idVendor=05e3, idProduct=0610
> > > > > [    4.017685] usb 3-3.2: New USB device strings: Mfr=0, Product=1, SerialNumber=0
> > > > > [    4.025075] usb 3-3.2: Product: USB2.0 Hub
> > > > > [    4.030156] hub 3-3.2:1.0: USB hub found
> > > > > [    4.033555] hub 3-3.2:1.0: 4 ports detected
> > > > > [    4.310681] usb 3-3.2.4: new high-speed USB device number 4 using exynos-ehci
> > > > > [    4.406697] usb 3-3.2.4: New USB device found, idVendor=0b95, idProduct=772a
> > > > > [    4.412372] usb 3-3.2.4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
> > > > > [    4.419921] usb 3-3.2.4: Product: AX88772
> > > > > [    4.424087] usb 3-3.2.4: Manufacturer: ASIX Elec. Corp.
> > > > > [    4.429393] usb 3-3.2.4: SerialNumber: 000001
> > > > > [    4.435809] asix 3-3.2.4:1.0 (unnamed net_device) (uninitialized): invalid hw address, using random
> > > > > [    5.229663] asix 3-3.2.4:1.0 eth0: register 'asix' at usb-12110000.usb-3.2.4, ASIX AX88772 USB 2.0 Ethernet, ee:21:96:b8:
> > > > > [    7.925810] kjournald starting. Commit interval 5 seconds
> > > > > [    7.929993] EXT3-fs (mmcblk1p2): using internal journal
> > > > > [    7.944820] EXT3-fs (mmcblk1p2): recovery complete
> > > > > [    7.948228] EXT3-fs (mmcblk1p2): mounted filesystem with ordered data mode
> > > > > [    7.955194] VFS: Mounted root (ext3 filesystem) on device 179:34.
> > > > > [    7.963607] devtmpfs: mounted
> > > > > [    7.965377] Freeing unused kernel memory: 304K (c066e000 - c06ba000)
> > > > > [    8.156858] random: init urandom read with 86 bits of entropy available
> > > > > [    8.378207] init: ureadahead main process (1407) terminated with status 5
> > > > > [   12.790491] random: nonblocking pool is initialized
> > > > > [  240.105444] INFO: task kjournald:1402 blocked for more than 120 seconds.
> > > > > [  240.110770]       Not tainted 3.18.3-svn1 #2
> > > > > [  240.115105] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > > > > [  240.123005] kjournald       D c04aa028     0  1402      2 0x00000000
> > > > > [  240.129430] [<c04aa028>] (__schedule) from [<c04aa5f0>] (io_schedule+0x70/0x9c)
> > > > > [  240.136811] [<c04aa5f0>] (io_schedule) from [<c04aaccc>] (bit_wait_io+0x34/0x58)
> > > > > [  240.144273] [<c04aaccc>] (bit_wait_io) from [<c04aa920>] (__wait_on_bit+0x80/0xb8)
> > > > > [  240.151912] [<c04aa920>] (__wait_on_bit) from [<c04aa9c4>] (out_of_line_wait_on_bit+0x6c/0x74)
> > > > > [  240.160593] [<c04aa9c4>] (out_of_line_wait_on_bit) from [<c00f69a8>] (__sync_dirty_buffer+0xc0/0xec)
> > > > > [  240.169797] [<c00f69a8>] (__sync_dirty_buffer) from [<c0182244>] (journal_commit_transaction+0xfc8/0x139c)
> > > > > [  240.179518] [<c0182244>] (journal_commit_transaction) from [<c0184e48>] (kjournald+0xe4/0x268)
> > > > > [  240.188206] [<c0184e48>] (kjournald) from [<c0039bb0>] (kthread+0xd8/0xf0)
> > > > > [  240.195137] [<c0039bb0>] (kthread) from [<c000f1b8>] (ret_from_fork+0x14/0x3c)
> > > > > [  240.202427] INFO: task upstart-udev-br:1524 blocked for more than 120 seconds.
> > > > > [  240.209712]       Not tainted 3.18.3-svn1 #2
> > > > > [  240.214051] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > > > > [  240.221953] upstart-udev-br D c04aa028     0  1524      1 0x00000000
> > > > > [  240.228385] [<c04aa028>] (__schedule) from [<c01848cc>] (log_wait_commit+0xd8/0x120)
> > > > > [  240.236203] [<c01848cc>] (log_wait_commit) from [<c00f0f44>] (do_fsync+0x50/0x78)
> > > > > [  240.243746] [<c00f0f44>] (do_fsync) from [<c000f120>] (ret_fast_syscall+0x0/0x30)
> > > > > [  240.251295] INFO: task systemd-udevd:1528 blocked for more than 120 seconds.
> > > > > [  240.258417]       Not tainted 3.18.3-svn1 #2
> > > > > [  240.262746] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > > > > [  240.270646] systemd-udevd   D c04aa028     0  1528      1 0x00000004
> > > > > [  240.277076] [<c04aa028>] (__schedule) from [<c04aa5f0>] (io_schedule+0x70/0x9c)
> > > > > [  240.284454] [<c04aa5f0>] (io_schedule) from [<c04aaccc>] (bit_wait_io+0x34/0x58)
> > > > > [  240.291920] [<c04aaccc>] (bit_wait_io) from [<c04aa920>] (__wait_on_bit+0x80/0xb8)
> > > > > [  240.299556] [<c04aa920>] (__wait_on_bit) from [<c04aa9c4>] (out_of_line_wait_on_bit+0x6c/0x74)
> > > > > [  240.308238] [<c04aa9c4>] (out_of_line_wait_on_bit) from [<c00f4f04>] (__bread_gfp+0xa8/0xec)
> > > > > [  240.316747] [<c00f4f04>] (__bread_gfp) from [<c0127c14>] (ext3_get_branch+0x88/0x14c)
> > > > > [  240.324656] [<c0127c14>] (ext3_get_branch) from [<c0129800>] (ext3_get_blocks_handle+0x90/0xa40)
> > > > > [  240.333498] [<c0129800>] (ext3_get_blocks_handle) from [<c012a24c>] (ext3_get_block+0x9c/0xdc)
> > > > > [  240.342180] [<c012a24c>] (ext3_get_block) from [<c00fce08>] (do_mpage_readpage+0x470/0x7ac)
> > > > > [  240.350605] [<c00fce08>] (do_mpage_readpage) from [<c00fd20c>] (mpage_readpages+0xc8/0x118)
> > > > > [  240.359016] [<c00fd20c>] (mpage_readpages) from [<c00943d8>] (__do_page_cache_readahead+0x1b0/0x260)
> > > > > [  240.368224] [<c00943d8>] (__do_page_cache_readahead) from [<c008c0fc>] (filemap_fault+0x3ac/0x474)
> > > > > [  240.377247] [<c008c0fc>] (filemap_fault) from [<c00aa310>] (__do_fault+0x34/0x88)
> > > > > [  240.384798] [<c00aa310>] (__do_fault) from [<c00abfc8>] (do_cow_fault.isra.95+0x5c/0x17c)
> > > > > [  240.393044] [<c00abfc8>] (do_cow_fault.isra.95) from [<c00add2c>] (handle_mm_fault+0x410/0x8d8)
> > > > > [  240.401815] [<c00add2c>] (handle_mm_fault) from [<c0019140>] (do_page_fault+0x194/0x280)
> > > > > [  240.409966] [<c0019140>] (do_page_fault) from [<c0008560>] (do_DataAbort+0x38/0x9c)
> > > > > [  240.417691] [<c0008560>] (do_DataAbort) from [<c0012a18>] (__dabt_svc+0x38/0x60)
> > > > > [  240.425151] Exception stack(0xcaee7e78 to 0xcaee7ec0)
> > > > > [  240.430272] 7e60: 00037044 00000fb4
> > > > > [  240.438523] 7e80: 00000000 00000000 cbaf3880 caca7a00 caee7ed0 00037044 cba3b900 cae46c00
> > > > > [  240.446768] 7ea0: 00037940 00037044 00000000 caee7ec0 c010ada4 c020cb84 20000013 ffffffff
> > > > > [  240.455028] [<c0012a18>] (__dabt_svc) from [<c020cb84>] (__clear_user_std+0x34/0x64)
> > > > >
> > > > > [ ... the same kjournald, upstart-udev-br and systemd-udevd hung-task
> > > > > traces repeat at 360s and 480s, and the kjournald trace again at
> > > > > 601s ... ]
> > > > >
> > > > > ---------------------------------------------------------------------------------
> > > > >
> > > > >
> > > > >
> > > > > According to the log, it seems that the peripheral IRQs aren't being
> > > > > injected properly.
> > > > >
> > > > >
> > > > >
> > > > > I think the vgic_vcpu_inject_irq function can inject any IRQ to any
> > > > > vcpu. Right?
> > > > >
> > > > >
> > > > >
> > > > > Is this a bug, or intended behaviour on Xen 4.5.x?
> > > > >
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

