[Xen-bugs] [Bug 433] scsi: Device offlined - not ready after error recovery
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=433

------- Additional Comments From hien1@xxxxxxxxxx  2005-12-06 16:21 -------

It is an HS20 blade with 2 CPUs, but interrupts land only on CPU0 since
CONFIG_SMP is not set. Do we need SMP? I will attach the .config file.

The device interrupt counts are holding steady now because no tests are
running. However, the SCSI errors are still being generated in
/var/log/messages:

bl3-5:/var/log # more /proc/interrupts
            CPU0
  5:           0  Phys-irq     acpi
  7:           0  Phys-irq     ohci_hcd:usb1
 14:    72854970  Phys-irq     ide0
 16:      361193  Phys-irq     peth0
 17:     7876217  Phys-irq     eth1
 18:   419533939  Phys-irq     qla2300
 19:           0  Phys-irq     qla2300
256:   175371326  Dynamic-irq  timer0
257:        1048  Dynamic-irq  xenbus
258:           0  Dynamic-irq  console
259:           0  Dynamic-irq  net-be-dbg
260:   110547684  Dynamic-irq  blkif-backend
261:          17  Dynamic-irq  blkif-backend
262:   497110843  Dynamic-irq  blkif-backend
263:          23  Dynamic-irq  vif1.0
NMI:           0
LOC:           0
ERR:           0
MIS:           0
---------
Dec  6 10:02:26 bl3-5 kernel: scsi0 (0:0): rejecting I/O to offline device
Dec  6 10:02:26 bl3-5 kernel: Buffer I/O error on device sda5, logical block 4213
Dec  6 10:02:26 bl3-5 kernel: lost page write due to I/O error on sda5
Dec  6 10:02:26 bl3-5 kernel: scsi0 (0:0): rejecting I/O to offline device
Dec  6 10:02:26 bl3-5 kernel: Buffer I/O error on device sda6, logical block 521
Dec  6 10:02:26 bl3-5 kernel: lost page write due to I/O error on sda6
Dec  6 10:02:57 bl3-5 kernel: scsi0 (0:0): rejecting I/O to offline device
Dec  6 10:02:57 bl3-5 kernel: scsi0 (0:0): rejecting I/O to offline device
......
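For anyone reproducing this: since the report hinges on whether the qla2300
HBA (IRQs 18/19 above) is still raising interrupts while the "rejecting I/O
to offline device" messages accumulate, a quick shell check is to snapshot
/proc/interrupts twice and diff the counters. This is just a suggested sketch,
not from the original report; the 10-second interval, the /tmp paths, and the
grep patterns are arbitrary choices:

#!/bin/sh
# Snapshot the qla2300 and blkif-backend counters, wait, snapshot again,
# and diff. No diff output means those IRQ lines did not fire at all
# during the interval, i.e. the errors are generated without HBA activity.
grep -E 'qla2300|blkif' /proc/interrupts > /tmp/irq.before
sleep 10            # sampling interval: an arbitrary illustrative choice
grep -E 'qla2300|blkif' /proc/interrupts > /tmp/irq.after
diff /tmp/irq.before /tmp/irq.after && echo "no interrupt activity in 10s"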