[Xen-bugs] [Bug 1354] New: aacraid crash when high I/O on dom0
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1354

           Summary: aacraid crash when high I/O on dom0
           Product: Xen
           Version: 3.0.3
          Platform: x86-64
        OS/Version: Linux
            Status: NEW
          Severity: normal
          Priority: P2
         Component: Hardware Support
        AssignedTo: xen-bugs@xxxxxxxxxxxxxxxxxxx
        ReportedBy: alexandre.ellert@xxxxxxx

I'm trying to get Xen working on my server using an Adaptec 2820SA card. Everything went fine: Xen installed, network OK, domU installed... But when I get high I/O, the aacraid module says:

"aacraid: Host adapter abort request (0,0,0,0)
aacraid: Host adapter reset request. SCSI hang ?
AAC: Host adapter BLINK LED 0x57
AAC0: adapter Kernel panic'd 57"

After that, the only way to recover is to push the reset button to reboot the server.

I can reproduce this "SCSI hang" using bonnie++ to stress the filesystem. Note that bonnie++ runs fine with the default kernel (i.e. kernel-2.6.18-92.1.10.el5), so it appears to be a Xen issue.

I'm using CentOS 5.2 with the official Xen packages:
- kernel-xen-2.6.18-92.1.10.el5
- xen-3.0.3-64.el5_2.1

My motherboard (Intel Entry Server Board S3000AH) BIOS is up to date, and so is my Adaptec controller BIOS. I replaced the default CentOS aacraid module with the latest version from adaptec.com (1.1.5.2459).

I got the same issue using a Debian etch dom0, the xenified kernel from the Debian package, the Xen 3.2 package from Debian backports, and the latest aacraid module.

Thanks for your help.

PS: I reported this to Adaptec support; so far they don't know what's going wrong. I run CentOS because it's supported by Adaptec (but I prefer Debian :))

-- 
Configure bugmail: http://bugzilla.xensource.com/bugzilla/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.
_______________________________________________
Xen-bugs mailing list
Xen-bugs@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-bugs
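For anyone who wants to try reproducing the hang, a bonnie++ invocation along these lines should generate sustained heavy I/O in dom0. This is only a sketch: the exact command is not in the original report, and the mount point, file size, and file count below are illustrative values to adapt to your own setup.

```shell
# Hypothetical reproduction sketch -- not the reporter's exact command.
# /mnt/raid/test is an assumed directory on the aacraid-backed volume.
#   -d  directory on the array to exercise
#   -s  total test file size in MB (a "g" suffix means gigabytes);
#       use at least 2x RAM so the page cache cannot absorb the load
#   -n  number of files (in multiples of 1024) for the small-file phase
#   -u  user to run as when bonnie++ is started as root
bonnie++ -d /mnt/raid/test -s 8g -n 128 -u root
```

Running this in dom0 on the affected hardware should, per the report above, eventually trigger the "Host adapter abort request" / BLINK LED messages from aacraid.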