[Xen-API] XCP 1.0 md raid kernel oops fixed??


  • To: xen-api@xxxxxxxxxxxxxxxxxxx
  • From: William Baum <bill@xxxxxxxxxxxx>
  • Date: Sun, 13 Mar 2011 15:47:02 -0500
  • Delivery-date: Sun, 13 Mar 2011 13:47:16 -0700
  • List-id: Discussion of API issues surrounding Xen <xen-api.lists.xensource.com>

I too experienced the crashes when attempting to access md RAID devices under XCP 1.0-beta, as discussed here:

http://www.gossamer-threads.com/lists/xen/api/191687
http://www.mail-archive.com/xen-api@xxxxxxxxxxxxxxxxxxx/msg02222.html

This also appears to be the same issue affecting XenServer 5.6.1 FP1:

http://forums.citrix.com/thread.jspa?messageID=1539379

While the behavior I see in XCP 1.0 is slightly different from those reports, I'm still getting hard locks when attempting to access md RAID devices.
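
For reference, it doesn't take anything exotic to trigger it for me. A minimal sketch of the kind of access I mean, assuming the array is already assembled (the device names match my setup below, and even a plain read seems to be enough):

# mdadm --assemble /dev/md2 /dev/sda3 /dev/sdb3
# dd if=/dev/md2 of=/dev/null bs=1M count=100    <-- box locks hard here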

In XenServer 5.6.0, I have VMs running on md RAID1 local storage:

# cat /etc/mdadm.conf
ARRAY /dev/md2 level=raid1 num-devices=2 metadata=0.90 UUID=763e7d0c:c7bcbdcd:65b5b8a6:3e6f4aba

# pvs
  PV         VG                                                 Fmt  Attr PSize   PFree 
  /dev/md2   VG_XenStorage-6ff20299-6078-d8b0-cd1b-3c725686652a lvm2 a-   923.50G 800.51G

# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda3[0] sdb3[1]
      968371328 blocks [2/2] [UU]
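
For completeness, this is roughly how I'd expect to put the same array to use as local storage under XCP 1.0; the host UUID is pulled from xe here, and the name-label is a placeholder, not a value from my setup:

# HOSTUUID=$(xe host-list --minimal)
# xe sr-create host-uuid=$HOSTUUID type=lvm content-type=user \
    name-label="Local md RAID1" device-config:device=/dev/md2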
     
Is this supposed to work in XCP 1.0?

_______________________________________________
xen-api mailing list
xen-api@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/mailman/listinfo/xen-api

 

