Re: [Xen-users] Xen and iSCSI
Per Andreas Buer wrote:
> Markus Hochholdinger wrote:
>> well, my idea of HA is as follows:
>> - Two storage servers on individual SANs connected to the Xen hosts.
>>   Each storage server provides block devices per iscsi.
> I guess gnbd can be a drop-in replacement for iSCSI. I would think
> performance is better as gnbd is written for the Linux kernel - the SCSI
> protocol is written for hardware. I _know_ gnbd is easier to set up. You
> just point the client to the server and the client populates /dev/gnbd/
> with the named entries (the devices are given logical names - no SCSI
> buses, devices or LUNs).

If I remember correctly gnbd is not quite the same as iscsi. When I looked
into using gnbd I figured I could not create a target disk device that
would present 10-20 unique devices to the xen clients. I am using lvm to
break a set of disks apart and then presenting each volume as a separate
iscsi target (sketched below). I did not think the same thing could be
done with GNBD, but then I started this about a year ago so the rules may
have changed in the intervening time.

> If we compare your iSCSI-based setup to a Heartbeat/DRBD/GNBD setup
> there might be some interesting points. You can choose for yourself
> whether you want the DomUs to act as GNBD clients or whether you want
> to access the GNBD servers directly from your DomU - or a combination
> (through Dom0 for rootfs/swap - and via GNBD for data volumes).
>> - On domU two iscsi block devices are combined to a raid1. On this
>>   raid1 we will have the rootfs.
>> Advantages:
>> - Storage servers can easily be upgraded. Because of raid1 you can
>>   safely disconnect one storage server and upgrade its hard disk space.
>>   After the raid1 has resynced you can do the same with the other
>>   storage server.
> The same with Heartbeat/DRBD/GNBD. You just fail one of the storage
> servers and upgrade it. After it is back up DRBD does an _incremental_
> sync which usually takes just a few seconds. With such a setup you can
> use a _dedicated_ link for DRBD. That is a nice feature.

Has anybody built a system using gnbd that supports several dom0 systems
and migrating domUs?

>> - If you use a kind of lvm on the storage servers you can easily expand
>>   the exported iscsi block devices (the raid1 and the filesystem also
>>   have to be expanded).
> The same goes for Heartbeat/DRBD/GNBD, I would guess.
>> - You can do live migration without configuring the destination Xen
>>   host specially (e.g. providing block devices in dom0 to export to
>>   domU) because everything is done in domU.
> GNBD clients are more or less stateless.
>> - If one domU or its Xen host dies you can easily start the domUs on
>>   other Xen hosts.
>> Disadvantages:
>> - When one storage server dies ALL domUs have to rebuild their raid1
>>   when this storage server comes back. High traffic on the SANs.
> You will also have to rebuild a volume if a XenU dies while writing to
> disk.
>> - Not easy to set up a new domU in this environment (lvm, iscsi, raid1).
> iSCSI for rootfs sounds like a lot of pain.
>> Not sure:
>> - Performance? Can we get full network performance in domU? Ideally we
>>   can use the full bandwidth of the SANs (e.g. 1 GBit/s). And can the
>>   SANs handle this? (I will make a raid0 with three SATA disks in each
>>   storage server.)
> Remember that every write has to be written twice, so your write
> capacity might suffer a bit.
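For "breaking a set of disks apart with lvm and presenting each volume as a
separate iscsi target", a rough sketch of what that can look like, assuming
the iSCSI Enterprise Target (ietd) on the storage server; the volume group,
sizes and IQNs here are only illustrative:

    # On the storage server: one logical volume per exported disk.
    lvcreate -L 8G -n domu01-root vg_san0
    lvcreate -L 1G -n domu01-swap vg_san0

    # /etc/ietd.conf - one iSCSI target per logical volume.
    Target iqn.2006-09.com.example.san0:domu01-root
            Lun 0 Path=/dev/vg_san0/domu01-root,Type=fileio
    Target iqn.2006-09.com.example.san0:domu01-swap
            Lun 0 Path=/dev/vg_san0/domu01-swap,Type=fileio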
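On the question of whether GNBD can present many uniquely named devices: a
minimal sketch, assuming the cluster-suite GNBD tools (gnbd_serv,
gnbd_export, gnbd_import) and made-up volume and host names; each exported
LV shows up on the client under its own name in /dev/gnbd/:

    # On the storage server: start the GNBD server process and export
    # each logical volume under its own logical name.
    gnbd_serv
    gnbd_export -d /dev/vg_san0/domu01-root -e domu01-root
    gnbd_export -d /dev/vg_san0/domu01-swap -e domu01-swap

    # On the client (dom0 or domU): import everything the server exports;
    # the devices then appear as /dev/gnbd/domu01-root and so on.
    gnbd_import -i san0.example.com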
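The "two iscsi block devices combined to a raid1 in domU" part, and the
upgrade-one-storage-server-then-resync cycle, might look roughly like this
inside the domU, assuming open-iscsi and mdadm; portals, IQNs and device
names are made up for the example:

    # Log in to one target on each storage server (open-iscsi).
    iscsiadm -m discovery -t sendtargets -p 10.0.0.1
    iscsiadm -m discovery -t sendtargets -p 10.0.0.2
    iscsiadm -m node -T iqn.2006-09.com.example.san0:domu01-root -p 10.0.0.1 --login
    iscsiadm -m node -T iqn.2006-09.com.example.san1:domu01-root -p 10.0.0.2 --login

    # Mirror the two imported devices; the rootfs then lives on /dev/md0.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

    # To upgrade one storage server: drop its half of the mirror,
    # do the maintenance, then re-add it and let the resync run.
    mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb
    #   ... upgrade that storage server ...
    mdadm /dev/md0 --add /dev/sdb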
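Expanding an exported block device end to end (the point that "the raid1
and the filesystem also have to be expanded") could go like this, assuming
LVM on the storage servers and mdadm plus ext3 in the domU; names are
illustrative, and the initiator may need to rescan or re-login before it
sees the new size:

    # On each of the two storage servers: grow the backing logical volume.
    lvextend -L +4G /dev/vg_san0/domu01-root

    # Inside the domU, once both iSCSI devices show the new size:
    mdadm --grow /dev/md0 --size=max    # grow the raid1 to the new device size
    resize2fs /dev/md0                  # grow the ext3 filesystem on top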
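And for the Heartbeat/DRBD/GNBD alternative with a dedicated replication
link, a drbd.conf fragment along these lines; hostnames, devices and
addresses are invented for the example:

    # /etc/drbd.conf (fragment) - one resource mirrored between the two
    # storage servers over a dedicated link; GNBD would then export
    # /dev/drbd0 from whichever node is primary.
    resource domu01-root {
        protocol C;
        on san0 {
            device    /dev/drbd0;
            disk      /dev/vg_san0/domu01-root;
            address   192.168.100.1:7788;   # dedicated crossover link
            meta-disk internal;
        }
        on san1 {
            device    /dev/drbd0;
            disk      /dev/vg_san1/domu01-root;
            address   192.168.100.2:7788;
            meta-disk internal;
        }
    }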
--
Alvin Starr                   ||   voice: (416)585-9971
Interlink Connectivity        ||   fax:   (416)585-9974
alvin@xxxxxxxxxx              ||

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users