[Xen-users] Re: multiple iscsi targets on bonding interface
(Please keep the thread on the mailing list.)

Longina Przybyszewska <longina@xxxxxxxxxxxx> writes:

> On Tue, 7 Apr 2009, Ferenc Wagner wrote:
>
>> We maintain iSCSI-backed virtual machines: each machine uses a
>> separate iSCSI target as its disk.  So it's possible.  Or maybe
>> I don't understand the question.
>
> I understand that if I had multiple iSCSI LUNs managed by Dom0, I
> would need multiple TCP/IP connections from Dom0, that is, multiple
> network interfaces: iface0, iface1... in open-iscsi terminology.
> This is why I am thinking about alias interfaces, in our case
> bond1.xxx:{0,1,2,3}.

I'm not sure what "iSCSI LUNs managed by Dom0" means here, but you
don't need multiple network interfaces (or IP addresses, or ports)
either for exporting multiple iSCSI targets from your iSCSI server or
for logging into multiple iSCSI targets from a client node.  (There's
a small iscsiadm sketch at the end of this mail.)

> Actually we have a Xen server with two pairs of bonding interfaces:
>
> - BOND0 is like yours: 802.1q VLAN interfaces on top of the bond,
>   plus bridges on top of each VLAN.  Virtual machines have access to
>   different VLANs via interfaces bridged to the specific VLAN bridge.
>
> The other bonding interface, BOND1, is configured on an ordinary
> access port for reaching the SAN storage VLAN.  The point of bond1
> is to have a separate interface for storage traffic, i.e. for
> accessing iSCSI targets.

OK.  This is purely a performance tweak (which can be important).

> My "missing link" is how to access multiple iSCSI LUNs from Dom0, or
> how to make DomUs access separate iSCSI LUNs if binding should
> happen via bond1?

Now you have to be clearer.  Do you want to access the iSCSI targets
from the dom0 (to provide boot disks for your domUs) or from the domUs
(to gain extra storage after boot)?  In the first case the (dom0)
kernel IP routing should take care of everything.  In the second case
you should share bond1 with the respective clients via a common bridge
on top of the bond.  (A configuration sketch for this also follows at
the end of this mail.)

> I was thinking about a bridge on top of bond1 that each iSCSI client
> machine (Dom0, DomUs) could bridge its "storage iface" to.  But I
> had some routing problems in Dom0 and gave up.

Ah, so you want to make both dom0 and the domUs iSCSI clients!  In
that case you have to assign an IP address from your storage network
to bond1 as well.  And to each domU, of course.  Then you shouldn't
have routing problems. :)

>> But this hasn't got anything to do with iSCSI, which is mostly
>> managed by the dom0.  We also have a couple of iSCSI-rooted domUs,
>> but that doesn't make a difference.
>
> Are we talking about iSCSI LUNs (/dev/sd{a,b,c,d}) or about one huge
> iSCSI LUN, LVM-partitioned into smaller pieces for DomU
> root/swap/data?

Don't confuse the different layers.  Each of our PV domUs is backed by
an independent iSCSI target, to make independent live migration of
domUs possible.  Each domU sees its assigned target as the virtual
disk /dev/xvda, and has no idea whatsoever that it is an iSCSI target
in reality.  Then each domU uses this virtual disk as it wants: some
partition it, some use it as an LVM physical volume, some put a
filesystem straight on it.  iSCSI is absolutely out of the picture
here, with the exception of the iSCSI-rooted domUs, which, on the
other hand, have no disk devices assigned to them by Xen: they are
(virtually) "diskless"; the Xen host doesn't know they mount iSCSI
devices as their roots from the initramfs.
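To illustrate the point about not needing multiple interfaces: here is
a minimal open-iscsi sketch that logs into two different targets
through the very same portal.  The portal address and the IQNs below
are made-up examples; substitute your own.

    # Discover the targets exported by the portal (single IP, single port):
    iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260

    # Log into two different targets over the same TCP/IP path;
    # each login produces a separate SCSI disk (/dev/sdX) on the client:
    iscsiadm -m node -T iqn.2009-04.com.example:disk0 -p 192.0.2.10:3260 --login
    iscsiadm -m node -T iqn.2009-04.com.example:disk1 -p 192.0.2.10:3260 --login

The initiator multiplexes all sessions over ordinary TCP connections,
so one interface and one IP address are perfectly sufficient.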
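And this is roughly what the shared storage bridge could look like,
assuming Debian-style ifupdown configuration with bridge-utils; the
bridge name, MAC and addresses are made up for the example:

    # /etc/network/interfaces fragment on the dom0: bridge the storage
    # bond, and give dom0 its own address on the storage network here
    auto brsan
    iface brsan inet static
        address 10.66.0.2
        netmask 255.255.255.0
        bridge_ports bond1
        bridge_stp off

    # in the domU config, attach a vif to the same bridge
    # (00:16:3e is the Xen OUI; pick a unique MAC per guest):
    vif = [ 'mac=00:16:3e:00:00:01, bridge=brsan' ]

Inside the domU you would then configure an address from the same
10.66.0.0/24 storage network on the corresponding guest interface,
after which both dom0 and the domUs can reach the iSCSI server
directly, with no routing involved.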
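For the first case (dom0 logs into the target and hands the resulting
block device to the guest), the domU disk line can reference the
persistent by-path name udev creates for the iSCSI device, so it
survives device reordering.  The path below is an example built from
the made-up portal and IQN above:

    # domU config sketch: the iSCSI-backed device appears in the
    # guest simply as /dev/xvda
    disk = [ 'phy:/dev/disk/by-path/ip-192.0.2.10:3260-iscsi-iqn.2009-04.com.example:disk0-lun-0,xvda,w' ]

The guest never learns that its xvda is iSCSI; that also keeps live
migration simple, since any host that can log into the target can run
the domU.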
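As for our iSCSI-rooted domUs: there the guest's own initramfs
performs the login, and the domU config lists no disks at all.  On
Debian, the open-iscsi package can build the needed settings into the
initramfs from /etc/iscsi/iscsi.initramfs; a sketch (variable names
may differ on other distributions, and all values are examples):

    # /etc/iscsi/iscsi.initramfs inside the domU
    ISCSI_INITIATOR=iqn.2009-04.com.example:domu1
    ISCSI_TARGET_NAME=iqn.2009-04.com.example:domu1-root
    ISCSI_TARGET_IP=192.0.2.10
    ISCSI_TARGET_PORT=3260

The kernel command line's root= parameter then points at the device
the login produces, and the Xen host stays entirely unaware of it.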
--
Cheers,
Feri.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users