
Re: [Xen-API] One resource pool and local lvm SR mirrored using DRBD



Dear George,

please find my answers inline.

On 15.08.11 19:27, George Shuklin wrote:
> Yes.
>
> XCP has two modes: with local storage (small installations) and with
> external shared storage. You will not be able to use XCP to its full
> extent without external shared storage: migration, the HA features and
> so on only work with shared storage.
Yes, I am well aware of that fact.
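For reference, this is roughly how I would expect such a shared SR to be
attached; all values below are placeholders, not taken from this thread:

    # create a shared lvmoiscsi SR, visible to every host in the pool
    xe sr-create name-label="shared-iscsi" type=lvmoiscsi shared=true \
        device-config:target=<storage-ip> \
        device-config:targetIQN=<target-iqn> \
        device-config:SCSIid=<scsi-id>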
> There was some discussion earlier about using DRBD in XCP, but none of
> it has been implemented in any way.
I noticed the threads.
>
> Right now we are using XCP 0.5 and starting the migration to XCP 1.0.
> Both of them have a DDK you can use to compile modules.
OK. I am using the XCP 1.1 beta, but I would like to compile an RPM for
that version as well.
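Lacking a DDK for 1.1, my current plan is a plain CentOS 5 build VM that
matches the dom0 userland, roughly as follows (the DRBD version and the
kernel path are guesses on my part, untested):

    # CentOS 5 build VM matching the XCP dom0 userland
    yum install -y rpm-build gcc make kernel-xen-devel flex
    wget http://oss.linbit.com/drbd/8.3/drbd-8.3.11.tar.gz
    tar xzf drbd-8.3.11.tar.gz && cd drbd-8.3.11
    # point KDIR at the kernel-devel tree matching the dom0 kernel
    make rpm KDIR=/usr/src/kernels/<dom0-kernel-version>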
> The FT/HA relationship is more complicated than 'better/worse'; they
> solve different problems and ask a different price for it (e.g. a
> completely FT machine will run slower than a normal one).
I have to look into the definitions of these. Maybe what I refer to as FT
is just redundancy, so in that sense FT is already the thing I want to
achieve right now.
> Please note that XCP does not contain the code for native XenServer HA;
> it depends on Windows (as far as I understand). The 'HA' in XCP is
> limited to restarting domains in case of a crash or host reboot.
Okay. As I said above, I just want the setup to be fault tolerant; I can
live with having to restart a VM in case of a host crash for now.
I only want to make sure I can start the systems on the still-running
server (given that I only utilize 50 % of both systems).
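Concretely, the manual recovery I have in mind looks something like this
(UUIDs are placeholders; if the dead host was the pool master, a slave
would first have to be promoted via xe pool-emergency-transition-to-master):

    # on the surviving host, once the failed host is confirmed down
    xe vm-list resident-on=<failed-host-uuid> params=uuid,name-label
    xe vm-reset-powerstate uuid=<vm-uuid> force=true  # clear stale 'running' state
    xe vm-start uuid=<vm-uuid> on=<surviving-host>    # host name-label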

Cheers,
Jakob
>
> On Mon, 15/08/2011 at 18:25 +0200, Jakob Praher wrote:
>> Hi George,
>> Hi List,
>>
>> On 15.08.11 13:57, George Shuklin wrote:
>>> It is not an XCP component in any way; it is just storage configuration.
>>>
>>> We use an lvmoiscsi target with multipath enabled in XCP, set up two
>>> hosts with primary/primary DRBD (this is done on a plain Debian system,
>>> with no XCP-specific attention), and publish /dev/drbd1 on both hosts
>>> with the same SN/ID.
>>>
>>> The scheme looks like:
>>>
>>> storage1
>>> (raid)----DRBD--IET~~~~~~~~~~~~[initiator+
>>>              |                 [XCP       \
>>> storage2     |                 [host       multipath----lvm-....
>>> (raid)----DRBD--IET~~~~~~~~~~~~[initiator/
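A minimal drbd.conf for the primary/primary pair in this diagram might
look like the sketch below; the hostnames follow the diagram, while the
addresses and backing disks are placeholders:

    resource r0 {
        protocol C;                        # synchronous replication
        startup { become-primary-on both; }
        net     { allow-two-primaries; }   # required for primary/primary
        on storage1 {
            device    /dev/drbd1;
            disk      /dev/md0;            # the local RAID
            address   10.0.0.1:7789;
            meta-disk internal;
        }
        on storage2 {
            device    /dev/drbd1;
            disk      /dev/md0;
            address   10.0.0.2:7789;
            meta-disk internal;
        }
    }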
>> So you have dedicated storage hosts that are connected via DRBD, and
>> both export the storage via iSCSI to the XCP hosts, right?
>> I have a different scenario in mind, where you use commodity servers
>> (using raid1 and local storage) to manage a fault tolerant setup.
>> Do you have any experience with such a system based on DRBD?
>>> The main idea is just to make the storage FT. The hosts are not FT, only HA.
>> So FT is fault-tolerant and HA is highly available?
>> Do you imply that fault-tolerant is better than highly available?
>> (Sorry for my ignorance of the standard terms here.)
>>> FT storage keeps client data safe in case of any storage fault, and
>>> even allows taking the storage down for maintenance. The XCP host pool
>>> allows migrating client machines for host maintenance. The only
>>> unprotected part is a hang/crash of an XCP host, which requires a VM
>>> restart (with almost no data loss).
>>>
>>> Network overhead of this scheme:
>>>
>>> XCP-to-storage link: almost no overhead compared to a classic
>>> lvmoiscsi target.
>>> Storage-to-storage link: double the network load for writes, no
>>> overhead for reads.
>>>
>>> The main problem is keeping those hosts in a consistent state
>>> (split-brain is a very, very bad thing).
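DRBD can at least be told how to react once a split-brain is detected; a
common, though by no means universal, policy sketch for the net section:

    net {
        allow-two-primaries;
        after-sb-0pri discard-zero-changes;  # auto-heal if one side saw no writes
        after-sb-1pri discard-secondary;     # prefer the current primary's data
        after-sb-2pri disconnect;            # two diverged primaries: manual repair
    }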
>> Thanks for the write-up.
>> BTW: What version of XCP are you using / how are you developing packages
>> for XCP more recent than 0.5?
>>
>> Cheers,
>> Jakob
>>
>>
>>> On Mon, 15/08/2011 at 12:26 +0200, Jakob Praher wrote:
>>>> Hi George,
>>>>
>>>> thanks for the quick reply. As I already said, cross-pool migration is
>>>> not an option, but at least the wiki discusses a setup that mirrors via
>>>> DRBD. I am new to Xen-API, yet we have been using the Xen hypervisor on
>>>> Debian for half a decade. So the basic underlying technology is known
>>>> to us; what is new are the concepts like SR, VDI, VHD, ... (which are
>>>> also needed to abstract from the physical representation).
>>>>
>>>> What does this DRBD-backed iSCSI setup look like?
>>>> Do you export the DRBD block device via the iSCSI protocol? Does
>>>> multipath mean that you can do active/active?
>>>> What is the network overhead of this scenario compared to local LVM?
>>>> Is this the preferred scenario for HA with individual hosts that have
>>>> local storage?
>>>> Can this be enabled in XCP 1.1?
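My current understanding of the multipath side, as a sketch; the wwid is
a placeholder and would have to match the SN/ID the two targets export:

    multipaths {
        multipath {
            wwid                 <scsi-id-of-exported-lun>
            path_grouping_policy multibus   # use both paths actively (active/active)
        }
    }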
>>>>
>>>> Regarding FT: yes, our scenario is FT since we use the SR locally. But
>>>> we would definitely like to set up an HA infrastructure, since then the
>>>> decision about which machine a VM is placed on does not have to be made
>>>> at vm-install time but can be balanced dynamically, and XenMotion and
>>>> all that would work as well.
>>>>
>>>> One of our goals is not to reinvent anything.
>>>>
>>>> Cheers,
>>>> Jakob
>>>>
>>>>
>>>> On 15.08.11 11:57, George Shuklin wrote:
>>>>> Right now we are running the last tests before production deployment
>>>>> of a DRBD-backed iSCSI target with multipath. I have found no specific
>>>>> problems so far (except the need to patch ISCSISR.py for complete
>>>>> multipath support). But I don't understand why you need to do
>>>>> cross-pool migration for FT. You cannot achieve FT in the current
>>>>> state of XCP in any case, only HA.
>>>>>
>>>>> The difference between FT and HA: if a server breaks, an FT machine
>>>>> continues to run without any trace of the fault, whereas an HA machine
>>>>> is just (almost instantly) restarted on another available host in the
>>>>> pool. FT is no magic key either: if a VM itself does something bad
>>>>> (crashes), HA restarts it, while FT will do nothing.
>>>>>
>>>>> On 15.08.2011 13:38, Jakob Praher wrote:
>>>>>> Dear List,
>>>>>>
>>>>>> I have a question regarding a fault-tolerant setup of XCP.
>>>>>> We have two hosts that are in one resource pool.
>>>>>> Furthermore, we are trying to set up two SRs (storage repositories)
>>>>>> as local LVM volume groups, where one volume group is active on
>>>>>> (owned by) one server and the other volume group is active on the
>>>>>> other server.
>>>>>>
>>>>>> In case of a failure, all the meta information concerning the VMs is
>>>>>> still available thanks to the common resource pool. But after
>>>>>> degrading the system to one host, the SR is still owned by the failed
>>>>>> server. Is there an easy way to migrate the SR? Is anybody using a
>>>>>> similar solution, or what are your best practices?
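The reattachment itself would presumably be PBD surgery along these lines
(UUIDs and the device path are placeholders; it can of course only work if
the surviving host actually sees the volume group, e.g. via DRBD):

    xe pbd-list sr-uuid=<sr-uuid> params=uuid,host-uuid   # find the old PBD
    xe pbd-unplug uuid=<old-pbd-uuid>
    xe pbd-destroy uuid=<old-pbd-uuid>
    xe pbd-create host-uuid=<surviving-host-uuid> sr-uuid=<sr-uuid> \
        device-config:device=/dev/<volume-group-device>
    xe pbd-plug uuid=<new-pbd-uuid>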
>>>>>>
>>>>>> I think CrossPool-Migration is not an option for us since we want to
>>>>>> keep only one resource pool for both servers.
>>>>>>
>>>>>> Another question: I am currently using XCP 1.1. What is the best way
>>>>>> to compile system RPMs (like DRBD) for this version? Since the DDK is
>>>>>> not available, I also have trouble getting the XCP distribution
>>>>>> installed into a guest VM so that I can add development packages to
>>>>>> it and compile the package there. Is there a base image I can use so
>>>>>> that I have the right devel RPMs? From yum repos.d I see that it is
>>>>>> CentOS 5.
>>>>>>
>>>>>> Any help is appreciated.
>>>>>>
>>>>>> Cheers,
>>>>>> Jakob


_______________________________________________
xen-api mailing list
xen-api@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/mailman/listinfo/xen-api


 

