
Re: [Xen-users] Linux Fiber or iSCSI SAN



Some people have been asking about the ZFS configuration used for our
Gluster bricks..
Well.. it's quite simple.
On a 36-drive machine, we chose to configure 3 bricks @ 12 drives per
brick:
 - every brick consists of 12 drives
 - 10 drives are used for RAIDZ1 (2TB 3.5" WD Enterprise Black HDD)
 - 1 drive is used as cache (64GB 2.5" SSD with an AdaptaDrive bracket
for a proper fit)
 - 1 drive is used as a spare (2TB 3.5" WD Enterprise Black HDD)
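
For anyone wanting to reproduce this, each brick pool can be created
with a single command along these lines (just a sketch: slot0-slot11
are simply the per-slot device names you see in the status output
below; substitute whatever your own drives are called, e.g.
/dev/disk/by-id paths or vdev aliases):

# sketch only: slot0-slot11 are per-slot device names,
# not real device nodes - use your own device paths
sudo zpool create brick0 \
    raidz1 slot0 slot1 slot2 slot3 slot4 slot5 slot6 slot7 slot8 slot9 \
    cache slot10 \
    spare slot11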

Here is a ZPOOL status output of one of the bricks:
asemenov@lakshmi:~$ sudo zpool status brick0
  pool: brick0
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        brick0      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            slot0   ONLINE       0     0     0
            slot1   ONLINE       0     0     0
            slot2   ONLINE       0     0     0
            slot3   ONLINE       0     0     0
            slot4   ONLINE       0     0     0
            slot5   ONLINE       0     0     0
            slot6   ONLINE       0     0     0
            slot7   ONLINE       0     0     0
            slot8   ONLINE       0     0     0
            slot9   ONLINE       0     0     0
        cache
          slot10    ONLINE       0     0     0
        spares
          slot11    AVAIL

errors: No known data errors


To lower the entry cost, we currently have only 2 bricks populated per
Gluster node ([1] and [2]), but in the end the 4 Gluster nodes will
look like this:

{[1][3][4]}   {[1][2][4]}   {[1][2][3]}   {[2][3][4]}

legend:
{ }  - a Gluster node
[ ]  - a Gluster brick
1-4  - a replica/mirror id
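
For illustration, a volume with that replica layout could be created
along these lines once all bricks are populated and the peers are
probed into the trusted pool (just a sketch: the node names and
/bricks/rN paths are made up, and the brick order is what places each
replica set on the right nodes, since Gluster forms one replica set
from every 3 consecutive bricks):

# sketch: node1-node4 and the /bricks/rN paths are placeholders;
# every group of 3 consecutive bricks becomes one replica set
sudo gluster volume create gvol0 replica 3 transport rdma \
    node1:/bricks/r1 node2:/bricks/r1 node3:/bricks/r1 \
    node2:/bricks/r2 node3:/bricks/r2 node4:/bricks/r2 \
    node1:/bricks/r3 node3:/bricks/r3 node4:/bricks/r3 \
    node1:/bricks/r4 node2:/bricks/r4 node4:/bricks/r4
sudo gluster volume start gvol0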

Expanding this setup is possible with SAS-attached expanders (adding
disks in multiples of 12), or better yet by adding 4 identical nodes,
for better Gluster performance, increased throughput and better IB
fabric utilization.
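
Growing the volume later is then a matter of adding bricks in
multiples of the replica count and rebalancing, for example
(hypothetical node/brick names again):

# sketch: adds one more replica set of 3 bricks, then spreads
# existing data onto the new bricks
sudo gluster volume add-brick gvol0 \
    node5:/bricks/r5 node6:/bricks/r5 node7:/bricks/r5
sudo gluster volume rebalance gvol0 start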

Hope this inspires you and helps in your projects.

Cheers,
Anastas S
sysadmin++

>> Hello everyone,
>>
>> At my current workplace, we've been evaluating solutions from DDN vs.
>> NetApp vs. in-house.
>> The requirement was to have a low entry price for at least 1/3 PB of
>> storage as a starting point, high IO/bandwidth, low latency, Hadoop
>> compatibility, and a target capacity of 1PB with further expansion in mind.
>> The DDN and NetApp solutions were all $300k+ with limited flexibility,
>> overpriced replacement drives and limited expandability options.
>> After evaluating our own solution on old hardware we had lying around,
>> we've decided to give it a shot.
>> There were obviously some risks: convincing management to sign the PO
>> for $25k and explaining the risks and benefits, with the worst-case
>> scenario being to reuse it as more traditional storage nodes.
>>
>> We've purchased 4 x 3U SuperMicro chassis with 36 x 3.5" HDD bays and
>> additional internal slots for OS drives, along with a few used $150
>> Infiniband 40Gig cards and an IB switch (the most expensive single piece
>> of equipment here, ~ $5-7k).
>>
>> The result is a 4-node GlusterFS cluster running over RDMA transport,
>> with ZFS bricks (10 HDDs in raidz + 1 SSD cache + 1 spare), 200
>> nanosecond fabric latency, highly configurable replication (we use 3x)
>> and flexible expandability.
>> In our tests so far with this system, we've seen 18GB/sec aggregate
>> fabric bandwidth, reading from all 3 replicas (which is what Gluster
>> does when you replicate - it spreads IO) at 6GB/sec per replica.
>> 6GB per second is pretty much the most you can squeeze out of 40Gig
>> Infiniband (aka QDR), but that was a sequential read test. However, by
>> increasing the number of Gluster nodes and bricks, you can achieve
>> greater aggregate throughput.
>> I suppose you could do DRBD over RDMA (SDP or SuperSockets, as per the
>> DRBD docs: http://www.drbd.org/users-guide/s-replication-transports.html)
>> instead of Gluster, if your environment requires it..
>>
>> Infiniband support is now part of the Linux kernel, compared to a few
>> years ago.. and used hardware is not that expensive.. not much different
>> from Fibre Channel.
>> 56Gig (aka FDR) is also available, albeit more expensive..
>> Imho, Infiniband is going to become more relevant and universal in the
>> upcoming years..
>>
>> Cheers,
>> Anastas S
>> sysadmin++
>>
>>
>> On Wed, Jun 12, 2013 at 12:00 PM,  <xen-users-request@xxxxxxxxxxxxx> wrote:
>>> Send Xen-users mailing list submissions to
>>>        xen-users@xxxxxxxxxxxxx
>>>
>>> To subscribe or unsubscribe via the World Wide Web, visit
>>>        http://lists.xen.org/cgi-bin/mailman/listinfo/xen-users
>>> or, via email, send a message with subject or body 'help' to
>>>        xen-users-request@xxxxxxxxxxxxx
>>>
>>> You can reach the person managing the list at
>>>        xen-users-owner@xxxxxxxxxxxxx
>>>
>>> When replying, please edit your Subject line so it is more specific
>>> than "Re: Contents of Xen-users digest..."
>>>
>>>
>>> Today's Topics:
>>>
>>>   1. Re: Linux Fiber or iSCSI SAN (Gordan Bobic)
>>>   2. Re: Linux Fiber or iSCSI SAN (Nick Khamis)
>>>   3. Re: Linux Fiber or iSCSI SAN (Gordan Bobic)
>>>   4. Re: Linux Fiber or iSCSI SAN (Errol Neal)
>>>   5. Re: Linux Fiber or iSCSI SAN (Nick Khamis)
>>>   6. Re: Linux Fiber or iSCSI SAN (Nick Khamis)
>>>   7. Re: Linux Fiber or iSCSI SAN (Errol Neal)
>>>   8. Re: Linux Fiber or iSCSI SAN (Nick Khamis)
>>>   9. Re: Linux Fiber or iSCSI SAN (Errol Neal)
>>>  10. Re: Linux Fiber or iSCSI SAN (Nick Khamis)
>>>  11. Re: Linux Fiber or iSCSI SAN (Gordan Bobic)
>>>  12. Re: Linux Fiber or iSCSI SAN (Gordan Bobic)
>>>  13. Xen 4.1 compile from source and install on Fedora 17
>>>      (ranjith krishnan)
>>>  14. Re: Xen 4.1 compile from source and install on Fedora 17 (Wei Liu)
>>>  15. pv assign pci device (jacek burghardt)
>>>  16. Re: pv assign pci device (Gordan Bobic)
>>>  17. Re: Blog: Installing the Xen hypervisor on Fedora 19
>>>      (Dario Faggioli)
>>>  18. Xen Test Day is today! (Dario Faggioli)
>>>  19. Re: [Xen-devel] Xen Test Day is today! (Fabio Fantoni)
>>>
>>>
>>> ----------------------------------------------------------------------
>>>
>>> Message: 1
>>> Date: Tue, 11 Jun 2013 17:52:02 +0100
>>> From: Gordan Bobic <gordan@xxxxxxxxxx>
>>> To: Nick Khamis <symack@xxxxxxxxx>
>>> Cc: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
>>> Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN
>>> Message-ID:
>>>        <1bdcfbd8f2994ee32483e1646fcbe5ec@xxxxxxxxxxxxxxxxxxxxxxxxx>
>>> Content-Type: text/plain; charset=UTF-8; format=flowed
>>>
>>> On Tue, 11 Jun 2013 12:29:22 -0400, Nick Khamis <symack@xxxxxxxxx>
>>> wrote:
>>>> On Tue, Jun 11, 2013 at 11:30 AM, Nick Khamis  wrote:
>>>>
>>>> Hello Everyone,
>>>>
>>>> I am speaking for everyone when saying that we are really interested
>>>> in knowing what people are
>>>> using in deployment. This would be active/active replicated, block
>>>> level storage solutions at the:
>>>>
>>>> NAS Level: FreeNAS, OpenFiler (I know it's not linux), IET
>>>> FS Level: ZFS, OCFS/2, GFS/2, GlusterFS
>>>> Replication Level: DRBD vs GlusterFS
>>>> Cluster Level: OpenAIS with Pacemaker etc...
>>>>
>>>> Our hope is for an educated breakdown (i.e., comparisons, benefits,
>>>> limitation) of different setups, as opposed to
>>>> a war of words on which NAS solution is better than the other.
>>>> Comparing black boxes would also be interesting
>>>> at a performance level. Talk about pricing, not so much since we
>>>> already know that they cost an arm and a leg.
>>>>
>>>> Kind Regards,
>>>>
>>>> Nick.
>>>>
>>>> There was actually one more level I left out
>>>>
>>>> Hardware Level: PCIe bus (8x 16x V2 etc..), Interface cards (FC and
>>>> RJ), SAS (Seagate vs WD)
>>>>
>>>> I hope this thread takes off, and individuals interested in the same
>>>> topic can get some really valuable info.
>>>>
>>>> On a side note, an interesting comment I received was on the risks
>>>> that are associated with such a custom build, as
>>>> well as the lack of flexibility in some sense.
>>>
>>> The risk issue I might entertain to some extent (although
>>> personally I think the risk is LOWER if you built the system
>>> yourself and you have it adequately mirrored and backed up - if
>>> something goes wrong you actually understand how it all hangs
>>> together and can fix it yourself quickly, as opposed to hours
>>> of downtime while an engineer on the other end of the phone
>>> tries to guess what is actually wrong).
>>>
>>> But the flexibility argument is completely bogus. If you are
>>> building the solution yourself you have the flexibility to do
>>> whatever you want. When you buy an off the shelf
>>> all-in-one-black-box  appliance you are straitjacketed by
>>> whatever somebody else decided might be useful without any
>>> specific insight into your particular use case.
>>>
>>> Gordan
>>>
>>>
>>>
>>> ------------------------------
>>>
>>> Message: 2
>>> Date: Tue, 11 Jun 2013 13:03:12 -0400
>>> From: Nick Khamis <symack@xxxxxxxxx>
>>> To: Gordan Bobic <gordan@xxxxxxxxxx>
>>> Cc: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
>>> Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN
>>> Message-ID:
>>>        <CAGWRaZZga+SuBc4iV0FO=D=HLthY=DNNJ-fuDEa1re8DQygZZA@xxxxxxxxxxxxxx>
>>> Content-Type: text/plain; charset="iso-8859-1"
>>>
>>> On Tue, Jun 11, 2013 at 12:52 PM, Gordan Bobic <gordan@xxxxxxxxxx> wrote:
>>>
>>>>
>>>> The risk issue I might entertain to some extent (although
>>>> personally I think the risk is LOWER if you built the system
>>>> yourself and you have it adequately mirrored and backed up - if
>>>> something goes wrong you actually understand how it all hangs
>>>> together and can fix it yourself quickly, as opposed to hours
>>>> of downtime while an engineer on the other end of the phone
>>>> tries to guess what is actually wrong).
>>>
>>> Very True!!
>>>
>>> But apples vs apples. It comes down to the warranty on your
>>> iscsi raid controller, cpu etc.. vs. whatever guts are in the
>>> powervault. And I agree with both trains of thought...
>>> Warranty through adaptec or Dell, in either case there
>>> will be downtime.
>>>
>>>
>>>>
>>>> But the flexibility argument is completely bogus. If you are
>>>> building the solution yourself you have the flexibility to do
>>>> whatever you want. When you buy and off the shelf
>>>> all-in-one-black-box  appliance you are straitjacketed by
>>>> whatever somebody else decided might be useful without any
>>>> specific insight into your particular use case.
>>>>
>>>> Gordan
>>>
>>> For sure... The inflexibility I was referring to covers instances where
>>> one starts out on an endeavour to build a replicated NAS, and finds
>>> out the hard way about size limitations of DRBD, lack of
>>> clustering capabilities in FreeNAS, or instability issues of OpenFiler
>>> with large instances.
>>>
>>> There are also SCSI-3 persistent reservation issues; reservations are
>>> needed by some of the virtualization systems and may or may not be
>>> supported by FreeNAS (last I checked)...
>>>
>>> N.
>>> -------------- next part --------------
>>> An HTML attachment was scrubbed...
>>> URL: 
>>> <http://lists.xen.org/archives/html/xen-users/attachments/20130611/3e06eae9/attachment.html>
>>>
>>> ------------------------------
>>>
>>> Message: 3
>>> Date: Tue, 11 Jun 2013 18:13:58 +0100
>>> From: Gordan Bobic <gordan@xxxxxxxxxx>
>>> To: Nick Khamis <symack@xxxxxxxxx>
>>> Cc: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
>>> Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN
>>> Message-ID:
>>>        <7d0db81b985d5c4e76781d94626b0cd9@xxxxxxxxxxxxxxxxxxxxxxxxx>
>>> Content-Type: text/plain; charset=UTF-8; format=flowed
>>>
>>> On Tue, 11 Jun 2013 13:03:12 -0400, Nick Khamis <symack@xxxxxxxxx>
>>> wrote:
>>>> On Tue, Jun 11, 2013 at 12:52 PM, Gordan Bobic  wrote:
>>>>
>>>> The risk issue I might entertain to some extent (although
>>>> personally I think the risk is LOWER if you built the system
>>>> yourself and you have it adequately mirrored and backed up - if
>>>> something goes wrong you actually understand how it all hangs
>>>> together and can fix it yourself quickly, as opposed to hours
>>>> of downtime while an engineer on the other end of the phone
>>>> tries to guess what is actually wrong).
>>>>
>>>> Very True!!
>>>>
>>>> But apples vs apples. It comes down to the warranty on your
>>>> iscsi raid controller, cpu etc.. vs. whatever guts are in the
>>>> powervault. And I agree with both trains of thoughts...
>>>> Warranty through adaptec or Dell, in either case there
>>>> will be downtime.
>>>
>>> If you build it yourself you will save enough money that you can
>>> have 5 of everything sitting on the shelf for spares. And it'll
>>> all still be covered by a warranty.
>>>
>>>> But the flexibility argument is completely bogus. If you are
>>>> building the solution yourself you have the flexibility to do
>>>> whatever you want. When you buy and off the shelf
>>>> all-in-one-black-box appliance you are straitjacketed by
>>>> whatever somebody else decided might be useful without any
>>>> specific insight into your particular use case.
>>>>
>>>> For sure... The inflexibility I was referring to are instance where
>>>> one starts out an endeavour to build a replicated NAS, and finds
>>>> out the hard way regarding size limitations of DRBD, lack of
>>>> clustering capabilities of FreeNAS, or instability issues of
>>>> OpenFiler with large instances.
>>>
>>> Heavens forbid we should do some research, prototyping and
>>> testing before building the whole solution...
>>>
>>> It ultimately comes down to what your time is worth and
>>> how much you are saving. If you are looking to deploy 10
>>> storage boxes at $10K each vs. $50K each, you can spend
>>> a year prototyping and testing and still save a fortune.
>>> If you only need one, it may or may not be worthwhile
>>> depending on your hourly rate.
>>>
>>> Gordan
>>>
>>>
>>>
>>> ------------------------------
>>>
>>> Message: 4
>>> Date: Tue, 11 Jun 2013 13:17:10 -0400
>>> From: Errol Neal <eneal@xxxxxxxxxxxxxxxxx>
>>> To: Nick Khamis <symack@xxxxxxxxx>
>>> Cc: "xen-users@xxxxxxxxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxxxxxxxx>
>>> Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN
>>> Message-ID: <1370971030353245500@xxxxxxxxxxxxxxxxx>
>>> Content-Type: text/plain
>>>
>>>
>>>>> I've built a number of white box SANs  using everything from OpenSolaris
>>>>> and COMSTAR, Open-E, OpenFiler, SCST, IET... etc.using iSCSI and FC.
>>>>> I've settled on Ubuntu boxes booted via DRBD, running SCST or ESOS.
>>>>> From a performance perspective, I have a pretty large customer with two XCP
>>>>> pools running off a Dell MD3200F using 4Gb FC. To compare, I took a Dell
>>>>> 2970 or something like that, stuck 8 Seagate 2.5" Constellation drives in
>>>>> it and a 4Gb HBA, and installed ESOS on it.
>>>>> I never got around to finishing my testing, but the ESOS box can
>>>>> definitely keep up and things like LSI cachecade would really help to 
>>>>> bring
>>>>> it to a more enterprise-level performance with respect to random reads and
>>>>> writes.
>>>>> Lastly, there is such an abundance of DIRT CHEAP, lightly used 4Gb FC
>>>>> equipment on the market today that I find it interesting that people still
>>>>> prefer iSCSI. iSCSI is good if you have 10GbE, which is still far too
>>>>> expensive per port IMO. However, you can get 2- and 4-port 4Gb FC HBAs on
>>>>> ebay for under 100 bucks and I generally am able to purchase fully loaded
>>>>> switches (brocade 200e) for somewhere in the neighborhood of 300 bucks 
>>>>> each!
>>>>> MPIO with 2 FC ports from an initiator to a decent target can easily
>>>>> saturate the link on basic sequential r/w write tests. Not to mention,
>>>>> improved latency, access times, etc for random i/o.
>>>>
>>>> Hello Eneal,
>>>>
>>>> Thank you so much for your response. Did you experience any problems with
>>>> ESOS and your FC SAN in terms of stability?
>>>> We already have our myrinet FC cards and switches, and I agree, it was dirt
>>>> cheap.
>>>
>>> ESOS is by no means perfect. I'm running an older release because it's
>>> impossible to upgrade a production system without downtime using ESOS
>>> (currently), but I was impressed with it nonetheless and I can see where
>>> it's going.
>>> I think what has worked better for me is using SCST on Ubuntu. As long as 
>>> your hardware is stable, you should have no issues.
>>> At another site, I have two boxes in production (running iSCSI at this 
>>> site) and I've had zero non-hardware-related issues and I've been running 
>>> them in prod for 1 - 2 years.
>>>
>>>
>>>
>>>
>>> ------------------------------
>>>
>>> Message: 5
>>> Date: Tue, 11 Jun 2013 13:23:05 -0400
>>> From: Nick Khamis <symack@xxxxxxxxx>
>>> To: eneal@xxxxxxxxxxxxxxxxx
>>> Cc: "xen-users@xxxxxxxxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxxxxxxxx>
>>> Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN
>>> Message-ID:
>>>        <CAGWRaZbBjH_bMS-Zgd-qN8f5b8zey2ng-ZZaGZ8QUkoaiKZ+XQ@xxxxxxxxxxxxxx>
>>> Content-Type: text/plain; charset=ISO-8859-1
>>>
>>>>> ESOS by all means is not perfect. I'm running an older release because 
>>>>> it's impossible to
>>>>> upgrade a production system without downtime using ESOS (currently) but I 
>>>>> was
>>>>> impressed with it non the less and i can see where it's going.
>>>
>>> Thanks again Errol. Just out of curiosity, was any of this replicated?
>>>
>>> N.
>>>
>>>
>>>
>>> ------------------------------
>>>
>>> Message: 6
>>> Date: Tue, 11 Jun 2013 13:27:24 -0400
>>> From: Nick Khamis <symack@xxxxxxxxx>
>>> To: Gordan Bobic <gordan@xxxxxxxxxx>
>>> Cc: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
>>> Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN
>>> Message-ID:
>>>        <CAGWRaZbY4uqZaq5b-CWam27vG_3K=qQnZBOcM5F_7UV3jya_qw@xxxxxxxxxxxxxx>
>>> Content-Type: text/plain; charset=ISO-8859-1
>>>
>>> On 6/11/13, Gordan Bobic <gordan@xxxxxxxxxx> wrote:
>>>> Heavens forbid we should do some research, prototyping and
>>>> testing before building the whole solution...
>>>>
>>>> It ultimately comes down to what your time is worth and
>>>> how much you are saving. If you are looking to deploy 10
>>>> storage boxes at $10K each vs. $50K each, you can spend
>>>> a year prototyping and testing and still save a fortune.
>>>> If you only need one, it may or may not be worthwhile
>>>> depending on your hourly rate.
>>>>
>>>> Gordan
>>>
>>> And hence the purpose of this thread :). Gordon, you mentioned that
>>> you did use DRBD for separate instances outside of the NAS. I am
>>> curious to know about your experience with NAS-level replication, and
>>> what you feel would be a more stable and scalable fit.
>>>
>>> N.
>>>
>>>
>>>
>>> ------------------------------
>>>
>>> Message: 7
>>> Date: Tue, 11 Jun 2013 13:26:00 -0400
>>> From: Errol Neal <eneal@xxxxxxxxxxxxxxxxx>
>>> To: Gordan Bobic <gordan@xxxxxxxxxx>
>>> Cc: "xen-users@xxxxxxxxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxxxxxxxx>,
>>>        Nick Khamis <symack@xxxxxxxxx>
>>> Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN
>>> Message-ID: <1370971560963286500@xxxxxxxxxxxxxxxxx>
>>> Content-Type: text/plain
>>>
>>> On Tue, 06/11/2013 01:13 PM, Gordan Bobic <gordan@xxxxxxxxxx> wrote:
>>>>
>>>> Heavens forbid we should do some research, prototyping and
>>>> testing before building the whole solution...
>>>>
>>>> It ultimately comes down to what your time is worth and
>>>> how much you are saving. If you are looking to deploy 10
>>>> storage boxes at $10K each vs. $50K each, you can spend
>>>> a year prototyping and testing and still save a fortune.
>>>> If you only need one, it may or may not be worthwhile
>>>> depending on your hourly rate.
>>>
>>> This is a really key point. I don't like to toot my own horn, but I've done 
>>> EXTENSIVE and EXHAUSTIVE research into this. I built my first Open-E iSCSI 
>>> box in like 2006. The right combination of hard disk, hdd firmware, raid 
>>> controller, controller firmware, motherboard, memory, cpu, nics, hbas.. 
>>> everything is critical and by the time you narrow all of this down and test 
>>> sufficiently and are ready to go into production, you've spent a 
>>> significant amount of time and money.
>>> Now that said, if you're able to piggyback off the knowledge of others, then
>>> you get a nice shortcut, and to be fair, the open source software has
>>> advanced and matured so much that it's really production ready for certain
>>> workloads and environments.
>>>
>>>
>>>
>>> ------------------------------
>>>
>>> Message: 8
>>> Date: Tue, 11 Jun 2013 13:27:55 -0400
>>> From: Nick Khamis <symack@xxxxxxxxx>
>>> To: Gordan Bobic <gordan@xxxxxxxxxx>
>>> Cc: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
>>> Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN
>>> Message-ID:
>>>        <CAGWRaZYxn6y5D-q3HnTo-H92NyDaORWh7fSKR7Q6HWnF48xsqw@xxxxxxxxxxxxxx>
>>> Content-Type: text/plain; charset=ISO-8859-1
>>>
>>> Gordan, sorry for the typo!
>>>
>>> N.
>>>
>>>
>>>
>>> ------------------------------
>>>
>>> Message: 9
>>> Date: Tue, 11 Jun 2013 13:28:50 -0400
>>> From: Errol Neal <eneal@xxxxxxxxxxxxxxxxx>
>>> To: Nick Khamis <symack@xxxxxxxxx>
>>> Cc: "xen-users@xxxxxxxxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxxxxxxxx>
>>> Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN
>>> Message-ID: <1370971730454167500@xxxxxxxxxxxxxxxxx>
>>> Content-Type: text/plain
>>>
>>> On Tue, 06/11/2013 01:23 PM, Nick Khamis <symack@xxxxxxxxx> wrote:
>>>>>> ESOS by all means is not perfect. I'm running an older release because 
>>>>>> it's impossible to
>>>>>> upgrade a production system without downtime using ESOS (currently) but 
>>>>>> I was
>>>>>> impressed with it non the less and i can see where it's going.
>>>>
>>>> Thanks again Errol. Just our of curiosity was any of this replicated?
>>>
>>> That is my next step. I had been planning on using Infiniband, SDP and DRBD,
>>> but there are some funky issues there. I just never got around to it.
>>> I think what's necessary over replication is a dual head configuration.
>>> A combination of RAID1, CLVM, Pacemaker, SCST and shared storage between 
>>> two nodes should suffice.
>>>
>>>
>>>
>>> ------------------------------
>>>
>>> Message: 10
>>> Date: Tue, 11 Jun 2013 13:32:16 -0400
>>> From: Nick Khamis <symack@xxxxxxxxx>
>>> To: eneal@xxxxxxxxxxxxxxxxx
>>> Cc: Gordan Bobic <gordan@xxxxxxxxxx>,   "xen-users@xxxxxxxxxxxxxxxxxxx"
>>>        <xen-users@xxxxxxxxxxxxxxxxxxx>
>>> Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN
>>> Message-ID:
>>>        <CAGWRaZZzsxXSRuH+XgfULrVcX7AGiSueA9f9WLzarMgseByNpA@xxxxxxxxxxxxxx>
>>> Content-Type: text/plain; charset=ISO-8859-1
>>>
>>>> Now that said, if you able to piggy back off the knowledge of others, then
>>>> you get a nice short cut and to be fair, the open source software has
>>>> advanced and matured so much that it's really production ready for certain
>>>> workloads and environments.
>>>
>>> We run our BGP links on Quagga Linux boxes on IBM machines,
>>> transmitting an average of 700Mbps with packet sizes upwards of
>>> 900-1000 bytes. I don't lose sleep over them....
>>>
>>> N.
>>>
>>>
>>>
>>> ------------------------------
>>>
>>> Message: 11
>>> Date: Tue, 11 Jun 2013 19:17:04 +0100
>>> From: Gordan Bobic <gordan@xxxxxxxxxx>
>>> To: Nick Khamis <symack@xxxxxxxxx>
>>> Cc: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
>>> Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN
>>> Message-ID: <51B769A0.8040301@xxxxxxxxxx>
>>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>>>
>>> On 06/11/2013 06:27 PM, Nick Khamis wrote:
>>>> On 6/11/13, Gordan Bobic <gordan@xxxxxxxxxx> wrote:
>>>>>  Heavens forbid we should do some research, prototyping and
>>>>>  testing before building the whole solution...
>>>>>
>>>>>  It ultimately comes down to what your time is worth and
>>>>>  how much you are saving. If you are looking to deploy 10
>>>>>  storage boxes at $10K each vs. $50K each, you can spend
>>>>>  a year prototyping and testing and still save a fortune.
>>>>>  If you only need one, it may or may not be worthwhile
>>>>>  depending on your hourly rate.
>>>>>
>>>>>  Gordan
>>>>
>>>> And hence the purpose of this thread :). Gordon, you mentioned that
>>>> you did use DRBD
>>>> for separate instances outside of the NAS. I am curious to know of
>>>> your experience with NAS level replication. What you feel would be a
>>>> more stable and scalable fit.
>>>
>>> It largely depends on what exactly you want to do with it. For a NAS,
>>> I use ZFS + lsyncd for near-synchronous replication (rsync-on-write).
>>>
>>> For a SAN I tend to use ZFS with zvols exported over iSCSI, with periodic
>>> ZFS send to the backup NAS. If you need real-time replication for
>>> fail-over purposes, I would probably run DRBD on top of ZFS zvols.
>>>
>>> Gordan
>>>
>>>
>>>
>>> ------------------------------
>>>
>>> Message: 12
>>> Date: Tue, 11 Jun 2013 19:28:40 +0100
>>> From: Gordan Bobic <gordan@xxxxxxxxxx>
>>> To: eneal@xxxxxxxxxxxxxxxxx
>>> Cc: "xen-users@xxxxxxxxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxxxxxxxx>,
>>>        Nick Khamis <symack@xxxxxxxxx>
>>> Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN
>>> Message-ID: <51B76C58.6090303@xxxxxxxxxx>
>>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>>>
>>> On 06/11/2013 06:28 PM, Errol Neal wrote:
>>>> On Tue, 06/11/2013 01:23 PM, Nick Khamis <symack@xxxxxxxxx> wrote:
>>>>>>> ESOS by all means is not perfect. I'm running an older release because 
>>>>>>> it's impossible to
>>>>>>> upgrade a production system without downtime using ESOS (currently) but 
>>>>>>> I was
>>>>>>> impressed with it non the less and i can see where it's going.
>>>>>
>>>>> Thanks again Errol. Just our of curiosity was any of this replicated?
>>>>
>>>> That is my next step. I had been planning of using Ininiband,
>>>> SDP and DRBD, but there are some funky issues there. I just
>>>> never got around to it.
>>>
>>> The first thing that jumps out at me here is infiniband. Do you have the
>>> infrastructure and cabling in place to actually do that? This can be
>>> very relevant depending on your environment. If you are planning to get
>>> some cheap kit on eBay to do this, that's all well and good, but will
>>> you be able to get a replacement if something breaks in a year or three?
>>> One nice thing about ethernet is that it will always be around, it will
>>> always be cheap, and it will always be compatible.
>>>
>>> For most uses multiple gigabit links bonded together are ample. Remember
>>> that you will get, on a good day, about 120 IOPS per disk. Assuming a
>>> typical 4K operation size that's 480KB/s/disk. At 16KB/op that is still
>>> 1920KB/s/disk. At that rate you'd need 50 disks to saturate a single
>>> gigabit channel. And you can bond a bunch of them together for next to
>>> nothing in switch/NIC costs.
>>>
>>>> I think what's necessary over replication is a dual head
>>>> configuration.
>>>
>>> Elaborate?
>>>
>>>> A combination of RAID1, CLVM, Pacemaker, SCST and shared storage
>>>> between two nodes should suffice.
>>>
>>> In what configuration?
>>>
>>> Gordan
>>>
>>>
>>>
>>> ------------------------------
>>>
>>> Message: 13
>>> Date: Tue, 11 Jun 2013 16:39:39 -0500
>>> From: ranjith krishnan <ranjithkrishnan1@xxxxxxxxx>
>>> To: xen-users@xxxxxxxxxxxxx
>>> Subject: [Xen-users] Xen 4.1 compile from source and install on Fedora
>>>        17
>>> Message-ID:
>>>        <CAEybL6wFUpGJJa_BHumwR_TgVnN63qJ4ZHGF+EmdPF9mcaD7mQ@xxxxxxxxxxxxxx>
>>> Content-Type: text/plain; charset="iso-8859-1"
>>>
>>> Hello,
>>>
>>> I am relatively new to Xen and need help compiling and installing Xen from
>>> source.
>>>
>>> Using some tutorials online, I have got Xen working with the 'yum install
>>> xen' method.
>>> I used virt-manager and was able to get 2 domUs working ( CentOS 5, and
>>> Fedora 16).
>>> My domUs reside on Logical Volumes in an LVM, on a second hard disk sda2,
>>> while my dom0 is installed on sda1. Everything is working fine in this
>>> configuration.
>>> I want to use Xen 4.1 since I want to continue using
>>> virt-install/virt-manager for domU provisioning.
>>>
>>> For my work now, I want to install Xen from source and try to modify some
>>> source code files and test things out.
>>> I have seen some tutorials online, and I am not sure they give the complete
>>> picture.
>>> For ex,
>>> http://wiki.xen.org/wiki/Xen_4.2_Build_From_Source_On_RHEL_CentOS_Fedora
>>> Fedora 17 uses grub 2. When we do a yum install, the grub entries are taken
>>> care of and things just work.
>>> When I install from source, this is not the case. Are there any tutorials
>>> which give a complete picture?
>>> Or if someone has got Xen working from source on Fedora 16, 17 or 18, can
>>> you give me tips on how to edit grub configuration so that xen boots ok.
>>> I have tried and failed once compiling and installing Xen on Fedora 16,
>>> which is when I used yum.
>>>
>>>
>>> --
>>> Ranjith krishnan
>>> -------------- next part --------------
>>> An HTML attachment was scrubbed...
>>> URL: 
>>> <http://lists.xen.org/archives/html/xen-users/attachments/20130611/34655873/attachment.html>
>>>
>>> ------------------------------
>>>
>>> Message: 14
>>> Date: Tue, 11 Jun 2013 23:40:04 +0100
>>> From: Wei Liu <wei.liu2@xxxxxxxxxx>
>>> To: ranjith krishnan <ranjithkrishnan1@xxxxxxxxx>
>>> Cc: xen-users@xxxxxxxxxxxxx, wei.liu2@xxxxxxxxxx
>>> Subject: Re: [Xen-users] Xen 4.1 compile from source and install on
>>>        Fedora 17
>>> Message-ID: <20130611224004.GA25483@xxxxxxxxxxxxxxxxxxxxx>
>>> Content-Type: text/plain; charset="us-ascii"
>>>
>>> Hello,
>>>
>>> I've seen your mail to xen-devel as well. Given that you're still in
>>> configuration phase, my gut feeling is that this is the proper list to
>>> post. When you have questions about Xen code / development workflow you
>>> can ask them on xen-devel.
>>>
>>> On Tue, Jun 11, 2013 at 04:39:39PM -0500, ranjith krishnan wrote:
>>>> Hello,
>>>>
>>>> I am relatively new to Xen and need help compiling and installing Xen from
>>>> source.
>>>>
>>>> Using some tutorials online, I have got Xen working with the 'yum install
>>>> xen' method.
>>>> I used virt-manager and was able to get 2 domUs working ( CentOS 5, and
>>>> Fedora 16).
>>>> My domUs reside on Logical Volumes in an LVM, on a second hard disk sda2,
>>>> while my dom0 is installed on sda1. Everything is working fine in this
>>>> configuration.
>>>> I want to use Xen 4.1 since I want to continue using
>>>> virt-install/virt-manager for domU provisioning.
>>>>
>>>> For my work now, I want to install Xen from source and try to modify some
>>>> source code files and test things out.
>>>> I have seen some tutorials online, and I am not sure they give the complete
>>>> picture.
>>>> For ex,
>>>> http://wiki.xen.org/wiki/Xen_4.2_Build_From_Source_On_RHEL_CentOS_Fedora
>>>> Fedora 17 uses grub 2. When we do a yum install, the grub entries are taken
>>>> care of and things just work.
>>>> When I install from source, this is not the case. Are there any tutorials
>>>> which give a complete picture?
>>>> Or if someone has got Xen working from source on Fedora 16, 17 or 18, can
>>>> you give me tips on how to edit grub configuration so that xen boots ok.
>>>> I have tried and failed once compiling and installing Xen on Fedora 16,
>>>> which is when I used yum.
>>>
>>> For the grub entry, the simplest method is to place your binary under
>>> /boot and invoke update-grub2 (which is also invoked when you do 'yum
>>> install' if I'm not mistaken). In theory it should do the right thing.
>>>
>>> Another method to solve your problem is to modify grub.conf yourself.
>>> Just copy the entry that 'yum install' adds in grub.conf, replace the
>>> binary file name with the one you compile and you're all set.
>>>
>>> You might also find this page useful if you're to develop Xen.
>>> http://wiki.xen.org/wiki/Xen_Serial_Console
>>> (it also contains sample entries for legacy grub and grub2, nice ;-) )
>>>
>>>
>>> Wei.
>>>
>>>>
>>>>
>>>> --
>>>> Ranjith krishnan
>>>
>>>> _______________________________________________
>>>> Xen-users mailing list
>>>> Xen-users@xxxxxxxxxxxxx
>>>> http://lists.xen.org/xen-users
>>>
>>>
>>>
>>>
>>> ------------------------------
>>>
>>> Message: 15
>>> Date: Tue, 11 Jun 2013 19:01:33 -0600
>>> From: jacek burghardt <jaceksburghardt@xxxxxxxxx>
>>> To: xen-users <xen-users@xxxxxxxxxxxxx>
>>> Subject: [Xen-users] pv assign pci device
>>> Message-ID:
>>>        <CAHyyzzQ53ZHYExKQ15TQSMdaXuN6t7_+wuJnFMFywvwJDYrBGA@xxxxxxxxxxxxxx>
>>> Content-Type: text/plain; charset="iso-8859-1"
>>>
>>> I have a Xeon quad-core server. I wonder if it is possible to assign a PCI
>>> USB device to a PV guest if the server does not support IOMMU (VT-d).
>>> I have blacklisted the USB and HID modules, and the devices are listed as
>>> assignable,
>>> but when I add them to the PV guest I get this error: libxl: error: libxl: error:
>>> libxl_pci.c:989:libxl__device_pci_reset: The kernel doesn't support reset
>>> from sysfs for PCI device 0000:00:1d.0
>>> libxl: error: libxl_pci.c:989:libxl__device_pci_reset: The kernel doesn't
>>> support reset from sysfs for PCI device 0000:00:1d.1
>>> Daemon running with PID 897
>>> -------------- next part --------------
>>> An HTML attachment was scrubbed...
>>> URL: 
>>> <http://lists.xen.org/archives/html/xen-users/attachments/20130611/6e5ccfba/attachment.html>
>>>
>>> ------------------------------
>>>
>>> Message: 16
>>> Date: Wed, 12 Jun 2013 07:13:15 +0100
>>> From: Gordan Bobic <gordan@xxxxxxxxxx>
>>> To: jacek burghardt <jaceksburghardt@xxxxxxxxx>
>>> Cc: xen-users <xen-users@xxxxxxxxxxxxx>
>>> Subject: Re: [Xen-users] pv assign pci device
>>> Message-ID: <51B8117B.3020404@xxxxxxxxxx>
>>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>>>
>>> On 06/12/2013 02:01 AM, jacek burghardt wrote:
>>>> I have xeon quad core server I wonder if is possible to assign pci usb
>>>> device to pv if the server does not suport iommu vd-t
>>>> I had blacklisted usb modules and hid devices and devices are listed as
>>>> assignable
>>>> but when I add them to pv I get this error libxl: error: libxl: error:
>>>> libxl_pci.c:989:libxl__device_pci_reset: The kernel doesn't support
>>>> reset from sysfs for PCI device 0000:00:1d.0
>>>> libxl: error: libxl_pci.c:989:libxl__device_pci_reset: The kernel
>>>> doesn't support reset from sysfs for PCI device 0000:00:1d.1
>>>> Daemon running with PID 897
>>>
>>> I don't think that is a fatal error. I get that on, for example, the VGA
>>> card passed through to the VM, but it still works inside the domU. It
>>> just means the device doesn't support FLR.
>>>
>>> Gordan
>>>
>>>
>>>
>>>
>>> ------------------------------
>>>
>>> Message: 17
>>> Date: Wed, 12 Jun 2013 00:30:06 +0200
>>> From: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
>>> To: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
>>> Cc: xen-users@xxxxxxxxxxxxx, Russ Pavlicek
>>>        <russell.pavlicek@xxxxxxxxxxxxxx>
>>> Subject: Re: [Xen-users] Blog: Installing the Xen hypervisor on Fedora
>>>        19
>>> Message-ID: <1370989806.20028.51.camel@Solace>
>>> Content-Type: text/plain; charset="utf-8"
>>>
>>> On gio, 2013-06-06 at 09:52 +0100, Ian Campbell wrote:
>>>> On Wed, 2013-06-05 at 22:11 -0400, Russ Pavlicek wrote:
>>>>> Saw this post from Major Hayden of Rackspace:
>>>>>
>>>>> http://major.io/2013/06/02/installing-the-xen-hypervisor-on-fedora-19/
>>>>
>>>> It'd be good to get this linked from
>>>> http://wiki.xen.org/wiki/Category:Fedora
>>> Well, although I'm very happy about blog posts like these starting to
>>> come up spontaneously all around the place, allow me to say that we have
>>> the Fedora host install page on the Wiki
>>> (http://wiki.xen.org/wiki/Fedora_Host_Installation) that contains
>>> exactly the same information (it actually has much more info, and it is
>>> of course part of the Fedora wiki category!)
>>>
>>> That being said, I guess I can add a section there (in the Fedora
>>> Category page) about 'external' pages, posts, etc... Let me think how
>>> and where to put it...
>>>
>>> Thanks and Regards,
>>> Dario
>>>
>>> --
>>> <<This happens because I choose it to happen!>> (Raistlin Majere)
>>> -----------------------------------------------------------------
>>> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
>>> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
>>>
>>> -------------- next part --------------
>>> A non-text attachment was scrubbed...
>>> Name: signature.asc
>>> Type: application/pgp-signature
>>> Size: 198 bytes
>>> Desc: This is a digitally signed message part
>>> URL: 
>>> <http://lists.xen.org/archives/html/xen-users/attachments/20130612/67c37d4d/attachment.pgp>
>>>
>>> ------------------------------
>>>
>>> Message: 18
>>> Date: Wed, 12 Jun 2013 09:01:56 +0200
>>> From: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
>>> To: xen-devel@xxxxxxxxxxxxx
>>> Cc: xen-users@xxxxxxxxxxxxx, xen-api@xxxxxxxxxxxxx
>>> Subject: [Xen-users] Xen Test Day is today!
>>> Message-ID: <1371020516.9946.5.camel@Abyss>
>>> Content-Type: text/plain; charset="utf-8"
>>>
>>> Hi everybody,
>>>
>>> Allow me to remind you that the 4th Xen Test Day is happening today, so
>>> come and join us on #xentest on freenode!
>>>
>>> We will be testing Xen 4.3 RC4, released yesterday and, probably, *the*
>>> *last* release candidate! For more info, see:
>>>
>>> - on Xen Test Days:
>>>    http://wiki.xen.org/wiki/Xen_Test_Days
>>>
>>> - on getting and testing RC4:
>>>    http://wiki.xen.org/wiki/Xen_4.3_RC4_test_instructions
>>>
>>> - for generic testing information:
>>>    http://wiki.xen.org/wiki/Testing_Xen
>>>
>>> See you all on freenode, channel #xentest.
>>>
>>> Regards
>>> Dario
>>>
>>> --
>>> <<This happens because I choose it to happen!>> (Raistlin Majere)
>>> -----------------------------------------------------------------
>>> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
>>> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
>>>
>>> -------------- next part --------------
>>> A non-text attachment was scrubbed...
>>> Name: signature.asc
>>> Type: application/pgp-signature
>>> Size: 198 bytes
>>> Desc: This is a digitally signed message part
>>> URL: 
>>> <http://lists.xen.org/archives/html/xen-users/attachments/20130612/2fcb2e25/attachment.pgp>
>>>
>>> ------------------------------
>>>
>>> Message: 19
>>> Date: Wed, 12 Jun 2013 09:44:02 +0200
>>> From: Fabio Fantoni <fabio.fantoni@xxxxxxx>
>>> To: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
>>> Cc: xen-users@xxxxxxxxxxxxx, xen-api@xxxxxxxxxxxxx,
>>>        xen-devel@xxxxxxxxxxxxx
>>> Subject: Re: [Xen-users] [Xen-devel] Xen Test Day is today!
>>> Message-ID: <51B826C2.3030706@xxxxxxx>
>>> Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
>>>
>>> On 12/06/2013 09:01, Dario Faggioli wrote:
>>>> Hi everybody,
>>>>
>>>> Allow me to remind you that the 4th Xen Test Day is happening today, so
>>>> come and join us on #xentest on freenode!
>>>>
>>>> We will be testing Xen 4.3 RC4, released yesterday and, probably, *the*
>>>> *last* release candidate! For more info, see:
>>>>
>>>>  - on Xen Test Days:
>>>>     http://wiki.xen.org/wiki/Xen_Test_Days
>>>>
>>>>  - on getting and testing RC4:
>>>>     http://wiki.xen.org/wiki/Xen_4.3_RC4_test_instructions
>>>>
>>>>  - for generic testing information:
>>>>     http://wiki.xen.org/wiki/Testing_Xen
>>>>
>>>> See you all on freenode, channel #xentest.
>>>>
>>>> Regards
>>>> Dario
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> Xen-devel mailing list
>>>> Xen-devel@xxxxxxxxxxxxx
>>>> http://lists.xen.org/xen-devel
>>> I saw that the qemu upstream tag is not updated (in Config.mk
>>> QEMU_UPSTREAM_REVISION ?= qemu-xen-4.3.0-rc1) but on git there are new
>>> patches, why?
>>> -------------- next part --------------
>>> An HTML attachment was scrubbed...
>>> URL: 
>>> <http://lists.xen.org/archives/html/xen-users/attachments/20130612/d02fbfa4/attachment.html>
>>>
>>> ------------------------------
>>>
>>> _______________________________________________
>>> Xen-users mailing list
>>> Xen-users@xxxxxxxxxxxxx
>>> http://lists.xen.org/xen-users
>>>
>>>
>>> End of Xen-users Digest, Vol 100, Issue 17
>>> ******************************************
>>
>> _______________________________________________
>> Xen-users mailing list
>> Xen-users@xxxxxxxxxxxxx
>> http://lists.xen.org/xen-users

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

