Re: [Xen-users] How many guests
Hi Jonathan,
nearly every storage manufacturer offers JBODs with SAS expanders for
their storage lines. Any of them will do; the manufacturer hardly
matters, because a JBOD is an almost entirely passive component. So you
can use a cheap one from Infortrend or Promise, for example:
http://www.infortrend.com/main/2_product/es_s16s-j1000-rs.asp (S16S-J1000-S)
On an iSCSI storage box, you can export each block device as an iSCSI
target. On the Xen host side, you then simply connect to the exported
block device with an initiator.
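
As a rough sketch of the target side, assuming a Linux storage box
running the tgt framework (the IQN, target ID and backing device below
are placeholder examples):

  # create target 1 and attach a block device to it as LUN 1
  tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2010-06.example:vmstore
  tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/sdb
  # allow initiators to connect (restrict this by IP in production)
  tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
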
Best Regards
Michael Schmidt
On 07.06.10 10:45, Jonathan Tripathy wrote:
Hi Michael,
Do you have any links to any of those devices you mentioned?
Also, would using a software iSCSI initiator defeat the purpose of
using RAID10 for performance?
Thanks
Hi Jonathan,
for iSCSI, a dedicated iSCSI storage appliance is advisable, or Open-E
if you plan to run it on an x86 server.
But you need a dedicated LAN link and a storage box with its own RAID
controller, CPU, memory and so on.
And an iSCSI host bus adapter (around 600€) on the Xen host side. If
you don't have an iSCSI HBA, you have to use a software iSCSI initiator
(I don't like that piece of software, and the iSCSI processing then
costs CPU on your host).
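
For reference, the software-initiator route with open-iscsi looks
roughly like this (the portal address and IQN are placeholder
examples):

  # discover the targets the storage box exports
  iscsiadm -m discovery -t sendtargets -p 192.168.10.50
  # log in; the LUN then appears as a local block device
  iscsiadm -m node -T iqn.2010-06.example:vmstore -p 192.168.10.50 --login
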
If you just want to run a few more disks for one server (without HA
options), you don't need all this overhead.
In addition to your Xen host, buy a SAS RAID controller with an
external port (+150€), plus a 12-disk JBOD (800€) and the disks
(SATA / SAS).
This gives you a lower TCO and higher energy efficiency (3-4U and 450W
for 60 VMs).
Best Regards
Michael Schmidt
On 07.06.10 09:16, Jonathan Tripathy wrote:
> Hi Michael,
>
> You state that iSCSI is reliable but expensive. But isn't iSCSI nearly
> free?
>
> I agree with you that Fibre Channel systems are very expensive.
>
> Would iSCSI over IP be ok?
>
> Thanks
>
>
> On 07/06/10 08:12, Michael Schmidt wrote:
>> This is not completely correct.
>> With a RAID1, you have the read performance of two disks but only
>> the write performance of a single disk.
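>>
>> For example, assuming an illustrative ~180 random IOPS per 15k SAS
>> disk: reads can be spread across both RAID1 members (~360 read
>> IOPS), but every write must hit both disks, so writes stay at ~180
>> IOPS.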
>>
>> On the other points in this thread:
>> If you use network storage, the connection imposes a bandwidth
>> limit. But in most cases the raw bandwidth is not the bottleneck -
>> the IOs per second are.
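>>
>> You can check that on your own hardware with a random-I/O benchmark;
>> a minimal sketch with fio (file name, size and iodepth are example
>> values):
>>
>>   # measure 4k random read IOPS, bypassing the page cache
>>   fio --name=randread --filename=test.bin --size=1G --rw=randread --bs=4k \
>>     --direct=1 --ioengine=libaio --iodepth=32 --runtime=60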
>>
>> Network storage using NFS or NBD is not stable enough in my eyes.
>> iSCSI and FC SANs are really stable, but expensive as well. But
>> there is another, much less expensive way:
>>
>> Most servers are available with an external SAS port. There you can
>> connect a JBOD with 12 - 16 disk bays (DAS) over a SAS link.
>> These disks can be managed by the server's RAID controller.
>>
>> Best Regards
>>
>> Michael Schmidt
>>
>>
>> On 06.06.10 23:21, Bart Coninckx wrote:
>>> RAID1 does not perform better than a single disk. It will still
>>> depend on what those 5 to 10 VMs would do. It still might be
>>> stretching it. For 10 webservers visited by 5 users per hour, I
>>> would say no problem. For 5 heavily used database servers it will
>>> be another story.
>>>
>>> I guess the only real way to find out is to put your guests on
>>> there and try. If you clone them, you will know quite fast.
>>>
>>>
>>> On Sunday 06 June 2010 21:38:54 Jonathan Tripathy wrote:
>>>> Thanks Michael,
>>>>
>>>> I understand what you are saying.
>>>>
>>>> With a small setup such as a RAID1 array, how many VMs could I
>>>> rent out?
>>>>
>>>> It doesn't matter if it's a small number, it's just to utilise the
>>>> server a bit.
>>>>
>>>> Think it would cope with 5-10?
>>>>
>>>> Thanks
>>>>
>>>> Jonathan
>>>>
>>>> On 06/06/10 20:18, Michael Schmidt wrote:
>>>>> Hi Jonathan,
>>>>>
>>>>> if you plan to migrate existing physical machines to Xen VMs, or
>>>>> you have some comparable machines, you can easily gather runtime
>>>>> statistics and calculate the usage. Look at the running iostat
>>>>> output and the CPU usage.
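>>>>>
>>>>> For example (interval in seconds; r/s + w/s roughly give the
>>>>> IOPS, %util shows saturation):
>>>>>
>>>>>   # extended per-device statistics, refreshed every 5 seconds
>>>>>   iostat -dx 5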
>>>>>
>>>>> If you plan to rent generic VMs on this server to customers, your
>>>>> disk / RAID setup will absolutely be the bottleneck.
>>>>> A solution at this point is not easy. If you have many write IOs,
>>>>> use RAID10 with 4 to 8 disks. With many reads, RAID6 or RAID50
>>>>> with the same number of disks.
>>>>> In either case I can recommend 15k rpm SAS disks.
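>>>>>
>>>>> (As a software-RAID sketch of the write-heavy variant - the
>>>>> device names are examples, and a hardware controller achieves the
>>>>> same layout:)
>>>>>
>>>>>   # four-disk RAID10 across sdb..sde
>>>>>   mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]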
>>>>>
>>>>> Then you can run 29 VMs. Or 60 VMs with 16GB memory and 2 CPUs.
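>>>>>
>>>>> (The maths, with your 256MB VPSs and the 768MB Dom0 reserve from
>>>>> my earlier mail: 8192MB - 768MB = 7424MB, and 7424 / 256 = 29
>>>>> VMs. With 16GB: 16384 - 768 = 15616, and 15616 / 256 = 61, so
>>>>> roughly 60.)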
>>>>>
>>>>> But note: you cannot set a disk priority for the VMs. So if one
>>>>> VM does heavy disk IO, all of the other VMs slow down.
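>>>>>
>>>>> (At best you can demote a guest's userspace disk backend with
>>>>> ionice - a rough workaround, not a real per-VM priority, and it
>>>>> assumes a file-backed disk served by a blktap tapdisk process;
>>>>> the pid is a placeholder:)
>>>>>
>>>>>   # push one guest's disk backend into the idle I/O class
>>>>>   ionice -c3 -p <tapdisk-pid>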
>>>>>
>>>>> Best Regards
>>>>>
>>>>> Michael Schmidt
>>>>>
>>>>> On 06.06.10 20:45, Jonathan Tripathy wrote:
>>>>>> Hi Michael,
>>>>>>
>>>>>> Thanks for your email.
>>>>>>
>>>>>> This is just an idea that I have floating around in my head:
>>>>>> maybe I'd like to rent out some VPSs to customers, just to
>>>>>> utilise my machine, which will be sitting nearly idle in a
>>>>>> co-lo.
>>>>>>
>>>>>> I'd give out VPSs with 256MB RAM and probably 5Mbps connection
>>>>>> speed.
>>>>>>
>>>>>> So the answer is, I don't know what will be running on them.
>>>>>> However, I could write up an "acceptable use policy", as well as
>>>>>> use some throttling/scheduling?
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> On 06/06/10 19:39, Michael Schmidt wrote:
>>>>>>> Hi Jonathan,
>>>>>>>
>>>>>>> the question is: what kind of VMs?
>>>>>>> A single VM can over-utilise a much bigger machine.
>>>>>>> Or, on the other hand, you can run 40 VMs on a smaller machine.
>>>>>>>
>>>>>>> Each resource can be a bottleneck:
>>>>>>>
>>>>>>> - Memory - this is really easy to calculate: available minus
>>>>>>> 768MB (reserved for Dom0; that should be enough in this case).
>>>>>>> - CPU - here we need a VM statistic.
>>>>>>> - Disk bandwidth - here we need a VM statistic, but in most
>>>>>>> cases this is not the bottleneck.
>>>>>>> - Disk IOPS - here we need a VM statistic; in most cases this
>>>>>>> is the bottleneck.
>>>>>>>
>>>>>>> What kind of VMs do you plan to run?
>>>>>>> Webservers / mailservers / database servers ...?
>>>>>>>
>>>>>>> Best Regards
>>>>>>>
>>>>>>> Michael Schmidt
>>>>>>>
>>>>>>> On 06.06.10 00:54, Jonathan Tripathy wrote:
>>>>>>>> Hi Everyone,
>>>>>>>>
>>>>>>>> I have a Dell R210 server which has a Xeon X3430 quad-core CPU
>>>>>>>> (2.4GHz x 4) and 8GB of RAM. I intend to use the H200
>>>>>>>> controller in a RAID1 setup.
>>>>>>>>
>>>>>>>> How many VMs do you think I'd be able to run on this machine?
>>>>>>>> Is 20 pushing it?
>>>>>>>>
>>>>>>>> I'd say most (if not all) guests would be in PV mode.
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>>
>>>>>>>>
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users