
Re: [Xen-users] Please help estimate number of the domUs


  • To: <xen-users@xxxxxxxxxxxxx>
  • From: <J.Witvliet@xxxxxxxxx>
  • Date: Tue, 15 Jan 2013 09:25:33 +0100
  • Accept-language: en-US, nl-NL
  • Delivery-date: Tue, 15 Jan 2013 08:26:33 +0000
  • List-id: Xen user discussion <xen-users.lists.xen.org>

If you try to start up all the images at the same time, or when running very 
disk-intensive jobs, like a compile farm, performance will probably be 
sub-optimal ;-)
On the other hand, when all of these machines are used interactively by 
remote people, your LAN is probably the weakest link.
If you have the same box as we do, you have four NICs, and you will need them 
to spread the load.
You might even consider adding some extra 10Gb NICs.

So the number of domUs you can practically use depends on their purpose...
If it is just a bunch of web servers that people reach once in a while, you 
might get close to the upper limits.
On the other hand, if they are used as remote desktops, you will probably be 
memory/network bound, and when acting as a compile/crunch farm, you'll be 
disk bound.
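
A rough back-of-the-envelope sketch of that network point (Python; the NIC 
speed and guest count are illustrative assumptions, not measurements):

    # Per-domU bandwidth if the LAN is the bottleneck.
    nic_count = 4               # four 1 Gb NICs, bonded to spread the load
    nic_speed_mbit = 1000
    guests = 100                # hypothetical number of interactive domUs
    per_guest = nic_count * nic_speed_mbit / guests
    print(per_guest, "Mbit/s per domU")   # 40.0 -- thin for remote desktops

With 10Gb NICs the same division lands at a few hundred Mbit/s per guest, 
which is why adding them is worth considering.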

hw


-----Original Message-----
From: xen-users-bounces@xxxxxxxxxxxxx [mailto:xen-users-bounces@xxxxxxxxxxxxx] 
On Behalf Of admin@xxxxxxxxxxx
Sent: Sunday, January 13, 2013 8:52 PM
To: xen-users@xxxxxxxxxxxxx
Subject: Re: [Xen-users] Please help estimate number of the domUs

You should measure the performance of the SAN using something like 
IOmeter (running IOmeter on the hardware you plan to run XenServer or 
XCP on).  Assuming you configure those drives in RAID10, I would guess 
that SAN would deliver about 2,000 to 5,000 IOPS.  If you use RAID5 
(please don't), then you will see far fewer IOPS during mixed read and 
write tests.
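
A minimal sketch of where that gap comes from (Python; the per-drive IOPS 
and the read/write mix are assumptions for illustration, not measured 
values):

    # Classic RAID write-penalty estimate: a RAID10 write costs 2 disk ops,
    # a RAID5 write costs 4 (read data, read parity, write data, write
    # parity).
    def effective_iops(drives, iops_per_drive, read_frac, write_penalty):
        raw = drives * iops_per_drive
        return raw / (read_frac + (1 - read_frac) * write_penalty)

    # 24 drives at ~150 IOPS each (10k SAS guess), 70/30 read/write mix:
    print(effective_iops(24, 150, 0.7, 2))   # RAID10: ~2769 IOPS
    print(effective_iops(24, 150, 0.7, 4))   # RAID5:  ~1895 IOPS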

If you want to deploy 100 VMs onto that SAN, then each VM only gets 
20 to 50 IOPS (assuming RAID10).  The performance in each VM will 
be less than fantastic.  If the VMs need to do any IO-intensive tasks, 
the owners of the VMs are probably going to complain about sluggish 
performance.  I don't think the SAN you listed can deliver enough IOPS 
to satisfy 100 VMs.
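
Continuing the sketch above, the per-VM share is a straight division 
(numbers again assume the RAID10 estimate):

    san_iops_low, san_iops_high = 2000, 5000   # RAID10 guess from above
    vms = 100
    print(san_iops_low / vms, san_iops_high / vms)   # 20.0 50.0 IOPS per VM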

On 1/13/2013 12:17 PM, Andrey wrote:
> Well, storage is a direct-connect HP P2000 G3 FC dual-controller 
> array with 24 x 600 GB disks in a dual-path configuration (two HBA ports -> 
> two controller ports). I guess that is quite enough.
>
> 13.01.2013 20:45, admin@xxxxxxxxxxx wrote:
>> You will probably run out of disk IO before you run into any hard limits
>> in XenServer or XCP.
>>
>> What type of SAN are you going to use?  What type of network
>> interconnect will you use to link your XenServer/XCP nodes to your SAN?
>> How many IOPS does your SAN deliver over your chosen network 
>> interconnect?
>>
>> On 1/13/2013 9:03 AM, Andrey wrote:
>>> Sure, I will try. I see in the XenServer 6.1 FAQ that the maximum
>>> supported number of guests is 150, and that it requires increasing
>>> dom0_mem to a maximum of 4096. It's obvious that the internal limits
>>> are not quite realistic, so it will be a good result for me if we are
>>> able to run at least 100 guests. That seems a more realistic number,
>>> although some resources put the maximum number of VMs at 4-10 per CPU
>>> core (so 32-80 in my case). But in all these cases 192 GB RAM would be
>>> redundant, I think.
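>>>
>>> A quick illustration of that memory math (Python; the 4 GB dom0 figure
>>> is the FAQ maximum mentioned above, the even split across guests is
>>> just an assumption):
>>>
>>>     dom0_mem_gb = 4        # dom0_mem maximum from the FAQ
>>>     total_ram_gb = 192
>>>     guests = 100
>>>     per_guest_gb = (total_ram_gb - dom0_mem_gb) / guests
>>>     print(per_guest_gb)    # ~1.88 GB per guest: ample for light web
>>>                            # servers, tight for remote desktops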
>>>
>>> With regards, Andrey
>>>
>>> 11.01.2013 16:43, Wei Liu ÐÐÑÐÑ:
>>>> On Fri, 2013-01-11 at 12:24 +0000, Andrey wrote:
>>>>> Thank you for the answer
>>>>>
>>>>> I'm really considering the case of creating as many DomUs as possible
>>>>> under a typical load and getting practical info.
>>>>>
>>>>> What about network capacity? Does this math apply to the network
>>>>> resources as well? Should we shape the DomUs' bandwidth to prevent
>>>>> network overload? Can the CPU be a bottleneck in this configuration?
>>>>>
>>>>
>>>> The math I did was to show you some internal infrastructure limits
>>>> that I know of.
>>>>
>>>> CPU / network overloading is another topic. TBH I haven't done stress
>>>> tests on CPUs and the network.
>>>>
>>>> Whether you will hit any bottlenecks in CPU / network relates closely
>>>> to your use case. Booting up DomUs and running some typical workload
>>>> is a good idea.
>>>>
>>>>
>>>> Wei.




______________________________________________________________________
This message may contain information that is not intended for you. If you are 
not the addressee or if this message was sent to you by mistake, you are 
requested to inform the sender and delete the message. The State accepts no 
liability for damage of any kind resulting from the risks inherent in the 
electronic transmission of messages.
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users

 

