
Re: [Xen-users] Please help estimate number of the domUs

Those numbers are higher than I would have expected given the hardware you listed. For mixed random access, I expected your hardware to deliver 2,000 to 5,000 IOPS, not 49,474. Of course, I test with 100% random and 67% writes, while you were testing with 60% random and 35% writes. There could be considerable caching involved (especially with read tests), but it is hard to say without more data points.
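
For what it's worth, here is the rough arithmetic behind that 2,000-5,000 expectation, as a few lines of Python. Treat it as a sketch only: the ~150 random IOPS per 10k-rpm SAS spindle figure is a rule of thumb, not a measurement of your drives.

# Why I suspect caching: raw spindle ceiling vs. the reported RealLife result.
# Assumes ~150 random IOPS per 10k-rpm SAS spindle (rule of thumb, not measured).
spindles = 24
iops_per_spindle = 150
spindle_ceiling = spindles * iops_per_spindle   # ~3,600 random read IOPS, best case

measured = 49474   # RealLife-60%Rand-65%Read on your RAID 1+0, from the results below
print("Spindle ceiling: ~%d IOPS" % spindle_ceiling)
print("Measured is ~%.0fx higher -> controller/array cache is doing most of the work"
      % (float(measured) / spindle_ceiling))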

If you want to run more benchmarks with IOmeter, I would suggest trying the ones that ZFSBuild uses from http://www.zfsbuild.com/pics/Graphs/Iometer-config-file.zip . That zip file contains an IOMeter.icf file. More details about those benchmarks are at http://www.zfsbuild.com/2012/12/14/zfsbuild2012-benchmark-methods/

Anyway, I am a lot more familiar with the benchmarks from ZFSBuild. If you run those benchmarks and post the results, then I could give you a very good idea of what level of real-world performance to expect.

Here are some InfiniBand based benchmarks using the ZFSBuild IOmeter file:

Here are some graphs of single Ethernet port benchmarks (comparing some hardware from 2010 with hardware from 2012):

On 1/15/2013 2:33 PM, Andrey wrote:
Just finished measuring SAN performance with IOmeter (http://vmktree.org/iometer/OpenPerformanceTest.icf, 5 minutes per test) on RAID10 (data, 16GB maximum test file) and RAID50 (backup, 8GB maximum test file), both 3.6TB with one ext4 partition. The SAN is configured in a dual-path configuration and the server has multipath configured with 2 HBA adapters. Here are the results:

RAID 5+0:
| Test name                  | Avg IOPS | Avg MBps |
| Max Throughput-100%Read    |    47528 |     1485 |
| RealLife-60%Rand-65%Read   |    24760 |      193 |
| Max Throughput-50%Read     |     6959 |      217 |
| Random-8k-70%Read          |    26612 |      207 |

RAID 1+0:
| Test name                  | Avg IOPS | Avg MBps |
| Max Throughput-100%Read    |    44031 |     1375 |
| RealLife-60%Rand-65%Read   |    49474 |      386 |
| Max Throughput-50%Read     |    43002 |     1343 |
| Random-8k-70%Read          |    49930 |      390 |

Is caching in action here, or is it something else?

On 13.01.2013 23:52, admin@xxxxxxxxxxx wrote:
You should measure the performance of the SAN using something like
IOmeter (running IOmeter on the hardware you plan to run XenServer or
XCP on).  Assuming you configure those drives in RAID10, I would guess
that SAN would deliver about 2,000 to 5,000 IOPS.  If you use RAID5
(please don't), then you will see far less IOPS during mixed read and
write tests.

If you want to deploy 100 VMs onto that SAN, then each VM will only get
20-50 IOPS (assuming RAID10).  The performance in each VM will
be less than fantastic.  If the VMs need to do any IO intensive tasks,
the owners of the VMs are probably going to complain about sluggish
performance.  I don't think the SAN you listed can deliver enough IOPS
to satisfy 100 VMs.
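
To put rough numbers behind that guess, here is the same back-of-envelope in Python, assuming ~150 random IOPS per 10k-rpm SAS spindle (a rule of thumb, not a measurement) and a typical 65% read / 35% write mix:

# Rough per-VM IOPS budget: 24 spindles at ~150 random IOPS each (assumption),
# a 65% read / 35% write mix (assumption), and the result split across 100 VMs.
spindles, iops_per_spindle = 24, 150
read_f, write_f = 0.65, 0.35
raw = spindles * iops_per_spindle            # 3,600 raw read IOPS

for layout, penalty in (("RAID10", 2), ("RAID5", 4)):
    effective = raw / (read_f + write_f * penalty)   # apply the RAID write penalty
    per_vm = effective / 100                         # budget per VM with 100 guests
    print("%s: ~%.0f IOPS total, ~%.0f IOPS per VM" % (layout, effective, per_vm))

# RAID10: ~2667 IOPS total, ~27 per VM  -> the 20-50 IOPS range above
# RAID5:  ~1756 IOPS total, ~18 per VM  -> noticeably worse once writes are in the mix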

On 1/13/2013 12:17 PM, Andrey wrote:
Well, storage is the direct-connect HP P2000 G3 FC dual-controller
array with 600GBx24 disks in dual-path configuration (two HBA ports ->
two controller ports).  I guess that should be quite enough.

On 13.01.2013 20:45, admin@xxxxxxxxxxx wrote:
You will probably run out of disk IO before you run into any hard limits
in XenServer or XCP.

What type of SAN are you going to use?  What type of network
interconnect will you use to link your XenServer/XCP nodes to your SAN?
How many IOPS does your SAN deliver over your chosen network interconnect?

On 1/13/2013 9:03 AM, Andrey wrote:
Sure, will try. I see in the XenServer 6.1 FAQ that the maximum supported
number of guests is 150 and that it requires increasing dom0_mem to a
maximum of 4096. It's obvious that the internal limits are not quite
realistic, so it will be a good result for me if we are able to run at
least 100 guests. That seems like a more realistic number, although some
resources put the maximum number of VMs at 4-10 per CPU core (so 32-80
in my case). But in all these cases I think 192 GB of RAM would be redundant.

With regards, Andrey

On 11.01.2013 16:43, Wei Liu wrote:
On Fri, 2013-01-11 at 12:24 +0000, Andrey wrote:
Thank you for the answer

I'm really considering the case of creating as many DomUs as possible
with a typical load and getting practical info.

What about network capacity? Does this math apply to the network
resources as well? Should we shape the DomUs' bandwidth to prevent
network overload? Can the CPU be a bottleneck in this configuration?

The math I did was to show you some internal infrastructure limits
that I know of.

CPU / network overloading is another topic. TBH I haven't done stress
tests on CPUs and network.

And whether or not you will hit any bottlenecks in CPU / network depends
closely on your use case. Booting up DomUs and running some typical
workload is a good idea.
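
If it helps, here is a minimal sketch of that idea in Python, assuming a XenServer/XCP host with the standard xe CLI and a halted "gold image" VM to clone (the UUID below is a placeholder you would replace):

# Minimal sketch: clone a halted "gold image" VM N times and start each clone,
# so you can watch dom0 / SAN / network load as guests come up.
import subprocess

GOLD_VM_UUID = "00000000-0000-0000-0000-000000000000"   # placeholder, replace
N_GUESTS = 10                                           # grow in steps, not all at once

for i in range(N_GUESTS):
    name = "loadtest-%03d" % i
    # xe vm-clone prints the new VM's UUID on stdout.
    uuid = subprocess.check_output(
        ["xe", "vm-clone", "uuid=" + GOLD_VM_UUID, "new-name-label=" + name],
        universal_newlines=True).strip()
    subprocess.check_call(["xe", "vm-start", "uuid=" + uuid])
    print("started %s (%s)" % (name, uuid))

Grow the guest count gradually and watch iostat and xentop on dom0 while a typical workload runs inside the guests; that will tell you more than any synthetic limit.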


Xen-users mailing list



