
Re: [Xen-users] redhat native vs. redhat on XCP


  • To: Boris Quiroz <bquiroz.work@xxxxxxxxx>
  • From: Grant McWilliams <grantmasterflash@xxxxxxxxx>
  • Date: Mon, 17 Jan 2011 20:54:03 -0800
  • Cc: Henrik Andersson <henrik.j.andersson@xxxxxxxxx>, xenList <xen-users@xxxxxxxxxxxxxxxxxxx>, Javier Guerra Giraldez <javier@xxxxxxxxxxx>
  • Delivery-date: Mon, 17 Jan 2011 20:56:03 -0800
  • List-id: Xen user discussion <xen-users.lists.xensource.com>




On Mon, Jan 17, 2011 at 11:22 AM, Boris Quiroz <bquiroz.work@xxxxxxxxx> wrote:
2011/1/16 Grant McWilliams <grantmasterflash@xxxxxxxxx>:
>
>
> On Sun, Jan 16, 2011 at 7:22 AM, Javier Guerra Giraldez <javier@xxxxxxxxxxx>
> wrote:
>>
>> On Sun, Jan 16, 2011 at 1:39 AM, Grant McWilliams
>> <grantmasterflash@xxxxxxxxx> wrote:
>> > As long as I use an LVM volume I get very, very near native performance, i.e.
>> > mysqlbench comes in at about 99% of native.
>>
>> without any real load on other DomUs, I guess
>>
>> in my settings the biggest 'con' of virtualizing some loads is the
>> sharing of resources, not the hypervisor overhead. Since it's easier
>> (and cheaper) to get hardware oversized on CPU and RAM than on IO
>> speed (especially on IOPS), that means that I have some database
>> servers that I can't virtualize in the near term.
>>
> But that is the same as just putting more than one service on one box. I
> believe he was wondering what the overhead of virtualizing was as opposed to
> bare metal. Anytime you have more than one process running on a box you have
> to think about the resources they use and how they'll interact with each
> other. This has nothing to do with virtualization itself unless the hypervisor
> has a bad scheduler.
>
>> Of course, most of this would be solved by dedicating spindles instead
>> of LVs to VMs; maybe when (if?) I get most boxes with lots of 2.5"
>> bays, instead of the current 3.5" ones. Not using LVM is a real
>> drawback, but it still seems to be better than dedicating whole boxes.
>>
>> --
>> Javier
>
> I've moved all my VMs to running on LVs on SSDs for this purpose. The
> overhead of an LV over a bare drive is very, very small unless you're doing
> a lot of snapshots.
>
>
> Grant McWilliams
>
> Some people, when confronted with a problem, think "I know, I'll use
> Windows."
> Now they have two problems.
>
>
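(For illustration, the LV-per-VM layout described above comes down to something like the sketch below; the volume group and guest names are hypothetical:)

# carve out a dedicated LV for one guest on an SSD-backed volume group
lvcreate -L 20G -n guest1-disk vg_ssd

# hand the LV straight to the DomU as its block device, e.g. in the guest config:
#   disk = [ "phy:/dev/vg_ssd/guest1-disk,xvda,w" ]

# snapshots are where LVM adds copy-on-write overhead, so keep them short-lived
lvcreate -s -L 2G -n guest1-snap /dev/vg_ssd/guest1-disk
# ... back the guest up from the snapshot, then drop it ...
lvremove -f /dev/vg_ssd/guest1-snap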

Hi list,

I did a preliminary test using [1], and the result was close to what I
expected. This was a very small test, because I have a lot of things
to do before I can set up a good, representative test, but I think
it is a good start.

Using the stress tool, I started with the default command
(stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s). Here's
the output from both the Xen and non-Xen servers:

[root@xen ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s
stress: info: [3682] dispatching hogs: 8 cpu, 4 io, 2 vm, 0 hdd
stress: info: [3682] successful run completed in 10s

[root@non-xen ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s
stress: info: [5284] dispatching hogs: 8 cpu, 4 io, 2 vm, 0 hdd
stress: info: [5284] successful run completed in 10s

As you can see, the result is the same, but what happens when I include
HDD I/O in the test? Here's the output:

[root@xen ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --hdd 10 --timeout 10s
stress: info: [3700] dispatching hogs: 8 cpu, 4 io, 2 vm, 10 hdd
stress: info: [3700] successful run completed in 59s

[root@non-xen ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --hdd 10 --timeout 10s
stress: info: [5332] dispatching hogs: 8 cpu, 4 io, 2 vm, 10 hdd
stress: info: [5332] successful run completed in 37s

With some HDD stress included, the results are different. Both servers
(Xen and non-Xen) are using LVM, but to be honest, I was expecting this
kind of result because of the disk access.
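One way to make that comparison more repeatable (a rough sketch; the run count and cache-dropping step are just one possible choice, and it needs root):

#!/bin/bash
# time the same stress workload several times on each host and compare the averages
for i in 1 2 3; do
    sync
    echo 3 > /proc/sys/vm/drop_caches   # start each run with a cold page cache
    /usr/bin/time -f "run $i: %e seconds elapsed" \
        stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --hdd 10 --timeout 10s
done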

Later this week I'll continue with the tests (well-designed tests :P)
and I'll share the results.

Cheers.

1. http://freshmeat.net/projects/stress/

--
@cereal_bars

You weren't specific about whether the Xen tests were done in a Dom0 or a DomU. I would assume a DomU, since there should be next to zero overhead for a Xen Dom0 over a non-Xen host. Can you post your DomU config, please?
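For reference, something like this xm-style PV config is what I mean; the names, sizes, and paths below are just placeholders, not an actual setup:

name       = "rhel-guest"
memory     = 2048
vcpus      = 4
bootloader = "/usr/bin/pygrub"
disk       = [ "phy:/dev/vg_ssd/rhel-guest-disk,xvda,w" ]
vif        = [ "bridge=xenbr0" ]
on_reboot  = "restart"
on_crash   = "restart"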

Grant McWilliams

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

