
Re: Performance data of Linux native vs. Xen Dom0 and Xen DomU. Re: [Xen-devel] Direct I/O to domU seeing a 30% performance hit



Hi Ian,

I already tested the performance of four I/O schedulers: noop, deadline, anticipatory and CFQ. The choice of I/O scheduler does have a performance impact, but the difference between those four schedulers is less than 10% in every case.

I found anticipatory to be the best choice, so I used anticipatory as the Linux I/O scheduler for all my testing of Linux native, Xen Domain0 and DomainU (this change does not show up in the config files).
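
For anyone reproducing this, the scheduler can be checked and switched per device through sysfs, or picked globally with the elevator= boot parameter. A minimal sketch (the device name sda is just an example):

  # list the schedulers this kernel offers; the active one is in brackets
  cat /sys/block/sda/queue/scheduler
  # switch one device to anticipatory at run time
  echo anticipatory > /sys/block/sda/queue/scheduler
  # or select it for all devices on the kernel command line:
  #   elevator=anticipatory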

Liang

----- Original Message -----
From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
To: "Liang Yang" <multisyncfe991@xxxxxxxxxxx>; "John Byrne" <john.l.byrne@xxxxxx>
Cc: "xen-devel" <xen-devel@xxxxxxxxxxxxxxxxxxx>; "Emmanuel Ackaouy" <ack@xxxxxxxxxxxxx>; <ian.pratt@xxxxxxxxxxxx>
Sent: Tuesday, November 07, 2006 11:45 AM
Subject: RE: Performance data of Linux native vs. Xen Dom0 and Xen DomU. Re: [Xen-devel] Direct I/O to domU seeing a 30% performance hit


> Attached is the diff of the two kernel configs.

There are a *lot* of differences between those kernel configs. A cursory
glance spots such gems as:

< CONFIG_DEFAULT_IOSCHED="cfq"
---
CONFIG_DEFAULT_IOSCHED="anticipatory"

All bets are off.
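
One quick way to narrow that down (a sketch; the config file names here are placeholders) is to pull out just the scheduler options, or diff the configs with comment lines stripped:

  # compare only the I/O scheduler options from the two configs
  grep IOSCHED config-native config-dom0
  # or diff the full configs, ignoring comment-only lines
  diff <(grep -v '^#' config-native) <(grep -v '^#' config-dom0)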

Ian


----- Original Message -----
From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
To: "Liang Yang" <yangliang_mr@xxxxxxxxxxx>; "John Byrne"
<john.l.byrne@xxxxxx>
Cc: "xen-devel" <xen-devel@xxxxxxxxxxxxxxxxxxx>; "Emmanuel Ackaouy"
<ack@xxxxxxxxxxxxx>; <ian.pratt@xxxxxxxxxxxx>
Sent: Tuesday, November 07, 2006 11:15 AM
Subject: RE: Performance data of Linux native vs. Xen Dom0
and Xen DomU. Re:
[Xen-devel] Direct I/O to domU seeing a 30% performance hit


> I already set dom0_max_vcpus=1 for domain0 when I was doing testing. Also,
> the Linux native kernel and the domU kernel are both compiled in
> uni-processor mode. All the testing for Linux native, domain0 and domainU
> is exactly the same. All used Linux kernel 2.6.16.29.

Please could you post a 'diff' of the two kernel configs.

It might be worth diff'ing the boot messages in both cases too.
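
For example (a sketch; the file names are only illustrative):

  # capture the kernel boot messages on each system shortly after boot
  dmesg > boot-native.log     # on native Linux
  dmesg > boot-dom0.log       # on dom0
  # then compare them
  diff -u boot-native.log boot-dom0.log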

Thanks,
Ian


> Regards,
>
> Liang
>
> ----- Original Message -----
> From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
> To: "Liang Yang" <multisyncfe991@xxxxxxxxxxx>; "John Byrne"
> <john.l.byrne@xxxxxx>
> Cc: "xen-devel" <xen-devel@xxxxxxxxxxxxxxxxxxx>; "Emmanuel Ackaouy"
> <ack@xxxxxxxxxxxxx>; <ian.pratt@xxxxxxxxxxxx>
> Sent: Tuesday, November 07, 2006 11:06 AM
> Subject: RE: Performance data of Linux native vs. Xen Dom0 and Xen
DomU.
> Re:
> [Xen-devel] Direct I/O to domU seeing a 30% performance hit
>
>
> > I'm also doing some performance analysis of Linux native, dom0 and domU
> > (para-virtualized). Here is a brief comparison for 256K sequential
> > read/write. The testing is done using a JBOD based on 8 Maxtor SAS
> > Atlas 2 15K drives with an LSI SAS HBA.
> >
> > 256K Sequential Read
> > Linux Native: 559.6MB/s
> > Xen Domain0: 423.3MB/s
> > Xen DomainU: 555.9MB/s
>
> This doesn't make a lot of sense. Only thing I can think of is that
> there must be some extra prefetching going on in the domU case. It still
> doesn't explain why the dom0 result is so much worse than native.
>
> It might be worth repeating with both native and dom0 boot with
> maxcpus=1.
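>
> For example, with a GRUB setup the entries might look roughly like this
> (the kernel image names and root device are placeholders):
>
>   # native Linux restricted to one CPU
>   kernel /vmlinuz-2.6.16.29 ro root=/dev/sda1 maxcpus=1
>
>   # Xen with dom0 limited to a single vcpu
>   kernel /xen.gz dom0_max_vcpus=1
>   module /vmlinuz-2.6.16.29-xen ro root=/dev/sda1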
>
> Are you using near-identical kernels in both cases? Same drivers, same
> part of the disk for the tests, etc?
>
> How are you doing the measurement? A timed 'dd'?
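>
> For instance, something along these lines (the device name and count are
> only illustrative, and this assumes a dd new enough to take iflag/oflag):
>
>   # 256K sequential reads straight from the device, bypassing the cache
>   dd if=/dev/sdb of=/dev/null bs=256k count=40960 iflag=direct
>   # 256K sequential writes (destructive -- scratch device only)
>   dd if=/dev/zero of=/dev/sdb bs=256k count=40960 oflag=direct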
>
> Ian
>
>
> > 256K Sequential Write
> > Linux Native: 668.9MB/s
> > Xen Domain0: 708.7MB/s
> > Xen DomainU: 373.5MB/s
> >
> > Just two questions:
> >
> > It seems the para-virtualized DomU outperforms Dom0 for sequential reads
> > and is very close to Linux native performance. However, DomU does show
> > poor (only 50%) sequential write performance compared with Linux native
> > and Dom0.
> >
> > Could you explain the reason behind this?
> >
> > Thanks,
> >
> > Liang
> >
> >
> > ----- Original Message -----
> > From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
> > To: "John Byrne" <john.l.byrne@xxxxxx>
> > Cc: "xen-devel" <xen-devel@xxxxxxxxxxxxxxxxxxx>;
"Emmanuel Ackaouy"
> > <ack@xxxxxxxxxxxxx>
> > Sent: Tuesday, November 07, 2006 10:20 AM
> > Subject: RE: [Xen-devel] Direct I/O to domU seeing a 30%
performance
> hit
> >
> >
> > > Both dom0 and the domU are SLES 10, so I don't know why the "idle"
> > > performance of the two should be different. The obvious asymmetry is
> > > the disk. Since the disk isn't direct, any disk I/O by the domU would
> > > certainly impact dom0, but I don't think there should be much, if any.
> > > I did run a dom0 test with the domU started but idle, and there was no
> > > real change to dom0's numbers.
> > >
> > > What's the best way to gather information about what is going on with
> > > the domains without perturbing them? (Or, at least, perturbing everyone
> > > equally.)
> > >
> > > As to the test, I am running netperf 2.4.1 on an outside machine to
> > > the dom0 and the domU. (So the doms are running the netserver portion.)
> > > I was originally running it in the doms to the outside machine, but
> > > when the bad numbers showed up I moved it to the outside machine
> > > because I wondered if the bad numbers were due to something happening
> > > to the system time in domU. The numbers in the "outside" test to domU
> > > look worse.
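> > >
> > > For instance, roughly like this (the addresses are placeholders):
> > >
> > >   # in dom0 and in the domU
> > >   netserver
> > >   # on the outside machine, one run against each
> > >   netperf -H <dom0-ip> -l 60
> > >   netperf -H <domU-ip> -l 60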
> >
> >
> > It might be worth checking that there's no interrupt sharing happening.
> > While running the test against the domU, see how much CPU dom0 burns in
> > the same period using 'xm vcpu-list'.
> >
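> > Roughly along these lines, for instance:
> >
> >   # in dom0: check whether the two NICs end up on a shared IRQ line
> >   cat /proc/interrupts
> >   # snapshot the cumulative vcpu time before and after a netperf run
> >   xm vcpu-list
> >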
> > To keep things simple, have dom0 and domU as uniprocessor guests.
> >
> > Ian
> >
> >
> > > Ian Pratt wrote:
> > > >
> > > >> There have been a couple of network receive throughput
> > > >> performance regressions to domUs over time that were
> > > >> subsequently fixed. I think one may have crept in to 3.0.3.
> > > >
> > > > The report was (I believe) with a NIC directly assigned to the
> > > > domU, so not using netfront/back at all.
> > > >
> > > > John: please can you give more details on your config.
> > > >
> > > > Ian
> > > >
> > > >> Are you seeing any dropped packets on the vif associated with
> > > >> your domU in your dom0? If so, propagating changeset
> > > >> 11861 from unstable may help:
> > > >>
> > > >> changeset:   11861:637eace6d5c6
> > > >> user:        kfraser@xxxxxxxxxxxxxxxxxxxxx
> > > >> date:        Mon Oct 23 11:20:37 2006 +0100
> > > >> summary:     [NET] back: Fix packet queuing so that packets
> > > >> are drained if the
> > > >>
> > > >>
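> > > >> (Roughly: from an xen-unstable clone, something like
> > > >>
> > > >>   hg export 11861 > netback-drain.patch
> > > >>
> > > >> and then apply that patch to the 3.0.3 tree; the patch file name
> > > >> here is just an example.)
> > > >>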
> > > >> In the past, we also had receive throughput issues to domUs
> > > >> that were due to socket buffer size logic but those were
> > > >> fixed a while ago.
> > > >>
> > > >> Can you send netstat -i output from dom0?
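> > > >>
> > > >> That is, something like:
> > > >>
> > > >>   netstat -i
> > > >>
> > > >> looking at the RX-DRP/TX-DRP columns for the vif that backs the
> > > >> domU's interface.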
> > > >>
> > > >> Emmanuel.
> > > >>
> > > >>
> > > >> On Mon, Nov 06, 2006 at 09:55:17PM -0800, John Byrne wrote:
> > > >>> I was asked to test direct I/O to a PV domU. Since I had a system
> > > >>> with two NICs, I gave one to a domU and one to dom0. (Each is
> > > >>> running the same kernel: xen 3.0.3 x86_64.)
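> > > >>>
> > > >>> One common way to set that up, as a sketch only and not necessarily
> > > >>> what was done here: hide the NIC from dom0 with pciback and list it
> > > >>> in the guest config. The PCI address below is made up, and this
> > > >>> assumes pciback is built into the dom0 kernel:
> > > >>>
> > > >>>   # dom0 kernel command line
> > > >>>   pciback.hide=(03:00.0)
> > > >>>   # domU config file
> > > >>>   pci = [ '03:00.0' ]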
> > > >>>
> > > >>> I'm running netperf from an outside system to the domU and dom0
> > > >>> and I am seeing 30% less throughput for the domU vs dom0.
> > > >>>
> > > >>> Is this to be expected? If so, why? If not, does anyone have a
> > > >>> guess as to what I might be doing wrong or what the issue might be?
> > > >>>
> > > >>> Thanks,
> > > >>>
> > > >>> John Byrne




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

