Re: [Xen-devel] Degraded I/O Performance since 3.4 - Regression in 3.4?



On Tuesday, 24 April 2012, 14:52:31, Stefano Stabellini wrote:
> On Tue, 24 Apr 2012, Tobias Geiger wrote:
> > On Tuesday, 24 April 2012, 09:27:42, Jan Beulich wrote:
> > > >>> On 23.04.12 at 22:53, Tobias Geiger <tobias.geiger@xxxxxxxxx> wrote:
> > > > On 23.04.2012 17:24, Konrad Rzeszutek Wilk wrote:
> > > >> On Mon, Apr 23, 2012 at 12:53:03PM +0100, Stefano Stabellini wrote:
> > > >>> On Mon, 23 Apr 2012, Tobias Geiger wrote:
> > > >>>> Hello!
> > > >>>> 
> > > >>>> I noticed a considerable drop in I/O performance when using 3.4
> > > >>>> (rc3 and rc4 tested) as the Dom0 kernel.
> > > >>>> 
> > > >>>> With 3.3 I get over 100 MB/s in an HVM DomU (win64) with PV drivers
> > > >>>> (gplpv_Vista2008x64_0.11.0.357.msi);
> > > >>>> with 3.4 it drops to about a third of that.
> > > >>>> 
> > > >>>> Xen Version is xen-unstable:
> > > >>>> xen_changeset          : Tue Apr 17 19:13:52 2012 +0100
> > > >>>> 25209:e6b20ec1824c
> > > >>>> 
> > > >>>> Disk config line is:
> > > >>>> disk = [ '/dev/vg_ssd/win7system,,hda' ]
> > > >>>> - it uses blkback.
> > > >>> 
> > > >>> I fail to see what could be the cause of the issue: nothing on the
> > > >>> blkback side should affect performance significantly.
> > > >>> You could try reverting the four patches to blkback that were
> > > >>> applied between 3.3 and 3.4-rc3, just to make sure it is not a
> > > >>> blkback regression:
> > > >>> 
> > > >>> $ git shortlog v3.3..v3.4-rc3 drivers/block/xen-blkback
> > > >>> 
> > > >>> Daniel De Graaf (2):
> > > >>>        xen/blkback: use grant-table.c hypercall wrappers
> > > >> 
> > > >> Hm.. perhaps this patch fixes a possible perf loss introduced by the
> > > >> mentioned patch (I would think that the compiler would have kept the
> > > >> result of the first call to vaddr(req, i) somewhere.. but not sure):
> > > >> 
> > > >> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> > > >> index 73f196c..65dbadc 100644
> > > >> --- a/drivers/block/xen-blkback/blkback.c
> > > >> +++ b/drivers/block/xen-blkback/blkback.c
> > > >> @@ -327,13 +327,15 @@ static void xen_blkbk_unmap(struct pending_req *req)
> > > >>  	int ret;
> > > >>  
> > > >>  	for (i = 0; i < req->nr_pages; i++) {
> > > >> +		unsigned long addr;
> > > >>  		handle = pending_handle(req, i);
> > > >>  		if (handle == BLKBACK_INVALID_HANDLE)
> > > >>  			continue;
> > > >> -		gnttab_set_unmap_op(&unmap[invcount], vaddr(req, i),
> > > >> +		addr = vaddr(req, i);
> > > >> +		gnttab_set_unmap_op(&unmap[invcount], addr,
> > > >>  				    GNTMAP_host_map, handle);
> > > >>  		pending_handle(req, i) = BLKBACK_INVALID_HANDLE;
> > > >> -		pages[invcount] = virt_to_page(vaddr(req, i));
> > > >> +		pages[invcount] = virt_to_page(addr);
> > > >>  		invcount++;
> > > >>  	}
> > > >>  
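(Aside, for context on why hoisting vaddr() could matter at all: vaddr() is
not a plain array lookup but recomputes the kernel virtual address of the
segment's backing page on every call. A rough sketch of the helper, from
memory of the 3.x blkback.c - the names blkbk, pending_page and vaddr_pagenr
are as I recall them, not checked against the tree:

	/* Index of the segment's pre-allocated page in the global pool:
	 * one page per segment of every pending request. */
	#define vaddr_pagenr(req, seg) \
		(((req) - blkbk->pending_reqs) * \
		 BLKIF_MAX_SEGMENTS_PER_REQUEST + (seg))

	#define pending_page(req, seg) pending_pages[vaddr_pagenr(req, seg)]

	/* Kernel virtual address of segment 'seg' of 'req'; every call
	 * redoes the index arithmetic plus a page -> pfn -> vaddr trip. */
	static inline unsigned long vaddr(struct pending_req *req, int seg)
	{
		unsigned long pfn = page_to_pfn(blkbk->pending_page(req, seg));
		return (unsigned long)pfn_to_kaddr(pfn);
	}

So the hoist saves one such recomputation per unmapped segment - a handful of
arithmetic ops, which fits Jan's point below that it should hide in the
noise.)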
> > > >>>        xen/blkback: Enable blkback on HVM guests
> > > >>> 
> > > >>> Konrad Rzeszutek Wilk (2):
> > > >>>        xen/blkback: Squash the discard support for 'file' and 'phy' type.
> > > >>>        xen/blkback: Make optional features be really optional.
> > > >>> 
> > > > 
> > > > that made it even worse :)
> > > > Write performance is down to about 7 MB/s (with 3.3: ~130 MB/s),
> > > > read "only" down to 40 MB/s (with 3.3: ~140 MB/s).
> > > 
> > > I doubt this patch can have any meaningful positive or negative
> > > performance effect at all - are you sure you're doing comparable
> > > runs? After all this is all just about a few arithmetic operations
> > > and an array access, which I'd expect to hide in the noise.
> > > 
> > > Jan
> > 
> > I redid the test:
> > 
> > a) with the 3.3.0 kernel
> > b) with 3.4.0-rc4
> > c) with 3.4.0-rc4 plus the above patch
> > 
> > Everything else remained the same, i.e. the test program and test scenario
> > were unchanged and started after about 5 min of DomU uptime (so that no
> > strange boot-up effects become relevant); same phy backend (LVM on SSD),
> > same everything else. So I can't see what else besides the Dom0 kernel in
> > use could be causing this issue; but here are the numbers:
> > 
> > a) read: 135 MB/s  write: 142 MB/s
> > b) read:  39 MB/s  write:  39 MB/s
> > c) read:  40 MB/s  write:  40 MB/s
> > 
> > The only thing that might be relevant is the difference in kernel config
> > between 3.3 and 3.4 - here's the diff:
> > http://pastebin.com/raw.php?i=Dy71Fegq
> > 
> > Jan, it seems you're right: the patch doesn't add an extra performance
> > regression - I guess I had an I/O-intensive task running in Dom0 while
> > doing the benchmark yesterday, which is why the write performance looked
> > so bad. Sorry for that.
> > 
> > Still, there's a significant performance penalty from 3.3 to 3.4.
> 
> Could you please try to revert the following commits?
> 
> git revert -n a71e23d9925517e609dfcb72b5874f33cdb0d2ad
> git revert -n 3389bb8bf76180eecaffdfa7dd5b35fa4a2ce9b5
> git revert -n 4dae76705fc8f9854bb732f9944e7ff9ba7a8e9f
> git revert -n b2167ba6dd89d55ced26a867fad8f0fe388fd595
> git revert -n 4f14faaab4ee46a046b6baff85644be199de718c
> git revert -n 9846ff10af12f9e7caac696737db6c990592a74a

After reverting said six commits (thanks for their IDs - I had difficulties
finding them), performance is back to normal.

Should I try to narrow it down to one of these six, or do you have a hint as
to which one it might be?

Greetings
Tobias

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel