
Re: [win-pv-devel] load+stability tests result: Event 129 xenvbd, benchmark regression



> -----Original Message-----
> From: Andreas Kinzler [mailto:ml-ak@xxxxxxxxx]
> Sent: 31 January 2017 21:51
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; win-pv-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [win-pv-devel] load+stability tests result: Event 129 xenvbd,
> benchmark regression
> 
> Hello Paul,
> 
> thanks - logging is now working. However, I have paused the stability tests
> mentioned in my original email because there is a serious performance
> regression when switching the dom0 kernel from 3.10 to 4.8. Here are some
> results from my benchmark: it tests all four combinations of dom0 kernel
> (3.10 or 4.8) and PV driver (citrixpv 8.2-rc2 or GPLPV). citrixpv is only a
> bit slower on 3.10 but collapses completely on 4.8. Can you think of any
> reason for that?
> 

I suspect it will be all the multi-page ring stuff in blkback. Are you using 
blkback as your backend? If so, I suggest tweaking it so that it only uses a 
single-page shared ring, in which case it should behave broadly the same as it 
did in 3.10. The other possibility is trim support... I think blkback in 3.10 
still ignored discard ops from the frontend.
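
Something like this should do it on a mainline 4.8 dom0 (a sketch, assuming 
blkback is built as the xen-blkback module and exposes the 
max_ring_page_order parameter, which I believe it has done since multi-page 
ring support went in):

  # force blkback to a single-page (order 0) shared ring
  echo "options xen-blkback max_ring_page_order=0" > /etc/modprobe.d/xen-blkback.conf

  # reload the module (or reboot dom0) and check the value took effect
  cat /sys/module/xen_blkback/parameters/max_ring_page_order

If blkback is built into your kernel instead, setting 
xen_blkback.max_ring_page_order=0 on the dom0 command line should have the 
same effect.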
Have you tried bisecting the kernel a bit more? It would be useful to know 
exactly when the regression occurred; we may be able to take some remedial 
action in the frontend.
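
If you do bisect, something along these lines should work (a sketch; the 
endpoints are just the kernel versions from your results):

  git bisect start
  git bisect bad v4.8        # regression present
  git bisect good v3.10      # known good
  # build and boot the kernel git checks out in dom0, re-run the benchmark,
  # then mark the result and repeat:
  git bisect good            # or: git bisect bad

Even just narrowing it down to the major release that introduced the 
regression would be a big help.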

Cheers,

  Paul

> xen 4.8, kernel 3.10.73, win81, citrixpv, ? VCPUs, ramdisk in dom0
> pass #  309: emptyDir... done (1313 msec) writing... done (46821 msec, 961.1 MB/sec) reading... done (24055 msec, 1870.7 MB/sec)
> pass #  310: emptyDir... done (1331 msec) writing... done (47347 msec, 950.5 MB/sec) reading... done (24164 msec, 1862.5 MB/sec)
> 
> xen 4.8, kernel 3.10.73, win81, gplpv, 4 VCPUs, ramdisk in dom0
> pass #    5: emptyDir... done (1316 msec) writing... done (44257 msec, 1017.3 MB/sec) reading... done (23763 msec, 1894.7 MB/sec)
> pass #    6: emptyDir... done (1239 msec) writing... done (44344 msec, 1015.1 MB/sec) reading... done (23629 msec, 1905.1 MB/sec)
> 
> xen 4.8, kernel 4.8.17, win81, gplpv, 4 VCPUs, ramdisk in dom0
> pass #  148: emptyDir... done (1269 msec) writing... done (41125 msec, 1094.2 MB/sec) reading... done (23542 msec, 1911.4 MB/sec)
> pass #  149: emptyDir... done (1193 msec) writing... done (41259 msec, 1090.8 MB/sec) reading... done (23916 msec, 1881.9 MB/sec)
> 
> xen 4.8, kernel 4.8.17, win81, citrixpv, 4 VCPUs, ramdisk in dom0
> pass #    1: emptyDir... done (0 msec) writing... done (55009 msec, 818.5 MB/sec) reading... done (136126 msec, 330.7 MB/sec)
> pass #    2: emptyDir... done (1356 msec) writing... done (57478 msec, 783.1 MB/sec) reading... done (101489 msec, 443.5 MB/sec)
> 
> Regards Andreas
> 
> On 25.01.2017 12:32, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Andreas Kinzler [mailto:ml-ak@xxxxxxxxx]
> >> Sent: 24 January 2017 16:28
> >> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> >> Subject: Re: [win-pv-devel] load+stability tests result: Event 129 xenvbd
> >>
> >> I tried 'xl debug-keys q' but nothing is written to the qemu log? I am
> >> using the release build of win-pv. Do I need a debug build of win-pv?
> >>
> > No, I suspect the issue is with your QEMU. I assume you are using
> > upstream? If so then you need to enable the appropriate trace event. This
> > is what I do...
> >
> > Create a file called 'events' somewhere and put the following line in it:
> >
> > xen_platform_log
> >
> > Then, in the xl.cfg for your VM, add the following line:
> >
> > device_model_args=[ "-trace", "events=<path to your events file>" ]
> >
> > You should then see logging from the PV drivers when you boot the VM,
> > and when you do 'xl debug-keys q'.
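> >
> > For example, a minimal end-to-end sketch (the events file path here is
> > just a placeholder; put the file wherever suits your install):
> >
> >   echo xen_platform_log > /etc/xen/qemu-events
> >
> > and in the guest's xl.cfg:
> >
> >   device_model_args=[ "-trace", "events=/etc/xen/qemu-events" ]
> >
> > With the xl toolstack the trace output should then turn up in the device
> > model log, typically /var/log/xen/qemu-dm-<domain name>.log.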
_______________________________________________
win-pv-devel mailing list
win-pv-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/cgi-bin/mailman/listinfo/win-pv-devel

 

