Re: [Xen-devel] [PATCH] libxl: Increase device model startup timeout to 1min.
On Thu, Jul 02, 2015 at 01:38:37PM +0100, Ian Jackson wrote:
> Anthony PERARD writes ("Re: [PATCH] libxl: Increase device model startup
> timeout to 1min."):
> > I have tested an increased timeout last night, and here are the results.
> >
> > The machine is an AMD Opteron(tm) Processor 4284, with 8GB of RAM and 8 pCPUs.
> > It's running Ubuntu 14.04, with Xen 4.4. On top of that, OpenStack has
> > been deployed via devstack on a single node.
> >
> > The test is to run Tempest with --concurrency=4. There are 4 tests run
> > in parallel, but they don't necessarily start a VM. When they do, it's a PV
> > guest with 64MB of RAM and 1 vCPU, sometimes with double that amount of RAM.
> >
> > The stats:
> > Tempest runs: 22
> > Tempest run time for each run: ~3000s
> > Tempest number of tests: 1143
> > after 22 runs of tempest:
> > QEMU starts: 3352
> > number of starts that took more than 2s: 20
> > number of starts that took more than 9s: 6
> > maximum start time: 10.973713s
> >
> > I have gathered the QEMU start times by having strace running for each of
> > them. I have then looked at the time it took from the first syscall
> > execve('qemu') until the syscall where QEMU responds on its QMP socket
> > (libxl has acknowledged that QEMU is running at that point).
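(For reference, the extraction works roughly along the lines of the sketch
below. It is a simplified illustration rather than the exact method; in
particular the way the QMP syscall is matched and the log file name are only
placeholders.)

from datetime import datetime, timedelta

def parse_ts(line):
    # strace -tt prefixes every line with HH:MM:SS.microseconds
    return datetime.strptime(line.split()[0], "%H:%M:%S.%f")

def qemu_start_time(log_lines):
    t_execve = t_qmp = None
    for line in log_lines:
        if t_execve is None and "execve(" in line and "qemu" in line:
            t_execve = parse_ts(line)      # first syscall: execve of qemu
        elif t_execve is not None and "qmp" in line.lower():
            t_qmp = parse_ts(line)         # first syscall touching the QMP socket
            break
    if t_execve is None or t_qmp is None:
        return None
    delta = t_qmp - t_execve
    if delta < timedelta(0):               # crude handling of a midnight wrap
        delta += timedelta(days=1)
    return delta.total_seconds()

with open("qemu-dm.strace") as f:          # placeholder log file name
    t = qemu_start_time(f)
print("QEMU start time: %ss" % t)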
>
> Thanks for this information.
>
> So from what you say it appears that we are running at most 4 copies
> of libxl and qemu in parallel, along with at most 4 VMs ?
Some tests do run 3 VMs, so at most it would be 12 VMs. I think there is still
only one copy of libxl since everything goes through libvirt, right?
> And out of 3352 qemu starts, we have
>    <= 2s         3332
>    >2s <= 9s       14
>    > 9s             6
> ?
>
> Do you have any information about the maximum system load in general ?
> dom0 load, vcpu overcommit, etc. ? Do you know what the guests are
> doing ?
I've got some stats collected via `dstat`.
During the tests, the load average can go from 5 to 8 in dom0.
The CPUs in dom0 can spend quite a lot of time waiting; it's not rare to
see between 35 and 60% of time spent waiting (total of all 8 CPUs). As for the
disk, dstat reveals that there is often between 50MB and 120MB per second of
total disk writes.
The guests usually do not do much besides booting. They run a small system
called CirrOS.
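(For what it's worth, figures like those can be pulled out of a dstat CSV
capture. The sketch below assumes dstat was run with --output; the "wai" and
"writ" column names are dstat's default headers and the file name is a
placeholder, not something from the actual setup.)

import csv

def summarise(path):
    wai, writ = [], []
    with open(path) as f:
        rows = list(csv.reader(f))
    # dstat --output writes a few header rows, then a row of column names
    header = next(r for r in rows if "wai" in r and "writ" in r)
    i_wai, i_writ = header.index("wai"), header.index("writ")
    for r in rows[rows.index(header) + 1:]:
        try:
            wai.append(float(r[i_wai]))
            writ.append(float(r[i_writ]))
        except (ValueError, IndexError):
            continue
    print("cpu wait: %d-%d%%" % (min(wai), max(wai)))
    print("disk write: %d-%dMB/s" % (min(writ) / 1e6, max(writ) / 1e6))

summarise("dom0-dstat.csv")                # placeholder file name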
> I'm starting to think that this might be a real bug but that the bug
> might be "Linux's I/O subsystem sometimes produces appalling latency
> under load" (which is hardly news).
I guess the straces support this; here are a few quotes from different straces:
04:11:50.602067 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) <0.000036>
04:11:50.602166 open("/usr/lib/x86_64-linux-gnu/libidn.so.11", O_RDONLY|O_CLOEXEC) = 3 <0.000039>
04:11:50.602263 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\300+\0\0\0\0\0\0"..., 832) = 832 <0.000035>
04:11:50.602359 fstat(3, {st_mode=S_IFREG|0644, st_size=207128, ...}) = 0 <0.000033>
04:11:50.602453 mmap(NULL, 2302208, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f845b9f8000 <0.000036>
04:11:50.602546 mprotect(0x7f845ba29000, 2097152, PROT_NONE) = 0 <0.000037>
04:11:50.602639 mmap(0x7f845bc29000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x31000) = 0x7f845bc29000 <0.000038>
04:11:51.257654 close(3) = 0 <0.000042>
04:11:51.257859 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) <0.000078>

18:08:37.635570 open("/usr/lib/x86_64-linux-gnu/sasl2/libcrammd5.so", O_RDONLY|O_CLOEXEC) = 28 <0.000050>
18:08:37.635681 read(28, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260\20\0\0\0\0\0\0"..., 832) = 832 <1.011536>
18:08:38.647329 fstat(28, {st_mode=S_IFREG|0644, st_size=22592, ...}) = 0 <0.000042>
18:08:38.647499 mmap(NULL, 2117872, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 28, 0) = 0x7f6c29128000 <0.000048>
18:08:38.647613 mprotect(0x7f6c2912c000, 2097152, PROT_NONE) = 0 <0.000049>
18:08:38.647721 mmap(0x7f6c2932c000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 28, 0x4000) = 0x7f6c2932c000 <0.000043>
18:08:38.647862 close(28) = 0 <0.000035>
The first quote shows a pattern I'm seeing very often on slow dm starts, where
it takes a long time between the mmap and the next syscall. In the second
quote, read() is to blame; it took 1s.
I guess even the first quote implies there is going to be I/O after the mmap
call, doesn't it?
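(To illustrate that point with a tiny standalone example, unrelated to the
actual QEMU traces: mmap() itself returns almost immediately, and the file
data is only read in when the mapped pages are first touched, so with a cold
cache the latency shows up between syscalls rather than inside one. The
library path below is just the one from the first quote.)

import mmap, os, time

path = "/usr/lib/x86_64-linux-gnu/libidn.so.11"   # library from the first strace quote
fd = os.open(path, os.O_RDONLY)
size = os.fstat(fd).st_size

t0 = time.time()
m = mmap.mmap(fd, size, prot=mmap.PROT_READ)      # the mmap() syscall itself
t1 = time.time()
for off in range(0, size, 4096):
    _ = m[off]          # touch one byte per page, forcing the reads / page faults
t2 = time.time()

print("mmap():       %.6fs" % (t1 - t0))
print("page touches: %.6fs" % (t2 - t1))
m.close()
os.close(fd)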
--
Anthony PERARD
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel