
Re: [PATCH v20210701 15/40] tools: prepare to allocate saverestore arrays once



On Mon, 5 Jul 2021 11:44:30 +0100,
Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:

> On 01/07/2021 10:56, Olaf Hering wrote:
> I agree that the repeated alloc/free of same-sized memory regions on
> each iteration is a waste.  However, if we are going to fix this by
> using one-off allocations, then we want to compensate with logic such as
> the call to VALGRIND_MAKE_MEM_UNDEFINED() in flush_batch(), and I think
> we still need individual allocations to let the tools work properly.

If this is a concern, let's just do a few individual arrays.
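
Roughly what I have in mind, as a sketch only (the struct and field names
below are made up for illustration, they are not the actual libxenguest
ones): allocate the arrays once per save/restore operation, and let
flush_batch() mark them undefined again so valgrind still catches reads of
stale entries in the next iteration.

#include <stdlib.h>
#include <valgrind/memcheck.h>   /* VALGRIND_MAKE_MEM_UNDEFINED() */

#define MAX_BATCH_SIZE 1024      /* illustrative batch size */

/* Illustrative container, not the real struct xc_sr_context layout. */
struct batch_bufs {
    unsigned long *mfns;         /* one entry per page in the batch */
    int *errors;                 /* per-page map errors */
    void **guest_data;           /* pointers to mapped guest pages */
};

/* One-off allocation at the start of the operation. */
static int bufs_alloc(struct batch_bufs *b)
{
    b->mfns = malloc(MAX_BATCH_SIZE * sizeof(*b->mfns));
    b->errors = malloc(MAX_BATCH_SIZE * sizeof(*b->errors));
    b->guest_data = malloc(MAX_BATCH_SIZE * sizeof(*b->guest_data));

    if ( !b->mfns || !b->errors || !b->guest_data )
        return -1;

    return 0;
}

/* Called from flush_batch() instead of free()ing everything: tell valgrind
 * the contents are stale, so a read of an entry that was not rewritten
 * during the next batch is still reported. */
static void bufs_reset(struct batch_bufs *b)
{
    VALGRIND_MAKE_MEM_UNDEFINED(b->mfns,
                                MAX_BATCH_SIZE * sizeof(*b->mfns));
    VALGRIND_MAKE_MEM_UNDEFINED(b->errors,
                                MAX_BATCH_SIZE * sizeof(*b->errors));
    VALGRIND_MAKE_MEM_UNDEFINED(b->guest_data,
                                MAX_BATCH_SIZE * sizeof(*b->guest_data));
}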

> > This patch is just preparation; subsequent changes will populate the arrays.
> >
> > Once all changes are applied, migration of a busy HVM domU changes like 
> > that:
> >
> > Without this series, from sr650 to sr950 
> > (xen-4.15.20201027T173911.16a20963b3 xen_testing):
> > 2020-10-29 10:23:10.711+0000: xc: show_transfer_rate: 23663128 bytes + 
> > 2879563 pages in 55.324905335 sec, 203 MiB/sec: Internal error
> > 2020-10-29 10:23:35.115+0000: xc: show_transfer_rate: 16829632 bytes + 
> > 2097552 pages in 24.401179720 sec, 335 MiB/sec: Internal error
> > 2020-10-29 10:23:59.436+0000: xc: show_transfer_rate: 16829032 bytes + 
> > 2097478 pages in 24.319025928 sec, 336 MiB/sec: Internal error
> > 2020-10-29 10:24:23.844+0000: xc: show_transfer_rate: 16829024 bytes + 
> > 2097477 pages in 24.406992500 sec, 335 MiB/sec: Internal error
> > 2020-10-29 10:24:48.292+0000: xc: show_transfer_rate: 16828912 bytes + 
> > 2097463 pages in 24.446489027 sec, 335 MiB/sec: Internal error
> > 2020-10-29 10:25:01.816+0000: xc: show_transfer_rate: 16836080 bytes + 
> > 2098356 pages in 13.447091818 sec, 609 MiB/sec: Internal error
> >
> > With this series, from sr650 to sr950 (xen-4.15.20201027T173911.16a20963b3 
> > xen_unstable):
> > 2020-10-28 21:26:05.074+0000: xc: show_transfer_rate: 23663128 bytes + 
> > 2879563 pages in 52.564054368 sec, 213 MiB/sec: Internal error
> > 2020-10-28 21:26:23.527+0000: xc: show_transfer_rate: 16830040 bytes + 
> > 2097603 pages in 18.450592015 sec, 444 MiB/sec: Internal error
> > 2020-10-28 21:26:41.926+0000: xc: show_transfer_rate: 16830944 bytes + 
> > 2097717 pages in 18.397862306 sec, 445 MiB/sec: Internal error
> > 2020-10-28 21:27:00.339+0000: xc: show_transfer_rate: 16829176 bytes + 
> > 2097498 pages in 18.411973339 sec, 445 MiB/sec: Internal error
> > 2020-10-28 21:27:18.643+0000: xc: show_transfer_rate: 16828592 bytes + 
> > 2097425 pages in 18.303326695 sec, 447 MiB/sec: Internal error
> > 2020-10-28 21:27:26.289+0000: xc: show_transfer_rate: 16835952 bytes + 
> > 2098342 pages in 7.579846749 sec, 1081 MiB/sec: Internal error  
> 
> These are good numbers, and clearly show that there is some value here,
> but shouldn't they be in the series header?  They're not terribly
> relevant to this patch specifically.
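
For reference, the numbers above are easy to recompute by hand: assuming
show_transfer_rate counts 4 KiB per page (I have not double-checked its
exact rounding), the rate is simply (bytes + pages * 4096) / elapsed
seconds. For the first sample:

#include <stdio.h>

int main(void)
{
    /* First sample above: metadata bytes plus 4 KiB data pages. */
    double bytes = 23663128.0 + 2879563.0 * 4096;
    double secs  = 55.324905335;

    printf("%.1f MiB/sec\n", bytes / secs / (1 << 20));  /* ~203.7 */
    return 0;
}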

The cover letter is unfortunately not under version control.
Perhaps there are ways to do that with git notes, but I have never used them.

> Also, while I can believe that the first sample is slower than the later
> ones (in particular, during the first round, we've got to deal with the
> non-RAM regions too and therefore spend more time making hypercalls),
> I'm not sure I believe the final sample.  Given the byte/page count, the
> substantially smaller elapsed time looks suspicious.

The first one is slower because it has to wait for the receiver to allocate
pages, but maybe, as you said, there are other aspects involved as well.
The last one is always much faster because map/unmap is apparently less
costly with a stopped guest.
Right now the code reaches up to 15 Gbit/s. The next step is to map the
domU just once in order to reach wire speed.
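
To illustrate the direction (error handling trimmed; the helper below is
only a sketch against the stable xenforeignmemory interface, not the actual
libxenguest code): today every batch maps and unmaps its guest pages, and
the idea is to keep the mapping alive across iterations so the map/unmap
cost stops mattering once the link is fast enough.

#include <stdint.h>
#include <sys/mman.h>            /* PROT_READ */
#include <xenforeignmemory.h>

/* Roughly what happens per batch today: map, copy, unmap. */
static int copy_batch(xenforeignmemory_handle *fmem, uint32_t domid,
                      const xen_pfn_t gfns[], size_t nr)
{
    int err[nr];
    void *pages = xenforeignmemory_map(fmem, domid, PROT_READ, nr,
                                       gfns, err);

    if ( !pages )
        return -1;

    /* ... write the nr pages to the migration stream ... */

    return xenforeignmemory_unmap(fmem, pages, nr);
}

/* Mapping the domU just once would mean doing the map step a single time
 * up front and only repeating the copy step in each iteration. */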

> Are these observations with an otherwise idle dom0?

Yes. Idle dom0 and a domU busy with touching its memory.

Unfortunately, I'm not able to demonstrate the reported gain with the systems
I have available today.
I'm waiting for different hardware to be prepared; right now I only have a
pair of CoyotePass and WilsonCity systems.

I'm sure NUMA effects were involved; last year's libvirt was unable to
properly pin vcpus. If I pin all the involved memory to node#0 there is some
jitter in the logged numbers, but no obvious improvement. The first iteration
is slightly faster, but that is it.

Meanwhile I think this commit message needs to be redone.

> Even if CPU time in dom0 wasn't the bottleneck with a 1G link, the
> reduction in CPU time you observe at higher link speeds will still be
> making a difference at 1G, and will probably be visible if you perform
> multiple concurrent migrations.

Yes, I will see what numbers I get with two or more migrations running in 
parallel.

Olaf


