
Re: [Xen-devel] superpages lost after migration of HVM domU



On 20/04/17 16:35, Olaf Hering wrote:
> Andrew,
>
> with eab806a097b ("tools/libxc: x86 PV restore code") the only call to
> xc_domain_populate_physmap_exact was added to the new restore code.
> This call always sets order=0. The old migration code considered
> superpages; the new one does not.
>
> What is the reason for not using superpages when populating an HVM domU?
>
> I supposed the first iteration would allocate all of the required memory
> for a domU, perhaps as superpages. Following iterations would just refill
> existing pages.

That was actually a bugfix for an existing migration failure, and at the
time I didn't consider the performance impact.  (When migration v2 was
written, post-migration runtime performance was at the very bottom of
the priority list.)

The calculations of when to use larger-order allocations were buggy, and
could end up trying to allocate more than nr_pages, which caused a hard
failure of the migration on the destination side.  This only became a
problem when certain gfns had been ballooned out.

As it currently stands, the sending side iterates from 0 to p2m_size and
sends every frame on the first pass.  This means we get PAGE_DATA
records linearly, in batches of 1024 pfns, i.e. the equivalent of two
aligned 2M superpages per batch.
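
From memory, the relevant first-pass loop on the send side looks roughly
like this (assuming the add_to_batch()/flush_batch() helpers in
xc_sr_save.c and a MAX_BATCH_SIZE of 1024; the dirty-bitmap check is
omitted because every bit is set on the first pass, and details may not
match the tree exactly):

    xen_pfn_t p;
    int rc;

    /* Rough sketch only: first pass on the send side.  Every pfn from 0
     * to p2m_size-1 is queued in order, so the resulting PAGE_DATA
     * records cover linear, aligned runs of gfns. */
    for ( p = 0; p < ctx->save.p2m_size; ++p )
    {
        rc = add_to_batch(ctx, p);   /* flushes itself every 1024 pfns */
        if ( rc )
            return rc;
    }

    rc = flush_batch(ctx);           /* emit any final partial PAGE_DATA record */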

Therefore, it should be easy to tweak xc_sr_restore.c:populate_pfns() to
find ranges of 512 consecutive gfns of XEN_DOMCTL_PFINFO_NOTAB and make
a single order-9 allocation, rather than 512 order-0 allocations.
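
A rough, untested sketch of that tweak, assuming the current
populate_pfns() shape and the usual xc_domain_populate_physmap_exact()
interface.  The order-0 fallback (populate_one_pfn() below) is a
hypothetical stand-in for the existing path, tracking of
already-populated pfns is elided, and the real function also has to cope
with a NULL types array:

#define SP_ORDER    9
#define SP_NR_PFNS  (1UL << SP_ORDER)   /* 512 x 4k pages = one 2M superpage */

static int populate_pfns_sp(struct xc_sr_context *ctx, unsigned int count,
                            const xen_pfn_t *pfns, const uint32_t *types)
{
    xc_interface *xch = ctx->xch;
    unsigned int i = 0, j;

    while ( i < count )
    {
        /* 2M-aligned run of 512 consecutive NOTAB gfns?  Allocate it as a
         * single order-9 extent rather than 512 order-0 extents. */
        if ( (pfns[i] & (SP_NR_PFNS - 1)) == 0 && i + SP_NR_PFNS <= count )
        {
            for ( j = 0; j < SP_NR_PFNS; ++j )
                if ( pfns[i + j] != pfns[i] + j ||
                     types[i + j] != XEN_DOMCTL_PFINFO_NOTAB )
                    break;

            if ( j == SP_NR_PFNS )
            {
                xen_pfn_t extent = pfns[i];

                if ( xc_domain_populate_physmap_exact(
                         xch, ctx->domid, 1, SP_ORDER, 0, &extent) )
                {
                    PERROR("Failed to populate 2M superpage");
                    return -1;
                }

                i += SP_NR_PFNS;
                continue;
            }
        }

        /* Short, misaligned or non-NOTAB run: fall back to the existing
         * order-0 behaviour (hypothetical helper standing in for it). */
        if ( populate_one_pfn(ctx, pfns[i]) )
            return -1;
        ++i;
    }

    return 0;
}

Falling back to order 0 whenever a run is short, misaligned or
interrupted by ballooned-out gfns keeps the total allocation bounded by
nr_pages, which is the failure mode the old heuristics ran into.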

~Andrew


 

