Re: [Xen-devel] live migration can fail due to XENMEM_maximum_gpfn

  • To: John Levon <levon@xxxxxxxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
  • Date: Tue, 07 Oct 2008 08:19:58 +0100
  • Delivery-date: Tue, 07 Oct 2008 00:20:33 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AckoTRqPWT+AJJRAEd2h+wAWy6hiGQ==
  • Thread-topic: [Xen-devel] live migration can fail due to XENMEM_maximum_gpfn

On 6/10/08 17:47, "John Levon" <levon@xxxxxxxxxxxxxxxxx> wrote:

> dom 11 max gpfn 262143
> dom 11 max gpfn 262143
> dom 11 max gpfn 262143
> ....
> dom 11 max gpfn 985087
> (1Gb Solaris HVM domU).
> I'm not sure how this should be fixed?

You are correct that there is a general issue here if the guest arbitrarily
increases max_mapped_pfn. However, yours is more likely a specific problem
-- mappings being added in the 'I/O hole' 0xF0000000-0xFFFFFFFF by PV
drivers. This is strictly easier because we can fix it by assuming that no
new mappings will be created above 4GB after the domain starts/resumes
running. A simple fix, then, is for xc_domain_restore() to map something at
page 0xFFFFF (e.g., shared_info) if max_mapped_pfn is smaller than that.
This will bump max_mapped_pfn as high as necessary. Note that a newly-built
HVM guest will always have 0xFFFFF as minimum max_mapped_pfn since
xc_hvm_build() maps shared_info at 0xFFFFF to initialise it (arguably
xc_domain_restore() should be doing the same!).

 -- Keir
