Re: [Xen-devel] [PATCH v2] libxc: remove stale error check for domain size in xc_sr_save_x86_hvm.c
On 23/10/17 11:20, Juergen Gross wrote:
> On 06/10/17 15:30, Julien Grall wrote:
>> Hi,
>>
>> On 27/09/17 15:36, Wei Liu wrote:
>>> On Tue, Sep 26, 2017 at 02:02:56PM +0200, Juergen Gross wrote:
>>>> Long ago, domains to be saved were limited to 1 TB in size due to
>>>> a migration stream v1 limitation: it used a single 32-bit value
>>>> for the PFN and the frame type (4 bits), leaving only 28 bits for
>>>> the PFN.
>>>>
>>>> Migration stream v2 uses a 64-bit value for this purpose, so there
>>>> is no need to refuse saving (or migrating) domains larger than
>>>> 1 TB.
>>>>
>>>> For 32-bit toolstacks there is still a size limit, as domains
>>>> larger than about 1 TB will exhaust the virtual address space of
>>>> the saving process. So keep the test for 32 bit, but don't base it
>>>> on the page type macros. As a migration could leave a 32-bit
>>>> toolstack having to handle such a large domain (in case the
>>>> sending side is 64-bit), the same test should be added for
>>>> restoring a domain.
>>>>
>>>> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
>>>
>>> I will leave this to Andrew.
>>>
>>> I don't really have an opinion here.
>>
>> I will wait for Andrew's feedback before giving a release ack on
>> this patch.
>
> Andrew?

Sorry - this completely fell off my radar.

This is probably fine overall.

One area which is now more important for vendors to take care over is
preventing the migration of VMs from swapping dom0 to death. There are
a number of large structures allocated (only O(n) with the size of the
VM), and this will get worse as/when steps are taken to address the
ballooning issues.

The best way is probably to limit the number of concurrent migrations
which can be performed.

~Andrew
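For reference, the 1 TB figure falls straight out of the v1 bit layout.
The following is a minimal standalone sketch of the arithmetic, not the
actual libxc code; it assumes 4 KiB x86 pages, and the 8 bytes of
per-PFN state used to illustrate the 32-bit address space pressure is a
made-up figure purely for scale:

/*
 * Sketch only (not libxc code) of the size limits discussed above.
 * Assumptions are noted inline; builds with any C99 compiler.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* v1 stream: PFN and type shared one 32-bit field, 4 bits of type. */
    const unsigned pfn_bits = 32 - 4;                  /* 28 bits for the PFN */
    const uint64_t max_pfns = 1ULL << pfn_bits;        /* 2^28 frames */
    const uint64_t page_shift = 12;                    /* 4 KiB x86 pages */
    const uint64_t v1_limit = max_pfns << page_shift;  /* bytes */

    printf("v1 stream limit: %llu GiB\n",
           (unsigned long long)(v1_limit >> 30));      /* 1024 GiB == 1 TiB */

    /*
     * 32-bit toolstack, illustrative only: if the saving process keeps
     * (say) 8 bytes of per-PFN state, a 1 TB guest already needs 2 GiB
     * of that state alone, i.e. most of a 32-bit virtual address space.
     */
    const uint64_t bytes_per_pfn = 8;                  /* assumed, for scale */
    printf("per-PFN state for a 1 TB guest: %llu MiB\n",
           (unsigned long long)((max_pfns * bytes_per_pfn) >> 20));

    return 0;
}

Run as-is it prints 1024 GiB and 2048 MiB respectively, which is where
both the old stream limit and the 32-bit toolstack limit land at about
the same 1 TB mark.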