Re: [Xen-devel] [PATCH v6 3/3] tools/libxc: use superpages during restore of HVM guest
On Sat, Aug 26, 2017 at 12:33:32PM +0200, Olaf Hering wrote:
[...]
> +static int x86_hvm_populate_pfns(struct xc_sr_context *ctx, unsigned count,
> +                                 const xen_pfn_t *original_pfns,
> +                                 const uint32_t *types)
> +{
> +    xc_interface *xch = ctx->xch;
> +    xen_pfn_t pfn, min_pfn = original_pfns[0], max_pfn = original_pfns[0];
> +    unsigned i, freed = 0, order;
> +    int rc = -1;
> +
> +    for ( i = 0; i < count; ++i )
> +    {
> +        if ( original_pfns[i] < min_pfn )
> +            min_pfn = original_pfns[i];
> +        if ( original_pfns[i] > max_pfn )
> +            max_pfn = original_pfns[i];
> +    }
> +    DPRINTF("batch of %u pfns between %" PRI_xen_pfn " %" PRI_xen_pfn "\n",
> +            count, min_pfn, max_pfn);
> +
> +    for ( i = 0; i < count; ++i )
> +    {
> +        if ( (types[i] != XEN_DOMCTL_PFINFO_XTAB &&
> +              types[i] != XEN_DOMCTL_PFINFO_BROKEN) &&
> +             !pfn_is_populated(ctx, original_pfns[i]) )
> +        {
> +            rc = x86_hvm_allocate_pfn(ctx, original_pfns[i]);
> +            if ( rc )
> +                goto err;
> +            rc = pfn_set_populated(ctx, original_pfns[i]);
> +            if ( rc )
> +                goto err;
> +        }
> +    }
>
As far as I can tell, the algorithm in this patch can't handle the following case:
1. The first pfn in a batch points to the start of the second 1G superpage.
2. The second pfn in the batch points to a page in the middle of the first 1G superpage.
3. The guest can only use 1G of RAM.
With that ordering, both 1G superpages would be allocated before the decrease_reservation pass below can free anything, so the second allocation pushes the domain past its memory allocation. This is a valid scenario in a post-copy migration algorithm.
Please correct me if I'm wrong.
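For concreteness, a minimal sketch of such a batch is below. The pfn values are hypothetical and assume SUPERPAGE_1GB_NR_PFNS == (1UL << SUPERPAGE_1GB_SHIFT) == 0x40000, and that x86_hvm_allocate_pfn() populates the whole 1G superpage containing the requested pfn, as the rest of this series appears to do:

    /* Hypothetical batch for a guest limited to 1G of RAM. */
    const xen_pfn_t original_pfns[] = {
        0x40000,    /* first pfn of the second 1G superpage          */
        0x20000,    /* a pfn in the middle of the first 1G superpage */
    };
    /*
     * Handling 0x40000 populates the entire second 1G superpage;
     * handling 0x20000 then populates the first one as well.  Both
     * allocations happen before the decrease_reservation() pass further
     * down can free anything, so the second one exceeds the 1G limit.
     */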
> +
> +    /*
> +     * Scan the entire superpage because several batches will fit into
> +     * a superpage, and it is unknown which pfn triggered the allocation.
> +     */
> +    order = SUPERPAGE_1GB_SHIFT;
> +    pfn = min_pfn = (min_pfn >> order) << order;
> +
> +    while ( pfn <= max_pfn )
> +    {
> +        struct xc_sr_bitmap *bm;
> +        bm = &ctx->x86_hvm.restore.allocated_pfns;
> +        if ( !xc_sr_bitmap_resize(bm, pfn) )
> +        {
> +            PERROR("Failed to realloc allocated_pfns %" PRI_xen_pfn, pfn);
> +            goto err;
> +        }
> +        if ( !pfn_is_populated(ctx, pfn) &&
> +             xc_sr_test_and_clear_bit(pfn, bm) ) {
> +            xen_pfn_t p = pfn;
> +            rc = xc_domain_decrease_reservation_exact(xch, ctx->domid, 1, 0,
> +                                                      &p);
> +            if ( rc )
> +            {
> +                PERROR("Failed to release pfn %" PRI_xen_pfn, pfn);
> +                goto err;
> +            }
> +            ctx->restore.tot_pages--;
> +            freed++;
> +        }
> +        pfn++;
> +    }
> +    if ( freed )
> +        DPRINTF("freed %u between %" PRI_xen_pfn " %" PRI_xen_pfn "\n",
> +                freed, min_pfn, max_pfn);
> +
> +    rc = 0;
> +
> + err:
> +    return rc;
> +}
> +