
Re: [Xen-devel] [PATCH v7 07/10] libxc: introduce soft reset for HVM domains



On Wed, 2015-05-27 at 17:25 +0200, Vitaly Kuznetsov wrote:
> Add new xc_soft_reset() function which performs so-called 'soft reset'
> for an HVM domain. It is being performed in the following way:
> - Save HVM context and all HVM params of the source domain;
> - Transfer all the source domain's memory to the destinatio domain with

"destination"

>   XEN_DOMCTL_soft_reset;
> - Restore HVM context, HVM params, seed grant table for the new domain.
> When the operation succeeds the source domain is being destroyed and the

This sounds a lot like a migration. With the refactoring done for
migration v2, is there any possibility we might be able to reuse any of
that?

(e.g. the list of HVM params to be dealt with seems like something which
needs to be the same everywhere.)

> +    if ( xc_domain_get_pod_target(xch, source_dom, &pod_info[0], &pod_info[1],
> +                                  &pod_info[2]) )

I don't like all the open-coded [0], [1] and [2] in the following code
which this implies. You could define symbolic names (TOT_PAGES=0, etc.),
but TBH I think you would be better off just having three appropriately
named variables instead of the array.

> +    {
> +        PERROR("Could not get old domain's PoD info");
> +        return 1;
> +    }
> +
> +    if ( xc_domain_getinfo(xch, dest_dom, 1, &new_info) != 1 ||
> +         new_info.domid != dest_dom )
> +    {
> +        PERROR("Could not get new domain's info");
> +        return 1;
> +    }
> +
> +    if ( !old_info.hvm || !new_info.hvm )
> +    {
> +        PERROR("Soft reset is supported for HVM only");
> +        return 1;
> +    }

These sorts of checks should be done before doing any real work, such as
getting the PoD info.
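e.g. (just reordering the calls already in the patch):

    /* Validate both domains before doing any real work. */
    if ( xc_domain_getinfo(xch, dest_dom, 1, &new_info) != 1 ||
         new_info.domid != dest_dom )
    {
        PERROR("Could not get new domain's info");
        return 1;
    }

    if ( !old_info.hvm || !new_info.hvm )
    {
        PERROR("Soft reset is supported for HVM only");
        return 1;
    }

    /* ... only then fetch the PoD target, pfn types, HVM context etc. */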

> +
> +    sharedinfo_pfn = old_info.shared_info_frame;
> +    if ( xc_get_pfn_type_batch(xch, source_dom, 1, &sharedinfo_pfn) )
> +    {
> +        PERROR("xc_get_pfn_type_batch failed");
> +        goto out;
> +    }
> +
> +    hvm_buf_size = xc_domain_hvm_getcontext(xch, source_dom, 0, 0);
> +    if ( hvm_buf_size == -1 )
> +    {
> +        PERROR("Couldn't get HVM context size from Xen");
> +        goto out;
> +    }
> +
> +    hvm_buf = malloc(hvm_buf_size);
> +    if ( !hvm_buf )
> +    {
> +        ERROR("Couldn't allocate memory");
> +        goto out;
> +    }
> +
> +    if ( xc_domain_hvm_getcontext(xch, source_dom, hvm_buf,
> +                                  hvm_buf_size) == -1 )
> +    {
> +        PERROR("HVM:Could not get hvm buffer");
> +        goto out;
> +    }
> +
> +    xc_hvm_param_get(xch, source_dom, HVM_PARAM_STORE_PFN,
> +                     &hvm_params[HVM_PARAM_STORE_PFN]);
> +    store_pfn = hvm_params[HVM_PARAM_STORE_PFN];
> +    *store_mfn = store_pfn;
> +
> +    xc_hvm_param_get(xch, source_dom,
> +                     HVM_PARAM_CONSOLE_PFN,
> +                     &hvm_params[HVM_PARAM_CONSOLE_PFN]);
> +    console_pfn = hvm_params[HVM_PARAM_CONSOLE_PFN];
> +    *console_mfn = console_pfn;
> +
> +    xc_hvm_param_get(xch, source_dom, HVM_PARAM_BUFIOREQ_PFN,
> +                     &hvm_params[HVM_PARAM_BUFIOREQ_PFN]);
> +    buffio_pfn = hvm_params[HVM_PARAM_BUFIOREQ_PFN];
> +
> +    xc_hvm_param_get(xch, source_dom, HVM_PARAM_IOREQ_PFN,
> +                     &hvm_params[HVM_PARAM_IOREQ_PFN]);
> +    io_pfn = hvm_params[HVM_PARAM_IOREQ_PFN];
> +
> +    xc_hvm_param_get(xch, source_dom, HVM_PARAM_IDENT_PT,
> +                     &hvm_params[HVM_PARAM_IDENT_PT]);
> +
> +    xc_hvm_param_get(xch, source_dom, HVM_PARAM_PAGING_RING_PFN,
> +                     &hvm_params[HVM_PARAM_PAGING_RING_PFN]);
> +
> +    xc_hvm_param_get(xch, source_dom, HVM_PARAM_VM86_TSS,
> +                     &hvm_params[HVM_PARAM_VM86_TSS]);
> +
> +    xc_hvm_param_get(xch, source_dom, HVM_PARAM_ACPI_IOPORTS_LOCATION,
> +                     &hvm_params[HVM_PARAM_ACPI_IOPORTS_LOCATION]);
> +
> +    xc_hvm_param_get(xch, source_dom, HVM_PARAM_VIRIDIAN,
> +                     &hvm_params[HVM_PARAM_VIRIDIAN]);
> +
> +    xc_hvm_param_get(xch, source_dom, HVM_PARAM_PAE_ENABLED,
> +                     &hvm_params[HVM_PARAM_PAE_ENABLED]);
> +
> +    xc_hvm_param_get(xch, source_dom, HVM_PARAM_STORE_EVTCHN,
> +                     &hvm_params[HVM_PARAM_STORE_EVTCHN]);
> +
> +    xc_hvm_param_get(xch, source_dom, HVM_PARAM_IOREQ_SERVER_PFN,
> +                     &hvm_params[HVM_PARAM_IOREQ_SERVER_PFN]);
> +
> +    xc_hvm_param_get(xch, source_dom, HVM_PARAM_NR_IOREQ_SERVER_PAGES,
> +                     &hvm_params[HVM_PARAM_NR_IOREQ_SERVER_PAGES]);
> +
> +    xc_hvm_param_get(xch, source_dom, HVM_PARAM_VM_GENERATION_ID_ADDR,
> +                     &hvm_params[HVM_PARAM_VM_GENERATION_ID_ADDR]);

Even if this can't be shared with the migration code, please at least
make it table-driven (i.e. an array of HVM_PARAM_* to wander over) so
that the get and set sides use the same table and cannot get out of
sync.

Does the set have to follow the xc_domain_soft_reset and/or the get
precede it? Or could a simple helper implement get+set in one "move"
operation? That would also remove the possibility of forgetting to do
both halves for one HVM param.
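Something like this untested sketch, perhaps (copy_hvm_params and
params_to_copy are names I've made up):

    static const uint32_t params_to_copy[] = {
        HVM_PARAM_STORE_PFN,
        HVM_PARAM_CONSOLE_PFN,
        HVM_PARAM_BUFIOREQ_PFN,
        HVM_PARAM_IOREQ_PFN,
        HVM_PARAM_IDENT_PT,
        HVM_PARAM_PAGING_RING_PFN,
        HVM_PARAM_VM86_TSS,
        HVM_PARAM_ACPI_IOPORTS_LOCATION,
        HVM_PARAM_VIRIDIAN,
        HVM_PARAM_PAE_ENABLED,
        HVM_PARAM_STORE_EVTCHN,
        HVM_PARAM_IOREQ_SERVER_PFN,
        HVM_PARAM_NR_IOREQ_SERVER_PAGES,
        HVM_PARAM_VM_GENERATION_ID_ADDR,
    };

    /* Copy each param from source_dom to dest_dom in one pass, so the
     * get and the set can never get out of sync. */
    static int copy_hvm_params(xc_interface *xch, uint32_t source_dom,
                               uint32_t dest_dom)
    {
        unsigned int i;
        uint64_t value;

        for ( i = 0; i < sizeof(params_to_copy)/sizeof(params_to_copy[0]); i++ )
        {
            if ( xc_hvm_param_get(xch, source_dom, params_to_copy[i], &value) ||
                 xc_hvm_param_set(xch, dest_dom, params_to_copy[i], value) )
            {
                PERROR("Failed to copy HVM param %u", params_to_copy[i]);
                return -1;
            }
        }

        return 0;
    }

That obviously only works if the whole copy can happen on one side of
the xc_domain_soft_reset; the handful of params which are also needed
locally (store/console PFN etc.) could still be read out via the same
table.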

Ian.

