
Re: [Xen-devel] [PATCH RESEND v5 6/6] xen/arm: Implement toolstack for xl restore/save and migrate

13.11.2013 15:09, Ian Campbell wrote:
On Wed, 2013-11-13 at 13:57 +0400, Eugene Fedotov wrote:
diff --git a/tools/libxc/xc_arm_migrate.c b/tools/libxc/xc_arm_migrate.c
new file mode 100644
index 0000000..461e339
--- /dev/null
+++ b/tools/libxc/xc_arm_migrate.c
@@ -0,0 +1,712 @@
Is this implementing the exact protocol as described in
tools/libxc/xg_save_restore.h or is it a variant? Are there any docs of
the specifics of the ARM protocol?
This implements a protocol quite different from the one in
tools/libxc/xg_save_restore.h; it is much simpler, because we do not need
some of the things implemented for x86. So you're right, it has to be
documented. Should we use a separate header to hold this documentation
(and some definitions), for example xc_arm_migrate.h?
I think xg_arm_save_restore.h would be the consistent name. At some
point we should rename some of the x86 specific stuff, but you don't
need to worry about that.
We will eventually need to make a statement about the stability of the
protocol, i.e on x86 we support X->X+1 migrations across Xen versions. I
think we'd need to make similar guarantees on ARM before we would remove
the "tech preview" label from the migration feature.
So, should you take our results on trust (and where should we place this
statement), or should you run tests on your side?
By stability I mean the stability of the migration ABI across version
changes, not stability as in the bugginess or otherwise of the code. IOW
a commitment to supporting migration from version X to version X+1 in
the future.

WRT "tech preview" I don't think we are going to be able to honestly
call this production ready for the general case in 4.4, I expect it'll
need a bit longer to "bed in" before we are happy with that statement.
(this has no bearing on whether you want to fully support it as
production ready in your particular application).

Our migration protocol consists of: a header (console and XenStore PFNs), memory data, and the HVM context. Can we guarantee that the HVM context of version X is compatible with (or the same as) the HVM context of version X+1? I think if we add some HVM hardware (such as a virtual RTC) in version X+1 we may have a compatibility problem, because the HVM buffer structure will have changed. We cannot predict such things in future versions. But our migration protocol could carry a version number, and migration between Xen versions X and X+1 would be possible when both sides use the same protocol number (we can compare protocol versions at migration time).

+/* ============ Memory ============= */
+static int save_memory(xc_interface *xch, int io_fd, uint32_t dom,
+                       struct save_callbacks *callbacks,
+                       uint32_t max_iters, uint32_t max_factor,
+                       guest_params_t *params)
+{
+    int live =  !!(params->flags & XCFLAGS_LIVE);
+    int debug =  !!(params->flags & XCFLAGS_DEBUG);
+    xen_pfn_t i;
+    char reportbuf[80];
+    int iter = 0;
+    int last_iter = !live;
+    int total_dirty_pages_num = 0;
+    int dirty_pages_on_prev_iter_num = 0;
+    int count = 0;
+    char *page = 0;
+    xen_pfn_t *busy_pages = 0;
+    int busy_pages_count = 0;
+    int busy_pages_max = 256;
+    DECLARE_HYPERCALL_BUFFER(unsigned long, to_send);
+    xen_pfn_t start = params->start_gpfn;
+    const xen_pfn_t end = params->max_gpfn;
+    const xen_pfn_t mem_size = end - start;
+    if ( debug )
+    {
+        IPRINTF("(save mem) start=%llx end=%llx!\n", start, end);
+    }
FYI you don't need the {}'s for cases like this.
Actually we don't need the {}; this was done because we were not sure
whether this macro could expand to nothing.
is if ( debug ) IPRINTF(...) not the equivalent of DPRINTF?
The equivalence is not obvious to me, because in the current code we
obtain the debug flag via the XCFLAGS_DEBUG mask (when the --debug option
is passed). If they are equivalent I'll use DPRINTF.
No you are right, these are different. Actually DPRINTF is "detail
level", there is a DBGPRINTF which is "debug level" (levels here being
in terms of XTL_FOO from xentoollog.h). So you probably do want to use
if (debug), although you may separately want to consider which level the
resulting messages are printed at when they are printed...
+static int restore_guest_params(xc_interface *xch, int io_fd,
+                                uint32_t dom, guest_params_t *params)
+{
+    if ( xc_domain_setmaxmem(xch, dom, maxmemkb) )
+    {
+        ERROR("Can't set memory map");
+        return -1;
+    }
+    /* Set max. number of vcpus as max_vcpu_id + 1 */
+    if ( xc_domain_max_vcpus(xch, dom, params->max_vcpu_id + 1) )
Does the higher level toolstack not take care of vcpus and maxmem? I
thought so. I think that is how it should be.
In my tests the guest config information is not transferred from the
high-level stack in the ARM case. On the receiving side the toolstack
always creates a new domain with vcpus=1 and the default max memory, so we
have to send the guest information in our local guest_params structure (at
the beginning of migration).
This is an easy way to make "xl save" and "xl migrate" work without
modifying the libxl level, but perhaps you have another idea?
Also, the toolstack_restore callback is not set (NULL) in the ARM case.
So you aren't using xl to do the migration? This is what we should
ultimately be aiming for. It is almost certainly going to require fixes
at the libxl level though.

If you need to send additional information for your usecase then it
should be done at the layer above libxc.
I found the reason why the guest config information is not saved in libxl. The following line in xl_cmdimpl.c, in the function create_domain:
if (!dom_info->quiet)
        printf("Parsing config from %s\n", config_source);
calls printf instead of fprintf(stderr, ...).
This does not behave the same on ARM. I have a feeling that printf() sends its output into the socket buffer and corrupts the config data. Should we use fprintf(stderr, "Parsing config from %s\n", config_source); here? If we do, the config is transferred and we do not need to set the VCPU count and memory manually.

Best regards,

Xen-devel mailing list


