
Re: [Xen-devel] [PATCH v4 6/9] tools/libxc: x86 pv save implementation



On Wed, 2014-05-07 at 16:20 +0100, Andrew Cooper wrote:
> On 07/05/14 14:58, Ian Campbell wrote:
> > On Wed, 2014-04-30 at 19:36 +0100, Andrew Cooper wrote:
> >
> > The previous "common" patch seemed to have a fair bit of x86 pv save
> > code in it, and there seems to be a fair bit of common code here. I
> > suppose the split was pretty artificial to make the patches manageable?
> >
> >> @@ -121,6 +123,9 @@ struct context
> >>      };
> >>  };
> >>  
> >> +/* Saves an x86 PV domain. */
> > I should hope so!
> >
> >> +int save_x86_pv(struct context *ctx);
> >> +
> >>  /*
> >>   * Write the image and domain headers to the stream.
> >>   * (to eventually make static in save.c)
> >> @@ -137,6 +142,21 @@ struct record
> >>      void *data;
> >>  };
> >>  
> >> +/* Gets a field from an *_any union */
> > Unless the #undefs which were above before I trimmed them relate to
> > patch #2/9 they should go right before the redefine IMHO.
> 
> At the moment the series is specifically designed to allow the legacy
> and v2 code to live together for testing purposes.
> 
> When this series leaves RFC, the old macros will be killed with fire. 
> (See also gross abortions with scoped state)

I think it was later on that I requested that anything subject to that
2/9-related future cleanup be explicitly tagged as such.

> >> +    if ( !rec_pfns )
> >> +    {
> >> +        ERROR("Unable to allocate memory for page data pfn list");
> >> +        goto err;
> >> +    }
> >> +
> >> +    for ( i = 0; i < nr_pfns; ++i )
> >> +        rec_pfns[i] = ((uint64_t)(types[i]) << 32) | ctx->batch_pfns[i];
> > Could this not have a more structured type? (I noticed this in the
> > decode bit of the previous patch too).
> 
> Not without resorting to bitfields, which I would prefer to avoid for
> data ending up being written into a stream.

Fair enough.
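
For the archives, the encoding is simple enough that a pair of helpers
would document it without resorting to bitfields; a minimal sketch along
the lines of the quoted shift (helper names are mine, purely
illustrative):

#include <stdint.h>

/* Pack a page type into the upper 32 bits and the pfn into the lower
 * 32 bits of a single stream record entry, and unpack them again. */
static inline uint64_t pack_pfn_record(uint32_t type, uint32_t pfn)
{
    return ((uint64_t)type << 32) | pfn;
}

static inline uint32_t record_type(uint64_t rec) { return rec >> 32; }
static inline uint32_t record_pfn(uint64_t rec)  { return rec & 0xffffffffU; }

Plain shifts also sidestep any questions about bitfield layout once the
value is written to the stream.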

> >> +static int flush_batch(struct context *ctx)
> >> +{
> >> +    int rc = 0;
> >> +
> >> +    if ( ctx->nr_batch_pfns == 0 )
> >> +        return rc;
> >> +
> >> +    rc = write_batch(ctx);
> >> +
> >> +    if ( !rc )
> >> +    {
> >> +        /* Valgrind sanity check */
> > What now?
> 
> This results in the array being marked as unused again, which was useful
> when I found two pieces of code playing with the index and write_batch()
> finding some plausible-but-wrong pfns to send.

If it is going to stay then the comment needs to be more explicit about
this.

Maybe make all this stuff #ifndef NDEBUG? Except we don't have that for
tools, nevermind.
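
To make the comment concrete, something like the following is presumably
what it refers to (a sketch assuming the usual memcheck client request;
names other than the Valgrind macro are mine):

#include <stdint.h>
#include <valgrind/memcheck.h>  /* VALGRIND_MAKE_MEM_UNDEFINED */

#define BATCH_SIZE 1024         /* stand-in for the real batch size */

/* After a successful flush, mark the batch array as undefined again so
 * memcheck flags any later read of stale pfns before the slots are
 * repopulated. */
static void poison_flushed_batch(uint64_t *batch_pfns)
{
    VALGRIND_MAKE_MEM_UNDEFINED(batch_pfns,
                                BATCH_SIZE * sizeof(*batch_pfns));
}

Spelling that out in the comment itself would answer the "What now?"
above.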

> >> +    /* Older GCC cant initialise anonymous unions */
> > How old?
> 
> 4.7 in debian is fine.  4.4 in CentOS 6 isn't

How irritating. Please mention the affected versions in the comment so
someone looking at this in 5 years can decide to clean it up.
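
For reference when that cleanup happens: from memory the problem is
designated initialisers that name members of an anonymous union, which
GCC only started accepting around 4.6. Illustrative only, this is not
the struct from the patch:

#include <stdint.h>

struct rec
{
    unsigned int type;
    union                   /* anonymous union member */
    {
        uint64_t u64;
        uint32_t u32[2];
    };
};

static struct rec make_rec(void)
{
    /* GCC 4.4 rejects "struct rec r = { .type = 1, .u64 = 0x1234 };"
     * because .u64 names a member of the anonymous union.  Portable
     * workaround: initialise what you can, assign the rest. */
    struct rec r = { .type = 1 };

    r.u64 = 0x1234;
    return r;
}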

> >
> >> +    /* 64bit guests: Convert CR1 (guest pagetables) to PFN */
> >> +    if ( ctx->x86_pv.levels == 4 && vcpu.x64.ctrlreg[1] )
> >> +    {
> >> +        mfn = vcpu.x64.ctrlreg[1] >> PAGE_SHIFT;
> >> +        if ( !mfn_in_pseudophysmap(ctx, mfn) )
> >> +        {
> >> +            ERROR("Bad MFN for vcpu%u's cr1", id);
> >> +            pseudophysmap_walk(ctx, mfn);
> >> +            errno = ERANGE;
> >> +            goto err;
> >> +        }
> >> +
> >> +        pfn = mfn_to_pfn(ctx, mfn);
> >> +        vcpu.x64.ctrlreg[1] = 1 | ((uint64_t)pfn << PAGE_SHIFT);
> >> +    }
> >> +
> >> +    if ( ctx->x86_pv.width == 8 )
> > The use of x86_pv.levels vs x86_pv.width to determine the type of guest
> > is rather inconsistent. .width would seem to be the right one to use
> > almost all the time I think.
> 
> I (believe I) have consistently used levels when dealing with pagetable
> stuff, and width when dealing with bitness issues.

I think I first noticed it when there was one within the other, but I
don't remember which patch that was in.

> I suppose the difference was more pronounced in the legacy code with
> 32bit 2 and 3 level guests, but I still think this split is the better
> way of distinguishing.

OK then.
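
For the archives, the split as I understand it, as a sketch (helper
names are mine; the values follow from the PV ABI rather than from the
patch):

#include <stdbool.h>

/* 'width' answers "how wide is a guest word?" and drives bitness
 * decisions; 'levels' answers "how deep are the guest pagetables?" and
 * drives pagetable handling. */
static inline bool guest_is_64bit(unsigned int width)
{
    return width == 8;          /* 8-byte words; 32bit guests use 4 */
}

static inline bool guest_has_4_levels(unsigned int levels)
{
    return levels == 4;         /* 64bit PV; 32bit PAE PV uses 3 */
}

With only 4-level 64bit and 3-level 32bit PAE guests left the two are
interchangeable in practice, which is why the mix reads as
inconsistent.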

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

