
Re: [Xen-devel] [PATCH v3 3/4] libxc: rework vnuma bits in setup_guest



On Mon, 2015-06-01 at 16:18 -0400, Boris Ostrovsky wrote:
> On 06/01/2015 06:19 AM, Wei Liu wrote:
> > Make the setup process similar to its PV counterpart. That is,
> > allocate a P2M array that covers the whole memory range and start from
> > there. This is clearer than using an array with no holes in it.
> >
> > Also, the dummy layout should take the MMIO hole into consideration;
> > we might end up with two vmemranges in the dummy layout.
> >
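For concreteness, here is a sketch of how the dummy layout can end up
with two ranges. The names are invented (modelled on xen_vmemrange_t),
so treat it as illustration rather than code from the patch:

    #include <stdint.h>

    typedef struct { uint64_t start, end; uint32_t flags, nid; } vmemrange_t;

    /*
     * Guest RAM below the MMIO hole goes in one range; RAM relocated
     * above 4GiB, if any, gets a second range on the same vnode.
     */
    static unsigned int dummy_layout(vmemrange_t r[2],
                                     uint64_t lowmem_end,
                                     uint64_t highmem_end)
    {
        unsigned int nr = 1;

        r[0] = (vmemrange_t){ .start = 0, .end = lowmem_end, .nid = 0 };
        if ( highmem_end > (1ULL << 32) )
        {
            r[1] = (vmemrange_t){ .start = 1ULL << 32,
                                  .end = highmem_end, .nid = 0 };
            nr = 2;
        }
        return nr;
    }
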
> > Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
> > Cc: Ian Campbell <ian.campbell@xxxxxxxxxx>
> > Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
> 
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>

+ my ack on all 4 patches -> applied.

> with a couple of nits below that you might consider.

These can be done in follow-ups, so I applied.

I have a passing comment on one of the nits:

> > +    for ( vmemid = 0; vmemid < args->nr_vmemranges; vmemid++ )
> > +    {
> > +        uint64_t pfn;
> > +
> > +        for ( pfn = args->vmemranges[vmemid].start >> PAGE_SHIFT;
> > +              pfn < args->vmemranges[vmemid].end >> PAGE_SHIFT;
> > +              pfn++ )
> > +            page_array[pfn] = pfn;
> > +    }
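
As an aside on the hunk above: this relies on page_array having been
initialised so that pfns falling inside holes (e.g. the MMIO region)
read as invalid. That part isn't shown in the quote, but I'd expect it
to look something along these lines:

    /*
     * Assumed, not part of the quoted hunk: mark every slot invalid
     * first; the per-vmemrange loop above then installs identity
     * mappings only for the pfns which are actually RAM.
     */
    for ( i = 0; i < nr_pages; i++ )
        page_array[i] = ((xen_pfn_t)-1);
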
> >   
> >       /*
> >        * Try to claim pages for early warning of insufficient memory 
> > available.
> > @@ -645,6 +679,12 @@ static int setup_guest(xc_interface *xch,
> >    error_out:
> >       rc = -1;
> >    out:
> > +    if ( use_dummy )
> 
> Or 'if (args->vmemranges == dummy_vmemrange)' and drop the use_dummy variable.

FWIW, a more usual approach would be to do all the calculations in local
variables and only propagate them to the caller in the !use_dummy case. I
don't know whether that implies a huge restructuring of this function,
though.
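
Purely as illustration (the names are invented and this is a sketch of
the shape, not a drop-in replacement), that would look something like:

    #include <stdint.h>

    typedef struct { uint64_t start, end; uint32_t flags, nid; } vmemrange_t;

    struct build_args {               /* hypothetical subset of the args */
        vmemrange_t *vmemranges;      /* NULL if the caller supplied none */
        unsigned int nr_vmemranges;
        uint64_t mem_end;             /* invented, for the sketch only */
    };

    static int setup_guest_sketch(struct build_args *args)
    {
        vmemrange_t dummy_vmemrange[2];
        const vmemrange_t *vmemranges = args->vmemranges;
        unsigned int nr_vmemranges = args->nr_vmemranges;

        if ( nr_vmemranges == 0 )
        {
            /*
             * Build the dummy layout into the locals only; the real
             * code would split it around the MMIO hole as the patch
             * does.
             */
            dummy_vmemrange[0] = (vmemrange_t){ .start = 0,
                                                .end = args->mem_end };
            vmemranges = dummy_vmemrange;
            nr_vmemranges = 1;
        }

        /*
         * Everything below works on vmemranges/nr_vmemranges.  args
         * is never pointed at the on-stack array, so the exit path
         * needs neither use_dummy nor a pointer comparison.
         */
        (void)vmemranges;
        (void)nr_vmemranges;

        return 0;
    }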

