Re: [Xen-devel] [PATCH v4 15/21] libxc: allocate memory with vNUMA information for HVM guest



On Wed, Jan 28, 2015 at 04:36:56PM +0000, Ian Campbell wrote:
> On Fri, 2015-01-23 at 11:13 +0000, Wei Liu wrote:
> > The algorithm is more or less the same as the one used for PV guest.
> 
> Any reason the code can't be shared then? :-D

Because the details are quite different. The HVM path is entangled with
the superpage handling logic...
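
For illustration, a rough, untested sketch of the shape of the per-vnode
HVM allocation: pin each chunk to its pnode and try superpages before
falling back to 4k pages. Only xc_domain_populate_physmap() and
XENMEMF_exact_node() are real libxc/Xen interfaces here; the helper name
and the 2MB order macro are assumptions for the sketch, not the actual
patch.

/* Sketch only: simplified per-vnode population in the spirit of the HVM
 * path.  Alignment checks and the 1GB case are omitted for brevity. */
#include <xenctrl.h>

#define SP_2MB_SHIFT 9          /* assumed: 2MB superpage order (512 pages) */

static int populate_vnode(xc_interface *xch, uint32_t domid,
                          unsigned int memflags, unsigned int pnode,
                          xen_pfn_t *page_array, unsigned long cur_pages,
                          unsigned long count)
{
    unsigned long done = 0;

    /* Pin this chunk of the guest's memory to the physical node
     * backing the virtual node. */
    memflags |= XENMEMF_exact_node(pnode);

    /* Try 2MB superpages first ... */
    while ( done + (1UL << SP_2MB_SHIFT) <= count )
    {
        xen_pfn_t extent = page_array[cur_pages + done];

        if ( xc_domain_populate_physmap(xch, domid, 1, SP_2MB_SHIFT,
                                        memflags, &extent) != 1 )
            break;              /* no superpage available, fall back */
        done += 1UL << SP_2MB_SHIFT;
    }

    /* ... then fill the remainder with 4k pages. */
    while ( done < count )
    {
        int rc = xc_domain_populate_physmap(xch, domid, count - done, 0,
                                            memflags,
                                            &page_array[cur_pages + done]);
        if ( rc <= 0 )
            return -1;
        done += rc;
    }

    return 0;
}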

> >  
> > +    if ( nr_pages > target_pages )
> > +        memflags |= XENMEMF_populate_on_demand;
> 
> OOI how does vNUMA and PoD interact? Do you prefer to fill a node as
> much as possible (leaving other nodes entirely PoD) or do you prefer to
> balance the PoD pages between nodes?
> 
> I think the former?
> 
> I suspect this depends a lot on the guest behaviour, so there probably
> isn't a right answer. Are there corner cases where this might go wrong
> though?
> 

I don't think vNUMA and PoD interact well at this point. We would need to
at least implement NUMA-aware PoD to make them work well together.

> > +
> > +    if ( args->nr_vnuma_info == 0 )
> > +    {
> > +        /* Build dummy vnode information */
> > +        dummy_vnuma_info.vnode = 0;
> > +        dummy_vnuma_info.pnode = XC_VNUMA_NO_NODE;
> > +        dummy_vnuma_info.pages = args->mem_size >> PAGE_SHIFT;
> > +        args->nr_vnuma_info = 1;
> > +        args->vnuma_info = &dummy_vnuma_info;
> 
> You could have done this in the PV case too, I nearly suggested it then
> but realised I already had, so I figured there was a reason not to?
> 

Yes, I thought about that.

In the PV case memory is managed by libxc's simple memory pool, so using a
stack variable like this wouldn't actually make the code cleaner.

In the HVM case there is no libxc memory management involved, so using a
stack variable is simpler IMHO.
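
To make the contrast concrete, a hand-wavy sketch (the struct name and the
PV-side field names are invented for illustration; only xc_dom_malloc() is
the real builder-pool allocator):

/* HVM path: no allocator involved, a stack variable covers the
 * single-dummy-vnode case. */
struct xc_vnuma_info dummy = {              /* struct name assumed */
    .vnode = 0,
    .pnode = XC_VNUMA_NO_NODE,
    .pages = args->mem_size >> PAGE_SHIFT,
};
args->nr_vnuma_info = 1;
args->vnuma_info = &dummy;

/* PV path: the equivalent table has to outlive the call, so it would come
 * out of the domain builder's pool instead (field names assumed), e.g.: */
dom->vnuma_info = xc_dom_malloc(dom, sizeof(*dom->vnuma_info));
if ( dom->vnuma_info == NULL )
    return -1;
dom->vnuma_info[0].vnode = 0;
dom->vnuma_info[0].pnode = XC_VNUMA_NO_NODE;
dom->vnuma_info[0].pages = dom->total_pages;
dom->nr_vnuma_info = 1;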

> > +    }
> > +    else
> > +    {
> > +        if ( nr_pages > target_pages )
> > +        {
> > +            PERROR("Cannot enable vNUMA and PoD at the same time");
> 
> And there's the answer to my question above ;-)
> 
> > +            goto error_out;
> >  
> > +    for ( i = 0; i < args->nr_vnuma_info; i++ )
> >      {
> 
> Reindenting in a precursor patch made this diff a lot easier to read,
> thanks.
> 
> >              if ( count > max_pages )
> > @@ -388,19 +440,20 @@ static int setup_guest(xc_interface *xch,
> >                  unsigned long nr_extents = count >> SUPERPAGE_1GB_SHIFT;
> >                  xen_pfn_t sp_extents[nr_extents];
> >  
> > -                for ( i = 0; i < nr_extents; i++ )
> > -                    sp_extents[i] =
> > -                        page_array[cur_pages+(i<<SUPERPAGE_1GB_SHIFT)];
> > +                for ( j = 0; j < nr_extents; j++ )
> > +                    sp_extents[j] =
> > +                        page_array[cur_pages+(j<<SUPERPAGE_1GB_SHIFT)];
> 
> You might condider s/i/j/ for a precursor patch too. Or given the scope
> of the outermost loop is quite large a more specific name than i (like
> vnid) might be more appropriate.
> 

Ack.
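
Something along these lines, i.e. a sketch based on the hunk above with
the outer per-vnode loop index renamed to vnid, so the inner extent loop
can keep i:

for ( vnid = 0; vnid < args->nr_vnuma_info; vnid++ )
{
    /* ... per-vnode superpage / 4k population ... */
    for ( i = 0; i < nr_extents; i++ )
        sp_extents[i] =
            page_array[cur_pages + (i << SUPERPAGE_1GB_SHIFT)];
    /* ... */
}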

Wei.

> Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel