
Re: [Xen-devel] [PATCH 1/7] xen: vNUMA support for PV guests



On Thu, 2013-11-21 at 18:00 +0800, Li Yechen wrote:
> Dario,
> I just reply to this email again in case you haven't seen it :)
> 
No, don't worry, I haven't forgotten. :-)

> On Tue, Oct 22, 2013 at 10:31 PM, Li Yechen <lccycc123@xxxxxxxxx>
> wrote:
>         Hi Elena,
>         
>         Congratulations on your work again!
>         
>         
>         Have you considered the other memory operations in
>         xen/common/memory.c?
>         
>         There are two important functions: decrease_reservation(&args)
>         and populate_physmap(&args).
>         
>         decrease_reservation(&args) removes pages from the domain;
>         populate_physmap(&args) allocates pages for the domain.
>
Yes, that's definitely something we need to address, probably in this
patch series, even before thinking about NUMA aware ballooning.
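
For reference, and in case it helps the discussion: if I remember the
interface correctly (please double-check against
xen/include/public/memory.h), both XENMEM_decrease_reservation and
XENMEM_populate_physmap take the same xen_memory_reservation structure,
and the node travels in mem_flags rather than as a full nodemask:

struct xen_memory_reservation {
    /* GMFN bases of the extents to populate or free. */
    XEN_GUEST_HANDLE(xen_pfn_t) extent_start;
    xen_ulong_t  nr_extents;    /* number of extents                   */
    unsigned int extent_order;  /* each extent is 2^extent_order pages */
    unsigned int mem_flags;     /* XENMEMF_* flags, including the node */
    domid_t      domid;
};

/* The node is stored biased by one, so 0 in mem_flags means "no node". */
#define XENMEMF_node(x)     (((x) + 1) << 8)
#define XENMEMF_get_node(x) ((((x) >> 8) - 1) & 0xffu)

So whichever ID we decide the guest should pass (vnode or pnode, more on
that below), it has to fit in that field.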

>         The guest domain passes the mask of nodes to Xen via these two
>         hypercalls.
>         
>         For decrease_reservation, Xen will also receive a number of
>         pages. We just free them from the domain. Here, we should update
>         the memory size of the vnodes and pnodes
>         (I think you keep a counter of the number of pages on each vnode
>         and pnode, something like vnuma_memszs, but please forgive me,
>         you have submitted such a huge patch that I could not
>         understand everything in time :-| )
>         
>         For populate_physmap, Xen will allocate blank pages from its
>         heap for the guest domain, from specific nodes, according to the
>         nodemask. Here we should update your counters too!
>         
Well, I haven't gone and re-checked the code yet, but that does make sense.

In Edinburgh, Elena told me that she did some tests of ballooning with
her series applied, and nothing exploded (which is already
something. :-D).

We definitely should double check what happens, i.e., where the pages
are taken from and given back to, and ensure the accounting is done
right.
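
Just so we are looking at the same thing, below is the kind of
accounting I would expect, as a very rough sketch: vnuma_account_pages()
is a made-up helper, and the lock and field names are invented, since I
don't remember what Elena's series actually calls its per-vnode size
array (vnuma_memszs, I think, as you say).

/*
 * Hypothetical helper: keep the per-vnode size counter in sync every
 * time pages are added to or removed from the domain.
 */
static void vnuma_account_pages(struct domain *d, unsigned int vnode,
                                long nr_pages)
{
    spin_lock(&d->vnuma_lock);                   /* invented lock name   */
    d->vnuma.vnuma_memszs[vnode] += nr_pages;    /* invented field name; */
    spin_unlock(&d->vnuma_lock);                 /* counted in pages     */
}

Then decrease_reservation() would call it with a negative delta once
guest_remove_page() has succeeded, and populate_physmap() with a
positive one once alloc_domheap_pages() has succeeded, something like:

    vnuma_account_pages(d, vnode, -(1L << a->extent_order)); /* free  */
    vnuma_account_pages(d, vnode,  (1L << a->extent_order)); /* alloc */

The tricky bit is figuring out which vnode a particular gpfn belongs to
(i.e., which vmemrange it falls in), which is where the double checking
above comes in.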

>         And as far as I can see, we don't have a protocol here on whether
>         the nodemask in (&args) refers to pnodes or vnodes.
>         
>         I think it should be vnodes, since the guest domain knows nothing
>         about the node affinity.
>         
>         So my idea could be: we communicate with the guest domain using
>         vnode IDs. If we need to change the memory size of the guest
>         domain, for example a memory increase/decrease on pnode[0], we use
>         the node affinity to change pnode[0] to a vnode mask and pass it
>         to the guest domain. And in the two functions of memory.c
>         mentioned above, we receive the vnode_mask, translate it back to
>         a pnode_mask, and thus it will work perfectly! And we don't need
>         an extra hypercall for the guest domain any more!
>         
Mmm... it may be me, but I'm not sure I follow. I agree that the guest
should speak vnodes, but I'm not sure I get what you mean when you say
"use your node affinity to change pnode[0] to vnodes_mask, pass it to
guest domain".

Anyway, I was thinking: you did a pretty nice job of hacking something
together for NUMA aware ballooning back when vNUMA wasn't even released.
Now that Elena's patchset is out, how about you try to adapt what you
had at the time, plus the outcome of all that nice discussion we had,
on top of it, and show us what happens? :-)

Elena's patches are not in their final form yet, but they should
constitute a fairly decent basis for another proof of concept
implementation, this time easier to understand and to review, shouldn't
they?

Of course, there's no hurry: this will definitely be something we'll
consider for the next release of Xen (so 4.5, not 4.4, which will
hopefully be released in January), i.e., there should be plenty of
time. :-D

What do you think?

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

