
Re: Re: [Xen-devel] Balloon driver for Linux/HVM


  • To: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
  • From: Chu Rui <ruichu@xxxxxxxxx>
  • Date: Wed, 17 Nov 2010 19:50:18 +0800
  • Cc: Xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 17 Nov 2010 04:16:16 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

You are right, so the balloon is an important tool for adjusting the capacity of the buffer caches among the guests. But ballooning is usually criticized for its long reaction time. Would you please tell me how slow it actually is? Can we temporarily suspend a guest when the balloon does not deflate as fast as required?
Furthermore, with HVM, the balloon does not help when the guest is short of memory and swapping, even if the host has a lot of surplus memory at that time. Besides promising a large size to the guest at boot, is there any better way? Maybe the dom0 cache could reduce the cost of swapping, since the swap IO is also cached.
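(By "suspend" I mean something like the following from dom0, with the xm
toolstack; the domain names and sizes are only illustrative:

    xm pause hungry-guest          # stop it before it starts to swap
    xm mem-set donor-guest 512     # ask the donor's balloon to inflate
    xm mem-set hungry-guest 2048   # raise the target of the paused guest
    xm unpause hungry-guest        # its balloon deflates once it runs again

Of course the paused guest cannot act on its new target until it is
unpaused, since the balloon driver runs inside the guest.)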
 
Anyway, PoD is a very cool contribution :-)

 
On 17 November 2010 at 17:53, George Dunlap <George.Dunlap@xxxxxxxxxxxxx> wrote:
I should also point out that the balloon driver will most likely
(indirectly) pull memory from the guest's IO cache.  The balloon
driver asks the guest OS for a page, and the guest OS decides which
page is the least useful at that point.  If it doesn't have any free
pages, it will most likely either take a page from the buffer cache
or page out a not-recently-used application memory page.  The guest is
really in the best position to know which will have the least impact
on performance.
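To make that concrete, here is a minimal sketch of the inflate path,
loosely modelled on linux/drivers/xen/balloon.c; the function name is
illustrative, and error handling and bookkeeping are trimmed:

    #include <linux/gfp.h>
    #include <linux/mm.h>
    #include <xen/interface/memory.h>
    #include <asm/xen/hypercall.h>

    /* Reclaim one page from the guest and hand its frame back to Xen. */
    static int balloon_reclaim_one_page(void)
    {
            struct xen_memory_reservation reservation = {
                    .nr_extents   = 1,
                    .extent_order = 0,
                    .domid        = DOMID_SELF,
            };
            struct page *page;
            unsigned long frame;

            /*
             * Ask the guest's own allocator for a page.  This is where
             * the guest decides what is least useful: a free page if one
             * exists, otherwise something evicted from the buffer cache
             * or a paged-out application page.
             */
            page = alloc_page(GFP_HIGHUSER | __GFP_NOWARN);
            if (!page)
                    return -ENOMEM;

            /* For PV guests the machine frame (pfn_to_mfn) is what gets
             * passed; the gpfn is used directly for HVM. */
            frame = page_to_pfn(page);
            set_xen_guest_handle(reservation.extent_start, &frame);

            /* Give the frame back to the hypervisor for reuse. */
            if (HYPERVISOR_memory_op(XENMEM_decrease_reservation,
                                     &reservation) != 1) {
                    __free_page(page);
                    return -EBUSY;
            }

            /* The real driver keeps `page` on the balloon list so it
             * can be re-populated when the balloon deflates. */
            return 0;
    }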

Also, making dom0's buffer cache tiny and giving all the memory to the
guests allows the guests to use memory the way they see fit as well.
If the guest OS thinks having a larger buffer cache will be
advantageous, it can do that; OTOH, if it thinks giving almost all the
memory to processes will be more advantageous, it can do that too.
Having memory set aside for a dom0 guest-disk cache doesn't give the
guest that choice.
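(In practice the way to keep dom0's footprint, and hence its buffer
cache, small is to cap dom0's memory on the Xen command line in the
bootloader; the value below is purely illustrative:

    kernel /boot/xen.gz dom0_mem=512M

Everything above that cap is then available to hand out to guests.)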

 -George

2010/11/17 George Dunlap <George.Dunlap@xxxxxxxxxxxxx>:
> PoD is a mechanism designed for exactly one purpose: to allow a VM to
> "boot ballooned".  It's designed to allow the guest to run on less
> than the amount of memory it thinks it has until the balloon driver
> loads.  After that, its job is done.  So you're right, it is designed
> to work for the system initialization stage.
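> Concretely, "boot ballooned" just means giving the guest a memory
> target below its maximum in the domain config file; the numbers here
> are only illustrative:
>
>     memory = 512     # target the balloon driver will shrink to
>     maxmem = 2048    # what the guest believes it has at boot
>
> The maxmem-memory gap is what PoD covers until the balloon driver
> loads.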
>
> Regarding disk caching: I disagree about the guest IO cache.  I'd say
> if one cache is to go, it should be the dom0 cache.  There are lots of
> reasons for this:
> * It's more fair: if you did all caching in dom0, then VM A might be
> able to use almost the entire cache, leaving VM B without.  If each
> guest does its own caching, then it's using its own resources and not
> impacting someone else.
> * I think the guest OS has a better idea what blocks need to be cached
> and which don't.  It's much better to let that decision happen
> locally, than to try to guess it from dom0, where we don't know
> anything about processes, disk layout, &c.
> * As Dan said, for write caching there's a consistency issue; better
> to let the guest decide when it's safe not to write a page.
> * If dom0 memory isn't being used for something else, it doesn't hurt
> to have duplicate copies of things in memory.  But ideally guest disk
> caching shouldn't take away from anything else on the system.
>
> My $0.02. :-)
>
>  -George
>
> 2010/11/16 Chu Rui <ruichu@xxxxxxxxx>:
>> Thank you for your kind reply, George.
>>
>> I am interested in the PoD memory. From my perspective, PoD mainly works
>> in the system initialization stage: before the balloon driver begins to
>> work, it can limit the memory consumption of the guests. However, after a
>> while the guest OS will commit more memory, and PoD cannot reclaim any of
>> it at that point, even when the committed pages are IO cache, while the
>> balloon keeps working all of the time.
>>
>> Would you please tell me whether my understanding is correct?
>>
>> Actually, in my opinion, the guest IO cache is mostly useless, since
>> dom0 will cache the IO operations anyway. Such double caching wastes
>> memory. Is there any good solution for that, like Transcendent Memory,
>> that works with HVM?
>>
>> On 16 November 2010 at 20:56, George Dunlap <dunlapg@xxxxxxxxx> wrote:
>>>
>>> 2010/11/16 牛立新 <topperxin@xxxxxxx>:
>>> > Oh, that's strange; the old version didn't have this limitation.
>>>
>>> No; unfortunately a great deal of functionality present in "classic
>>> xen" has been lost in the process of getting the core dom0 support
>>> into the pvops kernel.  I think the plan is, once we have the
>>> necessary changes to non-Xen code pushed upstream, we can start
>>> working on getting feature parity with classic xen.
>>>
>>> >
>>> >
>>> > At 2010-11-16 19:35:50,"Stefano Stabellini"
>>> > <stefano.stabellini@xxxxxxxxxxxxx> wrote:
>>> >
>>> >>On Tue, 16 Nov 2010, Chu Rui wrote:
>>> >>> Hi,
>>> >>> I have noticed that linux/drivers/xen/balloon.c contains this
>>> >>> snippet:
>>> >>>
>>> >>> static int __init balloon_init(void)
>>> >>> {
>>> >>>         unsigned long pfn;
>>> >>>         struct page *page;
>>> >>>
>>> >>>         /* Refuse to load unless running as a PV guest. */
>>> >>>         if (!xen_pv_domain())
>>> >>>                 return -ENODEV;
>>> >>>         ...
>>> >>> }
>>> >>>
>>> >>> Does it mean the driver will not work in HVM? If so, where is the
>>> >>> HVM-enabled code for that?
>>> >>
>>> >>not yet, even though I have a patch ready to enable it:
>>> >>
>>> >>git://xenbits.xen.org/people/sstabellini/linux-pvhvm.git
>>> >> 2.6.36-rc7-pvhvm-v1
>>> >
>>> >
>>
>>
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

