
RE: Re: [Xen-devel] Balloon driver for Linux/HVM



As George pointed out in a separate branch of this email thread, disabling a guest's caching is probably a bad idea in general.


The goal of tmem is to explore whether physical memory utilization can be improved when the guest is aware that it is running as a guest and when the guest kernel can be modified (slightly) for that case.  This implies that Windows would have to be modified to use tmem, though it has been suggested that a Windows kernel expert might be able to somehow interpose binary code to do a similar thing.  Since I know nothing about Windows, someone else will have to explore that.

 

From: Chu Rui [mailto:ruichu@xxxxxxxxx]
Sent: Tuesday, November 16, 2010 7:28 PM
To: Dan Magenheimer; xen-devel@xxxxxxxxxxxxxxxxxxx; George Dunlap
Subject: Re: Re: [Xen-devel] Balloon driver for Linux/HVM

 

Thank you, Dan.

 

It is a pity that tmem cannot be used for Windows guests. But can we disable the guest Windows caching? If so, the guest OS is no longer a memory hog (as referred to in your talk), and maybe we can manage its memory consumption on demand, as with a ring3 application.

 

BTW, as far as I can tell, Windows XP does NOT zero all of its memory at startup. Actually, even memory allocated by a ring3 application is not committed until it is actually accessed, so PoD may work well in that case.

 

 

On Nov 17, 2010, at 1:10 AM, Dan Magenheimer <dan.magenheimer@xxxxxxxxxx> wrote:

FYI, Transcendent Memory does work with HVM, with a recent Xen and the proper Linux guest-side patches (including Stefano's PV-on-HVM patchset).  There is extra overhead in an HVM guest for each tmem call due to vmenter/vmexit, and I have not measured performance, but this overhead should not be too large on newer processors.  Also, of course, Transcendent Memory will not work with Windows guests (or any guests that do not have tmem patches), while PoD is primarily intended to work with Windows (because, IIRC, Windows zeroes all of memory).

 

I agree that guest IO caching is mostly useless for CLEAN pages if the dom0 page cache is large enough for all guests (or if tmem is working).  For dirty pages, using dom0 caching risks data integrity problems (e.g. the guest believes a transaction to disk is complete, but the data is in a dom0 cache that has not been flushed to disk).


Dan

 

From: Chu Rui [mailto:ruichu@xxxxxxxxx]
Sent: Tuesday, November 16, 2010 8:37 AM
To: George Dunlap; Xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: Re: [Xen-devel] Balloon driver for Linux/HVM

 

Thank you for your kind reply, George.

 

I am interested in PoD memory. As I understand it, PoD mainly works during the system initialization stage: before the balloon driver begins to work, it can limit the memory consumption of the guests. However, after a while the guest OS will commit more memory, and PoD cannot reclaim it at that point, even when the committed pages are IO cache, while the balloon driver keeps working all of the time.

Would you please tell me whether my thought is correct?

Actually, in my opinion, the guest IO cache is mostly useless, since Dom0 will cache the IO operations anyway. Such double caching wastes memory resources. Is there any good solution for that, like Transcendent Memory, that works with HVM?

On Nov 16, 2010, at 8:56 PM, George Dunlap <dunlapg@xxxxxxxxx> wrote:

2010/11/16 牛立新 <topperxin@xxxxxxx>:

> Oh, that's strange; the old version did not have this limitation.

No; unfortunately a great deal of functionality present in "classic
xen" has been lost in the process of getting the core dom0 support
into the pvops kernel.  I think the plan is, once we have the
necessary changes to non-xen code pushed upstream, we can start
working on getting feature parity with classic xen.


>
>
> At 2010-11-16 19:35:50"Stefano Stabellini" <stefano.stabellini@xxxxxxxxxxxxx> wrote:
>
>>On Tue, 16 Nov 2010, Chu Rui wrote:
>>> Hi,
>>> I have noticed that, in linux/drivers/xen/balloon.c, there is a snippet like this:
>>>
>>> static int __init balloon_init(void)
>>> {
>>>         unsigned long pfn;
>>>         struct page *page;
>>>
>>>         if (!xen_pv_domain())
>>>                 return -ENODEV;
>>>         .....
>>> }
>>>
>>> Does it mean the driver will not work in HVM? If so, where is the HVM-enabled code for that?
>>
>>not yet, even though I have a patch ready to enable it:
>>
>>git://xenbits.xen.org/people/sstabellini/linux-pvhvm.git 2.6.36-rc7-pvhvm-v1
>
>


 

 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

