
RE: [Xen-ia64-devel] [PATCH] [Resend]Enable hash vtlb



Hi Alex,

I also ran some kernel-build tests, and I have a curious question: at which Cset
did you get your results? I am seeing strange, hard-to-believe numbers after Cset
9495: a kernel build takes only 1100~1200 seconds in Xen0 and XenU, but at
Cset 9492 it still needs 1900~2100 seconds.

I am sure the .config is correct and a vmlinux was built. The date and
time also seem correct.
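For reference, the timing was taken roughly like this (a minimal sketch, not the
actual harness: the kernel tree path and -j count are placeholders, and a sleep
stands in for the real make so the sketch runs anywhere):

```shell
#!/bin/sh
# Minimal timing sketch -- NOT the actual test script. The tree path and
# -j value are illustrative; the sleep stands in for the real build.
start=$(date +%s)
# (cd linux-2.6 && make -j4 vmlinux > /dev/null)   # the real build step
sleep 1                                            # stand-in for the build
end=$(date +%s)
elapsed=$((end - start))
echo "kernel build took ${elapsed} seconds"
```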

Best Regards,
Yongkang (Kangkang) 永康
>-----Original Message-----
>From: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
>[mailto:xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Alex
>Williamson
>Sent: April 10, 2006 23:14
>To: Xu, Anthony
>Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
>Subject: RE: [Xen-ia64-devel] [PATCH] [Resend]Enable hash vtlb
>
>On Mon, 2006-04-10 at 23:01 +0800, Xu, Anthony wrote:
>
>> If we configure domU with 256MB of memory, domU will complain "at least
>> 256M is needed."
>> Yes, there should be an optimal ratio between the domU memory size and
>> the VHPT size.
>
>My tests are:
>
>dom0: boot w/ dom0_mem=768M, kill off all daemons, build
>domU: boot w/ default dom0 mem (512MB), kill all daemons in dom0,
>specify 768M memory from domU, boot domU, kill all domU daemons, build
>
>256MB certainly isn't enough memory to have a worthwhile kernel build
>benchmark.
>
>> >   I don't understand this result.  I was surprised to see domU perform
>> >better than dom0 in my testing, but I can't see how domU could perform
>> >better than bare metal.  Perhaps 512MB is insufficient for kernel
>> >builds.  You may be disproportionately benefiting from dom0's buffer
>> >cache.
>> >
>> I think there may be two reasons:
>> 1. As you said, domU benefits from dom0's buffer cache. There is some
>> parallel execution: domU is responsible for the compilation, while dom0
>> handles the disk reads and writes.
>> 2. Fewer services are running on Dom0 or DomU than on the native
>> machine.
>
>   Services can also be stopped on the native machine.  I did this in my
>test case.  I think it's very possible that 512MB is not a sufficient
>amount of memory for a valid test.  768MB may not be enough either.  To
>properly benchmark this change we need to have the entire working set of
>the test fit in memory (preferably we'd do the builds out of a tmpfs
>mount to avoid I/O entirely).  If we have extra activity, like swapping
>or text getting pushed out of buffer cache and reloaded, anything we can
>read into the results is suspect.  Thanks,
>
>       Alex
>
>--
>Alex Williamson                             HP Linux & Open Source Lab
>
>
>_______________________________________________
>Xen-ia64-devel mailing list
>Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
>http://lists.xensource.com/xen-ia64-devel
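Following up on the tmpfs suggestion in Alex's mail above, such a run might be
staged roughly as follows (a sketch only: the directory, tmpfs size, and tree
name are assumptions, and the mount/copy steps need root, so they are left
commented out):

```shell
#!/bin/sh
# Sketch: build out of a tmpfs mount so disk I/O stays out of the numbers.
# BUILD_DIR, the tmpfs size, and the tree name are assumptions.
BUILD_DIR=${BUILD_DIR:-/tmp/kbuild}
mkdir -p "$BUILD_DIR"
# mount -t tmpfs -o size=1024m tmpfs "$BUILD_DIR"   # needs root
# cp -a linux-2.6 "$BUILD_DIR"/                     # stage the tree in RAM
# (cd "$BUILD_DIR"/linux-2.6 && time make -j4 vmlinux)
echo "staging build under $BUILD_DIR"
```

The point of the tmpfs mount is that the whole working set lives in memory, so
neither swapping nor buffer-cache eviction can skew the comparison.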


