
Re: [Xen-ia64-devel] RID virtualization discussion



On Thu, May 24, 2007 at 02:02:32PM +0800, Xu, Anthony wrote:

> Currently, we have adopted a static RID partition solution to virtualize RIDs.
> Why do we need to virtualize RIDs?
> We have the assumption in mind that "purge all" is very expensive.
> If we don't virtualize RIDs, we need to purge all TLB entries when a VCPU switch happens.
> 
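For context: static RID partitioning means each domain gets a fixed, disjoint
slice of the machine RID space, so translations installed by different domains
can never collide and nothing has to be purged on a VCPU switch. A minimal
sketch of that mapping in C, where starting_rid and rid_bits are hypothetical
names standing in for whatever the real allocator keeps per domain:

    /* Sketch only: fold a guest RID into the machine RID block reserved
     * for this domain.  starting_rid is the base of the block and
     * rid_bits its width; both names are made up for illustration. */
    static inline unsigned long
    guest_to_machine_rid(unsigned long starting_rid, unsigned int rid_bits,
                         unsigned long guest_rid)
    {
        unsigned long block_mask = (1UL << rid_bits) - 1;

        /* Keep only the low bits of the guest RID and offset them into
         * the domain's private block, so RIDs used by different domains
         * never overlap. */
        return starting_rid + (guest_rid & block_mask);
    }

The cost of this scheme is the one raised below: with a fixed machine RID
width, the number of domains that can run at the same time is bounded by how
many blocks fit, which is exactly what dropping the virtualization would lift.
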
> We did the following test to see how large the penalty is.
> 
> The following patch makes Xen "purge all" whenever a VCPU switch happens.
> 
> We did KB on an SMP domU and an SMP VTI domain.
> From the following results, surprisingly, there is no impact.
> 
> I can find three reasons for this.
> 1. Most machine TLB entries are backed by the VHPT.
> 2. The VHPT table is TR mapped.
> 3. Some machine TLB entries are not used by the scheduled-in VCPU and are
> replaced by new entries anyway.
> 
> I don't have many IPF machines at hand, so I can't do more tests.
> I hope the community can help.
> 
> If this is the case, we don't need to virtualize RIDs: every domain gets the
> full 24-bit RID space, and Xen/IPF can support more domains at the same time.
> 
> What's your opinion?

Could you explain your test in detail?
I suppose KB = Kernel Bench = measuring kernel compile time, right?
How were the domains created? There are four cases, each with/without the patch.

Did you run the kernel compile simultaneously in 4 domains or in only 1 domain?
i.e. did you create 5 domains and run the kernel compile in each of the 4 guest domains
     dom0 + domU(2vcpu) + domU(4vcpu) + domVTi(2vcpu) + domVTi(4vcpu)
     at the same time?

Or did you create dom0 + a single domain and run the kernel compile in that
   1 domain, repeating for each of the cases:
     dom0 + domU(2vcpu),
     dom0 + domU(4vcpu),
     dom0 + domVTi(2vcpu),
     dom0 + domVTi(4vcpu)


thanks.

> Patch:
> 
> diff -r afb27041a2ce xen/arch/ia64/xen/domain.c
> --- a/xen/arch/ia64/xen/domain.c    Wed May 16 10:42:07 2007 -0600
> +++ b/xen/arch/ia64/xen/domain.c    Wed May 23 15:18:35 2007 +0800
> @@ -237,6 +238,8 @@ void context_switch(struct vcpu *prev, s
>      ia64_disable_vhpt_walker();
>      lazy_fp_switch(prev, current);
>  
> +    local_flush_tlb_all();
>      prev = ia64_switch_to(next);
>  
>      /* Note: ia64_switch_to does not return here at vcpu initialization.  */
> 
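The "Flush tlb all times" / "Local_flush_tlb_all() times" figures below were
presumably gathered with extra instrumentation that is not part of the patch
above. A minimal sketch of one way to collect such a count (the counter name
is made up; a per-CPU counter would avoid cache-line bouncing, but a rough
global count is enough for this purpose):

    /* Hypothetical instrumentation only; the mail does not show how the
     * counts were actually collected. */
    static unsigned long flush_tlb_all_count;

    /* Call this at the site the patch adds, instead of calling
     * local_flush_tlb_all() directly, then print the counter when the
     * measurement ends. */
    static void counted_flush_tlb_all(void)
    {
        flush_tlb_all_count++;
        local_flush_tlb_all();
    }
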
> 
> Test environment
> 6 physical CPU cores with HT disabled
> Cset: 15056
> 
> Test results:
> 
> XenU:
> 2vcpus -j4:
>                  without patch      with patch
> real             7m34.439s          7m31.873s
> user             13m48.040s         13m49.450s
> sys              0m48.910s          0m49.140s
> Flush tlb all times: 258068
> 
> 4vcpus -j6:
>                  without patch      with patch
> real             4m5.281s           4m5.260s
> user             13m44.890s         13m43.820s
> sys              0m48s              0m48.600s
> Flush tlb all times: 224185
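As a sanity check on "no impact": 258068 flushes over the roughly 7.5-minute
2vcpu run above is about 570 flushes per second, so unless a purge-all (plus
the VHPT refills it forces) costs far more than a few microseconds, the added
time is only a few seconds. A back-of-envelope estimate, where the 10us
per-flush cost is an assumption for illustration, not a measurement:

    #include <stdio.h>

    int main(void)
    {
        /* Figures reported above for the 2vcpu XenU run with the patch. */
        double flushes = 258068.0;
        double build_s = 7 * 60 + 32;      /* real time ~7m31.873s */

        /* Assumed cost of one purge-all plus the VHPT refills it causes;
         * 10us is a guess for illustration, not a measurement. */
        double cost_per_flush_us = 10.0;

        double overhead_s = flushes * cost_per_flush_us / 1e6;

        printf("flush rate: %.0f/s, estimated overhead: %.1fs (%.2f%%)\n",
               flushes / build_s, overhead_s, 100.0 * overhead_s / build_s);
        return 0;
    }

Under that assumption the overhead is well under one percent of the build
time, which would explain why the real and user times are indistinguishable.
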
> 
> 
> 
> One VTI domain:
> 
> 2vcpu -j4:
>                  with patch         without patch
> real             8m23.096s          8m23.218s
> user             13m45.549s         13m45.084s
> sys              2m2.740s           1m58.990s
> Local_flush_tlb_all() times: 1545803
> 
> 4vcpu -j6:
>                  with patch         without patch
> real             4m40.605s          4m39.939s
> user             14m0.623s          13m59.779s
> sys              2m26.782s          2m28.917s
> Local_flush_tlb_all() times: 1741648
> 

-- 
yamahata

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel


 

