Re: [Xen-ia64-devel] [IPF-ia64] with Cset 10690, creating a VTI makes xen0 hang
 On Tue, Jul 11, 2006 at 06:22:39AM -0600, Alex Williamson wrote:
> On Tue, 2006-07-11 at 19:42 +0800, Zhang, Xiantao wrote:
> > Hi Alex,
> >     This issue seems to be caused by Cset 10688. In vcpu_itr_d, the
> > current logic purges the VHPT with cpu_flush_vhpt_range, but this is
> > very heavy for xen0. In the early stage of creating a VTi domain, IO
> > activity is very intensive, so qemu is scheduled out and in very
> > frequently, and this logic runs every time. In addition,
> > cpu_flush_vhpt_range uses the identity mapping to purge the VHPT,
> > which may cause more TLB misses since that mapping has no TR entry.
> > If the vcpu_flush_tlb_vhpt_range logic is removed, although it is
> > definitely needed, VTi seems to become healthy. Maybe potential bugs
> > exist there. :)
> 
>    Thanks for investigating, Xiantao.  Isaku, any thoughts on how to
> regain VTI performance?  Thanks,
To be honest, I'm seeing considerable performance loss with this Cset.
I haven't found any good optimization to lower the cost of
vcpu_flush_tlb_vhpt_range() in vcpu_itr_{i,d}().
I also looked at the commit log to check whether this issue had been
considered, but I couldn't find anything.
Given that the Xen page size is 16KB, the VHPT size is 64KB, and the
long-format VHPT entry size is 32 bytes, the VHPT holds 2048 entries,
while Linux uses dtr[IA64_TR_CURRENT_STACK] with a 64MB page size,
which spans 4096 Xen pages. Purging the whole table once is therefore
cheaper than hashing each page of such a range, and the first attached
patch halves the flush cost.
However, I don't think this optimization solves the issue.
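Roughly, the idea is something like the following sketch. The names
vhpt_flush_all() and vhpt_purge_entry() are placeholders for this
illustration, not the real xen/arch/ia64 identifiers:

    /* Placeholder declarations for the sketch. */
    void vhpt_flush_all(void);              /* hypothetical: one pass over all entries */
    void vhpt_purge_entry(unsigned long);   /* hypothetical: hash-purge one page */

    #define XEN_PAGE_SHIFT   14                     /* 16KB pages */
    #define VHPT_NUM_ENTRIES ((64 << 10) / 32)      /* 64KB / 32B = 2048 */

    static void flush_vhpt_range(unsigned long vadr, unsigned long range)
    {
            unsigned long npages = range >> XEN_PAGE_SHIFT;

            if (npages > VHPT_NUM_ENTRIES) {
                    /* e.g. the 64MB stack TR: 4096 per-page probes vs.
                     * one walk over 2048 entries -- about half the work. */
                    vhpt_flush_all();
            } else {
                    for (; npages > 0; npages--, vadr += 1UL << XEN_PAGE_SHIFT)
                            vhpt_purge_entry(vadr);
            }
    }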
Considering Linux's dtr usage, it seems to be safe not to flush the
vTLB at all. As a short-term workaround, the vTLB flush in
vcpu_itr_{i,d}() can be disabled.
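Something along these lines, guarded by a compile-time switch. This is
only a sketch; the #ifdef name and the surrounding helpers (NDTRS,
PSCBX, vcpu_set_tr_entry, itir_ps) are approximations of the actual
vcpu.c code, not a verbatim quote of it:

    /* Hypothetical body of vcpu_itr_d() with the flush made optional. */
    IA64FAULT vcpu_itr_d(VCPU *vcpu, u64 slot, u64 pte, u64 itir, u64 ifa)
    {
            if (slot >= NDTRS)
                    return IA64_RSVDREG_FAULT;
    #ifdef VCPU_ITR_FLUSH_VTLB      /* leave undefined to skip the flush */
            /* Correct but costly: purge vTLB/VHPT entries overlapping
             * the translation register being inserted. */
            vcpu_flush_tlb_vhpt_range(ifa, itir_ps(itir));
    #endif
            vcpu_set_tr_entry(&PSCBX(vcpu, dtrs[slot]), pte, itir, ifa);
            return IA64_NO_FAULT;
    }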
-- 
yamahata
Attachment: 10717:0ac7c4c8ae50_slight_optimization_vcpu_itr_d_vcpu_itr_i.patch
Attachment: 10718:99174e194b6a_disable_vtlb_flush_in_vcpu_itr.patch