
[Xen-ia64-devel] RE: Code merge between VTI code and non VTI code


  • To: "Dong, Eddie" <eddie.dong@xxxxxxxxx>
  • From: "Magenheimer, Dan (HP Labs Fort Collins)" <dan.magenheimer@xxxxxx>
  • Date: Wed, 18 May 2005 09:38:22 -0700
  • Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 18 May 2005 16:37:40 +0000
  • List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
  • Thread-index: AcVWQvKg47P+8ri/Rz2eCbb86mLeRQAIRo+AAExKdUAADj3+0AAB8clAAHgKNVAAHdw84ABQqqDAABTosnA=
  • Thread-topic: Code merge between VTI code and non VTI code

(Apologies to the list if this content is a repeat.  I think
the original was off-list but I can't find it to confirm.)

I am very much in favor of Xen/ia64 fully supporting VTI.
I am also very much in favor of Xen/ia64 supporting both
non-VTI (paravirtualized) domains and VTI domains simultaneously
on a VTI system.

However, I am concerned that we have somewhat different objectives
and don't yet fully understand each other's, so merging too much
code too quickly may force us to separate it again later.
In particular, I see paravirtualization disadvantages in merging
the vcpu data structure, and differences in the need for
large per-domain persistent memory allocations.
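
To make that concern concrete, here is a minimal layout sketch. The
names are hypothetical, not our actual structures; the point is simply
that the large VTI-only state can sit behind a pointer allocated only
for VTI domains, so a paravirtualized vcpu never carries it:

#include <stdlib.h>

/* Illustrative only -- hypothetical names, not the real Xen/ia64
 * definitions.  Keep the big VTI-only state behind a pointer that is
 * allocated only when the domain actually runs under VTI. */
struct vti_vcpu_state {
    unsigned long vpd[1024];        /* stand-in for the large VTI state   */
};

struct arch_vcpu_info {
    unsigned long common_regs[16];  /* stand-in for shared/paravirt state */
    struct vti_vcpu_state *vti;     /* NULL for a paravirtualized domain  */
};

/* Hypothetical domain-build helper: only a VTI domain pays the cost. */
int arch_vcpu_init(struct arch_vcpu_info *v, int is_vti_domain)
{
    v->vti = NULL;
    if (is_vti_domain) {
        v->vti = calloc(1, sizeof(*v->vti));
        if (v->vti == NULL)
            return -1;
    }
    return 0;
}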

I'm also concerned that it is difficult to continue forward
progress on areas of common functionality once a merge
happens, as VTI is not publicly/widely available yet (even
I don't have one) and you don't have an rx26X0 box, which
is what most of the other Xen/ia64 developers are using.

Given that VTI systems are still "in the future" (even if I
knew exactly when, I'm sure I couldn't say), I am hesitant
to slow progress on the paravirtualized front.

Comments?


> -----Original Message-----
> From: Dong, Eddie [mailto:eddie.dong@xxxxxxxxx] 
> Sent: Wednesday, May 18, 2005 1:30 AM
> To: Magenheimer, Dan (HP Labs Fort Collins)
> Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
> Subject: RE: Code merge between VTI code and non VTI code
> 
> Dan:
>       Based on the previous discussion, we have reached some 
> agreement. Let us have a thorough discussion on the remaining 
> issues.
>       Adding a per-domain flag to indicate a VTI domain is not a 
> problem; it is actually already there now 
> (exec_domain.arch.arch_vmx.flags). For the compile option, yes, we 
> will eliminate it eventually, but we are looking for a complete 
> solution that reduces the rebase effort for all of us. The next 
> steps I have in mind for merging the code before domain N support 
> comes out are:
>       step1:  Merge the vcpu context definition (i.e. 
> exec_domain->arch_exec_domain->arch_vmx_struct vs. 
> domain->shared_info_t->vcpu_info_t->arch_vcpu_info_t). Within 
> this merge, some bug fixes we found in the current code (like the 
> Tiger MCA issue) and some common feature enhancements (like the 
> lsapic delivery mechanism enhancement) can be done. Definitely 
> vcpu.c will be merged into one.
> 
>       step2:  Merge pt_regs. After this merge, ivt.S and some 
> VTI-specific initialization code will be merged.
> 
>       step3:  Merge domain N support. We are near the end of the 
> domain N support coding and definitely want to share it publicly 
> so that others can do more. This patch will include the hypercall 
> shared-page support, FM support, Control Panel, and Device Model. 
> Without step1, this one will diverge further, and the rebase 
> effort in the future may increase exponentially.
>       step4:  VTLB/VHPT merge. Based on the discussion, we can 
> either merge the vTLB implementations or keep the two solutions 
> and select between them dynamically. Same for the VHPT. -- TBD
> 
>       Any suggestions?  For the details of merging the vcpu 
> context, please refer to the other thread.
> thanks, eddie
> 
>       
> 
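
(As a purely illustrative aside on the per-domain flag versus compile
option discussed above: a minimal, self-contained sketch of what
replacing a compile-time option with a run-time test could look like.
Only exec_domain.arch.arch_vmx.flags is taken from Eddie's message; the
flag bit, the VMX_DOMAIN() helper, and both restore paths are assumed
names, not the actual Xen/ia64 code.)

#include <stdio.h>

/* Sketch only: structures reduced to the one field named above
 * (exec_domain.arch.arch_vmx.flags); everything else is assumed. */
struct arch_vmx_struct  { unsigned long flags; };
struct arch_exec_domain { struct arch_vmx_struct arch_vmx; };
struct exec_domain      { struct arch_exec_domain arch; };

#define ARCH_VMX_DOMAIN  (1UL << 0)   /* assumed bit: vcpu runs under VTI */
#define VMX_DOMAIN(ed)   ((ed)->arch.arch_vmx.flags & ARCH_VMX_DOMAIN)

/* Stand-ins for the two per-domain-type paths (assumed names). */
void vmx_load_state(struct exec_domain *ed)   { (void)ed; printf("VTI path\n"); }
void load_region_regs(struct exec_domain *ed) { (void)ed; printf("paravirt path\n"); }

/* What was a compile-time #ifdef becomes a per-domain run-time test, so
 * VTI and paravirtualized domains can coexist in one hypervisor binary. */
void context_switch_arch(struct exec_domain *next)
{
    if (VMX_DOMAIN(next))
        vmx_load_state(next);
    else
        load_region_regs(next);
}

int main(void)
{
    struct exec_domain pv  = { { { 0 } } };
    struct exec_domain vti = { { { ARCH_VMX_DOMAIN } } };
    context_switch_arch(&pv);    /* prints "paravirt path" */
    context_switch_arch(&vti);   /* prints "VTI path"      */
    return 0;
}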

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel


 

